\section{PAC Performance with Different Shots} \label{sec:diff-shots}
In Figure \ref{fig:diff_shots}, we plot the target accuracy of 4 methods on the \emph{real} to \emph{clipart} adaptation scenario of Office-Home, for different numbers of labelled target examples. The method ``CR'' represents the consistency regularization part of PAC, meaning it starts from an Imagenet pretrained backbone, same as S+T and MME \cite{saito2019semi}. We see that MME, being an unsupervised domain adaptation method, performs best at 0 shots, though PAC does not lag far behind. With a few labelled target examples, PAC and CR start performing better. As the number of labelled target examples increases further, MME and S+T start closing this gap, possibly just due to the higher supervision from labels.
It is less evident in Figure \ref{fig:diff_shots} but still discernible that the advantage provided by rotation pretraining is greater with fewer shots than when there are more labels in the target domain.
\input{figures/diff_shots}
\section{More questions}
\noindent\textbf{Can pretraining and consistency help other methods?} An indication towards the affirmative is seen when we train MME with pretraining and consistency on the 3-shot \emph{real} to \emph{sketch} scenario of DomainNet using a Resnet-34 backbone. The results are shown in Table \ref{tab:wMME}, where we can see that pretraining and consistency both individually help MME's performance, and their combination helps it the most.
\begin{table}[h]
\begin{center}
\begin{tabular}{ccc}
\toprule[1.2pt]
Rot$^n$ & CR & Accuracy \\
\midrule
& & 61.9 \\
\checkmark & & 65.8 \\
& \checkmark & 70.4 \\
\checkmark & \checkmark & 71.5 \\
\bottomrule[1.2pt]
\end{tabular}
\end{center}
\vspace{-5mm}
\caption{Pretraining and consistency with MME.}
\label{tab:wMME}
\end{table}
\noindent\textbf{What if pretraining uses rotation prediction only on target?} We train the backbone only on target domain data for pretraining with rotation prediction, and then train it like PAC using consistency regularization. On the 3-shot \emph{real} to \emph{clipart} SSDA scenario of Office-Home using an Alexnet backbone, this achieves a final target accuracy of $57.5$\% compared to $58.9$\% of PAC. This is indicative of target-only rotation prediction helping the initial feature extractor, but not as much as in the case when source domain data is used along with it.
\begin{table}[h]
\begin{center}
\begin{tabular}{cccc}
\toprule[1.2pt]
Rot$^n$ & CR & \thead{Accuracy \\ (with source)} & \thead{Accuracy \\ (only target)} \\
\midrule
& \checkmark & 56.6 & 35.5 \\
\checkmark & \checkmark & 58.9 & 36.7 \\
\bottomrule[1.2pt]
\end{tabular}
\end{center}
\vspace{-5mm}
\caption{Ablating source domain information.}
\label{tab:semi-sup}
\end{table}
\noindent\textbf{How big is the role of source domain data in final target performance?} To see this, we train our method with no access to source domain data, which makes the setup similar to a semi-supervised learning problem. Target accuracy with only 3 labelled target examples and access to all other unlabelled examples, on the \emph{clipart} domain of Office-Home using an Alexnet backbone, is reported in the last column of Table \ref{tab:semi-sup}. For reference, the accuracies of our method with source domain data from the \emph{real} domain (\emph{i.e}\onedot the R2C adaptation scenario) are provided in the 3$^{rd}$ column.
\section{Results on Office and Office-Home} \label{sec:full-results}
Table \ref{tab:office_home_all} shows the results of PAC on the different scenarios of Office-Home; the average accuracy over all these scenarios was also reported in Table \ref{tab:office_home_small}. Table \ref{tab:office_all} shows the accuracy of PAC on two scenarios of Office. We see that PAC performs comparably to the state of the art, lagging behind a little in the 1-shot scenarios as compared to the 3-shot ones.
\input{tables/office_all}
\input{tables/office_home_all}
\section{Experiment details} \label{sec:expt-deets}
As mentioned in Section \ref{subsec:details}, all our experiments were implemented in PyTorch \cite{pytorch} using W\&B \cite{wandb} for managing experiments.
\subsection{PAC experiments}
We used three different backbones for evaluation in different experiments---Alexnet \cite{krizhevsky2012imagenet}, VGG-16 \cite{simonyan2014very} and Resnet-34 \cite{he2016deep}. As mentioned in Section \ref{subsec:details}, while using an Alexnet or VGG-16 feature extractor, we use 1 fully connected layer as the classifier, and while using the Resnet-34 backbone, we use a 2-layer MLP with 512 intermediate nodes. The classifier $C$ uses a temperature parameter set to $0.05$ to sharpen the distribution it outputs using a softmax.
As in \cite{saito2019semi}, we train the models using minibatch-SGD, with $s$ source examples, $s$ labelled target examples and $2s$ unlabelled target examples that the learner ``sees'' at each training step. We use $s=24$ for the VGG and Resnet backbones, and $s=32$ for Alexnet. The SGD optimizer used a momentum parameter of $0.9$ and a weight decay (coefficient of the $\ell_2$ regularizer on the parameter norm) of $0.0005$. For all experiments, the parameters of the backbone are updated with a learning rate of $0.001$, while the parameters of the classifier are updated with a learning rate of $0.01$. Both of these are decayed as training progresses using a decay schedule similar to \cite{ganin2015unsupervised}. The learning rate at step $i$ ($\eta_i$) is set as below:
\begin{align*}
\eta_i = \frac{\eta_0}{\left(1 + 0.0001 \times i \right)^{0.75}}
\end{align*}
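As a quick sanity check, the schedule above can be computed in a few lines; a minimal sketch (the helper name is ours):

```python
def lr_at_step(eta0: float, i: int) -> float:
    """Inverse-power decay schedule: eta_i = eta0 / (1 + 0.0001 * i)^0.75."""
    return eta0 / (1.0 + 0.0001 * i) ** 0.75

# Backbone LR starts at 0.001, classifier LR at 0.01 (values from the text).
backbone_lr = lr_at_step(0.001, 10000)    # decayed after 10k steps
classifier_lr = lr_at_step(0.01, 10000)
```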
For experiments on the Office and Office-Home datasets, we trained PAC using both an Alexnet and a VGG-16 backbone, and the models were trained for 10000 steps with the stopping point chosen using the best validation accuracy.
For the experiments on DomainNet, we use both Alexnet and Resnet-34 backbones, while for VisDA-17, we use only Resnet-34. All models in these experiments were trained for 50000 steps, using validation accuracy for determining the best stopping point.
\subsection{Pretraining}
As mentioned in Section 4 of the main paper, we pretrain our models for rotation prediction starting from Imagenet pretrained weights. A comparison of PAC with a backbone trained for rotation prediction starting from Imagenet pretraining (final target accuracy = $58.9$\%) vs one that does not use any Imagenet pretraining (final target accuracy = $43.7$\%) revealed that there is important feature space information in Imagenet pretrained weights that rotation prediction could not capture on its own. This comparison was done using an Alexnet on the \emph{real} to \emph{clipart} adaptation scenario of Office-Home.
Following Gidaris~\emph{et al}\onedot~\cite{gidaris2018unsupervised}, we trained the model on all 4 rotations of a single image in each minibatch. Each minibatch contained $s$ images each from source and target domains, which translates to $4s$ images considering all rotations.
The Alexnet backbones are trained using a learning rate of $0.01$ and $s = 128$. The Resnet-34 and VGG backbones are both trained using $s = 16$ and a learning rate of $0.001$. We found that beyond a certain point early on in training, the number of steps of training for rotation prediction did not make a big difference to the final task accuracy, and finally the chosen number of training steps was 4000 for Alexnet, 2000 for VGG-16 and 5000 for Resnet-34 backbones.
\subsection{Other Experiments}
\noindent \textbf{MoCo pretraining.} Using the Alexnet backbone, we trained momentum contrast \cite{he2020momentum} for 5000 training steps, where in each step the model saw 32 images each from the \emph{real} and the \emph{clipart} domains of Office-Home. The queue length used for MoCo was 4096 and the momentum parameter was $0.999$.
\noindent \textbf{Virtual Adversarial Training.} For adding a VAT criterion to our model, we closely followed the VAT criterion in VADA \cite{shu2018dirt}. We used a radius of $3.5$ for adversarial perturbations and a coefficient of $0.01$ for the VAT criterion, which is the KL divergence between the outputs of the perturbed and the unperturbed input from the target domain.
\section{Conclusion}
We showed that consistency regularization and pretraining using rotation prediction are powerful techniques in semi-supervised domain adaptation. Our method, using simply a combination of these without any adversarial domain alignment, could outperform recent state of the art on this task, most of which use adversarial alignment. We presented a thorough analysis of both of our model components showing why they are better than other options for similar approaches. We hope this can help in their use in combination with other methods on the same or related tasks.
\noindent\textbf{Acknowledgements.} This work was supported by the Hariri Institute at Boston University.
\section{Experiments} \label{sec:expts}
\subsection{Datasets} \label{subsec:datasets}
We evaluate our method, PAC, on four different datasets: DomainNet \cite{peng2019moment}, VisDA-17 \cite{peng2017visda}, Office-Home \cite{venkateswara2017Deep} and Office \cite{saenko2010adapting}. DomainNet \cite{peng2019moment} is a recent large scale domain adaptation benchmark with 6 different visual domains and 345 classes. We use a subset of four domains (Clipart, Paintings, Real, and Sketch) and 126 classes for our experiments. This subset has close to 36500 images per domain. A total of 7 different scenarios out of the possible 12 were used for evaluation. VisDA-17 is another large scale adaptation benchmark with a single adaptation scenario: the source domain consists of 152,398 synthetic images from 12 categories, and the target domain consists of 55,388 real images.
Office-Home \cite{venkateswara2017Deep} is a dataset with 65 categories of objects found in typical office and home environments. It has 4 different visual domains (Art, Clipart, Product, and Real), and we evaluate our methods on all 12 different adaptation scenarios. The 4 domains have close to 3800 images on average. Office \cite{saenko2010adapting} is a dataset of objects of 31 different categories commonly found in an office. It has 3 different domains---amazon, webcam and dslr, with approx. 2800, 800 and 500 images respectively. Following \cite{saito2019semi}, we evaluated only on the 2 cases with amazon as the target domain, since the other two domains have far fewer images.
For each dataset and adaptation scenario, following \cite{saito2019semi}, we use one-shot and three-shot settings for evaluation, where one and three target labels per class are available to the learner respectively. For each scenario, 3 examples per class in the target domain are held out for validation, except in VisDA-17, where 20 examples per class were held out because of its smaller number of categories.
\subsection{Implementation Details} \label{subsec:details}
All our experiments were implemented in PyTorch \cite{pytorch} using W\&B \cite{wandb} for tracking experiments. On the Office and Office-Home datasets, we evaluated PAC using both an Alexnet \cite{krizhevsky2012imagenet} and a VGG-16 \cite{simonyan2014very} backbone. The experiments on the DomainNet dataset, used an Alexnet and a Resnet-34 \cite{he2016deep} backbone, while on VisDA-17 we evaluated our method with a ResNet-34 backbone.
While using an Alexnet or VGG-16 feature extractor, we use 1 fully connected layer as the classifier $C$, and while using the Resnet-34 backbone, we use a 2-layer MLP with 512 intermediate nodes. Our backbones, before being trained on the rotation prediction task, are pretrained on the Imagenet \cite{imagenet_cvpr09} dataset, same as the other methods used for comparison. Similar to \cite{saito2019semi}, in every minibatch for training, we sampled an equal number of labelled and unlabelled examples. Labelled examples came in equal numbers from both the source and target domains. We used an SGD optimizer with momentum of 0.9 and a learning rate decay schedule similar to that used by \cite{ganin2015unsupervised}. For consistency regularization, the confidence threshold $\tau$ was set to 0.9 across all experiments, having been validated on the \emph{real} to \emph{sketch} scenario of DomainNet. Complete details of all experiments are included in Appendix \ref{sec:expt-deets}.
\subsection{Results} \label{subsec:results}
\input{tables/domainnet_results}
\input{tables/visda17}
\noindent \textbf{Comparison to other approaches.} We compare PAC with different recent semi-supervised domain adaptation approaches: MME \cite{saito2019semi}, BiAT \cite{jiangbidirectional}, Meta-MME \cite{li2020online} and APE \cite{kim2020attract}, using results reported by these papers. Besides these, we also include in the tables baseline approaches using adversarial domain alignment---DANN \cite{ganin2016domain}, ADR \cite{saito2017adversarial} and CDAN \cite{long2018conditional}---that were evaluated by Saito~\emph{et al}\onedot~\cite{saito2019semi}. The baseline ``S+T'' is a method that simply uses all labelled data available to it to train the network using a cross-entropy loss. Note that PAC can be construed as ``S+T'' with additional consistency regularization and a warm start using rotation prediction for pretraining.
In Table \ref{tab:domainnet}, we compare the accuracy of PAC with different recent approaches on DomainNet. Remarkably, our simple approach outperforms the current state of the art by 3-5\% on this benchmark with different backbones. In Table \ref{tab:visda17}, which holds the VisDA-17 results, besides our method we report results of S+T and MME that we replicated from the implementation of \cite{saito2019semi}. We see that PAC shows strong performance, with close to 10\% improvement in accuracy over MME in the 3-shot scenario. On the smaller Office-Home dataset, as seen from the average accuracies in Table \ref{tab:office_home_small}, our method is comparable to the state of the art in the 3-shot scenarios, but starts lagging a little in the 1-shot scenario. This effect is seen in our results across Tables~\ref{tab:domainnet}, \ref{tab:visda17} and \ref{tab:office_home_small}, where our improvements over the state of the art are larger in the 3-shot scenario than in the 1-shot one. We delve deeper into this and report an analysis of our method with different numbers of labelled target examples in Appendix \ref{sec:diff-shots}. Complete results on the different scenarios of Office-Home and results on the Office dataset can be found in Appendix \ref{sec:full-results}.
\input{tables/office_home_small}
\noindent \textbf{Ablation analysis.} In Table \ref{tab:ablation}, we see what rotation prediction pretraining and consistency regularization do for final target classification performance separately. The two components provide boosts to the final performance individually, with the combination of both performing best. We see that in most cases consistency regularization helps performance by a lot, especially in the 3-shot scenarios.
\input{tables/ablation}
\noindent\textbf{Feature space analysis.} In Fig \ref{fig:tsne} we plot the 2-D t-SNE \cite{maaten2008visualizing} embeddings of features generated by 5 differently trained Alexnet backbones. The embeddings are plotted for all points from 5 randomly picked classes. The source domain points, shown as light colored circles, come from \emph{real} images of Office-Home, and the target domain points, shown as dark colored markers, come from \emph{clipart} images. The labelled target examples are marked with X's. The two plots on the left compare differently pretrained backbones, and the three on the right use backbones at the end of different SSDA training processes. In the plots we can see that pretraining the backbone for rotation prediction starts to align and cluster points according to their classes a little better than a backbone pretrained just on Imagenet can do. Of the methods trained on the SSDA task on the right, we see that both PAC and MME create well separated classes in feature space, allowing the classifier to have decision boundaries in low-density regions. MME explicitly minimizes conditional entropy, which may draw samples even further from the classifier boundaries, as compared to our method, which simply tries to ensure that the classifier does not separate an example and its perturbed version.
\input{figures/tsne}
In Table \ref{tab:feat_space}, we quantitatively analyze the features via three different metrics. The $\mathcal A$-distance is a distance metric between the two domains in feature space, computed using an SVM trained to classify domains as done in \cite{ben2006analysis}; the higher the error of the SVM domain classifier, the lower the $\mathcal A$-distance. The other two metrics are accuracies of distance based classifiers in feature space. The first, ``Dist. Acc. (Target)'', is the accuracy of a classifier that assigns to any unlabelled target example the class of the labelled target examples closest to it on average in feature space. ``Dist. Acc. (Source)'' similarly uses only the source examples, all of which are labelled, to compute the class label for an unlabelled target example. Comparing the pretrained backbones, we see that rotation pretraining improves the feature space both by bringing the features of the two domains closer (as indicated by the low $\mathcal A$-distance) and by aligning them so that features from the same class are closer to one another (indicated by the higher accuracies). When it comes to the final feature spaces of the SSDA methods, we see that MME, being a domain alignment method, reduces the $\mathcal A$-distance more than PAC. However, PAC better maintains the class-defined neighborhood of features, as indicated by the higher accuracies. This also indicates that metrics like domain discrepancy may be secondary to a good classifier that maintains a class-defined feature space neighborhood across both source and target domains.
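One plausible implementation of the distance-based classifier described above is sketched below; the function name and the choice of Euclidean distance are our assumptions:

```python
import numpy as np

def distance_based_accuracy(feats_u, labels_u, feats_l, labels_l):
    """Assign each unlabelled feature the class whose labelled features are
    closest to it on average; return accuracy against the true labels."""
    classes = np.unique(labels_l)
    # Pairwise Euclidean distances: shape (n_unlabelled, n_labelled)
    d = np.linalg.norm(feats_u[:, None, :] - feats_l[None, :, :], axis=-1)
    # Mean distance to each class's labelled examples: (n_unlabelled, n_classes)
    mean_d = np.stack([d[:, labels_l == c].mean(axis=1) for c in classes], axis=1)
    preds = classes[np.argmin(mean_d, axis=1)]
    return float((preds == labels_u).mean())
```

For ``Dist. Acc. (Target)'' the labelled features would be the few labelled target examples; for ``Dist. Acc. (Source)'' they would be all source examples.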
\input{tables/feature_space}
\input{figures/aug_choice}
\noindent\textbf{Which perturbation technique is best?}
We compared three different image augmentation approaches: RandAugment \cite{cubuk2020randaugment} involves a list of 14 different augmentation schemes (translations, rotations, shears, color/brightness enhancements, etc.), 2 of which are chosen randomly anytime an image is augmented. We also evaluated color jittering, since common objects in our datasets are largely invariant to small changes in color. Finally, we tried a combination of both and found that this performed best for our method. Fig \ref{fig:aug_choice} compares the final target accuracies achieved using an Alexnet backbone on the \emph{real} to \emph{clipart} adaptation scenario of Office-Home. Besides perturbations based on augmentation, we also evaluated adversarial image perturbation via virtual adversarial training (VAT) \cite{miyato2018virtual}. When using VAT, we found improvements over the simple ``S+T'' method (48.3\% using VAT vs 44.6\% without), but as seen from Fig \ref{fig:aug_choice}, this was much lower than the image augmentation approaches. This is quite likely because image augmentation imposes a more meaningful neighborhood on images where class labels do not change, while adversarial perturbation does not have this guarantee.
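The ``pick 2 random ops, then jitter color'' recipe can be sketched with toy stand-ins; real implementations use RandAugment's 14 ops, and the op names and parameters below are illustrative only (images are float arrays in $[0,1]$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a few augmentation ops (RandAugment draws from 14 such ops).
def hflip(img):     return img[:, ::-1].copy()
def brighten(img):  return np.clip(img * 1.2, 0.0, 1.0)
def translate(img): return np.roll(img, shift=2, axis=1)

OPS = [hflip, brighten, translate]

def color_jitter(img):
    """Small random per-channel gain, mimicking mild color jittering."""
    gains = rng.uniform(0.9, 1.1, size=(1, 1, img.shape[2]))
    return np.clip(img * gains, 0.0, 1.0)

def perturb(img, n_ops=2):
    """RandAugment-style perturbation: apply n_ops randomly chosen ops,
    then color jitter (the combination that worked best in our experiments)."""
    for idx in rng.choice(len(OPS), size=n_ops, replace=False):
        img = OPS[idx](img)
    return color_jitter(img)
```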
\noindent\textbf{Can consistency regularization fix more errors than MME?} Short answer: yes. In Sec. \ref{sec:related_work}, we mentioned that consistency regularization, because of the perturbations it makes in image space, can fix errors of a kind that simple conditional entropy minimization, as done in MME, cannot. We validate this hypothesis by training both methods from a randomly initialized feature extractor, where we expect the initial features to have a much less meaningful neighborhood structure in feature space. In Table \ref{tab:from_scratch}, we see a larger gap in the performance of MME between starting from a pretrained vs a randomly initialized backbone, which tells us that consistency regularization can fix many more errors in the initial feature space than MME. Note that the ``Ours (CR)'' method here does not include any rotation pretraining, for this comparison.
\input{tables/from_scratch}
\noindent\textbf{Sensitivity to confidence threshold.}
Our consistency regularization approach uses soft targets based on outputs of the classifier only in cases where the confidence of labelling is high. In Fig \ref{fig:thres_choice}, we compare the sensitivity of our method to this threshold. We see that higher confidence thresholds up to 0.9 help final target classification performance.
\input{figures/thres_choice}
\noindent\textbf{How does pretraining with rotation prediction compare to a contrastive method?} Contrastive pretraining methods \cite{he2020momentum, chen2020simple} have been shown to attain remarkable performance in learning features from unlabelled images that are useful for tasks like image recognition and object detection. We evaluate how momentum contrast (MoCo) \cite{he2020momentum} performs for pretraining our feature extractor on both source and target images, compared with rotation prediction. Table \ref{tab:pt_comparison} compares the same metrics as Table \ref{tab:feat_space}, with the addition of final method performance (training with labels and consistency regularization) on target classification. We see that, like rotation prediction, MoCo improves the Imagenet pretrained features to some extent. It has a marginally better class-defined structure across domains, but a poorer structure in the target domain, as indicated by the accuracies of the distance based classifiers. Finally, as seen under ``Final Acc.'' in the table, when training our method from different initializations, a MoCo pretrained backbone gives better results than an Imagenet pretrained one, but poorer than one pretrained on the rotation prediction task.
\input{tables/pt_comparison}
\section{Introduction}
The problem of visual domain adaptation arises when a learner must leverage labelled source domain data to classify instances in the target domain, where it has limited access to ground-truth labels. An example of this is the problem of learning to classify real-world images based on hand-sketched depictions. The problem is challenging because discriminative features that are learnt while training to classify source domain instances may not be meaningful or sufficiently discriminative in the target domain. As described in prior works, this situation can be viewed as arising from a ``domain-shift'', where the joint distribution of features and labels in the source domain does not follow the same law in the target domain.
\input{figures/hard_cases}
We consider the problem of semi-supervised domain adaptation (SSDA). Namely, given ground-truth labelled source instances, a few target labelled examples, and unlabeled target-domain data, a learner's goal is to classify unlabelled examples in the target domain. In this context, a number of prior approaches have proposed to address the domain-shift problem by aligning features of source and target domain. This is so that a classifier learnt on source domain labels also correctly classifies target domain examples. In particular, a substantial set of these works propose methods rooted in adversarial domain alignment \cite{ajakan2014domain, long2018conditional, saito2018maximum, saito2019semi, tzeng2017adversarial, zhang2019bridging}, deriving from the theory presented by Ben-David~\emph{et al}\onedot~\cite{ben2010theory}.
In this paper, we use simple label consistency \cite{sajjadi2016regularization} and rotation prediction for pretraining \cite{gidaris2018unsupervised} to propose a method, PAC (Pretraining and Consistency), that is competitive with the current state of the art on semi-supervised domain adaptation over multiple datasets. Our method does not use any adversarial domain alignment, yet is able to outperform methods that do in most cases. Notably, PAC achieves better target accuracy than all comparable state-of-the-art methods on the large and challenging DomainNet \cite{peng2019moment} benchmark, by 3--5\% on the 1-shot and 3-shot (1 and 3 labelled target examples respectively) SSDA scenarios.
Fig \ref{fig:hard} (a) shows the feature space distribution of points in a hypothetical binary classification problem, where simply aligning features can lead to many target domain features being mapped close to source features of a different class. A classifier learnt with source labels and possibly a few target labels can make errors here. Starting with an initial feature extractor that generates features somewhat meaningful to the image categories can remedy this situation to some extent.
Most recent domain adaptation approaches use an Imagenet \cite{imagenet_cvpr09} pretrained backbone as the starting point for their feature extractor. While adopted to be universally meaningful, these features are still limited by the kind of images in the Imagenet dataset. We use self-supervision via rotation prediction to enhance this Imagenet pretrained backbone for the particular domain adaptation task. This was first proposed by Gidaris~\emph{et al}\onedot~\cite{gidaris2018unsupervised} for learning features from unlabelled images, and it was recently found in a study by Wallace~\emph{et al}\onedot~\cite{wallace2020extending} to be more semantically meaningful than an array of other self-supervision objectives.
Also key to our approach is label consistency using image space perturbations. This relates to the cluster assumption for an ideal classifier \cite{chapelle2005semi}, which states that points in the same cluster in input space should be classified similarly. It is equivalent to the \emph{low-density separation assumption}, meaning that a classifier's decision boundaries should lie in regions of low data density, since that makes the classifier less likely to change its output on nearby/perturbed data \cite{verma2019interpolation}. Label consistency, or consistency regularization, is a way of enforcing this by making a classifier invariant to small input perturbations and thus to the neighborhoods that may form clusters. In our approach, we do this using image augmentation methods like RandAugment \cite{cubuk2020randaugment} and color jittering, and the model is trained to produce the same output for both a perturbed and an unperturbed version of the image.
In Fig \ref{fig:hard} (b) we motivate a scenario where consistency regularization can help fix errors that adversarial domain alignment might make. By perturbing data, the classifier is encouraged not to have decision boundaries close to these data points, allowing them to cluster. We note here that data augmentation is a powerful perturbation technique since it indicates a meaningful neighborhood in images. In other words, manipulating the image via small translations, rotations, color manipulations etc. does not change the category of the image, as long as it is done on common objects where some fine detail in the image may not play a big role in recognition. Via these perturbations, consistency regularization can help correctly cluster points that may initially lie on the wrong side of the decision boundary.
Our contributions in this paper are two-fold:
\begin{itemize}
\item We propose a simple semi-supervised domain adaptation method PAC, based on label consistency and rotation prediction for pretraining, which achieves state of the art accuracy on SSDA across multiple datasets.
\item We thoroughly analyze the individual components of our method and how they affect performance, providing an understanding of these components and of how they might be combined with other techniques.
\end{itemize}
\section{Pretraining and Consistency (PAC)}
\input{figures/model_fig}
Before describing our approach, we introduce notation for the problem and our model's components. Available to the model are two sets of labelled images: $\mathcal{D}_s = \{(\boldsymbol{x}_i^s, y_i^s)\}_{i=1}^{n_s}$, the labelled source images, and $\mathcal{D}_t = \{(\boldsymbol{x}_i^t, y_i^t)\}_{i=1}^{n_t}$, the few labelled target images, and additionally the set of unlabelled target images $\mathcal{D}_u = \{\boldsymbol{x}_i^u\}_{i=1}^{n_u}$. The goal is to predict labels for the images in $\mathcal{D}_u$. The final classification model consists of two components: the feature extractor $F$ and the classifier $C$. $F$ generates features $F(\boldsymbol{x})$ for an input image $\boldsymbol{x}$, which the classifier operates on to produce output class scores $C(F(\boldsymbol{x})) \in \mathbb{R}^{K}$, where $K$ is the number of categories that the images in the dataset could belong to. In our experiments, $F$ is a convolutional network and produces features with unit $\ell_2$-norm, \emph{i.e}\onedot $\norm{F(\boldsymbol{x})}_2 = 1$ (following \cite{saito2019semi}). $C$ consists of one or two fully connected layers.
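The two components can be sketched numerically as below; this is a minimal sketch with numpy in place of the convolutional backbone, and the temperature value $T=0.05$ is taken from our experiment details (function names and weight shapes are ours):

```python
import numpy as np

def l2_normalize(feats, eps=1e-12):
    """Project backbone features onto the unit sphere, i.e. ||F(x)||_2 = 1."""
    return feats / (np.linalg.norm(feats, axis=-1, keepdims=True) + eps)

def class_probs(feats, W, T=0.05):
    """Classifier C on normalized features; logits are divided by the
    temperature T (0.05 in our experiments) before the softmax."""
    logits = feats @ W.T / T                        # shape (n, K)
    logits -= logits.max(axis=-1, keepdims=True)    # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=-1, keepdims=True)
```

The small temperature sharpens the softmax, which matters later when thresholding confidences for consistency regularization.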
An overview of PAC is shown in Fig \ref{fig:model}. Our final model is trained in two stages:
\subsection{Rotation Pretraining} \label{subsec:rot}
We first train our feature extractor $F$ with the self-supervised task of predicting image rotations (Fig \ref{fig:model} (left)) on both the source and target datasets, \emph{i.e}\onedot, all images in $\mathcal{D}_s, \mathcal{D}_t$ and $\mathcal{D}_u$. Without using image category labels, we train a 4-way classifier to predict one out of 4 possible angles ($0^\circ$, $90^\circ$, $180^\circ$, $270^\circ$) that an input image has been rotated by. We follow the procedure of Gidaris~\emph{et al}\onedot~\cite{gidaris2018unsupervised} and in each minibatch, we introduce all 4 rotations of a given image to the classifier. This backbone is then used as the initialization for semi-supervised domain adaptation training.
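Constructing the pretext minibatch is straightforward; a minimal sketch (the function name is ours):

```python
import numpy as np

def rotation_batch(images):
    """Expand a minibatch for the pretext task: each image appears at all four
    rotations (0, 90, 180, 270 degrees) with the rotation index as its label."""
    rotated, labels = [], []
    for img in images:                        # img: (H, W, C)
        for k in range(4):
            rotated.append(np.rot90(img, k=k))
            labels.append(k)
    return np.stack(rotated), np.array(labels)
```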
\subsection{Consistency Regularization} \label{subsec:cons}
Consistency regularization promotes the final model $C \circ F$ to produce the same output for both an input image $\boldsymbol{x}$ and a perturbed version $\boldsymbol{x} + \delta$. We introduce these perturbations using image level augmentations: RandAugment \cite{cubuk2020randaugment} along with additional color jittering. Given an unlabelled image $\boldsymbol{x} \in \mathcal{D}_u$, we first compute the model's predicted class distributions
\begin{gather*}
p_x = p(y | \boldsymbol{x}; F, C) = \textnormal{softmax}(C(F(\boldsymbol{x}))) \\
q_x = p(y | \boldsymbol{x} + \delta; F, C) = \textnormal{softmax}(C(F(\boldsymbol{x} + \delta)))
\end{gather*}
$p_x$ is then confidence thresholded using a threshold $\tau$ and the following is used as the consistency regularization loss.
\begin{align}
\mathcal L_{CR}(\boldsymbol{x}) = \mathbbm{1}[\max_{k \in [K]} p_x(k) \ge \tau] H(p_x, q_x)
\end{align}
where $\mathbbm{1}$ is an indicator function and $H(p_x, q_x) = -\sum_{k \in [K]} p_x(k)\log (q_x(k))$ is the cross-entropy. Note that $p_x(\cdot)$ indexes into the $K$ elements of $p_x$. Intuitively, the unperturbed version of image $\boldsymbol{x}$ is used to compute a \emph{pseudo-target} for the perturbed version $\boldsymbol{x} + \delta$, which is used only when the pseudo-target has high confidence ($\max_{k} p_x(k) \ge \tau$). We also note that the target $p_x$ is not used in computing gradients for the parameters of the network. For the labelled examples from $\mathcal{D}_s$ and $\mathcal{D}_t$, we use the same perturbations but with the ground truth labels as targets.
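In code, the thresholded consistency term for a single example can be sketched as follows (in the real model, $p_x$ is additionally detached from the gradient computation):

```python
import numpy as np

def consistency_loss(p, q, tau=0.9):
    """L_CR for one unlabelled example: cross-entropy H(p, q) between the
    prediction p on the clean image (used as a fixed pseudo-target) and the
    prediction q on its perturbed version, kept only when max_k p(k) >= tau."""
    if p.max() < tau:
        return 0.0
    return float(-(p * np.log(q + 1e-12)).sum())
```

The threshold $\tau=0.9$ matches the value used across our experiments.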
The model is optimized using minibatch-SGD, with minibatches $M_s, M_t$ and $M_u$ sampled from $\mathcal{D}_s, \mathcal{D}_t$ and $\mathcal{D}_u$ respectively. The final optimization criterion used is
\begin{align*}
\mathcal L = \frac{1}{|M_s|} \sum_{(\boldsymbol{x}, y) \in M_s} & H(\bar{y}, q_{\boldsymbol{x}}) + \frac{1}{|M_t|} \sum_{(\boldsymbol{x}, y) \in M_t} H(\bar{y}, q_{\boldsymbol{x}}) \\
&+ \frac{1}{|M_u|} \sum_{\boldsymbol{x} \in M_u} \mathcal L_{CR} (\boldsymbol{x})
\end{align*}
where $\bar{y} \in \mathbb{R}^{K}$ is the one-hot representation of $y \in [K]$ or $\bar{y}(i) = \mathbbm{1} [i=y]$.
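The unlabelled term $\mathcal L_{CR}$ can be sketched in NumPy as follows. This is an illustrative per-example computation of ours (in practice the pseudo-target $p_x$ is detached from the gradient); the function names are not from a released codebase.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cr_loss(logits_clean, logits_aug, tau=0.95):
    """Per-example consistency loss: cross-entropy H(p_x, q_x) between
    the prediction p_x on the clean image and q_x on the augmented
    image, kept only where the clean prediction is confident
    (max_k p_x(k) >= tau)."""
    p = softmax(logits_clean)                     # pseudo-target p_x
    q = softmax(logits_aug)                       # prediction on x + delta
    mask = (p.max(axis=-1) >= tau).astype(float)  # indicator [max p_x >= tau]
    ce = -(p * np.log(q + 1e-12)).sum(axis=-1)    # H(p_x, q_x)
    return mask * ce
```

Averaging `cr_loss` over $M_u$ and adding the two supervised cross-entropy terms gives the total objective $\mathcal L$.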
\section{Background and Related Work} \label{sec:related_work}
\noindent\textbf{Adversarial Domain Alignment.}
Ben-David~\emph{et al}\onedot~\cite{ben2010theory} presented a theory of domain adaptation in which they introduced an upper bound on the target domain error of a classifier. For a classifier $h$ in a hypothesis space $\mathcal H$, they showed that
\begin{align*}
\epsilon_T(h) \leq \epsilon_S(h) + \frac12 d_{\mathcal H \Delta \mathcal H} (X_S, X_T) + \lambda
\end{align*}
where $\epsilon_S(h)$ and $\epsilon_T(h)$ are the source and target domain errors of the classifier $h$, $\lambda = \min_{h \in \mathcal H} \epsilon_S(h) + \epsilon_T(h)$ is the error of the ideal joint classifier and
\begin{align*}
d_{\mathcal H \Delta \mathcal H} (X_S, X_T) \triangleq 2 \sup_{h, h' \in \mathcal H} | & \mathbb{E}_{x \sim X_S}[h(x) \neq h'(x)] \\
&- \mathbb{E}_{x \sim X_T}[h(x) \neq h'(x)] |
\end{align*}
where $X_S$ and $X_T$ represent the source and target domain data distributions. Intuitively, this domain divergence $d_{\mathcal H \Delta \mathcal H}$ is a measure of the maximum change in a classifier's outputs on the target domain, when it changes only a little on source. Note that the divergence is equivalently defined using a single classifier from $\mathcal H \Delta \mathcal H$, as opposed to using two from $\mathcal H$ (readers are recommended to refer to Definition 3 of \cite{ben2010theory} for a more elaborate description).
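To make the definition concrete, the empirical divergence can be computed exactly for a toy finite hypothesis class, here threshold classifiers on $1$-D data. This is purely an illustrative computation of ours, not part of \cite{ben2010theory}.

```python
import numpy as np
from itertools import product

def hdh_divergence(xs_s, xs_t, thresholds):
    """Empirical H-Delta-H divergence for 1-D samples and the finite
    hypothesis class of threshold classifiers h_t(x) = [x >= t]:
    2 * sup over pairs (h, h') of |E_S[h != h'] - E_T[h != h']|."""
    def disagreement(xs, t1, t2):
        return np.mean((xs >= t1) != (xs >= t2))
    best = 0.0
    for t1, t2 in product(thresholds, repeat=2):
        gap = abs(disagreement(xs_s, t1, t2) - disagreement(xs_t, t1, t2))
        best = max(best, gap)
    return 2.0 * best
```

On identical samples the divergence is zero, while a large domain shift drives it toward its maximum value of $2$.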
Many visual domain adaptation approaches try to find a feature space $\phi$ such that the distributions $\phi(X_S)$ and $\phi(X_T)$ have low divergence.
Computing the divergence requires a classifier $f_d \in \mathcal H \Delta \mathcal H$, which may or may not have any relation to the classifier for categories in the final classification task.
Since finding the divergence involves a supremum over the classifier parameters, this results in a minimax objective for reducing domain divergence. This, along with minimizing the error of a classifier $f_c$ on $\phi(X_S)$ is broadly the approach adopted by a range of unsupervised domain adaptation methods \cite{ajakan2014domain, ganin2015unsupervised, long2018conditional, saito2018maximum, tzeng2017adversarial, zhang2019bridging}.
Adversarial domain alignment makes the assumption that aligning source and target features along with learning a good classifier on source features are sufficient for learning a good target domain classifier. Shu~\emph{et al}\onedot~\cite{shu2018dirt} showed, using the example of a domain discriminator \cite{ganin2015unsupervised} that this may not necessarily be true, especially when feature extractors have very high capacity---which is the case for commonly used deep convolutional networks. They also showed that cluster assumption \cite{chapelle2005semi} for an ideal classifier can have a role to play in domain adaptation.
\noindent\textbf{Semi-supervised learning and Cluster Assumption.}
Semi-supervised learning, where a classifier learns using a mixture of labelled and unlabelled examples, has a long-running literature. The cluster assumption \cite{chapelle2005semi} is widely accepted for robust image classification and its enforcement has been shown to lead to good performance in a range of semi-supervised learning methods. Conditional entropy minimization is one way of enforcing the cluster assumption \cite{grandvalet2005semi}.
Consistency regularization, as discussed in the introduction, is another method of enforcing that a classifier's labelling be consistent between input examples and their perturbed versions. Making a classifier invariant to these perturbations makes its decision boundaries lie in regions of low data density. Both random \cite{sajjadi2016regularization} and adversarial perturbations \cite{miyato2018virtual} have been proposed in this context.
More recently, Fixmatch \cite{sohn2020fixmatch}, with a simple consistency regularization approach using image based augmentations with RandAugment \cite{cubuk2020randaugment}, achieved remarkable performance on semi-supervised classification on CIFAR-10 with very few labelled examples per class. PAC, like Fixmatch, uses RandAugment for consistency regularization. We additionally use color jittering. In our experiments we found these image augmentations to work better than adversarial perturbations, possibly because the latter may not guarantee preserving an image's semantic meaning.
\noindent\textbf{Semi-supervised Domain Adaptation.}
As mentioned above, conditional entropy minimization has been used to enforce cluster assumption.
Saito~\emph{et al}\onedot~\cite{saito2019semi} cleverly built this into a minimax optimization problem to propose a domain alignment method, called minimax entropy (MME), that also respects the cluster assumption. They evaluated MME on semi-supervised domain adaptation, and found that it performed better than other domain alignment approaches without conditional entropy minimization.
Their approach uses entropy maximization with the classifier in an attempt to move the boundary close to and in between target unlabelled examples. Subsequent conditional entropy minimization using the feature extractor clusters the target unlabelled examples largely according to the ``split'' created by the classifier. Note that with this approach the neighborhood already gets defined by the classifier and if errors are made here, they are harder to fix. So, while PAC can fix errors like the ones in Fig. \ref{fig:hard} (b), MME may find it harder to do so. We demonstrate this in Sec \ref{subsec:results} by comparing the performance of both methods from a randomly initialized backbone, and find that our method performs better.
Saito~\emph{et al}\onedot~\cite{saito2019semi} also used a benchmark that has been used by subsequent approaches for evaluation of SSDA methods. Here we describe some of these approaches.
APE \cite{kim2020attract} uses different feature alignment objectives, within and across domains along with a perturbation consistency objective. BiAT\cite{jiangbidirectional} uses multiple adversarial perturbation strategies and consistency losses alongside MME.
Li~\emph{et al}\onedot~\cite{li2020online} propose an online meta-learning framework using target domain labelled data for meta-testing. They evaluate this approach with multiple domain alignment methods. We use their meta-MME model on semi-supervised domain adaptation for comparison.
\noindent\textbf{Self Supervision and Domain Adaptation.}
In the absence of any labelled training data, different self-supervision objectives \cite{carlucci2019domain, chen2020simple, he2020momentum, gidaris2018unsupervised, misra2020self} have been proposed that can learn semantically meaningful image features for tasks like image classification and object detection.
In domain adaptation, some recent approaches \cite{carlucci2019domain, sun2019unsupervised, xu2019self} have used self-supervision tasks as an auxiliary objective to regularize their model. Saito~\emph{et al}\onedot~\cite{saito2020universal} used a self-supervised feature space clustering objective for universal domain adaptation. PAC differs from these approaches in that we use rotation prediction to pretrain our feature extractor. This helps our initial feature extractor output more relevant and semantically meaningful features for the classification task at hand. In our experiments, we compared this to MoCo \cite{he2020momentum}, a recent contrastive self-supervision approach, and found features learnt using rotation prediction to have better properties for our task. This is also in line with the findings of Wallace~\emph{et al}\onedot~\cite{wallace2020extending}, where rotation prediction was found to be more semantically meaningful compared to other self-supervision objectives, for a range of classification tasks across different datasets.
\section{Introduction}\label{sec:intro}
Among all the applications of the Heegaard Floer package, the mapping cone formula proved by Ozsv\'ath and Szab\'o, first for integer surgeries \cite{integer} and then for rational surgeries \cite{rational}, is one of the most influential tools. It connects Heegaard Floer theory with three- and four-manifold problems, and has seen applications in all aspects of low dimensional topology; to name a few examples, in the cosmetic surgery conjecture \cite{cosmetic}, surgery obstructions \cite{surgeryob1,surgeryob2}, the Berge conjecture \cite{berge1, berge2, bergehedden}, the cabling conjecture \cite{cabling} and exceptional surgeries \cite{reducible,half-integer}.
There are a handful of generalizations of the original mapping cone formula, including the filtered mapping cone formula \cite{HL}, the link surgery formula \cite{link}, the involutive mapping cone formula \cite{invocone} and the involutive filtered mapping cone formula \cite{filteredinvolutive}.
Hedden and Levine's filtered mapping cone formula defines a second filtration on the original mapping cone, and thus computes the knot Floer complex of the dual knot in the knot surgery. There have been quite a few successes in utilizing this tool to understand topological questions, see for example \cite{homologyconcordance,hugo}. In the other direction, Truong proved the ``large surgery'' theorem for the $(n,1)$--cable of the dual knot in \cite{Truong}. Her key observation was that the original diagram used by Ozsv\'ath and Szab\'o to compute the large surgery of a knot also specifies the $(n,1)$--cable of the dual knot, with the addition of a second basepoint.
Using this observation as an ingredient, we prove a filtered mapping cone formula for the $(n,1)$--cable of the knot meridian, generalizing both Hedden--Levine's and Truong's results.
Moreover, as with Hedden and Levine's filtered mapping cone formula, our mapping cone agrees with Ozsv\'ath and Szab\'o's original mapping cone apart from the second filtration. Indeed, the $2$-handle cobordism maps in the exact surgery triangle are all the same in the above constructions, while the placement of an extra basepoint determines the second filtration. This perspective was adopted by Eftekhary as early as in \cite{Eftekhary1} and \cite{Eftekhary2}. Compared to Hedden--Levine's result, we merely shift the second basepoint, which results in a refiltering of the mapping cone.
From this perspective, our result also can be seen as a generalization of the original mapping cone formula, where it further demonstrates how the different placement of a second basepoint affects the knot filtration on the mapping cone.
On the other hand, our new formula is practically meaningful. For knots in $3$-manifolds other than $S^3$, even those in integer homology spheres, very little is known about their knot Floer invariants.
This is mainly due to a lack of computable examples. At present, for knots in homology spheres, the only effective tools for computing the knot Floer complex are the filtered mapping cone formula and knot lattice homology (proved invariant in \cite{invariance1,invariance2}). We hope that by studying the examples produced by this new filtered mapping cone formula, one can understand more about the properties of various knots in homology spheres, as well as the manifolds themselves.
The construction of our mapping cone follows closely Hedden-Levine's framework in \cite{HL}, but it comes with its own challenges. Basically, there are a few choices to make when constructing a filtered mapping cone, and the choices that are suitable for generalization for our purpose do not always agree with those made in \cite{HL}. We will point out each time when we make a different choice.
\begin{figure}
\labellist
\pinlabel $K_{n,\lambda}$ at 72 130
{ \Large
\pinlabel $K$ at 117 30
}
\endlabellist
\includegraphics{knot}
\caption{The $(n,1)$--cable of a meridian of a knot $K$, where $n=3,$ inside the $\lambda$--framed surgery on $K$.}
\label{fig: knot}
\end{figure}
\subsection{Applications}
It turns out that the examples produced by the new filtered mapping cone formula are rich and abundant. We have
\begin{theorem}\label{thm: phi}
For any integers $i$ and $j$ such that $i>j\geq 0,$ and any given $h\in \mathbb{Z}$,
there exists $(Y,K)\in \widehat{\mathcal{C}}_\Z$ such that $\varphi_{i,j}(K)=h$.
\end{theorem}
To explain the notations used here, the (smooth) homology concordance group $\widehat{\mathcal{C}}_\Z$ is generated by pairs $(Y,K),$ where $Y$ is an integer homology sphere bounding a homology ball and $K$ a knot in $Y.$ Two classes $(Y_1,K_1)$ and $(Y_2,K_2)$ are equivalent in $\widehat{\mathcal{C}}_\Z$ if there is a homology cobordism from $Y_1$ to $Y_2$ in which $K_1$ and $K_2$ cobound a smoothly embedded annulus. The concordance invariants $\varphi_{i,j}$ defined in \cite{Homoconcor}
are homomorphisms $\widehat{\mathcal{C}}_\Z \rightarrow \mathbb{Z},$ where $(i,j)\in (\mathbb{Z} \times \mathbb{Z}^{\geq 0}) - (\mathbb{Z}^{<0} \times \{0\})$. They generalize the concordance homomorphisms $\varphi_i$ defined in \cite{Moreconcor} and are used to prove the existence of a $\mathbb{Z}^\infty$ summand in $\widehat{\mathcal{C}}_\Z/{\mathcal{C}}_\Z$, where ${\mathcal{C}}_\Z$ is the subgroup of $\widehat{\mathcal{C}}_\Z$ generated by the knots in $S^3.$ The reader is referred to the original sources to learn more about those concordance homomorphisms. We also offer a brief review in Section \ref{ssec: ring}.
Note that Theorem \ref{thm: phi} is in contrast to the examples that come from the original filtered mapping cone formula. It follows from \cite[Corollary 1.3]{filteredinvolutive} that for any knot $K\subset S^3,$ the knot meridian inside the $1/p$-surgery on $K$ for any integer $p$ will have $\varphi_{i,j}=0$ for all $\abs{i-j}>2.$ (\cite[Corollary 1.3]{filteredinvolutive} only proved the case $p=\pm 1$, but the case of rational surgeries follows similarly.)
Theorem \ref{thm: phi} is the immediate consequence of the following computational result. Performing $+1$-surgery on the torus knot $T_{2,4k+3}$, let $J_{n,k}$ denote the $(n,1)$--cable of the knot meridian in $S^3_1(T_{2,4k+3})$ connected sum with the unknot\footnote{ The invariants $\varphi_{i,j}$ were defined for knots in any integer homology spheres, so we could also talk about $\varphi_{i,j}$ of the $(n,1)$--cable of the knot meridians. Since connected summing with the unknot does not change $\varphi_{i,j}$, those $\varphi_{i,j}$ would have the same values as in Proposition \ref{prop: phi}.} in $-S^3_1(T_{2,4k+3})$. The ambient manifold $S^3_1(T_{2,4k+3}) \mathbin{\#} -S^3_1(T_{2,4k+3})$ is homology cobordant to $S^3$.
\begin{restatable}[]{proposition}{propphi}
\label{prop: phi}
For any $k\geq 0$ and $n\geq 1$, we have
\begin{align*}
\varphi_{i,k}(J_{n,k})=
\begin{cases}
-1 & i=k+n\\
0 & i>k+n .
\end{cases}
\end{align*}
\end{restatable}
Proposition \ref{prop: phi} is proved in Section \ref{ssec: examples}, and we also refer the reader to Figure \ref{fig: complex_example} for one example of this infinite family.
Next, let $\widehat{\mathcal{C}}_{\mathbb{Z},\text{top}}$ be the topological version of the homology concordance group, where two classes $(Y_1,K_1)$ and $(Y_2,K_2)$ are equivalent if $K_1$ and $K_2$ cobound a locally flat annulus in a \emph{topological} homology cobordism from $Y_1$ to $Y_2$ (which does not need to have a smooth structure). Let $\psi \colon\thinspace \widehat{\mathcal{C}}_\Z \rightarrow \widehat{\mathcal{C}}_{\mathbb{Z},\text{top}}$ be the natural map that forgets the smooth structure.
\begin{theorem}\label{thm: phiker}
The classes $(Y,K)$ in Theorem \ref{thm: phi} can be taken inside $\operatorname{ker}\psi.$
\end{theorem}
\begin{proof}
The proof proceeds the same way as the proof of \cite[Theorem 1.7]{hugo}. For the convenience of the reader, we repeat the complete proof here. According to \cite[Proposition 6.1]{twoTorsion}, up to quotienting acyclic complexes, the positive-clasped untwisted Whitehead double of $T_{2,3}$, denoted by $D$, has the same knot Floer complex as $T_{2,3}$. Thus instead of applying the filtered mapping cone formula to $T_{2,4k+3}$, applying it to the knot $D_{k} \coloneqq (2k+1)D$ yields a complex with identical concordance invariants.
On the other hand, by the work of Freedman and Quinn \cite{FQbook}, the knot $D_k$ has trivial Alexander polynomial, thus is topologically slice. Consider the $4$-manifold $W_k$ obtained by attaching a $+1$-framed two-handle to $B^4$ along $D_k$. Inside $W_k$, the core of the two-handle and the (topologically) slice disk of $D_k$ form a sphere; let $Z$ denote a tubular neighborhood of this sphere. Then $Z$ is a disk bundle with Euler number one, and therefore has $S^3$ as its boundary. Deleting $Z$ and gluing back in a $B^4$, the resulting manifold $W'_k$ has the same homology as a point according to Mayer--Vietoris, thus is contractible by the Whitehead theorem. At the same time, in the $+1$-surgery on $D_k$, the dual knot is isotopic to a $0$-framed longitude, which bounds a locally flat disk in $B^4$ disjoint from the slice disk of $D_k$. Therefore $S^3_1(D_k)$ bounds a contractible manifold $W'_k$, in which the dual knot of $D_k$ bounds a locally flat disk. Finally, notice that the $(n,1)$--cable of a slice knot is slice.
\end{proof}
\begin{remark}
It is worth noting that the contractible manifold that $S^3_1(D_k)$ bounds is a topological manifold. In fact, the $d$--invariant obstructs $S^3_1(D_k)$ from bounding a smooth contractible manifold.
\end{remark}
So far, the examples that generate the $\mathbb{Z}^\infty$ summand in $\widehat{\mathcal{C}}_\Z/{\mathcal{C}}_\Z$ are necessarily in distinct integer homology spheres. Answering a question raised in \cite{Homoconcor}, we have the following.
\begin{theorem}
The $\mathbb{Z}^\infty$ summand in $\widehat{\mathcal{C}}_\Z/{\mathcal{C}}_\Z$ can be generated by knots inside one single integer homology sphere $Y$.
\end{theorem}
\begin{proof}
By fixing $k$ and varying $n,$ we obtain an infinite family of knots $\{J_{n,k}\}_{n\in \mathbb{N}}$, which immediately implies that the homomorphism $\bigoplus_{i>k} \varphi_{i,k}$ is surjective onto $\mathbb{Z}^\infty$ by Proposition \ref{prop: phi}.
\end{proof}
As a slight variant of the group $\widehat{\mathcal{C}}_\Z$, \cite[Remark 1.13]{homologyconcordance} considered the subgroup of $\widehat{\mathcal{C}}_\Z$ consisting of all pairs $(Y,K)$ such that $Y$ bounds a homology $4$-ball in which $K$ is freely nullhomotopic (or equivalently, in which $K$ bounds an immersed disk). As they remarked, this variant is arguably a more appropriate generalization of the concordance group than $\widehat{\mathcal{C}}_\Z$, as it measures the failure
of replacing immersed disks by embedded ones. Clearly, if $Y$ bounds a smooth $4$-manifold that is contractible, the above requirements are automatically satisfied for any knot $K\subset Y$. This prompts the question:
\begin{question}
Can the $\mathbb{Z}^\infty$ summand in $\widehat{\mathcal{C}}_\Z/{\mathcal{C}}_\Z$ be generated by knots inside an integer homology sphere $Y$ that bounds a contractible smooth manifold?
\end{question}
Another interesting aspect worth pointing out is that the new filtered mapping cone captures all the information from the input if $n$ is sufficiently large. As a comparison, recall that according to \cite[Theorem 3.1]{hugo}, up to local equivalence, for the $+1$-surgery on $L$--space knots, the knot Floer complex of the dual knot only depends on two parameters: the total length of horizontal edges in the top half of the complex, and the congruence class of the number of generators mod $4$. (This is more or less natural in view of \cite[Theorem 1.2]{filteredinvolutive}, which states that the dual knot complex admits a relatively small local model.) In contrast, we have the following.
\begin{proposition}\label{prop: intro_middle}
Suppose $K\subset S^3$ is a knot of genus $g$. When $n\geq 2g$, there is a quotient complex of the filtered mapping cone of the $(n,1)$--cable of the knot meridian that is filtered homotopy equivalent to $\operatorname{CFK}^\infty(S^3,K).$
\end{proposition}
See Proposition \ref{prop: app_middle} for the precise statement. In general, up to local equivalence, the knot Floer complex of the $(n,1)$--cable of the dual knot depends on much more than the two parameters mentioned above.
Moreover, there is a curious phenomenon which we shall name \emph{stabilization} that is associated to the behavior of the knot Floer complex of the $(n,1)$--cable of the dual knot when $n\geq 2g$. Basically, when $n\geq 2g$ and as $n$ increases, the length of some edges of the complex will increase, but aside from that, the complex stops changing ``shape'', instead, a new copy of the complex $\operatorname{CFK}^\infty(S^3,K)$ gets added to the complex ``in the middle'' each time $n$ increases by $1$. For more details, read the discussion below Proposition \ref{prop: app_middle}.
While the algebraic reason for stabilization is rather straightforward, it is somewhat mysterious why this phenomenon happens geometrically. Are $(n,1)$--cables of the knot meridian for $n\geq 2g$ in fact special topologically? We ask the following question:
\begin{question}
What is the geometrical explanation for stabilization?
\end{question}
In the subsection that follows, we present the statement of the new filtered mapping cone formula.
\subsection{Statement of the main theorem.}\label{ssec: statement}
We start by introducing some notations.
Suppose $K$ represents a class of order $d>0$ in $H_1(Y;\mathbb{Z})$. Fix a tubular neighborhood $\nbd(K)$ of $K$, let $\mu$ be the canonical choice of the right-handed meridian in $\nbd(K)$ and $\lambda$ a choice of longitude, both of which we may view as curves on $\partial(\nbd(K))$.
Seen as elements of $H_1(\partial (\nbd(K)))$, $\lambda$ and $\mu$ satisfy $\partial F = d\lambda - k\mu$ for some $k\in\mathbb{Z}$, where $F$ is a rational Seifert surface of $K$. In fact, the framing is completely determined by $k$. Assume from now on $k\neq 0.$
Let $Y_\lambda(K)$ denote the surgery on $K$ with the framing specified by $\lambda$. Note that inside $Y_\lambda(K)$, $\mu$ is isotopic to the core of the surgery solid torus. Following Hedden--Levine's convention, we will let our cables of the meridian inherit the orientation from the \emph{left-handed} meridian. More precisely, define $K_{n,\lambda} \in Y_\lambda(K)$ to be $n$ copies of $-\mu$ joined with trivial band sums (see Figure \ref{fig: knot}). The knot $K_{n,\lambda}$ is the $(n,1)$--cable of the dual knot with its framing specified by $\partial(\nbd(K))$. In the rest of the paper, the standalone letters $k,d$ and $n$ are in general reserved for the quantities described above.
We will assume that the reader has a certain familiarity with the Heegaard Floer package. The basic construction and certain helpful propositions are reviewed in Section \ref{sec: prelim}. For a more thorough introduction or survey on the topic, see for example
\cite{IntroHeegaard,survey}.
Recall that the \emph{relative spin$^c$ structures} are the set of homology classes of vector fields that are non-vanishing on $Y \smallsetminus \nbd(K)$ and tangent to the boundary along $\partial (\nbd(K))$, denoted by $\underline\Spin^c(Y,K)$. Ozsv\'ath and Szab\'o defined the map
\[
G_{Y,K} \colon\thinspace \underline\Spin^c(Y,K) \to \Spin^c(Y),
\]
which is equivariant with respect to the restriction map
\[
H^2(Y,K;\mathbb{Z}) \to H^2(Y;\mathbb{Z}).
\]
Following the convention in \cite{HL}, the \emph{Alexander grading} of each $\xi \in \underline\Spin^c(Y,K)$ is defined as
\begin{equation} \label{eq: spinc-alex}
A_{Y,K}(\xi) = \frac{\gen{c_1(\xi), [F]} + [\mu] \cdot [F] }{2 [\mu] \cdot [F]} \in \frac{1}{2d} \mathbb{Z},
\end{equation}
where $F$ is a rational Seifert surface for $K$. For each $\mathfrak{s} \in \Spin^c(Y)$, the values of $A_{Y,K}(\xi)$, for all the $\xi \in \underline\Spin^c(Y,K)$ such that $G_{Y,K}(\xi)=\mathfrak{s}$, belong to a single coset in $\mathbb{Q}/\mathbb{Z}$; denote it by $A_{Y,K}(\mathfrak{s})$. Moreover, any $\xi \in \underline\Spin^c(Y,K)$ is uniquely determined by the pair $(G_{Y,K}(\xi), A_{Y,K}(\xi))$.
Choose a spin$^c$ structure $\mathfrak{t}$ on $Y_\lambda(K)$.
There is a bijection between $\Spin^c(W_{\lambda})$ and $\underline\Spin^c(Y,K)$, where $W_{\lambda}$ is the two-handle cobordism from $Y$ to $Y_\lambda$ (see Definition \ref{def: spincbij} for more details). Consider all the spin$^c$ structures in $\Spin^c(W_{\lambda})$ that extend $\mathfrak{t}$. We may see these as relative spin$^c$ structures in $\underline\Spin^c(Y,K)$ through the bijection; denote them by $(\xi_l)_{l \in \mathbb{Z}}$. Let $\mathfrak{s}_l = G_{Y,K}(\xi_l)$ and $s_l = A_{Y,K}(\xi_l)$. The index is pinned down first by the relation $\mathfrak{s}_{l+1} = \mathfrak{s}_l + \PD[K]$, so that the sequence $(\mathfrak{s}_l)_{l \in \mathbb{Z}}$ repeats with period $d$, while $s_{l+1} = s_l + \frac{k}{d}$, and then by the conventions (for more discussion, see the end of Section \ref{sssec: spinc})
\begin{gather}
\label{eq: xil-bound-pos}
\frac{(2l-1)k}{2d} < A_{Y,K}(\xi_l) \le \frac{(2l+1)k}{2d} \qquad \text{if } k>0, \\
\label{eq: xil-bound-neg}
\frac{(2l+1)k}{2d} \le A_{Y,K}(\xi_l) < \frac{(2l-1)k}{2d} \qquad \text{if } k<0.
\end{gather}
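As an illustrative consistency check on these conventions (a small exact-arithmetic helper of ours, not part of the construction), note that in both \eqref{eq: xil-bound-pos} and \eqref{eq: xil-bound-neg} the quantity $t = A_{Y,K}(\xi_l)\,d/k$ lies in the half-open window $(l - \tfrac12, l + \tfrac12]$, so the index $l$ can be recovered from the Alexander grading:

```python
from fractions import Fraction
import math

def spinc_index(A, d, k):
    """Recover l from A = A_{Y,K}(xi_l): in both sign conventions,
    t = A * d / k lies in the half-open window (l - 1/2, l + 1/2]."""
    t = Fraction(A) * d / Fraction(k)
    return math.ceil(t - Fraction(1, 2))
```

For instance, with $d = 1$ and $k = -1$, the grading $A = 1$ gives $l = -1$, consistent with \eqref{eq: xil-bound-neg}.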
For each $l \in \mathbb{Z}$, let $A^\infty_{\xi_l}$ and $B^\infty_{\xi_l}$ each denote a copy of $\operatorname{CFK}^\infty(Y,K,\mathfrak{s}_l)$. Define a pair of filtrations $\mathcal{I}_\mathfrak{t}$ and $\mathcal{J}_\mathfrak{t}$ and an absolute grading $\gr_\mathfrak{t}$ on these complexes as follows:
\begin{align}
\intertext{For $[\mathbf{x},i,j] \in A^\infty_{\xi_l}$,}
\label{eq: It-def-A} \mathcal{I}_\mathfrak{t}([\mathbf{x},i,j]) &= \max\{i,j-s_l\} \\
\label{eq: Jt-def-A} \mathcal{J}_\mathfrak{t}([\mathbf{x},i,j]) &= \max\{i-n,j-s_l\} + \frac{2nds_l+nk-n^2 d}{2k} \\
\label{eq: grt-def-A} \gr_\mathfrak{t}([\mathbf{x},i,j]) &= \operatorname{\widetilde{gr}}([\mathbf{x},i,j]) + \frac{(2ds_l - k)^2 }{4dk} + \frac{2-3\sign(k)}{4} \\
\intertext{For $[\mathbf{x},i,j] \in B^\infty_{\xi_l}$,}
\label{eq: It-def-B} \mathcal{I}_\mathfrak{t}([\mathbf{x},i,j]) &= i \\
\label{eq: Jt-def-B} \mathcal{J}_\mathfrak{t}([\mathbf{x},i,j]) &= i-n + \frac{2nds_l+nk-n^2 d}{2k} \\
\label{eq: grt-def-B} \gr_\mathfrak{t}([\mathbf{x},i,j]) &= \operatorname{\widetilde{gr}}([\mathbf{x},i,j]) + \frac{(2ds_l - k)^2 }{4dk} + \frac{-2-3\sign(k)}{4}
\end{align}
Here $\operatorname{\widetilde{gr}}$ denotes the $\mathbb{Q}$--valued Maslov grading on $\operatorname{CFK}^\infty(Y,K,\mathfrak{s}_l)$. The values of $\mathcal{I}_\mathfrak{t}$ are integers, while the values of $\mathcal{J}_\mathfrak{t}$ live in the coset $A_{Y_\lambda,K_{n,\lambda}}(\mathfrak{t})$. See Figure \ref{fig: filtration} for an example of the filtrations when $k=d=1.$
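To illustrate the bookkeeping in \eqref{eq: It-def-A}--\eqref{eq: Jt-def-B}, the following exact-arithmetic sketch (our own helper; it simply evaluates the formulas) computes $(\mathcal{I}_\mathfrak{t}, \mathcal{J}_\mathfrak{t})$ of a generator. Note that for $n=1$ and $d=k=1$ the $\mathcal{J}_\mathfrak{t}$ shift reduces to $s_l$, recovering the $n=1$ specialization.

```python
from fractions import Fraction

def j_shift(s_l, n, d, k):
    """The constant (2*n*d*s_l + n*k - n^2*d) / (2k) appearing in J_t."""
    return (2 * n * d * Fraction(s_l) + n * k - n * n * d) / Fraction(2 * k)

def filtration_A(i, j, s_l, n, d, k):
    """(I_t, J_t) of a generator [x, i, j] of A^infty_{xi_l}."""
    s = Fraction(s_l)
    I = max(Fraction(i), Fraction(j) - s)
    J = max(Fraction(i) - n, Fraction(j) - s) + j_shift(s_l, n, d, k)
    return I, J

def filtration_B(i, j, s_l, n, d, k):
    """(I_t, J_t) of a generator [x, i, j] of B^infty_{xi_l}."""
    return Fraction(i), Fraction(i) - n + j_shift(s_l, n, d, k)
```

Such a helper makes it straightforward to tabulate the double filtration on small examples, as in Figure \ref{fig: filtration}.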
For each $\xi_l$, there is a filtered chain homotopy equivalence
\[
\Psi^\infty_{\xi_l} \colon\thinspace \operatorname{CFK}^\infty(Y,K,\mathfrak{s}_l) \longrightarrow \operatorname{CFK}^\infty(Y,K,\mathfrak{s}_{l+1}),
\]
often referred to as the ``flip map". (We will discuss this in more details in Section \ref{sec: prelim}.) In particular, for any null-homologous knot in an $L$--space, the reflection map that exchanges $i$ and $j$ suffices to play the role of $\Psi^\infty_{\xi_l} $.
Let $A^-_{\xi_l}$ (resp.~$B^-_{\xi_l}$) denote the subcomplex of $A^\infty_{\xi_l}$ (resp.~$B^\infty_{\xi_l}$) with $\mathcal{I}<0$, and let $A^+_{\xi_l}$ (resp.~$B^+_{\xi_l}$) denote the quotient.
Define $v^\infty_{\xi_l} \colon\thinspace A^\infty_{\xi_l} \to B^\infty_{\xi_l}$ to be the identity map on $\operatorname{CFK}^\infty(Y,K,\mathfrak{s}_l)$, and $h^\infty_{\xi_l} \colon\thinspace A^\infty_{\xi_l} \to B^\infty_{\xi_{l+1}}$ given by the ``flip map'' $\Psi^\infty_{\xi_l} $. Both $v^\infty_{\xi_l}$ and $h^\infty_{\xi_l}$ are doubly-filtered and homogeneous of degree $-1$. So $v^\infty_{\xi_l}$ (resp.~$h^\infty_{\xi_l}$ ) restricted to $A^-_{\xi_l}$ maps to $B^-_{\xi_l}$ (resp.~$B^-_{\xi_{l+1}}$), and hence also induces a map from $A^+_{\xi_l}$ to $B^+_{\xi_l}$ (resp.~$B^+_{\xi_{l+1}}$). Define these maps to be $v^-_{\xi_l}$ (resp.~$h^-_{\xi_l}$ ) and $v^+_{\xi_l}$ (resp.~$h^+_{\xi_l}$ ). The above definitions agree with those in \cite{HL}.
Let $\mathbb{A}_{\lambda,\mathfrak{t},a,b}=\bigoplus_{l=a}^{b} A^\infty_{\xi_l} $ and $\mathbb{B}_{\lambda,\mathfrak{t},a,b}=\bigoplus_{l=a+1}^{b} B^\infty_{\xi_l} $, both inherit the double filtrations. Let $v^\infty = \bigoplus_{l} v^\infty_{\xi_l} $ and $h^\infty = \bigoplus_{l} h^\infty_{\xi_l} $ be maps from $\mathbb{A}_{\lambda,\mathfrak{t},a,b}$ to $\mathbb{B}_{\lambda,\mathfrak{t},a,b}$. Define $X^\infty_{\lambda, \mathfrak{t},n, a,b}$ to be the mapping cone of $v^\infty + h^\infty$. The following is our main theorem.
\begin{theorem} \label{thm: mapping-cone}
Let $K$ be a knot in a rational homology sphere $Y$, let $\lambda$ be a nonzero framing on $K$, and let $\mathfrak{t} \in \Spin^c (Y_\lambda(K))$. Then for all $a \ll 0$ and $b \gg 0$, the chain complex $X^\infty_{\lambda,\mathfrak{t},n,a,b}$, equipped with the filtrations $\mathcal{I}_\mathfrak{t}$ and $\mathcal{J}_\mathfrak{t}$, is doubly-filtered chain homotopy equivalent to $\operatorname{CFK}^\infty(Y_\lambda, K_{n,\lambda}, \mathfrak{t})$.
\end{theorem}
Theorem \ref{thm: mapping-cone} can be generalized to the case of rational surgeries as well. See Section \ref{sec: rational} for details.
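At the level of $\mathbb{F}_2$-linear algebra, the mapping cone underlying $X^\infty_{\lambda,\mathfrak{t},n,a,b}$ can be modeled by a block matrix. The following toy sketch (our own illustration, not the actual complex) records the differential $\partial(a,b) = (\partial_A a,\ f(a) + \partial_B b)$ for a chain map $f = v + h$ and checks $\partial^2 = 0$.

```python
import numpy as np

def cone_differential(dA, dB, f):
    """Differential of Cone(f : A -> B) over F_2, as the block matrix
    [[dA, 0], [f, dB]] acting on column vectors of A (+) B (mod 2)."""
    a, b = dA.shape[0], dB.shape[0]
    D = np.zeros((a + b, a + b), dtype=int)
    D[:a, :a] = dA
    D[a:, :a] = f
    D[a:, a:] = dB
    return D % 2
```

Over $\mathbb{F}_2$, $\partial^2 = 0$ on the cone is exactly the chain-map condition $f\,\partial_A = \partial_B\,f$ together with $\partial_A^2 = \partial_B^2 = 0$.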
\begin{remark}
Although it is not clear from the construction, the $a$ and $b$ in the above theorem can be determined by $d,n$ and $g$ quite straightforwardly, where $g$ is the genus of the knot $K$. Suppose $K$ is null-homologous, so that $d=1.$ In this case, taking $a=-g+1$ and $b=g+n-1$ suffices. See the beginning of Section \ref{sec: examples} for an explanation.
\end{remark}
\begin{remark}
The filtration in \cite{Truong} only agrees with ours after a reflection with respect to the line $i=j$. This is because Truong used the right-handed meridian while we use the left-handed meridian. In fact, there are two versions of the filtered mapping cone formula that one can prove: the current one with $(w,z,z_n)$ basepoints, where the additional basepoint $z_n$ is placed to the left of $w$ and $z$, and another version with $(w,z,w_n)$ basepoints, where the $w_n$ basepoint is placed to the right of $w$ and $z$. The latter results in a filtration matching the one in \cite{Truong}. We chose the current version since it generalizes the filtrations defined in \cite{HL}.
\end{remark}
We learned that Ian Zemke is working on a different method that would achieve a similar filtered mapping cone formula for the $(n,1)$--cables of the knot meridian: if one takes the connected sum of $K$ with a $T_{2,2n}$ torus link, the $(n,1)$--cable of the meridian of $K$ can be obtained by performing a surgery on one of the link components. The gradings are then readily read off from the grading changes in the cobordism maps. The base case with the $T_{2,2}$ torus link is worked out in \cite{filteredinvolutive}.
\subsection*{Organization} In Section \ref{sec: prelim}, we review the basic construction of knot Floer homology. In Section \ref{sec: alex}, we construct the diagrams for the $(n,1)$--cable of the knot meridian and compute the Alexander grading shifts and first Chern class evaluations of holomorphic polygons. These computations form the theoretical core of the subsequent sections. Then in Sections \ref{sec: largesurgery}, \ref{sec: exacttri} and \ref{sec: proofcone}, we prove the large surgery formula for the $(n,1)$--cable of the knot meridian, the filtered surgery exact triangle, and finally the filtered mapping cone formula. In Section \ref{sec: rational} we generalize the filtered mapping cone formula to the case of rational surgeries. In Section \ref{sec: examples} we perform the computations that lead to the proofs of Proposition \ref{prop: phi} and Theorem \ref{thm: phi}.
\subsection*{Acknowledgement}
I want to thank my advisor Jen Hom for her continued support and encouragement. I am grateful to Adam Levine, Ian Zemke, Matt Hedden, John Etnyre, JungHwan Park and Chuck Livingston for helpful comments. I would like to thank Matt Hedden and Adam Levine for providing such a clear and thorough guideline in \cite{HL}, which inspired the current paper. I also want to thank the group of young mathematicians I met during GSTGC at Georgia Tech, to whom I attribute much of the motivation for accomplishing this project.
\section{Preliminary on knot Floer complexes} \label{sec: prelim}
Ozsv\'ath and Szab\'o defined a package of Floer invariants, including the Heegaard Floer complex for three-manifolds (see \cite{OSthreemanifold}) and the knot Floer complex for knots (see \cite{OSknot}). In this section we collect some basic definitions and propositions necessary for the rest of the paper. For a more thorough introduction or survey on the topic, see
\cite{IntroHeegaard,survey}.
A \emph{pointed Heegaard diagram} for a three-manifold $Y$ is a quadruple $(\Sigma,\bm\alpha,\bm\beta,z),$ where $\Sigma$ is an oriented surface of genus $g$, $\bm\alpha =\{ \alpha_1,\cdots, \alpha_g\}$ is a set of disjoint simple closed curves on $\Sigma$ indicating the attaching circles for the one-handles of $Y$, and similarly $\bm\beta =\{\beta_1,\cdots, \beta_g\}$ indicates the attaching circles for the $2$-handles of $Y$. Then
\[
z\in \Sigma -\alpha_1 -\cdots -\alpha_g -\beta_1 -\cdots -\beta_g
\]
is a choice of reference point. Together, the data $(\Sigma,\bm\alpha,\bm\beta,z)$ enables the construction of a suitable variant of Lagrangian Floer homology in the $g$--fold symmetric product of $\Sigma$. Define
\[
\mathbb{T}_\alpha=\alpha_1 \times\cdots \times\alpha_g \subset \text{Sym}^g(\Sigma) \quad \text{ and } \quad \mathbb{T}_\beta=\beta_1 \times\cdots \times\beta_g \subset \text{Sym}^g(\Sigma).
\]
The complex $\operatorname{CF}^\infty(\Sigma,\bm\alpha,\bm\beta,z)$ is freely generated over $\mathbb{F}$ by generators $[\mathbf{x},i]$, where $\mathbf{x}$ is an intersection point of $\mathbb{T}_\alpha$ and $\mathbb{T}_\beta$ and $i$ is any integer. The differential is given by
\[
\partial([\mathbf{x},i]) = \sum_{\mathbf{y} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta} \sum_{\substack{ \phi \in \pi_2(\mathbf{x},\mathbf{y}) \\ \mu(\phi)=1}} \# \widehat{\mathcal{M}}(\phi) \, [\mathbf{y}, i-n_z(\phi)],
\]
where $\pi_2(\mathbf{x},\mathbf{y})$ denotes the space of homotopy classes of holomorphic disks from $\mathbf{x}$ to $\mathbf{y}$, $\mu(\phi)$ denotes the Maslov index of $\phi$, $\widehat{\mathcal{M}}(\phi)$ denotes the moduli space of pseudo-holomorphic representatives of $\phi$ quotiented by $\mathbb{R}$, and $n_z(\phi)$ counts the algebraic intersection number of $\phi$ with $\{z\}\times \text{Sym}^{g-1}(\Sigma)$. There is a $U$--action on the integer component, given by the relation $U\cdot[\mathbf{x},i]=[\mathbf{x},i-1]$; thus the complex $\operatorname{CF}^\infty(\Sigma,\bm\alpha,\bm\beta,z)$ has the underlying structure of an $\mathbb{F}[U]$--module. Define the subcomplex generated by $[\mathbf{x},i]$ with $i<0$ to be $\operatorname{CF}^-(\Sigma,\bm\alpha,\bm\beta,z)$, the corresponding quotient complex to be $\operatorname{CF}^+(\Sigma,\bm\alpha,\bm\beta,z)$, and the sub-quotient complex generated by $[\mathbf{x},i]$ with $i=0$ to be $\operatorname{\widehat{CF}}(\Sigma,\bm\alpha,\bm\beta,z)$. For $t\in \mathbb{N}$, the ``truncated complex'' $\operatorname{CF}^t(\Sigma,\bm\alpha,\bm\beta,z)$ is defined to be the sub-quotient complex generated by $[\mathbf{x},i]$ with $0\leq i \leq t$; or equivalently, the kernel of the action of $U^{t+1}$ on $\operatorname{CF}^+(\Sigma,\bm\alpha,\bm\beta,z)$. Note that $\operatorname{CF}^t(\Sigma,\bm\alpha,\bm\beta,z)$ is isomorphic to the quotient
\[
\operatorname{CF}^-(\Sigma, \bm\alpha, \bm\beta, z, \mathfrak{s}) / (U^{t+1} \cdot \operatorname{CF}^-(\Sigma, \bm\alpha, \bm\beta, z, \mathfrak{s})),
\]
up to a grading shift; equivalently, it is generated by the $[\mathbf{x},i]$ with $-t\leq i \leq 0$.
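For instance, taking $t=0$ recovers the hat flavor: the sub-quotient complex generated by the $[\mathbf{x},i]$ with $i=0$ is precisely
\[
\operatorname{CF}^0(\Sigma,\bm\alpha,\bm\beta,z) = \operatorname{\widehat{CF}}(\Sigma,\bm\alpha,\bm\beta,z),
\]
which is the kernel of the $U$--action on $\operatorname{CF}^+(\Sigma,\bm\alpha,\bm\beta,z)$.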
It is proved in \cite{OSclosed} that the chain homotopy type of the chain complex $\operatorname{CF}^\circ(\Sigma,\bm\alpha,\bm\beta,z)$ is a topological invariant of $Y$, so it makes sense to drop the choice of the Heegaard diagram from the notation and write $\operatorname{CF}^\circ(Y)$.
As a refinement of the above construction, the knot Floer complex is a topological invariant of the isotopy class of a knot $K\subset Y$. For simplicity, we specialize $Y$ to be a rational homology sphere from now on. First, a \emph{doubly pointed Heegaard diagram} $(\Sigma,\bm\alpha,\bm\beta,w,z)$ is a pointed Heegaard diagram with the addition of a second basepoint, which specifies a knot $K\subset Y$ in the following way: let $t_\alpha$ be an arc from $z$ to $w$ in $\Sigma \setminus \bm\alpha$ and $t_\beta$ an arc from $w$ to $z$ in $\Sigma \setminus \bm\beta$; then $K$ is obtained by pushing $t_\alpha$ slightly into the $\alpha$--handlebody and $t_\beta$ slightly into the $\beta$--handlebody.
Ozsv\'ath and Szab\'o defined a map $\mathfrak{s}_z \colon\thinspace \mathbb{T}_\alpha \cap\mathbb{T}_\beta \to \Spin^c(Y)$, which associates to each $\mathbf{x}\in \mathbb{T}_\alpha \cap\mathbb{T}_\beta$ a spin$^c$ structure $\mathfrak{s}_z(\mathbf{x}) \in \Spin^c(Y)$. The map $\mathfrak{s}_w$ can be defined in the same way, and the relation between them is given by $\mathfrak{s}_z(\mathbf{x}) = \mathfrak{s}_w(\mathbf{x}) + \PD[K]$. The Heegaard Floer complex decomposes over $\Spin^c(Y)$ as
\[
\operatorname{CF}^\circ(Y)=\bigoplus_{\mathfrak{s} \in \Spin^c(Y)} \operatorname{CF}^\circ(Y,\mathfrak{s}),
\]
where $\operatorname{CF}^\circ(Y,\mathfrak{s})$ is generated by the $[\mathbf{x},i]$ satisfying $\mathfrak{s}_z(\mathbf{x})=\mathfrak{s}.$ Similarly, Ozsv\'ath and Szab\'o also defined a map $\underline\mathfrak{s}_{w,z} \colon\thinspace \mathbb{T}_\alpha \cap\mathbb{T}_\beta \to \underline\Spin^c(Y,K)$ with the property that $G_{Y,K} (\underline\mathfrak{s}_{w,z}(\mathbf{x})) = \mathfrak{s}_w(\mathbf{x})$, where $G_{Y,K} \colon\thinspace \underline\Spin^c(Y,K) \to \Spin^c(Y)$ is the restriction map defined in Section \ref{ssec: statement}. The Alexander grading of $\mathbf{x}$ is defined as
\begin{align}
\label{eq: alex-def}
A_{w,z}(\mathbf{x}) &= A_{Y,K}(\underline\mathfrak{s}_{w,z}(\mathbf{x})),
\end{align}
where the Alexander grading of a relative spin$^c$ structure is given by (\ref{eq: spinc-alex}). Suppose that generators $\mathbf{x}$ and $\mathbf{y}$ satisfy $\mathfrak{s}_z(\mathbf{x}) = \mathfrak{s}_z(\mathbf{y})$. Then for any disk $\phi \in \pi_2(\mathbf{x},\mathbf{y})$, we have
\begin{equation} \label{eq: rel-alex}
A_{w,z}(\mathbf{x}) - A_{w,z}(\mathbf{y}) = n_z(\phi) - n_w(\phi).
\end{equation}
In particular, the difference of the Alexander gradings is an integer. Thus the Alexander gradings $A_{w,z}(\mathbf{x})$ of the generators in the same spin$^c$ class $\mathfrak{s}_z(\mathbf{x})=\mathfrak{s}$ all belong to the same coset in $\mathbb{Q}/\mathbb{Z}$; denote this coset by $A_{w,z}(\mathfrak{s})$. For each $\mathfrak{s} \in \Spin^c(Y)$, the knot Floer complex $\operatorname{CFK}^\infty(\Sigma, \bm\alpha, \bm\beta, w, z, \mathfrak{s})$ is freely generated over $\mathbb{F}$ by all $[\mathbf{x}, i, j]$, where $\mathfrak{s}_z(\mathbf{x}) = \mathfrak{s}$, $i \in \mathbb{Z}$, and $j-i = A_{w,z}(\mathbf{x})$. The $U$--action is given by
$U \cdot [\mathbf{x},i,j] = [\mathbf{x},i-1,j-1]$, and the differential is given by
\begin{equation} \label{eq: CFKi-diff}
\partial [\mathbf{x},i,j] = \sum_{\mathbf{y} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta} \sum_{\substack{\phi \in \pi_2(\mathbf{x},\mathbf{y}) \\ \mu(\phi)=1}} \# \widehat{\mathcal{M}}(\phi) [\mathbf{y}, i-n_w(\phi), j-n_z(\phi)].
\end{equation}
Note that the $j$ coordinate of any generator is in the coset specified by $A_{Y,K}(\mathfrak{s})$.
There is a forgetful map $\Omega_z \colon\thinspace \operatorname{CFK}^\infty(\Sigma, \bm\alpha, \bm\beta, w, z, \mathfrak{s}) \to \operatorname{CFK}^\infty(\Sigma, \bm\alpha, \bm\beta, z, \mathfrak{s})$ given by $\Omega_z([\mathbf{x},i,j]) = [\mathbf{x},i]$. Under this forgetful map, one can also view the $j$ coordinate as a second filtration on $\operatorname{CFK}^\infty(\Sigma, \bm\alpha, \bm\beta, z, \mathfrak{s})$, which we call the \emph{Alexander filtration}. Define $A_{w,z}([\mathbf{x},i]) = A_{w,z}(\mathbf{x}) + i.$ By equipping the different flavors of $\operatorname{CF}^\circ$ with this filtration, the corresponding doubly-filtered chain complexes are denoted by $\operatorname{CFK}^-,\operatorname{CFK}^+, \widehat{\operatorname{CFK}}$ and $\operatorname{CFK}^t$. As with the Heegaard Floer complex, the filtered chain homotopy type of $\operatorname{CFK}^\circ$ is a topological invariant of the isotopy class of the knot $K\subset Y$, and at times we drop the Heegaard diagram information and simply write $\operatorname{CFK}^\circ(Y,K)$.
The difference between the two $\mathbb{Q}$--valued Maslov gradings is given by (see for example \cite{zemkegrading})
\[
\operatorname{\widetilde{gr}}_w(\mathbf{x}) - \operatorname{\widetilde{gr}}_z(\mathbf{x}) = 2A_{w,z}(\mathbf{x}).
\]
We stick to the Maslov grading specified by the basepoint of the Heegaard Floer complex throughout the paper, and drop the index when the context is clear.
For each $s\in A_{w,z}(\mathfrak{s}),$ there is a map
\[
\Psi^\infty_{\mathfrak{s},s}\colon\thinspace \operatorname{CFK}^\infty(\Sigma, \bm\alpha, \bm\beta, w, z, \mathfrak{s}) \longrightarrow \operatorname{CFK}^\infty(\Sigma, \bm\alpha, \bm\beta, w, z, \mathfrak{s}+ \PD[K])
\]
that, by the work of Juh\'asz--Thurston--Zemke in \cite{mappingclassheegaard} and the work of Zemke in \cite{graph}, is independent of all choices up to homotopy equivalence, for any knot in a rational homology sphere. By \cite[Lemma 2.16]{HL}, the map $\Psi^\infty_{\mathfrak{s}, s}$ is a filtered homotopy equivalence with respect to the $j$ filtration on the domain and the $i$ filtration on the range, in the sense that, for any $t \in A_{Y,K}(\mathfrak{s})$, $\Psi^\infty_{\mathfrak{s}, s}$ restricts to a homotopy equivalence from the $j \le t$ subcomplex of $\operatorname{CFK}^\infty(\Sigma, \bm\alpha, \bm\beta, w, z, \mathfrak{s})$ to the $i \le t-s$ subcomplex of $\operatorname{CFK}^\infty(\Sigma, \bm\alpha, \bm\beta, w, z, \mathfrak{s} + \PD[K])$.
It is enough to know the map $\Psi^\infty_{\mathfrak{s}, s}$ for a single value of $s$, since for the other values of $s$, the maps $\Psi^\infty_{\mathfrak{s}, s}$ are related by:
\[
\Psi^\infty_{\mathfrak{s}, s+1} = U \circ \Psi^\infty_{\mathfrak{s}, s} = \Psi^\infty_{\mathfrak{s}, s} \circ U.
\]
The pair $(\mathfrak{s},s)$ determines and is determined by a relative spin$^c$ structure $\xi \in \underline\Spin^c(Y,K),$ so we can denote the map by $\Psi^\infty_{\xi}.$ This is the ``flip map'' used in the mapping cone.
In general, the map $\Psi^\infty_{\xi}$ is difficult to determine from the definition. However, for any null-homologous knot in an $L$--space, by \cite[Lemma 2.18]{HL}, $\Psi^\infty_{\xi}$ is filtered homotopy equivalent to the reflection map that exchanges $i$ and $j$.
At the end of this section, we recall two technical lemmas from \cite{HL} for the reader's convenience. The first deals with the relation between the $\operatorname{CF}^t$ and $\operatorname{CF}^-$ complexes, and the second is a filtered version of the exact triangle detection lemma.
\begin{lemma}[Lemma 2.7 in \cite{HL}] \label{lemma: cft}
Let $\operatorname{CF}^-(Y_1)$ and $\operatorname{CF}^-(Y_2)$ be Heegaard Floer chain complexes of three-manifolds, each equipped with a second filtration. Suppose that for all $t\geq 0$, the complexes $\operatorname{CF}^t(Y_1)$ and $\operatorname{CF}^t(Y_2)$ are $\mathbb{F}[U]$--equivariantly doubly-filtered quasi-isomorphic. Then $\operatorname{CF}^-(Y_1)$ and $\operatorname{CF}^-(Y_2)$ are $\mathbb{F}[U]$--equivariantly doubly-filtered quasi-isomorphic.
\end{lemma}
This is stated and proved in \cite[Section 2]{HL}.
\begin{lemma} [Lemma 2.9 in \cite{HL}]\label{lemma: mapping-cone}
Let $(C_i, \partial_i)_{i \in \mathbb{Z}}$ be a family of filtered chain complexes (over any ring). Suppose we have filtered maps $f_i \colon\thinspace C_i \to C_{i+1}$ and $h_i \colon\thinspace C_i \to C_{i+2}$ so that:
\begin{enumerate}
\item $f_i$ is an anti-chain map, i.e., $f_i \circ \partial_i + \partial_{i+1} \circ f_i = 0$;
\item $h_i$ is a null-homotopy of $f_{i+1} \circ f_i$, i.e., $f_{i+1} \circ f_i + h_i \circ \partial_i + \partial_{i+2} \circ h_i = 0$;
\item $h_{i+1} \circ f_i + f_{i+2} \circ h_i$ is a filtered quasi-isomorphism from $C_i$ to $C_{i+3}$.
\end{enumerate}
Then the anti-chain map
\[
\begin{pmatrix} f_i \\ h_i \end{pmatrix}\colon\thinspace C_i \to \Cone(f_{i+1})
\]
is a filtered quasi-isomorphism (and hence a filtered homotopy equivalence when working over a field).
\end{lemma}
\begin{proof}
This follows from the proof of \cite[Lemma 4.2]{branched}, adding the key word ``filtered'' when necessary.
\end{proof}
\section{Alexander grading and surgery cobordisms} \label{sec: alex}
In this section we study the cobordisms involved in the surgery exact triangle, mostly through the lens of periodic domains. We then compute the Alexander gradings of holomorphic polygons. These computations play a crucial role in the proof of the filtered surgery exact triangle. We also examine the spin$^c$ structures on the cobordisms more carefully.
\subsection{Triply-periodic domains and relative periodic domains} \label{ssec: domains}
Assume $Y$ is a rational homology sphere and $K\subset Y$ is a knot of rational genus $g$ whose class has order $d>0$ in $H_1(Y;\mathbb{Z}).$ Let $\lambda$ be a nonzero framing for $K$, and let $W_\lambda$ denote the $2$-handle cobordism from $Y$ to $Y_\lambda.$
Ozsv\'ath and Szab\'o explained how a \emph{pointed Heegaard triple} $( \Sigma,\bm\alpha,\bm\beta,\bm\gamma,z)$ gives rise to the cobordism $W_\lambda$ in \cite[Section 2.2]{fourmanifold}. First, let the curve $\gamma_g$ be a $\lambda$--framed longitude that intersects $\beta_g$ at one point and is disjoint from all other $\beta$ curves, and let each $\gamma_i$ for $i= 1,\cdots,g-1 $ be a small pushoff of $\beta_i$, intersecting $\beta_i$ at two points. Such a pointed Heegaard triple specifies three three-manifolds: $Y_{\alpha\beta}=Y$, $Y_{\alpha \gamma}=Y_\lambda$ and $Y_{\beta \gamma}=\mathbin{\#}^{g-1}(S^1\times S^2)$. These will be the boundary components of the cobordism.
Let $U_\alpha$, $U_\beta$ and $U_\gamma$ denote the handlebodies corresponding to each set of attaching circles. Let $\Delta$ denote the two-simplex, with vertices $v_\alpha$, $v_\beta$, $v_\gamma$ labeled clockwise, and let $e_i$ denote the edge from $v_j$ to $v_k$, where $\{i, j, k\} = \{\alpha, \beta, \gamma\}$. Then, form the identification space
\[
X_{\alpha \beta \gamma}= \frac{(\Delta\times \Sigma)\sqcup (e_\alpha \times U_\alpha)\sqcup (e_\beta \times U_\beta) \sqcup (e_\gamma \times U_\gamma) }{ (e_\alpha \times \Sigma) \sim (e_\alpha \times \partial U_\alpha),\ (e_\beta \times \Sigma) \sim (e_\beta \times \partial U_\beta),\ (e_\gamma \times \Sigma) \sim (e_\gamma \times \partial U_\gamma)}.
\]
Following a standard topological argument, one can smooth out the corners to obtain a smooth, oriented, four-dimensional manifold, which we also call $X_{\alpha \beta \gamma}$. Under the natural orientation conventions
implicit in the above description, we have
\[
\partial X_{\alpha \beta \gamma} = -Y_{\alpha \beta} \sqcup -Y_{\beta \gamma} \sqcup Y_{\alpha \gamma}.
\]
One can use three-handles to kill the boundary component $Y_{\beta \gamma}$, which is a connected sum of copies of $S^1\times S^2$. As a result, $X_{\alpha \beta \gamma}$ represents the cobordism $W_\lambda.$ Ozsv\'ath and Szab\'o also defined a \emph{triply-periodic domain} to be a two-chain on $\Sigma,$ with multiplicity zero at the basepoint $z$ and whose boundary consists of multiples of the $\alpha$, $\beta$ and $\gamma$ curves. Triply-periodic domains represent homology classes in $H_2(X_{\alpha \beta \gamma} ;\mathbb{Z}).$ Ozsv\'ath and Szab\'o proved a formula for the evaluation of the first Chern class (see \cite[Section 6.1]{fourmanifold}) using triply-periodic domains and holomorphic triangles.
In order to obtain a periodic domain that represents the dual knot in the knot surgery and is suitable for certain homological computations, we make some modifications to the above constructions, as demonstrated in \cite[Section 4]{HL}. First, on $( \Sigma,\bm\alpha,\bm\beta,w,z)$, we now require $\beta_g$ to be a meridian of $K$ that intersects $\alpha_g$ at one point and is disjoint from all other $\alpha$ curves. Moreover, there is an arc $t_\alpha$ from $z$ to $w$ intersecting $\beta_g$ in a single point and disjoint from all other $\alpha$ and $\beta$ curves.
Orient the curves $\alpha_g, \beta_g, \gamma_g$ so that $\#(\alpha_g \cap \beta_g) = \#(\gamma_g \cap \beta_g) = \#(t_\alpha \cap \beta_g) = 1 $. Pick the orientation for $\alpha_i, \beta_i$ arbitrarily for $i\in \{1,\cdots,g-1 \}$, and orient $\gamma_i$ parallel to $\beta_i$.
We wind the $\gamma_g$ curve an extra $\max\{k,n\}$ times in the \emph{winding region}, a local region that contains all the basepoints; add a second basepoint $z_n$ to the left of the $n$-th winding; and then wind $\gamma_g$ back the same number of times to preserve the framing. The purpose of the winding is to ensure that
\begin{itemize}
\item every spin$^c$ structure is represented by a generator in the winding region;
\item the $w$ (or equivalently $z$) and $z_n$ basepoints represent the knot $K_{n,\lambda}.$
\end{itemize}
With these modifications in place, a \emph{relative periodic domain} is defined to be a two-chain on $\Sigma,$ with multiplicity zero at the basepoint $z$, and whose boundary consists of multiples of $\alpha, \beta$ curves and a longitude for the knot (specified by $w$ and $z$). More details are discussed in \cite{periodic}.
There is a triply-periodic domain $P_\gamma$ (see Figure \ref{fig: twistgreen}), with $n_z(P_\gamma) = 0$, $n_w(P_\gamma) = k$, and
\begin{equation}\label{eq: Pgamma-boundary}
\partial P_\gamma = -d\alpha_g -k \beta_g + d\gamma_g + \sum_{i=1}^{g-1} (a_i \alpha_i + b_i \beta_i)
\end{equation}
for some integers $a_i, b_i$. We add extra reference points $z_j$ for $j=1,\cdots, n-1$, where each $z_j$ lies in the region to the left of the $j$--th winding of the $\gamma_g$ curve, on the left side of $\beta_g.$
This periodic domain can be viewed as either:
\begin{enumerate}
\item a triply-periodic domain $(\Sigma,\bm\alpha,\bm\beta,\bm\gamma, z)$ representing the class of a capped-off Seifert surface in $H_2(W_\lambda)$ (note the multiplicity of $\gamma_g$);
\item a relative periodic domain $(\Sigma,\bm\alpha,\bm\beta,w,z)$ representing a $\lambda$--framed longitude of $K$ in $Y$;
\item a relative periodic domain $(\Sigma,\bm\alpha,\bm\gamma,w,z_1)$ representing the dual knot of $K$ in $Y_\lambda$ (which we will not use in this paper).
\end{enumerate}
As it stands, $P_\gamma$ is not a relative periodic domain $(\Sigma,\bm\alpha,\bm\gamma,w,z_n)$ for the knot $K_{n,\lambda}$ (since its boundary contains merely copies of the meridian), but we will demonstrate the modification necessary to achieve this in Section \ref{ssec: spinc}.
\begin{remark}
Note that our $P_\gamma$ is slightly different from the setup in \cite{HL}. First, their second basepoint is to the right of the $\beta_g$ curve, but in order to represent the $(n,1)$--cable of the left-handed meridian, we have to choose our basepoint $z_n$ to the left of $\beta_g$. Setting $n=1$, the arguments in this paper recover their results, despite the different choice of second basepoint.
Our winding is also chosen to the left of $\beta_g$, differing from the choice in \cite{HL}. Loosely speaking, this is in order to capture the information of the extra windings coming from $K_{n,\lambda}$.
\end{remark}
\begin{figure}
\labellist
\pinlabel $w$ at 295 70
\pinlabel $z$ at 255 70
\pinlabel $z_1$ at 231 140
\pinlabel $z_n$ at 151 140
\pinlabel $\Theta_{\beta\gamma}$ at 285 123
\pinlabel {{\color{red} $\alpha_g$}} [l] at 390 48
\pinlabel {{\color{blue} $\beta_g$}} [r] at 267 173
\pinlabel {{\color{OliveGreen} $\gamma_g$}} [l] at 388 120
{\tiny
\pinlabel $(n+1)d$ at 141 52
}
\pinlabel $k+d$ at 325 40
\pinlabel $k+d$ at 327 145
\pinlabel $d$ at 245 40
\pinlabel $\cdots$ at 195 40
\pinlabel $nd$ at 137 32
\pinlabel $nd$ at 141 102
\pinlabel $\cdots$ at 177 102
\pinlabel $d$ at 209 102
\pinlabel $0$ at 249 100
\pinlabel $k$ at 325 100
\endlabellist
\includegraphics{green1}
\caption{The winding region of diagram $P_{\gamma}$, where $n=3$.}
\label{fig: twistgreen}
\end{figure}
Similarly, one can define a relative periodic domain $P_{\delta}$ for large surgery.
For an integer $b$ such that $0\leq b \leq m-n$, let $\bm\delta^{m,b} = (\delta_1^{m,b}, \dots, \delta_g^{m,b})$ be a tuple of curves obtained from $\bm\gamma$ as follows. Let $\delta_g^{m,b}$ be obtained from a parallel pushoff of $\gamma_g$ by performing $m$ left-handed Dehn twists parallel to $\beta_g$, where $b$ (resp.\ $m-b$) of these twists are performed in the winding region on the same side of $\beta_g$ as $w$ (resp.\ $z$). For $i=1, \dots, g-1$, let $\delta_i^{m,b}$ simply be a small pushoff of $\gamma_i$ meeting it in two points. The pointed Heegaard triple $(\Sigma, \bm\alpha, \bm\beta, \bm\delta^{m,b}, z)$ represents the cobordism $W_{\lambda + m\mu}$ from $Y$ to $Y_{\lambda + m\mu}$. When $m$ and $b$ are understood from the context, we omit the superscripts from the $\delta$ curves.
The relative periodic domain $P_\delta$ shown in Figure \ref{fig: twistpurple} satisfies $n_z(P_\delta) = 0$, $n_w(P_\delta) = k+md$, $n_{z_j}(P_\delta) = jd$ for $j=1,\cdots, n$ and
\begin{align}
\label{eq: Pdelta-boundary}
\partial P_\delta &= -d\alpha_g -(k+md) \beta_g + d\delta_g + \sum_{i=1}^{g-1} (a_i \alpha_i + b_i \delta_i)
\end{align}
for some integers $a_i, b_i$.
This periodic domain can be viewed as either:
\begin{enumerate}
\item a triply-periodic domain $(\Sigma,\bm\alpha,\bm\beta,\bm\delta, z)$ representing the class of a capped-off Seifert surface in $H_2(W_{\lambda+m\mu})$ (note the multiplicity of $\delta_g$);
\item a relative periodic domain $(\Sigma,\bm\alpha,\bm\beta,w,z)$ representing a ${\lambda+m\mu}$--framed longitude of $K$ in $Y$;
\item a relative periodic domain $(\Sigma,\bm\alpha,\bm\delta,w,z_1)$ representing the dual knot of $K$ in $Y_{\lambda+m\mu}$ (which we will not use in this paper).
\end{enumerate}
As before, $P_\delta$ is not yet a relative periodic domain $(\Sigma,\bm\alpha,\bm\delta,w,z_n)$ for the knot $K_{n,\lambda+m\mu}$; the modification necessary to achieve this is likewise demonstrated in Section \ref{ssec: spinc}.
For $j=1, \dots, g-1$, there are small periodic domains $S_{\beta\gamma}^j$ and $S_{\gamma\delta}^j$ with $\partial S_{\beta\gamma}^j = \beta_j - \gamma_j$ and $\partial S_{\gamma\delta}^j = \gamma_j - \delta_j$, supported in a small neighborhood of each pair of curves. We will refer to these as \emph{thin domains}.
Relating the periodic domain $P_\gamma$ to $P_\delta$, there is a $(\beta,\gamma,\delta)$ triply-periodic domain $Q$ with
\[
\partial Q = m\beta_g + \gamma_g - \delta_g ,
\]
so that
\[
P_\gamma - P_\delta = dQ + \text{thin domains}.
\]
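At the level of boundaries, this relation can be verified directly from (\ref{eq: Pgamma-boundary}) and (\ref{eq: Pdelta-boundary}):
\[
\partial P_\gamma - \partial P_\delta = md\,\beta_g + d\gamma_g - d\delta_g + \sum_{i=1}^{g-1} b_i(\beta_i - \delta_i) = d\,\partial Q + \sum_{i=1}^{g-1} b_i\left(\partial S_{\beta\gamma}^i + \partial S_{\gamma\delta}^i\right),
\]
using $\partial S_{\beta\gamma}^i = \beta_i - \gamma_i$ and $\partial S_{\gamma\delta}^i = \gamma_i - \delta_i$. Since the thin domains are supported away from the basepoints, comparing multiplicities at $w$ also forces $n_w(Q) = \frac{1}{d}\left(n_w(P_\gamma) - n_w(P_\delta)\right) = \frac{k-(k+md)}{d} = -m$.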
Furthermore, between the basepoints $z$ and $z_n$, we add extra reference points $u_j$ for $j=1,\cdots, n$, where each $u_j$ lies in the region to the right of the $j$--th winding of $\gamma_g$, on the left side of $\beta_g$, as shown in Figure \ref{fig: Q}. These reference points are needed to compute the Alexander grading shifts in certain cobordisms. Note that for $j=1,\cdots, n$, the points $z_j$ and $u_j$ are separated only by the curve $\gamma_g$, and $z_{j-1}$ and $u_j$ are separated only by the curve $\delta_g$ (where we regard $z$ as $z_0$).
Finally, define the $(\alpha, \gamma, \delta)$ triply-periodic domain
\[
R = \tfrac{m}{\nu} P_\gamma + \tfrac{k}{\nu} Q
\]
where $\nu = \gcd(k,m)$; it has
\[
\partial R = -\frac{md}{\nu} \alpha_g + \frac{k+md}{\nu}\gamma_g - \frac{k}{\nu} \delta_g + \frac{m}{\nu} \sum_{i=1}^{g-1} (a_i \alpha_i + b_i \gamma_i).
\]
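The displayed boundary follows from the definition of $R$ together with (\ref{eq: Pgamma-boundary}) and $\partial Q = m\beta_g + \gamma_g - \delta_g$: the $\beta_g$ terms cancel, since
\[
\tfrac{m}{\nu}\,\partial P_\gamma + \tfrac{k}{\nu}\,\partial Q = -\tfrac{md}{\nu}\alpha_g + \left(-\tfrac{mk}{\nu} + \tfrac{km}{\nu}\right)\beta_g + \tfrac{k+md}{\nu}\gamma_g - \tfrac{k}{\nu}\delta_g + \tfrac{m}{\nu}\sum_{i=1}^{g-1}(a_i \alpha_i + b_i \beta_i),
\]
where the $\beta_i$ in the last sum are traded for $\gamma_i$ using the thin domains $S_{\beta\gamma}^i$.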
\begin{figure}
\labellist
\pinlabel $w$ at 210 70
\pinlabel $z$ at 170 70
\pinlabel $z_n$ at 66 120
\pinlabel {{\color{red} $\alpha_g$}} [l] at 345 35
\pinlabel {{\color{blue} $\beta_g$}} [r] at 178 160
\pinlabel {{\color{purple} $\delta_g$}} [l] at 345 70
\tiny
\pinlabel $\Theta_{\delta\beta}$ [br] at 204 80
\pinlabel $k+md$ at 320 70
\pinlabel $-bd$ at 320 60
\pinlabel $(m-b+1)d$ at 30 130
\pinlabel $k+md$ at 200 120
\pinlabel $+d$ at 200 110
\pinlabel $k+md$ at 234 120
\pinlabel $k+md$ at 266 120
\pinlabel $-d$ at 266 110
\pinlabel $k+md$ at 310 120
\pinlabel $-(b-1)d$ at 310 110
\pinlabel $k+md$ at 210 35
\pinlabel $+d$ at 210 25
\pinlabel $k+md$ at 250 35
\pinlabel $...$ at 280 35
\pinlabel $k+md$ at 317 35
\pinlabel $-(b-1)d$ at 317 25
\pinlabel $nd$ at 55 72
\pinlabel $nd$ at 85 32
\pinlabel $...$ at 90 72
\pinlabel $...$ at 120 32
\pinlabel $d$ at 150 120
\pinlabel $d$ at 155 32
\pinlabel $0$ at 155 62
\endlabellist
\includegraphics{purple1}
\caption{The winding region of diagram $P_{\delta}$, in the case where $m=6$ and $b=3$.}
\label{fig: twistpurple}
\end{figure}
\begin{figure}
\labellist
\pinlabel $w$ at 285 70
\pinlabel $z$ at 260 70
{\scriptsize
\pinlabel $z_3$ at 161 145
\pinlabel $u_3$ at 178 145
\pinlabel $z_2$ at 194 145
\pinlabel $u_2$ at 209 145
\pinlabel $z_1$ at 241 145
\pinlabel $u_1$ at 255 120
}
\pinlabel {{\color{red} $\alpha_g$}} [l] at 390 48
\pinlabel {{\color{purple} $\delta_g$}} [l] at 389 74
\pinlabel {{\color{blue} $\beta_g$}} [r] at 267 173
\pinlabel {{\color{OliveGreen} $\gamma_g$}} [l] at 388 120
{\tiny
\pinlabel $\Theta_{\delta\beta}$ at 265 93
\pinlabel $\Theta_{\gamma\delta}$ at 303 136
\pinlabel $\Theta_{\beta\gamma}$ at 263 136
\pinlabel $0$ at 213 80
\pinlabel $-1$ at 196 80
\pinlabel $0$ at 182 80
\pinlabel $-1$ at 165 80
\pinlabel $0$ at 140 72
\pinlabel $-1$ at 109 72
\pinlabel $-2$ at 80 72
\pinlabel $-3$ at 45 72
\pinlabel $-m-1$ at 285 116
\pinlabel $-m$ at 300 100
\pinlabel $-m+1$ at 320 88
\pinlabel $-m+2$ at 360 100
\pinlabel $-m+3$ at 370 72
\pinlabel $-m$ at 285 145
\pinlabel $\cdots$ at 324 145
\pinlabel $-m+3$ at 366 145
\pinlabel $-1$ at 135 116
\pinlabel $-2$ at 100 116
\pinlabel $-3$ at 71 116
\pinlabel $-4$ at 41 115
\pinlabel $0$ at 246 73
\pinlabel $0$ at 245 40
\pinlabel $-1$ at 220 40
\pinlabel $0$ at 203 40
\pinlabel $\cdots$ at 182 40
\pinlabel $-1$ at 249 105
}
\endlabellist
\includegraphics{Q}
\caption{The relative periodic domain $Q$, where $m=6$ and $n=3$. Here $\alpha_g$ has multiplicity zero in $\partial Q$.}
\label{fig: Q}
\end{figure}
The multiplicities of the periodic domains at the various basepoints are as follows:
\begin{center}
\begin{tabular} {|c|c|c|c|} \hline
& $n_z$ & $n_w$ & $n_{z_n}$ \\ \hline
$P_\gamma$ & $0$ & $k$ & $nd$ \\
$P_\delta$ & $0$ & $k+md$ & $nd$ \\
$Q$ & $0$ & $-m$ & $0$ \\
$R$ & $0$ & $0$ & $\frac{nmd}{\nu}$ \\ \hline
\end{tabular}
\end{center}
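As a consistency check, the row for $R$ follows from those for $P_\gamma$ and $Q$ via the definition $R = \tfrac{m}{\nu} P_\gamma + \tfrac{k}{\nu} Q$:
\begin{align*}
n_w(R) &= \tfrac{m}{\nu}\, n_w(P_\gamma) + \tfrac{k}{\nu}\, n_w(Q) = \tfrac{mk}{\nu} - \tfrac{km}{\nu} = 0, \\
n_{z_n}(R) &= \tfrac{m}{\nu}\, n_{z_n}(P_\gamma) + \tfrac{k}{\nu}\, n_{z_n}(Q) = \tfrac{m}{\nu}\cdot nd + 0 = \tfrac{nmd}{\nu}.
\end{align*}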
\subsection{Topology of the cobordisms} \label{ssec: cob}
Consider the topology of the cobordisms related to the Heegaard diagram $(\Sigma,
\bm\alpha, \bm\beta, \bm\gamma, \bm\delta)$.
According to the construction demonstrated in \cite[Section 8.1.5]{OSclosed}, we have three separate $4$-manifolds $X_{\alpha\beta\gamma\delta}$, $X_{\alpha\gamma\delta\beta}$, and $X_{\alpha\delta\beta\gamma}$, with:
\begin{align*}
\partial X_{\alpha\beta\gamma\delta} &= {-Y_{\alpha\beta}} \sqcup {-Y_{\beta\gamma}} \sqcup
{-Y_{\gamma\delta}} \sqcup {Y_{\alpha\delta}} \\
\partial X_{\alpha\gamma\delta\beta} &= {-Y_{\alpha\gamma}} \sqcup {-Y_{\gamma\delta}} \sqcup {-Y_{\delta\beta}} \sqcup Y_{\alpha\beta} \\
\partial X_{\alpha\delta\beta\gamma} &= {-Y_{\alpha\delta}} \sqcup {-Y_{\delta\beta}} \sqcup {-Y_{\beta\gamma}} \sqcup {Y_{\alpha\gamma}}
\end{align*}
These $4$-manifolds each admit a pair of decompositions as follows:
\begin{align}
\label{eq: Xabgd-decomp}
X_{\alpha\beta\gamma\delta} &= X_{\alpha\beta\gamma} \cup_{Y_{\alpha\gamma}} X_{\alpha\gamma\delta} = X_{\alpha\beta\delta} \cup_{Y_{\beta\delta}} X_{\beta\gamma\delta} \\
\label{eq: Xagdb-decomp}
X_{\alpha\gamma\delta\beta} &= X_{\alpha\gamma\delta} \cup_{Y_{\alpha\delta}} X_{\alpha\delta\beta} =
X_{\alpha\gamma\beta} \cup_{Y_{\gamma\beta}} X_{\gamma\delta\beta} \\
\label{eq: Xadbg-decomp}
X_{\alpha\delta\beta\gamma} &= X_{\alpha\delta\beta} \cup_{Y_{\alpha\beta}} X_{\alpha\beta\gamma} =
X_{\alpha\delta\gamma} \cup_{Y_{\delta\gamma}} X_{\delta\beta\gamma},
\end{align}
where the $3$-manifolds in the above notation are precisely:
\begin{align*}
Y_{\alpha\beta} &= Y & Y_{\alpha\gamma}&= Y_\lambda(K) & Y_{\alpha\delta} &= Y_{\lambda+m\mu}(K) \\
Y_{\beta\gamma} &= \mathbin{\#}^{g-1}(S^1 \times S^2) & Y_{\gamma\delta} &= L(m,1) \mathbin{\#}^{g-1}(S^1 \times S^2) & Y_{\delta\beta} &= \mathbin{\#}^{g-1}(S^1 \times S^2) \\
Y_{\gamma\beta} &= -Y_{\beta\gamma} & Y_{\delta\gamma} &= -Y_{\gamma\delta} & Y_{\beta\delta} &= -Y_{\delta\beta}
\end{align*}
Note also that $X_{\alpha\gamma\beta} = -X_{\alpha\beta\gamma}$, and so on.
If we let $\bar X_{\alpha\beta\gamma}$, $\bar X_{\alpha\beta\gamma\delta}$, etc. denote the manifolds obtained by attaching $3$-handles to kill the $S^1 \times S^2$ summands in $Y_{\beta\gamma}$, $Y_{\gamma\delta}$, and $Y_{\delta\beta}$, then we have analogues of \eqref{eq: Xabgd-decomp}, \eqref{eq: Xagdb-decomp}, and \eqref{eq: Xadbg-decomp} for these manifolds as well.
The periodic domains $\{P_\gamma, P_\delta, Q, R\}$ represent homology classes which survive in $H_2(\bar X_{\alpha\beta\gamma\delta})$ and the following relations hold:
\begin{align*}
[P_\delta] &= [P_\gamma] - d[Q], & [R] &= \tfrac{m}{\nu} [P_\gamma] + \tfrac{k}{\nu} [Q].
\end{align*}
Hence, we may also write $[R] = \tfrac{m}{\nu}[P_\delta] + \tfrac{k+md}{\nu}[Q]$. The same relations are also satisfied in $H_2(\bar X_{\alpha\gamma\delta\beta})$ and $H_2(\bar X_{\alpha\delta\beta\gamma})$.
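Explicitly, the second expression for $[R]$ follows by substituting $[P_\gamma] = [P_\delta] + d[Q]$ into the first:
\[
[R] = \tfrac{m}{\nu}\left([P_\delta] + d[Q]\right) + \tfrac{k}{\nu}[Q] = \tfrac{m}{\nu}[P_\delta] + \tfrac{k+md}{\nu}[Q].
\]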
We can obtain the cobordism $W_\lambda$ from $\bar X_{\alpha\beta\gamma}$ simply by gluing a $4$-handle to kill the $S^3$ boundary component left over from $Y_{\beta\gamma}$. Let $\mathfrak{s}^0_{\beta\gamma}$ denote the unique torsion spin$^c$ structure on $Y_{\beta\gamma}$. Let $\Theta_{\beta\gamma}$ and $\Theta_{\gamma\beta}$ denote the standard top-dimensional generators for $\operatorname{CF}^{\le0}(\Sigma, \bm\beta, \bm\gamma, z)$ and $\operatorname{CF}^{\le0}(\Sigma, \bm\gamma, \bm\beta, z)$, both of which use the unique intersection point in $\beta_g \cap \gamma_g$ shown in Figure \ref{fig: twistgreen}.
Similarly, one can close off $\bar X_{\alpha\delta\beta}$ by attaching a $4$-handle to the remaining $S^3$ boundary component of $Y_{\delta\beta}$. This gives the cobordism $W'_{\lambda+m\mu}$, which is $W_{\lambda+m\mu}$ with the orientation reversed, viewed as a cobordism from $Y_{\lambda+m\mu}(K)$ to $Y$. Define $\mathfrak{s}^0_{\beta\delta}$, $\Theta_{\beta\delta}$, and $\Theta_{\delta\beta}$ analogously. Both $\Theta_{\beta\delta}$ and $\Theta_{\delta\beta}$ use the unique intersection point in $\beta_g \cap \delta_g$ shown in Figure \ref{fig: twistpurple}.
Next, let $W_{\gamma\beta\delta}$ denote the $4$-manifold obtained from $\bar X_{\gamma\beta\delta}$ by deleting a neighbourhood of an arc connecting $Y_{\beta\gamma}$ and $Y_{\beta\delta}$ (both of which are $\mathbin{\#}^{g-1}S^1\times S^2$). If we let $B_m$ be the Euler number $m$ disk bundle over $S^2$, whose boundary is $L(m,1)$, then $W_{\gamma\beta\delta}$ is diffeomorphic to $((\#^{g-1} S^1 \times S^2) \times I) \mathbin{\natural} B_m$, and $Q$ corresponds to the homology class of the zero section of $B_m$.
Following \cite[Definition 6.3]{rational}, define $\mathfrak{s}^0_{\gamma\delta}$ to be the unique spin$^c$ structure on $Y_{\gamma\delta}$ that is torsion and has an extension $\mathfrak{t}$ to $W_{\gamma\beta\delta}$ satisfying $\gen{c_1(\mathfrak{t}), [S^2]} = \pm m$. Pair the $m$ intersection points of $\gamma_g \cap \delta_g$ with the top-dimensional intersection points of $\gamma_j \cap \delta_j$ ($j=1, \dots, g-1$) to obtain $m$ canonical cycles in $\operatorname{CF}^{\le 0}(\Sigma, \bm\gamma, \bm\delta, z)$, each of which represents a different torsion spin$^c$ structure on $Y_{\gamma\delta}$. Let $\Theta_{\gamma\delta}$ be the generator which uses the point of $\gamma_g \cap \delta_g$ that is adjacent to $w$, as shown in Figure \ref{fig: Q}.
The following is proved by Hedden and Levine:
\begin{lemma}[Lemma 5.2 in \cite{HL}] \label{lemma: Theta-gamma-delta}
The generator $\Theta_{\gamma\delta}$ represents $\mathfrak{s}^0_{\gamma\delta}$.
\end{lemma}
Moreover, there is a class $\tau^+_0 \in \pi_2(\Theta_{\beta\gamma}, \Theta_{\gamma\delta}, \Theta_{\beta\delta})$ whose domain intersects the winding region in the small triangle above $w$ in Figure \ref{fig: Q}. A computation similar to the one in \cite{HL} yields
\[
\gen{c_1(\mathfrak{s}_z(\tau_0^+)), [Q]} = -m.
\]
Let $X_*$ be any of the $4$-manifolds defined above (with either three or four subscripts). For the rest of the paper, we will use $\Spin^c_0(X_*)$ to denote the set of spin$^c$ structures that restrict to $\mathfrak{s}^0_{\beta\gamma}$ on $Y_{\beta\gamma}$, to $\mathfrak{s}^0_{\gamma\delta}$ on $Y_{\gamma\delta}$, and to $\mathfrak{s}^0_{\delta\beta}$ on $Y_{\delta\beta}$, whichever of these boundary components are present. Note that all such spin$^c$ structures extend uniquely to $\bar X_*$.
To finish this subsection, we examine the intersection forms on the various $4$-manifolds $X_*$. In $X_{\alpha\beta\gamma\delta}$, the classes $[P_\gamma]$, $[P_\delta]$, $[Q]$, and $[R]$ can be represented by surfaces contained in $X_{\alpha\beta\gamma}$, $X_{\alpha\beta\delta}$, $X_{\beta\gamma\delta}$, and $X_{\alpha\gamma\delta}$, respectively. In particular, the pair $[P_\gamma]$ and $[R]$ can be represented by disjoint surfaces in $X_{\alpha\beta\gamma\delta}$, and the same holds for the pair $[P_\delta]$ and $[Q]$. Thus
\begin{equation} \label{eq: abgd-int-form}
[P_\gamma] \cdot [R] = [P_\delta] \cdot [Q] = 0.
\end{equation}
With the orientations given by the cobordisms, the remaining pairings are
\begin{equation} \label{eq: abgd-PD-self-int}
\begin{aligned}
[P_\gamma]^2 &= dk & [P_\delta]^2 &= d(k+md) \\ [Q]^2 &=-m & [R]^2 &= -\frac{m k (k+md)}{\nu^2} .
\end{aligned}
\end{equation}
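As a consistency check, the self-intersection of $[R]$ can be recovered from the other pairings: expanding $[P_\gamma] \cdot [R] = 0$ using $[R] = \tfrac{m}{\nu}[P_\gamma] + \tfrac{k}{\nu}[Q]$ gives $[P_\gamma] \cdot [Q] = -md$, and therefore
\[
[R]^2 = \tfrac{m^2}{\nu^2}[P_\gamma]^2 + \tfrac{2mk}{\nu^2}\,[P_\gamma]\cdot[Q] + \tfrac{k^2}{\nu^2}[Q]^2 = \frac{m^2 dk - 2m^2 dk - mk^2}{\nu^2} = -\frac{mk(k+md)}{\nu^2},
\]
in agreement with \eqref{eq: abgd-PD-self-int}.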
In $X_{\alpha\gamma\delta\beta}$, the same classes have different self-intersection numbers due to the change of orientation, and different pairs are disjoint, according to the decomposition given by (\ref{eq: Xagdb-decomp}). In this case,
\begin{gather*}
[R] \cdot [P_\delta] = [P_\gamma]\cdot [Q] = 0\\
\begin{aligned}
[P_\gamma]^2 &= -dk & [P_\delta]^2 &= -d(k+md) \\
[Q]^2 &=-m & [R]^2 &= -\frac{m k (k+md)}{\nu^2}.
\end{aligned}
\end{gather*}
The sign of the self-intersection $[P_\gamma]^2$ (resp.~$[P_\delta]^2$) is reversed because it is contained in $X_{\alpha\gamma\beta}$ (resp.~$X_{\alpha\delta\beta}$), which is diffeomorphic to $-X_{\alpha\beta\gamma}$ (resp.~$-X_{\alpha\beta\delta}$). This reversal of orientation is equivalent to turning the cobordism $W_\lambda$ (resp.~$W_{\lambda+m\mu}$) around; denote the resulting cobordism by $W'_\lambda$ (resp.~$W'_{\lambda+m\mu}$). Parallel results hold for $X_{\alpha\delta\beta\gamma}$ as well.
\begin{figure}[!ht]
\subfloat[]{
\labellist
\tiny
\pinlabel $nd$ at 130 120
\pinlabel $2nd$ at 100 120
\pinlabel $n(k+md$ at 185 130
\pinlabel $+d)$ at 185 115
\pinlabel $nd$ at 155 30
\pinlabel $2nd$ at 118 30
\pinlabel $...$ at 78 30
\pinlabel $0$ at 145 50
\pinlabel $n(k+md)$ at 195 67
\endlabellist
\includegraphics[width=0.5\textwidth]{purple2}
\label{subfig: purplea}
}
\subfloat[]{
\labellist
\tiny
\pinlabel {{\color{purple} $k+md$}} at 100 30
\pinlabel {{\color{purple} $...$}} at 145 30
\pinlabel {{\color{purple} $k+md$}} at 76 120
\pinlabel {{\color{purple} $2(k$}} at 120 120
\pinlabel {{\color{purple} $+md)$}} at 118 100
\endlabellist
\includegraphics[width=0.5\textwidth]{purple3}
\label{subfig: purpleb}
}
\subfloat[]{
\labellist
\pinlabel $z$ at 150 70
\pinlabel $w$ at 205 70
\pinlabel $z_n$ at 52 135
\tiny
\pinlabel $-n(k+md)$ at 17 135
\pinlabel $+(n+1)nd$ at 18 125
\pinlabel $-n(k+md)$ at 48 105
\pinlabel $+n^2d$ at 48 95
\pinlabel $-(n-1)(k+md)$ at 44 53
\pinlabel $n^2d$ at 56 63
\pinlabel $...$ at 78 105
\pinlabel $...$ at 100 105
\pinlabel $nd$ at 170 120
\pinlabel $-(k+md)$ at 125 120
\pinlabel $+nd$ at 117 110
\pinlabel $0$ at 225 120
\pinlabel $-nd$ at 258 120
\pinlabel $-2nd$ at 304 120
\pinlabel $k+md$ at 175 52
\pinlabel $k+md-nd$ at 245 52
\pinlabel $...$ at 120 32
\pinlabel $...$ at 137 32
\pinlabel $...$ at 282 52
\pinlabel $nd$ at 185 32
\pinlabel $0$ at 240 32
\pinlabel $-nd$ at 272 32
\pinlabel $-2nd$ at 304 32
\endlabellist
\includegraphics{purple4}
\label{subfig: purplec}
}
\caption{The construction of the relative periodic domain $P_{n,\delta}$, in the case where $m=6, b=3$ and $n=3$.}
\label{fig: purple}
\end{figure}
\subsection{Polygons, spin$^c$ structures, and Alexander gradings} \label{ssec: spinc}
In this section, we compute the Alexander grading shifts and the first Chern class evaluations associated to holomorphic triangles and rectangles. We will make use of these computations in both the proof of the large surgery formula and the proof of the surgery exact triangle.
Following the convention from \cite{HL}, throughout the rest of the paper we will generally refer to elements of $\mathbb{T}_\alpha \cap \mathbb{T}_\beta$ as $\mathbf{x}$ or $\mathbf{y}$, elements of $\mathbb{T}_\alpha \cap \mathbb{T}_\gamma$ as $\mathbf{q}$ or $\mathbf{r}$, and elements of $\mathbb{T}_\alpha \cap \mathbb{T}_\delta$ as $\mathbf{a}$ or $\mathbf{b}$. We also introduce the following notational shorthands:
\begin{align*}
A(\mathbf{x}) &= A_{w,z}(\mathbf{x}) & A(\mathbf{q}) &= A_{w,z_n}(\mathbf{q}) & A(\mathbf{a}) &= A_{w,z_n}(\mathbf{a}) \\
\tilde A(\mathbf{x}) &= d A_{w,z}(\mathbf{x}) & \tilde A(\mathbf{q}) &= k A_{w,z_n}(\mathbf{q}) & \tilde A(\mathbf{a}) &= (k+md) A_{w,z_n}(\mathbf{a}).
\end{align*}
Although the notation does not make it explicit, we remind the reader that $A(\mathbf{q})$, $\tilde A(\mathbf{q})$, $A(\mathbf{a})$, and $\tilde A(\mathbf{a})$ depend on $n$ through the basepoint $z_n$.
The following Alexander grading formula, expressed in terms of relative periodic domains and proved in \cite[Section 2.3]{HL}, will be our main tool, together with the first Chern class formula from \cite{fourmanifold}.
\begin{proposition}[Proposition 1.3 in \cite{HL}] \label{prop: abs-alex} Let $(\Sigma, \alpha,\beta,w,z)$ be a doubly-pointed Heegaard diagram for a knot $(Y,K)$ representing a class in $H_1(Y)$ of order $d$, and let $P$ be a relative periodic domain specifying a homology class $[P]\in H_2(Y,K)$. Then
the absolute Alexander grading of a generator $\mathbf{x} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta$ is given by
\begin{align}
\label{eq: abs-alex} A_{w,z}(\mathbf{x}) &= \frac{1}{2d} \left( \hat\chi(P) + 2n_\mathbf{x}(P) - n_{\bar z}(P) - n_{\bar w}(P) \right)
\end{align}
where $\hat\chi(P)$ is the Euler measure of $P$, $n_\mathbf{x}(P)$ denotes the sum of the average of the four local multiplicities of $P$ in the regions abutting $x_j$ for all the $x_j \in \mathbf{x}$, and $n_{\bar w}(P)$ (resp.~$n_{\bar z}(P)$) denotes the average of the multiplicities of $P$ on either side of the longitude at $w$ (resp.~$z$).
\end{proposition}
\subsubsection{Modified relative periodic domains.}
A key ingredient of our argument is a relative periodic domain representing the $(n,1)$--cable of the meridian. We now describe its construction.
Starting with $n$ copies of $P_\delta$, isotope the $\beta_g$ curve as in Figure \ref{subfig: purplea}. First, add rectangular strips with multiplicities $(n-1)(k+md),\cdots,2(k+md),k+md$, from right to left, as shown in Figure \ref{subfig: purpleb}. At this stage we already have a periodic domain whose boundary, after gluing in the disks along the $\alpha$ and $\delta$ attaching curves, is identified with $k+md$ copies of the $(n,1)$--cable of the meridian, as required. The only problems are that the new blue curve is immersed and that the multiplicity at the $z$ basepoint is not $0$. Next, add the shaded rectangular strip with multiplicity $k+md$ depicted in Figure \ref{subfig: purplec}: this is equivalent to sweeping the horizontal portion of the blue curve across the disk attached along $\alpha_g$, thus replacing this portion of the blue curve with the remaining boundary of the disk. Finally, adjust the multiplicity by adding $-n(k+md)\Sigma$ to the domain, resulting in the relative periodic domain in Figure \ref{subfig: purplec}; denote it by $P_{n,\delta}$.
The two-chain $P_{n,\delta}$ is a relative periodic domain in $(\Sigma,\bm\alpha,\bm\delta,w,z_n)$ for the knot $K_{n,\lambda+m\mu}$ in $Y_{\lambda+m\mu}(K)$. In $P_{n,\delta}$, the $w$ and $z$ basepoints are interchangeable, $n_z(P_{n,\delta}) = n_w(P_{n,\delta}) = 0$, $n_{z_n}(P_{n,\delta}) = -n(k+md) + n^2 d$, and
\[
\partial P_{n,\delta} = (k+md-nd)\alpha_g - (k + md) \beta_{n,g} + nd\,\delta_g + \sum_{i=1}^{g-1} (na_i \alpha_i + nb_i \delta_i)
\]
for some integers $a_i, b_i$.
We have $[P_{n,\delta}]=n[P_\delta] \in H_2(W_{\lambda+m\mu})$, since attaching each rectangular strip amounts to a homotopy.
\begin{proposition}
The Euler measures satisfy $\hat\chi(P_{n,\delta})=n\hat\chi(P_{\delta}).$
\end{proposition}
\begin{proof}
According to \cite[Lemma 6.2]{fourmanifold}, for a triply-periodic domain $P=\sum_i n_i \mathcal{D}_i,$ the Euler measure can be calculated by
\[
\hat\chi(P) = \sum_i n_i \left( \chi(\text{int}\,\mathcal{D}_i) - \frac{1}{4}\,\#\{\text{corner points of }\mathcal{D}_i\} \right).
\]
From the formula we see that isotopy, adding rectangular domains, and adding copies of $\Sigma$ do not change the Euler measure. Thus $\hat\chi(P_{n,\delta})=n\hat\chi(P_{\delta}).$
\end{proof}
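For example, each rectangular strip used in the construction of $P_{n,\delta}$ has contractible interior and four corner points, so its Euler measure is
\[
\chi(\text{int}\,\mathcal{D}) - \tfrac{1}{4}\,\#\{\text{corner points of }\mathcal{D}\} = 1 - \tfrac{4}{4} = 0,
\]
so attaching such strips indeed leaves the Euler measure unchanged.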
\subsubsection{Alexander grading shifts on holomorphic triangles.} \label{sssec: triangle}
We study the intersection points in $(\Sigma, \bm\alpha, \bm\beta, \bm\delta^{m,b}, w, z, z_n)$ more carefully (see Figure \ref{fig: twistpurple}) before moving on to the computation.
The curves $\alpha_g$ and $\delta_g^{m,b}$ intersect $m$ times in the winding region; we label the intersection points as follows: following the orientation of $\delta_g^{m,b}$, let $p_{b-m}, \dots, p_{-1}$ denote the $m-b$ points to the left of $\beta_g$, with $p_{-1}$ being the closest, and $p_{0}, \dots, p_{b-1}$ the $b$ points to the right of $\beta_g$, with $p_{0}$ being the closest. Let $q$ be the unique intersection point of $\alpha_g$ and $\beta_g$. For $\mathbf{x} \in \mathbb{T}_\alpha\cap \mathbb{T}_\beta$ and $l \in \{b-m, \dots, b-1\}$, we define $\mathbf{x}_l^{m,b} \in \mathbb{T}_\alpha \cap\mathbb{T}_{\delta^{m,b}}$ to be the point obtained by replacing $q$ with $p_l$ and taking ``nearest points'' in the thin domains. There is a \emph{small triangle} $\psi_{\mathbf{x},l}^{m,b} \in \pi_2(\mathbf{x}_l^{m,b}, \Theta_{\delta\beta}, \mathbf{x})$ in the winding region satisfying
\begin{align}\label{eq: smalltriangle}
(n_w(\psi_{\mathbf{x},l}),n_z(\psi_{\mathbf{x},l}),n_{z_n}(\psi_{\mathbf{x},l}))=&
\begin{cases}
(l,0,0) & l\geq 0 \\
(0,-l,0) & -n<l<0 \\
(0,-l,-l-n) & l\leq -n.
\end{cases}
\end{align}
\begin{figure}
\labellist
\pinlabel $w$ at 275 95
\pinlabel $z$ at 245 95
{\scriptsize
\pinlabel $z_n$ at 163 145
}
\pinlabel {{\color{red} $\alpha_g$}} [l] at 390 48
\pinlabel {{\color{blue} $\beta_{n,g}$}} [l] at 389 74
\pinlabel {{\color{OliveGreen} $\gamma_g$}} [l] at 388 120
\pinlabel $0$ at 325 105
\pinlabel $k$ at 325 69
\pinlabel $nd$ at 320 40
\pinlabel $nd$ at 304 145
{\tiny
\pinlabel $-k$ at 196 80
\pinlabel $nd$ at 196 90
\pinlabel $-k$ at 184 105
\pinlabel $2nd$ at 188 115
\pinlabel $2nd$ at 167 90
\pinlabel $-2k$ at 165 80
\pinlabel $n(-k$ at 140 75
\pinlabel $+nd)$ at 140 65
\pinlabel $-nk$ at 131 135
\pinlabel $(n+1)nd$ at 131 145
\pinlabel $-nk$ at 100 106
\pinlabel $n^2d$ at 100 116
\pinlabel $-k$ at 223 35
\pinlabel $nd$ at 225 45
\pinlabel $-k$ at 203 35
\pinlabel $2nd$ at 205 45
\pinlabel $-2k$ at 185 35
\pinlabel $2nd$ at 188 45
\pinlabel $\cdots$ at 172 40
}
\endlabellist
\includegraphics{green2}
\caption{The winding region of the relative periodic domain $P_{n,\gamma}$, where $n=3$.}
\label{fig: green2}
\end{figure}
The following computational result is one of our main ingredients. Compare \cite[Proposition 5.5]{HL}.
\begin{proposition} \label{prop: alex-triangle}
Let $\mathbf{x} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta$, $\mathbf{q} \in \mathbb{T}_\alpha \cap \mathbb{T}_\gamma$, and $\mathbf{a} \in \mathbb{T}_\alpha \cap \mathbb{T}_\delta$. Assume also that $\mathbf{a}$ uses $p_l$ in the winding region, for some $l=b-m,\cdots,b-1$.
\begin{enumerate}
\item
For any $\psi \in \pi_2(\mathbf{x}, \Theta_{\beta\gamma}, \mathbf{q})$,
\begin{align} \label{eq: abg-alex}
n\tilde A(\mathbf{x}) - \tilde A(\mathbf{q}) &= -nd n_w(\psi) - (k - nd) n_z(\psi) + k n_{z_n}(\psi) -\frac{nk-n^2 d}{2} \\
\label{eq: abg-c1-x} \gen{c_1(\mathfrak{s}_z(\psi)), [P_\gamma]}
&= 2 \tilde A(\mathbf{x}) + 2d n_w(\psi) - 2d n_z(\psi) + k \\
\label{eq: abg-c1-q}\gen{c_1(\mathfrak{s}_z(\psi)), [P_{n,\gamma}]}
&= 2 \tilde A(\mathbf{q}) + 2k n_{z_n}(\psi) - 2k n_w(\psi) + n^2 d
\end{align}
\item
For any $\psi \in \pi_2(\mathbf{q}, \Theta_{\gamma\delta}, \mathbf{a})$,
\begin{align}
\label{eq: agd-alex}
\tilde A(\mathbf{q}) - \tilde A(\mathbf{a}) &= -(k+ md) n_z(\psi) + (k+ md) n_{z_n}(\psi) - md \sum_{j=1}^n \Big( n_{z_j}(\psi) - n_{u_j}(\psi) \Big) - \frac{nmd}{2} \\
\nonumber \langle c_1(\mathfrak{s}_z(\psi)), n[R]\rangle &= \frac{m}{\nu} \Bigg( 2 \tilde A(\mathbf{a}) + 2(k+md)\Big(n_{z_n}(\psi) - n_z(\psi) - \sum_{j=1}^n \left( n_{z_j}(\psi) - n_{u_j}(\psi) \right)\Big) \\
\label{eq: agd-c1-a} & - n(k + md) + n^2 d \Bigg) \\
\label{eq: agd-c1-q}
&= \frac{m}{\nu} \left( 2 \tilde A(\mathbf{q}) + 2k \sum_{j=1}^n \Big( n_{z_j}(\psi) - n_{u_j}(\psi) \Big) - nk + n^2 d \right)
\end{align}
\item
For any $\psi \in \pi_2(\mathbf{a}, \Theta_{\delta\beta}, \mathbf{x})$,
\begin{equation} \label{eq: adb-alex}
\tilde A(\mathbf{a}) - n\tilde A(\mathbf{x}) = (k+md) n_{z_n}(\psi) - nd n_w(\psi) - (k+md - nd) n_z(\psi) + \frac{n(k+md) - n^2 d}{2}
\end{equation}
\begin{align}
\label{eq: adb-c1-a}
\gen{c_1(\mathfrak{s}_z(\psi)), [P_{n,\delta}]}
&= 2\tilde A(\mathbf{a}) + 2(k+md)n_w(\psi) - 2(k+md) n_{z_n}(\psi) + n^2 d \\
\label{eq: adb-c1-x}
\gen{c_1(\mathfrak{s}_z(\psi)), [P_\delta]}&= 2\tilde A(\mathbf{x}) + 2d n_z(\psi) - 2d n_w(\psi) + (k+md)
\end{align}
\end{enumerate}
\end{proposition}
\begin{proof}
We will prove part ($3$) first, namely the statements about $(\alpha,\delta,\beta)$ triangles.
Up to permuting the indices of the $\beta$ curves, each $\mathbf{x}\in \mathbb{T}_\alpha \cap \mathbb{T}_\beta$ consists of points $x_j \in \alpha_j \cap \beta_j$ for $j=1,\cdots,g,$ where $x_g$ is the unique point $q\in \alpha_g \cap \beta_g$. For $j=1,\cdots,g-1,$ suppose the local multiplicities of $P_\delta$ around $x_j$ are $c_j$, $c_j+a_j$, $c_j+a_j+b_j$, $c_j+b_j$ for some $c_j$. Hence, we compute
\begin{align*}
&n_\mathbf{x}(P_\delta) = \frac{k+md+d}{2} + \sum_{j=1}^{g-1} \left(c_j + \frac{a_j + b_j}{2} \right) \\
&n_{\bar w}(P_\delta) + n_{\bar z}(P_\delta) = k+md+d\\
&n_{\bar w}(P_{n,\delta}) + n_{\bar z_{n}}(P_{n,\delta}) =-(n-1)(k+md)+n^2 d.
\end{align*}
We begin by showing that (\ref{eq: adb-alex}) and (\ref{eq: adb-c1-x}) hold for the small triangles $\psi_{\mathbf{x},l}^{m,b} \in \pi_2(\mathbf{x}_l^{m,b}, \Theta_{\delta\beta}, \mathbf{x})$ in the winding region defined above. (We henceforth omit the superscripts for simplicity.) For $l=b-m,\cdots,b-1,$
\begin{align*}
n_{\mathbf{x}_l}(P_{n,\delta})=\frac{k+md}{2} + n \sum_{j=1}^{g-1} \left(c_j + \frac{a_j + b_j}{2} \right) + &
\begin{cases}
-lnd & l\geq 0 \\
-l(-(k+md)+nd) & -n<l<0 \\
-n(k+md)-lnd & l\leq -n.
\end{cases}
\end{align*}
To prove (\ref{eq: adb-alex}), we compute using (\ref{eq: abs-alex})
\begin{align*}
(k+md)A_{w,z_n}(\mathbf{x}_{l})=\frac{1}{2}\left( n\hat\chi(P_\delta) + 2n_{\mathbf{x}_l}(P_{n,\delta}) - n_{\bar z_{n}}(P_{n,\delta}) - n_{\bar w}(P_{n,\delta}) \right)\\
=ndA_{w,z}(\mathbf{x}) +\frac{n(k+md)-n^2 d}{2} +
\begin{cases}
-lnd & l\geq 0 \\
-l(-(k+md)+nd) & -n<l<0 \\
-n(k+md)-lnd & l\leq -n.
\end{cases}
\end{align*}
Comparing with (\ref{eq: smalltriangle}), the last term on the right hand side is exactly equal to
\[
(k+md)n_{z_n}(\psi_{\mathbf{x},l})-ndn_w(\psi_{\mathbf{x},l})-(k+md-nd)n_z(\psi_{\mathbf{x},l})
\]
as required.
For (\ref{eq: adb-c1-x}), we use the first Chern class formula from \cite[Proposition 6.3]{fourmanifold}. (There is a sign inconsistency in the definition of the dual spider number; see the footnote in \cite{HL} below the proof of Lemma 4.2.)
\begin{align*}
\gen{c_1(\mathfrak{s}_z(\psi_{\mathbf{x},l})), [P_\delta]} &= \hat\chi(P_\delta) + \#(\partial P_\delta) - 2n_z(P_\delta) + 2\sigma(\psi_{\mathbf{x},l}, P_\delta) \\
&= \hat\chi(P_\delta) + \left(-(k+md) + \sum_{j=1}^{g-1} (a_j + b_j) \right) - 2 \cdot 0 + 2 \left( k+md-dl + \sum_{j=1}^{g-1} c_j \right) \\
&= \hat\chi(P_\delta) + \sum_{j=1}^{g-1} (a_j + b_j + 2c_j) + 2d (-l) + k+md \\
&= \hat\chi(P_\delta) + 2 n_\mathbf{x}(P_\delta) - n_{\bar w}(P_\delta) - n_{\bar z}(P_\delta) + 2d( n_z(\psi_{\mathbf{x},l}) - n_w(\psi_{\mathbf{x},l}) )+ k+md \\
&= 2d A_{w,z}(\mathbf{x}) + 2d n_z(\psi_{\mathbf{x},l}) - 2d n_w(\psi_{\mathbf{x},l}) + k + md
\end{align*}
as required.
Next, we consider an arbitrary triangle $\psi \in \pi_2(\mathbf{a},\Theta_{\delta\beta},\mathbf{x}).$ There are some $r\in \mathbb{Z}$ and some $l\in \{b-m,\cdots,b-1 \}$ such that
\[
n_w(\psi) - n_z(\psi)=r(k+md)+l.
\]
Let $\psi' = \psi - rP_\delta \in \pi_2(\mathbf{a}, \Theta_{\delta\beta}, \mathbf{x})$; then $n_w(\psi') - n_z(\psi') = l$. The composite domain $\phi$ with $\mathcal{D} (\phi)= \mathcal{D}(\psi') - \mathcal{D}(\psi_{\mathbf{x},l})$ is the domain of a disk in $\pi_2(\mathbf{a}, \mathbf{x}_l)$, so $\mathfrak{s}_z(\psi') = \mathfrak{s}_z(\psi_{\mathbf{x},l})$. We then compute:
\begin{align*}
A_{w,z_n}(\mathbf{a}) - A_{w,z_n}(\mathbf{x}_l) &= n_{z_n}(\phi) - n_w(\phi) \\
&= n_{z_n}(\psi') - n_{z_n}(\psi_{\mathbf{x},l}) - n_w(\psi') + n_w(\psi_{\mathbf{x},l}) \\
&= n_{z_n}(\psi) - n_w(\psi) -r(nd-(k+md)) +
\begin{cases}
l & l\geq 0 \\
0 & -n<l<0 \\
l+n & l\leq -n.
\end{cases}
\end{align*}
\begin{align*}
& (k+md)A_{w,z_n}(\mathbf{a})- ndA_{w,z}(\mathbf{x}) \\
&= ((k+md)A_{w,z_n}(\mathbf{x}_l) - ndA_{w,z}(\mathbf{x}) ) + (k+md)(A_{w,z_n}(\mathbf{a}) - A_{w,z_n}(\mathbf{x}_l)) \\
&= \frac{n(k+md)-n^2 d}{2} +(k+md)(n_{z_n}(\psi) - n_w(\psi)) + (k+md-nd)(k+md)r + (k+md-nd)l \\
&= \frac{n(k+md)-n^2 d}{2} +(k+md)(n_{z_n}(\psi) - n_w(\psi)) +(k+md-nd)(n_w(\psi) - n_z(\psi))
\end{align*}
as required. Similarly, we have:
\begin{align*}
\gen{c_1(\mathfrak{s}_z(\psi)), [P_\delta]}
&= \gen{c_1(\mathfrak{s}_z(\psi' + rP_\delta)), [P_\delta]} \\
&= \gen{c_1(\mathfrak{s}_z(\psi') + r \PD[P_\delta]), [P_\delta]} \\
&= \gen{c_1(\mathfrak{s}_z(\psi')), [P_\delta]} + 2r [P_\delta]^2 \\
&= \gen{c_1(\mathfrak{s}_z(\psi_{\mathbf{x},l})), [P_\delta]} - 2r (k+md)d \\
&= 2d A_{w,z}(\mathbf{x}) + k + md - 2dl - 2r (k+md)d \\
&= 2d A_{w,z}(\mathbf{x}) + k + md - 2d (n_w(\psi) - n_z(\psi))
\end{align*}
as required.
Then, (\ref{eq: adb-c1-a}) follows immediately from (\ref{eq: adb-alex}) and (\ref{eq: adb-c1-x}). This concludes the proof of part ($3$).
Part ($1$), namely the statements about $(\alpha,\beta,\gamma)$ triangles, can be proved in a similar manner and is left to the reader. The relative periodic domain $P_{n,\gamma}$ is displayed in Figure \ref{fig: green2}.
Lastly, we prove part ($2$), namely the statements about $(\alpha,\gamma,\delta)$ triangles. Consider $\psi \in \pi_2(\mathbf{q}, \Theta_{\gamma\delta}, \mathbf{a})$ for some $\mathbf{q} \in \mathbb{T}_\alpha \cap \mathbb{T}_\gamma$ and $\mathbf{a} \in \mathbb{T}_\alpha \cap \mathbb{T}_\delta$. Note that $n_z(\psi) = n_w(\psi)$, since $w$ and $z$ are separated only by $\beta_g$.
Choose an arbitrary triangle $\phi \in \pi_2(\mathbf{x}, \Theta_{\beta\gamma}, \mathbf{q})$ for some $\mathbf{x} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta$. By \eqref{eq: abg-alex},
\[
n\tilde A(\mathbf{x}) - \tilde A(\mathbf{q}) = -nd n_w(\phi) + k n_{z_n}(\phi) - (k-nd) n_z(\phi) - \frac{nk-n^2 d}{2}.
\]
To simplify notation, let $\tau = \tau_0^+ \in \pi_2(\Theta_{\beta\gamma}, \Theta_{\gamma\delta}, \Theta_{\beta\delta})$ be the class represented by the small triangle in the center of Figure \ref{fig: Q} (see Lemma \ref{lemma: Theta-gamma-delta}); its multiplicity at every reference point is zero. If we let $\sigma$ be the composite domain with $\mathcal{D}(\sigma) = \mathcal{D}(\phi) + \mathcal{D}(\psi) - \mathcal{D}(\tau)$, then $\sigma$ is almost the domain of a triangle in $\pi_2(\mathbf{x}, \Theta_{\beta\delta}, \mathbf{a})$, except that the boundary of $\sigma$ includes $\gamma_g$ with multiplicity
\[
r = n_{z_j}(\sigma) - n_{u_j}(\sigma) \quad \text{for each } j=1,\cdots,n.
\]
As a result,
\begin{align*}
nr &= \sum^n_{j=1} \left(n_{z_j} (\sigma) - n_{u_j}(\sigma) \right)\\
&=\sum^n_{j=1} \left(n_{z_j} (\psi) - n_{u_j}(\psi) \right) + \sum^n_{j=1} \left(n_{z_j} (\phi) - n_{u_j}(\phi) \right) \\
&=\sum^n_{j=1} \left(n_{z_j} (\psi) - n_{u_j}(\psi) \right) +n_{z_n}(\phi) - n_z(\phi).
\end{align*}
There is an actual triangle class $\sigma' \in \pi_2(\mathbf{x}, \Theta_{\beta\delta}, \mathbf{a})$ with $\mathcal{D}(\sigma') = \mathcal{D}(\sigma) - rQ$. Moreover, the composites $\rho_1 = \phi * \psi$ and $\rho_2 = \sigma' * \tau$ are each quadrilaterals in $\pi_2(\mathbf{x}, \Theta_{\beta\gamma}, \Theta_{\gamma\delta}, \mathbf{a})$ satisfying $\mathcal{D}(\rho_1) = \mathcal{D}(\rho_2) + rQ$.
We compute
\begin{align*}
n\tilde A(\mathbf{x}) &- \tilde A(\mathbf{a})
= -nd n_w(\sigma') + (k+md) n_{z_n}(\sigma') - (k+md -nd) n_z(\sigma') - \frac{n(k+md) -n^2 d}{2} \\
&= -nd n_w(\sigma) + (k+md) n_{z_n}(\sigma) - (k+md -nd) n_z(\sigma) -mdnr - \frac{n(k+md) -n^2 d}{2} \\
&= (k+md) n_{z_n}(\psi) - (k+md) n_z(\psi) -nd n_w(\phi) + (k+md) n_{z_n}(\phi)\\
&\qquad - (k+md -nd) n_z(\phi) -mdnr - \frac{n(k+md) -n^2 d}{2}\\
& =(k+md)(n_{z_n}(\psi)-n_z(\psi)) -nd n_w(\phi) + (k+md) n_{z_n}(\phi) - (k+md -nd) n_z(\phi) \\
& \qquad -md \left( \sum^n_{j=1} \left(n_{z_j} (\psi) - n_{u_j}(\psi) \right) +n_{z_n}(\phi) - n_z(\phi)\right) - \frac{n(k+md) -n^2 d}{2} \\
&= (k+md)(n_{z_n}(\psi)-n_z(\psi)) -md \sum^n_{j=1} \left(n_{z_j} (\psi) - n_{u_j}(\psi) \right) \\
&\qquad -nd n_w(\phi) + k n_{z_n}(\phi) - (k -nd) n_z(\phi) - \frac{n(k+md) -n^2 d}{2}
\end{align*}
The first equality above follows from the same computation as in part ($3$), using $(\alpha,\beta,\delta)$ triangles instead. Subtracting the value of $n\tilde A(\mathbf{x}) - \tilde A(\mathbf{q})$, we obtain:
\begin{align*}
\tilde A(\mathbf{q}) - \tilde A(\mathbf{a})
&= (k + md ) n_{z_n}(\psi) - (k+md) n_z(\psi) -md \sum^n_{j=1} \Big(n_{z_j} (\psi) - n_{u_j}(\psi)\Big) - \frac{nmd}{2}
\end{align*}
proving \eqref{eq: agd-alex}. Likewise, using \eqref{eq: abgd-PD-self-int}, we have:
\begin{align*}
&\quad \gen{ c_1(\mathfrak{s}_z(\psi)), n[R]} \\
&= \gen{c_1(\mathfrak{s}_z(\rho_1)), n[R]} \\
&= \gen{c_1(\mathfrak{s}_z(\rho_2) + r\PD[Q]), n[R]} \\
&= \gen{c_1(\mathfrak{s}_z(\rho_2)) + 2r\PD[Q], \frac{nm}{\nu} [P_\delta] + \frac{n(k+md)}{\nu}[Q]} \\
&= \frac{nm}{\nu} \gen{c_1(\mathfrak{s}_z(\sigma')), \ [P_\delta]} + \frac{n(k+md)}{\nu} \gen{c_1(\mathfrak{s}_z(\tau)), [Q]} + \frac{2nr(k+md)}{\nu}[Q]^2 \\
&= \frac{m}{\nu} \left( 2 \tilde A(\mathbf{a}) + 2(k+md)(n_{z_n}(\sigma') - n_z(\sigma')) + n^2 d\right) + \frac{n(k+md)}{\nu}(-m) + \frac{2 nr(k+md)}{\nu}(-m) \\
&= \frac{m}{\nu} \left( 2 \tilde A(\mathbf{a}) + 2(k+md)(n_{z_n}(\sigma') - n_z(\sigma') -nr) - n(k+md) + n^2 d \right) \\
&= \frac{m}{\nu} \left( 2 \tilde A(\mathbf{a}) + 2(k+md) \Big(n_{z_n}(\psi) - n_z(\psi) + n_{z_n}(\phi) -n_z(\phi) -nr \Big) - n(k+md) + n^2 d \right) \\
&= \frac{m}{\nu} \left( 2 \tilde A(\mathbf{a}) + 2(k+md)\Big(n_{z_n}(\psi) - n_z(\psi) - \sum^n_{j=1} \left(n_{z_j} (\psi) - n_{u_j}(\psi) \right)\Big) - n(k+md) + n^2 d \right)
\end{align*}
thus proving \eqref{eq: agd-c1-a}. Finally, \eqref{eq: agd-c1-q} follows immediately from \eqref{eq: agd-alex} and \eqref{eq: agd-c1-a}.
\end{proof}
\begin{remark}
The first Chern class formulas in Proposition \ref{prop: alex-triangle} have $\mathfrak{s}_w$ versions as well. For example,
\begin{align*}
& \gen{c_1(\mathfrak{s}_w(\psi)), [P_\gamma]}\\
&=\gen{c_1(\mathfrak{s}_z(\psi) - \PD[C]), [P_\gamma]}\\
&=\gen{c_1(\mathfrak{s}_z(\psi) ) - 2\PD[C], [P_\gamma]}\\
&= 2 \tilde A(\mathbf{x}) + 2d n_w(\psi) - 2d n_z(\psi) - k,
\end{align*}
which gives the $\mathfrak{s}_w$ version of (\ref{eq: abg-c1-x}). The same computation applies to (\ref{eq: abg-c1-q}), (\ref{eq: adb-c1-a}), and (\ref{eq: adb-c1-x}) as well (using the core disk in each corresponding cobordism). For the $(\alpha,\gamma,\delta)$ triangles the two versions coincide, due to the absence of the $\beta$ curves.
\end{remark}
\begin{remark} \label{re: nprime}
Given a suitable diagram $(\Sigma, \bm\alpha, \bm\beta, \bm\gamma, \bm\delta, w,z,z_n)$ for a fixed $n$, we can construct $P_{n',\gamma}$ and $P_{n',\delta}$ for every $n'$ with $1\leq n' \leq n$. By running through the same proof, the results stated for $n$ in the previous proposition hold (simultaneously) for all such $n'$ as well. Note that the Alexander gradings depend on $n$, even though this is not explicit in the notation. Adopting this point of view, the results stated for $n$ in the rest of the paper hold simultaneously for all $n'$ with $1\leq n' \leq n$, substituting $z_{n'}$ and $u_{n'}$ as appropriate.
\end{remark}
\subsubsection{Spin$^c$ structures in cobordisms.} \label{sssec: spinc}
Proposition \ref{prop: alex-triangle} helps us understand the spin$^c$ structures on cobordisms. We focus on the cobordism $W_\lambda$, induced by $(\Sigma,\bm\alpha,\bm\beta,\bm\gamma)$. As before, let $K\subset Y$ be a knot of order $d>0$ in $H_1(Y;\mathbb{Z})$ and $F$ a rational Seifert surface for $K$. Recall that $[\partial F]= d\lambda - k \mu$ in $H_1(\partial(Y-\text{nbhd}(K)))$. Inside $W_\lambda$, let $C$ denote the core of the $2$-handle attached to $Y$, and $C^*$ its cocore. Then $[C]$ (resp.~$[C^*]$) generates $H_2(W_\lambda,Y)$ (resp.~$H_2(W_\lambda,Y_\lambda(K))$). We abuse notation and use $[C]$ and $[C^*]$ to denote the corresponding classes in $H_2(W_\lambda)$ as well. Finally, let $\hat F$ be the capped Seifert surface in $W_\lambda$, formed by capping the rational Seifert surface $F$ with $d$ parallel copies of $C$. Since $[\hat F]$ maps to $d[C]$ in $H_2(W_\lambda,Y)$ and to $k[C^*]$ in $H_2(W_\lambda, Y_\lambda(K))$, it follows that $[\hat F]^2=dk$.
In \cite[Section 2.2]{rational}, Ozsv\'ath and Szab\'o defined the map
\[
G_{Y,K} \colon\thinspace \underline\Spin^c(Y,K) \to \Spin^c(Y),
\]
which is equivariant with respect to the restriction map
\[
H^2(Y,K;\mathbb{Z}) \to H^2(Y;\mathbb{Z}).
\]
The fibers of $G_{Y,K}$ are exactly the orbits of $\underline\Spin^c(Y,K)$ under the action of $\gen{\PD[\mu]} \subset H^2(Y,K;\mathbb{Z})$. For any $\mathbf{x} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta$, we have $G_{Y,K}(\underline\mathfrak{s}_{w,z}(\mathbf{x})) = \mathfrak{s}_w(\mathbf{x})$. They also construct a bijection
\[
E_{Y, \lambda, K} \colon\thinspace \Spin^c(W_\lambda) \to \underline\Spin^c(Y,K)
\]
characterized by the property that for $\psi \in \pi_2(\mathbf{x}, \Theta_{\beta\gamma}, \mathbf{q})$,
\[
E_{Y,\lambda, K}(\mathfrak{s}_w(\psi)) = \underline\mathfrak{s}_{w,z}(\mathbf{x}) + (n_z(\psi) - n_w(\psi))\PD[\mu].
\]
As noted by Hedden and Levine, (\ref{eq: abg-c1-x}) in Proposition \ref{prop: alex-triangle} allows us to give an explicit, diagram-independent description of $E_{Y,\lambda, K}$ as follows.
\begin{definition}\label{def: spincbij}
For any $\mathfrak{v} \in \Spin^c(W_\lambda) $, $E_{Y,\lambda, K}(\mathfrak{v})$ is the relative spin$^c$ structure satisfying
\[
G_{Y,K}(E_{Y,\lambda,K}(\mathfrak{v})) = \mathfrak{v}|_Y \quad \text{and} \quad
A_{Y,K}(E_{Y,\lambda, K}(\mathfrak{v})) = \frac{\gen{c_1(\mathfrak{v}), [\hat F]} + k}{2d} .
\]
\end{definition}
We claim that the two definitions coincide. Note that any relative spin$^c$ structure is determined by its images under the pair $(G_{Y,K},A_{Y,K})$. Given $\mathfrak{v} \in \Spin^c(W_\lambda)$, suppose $\mathfrak{s}_w(\psi)=\mathfrak{v}$ for some $\psi \in \pi_2(\mathbf{x}, \Theta_{\beta\gamma}, \mathbf{q})$. First, we have $G_{Y,K}(E_{Y,\lambda, K}(\mathfrak{v})) = G_{Y,K}(\underline\mathfrak{s}_{w,z}(\mathbf{x}))$, since the action of $\PD[\mu]$ preserves the fibers of $G_{Y,K}$. Then, according to (\ref{eq: abg-c1-x}),
\begin{align*}
A_{Y,K}(E_{Y,\lambda, K}(\mathfrak{s}_w(\psi)))&= A(\mathbf{x}) + n_w(\psi) - n_z(\psi)\\
&=\frac{\gen{c_1(\mathfrak{s}_z(\psi)), [\hat F]} - k}{2d}\\
&=\frac{\gen{c_1(\mathfrak{s}_w(\psi))+2\PD[C], [\hat F]} - k}{2d}\\
& =\frac{\gen{c_1(\mathfrak{s}_w(\psi)), [\hat F]} + k}{2d}\\
& =\frac{\gen{c_1(\mathfrak{v}), [\hat F]} + k}{2d}
\end{align*}
as required.
Since $\underline\Spin^c(Y,K)$ and $\underline\Spin^c(Y_\lambda,K_\lambda)$ (where $K_\lambda = K_{1,\lambda}$ is the dual knot) only depend on the knot complement, they are canonically identified. Therefore there is also a bijection between $\Spin^c(W_\lambda) $ and $\underline\Spin^c(Y_\lambda,K_\lambda)$. Interestingly, Proposition \ref{prop: alex-triangle} further provides a bijection between $\Spin^c(W_\lambda) $ and $\underline\Spin^c(Y_\lambda,K_{n,\lambda})$, and thus a bijection between $\underline\Spin^c(Y,K)$ and $\underline\Spin^c(Y_\lambda,K_{n,\lambda})$, even though their knot complements are distinct.
\begin{definition} \label{def: Enbij}
For any $\mathfrak{v} \in \Spin^c(W_\lambda)$, suppose $\mathfrak{s}_z(\psi)=\mathfrak{v}$ for some $\psi \in \pi_2(\mathbf{x}, \Theta_{\beta\gamma}, \mathbf{q})$.
Let
\[
E_{Y_\lambda, K_{n,\lambda}} \colon\thinspace \Spin^c(W_\lambda) \to \underline\Spin^c(Y_\lambda,K_{n,\lambda})
\]
be the map
\[
E_{Y_\lambda, K_{n,\lambda}}(\mathfrak{s}_z(\psi)) = \underline\mathfrak{s}_{w,z_n}(\mathbf{q}) + (n_{z_n}(\psi) - n_w(\psi))\PD[K].
\]
Equivalently, if $\mathfrak{s}_w(\psi)=\mathfrak{v}$,
\[
\qquad E_{Y_\lambda, K_{n,\lambda}}(\mathfrak{s}_w(\psi)) = \underline\mathfrak{s}_{w,z_n}(\mathbf{q}) + (n_{z_n}(\psi) - n_w(\psi) - n)\PD[K].
\]
\end{definition}
We have the following diagram-independent reinterpretation:
\begin{lemma}\label{le: spincbij}
For any $\mathfrak{v} \in \Spin^c(W_\lambda) $, the map $E_{Y_\lambda, K_{n,\lambda}} \colon\thinspace \Spin^c(W_\lambda) \to \underline\Spin^c(Y_\lambda,K_{n,\lambda})$ is a bijection, characterized by
\[
G_{Y_\lambda,K_{n,\lambda}}(E_{Y_\lambda, K_{n,\lambda}}(\mathfrak{v})) = \mathfrak{v}|_{Y_\lambda} \quad \text{and} \quad
A_{Y_\lambda,K_{n,\lambda}}(E_{Y_\lambda, K_{n,\lambda}}(\mathfrak{v})) = \frac{\gen{c_1(\mathfrak{v}), n[\hat F]} - n^2 d}{2k}.
\]
The values of $A_{Y_\lambda,K_{n,\lambda}}(E_{Y_\lambda, K_{n,\lambda}}(\mathfrak{v}))$, taken over all $\mathfrak{v}$ with the same restriction in $\Spin^c(Y_\lambda(K))$, form a $\mathbb{Q}/n\mathbb{Z}$ coset.
\end{lemma}
\begin{proof}
Fix $\mathfrak{t} \in \Spin^c(Y_\lambda(K))$. The spin$^c$ structures in $\Spin^c(W_\lambda)$ that restrict to $\mathfrak{t}$ form an orbit under the action of $\PD[C]$. Let $\mathfrak{v}$ denote any spin$^c$ structure from this orbit. Since
\[
\gen{c_1(\mathfrak{v} + \PD[C]), n[\hat F]} = \gen{c_1(\mathfrak{v}), n[\hat F]} + 2nk,
\]
the values of $A_{Y_\lambda,K_{n,\lambda}}(E_{Y_\lambda, K_{n,\lambda}}(\mathfrak{v}))$ in Lemma \ref{le: spincbij}
form a $\mathbb{Q}/n\mathbb{Z}$ coset. On the other hand, suppose $\mathfrak{s}_z(\psi)=\mathfrak{v}$ for some $\psi \in \pi_2(\mathbf{x}, \Theta_{\beta\gamma}, \mathbf{q})$, then we have
\begin{align*}
G_{Y_\lambda,K_{n,\lambda}}(E_{Y_\lambda, K_{n,\lambda}}(\mathfrak{s}_z(\psi))) &= G_{Y_\lambda,K_{n,\lambda}}(\underline\mathfrak{s}_{w,z_n}(\mathbf{q})) = \mathfrak{v}|_{Y_\lambda}\\
A_{Y_\lambda,K_{n,\lambda}}(E_{Y_\lambda, K_{n,\lambda}}(\mathfrak{s}_z(\psi))) &= A(\mathbf{q}) + n_{z_n}(\psi) - n_w(\psi)\\
&=\frac{\gen{c_1(\mathfrak{v}), n[\hat F]} - n^2 d}{2k}.
\end{align*}
as required. The last two equalities follow from Definition \ref{def: Enbij} and (\ref{eq: abg-c1-q}), respectively. (Note that the image $E_{Y_\lambda, K_{n,\lambda}}(\mathfrak{v})$ forms an orbit in $\underline\Spin^c(Y_\lambda,K_{n,\lambda})$ under the action of $\PD[K]$.)
\end{proof}
Lemma \ref{le: spincbij} (together with Definition \ref{def: spincbij}) concretely describes the bijection between $\underline\Spin^c(Y,K)$ and $\underline\Spin^c(Y_\lambda,K_{n,\lambda})$. If we take $n=1$, it also recovers \cite[Corollary 4.5]{HL}.
The spin$^c$ structures in $\Spin^c(W_\lambda)$ that have the same restriction in $\Spin^c(Y)$ form an orbit under the action of $\PD[C^*]$. Their image under the bijection $E_{Y,\lambda,K}$ forms an orbit under the action of $\PD[\mu]$, whose $A_{Y,K}$ values form a $\mathbb{Q}/\mathbb{Z}$ coset. On the other hand, given $\mathfrak{t} \in \Spin^c(Y_\lambda(K))$, the spin$^c$ structures in $\Spin^c(W_\lambda)$ that restrict to $\mathfrak{t}$ form an orbit under the action of $\PD[C]$. Their image under the bijection $E_{Y,\lambda,K}$ forms an orbit under the action of $\PD[K]$, whose $A_{Y,K}$ values have step length $k/d$ and whose $G_{Y,K}$ values have period $d$.
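The step length $k/d$ can be read off directly: since $\gen{2\PD[C], [\hat F]} = 2k$ (as used in the computation above), acting on $\mathfrak{v}$ by $\PD[C]$ changes $\gen{c_1(\mathfrak{v}), [\hat F]}$ by $2k$, and hence
\[
A_{Y,K}\big(E_{Y,\lambda, K}(\mathfrak{v}+\PD[C])\big) = \frac{\gen{c_1(\mathfrak{v}), [\hat F]} + 2k + k}{2d} = A_{Y,K}\big(E_{Y,\lambda, K}(\mathfrak{v})\big) + \frac{k}{d}.
\]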
\subsubsection{Alexander grading shifts on holomorphic rectangles}
Following Hedden-Levine's approach, we introduce a function $\mathcal{N}$ to help with the computation. Note that the definition is adjusted to reflect the changes we made on the relative periodic domains. For any domain $S$, define
\begin{align}\label{eq: mult-comb}
\nonumber \mathcal{N}(S) = -nd n_w(S) - (k+md-nd) n_z(S) &+ (k+md) n_{z_n}(S)\\
&- md \sum_{j=1}^n\Big( n_{z_j}(S) - n_{u_j}(S) \Big).
\end{align}
Clearly, the definition of $\mathcal{N}$ depends on $n$, even though we omit it in the notation.
Similar to the function defined by Hedden and Levine, for any multi-periodic domain $S$ (including those with nonzero multiplicity at $z$), we claim $\mathcal{N}(S)=0$. To see this, first observe that any multi-periodic domain is a linear combination of $P_\gamma$, $Q$, $\Sigma$, thin domains, and $(\alpha,\beta)$ periodic domains with $n_z=0$. Then one can check that $\mathcal{N}$ vanishes on each of these, proving the claim. Note that for different types of domains, the formula for $\mathcal{N}(S)$ can be simplified significantly, depending on which basepoints lie in the same regions. We record the result as follows (compare with the table in \cite[Section 5.2]{HL}):
\begin{center}
\begin{tabular}{|c|c|}
\hline
Type of domain & $\mathcal{N}(S)$ \\ \hline
$(\alpha,\beta), (\beta,\gamma)$ and $(\beta,\delta)$ & $nd(n_z(S) - n_w(S))$ \\
$(\alpha,\gamma)$ & $k(n_{z_n}(S) - n_w(S))$ \\
$(\alpha,\delta)$ & $(k+md)(n_{z_n}(S) - n_w(S))$ \\
$(\gamma,\delta)$ & $-mdn(n_{z_n}(S) - n_u(S))$ \\
$(\alpha,\beta,\gamma)$ & $-nd n_w(S) - (k-nd) n_z(S) + k n_{z_n}(S)$ \\
$(\alpha,\gamma,\delta)$ & $- (k+md) n_z(S) + (k+md) n_{z_n}(S) - md \sum_{j=1}^n\Big( n_{z_j}(S) - n_{u_j}(S) \Big)$ \\
$(\alpha,\delta,\beta)$ & $-nd n_w(S) - (k+md-nd) n_z(S) + (k+md) n_{z_n}(S) $ \\
$(\beta,\gamma,\delta)$ & $-nd n_w(S) + nd n_z(S) -mdn(n_{z_n}(S) - n_u(S))$ \\
\hline
\end{tabular}
\end{center}
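As an illustration of the vanishing claim above, consider $S=\Sigma$, where every local multiplicity equals $1$; formula (\ref{eq: mult-comb}) then gives
\[
\mathcal{N}(\Sigma) = -nd - (k+md-nd) + (k+md) - md\sum_{j=1}^n(1-1) = 0.
\]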
Note that on $(\gamma,\delta)$ and $(\beta,\gamma,\delta)$ domains, due to the absence of $\alpha$ curves, all the $n_{u_j}$ take the same value for $j=1,\cdots, n$, which we simply denote by $n_u$ in this context, and $n_{z_j}=n_z$ ($=n_w$ if, in addition, there are no $\beta$ curves). Now we are ready to compute the Alexander grading shifts on rectangles. Compare \cite[Proposition 5.6]{HL}.
\begin{proposition} \label{prop: alex-rectangle}
Let $\mathbf{x} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta$, $\mathbf{q} \in \mathbb{T}_\alpha \cap \mathbb{T}_\gamma$, and $\mathbf{a} \in \mathbb{T}_\alpha \cap \mathbb{T}_\delta$. Assume also $\mathbf{a}$ uses $p_l$, for some $l=b-m,\cdots,b-1$ in the winding region.
\begin{enumerate}
\item \label{it: rec1}
For any $\rho \in \pi_2(\mathbf{x}, \Theta_{\beta\gamma}, \Theta_{\gamma\delta}, \mathbf{a})$,
\begin{align}
\label{eq: abgd-alex}
n\tilde A(\mathbf{x}) - \tilde A(\mathbf{a}) &= \mathcal{N}(\rho) -\frac{n(k+md) - n^2 d}{2} \\
\label{eq: abgd-c1-Pg}
\gen{c_1(\mathfrak{s}_z(\rho)), [P_\gamma]} &= 2 \tilde A(\mathbf{x}) + 2d n_w(\rho) - 2d n_z(\rho) +k \\
\label{eq: abgd-c1-R}
\gen{c_1(\mathfrak{s}_z(\rho)), n[R]}
&= \frac{m}{\nu}\Bigg( 2 \tilde A(\mathbf{a}) + 2(k+md)\Big(n_{z_n}(\rho) - n_z(\rho) -\sum_{j=1}^n\left( n_{z_j}(\rho) - n_{u_j}(\rho) \right) \Big)\\
\nonumber &\qquad - n(k + md) + n^2 d \Bigg) \\
\label{eq: abgd-c1-Q}
\gen{c_1(\mathfrak{s}_z(\rho)), n[Q]} &= -2m\sum_{j=1}^n\left( n_{z_j}(\rho) - n_{u_j}(\rho) \right) -nm.
\end{align}
\item \label{it: rec2}
For any $\rho \in \pi_2(\mathbf{q}, \Theta_{\gamma\delta}, \Theta_{\delta\beta}, \mathbf{x})$,
\begin{align}
\label{eq: agdb-alex}
\tilde A(\mathbf{q}) - n\tilde A(\mathbf{x}) &= \mathcal{N}(\rho) + \frac{nk- n^2d}{2} \\
\label{eq: agdb-c1-R}
\gen{c_1(\mathfrak{s}_z(\rho)), n[R]} &= \frac{m}{\nu} \left( 2 \tilde A(\mathbf{q}) - 2k \sum_{j=1}^n\left( n_{z_j}(\rho) - n_{u_j}(\rho) \right) - nk + n^2 d \right) \\
\label{eq: agdb-c1-Pd}
\gen{c_1(\mathfrak{s}_z(\rho)), [P_\delta]} &= 2\tilde A(\mathbf{x}) + 2d n_z(\rho) - 2d n_w(\rho) + (k+md) \\
\label{eq: agdb-c1-Q}
\gen{c_1(\mathfrak{s}_z(\rho)), n[Q]} &= 2m \left( -n_z(\rho) + n_{z_n}(\rho) - \sum_{j=1}^n\left( n_{z_j}(\rho) - n_{u_j}(\rho) \right) \right) -nm.
\end{align}
\item \label{it: rec3}
For any $\rho \in \pi_2(\mathbf{a}, \Theta_{\delta\beta}, \Theta_{\beta\gamma}, \mathbf{q})$,
\begin{align}
\label{eq: adbg-alex}
\tilde A(\mathbf{a}) - \tilde A(\mathbf{q}) &= \mathcal{N}(\rho) - \frac{nmd}{2} \\
\label{eq: adbg-c1-Pd}
\gen{c_1(\mathfrak{s}_z(\rho)), [P_{n,\delta}]}
&= 2\tilde A(\mathbf{a}) +2(k+md) \left( n_z(\rho) - n_{z_n}(\rho) + \sum_{j=1}^n\left( n_{z_j}(\rho) - n_{u_j}(\rho) \right) \right) + n^2 d \\
\label{eq: adbg-c1-Pg}
\gen{c_1(\mathfrak{s}_z(\rho)), [P_{n,\gamma}]} &= 2 \tilde A(\mathbf{q}) + 2k \sum_{j=1}^n\left( n_{z_j}(\rho) - n_{u_j}(\rho) \right) + n^2 d \\
\label{eq: adbg-c1-Q}
\gen{c_1(\mathfrak{s}_z(\rho)), [Q]} &= 2 n_w(\rho) - 2 n_z(\rho) + m
\end{align}
\end{enumerate}
\end{proposition}
\begin{proof}
For part \eqref{it: rec1}, we consider $(\alpha, \beta, \gamma, \delta)$ rectangles.
For any $\mathbf{x} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta$, $\mathbf{a} \in \mathbb{T}_\alpha \cap \mathbb{T}_\delta$, and $\rho \in \pi_2(\mathbf{x}, \Theta_{\beta\gamma}, \Theta_{\gamma\delta}, \mathbf{a})$, choose $\mathbf{q} \in \mathbb{T}_\alpha \cap \mathbb{T}_\gamma$, $\psi_1 \in \pi_2(\mathbf{x}, \Theta_{\beta\gamma}, \mathbf{q})$, and $\psi_2 \in \pi_2(\mathbf{q}, \Theta_{\gamma\delta}, \mathbf{a})$ such that $\mathfrak{s}_z(\psi_1) = \mathfrak{s}_z(\rho)|_{X_{\alpha\beta\gamma}}$ and $\mathfrak{s}_z(\psi_2) = \mathfrak{s}_z(\rho)|_{X_{\alpha\gamma\delta}}$. Moreover, by adding copies of $\Sigma$, which does not change the spin$^c$ structure condition, we may assume that $n_z(\rho) = n_z(\psi_1) + n_z(\psi_2)$. Hence, $S = \mathcal{D}(\rho) - \mathcal{D}(\psi_1 * \psi_2)$ is a quadruply periodic domain with $n_z(S)=0$. Since the function $\mathcal{N}$ vanishes on all periodic domains, we have:
\begin{align*}
n\tilde A(\mathbf{x}) - \tilde A(\mathbf{a}) &= (n\tilde A(\mathbf{x}) - \tilde A(\mathbf{q})) + (\tilde A(\mathbf{q}) - \tilde A(\mathbf{a})) \\
&= \mathcal{N}(\psi_1) - \frac{nk-n^2 d}{2} + \mathcal{N}(\psi_2) - \frac{nmd}{2} \\
&= \mathcal{N}(\rho) - \frac{n(k+md)-n^2 d}{2},
\end{align*}
which proves \eqref{eq: abgd-alex}.
Next, we consider the spin$^c$ evaluations. Up to thin domains, we have $S = x P_\gamma + y R$; this decomposition is chosen using the fact that the classes $[P_\gamma]$ and $[R]$ can be represented by disjoint surfaces in $X_{\alpha\beta\gamma\delta}$, so $[P_\gamma] \cdot [R] = 0$ in the intersection form on $X_{\alpha\beta\gamma\delta}$. We need to solve for $x$ and $y$. Observing $n_w(R)=n_z(R)=0$, we find
\[
x = \frac{n_w(S)}{k}.
\]
Similarly we have $-n_{z_n}(P_\gamma) + \sum_{j=1}^n\Big( n_{z_j}(P_\gamma) - n_{u_j}(P_\gamma) \Big)=0,$ so
\[
y = -\frac{\nu}{nk}\left(-n_{z_n}(S)+\sum_{j=1}^n\Big( n_{z_j}(S) - n_{u_j}(S) \Big)\right).
\]
Note that $x$ and $y$ need not be integers. Using \eqref{eq: abgd-PD-self-int}, \eqref{eq: abgd-int-form}, and \eqref{eq: abg-c1-x}, we compute:
\begin{align*}
\gen{c_1(\mathfrak{s}_z(\rho)), [P_\gamma]}
&= \gen{c_1(\mathfrak{s}_z(\psi_1 * \psi_2) + \PD[S]), [P_\gamma]} \\
&= \gen{c_1(\mathfrak{s}_z(\psi_1 * \psi_2)) + 2\PD[S], [P_\gamma]} \\
&= \gen{c_1(\mathfrak{s}_z(\psi_1)), [P_\gamma]} + 2x [P_\gamma]^2 + 2y[R] \cdot [P_\gamma] \\
&= 2 \tilde A(\mathbf{x}) + 2d(n_w(\psi_1)-n_z(\psi_1)) + k + 2dk\cdot \frac{ n_w(S)}{k} \\
&= 2 \tilde A(\mathbf{x}) + 2d(n_w(\psi_1)-n_z(\psi_1)) + k + 2d( n_w(\rho) - n_w(\psi_1) - n_w(\psi_2)) \\
&= 2 \tilde A(\mathbf{x}) + 2d(n_w(\rho) - n_z(\psi_1) -n_w(\psi_2)) +k \\
&= 2 \tilde A(\mathbf{x}) + 2d(n_w(\rho) - n_z(\psi_1) -n_z(\psi_2)) +k \\
&= 2 \tilde A(\mathbf{x}) + 2d(n_w(\rho) -n_z(\rho)) +k,
\end{align*}
which proves \eqref{eq: abgd-c1-Pg}. (Note the similarity with \eqref{eq: abg-c1-x}.) Formula \eqref{eq: abgd-c1-R} follows from a similar computation using \eqref{eq: agd-c1-a}, as shown below.
\begin{align*}
&\gen{c_1(\mathfrak{s}_z(\rho)), n[R]}\\
&= \gen{c_1(\mathfrak{s}_z(\psi_1 * \psi_2) + \PD[S]), n[R]} \\
&= \gen{c_1(\mathfrak{s}_z(\psi_2)), n[R]} + 2x n[P_\gamma] \cdot [R] + 2yn[R]^2 \\
&= \frac{m}{\nu}\Bigg( 2 \tilde A(\mathbf{a}) + 2(k+md)\Big(n_{z_n}(\psi_2) - n_z(\psi_2) - \sum_{j=1}^n \left( n_{z_j}(\psi_2) - n_{u_j}(\psi_2) \right)\Big) - n(k + md) + n^2 d \Bigg) \\
&-\frac{2m(k+md)}{\nu}\left(-n_{z_n}(S)+\sum_{j=1}^n\Big( n_{z_j}(S) - n_{u_j}(S) \Big)\right) \\
&= \frac{m}{\nu}\Bigg( 2 \tilde A(\mathbf{a}) + 2(k+md)\Big(n_{z_n}(\psi_2) - n_z(\psi_2) - \sum_{j=1}^n \left( n_{z_j}(\psi_2) - n_{u_j}(\psi_2) \right) +n_{z_n}(S) \\
& -\sum_{j=1}^n\left( n_{z_j}(S) - n_{u_j}(S) \right)\Big) - n(k + md) + n^2 d \Bigg)\\
&= \frac{m}{\nu}\Bigg( 2 \tilde A(\mathbf{a}) + 2(k+md)\Big(n_{z_n}(\psi_2) - n_z(\psi_2) + n_{z_n}(\psi_1) - n_z(\psi_1) + n_{z_n}(\rho) - n_{z_n}(\psi_1) \\
& - n_{z_n}(\psi_2)-\sum_{j=1}^n\left( n_{z_j}(\rho) - n_{u_j}(\rho) \right) \Big)- n(k + md) + n^2 d \Bigg)\\
&=\frac{m}{\nu}\Bigg( 2 \tilde A(\mathbf{a}) + 2(k+md)\Big(n_{z_n}(\rho) - n_z(\rho) -\sum_{j=1}^n\left( n_{z_j}(\rho) - n_{u_j}(\rho) \right) \Big)- n(k + md) + n^2 d \Bigg)
\end{align*}
Finally, to prove \eqref{eq: abgd-c1-Q}, we have:
\begin{align*}
&\gen{c_1(\mathfrak{s}_z(\rho)), n[Q]} \\
&= \frac{\nu}{k} \gen{c_1(\mathfrak{s}_z(\rho)), n[R]} - \frac{m}{k}\gen{c_1(\mathfrak{s}_z(\rho)), n[P_\gamma]} \\
&= \frac{m}{k} \Bigg( 2 \tilde A(\mathbf{a}) + 2(k+md)\Big(n_{z_n}(\rho) - n_z(\rho) -\sum_{j=1}^n\left( n_{z_j}(\rho) - n_{u_j}(\rho) \right) \Big) \\
& \qquad - n(k + md) + n^2 d \Bigg) - \frac{m}{k} \left( 2 n\tilde A(\mathbf{x}) + 2nd\left(n_w(\rho) -n_z(\rho)\right) +nk \right) \\
&=\frac{m}{k} \Bigg( -2\mathcal{N}(\rho) + 2(k+md)\Big(n_{z_n}(\rho) - n_z(\rho) -\sum_{j=1}^n\left( n_{z_j}(\rho) - n_{u_j}(\rho) \right) \Big)\\
& \qquad - 2nd\left(n_w(\rho) -n_z(\rho)\right) -nk \Bigg) \\
&=-2m\sum_{j=1}^n\left( n_{z_j}(\rho) - n_{u_j}(\rho) \right) -nm
\end{align*}
as required. This concludes the proof for part \eqref{it: rec1}.
For part \eqref{it: rec2}, which concerns $(\alpha,\gamma,\delta,\beta)$ rectangles, we consider surfaces inside $X_{\alpha\gamma\delta\beta}$. We use the decomposition $S = x R + y P_\delta,$ where
\[
x=\frac{\nu}{n(k+md)}\sum_{j=1}^n\left( n_{z_j}(S) - n_{u_j}(S) \right) \quad \text{and} \quad y=\frac{n_w(S)}{k+md}.
\]
Similarly, part \eqref{it: rec3} concerns $(\alpha,\delta,\beta,\gamma)$ rectangles, where surfaces lie inside $X_{\alpha\delta\beta\gamma}$. We use the decomposition $S = x P_\delta + y P_\gamma,$ where
\[
x=\frac{1}{nd}\left(n_{z_n}(S)- \sum_{j=1}^n\left( n_{z_j}(S) - n_{u_j}(S) \right) \right) \quad \text{and} \quad y=\frac{1}{nd}\sum_{j=1}^n\left( n_{z_j}(S) - n_{u_j}(S) \right).
\]
The remaining computations for both of these cases are parallel to part \eqref{it: rec1} and left to the reader.
\end{proof}
\section{The filtered large surgery formula} \label{sec: largesurgery}
In this section, we prove the filtered large surgery formula for the $(n,1)$--cable of the knot meridian. The proof mostly follows the framework in \cite[Section 5.5]{HL}, with minor adjustments.
As before, let $K\subset Y$ be a knot of order $d>0$ in $H_1(Y;\mathbb{Z})$ and $F$ a rational surface for $K$. Recall that $W_{\lambda+m\mu}$ denotes the cobordism from $Y$ to $Y_{\lambda+m\mu}(K)$ induced by $(\Sigma,\bm\alpha,\bm\beta,\bm\delta)$, and, turning the cobordism around, $W'_{\lambda+m\mu}$ is a cobordism from $Y_{\lambda+m\mu}(K)$ to $Y$ induced by $(\Sigma,\bm\alpha,\bm\delta,\bm\beta)$. For ease of notation, we write $W_m=W_{\lambda+m\mu}$ and $W'_m=W'_{\lambda+m\mu}$. Abusing notation, we will also denote by $C$ the core of the $2$-handle and by $\hat F$ the capped Seifert surface in the cobordism $W'_m$.
Given $\mathfrak{u} \in \Spin^c(Y_{\lambda+m\mu}(K))$, let $\mathfrak{v}$ denote a spin$^c$ structure on $W'_m$ that extends $\mathfrak{u}$. We have
\[
\gen{c_1(\mathfrak{v} + \PD[C]), [\hat F]} = \gen{c_1(\mathfrak{v}), [\hat F]} + 2(k+md),
\]
so the values of $\gen{c_1(\mathfrak{v}), [\hat F]}$, taken over all such $\mathfrak{v}$, form a coset in $\mathbb{Z}/2(k+md)\mathbb{Z}$. We recall the following definition from \cite{HL}.
\begin{definition}[Definition 5.7 in \cite{HL}] \label{def: xu}
For each $\mathfrak{u} \in \Spin^c(Y_{\lambda+m\mu}(K))$, let $\mathfrak{x}_\mathfrak{u}$ denote the unique spin$^c$ structure on $W'_m$ extending $\mathfrak{u}$ such that
\begin{equation} \label{eq: c1(xu)}
-2(k+md) < \gen{c_1(\mathfrak{x}_\mathfrak{u}), [\hat F]} \le 0.
\end{equation}
Let $\mathfrak{y}_\mathfrak{u} = \mathfrak{x}_\mathfrak{u} + \PD[C]$, so that
\begin{equation} \label{eq: c1(yu)}
0 < \gen{c_1(\mathfrak{y}_\mathfrak{u}), [\hat F]} \le 2(k+md),
\end{equation}
and let $\mathfrak{s}_\mathfrak{u} = \mathfrak{x}_\mathfrak{u}|_Y = \mathfrak{y}_\mathfrak{u}|_Y - \PD[K]$.
Define
\begin{align}\label{eq: su-def}
s_\mathfrak{u} =& \frac{1}{2d}(\gen{c_1(\mathfrak{x}_\mathfrak{u}), [\hat F]} + k + md)\\
=&\frac{1}{2d}(\gen{c_1(\mathfrak{y}_\mathfrak{u}), [\hat F]} - k - md),
\end{align}
so that
\begin{equation} \label{eq: su-bound}
-\frac{k+md}{2d} < s_\mathfrak{u} \le \frac{k+md}{2d}.
\end{equation}
Finally, define
\begin{equation} \label{eq: Delta-u-def}
\Delta_\mathfrak{u} = \operatorname{\widetilde{gr}}(F^\infty_{W'_m, \mathfrak{x}_\mathfrak{u}}) = -\frac{(2ds_\mathfrak{u}-k-md)^2}{4d(k+md)} + \frac14.
\end{equation}
\end{definition}
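As a quick consistency check, the two expressions in (\ref{eq: su-def}) agree because $\gen{c_1(\mathfrak{y}_\mathfrak{u}), [\hat F]} = \gen{c_1(\mathfrak{x}_\mathfrak{u}), [\hat F]} + 2(k+md)$, and substituting (\ref{eq: c1(xu)}) into the first expression yields the bound (\ref{eq: su-bound}):
\[
-\frac{k+md}{2d} = \frac{-2(k+md)+k+md}{2d} < s_\mathfrak{u} \le \frac{0+k+md}{2d} = \frac{k+md}{2d}.
\]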
We define a pair of filtrations $\mathcal{I}_\mathfrak{u}, \mathcal{J}_\mathfrak{u}$ on $\operatorname{CFK}^\infty(Y,K,\mathfrak{s}_\mathfrak{u})$ by the formula
\begin{align}
\label{eq: Iu-def} \mathcal{I}_\mathfrak{u}([\mathbf{x},i,j]) &= \max\{i, j-s_\mathfrak{u}\} \\
\label{eq: Ju-def} \mathcal{J}_\mathfrak{u}([\mathbf{x},i,j]) &= \max\{i-n, j-s_\mathfrak{u}\} + \frac{2nd(s_\mathfrak{u} - n) + k+ md}{2(k+md)} .
\end{align}
The following theorem strengthens the main result of \cite{Truong}: it computes the (absolute) Alexander grading of generators in $\operatorname{CFK}^\infty(Y_{\lambda+m\mu}(K), K_{n,\lambda + m\mu})$, not only the filtration levels, and it allows the knot $K$ to lie in any rational homology sphere, not only in $S^3$.
Compare \cite[Theorem 5.8]{HL}.
\begin{theorem} \label{thm: large-surgery}
If $m$ is sufficiently large, then for every $\mathfrak{u} \in \Spin^c(Y_{\lambda+m\mu}(K))$, there is a doubly-filtered quasi-isomorphism with grading shift $\Delta_\mathfrak{u}$
\[
\Lambda^{\infty}_\mathfrak{u} \colon\thinspace \operatorname{CFK}^\infty(Y_{\lambda+m\mu}(K), K_{n,\lambda + m\mu}, \mathfrak{u}) \to \operatorname{CFK}^\infty(Y,K,\mathfrak{s}_\mathfrak{u}),
\]
where the latter is equipped with the filtrations $\mathcal{I}_\mathfrak{u}$ and $\mathcal{J}_\mathfrak{u}$, making the diagrams
\[
\xymatrix{
\operatorname{CFK}^\infty(Y_{\lambda+m\mu}(K), K_{n,\lambda + m\mu}, \mathfrak{u}) \ar[r]^-{\Lambda^{\infty}_\mathfrak{u}} \ar[d]^{=} & \operatorname{CFK}^\infty(Y,K,\mathfrak{s}_\mathfrak{u}) \ar[d]^{v^\infty} \\
\operatorname{CF}^\infty(Y_{\lambda+m\mu}(K), \mathfrak{u}) \ar[r]^-{F^\infty_{W'_m, \mathfrak{x}_\mathfrak{u}}} & \operatorname{CF}^\infty(Y,\mathfrak{s}_\mathfrak{u})
}
\]
and
\[
\xymatrix{
\operatorname{CFK}^\infty(Y_{\lambda+m\mu}(K), K_{n,\lambda + m\mu}, \mathfrak{u}) \ar[r]^-{\Lambda^{\infty}_\mathfrak{u}} \ar[d]^{=} & \operatorname{CFK}^\infty(Y,K,\mathfrak{s}_\mathfrak{u}) \ar[d]^{h^\infty_{\mathfrak{s}, s_\mathfrak{u}}} \\
\operatorname{CF}^\infty(Y_{\lambda+m\mu}(K), \mathfrak{u}) \ar[r]^-{F^\infty_{W'_m, \mathfrak{y}_\mathfrak{u}}} & \operatorname{CF}^\infty(Y,\mathfrak{s}_\mathfrak{u}+\PD[K])
}
\]
commute up to chain homotopy.
\end{theorem}
We make some remarks about the theorem statements before moving on to the proof.
\begin{remark} First, since the filtered chain homotopy type of the knot Floer complexes is a topological invariant of the pair $(Y,K)$, we drop the basepoints from the statements. Note that even though the definition of the map $\Lambda^{\infty}_\mathfrak{u}$ (spelled out in the proof later in this section) depends on the basepoints, the map itself does not. Next, observe that the statements here are almost the same as those in \cite[Theorem 5.8]{HL}, except that $K_{n,\lambda + m\mu}$ is in place of $K_{\lambda + m\mu}$. Indeed, disregarding the second filtration, the map $\Lambda^{\infty}_\mathfrak{u}$ here is identical to the one in \cite[Theorem 5.8]{HL}. The only difference is that $\operatorname{CFK}^\infty(Y_{\lambda+m\mu}(K), K_{n,\lambda + m\mu}, \mathfrak{u})$ is a refiltering of $\operatorname{CFK}^\infty(Y_{\lambda+m\mu}(K), K_{\lambda + m\mu}, \mathfrak{u})$, due to the different placement of a second basepoint. Reflecting this change, the $\mathcal{J}_\mathfrak{u}$ filtration on $\operatorname{CFK}^\infty(Y,K,\mathfrak{s}_\mathfrak{u})$ is adjusted accordingly. Finally, the Maslov grading shift is induced by the cobordism maps $F^\infty_{W'_m, \mathfrak{x}_\mathfrak{u}}$ and $F^\infty_{W'_m, \mathfrak{y}_\mathfrak{u}}$, respectively, independent of the choice of a second basepoint. Therefore the statement about the Maslov grading follows directly from Hedden-Levine's proof.
\end{remark}
The bulk of Theorem \ref{thm: large-surgery} was proved by Ozsv\'ath and Szab\'o; we only need to show that $\Lambda^{\infty}_\mathfrak{u}$ preserves the second filtration defined above by (\ref{eq: Ju-def}).
We focus on the Heegaard diagram $(\Sigma, \bm\alpha, \bm\beta, \bm\delta^{m,b}, w, z, z_n)$; see Figure \ref{fig: twistpurple} for an example. Recall from the discussion in Section \ref{ssec: domains} that $q$ is the unique intersection point of $\alpha_g$ and $\beta_g$, and $p_l$ for $l=b-m,\cdots,b-1$ are the intersection points of $\alpha_g$ and $\delta_g$, following the orientation of $\delta_g$. For $\mathbf{x} \in \mathbb{T}_\alpha\cap \mathbb{T}_\beta$ and $l \in \{b-m, \dots, b-1\}$, we define $\mathbf{x}_l^{m,b} \in \mathfrak{S}(\bm\alpha, \bm\delta^{m,b})$ to be the point obtained by replacing $q$ with $p_l$ and taking ``nearest points'' in thin domains. There is a \emph{small triangle} $\psi_{\mathbf{x},l}^{m,b} \in \pi_2(\mathbf{x}_l^{m,b}, \Theta_{\delta\beta}, \mathbf{x})$ in the winding region satisfying
\begin{align*}
(n_w(\psi_{\mathbf{x},l}),n_z(\psi_{\mathbf{x},l}),n_{z_n}(\psi_{\mathbf{x},l}))=&
\begin{cases}
(l,0,0) & l\geq 0 \\
(0,-l,0) & -n<l<0 \\
(0,-l,-l-n) & l\leq -n.
\end{cases}
\end{align*}
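For concreteness, evaluating (\ref{eq: mult-comb}) on these multiplicities via the $(\alpha,\delta,\beta)$ row of the table above gives
\begin{align*}
\mathcal{N}(\psi_{\mathbf{x},l}) &= -nd\, n_w(\psi_{\mathbf{x},l}) - (k+md-nd)\, n_z(\psi_{\mathbf{x},l}) + (k+md)\, n_{z_n}(\psi_{\mathbf{x},l})\\
&=\begin{cases}
-lnd & l\geq 0 \\
l(k+md-nd) & -n<l<0 \\
-lnd - n(k+md) & l\leq -n.
\end{cases}
\end{align*}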
A key ingredient of Ozsv\'ath and Szab\'o's large surgery theorem is the notion that every spin$^c$ structure can be represented by generators in the winding region. Hedden and Levine defined the following stronger version:
\begin{definition}[Definition 5.10 in \cite{HL}]
We say $\mathfrak{u} \in \Spin^c(Y_{\lambda+m\mu}(K))$ is \emph{strongly supported in the winding region} if
\begin{itemize}
\item every $\mathbf{a} \in \mathbb{T}_\alpha \cap \mathbb{T}_\delta$ with $\mathfrak{s}_w(\mathbf{a})=\mathfrak{u}$ is of the form $\mathbf{x}_l^{m,b}$ for some $l$;
\item moreover $\mathfrak{s}_w(\psi^{m,b}_{\mathbf{x},l}) = \mathfrak{x}_\mathfrak{u}$, or equivalently, $\mathfrak{s}_z(\psi^{m,b}_{\mathbf{x},l}) = \mathfrak{y}_\mathfrak{u}$.
\end{itemize}
\end{definition}
Recall that $\mathfrak{s}_w(\mathbf{a})=\mathfrak{s}_z(\mathbf{a})$, since in the diagram $(\Sigma,\bm\alpha,\bm\delta)$ the basepoints $w$ and $z$ are interchangeable. However, $\mathfrak{s}_w(\psi)$ and $\mathfrak{s}_z(\psi)$ differ by $\PD[C]$, and $\mathfrak{s}_w(\mathbf{x})$ and $\mathfrak{s}_z(\mathbf{x})$ differ by $\PD[K]$. Hedden and Levine proved that this improved condition can be satisfied.
\begin{lemma}[Lemma 5.12 in \cite{HL}]\label{lemma: strongly-supported}
There exists an $M$ such that for all $m\geq M$, for every $\mathfrak{u} \in \Spin^c(Y_{\lambda+m\mu}(K))$, there exists some $b$ such that $\mathfrak{u}$ is strongly supported in the winding region of $(\Sigma, \bm\alpha, \bm\beta, \bm\delta^{m,b}, w, z, z_n)$. (Note that $b$ does depend on the choice of $\mathfrak{u}$.)
\end{lemma}
As pointed out in \cite[Lemma 5.11]{HL}, a spin$^c$ structure $\mathfrak{u}$ is strongly supported in the winding region of $(\Sigma, \bm\alpha, \bm\delta)$ if and only if
\begin{equation*} \label{eq: strongly-supported}
\mathfrak{S}(\bm\alpha, \bm\delta, w, \mathfrak{u}) = \{\mathbf{x}_{A(\mathbf{x})-s_\mathfrak{u}} \mid \mathbf{x} \in
\mathfrak{S}(\bm\alpha, \bm\beta, w, \mathfrak{s}_\mathfrak{u})\},
\end{equation*}
where $\mathfrak{S}(\bm\alpha, \bm\beta, w, \mathfrak{s}) = \{\mathbf{x} \in \mathbb{T}_\alpha\cap \mathbb{T}_\beta \mid \mathfrak{s}_w(\mathbf{x}) = \mathfrak{s}\}.$ This can be seen from (\ref{eq: adb-c1-x}) combined with the definition of $s_\mathfrak{u}$ by (\ref{eq: su-def}).
\begin{proof}[Proof of Theorem \ref{thm: large-surgery}]
When $m$ is large enough, for a given choice of $\mathfrak{u} \in \Spin^c(Y_{\lambda+m\mu}(K))$, fix some $b$ satisfying Lemma \ref{lemma: strongly-supported}.
Define
\begin{equation} \label{eq: Lambda-u}
\Lambda^\infty_\mathfrak{u} \colon\thinspace \operatorname{CF}^\infty(\Sigma, \bm\alpha, \bm\delta, w, \mathfrak{u}) \to \operatorname{CFK}^\infty(\Sigma, \bm\alpha, \bm\beta, w, z, \mathfrak{s}_\mathfrak{u})
\end{equation}
by
\begin{equation} \label{eq: Lambda-u-def}
\Lambda^\infty_\mathfrak{u}([\mathbf{a},i]) =
\sum_{\substack{\mathbf{x} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta \\ \mathfrak{s}_w(\mathbf{x}) = \mathfrak{s}_\mathfrak{u} }}
\sum_{\substack{\psi \in \pi_2(\mathbf{a}, \Theta_{\delta\beta}, \mathbf{x}) \\ \mu(\psi) = 0 \\ \mathfrak{s}_w(\psi) = \mathfrak{x}_\mathfrak{u}}}
\#\mathcal{M}(\psi) [\mathbf{x}, i-n_w(\psi), i-n_z(\psi) + s_\mathfrak{u}].
\end{equation}
Precomposing with the identification of $\operatorname{CF}^\infty(\Sigma, \bm\alpha, \bm\delta, w, \mathfrak{u})$ with $\operatorname{CFK}^\infty(\Sigma, \bm\alpha, \bm\delta, w, z_n, \mathfrak{u})$, the map $\Lambda^\infty_\mathfrak{u}$ can also be viewed as defined on $\operatorname{CFK}^\infty(\Sigma, \bm\alpha, \bm\delta, w, z_n, \mathfrak{u})$. We will prove that $\Lambda^\infty_\mathfrak{u}$ is filtered with respect to the $\mathcal{J}_\mathfrak{u}$ filtration. The proof for the $\mathcal{I}_\mathfrak{u}$ filtration is easier and left to the reader.
Since $\mathfrak{u}$ is strongly supported in the winding region, every element of $\mathfrak{S}(\bm\alpha, \bm\delta,w, \mathfrak{u})$ is of the form $\mathbf{x}_l$, where $\mathbf{x} \in \mathfrak{S}(\bm\alpha,\bm\beta, w, \mathfrak{s}_\mathfrak{u})$ and $l = A(\mathbf{x}) - s_\mathfrak{u}$.
As in the proof of Proposition \ref{prop: alex-triangle}, we compute
\begin{align*}
\tilde A(\mathbf{x}_l)=&\tilde A(\mathbf{x}) +\frac{n(k+md)-n^2d}{2}+
\begin{cases}
-lnd & l\geq 0\\
l(k+md-nd) & -n<l<0 \\
-n(k+md)-lnd & l\leq -n.
\end{cases}
\end{align*}
So with $ \tilde A(\mathbf{x}) - lnd = nds_\mathfrak{u}$, we have
\begin{align*}
A_{w,z_n}([\mathbf{x}_l,i])=i+&\frac{nd(2s_\mathfrak{u}-n)}{2(k+md)}+\frac{n}{2}+
\begin{cases}
0 & l\geq 0\\
l & -n<l<0 \\
-n & l\leq -n.
\end{cases}
\end{align*}
On the other hand, we first consider the small triangles $\psi_{\mathbf{x},l} \in \pi_2(\mathbf{x}_l,\Theta_{\delta\beta},\mathbf{x})$ in the winding region. According to Ozsv\'ath and Szab\'o, with respect to an energy filtration, the $\psi_{\mathbf{x},l}$ correspond to the main part of $\Lambda_\mathfrak{u}^\infty$. We compute
\begin{align*}
\mathcal{J}_{\mathfrak{u}}([\mathbf{x},i-n_w(\psi_{\mathbf{x},l}),i-n_z(\psi_{\mathbf{x},l})+s_\mathfrak{u}])=&\max\{ -n_w(\psi_{\mathbf{x},l}) -n, -n_z(\psi_{\mathbf{x},l}) \} + i + \frac{nd(2s_\mathfrak{u} - n)}{2(k+md)} + \frac{n}{2}\\
=i + \frac{nd(2s_\mathfrak{u} - n)}{2(k+md)} +& \frac{n}{2}+
\begin{cases}
0 & l\geq 0\\
l & -n<l<0 \\
-n & l\leq -n.
\end{cases}
\end{align*}
Therefore the small triangles $\psi_{\mathbf{x},l}$ preserve the $\mathcal{J}_\mathfrak{u}$ filtration. Next, for an arbitrary triangle $\psi \in \pi_2(\mathbf{x}_l,\Theta_{\delta\beta},\mathbf{x})$, we have $n_w(\psi) \geq n_w(\psi_{\mathbf{x},l})$ and $n_z(\psi) \geq n_z(\psi_{\mathbf{x},l})$. Thus, by the above computation, $\psi$ preserves or decreases $\mathcal{J}_\mathfrak{u}$.
\end{proof}
\section{The surgery exact triangle} \label{sec: exacttri}
In this section, we will construct the surgery exact triangle relating the Floer homologies of $Y$, $Y_\lambda(K)$, and $Y_{\lambda+m\mu}(K)$ for $m$ large. The maps will be defined on the chain level. We start with a brief discussion about the Maslov grading.
For a diagram $(\Sigma, \bm\alpha, \bm\beta, \bm\gamma, \bm\delta^{m,b}, w, z, z_n)$, it is proved in \cite[Proposition 5.15]{HL} that for fixed large $m$, and $b$ within a small range of $\frac{m}{2}$, the Maslov grading of every generator $\mathbf{a} \in \mathbb{T}_\alpha \cap \mathbb{T}_{\delta^{m,b}}$ admits constant lower and upper bounds. We will adopt a choice of $m$ and $b$ satisfying the above condition, and henceforth drop them from the notation. Note that the Maslov grading is independent of the choice of the $z_n$ basepoint; therefore the statements in \cite{HL} regarding the Maslov grading remain true in our setup as well, and we will restate them when needed.
\subsection{Construction of the exact sequence}
We start by pointing out a slight difference between our setup and the construction in \cite{HL}. In this section we will define our complexes $\operatorname{CF}^\circ (\Sigma, \bm\alpha, \bm\beta, z)$, $\operatorname{CF}^\circ (\Sigma, \bm\alpha, \bm\gamma, z)$ and $\operatorname{CF}^\circ (\Sigma, \bm\alpha, \bm\delta, z)$ with the basepoint $z$ instead of $w$ as in \cite{HL}. Accordingly, we define the triangle-, rectangle- and pentagon-counting maps with respect to the reference point $z$ instead of $w$. This, in turn, coincides with the definition in \cite{integer,rational}. The reason we need to make this modification, loosely speaking, is that we need to capture the information of the windings to the left of the $\beta_g$ curve in the winding region. Aside from this slight change, the proof largely follows the one in \cite[Section 6]{HL}, with the only differences being computational details.
The twisted complex $\underline{\operatorname{CF}}^\infty(\bm\alpha, \bm\beta, z; \Gamma_m)$ is generated over $\Gamma_m$ by all pairs $[\mathbf{x},i]$ as usual, where $\Gamma_m$ denotes the group ring $\mathbb{F}[\mathbb{Z}/m\mathbb{Z}]$. The differential is given by
\begin{equation}
\partial(T^s \cdot [\mathbf{x},i]) = \sum_{\mathbf{y} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta} \sum_{\substack{ \phi \in \pi_2(\mathbf{x},\mathbf{y}) \\ \mu(\phi)=1}} \# \widehat\mathcal{M}(\phi) \, T^{s+ n_w(\phi) - n_z(\phi)} [\mathbf{y}, i-n_z(\phi)].
\end{equation}
Compared with previous definitions, the exponent of $T$ in \cite{integer,rational} was formulated as the intersection number of the holomorphic disk with an extra basepoint on $\beta_g$, but this quantity is the same as $n_w(\phi) - n_z(\phi)$. Compared with the definition in \cite{HL}, we have merely switched the reference point from $w$ to $z$.
We realize $\Gamma_m = \mathbb{F}[\mathbb{Z}/m\mathbb{Z}]$ as the quotient ring $\mathbb{F}[T]/(T^m-1)$, and often view it as a subring of $\mathbb{F}[\mathbb{Q}/m\mathbb{Z}]$. Recall that $\mathbb{F}[\mathbb{Q}/m\mathbb{Z}]$ is the ring of rational-exponent polynomials with variable $T$, where the coefficient is in $\mathbb{F}$.
The complex $\underline{\operatorname{CF}}^\infty(\bm\alpha, \bm\beta, z; \Gamma_m)$ is isomorphic to $m$ copies of $\operatorname{CF}^\infty(\Sigma, \bm\alpha, \bm\beta, z)$.
Let
\begin{equation} \label{eq: theta}
\theta\colon\thinspace \underline{\operatorname{CF}}^+(\bm\alpha, \bm\beta, z; \Gamma_m) \to
\bigoplus_{\mathfrak{s} \in \Spin^c(Y)} \operatorname{CF}^+(\bm\alpha, \bm\beta, \mathfrak{s}, z) \otimes T^{ - k/d -A(\mathfrak{s})} \Gamma_m
\end{equation}
be the trivializing map defined by
\begin{equation} \label{eq: theta-def}
\theta(T^s [\mathbf{x},i]) = [\mathbf{x},i] \otimes T^{s - k/d - A(\mathbf{x}) }.
\end{equation}
One can check that this gives an isomorphism between the two chain complexes.\footnote{This trivializing map differs from the one in \cite{HL} by a constant factor. This change amounts to shifting the sequence $s_l$ (defined in Section \ref{ssec: statement}) by one to the left, such that the terms in the sequence $(\mathfrak{s}_l,s_l)$ in the mapping cone (see the proof in Section \ref{sec: proofcone}) have the same index. Otherwise the index is off by one, due to the choice of $z$ reference point instead of $w$.}
For each $\mathfrak{s} \in \Spin^c(Y)$, recall that $A(\mathfrak{s})$ forms a $\mathbb{Q}/\mathbb{Z}$ coset. So there are $m$ different powers of $T$ occurring in $\operatorname{CF}^+(\bm\alpha, \bm\beta, \mathfrak{s}, z) \otimes T^{ -k/d-A(\mathfrak{s})} \Gamma_m$, with exponents in $\mathbb{Q}/m\mathbb{Z}$. We regularly need to lift these exponents to $\mathbb{Q}$, by choosing the $m$ values of $r$ satisfying
\begin{equation} \label{eq: r-bounds}
r \equiv -\frac{k}{d} -A_{Y,K}(\mathfrak{s}) \pmod \mathbb{Z} \quad \text{and} \quad \frac{-k-md}{2d} \le r < \frac{-k+md}{2d}.
\end{equation}
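There are indeed exactly $m$ such values of $r$: the half-open interval in (\ref{eq: r-bounds}) has length
\[
\frac{-k+md}{2d} - \frac{-k-md}{2d} = \frac{2md}{2d} = m,
\]
so each congruence class modulo $\mathbb{Z}$ meets it in precisely $m$ points, matching the $m$ powers of $T$ occurring in $\Gamma_m$.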
Define chain maps
\begin{align}
\label{eq: f0}
f_0^+\colon\thinspace & \underline{\operatorname{CF}}^+(\bm\alpha, \bm\beta, z; \Gamma_m) \to \operatorname{CF}^+(\bm\alpha, \bm\gamma, z) \\
\label{eq: f1}
f_1^+ \colon\thinspace & \operatorname{CF}^+(\bm\alpha, \bm\gamma, z) \to \operatorname{CF}^+(\bm\alpha, \bm\delta, z) \\
\label{eq: f2}
f_2^+\colon\thinspace & \operatorname{CF}^+(\bm\alpha, \bm\delta, z) \to \underline{\operatorname{CF}}^+(\bm\alpha, \bm\beta, z; \Gamma_m)
\end{align}
by the following formulas:
\begin{align}
\label{eq: f0-def}
f_0^+(T^s \cdot [\mathbf{x},i]) &= \sum_{\mathbf{q} \in \mathbb{T}_\alpha \cap \mathbb{T}_\gamma} \sum_{\substack{\psi \in \pi_2(\mathbf{x}, \Theta_{\beta\gamma}, \mathbf{q}) \\ \mu(\psi)=0 \\ \mathclap{s + n_w(\psi) - n_z(\psi) \equiv 0 \pmod m}}} \#\mathcal{M}(\psi) \, [\mathbf{q}, i-n_z(\psi)] \\
\label{eq: f1-def}
f_1^+([\mathbf{q},i]) &= \sum_{\mathbf{a} \in \mathbb{T}_\alpha \cap \mathbb{T}_\delta} \sum_{\substack{\psi \in \pi_2(\mathbf{q}, \Theta_{\gamma\delta}, \mathbf{a}) \\ \mu(\psi)=0}} \#\mathcal{M}(\psi) \, [\mathbf{a}, i-n_z(\psi)] \\
\label{eq: f2-def}
f_2^+([\mathbf{a},i]) &= \sum_{\mathbf{x} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta} \sum_{\substack{\psi \in \pi_2(\mathbf{a}, \Theta_{\delta\beta}, \mathbf{x}) \\ \mu(\psi)=0}} \#\mathcal{M}(\psi) \, T^{n_w(\psi) - n_z(\psi)} \cdot [\mathbf{x}, i-n_z(\psi)].
\end{align}
Let $f_0^t$, $f_1^t$, $f_2^t$ denote the analogous maps on $\operatorname{CF}^t$.
Following \cite{HeddenMarkFractional}, the quadrilateral-counting maps
\begin{align}
\label{eq: h0}
h_0^+\colon\thinspace & \underline{\operatorname{CF}}^+(\bm\alpha, \bm\beta, z; \Gamma_m) \to \operatorname{CF}^+(\bm\alpha, \bm\delta, z) \\
\label{eq: h1}
h_1^+ \colon\thinspace & \operatorname{CF}^+(\bm\alpha, \bm\gamma, z) \to \underline{\operatorname{CF}}^+(\bm\alpha, \bm\beta, z; \Gamma_m) \\
\label{eq: h2}
h_2^+\colon\thinspace & \operatorname{CF}^+(\bm\alpha, \bm\delta, z) \to \operatorname{CF}^+(\bm\alpha, \bm\gamma, z)
\end{align}
are defined by the following formulas:
\begin{align}
\label{eq: h0-def}
h_0^+(T^s \cdot [\mathbf{x},i]) &= \sum_{\mathbf{a} \in \mathbb{T}_\alpha \cap \mathbb{T}_\delta} \sum_{\substack{\rho \in \pi_2(\mathbf{x}, \Theta_{\beta\gamma}, \Theta_{\gamma\delta}, \mathbf{a}) \\ \mu(\rho)=-1 \\ \mathclap{s + n_w(\rho) - n_z(\rho) \equiv 0 \pmod m}}} \#\mathcal{M}(\rho) \, [\mathbf{a}, i-n_z(\rho)] \\
\label{eq: h1-def}
h_1^+([\mathbf{q},i]) &= \sum_{\mathbf{x} \in \mathbb{T}_\alpha \cap \mathbb{T}_{\beta}} \sum_{\substack{\rho \in \pi_2(\mathbf{q}, \Theta_{\gamma\delta}, \Theta_{\delta\beta}, \mathbf{x}) \\ \mu(\rho)=-1}} \#\mathcal{M}(\rho) \, T^{n_w(\rho) - n_z(\rho)} \cdot [\mathbf{x}, i-n_z(\rho)] \\
\label{eq: h2-def}
h_2^+([\mathbf{a},i]) &= \sum_{\mathbf{q} \in \mathbb{T}_\alpha \cap \mathbb{T}_\gamma} \sum_{\substack{\rho \in \pi_2(\mathbf{a}, \Theta_{\delta\beta}, \Theta_{\beta\gamma}, \mathbf{q}) \\ \mu(\rho)=-1 \\ n_w(\rho) \equiv n_z(\rho) \pmod m}} \#\mathcal{M}(\rho) \, [\mathbf{q}, i-n_z(\rho)].
\end{align}
A standard argument shows that for each $j \in \mathbb{Z}/3$, the following hold. (The second statement relies on pentagon-counting maps, which we will discuss in Section \ref{ssec: pent-filt}.)
\begin{itemize}
\item $h_j^+$ is a null-homotopy of $f^+_{j+1} \circ f^+_j$;
\item $h_{j+1}^+ \circ f_j^+ + f_{j+2}^+ \circ h_j^+$ is a quasi-isomorphism.
\end{itemize}
Therefore, the exact triangle detection lemma \cite[Lemma 4.2]{branched} implies an exact sequence on homology. Using the same formulas, one can also define the maps on $\operatorname{CF}^t$, $\mathbf{CF}^-$, and $\mathbf{CF}^\infty$.
Each of the three complexes comes with an $\mathcal{I}$ filtration, and the maps $f_j$ and $h_j$ respect the $\mathcal{I}$ filtration by definition. We define a second filtration on each complex as follows. (Compare \cite[Definition 6.1]{HL}.)
\begin{definition} \label{def: J-filtrations}
\begin{itemize}
\item
The filtration $\mathcal{J}_{\alpha\gamma}$ on $\operatorname{CF}^+(\bm\alpha, \bm\gamma, z)$ is simply the Alexander filtration:
\begin{equation} \label{eq: ag-filt}
\mathcal{J}_{\alpha\gamma}([\mathbf{q},i]) = A_{w,z_n}(\mathbf{q}) + i.
\end{equation}
\item
The filtration $\mathcal{J}_{\alpha\delta}$ on $\operatorname{CF}^+(\bm\alpha, \bm\delta, z)$ is the Alexander filtration shifted by a constant on each spin$^c$ summand. For any spin$^c$ structure $\mathfrak{u}$, and for any generator $\mathbf{a}$ with $\mathfrak{s}_z(\mathbf{a}) = \mathfrak{u}$, define
\begin{equation} \label{eq: ad-filt}
\mathcal{J}_{\alpha\delta}([\mathbf{a},i]) = A_{w,z_n}(\mathbf{a}) + i + \frac{ nd^2 m (2 s_\mathfrak{u} - n) }{2k(k+md)},
\end{equation}
where $s_\mathfrak{u}$ is the number from Definition \ref{def: xu}.
\item
The filtration $\mathcal{J}_{\alpha\beta}$ on the twisted complex $\underline{\operatorname{CF}}^+(\bm\alpha, \bm\beta, z; \Gamma_m)$ is defined via the trivialization map $\theta$. For any $\mathbf{x} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta$ with $\mathfrak{s}_z(\mathbf{x}) = \mathfrak{s}$, and any $r$ that satisfies \eqref{eq: r-bounds}, define
\begin{equation} \label{eq: ab-twisted-filt}
\mathcal{J}_{\alpha\beta}([\mathbf{x},i] \otimes T^r) = i - \frac{2ndr +nk+n^2 d}{2k}.
\end{equation}
Namely, $\mathcal{J}_{\alpha\beta}$ is the trivial filtration shifted by a constant that depends linearly on the exponent of $T$, and it depends on $\mathbf{x}$ only through its associated spin$^c$ structure. We transport this back to $\underline{\operatorname{CF}}^+(\bm\alpha, \bm\beta, z; \Gamma_m)$ via the identification $\theta$.
\end{itemize}
\end{definition}
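As a sanity check on the linearity described above, note that increasing the exponent of $T$ by one (within the range \eqref{eq: r-bounds}) shifts $\mathcal{J}_{\alpha\beta}$ by a fixed amount, namely
\[
\mathcal{J}_{\alpha\beta}([\mathbf{x},i] \otimes T^{r+1}) - \mathcal{J}_{\alpha\beta}([\mathbf{x},i] \otimes T^{r}) = -\frac{2nd(r+1) - 2ndr}{2k} = -\frac{nd}{k}.
\]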
As discussed in \cite{HL}, the maps $f^+_j$ are not filtered, because the filtration shift of each map is a function of the spin$^c$ evaluation. To get around this problem, note that it suffices to consider the maps $f^t_j$ in order to prove the quasi-isomorphism. Let $X_*$ denote one of the $4$-manifolds $X_{\alpha\beta\gamma}$, $X_{\alpha\gamma\delta}$, and $X_{\alpha\delta\beta}$ defined in Section \ref{ssec: cob} (either with three or four subscripts). Recall that $\Spin^c_0(X_*)$ denotes the set of spin$^c$ structures that restrict to the canonical spin$^c$ structure represented by the generator $\Theta_{\beta\gamma}$ on $Y_{\beta\gamma}$, $\Theta_{\gamma\delta}$ on $Y_{\gamma\delta}$, and $\Theta_{\delta\beta}$ on $Y_{\delta\beta}$, whichever are applicable. Each map $f^t_j$ decomposes over spin$^c$ structures as
\[
f_j^t = \sum_{\mathfrak{v} \in \Spin^c_0(X_*)} f^t_{j,\mathfrak{v}},
\]
where $X_*$ is the $4$-manifold corresponding to the cobordism, and $f^t_{j,\mathfrak{v}}$ counts only the triangles with $\mathfrak{s}_z(\psi)=\mathfrak{v}$. The Maslov grading shift of each $f^t_{j,\mathfrak{v}}$ is given by a quadratic function of $c_1(\mathfrak{v})$, whereas for a fixed $t$, the grading of each generator in $\operatorname{CF}^t$ is bounded. As a result, only finitely many terms $f^t_{j,\mathfrak{v}}$ can be nonzero, allowing us to control the $\mathcal{J}$ filtration shifts within this range. We will prove the following proposition; compare \cite[Proposition 6.2]{HL}.
\begin{proposition} \label{prop: tri-filt}
Fix $t \in \mathbb{N}$. For all $m$ sufficiently large, the maps $f_0^t$, $f_1^t$, and $f_2^t$ are all filtered with respect to the filtrations $\mathcal{J}_{\alpha\beta}$, $\mathcal{J}_{\alpha\gamma}$, and $\mathcal{J}_{\alpha\delta}$. Moreover, for any triangle $\psi$ contributing to any of these maps, the filtration shift of the corresponding term equals $n_{z_n}(\psi)$.
\end{proposition}
A similar problem applies to the rectangle-counting maps. Following Hedden-Levine's argument, we will define truncated versions $\tilde h^t_j$ as sums of certain terms $h^t_{j,\mathfrak{v}}$ satisfying specific constraints. In other words, we simply throw away the ``bad" terms. The resulting maps $\tilde h^t_j$ are chain homotopic to the original maps $h^t_j$ but behave better with respect to the filtrations. We will prove the following proposition, parallel to \cite[Proposition 6.3]{HL}.
\begin{proposition} \label{prop: rect-filt}
Fix $t \in \mathbb{N}$. For all $m$ sufficiently large, the maps $\tilde h^t_0$, $\tilde h^t_1$, and $\tilde h^t_2$ have the following properties:
\begin{itemize}
\item $\tilde h^t_j$ is a filtered null-homotopy of $f^t_{j+1} \circ f^t_j$.
\item $\tilde h^t_{j+1} \circ f^t_j + f^t_{j+2} \circ \tilde h^t_j$ is a filtered quasi-isomorphism.
\end{itemize}
\end{proposition}
For the Maslov grading, observe that the underlying cobordisms remain the same, independent of the choice of $z_n$ basepoint. It then follows from Hedden-Levine's argument that maps $f_j^t$ and $h^t_j$ are homogeneous and have the appropriate Maslov grading shifts (for the detailed statement see \cite[Proposition 6.5]{HL}). Combined with Proposition \ref{prop: rect-filt} and Proposition \ref{prop: tri-filt}, we have
\begin{theorem} \label{thm: CFt-cone-f2}
Fix $t \in \mathbb{N}$. For all $m$ sufficiently large, the map
\[
\begin{pmatrix} f_1^t \\ h_1^t \end{pmatrix} \colon\thinspace \operatorname{CF}^t(\Sigma, \bm\alpha, \bm\gamma, z) \to \Cone(f_2^t)
\]
is a filtered homotopy equivalence that preserves the grading.
\end{theorem}
\subsection{Triangle maps} \label{ssec: tri-filt}
We will prove Proposition \ref{prop: tri-filt} in this section by looking at $f_0^t$, $f_1^t$, and $f_2^t$ individually. (Throughout, we will write $f_j^\circ$ when making statements that apply to all the flavors of Heegaard Floer homology.) The proof follows the outline in \cite{HL} closely.
\subsubsection{The map $f_0^t$} \label{sssec: f0-filt}
For any $\mathbf{x} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta$ with $\mathfrak{s}_z(\mathbf{x}) = \mathfrak{v}|_Y$, and any $r \in \mathbb{Q}/m\mathbb{Z}$, we have:
\begin{align*}
f_{0,\mathfrak{v}}^\circ(\theta^{-1}([\mathbf{x},i] \otimes T^r))
&= f_{0,\mathfrak{v}}^\circ (T^{r +A(\mathbf{x})} \cdot [\mathbf{x},i]) \\
&= \sum_{\mathbf{q} \in \mathbb{T}_\alpha \cap \mathbb{T}_\gamma} \ \sum_{\substack{\psi \in \pi_2(\mathbf{x}, \Theta_{\beta\gamma}, \mathbf{q}) \\ \mu(\psi)=0 \\ \mathfrak{s}_z(\psi) = \mathfrak{v} \\ \mathclap{r + k/d + A(\mathbf{x}) + n_w(\psi) - n_z(\psi) \equiv 0 \pmod m} }} \#\mathcal{M}(\psi) \, [\mathbf{q}, i-n_z(\psi)].
\end{align*}
According to \eqref{eq: abg-c1-x},
\[
\frac{k}{d} + A(\mathbf{x}) + n_w(\psi) - n_z(\psi) = \frac{\gen{c_1(\mathfrak{v}), [P_\gamma]}+k}{2d}.
\]
Thus, $f_{0,\mathfrak{v}}^\circ \circ \theta^{-1}$ is nonzero only on the summand
\[
\operatorname{CF}(\Sigma, \bm\alpha, \bm\beta, \mathfrak{s}, z) \otimes T^r \Gamma_m,
\]
where $\mathfrak{s} = \mathfrak{v} |_Y$ and
\[
r \equiv - \frac{1}{2d} (\gen{c_1(\mathfrak{v}), [P_\gamma]} + k) \pmod m.
\]
On this summand, if we neglect the power of $T$, the composition equals the untwisted cobordism map $F^\circ_{W_\lambda(K),\mathfrak{v}}$. We also lift $r$ to $\mathbb{Q}$ with the constraint
\[
\frac{-k-md}{2d} \le r < \frac{-k+md}{2d}.
\]
In \cite[Lemma 6.7]{HL} it is proved that for fixed $t \in \mathbb{N}$ and $\epsilon > 0$, and for all sufficiently large $m$, if $f_{0,\mathfrak{v}}^t \neq 0$ for a spin$^c$ structure $\mathfrak{v}$, then
\[
\abs{ \gen{c_1(\mathfrak{v}), [P_\gamma]} } < \epsilon md.
\]
In particular, if we take $\epsilon < 1$, this implies that if $f_{0,\mathfrak{v}}^\circ \circ \theta^{-1}$ is nonzero, then
\begin{equation} \label{eq: f0-c1-r}
\gen{c_1(\mathfrak{v}), [P_\gamma]} = - 2dr - k.
\end{equation}
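To spell out this step: the congruence defining $r$ gives
\[
\gen{c_1(\mathfrak{v}), [P_\gamma]} \equiv -2dr - k \pmod{2md},
\]
while the bounds on $r$ give $-md < -2dr - k \le md$. Since $\abs{\gen{c_1(\mathfrak{v}), [P_\gamma]}} < \epsilon md < md$, the two sides of \eqref{eq: f0-c1-r} differ by a multiple of $2md$ that is less than $2md$ in absolute value, hence they are equal.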
\begin{proposition}[Proposition 6.8 in \cite{HL}] \label{prop: f0-filt}
Fix $t \in \mathbb{N}$. For all $m$ sufficiently large, the map
\[
f_0^t \colon\thinspace \underline{\operatorname{CF}}^t(\bm\alpha, \bm\beta, z; \Gamma_m) \to \operatorname{CF}^t(\bm\alpha, \bm\gamma, z)
\]
is filtered with respect to the filtrations $\mathcal{J}_{\alpha\beta}$ and $\mathcal{J}_{\alpha\gamma}$.
\end{proposition}
\begin{proof}
Assume $m$ is large enough, and let $\mathfrak{v}$ be a spin$^c$ structure for which $f^t_{0,\mathfrak{v}} \ne 0$, so that \eqref{eq: f0-c1-r} holds.
For any $\mathbf{x} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta$ and $\mathbf{q} \in \mathbb{T}_\alpha\cap \mathbb{T}_\gamma$ with $\mathfrak{s}_z(\mathbf{x}) = \mathfrak{v}|_{Y_{\alpha\beta}}$ and $\mathfrak{s}_z(\mathbf{q}) = \mathfrak{v}|_{Y_{\alpha\gamma}}$, and any triangle $\psi \in \pi_2(\mathbf{x}, \Theta_{\beta\gamma}, \mathbf{q})$ contributing to $f^t_{0,\mathfrak{v}}$, we compute
\begin{align*}
\mathcal{J}_{\alpha\beta} ([\mathbf{x},i] &\otimes T^r) - \mathcal{J}_{\alpha\gamma}([\mathbf{q},i-n_z(\psi)]) \\
&= - \frac{2ndr + nk +n^2 d}{2k} - \frac{\tilde A(\mathbf{q})}{k} + n_z(\psi) \\
&= -\frac{2ndr + nk + n^2 d}{2k} - \frac{n\gen{c_1(\mathfrak{v}), [P_\gamma]} - 2kn_{z_{n}}(\psi) + 2k n_z(\psi) - n^2 d}{2k} + n_z(\psi) \\
&= n_{z_{n}}(\psi) \\
&\ge 0
\end{align*}
as required. The last equality uses \eqref{eq: f0-c1-r}, and \eqref{eq: abg-c1-q}.
\end{proof}
\subsubsection{The map $f_1^t$} \label{sssec: f1-filt}
The map $f_1^\circ$ decomposes as a sum
\begin{equation} \label{eq: f1-decomp}
f_1^\circ = \sum_{\mathfrak{v} \in \Spin^c_0(X_{\alpha\gamma\delta})} f_{1,\mathfrak{v}}^\circ.
\end{equation}
We need the following lemma from \cite{HL}:
\begin{lemma}[Lemma 6.9 in \cite{HL}]\label{le: f1range}
Fix $t\in \mathbb{N}$. For all $m$ sufficiently large, if $f_{1,\mathfrak{v}}^t\neq 0$, then
\begin{equation} \label{eq: f1-trunc}
\abs{ \gen{c_1(\mathfrak{v}), [R]} } < \frac{m(k+md)}{\nu}.
\end{equation}
Moreover, for any $\mathfrak{u} \in \Spin^c(Y_{\lambda+m\mu}(K))$, there is at most one nonzero term landing in $\operatorname{CF}^t(\Sigma, \bm\alpha, \bm\delta, \mathfrak{u}, z)$, which satisfies
\begin{equation} \label{eq: f1-trunc-su}
\gen{c_1(\mathfrak{v}), [R]} = \frac{2 d m s_\mathfrak{u}}{\nu} .
\end{equation}
\end{lemma}
The second statement follows from the range of $s_\mathfrak{u}$ and the fact that $\gen{c_1(\mathfrak{v}), [R]}$ has step length $2m(k+md)/\nu$. For each $\mathfrak{u} \in \Spin^c(Y_{\lambda+m\mu}(K))$, recall that $s_\mathfrak{u}$ is the number appearing in Definition \ref{def: xu}.
\begin{proposition}[Proposition 6.10 in \cite{HL}] \label{prop: f1-filt}
Fix $t \in \mathbb{N}$. For all $m$ sufficiently large, the map
\[
f_1^t \colon\thinspace \operatorname{CF}^t(\bm\alpha, \bm\gamma, z) \to \operatorname{CF}^t(\bm\alpha, \bm\delta, z)
\]
is filtered with respect to the filtrations $\mathcal{J}_{\alpha\gamma}$ and $\mathcal{J}_{\alpha\delta}$.
\end{proposition}
\begin{proof}
Assume $m$ is large enough, and suppose $f^t_{1,\mathfrak{v}}$ is the only nonzero term landing in $\operatorname{CF}^t(\Sigma, \bm\alpha, \bm\delta, \mathfrak{u}, z)$, where $\mathfrak{u} = \mathfrak{v}|_{Y_{\alpha\delta}}$. For any $\mathbf{q}$ with $\mathfrak{s}_z(\mathbf{q})=\mathfrak{v}|_{Y_{\alpha\gamma}}$ and $\mathbf{a}$ with $\mathfrak{s}_z(\mathbf{a})=\mathfrak{u}$, and any triangle $\psi \in \pi_2(\mathbf{q}, \Theta_{\gamma\delta}, \mathbf{a})$ contributing to $f^t_{1,\mathfrak{v}}$, compute
\begin{align*}
A(\mathbf{q}) &- A(\mathbf{a}) \\
&= \frac{\tilde A(\mathbf{q})}{k} - \frac{\tilde A(\mathbf{a})}{k+md} \\
&= \frac{k(\tilde A(\mathbf{q}) - \tilde A(\mathbf{a})) + md \tilde A(\mathbf{q})}{k(k+md)} \\
&= \frac{1}{k+md} \left( (k+md) n_{z_n}(\psi) - (k+md) n_z(\psi) -md \sum_{j=1}^n \Big( n_{z_j}(\psi) - n_{u_j}(\psi) \Big)- \frac{nmd}{2} \right) \\
& \qquad + \frac{md}{k(k+md)} \left( \frac{n\nu}{2m} \gen{c_1(\mathfrak{v}), [R]} + k \sum_{j=1}^n \Big( n_{z_j}(\psi) - n_{u_j}(\psi) \Big) + \frac{nk}{2} - \frac{n^2 d}{2} \right) \\
&= n_{z_n}(\psi) - n_z(\psi) - \frac{nmd}{2(k+md)} + \frac{md}{k(k+md)} \left( nd s_\mathfrak{u} + \frac{nk}{2} - \frac{n^2 d}{2} \right)\\
&= n_{z_n}(\psi) - n_z(\psi) + \frac{nd^2m ( 2s_\mathfrak{u} - n )}{2k(k+md)}.
\end{align*}
Thus,
\[
\mathcal{J}_{\alpha\gamma}([\mathbf{q},i]) - \mathcal{J}_{\alpha\delta}([\mathbf{a}, i-n_z(\psi)]) = n_{z_n}(\psi) \ge 0
\]
as required.
\end{proof}
\subsubsection{The map $f_2^t$} \label{sssec: f2-filt}
Let us first examine how the spin$^c$ decomposition of $f_2^\circ$ interacts with the trivializing map $\theta$. For any $\mathbf{a} \in \mathbb{T}_\alpha \cap \mathbb{T}_\delta$, we have:
\begin{align*}
\theta \circ f_2^\circ([\mathbf{a},i])
&= \sum_{\mathbf{x} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta} \sum_{\substack{\psi \in \pi_2(\mathbf{a}, \Theta_{\delta\beta}, \mathbf{x}) \\ \mu(\psi)=0}} \#\mathcal{M}(\psi) \, [\mathbf{x}, i-n_z(\psi)] \otimes T^{-k/d + n_w(\psi) - n_z(\psi) - A_{w,z}(\mathbf{x})}.
\end{align*}
By \eqref{eq: adb-c1-x}, the $T$ exponent is equal to
\[
-\frac{k}{d} + n_w(\psi) - n_z(\psi) - A_{w,z}(\mathbf{x}) = - \frac{\gen{c_1(\mathfrak{v}), [P_\delta]} +k-md}{2d}.
\]
Therefore, the term $\theta \circ f_{2,\mathfrak{v}}^\circ$ lands in the summand
\begin{equation} \label{eq: f2-t-target}
\operatorname{CF}^\circ(\bm\alpha, \bm\beta, \mathfrak{v}|_Y, z) \otimes T^r,
\end{equation}
where $r \in \mathbb{Q}/m\mathbb{Z}$ is given by
\[
r \equiv - \frac{\gen{c_1(\mathfrak{v}), [P_\delta]} +k+md}{2d} \pmod {m}.
\]
Neglecting the power of $T$, the composition map equals the untwisted cobordism map $F^\circ_{W'_m, \mathfrak{v}}$ in the first factor. Fixing $t\in\mathbb{N}$ and $\epsilon>0$, according to \cite[Lemma 6.11]{HL}, for all $m$ sufficiently large, if $f_{2,\mathfrak{v}}^t \ne 0$, then
\begin{equation} \label{eq: f2-trunc}
\abs{ \gen{c_1(\mathfrak{v}), [P_\delta]} } < (1+\epsilon)(k+md).
\end{equation}
In particular, assuming that $(1+\epsilon)(k+md) < 2md$, then the only spin$^c$ structures that may contribute to $f^t_2$ are those denoted by $\mathfrak{x}_\mathfrak{u}$ and $\mathfrak{y}_\mathfrak{u}$ in Definition \ref{def: xu}.
\begin{proposition}[Proposition 6.12 in \cite{HL}] \label{prop: f2-filt}
Fix $t \in \mathbb{N}$. For all $m$ sufficiently large, the map
\[
f_2^t \colon\thinspace \operatorname{CF}^t(\bm\alpha, \bm\delta, z) \to \underline{\operatorname{CF}}^t(\bm\alpha, \bm\beta, z; \Gamma_m)
\]
is filtered with respect to the filtrations $\mathcal{J}_{\alpha\delta}$ and $\mathcal{J}_{\alpha\beta}$.
\end{proposition}
\begin{proof}
Assume $m$ is sufficiently large and $(1+\epsilon)(k+md) < 2md$. Suppose $\mathfrak{v}$ is a spin$^c$ structure on $W'_m$ for which $f^t_{2, \mathfrak{v}} \ne 0$, and let $\mathfrak{u} = \mathfrak{v}|_{Y_{\lambda+m\mu}(K)}$ and $\mathfrak{s} = \mathfrak{v}|_Y$. By \eqref{eq: f2-trunc}, we have
\[
-2md < -(1+\epsilon)(k+md) < \gen{c_1(\mathfrak{v}), [P_\delta]} < (1+\epsilon)(k+md) < 2md.
\]
Let $r$ denote the rational number satisfying
\[
-k-md \le 2dr < -k+md \quad \text{and} \quad 2dr \equiv -(\gen{c_1(\mathfrak{v}), [P_\delta]}+k+md) \pmod {2md}.
\]
Note that $r$ is one of the exponents appearing in \eqref{eq: r-bounds}. By \eqref{eq: f2-t-target}, $f^t_{2,\mathfrak{v}}$ lands in $\operatorname{CF}^\circ(\bm\alpha, \bm\beta, \mathfrak{v}|_Y, z) \otimes T^r$. At the same time, by \eqref{eq: su-bound} and \eqref{eq: adb-c1-x}, the number $s_\mathfrak{u}$ satisfies:
\[
-k-md < 2ds_\mathfrak{u} \le k+md \quad \text{and} \quad 2ds_\mathfrak{u} \equiv \gen{c_1(\mathfrak{v}), [P_\delta]} + k+md \pmod {2(k+md)}.
\]
There are two possible cases to consider.
\begin{enumerate}[label=(\roman*)]
\item \label{it:c1f2}
If $-(1+\epsilon)(k+md) < \gen{c_1(\mathfrak{v}), [P_\delta]} \le 0$, then $\mathfrak{v}=\mathfrak{x}_\mathfrak{u}$, and the above inequalities and congruences imply that
\begin{equation} \label{eq: f2-filt-c1-neg}
\gen{c_1(\mathfrak{v}), [P_\delta]} = -2dr-k-md = 2ds_\mathfrak{u} - k-md.
\end{equation}
\item \label{it:c2f2}
If $0 < \gen{c_1(\mathfrak{v}), [P_\delta]} < (1+\epsilon)(k+md)$, then $\mathfrak{v}=\mathfrak{y}_\mathfrak{u}$, and we obtain
\begin{equation} \label{eq: f2-filt-c1-pos}
\gen{c_1(\mathfrak{v}), [P_\delta]} = -2dr-k+md = 2ds_\mathfrak{u} + k+md.
\end{equation}
\end{enumerate}
Suppose $\psi \in \pi_2(\mathbf{a}, \Theta_{\delta\beta}, \mathbf{x})$ is any triangle that contributes to $f^t_{2,\mathfrak{v}}$,
so that $\theta( f^t_{2,\mathfrak{v}}([\mathbf{a},i]))$ includes the term $[\mathbf{x},i-n_z(\psi)] \otimes T^r$. We compute:
\begin{align*}
\mathcal{J}_{\alpha\delta}([\mathbf{a},i]) &- \mathcal{J}_{\alpha\beta} ([\mathbf{x},i-n_z(\psi)] \otimes T^r) \\
&= A_{w,z_n}(\mathbf{a}) + \frac{2ndr + nk + n^2 d}{2k} + \frac{nd^2m (2 s_\mathfrak{u}-n)}{2k(k+md)} + n_z(\psi) \\
&= \frac{n\gen{c_1(\mathfrak{v}), [P_\delta]} - 2(k+md) n_z(\psi) + 2(k+md) n_{z_n}(\psi) -n^2 d}{2(k+md)} \\
& \qquad + \frac{2ndr + nk + n^2 d}{2k} + \frac{nd^2m (2 s_\mathfrak{u}-n)}{2k(k+md)} + n_z(\psi) \\
&= \frac{n\gen{c_1(\mathfrak{v}), [P_\delta]} -n^2 d }{2(k+md)} + \frac{2ndr + nk + n^2 d}{2k} + \frac{nd^2m (2 s_\mathfrak{u}-n)}{2k(k+md)} + n_{z_n}(\psi).
\end{align*}
Depending on whether we are in case \ref{it:c1f2} or \ref{it:c2f2}, we use either \eqref{eq: f2-filt-c1-neg} or \eqref{eq: f2-filt-c1-pos} to express the first two fractions in terms of $s_\mathfrak{u}$, which gives us:
\begin{align*}
\mathcal{J}_{\alpha\delta}([\mathbf{a},i]) &- \mathcal{J}_{\alpha\beta} ([\mathbf{x},i-n_z(\psi)] \otimes T^r) \\
&= \frac{2nds_\mathfrak{u} -n^2 d }{2(k+md)} - \frac{2nds_\mathfrak{u} - n^2 d}{2k} + \frac{nd^2m (2 s_\mathfrak{u}-n)}{2k(k+md)} + n_{z_n}(\psi) \\
&= n_{z_n}(\psi) \ge 0
\end{align*}
as required.
\end{proof}
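For the reader's convenience, we note that the cancellation in the final step of the preceding computation is elementary: writing $A = nd(2s_\mathfrak{u}-n)$, the first three terms combine as
\[
\frac{A}{2(k+md)} - \frac{A}{2k} + \frac{A \cdot md}{2k(k+md)}
= \frac{A\bigl(k - (k+md) + md\bigr)}{2k(k+md)} = 0.
\]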
\subsection{Rectangle maps} \label{ssec: rect-filt}
In this section we analyze the rectangle-counting maps. Following Hedden-Levine's argument, we will introduce the truncated maps $\tilde h^t_0$, $\tilde h^t_1$, and $\tilde h^t_2$, and prove the first part of Proposition \ref{prop: rect-filt} with these maps. The proof follows the recipe in \cite{HL}, replacing the numerical values of the Alexander filtrations and spin$^c$ evaluations as appropriate. We write out the whole process for completeness.
\subsubsection{The map $h_0^t$} \label{sssec: h0-filt}
Similar to the triangle-counting maps, $h^\circ_0$ splits over spin$^c$ structures $\mathfrak{v} \in \Spin^c_0(X_{\alpha\beta\gamma\delta})$. The composition map $h_{0,\mathfrak{v}}^\circ \circ \theta^{-1}$ is nonzero only on the summand $\operatorname{CF}^\circ(\bm\alpha, \bm\beta, \mathfrak{s}, z) \otimes T^r \Gamma_m$, where $\mathfrak{s} = \mathfrak{v} |_Y$ and
\[
r \equiv - \frac{1}{2d} (\gen{c_1(\mathfrak{v}), [P_\gamma]} + k) \pmod m,
\]
and on this summand $h_{0,\mathfrak{v}}^\circ \circ \theta^{-1}$ is equal to an untwisted count of rectangles. Recall that the general strategy for the rectangle-counting maps is to throw away the bad terms and to prove that the remaining terms still constitute a null-homotopy of the composition of triangle-counting maps. The following definition and lemma are due to Hedden and Levine:
\begin{definition} [Definition 6.14 in \cite{HL}] \label{def: h0-trunc}
For given $\epsilon>0$, let $\tilde h_0^t$ be the sum of all terms $h^t_{0,\mathfrak{v}}$ corresponding to spin$^c$ structures $\mathfrak{v}$ which either satisfy both
\begin{align}
\label{eq: h0-filt-Pg-bound} \abs{\gen{c_1(\mathfrak{v}), [P_\gamma]}} &< \epsilon md \\
\label{eq: h0-filt-R-bound} \abs{ \gen{c_1(\mathfrak{v}), [R]}} &< \frac{m(k+md)}{\nu}
\end{align}
or satisfy
\begin{equation}
\label{eq: h0-filt-Q-bound} \gen{c_1(\mathfrak{v}), [Q]} = \pm m.
\end{equation}
\end{definition}
\begin{lemma}[Lemma 6.15 in \cite{HL}] \label{lemma: h0-trunc-nulhtpy}
Fix $t \in \mathbb{N}$ and $\epsilon>0$. For all $m$ sufficiently large, $\tilde h^t_0$ is a null-homotopy of $f^t_1 \circ f^t_0$.
\end{lemma}
We require an extra constraint on the spin$^c$ evaluation. The following lemma is obtained from an analysis of the absolute grading, which applies to our case as well.
\begin{lemma} [Lemma 6.13 in \cite{HL}]\label{lemma: h0-trunc}
Fix $t \in \mathbb{N}$ and $\epsilon>0$. For all $m$ sufficiently large, if $\mathfrak{v}$ is a spin$^c$ structure with $\abs{\gen{c_1(\mathfrak{v}), [Q]}} = m$ and $h^t_{0,\mathfrak{v}} \ne 0$, then
\[
\abs{\gen{c_1(\mathfrak{v}), [P_\delta]}} < (1+\epsilon)(k+md).
\]
\end{lemma}
With the restricted spin$^c$ evaluation, the filtration shifts on the truncated map can be much more effectively controlled. The following is parallel to \cite[Proposition 6.16]{HL}.
\begin{proposition} \label{prop: h0-filt}
Fix $t \in \mathbb{N}$ and $0 <\epsilon <1$. For all $m$ sufficiently large, the map $\tilde h^t_{0}$ is filtered with respect to $\mathcal{J}_{\alpha\beta}$ and $\mathcal{J}_{\alpha\delta}$.
\end{proposition}
\begin{proof}
We will start by looking at the filtration shift of a general term in $h^t_{0,\mathfrak{v}},$ before specializing to the case of $\tilde h^t_{0}$.
Suppose $\mathbf{x} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta$ and $\mathbf{a} \in \mathbb{T}_\alpha \cap \mathbb{T}_\delta$ are generators such that $\mathfrak{s}_z(\mathbf{x}) = \mathfrak{v}|_Y = \mathfrak{s}$ and $\mathfrak{s}_z(\mathbf{a}) = \mathfrak{v}|_{Y_{\lambda+m\mu}(K)}= \mathfrak{u}$. As before, $h^t_{0,\mathfrak{v}} \circ \theta^{-1}$ is nonzero only on the summand $[\mathbf{x},i] \otimes T^r$, where $r\in \mathbb{Q}$ is given by
\[
\frac{-k-md}{2d} \le r < \frac{-k+md}{2d} \quad \text{and} \quad r \equiv -\frac{1}{2d}( \gen{c_1(\mathfrak{v}), [P_\gamma]} +k) \pmod m.
\]
So for some $p \in \mathbb{Z}$, we can write
\begin{equation} \label{eq: h0-filt-p-def}
\gen{c_1(\mathfrak{v}), [P_\gamma]} = -2dr-k + 2pmd.
\end{equation}
In other words, $p$ is the unique integer for which
\begin{equation} \label{eq: h0-filt-p-bounds}
(2p-1) md < \gen{c_1(\mathfrak{v}), [P_\gamma]} \le (2p+1)md.
\end{equation}
In particular, if $\mathfrak{v}$ satisfies \eqref{eq: h0-filt-Pg-bound}, then $p=0$.
On the other hand, consider the number $s_\mathfrak{u}$ associated with the spin$^c$ structure $\mathfrak{u}$. By its definition \eqref{eq: su-def}, combined with \eqref{eq: agd-c1-a} and \eqref{eq: adb-c1-a}, we have
\begin{align*}
2nd s_\mathfrak{u} &\equiv \gen{c_1(\mathfrak{v}),[P_{n,\delta}]} + n(k+md) \pmod{2n(k+md)} \\
&\equiv 2\tilde A(\mathbf{a}) + n^2 d - n(k+md) \pmod{2n(k+md)} \\
&\equiv \frac{\nu}{m} \gen{c_1(\mathfrak{v}),n[R]} \pmod{2n(k+md)}.
\end{align*}
Hence for some $q \in \mathbb{Z}$, we can write
\begin{equation} \label{eq: h0-filt-q-def}
\frac{\nu}{m} \gen{c_1(\mathfrak{v}), n[R]} = 2nd s_\mathfrak{u} + 2nq(k+md),
\end{equation}
so that
\begin{equation} \label{eq: h0-filt-q-bounds}
(2q-1)(k+md) < \frac{\nu}{m} \gen{c_1(\mathfrak{v}), [R]} \le (2q+1)(k+md).
\end{equation}
Again, if $\mathfrak{v}$ satisfies \eqref{eq: h0-filt-R-bound}, then $q=0$.
For a rectangle $\rho \in \pi_2(\mathbf{x},\Theta_{\beta\gamma}, \Theta_{\gamma\delta}, \mathbf{a})$ that contributes to $h^\circ_{0,\mathfrak{v}}$, compute:
\begin{align*}
\mathcal{J}_{\alpha\delta}([\mathbf{a}, &i-n_z(\rho)]) - \mathcal{J}_{\alpha\beta}([\mathbf{x},i] \otimes T^r) \\
&= \frac{\tilde A(\mathbf{a})}{k+md} -n_z(\rho) + \frac{nd^2m(2s_\mathfrak{u}-n)}{2k(k+md)} + \frac{2ndr + nk+n^2 d}{2k} \\
&= \frac{\frac{\nu}{m} \gen{c_1(\mathfrak{v}), n[R]} +n(k+md) -n^2 d}{2(k+md)} + \frac{md (\frac{\nu}{m} \gen{c_1(\mathfrak{v}), n[R]} - 2nq(k+md)) }{2k(k+md)} \\
& \qquad + \frac{-n\gen{c_1(\mathfrak{v}), [P_\gamma]} - nk + 2npmd}{2k} - \frac{nd^2m}{2k(k+md)} + \frac{nk+n^2 d}{2k}\\
&\qquad- n_{z_n}(\rho) + \sum_{j=1}^n\left( n_{z_j}(\rho) - n_{u_j}(\rho) \right)\\
&= \frac{n}{2k} \gen{c_1(\mathfrak{v}), \frac{\nu}{m}[R]- [P_\gamma]} + \frac{(p-q)nmd}{k} - \frac{n^2 d}{2k} + \frac{nk+n^2 d}{2k} \\
& \qquad - n_{z_n}(\rho) + \sum_{j=1}^n\left( n_{z_j}(\rho) - n_{u_j}(\rho) \right) \\
&= \frac{1}{2m} \gen{c_1(\mathfrak{v}), n[Q] } + \frac{(p-q)nmd}{k} + \frac{n}{2} - n_{z_n}(\rho) + \sum_{j=1}^n\left( n_{z_j}(\rho) - n_{u_j}(\rho) \right) \\
&= \frac{1}{2m} \left(-2m \sum_{j=1}^n\left( n_{z_j}(\rho) - n_{u_j}(\rho) \right) -nm \right) + \frac{(p-q)nmd}{k} + \frac{n}{2} \\
&\qquad - n_{z_n}(\rho) + \sum_{j=1}^n\left( n_{z_j}(\rho) - n_{u_j}(\rho) \right) \\
&= - n_{z_n}(\rho) + \frac{(p-q)nmd}{k}.
\end{align*}
Therefore, in order to show that $\tilde h^t_{0}$ is filtered, we only need to show that $p-q = 0$ whenever $\mathfrak{v}$ satisfies the conditions from Definition \ref{def: h0-trunc}.
If $\mathfrak{v}$ satisfies \eqref{eq: h0-filt-Pg-bound} and \eqref{eq: h0-filt-R-bound}, we immediately deduce that $p=q=0$. This leaves only one case, when $\mathfrak{v}$ satisfies \eqref{eq: h0-filt-Q-bound}, namely $\gen{c_1(\mathfrak{v}), [Q]} = em$ where $e=\pm 1$. By Lemma \ref{lemma: h0-trunc}, we may also assume that $\abs{\gen{c_1(\mathfrak{v}), [P_\delta]}} < (1+\epsilon)(k+md)$. Recall that $[P_\gamma] = [P_\delta] + d[Q]$ and $\frac{\nu}{m}[R] = [P_\delta] + \frac{k+md}{m}[Q]$. Therefore,
\begin{align*}
(e-1-\epsilon) md -(1+\epsilon)k < \gen{c_1(\mathfrak{v}), [P_\gamma]} &< (e+1+\epsilon)md + (1+\epsilon)k \\
(e-1-\epsilon)(k+md) < \frac{\nu}{m} \gen{c_1(\mathfrak{v}), [R]} &< (e+1+\epsilon)(k+md).
\end{align*}
Comparing with \eqref{eq: h0-filt-p-def} and \eqref{eq: h0-filt-q-def} respectively, for sufficiently large $m$, this implies that $p,q \in \{0,e\}$. Also, by \eqref{eq: h0-filt-p-bounds} and \eqref{eq: h0-filt-q-bounds}, we have
\[
2(q-p-1)md + (2q-1)k < ke < 2(q-p+1)md + (2q+1)k.
\]
It then follows that $p=q$, as required.
\end{proof}
\subsubsection{The map $h_1^t$} \label{sssec: h1-filt}
We recall the definition of the truncated map $\tilde h^t_{1}$ from Hedden-Levine's argument.
\begin{definition}[Definition 6.17 in \cite{HL}] \label{def: h1-trunc}
Let $\tilde h^t_1$ denote the sum of all terms $h^t_{1,\mathfrak{v}}$ for which $\mathfrak{v}$ satisfies
\begin{equation} \label{eq: h1-trunc-Q-bound}
\gen{c_1(\mathfrak{v}), [Q]} = \pm m.
\end{equation}
\end{definition}
Similar to the previous case, the truncated map suffices to fulfill the required condition in Proposition \ref{prop: rect-filt}, as indicated by the following lemma.
\begin{lemma}[Lemma 6.19 in \cite{HL}] \label{lemma: h1-trunc-nulhtpy}
Fix $t \in \mathbb{N}$. For all $m$ sufficiently large, $\tilde h_1^t$ is a null-homotopy of $f_2^t \circ f_1^t$.
\end{lemma}
An extra spin$^c$ constraint is needed in the proof, given by the Maslov grading bound.
\begin{lemma} \label{lemma: h1-trunc}
Fix $t \in \mathbb{N}$. For all $m$ sufficiently large, if $\mathfrak{v}$ is any spin$^c$ structure with $\abs{\gen{c_1(\mathfrak{v}), [Q]}} = m$, and $h^t_{1,\mathfrak{v}} \ne 0$, then $\abs{\gen{c_1(\mathfrak{v}), [P_\gamma]}} < md$.
\end{lemma}
We are ready to prove the following proposition, parallel to \cite[Proposition 6.22]{HL}.
\begin{proposition} \label{prop: h1-filt}
Fix $t \in \mathbb{N}$. For all $m$ sufficiently large, the map $\tilde h_1^t$ is filtered with respect to the filtrations $\mathcal{J}_{\alpha\gamma}$ and $\mathcal{J}_{\alpha\beta}$.
\end{proposition}
\begin{proof}
Suppose $\mathfrak{v}$ is a spin$^c$ structure for which $h^t_{1,\mathfrak{v}} \ne 0$, and assume $\gen{c_1(\mathfrak{v}), [Q]} = em$, where $e=\pm 1$. Let $\mathbf{q} \in \mathbb{T}_\alpha \cap \mathbb{T}_\gamma$ and $\mathbf{x} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta$ be generators such that $\mathfrak{s}_z(\mathbf{q})=\mathfrak{v}|_{Y_\lambda}$ and $\mathfrak{s}_z(\mathbf{x})=\mathfrak{v}|_Y$.
As before, note that $\theta \circ h^\circ_{1,\mathfrak{v}}$ lands in the summand $[\mathbf{x},i] \otimes T^r \Gamma_m$, where $r\in \mathbb{Q}$ is given by
\[
\frac{-k-md}{2d} \le r < \frac{-k+md}{2d} \quad \text{and} \quad r \equiv -\frac{1}{2d}( \gen{c_1(\mathfrak{v}), [P_\delta]} +k+md) \pmod m,
\]
so for some $p\in \mathbb{Z}$, we can write
\[
\gen{c_1(\mathfrak{v}), [P_\delta]} = -2dr-k + (2p-1) md,
\]
which implies that
\begin{equation} \label{eq: h1-filt-p-bounds}
(2p-2)md < \gen{c_1(\mathfrak{v}), [P_\delta]} \le (2p)md.
\end{equation}
By Lemma \ref{lemma: h1-trunc}, we may assume that $\abs{\gen{c_1(\mathfrak{v}), [P_\gamma]}} < md$. Recall that $[P_\gamma] = [P_\delta] + d[Q]$, and therefore
\[
\gen{c_1(\mathfrak{v}), [P_\delta]} = \gen{c_1(\mathfrak{v}), [P_\gamma]} - emd.
\]
If $e = 1$, this implies $p=0$; if $e = -1$, this implies $p=1$. Either way, it holds that
$2p-1=-e$.
By \eqref{eq: agdb-c1-Q}, we have
\begin{align*}
(2p-2)nm & = -\gen{c_1(\mathfrak{v}), n[Q]} -nm\\
&=-2m \left( -n_z(\rho) + n_{z_n}(\rho) - \sum_{j=1}^n\left( n_{z_j}(\rho) - n_{u_j}(\rho) \right) \right) +nm -nm\\
&=2m n_z(\rho) - 2m n_{z_n}(\rho) + 2m \sum_{j=1}^n\left( n_{z_j}(\rho) - n_{u_j}(\rho) \right).
\end{align*}
Now, we compute:
\begin{align*}
&\mathcal{J}_{\alpha\gamma}([\mathbf{q},i]) - \mathcal{J}_{\alpha\beta}([\mathbf{x},i-n_z(\rho)] \otimes T^r) \\
&= A_{w,z_n}(\mathbf{q}) + \frac{2nd r +nk+n^2 d}{2k} +n_z(\rho) \\
&= \frac{1}{2k} \left( 2\tilde A(\mathbf{q}) + 2ndr + nk + n^2 d \right) +n_z(\rho) \\
&= \frac{1}{2k} \left( 2\tilde A(\mathbf{q}) - n\gen{c_1(\mathfrak{v}),[P_\delta]} + (2p-1)nmd + n^2 d \right) +n_z(\rho) \\
&= \frac{1}{2k} \left( 2\tilde A(\mathbf{q}) -2\tilde A(\mathbf{x}) + 2d n_w(\rho) - 2dn_z(\rho) + (2p-1)nmd -n(k+md) + n^2 d \right) +n_z(\rho) \\
&= \frac{1}{2k} \Big( 2\mathcal{N}(\rho) + 2d n_w(\rho) - 2dn_z(\rho) + (2p-2)nmd \Big) +n_z(\rho) \\
&= \frac{1}{k} \Big( -nd n_w(\rho) - (k+md -nd) n_z(\rho) + (k+md)n_{z_n}(\rho) -md\sum_{j=1}^n\left( n_{z_j}(\rho) - n_{u_j}(\rho) \right)\\
&\qquad + d n_w(\rho) - dn_z(\rho) + md n_z(\rho) -md n_{z_n}(\rho) +md \sum_{j=1}^n\left( n_{z_j}(\rho) - n_{u_j}(\rho) \right) \Big) +n_z(\rho) \\
&= n_{z_n}(\rho) \ge 0.
\end{align*}
\end{proof}
\subsubsection{The map $h_2^t$} \label{sssec: h2-filt}
Let us start by recalling the definition of the truncated map $\tilde h^t_2$ from \cite{HL}, which is similar to the previous case.
For any $\mathfrak{v} \in \Spin^c_0(X_{\alpha\delta\beta\gamma})$, let $\tilde h^t_2$ denote the sum of all terms $h^t_{2,\mathfrak{v}}$ such that
\begin{equation} \label{eq: h2-trunc-Q-bound}
\gen{c_1(\mathfrak{v}), [Q]} = \pm m.
\end{equation}
As before, the next two lemmas respectively show that $\tilde h^t_2$ satisfies the first condition in Proposition \ref{prop: rect-filt} and give an extra spin$^c$ evaluation bound that we will use in the proof.
\begin{lemma}[Lemma 6.24 in \cite{HL}] \label{lemma: h2-trunc-nulhtpy}
Fix $t \in \mathbb{N}$. For all $m$ sufficiently large, $\tilde h_2^t$ is a null-homotopy of $f_0^t \circ f_2^t$. \qed
\end{lemma}
\begin{lemma}[Lemma 6.25 in \cite{HL}] \label{lemma: h2-trunc}
Fix $t \in \mathbb{N}$. For all $m$ sufficiently large, if $\mathfrak{v}$ is any spin$^c$ structure with $\abs{\gen{c_1(\mathfrak{v}), [Q]}} = m$ and $h^t_{2,\mathfrak{v}} \ne 0$, then
\begin{equation} \label{eq: h2-trunc-R-bound}
\abs{\gen{c_1(\mathfrak{v}), [R]}} < \frac{m(k+md)}{\nu}.
\end{equation}
\end{lemma}
The following proposition is parallel to \cite[Proposition 6.27]{HL}.
\begin{proposition} \label{prop: h2-filt}
Fix $t \in \mathbb{N}$. For all $m$ sufficiently large, the map $\tilde h_2^t$ is filtered with respect to the filtrations $\mathcal{J}_{\alpha\delta}$ and $\mathcal{J}_{\alpha\gamma}$.
\end{proposition}
\begin{proof}
Let $\mathbf{a} \in \mathbb{T}_\alpha \cap \mathbb{T}_\delta$ and $\mathbf{q} \in \mathbb{T}_\alpha \cap \mathbb{T}_\gamma$ be generators such that $\mathfrak{s}_z(\mathbf{q})=\mathfrak{v}|_{Y_\lambda}$ and $\mathfrak{s}_z(\mathbf{a})=\mathfrak{v}|_{Y_{\lambda+m\mu}}=\mathfrak{u}$, and suppose $\rho \in \pi_2(\mathbf{a}, \Theta_{\delta\beta}, \Theta_{\beta\gamma}, \mathbf{q})$ is a rectangle that contributes to $h^t_{2,\mathfrak{v}}([\mathbf{a},i])$. Associated with $\mathfrak{u}$ is a
number $s_\mathfrak{u}$, which by its definition \eqref{eq: su-def} satisfies
\[
-(k+md) < 2d s_\mathfrak{u} \le k+md \quad \text{and} \quad 2nd s_\mathfrak{u} \equiv \gen{c_1(\mathfrak{v}),[P_{n,\delta}]} + n (k+md) \pmod {2n(k+md)}.
\]
Therefore, for some $q\in \mathbb{Z}$ we have
\[
\gen{c_1(\mathfrak{s}_z(\rho)), [P_{n,\delta}]} = 2nd s_\mathfrak{u} + (2q-1)n(k+md),
\]
so that
\[
(2q-2)(k+md) < \gen{c_1(\mathfrak{s}_z(\rho)), [P_\delta]} \le 2q(k+md).
\]
Next we will assume that $\gen{c_1(\mathfrak{v}), [Q]} = em$, where $e=\pm 1$, and $h^t_{2,\mathfrak{v}} \ne 0$, so that $\mathfrak{v}$ satisfies \eqref{eq: h2-trunc-R-bound}. We have
\begin{gather*}
\gen{c_1(\mathfrak{v}), [P_\delta]} = \frac{\nu}{m} \gen{c_1(\mathfrak{v}), [R]} - \frac{k+md}{m} \gen{c_1(\mathfrak{v}), [Q]}.
\end{gather*}
Using Lemma \ref{lemma: h2-trunc}, compare the ranges of the two sides of this equation: if $e=1$, then $q=0$; if $e=-1$, then $q=1$. In either case, it holds that $2q-1 = -e$.
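In more detail, \eqref{eq: h2-trunc-R-bound} gives $\abs{\frac{\nu}{m}\gen{c_1(\mathfrak{v}), [R]}} < k+md$, so the displayed identity yields
\[
-(1+e)(k+md) < \gen{c_1(\mathfrak{v}), [P_\delta]} < (1-e)(k+md).
\]
For $e=1$, this window is $(-2(k+md), 0)$, which meets the interval $\left( (2q-2)(k+md), (2q)(k+md) \right]$ only when $q=0$; for $e=-1$, the window is $(0, 2(k+md))$, forcing $q=1$.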
Thus
\begin{align*}
-(2q-1)m&= em=\gen{c_1(\mathfrak{s}_z(\rho)), [Q]}\\
&=2n_w(\rho) - 2n_z(\rho) + m.
\end{align*}
We now compute:
\begin{align*}
&\mathcal{J}_{\alpha\delta}([\mathbf{a},i]) - \mathcal{J}_{\alpha\gamma}([\mathbf{q},i-n_w(\rho)] ) \\
&= \frac{\tilde A(\mathbf{a})}{k+md} + \frac{nd^2m(2s_\mathfrak{u}-n)}{2k(k+md)} - \frac{\tilde A(\mathbf{q})}{k} + n_z(\rho) \\
&= \frac{2k \tilde A(\mathbf{a}) - 2(k+md)\tilde A(\mathbf{q}) + nd^2m(2s_\mathfrak{u}-n)}{2k(k+md)} + n_z(\rho) \\
&= \frac{2(k+md)( \tilde A(\mathbf{a}) - \tilde A(\mathbf{q})) - 2md \tilde A(\mathbf{a}) + nd^2m(2s_\mathfrak{u}-n)}{2k(k+md)} + n_z(\rho) \\
&= \frac{1}{k} \left( \tilde A(\mathbf{a}) - \tilde A(\mathbf{q}) \right) + \frac{nd^2m(2s_\mathfrak{u}-n)}{2k(k+md)} + n_z(\rho) - \frac{md\tilde A(\mathbf{a})}{2k(k+md)}\\
&= \frac{1}{k} \left( \mathcal{N}(\rho) - \frac{nmd}{2} \right) + \frac{md\gen{c_1(\mathfrak{s}_z(\rho)), [P_{n,\delta}]} -(2q-1)m\cdot nd(k+md) -n^2 d^2 m}{2k(k+md)} + n_z(\rho) \\
& - \frac{ md\left( \gen{c_1(\mathfrak{s}_z(\rho)), [P_{n,\delta}]} -2(k+md)\left( n_z(\rho) - n_{z_n}(\rho) + \sum_{j=1}^n\left( n_{z_j}(\rho) - n_{u_j}(\rho) \right) \right) - n^2 d \right) } {2k(k+md)} \\
&= \frac{1}{k} \left( \mathcal{N}(\rho) - \frac{nmd}{2} \right) + \frac{nd(k+md)(2n_w(\rho) - 2n_z(\rho) + m )}{2k(k+md)} + n_z(\rho) \\
&+ \frac{ 2md(k+md)\left( n_z(\rho) - n_{z_n}(\rho) + \sum_{j=1}^n\left( n_{z_j}(\rho) - n_{u_j}(\rho) \right) \right)}{2k(k+md)}\\
&= \frac{1}{k} \left( -nd n_w(\rho) - (k+md -nd) n_z(\rho) + (k+md) n_{z_n}(\rho) -md\sum_{j=1}^n\left( n_{z_j}(\rho) - n_{u_j}(\rho) \right)\right) \\
& + \frac{ nd\left(n_w(\rho) - n_z(\rho)\right) + md \left( n_z(\rho) - n_{z_n}(\rho) + \sum_{j=1}^n\left( n_{z_j}(\rho) - n_{u_j}(\rho) \right) \right) } {k} + n_z(\rho) \\
&= \frac{1}{k}\Big( -nd n_w(\rho) - (k -nd) n_z(\rho) + k n_{z_n}(\rho) + nd\left(n_w(\rho) - n_z(\rho)\right) \Big) + n_z(\rho)\\
&= n_{z_n}(\rho) \\
&\ge 0.
\end{align*}
\end{proof}
\subsection{Pentagon maps} \label{ssec: pent-filt}
In this section we aim to prove the second part of Proposition \ref{prop: rect-filt}: the maps $\tilde h^t_{j+1} \circ f^t_j + f^t_{j+2} \circ \tilde h^t_j$ (where $j \in \mathbb{Z}/3$) are filtered quasi-isomorphisms. We focus only on the case $j=0$, which is the most technically difficult because of the twisted coefficients; the arguments for $j=1$ and $j=2$ are similar.
Again, our argument here is completely parallel to the one presented in \cite[Section 6.4]{HL}; we leave out some of the technical details and focus on the adjustments needed in our setting. For a more detailed account, see \cite[Section 6.4]{HL}.
To begin, let $\tilde{\bm\beta} = (\tilde \beta_1, \dots, \tilde\beta_g)$ denote a small Hamiltonian isotopy of $\bm\beta$, such that each $\tilde \beta_i$ meets $\beta_i$ in a pair of points. We further choose an extra reference point $v$ lying in the same region of $\Sigma \smallsetminus (\bm\alpha \cup \bm\beta)$ as $z$ and in the same region of $\Sigma \smallsetminus (\bm\alpha \cup \tilde{\bm\beta})$ as $w$. Finally, let $\Theta_{\beta\tilde\beta} \in \mathbb{T}_\beta \cap \mathbb{T}_{\tilde\beta}$ denote the canonical top-dimensional generator.\footnote{This is the same setting as depicted in \cite[Figure 6]{HL}, although in their text description the order of $w$ and $z$ is switched by mistake.}
Define
\[
\tilde \Psi^t_0 = \tilde h^t_{1} \circ f^t_0 + f^t_{2} \circ \tilde h^t_0.
\]
The fact that $\tilde \Psi^t_0$ is filtered follows immediately from the previous sections, since it is a sum of filtered maps. In order to show it is a quasi-isomorphism, we relate it to the map
\begin{equation} \label{eq: Phi0}
\Phi_0^t \colon\thinspace \underline\operatorname{CF}^t(\Sigma, \bm\alpha, \bm\beta, z; \Gamma_m) \to \underline\operatorname{CF}^t(\Sigma, \bm\alpha, \tilde{\bm\beta}, z; \Gamma_m)
\end{equation}
given by
\begin{equation} \label{eq: Phi0-def}
\Phi_0^t(T^s \cdot [\mathbf{x}, i]) =
\sum_{\tilde\mathbf{y} \in \mathbb{T}_\alpha \cap \mathbb{T}_{\tilde\beta}} \sum_{\substack{\psi \in \pi_2(\mathbf{x}, \Theta_{\beta\tilde\beta}, \tilde\mathbf{y}) \\ \mu(\psi)=0 }} \#\mathcal{M}(\psi) \, T^{s + n_w(\psi)-n_z(\psi)} \cdot [\tilde\mathbf{y}, i-n_z(\psi) ].
\end{equation}
By the work of \cite{HeddenMarkFractional}, $\Phi_0^t$ is a chain isomorphism. Moreover, for any $\psi \in \pi_2(\mathbf{x}, \Theta_{\beta\tilde\beta}, \tilde\mathbf{y})$, we have $A(\mathbf{x}) - A(\tilde\mathbf{y}) = n_z(\psi) - n_w(\psi)$. It follows that $\Phi_0^t$ is a filtered isomorphism with respect to $\mathcal{J}_{\alpha\beta}$ and $\mathcal{J}_{\alpha\tilde\beta}$.
The aim is then to show $\tilde \Psi^t_0$ is filtered homotopy equivalent to $\Phi_0^t$. The argument hinges on the following pentagon-counting map, defined in \cite{HL}:
\begin{equation} \label{eq: g0-Phi0}
g_0^\circ \colon\thinspace \underline\operatorname{CF}^\circ(\Sigma, \bm\alpha, \bm\beta, z; \Gamma_m) \to \underline\operatorname{CF}^\circ(\Sigma, \bm\alpha, \tilde{\bm\beta}, z; \Gamma_m)
\end{equation}
given by
\begin{equation}
\label{eq: g0-def}
g_0^\circ(T^s \cdot [\mathbf{x},i]) = \sum_{\mathbf{y} \in \mathbb{T}_\alpha \cap \mathbb{T}_{{\tilde \beta}}} \sum_{\substack{\sigma \in \pi_2(\mathbf{x}, \Theta_{\beta\gamma}, \Theta_{\gamma\delta}, \Theta_{\delta{\tilde \beta}}, \mathbf{y}) \\ \mu(\sigma)=-2 \\ \mathclap{s + n_w(\sigma) - n_v(\sigma) \equiv 0 \pmod m}}} \#\mathcal{M}(\sigma) \, T^{n_v(\sigma) - n_z(\sigma)} [\mathbf{y}, i-n_z(\sigma)].
\end{equation}
Hedden and Levine defined a truncated version of $g_0^t$, again by throwing away bad terms, and they proceed to show the truncated map $\tilde g^t_0$ establishes the required filtered homotopy equivalence between $\tilde \Psi^t_0$ and $\Phi_0^t$. This argument works for our case as well, except that the grading shift on $\tilde g^t_0$ needs to be recalculated.
Before we discuss the filtration shifts, let us state analogues of the results of Section \ref{sec: alex} for pentagons. To begin, let $V$ be the $(\beta,\tilde\beta)$ periodic domain with $\partial V = \beta_g - \tilde\beta_g$; in other words, $V$ is the thin domain between $\beta_g$ and $\tilde\beta_g$. We have $n_v(V) = 1$, and $n_w(V) = n_z(V) = n_{z_j}(V) = n_{u_j}(V) = 0$ for $j=1,\cdots, n$. Let $\tilde P_\gamma$, $\tilde P_\delta$, and $\tilde Q$ be the analogues of $P_\gamma$, $P_\delta$, and $Q$ with the $\beta$ circles replaced by the ${\tilde \beta}$ circles: up to thin domains, we have
\[
[\tilde P_\gamma] = [P_\gamma] + k[V], \quad [\tilde P_\delta] = [P_\delta]+(k+md)[V], \quad \text{and} \quad
[\tilde Q] = [Q]-m[V].
\]
The Heegaard diagram determines a $4$-manifold $X_{\alpha\beta\gamma\delta\tilde\beta}$, which admits various decompositions into the pieces described in Section \ref{ssec: cob}; for instance, we have
\[
X_{\alpha\beta\gamma\delta\tilde\beta} = X_{\alpha\beta\gamma\delta} \cup_{Y_{\alpha\delta}} X_{\alpha\delta\tilde\beta} = X_{\alpha\beta\gamma} \cup_{Y_{\alpha\gamma}} X_{\alpha\gamma\delta\tilde\beta} .
\]
In the intersection pairing form on $H_2(X_{\alpha\beta\gamma\delta\tilde\beta})$, we have
\[
[V] \cdot [V] = [V] \cdot [Q] = [V] \cdot [\tilde Q] = 0,
\]
and all other intersection numbers can be deduced accordingly. Compare the following with \cite[Lemma 6.31]{HL}.
\begin{lemma} \label{le: pentagonspinc}
For any $\mathbf{x} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta$, $\tilde\mathbf{y} \in \mathbb{T}_\alpha \cap \mathbb{T}_{\tilde\beta}$, and $\sigma \in \pi_2(\mathbf{x}, \Theta_{\beta\gamma}, \Theta_{\gamma\delta}, \Theta_{\delta\tilde\beta}, \tilde \mathbf{y})$, we have:
\begin{align}
\label{eq: abgdb-alex}
n\tilde A(\mathbf{x}) - n\tilde A(\tilde \mathbf{y}) &= -nd n_w(\sigma) - (k+md - nd) n_z(\sigma) + (k+md) n_{z_n}(\sigma) \\
\nonumber & \qquad \qquad - md \sum_{j=1}^n\left( n_{z_j}(\sigma) - n_{u_j}(\sigma) \right) \\
\label{eq: abgdb-c1-Pg}
\qquad \qquad \gen{c_1(\mathfrak{s}_z(\sigma)), [P_\gamma]} &= 2 \tilde A(\mathbf{x}) + 2dn_w(\sigma) - 2d n_v(\sigma) + k \\
\label{eq: abgdb-c1-Pdt}
\qquad \qquad \gen{c_1(\mathfrak{s}_z(\sigma)), [\tilde P_\delta]} &= 2 \tilde A(\tilde\mathbf{y}) + 2dn_z(\sigma) - 2d n_v(\sigma) + (k+md) \\
\label{eq: abgdb-c1-Q}
\qquad \qquad \gen{c_1(\mathfrak{s}_z(\sigma)), n[Q]} &= -2m \sum_{j=1}^n\left( n_{z_j}(\sigma) - n_{u_j}(\sigma) \right) -nm \\
\label{eq: abgdb-c1-Qt}
\qquad \qquad \gen{c_1(\mathfrak{s}_z(\sigma)), n[\tilde Q]} &= 2m\Big( -n_z(\sigma) + n_{z_n}(\sigma) - \sum_{j=1}^n\left( n_{z_j}(\sigma) - n_{u_j}(\sigma) \right) \Big) -nm \\
\label{eq: abgdb-c1-V}
\qquad \qquad \gen{c_1(\mathfrak{s}_z(\sigma)), n[V]} &= 2n_z(\sigma) - 2n_{z_n}(\sigma)
\end{align}
\end{lemma}
\begin{proof}
This proof resembles the one for Proposition \ref{prop: alex-rectangle}, building upon the calculations made in Propositions \ref{prop: alex-triangle} and \ref{prop: alex-rectangle}.
For any $\mathbf{x} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta$, $\tilde\mathbf{y} \in \mathbb{T}_\alpha \cap \mathbb{T}_{\tilde\beta}$, and $\sigma \in \pi_2(\mathbf{x}, \Theta_{\beta\gamma}, \Theta_{\gamma\delta}, \Theta_{\delta\tilde\beta}, \tilde \mathbf{y})$, choose $\mathbf{a} \in \mathbb{T}_\alpha \cap \mathbb{T}_\delta$, $\rho \in \pi_2(\mathbf{x}, \Theta_{\beta\gamma}, \Theta_{\gamma\delta},\mathbf{a})$ and $\psi \in \pi_2(\mathbf{a},\Theta_{\delta\tilde\beta}, \tilde \mathbf{y})$ such that $\mathfrak{s}_z(\rho) = \mathfrak{s}_z(\sigma)|_{X_{\alpha\beta\gamma\delta}}$ and $\mathfrak{s}_z(\psi) = \mathfrak{s}_z(\sigma)|_{X_{\alpha\delta{\tilde\beta}}}$. Up to adding copies of $\Sigma,$ which does not change the spin$^c$ structure condition, we may assume that $n_z(\sigma) = n_z(\rho) + n_z(\psi)$. Hence, $S = \mathcal{D}(\sigma) - \mathcal{D}(\rho * \psi)$ is a quintuply periodic domain with $n_z(S)=0$. Since the function $\mathcal{N}$ vanishes on all periodic domains, we have:
\begin{align*}
n\tilde A(\mathbf{x}) - n\tilde A(\tilde \mathbf{y}) &= (n\tilde A(\mathbf{x}) - \tilde A(\mathbf{a})) + (\tilde A(\mathbf{a}) - n\tilde A(\tilde \mathbf{y})) \\
&= \mathcal{N}(\rho) - \frac{n(k+md)-n^2 d}{2} + \mathcal{N}(\psi) + \frac{n(k+md)-n^2 d}{2}\\
&= \mathcal{N}(\sigma)
\end{align*}
which proves \eqref{eq: abgdb-alex}.
Next, up to thin domains, we have $S = x P_\gamma + y R$; as before, we can solve
\[
x = \frac{n_w(S)}{k}.
\]
Using \eqref{eq: abgd-PD-self-int}, \eqref{eq: abgd-int-form}, and \eqref{eq: abgd-c1-Pg}, we compute:
\begin{align*}
\gen{c_1(\mathfrak{s}_z(\sigma)), [P_\gamma]}
&= \gen{c_1(\mathfrak{s}_z(\rho)), [P_\gamma]} + 2x [P_\gamma]^2 \\
&= 2 \tilde A(\mathbf{x}) + 2d(n_w(\rho)-n_z(\rho)) + k + 2d( n_w(\sigma) - n_w(\rho) - n_w(\psi)) \\
&= 2 \tilde A(\mathbf{x}) + 2d(n_w(\sigma) - n_z(\rho) -n_w(\psi)) +k \\
&= 2 \tilde A(\mathbf{x}) + 2d(n_w(\sigma) - n_v(\rho) -n_v(\psi)) +k \\
&= 2 \tilde A(\mathbf{x}) + 2d(n_w(\sigma) -n_v(\sigma)) +k,
\end{align*}
which proves \eqref{eq: abgdb-c1-Pg}.
The computations for the remaining items are similar and are left to the reader.
\end{proof}
\begin{remark} \label{re: veven}
We can adopt the perspective in Remark \ref{re: nprime}: for a fixed $n$, the relations in Lemma \ref{le: pentagonspinc} hold for all $n'$ with $1\leq n' \leq n$. In particular, taking $n'=1$, \eqref{eq: abgdb-c1-V} implies that $\gen{c_1(\mathfrak{s}_z(\sigma)), [V]}$ is always an even integer, and \eqref{eq: abgdb-c1-Qt} implies that $\gen{c_1(\mathfrak{s}_z(\sigma)), [\tilde Q]}=em$ for some odd integer $e$.
Moreover, by varying $n'$, we obtain
$n_z(\sigma) - n_{z_1}(\sigma)=n_{z_j}(\sigma) - n_{z_{j+1}}(\sigma)$ for $j=1,\cdots,n-1.$ This is true because the pentagon class $\sigma$ has no $(\alpha,\gamma)$ or $(\alpha,\delta)$ endpoints.
\end{remark}
\begin{definition}[Definition 6.33 in \cite{HL}] \label{def: g0-trunc}
Fix $0 < \epsilon < \frac13$. Let $\tilde g^t_0$ denote the sum of all terms $g^t_{0,\mathfrak{v}}$ for which $\mathfrak{v}$ satisfies either both
\begin{align}
\label{eq: g0-trunc-Pg} \abs{\gen{c_1(\mathfrak{v}), [P_\gamma]}} &< \epsilon md \\
\label{eq: g0-trunc-Qt} \gen{c_1(\mathfrak{v}), [\tilde Q]} &= \pm m, \\
\intertext{or it satisfies both}
\label{eq: g0-trunc-Pdt} \abs{\gen{c_1(\mathfrak{v}), [\tilde P_\delta]}} &< (1+\epsilon)(k+md) \\
\label{eq: g0-trunc-Q} \gen{c_1(\mathfrak{v}), [Q]} &= \pm m, \\
\intertext{or it satisfies}
\label{eq: g0-trunc-V} \gen{c_1(\mathfrak{v}), [V]} &= 0.
\end{align}
\end{definition}
The following lemma puts further constraints on the spin$^c$ evaluations. It is stated and proved in \cite{HL}.
\begin{lemma} [Lemma 6.32 in \cite{HL}] \label{lemma: g0-trunc}
Fix $t \in \mathbb{N}$ and $0<\epsilon < \epsilon' < 1$. For all $m$ sufficiently large, if $\mathfrak{v}$ is any spin$^c$ structure for which $g^t_{0,\mathfrak{v}} \ne 0$, then the following implications hold:
\begin{enumerate}
\item \label{it: g0-trunc-Pg-Pgt}
If $\abs{\gen{c_1(\mathfrak{v}), [P_\gamma]}} < \epsilon md$ and $\gen{c_1(\mathfrak{v}), [\tilde Q] } = \pm m$, then
\begin{equation} \label{eq: g0-trunc-Pgt}
\abs{\gen{c_1(\mathfrak{v}), [\tilde P_\gamma]}} < \epsilon' md.
\end{equation}
\item \label{it: g0-trunc-Pdt-Pd}
If $\gen{c_1(\mathfrak{v}), [Q]} = \pm m$ and $\abs{\gen{c_1(\mathfrak{v}), [\tilde P_\delta]}} < (1+\epsilon)(k+md)$, then \begin{equation} \label{eq: g0-trunc-Pd}
\abs{\gen{c_1(\mathfrak{v}), [P_\delta]}} < (1+\epsilon')(k+md).
\end{equation}
\item \label{it: g0-trunc-V-Q}
If $\gen{c_1(\mathfrak{v}), [V]}=0$, then
\[
\gen{c_1(\mathfrak{v}), [Q]} = \gen{c_1(\mathfrak{v}), [\tilde Q]} = \pm m.
\]
\end{enumerate}
\end{lemma}
We are ready to compute the filtration shifts. The following lemma is parallel to \cite[Lemma 6.34]{HL}.
\begin{lemma} \label{lemma: g0-filt}
Fix $t \in \mathbb{N}$ and $\epsilon>0$. For all $m$ sufficiently large, the map $\tilde g^t_0$ is filtered with respect to the filtrations $\mathcal{J}_{\alpha\beta}$ and $\mathcal{J}_{\alpha\tilde\beta}$.
\end{lemma}
\begin{proof}
For any $\mathbf{x} \in \mathbb{T}_\alpha \cap \mathbb{T}_\beta$, we have
\begin{align*}
(\theta_{\alpha\tilde\beta} &\circ g^t_0 \circ \theta_{\alpha\beta}^{-1}) ([\mathbf{x},i] \otimes T^r) \\
&=
\sum_{\tilde\mathbf{y} \in \mathbb{T}_\alpha \cap \mathbb{T}_{\tilde\beta}} \sum_{\substack{\sigma \in \pi_2(\mathbf{x}, \Theta_{\beta\gamma}, \Theta_{\gamma\delta}, \Theta_{\delta\tilde\beta}, \tilde\mathbf{y}) \\ \mu(\sigma)=-2 \\ \mathclap{r + k/d +A_{w,z}(\mathbf{x}) + n_z(\sigma) - n_v(\sigma) \equiv 0 \pmod m}}} \#\mathcal{M}(\sigma) \, [\tilde\mathbf{y}, i-n_z(\sigma)] \otimes T^{-k/d -A_{w,z} (\tilde\mathbf{y}) + n_v(\sigma) - n_z(\sigma)}
\end{align*}
As before, the composition $\theta_{\alpha\tilde\beta} \circ g^t_{0,\mathfrak{v}} \circ \theta_{\alpha\beta}^{-1}$ is nonzero only on the summand $\operatorname{CF}^t(\bm\alpha, \bm\beta, \mathfrak{s}, z) \otimes T^r \Gamma_m$, and it lands in $\operatorname{CF}^t(\bm\alpha, \bm{\tilde\beta}, \tilde \mathfrak{s}, z) \otimes T^s \Gamma_m$, where $\mathfrak{s} = \mathfrak{v} |_{Y_{\alpha\beta}}$, $\tilde\mathfrak{s} = \mathfrak{v} |_{Y_{\alpha\tilde\beta}}$, and $r,s \in \mathbb{Q}$ are given by
\begin{equation} \label{eq: g0-r-s-def}
\begin{aligned}
\frac{-k-md}{2d} &\le r < \frac{-k+md}{2d} & r &\equiv -\frac{1}{2d}( \gen{c_1(\mathfrak{v}), [P_\gamma]} +k) \pmod m \\
\frac{-k-md}{2d} &\le s < \frac{-k+md}{2d} & s &\equiv -\frac{1}{2d}( \gen{c_1(\mathfrak{v}), [\tilde P_\delta]} +k+md) \pmod m.
\end{aligned}
\end{equation}
Therefore, for some $p,q\in \mathbb{Z}$ we can write
\begin{align*}
\gen{c_1(\mathfrak{v}), [P_\gamma]} &= -2dr -k + 2pmd \\
\gen{c_1(\mathfrak{v}), [\tilde P_\delta]} &= -2ds -k + (2q-1) md,
\end{align*}
so that
\begin{align}
\label{eq: g0-filt-Pg-bounds} (2p-1)md &< \gen{c_1(\mathfrak{v}), [P_\gamma]} \le (2p+1)md \\
\label{eq: g0-filt-Pdt-bounds} (2q-2)md &< \gen{c_1(\mathfrak{v}), [\tilde P_\delta]} \le 2q md.
\end{align}
Next, assume that $\gen{c_1(\mathfrak{v}), [\tilde Q]} = em$, where $e$ is an odd integer. For any $\mathfrak{v}$ that appears in the definition of $\tilde g^t_0$, we now claim
\[
e=2p-2q+1.
\]
Following the same argument as in \cite{HL}, we first establish that $\gen{c_1(\mathfrak{v}), [V]}$ is small compared to $m$. To be precise, we show that $\abs{k \gen{c_1(\mathfrak{v}), [V]}} < 2md$ in all three cases of Definition \ref{def: g0-trunc}.
\begin{enumerate}[label=(\roman*)]
\item Suppose that $\mathfrak{v}$ satisfies \eqref{eq: g0-trunc-Pg} and \eqref{eq: g0-trunc-Qt}, and hence \eqref{eq: g0-trunc-Pgt} by Lemma \ref{lemma: g0-trunc}\eqref{it: g0-trunc-Pg-Pgt}, where we take $\epsilon' = 2\epsilon$. Then
\begin{align*}
\abs{k \gen{c_1(\mathfrak{v}), [V]}} &= \abs{ \gen{c_1(\mathfrak{v}), [P_\gamma] - [\tilde P_\gamma]}} \\
&\le \abs{\gen{c_1(\mathfrak{v}), [P_\gamma]}} + \abs{\gen{c_1(\mathfrak{v}), [\tilde P_\gamma]}} \\
&< 3\epsilon md \\
&< 2md
\end{align*}
as required.
\item Suppose that $ \mathfrak{v}$ satisfies \eqref{eq: g0-trunc-Pdt} and \eqref{eq: g0-trunc-Q} and hence \eqref{eq: g0-trunc-Pd} by Lemma \ref{lemma: g0-trunc}\eqref{it: g0-trunc-Pdt-Pd}, again taking $\epsilon'=2\epsilon$.
Therefore,
\begin{align*}
(k+md) \abs{\gen{c_1(\mathfrak{v}), [V]}} = \abs{\gen{c_1(\mathfrak{v}), [\tilde P_\delta] - [P_\delta]}} &< (2+3\epsilon)(k+md).
\end{align*}
For $m$ sufficiently large, we again obtain $\abs{k \gen{c_1(\mathfrak{v}), [V]}} <2md$, as required.
\item If $ \mathfrak{v}$ satisfies \eqref{eq: g0-trunc-V}, then the requirement is met immediately.
\end{enumerate}
We proceed by observing that, since
\[
[P_\gamma] - [P_\delta] = d[Q], \quad [\tilde P_\delta] = [P_\delta] + (k+md) [V] \quad \text{and} \quad[\tilde Q] = [Q] -m[V],
\]
it follows that
\begin{equation}
\gen{c_1(\mathfrak{v}),[\tilde P_\delta ] - [P_\gamma] + d [\tilde Q]} = \gen{c_1(\mathfrak{v}),k[V]}.
\end{equation}
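Indeed, substituting the three relations above,
\begin{align*}
[\tilde P_\delta] - [P_\gamma] + d[\tilde Q] &= \left( [P_\delta] + (k+md)[V] \right) - \left( [P_\delta] + d[Q] \right) + d\left( [Q] - m[V] \right) \\
&= (k+md)[V] - md[V] = k[V].
\end{align*}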
Comparing the ranges of the two sides of this equation, using \eqref{eq: g0-filt-Pg-bounds} and \eqref{eq: g0-filt-Pdt-bounds}, we obtain
\begin{equation} \label{eq: g0-filt-V-bounds}
2q-2p-3 + e < \frac{k\gen{c_1(\mathfrak{v}), [V]}}{md} < 2q-2p+1 + e.
\end{equation}
When $\abs{\gen{c_1(\mathfrak{v}), [V]}}$ is sufficiently small, this implies $e=2p-2q+1,$ proving the claim.
For any $\sigma \in \pi_2(\mathbf{x}, \Theta_{\beta\gamma}, \Theta_{\gamma\delta}, \Theta_{\delta\tilde\beta}, \tilde \mathbf{y})$ contributing to $\tilde g^t_0$, we compute:
\begin{align*}
\mathcal{J}_{\alpha\beta}([\mathbf{x},i] \otimes T^r) &- \mathcal{J}_{\alpha\tilde\beta}([\tilde\mathbf{y},i-n_z(\sigma)] \otimes T^s) \\
&= i - \frac{2nd r +nk+n^2 d}{2k} - (i-n_z(\sigma)) + \frac{2nd s +nk+n^2 d}{2k} \\
&= \frac{nd(s-r)}{k} + n_z(\sigma) \\
&= \frac{\gen{c_1(\mathfrak{v}), n[P_\gamma]-n[\tilde P_\delta]} + (2q-2p-1)nmd }{2k} + n_z(\sigma) \\
&= \frac{\gen{c_1(\mathfrak{v}), n[P_\delta] + nd[Q] -n[\tilde P_\delta]} -enmd }{2k} + n_z(\sigma) \\
&= \frac{\gen{c_1(\mathfrak{v}), -(k+md)n[V] + nd[Q]} - \gen{c_1(\mathfrak{v}), nd[\tilde Q]} }{2k} + n_z(\sigma) \\
&= \frac{\gen{c_1(\mathfrak{v}), -(k+md)n[V] + nmd[V]} }{2k} + n_z(\sigma) \\
&= \frac{-k\gen{c_1(\mathfrak{v}), n[V]} }{2k} + n_z(\sigma) \\
&= n_{z_n}(\sigma) - n_z(\sigma) + n_z(\sigma)\\
&= n_{z_n}(\sigma)\\
&\geq 0.
\end{align*}
\end{proof}
\begin{lemma}[Lemma 6.35 in \cite{HL}] \label{lemma: g0-trunc-htpy}
Fix $t \in \mathbb{N}$. For all $m$ sufficiently large, the map $\tilde g^t_0$ is a (filtered) chain homotopy between $\tilde \Psi^t_0$ and $\Phi^t_0$.
\end{lemma}
\begin{proof}
The proof in \cite{HL} goes through without modification, except in the last item (P-$5$), where $\sigma$ has the decomposition $\sigma = \rho * \psi$ with $\rho \in \pi_2(\Theta_{\beta\gamma}, \Theta_{\gamma\delta}, \Theta_{\delta\tilde\beta}, \tilde \Theta)$ and $\psi \in \pi_2(\mathbf{x}, \tilde\Theta, \tilde\mathbf{y})$ for some $\tilde \Theta \in \mathbb{T}_\beta \cap \mathbb{T}_{\tilde\beta}$. Now $z$ and $z_n$ are in the same region of both $(\Sigma, \bm\beta, \bm\gamma, \bm\delta, \bm{\tilde \beta})$ and $(\Sigma, \bm\alpha, \bm\beta, \bm{\tilde \beta})$, so $n_z(\sigma) = n_{z_n}(\sigma)$. By \eqref{eq: abgdb-c1-V}, $\gen{c_1(\mathfrak{s}_z(\sigma)), [V]}=0$, so $\mathfrak{v}$ satisfies condition \eqref{eq: g0-trunc-V} and must appear in the definition of $\tilde g^t_0$.
\end{proof}
This concludes the proof of Proposition \ref{prop: rect-filt}, and hence of Theorem \ref{thm: CFt-cone-f2}.
\section{Proof of the filtered mapping cone formula}\label{sec: proofcone}
We are ready to prove the filtered mapping cone formula. According to Lemma \ref{lemma: cft}, it suffices to prove the mapping cone formula for $\operatorname{CF}^t$. The proof follows closely the recipe from \cite[Section 7]{HL}, and we will write down the steps for completeness.
\begin{proof}[Proof of Theorem \ref{thm: mapping-cone}]
We will only prove the case $k>0$. Given $t\in \mathbb{N}$, by Theorem \ref{thm: CFt-cone-f2}, for a diagram $(\Sigma, \bm\alpha, \bm\beta, \bm\delta, w, z, z_n)$ and $m$ large enough, we have a doubly-filtered homotopy equivalence
\[
\begin{pmatrix} f_1^t \\ h_1^t \end{pmatrix} \colon\thinspace \operatorname{CF}^t(\bm\alpha, \bm\gamma, z) \to \operatorname{Cone}(f_2^t).
\]
Fixing a spin$^c$ structure $\mathfrak{t} \in \Spin^c(Y_\lambda(K))$, consider all of its extensions in $\Spin^c(W_\lambda)$. As discussed at the end of Section \ref{sssec: spinc}, these spin$^c$ structures form an orbit under the action of $\PD[C]$, where $C$ denotes the $2$-handle attached to $Y$. Their images under the bijection $E_{Y,\lambda,K}$ form an orbit in $\underline\Spin^c(Y,K)$ under the action of $\PD[K]$; call them $\xi_l$. The Alexander gradings $A_{Y,K}(\xi_l)$ with $l \in \mathbb{Z}$ are precisely the arithmetic sequence $s_l$ with step length $k/d$, as defined in Section \ref{ssec: statement}, where the index is fixed by
\[
\frac{(2l-1)k}{2d} < s_l \le \frac{(2l+1)k}{2d}.
\]
Under the bijection $E_{Y_\lambda, K_{n,\lambda}}\circ (E_{Y,\lambda,K})^{-1} \colon\thinspace \underline\Spin^c(Y,K) \to \underline\Spin^c(Y_\lambda,K_{n,\lambda})$, the set $\{\xi_l\}$ is identified with a sequence of spin$^c$ structures in $\underline\Spin^c(Y_\lambda,K_{n,\lambda})$ (also with the action of $\PD[K]$). Their $A_{Y_\lambda,K_{n,\lambda}}$ values form a $\mathbb{Q}/n\mathbb{Z}$ coset, which we denote by $A_{Y_\lambda,K_{n,\lambda}}(\mathfrak{t}).$ By \eqref{eq: abg-alex}, we have
\[
nds_l \equiv kA_{Y_\lambda,K_{n,\lambda}}(\mathfrak{t}) + \frac{-nk+n^2 d}{2} \pmod{\mathbb{Z}}.
\]
Next, define $\mathfrak{v}_l \in \Spin^c_0(\bar X_{ \alpha\gamma\delta})$ to be the spin$^c$ structure extending $\mathfrak{t}$ with
\[
\frac{(2l-1) mk}{\nu} < \gen{c_1(\mathfrak{v}_l), [R]} \le \frac{(2l+1)mk}{\nu},
\]
and let $\mathfrak{u}_l = \mathfrak{v}_l | _{Y_{\alpha\delta}}$. We want to show that $s_{\mathfrak{u}_l}=s_l$. (For the definition of $s_{\mathfrak{u}_l}$, see \eqref{eq: su-def}.) By \eqref{eq: agd-c1-q}, we have
\begin{align*}
\frac{n\nu}{2m}\gen{c_1(\mathfrak{v}_l),[R]}&=\tilde A(\mathbf{q}) - k \sum_{j=1}^n \left( n_{z_j}(\psi) - n_{u_j}(\psi) \right) +\frac{-nk + n^2 d}{2}\\
&\equiv kA_{Y_\lambda,K_{n,\lambda}}(\mathfrak{t}) +\frac{-nk + n^2 d}{2} \pmod{k}\\
&\equiv nd s_l \pmod{k}.
\end{align*}
Therefore
\[
\gen{c_1(\mathfrak{v}_l),[R]}= \frac{2mds_l}{\nu}.
\]
On the other hand, according to Lemma \ref{le: f1range},
\[
\gen{c_1(\mathfrak{v}_l),[R]}= \frac{2mds_{\mathfrak{u}_l}}{\nu},
\]
which shows $s_{\mathfrak{u}_l}=s_l$, as required. Lemma \ref{le: f1range} further shows that $f^t_{1,\mathfrak{v}_l}$ is nonzero only when $-L\leq l \leq L$ for some $L$. In other words, the image of $f^t_1$ restricted to $\operatorname{CF}^t(\bm\alpha,\bm\gamma,\mathfrak{t})$ is contained in the direct sum of the $\operatorname{CF}^t(\bm\alpha,\bm\delta,\mathfrak{u}_l)$ for $-L\leq l \leq L$.
Recall that spin$^c$ structures $\mathfrak{x}_{\mathfrak{u}_l}, \mathfrak{y}_{\mathfrak{u}_l}$ on $W_{\alpha\delta\beta}$, both restricting to $\mathfrak{u}_l$ on $Y_{\alpha\delta}$, are characterized by
\begin{align*}
\gen{c_1(\mathfrak{x}_{\mathfrak{u}_l}), [P_\delta]} &= 2ds_l - k-md \\
\gen{c_1(\mathfrak{y}_{\mathfrak{u}_l}), [P_\delta]} &= 2ds_l + k+md.
\end{align*}
We will write $\mathfrak{x}_l = \mathfrak{x}_{\mathfrak{u}_l}$ and $\mathfrak{y}_l = \mathfrak{y}_{\mathfrak{u}_l}$ to simplify the notation. The proof of Proposition \ref{prop: f2-filt} shows that $\mathfrak{x}_{l}$ and $\mathfrak{y}_{l}$ are the only two spin$^c$ structures that may contribute to the map $f^t_{2}$ on $\operatorname{CF}^t(\bm\alpha,\bm\delta, \mathfrak{u}_l)$.
Observe that $\mathfrak{x}_{l}$ and $\mathfrak{y}_{l-1}$ restrict to the same spin$^c$ structure $\mathfrak{s}_{\mathfrak{u}_l}$ on $Y$; we will write $\mathfrak{s}_l =\mathfrak{s}_{\mathfrak{u}_l}$.
Moreover, the images of the maps $\theta \circ f^t_{2, \mathfrak{x}_{l}}$ and $\theta \circ f^t_{2, \mathfrak{y}_{l-1}}$ both lie in the summand $\operatorname{CF}^t(\bm\alpha, \bm\beta, \mathfrak{s}_l) \otimes T^{-s_l}$.
Therefore, the complex $\operatorname{CF}^t(\Sigma, \bm\alpha, \bm\gamma, w, \mathfrak{t})$ equipped with filtrations $\mathcal{I}_{\alpha\gamma}$ and $\mathcal{J}_{\alpha\gamma}$, is doubly-filtered quasi-isomorphic to the doubly-filtered complex
\begin{equation} \label{eq: mapping-cone-CFt}
\xymatrix@R=0.6in{
\cdots \ar[dr] &\operatorname{CF}^t(\bm\alpha, \bm\delta, \mathfrak{u}_{l-1}) \ar[d]|{F^t_{W'_m, \mathfrak{x}_{l-1}}} \ar[dr]|{F^t_{W'_m, \mathfrak{y}_{l-1}}} & \operatorname{CF}^t(\bm\alpha, \bm\delta, \mathfrak{u}_{l}) \ar[d]|{F^t_{W'_m, \mathfrak{x}_{l}}} \ar[dr]|{F^t_{W'_m, \mathfrak{y}_{l}}} & \cdots \ar[d] \\
&\operatorname{CF}^t(\bm\alpha, \bm\beta, \mathfrak{s}_{l-1}) \otimes T^{-s_{l-1}} & \operatorname{CF}^t(\bm\alpha, \bm\beta, \mathfrak{s}_{l}) \otimes T^{-s_{l}} & \cdots
}
\end{equation}
where the filtrations are inherited from those on $\operatorname{CF}^t(\bm\alpha, \bm\delta, z)$ and $\underline\operatorname{CF}^t(\bm\alpha, \bm\beta, z; \Gamma_m)$, respectively.
By Theorem \ref{thm: large-surgery}, there are doubly-filtered quasi-isomorphisms
\[
\Lambda^t_{\mathfrak{u}_l}\colon\thinspace \operatorname{CF}^t(\Sigma, \bm\alpha, \bm\delta, \mathfrak{u}_l) \to A^t_{\mathfrak{s}_l, s_l}
\]
under which the filtration $\mathcal{J}_{\mathfrak{u}_l}$ from \eqref{eq: Ju-def} corresponds to the Alexander filtration. Meanwhile, each $\operatorname{CF}^t(\bm\alpha, \bm\beta, \mathfrak{s}_{l}) \otimes T^{-s_{l}}$ can be identified with $B^t_{\mathfrak{s}_l} = C_{\mathfrak{s}_l}\{0 \le i \le t\}$, so that the complex in \eqref{eq: mapping-cone-CFt} is quasi-isomorphic to
\begin{equation} \label{eq: mapping-cone-AtBt}
\xymatrix@C=0.6in@R=0.6in{
\cdots \ar[dr] & A^t_{\xi_{l-1}} \ar[d]|{v^t_{\xi_{l-1}}} \ar[dr]|{h^t_{\xi_{l-1}}} & A^t_{\xi_{l}} \ar[d]|{v^t_{\xi_{l}}} \ar[dr]|{h^t_{\xi_{l}}} &\cdots \ar[d] \\
& B^t_{\mathfrak{s}_{l-1}} & B^t_{\mathfrak{s}_{l}}& \cdots
}
\end{equation}
By definition, this is the complex $X^t_{\lambda,\mathfrak{t},n,-L,L}$ from Section \ref{ssec: statement}.
Disregarding the second filtration, the underlying mapping cone is isomorphic to the one defined in \cite{HL}, which in turn agrees with the mapping cone in \cite{rational}. Since the maps induced by cobordisms are independent of the choice of basepoints, the absolute grading shift on our mapping cone matches that of Hedden--Levine. Placing the $z_n$ basepoint differently amounts to imposing a different $\mathcal{J}$ filtration on the mapping cone. To complete the proof, we need only check that the $\mathcal{J}$ filtration agrees with the descriptions \eqref{eq: Jt-def-A} and \eqref{eq: Jt-def-B} in Section \ref{ssec: statement}.
\begin{itemize}
\item
On each summand $\operatorname{CF}^t(\bm\alpha, \bm\delta, \mathfrak{u}_l)$, the filtration $\mathcal{J}_{\alpha\delta}$ is defined in \eqref{eq: ad-filt} as the Alexander filtration plus $\frac{nd^2m(2s_{\mathfrak{u}_l}-n)}{2k(k+md)}$; thus $\mathcal{J}$ on $A^t_{\xi_l}$ is $\mathcal{J}_{\mathfrak{u}_l}$ plus the same shift:
\begin{align*}
\mathcal{J}([\mathbf{x}, i, j]) &= \mathcal{J}_{\mathfrak{u}_l}([\mathbf{x},i,j]) + \frac{nd^2m(2s_{\mathfrak{u}_l}-n)}{2k(k+md)} \\
&= \max\{i-n, j-s_l\} + \frac{nd(2s_l - n)}{2(k+md)} + \frac{n}{2} + \frac{nd^2m(2s_l-n)}{2k(k+md)} \\
&= \max\{i-n, j-s_l\} + \frac{2nds_l +nk-n^2 d}{2k}
\end{align*}
as required.
\item
On each summand $\operatorname{CF}^t(\bm\alpha, \bm\beta, \mathfrak{s}_l) \otimes T^{-s_l}$, according to \eqref{eq: ab-twisted-filt} we have
\begin{align*}
\mathcal{J}([\mathbf{x}, i, j]) &= \mathcal{J}_{\alpha\beta}([\mathbf{x}, i] \otimes T^{-s_l}) \\
&= i - \frac{2nd(-s_l) + nk + n^2 d}{2k} \\
&= i-n + \frac{2nds_l + nk - n^2 d}{2k}
\end{align*}
as required.
\end{itemize}
We have proved that $\operatorname{CFK}^t(Y_\lambda, K_{n,\lambda}, \mathfrak{t})$ is doubly-filtered quasi-isomorphic to $X^t_{\lambda,\mathfrak{t},n,-L,L}$. According to Lemma \ref{lemma: cft}, this implies the doubly-filtered quasi-isomorphism between $\operatorname{CFK}^-(Y_\lambda, K_{n,\lambda}, \mathfrak{t})$ and $X^-_{\lambda,\mathfrak{t},n,-L,L}$. To pass to the infinity version, tensor both complexes by $\mathbb{F}[U,U^{-1}]$, and we conclude that $\operatorname{CFK}^\infty(Y_\lambda, K_{n,\lambda}, \mathfrak{t})$ is doubly-filtered quasi-isomorphic to $X^\infty_{\lambda,\mathfrak{t},n,-L,L}$, as required.
\end{proof}
\section{Rational surgery} \label{sec: rational}
Following the approach spelled out in \cite[Section 8]{HL}, our formula also admits a generalization to rational surgeries. For simplicity, in this section we only demonstrate the process for $1/p$ surgery on a null-homologous knot $K$ inside a rational homology sphere $Y$, where $p$ is a positive integer; the argument generalizes to arbitrary rational surgeries on any knot $K$ in a rational homology sphere without much trouble.
Let $K_{n,1/p}$ denote the knot in $Y_{1/p}(K)$ obtained as the $(n,1)$--cable of a left-handed meridian of $K$. Note that the left-handed meridian is no longer isotopic to the core of the surgery solid torus, unlike in the case of integral surgery. Following Hedden and Levine's notation, let $Y'$ denote $Y\mathbin{\#} -L(p,1)$ and $K'=K\mathbin{\#} O_p$, where $O_p \subset -L(p,1)$ is obtained from the Hopf link by performing a $-p$ surgery on one of the unknot components. Then $Y_{1/p}(K)$ is obtained from $Y'$ via a $2$-handle cobordism along $K'$; let $W$ denote this cobordism.
For $q\in \{0, \cdots, p-1 \}$, the complex $\widehat{\operatorname{HFK}}(-L(p,1), O_p, \mathfrak{u}_q)$ is generated by a single generator in Alexander grading $-\frac{p-2q-1}{2p}$. For any $\mathfrak{s} \in \Spin^c(Y)$, let $\mathfrak{s}_{q} = \mathfrak{s} \mathbin{\#} \mathfrak{u}_q \in \Spin^c(Y \mathbin{\#} -L(p,1))$. The K\"unneth principle for connected sums (see \cite[Theorem 7.1]{OSknot}) implies that $\operatorname{CFK}^\infty(Y',K', \mathfrak{s}_q)$ is isomorphic to $\operatorname{CFK}^\infty(Y,K)$, with the Alexander grading shifted by $-\frac{p-2q-1}{2p}$. Moreover, all the $\mathfrak{s}_{q}$ are cobordant to the same spin$^c$ structure on $Y_{1/p}(K)$ through $W$; denote this spin$^c$ structure by $\mathfrak{t} \in \Spin^c(Y_{1/p}(K))$.
In order to compute $\operatorname{CFK}^\infty(Y_{1/p}(K),K_{n,1/p},\mathfrak{t})$, we apply the filtered mapping cone formula to $(Y',K')$. Using the notation from Section \ref{ssec: statement}, we take $d=p$, and the framing on $K'$ corresponds to $k=1$. Consider all the spin$^c$ structures on $W$ that extend $\mathfrak{t}$. Through the bijection between $\Spin^c(W)$ and $\underline\Spin^c(Y',K')$, they form a sequence $\xi_l \in \underline\Spin^c(Y',K')$ for $l\in \mathbb{Z}$, where
\[
\frac{2l-1}{2p} < A_{Y',K'}(\xi_l) \le \frac{2l+1}{2p}.
\]
We set $s_l = A_{Y',K'}(\xi_l)$. Note that each $A_{Y',K'}(\xi_l)$ satisfies
\[
A_{Y',K'}(\xi_l) \equiv \frac{-p+2q+1}{2p} \pmod \mathbb{Z},
\]
for some $q=0,1,\cdots,p-1.$ So we can write
\[
s_l = \frac{-p+2q+1}{2p} + r
\]
for some $r \in \mathbb{Z}.$ The above arithmetic constraint is enough to pin down $s_l$ and $r$, as demonstrated by the following computation from \cite{HL}: Observe $2l-1 < (2r-1)p + 2q+1 \le 2l+1$, so we have $(2r-1)p \le 2(l-q) < (2r-1)p + 2$. There are two cases:
\begin{enumerate}[label=(\roman*)]
\item If $p$ is even, we deduce that $2(l-q) = (2r-1)p$, which implies
\[
s_l = \frac{2l+1}{2p};
\]
\item if $p$ is odd, then $2(l-q) = (2r-1)p+1$, and therefore
\[
s_l= \frac {l}{p}.
\]
\end{enumerate}
In both cases we have
\[
r=\floor {\frac{2l+p}{2p}}.
\]
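To illustrate the two cases with concrete numbers: when $p=2$ and $l=1$ (the even case), these formulas give
\[
s_1 = \frac{3}{4}, \qquad r=\floor{\frac{2+2}{4}}=1, \qquad q=0,
\]
and indeed $s_1 = \frac{-2+2\cdot 0+1}{4}+1$; when $p=3$ and $l=1$ (the odd case),
\[
s_1 = \frac{1}{3}, \qquad r=\floor{\frac{2+3}{6}}=0, \qquad q=2,
\]
and indeed $s_1 = \frac{-3+2\cdot 2+1}{6}+0$.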
In the mapping cone, the complexes $A^\infty_{\xi_l}$ and $B^\infty_{\xi_l}$ are each copies of $\operatorname{CFK}^\infty(Y',K', \mathfrak{s}_q)$ by definition. As noted above, each of these complexes is in turn isomorphic to $\operatorname{CFK}^\infty(Y,K,\mathfrak{s})$, but with the $j$ coordinate shifted by $-\frac{p-2q-1}{2p}$. Now we can directly compute the filtrations on $A^\infty_{\xi_l}$ and $B^\infty_{\xi_l}$ using the formulas \eqref{eq: It-def-A}, \eqref{eq: Jt-def-A}, \eqref{eq: It-def-B}, and \eqref{eq: Jt-def-B}. We collect the results as follows.
\begin{align}
\intertext{On $A^\infty_{\xi_l}$,}
\label{eq: It-ratl-A} \mathcal{I}_\mathfrak{t}([\mathbf{x},i,j]) &= \max\{i,j-r\} \\
\label{eq: Jt-ratl-A} \mathcal{J}_\mathfrak{t}([\mathbf{x},i,j]) &= \max\{i-n,j-r\} + nl -
\begin{cases}
\frac{n}{2}(np-1) \quad p \text{ odd}, \\
\frac{n}{2}(np-2) \quad p \text{ even}, \\
\end{cases}
\intertext{On $B^\infty_{\xi_l}$,}
\label{eq: It-ratl-B} \mathcal{I}_\mathfrak{t}([\mathbf{x},i,j]) &= i \\
\label{eq: Jt-ratl-B} \mathcal{J}_\mathfrak{t}([\mathbf{x},i,j]) &= i-n + nl -
\begin{cases}
\frac{n}{2}(np-1) \quad p \text{ odd}, \\
\frac{n}{2}(np-2) \quad p \text{ even}. \\
\end{cases}
\end{align}
In particular, the above formula takes the same form as the mapping cone defined by Ozsv\'ath and Szab\'o, namely each $A_s$ and $B_s$ complex appears $p$ times in the mapping cone (though in our case the $\mathcal{J}_\mathfrak{t}$ filtration of each copy is shifted accordingly).
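As a sanity check, set $p=1$, so that $q=0$ and $r=l$ in the discussion above. Then \eqref{eq: Jt-ratl-A} reads
\[
\mathcal{J}_\mathfrak{t}([\mathbf{x},i,j]) = \max\{i-n,\,j-l\} + nl - \frac{n(n-1)}{2},
\]
which formally recovers the filtration for integral $+1$-surgery (compare \eqref{eq: filtration_s3_2} below, with $s=l$); similarly \eqref{eq: Jt-ratl-B} recovers \eqref{eq: filtration_s3_4}.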
\section{Examples and applications}\label{sec: examples}
In this section, we compute a series of examples that lead us to the proof of Proposition \ref{prop: phi} and other applications. For the reader's convenience, we restate the mapping cone formula for any knot $K\subset S^3$ under the $+1$-surgery, where the $(n,1)$--cable of the meridian is denoted by $K_{n,1}$. Take $k=d=1$, and $s_l=s \in \mathbb{Z}$ in this case, since the Alexander grading for any relative spin$^c$ structure of $K$ is an integer.
According to the main theorem, the knot Floer complex $\operatorname{CFK}^\infty(S^3_{1}(K),K_{n,1})$ is given by $X^\infty_{1,n,a,b}(K)$ for some $a\ll 0$ and $b\gg 0$. Specifically, let us consider the mapping cone $X^\infty_{1,n,-g+1,g+n-1}(K)$ (we will see later that these are suitable values for $a$ and $b$):
\begin{align}
\bigoplus^{g+n-1}_{s=-g+1}A^\infty_s \xrightarrow{v^\infty_s+h^\infty_s} \bigoplus^{g+n-1}_{s=-g+2}B^\infty_s,
\end{align}
where each $A^\infty_s$ and $B^\infty_s$ is isomorphic to $\operatorname{CFK}^\infty(S^3,K)$, the map $v^\infty_s\colon\thinspace A^\infty_s \to B^\infty_s $ is the identity and $h^\infty_s\colon\thinspace A^\infty_s \to B^\infty_{s+1} $ is the reflection map precomposed with $U^s$. The $\mathcal{I}$ and $\mathcal{J}$ filtrations are given by
\begin{align}
\intertext{For $[\mathbf{x},i,j] \in A^\infty_{s}$,}
\label{eq: filtration_s3_1}
\mathcal{I}([\mathbf{x},i,j]) &= \max\{i,j-s\} \\
\label{eq: filtration_s3_2}
\mathcal{J}([\mathbf{x},i,j]) &= \max\{i-n,j-s\} + ns - \frac{n(n-1)}{2} \\
\intertext{For $[\mathbf{x},i,j] \in B^\infty_{s}$,}
\label{eq: filtration_s3_3}
\mathcal{I}([\mathbf{x},i,j]) &= i \\
\label{eq: filtration_s3_4}
\mathcal{J}([\mathbf{x},i,j]) &= i-n + ns - \frac{n(n-1) }{2}.
\end{align}
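For concreteness, taking $n=2$ these formulas specialize to
\begin{align*}
\mathcal{J}([\mathbf{x},i,j]) &= \max\{i-2,\,j-s\} + 2s - 1 &&\text{on } A^\infty_{s}, \\
\mathcal{J}([\mathbf{x},i,j]) &= i + 2s - 3 &&\text{on } B^\infty_{s},
\end{align*}
while the $\mathcal{I}$ filtrations \eqref{eq: filtration_s3_1} and \eqref{eq: filtration_s3_3} do not involve $n$.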
It is straightforward to check that for $s<-g+1$, the map $h^\infty_s$ induces an isomorphism on homology; for $s>g+n-1,$
the map $v^\infty_s$ induces an isomorphism on homology. Thus the filtered quasi-isomorphism type of $X^\infty_{1,n,a,b}(K)$ does not depend on $a$ and $b$ as long as $a\leq -g+1$ and $b\geq g+n-1$. We will write
\[
X_n^\infty(K) = X^\infty_{1,n,-g+1,g+n-1}(K).
\]
For the purpose of computations, it is usually helpful to look at the associated graded complex. We will describe the associated graded complex with $\mathcal{I}=0$; the associated graded complexes at other $\mathcal{I}$ values are obtained by translation. The complex $X_n^\infty (K)\{i=0\}$ is given by
\begin{align}
\bigoplus^{g+n-1}_{s=-g+1}A^\infty_s \{\text{max}(i,j-s)=0 \} \xrightarrow{v^\infty_s+h^\infty_s} \bigoplus^{g+n-1}_{s=-g+2}B^\infty_s\{i=0\},
\end{align}
where each $A^\infty_s \{\text{max}(i,j-s)=0 \}$ admits exactly $n+1$ filtration levels with respect to $\mathcal{J}$, namely $\mathcal{J}= -n+ns-\frac{n(n-1)}{2}, 1-n+ns-\frac{n(n-1)}{2},\cdots, ns-\frac{n(n-1)}{2},$ while $B^\infty_s\{i=0\}$ corresponds to the filtration level $-n+ns-\frac{n(n-1)}{2}.$ See Figure \ref{fig: filtration}.
\begin{figure}
\labellist
\pinlabel $-n+ns-n(n-1)/2$ at 100 290
\pinlabel $-n-1+ns-n(n-1)/2$ at 193 268
\pinlabel $\cdot$ at 143 249
\pinlabel $\cdot$ at 143 245
\pinlabel $\cdot$ at 143 241
\pinlabel $-2n+ns-n(n-1)/2$ at 193 215
\pinlabel $A_{s-1}$ at 110 314
\pinlabel $h_{s-1}$ at 200 164
\pinlabel $v_s$ at 292 164
\pinlabel $ns-n(n-1)/2$ at 280 288
\pinlabel $-1+ns-n(n-1)/2$ at 382 268
\pinlabel $\cdot$ at 330 248
\pinlabel $\cdot$ at 330 244
\pinlabel $\cdot$ at 330 240
\pinlabel $-n+ns-n(n-1)/2$ at 389 215
\pinlabel $A_s$ at 290 314
\pinlabel $-n+ns-n(n-1)/2$ at 200 100
\pinlabel $B_s$ at 290 110
\endlabellist
\includegraphics{filtration}
\caption{A portion of the complex $X_n^\infty(K) \{i=0\}$, where the vertical bar of each complex is at $i=0$ and the horizontal bar of $A_{s}$ (resp.~ $A_{s-1}$) is at $j=s$ (resp.~ $j=s-1$). }
\label{fig: filtration}
\end{figure}
The general strategy for computation involves finding a \emph{reduced} basis for $X^\infty_n(K)$, in which every term of the differential strictly lowers at least one of the two filtrations. This can be achieved through a cancellation process (see for example \cite{Bordered}) as follows: suppose $\partial x_i = y_i + (\text{lower filtration terms})$, where the double filtration of $y_i$ equals that of $x_i$; then the subcomplex of $X^\infty_n(K)$ generated by all such pairs $\{x_i, \partial x_i\}$ is acyclic, and the quotient of $X^\infty_n(K)$ by this subcomplex is reduced.
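As a toy illustration of this cancellation step (the generators here are hypothetical and not drawn from any particular $X^\infty_n(K)$): suppose a complex has generators $x, y, z, w$ with
\[
\partial x = y + Uz, \qquad \partial w = y,
\]
where $y$ carries the same double filtration as $x$ and $Uz$ lies in strictly lower filtration. The subcomplex generated by $x$ and $\partial x$ is acyclic; after the change of basis $w \mapsto w + x$, we have $\partial(w+x) = Uz$, so the reduced complex is generated by $z$ and $w+x$, and every remaining differential strictly lowers the filtration.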
In the rest of the section, we wish to shed light on the richness of examples arising from the new filtered mapping cone formula. But first we must take a detour through some preliminaries on Heegaard Floer concordance invariants. Experts may skip the next subsection.
\subsection{Concordance invariants and other rings}\label{ssec: ring}
The concordance invariants of interest to us are defined using Heegaard Floer homology over various other rings. We point the reader to the original sources rather than treating the constructions in depth; instead, we review the results that are useful to us and work through an example to convey some of the intuition.
To start, Ian Zemke provided a reinterpretation of Heegaard Floer homology over the ring $\mathbb{F}[U,V]$ (see for example \cite{zemkegrading}). In the knot Floer complex over $\mathbb{F}[U,V]$, there is a bigrading $\gr=(\gr_U,\gr_V)$ assigned to each generator, where $\gr(U)=(-2,0)$ and $\gr(V)=(0,-2)$. The knot Floer complexes of two (smoothly) concordant knots are \emph{locally equivalent} over $\mathbb{F}[U,V]$ (\cite[Section 2.3]{ianconnect}). Local equivalence takes on slightly different meanings over different rings, but is always defined so as to respect knot concordance. We are generally interested, within different contexts, in the map from the monoid of knots up to concordance to the set of knot Floer complexes up to local equivalence.
No matter over which ring it is defined, the knot Floer complex always possesses a certain symmetry. For example, over the ring $\mathbb{F}[U,V]$ this entails that exchanging the roles of $U$ and $V$ yields a complex homotopy equivalent to the original one.
In \cite{Moreconcor}, Dai-Hom-Stoffregen-Truong studied knot Floer complexes over the ring $\mathbb{F}[U,V]/(UV).$ The results were initially stated for knots in $S^3,$ but generalize easily to knots in integer homology sphere $L$--spaces. According to \cite[Definition 4.3]{Moreconcor}, a \emph{standard complex} over $\mathbb{F}[U,V]/(UV)$ is freely generated by $\{x_0, x_1, \cdots, x_{2l} \}$ for some $l\in \mathbb{N},$ where, for $i=0,\cdots, l-1$, each pair of generators $x_{2i}$ and $x_{2i+1}$ is connected by some ``U-arrow'' and each pair of generators $x_{2i+1}$ and $x_{2i+2}$ is connected by some ``V-arrow''. All the information of a standard complex can be encoded in a sequence of signed integers describing, in order, the lengths and directions of the arrows, and we shall use this sequence to denote a standard complex. It turns out that the complexes over the ring $\mathbb{F}[U,V]/(UV)$ are quite tractable. Indeed, we have
\begin{theorem*}[Theorem 6.1 in \cite{Moreconcor}]
The knot Floer complex of any knot in an integer homology sphere $L$--space is locally equivalent to a unique standard complex over $\mathbb{F}[U,V]/(UV).$
\end{theorem*}
Therefore, through this identification, the sequence of signed integers is a concordance invariant. In \cite[Definition 7.1]{Moreconcor}, the invariant $\varphi_i$ is defined as a signed count of the number of times $i$ appears in the odd positions of the sequence, and is subsequently shown in \cite[Theorem 7.2]{Moreconcor} to be a homomorphism from the knot concordance group $\mathcal{C}$ to $\mathbb{Z}$. Equivalently, we can define $\varphi_i$ by counting only the even positions (inverting the sign of each integer so that the two definitions agree); either way, only half of the sequence is needed, due to the symmetry. The two definitions correspond, respectively, to counting all the ``U-arrows'' or all the ``V-arrows'' of a given length.
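As a purely formal illustration (this particular sequence is not claimed to arise from any knot): consider the standard complex
\[
(2,-1,1,-2).
\]
The odd-position entries are $2$ and $1$, so $\varphi_1=\varphi_2=1$ and $\varphi_i=0$ for all other $i$; counting the even-position entries $-1,-2$ with inverted signs gives the same answer, as the symmetry requires.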
Setting $U=0$ in the ring $\mathbb{F}[U,V]/(UV)$ makes the knot Floer complex a module over $\mathbb{F}[V]$; call the homology of the resulting complex the \emph{vertical homology.} Since $\mathbb{F}[V]$ is a PID, the vertical homology of any knot Floer complex has the decomposition
\begin{align*}
\mathbb{F}[V] \oplus \big( \bigoplus_j \mathbb{F}[V]/(V^j) \big).
\end{align*}
A basis of the knot Floer complex that realizes this decomposition is called a \emph{vertical basis;} namely, a vertical basis consists of generators that pairwise generate the $V$--torsion summands, together with a standalone generator (i.e.\ one in the kernel but not the image of $\partial$) that generates the single copy of $\mathbb{F}[V]$. Furthermore, by setting $V=1,$ the resulting complex computes $\operatorname{\widehat{HF}}(Y) \cong \mathbb{F},$ where $Y$ is the ambient manifold. Therefore the standalone generator in a vertical basis has $\gr_V=d(Y)$; we say the $d$--invariant is supported in said generator.
In \cite{Homoconcor}, the above construction is generalized by the same group of authors. They devised the following ring
\begin{align*}
\mathbb{X}=\frac{\mathbb{F}[U_B,\{W_{B,i}\}_{i\in \mathbb{Z}},V_T,\{W_{T,i}\}_{i\in \mathbb{Z}}]}{(U_B V_T, U_B W_{B,i}-W_{B,i+1},V_T W_{T,i}-W_{T,i+1})}
\end{align*}
with a bigrading $(\gr_1,\gr_2) \in \mathbb{Z} \times \mathbb{Z},$ where
\begin{align*}
\gr(U_B)=(-2,0) \quad &\text{and} \quad \gr(W_{B,i})=(-2i,-2),\\
\gr(V_T)=(0,-2) \quad &\text{and} \quad \gr(W_{T,i})=(-2,-2i).
\end{align*}
Inside $\mathbb{X}$ there are two subrings $\mathcal{R}_U$ and $\mathcal{R}_V$ given by
\begin{align*}
\mathcal{R}_U &= \mathbb{F}[U_B,\{W_{B,i}\}_{i\in \mathbb{Z}}]/(U_B W_{B,i}-W_{B,i+1})
\intertext{and}
\mathcal{R}_V &= \mathbb{F}[V_T,\{W_{T,i}\}_{i\in \mathbb{Z}}]/(V_T W_{T,i}-W_{T,i+1}).
\end{align*}
Note that elements in $\mathcal{R}_U$ can be uniquely written down as $U^i_B W^j_{B,0},$ where
\begin{align*}
(i,j)\in (\mathbb{Z} \times \mathbb{Z}^{\geq 0}) - (\mathbb{Z}^{<0} \times \{0\}).
\end{align*}
Namely, $i$ (the power of $U_B$) can be negative when $j$ (the power of $W_{B,0}$) is positive, but is required to be non-negative when $j=0.$ This is analogous to the localization of $\mathbb{F}[U,V]$ obtained by inverting $U$, denoted by $\mathbb{F}[U,U^{-1},V]$. Here $U_B$ plays the role of $U$ and $W_{B,0}$ plays the role of $V$, so that we are allowed to invert $U$ but not $V$. One difference, however, is that in $\mathcal{R}_U$, when there is no $W_{B,0}$ power (i.e.\ when $j=0$) we ``remember'' the power of $U_B$ ($i$ must be non-negative).
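To make this canonical form concrete, note that the relation $U_B W_{B,i}=W_{B,i+1}$ formally yields $W_{B,i}=U_B^{i}W_{B,0}$ for every $i\in\mathbb{Z}$; for instance
\[
W_{B,-2} = U_B^{-2} W_{B,0} \in \mathcal{R}_U, \qquad \text{whereas} \qquad U_B^{-2} \notin \mathcal{R}_U.
\]
Negative powers of $U_B$ thus occur only in the presence of a positive power of $W_{B,0}$.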
Similarly, elements in $\mathcal{R}_V$ can be written down uniquely. Whenever we discuss elements of $\mathcal{R}_U$ and $\mathcal{R}_V$, we use this canonical expression.
The complexes over $\mathbb{F}[U,V]$ can be translated to $\mathbb{X}$ using the map
\begin{align*}
U&\xmapsto[\hspace{2em}]{} U_B + W_{T,0}\\
V&\xmapsto[\hspace{2em}]{} V_T + W_{B,0},
\end{align*}
so this allows us to talk about knot Floer complexes over the ring $\mathbb{X}.$
Similar to the case of $\mathbb{F}[U,V]/(UV)$, there is the notion of the standard complex over $\mathbb{X}$ (\cite[Definition 5.1]{Homoconcor}). A standard complex is freely generated by $\{x_0, x_1, \cdots, x_{2l} \}$ for some $l\in \mathbb{N},$ where each pair of generators $x_{2i}$ and $x_{2i+1}$ is connected by a differential using elements in $\mathcal{R}_U$ and each pair of generators $x_{2i+1}$ and $x_{2i+2}$ is connected by a differential using elements in $\mathcal{R}_V$, for $i=0,\cdots, l-1$. Again, quite remarkably, we have the following.
\begin{theorem*}[Theorem 7.1 in \cite{Homoconcor}]
Every knot Floer complex over the ring $\mathbb{X}$ is locally equivalent to a unique standard complex.
\end{theorem*}
Therefore one can define the concordance invariants $\varphi_{i,j}$ by either counting $\mathcal{R}_U$ edges or $\mathcal{R}_V$ edges in the standard complex that is locally equivalent to the given complex (\cite[Definition 8.1]{Homoconcor}). For $(i,j)\in (\mathbb{Z} \times \mathbb{Z}^{\geq 0}) - (\mathbb{Z}^{<0} \times \{0\})$, $\varphi_{i,j}$ are indeed homomorphisms according to \cite[Theorem 8.2]{Homoconcor}. When $j=0$, the map $\varphi_{i,0}$ agrees with the previously defined homomorphism $\varphi_i$, and when $j \neq 0$, all $\varphi_{i,j}$ vanish for any knot in an integer homology sphere $L$--space.
Note that for a knot Floer complex over $\mathbb{X}$, setting $V_{T}=1$ forces $U_B=0$ (and hence kills $\mathcal{R}_U$), due to the relation $U_B V_T=0$. The resulting complex is a graded module over $\mathbb{F}[W_{T,0}]$, and once again the $d$--invariant is supported in the non-torsion element of this complex.
We hope that the following example provides some intuition for dealing with complexes over the ring $\mathbb{X}.$
\begin{figure}
\begin{minipage}{.5\linewidth}
\centering
\subfloat[The complex $C$ in Example \ref{example:C}. ]{
\begin{tikzpicture}
\begin{scope}[thin, black!20!white]
\foreach \i in {-2,...,3}
{\draw (\i-0.5, 3) -- (\i-0.5, -3);
\draw (3, \i-0.5) -- (-3, \i-0.5); }
\end{scope}
\filldraw (2, 0) circle (2pt) node[] (x1){};
\filldraw (2, 1) circle (2pt) node[] (y2){};
\filldraw (2, 2) circle (2pt) node[] (z){};
\filldraw (1, 2) circle (2pt) node[] (x2){};
\filldraw (0, 2) circle (2pt) node[] (y1){};
\filldraw (-1, -2) circle (2pt) node[] (w1){};
\filldraw (-2, -1) circle (2pt) node[] (w2){};
\draw (2,2) -- (2,1);
\draw (2,2) -- (1,2);
\draw (2,1) -- (-2,-1);
\draw (2,1) -- (-1,-2);
\draw (1,2) -- (-1,-2);
\draw (1,2) -- (-2,-1);
\draw (0,2) -- (-2,-1);
\draw (2,0) -- (-1,-2);
\node [right] at (x1) {$x_1$};
\node [right] at (z) {$z$};
\node [right] at (y2) {$y_2$};
\node [above] at (y1) {$y_1$};
\node [above] at (x2) {$x_2$};
\node [below] at (w1) {$w_1$};
\node [below] at (w2) {$w_2$};
\end{tikzpicture}
}
\end{minipage}%
\begin{minipage}{.5\linewidth}
\centering
\subfloat[{The complex $C$ after a change of basis in $\mathbb{F}[U,V,V^{-1}]$.}]{
\begin{tikzpicture}[scale=0.9]
\begin{scope}[thin, black!0!white]
\draw (5, 0) -- (-5,0);
\end{scope}
\begin{scope}[thin, black!20!white]
\foreach \i in {-2,...,3}
{\draw (\i-0.5, 3) -- (\i-0.5, -3);
\draw (3, \i-0.5) -- (-3, \i-0.5); }
\end{scope}
\filldraw (2, 0) circle (2pt) node[] (x1){};
\filldraw (2, 1) circle (2pt) node[] (y2){};
\filldraw (2, 2) circle (2pt) node[] (z){};
\filldraw (1, 2) circle (2pt) node[] (x2){};
\filldraw (0, 2) circle (2pt) node[] (y1){};
\filldraw (-1, -2) circle (2pt) node[] (w1){};
\filldraw (-2, -1) circle (2pt) node[] (w2){};
\draw (2,2) -- (2,1);
\draw (1,2) -- (-1,-2);
\draw (0,2) -- (-2,-1);
\tiny{
\node [right] at (z) {$z$};
\node [right] at (y2) {$y_2+UV^{-1}x_2$};
\node [left] at (y1) {$y_1$};
\node [above] at (x2) {$x_2+Uy_1$};
\node [below] at (w1) {$w_1$};
\node [below] at (w2) {$w_2$};
\node at (2.1, -0.25) {$x_1+UV^{-2}x_2+U^2V^{-2}y_1$};
}
\end{tikzpicture}
}
\end{minipage}
\caption{}
\label{fig: complex_example}
\end{figure}
\begin{example} \label{example:C}
Consider the following complex $C$ over $\mathbb{F}[U,V]$ generated by $x_1,x_2,y_1,y_2,$ $w_1,w_2$ and $z$ with differentials as follows (see Figure \ref{fig: complex_example})
\begin{gather*}
\begin{aligned}
\partial x_1 &= U^3V^2w_1 \hspace{4em} &\partial x_2 = U^2V^4w_1 + U^3V^3w_2 \\
\partial y_1 &= U^2V^3w_2 &\partial y_2 = U^3V^3w_1 + U^4V^2w_2\\
\partial z &= Ux_2 + Vy_2. &
\end{aligned}
\end{gather*}
In fact, $C$ is locally equivalent to the complex $\operatorname{CFK}(S^3_1(T_{2,11}),(T_{2,11})_{2,1})$; this follows from Lemma \ref{le: localequi} and the proof of Proposition \ref{prop: phi}. We aim to compute $\varphi_{i,j}(C)$ for suitable $i$ and $j,$ which necessitates studying $C$ over the ring $\mathbb{X}.$ To provide some intuition, we first examine $C$ over $\mathbb{F}[U,V]$.
Inverting $V$, in the localized ring $\mathbb{F}[U,V,V^{-1}]$, we can perform a change of basis by replacing $y_2$ with $y_2+UV^{-1}x_2.$ This simplifies the differentials:
\begin{gather}\label{eq: example_middlestep_changebasis}
\begin{aligned}
\partial x_1 &= U^3V^2w_1 \hspace{4em} &\partial x_2 = U^2V^4w_1 + U^3V^3w_2 \\
\partial y_1 &= U^2V^3w_2 &\partial z = V(y_2 + UV^{-1}x_2 ). \hspace{0.8em}
\end{aligned}
\end{gather}
Observing that $x_1, x_2$ and $y_1$ and their differentials make up ``a connected series of line segments,'' we further perform the following change of basis:
\begin{align*}
x_1 &\xmapsto[\hspace{1em}]{} x_1 + UV^{-2}x_2 + U^2V^{-2}y_1\\
x_2 &\xmapsto[\hspace{1em}]{} x_2 + Uy_1.
\end{align*}
Under this change of basis, (the image of) $y_1$ and $w_2$, $x_2$ and $w_1$, $z$ and $y_2$ pairwise generate the torsion in the homology while the standalone generator $x_1$ generates the single copy of the base ring, realizing the decomposition
\[
H_*(C\otimes \mathbb{F}[U,V,V^{-1}]) \cong \mathbb{F}[U,V,V^{-1}] \oplus \big(\bigoplus_j \mathbb{F}[U,V,V^{-1}]/(U^j)\big).
\]
If we instead invert $U$, then one could also write down a similar change of basis over the ring $\mathbb{F}[U,U^{-1},V],$ such that there is one standalone generator while all other generators appear in pairs forming the torsion in the homology. Indeed,
\begin{gather*}
\begin{aligned}
y_1 &\xmapsto[\hspace{1em}]{} y_1 + U^{-2}Vy_2 + U^{-2}V^2x_1, & \hspace{2em}
y_2 \xmapsto[\hspace{1em}]{} y_2 + Vy_1, \\
x_2 &\xmapsto[\hspace{1em}]{} x_2 + U^{-1}Vy_2 &
\end{aligned}
\end{gather*}
accomplishes the job.
Next, denote by $C_\mathbb{X}$ the complex $C$ over the ring $\mathbb{X}.$ Translating into the ring $\mathbb{X}$, the differentials become
\begin{align*}
\partial x_1 &= (U_B^3 W_{B,0}^2 + V_T^2W_{T,0}^3)w_1\\
\partial x_2 &= (U_B^2 W_{B,0}^4 + V_T^4 W_{T,0}^2 )w_1 + (U_B^3W_{B,0}^3 + V_T^3 W_{T,0}^3 )w_2 \\
\partial y_1 &= (U_B^2 W_{B,0}^3 + V_T^3 W_{T,0}^2 )w_2\\
\partial y_2 &= (U_B^3 W_{B,0}^3 + V_T^3 W_{T,0}^3 )w_1 + (U_B^4W_{B,0}^2 + V_T^2 W_{T,0}^4) w_2\\
\partial z &= (U_B + W_{T,0}) x_2 + (W_{B,0} + V_T ) y_2.
\end{align*}
We wish to formulate a change of basis under which $C_\mathbb{X}$ is a standard complex. To motivate such a change of basis, consider the following. Inside the differential of each generator, there are terms in the subring $\mathcal{R}_V$ and terms in the subring $\mathcal{R}_U.$ As we have seen, the ring $\mathcal{R}_V$ is the analogue of inverting $V$ inside $\mathbb{F}[U,V]$, and the ring $\mathcal{R}_U$ is the analogue of inverting $U$ inside $\mathbb{F}[U,V]$. If we ignore all the terms in $\mathcal{R}_U$ and focus on the terms in $\mathcal{R}_V$, there clearly exists a basis analogous to the one found over $\mathbb{F}[U,V,V^{-1}]$; similarly for $\mathcal{R}_U$. The intuition, then, is to perform the two changes of basis simultaneously.
First, if we set
\begin{gather*}
\begin{aligned}
\hspace{2.5em} \widetilde{x}_2 &= x_2 + U^{-1}_B W_{B,0} y_2 \hspace{5.5em} &\widetilde{y}_2 &= y_2 + V^{-1}_T W_{T,0} x_2,
\end{aligned}
\end{gather*}
the differentials can be simplified to
\begin{gather*}
\begin{aligned}
\hspace{4em} \partial x_1 &= (U_B^3 W_{B,0}^2 + V_T^2W_{T,0}^3)w_1 \hspace{2em} &\partial \widetilde{x}_2 &= V_T^4 W_{T,0}^2 w_1 + V_T^3 W_{T,0}^3 w_2 \hspace{0.3em}\\
\partial y_1 &= (U_B^2 W_{B,0}^3 + V_T^3 W_{T,0}^2 )w_2 &\partial \widetilde{y}_2 &= U_B^3 W_{B,0}^3 w_1 + U_B^4W_{B,0}^2 w_2\\
\partial z &= U_B \widetilde{x}_2 + V_T \widetilde{y}_2.
\end{aligned}
\end{gather*}
Compare the $\mathcal{R}_V$ terms in the differential of each generator with the differentials in \eqref{eq: example_middlestep_changebasis} and notice the similarity. With this observation in mind, we further perform the following change of basis
\begin{gather*}
\begin{aligned}
\hspace{6em}\widehat{x}_1 &= x_1 + V_T^{-2}W_{T,0} \widetilde{x}_2 + V_T^{-2}W^{2}_{T,0} y_1 \hspace{1em}
&\widehat{x}_2 &= \widetilde{x}_2 + W_{T,0} y_1 \hspace{0.3em}\\
\widehat{y}_1 &= y_1 + U_B^{-2}W_{B,0} \widetilde{y}_2 + U_B^{-2}W^{2}_{B,0} x_1
&\widehat{y}_2 &= \widetilde{y}_2 + W_{B,0} x_1,
\shortintertext{making the differentials become}
\partial \widehat{x}_1 &= U_B^3 W_{B,0}^2 w_1 &\partial \widehat{x}_2 &= V_T^4 W_{T,0}^2 w_1 \\
\partial \widehat{y}_1 &= V_T^3 W_{T,0}^2 w_2 &\partial \widehat{y}_2 &= U_B^4W_{B,0}^2 w_2\\
\partial z &= U_B \widehat{x}_2 + V_T \widehat{y}_2.
\end{aligned}
\end{gather*}
We conclude that $C_\mathbb{X}$ under this final basis is a standard complex given by
\[
(-(3,2),(4,2),(1,0),-(1,0),-(4,2),(3,2)).
\]
In particular, we have
\begin{align*}
\varphi_{i,j}(C)=\begin{cases} -1 \quad &\text{if} \hspace{0.5em} (i,j)=(3,2) \text{ or }(4,2)\\
1 &\text{if} \hspace{0.5em} (i,j)=(1,0)\\
0 &\text{otherwise.}
\end{cases}
\end{align*}
\end{example}
In the following subsection, we will perform most of the computations (especially those invoking the filtered mapping cone formula) over the original ring $\mathbb{F}[U,U^{-1}].$ Only after collecting enough information to compute the concordance invariants do we move to either the ring $\mathbb{F}[U,V]/(UV)$ or $\mathbb{X}$, depending on the case at hand. We will always specify the base ring whenever it is not $\mathbb{F}[U,U^{-1}]$ or $\mathbb{F}[U]$. It is our hope that this approach causes minimal confusion.
\subsection{Examples and applications} \label{ssec: examples}
We begin by describing a family of concrete examples, obtained by applying the filtered mapping cone formula to the $+1$-surgery on the torus knot $T_{2,3}$ for varying $n$.
Using the notion of the standard complex from either \cite{Homoconcor} or \cite{Moreconcor}, we have
\begin{proposition} \label{prop: t23_n1}
The complex $\operatorname{CFK}^\infty(S^3_1(T_{2,3}),(T_{2,3})_{n,1})$ is locally equivalent to the standard complex $(-1,n,1,-1,-2,\cdots,n-i+1,1,-1,-i-1,\cdots,2,1,-1,-n,1)$ for fixed $n\geq 1$, where $i$ ranges over $1,\cdots, n-1.$
\end{proposition}
The standard complex described in the proposition is simply the concatenation of $(n-i+1,1,-1,-i-1)$ for $i=1,\cdots,n-1$ in that order, with a $-1$ prepended and a $1$ appended.
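For instance, spelling out the small cases of the proposition: for $n=2$ the sequence is
\[
(-1,\,2,1,-1,-2,\,1),
\]
and for $n=3$ it is
\[
(-1,\,3,1,-1,-2,\,2,1,-1,-3,\,1).
\]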
\begin{figure}
\begin{tikzpicture}
\node at (-2,6) {$A_s$};
\node[]() at (0,4) {
\begin{tikzpicture}
\draw[step=1, black!80!white, thin] (0, 0) grid (1, 3);
\filldraw (0.5, 2.5) circle (2pt) node[] (b){};
\filldraw (0.5, 1.5) circle (2pt) node[] (a){};
\filldraw (0.5, 0.5) circle (2pt) node[] (c){};
\draw [thick, ->] (a) -- (c);
\node at (-0.3,1.5) {$\alpha_s$};
\node at (-0.3,2.5) {$x_s$};
\node at (-0.3,0.5) {$y_s$};
\node at (3.3,1.5) {$-\frac{1}{2}n^2 + \frac{1}{2}n + ns -s$};
\node at (3.3,2.5) {$-\frac{1}{2}n^2 + \frac{1}{2}n + ns -(s-1)$};
\node at (3.3,0.5) {$-\frac{1}{2}n^2 + \frac{1}{2}n + ns -(s+1)$};
\end{tikzpicture}};
\node at (-5,-1) {$B_s$};
\node[]() at (-3,0) {
\begin{tikzpicture}
\filldraw (0, 0) circle (2pt) node[] {};
\node at (-0.4,0.6) {$\beta_s$};
\node at (2.5,0) {$-\frac{1}{2}n^2 + \frac{1}{2}n + n(s -1)$};
\end{tikzpicture}};
\node at (2.5,-1) {$B_{s+1}$};
\node[]() at (4,0) {
\begin{tikzpicture}
\filldraw (0, 0) circle (2pt) node[] {};
\node at (0.3,0.7) {$\beta_{s+1}$};
\node at (2.5,0) {$-\frac{1}{2}n^2 + \frac{1}{2}n + ns$};
\end{tikzpicture}};
\draw [thick, ->] (-2.5,2) -- (-4,0.6);
\draw [thick, ->] (-1.3,2) -- (1,0.4);
\node at (-3.6,1.5) {$v_s$};
\node at (0.3,1.4) {$h_s$};
\end{tikzpicture}
\caption{A part of the complex $X^\infty_n(T_{2,3})\{i=0\}$ under the chosen basis.}
\label{fig: portion1}
\end{figure}
\begin{proof}
The case of $n=1$ follows from a simple calculation. Now consider when $n>1.$
Let us denote the generators of $\operatorname{CFK}^\infty(S^3,T_{2,3})$ by $a,b$ and $c$ with coordinates $(0,0),(0,1)$ and $(0,-1)$ respectively, where $b$ has Maslov grading $0$. They satisfy
\[
\partial a=Ub + c.
\]
Via the isomorphism with $\operatorname{CFK}^\infty(S^3,T_{2,3})$, we denote the generators in $A_s$ (resp.~ $B_s$) by $a_s,b_s$ and $c_s$ (resp.~ $a'_s,b'_s$ and $c'_s$) for $0\leq s \leq n$. We will abuse notation and write $\partial$ for the differential in $X^\infty_n(T_{2,3})$ from now on. For each $s=1,\cdots, n-1$, we have
\begin{align*}
\partial a_s &= Ub_s + c_s + a'_s + U^s a'_{s+1} \qquad \qquad \null \\
\partial b_s &= b'_s + U^{s-1} c'_{s+1} \\
\partial c_s &= c'_s + U^{s+1} b'_{s+1}\\
\partial a'_s &= Ub'_s + c'_s.
\end{align*}
We will establish a basis for the reduced complex of $X^\infty_n(T_{2,3})$. First, replace $c'_s$ by $c'_s + Ub'_s$ and quotient out the acyclic summand generated by $a'_s$ and $c'_s + Ub'_s$. Then, in view of the first three relations, it is natural to perform the following change of basis (for all $0\leq s\leq n$):
\begin{gather*}
\begin{aligned}
\alpha_s &= a_s \qquad \qquad & x_s &= b_s + U^{s-1} a'_{s+1}\\
\beta_s &=b'_s \qquad \qquad & y_s &= c_s + a'_s,
\end{aligned}
\end{gather*}
such that under the new basis the relations simplify; see Figure \ref{fig: portion1}. For each $s=1,\cdots, n-1$,
\begin{align*}
\partial \alpha_s &= Ux_s + y_s \qquad \qquad \qquad \qquad \qquad \null \\
\partial x_s &= \beta_s + U^s \beta_{s+1} \\
\partial y_s &= U\beta_s + U^{s+1} \beta_{s+1}.
\end{align*}
When considered as a generator of the complex $B_s\{ i =0\},$ $\beta_s$ is the unique generator of homology under the chosen basis. The $(\mathcal{I},\mathcal{J})$ filtrations of the generators can be computed using \eqref{eq: filtration_s3_1}, \eqref{eq: filtration_s3_2}, \eqref{eq: filtration_s3_3} and \eqref{eq: filtration_s3_4}. All of $\alpha_s, \beta_s, x_s$ and $y_s$ have $\mathcal{I}=0$, and their $\mathcal{J}$ filtrations are as follows:
\begin{align*}
\mathcal{J}(\alpha_s)&=-\frac{1}{2}n^2 + \frac{1}{2}n + ns -s\\
\mathcal{J}(\beta_s)&=-\frac{1}{2}n^2 + \frac{1}{2}n + n(s -1)\\
\mathcal{J}(x_s)&=-\frac{1}{2}n^2 + \frac{1}{2}n + ns -(s-1)\\
\mathcal{J}(y_s)&=-\frac{1}{2}n^2 + \frac{1}{2}n + ns -(s+1).
\end{align*}
The differential of $\alpha_s$ consists of $Ux_s$ and $y_s$, which differ by $(1,-1)$ in the $(\mathcal{I},\mathcal{J})$ grading. Next consider the differentials of $x_s$ and $y_s$. The $(\mathcal{I},\mathcal{J})$ grading shift is equal to $(0,n-s+1)$ from $x_s$ to $\beta_s$ and equal to $(s,1)$ from $x_s$ to $U^s\beta_{s+1}$. Note that $\partial y_s = \partial (Ux_s)$, from which it immediately follows that the $(\mathcal{I},\mathcal{J})$ grading shifts in the differential of $y_s$ are $(1,n-s)$ and $(s+1,0)$ respectively.
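These shifts can be read off directly from the filtration values listed above. For instance, both $x_s$ and $\beta_s$ have $\mathcal{I}=0$, and the common term $-\frac{1}{2}n^2+\frac{1}{2}n$ cancels, so
\[
\mathcal{J}(x_s)-\mathcal{J}(\beta_s)=\big(ns-(s-1)\big)-n(s-1)=n-s+1,
\]
giving the shift $(0,n-s+1)$; likewise, from $x_s$ to $U^s\beta_{s+1}$ the $\mathcal{I}$ coordinate drops by $s$ while
\[
\mathcal{J}(x_s)-\big(\mathcal{J}(\beta_{s+1})-s\big)=\big(ns-(s-1)\big)-\big(ns-s\big)=1,
\]
giving the shift $(s,1)$.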
We then consider the generators in $A_0$. After performing a change of basis
\begin{align*}
x_0 &\xmapsto[]{} x_0 + U^{-1}y_0\\
\alpha_0 &\xmapsto[]{} \alpha_0,
\end{align*}
and quotienting out the resulting acyclic summand, the only remaining generator in $A_0$ is $y_0$, with the differential given by
\[
\partial y_0 = U\beta_1.
\]
The $\mathcal{J}$ filtrations of $y_0 $ and $ U\beta_1$ are equal.
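Indeed, the formulas for the $\mathcal{J}$ filtrations give
\[
\mathcal{J}(y_0)=-\frac{1}{2}n^2+\frac{1}{2}n-1 = \mathcal{J}(\beta_1)-1=\mathcal{J}(U\beta_1).
\]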
With the above computations ready, the rest of the proof analyzes the complex over the ring $\mathbb{F}[U,V]/(UV)$, using the language of \cite{Moreconcor}. The sub-quotient complex generated by $\alpha_s, \beta_s, x_s$ and $y_s$ corresponds to the sequence $(n-s+1,1,-1,-s-1)$, starting from $\beta_s$. See Figure \ref{fig: portion2}. Next, observe that each $\beta_s$ is in the image of $x_s$ and $y_{s+1}$ (with some appropriate $U,V$ decoration); thus as $s$ ranges from $1$ to $n-1$, the sub-quotient complex generated by all $\alpha_s, \beta_s, x_s$ and $y_s$ corresponds to the concatenation of the sequences $(n-s+1,1,-1,-s-1)$.
Finally, it is easy to see that after setting $U=0$, each pair of generators $\alpha_s$ and $y_s$, $\beta_s$ and $x_s$ for $n\geq s\geq 1$ makes up a torsion summand in the vertical homology, while $y_0$ generates the single copy of $\mathbb{F}[V]$. Therefore, by further setting $V=1$, $\operatorname{\widehat{HF}}(S^3_1(T_{2,3}))$ (and thus the $d$-invariant) is supported in $y_0$ as required. The differential of $y_0$ gives the $-1$ at the beginning of the sequence. By symmetry, we obtain the $1$ at the end of the sequence.
\end{proof}
\begin{remark}
In fact, the proof shows that over the ring $\mathbb{F}[U,V],$ $\operatorname{CFK}(S^3,T_{2,3})/(UV)$ is isomorphic to the standard complex $(-1,\cdots,n-i+1,1,-1,-i-1,\cdots,1)$, with $i=1,\cdots, n-1.$ Furthermore, this information completely determines $\operatorname{CFK}^\infty(S^3,T_{2,3})$ (via the condition $\partial^2 = 0$).
\end{remark}
\begin{figure}[!htb]
\begin{tikzpicture}
\draw [thick] (2, 2) -- (1, 2);
\draw [thick] (2, 2) -- (2, 1);
\draw [thick] (1, 2) -- (1, -2);
\draw [thick] (2, 1) -- (-2, 1);
\node[] at (-2.5,1.3) {\small $\beta_{s+1}$};
\node[] at (2.4,0.7) {\small $y_s$};
\node[] at (2.4,2.3) {\small $\alpha_s$};
\node[] at (0.7,2.3) {\small $x_s$};
\node[] at (1.3,-2) {\small $\beta_{s}$};
\node[] at (1.5,2.2) {\tiny $1$};
\node[] at (2.2,1.6) {\tiny $1$};
\node[] at (-0.5,1.3) {\small $s+1$};
\node[] at (0,-0.5) {\small $n-s+1$};
\end{tikzpicture}
\caption{The portion of $X^\infty_n (T_{2,3})$ that corresponds to the sequence \newline $(n-s+1,1,-1,-s-1)$ in a standard complex using vertical basis. The generators are denoted abstractly, without their $U,V$ decoration.}
\label{fig: portion2}
\end{figure}
We are ready to prove the computational result Proposition \ref{prop: phi} which leads to Theorem \ref{thm: phi}.
Recall that torus knots $T_{2,4k+3}$ are $L$--space knots with genus $g=2k + 1$, where $k\in \mathbb{Z}_{\geq 0}$. (We have so far reserved the letter $k$ for the surgery coefficient. Since in this section the surgery is mostly fixed, hopefully this new assignment of letter $k$ does not cause confusion.) The knot Floer complex $\operatorname{CFK}^\infty(S^3,T_{2,4k+3})$ is generated by $a_i$ for $i=1,\cdots,g$ with coordinate $(0,-g+2i-1)$ and $b_i$ for $i=1,\cdots,g+1$ with coordinate $(0,-g+2i-2)$. The Maslov grading is supported in $b_{g+1}$ and the differentials are given by
\[
\partial a_i = Ub_{i+1} + b_i, \quad \text{for} \hspace{0.5em} i=1,\cdots,g.
\]
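To fix ideas, consider the smallest case $k=1$: the knot is $T_{2,7}$ with $g=3$, and $\operatorname{CFK}^\infty(S^3,T_{2,7})$ has generators $a_1,a_2,a_3$ at coordinates $(0,-2),(0,0),(0,2)$ and $b_1,b_2,b_3,b_4$ at $(0,-3),(0,-1),(0,1),(0,3)$, with
\[
\partial a_1 = Ub_2+b_1,\qquad \partial a_2 = Ub_3+b_2,\qquad \partial a_3 = Ub_4+b_3,
\]
and the Maslov grading supported in $b_4$. The case $k=2$, i.e. $T_{2,11}$, is depicted in Figure \ref{fig:lspace}.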
As before, let $(T_{2,4k+3})_{n,1}$ denote the $(n,1)$--cable of the meridian inside $+1$-surgery on $T_{2,4k+3}$ and let $J_{n,k}$ denote $(T_{2,4k+3})_{n,1}$ connected sum with the unknot in $-S^3_1(T_{2,4k+3})$. The ambient manifold $S^3_1(T_{2,4k+3}) \mathbin{\#} -S^3_1(T_{2,4k+3})$ is homology cobordant to $S^3$.
\propphi*
This immediately implies Theorem \ref{thm: phi}. In fact, we can prove a stronger version.
\begin{theorem}
For any integers $i$ and $j$ such that $i>j\geq 0,$ and a given sequence of arbitrary integers $(h_{j+1},\cdots, h_i)$,
there exists some knot $K\in \widehat{\mathcal{C}}_\Z$ such that $\varphi_{t,j}(K)=h_t$ for $t=j+1,\cdots, i,$ and $\varphi_{t,j}(K)=0$ for $t>i.$
\end{theorem}
\begin{proof}
The concordance invariant $\varphi_{i,j}$ is additive under the group action of $\widehat{\mathcal{C}}_\Z$. By Proposition \ref{prop: phi}, we can first choose a knot with $\varphi_{i,j}=h_i$. Assuming $i-j>1,$ then by adding copies of $J_{i-1,j}$ to the chosen knot, we can make it such that $\varphi_{i-1,j}=h_{i-1}$ while the value of $\varphi_{i,j}$ remains unchanged. Repeat this process until a knot with desired property is obtained.
\end{proof}
\begin{figure}[!htb]
\begin{tikzpicture}[scale=0.7]
\begin{scope}[thin, black!50!white]
\draw [<->] (-6, 0) -- (6, 0);
\draw [<->] (0, -6) -- (0, 6);
\end{scope}
\filldraw (-6,4) circle (2pt) node (b_{6}) {};
\node[above] at (b_{6}) {\small $U^3b_{6}$};
\foreach \i in {-2,...,2}
{
\filldraw (-2*\i,2*\i) circle (2pt) node (a_{\i}) {};
\filldraw (-2*\i,-2+2*\i) circle (2pt) node (b_{\i}) {};
\draw [very thick] (-2*\i,2*\i)--(-2*\i-2,2*\i);
\draw [very thick] (-2*\i,2*\i)--(-2*\i,-2+2*\i);
}
\node[below] at (b_{2}) {\small $U^2b_{5}$};
\node[below] at (b_{1}) {\small $Ub_{4}$};
\node[below] at (b_{0}) {\small $b_{3}$};
\node[below] at (b_{-1}) {\small $U^{-1}b_{2}$};
\node[below] at (b_{-2}) {\small $U^{-2}b_{1}$};
\node[right] at (a_{2}) {\small $U^2a_{5}$};
\node[right] at (a_{1}) {\small $Ua_{4}$};
\node[right] at (a_{0}) {\small $a_{3}$};
\node[right] at (a_{-1}) {\small $U^{-1}a_{2}$};
\node[right] at (a_{-2}) {\small $U^{-2}a_{1}$};
\end{tikzpicture}
\caption{The knot Floer complex $\operatorname{CFK}^\infty(S^3,T_{2,11})$. The torus knot $T_{2,11}$ is an $L$--space knot with genus equal to $5$.}
\label{fig:lspace}
\end{figure}
Before moving on to the proof of Proposition \ref{prop: phi}, let us study the complex $X^\infty_n (T_{2,4k+3})$ closely. The case when $k=0$ has been studied, so we assume $k>0$ from now on. Via the isomorphism with $\operatorname{CFK}^\infty(S^3,T_{2,4k+3})$, denote the generators in $A_s$ by $(a_i)_s$ for $i=1,\cdots, g$ and $(b_i)_s$ for $i=1,\cdots, g+1$ and the generators in $B_s$ by $(a_i)'_s$ and $(b_i)'_s$, where $s=-g+1,\cdots, g+n-1.$ The differential on the mapping cone is given by
\begin{align*}
\partial (a_i)_s &= (b_i)_s + U(b_{i+1})_s + (a_i)'_s + U^{s+g+1-2i} (a_{g-i+1})'_{s+1} \\
\partial (b_i)_s &= (b_i)'_s + U^{s+g+2-2i} (b_{g-i+2})'_{s+1} \\
\partial (a_i)'_s &= (b_i)'_s + U(b_{i+1})'_s.
\end{align*}
We first aim to choose a reduced basis for the complex $X^\infty_n (T_{2,4k+3})$. Note that $(a_i)'_s$ and the term $(b_i)'_s$ of $\partial (a_i)'_s$ lie in the same $(\mathcal{I},\mathcal{J})$ filtration level, so we may quotient out the acyclic complex generated by $\{(a_i)'_s, \partial (a_i)'_s \}$. Under this identification, we have
\[
(b_{g+1})'_s = U(b_{g})'_s = \cdots = U^g(b_{1})'_s. \quad \null
\]
Setting
\begin{align*}
(\alpha_i)_s &=(a_i)_s \qquad \qquad \qquad \qquad \qquad \null\\
(x_i)_s &=(b_i)_s \\
\beta_s &=(b_{g+1})'_s,
\end{align*}
the differentials are now simplified considerably:
\begin{align}
\label{eq: diff_a} \partial (\alpha_i)_s &= (x_i)_s + U(x_{i+1})_s \\
\label{eq: diff_x} \partial (x_i)_s &= U^{g+1-i}\beta_s + U^{s+g+1-i} \beta_{s+1}.
\end{align}
Next, notice that for certain values of $i$ and $s$, at least one term in $\partial (\alpha_i)_s$ preserves the $(\mathcal{I},\mathcal{J})$ filtration, and we shall quotient out the acyclic summands generated by these $\{(a_i)_s, \partial (a_i)_s \}$ as well. See Figure \ref{fig: reduced}. When $-g\leq s\leq g$, we claim that for $i \in \{1,\cdots, g\}$ the above condition is satisfied if and only if
\begin{gather*}
\begin{aligned}
i\geq \frac{s+g+1}{2} & \quad \text{or} \quad i\leq \frac{s+g-1}{2}-\floor{\frac{n-1}{2}} & \text{ if } s \text{ is even;} \\
i\geq \frac{s+g+2}{2} & \quad \text{or} \quad i\leq \frac{s+g}{2}-\ceil{\frac{n-1}{2}} & \text{ if } s \text{ is odd},
\end{aligned}
\end{gather*}
and when $s\geq g$, if and only if
\[
i\leq \frac{g+1}{2} - \ceil{\frac{n-s}{2}}.
\]
We will show the claim when $-g\leq s\leq g$ and $s$ is even; the other two cases are similar and left to the reader. The first half of the claim is clear, and we focus on the second half.
\begin{figure}[!htb]
\centering
\subfloat[The complex $A_s \{\text{max}(i,j-s)=0 \}$ when $-g \leq s \leq g$ and $s$ is even. ]{
\begin{tikzpicture}[scale=0.6]
\begin{scope}[thin, black!0!white]
\draw (-10, 0) -- (12, 0);
\end{scope}
\begin{scope}[thin, black!40!white]
\draw (0, 0) -- (0, -10);
\draw (2, 2) -- (2, -10);
\draw (-4, 0) -- (0, 0);
\draw (-4, 2) -- (2, 2);
\draw (0, 0) -- (2, 0);
\draw (0, -2) -- (2, -2);
\draw (0, -4) -- (2, -4);
\end{scope}
\filldraw (-1,1) circle (2pt) node () {};
\foreach \i in {0,...,2}
{
\filldraw (1,1-4*\i) circle (2pt) node () {};
\filldraw (1,-1-4*\i) circle (2pt) node () {};
\draw [very thick] (1,1-4*\i)--(1,-1-4*\i);
}
\draw [very thick] (-1,1)--(1,1);
\node[] at (-2.1,0.8) {$x_{\frac{s+g+3}{2}}$};
\node[right] at (1.2,1) {$\alpha_{\frac{s+g+1}{2}}$};
\node[right] at (1.2,-1) {$x_{\frac{s+g+1}{2}}$};
\node[right] at (1.2,-3) {$\alpha_{\frac{s+g-1}{2}}$};
\node[right] at (2,-5) {$\cdot$};
\node[right] at (2,-5.5) {$\cdot$};
\node[right] at (2,-4.5) {$\cdot$};
\node[right] at (1.2,-7) {$\alpha_i$};
\node[right] at (1.2,-9) {$x_i$};
\end{tikzpicture}
}\par\medskip
\begin{minipage}{.5\linewidth}
\centering
\subfloat[The complex $A_s \{\text{max}(i,j-s)=0 \}$ when $-g \leq s \leq g$ and $s$ is odd. ]{
\begin{tikzpicture}[scale=0.5]
\begin{scope}[thin, black!0!white]
\draw (-10, 0) -- (8, 0);
\end{scope}
\begin{scope}[thin, black!40!white]
\draw (0, 0) -- (0, -10);
\draw (2, 2) -- (2, -10);
\draw (-4, 0) -- (0, 0);
\draw (-4, 2) -- (2, 2);
\draw (0, 0) -- (2, 0);
\draw (0, -2) -- (2, -2);
\draw (0, -4) -- (2, -4);
\end{scope}
\filldraw (-1,1) circle (2pt) node () {};
\filldraw (-3,1) circle (2pt) node () {};
\draw [very thick] (-1,1)--(-3,1);
\foreach \i in {0,...,1}
{
\filldraw (1,-1-4*\i) circle (2pt) node () {};
\filldraw (1,-3-4*\i) circle (2pt) node () {};
\draw [very thick] (1,-1-4*\i)--(1,-3-4*\i);
}
\filldraw (1,1) circle (2pt) node () {};
\node[right] at (1.2,1) {$x_{\frac{s+g+2}{2}}$};
\node[right] at (1.2,-1) {$\alpha_{\frac{s+g}{2}}$};
\node[right] at (1.2,-3) {$x_{\frac{s+g}{2}}$};
\node[right] at (2,-5) {$\cdot$};
\node[right] at (2,-5.5) {$\cdot$};
\node[right] at (2,-6) {$\cdot$};
\end{tikzpicture}
}
\end{minipage}%
\begin{minipage}{.5\linewidth}
\centering
\subfloat[The complex $A_s \{\text{max}(i,j-s)=0 \}$ when $ s \geq g$.]{
\begin{tikzpicture}[scale=0.5]
\begin{scope}[thin, black!0!white]
\draw (-10, 0) -- (8, 0);
\end{scope}
\begin{scope}[thin, black!40!white]
\draw (0, 6) -- (0, -4);
\draw (2, 6) -- (2, -4);
\foreach \i in {0,...,2}
{
\draw (0, 2*\i) -- (2, 2*\i);
}
\end{scope}
\filldraw (1,5) circle (2pt) node () {};
\filldraw (1,3) circle (2pt) node () {};
\filldraw (1,1) circle (2pt) node () {};
\filldraw (1,-1) circle (2pt) node () {};
\filldraw (1,-3) circle (2pt) node () {};
\draw [very thick] (1,3)--(1,1);
\draw [very thick] (1,-3)--(1,-1);
\node[right] at (1.2,5) {$x_{g+1}$};
\node[right] at (1.2,3) {$\alpha_g$};
\node[right] at (1.2,1) {$x_g$};
\node[right] at (1.5,-2) {$\cdot$};
\node[right] at (1.5,-1.5) {$\cdot$};
\node[right] at (1.5,-1) {$\cdot$};
\end{tikzpicture}
}
\end{minipage}
\caption{Part of the associated graded complex $X^\infty_n(T_{2,4k+3})\{i=0\}$, the complex $A_s \{\text{max}(i,j-s)=0 \}$ with the $\mathcal{J}$ filtration level depicted. }
\label{fig: reduced}
\end{figure}
In order to simplify the notation when talking about the filtration, define
\begin{equation}\label{eq: def_f}
f(n,s)=-\frac{(n-1)n}{2} + ns,
\end{equation}
so that the $\mathcal{J}$ filtration on the complex $A_s \{\text{max}(i,j-s)=0 \}$ ranges between $f(n,s)$ and $f(n,s)-n.$ When $s$ is even, for any $l\geq 0$ the only generator supported in the coordinate $(0,s-l)$ is $ x_{(s+g+2-l)/2}$ if $l$ is odd and $\alpha_{(s+g+1-l)/2}$ if $l$ is even. Therefore $\mathcal{J}(\alpha_i)=f(n,s)-n$ if and only if
\begin{align*}
\begin{cases}
i\leq (s+g-n)/2 & n \text{ is odd}\\
i\leq (s+g+1-n)/2 & n \text{ is even}.
\end{cases}
\end{align*}
In either case, the second half of the claim is verified. Thus we have obtained a reduced basis for the complex $X^\infty_n (T_{2,4k+3})$ as follows. The generators in $A^\infty_s$ are given by $(\alpha_i)_s, (x_i)_s$ for $s=-g+1,\cdots, g+n-1$, such that when $s\leq g,$
\begin{gather*}
\begin{aligned}
\frac{s+g+1}{2} \geq i \geq \text{max}\{1,\frac{s+g+1}{2}-\floor{\frac{n-1}{2}}\} & \text{ if } s \text{ is even,} \\
\frac{s+g+2}{2} \geq i \geq \text{max}\{1,\frac{s+g+2}{2}-\ceil{\frac{n-1}{2}}\} & \text{ if } s \text{ is odd},
\end{aligned}
\end{gather*}
when $s\geq g$,
\[
g+1 \geq i\geq \text{max} \{1,\frac{g+3}{2} - \ceil{\frac{n-s}{2}}\};
\]
the generators in $B^\infty_s$ are $\beta_s$ for $s=-g+2,\cdots, g+n-1$ and the differentials are given by \eqref{eq: diff_a} and \eqref{eq: diff_x}.
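Note that \eqref{eq: def_f} is consistent with the $T_{2,3}$ computation in the previous proof: there
\[
\mathcal{J}(\beta_s)=-\frac{1}{2}n^2+\frac{1}{2}n+n(s-1)=f(n,s)-n,
\]
which is the bottom of the stated range.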
Before moving on, let us introduce some notational shorthands:
denote by $x_s$ the ``top'' generator in each $A_s$ complex, i.e. the one with the largest $i$ index; for example, when $-g+1 \leq s \leq g$ and $s$ is even, $x_s=(x_{(s+g+1)/2})_s$. Let $i^{(t)}_{n,s}$ be this ``top'' $i$ index. Similarly, let $i^{(b)}_{n,s}$ be the smallest integer that $i$ index can take in each case. Here letters $t$ and $b$ are chosen to indicate ``top'' and ``bottom'' respectively.
We want to take a look at the $(\mathcal{I},\mathcal{J})$ filtration shifts next. Clearly, the filtration shifts in the differential of each $(\alpha_i)_s$ are $(1,0)$ and $(0,1).$ So we focus on the filtration shifts in the differential of each $(x_i)_s$. Suppose $U^c \beta_{s'}$ is a nontrivial term in $\partial (x_i)_s$, where $s'=s$ or $s+1.$ Define
\begin{equation}
\Delta_{\mathcal{I},\mathcal{J}}((x_i)_s,\beta_{s'}) = (\mathcal{I},\mathcal{J})((x_i)_s) - (\mathcal{I},\mathcal{J})(U^c \beta_{s'})
\end{equation}
and similarly define $\Delta_\mathcal{I}$ and $\Delta_\mathcal{J}$. Note that we have
\begin{equation}
\Delta_{\mathcal{I},\mathcal{J}}((x_{i-1})_s,\beta_{s'})=\Delta_{\mathcal{I},\mathcal{J}}((x_i)_s,\beta_{s'}) + (1,-1)
\end{equation}
for $ i^{(t)}_{n,s} \geq i\geq i^{(b)}_{n,s}+1$, therefore to completely understand the filtration shifts of all the generators in each complex $A_s$, we need only compute the filtration shift of the ``top'' generator $x_s$.
\begin{lemma}\label{le: del_ij}
When $-g+1\leq s \leq g,$ if $s$ is even,
\begin{align}
\label{eq: del_sleqg_even_s} \Delta_{\mathcal{I},\mathcal{J}}(x_s,\beta_s)&=\Big(\frac{-s+g+1}{2},n+\frac{-s+g-1}{2}\Big)\\
\label{eq: del_sleqg_even_sp} \Delta_{\mathcal{I},\mathcal{J}}(x_s,\beta_{s+1})&=\Big(\frac{s+g+1}{2},\frac{s+g-1}{2}\Big);\\
\intertext{if $s$ is odd,}
\label{eq: del_sleqg_odd_s} \Delta_{\mathcal{I},\mathcal{J}}(x_s,\beta_s)&=\Big(\frac{-s+g}{2},n+\frac{-s+g}{2}\Big)\\
\label{eq: del_sleqg_odd_sp} \Delta_{\mathcal{I},\mathcal{J}}(x_s,\beta_{s+1})&=\Big(\frac{s+g}{2},\frac{s+g}{2}\Big);\\
\intertext{when $g\leq s \leq g+n-1$,}
\label{eq: del_sgeqg_s} \Delta_{\mathcal{I},\mathcal{J}}(x_s,\beta_s)&=(0,n+g-s)\\
\label{eq: del_sgeqg_sp} \Delta_{\mathcal{I},\mathcal{J}}(x_s,\beta_{s+1})&=(s,g).
\end{align}
\end{lemma}
\begin{proof}
For each case, $\mathcal{J}(\beta_s)=f(n,s)-n$ and $\mathcal{J}(\beta_{s+1})=f(n,s)$ by \eqref{eq: filtration_s3_4}. When
$-g+1\leq s \leq g,$ and $s$ is even,
\begin{align*}
\partial x_s &= U^{(-s+g+1)/2} \beta_s + U^{(s+g+1)/2} \beta_{s+1},\\
\mathcal{J}(x_s)&=f(n,s)-1.
\end{align*}
Compute
\begin{align*}
\Delta_{\mathcal{I},\mathcal{J}}(x_s,\beta_s)&= \Big(\frac{-s+g+1}{2},\frac{-s+g+1}{2}\Big) + (0,n-1)\\
&=\Big(\frac{-s+g+1}{2},n+\frac{-s+g-1}{2}\Big)\\
\Delta_{\mathcal{I},\mathcal{J}}(x_s,\beta_{s+1})&= \Big(\frac{s+g+1}{2},\frac{s+g+1}{2}\Big) + (0,-1)\\
&=\Big(\frac{s+g+1}{2},\frac{s+g-1}{2}\Big).
\end{align*}
When $-g+1\leq s \leq g,$ and $s$ is odd, the computation is parallel. Suppose now $g\leq s \leq g+n-1$, then we have
\begin{align*}
\partial x_s &= \beta_s + U^s \beta_{s+1},\\
\mathcal{J}(x_s) &= f(n,s) - (s-g).
\end{align*}
So we may compute
\begin{align*}
\Delta_{\mathcal{I},\mathcal{J}}(x_s,\beta_s)&= (0,n+g-s)\\
\Delta_{\mathcal{I},\mathcal{J}}(x_s,\beta_{s+1})&= (s,s) + (0,g-s)\\
&=(s,g).
\end{align*}
\end{proof}
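As a sanity check, the two regimes in Lemma \ref{le: del_ij} agree at the overlap $s=g$ (recall $g=2k+1$ is odd): \eqref{eq: del_sleqg_odd_s} and \eqref{eq: del_sleqg_odd_sp} give
\[
\Delta_{\mathcal{I},\mathcal{J}}(x_g,\beta_g)=(0,n), \qquad \Delta_{\mathcal{I},\mathcal{J}}(x_g,\beta_{g+1})=(g,g),
\]
matching \eqref{eq: del_sgeqg_s} and \eqref{eq: del_sgeqg_sp} evaluated at $s=g$.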
Next we claim that for the purpose of computing concordance invariants, we need only look at a portion of the mapping cone.
Define $X^\infty_n (K) \langle l \rangle = X^\infty_{1,n,-l+n,l} (K)$ for $l\in \mathbb{Z}$, which is the filtered mapping cone
\begin{align*}
\bigoplus^{l}_{s=-l+n}A^\infty_s \xrightarrow{v^\infty_s+h^\infty_s} \bigoplus^{l}_{s=-l+n+1}B^\infty_s.
\end{align*}
Note that under this notation $X^\infty_n (K) = X^\infty_n (K) \langle g+n-1 \rangle$.
\begin{lemma}\label{le: localequi}
For any $k \geq 1$ and $n\geq 1$, the filtered complex $X^\infty_n (T_{2,4k+3}) \langle g+n-1 \rangle$ is isomorphic to $ X^\infty_n (T_{2,4k+3}) \langle n \rangle \oplus D $ up to a change of basis, where $H_*(D)=0$.
\end{lemma}
\begin{proof}
It suffices to show for each $l=n+1,\cdots, g+n-1,$ the complex $X^\infty_n (T_{2,4k+3}) \langle l \rangle$ is isomorphic to $X^\infty_n (T_{2,4k+3}) \langle l-1 \rangle \oplus D$ up to a change of basis, where $H_*(D)=0$. For every such $l$, we will demonstrate a filtered change of basis such that the complex given by
\begin{equation}\label{eq: summand1}
A^\infty_{-l+n} \xrightarrow[]{h_{-l+n}}B^\infty_{-l+n+1}
\end{equation}
becomes a summand under the new basis. Thus, by symmetry, there is also a filtered change of basis such that
\begin{equation}\label{eq: summand2}
A^\infty_{l} \xrightarrow[]{\hspace{0.3em} v_{l}\hspace{0.3em}}B^\infty_{l}
\end{equation}
becomes a summand under the new basis as required.
Let $s=-l+n+1$ in the following proof, and we have $s\leq 0.$ There are two cases to consider depending on the parity of $s.$
\begin{itemize}
\item When $s$ is even, perform the change of basis
\begin{align*}
(x_i)_{s} \xmapsto[\qquad\null]{} &(x_i)_{s} + U^{\frac{-s+g+3}{2} -i} x_{{s}-1},
\quad \text{for} \hspace{0.5em} i^{(t)}_{n,s} \geq i \geq i^{(b)}_{n,s}.
\end{align*}
By \eqref{eq: diff_a} and \eqref{eq: diff_x} it is straightforward to check that the complex given by \eqref{eq: summand1} is a summand under this basis. (Note that in the complex $X^\infty_n (T_{2,4k+3}) \langle l \rangle$ the map $v_{-l+n}=v_{s-1}$ is the zero map.) It remains to show that this change of basis is filtered. This in fact amounts to showing that for $i^{(t)}_{n,s} \geq i \geq i^{(b)}_{n,s}$,
\begin{align*}
\Delta_{\mathcal{I},\mathcal{J}}((x_i)_s, \beta_s ) \geq \Delta_{\mathcal{I},\mathcal{J}}(x_{s-1}, \beta_s).
\end{align*}
Thus we compute
\begin{align*}
\Delta_{\mathcal{I},\mathcal{J}}&((x_i)_s, \beta_s ) - \Delta_{\mathcal{I},\mathcal{J}}(x_{s-1}, \beta_s)\\
&= \Delta_{\mathcal{I},\mathcal{J}}(x_s, \beta_s ) +
\left( \frac{s+g+1}{2} - i\right)(1,-1) - \Delta_{\mathcal{I},\mathcal{J}}(x_{s-1}, \beta_s) \\
&= \left( \frac{-s+g+1}{2} , n + \frac{-s+g-1}{2} \right) + \left( \frac{s+g+1}{2} - i , -\frac{s+g+1}{2} + i \right) \\
& \hspace{2em}- \left( \frac{s-1+g}{2} , \frac{s-1+g}{2} \right)\\
&= \left( \frac{-s+g+3}{2} -i , n-1 + \frac{-3s-g+1}{2} + i \right).
\end{align*}
The $\mathcal{I}$ filtration shift is clearly positive for all $i$ since
\begin{align*}
\frac{-s+g+3}{2} -i &\geq \frac{-s+g+3}{2} - \frac{s+g+1}{2}\\
&=-s+1\\
&> 0,
\end{align*}
while for the $\mathcal{J}$ filtration shift there are two possible cases depending on the value of $i^{(b)}_{n,s}$:
\begin{enumerate}[label=\roman*)]
\item Suppose
\[
1\geq \frac{s+g+1}{2} - \floor{\frac{n-1}{2}},
\]
then we have
\begin{align*}
n-1 + \frac{-3s-g+1}{2} + i &\geq n + \frac{-3s-g+1}{2} \\
& \geq 2\floor{\frac{n-1}{2}} +1 + \frac{-3s-g+1}{2}\\
&\geq s+g + \frac{-3s-g+1}{2}\\
&\geq \frac{-s+g+1}{2}\\
&>0.
\end{align*}
\item If instead
\[
1\leq \frac{s+g+1}{2} - \floor{\frac{n-1}{2}},
\]
this is when $n$ is relatively small, and we have
\begin{align*}
n-1 + \frac{-3s-g+1}{2} + i &\geq n-1 + \frac{-3s-g+1}{2} + \frac{s+g+1}{2} - \floor{\frac{n-1}{2}} \\
& \geq -s+1 - \floor{\frac{n-1}{2}} +n -1 \\
&\geq -s+1 + \frac{n-1}{2} \\
&>0.
\end{align*}
\end{enumerate}
\item When $s$ is odd, perform the change of basis
\begin{align*}
(x_i)_s \xmapsto[\qquad\null]{} &(x_i)_s + U^{\frac{-s+g+2}{2} -i} x_{s-1}, \quad \text{for} \hspace{0.5em} i^{(t)}_{n,s} \geq i \geq i^{(b)}_{n,s}.
\end{align*}
Again one can verify under this basis the complex given by \eqref{eq: summand1} is a summand. The difference in $(\mathcal{I},\mathcal{J})$ filtration is
\begin{align*}
\Delta_{\mathcal{I},\mathcal{J}}&((x_i)_s, \beta_s ) - \Delta_{\mathcal{I},\mathcal{J}}(x_{s-1}, \beta_s)\\
&= \Delta_{\mathcal{I},\mathcal{J}}(x_s, \beta_s ) +
\left( \frac{s+g+2}{2} - i\right)(1,-1) - \Delta_{\mathcal{I},\mathcal{J}}(x_{s-1}, \beta_s) \\
&= \left( \frac{-s+g}{2} , n + \frac{-s+g}{2} \right) + \left( \frac{s+g+2}{2} - i , -\frac{s+g+2}{2} + i \right) \\
& \hspace{2em}- \left( \frac{s+g}{2} , \frac{s+g-2}{2} \right)\\
&= \left( \frac{-s+g+2}{2} -i , n + \frac{-3s-g}{2} + i \right).
\end{align*}
The $\mathcal{I}$ filtration shift is clearly positive since
\begin{align*}
\frac{-s+g+2}{2} -i &\geq \frac{-s+g+2}{2} - \frac{s+g+2}{2}\\
&=-s\\
&> 0,
\end{align*}
while for the $\mathcal{J}$ filtration shift there are two possible cases depending on the value of $i^{(b)}_{n,s}$.
\begin{enumerate}[label=\roman*)]
\item Suppose that
\[
1\geq \frac{s+g+2}{2} - \ceil{\frac{n-1}{2}},
\]
then we have
\begin{align*}
n + \frac{-3s-g}{2} + i &\geq n + \frac{-3s-g}{2} + 1 \\
& \geq 2\ceil{\frac{n-1}{2}} + \frac{-3s-g}{2} + 1\\
&\geq s+g + \frac{-3s-g}{2} + 1\\
&\geq \frac{-s+g}{2} + 1\\
&>0.
\end{align*}
\item If instead
\[
1\leq \frac{s+g+2}{2} - \ceil{\frac{n-1}{2}},
\]
in this case we have
\begin{align*}
n + \frac{-3s-g}{2} + i &\geq n + \frac{-3s-g}{2} + \frac{s+g+2}{2} - \ceil{\frac{n-1}{2}} \\
& \geq -s + 1 + \ceil{\frac{n-1}{2}} \\
&>0
\end{align*}
as required.
\end{enumerate}
\end{itemize}
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop: phi}]
By Lemma \ref{le: localequi}, in order to compute concordance invariants it suffices to consider the complex $X^\infty_n(T_{2,4k+3})\langle n \rangle$. We claim that over the ring $\mathbb{X}$ discussed in Subsection \ref{ssec: ring}, after a change of basis $X^\infty_n(T_{2,4k+3})\langle n \rangle$ is a standard complex.
Translating to the ring $\mathbb{X}$, the differential on the complex is given by
\begin{align*}
\partial (\alpha_i)_s &= (U_B + W_{T,0}) (x_{i+1})_s + (W_{B,0} + V_T) (x_i)_s \\
\partial {{(x_i)}}_0 &= \big( U^{\Delta_\mathcal{I}({(x_i)}_0, \beta_{1})}_B W^{\Delta_\mathcal{J}({(x_i)}_0, \beta_{1})}_{B,0} + V^{\Delta_\mathcal{J}({(x_i)}_0, \beta_{1})}_T W^{\Delta_\mathcal{I}({(x_i)}_0, \beta_{1})}_{T,0} \big)\beta_1\\
\partial {{(x_i)}}_n &= \big( U^{\Delta_\mathcal{I}({(x_i)}_n, \beta_n)}_B W^{\Delta_\mathcal{J}({(x_i)}_n, \beta_n)}_{B,0} + V^{\Delta_\mathcal{J}({(x_i)}_n, \beta_n)}_T W^{\Delta_\mathcal{I}({(x_i)}_n, \beta_n)}_{T,0} \big)\beta_n\\
\intertext{\hspace{1em}and if $n>s>0,$ then}
\partial {{(x_i)}}_s &= \big( U^{\Delta_\mathcal{I}({(x_i)}_s, \beta_s)}_B W^{\Delta_\mathcal{J}({(x_i)}_s, \beta_s)}_{B,0} + V^{\Delta_\mathcal{J}({(x_i)}_s, \beta_s)}_T W^{\Delta_\mathcal{I}({(x_i)}_s, \beta_s)}_{T,0} \big)\beta_s \\
+\big( &U^{\Delta_\mathcal{I}({(x_i)}_s, \beta_{s+1})}_B W^{\Delta_\mathcal{J}({(x_i)}_s, \beta_{s+1})}_{B,0} + V^{\Delta_\mathcal{J}({(x_i)}_s, \beta_{s+1})}_T W^{\Delta_\mathcal{I}({(x_i)}_s, \beta_{s+1})}_{T,0} \big)\beta_{s+1}.
\end{align*}
We first perform the change of basis
\begin{align*}
(\widetilde{x}_i)_s &= \begin{cases}
(x_i)_s + U^{-1}_B W_{B,0} (x_{i-1})_s &\quad \text{if} \hspace{0.5em} i=i^{(t)}_{n,s}\\
(x_i)_s + U^{-1}_B W_{B,0} (x_{i-1})_s + V^{-1}_T W_{T,0} (x_{i+1})_s &\quad \text{if} \hspace{0.5em} i^{(b)}_{n,s}<i<i^{(t)}_{n,s}\\
(x_i)_s + V^{-1}_T W_{T,0} (x_{i+1})_s &\quad \text{if} \hspace{0.5em} i=i^{(b)}_{n,s},
\end{cases}
\intertext{which simplifies the differentials considerably:}
\partial (\alpha_i)_s &= U_B (\widetilde{x}_{i+1})_s + V_T (\widetilde{x}_i)_s \\
\partial {(\widetilde{x}_i)}_0 &= \begin{cases}
V^{\Delta_\mathcal{J}({(x_i)}_0, \beta_{1})}_T W^{\Delta_\mathcal{I}({(x_i)}_0, \beta_{1})}_{T,0} \beta_{1}, \hspace{0.5em} i=i^{(t)}_{n,0} \\
U^{\Delta_\mathcal{I}({(x_i)}_0, \beta_{1})}_B W^{\Delta_\mathcal{J}({(x_i)}_0, \beta_{1})}_{B,0}\beta_{1}, \hspace{0.5em} i=i^{(b)}_{n,0}\\
0, \hspace{10em}\text{ otherwise}
\end{cases}\\
\partial {(\widetilde{x}_i)}_n &=\begin{cases}
V^{\Delta_\mathcal{J}({(x_i)}_n, \beta_n)}_T W^{\Delta_\mathcal{I}({(x_i)}_n, \beta_n)}_{T,0} \beta_n, \hspace{0.5em} i=i^{(t)}_{n,n} \\
U^{\Delta_\mathcal{I}({(x_i)}_n, \beta_n)}_B W^{\Delta_\mathcal{J}({(x_i)}_n, \beta_n)}_{B,0}\beta_n, \hspace{0.5em} i=i^{(b)}_{n,n}\\
0, \hspace{10.4em}\text{ otherwise}
\end{cases}
\end{align*}
and for $n>s>0,$
\begin{align*}
\partial {(\widetilde{x}_i)}_s = &\begin{cases}
V^{\Delta_\mathcal{J}({(x_i)}_s, \beta_s)}_T W^{\Delta_\mathcal{I}({(x_i)}_s, \beta_s)}_{T,0} \beta_s
+ V^{\Delta_\mathcal{J}({(x_i)}_s, \beta_{s+1})}_T W^{\Delta_\mathcal{I}({(x_i)}_s, \beta_{s+1})}_{T,0} \beta_{s+1}, \hspace{0.5em} i=i^{(t)}_{n,s} \\
U^{\Delta_\mathcal{I}({(x_i)}_s, \beta_s)}_B W^{\Delta_\mathcal{J}({(x_i)}_s, \beta_s)}_{B,0}\beta_s
+ U^{\Delta_\mathcal{I}({(x_i)}_s, \beta_{s+1})}_B W^{\Delta_\mathcal{J}({(x_i)}_s, \beta_{s+1})}_{B,0}\beta_{s+1}, \hspace{0.5em} i=i^{(b)}_{n,s}\\
0, \hspace{24.5em}\text{ otherwise.}
\end{cases}
\end{align*}
Next observe that
\begin{align*}
\Delta_\mathcal{I}(x_s,\beta_{s+1}) - \Delta_\mathcal{I}(x_{s+1},\beta_{s+1}) = \begin{cases}
s+1 \quad &\text{if} \hspace{0.5em} 0\leq s \leq g\\
s &\text{otherwise,}
\end{cases}
\end{align*}
by \eqref{eq: del_sleqg_even_s}, \eqref{eq: del_sleqg_even_sp}, \eqref{eq: del_sleqg_odd_s}, \eqref{eq: del_sleqg_odd_sp}, \eqref{eq: del_sgeqg_s} and \eqref{eq: del_sgeqg_sp}. In particular, for any $s$ with $0\leq s \leq n-1,$ we have $\Delta_\mathcal{I}(x_s,\beta_{s+1}) - \Delta_\mathcal{I}(x_{s+1},\beta_{s+1})>0.$ Similarly, one can show $\Delta_\mathcal{J}((x_{i^{(b)}_{n,s}})_s,\beta_s) - \Delta_\mathcal{J}((x_{i^{(b)}_{n,s-1}})_{s-1},\beta_s)>0$ for $n\geq s \geq 1$.
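For instance, when $g\leq s\leq n-1$, \eqref{eq: del_sgeqg_sp} gives $\Delta_\mathcal{I}(x_s,\beta_{s+1})=s$ while \eqref{eq: del_sgeqg_s} (with $s$ replaced by $s+1$) gives $\Delta_\mathcal{I}(x_{s+1},\beta_{s+1})=0$, so the difference is $s$; when $0\leq s <g$ and $s$ is even, \eqref{eq: del_sleqg_even_sp} and \eqref{eq: del_sleqg_odd_s} give
\[
\Delta_\mathcal{I}(x_s,\beta_{s+1}) - \Delta_\mathcal{I}(x_{s+1},\beta_{s+1}) = \frac{s+g+1}{2} - \frac{-(s+1)+g}{2} = s+1.
\]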
Therefore we may further perform a change of basis
\begin{align}
\label{eq: changebasis1}
(\widetilde{x}_{i^{(t)}_{n,s}})_s &\xmapsto[\hspace{2em}]{} \sum_{n\geq s'\geq s}V^{\lambda_{s'}}_T W^{\mu_{s'}}_{T,0} (\widetilde{x}_{i^{(t)}_{n,s'}})_{s'} \quad \text{for all} \hspace{0.5em} n > s \geq 0,\\
\label{eq: changebasis2}
(\widetilde{x}_{i^{(b)}_{n,s}})_s &\xmapsto[\hspace{2em}]{} \sum_{0\leq s'\leq s} U^{\theta_{s'}}_B W^{\eta_{s'}}_{B,0}(\widetilde{x}_{i^{(b)}_{n,s'}})_{s'} \quad \text{for all} \hspace{0.5em} 0 < s \leq n,
\end{align}
where
\begin{align*}
\lambda_{s'} &= \sum_{s'-1 \geq t\geq s} \big( \Delta_\mathcal{J}(x_t,\beta_{t+1}) - \Delta_\mathcal{J}(x_{t+1},\beta_{t+1}) \big)\\
\mu_{s'} &= \sum_{s'-1 \geq t\geq s} \big( \Delta_\mathcal{I}(x_t,\beta_{t+1}) - \Delta_\mathcal{I}(x_{t+1},\beta_{t+1}) \big) \hspace{5em},
\end{align*}
and $\theta_{s'}, \eta_{s'}$ can be defined similarly. Note that the definition implies $\lambda_s=\mu_s = 0.$ When $s' > s$ (resp.~$s' < s$), $\mu_{s'}$ (resp.~$\eta_{s'}$) is positive, while $\lambda_{s'}$ (resp.~$\theta_{s'}$) need not be (in fact they are negative). This change of basis results in a standard complex as required.
Let $(\widetilde{x})_s$ denote the image of $(\widetilde{x}_{i^{(t)}_{n,s}})_s$ under the above change of basis. Then we have $(\widetilde{x})_0 \in \operatorname{ker} \partial$, and starting from $(\widetilde{x})_0$ every odd-numbered edge is marked by an element of $\mathcal{R}_U$ and every even-numbered edge by an element of $\mathcal{R}_V$. In order to compute $\varphi_{i,j},$ we need only consider the set of (say) $\mathcal{R}_V$ edges. Each $\mathcal{R}_V$ edge that is not equal to $V_T$ is marked by the differential of $(\widetilde{x})_s$ for some $s=1,\cdots,n.$ For $1\leq s\leq n,$
\begin{align*}
\partial(\widetilde{x})_s = V^{\Delta_\mathcal{J}(x_s,\beta_s)}_T W^{\Delta_\mathcal{I}(x_s,\beta_s)}_{T,0}\beta_s
\end{align*}
thus we simply need to look at the values of $\Delta_{\mathcal{I},\mathcal{J}}(x_s,\beta_s).$
For the knot $T_{2,4k+3}$, $g=2k+1.$ By \eqref{eq: del_sleqg_even_s}, \eqref{eq: del_sleqg_odd_s} and \eqref{eq: del_sgeqg_s}, when $n\geq g$, the sequence of $\Delta_{\mathcal{I},\mathcal{J}}(x_s,\beta_s)$ for $s=1,\cdots,n$ is given by
\begin{align*}
(k,n+k),(k,n+k-1),\cdots,(1,n+1),(1,n),(0,n),(0,n-1),\cdots,(0,2k+1);
\end{align*}
when $n\leq g$, the above sequence terminates at $(k+1-n/2,n/2+k)$ if $n$ is even and at $(k-(n-1)/2,(n+1)/2+k)$ if $n$ is odd.
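For instance, when $k=1$ and $n=4$ (so that $n\geq g=3$), the sequence of $\Delta_{\mathcal{I},\mathcal{J}}(x_s,\beta_s)$ for $s=1,\cdots,4$ reads
\[
(1,5),\ (1,4),\ (0,4),\ (0,3),
\]
as one can also verify directly from Lemma \ref{le: del_ij}.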
Therefore we conclude that for the complex in question, the concordance invariant $\varphi_{i,j}=-1$ if and only if $(i,j)=(n+k,k),(n+k-1,k),\cdots$ and $\varphi_{i,j}=0$ if $(i,j)$ is not among those values and also $(i,j)\neq (1,0).$
\end{proof}
We now prove Proposition \ref{prop: intro_middle} with a more precise restatement. The result is stated for knots in $S^3$, but a similar result should hold for knots in rational homology spheres as well.
\begin{proposition}\label{prop: app_middle}
For a knot $K\subset S^3,$ when $n\geq 2g,$ as a quotient complex of the filtered mapping cone (for any surgery), $A_{\ceil{n/2}}$ is filtered homotopy equivalent to $\operatorname{CFK}^\infty(S^3,K).$
\end{proposition}
\begin{proof}
When $n\geq 2g,$ $\ceil{n/2}\geq g.$ Note that all the generators of $\operatorname{CFK}^\infty(S^3,K)$ are within the region of $-g\leq i+j \leq g.$ It is then straightforward to check using \eqref{eq: filtration_s3_1} and \eqref{eq: filtration_s3_2} that this region is filtered.
\end{proof}
Finally, let us analyze the behavior of the complex $X^\infty_n(K)$ when $n\geq 2g.$ Comparing $X^\infty_n(K)$ and $X^\infty_{n+1}(K)$, we can identify $A_s$ in both complexes for $s\leq \floor{n/2}.$ Under this identification, one can check that $\mathcal{J}(y_s) - \mathcal{J}(h_s(y_s))$ is constant while $\mathcal{J}(y_s) - \mathcal{J}(v_s(y_s))$ increases as $n$ increases, and the $\mathcal{I}$ filtration shift is constant. At the same time, we can identify $A_s \subset X^\infty_n(K)$ with $A_{s+1} \subset X^\infty_{n+1}(K)$ for $s\geq \floor{n/2}+1 > g$; under this identification, the filtration shift from the $v$ map is similarly constant while the filtration shift from the $h$ map increases as $n$ increases. Furthermore, by Proposition \ref{prop: app_middle}, the ``middle'' complex $A_{\ceil{(n+1)/2}}$ is simply a copy of $\operatorname{CFK}^\infty(S^3,K)$ (with some differentials pointing to the rest of the complex).
Therefore, when $n\geq 2g$, as $n$ increases by $1$, we conclude that two things happen to $X^\infty_n(K)$:
\begin{itemize}
\item certain edges are extended;
\item a copy of $\operatorname{CFK}^\infty(S^3,K)$ is added to the ``middle'' of the complex.
\end{itemize}
Aside from these two changes, the complex retains the same ``shape'' as $n$ increases. We name this phenomenon \emph{stabilization}. For example, the whole family described by Proposition \ref{prop: t23_n1} can be seen as the result of continued stabilizations. In general, with a slightly more careful analysis, one should be able to pin down the exact behavior of a given complex during stabilization.
\bibliographystyle{amsalpha}
\section{Introduction}
Here we consider the following family of non-autonomous nonlinear second-order differential equations
\begin{equation}\label{eq:eq1}
y_{zz}+f(z,y) y_{z}^{2}+g(z,y)y_{z}+h(z,y)=0,
\end{equation}
where $f$, $g$ and $h$ are arbitrary sufficiently smooth functions. We assume that $gh\not\equiv0$ and that $f^{2}+g_{y}^{2}+h_{yy}^{2}\neq0$, i.e. we exclude the linear subcase of \eqref{eq:eq1} from the consideration.
Equations from family \eqref{eq:eq1} often appear in numerous applications in mechanics, physics and so on \cite{Guckenheimer1983,Andronov}. Therefore, various aspects of integrability of \eqref{eq:eq1} have been studied in a number of works (see, e.g., \cite{Duarte,Meleshko2010,Muriel2009,Muriel2010,Muriel2011,Muriel2011a,Meleshko2012,Muriel2018,Nucci2010,Nucci2010a,Bagderina2013,Bagderina2016,Guha2019,Gine2019a,Gine2019b}). For example, in \cite{Muriel2009,Muriel2011,Meleshko2012,Muriel2018} authors considered applications of several linearizing transformations and $\lambda$-symmetries for finding first integrals of equations from family \eqref{eq:eq1}. In particular, in \cite{Muriel2009} it was shown that equations admitting a linear with respect to the first derivative first integral form exactly the same class as equations linearized to the Laguerre normal form of linear second order differential equations (the latter class was obtained in \cite{Duarte}). Authors of \cite{Muriel2009} also demonstrated that equations from the corresponding class possess a certain $\lambda$-symmetry and there is a subclass of completely integrable equations with two independent first integrals. In \cite{Muriel2010,Muriel2011,Muriel2011a,Meleshko2012} various connections between linearizability of second-order differential equations and the existence of certain first integrals, in particular rational ones, were studied. Authors of \cite{Nucci2010,Nucci2010a} applied the Jacobi last multiplier approach for studying integrability of \eqref{eq:eq1}, while in \cite{Bagderina2013,Bagderina2016} equivalence problems via point transformations were studied. Connections via nonlocal transformations between equations from \eqref{eq:eq1} and various Painlev\'e type equations were considered in \cite{Kudryashov2017a,Sinelshchikov2017,Sinelshchikov2018,Sinelshchikov2019}.
Here we deal with the linearization problem for \eqref{eq:eq1} via the generalized Sundman transformations, which have the form
\begin{equation}\label{eq:eq2}
w=F(z,y), \quad d\zeta=G(z,y)dz,
\end{equation}
where $F$ and $G$ are some sufficiently smooth functions satisfying $F_{y}G\neq0$. This problem was previously studied in \cite{Duarte,Meleshko2010}. While in \cite{Duarte} linearization to the Laguerre normal form of second order linear differential equation, namely to the equation $w_{\zeta\zeta}=0$, was considered, in \cite{Meleshko2010} it was shown that for the linearization via transformations \eqref{eq:eq2} it is insufficient to use the Laguerre normal form of a linear second order differential equation and connections between \eqref{eq:eq1} and
\begin{equation}\label{eq:eq5}
w_{\zeta\zeta}+\beta w_{\zeta}+\alpha w=0,
\end{equation}
were considered. Here $\alpha\neq0,\beta\neq0$ are arbitrary parameters. Although authors of \cite{Meleshko2010} studied the equivalence problem between \eqref{eq:eq1} and \eqref{eq:eq5} via \eqref{eq:eq2}, only a particular case of transformations \eqref{eq:eq2}, specifically the case of $F_{z}=0$, was considered. However, it is known that there are nonlinear oscillators, interesting from an applied point of view, that can be linearized via \eqref{eq:eq2} only if $F_{z}\neq0$ (see, e.g. \cite{Demina2019}). Therefore, in this work we consider the full linearization problem for \eqref{eq:eq1} and find all equations from family \eqref{eq:eq1} that can be linearized with the help of \eqref{eq:eq2} with $F_{z}\neq0$. We demonstrate that there are nontrivial examples of equations from \eqref{eq:eq1} that can be linearized only via \eqref{eq:eq2} with $F_{z}\neq0$. Furthermore, we show that each linearizable equation from \eqref{eq:eq1} admits a certain first integral, which can be explicitly constructed from the parameters of the studied equation and the linearizing transformations. This follows from the fact that linear equation \eqref{eq:eq5} possesses an autonomous first integral, and we believe that this is the first time that the corresponding first integrals have been obtained for linearizable equations from \eqref{eq:eq1}. We also separately consider the cases when this first integral is autonomous or a rational/polynomial function. Finally, let us remark that authors of \cite{Meleshko2010} also included a constant parameter $\gamma$ in \eqref{eq:eq5}, but it can be easily removed via the transformation $F\rightarrow F+\gamma/\alpha$, and, consequently, we do not take it into consideration.
Notice also that the linearization problem for family \eqref{eq:eq1} via a more general class of nonlocal transformations, in which the function $G$ in \eqref{eq:eq2} also depends on $y_{z}$, has been considered as well (see, e.g. \cite{Muriel2011,Muriel2011a,Chandrasekar2006} and references therein). For instance, in \cite{Muriel2011a} the linearization problem for \eqref{eq:eq1} via \eqref{eq:eq2} with $G(z,y,y_{z})=G_{1}(z,y)y_{z}+G_{2}(z,y)$ was studied. Authors of \cite{Muriel2011a} showed that equations from this linearizable class possess a certain rational first integral and a $\lambda$-symmetry, both of which can be calculated in terms of the coefficients of the corresponding equation.
The rest of this work is organized as follows. In the next Section we present the equivalence criterion for \eqref{eq:eq1} and \eqref{eq:eq5}. We also show how to construct a first integral for linearizable equation from \eqref{eq:eq1} and present several interesting subcases of linearizable equations from \eqref{eq:eq1}, namely Darboux integrable cases and equations with rational non-autonomous first integrals. In Section 3 we provide several examples of linearizable equations from \eqref{eq:eq1} including parametrically forced generalizations of the Duffing and Van der Pol equations. In the last section we briefly discuss and summarize our results.
\section{New integrability conditions}
Let us start with some preliminary results. First we introduce a canonical form of \eqref{eq:eq1} with respect to \eqref{eq:eq2}.
\begin{proposition}
Family of equations \eqref{eq:eq1} is closed with respect to \eqref{eq:eq2} and its canonical form is
\begin{equation}\label{eq:eq1a}
y_{zz}+g(z,y)y_{z}+h(z,y)=0.
\end{equation}
\end{proposition}
\textit{Proof.}
The closedness of \eqref{eq:eq1} with respect to \eqref{eq:eq2} can be checked by direct calculations. Thus, without loss of generality, one can assume that $f(z,y)=0$. Indeed, substituting the transformation $\tilde{y}=\int \exp\{\mathfrak{f}\}dy$, which is a particular case of \eqref{eq:eq2}, into \eqref{eq:eq1} we get
\begin{equation}
\tilde{y}_{zz}+\tilde{g}(z,\tilde{y})\tilde{y}_{z}+\tilde{h}(z,\tilde{y})=0,
\label{eq:eq3}
\end{equation}
where
\begin{equation}
\begin{gathered}
\mathfrak{f}=\int f dy, \quad \tilde{g}=g-2\mathfrak{f}_{z},\\
\tilde{h}={\rm e}^{\mathfrak{f}}h+(2\mathfrak{f}_{z}-g)\int \mathfrak{f}_{z} {\rm e}^\mathfrak{f}dy -\int ( \mathfrak{f}_{zz}+\mathfrak{f}_{z}^{2}){\rm e}^{ \mathfrak{f}} dy.
\label{eq:eq3a}
\end{gathered}
\end{equation}
In order to obtain results for family of equations \eqref{eq:eq1} from the results for \eqref{eq:eq3} we need to make the following substitutions
\begin{equation}
\begin{gathered}
y\rightarrow \int {\rm e}^{\mathfrak{f}}dy, \quad g\rightarrow g+2\mathfrak{f}_{z}, \\
h\rightarrow {\rm e}^{-\mathfrak{f}} \left(h+\int \mathfrak{f}_{z} {\rm e}^\mathfrak{f}dy g +\int ( \mathfrak{f}_{zz}+\mathfrak{f}_{z}^{2}){\rm e}^{ \mathfrak{f}} dy\right).
\label{eq:eq3b}
\end{gathered}
\end{equation}
This completes the proof. $\Box$
Consequently, in what follows we assume that $f(z,y)=0$ and study the equivalence problem for \eqref{eq:eq1a}.
Now let us show that \eqref{eq:eq5} has an autonomous first integral that can be used for constructing first integrals for linearizable equations from \eqref{eq:eq1a}. Indeed, it is easy to verify that the following expression
\begin{equation}\label{eq:eq6a}
I=\left(2w_{\zeta}+(\beta+\rho)w\right)^{\rho+\beta}\left(2w_{\zeta}+(\beta-\rho)w\right)^{\rho-\beta},
\end{equation}
where $\rho=\sqrt{\beta^{2}-4\alpha}\neq0$, is a first integral of \eqref{eq:eq5}. If $\rho=0$, instead of \eqref{eq:eq6a} one needs to use
\begin{equation}\label{eq:eq6b}
I=(2w_{\zeta}+\beta w)\exp\left\{\frac{\beta w}{2w_{\zeta}+\beta w}\right\}.
\end{equation}
Notice also that if $\rho$ is imaginary, i.e. $\beta^{2}-4\alpha<0$, first integral \eqref{eq:eq6a} can be transformed into a real form as follows
\begin{equation}
I=\ln\left\{\alpha w^{2}+\beta w w_{\zeta}+w_{\zeta}^{2}\right\}-\frac{2\beta}{\sqrt{-\rho^{2}}}\arctan\left\{\frac{2w_{\zeta}+\beta w}{\sqrt{-\rho^{2}}w}\right\}.
\label{eq:eq6c}
\end{equation}
\begin{remark}
Notice that from the results of \cite{Muriel2009} it follows that \eqref{eq:eq5} possesses two functionally independent first integrals, which are linear functions with respect to $w_{\zeta}$. These integrals are
\begin{equation}\label{eq:eq6d}
I_{1}={\rm e}^{\frac{\beta+\rho}{2}\zeta}(2w_{\zeta}+(\beta-\rho)w), \quad I_{2}={\rm e}^{\frac{\beta-\rho}{2}\zeta}(2w_{\zeta}+(\beta+\rho)w).
\end{equation}
However, \eqref{eq:eq6d} cannot be used for constructing first integrals of linearizable equations from \eqref{eq:eq1a}, since transformations \eqref{eq:eq2} do not map a non-autonomous first integral of \eqref{eq:eq5} into a first integral of a linearizable equation from \eqref{eq:eq1a}.
On the other hand, integral \eqref{eq:eq6a} can be easily obtained from \eqref{eq:eq6d} as $I=I_{1}^{\rho-\beta}I_{2}^{\rho+\beta}$. If one considers another function of $I_{1}$ and $I_{2}$ that gives an autonomous first integral of \eqref{eq:eq5}, one obtains an integral that is a function of \eqref{eq:eq6a}, since \eqref{eq:eq5} can admit at most one, up to functional dependence, autonomous first integral. In other words, any autonomous first integral of \eqref{eq:eq5} is a function of \eqref{eq:eq6a}. Thus, only \eqref{eq:eq6a} (or a function of it) can be used for constructing first integrals for linearizable equations from \eqref{eq:eq1a}. The cases of $\beta^{2}-4\alpha<0$ and $\beta^{2}-4\alpha=0$ can be treated in a similar way.
\end{remark}
Let us proceed with the main result of this section and obtain the necessary and sufficient conditions for \eqref{eq:eq1a} to be equivalent to \eqref{eq:eq5} via \eqref{eq:eq2}. Necessary conditions can be obtained by substituting the expressions for $w$, $w_{\zeta}$ and $w_{\zeta\zeta}$ in terms of $F$ and $G$ into \eqref{eq:eq5}. This yields
\begin{equation}
\label{eq:eq7}
y_{zz}+g y_{z}+h=0,
\end{equation}
provided that
\begin{equation}
\label{eq:eq7a}
GF_{yy}-F_{y}G_{y}=0,
\end{equation}
holds. Here
\begin{equation}
\label{eq:eq7b}
g=\frac{2GF_{yz}-F_{y}G_{z}-F_{z}G_{y}+\beta G^{2}F_{y}}{GF_{y}}, \quad h=\frac{GF_{zz}-F_{z}G_{z}+\beta G^{2}F_{z}+\alpha G^{3}F}{G F_{y}}.
\end{equation}
Therefore, equation \eqref{eq:eq1a} can be transformed into \eqref{eq:eq5} if it is of the form \eqref{eq:eq7} and \eqref{eq:eq7a} holds.
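The elimination leading to \eqref{eq:eq7a} and \eqref{eq:eq7b} can be reproduced symbolically. In the sketch below (Python with SymPy; the helper `Dz` and all variable names are ours), we pull \eqref{eq:eq5} back by \eqref{eq:eq2}, solve for $y_{zz}$ and compare the resulting coefficients with \eqref{eq:eq7a} and \eqref{eq:eq7b}:

```python
import sympy as sp

z, y, yp, ypp = sp.symbols('z y yp ypp')        # y' and y'' treated as symbols
alpha, beta = sp.symbols('alpha beta')
F = sp.Function('F')(z, y)
G = sp.Function('G')(z, y)

def Dz(expr):
    """Total z-derivative along a solution y(z): d/dz + yp*d/dy + ypp*d/dyp."""
    return sp.diff(expr, z) + yp*sp.diff(expr, y) + ypp*sp.diff(expr, yp)

# pull-back of w'' + beta*w' + alpha*w = 0 by w = F, d(zeta) = G dz
w1 = Dz(F)/G                                    # w_zeta
w2 = Dz(w1)/G                                   # w_zetazeta
E = w2 + beta*w1 + alpha*F

# solve for y'' so that the equation reads y'' + c2*y'^2 + g*y' + h = 0
rhs = -sp.expand(sp.solve(sp.Eq(E, 0), ypp)[0])
Fy, Fz, Gy, Gz = sp.diff(F, y), sp.diff(F, z), sp.diff(G, y), sp.diff(G, z)

# coefficient of yp**2 is exactly the left-hand side of (7a) over G*Fy
c2 = rhs.coeff(yp, 2)
print(sp.simplify(c2 - (G*sp.diff(F, y, 2) - Fy*Gy)/(G*Fy)))   # 0

# coefficient of yp reproduces g of (7b); the free term reproduces h of (7b)
g = (2*G*sp.diff(F, z, y) - Fy*Gz - Fz*Gy + beta*G**2*Fy)/(G*Fy)
h = (G*sp.diff(F, z, 2) - Fz*Gz + beta*G**2*Fz + alpha*G**3*F)/(G*Fy)
print(sp.simplify(rhs.coeff(yp, 1) - g))        # 0
print(sp.simplify(rhs.coeff(yp, 0) - h))        # 0
```

In particular, the $y_{z}^{2}$ term disappears precisely when constraint \eqref{eq:eq7a} holds.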
Conversely, if the functions $F$ and $G$ satisfy \eqref{eq:eq7a}, \eqref{eq:eq7b} then equation can be transformed into \eqref{eq:eq5} with the help of \eqref{eq:eq2}. As a result, compatibility conditions for the following overdetermined system of partial differential equations for the functions $F$ and $G$
\begin{equation}
\begin{gathered}
\label{eq:eq8}
GF_{yy}-G_{y}F_{y}=0, \\
gGF_{y}-2GF_{yz}+F_{y}G_{z}+F_{z}G_{y}-\beta G^{2}F_{y}=0,\\
h G F_{y}+G_{z}F_{z}-G F_{zz}-\alpha G^{3}F-\beta G^{2} F_{z}=0.
\end{gathered}
\end{equation}
give us the necessary and sufficient conditions for \eqref{eq:eq1a} to be equivalent to \eqref{eq:eq5} via \eqref{eq:eq2}.
Now our goal is to find explicit conditions on the functions $g$ and $h$ that provide compatibility of \eqref{eq:eq8} and, hence, define equations of the form \eqref{eq:eq1a} that are both linearizable and admit a certain first integral. Although direct computation of the compatibility conditions for \eqref{eq:eq8} is quite cumbersome, we can considerably simplify this system, which allows us to find the required compatibility conditions explicitly.
Solving the first equation from \eqref{eq:eq8} we get that
\begin{equation}
G=AF_{y},
\label{eq:eq9}
\end{equation}
where $A=A(z)\not\equiv 0$ is an arbitrary sufficiently smooth function. With the help of this relation, from \eqref{eq:eq8} we obtain
\begin{equation}
\begin{gathered}
\beta A F_{y}-g-\frac{A_{z}}{A}-\frac{F_{z}F_{yy}}{F_{y}^{2}}+\frac{F_{yz}}{F_{y}}=0,\vspace{0.1cm}\\
\alpha A^{2}FF_{y}+\beta A F_{z}-h-\left(\frac{A_{z}}{A}+\frac{F_{yz}}{F_{y}}\right)\frac{F_{z}}{F_{y}}+\frac{F_{zz}}{F_{y}}=0.
\label{eq:eq10}
\end{gathered}
\end{equation}
The first equation from \eqref{eq:eq10} can be integrated once with respect to $y$. As a result, we obtain
\begin{equation}
F_{z}+\left(\beta A F -m -C-\frac{A_{z}}{A}y\right)F_{y}=0,
\label{eq:eq10a}
\end{equation}
where $m_{y}=g$ and $C(z)$ is an arbitrary sufficiently smooth function.
With the help of \eqref{eq:eq10a}, from the second equation from \eqref{eq:eq10} we get
\begin{equation}
FF_{y}+\frac{1}{\alpha A^{2}}\left(m_{z}+C_{z}-h+\frac{A_{zz}}{A}y-\frac{A_{z}}{A}m+\frac{A_{z}}{A}C-2\frac{A_{z}^{2}}{A^{2}}y\right)=0.
\label{eq:eq10b}
\end{equation}
Introducing in \eqref{eq:eq10a}, \eqref{eq:eq10b} the following notations
\begin{equation}
p=m+C+\frac{A_{z}}{A}y, \quad l=\frac{\beta^{2}}{\alpha A^{2}}\left(\frac{A_{z}}{A}p+h-p_{z}\right), \quad L=\beta F,
\label{eq:eq10c}
\end{equation}
we have
\begin{equation}
\begin{gathered}
LL_{y}-l=0, \quad LL_{z}+A l L-lp=0.
\label{eq:eq11}
\end{gathered}
\end{equation}
This system is quite simple in comparison with \eqref{eq:eq8}. If we consider \eqref{eq:eq11} as an overdetermined system for the function $L$, the corresponding compatibility conditions give the necessary and sufficient conditions for linearization of \eqref{eq:eq1a} via \eqref{eq:eq2} provided that one takes into account notations \eqref{eq:eq10c}.
The compatibility conditions for \eqref{eq:eq11} split into four separate cases: the generic case and three particular cases. During the computation of the compatibility conditions we assume that $L\neq0$, $l\neq0$ and $A\neq0$ since otherwise transformations \eqref{eq:eq2} degenerate. To simplify further representation we introduce the following notations
\begin{equation}
P=l_{yy}, \quad Q=p_{yy}, \quad R=pl_{y}+lp_{y}-l_{z}.
\end{equation}
The generic case of the compatibility conditions is
\begin{equation}
\begin{gathered}
A^{2}Pl^{3}R_{y}R-P^{2}A^{4}l^{5}+ \left( 8\,A^{4}l^{4}l_{y}^{2}-7\,A^{2}R^{2}l^{2}l_{y}+R^{4} \right)P-\\
-A^{2}l_{y}l^{3}{R_{y}}^{2}+l_{y}R \left( 6 A^{2}l^{2}l_{y}-R^{2} \right) R_{y}-A^{2}l_{y}^{3}l \left( 16\,A^{2}l^{2}l_{y}-3\,R^{2} \right)=0, \vspace{0.2cm} \\
A^{3}PQl^{4}+ \left(2AR^{2}lp_{y}-A^{3}Rl^{2}l_{y}-A^{3}l^{3}R_{y}+ARlpR_{y}-AR^{3}-\right. \\ \left.-ARlR_{z}+R^{2}lA_{z}\right) P
-Al_{y}l \left( 4\,A^{2}l^{2}l_{y}-R^{2} \right) Q-l_{y} \left( l_{y}R-lR_{y} \right) \left( 6\,A^{3}ll_{y} -\right. \\ \left.-2ARp_{y}-ApR_{y}+AR_{{z}}-RA_{z} \right)=0,\vspace{0.2cm} \\
l \left( 5A^{2}Rl^{2}l_{y}-A^{2}l^{3}R_{y}-R^{3} \right) P_{y}-7A^{2}P^{2}l^{3}R+A^{2}Pl^{4}R_{yy}-ll_{y} \left( 4A^{2}l^{2}l_{y}-R^{2}\right) R_{yy}-\\-
\left( 4A^{2}l^{2}l_{y}+3R^{2} \right) \left( Rl_{y}-lR_{y} \right) P-3l_{y} \left( Rl_{y}-lR_{y} \right)\left( 4A^{2}ll_{y}^{2}-RR_{y} \right)=0,
\label{eq:cc_case1}
\end{gathered}
\end{equation}
while the function $L$ is given by
\begin{equation}
L=\frac {Al \left(lR_{y}l_{y}- PRl-Rl_{y}^{2} \right) }{A^{2}P{l}^{2}l_{y}-4\,A^{2}ll_{y}^{3}-PR^{2}+RR_{y}l_{y}}\,\,\,.
\label{eq:cc_case1_L}
\end{equation}
Let us briefly describe the computation of conditions \eqref{eq:cc_case1} and expression \eqref{eq:cc_case1_L}. We consider \eqref{eq:eq11} as an overdetermined system of equations for $L$ and apply the Riquier--Janet compatibility theory (see, e.g. \cite{Reid1996}) for computing the corresponding compatibility conditions. This is done by calculating various mixed partial derivatives of $L$ with respect to $z$ and $y$ and comparing them. The comparison of $L_{yz}$ and $L_{zy}$ leads to the expression for $L^{2}$ via $p$ and $l$. Then, with the help of this expression and the expressions for $L_{zyy}$, $L_{yyz}$ and $L_{yzy}$, we find expression \eqref{eq:cc_case1_L} and the first condition from \eqref{eq:cc_case1}. Further computing and comparing third-order mixed derivatives of $L$, we obtain the second condition from \eqref{eq:cc_case1}. Finally, with the help of the expressions for $L_{yyyz}$, $L_{zyyy}$ and $L_{yyzy}$, we find the last compatibility condition from \eqref{eq:cc_case1}. Computation of further mixed partial derivatives of $L$ does not lead to new compatibility conditions. In addition, to verify that all compatibility conditions have been obtained, we compared our results with those produced by the Rif package \cite{Reid1996}; the two coincide. Let us remark that below we do not provide details of the computation of the compatibility conditions since they are similar to those given above.
Now we need to consider particular cases of the compatibility conditions. First, we deal with the case when the denominator of \eqref{eq:cc_case1_L} vanishes. As a result, we get the following relations
\begin{equation}
\begin{gathered}
A^{2}Pl^{3}-4\,A^{2}l^{2}l_{y}^{2}+R^{2}l_{y}=0,\\
l^{2} \left( Plp-4\,pl_{y}^{2}+5\,Rl_{y}-lR_{y} \right) A^{2}-R^{2} \left( R-pl_{y} \right)=0,\\
\left( APp+AQl+2\,Al_{y}p_{y}-AR_{y}+A_{z}l_{y} \right) L^{4}-R_{z}L^{3}+\\
+l \left( 2\,Alp_{y}+2\,Apl_{y}-3\,AR+lA_{z} \right) L^{2}+l \left( 2\,A^{2}l^{2}+Rp \right) L-2\,Apl^{3}=0,\\
4\,l^{2}RA^{2} \left( A^{2}l^{2}+5\,Rp \right) l_{y}^{2}+5\,Ql_{y}A^{4}l^{6}-l^{4}R^{2}QA^{2}-A^{4}l^{7}Q_{y}+2\,R^{5}-\\
-2\,R \left( -2\,A^{2}Rl^{3}p_{y}+A^{2}l^{3}pR_{y}+6\,A^{2}R^{2}l^{2}+A^{2}l^{3}R_{z}-ARl^{3}A_{z}+2\,R^{3}p \right) l_{y}=0,\\
\left( 2\,Al_{zzy}+4\,A_{z}l_{zy}+2\,l_{y}A_{zz}\right) L^{4}+ \left(2 l_{zzz}-A^{2}Plp-A^{2}p l_{y}^{2}-2\,lp_{zzy}-\right. \\
-2\,pl_{zzy}-2\,l_{y}p_{zz}-4\,l_{z}p_{zy}-4\,p_{z}l_{zy}-2\,p_{y}l_{zz}\right)L^{3}+\left(APl{p}^{2}+AQl^{2}p+3Alpl_{y}p_{y}+\right. \\ \left. +A{p}^{2}l_{y}^{2}-4\,Al^{2}p_{zy}+5\,Alpl_{zy}-2\,All_{y}p_{z}-6\,All_{z}p_{y}-Apl_{y}l_{z}-\right.\\
\left.-2\,l^{2}A_{z}p_{y}+8\,lpA_{z}l_{y}+8\,All_{zz}+6\,A{l_{z}}^{2}+2l^{2}A_{zz}+10lA_{z}l_{z}\right) L^{2}-\\
-l \left( 4 A^{2}l^{2}p_{y}+A^{2}lpl_{y}-24A^{2}ll_{z}-12Al^{2}A_{z}+6lpp_{zy}+6 p^{2}l_{zy}+6 pl_{y}p_{z}+\right. \\ \left.
+6pl_{z}p_{y}-6pl_{zz} \right) L-l^{2} \left(7\,Alpp_{y}-12\,A^{3}l^{2}+3Ap^{2}l_{y}+2Alp_{z}-9Apl_{z}-2lpA_{z} \right)=0,
\label{eq:cc_case2}
\end{gathered}
\end{equation}
and $L$ is given by
\begin{equation}
\begin{gathered}
L=Al^{2} \Big( 2\,A^{2}l^{2} \left( A^{2}l^{2}-5\,Rp\right) l_{y}^{2}-RQl^{4}A^{2}+ \left(pR_{y}+R_{z} -2\,Rp_{y} \right) l_{y}A^{2}l^{3}+\\+5 A^{2}l^{2}l_{y}R^{2} -Al^{3}l_{y}A_{z}R+R^{3} \left( 2 pl_{y}-R \right) \Big) \Big (Ql_{y}A^{4}l^{6}-2\,A^{2}l^{2}R \left( A^{2}l^{2}+5\,Rp \right)l_{y}^{2}-
\\-A^{2}l^{4}QR^{2} +R l_{y}A^{2}l^{3}( pR_{y}+R_{z}-2Rp_{y} )+6A^{2}l^{2}l_{y}R^{3}-Al^{3}l_{y}A_{z}R^{2}+R^{4} ( 2pl_{y}-R) \Big)^{-1}.
\label{eq:cc_case2_L}
\end{gathered}
\end{equation}
The next case corresponds to the vanishing of the denominator of \eqref{eq:cc_case2_L}. Consequently, we get that
\begin{equation}
\begin{gathered}
l^{2} \left( Pl-4\,{l_{y}}^{2} \right) A^{2}+l_{y}R^{2}=0, \quad l^{2} \left( Ql^{2}-4\,Rl_{y} \right) A^{2}+R^{3}=0,\vspace{0.1cm} \\
l^{2} \left( Plp+Ql^{2}-4\,p{l_{y}}^{2}+Rl_{y}-lR_{y}\right) A^{2}+l_{y}pR^{2}=0,\vspace{0.1cm}\\
l^{2} \left( Plp^{2}+Ql^{2}p-4\,p^{2}l_{y}^{2}+2\,Rlp_{y}+6\,Rpl_{y}-lpR_{y}-R^{2}-lR_{z}\right) A^{2}-\\-2\,l_{y}A^{4}l^{4}+A_{z}Al^{3}R+pR^{2} \left( pl_{y}-R \right)=0,
\label{eq:cc_case3}
\end{gathered}
\end{equation}
while $L$ satisfies the equation
\begin{equation}
\begin{gathered}
L^{2}-\frac{R}{A l_{y}} +\frac {l^{2}}{l_{y}}=0.
\label{eq:cc_case3_L}
\end{gathered}
\end{equation}
Finally, in the case of $l_{y}=0$ we obtain
\begin{equation}
\begin{gathered}
l_{y}=0, \quad l_{z}^{3}-3\,p_{y}l_{z}^{2}l+3p_{y}^{2}l_{z}l^{2}-l^{3} \left( A^{2}Ql+p_{y}^{3} \right)=0,\\
pl_{z}^{3}- \left(A^{2}l^{2}+3 lpp_{y} \right) l_{z}^{2}+ \left(3l^{2}pp_{y}^{2}-A^{2}l^{3}p_{y}-Al^{3}A_{z} \right) l_{z}+\\+l^{3} \left( lp_{y}^{2}-lp_{zy}+l_{zz} \right)A^{2}+A_{z}p_{y}Al^{4}-p_{y}^{3}pl^{3}=0,
\label{eq:cc_case4}
\end{gathered}
\end{equation}
and
\begin{equation}
\begin{gathered}
L=\frac{Al^{2}}{lp_{y}-l_{z}}.
\label{eq:cc_case4_L}
\end{gathered}
\end{equation}
We do not need to consider the case of $lp_{y}-l_{z}=0$ separately, since it results either in degeneration of transformations \eqref{eq:eq2} or in subcases of \eqref{eq:cc_case2} or \eqref{eq:cc_case3}.
The above results can be summarized as follows:
\begin{theorem}
Equation \eqref{eq:eq1a} can be transformed into \eqref{eq:eq5} via \eqref{eq:eq2} if and only if one of the sets of correlations \eqref{eq:cc_case1}, \eqref{eq:cc_case2}, \eqref{eq:cc_case3} or \eqref{eq:cc_case4} holds.
\label{th:th1}
\end{theorem}
\begin{remark}
In order to check compatibility conditions for a particular member of \eqref{eq:eq1a} one needs to calculate the values of the functions $p$ and $l$ via $g$ and $h$ with the help of the relations \eqref{eq:eq10c} taking into account that $m_{y}=g$. Then, one needs to substitute the corresponding values of the functions $p$ and $l$ into one of the sets of the compatibility conditions \eqref{eq:cc_case1}, \eqref{eq:cc_case2}, \eqref{eq:cc_case3} or \eqref{eq:cc_case4} and check whether they hold at some values of $A\neq0$ and $C$. We present a detailed algorithm for verifying compatibility conditions at the beginning of the next section.
\end{remark}
As an immediate consequence of Theorem \ref{th:th1} we get
\begin{corollary}
If one of the sets of correlations \eqref{eq:cc_case1}, \eqref{eq:cc_case2}, \eqref{eq:cc_case3} or \eqref{eq:cc_case4} holds then equation \eqref{eq:eq1a} admits the following first integral
\begin{equation}
I=A^{-2\rho}\left(2y_{z}+2p-\frac{\beta-\rho}{\beta}AL\right)^{\rho+\beta}\left(2y_{z}+2p-\frac{\beta+\rho}{\beta}AL\right)^{\rho-\beta},
\label{eq:fi_general}
\end{equation}
where $p$ is given in \eqref{eq:eq10c}, $\rho=\sqrt{\beta^{2}-4\alpha}\neq0$ and $L=\beta F$. If $\rho=0$ then the first integral is
\begin{equation}
I=\frac{2y_{z}+2p-AL}{A}\exp\left\{\frac{AL}{2y_{z}+2p-AL}\right\}.
\end{equation}
\label{cr:cr1}
\end{corollary}
It is interesting to understand when transformations \eqref{eq:eq2} keep first integral \eqref{eq:eq6a} autonomous. One can show that this is true if and only if $G_{z}=F_{z}=0$. As a consequence, we have that the following statement holds
\begin{corollary}
Equation of the form
\begin{equation*}
y_{zz}+g(y)y_{z}+h(y)=0,
\end{equation*}
is integrable with the first integral
\begin{equation*}
I=\left(2\beta y_{z}+(\rho+\beta)(m+\mu)\right)^{\rho+\beta}\left(2\beta y_{z}+(\rho-\beta)(m+\mu)\right)^{\rho-\beta},
\end{equation*}
if
\begin{equation*}
\beta^{2}(hg_{y}-gh_{y})+\alpha g^{3}=0,
\end{equation*}
where $m_{y}=g$ and $\mu$ is an arbitrary constant.
\end{corollary}
Let us also consider the case when transcendental first integral \eqref{eq:fi_general} becomes a rational one. One can show that this is true if the following relation holds: $4\alpha=(1-r^{2})\beta^{2}$, where $r\neq0$ is a rational number. As a consequence, we have that
\begin{corollary}
If one of the sets of correlations \eqref{eq:cc_case1}, \eqref{eq:cc_case2}, \eqref{eq:cc_case3} or \eqref{eq:cc_case4} holds and
\begin{equation}
4\alpha=(1-r^{2})\beta^{2}, \quad r=\frac{n}{k}, \quad k,n \in \mathbb{Z}\setminus\{0\},
\end{equation}
then equation \eqref{eq:eq1a} admits the following rational first integral
\begin{equation}
I=A^{-2n}\left(2y_{z}+2p-(1-r)AL\right)^{n+k}\left(2y_{z}+2p-(1+r)AL\right)^{n-k},
\label{eq:fi_rational}
\end{equation}
where $p$ is given in \eqref{eq:eq10c} and $L=\beta F$.
\end{corollary}
Thus, in this section we have explicitly found conditions on the functions $g$ and $h$ that give the linearization criterion for \eqref{eq:eq1a} via generalized Sundman transformations. We have also shown that once an equation from \eqref{eq:eq1} is linearizable it possesses a certain first integral. Moreover, we have isolated linearizable families of equations that admit an autonomous first integral or a rational one.
\section{Examples}
In this section we provide several new examples of linearizable equations of form \eqref{eq:eq1}. First, we demonstrate that there are indeed equations from family \eqref{eq:eq1} with coefficients satisfying the conditions of Theorem \ref{th:th1} that nevertheless cannot be linearized via \eqref{eq:eq2} with $F_{z}=0$. Then, we provide several examples of both non-autonomous and autonomous nonlinear oscillators, including generalizations of the Duffing and Van der Pol oscillators, that can be linearized via \eqref{eq:eq2} with $F_{z}\neq0$.
Let us present an algorithm for verifying that a particular member of \eqref{eq:eq1a} can be linearized with the help of \eqref{eq:eq2}. It consists of the following three steps.
First, using \eqref{eq:eq10c} and taking into account that $m_{y}=g$ we calculate the values of the functions $p$ and $l$ via $g$ and $h$. Second, we substitute the corresponding values of the functions $p$ and $l$ into one of compatibility conditions \eqref{eq:cc_case1}, \eqref{eq:cc_case2}, \eqref{eq:cc_case3} or \eqref{eq:cc_case4}. As a result of this substitution, we obtain polynomials in $y$, whose coefficients are functions of $z$. Equating coefficients of these polynomials to zero, we get a system of equations for the functions $A$ and $C$. If this system is satisfied for any values of $A\neq0$ and $C$, then the corresponding equation from \eqref{eq:eq1a} is linearizable. Third, if one of the sets of the compatibility conditions is satisfied, we calculate the value of $L$ via one of the relations \eqref{eq:cc_case1_L}, \eqref{eq:cc_case2_L}, \eqref{eq:cc_case3_L}, \eqref{eq:cc_case4_L} and then it is easy to find the explicit form of the linearizing transformations with the help of \eqref{eq:eq9} and \eqref{eq:eq10c}.
\textbf{Example 1}. Let us consider the following equation from family \eqref{eq:eq1a}
\begin{equation}\label{eq:ex3_1}
y_{zz}+(\beta e^{-\delta z}y-2\delta)y_{z}+\frac{e^{-\delta z}}{2}y^{2}(\alpha e^{-\delta z}y-2\beta\delta)=0.
\end{equation}
In order to check that this equation can be linearized via \eqref{eq:eq2} we use the algorithm presented above. With the help of \eqref{eq:eq10c} we find that $p=\frac{\beta}{2} e^{-\delta z} y^{2}-\delta y-\frac{\beta\delta^{2}}{\alpha}e^{\delta z}$ and $l=\frac{2\beta^{2}}{\alpha}e^{-2\delta z}y(\alpha e^{-2\delta z}y^{2}-2\delta^{2})$. Substituting these values of $p$ and $l$ into \eqref{eq:cc_case1} and equating the coefficients of like powers of $y$ to zero, we find that $A=e^{\delta z}/2$ and $C=-\beta\delta^{2}e^{\delta z}/\alpha$. Then from \eqref{eq:cc_case1_L}, \eqref{eq:eq9} and \eqref{eq:eq10c} we get that $L=\beta (e^{-2\delta z}y^{2}-2\delta^{2}/\alpha)$, $F=e^{-2\delta z}y^{2}-2\delta^{2}/\alpha$ and $G={\rm e}^{-\delta z}y$. As a result, we have that \eqref{eq:ex3_1} can be linearized via \eqref{eq:eq2} and its general solution can be presented in the following parametric form
\begin{equation}\label{eq:ex3_2}
y=\pm e^{\delta z}\left(w+\frac{2\delta^{2}}{\alpha}\right)^{1/2}, \quad
z=\pm \bigintssss \frac{d\zeta}{\left(w+\frac{2\delta^{2}}{\alpha}\right)^{1/2}},
\end{equation}
where $w$ is the general solution of \eqref{eq:eq5}.
From Corollary \ref{cr:cr1} it follows that \eqref{eq:ex3_1} possesses the first integral
\begin{equation}\label{eq:ex3_3}
\begin{gathered}
I={\rm e}^{-2\rho\delta z}\left(2y_{z}-2\delta y+\beta e^{-\delta z}y^{2}-\frac{2\beta\delta^{2}}{\alpha}e^{\delta z}-\frac{e^{\delta z}}{2\alpha}(\beta-\rho)(\alpha e^{-2\delta z }y^{2}-2\delta^{2})\right)^{\rho+\beta}\\
\left(2y_{z}-2\delta y+\beta e^{-\delta z}y^{2}-\frac{2\beta\delta^{2}}{\alpha}e^{\delta z}-\frac{e^{\delta z}}{2\alpha}(\beta+\rho)(\alpha e^{-2\delta z }y^{2}-2\delta^{2})\right)^{\rho-\beta},
\end{gathered}
\end{equation}
if $\rho\neq0$ and the first integral
\begin{equation}\label{eq:ex3_3a}
\begin{gathered}
I=e^{\delta z}\left(4y_{z}-4\delta y+\beta \frac{y^{2}}{e^{\delta z}}-\frac{8\delta^{2}}{\beta}e^{\delta z}\right)
\exp\left\{\frac{8\delta^{2}e^{2\delta z}-\beta^{2}y^{2}}{8\delta^{2}e^{2\delta z}-\beta^{2}y^{2}+4\beta \delta e^{\delta z}y-4 \beta e^{\delta z} y_{z}}\right\},
\end{gathered}
\end{equation}
if $\rho=0$.
Equation \eqref{eq:ex3_1} can be considered as a non-autonomous generalization of the damped Duffing oscillator. One can show that equation \eqref{eq:ex3_1} cannot be linearized via \eqref{eq:eq2} with $F_{z}=0$ and that it possesses only one Lie point symmetry. Therefore, \eqref{eq:ex3_1} provides an example of an equation that can neither be integrated with the help of the classical Lie approach nor linearized with the help of the restricted case of transformations \eqref{eq:eq2}.
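As an independent check of this example, one can substitute the pair $F$, $G$ found above into formulas \eqref{eq:eq7a} and \eqref{eq:eq7b} and confirm that they reproduce the coefficients of \eqref{eq:ex3_1}. A possible SymPy sketch (variable names are ours):

```python
import sympy as sp

z, y = sp.symbols('z y')
alpha, beta, delta = sp.symbols('alpha beta delta', nonzero=True)

# linearizing pair found above for (ex3_1)
F = sp.exp(-2*delta*z)*y**2 - 2*delta**2/alpha
G = sp.exp(-delta*z)*y

Fy, Fz = sp.diff(F, y), sp.diff(F, z)
Gy, Gz = sp.diff(G, y), sp.diff(G, z)

# constraint (7a): G*F_yy - F_y*G_y = 0
print(sp.simplify(G*sp.diff(F, y, 2) - Fy*Gy))  # 0

# formulas (7b) must give back the coefficients of (ex3_1)
g = (2*G*sp.diff(F, z, y) - Fy*Gz - Fz*Gy + beta*G**2*Fy)/(G*Fy)
h = (G*sp.diff(F, z, 2) - Fz*Gz + beta*G**2*Fz + alpha*G**3*F)/(G*Fy)
print(sp.simplify(g - (beta*sp.exp(-delta*z)*y - 2*delta)))     # 0
print(sp.simplify(h - sp.exp(-delta*z)*y**2*(alpha*sp.exp(-delta*z)*y - 2*beta*delta)/2))  # 0
```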
\begin{figure}[!t]
\center
\includegraphics[width=0.9\textwidth]{fig1.pdf}
\caption{Projections of \eqref{eq:ex2_2} on the plane $z=c$ for different values of $c$: $s(z)=\sin z$, $\alpha=-\beta=1$ (left figure); $s(z)={\rm e}^{-2z}$, $\alpha=10$, $\beta=-5$ (middle figure); $s(z)=\sin z \cos (\pi z)$, $\alpha=10$, $\beta=1$ (right figure). }
\label{fig1}
\end{figure}
\textbf{Example 2.}
Consider a family of parametrically forced Duffing oscillators with linear damping
\begin{equation}
y_{zz}+(b_{1} y+b_{0}(z))y_{z}+a_{3}y^{3}+a_{2}(z) y^{2}+a_{1}(z) y=0,
\label{eq:ex2_1a}
\end{equation}
where $b_{1}\neq0$ and $a_{3}\neq0$ are certain parameters and $b_{0}$, $a_{2}$ and $a_{1}$ are certain functions of $z$. Now we need to check whether the coefficients of \eqref{eq:ex2_1a} satisfy one of the sets of the compatibility conditions. For the sake of simplicity, we assume that $C=0$. The case of $C\neq0$ can be treated in the same way.
According to the algorithm presented above, at the first step we find that
\begin{equation}
\begin{gathered}
\label{eq:ex2_1b}
p=\frac{A_{z}}{A}y+\frac{b_{1}y^{2}}{2}+b_{0}y, \\
l=\frac{\beta^{2}}{2\alpha A^{4}}\left[2(a_{3}y^{3}+a_{2}y^{2}-b_{0,z}y+a_{1}y)A^{2}+(b_{1}y^{2}+2b_{0}y)AA_{z}-2AA_{zz}y+4A_{z}^{2}y\right].
\end{gathered}
\end{equation}
Substituting \eqref{eq:ex2_1b} into \eqref{eq:cc_case1} and collecting the coefficients of like powers of $y$, we find that if
\begin{equation}\label{eq:ex2_1c}
b_{1}=2\beta, \quad b_{0}=3s, \quad a_{3}=2\alpha, \quad a_{2}=2\beta s, \quad a_{1}=2s^{2}+s_{z},\quad A(z)={\rm e}^{-2 \int s(z)dz},
\end{equation}
then conditions \eqref{eq:cc_case1} are satisfied. Here $s=s(z)$ is an arbitrary function. As a consequence, with the help of \eqref{eq:cc_case1_L}, \eqref{eq:eq10c} and \eqref{eq:eq9}, we get that
$F={\rm e}^{2\int s dz}y^{2}$ and $G=2y$.
\begin{figure}[!t]
\center
\includegraphics[width=0.3\textwidth]{L13_ps_1.pdf}
\includegraphics[width=0.3\textwidth]{L13_ps_2.pdf}
\includegraphics[width=0.3\textwidth]{L13_ps_3.pdf}
\caption{Plots of one parametric families of solutions of \eqref{eq:ex2_1} for different forcing functions: $s(z)=z/(z^{2}+1)$, $\alpha=4$, $\beta=5$ (left figure); $s(z)=2z$, $\alpha=4$, $\beta=5$ (middle figure); $s(z)=\tan z$, $\alpha=3.8$, $\beta=4$ (right figure).}
\label{fig2}
\end{figure}
As a result, we have that the equation
\begin{equation}
y_{zz}+(2\beta y+3s)y_{z}+2\alpha y^{3}+2 \beta s y^{2}+(2s^{2}+s_{z}) y=0,
\label{eq:ex2_1}
\end{equation}
can be linearized with the help of \eqref{eq:eq2}.
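This claim can again be confirmed by substituting $F={\rm e}^{2\int s dz}y^{2}$ and $G=2y$ into \eqref{eq:eq7a} and \eqref{eq:eq7b}. In the SymPy sketch below we write $S$ for an antiderivative of the forcing $s$ (our notation, not from the text), so that $s=S_{z}$:

```python
import sympy as sp

z, y = sp.symbols('z y')
alpha, beta = sp.symbols('alpha beta')
S = sp.Function('S')(z)        # an antiderivative of the forcing: S' = s(z)
s = sp.diff(S, z)

# linearizing pair for (ex2_1)
F = sp.exp(2*S)*y**2
G = 2*y

Fy, Fz = sp.diff(F, y), sp.diff(F, z)
Gy, Gz = sp.diff(G, y), sp.diff(G, z)

# constraint (7a)
print(sp.simplify(G*sp.diff(F, y, 2) - Fy*Gy))  # 0

# formulas (7b) recover the coefficients of (ex2_1), with s_z = S''
g = (2*G*sp.diff(F, z, y) - Fy*Gz - Fz*Gy + beta*G**2*Fy)/(G*Fy)
h = (G*sp.diff(F, z, 2) - Fz*Gz + beta*G**2*Fz + alpha*G**3*F)/(G*Fy)
print(sp.simplify(g - (2*beta*y + 3*s)))        # 0
print(sp.simplify(h - (2*alpha*y**3 + 2*beta*s*y**2 + (2*s**2 + sp.diff(S, z, 2))*y)))  # 0
```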
From Corollary \ref{cr:cr1} it follows that \eqref{eq:ex2_1} has the following first integral if $\rho\neq0$
\begin{equation}
I={\rm e}^{4\rho\int s dz} \left(2y_{z}+2sy+(\beta+\rho)y^{2}\right)^{\rho+\beta}\left(2y_{z}+2sy+(\beta-\rho)y^{2}\right)^{\rho-\beta},
\label{eq:ex2_2}
\end{equation}
and if $\rho=0$ this first integral is
\begin{equation}
I={\rm e}^{2\int sdz}\left(2y_{z}+2sy+\beta y^{2}\right)\exp\left\{\frac{\beta y^{3}}{2y_{z}+2s y+\beta y^{2}}\right\}.
\label{eq:ex2_2a}
\end{equation}
The general solution of \eqref{eq:ex2_1} can be presented as follows
\begin{equation}
y=\pm\sqrt{w}{\rm e}^{\int s dz},
\label{eq:ex2_3}
\end{equation}
where $z$ is given by
\begin{equation}
\pm \bigintssss \frac{d\zeta}{2\sqrt{w}}=\bigintssss {\rm e}^{-\int s dz}dz,
\label{eq:ex2_4}
\end{equation}
and $w$ is the general solution of \eqref{eq:eq5}.
Let us discuss some properties of solutions of \eqref{eq:ex2_1}. In Fig.~\ref{fig1} we demonstrate projections of \eqref{eq:ex2_2} on the plane $z=\mbox{const}$ for different values of the forcing function $s$ and of the other parameters. Notice that for the left figure the integration constant corresponds to $y(0)=1, y_{z}(0)=0$, while for the other cases it corresponds to $y(0)=1/2, y_{z}(0)=1/5$. One can see that equation \eqref{eq:ex2_1} has various types of periodic solutions even if the forcing function is not periodic. Furthermore, by varying the integration constant one can show that these periodic trajectories are not isolated in the phase space, that is, they are not limit cycles.
One-parametric families of solutions of \eqref{eq:ex2_1} can be easily obtained from \eqref{eq:ex2_2} and \eqref{eq:ex2_2a} as follows
\begin{equation}
y=\frac{2\exp\{-\int s(z)dz\}}{(\beta\pm\rho)\int \exp\{-\int s(z)dz\}dz+C_{1} },
\label{eq:ex2_5}
\end{equation}
where $C_{1}$ is an arbitrary constant. We demonstrate plots of \eqref{eq:ex2_5} for different forcing functions and values of parameters in Fig.\ref{fig2}. One can see that depending on the forcing functions these solutions may be solitary or periodic waves.
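As a sanity check, the family \eqref{eq:ex2_5} can be substituted into \eqref{eq:ex2_1} and the residual evaluated numerically. The sketch below (Python/\texttt{sympy}) uses the data of the left panel of Fig.~\ref{fig2}, i.e. $s(z)=z/(z^{2}+1)$, $\alpha=4$, $\beta=5$, and assumes the relation $\rho=\sqrt{\beta^{2}-4\alpha}=3$; the value $C_{1}=3$ is an arbitrary choice of the integration constant.

```python
import sympy as sp

z, C1 = sp.symbols('z C1')
alpha, beta = sp.Rational(4), sp.Rational(5)
rho = sp.sqrt(beta**2 - 4*alpha)                # assumed relation, = 3 here
s = z/(z**2 + 1)                                # forcing of the left panel of Fig. 2
E = 1/sp.sqrt(z**2 + 1)                         # = exp(-int s dz) for this forcing
y = 2*E/((beta + rho)*sp.integrate(E, z) + C1)  # the family (eq:ex2_5), "+" branch

# residual of the oscillator equation (eq:ex2_1) on the family
residual = (sp.diff(y, z, 2) + (2*beta*y + 3*s)*sp.diff(y, z)
            + 2*alpha*y**3 + 2*beta*s*y**2 + (2*s**2 + sp.diff(s, z))*y)
f = sp.lambdify(z, residual.subs(C1, 3))
assert all(abs(f(t)) < 1e-9 for t in (0.1, 1.0, 3.0))
```

The ``$-$'' branch of \eqref{eq:ex2_5} can be checked in the same way by replacing $\beta+\rho$ with $\beta-\rho$.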
\begin{figure}[!t]
\center
\includegraphics[width=0.4\textwidth,height=0.3\textheight]{L_2_5_fi.pdf}
\includegraphics[width=0.4\textwidth,height=0.3\textheight]{L_2_5_numerical.pdf}
\caption{Projections of \eqref{eq:ex4_2} on the plane $y=c$ for different values of $c$ (left figure) and two numerical solutions of the Cauchy problem for \eqref{eq:ex4_1} (right figure) at $\alpha=4$, $\beta=5$, $\mu=-1$ and $s(z)={\rm e}^{\sin z}$.}
\label{fig3}
\end{figure}
\textbf{Example 3.} Let us consider the following family of non-autonomous nonlinear oscillators
\begin{equation}
y_{zz}+\left(b_{2} y^{2}+b_{0}\right)y_{z}+a_{5}y^{5}+a_{3}y^{3}+a_{2} y^{2}+a_{1}y=0,
\label{eq:ex4_1a}
\end{equation}
where $b_{i}=b_{i}(z),\,i=0,2$ and $a_{j}=a_{j}(z),\,j=1,2,3,5$ are some functions and $b_{2},\,a_{5}\not\equiv0$. Equation \eqref{eq:ex4_1a} can be considered as a parametrically forced $\phi^{6}$--Van der Pol oscillator or as a parametrically forced extended Duffing--Van der Pol system (see, e.g. \cite{Siewe2004,Yu2009}).
Now we find a case of \eqref{eq:ex4_1a}, whose coefficients satisfy \eqref{eq:cc_case1}. First, we compute the values of $l$ and $p$
\begin{equation}
\begin{gathered}
p=\frac{A_{z}y}{A}+\frac{b_{2}y^{3}}{3}+b_{0}y+C, \quad l=\frac{\beta^{2}}{3\alpha A^{4}} \Big[(b_{2}y^{3}+3b_{0}y+3C)AA_{z}+\\
+ 3(2A_{z}^{2}-AA_{zz})y+(3a_{5}y^{5}+(3a_{3}-b_{2,z})y^{3}+3a_{2}y^{2}+3(a_{1}-b_{0,z})y-3C_{z})A^{2}\Big].
\label{eq:ex4_1b}
\end{gathered}
\end{equation}
Second, we substitute these values into \eqref{eq:cc_case1} and equate coefficients at the same powers of $y$. As a consequence, we obtain that
\begin{equation}
\begin{gathered}
b_{2}=3\beta s, \quad b_{0}=\frac{5s_{z}}{3s}, \quad a_{5}=3\alpha s^{2}, \quad a_{3}=2\beta s_{z}, \quad a_{2}=3\mu,\\
a_{1}=\frac{2s_{zz}}{3s}, \quad A=1/s, \quad C=\beta \mu /(\alpha s).
\label{eq:ex4_1c}
\end{gathered}
\end{equation}
Here $s(z)\not\equiv 0$ is an arbitrary sufficiently smooth function and $\mu$ is an arbitrary parameter.
Third, using \eqref{eq:ex4_1c}, \eqref{eq:cc_case1_L}, \eqref{eq:eq9} and \eqref{eq:eq10c} we get that the equation
\begin{equation}
y_{zz}+\left(3\beta s y^{2}+\frac{5s_{z}}{3s}\right)y_{z}+3\alpha s^{2}y^{5}+2\beta s_{z}y^{3}+3\mu y^{2}+\frac{2s_{zz}}{3s}y=0,
\label{eq:ex4_1}
\end{equation}
can be linearized via transformations \eqref{eq:eq2} with
\begin{equation}
F=s^{2}y^{3}+\frac{\mu}{\alpha}, \quad G=3 s y^{2}.
\label{eq:ex4_1e}
\end{equation}
Notice that the general solution of \eqref{eq:ex4_1} in a nonlocal form can be obtained by inverting \eqref{eq:eq2} with \eqref{eq:ex4_1e} as follows
\begin{equation}\label{eq:ex4_1d}
y^{3}=\frac{1}{s^{2}}\left(w-\frac{\mu}{\alpha}\right), \quad \int s dz=\int \frac{d\zeta}{3y^{2}},
\end{equation}
where $w$ is the general solution of \eqref{eq:eq5}.
With the help of Corollary \ref{cr:cr1} we find that \eqref{eq:ex4_1} possesses a first integral
\begin{equation}
I=\left(6\alpha s y_{z}+4\alpha y s_{z}+3(\beta+\rho)(\alpha s^{2}y^{3}+\mu)\right)^{\rho+\beta}\left(6\alpha s y_{z}+4\alpha y s_{z}+3(\beta-\rho)(\alpha s^{2}y^{3}+\mu)\right)^{\rho-\beta},
\label{eq:ex4_2}
\end{equation}
if $\rho\neq0$, and a first integral
\begin{equation}
I=(6\beta s y_{z}+4\beta s_{z}y+3\beta^{2} s^{2}y^{3}+12\mu)\exp\left\{\frac{3\beta^{2}s^{2}y^{3}+12\mu}{6\beta s y_{z}+4 \beta s_{z}y+3\beta^{2}s^{2}y^{3}+12\mu}\right\},
\label{eq:ex4_3}
\end{equation}
if $\rho=0$.
Now we discuss some properties of integrals \eqref{eq:ex4_2} and \eqref{eq:ex4_3}. In the left part of Fig.~\ref{fig3} we demonstrate projections of \eqref{eq:ex4_2}, at certain values of the parameters, on the plane $y=\mathrm{const}$ for different values of this constant. One can see that there are periodic trajectories admitted by \eqref{eq:ex4_1}. We argue that these periodic trajectories are limit cycles. To support this claim, in the right part of Fig.~\ref{fig3} we demonstrate the results of the numerical solution of the Cauchy problem for \eqref{eq:ex4_1}. We see that nearby trajectories in the phase space converge to a certain closed trajectory, and, thus, there is indeed a limit cycle in \eqref{eq:ex4_1}. One can observe a similar situation for first integral \eqref{eq:ex4_3}. Finally, if we assume that the forcing function $s(z)$ is periodic and has no zeros on the real line, one can again find a limit cycle in \eqref{eq:ex4_1}.
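The convergence to a limit cycle can be reproduced with a standard ODE solver. The following sketch (Python/\texttt{scipy}) integrates \eqref{eq:ex4_1} for $s(z)={\rm e}^{\sin z}$, $\alpha=4$, $\beta=5$, $\mu=-1$ (the data of Fig.~\ref{fig3}) from two nearby initial states; the initial states themselves are illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, mu = 4.0, 5.0, -1.0

def rhs(z, u):
    """First-order system (y, y') for eq. (eq:ex4_1) with s(z) = exp(sin z)."""
    y, yp = u
    s = np.exp(np.sin(z))
    ds, dds = s*np.cos(z), s*(np.cos(z)**2 - np.sin(z))   # s', s''
    ypp = (-(3*beta*s*y**2 + 5*ds/(3*s))*yp - 3*alpha*s**2*y**5
           - 2*beta*ds*y**3 - 3*mu*y**2 - (2*dds/(3*s))*y)
    return [yp, ypp]

# two nearby initial states; both are attracted to the same closed trajectory
sols = [solve_ivp(rhs, (0.0, 60.0), u0, rtol=1e-9, atol=1e-12, dense_output=True)
        for u0 in ([0.5, 0.0], [0.9, 0.2])]
```

Plotting the two trajectories in the $(y,y_{z})$ plane for, say, $z\in[40,60]$ reproduces the closed curve in the right panel of Fig.~\ref{fig3}.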
\textbf{Example 4.} Now we consider the following equation from family \eqref{eq:eq1}
\begin{equation}
y_{zz}+\frac{y_{z}^{2}}{y}+\left(\mu+\frac{\beta}{y^{3}}\right)y_{z}-\mu^{2}y+\frac{\beta\mu}{y^{2}}-\frac{\alpha}{y^{5}}=0,
\label{eq:ex5_1}
\end{equation}
where $\mu\neq0$ is an arbitrary parameter. If we transform \eqref{eq:ex5_1} into its canonical form via $y\rightarrow \sqrt{2y}$, one can verify, with the help of the algorithm proposed above, that the coefficients of the corresponding equation of type \eqref{eq:eq1a} satisfy conditions \eqref{eq:cc_case1}. Indeed, in this case $p=2\mu y-\beta/\sqrt{2y}$, $l=-\beta^{2}{\rm e}^{-2\mu z}/(4y^{2})$ and $A=-{\rm e}^{-\mu z}$ satisfy \eqref{eq:cc_case1} and $L=\beta {\rm e}^{-\mu z}/\sqrt{2y}$. Consequently, equation \eqref{eq:ex5_1} can be linearized via \eqref{eq:eq2} with
\begin{equation}
F=\frac{{\rm e}^{-\mu z}}{y}, \quad G=\frac{1}{y^{3}}.
\end{equation}
The general solution of \eqref{eq:ex5_1} can be expressed in the parametric form as follows
\begin{equation}
y=\frac{{\rm e}^{-\mu z}}{w}, \quad z=\frac{1}{3\mu}\ln \left\{3\mu \int\frac{d\zeta}{w^{3}} \right\}.
\end{equation}
With the help of Corollary \ref{cr:cr1} we find that \eqref{eq:ex5_1} has the first integral
\begin{equation}
I=\left(2e^{-\mu z}y(y_{z}+\mu y)-(\beta+\rho)\frac{e^{-\mu z}}{y}\right)^{\rho+\beta}\left(2e^{-\mu z}y(y_{z}+\mu y)-(\beta-\rho)\frac{e^{-\mu z}}{y}\right)^{\rho-\beta},
\end{equation}
if $\rho\neq0$ and the first integral
\begin{equation}
I=e^{-\mu z}\left(2y(y_{z}+\mu y)-\frac{\beta }{y}\right)\exp\left\{\frac{\beta}{\beta-2\mu y^{3}-2 y^{2}y_{z}}\right\},
\end{equation}
if $\rho=0$. Notice that equation \eqref{eq:ex5_1} provides an example of an autonomous nonlinear oscillator from family \eqref{eq:eq1}, i.e. an equation with quadratic nonlinearity with respect to the first derivative, that can be linearized via \eqref{eq:eq2} with $F_{z}\neq0$.
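This first integral can also be verified symbolically. The sketch below (Python/\texttt{sympy}) checks that the $\rho\neq0$ integral is constant along solutions of \eqref{eq:ex5_1}, taking $\alpha=1$, $\beta=3$ for concreteness, keeping $\mu$ symbolic, and again assuming the relation $\rho=\sqrt{\beta^{2}-4\alpha}$.

```python
import sympy as sp

z, mu = sp.symbols('z mu')
alpha, beta = sp.Rational(1), sp.Rational(3)
rho = sp.sqrt(beta**2 - 4*alpha)   # assumed relation, = sqrt(5) here
y = sp.Function('y')(z)
yp = sp.diff(y, z)

# second derivative expressed from the oscillator equation (eq:ex5_1)
ypp = -yp**2/y - (mu + beta/y**3)*yp + mu**2*y - beta*mu/y**2 + alpha/y**5

# factors of the first integral: 2 e^{-mu z} y (y' + mu y) - k e^{-mu z}/y
P = lambda k: sp.exp(-mu*z)*(2*y*(yp + mu*y) - k/y)
dlogI = ((rho + beta)*sp.diff(P(beta + rho), z)/P(beta + rho)
       + (rho - beta)*sp.diff(P(beta - rho), z)/P(beta - rho))
dlogI = dlogI.subs(sp.Derivative(y, z, 2), ypp)
assert sp.simplify(dlogI) == 0     # the first integral is conserved
```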
In this section we have provided several examples of linearizable equations from family \eqref{eq:eq1} that can be transformed into \eqref{eq:eq3} via \eqref{eq:eq2} only if $F_{z}=0$. We have also shown that the corresponding first integrals allow us to find periodic trajectories, including limit cycles, admitted by the considered nonlinear oscillators.
\section{Conclusion}
In this work we have considered family \eqref{eq:eq1} of nonlinear second order ordinary differential equations. We have studied the complete linearization problem for this family of equations via the generalized Sundman transformations and obtained linearizability conditions in explicit form. We have also shown that each linearizable equation from \eqref{eq:eq1} admits a certain transcendental first integral. As a consequence, we have classified all equations of form \eqref{eq:eq1} that possess this transcendental first integral. We have also separated the families of equations with autonomous and rational first integrals. We have provided several nontrivial examples of applications of the linearizing transformations, including generalizations of the Duffing and Van der Pol oscillators. In particular, we have demonstrated that our approach can be used for finding centers and limit cycles admitted by equations from the considered family.
\section{Acknowledgments}
This research was supported by Russian Science Foundation grant No. 19-71-10003. Numerical calculations in Section 3 were supported by Russian Science Foundation grant No. 19-71-10048.
\section{Introduction}
In this work,
we study the numerical discretization of a nonlocal analog of the classical Lighthill--Whitham--Richards (LWR) model \cite{lighthill1955kinematic,richards1956shock}. The latter, given by
\begin{align}
\partial_t \rho(t,x) + \partial_x \left( \rho(t,x) v(\rho(t,x)) \right) =0 , \label{eq:lwr}
\end{align}
for a density $\rho=\rho(t,x)$ and a velocity $v= v( \rho (t, x))$,
has been widely used in the study of traffic flows.
To study the dynamics of traffic flows in the presence of nonlocal inter-vehicle interactions \cite{Blandin2016,goatin2016well},
the following \emph{nonlocal LWR model} has been developed in recent years
\begin{align}
\partial_t \rho(t,x) + \partial_x \left( \rho(t,x) v_\delta( \rho(t,\cdot), t, x)
\right) = 0, \quad x\in\mathbb{R}, \, t>0. \label{eq:nonlocal_lwr}
\end{align}
In contrast to \eqref{eq:lwr}, the nonlocal LWR model \eqref{eq:nonlocal_lwr} adopts the modeling assumption that, in a fleet of vehicles driving on a highway, each vehicle decides its driving speed not from local information alone but from a nonlocal weighted average of the traffic information within a road segment of length $\delta>0$ ahead of the vehicle's current location. More specifically,
the velocity $v_\delta= v_\delta( \rho, t, x)$
takes on the form
\begin{align}
\label{eq:nonlocal_velocity}
v_\delta( \rho(t,\cdot), t, x)
=v(q_\delta( \rho(t,\cdot), t, x ) ), \quad\text{with}\quad
q_\delta( \rho(t,\cdot), t, x )
= \int_0^\delta \rho(t,x+s) w_\delta(s) \,ds,
\end{align}
where the integral kernel $w=w_\delta(s)$ is assumed to be a probability density function defined on the interval $[0,\delta]$.
Alternatively, one may also consider the nonlocal velocity given by \cite{friedrich2022conservation}
\begin{align}
\label{eq:nonlocal_velocity-a}
v_\delta( \rho(t,\cdot), t, x)
= \int_0^\delta v(\rho(t,x+s) ) w_\delta(s) \,ds.
\end{align}
The equation \eqref{eq:nonlocal_lwr} is solved with the initial condition:
\begin{align}\label{eq:ini_data}
\rho(0,x) = \rho_0(x), \quad x\in\mathbb{R},
\end{align}
where $\rho_0: \, \mathbb{R}\to[0,1]$ represents the initial traffic density.
The case $\rho_0 \equiv 0$ indicates that the road is empty and the case $\rho_0 \equiv 1$ corresponds to fully congested traffic.
The equation \eqref{eq:nonlocal_lwr} leads to a nonlocal conservation law due to the nonlocal dependence of the velocity on the density.
Considering the rescaled kernel $w_\delta(s)=w(s/\delta)/\delta$, so that $w_\delta$ converges to a Dirac point mass as $\delta\to0$, it is clear that the nonlocal LWR model \eqref{eq:nonlocal_lwr}, with either choice of the velocity given by \eqref{eq:nonlocal_velocity} or \eqref{eq:nonlocal_velocity-a}, formally recovers the local model \eqref{eq:lwr} in the limit $\delta\to 0$.
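This formal limit can be illustrated numerically. The sketch below (Python/NumPy) evaluates $q_\delta$ by a midpoint rule and checks that $q_\delta(\rho,x)\to\rho(x)$ at first order in $\delta$; the kernel $w(s)=2(1-s)$ and the smooth density profile are example choices and are not prescribed by the model above.

```python
import numpy as np

def q_delta(rho, x, delta, n=4000):
    """Midpoint-rule evaluation of q_delta(x) = int_0^delta rho(x+s) w_delta(s) ds,
    with the rescaled kernel w_delta(s) = w(s/delta)/delta and w(s) = 2(1-s)."""
    s = (np.arange(n) + 0.5)*delta/n          # midpoints of [0, delta]
    w = 2.0*(1.0 - s/delta)/delta             # rescaled kernel values
    return float(np.sum(rho(x + s)*w)*delta/n)

rho = lambda x: 0.5 + 0.4*np.sin(x)           # a smooth density profile (illustration only)
errs = [abs(q_delta(rho, 0.3, d) - rho(0.3)) for d in (0.1, 0.01)]
assert errs[1] < errs[0]/5                    # roughly first-order convergence in delta
```

Since $w_\delta$ has mean of order $\delta$, a Taylor expansion of $\rho$ gives $q_\delta=\rho(x)+O(\delta)$ for smooth profiles, which is what the check above observes.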
For more rigorous analysis of the nonlocal LWR model \eqref{eq:nonlocal_lwr}, we refer to a number of existing studies in the literature, including the model well-posedness \cite{Blandin2016,goatin2016well,bressan2019traffic,colombo2021local,friedrich2022conservation}, traveling wave solutions \cite{ridder2019traveling,shen2018traveling}, the asymptotic stability of uniform flows \cite{huang2022stability}, and nonlocal-to-local limit as $\delta\to0$ \cite{bressan2019traffic,bressan2020entropy,colombo2021local,colombo2022nonlocal,keimer2019approximation,coclite2020general,colombo2019singular,friedrich2022conservation,keimer2022singular}.
The numerical discretization of the nonlocal LWR model \eqref{eq:nonlocal_lwr} has also been studied in \cite{Blandin2016,goatin2016well,friedrich2018godunov,friedrich2019maximum,Chalons2018,colombo2021role}.
However, there has been no systematic study of the dependence of numerical solutions on the parameter $\delta$ and of their behavior in the limit $\delta\to0$.
In the present work, we aim to fill this gap by designing and analysing finite volume numerical schemes for the nonlocal LWR model \eqref{eq:nonlocal_lwr} such that they are able to correctly resolve both the nonlocal model for a given $\delta>0$ and also the local model \eqref{eq:lwr} when $\delta\to0$.
Such schemes are in the spirit of \emph{asymptotically compatible} schemes, which can offer robust numerical computation under the changes of $\delta$; see \cite{tian2014asymptotically,tian2020asymptotically} for discussions on asymptotically compatibility of numerical discretizations of more general nonlocal models.
The main contributions of our work are the rigorous proofs of the asymptotic compatibility of the schemes, which include both the uniform convergence of the numerical solutions to the unique solution of the nonlocal continuum model for a given positive horizon parameter and the convergence to the unique entropy solution of the local model as the mesh size and the nonlocal horizon parameter go to zero simultaneously. These results are established for the first time in the literature.
The main ingredients of the proofs are the compactness in the $\mathbf{BV}_{\mathrm{loc}}$ space and the entropy admissibility of numerical solutions.
The analysis provided in \cite{goatin2016well,Blandin2016} was based on a priori $\mathbf{L}^\infty$ and total variation estimates for a fixed $\delta>0$, but the resulting total variation bound blows up to infinity as $\delta\to0$.
In this work, a novelty is our use of a different approach to prove that numerical solutions produced by the proposed schemes satisfy a one-sided Lipschitz condition when $\delta$ is close to zero, which enforces both the boundedness of the total variation and the entropy admissibility. Such an approach has been used to study numerical schemes for the local model \eqref{eq:lwr}, see \cite{tadmor1984large,brenier1988discrete}, but, to the best of our knowledge, has not been used for nonlocal models.
Numerical experiments are also reported to complement the theoretical investigation. While the current work is motivated by modeling traffic flows with nonlocal vehicle interactions, we mention that conservation laws with nonlocal fluxes have also been studied in the modeling of pedestrian traffic \cite{Colombo2012,burger2020non}, sedimentation \cite{Betancourt2011}, and material flow on conveyor belts \cite{Goettlich2014,rossi2020well}; see \cite{Aggarwal2015,Amorim2015,Colombo2018,Goatin2019,Chiarello2019,Berthelin2019,karafyllis2020analysis,friedrich2022lyapunov} for more relevant studies.
Thus, our study here can be useful in the numerical simulations of a broad range of problems in various application domains.
The paper is organized as follows. In the remainder of this section, after briefly describing the assumptions on the nonlocal model and some basic mathematical properties, we introduce the numerical discretization schemes and summarize the main theorems on their convergence behavior and asymptotic compatibility. The detailed proofs of the main theorems are given in Section 2. We present results of some numerical experiments in Section 3 and offer some conclusions in Section 4.
\subsection{A review of well-posedness and nonlocal-to-local limit}
Let us first state some assumptions on the model.
\begin{assm}\label{assm:1}
(i) The nonlocal kernel is given by $w_\delta(s)=w(s/\delta)/\delta$ for $s\in[0,\delta]$, where $w=w(s)$ is a $\mathbf{C}^1$ smooth, strictly decreasing, and nonnegative probability density function defined on $[0,1]$, and it satisfies the normalization condition $\int_0^1 w(s)\,ds=1$.\\
(ii) The velocity function is $v(\rho)=1-\rho$. Consequently, \eqref{eq:nonlocal_velocity} and \eqref{eq:nonlocal_velocity-a} produce the same outcome.\\
(iii) The initial data $\rho_0 \in \mathbf{L}^\infty(\mathbb{R})$ and it satisfies $0\leq\rho_0(x)\leq1$ for all $x\in\mathbb{R}$. In addition, $\rho_0$ has bounded total variation.
\end{assm}
Concerning the mathematical analysis of the nonlocal LWR model \eqref{eq:nonlocal_lwr}, we recall that
the existence and uniqueness of weak solutions have been shown with general choices of the nonlocal kernel, the velocity function, and the initial data, see for example,
\cite{Blandin2016,goatin2016well,bressan2019traffic,colombo2021local}.
For our case, the following proposition summarizes the known results in the above works.
\begin{prop}\label{prop:nonlocal_sol}
Under Assumption~\ref{assm:1}, the nonlocal LWR model \eqref{eq:nonlocal_lwr} admits a unique weak solution $\rho \in \mathbf{L}^\infty\left([0,+\infty)\times\mathbb{R}\right)$ such that
\begin{align}
\int_0^\infty\int_{\mathbb{R}}\rho(t,x)\partial_t\phi(t,x)+\rho(t,x)v_\delta
(\rho(t,\cdot), t, x)
)\partial_x\phi(t,x)\,dxdt+\int_{\mathbb{R}}\rho_0(x)\phi(0,x)\,dx=0,\label{eq:nonlocal_sol}
\end{align}
for all $\phi\in\mathbf{C}^1_{\mathrm{c}}\left([0,+\infty)\times\mathbb{R}\right)$, where $v_\delta
(\rho(t,\cdot), t, x)$ is given by \eqref{eq:nonlocal_velocity}. Moreover, the solution satisfies the maximum principle \begin{align}\label{eq:maxm_principle}
\inf_{x\in\mathbb{R}}\rho_0(x) \leq \rho(t,x) \leq \sup_{x\in\mathbb{R}}\rho_0(x), \quad (t,x)\in [0,+\infty)\times\mathbb{R}.
\end{align}
\end{prop}
The convergence of solutions of the nonlocal LWR model \eqref{eq:nonlocal_lwr} as $\delta\to0$ has also been extensively studied. In the literature, it was usually assumed that the nonlocal kernel $w=w_\delta(s)$ is defined for $s\in[0,+\infty)$ and the nonlocal density is defined by
\begin{align*}
q_\delta(\rho(t,\cdot),t,x) = \int_0^\infty \rho(t,x+s) w_\delta(s) \,ds.
\end{align*}
\cite{bressan2019traffic,bressan2020entropy} considered the exponential kernels $w_\delta(s)=\delta^{-1}e^{-\frac{s}{\delta}}$ and showed convergence from the solutions of the nonlocal model \eqref{eq:nonlocal_lwr} to the unique weak entropy solution of the local model \eqref{eq:lwr}, assuming that the initial data $\rho_0$ is uniformly positive. \cite{colombo2021local} generalized the convergence result for a class of nonlocal kernels with exponential decay rate but under one additional assumption that $\rho_0$ is one-sided Lipschitz continuous. In \cite{colombo2021local}, the authors also provided counterexamples to show that the uniform positivity of the initial data is essential to the convergence result.
In the subsequent works \cite{coclite2020general,colombo2022nonlocal,keimer2019approximation,friedrich2022conservation,keimer2022singular}, convergence results concerning the nonlocal quantity $q_\delta(\rho(t,\cdot),t,x)$ as $\delta\to0$ were given without assuming the initial data being uniformly positive.
In the present work, we adopt an approach similar to that in \cite{colombo2021local} and make the following additional assumption on the initial data, which essentially requires the initial data to be uniformly positive and to have no negative jumps.
\begin{assm}\label{assm:2}
The initial data $\rho_0$ satisfies
\begin{align}
\rho_0(x) \geq \rho_{\mathrm{min}} > 0 \quad \forall x\in\mathbb{R}, \qquad -\frac{\rho_0(y)-\rho_0(x)}{y-x}\leq L \quad \forall x\neq y\in\mathbb{R}, \label{eq:ini_lip_const}
\end{align}
for some constants $\rho_{\mathrm{min}}>0$ and $L>0$.
\end{assm}
In our case, the same arguments as in \cite{colombo2021local} can be applied, with only minor modifications for compactly supported nonlocal kernels, to give the nonlocal-to-local limit result stated in the following Proposition~\ref{prop:local_limit}.
\begin{prop}\label{prop:local_limit}
Suppose Assumptions~\ref{assm:1} and \ref{assm:2} are satisfied.
As $\delta\to0$, the solution of the nonlocal LWR model \eqref{eq:nonlocal_lwr} converges in $\mathbf{L}^1_{\mathrm{loc}}([0,+\infty)\times\mathbb{R})$ to the weak entropy solution of the local model \eqref{eq:lwr} that satisfies
\begin{align}
\int_0^\infty\int_{\mathbb{R}}\rho(t,x)\partial_t\phi(t,x)+\rho(t,x)v\left(\rho(t,x)\right)\partial_x\phi(t,x)\,dxdt+\int_{\mathbb{R}}\rho_0(x)\phi(0,x)\,dx=0,\label{eq:local_sol}
\end{align}
for all $\phi\in\mathbf{C}^1_{\mathrm{c}}\left([0,+\infty)\times\mathbb{R}\right)$, and
\begin{align}
-\frac{\rho(t,y)-\rho(t,x)}{y-x}\leq \frac{1}{2t} \quad \forall x\neq y\in\mathbb{R},\,t>0.\label{eq:oleinik}
\end{align}
\end{prop}
In Proposition~\ref{prop:local_limit}, the inequality \eqref{eq:oleinik}, which is known as the Oleinik's entropy condition, is used to select the unique entropy admissible solution of the scalar conservation law \eqref{eq:lwr}, see \cite{lefloch2002hyperbolic}.
As a constraint on the one-sided Lipschitz constant of the solution, the entropy condition \eqref{eq:oleinik} implies that the solution can only have positive jumps.
\subsection{Finite volume approximations}
Now let us consider the numerical discretization of the nonlocal LWR model \eqref{eq:nonlocal_lwr}. With finite volume approximations, the numerical solution is defined as a piecewise constant function:
\begin{align}\label{eq:num_sol}
\rho(t,x)=\sum_{j\in\mathbb{Z}}\sum_{n=0}^\infty\rho_j^n\mathbf{1}_{\mathcal{C}_j\times\mathcal{T}^n}(t,x),
\end{align}
where $\mathcal{C}_j=(x_{j-1/2},x_{j+1/2})$, $\mathcal{T}^n=(t^n,t^{n+1})$ are spatial and temporal cells. The grid points are $x_j=jh$ and $t^n=n\tau$, where $h$ and $\tau$ are spatial and temporal mesh sizes.
At the initial time $t^0=0$, the initial data is discretized as:
\begin{align}\label{eq:ini_data_discrete}
\rho_j^0 = \frac1h \int_{\mathcal{C}_j} \rho_0(x) \,dx, \quad j\in\mathbb{Z} .
\end{align}
Denote by $F_{j-1/2}^n$ and $F_{j+1/2}^n$ the numerical fluxes across the cell boundaries $x_{j-1/2}$ and $x_{j+1/2}$ over the time interval from $t^n$ to $t^{n+1}$. With appropriately specified numerical fluxes, the finite volume scheme reads:
\begin{align}
\label{eq:finite_volume}
\rho_j^{n+1}=\rho_j^n+
\lambda
(F_{j-1/2}^n-F_{j+1/2}^n),
\end{align}
where the CFL ratio $\lambda = \tau/h$ is taken to be a fixed constant.
To specify the numerical fluxes, we need to evaluate the nonlocal density $q_\delta(\rho(t,\cdot),t,x)$ given in \eqref{eq:nonlocal_velocity}.
Let us take
\begin{align}\label{eq:discrete_nonlocal_density}
q_j^n = \sum_{k=0}^{m-1} w_k \rho_{j+k}^n,
\end{align}
where $m=\lceil\frac{\delta}{h}\rceil$ is the number of cells involved in the nonlocal integral, and $\{w_k\}_{k=0}^{m-1}$ is a set of numerical quadrature weights, such that:
\begin{align}
w_{\delta,h}(s)=\sum_{k=0}^{m-1}w_k\mathbf{1}_{[kh, (k+1)h]}(s), \quad s\in[0,\delta],
\end{align}
is a piecewise constant approximation of the nonlocal kernel $w_\delta=w_\delta(s)$.
Given the discretized nonlocal densities $\{q_j^n\}_{j\in\mathbb{Z}}^{n\geq0}$, the nonlocal fluxes in \eqref{eq:finite_volume} can be constructed in a number of different ways. Let us mention the following examples.
\begin{itemize}
\item In \cite{Blandin2016,goatin2016well}, a Lax-Friedrichs type scheme was developed with the numerical fluxes:
\begin{align}\label{eq:goatin_flux}
F_{j-1/2}^n=\frac12 \left[\rho_{j-1}^n v\left(\sum_{k=0}^{m-1}w_k\rho_{j+k-1}^n\right) + \rho_j^n v\left(\sum_{k=0}^{m-1}w_k\rho_{j+k}^n\right) \right] + \frac{\alpha}{2}(\rho_{j-1}^n-\rho_j^n),
\end{align}
where $\alpha>0$ is a numerical viscosity constant and the numerical quadrature weights are given by the left endpoint values:
\begin{align}
\mbox{[Left endpoint]} \quad & w_k = w_\delta(kh)h, \quad k=0,\cdots,m-1. \label{eq:quad_weight_left}
\end{align}
\item In \cite{friedrich2018godunov}, a Godunov type scheme was proposed with the numerical fluxes defined by:
\begin{align}\label{eq:upwind_flux}
F_{j-1/2}^n = \rho_{j-1}^n v\left(\sum_{k=0}^{m-1}w_k\rho_{j+k}^n\right),
\end{align}
where the numerical quadrature weights are given by the exact quadrature:
\begin{align}
\mbox{[Exact quadrature]} \quad w_k = \int_{kh}^{\min\{(k+1)h,\delta\}} w_\delta(s)\,ds, \quad k=0,\cdots,m-1. \label{eq:quad_weight_exact}
\end{align}
\item Inspired by both \eqref{eq:goatin_flux} and \eqref{eq:upwind_flux}, we also consider the following Lax-Friedrichs type fluxes:
\begin{align}\label{eq:lf_flux_new}
F_{j-1/2}^n=\frac12 \left(\rho_{j-1}^n + \rho_j^n \right) v\left(\sum_{k=0}^{m-1}w_k\rho_{j+k}^n\right) + \frac{\alpha}{2}(\rho_{j-1}^n-\rho_j^n),
\end{align}
where the numerical quadrature weights are given by either the left endpoint values or the exact quadrature.
\end{itemize}
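For concreteness, a minimal implementation of one step of the Godunov-type scheme with fluxes \eqref{eq:upwind_flux} and the exact quadrature weights \eqref{eq:quad_weight_exact} may look as follows (Python/NumPy). The kernel $w(s)=2(1-s)$, the periodic boundary conditions, and the grid parameters are illustrative choices; the velocity is $v(\rho)=1-\rho$ as in Assumption~\ref{assm:1}.

```python
import numpy as np

def quad_weights(delta, h):
    """Exact-quadrature weights (eq. quad_weight_exact) for the rescaled kernel
    w_delta(s) = w(s/delta)/delta, taking w(s) = 2(1-s) as an illustrative
    decreasing probability density on [0,1]."""
    m = int(np.ceil(delta/h))
    W = lambda s: 2.0*s - s**2                        # antiderivative of w
    edges = np.minimum(np.arange(m + 1)*h/delta, 1.0)
    return np.diff(W(edges))                           # w_k; the weights sum to W(1) = 1

def godunov_step(rho, delta, h, lam):
    """One step of the Godunov-type scheme with flux (eq. upwind_flux),
    v(rho) = 1 - rho, and periodic boundary conditions."""
    w = quad_weights(delta, h)
    # q_j = sum_k w_k rho_{j+k}  (eq. discrete_nonlocal_density)
    q = sum(wk*np.roll(rho, -k) for k, wk in enumerate(w))
    F = np.roll(rho, 1)*(1.0 - q)                      # F_{j-1/2} = rho_{j-1} v(q_j)
    return rho + lam*(F - np.roll(F, -1))              # conservative update

# a short run on smooth periodic data
h, delta, lam = 0.02, 0.1, 0.4
x = np.arange(0.0, 1.0, h)
rho = 0.5 + 0.3*np.sin(2*np.pi*x)
for _ in range(10):
    rho = godunov_step(rho, delta, h, lam)
```

By the telescoping of the fluxes, the update conserves the discrete mass exactly, and for this flux one can check directly that the CFL choice $\lambda(1+w_0)\leq1$ keeps the densities in $[0,1]$.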
In the present work, we consider a family of finite volume schemes:
\begin{align}\label{eq:nonlocal_lwr_num}
\rho_j^{n+1} &= \mathcal{H} \left( \rho_{j-1}^n,\rho_j^n,\rho_{j+1}^n,\cdots,\rho_{j+m}^n \right)=\rho_j^n+
\lambda
(F_{j-1/2}^n-F_{j+1/2}^n)\\
&= \rho_j^n + \lambda \left[ g(\rho_{j-1}^n,\rho_j^n,q_{j-1}^n,q_j^n) - g(\rho_j^n,\rho_{j+1}^n,q_j^n,q_{j+1}^n) \right], \label{eq:nonlocal_lwr_num_2}
\end{align}
where $q_j^n$ is given by \eqref{eq:discrete_nonlocal_density}, $\lambda=\tau/h$ is the CFL ratio, and $g=g(\rho_L,\rho_R,q_L,q_R)$ is a \emph{numerical flux function} that depends on both local densities $\rho_L,\rho_R$ and nonlocal densities $q_L,q_R$.
We remark that, by taking $q_L=\rho_L$ and $q_R=\rho_R$, $g=g(\rho_L,\rho_R,\rho_L,\rho_R)$ becomes a numerical flux function for the local model \eqref{eq:lwr}, and the respective numerical scheme:
\begin{align}\label{eq:local_lwr_num}
\rho_j^{n+1} &= \rho_j^n + \lambda \left[ g(\rho_{j-1}^n,\rho_j^n,\rho_{j-1}^n,\rho_j^n) - g(\rho_j^n,\rho_{j+1}^n,\rho_j^n,\rho_{j+1}^n) \right],
\end{align}
can be viewed as the local counterpart of \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2}.
It is worthwhile to mention that the aforementioned schemes, with numerical fluxes in \eqref{eq:goatin_flux}, \eqref{eq:upwind_flux}
and \eqref{eq:lf_flux_new} respectively, all belong to the above family \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2}, with the numerical flux functions given by:
\begin{subequations}
\begin{align}
\mbox{[Lax-Friedrichs]} \quad &g(\rho_L, \rho_R, q_L, q_R) = \frac12( \rho_L v(q_L) + \rho_R v(q_R) ) + \frac\alpha2 (\rho_L - \rho_R), \label{eq:g_func_LxF} \\
\mbox{[Godunov]} \quad &g(\rho_L, \rho_R, q_L, q_R) = \rho_L v(q_R), \label{eq:g_func_Godunov} \\
\mbox{[modified Lax-Friedrichs]} \quad &g(\rho_L, \rho_R, q_L, q_R) = \frac12( \rho_L + \rho_R ) v(q_R) + \frac\alpha2 (\rho_L - \rho_R), \label{eq:g_func_LxF_new}
\end{align}
\end{subequations}
respectively. Now we make the following assumptions on the numerical quadrature weights, the numerical flux function, and
the CFL ratio $\lambda$.
\begin{assm}\label{assm:3}
The numerical quadrature weights $\{w_k\}_{0\leq k\leq m-1}$ satisfy
\begin{align}\label{eq:weight_monotone}
w_\delta(kh)h \geq w_k \geq w_\delta((k+1)h)h \quad \mathrm{and} \quad w_k - w_{k+1} \geq c m^{-2},
\end{align}
for some constant $c>0$ only depending on the kernel function $w=w(s)$.
Moreover, $\{w_k\}_{0\leq k\leq m-1}$ satisfy the normalization condition:
\begin{align}\label{eq:normalization_condition}
\sum_{k=0}^{m-1} w_k = 1.
\end{align}
\end{assm}
\begin{assm}\label{assm:4}
(i) The numerical flux function $g$ is a quadratic function.
(ii) When $\rho_L=\rho_R$ and $q_L=q_R$, $g(\rho_L,\rho_L,q_L,q_L)=\rho_L(1-q_L)$.
(iii) Denote $\gamma_{ij}$, $1\leq i,j \leq4$ the second order partial derivatives of $g$, they satisfy
\begin{align}
&\gamma_{11}=\gamma_{12}=\gamma_{22}=0, \quad \gamma_{33}=\gamma_{34}=\gamma_{44}=0, \\ &\gamma_{13},\gamma_{23},\gamma_{14},\gamma_{24}\leq0, \quad \gamma_{13}+\gamma_{23}+\gamma_{14}+\gamma_{24}=-1.
\end{align}
(iv) Denote $\theta^{(i)}$, $1\leq i \leq4$ the first order partial derivatives of $g$ with respect to its four arguments $\rho_L,\rho_R,q_L,q_R$.
For any $0\leq \rho_L,\rho_R,q_L,q_R\leq1$:
\begin{align}
&\theta^{(1)}(q_L,q_R)\geq0, \ \theta^{(2)}(q_L,q_R)\leq0, \ \theta^{(3)}(\rho_L,\rho_R)\leq0, \ \theta^{(4)}(\rho_L,\rho_R)\leq0, \\
&\theta^{(1)}(q_L,q_R) + \theta^{(3)}(\rho_L,\rho_R) + 2(\gamma_{13}+\gamma_{23}) \geq 0, \ \theta^{(2)}(q_L,q_R) - 2(\gamma_{23}+\gamma_{24}) \leq 0, \\
&\theta^{(3)}(\rho_L,\rho_R) + \theta^{(4)}(\rho_L,\rho_R) \leq -\min\{\rho_L,\rho_R\}.
\end{align}
\end{assm}
\begin{assm}\label{assm:5}
Given the notation $\theta^{(i)}$, $1 \leq i \leq 4$ in Assumption~\ref{assm:4}, $\lambda$
satisfies
\begin{align}
\lambda \sum_{i=1}^4 \norm{\theta^{(i)}}_\infty < 1.
\end{align}
\end{assm}
\subsection{Main results}\label{sec:main_result}
This section summarizes the main results. We note that all the theorems are subject to Assumptions~\ref{assm:1}-\ref{assm:5}. To clarify the notation, we denote:
\begin{itemize}
\item $\rho^\delta$: the continuum solution of the nonlocal LWR model \eqref{eq:nonlocal_lwr};
\item $\rho^0$: the continuum solution of the local LWR model \eqref{eq:lwr};
\item $\rho^{\delta,h}$: the numerical solution of the nonlocal LWR model; and
\item $\rho^{0,h}$: the numerical solution of the local LWR model.
\end{itemize}
There are two sets of parameters: the nonlocal horizon parameter $\delta$ and the mesh size parameter $h$.
In the present work, we are interested in establishing relations between those solutions when $\delta\to0$ and $h\to0$ along various limiting paths, as shown in Fig.~\ref{fig:main_diagram}.
\begin{figure}[htbp]
\centering \centerline{\xymatrix@R+4.4pc@C+10.5pc{
\rho^{\delta, h} \ar[r]^{\mbox{$\delta\to0$}}_{\mbox{Proposition~\ref{prop:scheme_local_limit},\ Theorem~\ref{thm:ap}}} \ar[dr]^{\mbox{$\delta\to0,h\to0$}}_{\mbox{Theorem~\ref{thm:ac}}} \ar[d]^{\rotatebox{90}{$h\to0$}}_{\rotatebox{90}{Theorem~\ref{thm:numerical_convergence}}} & {\rho^{0,h}} \ar[d]^{\rotatebox{90}{$h\to0$}} \\ {\rho^\delta} \ar[r]_{\mbox{Proposition~\ref{prop:local_limit}}}^{\mbox{$\delta\to0$}} & {\rho^0}
}}
\vspace{-0.2cm}
\caption{Diagram of various limiting paths}
\label{fig:main_diagram}
\end{figure}
\begin{enumerate}
\item The numerical convergence for the nonlocal model: $\rho^{\delta,h}\to\rho^{\delta}$ when $h\to0$ with fixed $\delta>0$ can be proved following the approach in \cite{goatin2016well}. The proof is based on a priori $\mathbf{L}^\infty$ and total variation estimates of the numerical solution. In Theorem~\ref{thm:numerical_convergence}, we provide a stronger result stating uniform numerical convergence with respect to $\delta$.
\item The numerical convergence for the local model: $\rho^{0,h}\to\rho^0$ when $h\to0$ is a classical result, see for example \cite{leveque2002finite}.
\item The nonlocal-to-local limit: $\rho^{\delta}\to\rho^0$ when $\delta\to0$ is given in Proposition~\ref{prop:local_limit}.
\item The nonlocal-to-local limit of numerical discretizations: $\rho^{\delta,h}\to\rho^{0,h}$ as $\delta\to0$ with fixed $h$ follows from Proposition~\ref{prop:scheme_local_limit}. We also provide a uniform convergence result in Theorem~\ref{thm:ap}.
\end{enumerate}
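Before turning to the statements, the scheme itself can be sketched in a few lines of code. The snippet below is a minimal illustration, not the paper's exact implementation: it assumes a Lax--Friedrichs-type flux $g(\rho_L,\rho_R,q_L,q_R)=\tfrac12(\rho_L(1-q_L)+\rho_R(1-q_R))-\tfrac{\alpha}{2}(\rho_R-\rho_L)$, left endpoint weights for the illustrative kernel $w(s)=2(1-s)$, $\delta=mh$, and periodic boundary conditions; with these choices the conservative form preserves the total mass $h\sum_j\rho_j^n$ exactly.

```python
import numpy as np

def left_weights(m, w=lambda s: 2.0 * (1.0 - s)):
    # Left endpoint quadrature weights w_k = w(k*h/delta)*h/delta, delta = m*h.
    return w(np.arange(m) / m) / m

def nonlocal_step(rho, wk, lam, alpha=2.0):
    # One conservative update rho_j <- rho_j - lam*(F_{j+1/2} - F_{j-1/2}),
    # with F_{j+1/2} = g(rho_j, rho_{j+1}, q_j, q_{j+1}), nonlocal density
    # q_j = sum_k w_k*rho_{j+k}, and a Lax-Friedrichs-type flux (periodic).
    q = sum(w * np.roll(rho, -k) for k, w in enumerate(wk))
    rho_R, q_R = np.roll(rho, -1), np.roll(q, -1)
    F = 0.5 * (rho * (1 - q) + rho_R * (1 - q_R)) - 0.5 * alpha * (rho_R - rho)
    return rho - lam * (F - np.roll(F, 1))

# Smooth initial datum with values in (0, 1); horizon delta = m*h with m = 4.
x = np.linspace(0.0, 1.0, 80, endpoint=False)
rho = rho0 = 0.5 + 0.3 * np.sin(2 * np.pi * x)
wk = left_weights(4)
for _ in range(50):
    rho = nonlocal_step(rho, wk, lam=0.1)
# The discrete mass sum(rho) is conserved exactly by the conservative form.
```

One can additionally monitor $\min_j\rho_j^n$ and $\max_j\rho_j^n$ along the iteration to observe the maximum principle \eqref{eq:numerical_maxm_principle} for admissible parameters.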
To complete the convergence diagram in Fig.~\ref{fig:main_diagram}, one may ask whether $\rho^{\delta,h}\to\rho^0$ when both $\delta\to0$ and $h\to0$ simultaneously. If so, we say that the numerical scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} is \emph{asymptotically compatible} \cite{tian2014asymptotically,tian2020asymptotically} with its local limit.
Our key contribution is to prove the asymptotic compatibility of the proposed scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2}, which is given in Theorem~\ref{thm:ac}.
The proof is based on the a priori $\mathbf{L}^\infty$ and total variation estimates given in Theorem~\ref{thm:a_priori_estimates}, which are uniform with respect to the nonlocal horizon parameter $\delta$.
\begin{thm}\label{thm:a_priori_estimates}
Under Assumptions~\ref{assm:1}-\ref{assm:5},
and assuming that
\begin{align}\label{eq:delta_condition}
0< \delta \leq \delta_0 \doteq \frac{c\rho_{\mathrm{min}}}{2Lw(0)},
\end{align}
where the constant $c$ is as in \eqref{eq:weight_monotone} and the constants $\rho_{\mathrm{min}}$ and $L$ are as in \eqref{eq:ini_lip_const},
the numerical solution $\rho^{\delta,h}$ produced by the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} satisfies the maximum principle
\begin{align}\label{eq:numerical_maxm_principle}
\inf_{x\in\mathbb{R}}\rho_0(x) \leq \rho^{\delta,h}(t,x) \leq \sup_{x\in\mathbb{R}}\rho_0(x), \quad (t,x)\in [0,+\infty)\times\mathbb{R}.
\end{align}
Moreover, the total variation of the numerical solution in space $\mathrm{TV}(\rho^{\delta,h}(t,\cdot))$ is a non-increasing function of $t\in[0,+\infty)$, and
\begin{align}\label{eq:numerical_tvd}
\mathrm{TV}(\rho^{\delta,h}; \, [0,T]\times\mathbb{R}) \leq T\cdot\mathrm{TV}(\rho_0) \quad \forall T>0.
\end{align}
\end{thm}
\begin{thm}\label{thm:ac}
Under Assumptions~\ref{assm:1}-\ref{assm:5},
when $\delta\to0$ and $h\to0$ simultaneously,
the numerical solution $\rho^{\delta,h}$ produced by the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} converges in $\mathbf{L}^1_{\mathrm{loc}}([0,+\infty)\times\mathbb{R})$ to the weak entropy solution $\rho^0$ of the local model \eqref{eq:lwr} as defined in Proposition~\ref{prop:local_limit}.
\end{thm}
Based on the asymptotic compatibility of the scheme, we can show that the numerical solution $\rho^{\delta,h}$ converges to $\rho^\delta$ uniformly in $\delta$, which guarantees the robustness of numerical computations with the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} under changes of $\delta$. Moreover, we can also establish uniform convergence of $\rho^{\delta,h}$ to $\rho^{0,h}$ with respect to the mesh size $h$. Such a property is referred to as \emph{asymptotic preserving} in the literature \cite{jin2010asymptotic,filbet2010class,jin1999efficient}.
\begin{thm}\label{thm:numerical_convergence}
Under Assumptions~\ref{assm:1}-\ref{assm:5},
and assuming that $\delta$ satisfies the condition \eqref{eq:delta_condition},
as $h\to0$,
the numerical solution $\rho^{\delta,h}$ produced by the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} converges in $\mathbf{L}^1_{\mathrm{loc}}([0,+\infty)\times\mathbb{R})$ to the weak solution $\rho^{\delta}$ of the nonlocal model \eqref{eq:nonlocal_lwr} as defined in Proposition~\ref{prop:nonlocal_sol}.
Moreover, the convergence is uniform with respect to $\delta\in(0,\delta_0]$ where $\delta_0$ is as in \eqref{eq:delta_condition}:
\begin{align}\label{eq:uniform_numerical_convergence}
\lim_{h\to0} \left. \sup_{\delta\in(0,\delta_0]} \norm{\rho^{\delta,h} - \rho^{\delta}}_{\mathbf{L}^1(U)} \right. = 0 \quad \mathrm{for\ any\ bounded\ } U\subset [0,+\infty)\times\mathbb{R}.
\end{align}
\end{thm}
Let us make some remarks on the convergence rates in Theorems~\ref{thm:ac} and \ref{thm:numerical_convergence} above.
On one hand, the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} is expected to be at most first order accurate because it is based on a piecewise constant approximation.
On the other hand, for scalar conservation laws, it is known that a first order monotone scheme may only attain an $O(h^{1/2})$ convergence rate for discontinuous solutions \cite{leveque2002finite}.
In the numerical experiments in Section~\ref{sec:numerical_experiments}, we test the scheme with both smooth and discontinuous initial data; the results validate the $O(h)$ convergence rate to the local solution (as in Theorem~\ref{thm:ac}) when $\delta=mh$ for a fixed integer $m>0$, and the $O(h)$ convergence rate to the nonlocal solution uniformly in $\delta$ (as in Theorem~\ref{thm:numerical_convergence}).
We leave the rigorous analysis of convergence rates along various limiting paths to future work.
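As a hedged sketch of how such rates can be measured in practice, the snippet below applies a local Lax--Friedrichs scheme (an illustrative stand-in for the $\delta\leq h$ case) to a Riemann rarefaction of \eqref{eq:lwr} with $v(\rho)=1-\rho$, for which the exact entropy solution is known, and estimates the observed order from the $\mathbf{L}^1$ errors on two meshes; the grids, final time, and viscosity constant are illustrative choices, not those of Section~\ref{sec:numerical_experiments}.

```python
import numpy as np

def lxf_step(rho, lam, alpha=1.0):
    # Local Lax-Friedrichs update for rho_t + (rho*(1 - rho))_x = 0,
    # with constant-extrapolation ghost cells at the boundary.
    rp = np.concatenate(([rho[0]], rho, [rho[-1]]))
    f = rp * (1.0 - rp)
    F = 0.5 * (f[:-1] + f[1:]) - 0.5 * alpha * (rp[1:] - rp[:-1])
    return rho - lam * (F[1:] - F[:-1])

def exact_rarefaction(t, x):
    # Entropy solution of the Riemann problem rho = 1 (x < 0), rho = 0 (x > 0).
    return np.clip(0.5 * (1.0 - x / t), 0.0, 1.0)

def l1_error(N, T=0.5, lam=0.5):
    h = 4.0 / N
    x = -2.0 + (np.arange(N) + 0.5) * h        # cell centers on [-2, 2]
    rho = np.where(x < 0.0, 1.0, 0.0)
    for _ in range(int(round(T / (lam * h)))):  # time step tau = lam*h
        rho = lxf_step(rho, lam)
    return h * np.abs(rho - exact_rarefaction(T, x)).sum()

e1, e2 = l1_error(100), l1_error(200)
rate = np.log2(e1 / e2)   # observed order under mesh halving
```

The same error-halving procedure applies verbatim to the nonlocal scheme, with the exact nonlocal solution replaced by a reference computation on a much finer mesh.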
Finally, we can also obtain the nonlocal-to-local limit of numerical discretizations, in particular, the following uniform convergence result.
\begin{thm}\label{thm:ap}
Under Assumptions~\ref{assm:1}-\ref{assm:5},
for any $h_0>0$, we have
\begin{align}\label{eq:ap_conv}
\lim_{\delta\to0} \left. \sup_{h\in(0,h_0]} \norm{\rho^{\delta,h} - \rho^{0,h}}_{\mathbf{L}^1(U)} \right. = 0 \quad \mathrm{for\ any\ bounded\ } U\subset [0,+\infty)\times\mathbb{R}.
\end{align}
\end{thm}
\subsection{Comments on numerical quadrature weights and numerical flux functions}
Let us make some remarks on the choice of the numerical quadrature weights $\{w_k\}_{0\leq k\leq m-1}$. Provided that the nonlocal kernel $w_\delta=w_\delta(s)$ is $\mathbf{C}^1$ smooth and decreasing, one can write the numerical quadrature weights as
\begin{align*}
w_k = w(\xi_k)\frac{h}{\delta}
, \quad \xi_k \in \left[ k\frac{h}{\delta}, (k+1)\frac{h}{\delta} \right], \quad k=0,\cdots,m-1,
\end{align*}
where $\{\xi_k\}_{0\leq k\leq m-1}$ can be viewed as sampling points of a Riemann sum quadrature on $[0,1]$.
The condition \eqref{eq:weight_monotone} in Assumption~\ref{assm:3} essentially requires that the sampling points not be too close to each other; it is used to derive the necessary a priori estimates on numerical solutions in Theorem~\ref{thm:a_priori_estimates}.
To illustrate the meaning of the constant $c$ and the factor $m^{-2}$ in \eqref{eq:weight_monotone}, consider the left endpoint quadrature weights in \eqref{eq:quad_weight_left}. In this case,
\begin{align*}
w_{k-1} - w_k = \frac{h}{\delta} \left[ w\left((k-1)\frac{h}{\delta}\right) - w\left(k\frac{h}{\delta}\right) \right] \geq \left( \min_{s\in[0,1]} -w'(s) \right) \left( \frac{h}{\delta} \right)^2 \geq c m^{-2},
\end{align*}
where the constant $c= \min_{s\in[0,1]} -w'(s) > 0$.
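For a concrete kernel this computation can be checked directly. The snippet below uses the illustrative linear kernel $w(s)=2(1-s)$, for which $c=\min_{s\in[0,1]}-w'(s)=2$, and verifies that the left endpoint weights satisfy the gap bound $w_{k-1}-w_k\geq cm^{-2}$ (with equality, since $w$ is linear).

```python
import numpy as np

def left_weights(m, w=lambda s: 2.0 * (1.0 - s)):
    # w_k = w(k*h/delta) * h/delta with delta = m*h, so h/delta = 1/m.
    return w(np.arange(m) / m) / m

m = 10
wk = left_weights(m)
c = 2.0                       # c = min_{s in [0,1]} (-w'(s)) for w(s) = 2(1-s)
gaps = wk[:-1] - wk[1:]       # the differences w_{k-1} - w_k, k = 1, ..., m-1
# For the linear kernel, every gap equals exactly 2/m^2 = c*m^{-2}.
```

For a strictly convex or concave kernel the gaps vary with $k$, but the same lower bound $cm^{-2}$ holds with $c$ the minimum slope magnitude.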
The condition \eqref{eq:normalization_condition} in Assumption~\ref{assm:3} is the normalization condition for the numerical quadrature weights, which is essential to the consistency between the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} and the local model \eqref{eq:lwr}.
To demonstrate the potential risks when the normalization condition \eqref{eq:normalization_condition} is violated, let us consider the case $\delta=mh$ where $m$ is a fixed positive integer. Then the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} can be viewed as an $(m+2)$-point conservative scheme for the local model \eqref{eq:lwr} with the numerical flux function:
\begin{align*}
g_{\mathrm{local}}(\rho_j, \cdots, \rho_{j+m}) = g\left(\rho_j, \rho_{j+1}, \sum_{k=0}^{m-1} w_k\rho_{j+k}, \sum_{k=0}^{m-1} w_k\rho_{j+k+1} \right),
\end{align*}
where $g$ is as in Assumption~\ref{assm:4}.
Suppose $\rho_j = \cdots = \rho_{j+m} = \bar{\rho}$; then, to make $g_{\mathrm{local}}$ consistent with the local model \eqref{eq:lwr}, it is necessary to have
\begin{align*}
g_{\mathrm{local}}(\bar{\rho}, \cdots, \bar{\rho}) = \bar{\rho} \left(1 - \bar{\rho} \sum_{k=0}^{m-1}w_k \right) = \bar{\rho} (1 - \bar{\rho}),
\end{align*}
which requires the normalization condition \eqref{eq:normalization_condition}. In contrast, if the condition \eqref{eq:normalization_condition} is violated and $\eta \doteq \sum_{k=0}^{m-1}w_k \neq 1$, the numerical solutions will formally converge to a solution of the equation
\begin{align*}
\partial_t\rho(t,x) + \partial_x( \rho(t,x) (1-\eta \rho(t,x)) ) = 0,
\end{align*}
rather than the desired equation \eqref{eq:lwr} with $v(\rho)=1-\rho$.
This means that the failure of the normalization condition \eqref{eq:normalization_condition} for some numerical quadrature weights may lead to incorrect limit solutions when $\delta\to0$ and $h\to0$ simultaneously.
Hence, we introduce the following normalized left endpoint quadrature weights:
\begin{align}
\mbox{[Normalized left endpoint]} \quad w_k = \frac{w_\delta(kh)h}{\sum_{i=0}^{m-1} w_\delta(ih)h}, \quad k=0,\cdots,m-1, \label{eq:quad_weight_left_norm}
\end{align}
and give the following proposition.
\begin{prop}
The normalized left endpoint quadrature weights \eqref{eq:quad_weight_left_norm} and the exact quadrature weights \eqref{eq:quad_weight_exact} both satisfy Assumption~\ref{assm:3}, with the constant $c$ in the condition \eqref{eq:weight_monotone} given by $c= \frac{1}{1+w(0)} \min_{s\in[0,1]} -w'(s) $ and $c= \min_{s\in[0,1]} -w'(s) $, respectively.
The left endpoint quadrature weights satisfy the condition \eqref{eq:weight_monotone} with the constant $c= \min_{s\in[0,1]} -w'(s) $, but they do not satisfy the normalization condition \eqref{eq:normalization_condition}.
\end{prop}
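The discrepancy $\eta=\sum_k w_k\neq1$ is easy to exhibit numerically. For the illustrative linear kernel $w(s)=2(1-s)$, the plain left endpoint weights give $\eta=(m+1)/m$, so the normalization error is $1/m$ and vanishes only as $m\to\infty$, while the normalized weights \eqref{eq:quad_weight_left_norm} sum to one exactly:

```python
import numpy as np

def left_weights(m, w=lambda s: 2.0 * (1.0 - s)):
    # Plain left endpoint weights for a kernel w on [0,1], with delta = m*h.
    return w(np.arange(m) / m) / m

etas = {}
for m in (2, 5, 20):
    wk = left_weights(m)
    etas[m] = wk.sum()        # eta = (m+1)/m for the linear kernel, never 1
wk_norm = wk / wk.sum()       # normalized weights: sum is exactly 1
# With eta != 1, the formal delta, h -> 0 limit solves
# rho_t + (rho*(1 - eta*rho))_x = 0 instead of the desired local equation.
```

This matches the discussion above: without normalization the computed limit model carries the spurious factor $\eta$ in its flux.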
A comparison between the different choices of numerical quadrature weights is made through numerical experiments in Section~\ref{sec:numerical_experiments}.
Concerning Assumption~\ref{assm:4} on the numerical flux function $g$: with the velocity function $v(\rho)=1-\rho$, the flux function in the continuum model \eqref{eq:nonlocal_lwr} is $\rho(1-q)$, a quadratic polynomial of $(\rho,q)$ whose only quadratic term is $-\rho q$. It is then reasonable to assume that the numerical flux function $g$ is quadratic with its second order derivatives satisfying condition (iii). Condition (ii) guarantees the consistency of the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} with the model \eqref{eq:nonlocal_lwr}. Condition (iv) is used to show that the scheme is monotone under Assumptions~\ref{assm:1}-\ref{assm:5}; see Theorem~\ref{thm:a_priori_estimates}. It is natural to ask whether the results in this work can be extended to more general numerical flux functions, e.g., non-quadratic $g$. We leave the study of such extensions to future work.
Let us mention that the numerical flux functions given in \eqref{eq:g_func_LxF}-\eqref{eq:g_func_LxF_new} all satisfy Assumption~\ref{assm:4}. For the two Lax-Friedrichs type numerical flux functions \eqref{eq:g_func_LxF} and \eqref{eq:g_func_LxF_new}, the numerical viscosity constant should satisfy $\alpha\geq2$.
We also remark that, in the case $0<\delta\leq h$, i.e., when the nonlocal horizon lies within one spatial mesh cell, it holds that $q_L=\rho_L$ and $q_R=\rho_R$ by Assumption~\ref{assm:3}.
Suppose $g$ satisfies Assumption~\ref{assm:4}; then the numerical flux function $g=g(\rho_L,\rho_R,\rho_L,\rho_R)$ for the local model \eqref{eq:lwr} is non-decreasing with respect to $\rho_L$ and non-increasing with respect to $\rho_R$.
Therefore, the scheme \eqref{eq:local_lwr_num} is a monotone scheme for the local model \eqref{eq:lwr}.
As a consequence, we give the following result.
\begin{prop}\label{prop:scheme_local_limit}
Under Assumptions~\ref{assm:1}, \ref{assm:3}-\ref{assm:5}, let $\rho^{\delta,h}$ be the numerical solution produced by the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} and $\rho^{0,h}$ be the one produced by \eqref{eq:local_lwr_num}. It holds that:
\begin{align*}
\rho^{\delta,h} = \rho^{0,h} \quad \mathrm{when} \quad 0 < \delta \leq h.
\end{align*}
Moreover, $\rho^{0,h}$ converges in $\mathbf{L}^1_{\mathrm{loc}}([0,+\infty)\times\mathbb{R})$ to the weak entropy solution $\rho^0$ of the local model \eqref{eq:lwr} as defined in Proposition~\ref{prop:local_limit}.
\end{prop}
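Proposition~\ref{prop:scheme_local_limit} admits a direct numerical sanity check: for $0<\delta\leq h$ there is a single weight $w_0=1$, hence $q_j=\rho_j$ and the nonlocal update coincides with the local one. The snippet below assumes a Lax--Friedrichs-type flux and periodic boundary conditions (illustrative choices, not the paper's exact definitions).

```python
import numpy as np

def nonlocal_step(rho, wk, lam, alpha=2.0):
    # Conservative update with flux g(rho_L, rho_R, q_L, q_R)
    #   = (rho_L*(1-q_L) + rho_R*(1-q_R))/2 - (alpha/2)*(rho_R - rho_L)
    # and nonlocal density q_j = sum_k w_k*rho_{j+k} (periodic).
    q = sum(w * np.roll(rho, -k) for k, w in enumerate(wk))
    rho_R, q_R = np.roll(rho, -1), np.roll(q, -1)
    F = 0.5 * (rho * (1 - q) + rho_R * (1 - q_R)) - 0.5 * alpha * (rho_R - rho)
    return rho - lam * (F - np.roll(F, 1))

def local_step(rho, lam, alpha=2.0):
    # Independent implementation of the local scheme: q_L = rho_L, q_R = rho_R.
    f = rho * (1.0 - rho)
    rho_R, f_R = np.roll(rho, -1), np.roll(f, -1)
    F = 0.5 * (f + f_R) - 0.5 * alpha * (rho_R - rho)
    return rho - lam * (F - np.roll(F, 1))

x = np.linspace(0.0, 1.0, 64, endpoint=False)
rho = 0.5 + 0.4 * np.sin(2 * np.pi * x)
a = nonlocal_step(rho, np.array([1.0]), lam=0.2)  # m = 1, i.e. delta <= h
b = local_step(rho, lam=0.2)
# a and b agree: the nonlocal scheme collapses to the local one for m = 1.
```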
\section{Proof of theorems}
This section gives the proofs of our main results. First, in Section~\ref{sec:maxm_principle}, we show the maximum principle for numerical solutions. Then we present a one-sided Lipschitz estimate for numerical solutions in Section~\ref{sec:tv_estimate_ac}; the monotonicity of the numerical scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} and the total variation estimate for numerical solutions follow as corollaries.
These two subsections constitute the proof of Theorem~\ref{thm:a_priori_estimates}.
In Section~\ref{sec:convergence}, we let $h\to0$ and show convergence of numerical solutions to the proper nonlocal or local solution, which gives the proofs of Theorem~\ref{thm:ac} and Theorem~\ref{thm:numerical_convergence}.
In Section~\ref{sec:local_limit_num}, we give the proof of Theorem~\ref{thm:ap} on the nonlocal-to-local limit of numerical discretizations.
\subsection{Maximum principle}\label{sec:maxm_principle}
In this subsection, we aim to show the maximum principle \eqref{eq:numerical_maxm_principle} in Theorem~\ref{thm:a_priori_estimates}.
By Assumption~\ref{assm:1} and \eqref{eq:ini_data_discrete}, the numerical solution at the initial time, $\{\rho_j^0\}_{j\in\mathbb{Z}}$, satisfies $0 \leq \rho_j^0 \leq 1$ for all ${j\in\mathbb{Z}}$. The maximum principle \eqref{eq:numerical_maxm_principle} then follows by induction from Lemma~\ref{lemm:sol_range_induction} below.
\begin{lemm}\label{lemm:sol_range_induction}
Suppose all conditions in Theorem~\ref{thm:a_priori_estimates} hold, and that $0\leq\rho_{\mathrm{min}}\leq\rho_{j+k}^n\leq\rho_{\mathrm{max}}\leq1$ for $k=-1,0,1,\cdots,m$.
Then we have
\begin{align}\label{eq:sol_range_induction}
\rho_{\mathrm{min}}\leq \mathcal{H}(\rho^n_{j-1},\rho^n_j,\rho^n_{j+1},\cdots,\rho^n_{j+m})\leq \rho_{\mathrm{max}},
\end{align}
where the operator $\mathcal{H}$ is as defined in \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2}.
\end{lemm}
Let us first check the monotonicity of the scheme defined by \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2}.
Denote
\begin{align*}
\theta_j^{n,(i)} = \theta^{(i)}(q_{j-1}^n, q_j^n), \ i=1,2; \quad \theta_j^{n,(i)} = \theta^{(i)}(\rho_{j-1}^n, \rho_j^n), \ i=3,4 \quad \text{for} \quad j\in\mathbb{Z}, \ n\geq0.
\end{align*}
A direct calculation gives:
\begin{subequations}
\begin{align}
\frac{\partial \mathcal{H}}{\partial \rho^n_{j-1}} &= \lambda \left( \theta_j^{n,(1)} + w_0 \theta_j^{n,(3)} \right); \label{eq:H_grad_1} \\
\frac{\partial \mathcal{H}}{\partial\rho^n_j} &= 1 + \lambda \left( \theta_j^{n,(2)} - \theta_{j+1}^{n,(1)} + w_1 \theta_j^{n,(3)} + w_0 \theta_j^{n,(4)} - w_0 \theta_{j+1}^{n,(3)} \right); \label{eq:H_grad_2} \\
\frac{\partial \mathcal{H}}{\partial \rho^n_{j+1}} &= \lambda \left(w_2 \theta_j^{n,(3)} + w_1 \theta_j^{n,(4)} - \theta_{j+1}^{n,(2)} - w_1 \theta_{j+1}^{n,(3)} - w_0 \theta_{j+1}^{n,(4)} \right); \label{eq:H_grad_3} \\
\frac{\partial \mathcal{H}}{\partial \rho^n_{j+k}} &= \lambda \left( w_{k+1} \theta_j^{n,(3)} - w_k \theta_{j+1}^{n,(3)} + w_k \theta_j^{n,(4)} - w_{k-1} \theta_{j+1}^{n,(4)} \right), \quad k=2,\cdots,m; \label{eq:H_grad_4}
\end{align}
\end{subequations}
where we make the convention that $w_m = w_{m+1} = 0$.
In \eqref{eq:H_grad_4}, which corresponds to the nonlocal dependence of the flux on the solution, it is possible that $\theta_j^{n,(3)}<0, \, \theta_j^{n,(4)}<0$ while $\theta_{j+1}^{n,(3)}=\theta_{j+1}^{n,(4)}=0$ at some point $j=j_0$, e.g., for the Riemann type solution:
\begin{align*}
\rho_j^n = 1, \ j\leq j_0; \quad \rho_j^n = 0, \ j> j_0.
\end{align*}
In this case, $\frac{\partial \mathcal{H}}{\partial \rho^n_{j+k}} < 0$ for $k=2,\cdots,m-1$.
Therefore, one cannot deduce \eqref{eq:sol_range_induction} by showing that \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} is a monotone scheme. Here we prove \eqref{eq:sol_range_induction} in an alternative way, which was also used in \cite{goatin2016well,friedrich2018godunov}.
\begin{proof}[Proof of Lemma~\ref{lemm:sol_range_induction}]
We observe the identity $\mathcal{H}(\rho_{\mathrm{min}},\rho_{\mathrm{min}},\rho_{\mathrm{min}},\cdots,\rho_{\mathrm{min}})=\rho_{\mathrm{min}}$, and thus we can write the term $\mathcal{H}(\rho^n_{j-1},\rho^n_j,\rho^n_{j+1},\cdots,\rho^n_{j+m})-\rho_{\mathrm{min}}$ as the sum of two parts:
\begin{align*}
\Delta \mathcal{H}_1=&\mathcal{H}(\rho^n_{j-1},\rho^n_j,\rho^n_{j+1},\rho^n_{j+2}\cdots,\rho^n_{j+m})-\mathcal{H}(\rho_{\mathrm{min}},\rho_{\mathrm{min}},\rho^n_{j+1},\rho^n_{j+2},\cdots,\rho^n_{j+m}),\\
\Delta \mathcal{H}_2=&\mathcal{H}(\rho_{\mathrm{min}},\rho_{\mathrm{min}},\rho^n_{j+1},\rho^n_{j+2},\cdots,\rho^n_{j+m})-\mathcal{H}(\rho_{\mathrm{min}},\rho_{\mathrm{min}},\rho_{\mathrm{min}},\rho_{\mathrm{min}}\cdots,\rho_{\mathrm{min}}).
\end{align*}
By the mean value theorem,
\begin{align*}
\Delta \mathcal{H}_1&=\sum_{k=-1,0}\frac{\partial \mathcal{H}}{\partial \rho^n_{j+k}}(\tilde{\rho}^n_{j-1},\tilde{\rho}^n_j,\rho^n_{j+1},\rho^n_{j+2}\cdots,\rho^n_{j+m})(\rho^n_{j+k}-\rho_{\mathrm{min}}),\\
\Delta \mathcal{H}_2&=\sum_{1\leq k\leq m}\frac{\partial \mathcal{H}}{\partial \rho^n_{j+k}}(\rho_{\mathrm{min}},\rho_{\mathrm{min}},\tilde{\rho}^n_{j+1},\tilde{\rho}^n_{j+2}\cdots,\tilde{\rho}^n_{j+m})(\rho^n_{j+k}-\rho_{\mathrm{min}}),
\end{align*}
where $0\leq\rho_{\mathrm{min}}\leq\tilde{\rho}^n_{j+k}\leq\rho_{\mathrm{max}}\leq1 \ \forall k=-1,0,1,\cdots,m$.
Let us use \eqref{eq:H_grad_1}-\eqref{eq:H_grad_4} with $\theta_j^{n,(i)}$ replaced by $\tilde{\theta}_j^{n,(i)}$, evaluated at the intermediate values $\tilde{\rho}^n_{j+k}$.
By Assumption~\ref{assm:4}, we have $\tilde{\theta}_j^{n,(1)} + \tilde{\theta}_j^{n,(3)} \geq0$, which implies that the term corresponding to $k=-1$ in $\Delta \mathcal{H}_1$ is nonnegative. Moreover, Assumption~\ref{assm:5} implies
that the term corresponding to $k=0$ in $\Delta \mathcal{H}_1$ is nonnegative.
For $\Delta \mathcal{H}_2$, we note that
\begin{align*}
\tilde{\theta}_{j+1}^{n,(3)} = \tilde{\theta}_{j}^{n,(3)} + \gamma_{23} (\tilde{\rho}_{j+1}^n - \rho_{\mathrm{min}}) \leq \tilde{\theta}_{j}^{n,(3)}, \quad \tilde{\theta}_{j+1}^{n,(4)} = \tilde{\theta}_{j}^{n,(4)} + \gamma_{24} (\tilde{\rho}_{j+1}^n - \rho_{\mathrm{min}}) \leq \tilde{\theta}_{j}^{n,(4)},
\end{align*}
which yields that
\begin{align*}
w_{k+1} \tilde{\theta}_j^{n,(3)} - w_k \tilde{\theta}_{j+1}^{n,(3)} + w_k \tilde{\theta}_j^{n,(4)} - w_{k-1} \tilde{\theta}_{j+1}^{n,(4)} \geq (w_{k+1} - w_k) \tilde{\theta}_j^{n,(3)} + (w_k - w_{k-1}) \tilde{\theta}_j^{n,(4)} \geq0,
\end{align*}
for $k=1,\cdots,m$.
Hence the partial derivatives appearing in $\Delta \mathcal{H}_2$ are nonnegative for $k=1,\cdots,m$.
Then we deduce that
\begin{align*}
\mathcal{H}(\rho^n_{j-1},\rho^n_j,\rho^n_{j+1},\cdots,\rho^n_{j+m})-\rho_{\mathrm{min}}=\Delta \mathcal{H}_1+\Delta \mathcal{H}_2\geq0.
\end{align*}
Similarly one can show the upper bound estimate $\mathcal{H}(\rho^n_{j-1},\rho^n_j,\rho^n_{j+1},\cdots,\rho^n_{j+m})-\rho_{\mathrm{max}}\leq0$.
\end{proof}
\subsection{One-sided Lipschitz estimate}\label{sec:tv_estimate_ac}
We now derive a one-sided Lipschitz estimate for numerical solutions, given in Lemma~\ref{lem:lip_estimate_1}.
Then we can deduce that the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} is monotone and obtain total variation estimates for numerical solutions as given in Lemma~\ref{lem:tv_estimate_ac}.
The total variation diminishing property and the estimate \eqref{eq:numerical_tvd} in Theorem~\ref{thm:a_priori_estimates} are direct corollaries of Lemma~\ref{lem:tv_estimate_ac}.
\begin{lemm}\label{lem:lip_estimate_1}
Suppose all conditions in Theorem~\ref{thm:a_priori_estimates} hold, and that $\{\rho_j^n\}_{j\in\mathbb{Z}}^{n\geq0}$ is the numerical solution produced by the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2}.
The numerical differences
\begin{align}
r_j^n = \rho_{j+1}^n - \rho_j^n, \quad j\in\mathbb{Z}, \ n\geq0,
\end{align}
satisfy
\begin{align}
r_j^n\geq -Lh, \quad j\in\mathbb{Z},\ n\geq0.\label{eq:num_lip_estimate_1}
\end{align}
\end{lemm}
\begin{proof}
It follows from the definition of the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} that
\begin{align}\label{eq:tmp1}
r_j^{n+1} &= r_j^n + \lambda \left[ 2g(\rho_j^n,\rho_{j+1}^n,q_j^n,q_{j+1}^n) - g(\rho_{j-1}^n,\rho_j^n,q_{j-1}^n,q_j^n) - g(\rho_{j+1}^n,\rho_{j+2}^n,q_{j+1}^n,q_{j+2}^n) \right].
\end{align}
Noting that $g$ is a quadratic function, we can perform Taylor expansions to get
\begin{align*}
&g(\rho_j^n,\rho_{j+1}^n,q_j^n,q_{j+1}^n) - g(\rho_{j-1}^n,\rho_j^n,q_{j-1}^n,q_j^n) \\
&\;\; = \theta_j^{n,(1)} r_{j-1}^n + \theta_j^{n,(2)} r_j^n + \theta_j^{n,(3)} (q_j^n-q_{j-1}^n) + \theta_j^{n,(4)} (q_{j+1}^n-q_j^n) \\
&\quad + \gamma_{13}r_{j-1}^n(q_j^n-q_{j-1}^n) + \gamma_{14}r_{j-1}^n(q_{j+1}^n-q_j^n) + \gamma_{23}r_j^n(q_j^n-q_{j-1}^n) + \gamma_{24}r_j^n(q_{j+1}^n-q_j^n), \\
&g(\rho_{j+1}^n,\rho_{j+2}^n,q_{j+1}^n,q_{j+2}^n) - g(\rho_{j-1}^n,\rho_j^n,q_{j-1}^n,q_j^n) \\
&\;\; = \theta_j^{n,(1)} (r_{j-1}^n+r_j^n) + \theta_j^{n,(2)} (r_j^n+r_{j+1}^n) + \theta_j^{n,(3)} (q_{j+1}^n-q_j^n+q_j^n-q_{j-1}^n)\\
&\quad + \theta_j^{n,(4)} (q_{j+2}^n-q_{j+1}^n+q_{j+1}^n-q_j^n) + \gamma_{13} (r_{j-1}^n+r_j^n)(q_{j+1}^n-q_j^n+q_j^n-q_{j-1}^n)\\
&\quad + \gamma_{14} (r_{j-1}^n+r_j^n) (q_{j+2}^n-q_{j+1}^n+q_{j+1}^n-q_j^n)
+ \gamma_{23} (r_j^n+r_{j+1}^n) (q_{j+1}^n-q_j^n+q_j^n-q_{j-1}^n) \\
&\quad + \gamma_{24} (r_j^n+r_{j+1}^n) (q_{j+2}^n-q_{j+1}^n+q_{j+1}^n-q_j^n),
\end{align*}
where
$
\{\theta_j^{n,(i)} = \theta^{(i)}(q_{j-1}^n, q_j^n)\}_{i=1}^2$ and
$\{\theta_j^{n,(i)} = \theta^{(i)}(\rho_{j-1}^n, \rho_j^n)\}_{i=3}^4$ for
$j\in\mathbb{Z}$ and $ n\geq0$.
Moreover, from the definition of $q_j^n$ given in \eqref{eq:discrete_nonlocal_density} we obtain:
\begin{align*}
q_{j+1}^n - q_j^n = \sum_{k=0}^{m-1} w_k r_{j+k}^n.
\end{align*}
Therefore \eqref{eq:tmp1} can be rewritten as
\begin{align*}
& r_j^{n+1} = \lambda \theta_j^{n,(1)} r_{j-1}^n + \left( 1 + \lambda \theta_j^{n,(2)} - \lambda \theta_j^{n,(1)} \right) r_j^n - \lambda \theta_j^{n,(2)} r_{j+1}^n \\
&\quad + \lambda \left( \theta_j^{n,(3)} + \gamma_{13}r_{j-1}^n + (\gamma_{23}-\gamma_{13})r_j^n -\gamma_{23}r_{j+1}^n \right) \sum_{k=0}^{m-1} w_k r_{j+k-1}^n \\
&\quad + \lambda \left( \theta_j^{n,(4)}-\theta_j^{n,(3)} + (\gamma_{14}-\gamma_{13}) r_{j-1}^n + (\gamma_{24}-\gamma_{13}-\gamma_{14}-\gamma_{23}) r_j^n - (\gamma_{23}+\gamma_{24}) r_{j+1}^n \right) \sum_{k=0}^{m-1} w_k r_{j+k}^n \\
&\quad - \lambda \left( \theta_j^{n,(4)} + \gamma_{14}r_{j-1}^n + (\gamma_{14}+\gamma_{24})r_j^n + \gamma_{24}r_{j+1}^n \right) \sum_{k=0}^{m-1} w_k r_{j+k+1}^n.
\end{align*}
In the above expression, $r_j^{n+1}$ is represented as a linear combination of $r_{j-1}^n,\cdots,r_{j+m}^n$.
By a direct calculation, the sum of the coefficients of the terms $r_{j-1}^n,\cdots,r_{j+m}^n$ is
\begin{align*}
S = 1 - 2\lambda \left( (\gamma_{13}+\gamma_{14})r_j^n + (\gamma_{23}+\gamma_{24})r_{j+1}^n \right),
\end{align*}
where the fact $\gamma_{13}+\gamma_{14}+\gamma_{23}+\gamma_{24}=-1$ is used.
Since this sum does not equal one, we split off two quadratic terms in $r_j^n$ and $r_{j+1}^n$, which gives the form
\begin{align}\label{eq:quad_comb}
r_j^{n+1} = \sum_{-1\leq k\leq m}c_{j,k}^n r_{j+k}^n - 2\lambda (\gamma_{13}+\gamma_{14}) (r_j^n)^2 - 2\lambda (\gamma_{23}+\gamma_{24}) (r_{j+1}^n)^2,
\end{align}
such that $\sum_{-1\leq k\leq m}c_{j,k}^n=1$.
The coefficients $\{c_{j,k}^n\}_{-1\leq k\leq m}$ are given by:
\begin{subequations}\label{eq:coefficients}
\begin{align}
c_{j,-1}^n &= \lambda \theta_j^{n,(1)} + \lambda w_0 \left( \theta_j^{n,(3)} + \gamma_{13}r_{j-1}^n + (\gamma_{23}-\gamma_{13})r_j^n -\gamma_{23}r_{j+1}^n \right); \\
c_{j,0}^n &= 1 + \lambda \left( \theta_j^{n,(2)} - \theta_j^{n,(1)} \right) + \lambda p_{j,0}^n + 2\lambda (\gamma_{13}+\gamma_{14}) r_j^n; \\
c_{j,1}^n &= -\lambda \theta_j^{n,(2)} + \lambda p_{j,1}^n + 2\lambda (\gamma_{23}+\gamma_{24}) r_{j+1}^n; \\
c_{j,k}^n &= \lambda p_{j,k}^n, \quad k=2,\cdots,m;
\end{align}
\end{subequations}
where
\begin{align}\label{eq:coefficients_p}
p_{j,k}^n =& w_{k+1} \left( \theta_j^{n,(3)} + \gamma_{13}r_{j-1}^n + (\gamma_{23}-\gamma_{13})r_j^n -\gamma_{23}r_{j+1}^n \right) \\
& + w_k \left( \theta_j^{n,(4)}-\theta_j^{n,(3)} + (\gamma_{14}-\gamma_{13}) r_{j-1}^n + (\gamma_{24}-\gamma_{13}-\gamma_{14}-\gamma_{23}) r_j^n - (\gamma_{23}+\gamma_{24}) r_{j+1}^n \right) \notag\\
& - w_{k-1} \left( \theta_j^{n,(4)} + \gamma_{14}r_{j-1}^n + (\gamma_{14}+\gamma_{24})r_j^n + \gamma_{24}r_{j+1}^n \right), \notag
\end{align}
and we make the convention that $w_{-1} = w_m = w_{m+1} = 0$.
The initial one-sided Lipschitz condition \eqref{eq:ini_lip_const} gives $r_j^0\geq -Lh$ for all $j\in\mathbb{Z}$.
We next show that if \eqref{eq:num_lip_estimate_1} holds for any $n\geq0$, then it is also true for $n+1$. Then \eqref{eq:num_lip_estimate_1} follows by induction.
Let us use \eqref{eq:quad_comb}-\eqref{eq:coefficients_p}.
By Assumptions~\ref{assm:3}-\ref{assm:5},
we have $c_{j,k}^n\geq0$ for $k=-1,0$ and $-\lambda \theta_j^{n,(2)} + 2\lambda (\gamma_{23}+\gamma_{24}) r_{j+1}^n\geq0$. To show $c_{j,k}^n\geq0$ for all $-1\leq k\leq m$, it suffices to show $p_{j,k}^n\geq0$ for all $k=1,\cdots,m$.
By Assumptions~\ref{assm:3}-\ref{assm:5}, we have that
\begin{align*}
w_{k+1} \leq w_k \leq w_{k-1} \leq w(0)m^{-1}, \quad
w_{k-1}-w_k \geq cm^{-2}, \quad w_k-w_{k+1} \geq cm^{-2},
\end{align*}
and $\theta_j^{n,(3)} + \theta_j^{n,(4)} \leq -\rho_{\mathrm{min}}$, where the constant $c$ is as in \eqref{eq:weight_monotone} and the constant $\rho_{\mathrm{min}}$ is as in \eqref{eq:ini_lip_const}. Then we deduce that
\begin{align*}
p_{j,k}^n \geq& -2 w(0)m^{-1} [(\gamma_{13}+\gamma_{14})r_j^n + (\gamma_{23}+\gamma_{24})r_{j+1}^n] - cm^{-2} (\theta_j^{n,(3)} + \theta_j^{n,(4)}) \\
\geq& -2 w(0)m^{-1}Lh + cm^{-2} \rho_{\mathrm{min}}
\geq m^{-2} (c\rho_{\mathrm{min}} - 2w(0)\delta L)
\geq 0,
\end{align*}
provided $0<\delta\leq\delta_0=\frac{c\rho_{\mathrm{min}}}{2Lw(0)}$.
Now the coefficients $\{c_{j,k}^n\}_{-1\leq k\leq m}$ are all nonnegative and sum to one: $\sum_{-1\leq k\leq m}c_{j,k}^n=1$.
Therefore $r_j^{n+1}$ is a convex combination of $r_{j-1}^n,r_j^n,r_{j+1}^n,\cdots,r_{j+m}^n$ plus the nonnegative quadratic terms $- 2\lambda (\gamma_{13}+\gamma_{14}) (r_j^n)^2 - 2\lambda (\gamma_{23}+\gamma_{24}) (r_{j+1}^n)^2$. Hence we have:
\begin{align*}
\inf_{j\in\mathbb{Z}}r_j^{n+1}\geq\inf_{j\in\mathbb{Z}}r_j^n\geq -Lh,
\end{align*}
which completes the proof.
\end{proof}
Based on Lemma~\ref{lem:lip_estimate_1}, a more careful analysis gives the following sharper estimate corresponding to the entropy condition \eqref{eq:oleinik}.
\begin{lemm}\label{lem:lip_estimate_2}
Suppose all conditions in Theorem~\ref{thm:a_priori_estimates} hold, and that $0<h<h_0$ with $h_0>0$ depending only on $1-\lambda\sum_{i=1}^4\norm{\theta^{(i)}}_\infty$ and $\frac{c\rho_{\mathrm{min}}}{2Lw(0)}-\delta$. We have:
\begin{align}
L^n\leq\frac{1}{\frac{1}{L^0}+2n\tau}\leq\frac{1}{2n\tau},\quad n\geq1,\label{eq:num_lip_estimate_2}
\end{align}
where
\begin{align}
L^n\triangleq\sup_{j\in\mathbb{Z}}\max\left\{-\frac{r_j^n}{h},0\right\},\quad n\geq0.
\end{align}
\end{lemm}
\begin{proof}
We again start from \eqref{eq:quad_comb}. For $k\neq0,1$, we use the estimate
\begin{align}\label{eq:c_tmp1}
c_{j,k}^nr_{j+k}^n\geq -c_{j,k}^nL^nh.
\end{align}
For $k=0$ and $k=1$, we consider the following quadratic functions:
\begin{align*}
b_0(r_j^n) \doteq c_{j,0}^nr_j^n - 2\lambda (\gamma_{13}+\gamma_{14}) (r_j^n)^2 , \quad b_1(r_{j+1}^n) \doteq c_{j,1}^nr_{j+1}^n - 2\lambda (\gamma_{23}+\gamma_{24}) (r_{j+1}^n)^2,
\end{align*}
respectively.
One can verify that
\begin{align*}
b'_0(r_j^n) &= c_{j,0}^n - 4\lambda (\gamma_{13}+\gamma_{14}) r_j^n \geq c_{j,0}^n + 4\lambda (\gamma_{13}+\gamma_{14})L^nh \geq C_0 - 4\lambda Lh, \\
b'_1(r_{j+1}^n) &= c_{j,1}^n - 4\lambda (\gamma_{23}+\gamma_{24}) r_{j+1}^n \geq c_{j,1}^n - 4\lambda (\gamma_{23}+\gamma_{24})L^nh \geq C_1 - 4 \lambda Lh,
\end{align*}
when $r_j^n, r_{j+1}^n \geq -L^nh$, where the constant $C_0>0$ depends only on $1-\lambda\sum_{i=1}^4\norm{\theta^{(i)}}_\infty$ and the constant $C_1>0$ depends only on $\frac{c\rho_{\mathrm{min}}}{2Lw(0)}-\delta$. Therefore there exists $h_0>0$, depending only on these two quantities, such that $b'_0(r_j^n)\geq0$ and $b'_1(r_{j+1}^n)\geq0$ whenever $h<h_0$.
In this case, we have
\begin{align}\label{eq:c_tmp2}
b_0(r_j^n) \geq -c_{j,0}^nL^nh - 2\lambda (\gamma_{13}+\gamma_{14}) (L^nh)^2 , \quad b_1(r_{j+1}^n) \geq -c_{j,1}^nL^nh - 2\lambda (\gamma_{23}+\gamma_{24}) (L^nh)^2.
\end{align}
Summing up \eqref{eq:c_tmp1} for $k\neq0,1$ and \eqref{eq:c_tmp2} for $k=0,1$, and noting that $\gamma_{13}+\gamma_{14}+\gamma_{23}+\gamma_{24}=-1$, we obtain:
\begin{align*}
r_j^{n+1}\geq-\left(\sum_{-1\leq k\leq m}c_{j,k}^n\right)L^nh+2\lambda(L^nh)^2=-L^nh+2(L^n)^2h\tau,
\end{align*}
which yields
$ L^{n+1}\leq L^n-2(L^n)^2\tau$.
Then \eqref{eq:num_lip_estimate_2} follows by induction.
\end{proof}
Let us now return to the monotonicity of the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2}. With the derived one-sided Lipschitz estimate \eqref{eq:num_lip_estimate_1}, a calculation similar to that in the proof of Lemma~\ref{lem:lip_estimate_1} gives:
\begin{align*}
\frac{\partial \mathcal{H}}{\partial \rho^n_{j+k}} &= \lambda \left( w_{k+1} \theta_j^{n,(3)} - w_k \theta_{j+1}^{n,(3)} + w_k \theta_j^{n,(4)} - w_{k-1} \theta_{j+1}^{n,(4)} \right) \\
&\geq \lambda m^{-2} (c\rho_{\mathrm{min}} - 2w(0)\delta L) \geq0,
\end{align*}
for $k=2,\cdots,m$. In this case, the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} is monotone with respect to each of its arguments. As a direct corollary, it is total variation diminishing (TVD), and we obtain the following lemma.
\begin{lemm}\label{lem:tv_estimate_ac}
Under the same conditions as in Lemma~\ref{lem:lip_estimate_1}, the numerical solution $\{\rho_j^n\}_{j\in\mathbb{Z}}^{n\geq0}$ produced by the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} satisfies:
\begin{align}
\sum_{j\in\mathbb{Z}}|r_j^n|\leq& \sum_{j\in\mathbb{Z}}|r_j^0|\leq \mathrm{TV}(\rho_0),\quad n\geq0;\label{eq:tv_estimate_ac}\\
\sum_{j\in\mathbb{Z}}|\rho_j^{n+1}-\rho_j^n|\leq& \lambda \norm{\nabla g}_\infty \sum_{j\in\mathbb{Z}}|r_j^n|\leq \mathrm{TV}(\rho_0),\quad n\geq0.\label{eq:tv_estimate_2_ac}
\end{align}
\end{lemm}
The proof of this lemma is similar to the one given in \cite{leveque2002finite} for monotone schemes for scalar conservation laws. The total variation estimate \eqref{eq:numerical_tvd} follows immediately from the above lemma.
\subsection{Convergence}\label{sec:convergence}
In this subsection, we give the proofs of Theorem~\ref{thm:ac} and Theorem~\ref{thm:numerical_convergence}.
We recall that the numerical solution is defined as:
\begin{align}\label{eq:sol_piece_const}
\rho^{\delta,h}(t,x)=\sum_{j\in\mathbb{Z}}\sum_{n=0}^\infty\rho_j^n\mathbf{1}_{\mathcal{C}_j\times\mathcal{T}^n}(t,x),
\end{align}
where $\mathcal{C}_j=(x_{j-1/2},x_{j+1/2}) \ \forall j\in\mathbb{Z}$ and $\mathcal{T}^n=(t^n,t^{n+1})\ \forall n\geq0$.
\begin{proof}[Proof of Theorem~\ref{thm:ac}]
Let us consider the family of numerical solutions $
\{ \rho^{\delta,h}\}_{0 < \delta \leq \delta_0, 0 < h < 1}$,
where $\delta_0$ is as in \eqref{eq:delta_condition}.
Theorem~\ref{thm:a_priori_estimates} gives the a priori $\mathbf{L}^\infty$ and total variation estimates on $\rho^{\delta,h}$, hence the family of numerical solutions is uniformly bounded in $\mathbf{BV}_{\mathrm{loc}}([0,+\infty) \times \mathbb{R})$. It is therefore precompact in the $\mathbf{L}^1_{\mathrm{loc}}$ norm (see \cite{evans2018measure}), and there exists a sequence
$\{\rho^{\delta_l, h_l}\}$ converging in $\mathbf{L}^1_{\mathrm{loc}}([0,+\infty)\times\mathbb{R})$ to a limit function $\rho^{*}$ as $\delta_l\to0$ and $h_l\to0$ simultaneously. By the uniqueness of the entropy solution, to show the convergence of $\rho^{\delta,h}$ when $\delta\to0$ and $h\to0$ along an arbitrary path, we only need to show that $\rho^{*}$ satisfies both the weak form \eqref{eq:local_sol} and the entropy condition \eqref{eq:oleinik}.
For any test function $\phi\in\mathbf{C}^1_{\mathrm{c}}\left([0,+\infty)\times\mathbb{R}\right)$, we denote $\phi_j^n=\phi(t^n,x_j)$ for all $j\in\mathbb{Z}$ and $n\geq0$. Multiplying the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} by $\phi_j^nh$, summing over all $j\in\mathbb{Z}$ and $n\geq0$, and applying summation by parts, we obtain:
\begin{align}
h\tau \sum_{n\geq1} \sum_{j\in\mathbb{Z}} \frac{\phi_j^n-\phi_j^{n-1}}{\tau} \rho_j^n + h\tau \sum_{n\geq0} \sum_{j\in\mathbb{Z}} \frac{\phi_{j+1}^n-\phi_j^n}{h} g(\rho_j^n,\rho_{j+1}^n,q_j^n,q_{j+1}^n) + h \sum_{j\in\mathbb{Z}} \phi_j^0\rho_j^0 = 0. \label{eq:dis_weak_form}
\end{align}
When $h\to0$, given the assumptions on $\phi$, it is straightforward to show that:
\begin{align}
h\sum_{j\in\mathbb{Z}}\phi_j^0\rho_j^0\to&\int_{\mathbb{R}}\rho_0(x)\phi(0,x)\,dx,\label{eq:conv_1}\\
h\tau\sum_{n\geq1}\sum_{j\in\mathbb{Z}}\frac{\phi_j^n-\phi_j^{n-1}}{\tau}\rho_j^n\to&\int_0^\infty\int_{\mathbb{R}}\rho^*(t,x)\partial_t\phi(t,x)\,dxdt.\label{eq:conv_2}
\end{align}
We need to show:
\begin{align*}
h\tau \sum_{n\geq0} \sum_{j\in\mathbb{Z}} \frac{\phi_{j+1}^n-\phi_j^n}{h} g(\rho_j^n,\rho_{j+1}^n,q_j^n,q_{j+1}^n) \to \int_0^\infty\int_{\mathbb{R}} \rho^\ast(t,x)v\left(\rho^\ast(t,x)\right)\partial_x\phi(t,x) \,dxdt.
\end{align*}
Using the total variation estimate \eqref{eq:tv_estimate_ac}, we have
\begin{align}
&\sum_{j\in\mathbb{Z}} |g(\rho_j^n,\rho_{j+1}^n,q_j^n,q_{j+1}^n) - \rho_j^n v(q_j^n)|
= \sum_{j\in\mathbb{Z}} |g(\rho_j^n,\rho_{j+1}^n,q_j^n,q_{j+1}^n) - g(\rho_j^n,\rho_j^n,q_j^n,q_j^n)| \notag\\
& \qquad \leq \norm{\nabla g}_\infty \sum_{j\in\mathbb{Z}} |\rho_{j+1}^n-\rho_j^n|+|q_{j+1}^n-q_j^n|
\leq 2\norm{\nabla g}_\infty \mathrm{TV}(\rho_0), \label{eq:conv3}
\end{align}
and
\begin{align*}
& \sum_{j\in\mathbb{Z}} |\rho_j^n v(q_j^n) - \rho_j^n v(\rho_j^n)|
\leq \sum_{j\in\mathbb{Z}} |q_j^n-\rho_j^n|
\leq \sum_{k=0}^{m-1} w_k \sum_{j\in\mathbb{Z}} |\rho_{j+k}^n-\rho_j^n| \\
&\qquad \leq \left( \sum_{k=1}^{m-1} kw_k \right) \sum_{j\in\mathbb{Z}} |\rho_{j+1}^n-\rho_j^n|
\leq \frac{mw(0)}{2} \mathrm{TV}(\rho_0).
\end{align*}
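Here the passage from increments over distance $k$ to unit increments follows from the triangle inequality:
\begin{align*}
\sum_{j\in\mathbb{Z}} |\rho_{j+k}^n-\rho_j^n|
\leq \sum_{j\in\mathbb{Z}} \sum_{i=0}^{k-1} |\rho_{j+i+1}^n-\rho_{j+i}^n|
= k \sum_{j\in\mathbb{Z}} |\rho_{j+1}^n-\rho_j^n|.
\end{align*}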
Therefore the difference between the two sums satisfies
\begin{align*}
& \left|
h\tau \sum_{n\geq0} \sum_{j\in\mathbb{Z}} \frac{\phi_{j+1}^n-\phi_j^n}{h} g(\rho_j^n,\rho_{j+1}^n,q_j^n,q_{j+1}^n)
-
h\tau \sum_{n\geq0} \sum_{j\in\mathbb{Z}} \frac{\phi_{j+1}^n-\phi_j^n}{h} \rho_j^n v(\rho_j^n) \right|\\
&\quad \leq
C(\phi) \left( 2\norm{\nabla g}_\infty \mathrm{TV}(\rho_0) h + \frac{w(0)}{2} \mathrm{TV}(\rho_0) \delta \right) \to 0,
\end{align*}
as $\delta\to0,h\to0$, where $C(\phi)>0$ is a constant only depending on $\phi$.
Then we can pass the limit
\begin{align*}
h\tau \sum_{n\geq0} \sum_{j\in\mathbb{Z}} \frac{\phi_{j+1}^n-\phi_j^n}{h} \rho_j^n v(\rho_j^n)
\to
\int_0^\infty\int_{\mathbb{R}} \rho^\ast(t,x)v\left(\rho^\ast(t,x)\right)\partial_x\phi(t,x) \,dxdt
\end{align*}
as $h\to0$ by using the $\mathbf{L}^1_{\mathrm{loc}}$ convergence from $\rho^{\delta,h}$ to $\rho^*$, and the a priori $\mathbf{L}^\infty$ bound as given in \eqref{eq:numerical_maxm_principle}, and deduce that $\rho^*$ satisfies \eqref{eq:local_sol}.
For the entropy condition, let us consider numerical solutions $\tilde{\rho}^{\delta,h}$ that are constructed by linear interpolation rather than the piecewise constant reconstruction as defined in \eqref{eq:sol_piece_const}. Then by Lemma~\ref{lem:lip_estimate_2}, $\tilde{\rho}^{\delta,h}$ satisfies the one-sided Lipschitz estimate:
\begin{align}\label{eq:dis_olenik}
-\frac{\tilde{\rho}^{\delta,h}(t,y)-\tilde{\rho}^{\delta,h}(t,x)}{y-x}\leq \frac{1}{2t} \quad \forall x\neq y\in\mathbb{R},\,t>0.
\end{align}
Noting that $\tilde{\rho}^{\delta,h}$ converges to the same limit function $\rho^{*}$ pointwise, we can show that $\rho^{*}$ satisfies Oleinik's entropy condition \eqref{eq:oleinik} by passing to the limit in \eqref{eq:dis_olenik}.
\end{proof}
To prove Theorem~\ref{thm:numerical_convergence}, we first prove the following lemma.
\begin{lemm}\label{lemm:numerical_convergence}
Suppose that Assumptions~\ref{assm:1}-\ref{assm:5} hold
and that $\delta$ satisfies the condition \eqref{eq:delta_condition}.
Then, as $\delta\to\delta_*>0$ and $h\to0$,
the numerical solution $\rho^{\delta,h}$ produced by the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} converges in $\mathbf{L}^1_{\mathrm{loc}}([0,+\infty)\times\mathbb{R})$ to the weak solution $\rho^{\delta_*}$ of the nonlocal LWR model \eqref{eq:nonlocal_lwr} as defined in Proposition~\ref{prop:nonlocal_sol}.
\end{lemm}
\begin{proof}
As in the proof of Theorem~\ref{thm:ac}, when taking the limit $\delta\to\delta_*$ and $h\to0$, there exists a sequence $\{\rho^{\delta_l,h_l}\}$ converging to a limit function $\rho^{**}$ in the $\mathbf{L}^1_{\mathrm{loc}}$ norm with $\delta_l\to\delta_*$ and $h_l\to0$. Since Proposition~\ref{prop:nonlocal_sol} already gives the uniqueness of the solution, we only need to show that the limit function $\rho^{**}$ satisfies the weak form \eqref{eq:nonlocal_sol}.
With similar calculations to those in the proof of Theorem~\ref{thm:ac}, we have \eqref{eq:dis_weak_form}-\eqref{eq:conv_2} for $\rho^{**}$. But here we only use \eqref{eq:conv3} and the convergence:
\begin{align}\label{eq:conv4}
\sum_{j\in\mathbb{Z}}\sum_{n=0}^\infty q_j^n\mathbf{1}_{\mathcal{C}_j\times\mathcal{T}^n}(t,x) \to \int_0^\delta\rho^{\delta,h}(t,x+s)w_\delta(s)\,ds,
\end{align}
in the $\mathbf{L}^1_{\mathrm{loc}}$ norm. The proof of \eqref{eq:conv4} is similar to that given in \cite{Blandin2016}, so we omit the details here.
Then we have
\begin{align*}
& h\tau\sum_{n\geq0}\sum_{j\in\mathbb{Z}}\frac{\phi_{j+1}^n-\phi_j^n}{h}g(\rho_j^n,\rho_{j+1}^n,q_j^n,q_{j+1}^n)\\
&\qquad \to\int_0^\infty\int_{\mathbb{R}}\rho^{**}(t,x) v\left(\int_0^{\delta_*}\rho^{**}(t,x+s)w_{\delta_*}(s)\,ds\right)\partial_x\phi(t,x)\,dxdt,
\end{align*}
which implies that $\rho^{**}$ satisfies \eqref{eq:nonlocal_sol}.
\end{proof}
We now give the proof of Theorem~\ref{thm:numerical_convergence}.
\begin{proof}[Proof of Theorem~\ref{thm:numerical_convergence}]
For any bounded set $U\subset[0,+\infty)\times\mathbb{R}$, suppose that \eqref{eq:uniform_numerical_convergence} does not hold. Then there exist $\varepsilon>0$ and sequences $\delta_l\in(0,\delta_0]$ and $h_l\to0$ as $l\to\infty$ such that
\begin{align*}
\norm{\rho^{\delta_l,h_l} - \rho^{\delta_l}}_{\mathbf{L}^1(U)} \geq \varepsilon.
\end{align*}
By passing to a subsequence if necessary, we may assume $\delta_l\to \delta_*\in[0,\delta_0]$.
If $\delta_l\to0$, both $\rho^{\delta_l,h_l}$ and $\rho^{\delta_l}$ converge to $\rho^0$; if $\delta_l\to \delta_*>0$, then by Lemma~\ref{lemm:numerical_convergence}, $\rho^{\delta_l,h_l}\to\rho^{\delta_*}$, and by applying the same arguments to the continuum solutions, $\rho^{\delta_l}\to\rho^{\delta_*}$. In either case we reach a contradiction.
\end{proof}
\subsection{Local limit of numerical discretizations}\label{sec:local_limit_num}
We now present the proof of Theorem~\ref{thm:ap}.
\begin{proof}[Proof of Theorem~\ref{thm:ap}]
For any bounded set $U\subset[0,+\infty)\times\mathbb{R}$, suppose that \eqref{eq:ap_conv} does not hold. Then there exist $\varepsilon>0$ and sequences $h_l\in(0,h_0]$ and $\delta_l\to0$ as $l\to\infty$ such that
\begin{align*}
\norm{\rho^{\delta_l,h_l} - \rho^{0,h_l}}_{\mathbf{L}^1(U)} \geq \varepsilon.
\end{align*}
By passing to a subsequence if necessary, we may assume $h_l\to h_*\in[0,h_0]$.
If $h_l\to0$, both $\rho^{\delta_l,h_l}$ and $\rho^{0,h_l}$ converge to $\rho^0$; if $h_l\to h_*>0$, then by Proposition~\ref{prop:scheme_local_limit} we have $\rho^{\delta_l,h_l}=\rho^{0,h_l}$ for $l$ large enough. In either case we reach a contradiction.
\end{proof}
\section{Numerical experiments}\label{sec:numerical_experiments}
In this section, we test the presented numerical scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} in several numerical experiments to demonstrate the established results.
In the implementation of the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2},
the numerical flux function $g$ is chosen from the ones given in \eqref{eq:g_func_LxF}-\eqref{eq:g_func_LxF_new},
and the numerical quadrature weights $\{w_k\}_{0\leq k\leq m-1}$ are chosen from the ones given in \eqref{eq:quad_weight_left}-\eqref{eq:quad_weight_left_norm}-\eqref{eq:quad_weight_exact}.
We fix the CFL ratio $\lambda=0.25$. For the Lax-Friedrichs type numerical flux functions \eqref{eq:g_func_LxF} and \eqref{eq:g_func_LxF_new}, we fix the numerical viscosity constant $\alpha=2$.
In all but the final experiment, we use the linear decreasing kernel $w_\delta(s) = \frac{2}{\delta^2}(\delta - s)$.
Assuming $\delta=mh$ where $m$ is a positive integer, the numerical quadrature weights for the linear decreasing kernel computed from \eqref{eq:quad_weight_left}-\eqref{eq:quad_weight_left_norm}-\eqref{eq:quad_weight_exact} are given respectively by
\begin{itemize}
\item (Left endpoint) $w_k = \frac{2(m-k)}{m^2}$ for $0 \leq k \leq m-1$, with $\sum_{k=0}^{m-1} w_k = 1 + \frac1m$;
\item (Normalized left endpoint) $w_k = \frac{2(m-k)}{m(m+1)}$ for $0 \leq k \leq m-1$, with $\sum_{k=0}^{m-1} w_k = 1$;
\item (Exact quadrature) $w_k = \frac{2(m-k)-1}{m^2}$ for $0 \leq k \leq m-1$, with $\sum_{k=0}^{m-1} w_k = 1$.
\end{itemize}
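As a quick sanity check (ours, not part of the paper), the stated sums of the three weight families for the linear decreasing kernel can be verified numerically:

```python
# Verify the closed-form quadrature weights for w_delta(s) = (2/delta^2)(delta - s)
# with delta = m*h. Purely illustrative, not the paper's code.

def left_endpoint(m):
    # w_k = h * w_delta(k h) = 2(m - k)/m^2
    return [2 * (m - k) / m**2 for k in range(m)]

def normalized_left_endpoint(m):
    # left endpoint weights rescaled so they sum to 1
    w = left_endpoint(m)
    s = sum(w)
    return [wk / s for wk in w]

def exact(m):
    # w_k = integral of w_delta over [k h, (k+1) h] = (2(m - k) - 1)/m^2
    return [(2 * (m - k) - 1) / m**2 for k in range(m)]

m = 5
assert abs(sum(left_endpoint(m)) - (1 + 1 / m)) < 1e-12
assert abs(sum(normalized_left_endpoint(m)) - 1.0) < 1e-12
assert abs(sum(exact(m)) - 1.0) < 1e-12
# the normalized weights agree with the closed form 2(m - k)/(m(m + 1))
for k in range(m):
    assert abs(normalized_left_endpoint(m)[k] - 2 * (m - k) / (m * (m + 1))) < 1e-12
```

In particular only the latter two families satisfy the normalization condition exactly, which is the property the experiments below single out.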
The velocity function is chosen to be $v(\rho)=1-\rho$. Two sets of initial data $\rho_0$ are used. One is a bell-shaped curve:
\begin{align}\label{eq:ini_bellshape}
\rho_0(x) = 0.4 + 0.4 \exp\left(-100(x-0.5)^2\right), \quad x\in\mathbb{R},
\end{align}
while the other represents the Riemann data:
\begin{align}\label{eq:ini_riemann}
\rho_0(x) = \begin{dcases}
\rho_L, \quad x < 0.5 \\
\rho_R, \quad x > 0.5
\end{dcases} , \quad x\in\mathbb{R},
\end{align}
where we take $\rho_L=0.1$ and $\rho_R=0.6$ in all the experiments.
The numerical solutions are presented on the spatial domain $x\in[0,1]$ and over the time horizon $t\in[0,1]$, although
the numerical computations are performed on a larger spatial domain with constant extension of the data on both sides.
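To make the setup concrete, the following Python sketch performs one update of a scheme of this type. It is illustrative only: the precise form of the nonlocal Lax--Friedrichs flux $g$ is our assumption, modeled on the local Lax--Friedrichs flux, and we use periodic boundary conditions instead of the constant extension.

```python
import math

# One conservative update of a nonlocal Lax-Friedrichs-type scheme (sketch).
# ASSUMED flux form (not quoted from the paper):
#   g(rL, rR, qL, qR) = 0.5*(rL*v(qL) + rR*v(qR)) + 0.5*alpha*(rL - rR),
# with the nonlocal average q_j = sum_k w_k * rho_{j+k}.

def v(r):
    return 1.0 - r  # the linear velocity used throughout the experiments

def step(rho, w, lam=0.25, alpha=2.0):
    n = len(rho)
    # discrete nonlocal averages q_j (periodic indexing)
    q = [sum(wk * rho[(j + k) % n] for k, wk in enumerate(w)) for j in range(n)]
    # assumed Lax-Friedrichs-type numerical fluxes at the cell interfaces
    g = [0.5 * (rho[j] * v(q[j]) + rho[(j + 1) % n] * v(q[(j + 1) % n]))
         + 0.5 * alpha * (rho[j] - rho[(j + 1) % n]) for j in range(n)]
    # conservative update: rho_j^{n+1} = rho_j^n - lambda*(g_j - g_{j-1})
    return [rho[j] - lam * (g[j] - g[j - 1]) for j in range(n)]

m, n = 5, 200
w = [(2 * (m - k) - 1) / m**2 for k in range(m)]  # exact quadrature weights
h = 1.0 / n
rho = [0.4 + 0.4 * math.exp(-100 * (j * h - 0.5) ** 2) for j in range(n)]
mass0 = h * sum(rho)
for _ in range(50):
    rho = step(rho, w)
assert abs(h * sum(rho) - mass0) < 1e-12   # the update conserves mass
assert 0.0 <= min(rho) and max(rho) <= 1.0  # values stay in [0, 1]
```

The conservation check holds by the telescoping of the flux differences; it does not depend on the assumed flux form, only on the conservative update.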
In the first three experiments, we examine the asymptotic compatibility and the uniform numerical convergence of the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} with different numerical quadrature weights.
In the last experiment, we test the scheme with different choices of the nonlocal kernel.
{\bf Experiment 1.}
We first present numerical solutions $\rho^{\delta,h}$ computed with the Lax-Friedrichs numerical flux function \eqref{eq:g_func_LxF} and different numerical quadrature weights.
For each initial data and each set of numerical quadrature weights, we compute the numerical solution $\rho^{\delta,h}$ with $\delta=0.005, h=0.001$ and plot its snapshots at selected times $t=0,0.5,1$.
Moreover, the snapshot of the numerical solution $\rho^{\delta,h}$ at time $t=1$ is compared with that of the solution $\rho^0$ of the local model \eqref{eq:lwr}.
In this experiment, the local solution $\rho^0$ is also computed numerically because the analytical solution is not always available. The numerical computation is done on a fine grid with $h=0.0002$ using a Lax-Friedrichs scheme for \eqref{eq:lwr} with the numerical flux function
\begin{align}\label{eq:local_LxF}
g_{\mathrm{local}}(\rho_L,\rho_R) = \frac12 (\rho_L v(\rho_L) + \rho_R v(\rho_R)) + \frac\alpha2 (\rho_L - \rho_R),
\end{align}
which is the local counterpart of \eqref{eq:g_func_LxF}.
The snapshot of the local solution $\rho^0$ at time $t=1$ is plotted with a dashed line.
See Figure~\ref{fig:exp_1}.
\begin{figure}[htbp!]
\centering
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/exp1_bellshape_left.png}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/exp1_bellshape_left_norm.png}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/exp1_bellshape_exact.png}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/exp1_riemann_left.png}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/exp1_riemann_left_norm.png}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/exp1_riemann_exact.png}
\end{subfigure}
\caption{Experiment 1: Snapshots of computed solutions for the bell-shaped initial data (\emph{top}) and the Riemann initial data (\emph{bottom}) corresponding to the left endpoint quadrature weights (\emph{left}), the normalized left endpoint quadrature weights (\emph{middle}), and the exact quadrature weights (\emph{right}).}
\label{fig:exp_1}
\end{figure}
For the bell-shaped initial data, we observe from the top row of Figure~\ref{fig:exp_1} that the numerical solutions of the nonlocal model preserve the smoothness of the initial data while the local solution develops a shock. At time $t=1$, the numerical solutions of the nonlocal model computed with the normalized left endpoint quadrature weights and the exact quadrature weights are close to the local solution, especially in the region away from the shock of the local solution.
This means that the numerical solution $\rho^{\delta,h}$ with both $\delta,h$ small provides a good approximation to the local solution $\rho^0$, which validates the conclusion of Theorem~\ref{thm:ac}.
We also observe from the top left figure of Figure~\ref{fig:exp_1} that the numerical solution of the nonlocal model computed with the left endpoint quadrature weights is very different from the local solution at time $t=1$. Although the numerical solution of the nonlocal model still approximates a shock profile at time $t=1$, the shock position is incorrect.
The comparison between the three sets of numerical quadrature weights emphasizes the significance of the normalization condition \eqref{eq:normalization_condition} for numerical quadrature weights.
For the Riemann initial data, the local solution $\rho^0$ is a traveling wave moving at the constant speed $1-(\rho_L+\rho_R)=0.3$.
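Indeed, since the local flux is $f(\rho)=\rho v(\rho)=\rho(1-\rho)$, the Rankine--Hugoniot condition gives the speed
\begin{align*}
s=\frac{f(\rho_R)-f(\rho_L)}{\rho_R-\rho_L}
=\frac{(\rho_R-\rho_L)-(\rho_R^2-\rho_L^2)}{\rho_R-\rho_L}
=1-(\rho_L+\rho_R)=1-0.7=0.3.
\end{align*}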
We observe from the bottom row of Figure~\ref{fig:exp_1} that the numerical solutions of the nonlocal model computed with the normalized left endpoint quadrature weights and the exact quadrature weights are close to the local solution at time $t=1$.
Meanwhile, in contrast to the discontinuity of the local solution, the nonlocal solutions are smoothed out by the nonlocal effects.
We also observe from the bottom left figure of Figure~\ref{fig:exp_1} that the numerical solution of the nonlocal model computed with the left endpoint quadrature weights is very different from the local solution at time $t=1$. While the former still approximates a Riemann data at time $t=1$, the position of the jump from $\rho_L=0.1$ to $\rho_R=0.6$ is incorrect.
The comparison again emphasizes the significance of the normalization condition \eqref{eq:normalization_condition} for numerical quadrature weights.
{\bf Experiment 2.} We next check the asymptotic compatibility of the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} by plotting $\norm{\rho^{\delta,h} - \rho^0}_{\mathbf{L}^1}$ with $\delta \propto h \to 0$.
We take $\delta=mh$ where $m=1,2,5$ and $h=0.01\times 2^{-l}$ for $l=0,1,2,3$, and compute numerical solutions $\rho^{\delta,h}$ using the Lax-Friedrichs numerical flux function \eqref{eq:g_func_LxF} and different numerical quadrature weights.
The local solution $\rho^0$ is numerically solved on a fine grid with $h=0.01\times 2^{-5}$ using a Lax-Friedrichs scheme for \eqref{eq:lwr} with the numerical flux function \eqref{eq:local_LxF}.
For each initial data and each set of numerical quadrature weights, we compute the $\mathbf{L}^1$ error $\norm{\rho^{\delta,h} - \rho^0}_{\mathbf{L}^1}$ with an interpolation of $\rho^{\delta,h}$ onto the fine grid on which $\rho^0$ is computed, and plot $\norm{\rho^{\delta,h} - \rho^0}_{\mathbf{L}^1}$ against $h^{-1}$ in the log-log scale for $\delta=h$, $\delta=2h$, and $\delta=5h$ in different colors.
We also plot a dashed line with the slope $-1$ to represent the linear convergence rate.
See the results in Figure~\ref{fig:exp_2}.
\begin{figure}[htbp]
\centering
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/exp2_bellshape_left.png}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/exp2_bellshape_left_norm.png}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/exp2_bellshape_exact.png}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/exp2_riemann_left.png}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/exp2_riemann_left_norm.png}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/exp2_riemann_exact.png}
\end{subfigure}
\caption{Experiment 2: Convergence from $\rho^{\delta,h}$ to $\rho^0$ for the bell-shaped initial data (\emph{top}) and the Riemann initial data (\emph{bottom}) corresponding to the left endpoint quadrature weights (\emph{left}), the normalized left endpoint quadrature weights (\emph{middle}), and the exact quadrature weights (\emph{right}).}
\label{fig:exp_2}
\end{figure}
We observe from Figure~\ref{fig:exp_2} that for the normalized left endpoint quadrature weights and the exact quadrature weights, the error $\norm{\rho^{\delta,h} - \rho^0}_{\mathbf{L}^1}$ has a linear decay rate with respect to $h$ for both initial data and $\delta=mh$ for $m=1,2,5$. This means that $\rho^{\delta,h}$ converges to $\rho^0$ along the limiting paths $\delta=mh\to0$ for $m=1,2,5$, which validates the conclusion of Theorem~\ref{thm:ac}.
Moreover, the numerical results show that the convergence is of first order with the particular choices of the initial data and the limiting paths.
In contrast, for the left endpoint numerical quadrature weights, the error $\norm{\rho^{\delta,h} - \rho^0}_{\mathbf{L}^1}$ stagnates on the scale of $10^{-1}$ for both initial data and $\delta=mh$ for $m=1,2,5$.
This is due to the convergence of $\rho^{\delta,h}$ to an incorrect solution when $\delta=mh\to 0$, further highlighting the importance of enforcing asymptotic compatibility via
the normalization condition \eqref{eq:normalization_condition}.
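The first-order rates read off from the log-log plots can also be estimated numerically from successive errors; the following generic sketch (our own, not the paper's code) illustrates the computation on synthetic first-order data:

```python
import math

# Estimate observed convergence orders from L1 errors on successively
# halved meshes, as one would read off a log-log convergence plot.

def observed_orders(errors):
    # order between consecutive refinement levels: log2(e_l / e_{l+1})
    return [math.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]

# synthetic first-order data e(h) = C*h on h = 0.01 * 2^{-l}, l = 0,...,3
errors = [0.02 * 2.0 ** (-l) for l in range(4)]
assert all(abs(p - 1.0) < 1e-12 for p in observed_orders(errors))
```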
{\bf Experiment 3.} We now check the uniform convergence of the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} with respect to $\delta$ by plotting $\norm{\rho^{\delta,h} - \rho^\delta}_{\mathbf{L}^1}$ with $h \to 0$ for different choices of $\delta$.
We take $\delta=0.01\times 2^{-l}$ for $l=0,1,2$ and $h=0.01\times 2^{-l}$ for $l=0,1,2,3$, and compute numerical solutions $\rho^{\delta,h}$ using the Lax-Friedrichs numerical flux function \eqref{eq:g_func_LxF} and different numerical quadrature weights.
The reference solution $\rho^\delta$ is numerically solved on a fine grid with $h=0.01\times 2^{-5}$ using the same scheme.
For each initial data and each set of numerical quadrature weights, we compute the $\mathbf{L}^1$ error $\norm{\rho^{\delta,h} - \rho^\delta}_{\mathbf{L}^1}$ with an interpolation of $\rho^{\delta,h}$ onto the fine grid, and plot $\norm{\rho^{\delta,h} - \rho^\delta}_{\mathbf{L}^1}$ with respect to $h^{-1}$ in the log-log scale for $\delta=0.01$, $\delta=0.005$, and $\delta=0.0025$ in different colors.
A dashed line with the slope $-1$ is again provided.
See the results in Figure~\ref{fig:exp_3}.
\begin{figure}[htbp]
\centering
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/exp3_bellshape_left.png}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/exp3_bellshape_left_norm.png}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/exp3_bellshape_exact.png}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/exp3_riemann_left.png}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/exp3_riemann_left_norm.png}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/exp3_riemann_exact.png}
\end{subfigure}
\caption{Experiment 3: Convergence from $\rho^{\delta,h}$ to $\rho^\delta$ for the bell-shaped initial data (\emph{top}) and the Riemann initial data (\emph{bottom}) corresponding to the left endpoint quadrature weights (\emph{left}), the normalized left endpoint quadrature weights (\emph{middle}), and the exact quadrature weights (\emph{right}).}
\label{fig:exp_3}
\end{figure}
From Figure~\ref{fig:exp_3}, we see that for the normalized left endpoint quadrature weights and the exact quadrature weights, the error $\norm{\rho^{\delta,h} - \rho^\delta}_{\mathbf{L}^1}$ has a linear decay rate with respect to $h$ for both initial data and all choices of $\delta$.
Moreover, the plots of $\norm{\rho^{\delta,h} - \rho^\delta}_{\mathbf{L}^1}$ with respect to $h^{-1}$ have very little change for $\delta=0.01$, $\delta=0.005$, and $\delta=0.0025$. This means that $\rho^{\delta,h}$ converges to $\rho^\delta$ as $h\to0$ uniformly in $\delta$, which validates the conclusion of Theorem~\ref{thm:numerical_convergence}.
In addition, the numerical results show that the convergence is of first order with the particular choices of the initial data and the parameter $\delta$.
In contrast, for the left endpoint numerical quadrature weights, the error $\norm{\rho^{\delta,h} - \rho^\delta}_{\mathbf{L}^1}$ stagnates on the scale of $10^{-1}$ when $h\geq\delta$ for both initial data and all choices of $\delta$.
This may be because $\rho^\delta$ approximates $\rho^0$ well when $\delta$ is small, while $\rho^{\delta,h}=\rho^{0,h}$ when $h\geq\delta$ and $\rho^{0,h}$ is not a consistent numerical approximation of $\rho^0$.
We also observe that, in each case, the error decays when $h<\delta$. However, for any fixed mesh size $h$, the error increases as $\delta$ decreases from $0.01$ to $0.0025$. One can infer that the convergence of $\rho^{\delta,h}$ to $\rho^\delta$ as $h\to0$ becomes slower and slower as $\delta\to0$, so the uniform convergence cannot hold; this again shows the importance of
the normalization condition \eqref{eq:normalization_condition} for the uniform convergence of the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2}.
{\bf Experiment 4.} We finally test the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} with different choices of the nonlocal kernel.
Besides the linear decreasing kernel considered before, we also use the exponential kernel $w_\delta(s)=\frac{e^{-s/\delta}}{\delta(1-e^{-1})}$ and the constant kernel $w_\delta(s) = \frac1\delta$, and adopt the exact quadrature weights \eqref{eq:quad_weight_exact}.
We take $\delta=mh$ where $m=1,2,5$ and $h=0.01\times 2^{-l}$ for $l=0,1,2,3$, and compute numerical solutions $\rho^{\delta,h}$ using the Lax-Friedrichs numerical flux function \eqref{eq:g_func_LxF}.
The local solution $\rho^0$ is numerically solved on a fine grid with $h=0.01\times 2^{-5}$ using a Lax-Friedrichs scheme for \eqref{eq:lwr} with the numerical flux function \eqref{eq:local_LxF}.
For each initial data and each nonlocal kernel, we compute the $\mathbf{L}^1$ error $\norm{\rho^{\delta,h} - \rho^0}_{\mathbf{L}^1}$ with an interpolation of $\rho^{\delta,h}$ onto the fine grid, and plot it against $h^{-1}$ in the log-log scale for $\delta=h$, $\delta=2h$, and $\delta=5h$ in different colors.
A dashed line with the slope $-1$ is again provided.
See the results in Figure~\ref{fig:exp_4}.
\begin{figure}[htbp]
\centering
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/exp4_bellshape_linear_ac.png}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/exp4_bellshape_exp_ac.png}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/exp4_bellshape_const_ac.png}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/exp4_riemann_linear_ac.png}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/exp4_riemann_exp_ac.png}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/exp4_riemann_const_ac.png}
\end{subfigure}
\caption{Experiment 4: Convergence from $\rho^{\delta,h}$ to $\rho^0$ for the bell-shaped initial data (\emph{top}) and the Riemann initial data (\emph{bottom}) corresponding to the linear decreasing kernel (\emph{left}), the exponential kernel (\emph{middle}), and the constant kernel (\emph{right}).}
\label{fig:exp_4}
\end{figure}
We observe from Figure~\ref{fig:exp_4} that for all three nonlocal kernels, the error $\norm{\rho^{\delta,h} - \rho^0}_{\mathbf{L}^1}$ has a linear decay rate with respect to $h$ for both initial data and in all cases $\delta=mh$ with $m=1,2,5$.
Moreover, the plots for the three nonlocal kernels have little difference.
For the linear decreasing kernel and the exponential kernel, the convergence result validates the conclusion of Theorem~\ref{thm:ac}.
The constant kernel, however, does not satisfy the condition that $w_\delta$ is strictly decreasing, and \eqref{eq:weight_monotone} fails because $w_{k-1}-w_k=0$ for all $k=1,\cdots,m-1$.
In this case, the analysis used in the proof of Theorem~\ref{thm:a_priori_estimates} cannot provide the necessary estimates on the numerical solutions, but the numerical results suggest that the conclusion of Theorem~\ref{thm:ac} may still hold.
\section{Conclusions and future work}
In this work, finite volume numerical schemes \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} are studied for solving the nonlocal LWR model \eqref{eq:nonlocal_lwr} with a parameter $\delta$ that measures the range of information exchange.
An important observation, supported by both the numerical analysis and the computational experiments, is that numerical quadrature weights that provide consistent approximations for a given $\delta>0$ may fail to yield consistency between the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2} and the local limit \eqref{eq:lwr} of the nonlocal model \eqref{eq:nonlocal_lwr} as $\delta\to 0$ and $h\to 0$. For properly selected numerical quadrature weights, we are able to prove, under reasonable assumptions, that the numerical solutions of the nonlocal model converge to the continuum solution of the nonlocal model with a fixed $\delta>0$ as $h\to 0$, while they converge to the entropy solution of the local continuum model \eqref{eq:lwr} as $\delta\to0$ and $h\to 0$ simultaneously.
That is, such schemes are asymptotically compatible with the local limit.
We also demonstrate that these asymptotically compatible schemes offer robust numerical simulations under changes in $\delta$, owing to the uniform convergence for values of $\delta$ within a proper range.
Our established results are based on the a priori estimates on the numerical solutions as given in Theorem~\ref{thm:a_priori_estimates}, subject to assumptions alluded to above.
As shown in the computational experiments, the normalization condition for numerical quadrature weights is essential to the asymptotic compatibility of the scheme \eqref{eq:nonlocal_lwr_num}-\eqref{eq:nonlocal_lwr_num_2}.
The experiments also suggest that the results of this work may be extended to the cases with more general nonlocal kernels and numerical flux functions.
It might also be possible to establish the results for more general velocity functions $v=v(\rho)$ than the linear one $v(\rho)=1-\rho$ used here, and for more general initial data that may have negative jumps.
Furthermore, with the a priori bounds on the numerical solutions and known estimates on the exact solutions, it is possible to derive a priori error estimates subject to suitable conditions on the regularities of continuum solutions.
These questions along with further generalizations and applications of nonlocal traffic flow models will be subjects of future research.
\bibliographystyle{siam}
\section{Introduction}
In this paper, we introduce a class of infinite simple groups based on the idea of applying ``(reversible) gates'', namely maps that modify only a bounded set of coordinates, simultaneously at various positions of an ambient group $G$. More precisely, we apply them on translates of finite-index subgroups, or lattices, sparse enough that the applications commute. We call the resulting maps gate lattices, and we call the group they generate $\mathfrak{L}$.
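The idea of commuting applications of a gate along a sparse lattice can be illustrated by a toy computation (purely illustrative; the specific gate, window width, and cyclic group below are our own choices, not from the paper):

```python
# Toy illustration: on binary configurations over Z/NZ, a "gate" permutes
# the contents of a window of width 2, and a "gate lattice" applies it at
# every position of a coset of nZ. When n is large enough that the window
# translates are pairwise disjoint, the applications commute.

def swap_gate(x, i):
    # a reversible gate supported on coordinates {i, i+1}: swap them
    y = list(x)
    y[i % len(x)], y[(i + 1) % len(x)] = y[(i + 1) % len(x)], y[i % len(x)]
    return y

def gate_lattice(x, n, offset):
    # apply the gate at every position of the coset offset + nZ
    for i in range(offset, len(x), n):
        x = swap_gate(x, i)
    return x

x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1]
n = 4  # sparse enough: the windows at offsets 0 and 2 (mod 4) are disjoint
a = gate_lattice(gate_lattice(x, n, 0), n, 2)
b = gate_lattice(gate_lattice(x, n, 2), n, 0)
assert a == b  # the two gate-lattice applications commute
```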
As we show, in the case of the ambient group being $\mathbb{Z}$, this idea is in fact strongly connected to existing ideas in one-dimensional symbolic dynamics, in particular to the simple automorphisms of Nasu \cite{Na88}, and the inert automorphisms (kernel of Krieger's dimension representation \cite{BoLiRu88,BoFi91}). Namely, in the stabilized point of view explored in \cite{HaKrSc21}, these all turn out to be equivalent concepts on mixing SFTs.
The connection between simplicity and inertness is made in \cite{Wa90a} (strengthened in \cite{Bo88}). In our terminology, we can essentially interpret this as saying that an inert automorphism can be split into finitely many applications of gate lattices. A related result is proved for two-dimensional full shifts in \cite{Ka96}, namely it is shown that every automorphism is a product of finitely many block permutations and partial shifts. Block permutations are almost equivalent to our gate lattices, and partial shifts can be thought of as the device for making the automorphism inert (although we are not aware of a definition of inertness in this context). Some other papers exploring the idea of commuting (not necessarily reversible) gates are \cite{Sa22a,ArMoEo22,BrGaTh20}.
The observation that one-dimensional full shifts lead to a simple group of gate lattices is made in \cite{HaKrSc21} (though not in this terminology). There the technical gist is the same as here: any nontrivial normal subgroup of $\mathfrak{L}$ actually contains a nontrivial gate lattice. This is Lemma~5.2 in \cite{HaKrSc21}.
Our proof of the analogous claim can be summarized in one sentence that does not really hide any technicalities: taking the commutator of any nice enough homeomorphism (in particular any element of the stabilized automorphism group) with a gate yields another gate, and performing the same commutator construction on a sufficiently sparse finite-index subgroup gives the corresponding gate lattice. Translating this into a proof takes some work, but most of this work is about setting up the algebra and basic theory of gate lattices.
\subsection{Main statements}
We now give the brief definition of our groups (see Section~\ref{sec:Definitions} for more detailed definitions) and a version of the main theorem statement.
Throughout this paper, $G$ stands for a countable residually finite group. A set $X \subset \Sigma^G$ is a \emph{subshift} if it is closed and shift-invariant ($\Sigma$ a finite discrete alphabet), a \emph{gate} on $X$ is a homeomorphism that only modifies a bounded set of coordinates. A gate is \emph{even} if the permutations on the finite set $N$ it modifies are even in all possible contexts $x \in \Sigma^{G \setminus N}$. Our main object of interest is a group called $\hat{\mathfrak{L}}$ whose generators are homeomorphisms where even gates are applied on the right cosets of a finite index subgroup; we call these generators \emph{even gate lattices}.
A \emph{subshift of finite type} is a subshift (shift-invariant closed set) of the form $\bigcap_{g \in G} gC$ for $C \subset A^G$ clopen, which means the subshift is defined by a finite set of forbidden patterns. We also need to define a gluing property: a subshift has the \emph{eventual filling property} if for any finite set $F$ there exists a larger finite set $N$ such that the patterns occurring in $F$ are independent from those occurring in $G \setminus N$ (no control on the transition pattern in $N \setminus F$ is assumed).
\begin{theorem}
\label{thm:Main}
Let $X$ be a subshift of finite type with the eventual filling property, on a countably infinite residually finite group. Then the group generated by even gate lattices on $X$ is simple.
\end{theorem}
This is proved in Theorem~\ref{thm:MainProof}.
In many cases, the group generated by even gate lattices is actually the same as the one generated by arbitrary gate lattices. In particular this happens whenever
\begin{itemize}
\item $X$ has the property that in every \emph{context} (pattern on a fixed co-finite set) the number of ways to fill the remaining hole is even;
\item $X$ is a full shift $A^G$, and $G$ has \emph{halvable subgroups}, meaning every finite index subgroup of $G$ has a subgroup of even index;
\item $G = \mathbb{Z}$.
\end{itemize}
In particular, full shifts have a simple $\mathfrak{L}$ if their alphabet has even cardinality, or the group has halvable subgroups. All finitely-generated infinite finite-dimensional matrix groups over commutative rings have halvable subgroups by \cite{We87}, and so do (trivially) all infinite residually finite $2$-groups, including the (non-linear) Grigorchuk group \cite{Gr80}.
In the case $G = \mathbb{Z}$, the eventual filling property is equivalent to mixing, and in this case $X$ is up to isomorphism the set of bi-infinite paths in a finite digraph whose associated matrix is primitive. In this case, $\hat{\mathfrak{L}}$ is not only equal to $\mathfrak{L}$, but is equal to the stabilized inert automorphism group of the subshift, as defined in \cite{HaKrSc21}. This is shown in Proposition~\ref{prop:InertIsEgnets}.
\begin{corollary}
\label{cor:Stab}
The stabilized inert automorphism group of a mixing one-dimensional subshift of finite type is simple.
\end{corollary}
This result was proved in \cite{HaKrSc21} for one-dimensional full shifts.
We also show that, in general, $\mathfrak{L}$ is normal as a subgroup of the stabilized automorphism group, so in the case $\hat{\mathfrak{L}} = \mathfrak{L}$, it is a \emph{maximal} simple subgroup of the stabilized automorphism group. This is Theorem~\ref{thm:MainMaximal}.
In particular, if one defines a notion of inertness for stabilized automorphism groups on a group other than $\mathbb{Z}$, then if the group of stabilized inert automorphisms $Q$ contains $\hat{\mathfrak{L}}$ (which we feel it should, as $\hat{\mathfrak{L}}$ is as ``inert'' as a group could possibly get), and we also happen to have $\mathfrak{L} = \hat{\mathfrak{L}}$, then $Q$ is simple if and only if it is precisely $\hat{\mathfrak{L}}$. Thus, even though there is no standard notion of inertness for general groups, Theorem~\ref{thm:Main} can reasonably be considered a natural generalization of the result of \cite{HaKrSc21} even beyond the one-dimensional case.
Our precise statement of Theorem~\ref{thm:Main} is Theorem~\ref{thm:MainProof}, and it contains an extra generalization, namely we can take any net of finite-index subgroups (with enough normal subgroups), and consider only assignments of gate lattices where the lattice is on this net. One could define an analogous variant of the stabilized automorphism group by restricting to a subnet of finite-index subgroups, but our proof of Corollary~\ref{cor:Stab} does not directly go through for arbitrary nets.
\section{Setting the scene}
\label{sec:Definitions}
\subsection{Basic (but partially new) definitions}
In this section we fix conventions, recall some basic definitions, and introduce some new ones. By $A \Subset B$ we mean $A$ is a finite subset of $B$. For groups, our commutator convention is $[a,b] = a^{-1}b^{-1}ab$, and conjugation is $a^b = b^{-1}ab$. The identity element of a group $G$ is $1_G$. Intervals $[i,j]$ with $i,j \in \mathbb{Z}$ are discrete, i.e.\ $[i,j] = \{k \in \mathbb{Z} \;|\; i \leq k \leq j\}$. By $\Sigma^*$ we denote finite (possibly empty) words over alphabet $\Sigma$, i.e.\ elements of the free monoid, and for $u \in \Sigma^*$, $|u|$ denotes the length of $u$. In groups of homeomorphisms we write composition of $\phi_1$ and $\phi_2$ as $\phi_1 \circ \phi_2$ or simply $\phi_1 \phi_2$, and the rightmost homeomorphism is applied first. The \emph{support} of a homeomorphism is the smallest closed set such that every point outside is fixed.
A group $G$ is \emph{residually finite} if for every $g \neq 1_G$, there exists a normal finite-index subgroup $H$ such that $g \notin H$. We say $G$ is a $2$-group if every element has finite order which is a power of $2$. By $F_n$ we denote the free group on $n$ generators. By $S_n$ and $A_n$ we denote the symmetric and alternating group on $n$ elements, respectively, and $\mathrm{Sym}(A), \mathrm{Alt}(A)$ are the corresponding groups for a set $A$.
Throughout this paper (unless otherwise mentioned), $G$ denotes a countably infinite residually finite discrete group, not necessarily finitely-generated. We think of this as an ``ambient'' group, and often omit it in statements (unless we need to specify further properties of it).
We order finite subsets of a set $G$ by inclusion, and something holds for \emph{arbitrarily large} subsets of $G$ if for any finite set $F \Subset G$ it holds for some $S \supset F$. If $G$ is finitely-generated, we say a sequence of translated finite-index subgroups $H_ig_i$ gets \emph{arbitrarily sparse} if the distance between distinct elements $hg_i, h'g_i$ with $h, h' \in H_i$ tends to infinity uniformly, in some right-invariant word metric; equivalently, the word norm of the minimal non-identity element of $H_i$ grows without bound. On subsets of a group $G$ we use the \emph{Fell topology}, namely the Cantor topology $\{0,1\}^G$ after identifying sets with their characteristic functions. For the subspace of subgroups this is also known as the \emph{Chabauty topology}. On a general group, we say $H_ig_i$ becomes arbitrarily sparse if $(H_i)_i$ tends to the trivial group.
We will frequently and without explicit mention use the following basic group theory fact, which works for any group $G$; for $K$ one can simply take the kernel of the translation action on left cosets of $H$.
\begin{lemma}
If $H \leq G$ is of finite index, then $H$ contains a normal subgroup $K \triangleleft G$ of finite index.
\end{lemma}
A \emph{topological dynamical system} is a pair $(X, G)$ where $X$ is a compact metrizable space, $G$ is a countable discrete group, and $G \curvearrowright X$ acts continuously on $X$. For $G = \mathbb{Z}$ we write this also just as $(X, \sigma)$ where $\sigma$ is the homeomorphism corresponding to the cyclic generator $1 \in \mathbb{Z}$. If $G$ is a countable infinite group and $\Sigma$ a finite set, we consider $\Sigma^G$ with the product topology; topologically it is just the Cantor set. The group $G$ acts on $\Sigma^G$ by $(g, x) \mapsto \sigma_g(x)$ where $\sigma_g(x)_h = x_{hg}$. To shorten formulas, we often omit ``$\sigma$'' and identify elements of $G$ with these translation maps. If $X$ is a topologically closed $G$-invariant set $X \subset \Sigma^G$, $(X, G)$ is called a \emph{subshift} and $x \in X$ is called a \emph{configuration}.
For $X \subset \Sigma^G$ and $D \subset G$, for $x \in X$ write $x|D \in \Sigma^D$ for the partial configuration $y$ defined by $\forall g \in D: y_g = x_g$ (we do not drop the right argument of $|$ to a subscript, to avoid complex formulas in subscripts and double subscripting). Partial configurations are also called \emph{patterns}, and they are \emph{finite} if their domain is. Write $X|D \subset \Sigma^D$ for the set of patterns $x|D$ where $x \in X$. In the one-dimensional ($G = \mathbb{Z}$) situation for $u \in \Sigma^*$ we write $u \sqsubset X$ for $u \in X|[0,|u|-1]$, with the obvious identification of words and patterns. When a subshift is clear from context, $y \in \Sigma^D$ is a pattern, and $N \subset G$, define $\mathcal{F}(y, N) = \{z \in \Sigma^N \;|\; \exists x \in X: x|D = y, x|N = z\}$. For two patterns $x \in \Sigma^D, y \in \Sigma^E$ with $D \cap E = \emptyset$, write $x \sqcup y$ for the obvious union pattern with domain $D \cup E$.
When a group $G$ is clear from context, we fix a net $(B_r)_r$ of finite subsets of $G$ that exhausts it (we do not name the directed set of $r$s). We call the finite set $B_r$ the \emph{ball} of radius $r$. For a symmetric set $N$, write $A_{r,N} = NB_r \setminus B_r$ for the \emph{annulus} of \emph{thickness} $N$. As the notation may suggest, we like to pretend our groups are finitely-generated; in this case one may fix some finite set of generators (all results are independent of this choice, and more generally of the net). For $d$ the corresponding right-invariant word metric, and $r \in \mathbb{N}$, we can pick $B_r = \{g \;|\; d(g, 1_G) \leq r\}$. Picking $N = B_R$, the corresponding annulus is just $A_{r,N} = B_{r+R} \setminus B_r$, explaining the terminology. Patterns whose domain is an annulus $A_{r,N}$ are often called \emph{contexts}, and many of our arguments deal with the various fillings $x \in \mathcal{F}(P, B_r)$ (or $x \in \mathcal{F}(P, NB_r)$ if we want to keep the context visible) for contexts $P \in X|A_{r, N}$. Sometimes we use similar terminology with ``full'' contexts $x \in X|(G \setminus N)$ with $N \Subset G$.
The usual notion of isomorphism for topological dynamical systems is \emph{topological conjugacy}, meaning homeomorphism commuting with the action. Up to topological conjugacy, we can define subshifts on countable groups as expansive actions on compact subsets of the Cantor set, where \emph{expansivity} means that there exists $\epsilon > 0$ such that
\[ (\forall g \in G: d(gx, gy) < \epsilon) \implies x = y \]
where $d$ metrizes the Cantor topology.
A subshift of the form $\bigcap_{g \in G} gC$, where $C \subset \Sigma^G$ is clopen, is called an \emph{SFT} (short for subshift of finite type). If $P \in \Sigma^D$ where $D \Subset G$, the \emph{cylinder} defined by $P$ is $[P] = \{x \in X \;|\; x|D = P\}$. A clopen set is a finite union of cylinders, and a \emph{window} for an SFT $X$ is any symmetric finite set $N$ (meaning $N = \{g^{-1} \;|\; g \in N\}$) that contains all the sets $DD^{-1}$ such that $D$ is the domain of one of the cylinders. The important property of a window $N$ is the following; we omit the straightforward proof.
\begin{lemma}
If $N$ is a window for $X$, then for all $y \in X|A_{r,N}$, the choices of fillings $\mathcal{F}(y, B_r)$ and $\mathcal{F}(y, G \setminus NB_r)$ are independent in $X$, meaning for any $x \in \mathcal{F}(y, B_r)$ and $z \in \mathcal{F}(y, G \setminus NB_r)$, the configuration $x \sqcup y \sqcup z$ is in $X$.
\end{lemma}
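To see the independence concretely in the simplest one-dimensional case, the following Python sketch (an illustration of ours, not part of the formal development) checks the lemma for the golden mean shift on $\mathbb{Z}$, where the single forbidden word $11$ makes $N = \{-1,0,1\}$ a window; the inner ball is positions $\{2,3,4\}$ and the annulus is $\{1,5\}$:

```python
from itertools import product

def legal(w):
    # golden mean shift on Z: the word 11 is forbidden
    return "11" not in w

def words(k):
    return ("".join(t) for t in product("01", repeat=k))

# For each context on the annulus {1,5}, every legal inner filling of {2,3,4}
# glues with every legal outer filling of {0} and {6,7}: the claimed
# independence of the two choices.
independent = True
for a1, a5 in product("01", repeat=2):
    inners = [m for m in words(3) if legal(a1 + m + a5)]
    outers = [(o, p) for o in words(1) for p in words(2)
              if legal(o + a1) and legal(a5 + p)]
    for m in inners:
        for o, p in outers:
            independent = independent and legal(o + a1 + m + a5 + p)
print(independent)  # True
```

Restricted to the finite stretch $[0,7]$ this is exactly the statement of the lemma with $B_r = \{2,3,4\}$.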
\begin{definition}
A subshift $X$ has the \emph{eventual filling property}, or \emph{EFP}, if
\[ \forall F \Subset G: \exists N \Subset G: \forall x, y \in X: \exists z \in X: z|F = x|F \wedge z|(G \setminus N) = y|(G \setminus N). \]
\end{definition}
EFP is a \emph{gluing property}, meaning it deals with compatibility of patterns on different areas of the group. On the groups $\mathbb{Z}^d$, EFP can be seen as a weakening of the uniform filling property, introduced in \cite{RoSa99} for $\mathbb{Z}^2$. To the best of our knowledge it has not been studied previously even on $\mathbb{Z}^2$.
As a weaker technical notion, we say a subshift has the \emph{many fillings property}, or \emph{MFP}, if as $F \Subset G$ tends to $G$, the number of configurations agreeing with $x|(G \setminus F)$ tends to infinity uniformly in $x \in X$. A subshift is \emph{nontrivial} if it has at least two configurations. The following is a simple exercise.
\begin{lemma}
Every nontrivial EFP subshift has MFP, in particular it is infinite.
\end{lemma}
It is also easy to see that a nontrivial EFP subshift cannot have isolated points (thus is homeomorphic to the Cantor set).
An \emph{automorphism} of a subshift $X \subset \Sigma^G$ is a homeomorphism $f : X \to X$ that commutes with the action of $G$. By the definition of the topology on $\Sigma^G$, an automorphism has a (not necessarily unique) finite \emph{neighborhood} $N \subset G$ and a \emph{local rule} $\hat f : \Sigma^N \to \Sigma$ such that $f(x)_g = \hat f(P)$ where $P \in \Sigma^N$ is defined by $P_h = x_{hg}$. This is called the \emph{Curtis-Hedlund-Lyndon theorem}. Obviously the inverse of an automorphism is an automorphism as well, so the automorphisms of a subshift form a group denoted by $\mathrm{Aut}(X, G)$. More generally, a topological conjugacy between two subshifts satisfies an obvious analog of the Curtis-Hedlund-Lyndon theorem.
\begin{lemma}
For one-dimensional ($G = \mathbb{Z}$) SFTs, EFP is equivalent to the condition
\[ \exists m: \forall u, v \sqsubset X: \exists w: |w| = m \wedge uwv \sqsubset X. \]
\end{lemma}
(This is the one-dimensional case of the uniform filling property of \cite{RoSa99}.)
\begin{proof}
If the condition holds, then EFP is clear (even without the SFT assumption), namely for any finite set $F \subset [i,j]$ we can pick $N = [i-m, j+m]$ and use the condition on both sides of the interval to glue $F$-patterns (extended arbitrarily to $[i,j]$) to $(\mathbb{Z} \setminus N)$-patterns.
From Curtis-Hedlund-Lyndon, we easily see that topological conjugacy preserves both EFP and the condition in the lemma (even every \emph{factor map}, i.e.\ surjective $G$-commuting continuous map, preserves them), so we can conjugate $X$ to an \emph{edge shift} \cite{LiMa95}, meaning $X$ is the set of paths (sequences of edges with matching endpoints) in a finite directed graph $(V,E)$. Let $M$ be the matrix with $M_{a,b}$ the number of edges from vertex $a$ to vertex $b$.
Suppose now that we have EFP. Pick $F = \{0\}$ and $N$ as in the definition of EFP. We can clearly make $G \setminus N$ smaller without breaking the gluing property for this pair, so we may take $N = [i, j]$ with $i \leq 0 \leq j$. Now in particular, by applying the gluing property to suitable pairs of configurations and looking at the rightmost vertex of the edge at $0$, and the leftmost vertex of the edge at $j+1$, we see that for any two vertices $a,b \in V$, we have a path of length $j$ from $a$ to $b$. Thus $M^j$ is a matrix with all entries positive. Now the condition of the lemma holds with $m = j$.
\end{proof}
The condition in the lemma is known to be equivalent to topological mixing \cite{LiMa95}, so we refer to it as simply \emph{mixing}. In the proof we showed that EFP implies that $X$ is defined, as an edge shift, by a \emph{primitive} matrix, namely one with a positive power.
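Primitivity is straightforward to verify mechanically. The following Python sketch (our own illustration; the helper names are ours) checks whether a nonnegative matrix has a strictly positive power, using Wielandt's bound that for a primitive $n \times n$ matrix the exponent $n^2 - 2n + 2$ already works:

```python
def bool_matmul(a, b):
    n = len(a)
    return [[any(a[i][k] and b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_primitive(m):
    # A nonnegative n x n matrix is primitive iff some power is strictly
    # positive; by Wielandt's bound it suffices to check exponents up to
    # n^2 - 2n + 2, so we iterate boolean matrix powers that far.
    n = len(m)
    b = [[bool(x) for x in row] for row in m]
    p = b
    for _ in range(n * n - 2 * n + 2):
        if all(all(row) for row in p):
            return True
        p = bool_matmul(p, b)
    return False

# Golden mean shift (forbid the word 11): adjacency matrix [[1,1],[1,0]],
# primitive, hence mixing. The period-2 cycle [[0,1],[1,0]] is not primitive.
print(is_primitive([[1, 1], [1, 0]]))  # True
print(is_primitive([[0, 1], [1, 0]]))  # False
```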
If $X \subset \Sigma^G$ is a subshift and $H \leq G$ is a finite index subgroup, then the action of the subgroup $H$ on $X$ can itself be considered a subshift, namely pick a set of left coset representatives $R$ and define $\phi : \Sigma^G \to (\Sigma^R)^H$ by $(\phi(x)_h)_r = x_{rh}$. Alternatively, in terms of the abstract characterization of subshifts, it is easy to see that passing to a subgroup of finite index only changes the expansivity constant, since the $G$-action is continuous. Even if $X \subset \Sigma^G$ is not necessarily $G$-invariant, for a subgroup $K \leq G$ we say $X$ is a \emph{$K$-subshift} if it is closed and $K$-invariant.
A crucial property of automorphisms of subshifts is that they satisfy a dual notion of continuity, namely information cannot move from near the origin of the group to its ends in one step (which can happen with general homeomorphisms $f : \Sigma^G \to \Sigma^G$).
\begin{definition}
Let $G$ be a countable set, $\Sigma$ an alphabet and $X \subset \Sigma^G$. A homeomorphism $f : X \to X$ is \emph{ntinuous} if for each finite set $S$ there exists a finite set $F$ such that for $g \notin F$ and $x \in X$, $x \mapsto f(x)_g$ factors through the projection $x \mapsto x|(G \setminus S)$. A homeomorphism is \emph{bintinuous} if it is ntinuous, and its inverse is also ntinuous.
\end{definition}
Not every homeomorphism is bintinuous; one example is the inverse of the map $f : \{0,1\}^\mathbb{N} \to \{0,1\}^\mathbb{N}$ defined by $f(x)_0 = x_0$ and $\forall i \geq 1: f(x)_i \equiv x_{i-1} + x_i \bmod 2$. It is clear that automorphisms of subshifts (and thus also their inverses) are bintinuous. It is also clear that bintinuous homeomorphisms on a subshift $X$ form a group.
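The example can be verified on finite prefixes. In the Python sketch below (our own illustration), the inverse of $f$ is the running XOR, and flipping coordinate $0$ of the argument changes every coordinate of the inverse image, so $f^{-1}(y)_i$ never becomes independent of $y_0$; this is exactly the failure of ntinuity:

```python
from itertools import accumulate
from operator import xor

def f(x):
    # f(x)_0 = x_0 and f(x)_i = x_{i-1} + x_i mod 2, on a finite prefix
    return [x[0]] + [(x[i - 1] + x[i]) % 2 for i in range(1, len(x))]

def f_inv(y):
    # the inverse is the running XOR: f_inv(y)_i = y_0 ^ ... ^ y_i
    return list(accumulate(y, xor))

x = [1, 0, 1, 1, 0, 0, 1, 0]
assert f_inv(f(x)) == x

# Flipping coordinate 0 of the input flips every coordinate of the inverse
# image, so f_inv(y)_i depends on y_0 for all i.
y = f(x)
y_flipped = [1 - y[0]] + y[1:]
print(all(a != b for a, b in zip(f_inv(y), f_inv(y_flipped))))  # True
```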
We need a simple fact about uniform convergence. Note that we only deal with homeomorphisms on Cantor space, so all metrics are equivalent and all continuous functions are uniformly continuous.
\begin{lemma}
\label{lem:UniformComposition}
Let $X, Y, Z$ be metric spaces and let $g_i, g: Y \to Z, f_i, f : X \to Y$ be functions, where $i$ runs over some directed set $\mathcal{I}$. If $g_i \rightarrow g$ and $f_i \rightarrow f$ uniformly and $g$ is uniformly continuous, then $g_i \circ f_i \rightarrow g \circ f$ uniformly.
\end{lemma}
\begin{proof}
Let $\epsilon > 0$ be arbitrary. Let $j_0$ be such that for $j \geq j_0$ we have $\forall y \in Y: d_Z(g_j(y), g(y)) < \epsilon/2$. Use uniform continuity of $g$ to find $\delta > 0$ such that $\forall x, y \in Y: d_Y(x, y) < \delta \implies d_Z(g(x), g(y)) < \epsilon/2$. Use uniform convergence of $(f_i)_i$ to find $i_0 \geq j_0$ such that for $i \geq i_0$ we have $\forall x \in X: d_Y(f_i(x), f(x)) < \delta$.
Suppose now that $i \geq i_0$ and let $x \in X$ be arbitrary. We have
\[ d_Z(g(f(x)), g(f_i(x))) < \epsilon/2 \] because $d_Y(f(x), f_i(x)) < \delta$ and by the choice of $\delta$, and
we have
\[ d_Z(g(f_i(x)), g_i(f_i(x))) < \epsilon/2 \]
because $i \geq j_0$, so by the triangle inequality we have
\[ d_Z(g(f(x)), g_i(f_i(x))) \leq d_Z(g(f(x)), g(f_i(x))) + d_Z(g(f_i(x)), g_i(f_i(x))) < \epsilon \]
proving uniform convergence of $g_i \circ f_i$ to $g \circ f$.
\end{proof}
\subsection{Gates}
\begin{definition}
A \emph{gate} on a subshift $X \subset \Sigma^G$ is a homeomorphism $\chi : X \to X$ such that for some \emph{weak neighborhood} $N \Subset G$ we have $\chi(x)_g = x_g$ for all $g \notin N$. Write $\mathfrak{G}$ for the group generated by gates.
\end{definition}
Note that ``$\mathfrak{G}$'' is a fancy ``G'', and stands for ``gate''. The following lemma was proved for $G = \mathbb{Z}$ in \cite{Sa22a}; the proof for general $G$ is the same, and follows more or less directly from the definition of the product topology.
\begin{lemma}
\label{lem:StrongWeak}
A homeomorphism $\chi : X \to X$ is a gate if and only if it admits a \emph{strong neighborhood} $N \Subset G$ and a \emph{local rule} $\hat\chi : X|N \to X|N$ such that for all $g \notin N$ we have $\chi(x)_g = x_g$ for all $x \in X$, and for all $g \in N$ we have $\chi(x)_g = \hat\chi(x|N)_g$.
\end{lemma}
It is clear that $\mathfrak{G}$ in fact consists of gates, as we can always increase the strong neighborhoods of two gates to be equal (after which they compose like permutations). For a gate $\chi$ and $g \in G$ write $\chi^g = \sigma_{g^{-1}} \circ \chi \circ \sigma_g$. Note in particular that when $G = \mathbb{Z}$, $\chi^n$ does not refer to iteration -- we never need to iterate a gate. One can see $\chi^g$ as applying the gate ``at'' $g$, if we use the convention where configurations of $X$ are seen as vertex-labelings of some left Cayley graph of the group $G$ (at least when it is finitely-generated). If $\chi$ has strong (resp.\ weak) neighborhood $N$, then $\chi^g$ has strong (resp.\ weak) neighborhood $Ng$.
Gates form a normal subgroup of the group of bintinuous homeomorphisms on $X$:
\begin{lemma}
\label{lem:GatesNormal}
Let $f$ be a homeomorphism with ntinuous inverse and $\chi$ a gate. Then $\chi^f$ is a gate.
\end{lemma}
\begin{proof}
Suppose that $N \Subset G$ is a strong neighborhood for $\chi$.
By ntinuity of $f^{-1}$, outside some finite set $F$, the image of $f^{-1}$ can be determined without looking at the cells in $N$, in other words for $g \notin F$, we have $f^{-1}(\chi(f(x)))_g = f^{-1}(f(x))_g = x_g$ since the application of $\chi$ does not affect the $f^{-1}$-image. This shows that $F$ is a weak neighborhood for $\chi^f$.
\end{proof}
We say two gates $\chi, \chi'$ \emph{commute} if they do, i.e.\ if $[\chi, \chi'] = \chi^{-1} \circ (\chi')^{-1} \circ \chi \circ \chi' = \mathrm{id}$. It is clear that if $\chi, \chi'$ have strong neighborhoods $N, N'$ respectively, and $N \cap N' = \emptyset$, then $\chi$ and $\chi'$ commute. If $S \subset G$ is a (possibly infinite) subset, and $\chi^s$ commutes with $\chi^t$ for any distinct $s, t \in S$, then we say \emph{the product $\prod_{s \in S} \chi^s$ commutes}, or just that \emph{$\chi^S$ commutes}, and define
\[ \chi^S(x) = \lim_{F \Subset S} (\prod_{k \in F} \chi^k)(x) \]
by taking the pointwise limit (note that $\prod$ means function composition here).
\begin{lemma}
\label{lem:PointwiseLimit}
If $\chi^K$ commutes, then the pointwise limit is well-defined, and convergence is uniform.
\end{lemma}
\begin{proof}
To see that the pointwise limit is well-defined, observe that in a finite subproduct $\chi^F$, $F \Subset K$, the value of $\chi^F(x)_g$ only depends on the value of finitely many elements of $G$, since we may order the product so that translates of $\chi$ that may change the value of $g$ (i.e.\ $g$ is in their strong neighborhood) are applied first. The same argument gives uniform convergence: the value at $g$ stabilizes once we have applied all translates of $\chi$ with $g$ in their strong neighborhood.
\end{proof}
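A finite toy version of a commuting product (our own Python sketch, on finite words rather than configurations): a gate with strong neighborhood $\{0,1\}$, applied along the sparse set $4\mathbb{Z}$, has pairwise disjoint strong neighborhoods for its translates, so the order of composition does not matter.

```python
def apply_gate(x, g):
    # a toy gate with strong neighborhood {0, 1}, translated to {g, g+1}:
    # swap the symbols at positions g and g+1 of a finite binary word
    y = list(x)
    y[g], y[g + 1] = y[g + 1], y[g]
    return y

# Translates along the sparse set 4Z (restricted to a finite window) have
# pairwise disjoint strong neighborhoods, hence commute: composing them in
# any order yields the same map, so the product over the set is well-defined.
x = [1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0]
positions = list(range(0, len(x) - 1, 4))  # 0, 4, 8
forward, backward = x, x
for g in positions:
    forward = apply_gate(forward, g)
for g in reversed(positions):
    backward = apply_gate(backward, g)
print(forward == backward)  # True
```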
Say a gate is \emph{eventually even} if for all large enough strong neighborhoods $N$ the corresponding permutation $\pi \in \mathrm{Sym}(X|N)$ restricted to any complement pattern is even, i.e.\ for any $y \in X|(G \setminus N)$, the restriction of $\pi$ to the set $\mathcal{F}(y, N)$ is even. Say a gate is \emph{sometimes even} if the same is true for at least one strong neighborhood $N$.
\begin{lemma}
\label{lem:CommutatorCharacterization}
On every subshift, each of the following implies the next.
\begin{enumerate}
\item $\chi = [\chi_1, \chi_2]$ for some gates $\chi_1, \chi_2$;
\item $\chi \in [\mathfrak{G}, \mathfrak{G}]$;
\item $\chi$ is eventually even;
\item $\chi$ is sometimes even.
\end{enumerate}
The last two items are always equivalent, and on an MFP SFT all four are equivalent.
\end{lemma}
Due to the last point, we simply call eventually/sometimes even gates \emph{even}. Write $\hat{\mathfrak{G}}$ for the group generated by even gates (we only consider MFP SFTs, so one can read $\hat{\mathfrak{G}}$ as $[\mathfrak{G}, \mathfrak{G}]$, but the concrete characterization is more useful).
\begin{proof}
The implication (1) $\implies$ (2) is trivial. For (2) $\implies$ (3) take any $N$ containing the strong neighborhoods of all the gates involved in a composition of commutators of gates. For any $y \in X|(G \setminus N)$, the permutation of $\mathcal{F}(y, N)$ performed in the context is just the corresponding composition of commutators of gates restricted to $\mathcal{F}(y, N)$, and thus is in the commutator subgroup of the symmetric group of that set, which is the corresponding alternating group. The implication (3) $\implies$ (4) is trivial.
We show (4) $\implies$ (3) in every subshift, so (3) and (4) are equivalent. Simply observe that if $N$ is a strong neighborhood such that $\chi$ performs an even permutation on $N$ in every $(G \setminus N)$-context, then the same is true for any $(G \setminus N')$-context for $N' \supset N$, as for any $(G \setminus N')$-context we can write the permutation $\chi$ performs on $\mathcal{F}(y, N')$ as a finite composition of even permutations. Namely, for each of the finitely many extensions $z \in \mathcal{F}(y, N' \setminus N)$ the permutation $\chi$ performs on the pattern in $N$ is even by assumption.
It now suffices to show that in an MFP SFT $X$, (3) $\implies$ (1). To see this, let $N$ be a window for $X$ and pick a large strong neighborhood $B_r$ such that there are at least $5$ fillings of each $A_{r,N}$-context. Now increasing the strong neighborhood to $NB_r$, we have a permutation of $X|NB_r$ which does not modify the contents of the annulus $A_{r,N}$ and for each pattern on the annulus performs an even permutation on the pattern inside. Since there are at least $5$ extensions of the pattern and $[S_n,S_n] = \{[g, h] \;|\; g, h \in S_n\}$ for $n \geq 5$ \cite{Or51}, we can write the restriction to each $A_{r,N}$-context as a commutator of two permutations in that same context. For different contexts the permutations commute, so we can write this as a commutator of two gates.
\end{proof}
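The fact from \cite{Or51} used above, that every even permutation is a single commutator for $n \geq 5$, can be checked exhaustively for $n = 5$ by the following Python sketch (our own verification; permutations are encoded as tuples of images):

```python
from itertools import permutations

def compose(p, q):
    # function composition: (p o q)(i) = p(q(i)); rightmost applied first
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def sign(p):
    # sign via the cycle type: a cycle of length l contributes (-1)^(l-1)
    seen, odd = set(), 0
    for i in range(len(p)):
        if i in seen:
            continue
        j = i
        clen = 0
        while j not in seen:
            seen.add(j)
            j = p[j]
            clen += 1
        odd ^= (clen - 1) % 2
    return -1 if odd else 1

perms = list(permutations(range(5)))
evens = {p for p in perms if sign(p) == 1}
commutators = {compose(compose(inverse(g), inverse(h)), compose(g, h))
               for g in perms for h in perms}
print(commutators == evens)  # True: each element of A_5 is one commutator
```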
\subsection{Gate lattices}
\begin{lemma}
\label{lem:ExpHIsAuto}
Suppose that $K \leq G$ is a subgroup, $X \subset \Sigma^G$ is a $K$-subshift, $\chi$ is a gate on $X$, and $\chi^K$ commutes. Then $\chi^K$ is an automorphism of $(X, K)$.
\end{lemma}
\begin{proof}
To see that $\chi^K$ is a homeomorphism, observe that by Lemma~\ref{lem:PointwiseLimit} it is a uniform limit of continuous functions, thus continuous (alternatively, the proof shows this directly). It clearly has continuous inverse $(\chi^{-1})^K$, thus it is a homeomorphism.
We check commutation with $K$-shifts. If $h \in K$, then
\begin{align*}
\chi^K \circ \sigma_h &= \lim_{F \Subset K} (\prod_{k \in F} \chi^k) \circ \sigma_h \\
&= \lim_{F \Subset K} \prod_{k \in F} \sigma_{k^{-1}} \circ \chi \circ \sigma_k \circ \sigma_h \\
&= \lim_{F \Subset K} \prod_{k \in F} \sigma_h \circ \sigma_{(kh)^{-1}} \circ \chi \circ \sigma_{kh} \\
&= \lim_{F \Subset K} \prod_{k \in F} \sigma_h \circ \sigma_{k^{-1}} \circ \chi \circ \sigma_{k} \\
&= \sigma_h \circ \chi^K
\end{align*}
where uniform convergence of products means limits commute with composition (Lemma~\ref{lem:UniformComposition}), and the fourth equality holds because $Fh$ runs over the finite subsets of $K$ as $F$ does.
\end{proof}
\begin{lemma}
\label{lem:LatticeBase}
Suppose $X$ is a $G$-subshift, $g \in G$ and $H \leq G$. If $\chi^H$ commutes, then $\chi^{Hg}$ commutes and $\chi^{Hg} = (\chi^H)^g$.
\end{lemma}
\begin{proof}
Interpreting infinite products as pointwise uniform limits of finite products and applying Lemma~\ref{lem:UniformComposition} to pull functions out of the limit, the calculation
\[ \chi^{Hg} = \prod_{h \in H} \sigma_{g^{-1}h^{-1}} \circ \chi \circ \sigma_{hg} = (\prod_{h \in H} \sigma_{h^{-1}} \circ \chi \circ \sigma_{h})^g = (\chi^H)^g \]
suffices.
\end{proof}
\begin{example}
We note that the product $\chi^{gH}$ may not commute even if $\chi^H$ commutes. Suppose e.g.\ that $G = F_2 = \langle a, b \rangle$, $X = \{0,1\}^G$, $H = \langle a \rangle$ and $\chi$ swaps the symbols at $\{b, ba\}$. Clearly $\chi^H$ commutes, but $\chi^{b^{-1}H}$ does not. \qee
\end{example}
\begin{definition}
\label{def:Gnets}
Maps of the form $\chi^{Hg}$ where $H \leq G$ is of finite index are called \emph{gate lattices}, and they are \emph{even} if $\chi$ is. Write $\mathfrak{L}$ for the group generated by gate lattices, and $\hat{\mathfrak{L}}$ for the group generated by even gate lattices.
\end{definition}
Note that ``$\mathfrak{L}$'' is a fancy ``L'', and stands for ``lattice'', which refers to the fact gates are applied at the points of a (translated) lattice.
We do not need to take \emph{all} subgroups of finite index in this definition for some of our results, in particular in the precise statement of the main result Theorem~\ref{thm:MainProof} we allow any net of finite-index subgroups with a cofinal subnet of normal subgroups, tending to the trivial group in the Chabauty topology (presumably leading to different groups for different such nets). Such a net $\mathcal{I}$ will be called a \emph{lattice net}; for simplicity we directly take $\mathcal{I}$ to be a set of finite-index subgroups ordered under inclusion and directed downward. We define $\mathcal{I}$-gate lattices in the obvious way, as well as notations $\mathfrak{L}_{\mathcal{I}}, \hat{\mathfrak{L}}_{\mathcal{I}}$. Note that of course $\mathfrak{L}_{\mathcal{I}}, \hat{\mathfrak{L}}_{\mathcal{I}}$ are respectively subgroups of $\mathfrak{L}, \hat{\mathfrak{L}}$.
\begin{lemma}
\label{lem:Decomposition}
Suppose $\chi^{Hg}$ commutes, and $K \leq H$. Then $\chi^{Hg}$ can be written as a finite commuting product of maps of the form $\chi^{Kg'}$.
\end{lemma}
\begin{proof}
Simply write $H = \bigcup_{h \in T} Kh$ for some set of representatives $T$, and observe that
\[ \prod_{h \in H} \chi^{hg} = \prod_{h \in T} \prod_{k \in K} \chi^{khg} = \prod_{h \in T} \chi^{Khg} \]
by the commutation of $\chi^{Hg}$.
\end{proof}
\begin{lemma}
\label{lem:NormalAuto}
If $H$ is a normal subgroup of $G$, then $\chi^{Hg}$ is an automorphism of $(X, H)$ whenever $\chi^H$ commutes.
\end{lemma}
\begin{proof}
We have $\chi^{H g} = \chi^{g H}$ by normality, and the latter product, equal to $(\chi^g)^H$, must commute since by Lemma~\ref{lem:LatticeBase} $\chi^{H g}$ does, and $g H$ and $H g$ are literally the same subset of $G$ where $\chi$ gets applied. Now $\chi^{Hg} = (\chi^g)^H$ is an automorphism of $(X, H)$ by Lemma~\ref{lem:ExpHIsAuto}.
\end{proof}
We recall a definition of Hartman-Kra-Schmieding (generalized to residually finite acting groups, as seems appropriate here).
\begin{definition}
Let $G$ be a residually finite group, and $X \subset \Sigma^G$ a subshift. The \emph{stabilized automorphism group} of $X$ is the direct union of $\mathrm{Aut}(X, H)$ where $H$ ranges over finite-index subgroups of $G$.
\end{definition}
\begin{lemma}
The groups $\mathfrak{L}$ and $\hat{\mathfrak{L}}$ are contained in the stabilized automorphism group of $\Sigma^G$.
\end{lemma}
\begin{proof}
Every finite-index subgroup of $G$ contains a normal finite-index subgroup. By Lemma~\ref{lem:Decomposition} we can write any $\chi^{Hg}$ as a composition of maps of the form $\chi^{Kg'}$ with normal $K$, and by Lemma~\ref{lem:NormalAuto} these are automorphisms of $(X, K)$, thus in the stabilized automorphism group.
\end{proof}
\subsection{Conditions on gate lattices being even}
\label{sec:AllEven}
\begin{lemma}
If $X$ is an MFP SFT, then $\hat{\mathfrak{L}}$ is contained in $[\mathfrak{L}, \mathfrak{L}]$.
\end{lemma}
\begin{proof}
Let $\chi^{Hg} \in \hat{\mathfrak{L}}$. By Lemma~\ref{lem:CommutatorCharacterization} we have $\chi = [\chi_1, \chi_2]$ for some gates $\chi_1, \chi_2$. Using Lemma~\ref{lem:Decomposition} and residual finiteness of $G$, we may assume $H$ is very sparse compared to the strong neighborhood of $\chi$ and the $\chi_i$. Then an easy calculation shows $\chi^{Hg} = [\chi_1^{Hg}, \chi_2^{Hg}]$. Namely the strong neighborhoods of different translates of $\chi$ and $\chi_i$ do not intersect, thus these translates commute, thus for finite $F \Subset H$ we have
\begin{align*}
\chi^{Fg} &= \prod_{h \in F} (\chi_1^{-1})^{hg} \circ (\chi_2^{-1})^{hg} \circ \chi_1^{hg} \circ \chi_2^{hg} \\
&= \prod_{h \in F} (\chi_1^{-1})^{hg} \circ \prod_{h \in F} (\chi_2^{-1})^{hg} \circ \prod_{h \in F} \chi_1^{hg} \circ \prod_{h \in F} \chi_2^{hg} \\
&= [\chi_1^{Fg}, \chi_2^{Fg}] \end{align*}
and the claim follows by taking the (uniformly converging) limits and applying Lemma~\ref{lem:UniformComposition}.
\end{proof}
We do not know when $[\mathfrak{L}, \mathfrak{L}]$ is equal to $\hat{\mathfrak{L}}$, but we show some conditions under which we even have $\hat{\mathfrak{L}} = \mathfrak{L}$.
We first show that a full shift over an alphabet of even cardinality, over any group $G$, has this property. We in fact state some more general dynamical properties that imply it. First say a subshift has \emph{even fillings} if for some $F \Subset G$, we have that $|\mathcal{F}(x, F)|$ is even for all $x \in X|(G \setminus F)$. It is clear that if this happens for some $F$, it happens for all larger $F$ and all right translates of $F$.
\begin{lemma}
If $X$ is an MFP SFT with even fillings, then $\hat{\mathfrak{L}}_{\mathcal{I}} = [\mathfrak{L}_{\mathcal{I}}, \mathfrak{L}_{\mathcal{I}}] = \mathfrak{L}_{\mathcal{I}}$ for any lattice net $\mathcal{I}$.
\end{lemma}
\begin{proof}
If $\chi$ has strong neighborhood $N$, let $R$ be a window for $X$, let $F$ have even fillings, and replace the strong neighborhood of $\chi$ by $N \sqcup Fg$ such that $N \cap RFg = \emptyset$, acting trivially on the contents of $Fg$. Now consider any context $x \in X|(G \setminus (N \cup Fg))$. Since applying $\chi$ does not affect the contents of $RFg$, the set of possible contents of $Fg$ depends only on $x$, and is the same before and after the application.\footnote{Since $\chi$ is well-defined with strong neighborhood $N$, it is not even necessary to assume $N \cap RFg = \emptyset$, only that $N \cap Fg = \emptyset$; but the extra leeway does not hurt.} Thus for each cycle that $\chi$ performs, we perform an even number of independent copies of it when $\chi$ is seen as a permutation of $\mathcal{F}(x, N \sqcup Fg)$; in particular this is an even permutation. It is clear that $\chi$ with the new neighborhood still commutes with its translates (since it is the same homeomorphism).
\end{proof}
A common way to get even fillings is that we literally have freely changeable bits visible on each configuration. More precisely, we say a subshift has \emph{syndetic free bits} if it is conjugate to a subshift $X$ over the disjoint union alphabet $A \cup (B \times \{0,1\})$ such that elements of $B \times \{0,1\}$ appear in every configuration, and if $x \in X$ and $x_g = (b, c) \in B \times \{0,1\}$, then also $y \in X$, where $y_h = x_h$ for $h \neq g$ and $y_g = (b, 1-c)$.
\begin{lemma}
\label{lem:FreeBitsPerfect}
If $X$ has syndetic free bits, then it has even fillings.
\end{lemma}
\begin{proof}
A free involution on the fillings of a hole $F$ is obtained by flipping the first free bit under any ordering of $F$ (and a free involution is a bijective proof of even cardinality). The hole $F$ simply needs to be large enough so there is always a free bit. Indeed, since free bits appear in all configurations of $X$, by compactness they appear in any large enough finite pattern of any configuration, thus such $F$ exists.
\end{proof}
Next, we cover all full shifts, under a condition on the group. We say a group $G$ \emph{has halvable subgroups} if all of its finite index subgroups have a finite index subgroup with an even index, i.e.\ $\forall H \leq G: \exists K \leq H: 2|[H : K]$.
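For example, $G = \mathbb{Z}$ has halvable subgroups: every finite-index subgroup is of the form $H = n\mathbb{Z}$ for some $n \geq 1$, and $K = 2n\mathbb{Z} \leq H$ has index $[H : K] = 2$.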
\begin{lemma}
\label{lem:HalvablePerfect}
If $G$ has halvable subgroups and $X = \Sigma^G$, then $\hat{\mathfrak{L}} = [\mathfrak{L}, \mathfrak{L}] = \mathfrak{L}$.
\end{lemma}
\begin{proof}
It suffices to write any $\chi^{Hg}$ as a commutator, where $H$ is a sufficiently sparse normal finite index subgroup. Let $K \leq H$ have even index, take right coset representatives $k_1, ..., k_{2n} \in H$. By normality of $K$ we have $\chi^{Kk_i} = \chi^{k_iK}$ and we can see $\chi^{Hg}$ as a composition of maps $(\chi^{k_{2i+1}} \circ \chi^{k_{2i+2}})^{Kg}$.
Clearly $\chi^{k_{2i+1}}$ and $\chi^{k_{2i+2}}$ have the same parity in any annular context because the subshift's coordinates are interchangeable (we are on a full shift). Thus the parity of their composition is even in every context. By MFP and Lemma~\ref{lem:CommutatorCharacterization} we can write their product as a commutator.
\end{proof}
We give two examples of (classes of) groups with halvable subgroups.
\begin{lemma}
The following groups have halvable subgroups:
\begin{itemize}
\item all finitely-generated infinite finite-dimensional matrix groups over commutative rings;
\item all infinite residually finite $2$-groups.
\end{itemize}
\end{lemma}
\begin{proof}
The first item is a special case of a more general result of \cite{We87}. For the latter, it is clearly enough to prove that every infinite residually finite $2$-group has a subgroup of even finite index. Take any proper normal subgroup $N \triangleleft G$ of finite index and consider the group $G/N$. The order of any nontrivial element $gN$ is a power of $2$, so by Lagrange's theorem the order $[G : N]$ of $G/N$ is even.
\end{proof}
Of course one could generalize Lemma~\ref{lem:HalvablePerfect} to the case $\mathfrak{L}_{\mathcal{I}}$, namely simply take it as a property of a lattice net $\mathcal{I}$ that any $H \in \mathcal{I}$ has an even index subgroup $K \in \mathcal{I}$. Note that even $\mathbb{Z}$ does not have this property for all lattice nets $\mathcal{I}$.
In Lemma~\ref{lem:HalvablePerfect}, we could also replace the condition of being a full shift with a weaker statement, namely that the number of connecting patterns between an annular pattern and another pattern positioned in two different fixed areas inside it has the same parity (with suitable quantifiers). We omit a precise statement, but in the case of $\mathbb{Z}$ we can use this idea to generalize the statement $\hat{\mathfrak{L}} = \mathfrak{L}$ to all mixing SFTs.
\begin{lemma}
\label{lem:ZPerfect}
If $G = \mathbb{Z}$ and $X$ is a mixing SFT, then $\hat{\mathfrak{L}} = [\mathfrak{L}, \mathfrak{L}] = \mathfrak{L}$.
\end{lemma}
\begin{proof}
It clearly suffices to show that, if $\chi$ is an arbitrary permutation with strong neighborhood $[0,n-1]$, then $\chi^{n\mathbb{Z}}$ is a composition of even gate lattices. We may assume $X$ is represented as an edge shift, so elements of $\mathbb{Z}$ carry edges of a finite directed graph and there are vertices between them at elements of $\mathbb{Z}+\frac12$; let $M$ be the matrix with $M_{a,b}$ the number of edges from vertex $a$ to vertex $b$. Note that the permutation $\pi$ that $\chi$ performs in $[0,n-1]$ fixes the vertices at $-\frac12$ and $n-\frac12$. Write $p_{a,b}$ for the parity of the restriction of $\pi$ to the context where the vertices at $(-\frac12, n-\frac12)$ are respectively $(a,b)$.
Next let $\hat M$ be the matrix obtained by taking entries of $M$ modulo $2$, and observe that $\hat M^n = \widehat{M^n}$. Observe that powers of $\hat M^n$ eventually get into a cycle, meaning $\hat M^{mn} = \hat M^{(m+p)n}$ for some $m \geq 0, p \geq 1$. Now consider the commuting product $\chi \circ \chi^{\sigma_{pn}}$. We claim that if we use the strong neighborhood $[-mn, mn+pn-1]$, this is an even permutation.
To see this, let $a,b \in V$ and consider the permutation $\chi$ with vertices $(a,b)$ at $(-mn-\frac12, mn+pn-\frac12)$. For $c,d \in V$, for every choice of path from $a$ to $c$ and from $d$ to $b$, counting modulo $2$, $\chi$ has $p_{c,d}$ cycles of even length. If the number of paths from $a$ to $c$ is even or the number of paths from $d$ to $b$ is even, these cancel out, so the parity is just the parity of the number of quadruples $(a,c,d,b)$ such that $\hat M^{mn}_{a,c} = p_{c,d} = \hat M^{(m+p)n}_{d,b} = 1$. The same calculation holds for $\chi^{\sigma_{pn}}$, so their composition performs an even permutation in the context $(a,b)$.
We conclude that $\chi \circ \chi^{\sigma_{pn}}$ with a suitable choice of strong neighborhood is even, and clearly the choice of strong neighborhood does not affect the commutation of the product $(\chi \circ \chi^{\sigma_{pn}})^{2np\mathbb{Z}}$. By applying $\chi$ in a similar paired-up way on other cosets of $pn\mathbb{Z}$, we get $\mathfrak{L} \subset \hat{\mathfrak{L}}$. Since $\hat{\mathfrak{L}} \subset [\mathfrak{L}, \mathfrak{L}] \subset \mathfrak{L}$, we conclude that the three groups are equal.
\end{proof}
It seems that the above proof essentially uses the fact that $\mathcal{I}$ is the net of all finite-index subgroups.
\subsection{Inertness}
In the case $G = \mathbb{Z}$, there is a standard notion of \emph{inertness} for an automorphism of a mixing SFT, meaning an automorphism that acts trivially on Krieger's \emph{dimension group} (see e.g.\ \cite{BoLiRu88,BoFi91}). We omit the general definition of inertness, and only use a result of Wagoner \cite{Wa90a} (except for the proof of the well-known fact (2) $\implies$ (1) below), namely Lemma~\ref{lem:Wagoner} below. To state it, we need a few definitions.
Recall again that an edge shift is the set of paths $p : \mathbb{Z} \to E$ (with matching endpoints for successive edges) where $(V, E)$ is a directed (multi-)graph (with loops). A \emph{simple graph symmetry} is an automorphism of an edge shift which is defined by a bijection $\pi : E \to E$ that preserves the tails and heads of all edges. If $X$ is an SFT, an automorphism $f \in \mathrm{Aut}(X, \sigma)$ is \emph{simple} if there exists a topological conjugacy between $(X, \sigma)$ and an edge shift, so that $f$ is a simple graph symmetry. This definition is due to Nasu \cite{Na88}. The relevant result of Wagoner is the following:
\begin{lemma}
\label{lem:Wagoner}
If $f$ is an inert automorphism of a mixing SFT $(X, \sigma)$, then there exists $m \in \mathbb{Z}_+$ such that for all $n \geq m$, $f$ can be written as a product of simple automorphisms of $\mathrm{Aut}(X, \sigma^n)$.
\end{lemma}
\begin{lemma}
\label{lem:1DCase}
Let $X$ be a one-dimensional mixing SFT. Then the following are equivalent:
\begin{enumerate}
\item $f$ is a product of inert automorphisms of $(X, \sigma^n)$ for some $n \in \mathbb{Z}_+$;
\item $f$ is a product of simple automorphisms of $(X, \sigma^n)$ for some $n \in \mathbb{Z}_+$;
\item $f$ is a product of gate lattices on $X$;
\item $f$ is a product of even gate lattices on $X$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) $\implies$ (2): If $f$ is inert for some $(X, \sigma^n)$, then because $(X, \sigma^n)$ is itself topologically conjugate to a mixing SFT, by Wagoner's result it is a product of simple automorphisms of higher powers of $\sigma^n$.
(2) $\implies$ (1): This is obvious from the definition of the dimension group through rays and beams \cite{LiMa95}, namely if we represent $X$ as the edge shift where the automorphism is a simple graph symmetry, the action on rays is clearly the identity map.
(2) $\implies$ (3): If $f$ is a simple automorphism of $\sigma^n$, then the edge permutations are obviously commuting gates, and thus $f$ is directly a gate lattice on the subgroup $n\mathbb{Z}$ (note that looking through a topological conjugacy of course does not change the set of gates, nor affect the commutation of their translates).
(3) $\implies$ (2): We simply need to show that if $\chi^{n\mathbb{Z}}$ is a gate lattice, then it can be written as a product of simple automorphisms. This is straightforward from Lemma~\ref{lem:Decomposition}, namely we can write $n\mathbb{Z}$ as a union of sparser translated subgroups $jn + mn\mathbb{Z}$, and it is easy to see that for large enough $m$, $\chi^{jn+mn\mathbb{Z}}$ is a simple automorphism of $(X, \sigma^{mn})$: One can take the vertices to represent words of length $r$ around the positions $jn + \lfloor mn/2 \rfloor + kmn$, where $[0,r-1]$ is a window for $X$. For large $m$, these words are not modified by $\chi^{jn+mn\mathbb{Z}}$, and indeed if the strong neighborhood of $\chi$ is contained in $[-mn/2+r, mn/2-r]$ then $\chi$ can be seen as a permutation of the edges.
The equivalence of (3) and (4) is Lemma~\ref{lem:ZPerfect}.
\end{proof}
If $G = \mathbb{Z}$ and $X$ is a mixing SFT, the \emph{stabilized inert automorphism group} is the smallest group containing the inert automorphisms of the mixing SFTs $(X, \sigma^n)$, for all $n \geq 1$. The following is immediate from Lemma~\ref{lem:1DCase}.
\begin{proposition}
\label{prop:InertIsEgnets}
The stabilized inert automorphism group of a topologically mixing $\mathbb{Z}$-SFT is equal to its group $\hat{\mathfrak{L}}$.
\end{proposition}
\begin{comment}
\begin{remark}
We note without proof that for $A^\mathbb{Z}$, an automorphism $f$ is inert if and only if
\[ |\{(x|_{[0,2r-1]}, f(x)|_{[r,3r-1]}) \in A^{3r} \;|\; x \in A^\mathbb{Z}\}| = |A|^{3r} \]
for all large enough $r$, which can be interpreted as the ``average information flow'' being zero (see Kari's paper \cite{Ka96}).
Thus, the inerts are precisely the kernel of the homomorphism defined in \cite{Ka96} (and thus the homomorphism is just the dimension representation), and \cite{Ka96}'s result implies that of Wagoner in the full shift case.
\end{remark}
\end{comment}
\section{Proof of main results}
We now prove the main results Theorem~\ref{thm:MainProof} and Theorem~\ref{thm:MainMaximal}. The structure of the proof of Theorem~\ref{thm:MainProof} is the following:
\begin{enumerate}
\item We take the commutator of an arbitrary element of $\hat{\mathfrak{G}}$ (really any nontrivial homeomorphism) with a suitably chosen element of $\hat{\mathfrak{G}}$ to obtain a non-trivial map (first three lemmas).
\item We observe that applying the same on a lattice gives us a nontrivial gate lattice (Lemma~\ref{lem:KinCommutator}).
\item We observe that once we have one gate on sparse enough lattices, by EFP we have all of them on sparse enough lattices (done in the proof of Lemma~\ref{lem:GeneratingEgnets}).
\item Once we can apply arbitrary gates on sparse enough lattices, we can apply them on any lattice by refinement (done in the proof of Lemma~\ref{lem:GeneratingEgnets}).
\end{enumerate}
\begin{lemma}
\label{lem:Binice}
For $H \leq G$ of finite index, all elements of $\mathrm{Aut}(X, H)$ are bintinuous. In particular, elements of $\mathfrak{L}$ are bintinuous.
\end{lemma}
\begin{proof}
Since $(X, H)$ is a subshift, this is immediate from the Curtis-Hedlund-Lyndon theorem.
\end{proof}
\begin{lemma}
\label{lem:CommuIsGate}
Let $f$ be a homeomorphism with bintinuous inverse and $\chi$ a gate. Then $[f, \chi]$ is a gate.
\end{lemma}
\begin{proof}
To see that $[f, \chi] = f^{-1} \circ \chi^{-1} \circ f \circ \chi$ is a gate, it suffices to show that $(\chi^{-1})^f$ is a gate, and since $\chi$ is arbitrary it suffices to show that $\chi^f$ is. This is Lemma~\ref{lem:GatesNormal}.
\end{proof}
\begin{lemma}
\label{lem:CommuIsNontrivial}
Let $f$ be a nontrivial homeomorphism with bintinuous inverse. If $X$ is an SFT with MFP then $[f, \chi]$ is a nontrivial gate, for some gate $\chi \in \hat{\mathfrak{G}}$.
\end{lemma}
\begin{proof}
Let $R$ be a window for $X$. Let $x\in X$ be such that $f(x)_g \neq x_g$ for some $g \in G$, and pick a large enough $r$ so that any pattern $P$ on the annulus $A_{r,R}$ has at least $3$ fillings for $B_r$. By gates in $\hat{\mathfrak{G}}$ we can realize any even permutation of $\mathcal{F}(P, RB_r)$ for any $P \in X|A_{r,R}$.
Now pick $\chi$ any nontrivial $3$-rotation of patterns in $\mathcal{F}(x|A_{r,R}, RB_r)$ which has $x$ in its support, but not $f(x)$. If $x|A_{r,R} \neq f(x)|A_{r,R}$, this is trivial, otherwise it follows from $|\mathcal{F}(x|A_{r,R}, B_r)| \geq 3$, since of the at least three different fillings, at most one can agree with $f(x)$. Now $f$ cannot commute with $\chi$, as it does not preserve its support.
\end{proof}
\begin{lemma}
\label{lem:KinCommutator}
Let $f \in \mathrm{Aut}(X,H)$ for some $H \leq G$ of finite index, and $\chi$ a gate. If $K \leq H$ is sparse enough, then we have $[f, \chi]^K = [f, \chi^K]$ (and these expressions are well-defined).
\end{lemma}
\begin{proof}
Since $f$ is $H$-invariant, it is bintinuous. Since $[f, \chi]$ is a gate, and $f$ is also $K$-invariant, we have $f^k = f, (f^{-1})^k = f^{-1}$, so
\[ [f, \chi]^K = \prod_{k \in K} [f, \chi]^k = \prod_{k \in K} f^{-1} \circ (\chi^{-1})^k \circ f \circ \chi^k \]
is well-defined for sparse enough $K$.
Since $f$ is $H$-invariant and $K \leq H$, for sparse enough $K$, $\chi^k$ commutes with $((\chi^{-1})^{k'})^f$ whenever $k \neq k'$. Namely, we have
\[ ((\chi^{-1})^{k'})^f = f^{-1} k'^{-1} \chi^{-1} k' f = k'^{-1} f^{-1} \chi^{-1} f k' = ((\chi^{-1})^f)^{k'}, \]
and it follows from Lemma~\ref{lem:GatesNormal} that $(\chi^{-1})^f$ admits a strong neighborhood $N'$. If $N$ is the strong neighborhood of $\chi$, it suffices that $Nk \cap N'k' = \emptyset$ for $k \neq k'$, which is just the condition $k'k^{-1} \notin N'^{-1}N$ and holds for sparse enough $K$.
Thus, finite subproducts of $[f, \chi]^K$ can be rearranged to approximations of $[f, \chi^K]$, in the sense that if $F \subset K$ is finite then
\begin{align*}
\prod_{k \in F} f^{-1} \circ (\chi^{-1})^k \circ f \circ \chi^k &= \prod_{k \in F} ((\chi^{-1})^k)^f \circ \chi^k \\
&= \prod_{k \in F} ((\chi^{-1})^k)^f \circ \prod_{k \in F} \chi^k \\
&= [f, \chi^F],
\end{align*}
thus the two infinite products are also the same.
\end{proof}
\begin{lemma}
\label{lem:AnSimplePlus}
For any $n \geq 5$ and $f \in S_n \setminus \{1_{S_n}\}$, the smallest subgroup of $S_n$ containing $f$ and invariant under conjugation by $A_n$ contains $A_n$.
\end{lemma}
\begin{proof}
If $f \in S_n \setminus A_n$ is not the identity permutation, observe that $f$ cannot commute with every $3$-cycle, as it does not preserve the support of every $3$-cycle, and thus we find a nontrivial commutator between $f$ and $f^{\pi}$ for a $3$-cycle $\pi$. Thus we have generated a nontrivial element of $A_n$. If $f \in A_n$, the claim is clear since $A_n$ is simple.
\end{proof}
Theorem~\ref{thm:Main} is obtained as an immediate consequence of the following lemma.
\begin{lemma}
\label{lem:GeneratingEgnets}
Let $X$ be an EFP SFT on a residually finite group $G$, and let $f \in \mathrm{Aut}(X, H)$ be nontrivial where $H \leq G$ is of finite index. Then
\[ \langle f^{\hat{\mathfrak{L}}_{\mathcal{I}}} \rangle = \langle f, \hat{\mathfrak{L}}_{\mathcal{I}} \rangle \]
for any lattice net $\mathcal{I}$ containing $H$.
\end{lemma}
In words, the conclusion is that the smallest subgroup of $\mathrm{Homeo}(X)$ which contains all conjugates of $f$ by gate lattices actually contains all gate lattices.
\begin{proof}
Take an arbitrary nontrivial element $f \in \mathrm{Aut}(X, H)$. By Lemma~\ref{lem:Binice}, $f$ is bintinuous. Since $f$ is nontrivial, by Lemma~\ref{lem:CommuIsNontrivial}, $[f, \chi]$ is a nontrivial gate for some $\chi \in \hat{\mathfrak{G}}$. Writing $\chi_0 = [f, \chi]$, we have by Lemma~\ref{lem:KinCommutator} that $\chi_0^K = [f, \chi^K]$ for any sparse enough $K \in \mathcal{I}$. Since
$[f, \chi^K] = f^{-1} f^{\chi^K}$, we have that
$\langle f^{\hat{\mathfrak{L}}_{\mathcal{I}}} \rangle$ contains these gate lattices $\chi_0^K$.
We next show that from these, one can generate all even gate lattices $\chi^K$ where $K \leq H$, $K \in \mathcal{I}$. Let $R$ be a window for $X$, let $N$ be a strong neighborhood such that $\chi_0$ performs a nontrivial permutation of $X|N$, and large enough so that the cardinality of the latter set is at least $5$. If we pick a large enough $r$, then every $A_{r,R}$-context $P \in X|A_{r,R}$ allows an extension to any pattern in $N$ by EFP. Thus with the strong neighborhood $RB_r$, $\chi_0$ performs a permutation that fixes the $A_{r,R}$-context and acts nontrivially on the $B_r$-continuation for any context.
In a fixed such context $P \in X|A_{r,R}$ we can represent any even permutation of its extension patterns to $RB_r$ (i.e.\ the set $\mathcal{F}(P, RB_r )$) as a composition of conjugates of $\chi_0$ by commutators with even permutations that fix the context $P$, by Lemma~\ref{lem:AnSimplePlus}. Composing these representations over all contexts, we can represent any even gate as a commutator expression involving only $\hat{\mathfrak{G}}$-conjugates of $\chi_0$. Doing this simultaneously on all positions of a sparse enough finite-index subgroup $K \in \mathcal{I}$, we obtain for any even gate $\chi$ all elements of the form $\chi^K$ for all sparse enough $K \in \mathcal{I}$.
Consider now $K \leq H$, $K \in \mathcal{I}$ and suppose $\chi$ is an even gate and $K$ is sparse enough so that $\chi^K$ can be built with the above construction. Observe that since $f \in \mathrm{Aut}(X, H)$ we have $f^h = f$ for all $h \in H$, and thus if we conjugate the expression for $\chi^K$ (which is a composition of conjugates of $f$ by elements of $\hat{\mathfrak{L}}_{\mathcal{I}}$), the $h$-translation only affects the $\hat{\mathfrak{L}}_{\mathcal{I}}$-elements, so actually we obtain $\chi^{Kh}$ for cosets of $K$ in $H$.
To say the same in formulas, if
\[ \chi^K = \prod_i f^{\chi_i^{K_i}} \]
(note that this is a finite ordered product; actually $K_i = K$ for all $i$ in our construction, but this is immaterial) then using Lemma~\ref{lem:LatticeBase}
we have
\begin{align*}
\prod_i f^{\chi_i^{K_ih}} &= \prod_i (\chi_i^{-1})^{K_ih} \circ f \circ \chi_i^{K_ih} \\
&= \prod_i ((\chi_i^{-1})^{K_i})^h \circ f^h \circ (\chi_i^{K_i})^h \\
&= \prod_i (f^{\chi_i^{K_i}})^h \\
&= (\prod_i f^{\chi_i^{K_i}})^h \\
&= (\chi^K)^h = \chi^{Kh}.
\end{align*}
Consider next an arbitrary $L \leq H$, $L \in \mathcal{I}$, where again $\chi$ is an even gate and $\chi^L$ commutes. Let $K \leq L, K \in \mathcal{I}$ be any sparse enough subgroup such that $\chi^K$ is generated by $f^{\hat{\mathfrak{L}}_{\mathcal{I}}}$. Then by the two previous paragraphs, $\chi^{Kh}$ can also be built, where $h$ runs over right coset representatives for $K$ in $L$. By Lemma~\ref{lem:Decomposition}, we can build $\chi^L$. (To recall the argument, since $\chi^K$ commutes, the composition of the $\chi^{Kh}$ over coset representatives is precisely $\chi^L$.) We conclude that $\langle f^{\hat{\mathfrak{L}}_{\mathcal{I}}} \rangle$ contains every gate lattice $\chi^{Lh}$ where $\chi$ is even, $L \leq H$ and $h \in H$ is arbitrary.
Next, consider an arbitrary commuting $\chi^L$ for $L \in \mathcal{I}$, where not necessarily $L \leq H$. By what we already showed, $\chi^K$ is in $\langle f^{\hat{\mathfrak{L}}_{\mathcal{I}}} \rangle$, where $K \leq L \cap H, K \triangleleft G, K \in \mathcal{I}$ is arbitrary. Now conjugating the situation with some $g \in G$, we transform the original map $f$ into some $f^g$, which is still a nontrivial homeomorphism that now commutes with $H^g$, and applying the entire discussion to it (observing $K \leq H^g$ by $K \leq H$ and normality), conjugates of $f^g$ by elements of $\hat{\mathfrak{L}}_{\mathcal{I}}$ also generate the same gate lattice $\chi^K$.
For some $\chi_i, K_i$ we now have
\[ \chi^K = \prod_i (f^g)^{\chi_i^{K_i}}, \]
and we can continue with
\begin{align*}
\chi^K = \prod_i(f^g)^{\chi_i^{K_i}} &= \prod_i(\chi_i^{-1})^{K_i} \circ g^{-1} \circ f \circ g \circ \chi_i^{K_i} \\
&= \prod_i {g^{-1}} \circ ((\chi_i^{-1})^{K_i})^{g^{-1}} \circ f \circ (\chi_i^{K_i})^{g^{-1}} \circ g \\
&= (\prod_i (\chi_i^{-1})^{K_i g^{-1}} \circ f \circ \chi_i^{K_i g^{-1}})^g.
\end{align*}
Conjugating both sides by $g^{-1}$, we see that $\langle f^{\hat{\mathfrak{L}}_{\mathcal{I}}} \rangle$ contains the element $(\chi^K)^{g^{-1}} = \chi^{Kg^{-1}}$ for arbitrary $g \in G$. In particular by composing these with $g^{-1} = t h$ where $t$ ranges over right coset representatives of $K$ in $L$, we obtain precisely $\chi^{Lh}$.
\end{proof}
\begin{theorem}
\label{thm:MainProof}
Let $X$ be an EFP SFT on a residually finite group $G$. Then the group $\hat{\mathfrak{L}}_{\mathcal{I}}$ is simple.
\end{theorem}
\begin{proof}
If $G$ is finite, this is trivial as $\hat{\mathfrak{L}}_{\mathcal{I}}$ is just the alternating group. Otherwise, let $f \in \hat{\mathfrak{L}}_{\mathcal{I}}$ be arbitrary, i.e.\ $f$ is a product of some elements $(\chi_i)^{H_i g_i}$ where the $\chi_i$ are even gates. Picking any normal subgroup $H \in \mathcal{I}$ of $G$ contained in the intersection of the $H_i$, Lemma~\ref{lem:Decomposition} shows that $f$ is a product of gates of the form $(\chi_i)^{H g_i}$ with $H$ normal. By Lemma~\ref{lem:NormalAuto}, $f$ is an automorphism for the $H$-action. By Lemma~\ref{lem:GeneratingEgnets},
\[ \langle f^{\hat{\mathfrak{L}}_{\mathcal{I}}} \rangle = \langle f, \hat{\mathfrak{L}}_{\mathcal{I}} \rangle \]
so the smallest normal subgroup of $\hat{\mathfrak{L}}_{\mathcal{I}}$ containing $f$ is all of $\hat{\mathfrak{L}}_{\mathcal{I}}$. This concludes the proof of simplicity.
\end{proof}
\begin{lemma}
\label{lem:StabUnderAut}
Let $X$ be a subshift on any residually finite group $G$. Then $\mathfrak{L}$ is a normal subgroup of the stabilized automorphism group.
\end{lemma}
\begin{proof}
For finite $G$ this is trivial, since $\mathfrak{L}$ is the entire symmetric group on $X$. It suffices to show $(\chi^{Kg})^f \in \mathfrak{L}$ whenever $f \in \mathrm{Aut}(X,H)$ for finite-index $H$ and $K$ is a sparse enough normal finite-index subgroup of $H$.\footnote{The fact we have to refine with $H$ here is the reason why $\mathfrak{L}_{\mathcal{I}}$ is not obviously normal for general $\mathcal{I}$ -- of course if it is normal then by simplicity of $\mathfrak{L}$ we have $\mathfrak{L} = \mathfrak{L}_{\mathcal{I}}$.} We calculate
\begin{align*}
(\chi^{Kg})^f &= (\prod_{k \in K} \chi^{kg})^f \\
&= \prod_{k \in K} (\chi^{kg})^f \\
&= \prod_{k \in K} f^{-1}h^{-1}t^{-1} k^{-1} \chi kthf \\
&= \prod_{k \in K} h^{-1}f^{-1}t^{-1} k^{-1} \chi ktfh \\
&= \prod_{k \in K} h^{-1}t^{-1}f'^{-1} k^{-1} \chi kf'th \\
&= \prod_{k \in K} h^{-1}t^{-1} k^{-1} f'^{-1} \chi f' kth \\
&= (\chi^{f'})^{Kg}
\end{align*}
Here, the second equality follows from Lemma~\ref{lem:UniformComposition} and commutation of $(\chi^{kg})^f$ for different $k \in K$, where the latter in turn follows from the Curtis-Hedlund-Lyndon theorem (similarly as in the proof of Lemma~\ref{lem:GatesNormal}) and sufficient sparseness of $K$. The third equality follows by opening up the conjugations and letting $g = th$ where $h \in H$ and $t$ is one of finitely many left coset representatives for $H$. The fourth uses $f^h = f$. The fifth defines $f' = f^{t^{-1}}$. The sixth uses that $f'$ is an automorphism of $(X, H^{t^{-1}})$, and $K \leq H^{t^{-1}}$ because $K$ is normal in $G$ and $K \leq H$. The last step rewraps the definition; here for the commutation of the product we note that $f'$ is one of $[G : H]$ many automorphisms of subshifts, namely subshifts $(X, H^{t^{-1}})$, thus satisfies the Curtis-Hedlund-Lyndon theorem; we simply initially take $K$ sparse enough for these finitely many maps.
We have shown that conjugating even gate lattices by $f \in \mathrm{Aut}(X,H)$ produces gate lattices, and by the assumption $\hat{\mathfrak{L}} = \mathfrak{L}$ we conclude that $\hat{\mathfrak{L}}$ is normal.
\end{proof}
\begin{example}
Even if $\chi$ is even, the $\chi^{f'}$ appearing in the above formula need not be even, or at least one needs to be careful about the choice of the strong neighborhood. Namely, pick $G = \mathbb{Z}$, and consider $\chi$ and $\chi^{\sigma}$. Of course, they have the same parity if computed with different strong neighborhoods (and therefore with any large enough strong neighborhood both are even), but we show that it is sometimes possible to find a strong neighborhood that works for both, but is even for exactly one of them.
For this, pick $M = \left(\begin{smallmatrix} 1 & 1 \\ 1 & 1 \end{smallmatrix}\right)$ and let $X$ be the corresponding edge shift, i.e.\ at each $n \in \mathbb{Z}+\frac12$ we have a vertex, and at each $n \in \mathbb{Z}$ we have an edge. Let us call the vertices $\{a, b\}$, and note that $M$ simply says we have a unique edges between each pair of vertices. Now pick $\chi$ the unique gate with strong neighborhood $[0,1]$ (so $\chi$ sees two edges) that permutes the word nontrivially if and only if the vertex at $-\frac12$ is $a$. In cycle notation, the action on vertices at $(-\frac12, \frac12, \frac32)$ is
\[ (aaa \; aba)(aab \; abb)(baa)(bab)(bba)(bbb). \]
Note that $\chi^{\sigma}$ performs the same modification on the word (consisting of edges) appearing in $[1,2]$. Now consider these with strong neighborhood $[0,2]$. Obviously $\chi$ is now an even gate, as in any context where the vertex at $-\frac12$ is $a$ we perform two swaps, and in any context where it is $b$, we do nothing. On the other hand, $\chi^{\sigma}$ always performs an odd permutation, since to get a nontrivial action we must pick the vertex at $\frac12$ to be $a$. \qee
\end{example}
Theorem~\ref{thm:MainProof}, combined with the above lemma, immediately gives the following.
\begin{theorem}
\label{thm:MainMaximal}
Let $X$ be an EFP SFT on a residually finite group $G$. If $\hat{\mathfrak{L}} = \mathfrak{L}$, it is a maximal simple subgroup of the stabilized automorphism group of $X$.
\end{theorem}
\bibliographystyle{plain}
\section*{Abstract}
Nowadays, processing of Big Security Data, such as log messages, is commonly used for intrusion detection purposes. Its heterogeneous nature, as well as the combination of numerical and categorical attributes, does not allow the existing data mining methods to be applied directly on the data without feature preprocessing. Therefore, a rather computationally expensive conversion of categorical attributes into a vector space has to be utilised for the analysis of such data. However, the well-known k-modes algorithm makes it possible to cluster categorical data directly and avoid the conversion into the vector space. The existing implementations of k-modes for Big Data processing are ensemble-based and utilise two-step clustering, where data subsets are first clustered independently, whereas the resulting cluster modes are clustered again in order to calculate \textit{metamodes} valid for all data subsets. In this paper, a novel frequency-based distance function is proposed for the second step of ensemble-based k-modes clustering. Besides this, the existing feature discretisation method from the previous work is utilised in order to adapt k-modes for processing of mixed data sets. The resulting k-metamodes algorithm was tested on two public security data sets and reached higher effectiveness in comparison with the previous work.
\section*{Introduction}
Nowadays outlier detection algorithms are widely used in different areas, such as banking, insurance, medicine and of course security. In the latter area, security experts apply outlier detection methods in order to identify intrusions or other types of malicious behaviour, which should deviate from ``normal'' patterns in the data. This intrusion detection scenario has several characteristics that make the application of outlier detection methods especially complicated. First of all, intrusion detection often has to be performed in the absence of ground truth. Indeed, even for security experts it would be hardly possible to guarantee that the collected monitoring data do not contain any traces of malicious behaviour. This implies that only more sophisticated unsupervised outlier detection methods can normally be used for processing of such data. Next, the heterogeneous nature of Big Security Data (which are collected from a variety of devices, operating systems and services) makes it hard to stick to a specific set of numerical features or metrics. Rather, each data feed has its own fields/format, which requires either conversion to a common log format \cite{OLF} or application of generic outlier detection methods applicable on data without feature preparation \cite{Sapegin2017, thesis}. Finally, security data, such as log messages or network traffic (e.g., TCP dumps or derived features), normally contain textual data (username, action, domain name, protocol, etc.) as well as numerical data of different measures and distribution types, e.g.\ bytes transferred and number of connections per hour. Therefore, most existing data mining algorithms cannot be directly applied on such mixed data.
Rather, most existing methods, including classic clustering-based approaches, require discretisation of continuous numerical data into categories \cite{Garcia2013, Sapegin2017} and conversion of textual data into a vector space model in order to apply a distance function to the data, which is needed to build clusters and find outliers.
Luckily, the textual data of security log messages actually represent a limited number of categories and not free natural text. So instead of applying techniques such as TF-IDF in order to convert log messages into a vector space model, a direct conversion using one-hot encoding --- where each unique field value becomes a separate column/dimension --- is normally performed on the data\footnote{Since each term appears only once in the log line, calculation of term frequencies does not bring any benefits.}. Of course, in a large enterprise network covering data of, for example, 100,000 employees, a username column alone will require 100,000 dimensions in the vector space representation, whereas the full vector space can contain 500--600 thousand dimensions/columns. Although special programming language structures and algorithms (such as sparse matrices and spherical k-means clustering) allow processing of high-dimensional vector space models \cite{Sapegin2017,thesis}, such conversion still affects performance and imposes higher requirements on RAM usage.
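To make the dimensionality growth concrete, the following minimal Python sketch (with hypothetical field names, not taken from any particular log format) builds such a one-hot vector space incrementally, storing for each record only the indices of its non-zero dimensions, as a sparse representation would:

```python
def one_hot_dimensions(records):
    # Every unique (field, value) pair becomes a separate dimension of the
    # vector space model; each record is stored sparsely as the sorted list
    # of indices of its non-zero dimensions.
    dims = {}
    vectors = []
    for rec in records:
        idx = []
        for field, value in rec.items():
            key = (field, value)
            if key not in dims:
                dims[key] = len(dims)  # allocate the next dimension
            idx.append(dims[key])
        vectors.append(sorted(idx))
    return dims, vectors
```

With 100,000 distinct usernames, the hypothetical `user` field alone would contribute 100,000 dimensions, which is why sparse structures become necessary.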
Therefore, intrusion detection for Big Security Data may benefit from utilisation of generic clustering-based outlier detection that does not require data conversion into a vector space model. The most well-known algorithm for this task is k-modes, which by default uses the hamming distance (``simple matching similarity'') and defines modes (``cluster centers'') as the set of the most frequent categories, one per feature/attribute. Since k-modes was originally proposed by Huang et al.\ in 1998 \cite{Huang1998}, many researchers have worked on various improvements of this algorithm.
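As an illustration, the basic k-modes iteration with hamming distance can be sketched in a few lines of Python (a simplified sketch where initial modes are passed in by the caller; production implementations additionally handle mode initialisation strategies and other corner cases):

```python
from collections import Counter

def hamming(record, mode):
    # Simple matching dissimilarity: the number of mismatching attributes.
    return sum(1 for a, b in zip(record, mode) if a != b)

def compute_mode(cluster):
    # Mode = the most frequent category per attribute position.
    return tuple(Counter(col).most_common(1)[0][0] for col in zip(*cluster))

def k_modes(records, modes, iters=10):
    # Alternate between assigning records to the nearest mode and
    # recomputing the modes from the resulting clusters.
    for _ in range(iters):
        clusters = [[] for _ in modes]
        for r in records:
            idx = min(range(len(modes)), key=lambda i: hamming(r, modes[i]))
            clusters[idx].append(r)
        modes = [compute_mode(c) if c else m for c, m in zip(clusters, modes)]
    return modes, clusters
```

Note that records are compared category by category, without any conversion into a vector space.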
In 2004, San et al. elaborated on the problem of non-unique cluster modes in k-modes, which ``makes the algorithm unstable depending on mode selection during the clustering process'' \cite{San2004}.
The authors propose to replace modes with ``cluster representatives'', each of which is a fuzzy set with all possible categorical values for each attribute from the cluster, where each value is characterised by its relative (to the mode/representative) frequency. San et al. also propose a new distance function, where the distance between an object and a mode/representative is calculated based on the relative frequencies of each attribute value in the object.
A similar idea was proposed in 2005 by He et al., whose dissimilarity function also takes the relative frequency of the attribute value into account, but only if the most frequent attribute category in the mode is equal to the attribute value in the object \cite{He2005}.
Both San et al. and He et al. demonstrated that frequency-based distance functions make it possible to achieve higher clustering accuracy. Following this, Ng et al. in 2007 provided a formal proof of the effectiveness of k-modes with a frequency-based dissimilarity measure, as well as confirmed its guaranteed convergence \cite{Ng2007}. Next, Cao et al. proposed an improved dissimilarity measure based on rough set theory \cite{Cao2012}. This novel dissimilarity measure (1) takes into account ``the distribution of attribute values over the whole universe''\footnote{The distance function is based on a similarity measure that takes into account the number of equivalence classes in the whole data set with respect to the attribute value being compared.}, (2) has higher effectiveness (on selected biological and genetic data sets), (3) eliminates some border cases when object assignment to the mode is undetermined and (4) also guarantees convergence of k-modes. Besides that, the authors showed both the effectiveness and efficiency of k-modes on large data sets.
A more detailed review of various modifications of the k-modes algorithm was provided by Goyal and Aggrawal \cite{KModesReview}. Their review covers not only improvements of distance functions, but also related work on the initialisation of modes and the automatic selection of the parameter k.
Finally, when distributed computing became widely used, Visalakshi et al. proposed one more important modification of the k-modes algorithm, namely an ensemble-based distributed version of k-modes, in 2015 \cite{KarthikeyaniVisalakshi2015a}. Under this approach, the data are divided into subsets and these subsets are clustered on different nodes using k-modes. Next, all modes from all subsets are collected by the master node and undergo one more iteration of k-modes clustering in order to calculate global clusters. The authors claim equal or better performance and cluster quality in comparison with both non-distributed k-modes and the classic distributed k-means algorithm \cite{DKM} (they utilised label/integer encoding in order to apply k-means on categorical data).
However, in the distributed version of k-modes the second step of the algorithm still requires optimisation. Indeed, after all modes from all subsets are collected on the master node, one needs to calculate distances between (1) pairs of modes (from the first step of the algorithm), as well as between (2) modes and ``modes of modes'' (which will be called \textit{metamodes} in this paper). Even if the modes themselves were calculated taking the relative frequencies of the attribute values into account, there is no existing distance function that can calculate the dissimilarity between two modes/representatives containing relative frequencies for all possible attribute values in the cluster.
In this paper we review the distributed k-modes algorithm and propose a novel distance function for the clustering of modes (cluster representatives containing frequencies of all possible categorical values for each attribute over the cluster objects). We utilise the new distance function for clustering of modes in the second step of the distributed k-modes algorithm. We also show that the resulting \textit{metamodes} represent global cluster centers more effectively than in the case when attribute frequencies are discarded after the first step of the algorithm. Besides that, we combine distributed k-metamodes with discretisation of numeric data, which makes it possible to apply this clustering method on numeric or mixed (containing both numerical and categorical features) data sets. The resulting algorithm is compared with Hybrid Outlier Detection from related work \cite{Sapegin2017,thesis} and shows similar effectiveness while avoiding the computationally expensive conversion into a vector space model.
The rest of the paper is organised as follows. In Section \ref{sec:kmodes} we describe the existing incremental distributed k-modes algorithm. Next, Section \ref{sec:frequency} proposes the novel frequency-based distance function for clustering of modes/representatives. The effectiveness of k-metamodes with the new distance function is evaluated in Section \ref{sec:evaluation}, which also contains a comparison with the existing Hybrid Outlier Detection algorithm. Finally, Section \ref{sec:conclusion} concludes the paper.
\section{Ensemble-based incremental distributed k-modes}\label{sec:kmodes}
Since k-modes is based on the k-means algorithm, it tries to solve the same optimisation problem, namely how to partition a set of objects\footnote{For k-means, the set should contain numeric objects only.} $S = \{X_1,X_2,\ldots,X_n\}$ into $k$ clusters. Formally, this problem P is described as follows \cite{selim1984}:
\begin{equation} \label{eq:1}
\text{Minimise } P(W,Q) = \sum_{l=1}^{k} \sum_{i=1}^{n} w_{i,l}d(X_i,Q_l)
\end{equation}
subject to
\begin{equation} \label{eq:2}
\sum_{l=1}^k w_{i,l} = 1, 1 \leq i \leq n,
\end{equation}
\begin{equation} \label{eq:3}
w_{i,l} \in \{0,1\}, 1 \leq i \leq n, 1 \leq l \leq k,
\end{equation}
where Q is a set of modes\footnote{For k-means, Q is a set of cluster centers.}, $W = [w_{i,l}]$ is an $n \times k$ partition matrix, and $d(X_i,Q_l)$ is the distance function between object and mode.
The original k-modes differs from k-means only in the definition of the cluster center (which is replaced with a mode) and the distance function. Therefore, for both algorithms the problem P can be solved by repeating the following steps until P converges to a local minimum \cite{Huang1998}:
\begin{itemize}
\item \textbf{step 1:} Fix Q and solve P through finding optimal W. Here the set of modes is fixed and for each object the best mode is identified through calculating the distance between object and mode.
\item \textbf{step 2:} Fix W and solve P through finding optimal Q. Here the modes are recalculated based on the object reassignments from step 1.
\end{itemize}
Thus, in order to identify clusters with k-modes, a distance function between an object and a mode should be defined\footnote{The same distance function can be used to find the distance between two objects, since this is a border case where the mode is based on one object only.}. The most basic Hamming distance can be formally defined as:
\begin{equation} \label{eq:4}
d(X_i,Q_l) = \sum_{j=1}^m \delta(x_{i,j},q_{l,j})
\end{equation}
where
\begin{equation} \label{eq:5}
\delta(x_{i,j},q_{l,j}) = \begin{cases} 0 & \text{if } x_{i,j}=q_{l,j}, \\ 1 & \text{if } x_{i,j} \neq q_{l,j}
\end{cases}
\end{equation}
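The two alternating steps and the Hamming distance above can be sketched as follows (a minimal illustration assuming dense list-of-lists records and a fixed iteration budget instead of a convergence test; all helper names are our own):

```python
# Minimal k-modes sketch: Hamming distance of formula (5), Huang's mode
# definition, and the two alternating optimisation steps described above.
from collections import Counter

def hamming(x, mode):
    """Formula (4)-(5): count of attributes where object and mode differ."""
    return sum(1 for xi, qi in zip(x, mode) if xi != qi)

def compute_mode(cluster):
    """Mode = most frequent category per attribute (Huang's definition)."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*cluster)]

def k_modes(objects, modes, iterations=10):
    for _ in range(iterations):
        # Step 1: fix Q, assign each object to its nearest mode.
        clusters = [[] for _ in modes]
        for x in objects:
            best = min(range(len(modes)), key=lambda l: hamming(x, modes[l]))
            clusters[best].append(x)
        # Step 2: fix W, recompute modes from the new assignments.
        modes = [compute_mode(c) if c else m for c, m in zip(clusters, modes)]
    return modes, clusters
```

In practice the initial modes are typically distinct objects sampled from the data, and the loop stops once assignments no longer change.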
Of course, in the case of frequency-based modes/representatives, both the mode and the distance function should be redefined. According to San et al., the frequency-based mode\footnote{Hereafter we will always use the notion ``mode'' when talking about both cluster modes and cluster representatives.} is defined as follows \cite{San2004}:
\begin{equation} \label{eq:6}
Q_l = \{q_{l,1},q_{l,2},...,q_{l,m}\}
\end{equation}
where
\begin{equation} \label{eq:7}
q_{l,j} = \{(c_j,f_{c_j}) | c_j \in V_j\}
\end{equation}
where $V_j$ is the set of all possible values of attribute $j$ among all objects in the cluster $S'_l$ with mode $Q_l$. Let us also define $q'_{l,j}=c'_j$, where $c'_j$ is the most frequent attribute value in $V_j$, so that $f_{c'_j} \geq f_{c_p}$ for all $c_p \in V_j$ \cite{He2005}.
When modes are calculated based on attribute frequencies, a frequency-based distance function can be applied to calculate the distance between an object and a mode; formally, formula \ref{eq:5} should be replaced with:
\begin{equation} \label{eq:8}
\delta(x_{i,j},q_{l,j}) = \begin{cases} 1 - f_{c_j}(c_j=q'_{l,j}|S'_l) & \text{if } x_{i,j}=q'_{l,j}, \\ 1 & \text{if } x_{i,j} \neq q'_{l,j}
\end{cases}
\end{equation}
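A minimal sketch of the frequency-based mode (formulas \ref{eq:6}--\ref{eq:7}) and the distance of formula \ref{eq:8} may look as follows; the dictionary-based representation and helper names are our own:

```python
# Hedged sketch of frequency-based modes and the distance of formula (8):
# each mode attribute stores the relative frequencies of all values
# observed in the cluster.
from collections import Counter

def frequency_mode(cluster):
    """Per attribute: dict of value -> relative frequency in the cluster."""
    n = len(cluster)
    return [{v: c / n for v, c in Counter(col).items()} for col in zip(*cluster)]

def frequency_distance(x, mode):
    """Formula (8): 1 - f(q') if x_j equals the most frequent value, else 1."""
    d = 0.0
    for xj, qj in zip(x, mode):
        top = max(qj, key=qj.get)          # q'_{l,j}, the most frequent value
        d += 1 - qj[top] if xj == top else 1
    return d
```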
In the ensemble-based k-modes (proposed by Visalakshi et al. in \cite{KarthikeyaniVisalakshi2015a}), this distance function can be applied at the first step of the ensemble-based clustering, where each data subset is clustered independently by an ensemble member (a k-modes algorithm instance). Nevertheless, the next step of the proposed ensemble-based clustering expects clustering of the modes themselves, i.e. it applies k-modes on the collection of modes from all ensemble members. In this case the existing frequency-based distance function based on formula \ref{eq:8} cannot be applied to calculate the distance between two modes while taking into account the attribute frequencies calculated at the first step of the algorithm. The only existing solution is to discard the previous attribute frequencies from the distance calculation and treat modes as usual objects\footnote{In order to convert a mode to an object, the classical definition of the mode by Huang can be used, i.e. the mode can be converted back to $q'$, the set of the most frequent values for each attribute.}. However, this approach might be less effective in comparison to a frequency-based distance function.
Therefore, in this paper we propose a novel frequency-based distance function for the clustering of modes at the second step of the ensemble-based k-modes algorithm. The proposed distance function is described in detail in the section below.
\section{Frequency-based distance function for calculation of metamodes}\label{sec:frequency}
Similarly to the distance function from formula \ref{eq:4}, the distance function for clustering of modes should be able to calculate the distance between a clustering object (mode) and a cluster center (metamode). While the mode is already defined by formula \ref{eq:7}, the metamode can be defined as the set of all attribute frequencies over all objects of all modes in the meta-cluster. Formally,
\begin{equation}\label{eq:9}
\text{Metamode } Z_t = \{z_{t,1},z_{t,2},...,z_{t,m}\}
\end{equation}
where
\begin{equation}\label{eq:10}
z_{t,j} = \{(c_j,f_{c_j})|c_j \in V'_j\}
\end{equation}
where $V'_j$ is the set of all possible values of attribute $j$ among all objects in all clusters $S'$ with modes $Q'$ belonging to metamode $Z_t$. In order to be able to calculate the frequencies of the metamode attributes, it is necessary to keep the original counts (and not the relative frequencies) of attribute values in the mode, since relative frequencies are scaled to the cluster size and cannot be aggregated over clusters of different sizes. Formally, we redefine the mode so that formula \ref{eq:7} becomes:
\begin{equation} \label{eq:11}
q''_{l,j} = \{(c_j,f'_{c_j}) | c_j \in V_j\}
\end{equation}
where $f'_{c_j}$ is the number of occurrences of $c_j$ as a value of attribute $j$ in the cluster $S'_l$ with mode $Q_l$. Here we note that the distance between an object and a mode can still be calculated using formula \ref{eq:8}, since it is easy to calculate $f_{c_j}$ from $f'_{c_j}$ and, correspondingly, $q_{l,j}$ from $q''_{l,j}$:
\begin{equation} \label{eq:12}
(f_{c_j} | c_j \in V_j) = f'_{c_j} / n',
\end{equation}
where $n'$ is the number of objects $X$ in the cluster $S'_l$.
Thus, both modes and metamodes become fuzzy sets containing counts for each possible attribute value over all objects in the cluster and meta-cluster respectively, with $V_j \subseteq V'_j$. This allows us to define a frequency-based distance function for the distance between two modes (or between a mode and a metamode) as the sum of Euclidean distances over the attributes:
\begin{equation} \label{eq:13}
d(Q_l,Z_t) = \sum_{j=1}^m \sqrt{\sum_{c_j \in V'_j} \left(f_{c_j}(Q_l) - f_{c_j}(Z_t)\right)^2}
\end{equation}
where $f_{c_j}(Q_l)$ and $f_{c_j}(Z_t)$ denote the relative frequency of the value $c_j$ of attribute $j$ in the mode $Q_l$ and metamode $Z_t$ respectively, with $f_{c_j}=0$ whenever $c_j$ does not occur. In order to differentiate it from \ref{eq:8}, we will call it the \textit{meta-frequency-based} distance function in this paper.
Please note that in order to calculate both metamode and distance between mode and metamode, we use both $q_{l,j}$ (fuzzy set of attribute frequencies) and $q''_{l,j}$ (fuzzy set of attribute counts), although $q_{l,j}$ can be calculated from $q''_{l,j}$ on the fly.
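A compact sketch of the metamode construction (counts kept as in formula \ref{eq:11}, relative frequencies derived on the fly as in formula \ref{eq:12}) and of a per-attribute Euclidean distance between the frequency sets may look as follows; the representation and helper names are our own:

```python
# Hedged sketch of metamodes: modes keep raw counts f' per attribute, the
# metamode sums those counts over its meta-cluster, and the distance is a
# sum of per-attribute Euclidean distances between relative frequencies.
from math import sqrt

def count_mode(cluster):
    """q'': per attribute, dict of value -> count f' in the cluster."""
    mode = []
    for col in zip(*cluster):
        counts = {}
        for v in col:
            counts[v] = counts.get(v, 0) + 1
        mode.append(counts)
    return mode, len(cluster)

def merge_metamode(modes_with_sizes):
    """Metamode = summed counts over all modes in the meta-cluster."""
    meta, n = [], 0
    for mode, size in modes_with_sizes:
        n += size
        for j, counts in enumerate(mode):
            if j >= len(meta):
                meta.append({})
            for v, c in counts.items():
                meta[j][v] = meta[j].get(v, 0) + c
    return meta, n

def meta_distance(mode_a, n_a, mode_b, n_b):
    """Sum over attributes of Euclidean distance between frequency vectors."""
    d = 0.0
    for qa, qb in zip(mode_a, mode_b):
        fa = {v: c / n_a for v, c in qa.items()}   # formula (12)
        fb = {v: c / n_b for v, c in qb.items()}
        values = set(fa) | set(fb)
        d += sqrt(sum((fa.get(v, 0) - fb.get(v, 0)) ** 2 for v in values))
    return d
```

Keeping counts rather than frequencies is what makes the merge step valid: two clusters of different sizes contribute to the metamode proportionally to their actual number of objects.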
The new distance function makes it possible to take the attribute frequencies in the modes into account when calculating the distance to a metamode. This approach should be more effective than converting the mode back to $q'$, i.e. discarding the attribute frequencies and treating the mode as an object, which allows applying the distance functions from the previous work \cite{Huang1998, San2004, He2005, Ng2007, Cao2012} mentioned in the introduction and given as formulas \ref{eq:5} and \ref{eq:8}. The effectiveness of k-metamodes with the proposed distance function is evaluated on two data sets as described in the next section.
\section{Evaluation of k-metamodes on public KDD Cup 1999 and UNSW-NB15 network data sets}\label{sec:evaluation}
The KDD Cup 1999 data set is the most popular security data set for the evaluation of machine learning and data mining algorithms \cite{KDD99CUP}. This data set contains both numerical and categorical features, which makes it a perfect example of the data that SIEM and IDS systems need to process. However, this data set is already 20 years old and has a very high (80\%) attack ratio \cite[Section~4.3.2]{thesis}, which makes only the first 400,000 records, with an attack ratio of 9.8\%, suitable for the evaluation of unsupervised outlier detection algorithms\footnote{With a high attack ratio, such as 80\%, unsupervised outlier detection tends to learn attacks as ``normal'' behaviour and therefore is not suitable for processing of such data.}.
In 2015, Moustafa et al. proposed a newer data set that also has a lower attack ratio \cite{nb15a,nb15b}. The UNSW-NB15 data set covers two simulation periods: (1) 16 hours on Jan 22, 2015 and (2) 15 hours on Feb 17, 2015. During these periods, the raw traffic was collected and later converted with the help of Argus \cite{Argus} and Bro \cite{Bro} into a higher-level set of features (similar to the KDD Cup 1999 data set) available as CSV files. Although all types of data (raw traffic in pcap format, Bro files, Argus files and CSV) are available for download, we use the CSV files for the evaluation of k-metamodes. In total, there are 4 CSV files covering both time periods, as shown in Figure \ref{fig:nb15overview}.
\begin{figure}[htb!]
\centering
\includegraphics[angle=-90,width=\linewidth]{anomalies_distribution_all_data.png}
\centering
\caption{Overview of UNSW-NB15 data set}
\label{fig:nb15overview}
\end{figure}
Figure \ref{fig:nb15overview} shows the rates of normal and attack records per second for both time periods and all 4 CSV files. The simulation periods are recognisable through the gap in the middle of the figure, whereas the red lines show the borders between CSV files. The second time period has a much higher attack ratio (10 attacks per second, or 25.92\% of the data), while the first time period is more suitable for the evaluation of unsupervised outlier detection algorithms (1 attack per second, or 2.09\% of the data). Therefore, we only apply k-metamodes on the data from the 16 hours of simulation on Jan 22, 2015. In total, this part contains 1,087,203 records, 22,215 of which are related to malicious activity, as listed in Table \ref{tab:attacks}.
\begin{table}[htbp!]
\caption{Number of records per attack category}
\begin{center}
\begin{small}
\begin{tabular}{|>{\centering}c|c<{\centering}|}
\hline
Attack category & Number of records \\ \hline
Analysis & 526 \\ \hline
Backdoors & 534 \\ \hline
DoS & 1167 \\ \hline
Exploits & 5409 \\ \hline
Fuzzers & 5051 \\ \hline
Generic & 7522 \\ \hline
Reconnaissance & 1759 \\ \hline
Shellcode & 223 \\ \hline
Worms & 24 \\ \hline
\end{tabular}
\end{small}
\end{center}
\label{tab:attacks}
\end{table}
The k-metamodes outlier detection algorithm was applied on both of the described data sets.
In order to apply it on the KDD Cup 1999 data, the data were discretised according to previous work \cite[Section~4.3.2]{thesis}. Due to the fact that k-modes is able to process only discretised categorical values, advanced discretisation --- which produces continuous numerical values in the range between 0 and 1 --- cannot be applied on the data. Therefore, we utilised simple discretisation, a sample size of 10,000 records\footnote{This sample size was selected to be the same as in \cite[Section~4.3.2]{thesis} in order to be able to compare results with the previous work.} and $k=22$ (the number of modes per sample), which is the optimal k value for these data and this discretisation type (please see \cite[Section~4.3.6]{thesis} for details). Each sample was selected using random sampling without replacement, since without random sampling some of the KDD Cup data subsets (10,000 records each) would not have enough unique records to initialise 22 modes. Next, the frequency-based distance function from formula \ref{eq:8} was used on both steps of the ensemble-based k-modes clustering. As mentioned in Section \ref{sec:kmodes}, at the second step of the clustering, attribute frequencies were discarded in order to apply the distance function from formula \ref{eq:8} for the calculation of distances between modes and metamodes. Besides that, for the second step of the clustering, we kept $k'=22$ (the number of metamodes). The resulting AUC (Area Under Curve) values for different numbers of samples are shown in the figure below.
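As an illustration only, a simple discretisation step could look like the following sketch; we assume equal-width binning into a fixed number of categories here, while the exact scheme used in \cite[Section~4.3.2]{thesis} may differ:

```python
# Hedged sketch of simple discretisation (assumption: equal-width binning;
# the scheme in the cited previous work may differ). Converts a numeric
# column into categorical labels that k-modes can process.
def discretise(values, bins=10):
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1          # guard against a constant column
    return [f"bin_{min(int((v - lo) / width), bins - 1)}" for v in values]
```

After this step, every numeric feature behaves like one more categorical attribute, so the Hamming or frequency-based distance applies uniformly to the whole record.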
\begin{figure}[htb!]
\centering
\includegraphics[angle=-90,width=0.5\linewidth]{KDD_all_aucs_freq_freq_s10k_k22_k22.png}
\centering
\caption{AUC values for KDD Cup 1999 data with the different number of training samples, sample size 10,000, k=22, k'=22, frequency-based distance function, distance to all metamodes as outlier score.}
\label{fig:KDDCupAUC}
\end{figure}
In Figure \ref{fig:KDDCupAUC}, the AUC value does not show any dependency on the number of training samples, which supports the claim that training on subsamples of the data does not decrease the quality of outlier detection \cite[Section~3.2]{subsampling}. Rather, k-metamodes trained on just 4 samples of 10,000 records each (10\% of the data set altogether) achieves the same AUC value as k-metamodes trained on the full data (40 samples).
However, another factor --- namely the measure selected as the outlier score --- may affect the effectiveness of the outlier detection. In the previous work, the outliers were clustered together and each outlier from a cluster was assigned the score of the cluster center \cite[Section 4.3.1.1]{thesis}. K-metamodes outlier detection makes it possible to reproduce this approach. Instead of taking the distance from each record to all metamodes as the outlier score, it is possible to assign to each record the outlier score of the corresponding record's mode. Thus, the outlier score becomes the distance from the record's mode to all metamodes. With such an outlier score, k-metamodes is able to reach \textbf{98.09\% AUC} on the same data\footnote{Calculating the AUC with the proposed outlier score for different numbers of samples would be unreasonable, since all records from all samples need to be clustered in order to assign an outlier score to each record.}. We provide the ROC and Precision-Recall curves in Figures \ref{fig:KDDCupFreqROC} and \ref{fig:KDDCupFreqPR}.
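The mode-level outlier score described above can be sketched as follows (a simplified illustration with our own helper names; modes and metamodes are represented as per-attribute frequency dictionaries, and all records of one cluster deliberately share one score):

```python
# Hedged sketch: each record inherits the outlier score of its own mode,
# defined as the total distance from that mode to all metamodes.
from math import sqrt

def mode_to_metamode(mode, metamode):
    """Per-attribute Euclidean distance between frequency dictionaries."""
    d = 0.0
    for qa, qb in zip(mode, metamode):
        values = set(qa) | set(qb)
        d += sqrt(sum((qa.get(v, 0.0) - qb.get(v, 0.0)) ** 2 for v in values))
    return d

def outlier_scores(record_mode_ids, modes, metamodes):
    """Score of a record = sum of distances from its mode to all metamodes."""
    per_mode = [sum(mode_to_metamode(m, z) for z in metamodes) for m in modes]
    return [per_mode[l] for l in record_mode_ids]
```

Records whose modes lie far from every metamode (i.e. small, atypical clusters) thus receive the highest scores and surface at the top of the output.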
\begin{figure}[htb!]
\centering
\includegraphics[width=0.7\linewidth]{ROC_KDD_k22_p40_s10000_frequency_frequency_clustered_distance.png}
\centering
\caption{ROC curve for KDD Cup 1999 data with 40 samples, sample size 10,000, k=22, k'=22, frequency-based distance function, distance from record's mode to all metamodes as outlier score.}
\label{fig:KDDCupFreqROC}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=0.7\linewidth]{PR_KDD_k22_p40_s10000_frequency_frequency_clustered_distance.png}
\centering
\caption{Precision-Recall curve for KDD Cup 1999 data with 40 samples, sample size 10,000, k=22, k'=22, frequency-based distance function, distance from record's mode to all metamodes as outlier score.}
\label{fig:KDDCupFreqPR}
\end{figure}
Figure \ref{fig:KDDCupFreqROC} shows that the k-metamodes algorithm is able to achieve a high true positive rate while keeping the false positive rate low. Moreover, Figure \ref{fig:KDDCupFreqPR} demonstrates that high values for both precision and recall can also be achieved by applying k-metamodes to the KDD Cup data.
Next, we apply the same algorithm, but with the new distance function proposed in Section \ref{sec:frequency}, on the same data. Since the distance from the record's mode to all metamodes as the outlier score achieves higher effectiveness, we stick to 40 training samples during the data analysis. Unfortunately, the \textbf{proposed distance function reaches an AUC of 97.93\%}, which is slightly worse than the original distance function.
To check whether the proposed distance function helps to achieve higher effectiveness on other data sets, we applied k-metamodes on the UNSW-NB15 data. Similarly to the KDD Cup data, this data set was discretised in order to convert continuous numerical values into categories. Heuristically, a sample size of 50,000 records was selected. The optimal k was determined according to \cite[Section~4.3.6]{thesis} and equals 36. Figure \ref{fig:NB15optimalK} provides the charts for different values of k and the corresponding cluster similarity\footnote{Mean cosine similarity between the concept vector and each cluster record according to \cite[Section 4.2.4]{thesis}.}.
\begin{figure}[htb!]
\centering
\includegraphics[width=0.7\linewidth]{NB15_optimal_k.png}
\centering
\caption{Different number of clusters per sample and corresponding cluster similarity for UNSW-NB15 data set, 50,000 records per sample.}
\label{fig:NB15optimalK}
\end{figure}
In Figure \ref{fig:NB15optimalK} the similarity measure is calculated for different k on two types of data subsets. The first (called ``data subsets'') are subsets of 50,000 records each, taken from the UNSW NB15 data set without shuffling. The second (called ``training subsets'') are also subsets of 50,000 records each, but created by applying random sampling without replacement on the UNSW NB15 data set. For both types of subsets, the optimal $k$ (equal to 36) reaches approximately 90\% cluster similarity.
As mentioned above, we stick to the more effective outlier score, which requires clustering all records from the data set with k-metamodes, i.e. the ``training'' samples should always cover the data set completely. This implies the usage of 22 samples, taking into account the data set size of 1,087,203 records and the sample size of 50,000 records per sample.
K-metamodes with the frequency-based distance function and the parameters discussed above was able to achieve an \textbf{AUC of 94.51\%} on this data set, while k-metamodes with the proposed distance function (defined in Section \ref{sec:frequency}) reaches an \textbf{AUC of 96.24\%} on the same data set. The corresponding Receiver Operating Characteristic and Precision-Recall curves are provided in Figure \ref{fig:NB15ROCPR}.
\begin{figure}[htb!]
\centering
\subfloat[][ROC curve for UNSW NB15 data with 22 samples, sample size 50,000, k=36, k'=36, frequency-based distance function, distance from record's mode to all metamodes as outlier score.]{
\includegraphics[width=0.47\linewidth]{ROC_NB15_k36_p22_s50000_freq_freq_clustered_distance.png}
\label{fig:NB15_freq_ROC}
}
\hspace{0.01\textwidth}
\subfloat[][ROC curve for UNSW NB15 data with 22 samples, sample size 50,000, k=36, k'=36, meta-frequency-based distance function, distance from record's mode to all metamodes as outlier score.]{
\includegraphics[width=0.47\linewidth]{ROC_NB15_k36_p22_s50000_freq_meta_clustered_distance.png}
\label{fig:NB15_meta_ROC}
}
\hfill
\subfloat[][Precision-Recall curve for UNSW NB15 data with 22 samples, sample size 50,000, k=36, k'=36, frequency-based distance function, distance from record's mode to all metamodes as outlier score.]{
\includegraphics[width=0.47\linewidth]{PR_NB15_k36_p22_s50000_freq_freq_clustered_distance.png}
\label{fig:NB15_freq_PR}
}
\hspace{0.01\textwidth}
\subfloat[][Precision-Recall curve for UNSW NB15 data with 22 samples, sample size 50,000, k=36, k'=36, meta-frequency-based distance function, distance from record's mode to all metamodes as outlier score.]{
\includegraphics[width=0.47\linewidth]{PR_NB15_k36_p22_s50000_freq_meta_clustered_distance.png}
\label{fig:NB15_meta_PR}
}
\caption{ROC and PR curves for k-metamodes on UNSW NB15 data with both frequency-based (on the left) and meta-frequency-based (on the right) distance functions.}
\label{fig:NB15ROCPR}
\end{figure}
In Figure \ref{fig:NB15ROCPR}, the ROC curves for k-metamodes with both the frequency-based and the proposed meta-frequency-based distance functions show that the algorithm is able to achieve a high true positive rate while keeping the false positive rate relatively low. In turn, this confirms that k-metamodes is able to place outliers at the top of the algorithm's output by giving a higher outlier score to real outliers (true positives). The ROC curve in Figure \ref{fig:NB15_meta_ROC} also shows that with the meta-frequency-based distance function, k-metamodes achieves a True Positive Rate of more than 90\% while still keeping the False Positive Rate below 10\%. In contrast, without the proposed distance function, the True Positive Rate needs to stay around 60\% in order to keep the False Positive Rate below 10\%, as shown in Figure \ref{fig:NB15_freq_ROC}.
On the other hand, both Precision-Recall charts in Figures \ref{fig:NB15_freq_PR} and \ref{fig:NB15_meta_PR} show that k-metamodes applied on the UNSW NB15 data is unable to achieve as high a precision as for the KDD Cup data (shown in Figure \ref{fig:KDDCupFreqPR}). For both types of distance functions, a Recall / True Positive Rate of 60\% or more implies a relatively low precision that does not even exceed 30\%. However, the precision-recall ratio is not as important for an unsupervised outlier detection algorithm as ROC and AUC, especially in the security area. Even if the outlier detection has a high number of false positives, but is able to place true positives at the top of its output, it can still be effectively used for capturing outliers that would remain uncaught otherwise (please see \cite[Comparative Evaluation]{Goldstein2016} and \cite[Section~6.1]{thesis} for further details).
We have also checked that increasing the number of clusters per sample, e.g. to $k=100$, does not lead to a higher AUC value. Rather, with the proposed meta-frequency-based distance function and $k=100$ the measured AUC value was 95.13\% (which is less than the 96.24\% reached with $k=36$). This fact allows us to conclude that the selected number of clusters per sample has only a minor effect on the outlier detection.
Thus, the results are twofold. On the one hand, the proposed distance function helps to increase the AUC from 94.51\% to 96.24\% on the UNSW NB15 data set. On the other hand, on the KDD Cup 1999 data, the AUC slightly decreases from 98.09\% to 97.93\%.
In the next section, we compare k-metamodes with the previous work, namely Hybrid Outlier Detection \cite{Sapegin2017,thesis}, to check whether k-metamodes is able to achieve higher effectiveness on both data sets.
\subsection{Comparison of k-metamodes with Hybrid Outlier Detection}\label{sec:comparison}
Hybrid Outlier Detection represents an example of an algorithm that utilises conversion of the features into a vector space in order to perform clustering and outlier detection on categorical data, such as security log messages. Under this algorithm, the data are divided into subsets, whereas each subset undergoes one-hot encoding followed by clustering using spherical k-means. After the clustering, the concept vectors of the clusters are used as training data for a one-class SVM. Each data subset is used to train the corresponding model from the ensemble of one-class SVMs independently. During the application/testing phase, the data are divided into subsets and clustered again in order to check the concept vectors of the clusters against all one-class SVMs from the ensemble. If all models classify a concept vector as an outlier, all records from the corresponding cluster are considered outliers and assigned the same outlier score \cite{Sapegin2017,thesis}.
In the previous work, Hybrid Outlier Detection was applied on the KDD Cup data set and also achieved a high AUC value, as shown in Figure \ref{fig:KDDCupAUCHOD}.
\begin{figure}[htb!]
\centering
\includegraphics[angle=-90,width=0.5\linewidth]{HOD_KDDCup_all_aucs_k22_from_thesis.png}
\centering
\caption{AUC values for Hybrid Outlier Detection on KDD Cup 1999 data with the different number of training samples, sample size 10,000, k=22, simple and advanced (with different coefficient) discretisation; reprinted from \cite{Sapegin2017,thesis}.
}
\label{fig:KDDCupAUCHOD}
\end{figure}
Figure \ref{fig:KDDCupAUCHOD} provides the measurements of the AUC value for the different parameters that were used for Hybrid Outlier Detection. The best AUC reached was 98.4\% (with 4 training samples, k = 22 and advanced discretisation with C = 0.5), which is slightly better than the AUC reached by k-metamodes on the same data set (98.09\%, as mentioned in the previous section). The higher effectiveness of Hybrid Outlier Detection can be explained by the fact that if k-metamodes is applied on data sets with mixed data (containing both numerical and categorical features), only simple discretisation can be used to convert numerical features into categorical ones. For Hybrid Outlier Detection (which uses spherical k-means for clustering) it is possible to apply advanced discretisation and retain the difference between the original numerical values even though they are discretised into the same category.
However, on the UNSW NB15 data set, Hybrid Outlier Detection does not outperform k-metamodes, as shown in Figure \ref{fig:NB15AUCHOD}.
\begin{figure}[htb!]
\centering
\includegraphics[angle=-90,width=0.5\linewidth]{HOD_NB15_all_aucs_simple_s50k_k36.png}
\centering
\caption{AUC values for Hybrid Outlier Detection on UNSW NB15 data with the different number of training samples, sample size 50,000, k=36, simple and advanced discretisation ($C=0.5$).}
\label{fig:NB15AUCHOD}
\end{figure}
Figure \ref{fig:NB15AUCHOD} presents the AUC values achieved by Hybrid Outlier Detection with different numbers of training samples\footnote{Similarly to the application of Hybrid Outlier Detection on the KDD Cup data, for the UNSW NB15 data we did not calculate the AUC for numbers of training samples $>$10 (corresponding to approx. half of the data set), since the algorithm is expected to work best when trained on a subset of the data \cite[Section~3.2]{subsampling}.} and different discretisation types ($C=0.5$ was selected heuristically, since this value of the discretisation coefficient allows the algorithm to achieve the best AUC on the KDD Cup data). Although the average AUC achieved by HOD was slightly higher for advanced discretisation, the best AUC value was achieved with simple discretisation and equals 88.17\%, which is less than the 96.24\% reached by k-metamodes.
Thus, k-metamodes showed almost the same effectiveness on the KDD Cup data as Hybrid Outlier Detection and reached a higher AUC value on the UNSW NB15 data set. The next section provides a more detailed overview of the effectiveness of both algorithms (including both frequency- and meta-frequency-based k-metamodes) and concludes the paper.
\section{Conclusion}\label{sec:conclusion}
All three algorithms --- k-metamodes with frequency-based distance function, k-metamodes with the proposed meta-frequency-based distance function and Hybrid Outlier Detection --- were applied on both KDD Cup 1999 data and UNSW NB15 data and reached high AUC values, which are shown in Figure \ref{fig:AllAUC} below.
\begin{figure}[htb!]
\centering
\includegraphics[width=\linewidth]{All_data_all_algorithms_best_values.png}
\centering
\caption{Best AUC values for all algorithms tested in this paper on the KDD Cup 1999 and UNSW NB15 data sets.}
\label{fig:AllAUC}
\end{figure}
Figure \ref{fig:AllAUC} provides an overview of the AUC values for all datasets and algorithms tested in this paper. On the KDD Cup 1999 data, all algorithms reach nearly the same AUC value of approximately 98\%. Although k-metamodes with the proposed meta-frequency-based distance function demonstrates the worst result on these data, the difference from the other algorithms might be considered statistically insignificant. In contrast, on the UNSW NB15 data, k-metamodes outperforms Hybrid Outlier Detection, even when advanced feature discretisation is used for the latter. On this dataset, the proposed meta-frequency-based distance function yields a 2\% higher AUC value. Even though 2\% cannot be considered a significant improvement, from the ROC curves provided in Figure \ref{fig:NB15ROCPR} we may conclude that the proposed distance function makes it possible to reach 90\% recall (True Positive Rate) while keeping the False Positive Rate as low as 10\%. Without the proposed distance function, the corresponding recall is much lower, around 60\%.
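The recall-at-fixed-FPR comparison above corresponds to reading a single operating point off the ROC curve; a stdlib sketch (toy scores for illustration, not the actual ROC data of either algorithm):

```python
def tpr_at_fpr(scores, labels, max_fpr=0.10):
    """Highest recall (TPR) achievable by any score threshold whose
    false positive rate stays at or below max_fpr; label 1 = outlier."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    best = 0.0
    for t in sorted(set(scores), reverse=True):
        fpr = sum(s >= t for s in neg) / len(neg)
        if fpr <= max_fpr:
            best = max(best, sum(s >= t for s in pos) / len(pos))
    return best

# Toy example with 3 outliers and 10 inliers: a 10% FPR budget allows
# one false positive, so all three outliers can be recovered.
scores = [0.9, 0.85, 0.8, 0.6, 0.5, 0.4, 0.3, 0.2, 0.15, 0.1, 0.05, 0.02, 0.01]
labels = [1,   1,    0,   1,   0,   0,   0,   0,   0,    0,   0,    0,    0]
print(tpr_at_fpr(scores, labels, max_fpr=0.10))  # 1.0
```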
Thus, in this paper we proposed a novel frequency-based distance function for clustering modes into metamodes within the second step of the k-metamodes algorithm. Besides this, we combined k-metamodes with the feature discretisation approach from our previous work. The resulting algorithm\footnote{The source code for both k-metamodes with the proposed distance function and Hybrid Outlier Detection is available on GitHub under the MIT license \cite{HODgithub,kmetamodesgithub}.} is able to run on mixed data sets containing both numerical and categorical features and reaches higher recall at the same False Positive Rate. These improvements are relevant not only for outlier detection in the area of security analytics, where such mixed data sets are rather common, but also for generic Big Data processing cases.
\nolinenumbers
\section{Introduction}
In the past few years, Mrk~231 has become arguably the best laboratory
to study quasar feedback in action. One reason for this is the
proximity of Mrk~231: at a distance\footnote{Based on a redshift $z$ =
0.0422 and a cosmology with $H_0$ = 73 km s$^{-1}$ Mpc$^{-1}$,
$\Omega_{\rm matter}$ = 0.27, and $\Omega_{\rm vacuum}$ = 0.73.} of
only 178 Mpc, it is the nearest quasar known (Boksenberg et al. 1977)
and thus provides an excellent linear resolution (1$\arcsec$ = 0.863
kpc). But there are several other reasons why Mrk~231 has attracted
attention. It is one of the best local examples of powerful quasar and
starburst activity triggered by a recent merger (e.g., Hamilton \&
Keel 1987; Hutchings \& Neff 1987; Surace et al. 1998; Veilleux et
al. 2002, 2006). It is a morphologically disturbed ultraluminous
infrared galaxy (ULIRG) with an infrared (8 -- 1000 $\mu$m) luminosity
log[$L_{\rm IR}$/$L_\odot$] = 12.54 and a bolometric
luminosity$\footnote{This estimate for the bolometric luminosity
includes all IR and non-IR contributions and is derived by simply
assuming $L_{\rm BOL}$ = 1.15 L$_{\rm IR}$, typical for ULIRGs
(e.g., Sanders \& Mirabel 1996).}$ log[$L_{\rm BOL}$/$L_\odot$]
$\approx$ 12.60 that is produced at $\sim$70\% by the quasar and
$\sim$30\% by a starburst with a star formation rate SFR $\sim$ 140
M$_\odot$ yr$^{-1}$ (Veilleux et al. 2009). More relevant to the
issue of feedback, Mrk~231 is also a member of the rare class of iron
low-ionization broad absorption-line (FeLoBAL) quasars, with an
unresolved nuclear outflow with a velocity of up to $\sim$ --8000 km
s$^{-1}$ measured in Na~I~D 5890, 5896 \AA\ and several other optical
and ultraviolet lines (Boksenberg et al. 1977; Rudy, Foltz, \& Stocke
1985; Hutchings \& Neff 1987; Boroson et al. 1991; Kollatschny,
Dietrich, \& Hagen 1992; Forster, Rich, \& McCarthy 1995; Smith et
al. 1995; Rupke, Veilleux, \& Sanders 2002; Gallagher et al. 2002,
2005; Veilleux et al. 2013a). Finally, and most relevant to the
present paper, recent observations have revealed that Mrk~231 also
hosts a powerful spatially resolved optically-detected wind with
velocities in excess of $\sim$ --1000 km s$^{-1}$ (Rupke, Veilleux, \&
Sanders 2005c; Rupke \& Veilleux 2011, 2013, hereafter RV11 and RV13,
respectively).
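The quoted distance and angular scale follow directly from the redshift and the adopted cosmology; a numerical sketch (Simpson integration of the Friedmann factor; the quoted 0.863 kpc arcsec$^{-1}$ corresponds to the 178 Mpc distance in the small-angle approximation, and small differences here reflect rounding):

```python
# Flat Lambda-CDM with H0 = 73 km/s/Mpc, Om = 0.27, OL = 0.73 (footnote 1).
C_KM_S = 299792.458
H0, OM, OL = 73.0, 0.27, 0.73

def comoving_distance_mpc(z, n=1000):
    """D_C = (c/H0) * int_0^z dz'/E(z'), with E(z) = sqrt(Om(1+z)^3 + OL),
    integrated with Simpson's rule (n must be even)."""
    e_inv = lambda zp: (OM * (1.0 + zp) ** 3 + OL) ** -0.5
    h = z / n
    s = e_inv(0.0) + e_inv(z)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * e_inv(i * h)
    return (C_KM_S / H0) * (h / 3.0) * s

z = 0.0422
d_l = comoving_distance_mpc(z) * (1.0 + z)   # luminosity distance, ~178 Mpc
arcsec_rad = 4.84813681e-6                   # one arcsecond in radians
scale_kpc = d_l * 1000.0 * arcsec_rad        # ~0.863 kpc per arcsec
```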
The optical observations of RV11 and RV13 show that this wind is
predominantly neutral (traced in absorption by Na~I~D) and extends out
to at least $\sim$3 kpc. The neutral wind gas is symmetrically
distributed around the nucleus and presents a uniform velocity field
with no obvious spatial gradient. These results indicate the presence
of a wide-angle outflow that is expelling gas at a rate of at least
$\sim$200 M$_\odot$ yr$^{-1}$ out of the nuclear region (RV13). The
axis of this conical outflow is roughly along the minor axis of the
near face-on sub-kpc molecular / stellar disk (e.g., Downes \& Solomon
1998; Davies et al. 2004). Remarkably, a number of recent far-infrared
and mm-wave studies (Fischer et al. 2010; Feruglio et al. 2010; Sturm
et al. 2011; Aalto et al. 2012; Cicone et al. 2012; Gonzalez-Alfonso
et al. 2014) have inferred that a similar (if not larger) amount of
molecular gas is entrained in this outflow, reaching velocities that
are similar to those measured optically in the spatially resolved
neutral wind. This multi-phase wind is thus potentially a
life-changing event for Mrk~231, capable of displacing most of the
neutral and molecular gas from the nucleus, and thus quenching nuclear
star formation and also possibly the quasar activity, on a timescale
of only a few $\times$ 10$^6$ yr. This is the essence of quasar
feedback, which is purported to transform many gas-rich mergers into
inactive bright elliptical galaxies (di Matteo, Springel, \& Hernquist
2005; Murray, Quataert, \& Thompson 2005; Veilleux, Cecil, \&
Bland-Hawthorn 2005; Veilleux et al. 2013b; Cicone et al. 2014).
Not surprisingly, Mrk~231 has been the subject of numerous X-ray
studies, using the whole gamut of high-energy space facilities,
starting with {\em Einstein}, {\em ROSAT}, {\em ASCA}, and {\em CGRO}
(e.g., Eales \& Arnaud 1988; Dermer et al. 1997; Turner 1999; Maloney
\& Reynolds 2000). It was observed with {\em Chandra} on multiple
occasions: once in 2000 ($\sim$40~ksec; Gallagher et al.\ 2002), three
times in 2003 ($\sim$40~ksec each spanning over three weeks; Gallagher
et al. 2005), and once again in 2010 ($\sim$5-ksec snapshot; PI
Garmire). The 2000-2003 data were independently re-analyzed by Ptak et
al.\ (2003), Grimes et al.\ (2005), and Teng \& Veilleux
(2010). Mrk~231 was observed with {\em XMM-Newton} in 2001
($\sim$20~ksec; Turner \& Kraemer 2003) and at 12 $-$ 60 keV with {\em
Beppo}SAX at the end of 2001 (Braito et al.\ 2004). The combined
{\em XMM-Newton} + {\em Beppo}SAX spectrum of the nucleus suggests the
existence of a highly absorbed ($N_H \sim 2 \times
10^{24}$~cm$^{-2}$), powerful (intrinsic 2-10 keV luminosity of $\sim1
\times 10^{44}$ erg s$^{-1}$) AGN. The direct AGN continuum appears to
be only detected beyond $\sim$10 keV, while reprocessed radiation
through scattering and/or reflection dominates the observed 2-10 keV
emission (Braito et al. 2004). A more recent (April 2011) 200-ksec
observation of Mrk~231 with {\em Suzaku} by Piconcelli et al. (2013)
indicates that the absorbing material is patchy, now letting through a
significantly larger fraction of the direct $<$10 keV AGN continuum
than in 2001. This patchy material may also be responsible for the
scattering/reflection itself and the optical/UV LoBAL systems (e.g.,
Rupke et al.\ 2002, 2005c; Gallagher et al. 2005; Veilleux et
al. 2013a and references therein). The recent high-energy (3 -- 30
keV) {\em NuSTAR} observation of Mrk~231, supplemented with our new
and simultaneous low-energy (0.5-8 keV) data from {\em Chandra}, puts
into question the existence of Compton-thick material in front of this
source (Teng et al. 2014). Mrk~231 was detected at high energies,
though at much fainter flux levels than previously reported, likely
due to contamination in the large apertures of previous non-focusing
hard X-ray telescopes. The full band (0.5 -- 30 keV) X-ray spectrum of
Mrk~231 suggests that the AGN in this object is absorbed by a patchy
and Compton-thin column of $N_{\rm H,~AGN}$ = $1.12^{+0.29}_{-0.24}
\times 10^{23}$ cm$^{-2}$.
A strong thermal (Mewe-Kaastra-Liedahl or MEKAL) component with $kT
\sim 0.7 - 0.8$ keV is also contributing to the soft X-ray band of
the nuclear spectrum (e.g., Gallagher et al. 2002, 2005; Ptak et
al. 2003; Teng \& Veilleux 2010; Teng et al. 2014). But this nuclear
soft X-ray emission is just the tip of the iceberg: The {\em Chandra}
ACIS-S observations by Gallagher et al. (2002, 2005) have revealed a
spectacular soft X-ray nebula extending out to $\sim$30$\arcsec$
($\sim$25 kpc) from the nucleus of Mrk~231 (see also Grimes et
al. 2005). A reanalysis of these data reveals that the morphology of
the inner portion of this nebula is similar to that of the H$\alpha$
emission mapped by RV11. In particular, there appear to be soft X-ray
enhancements immediately east of the nucleus and $\sim$3.5~kpc south
of the nucleus, respectively, coincident with an H~II region and the
tidal arc seen in RV11. These data show tantalizing evidence for
spatial variations in the properties of the X-ray gas, but the counts
are not sufficient to draw statistically significant conclusions.
The present paper describes the results from our analysis of a
considerably deeper {\em Chandra} ACIS-S data set, combining a new
400-ksec observation with the 2003 archival data obtained with the
same instrument and in the same configuration.
The results from a detailed analysis of the new {\em Chandra} data on
the central AGN, in combination with the high-energy {\em NuSTAR}
spectrum, were presented in Teng et al. (2014). Here, we focus our
attention on the extended X-ray emission of Mrk~231 and discuss
whether some of this emission may relate to the spatially resolved
galactic wind. The acquisition of the data is discussed in \S 2,
followed by a description of the results from the image and spectral
analyses (\S 3) and a discussion of the implications (\S 4). The
conclusions are summarized in \S 5.
\section{Observations}
The rationale behind the setup used for the new 400-ksec observation
(PID 13700587; PI Veilleux) was to match the observational parameters
of the 3 $\times$ 40-ksec exposures obtained in 2003 by Gallagher et
al. (2005) and thus facilitate the task of combining both data sets
into a single 0.52-Msec exposure. The other {\em Chandra} data sets
obtained before or after 2003 are considerably shallower or acquired
in a different mode so no attempt was made to combine them with the
present data set. Mrk~231 was aimed at the back-illuminated S3
detector of ACIS (Garmire et al. 2003). Due to scheduling constraints,
the planned 400-ksec observation was divided into three segments of
40.97, 182.02, and 177.99 ksec, obtained close in time (23, 24, and 27
August 2012, respectively) to reduce the effects of variability in the
AGN and background (e.g., solar flares). All of the observations were
performed in 1/2 subarray mode in order to avoid pileup and take
advantage of {\em Chandra}'s excellent angular resolution.
\section{Results}
\subsection{Image Analysis}
The data from the 2003 and 2012 epochs were combined together. Both
epochs were reduced using CIAO version 4.5 and CALDB version
4.5.6. This processing incorporates energy-dependent subpixel event
repositioning (EDSER algorithm; Li et al. 2004) which delivers
optimized image resolution by subpixel repositioning of individual
X-ray events. These data were re-processed to verify that this
optimization was indeed carried out properly and to check against
flares using the {\em Python}-based ``deflare'' routine, which
discards any data with a count rate more than 3 standard deviations from
the mean of the distribution. We make no attempt to deconvolve the
data using, e.g., Lucy or EMC2 algorithms (Lucy 1974; Esch et
al. 2004; Karovska et al. 2005). This strategy better preserves diffuse features
and possible slight asymmetries in the point spread function
(PSF).\footnote{CIAO version 4.5 manual:
http://cxc.harvard.edu/ciao/caveats/psf\_artifact.html.} The image
data were merged for analysis using the CIAO script ``merge$\_$obs''.
All of the data were reprojected to a common reference position. This
minimizes the effect of relative astrometric errors on the final merged
image which might produce false morphological features. The absolute
astrometric error for ACIS-S is $\sim$0$\farcs$6. We checked our final
image against the VLA FIRST position of Mrk 231. The absolute
astrometric offset was measured to be 0$\farcs$09.
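The astrometric comparison above reduces to an angular separation between the X-ray centroid and the VLA FIRST position; a minimal sketch (the coordinates in the example are placeholders, not the measured positions of Mrk~231):

```python
import math

def separation_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation of two sky positions (input in degrees,
    output in arcseconds), via the Vincenty formula, which remains
    numerically stable at sub-arcsecond separations."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    dra = r2 - r1
    num = math.hypot(
        math.cos(d2) * math.sin(dra),
        math.cos(d1) * math.sin(d2)
        - math.sin(d1) * math.cos(d2) * math.cos(dra),
    )
    den = (math.sin(d1) * math.sin(d2)
           + math.cos(d1) * math.cos(d2) * math.cos(dra))
    return math.degrees(math.atan2(num, den)) * 3600.0

# Placeholder positions 0.001 deg apart in declination: 3.6 arcsec.
print(round(separation_arcsec(180.0, 0.0, 180.0, 0.001), 6))  # 3.6
```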
The images derived from the combined data set are shown in Figure 1.
The full-band (0.5 -- 8 keV) images (top row in Figure 1) show a large
complex of emission extending over a total scale of at least
80\arcsec\ or $\sim$65 kpc in the north-south direction and 60\arcsec\
or $\sim$50 kpc in the east-west direction. This emission is not
distributed symmetrically with respect to the nucleus, extending
further to the south ($\sim$40 kpc) than to the north ($\sim$18 kpc),
and further east ($\sim$26 kpc) than west ($\sim$16 kpc). This
high-S/N data set also reveals that the X-ray emission is highly
clumpy or filamentary. The morphology of the large-scale X-ray
emission does not share a strong resemblance with that of the tidal
debris (upper middle panel of Figure 1; RV11; Koda et al. 2009). In
fact, the X-ray emission on the north side is {\em weaker} at the
position of the tidal tail.\footnote{The X-ray emission at the
position of the tidal tail that extends more than 30 kpc south of
the nucleus (Koda et al. 2009) also seems weaker, but the effect is
less significant (see top middle panel of Figure 1). } This
indicates that foreground tidal material absorbs some of the X-ray
photons along the line of sight. Iwasawa et al. (2011) reported a
similar X-ray ``shadow'' of a tidal tail in the ULIRG Mrk 273. In \S
3.2.4, we derive a lower limit on the column density of the northern
tidal tail in Mrk~231 based on the apparent deficit of X-ray emission
at this location.
However, it is also clear that not all tidal features cast an X-ray
shadow. The bright tidal arc located $\sim$3.5 kpc south of the
nucleus coincides with X-ray enhancements (upper right panel of Figure
1). This tidal feature is known to be forming stars and may even be
powering its own supernova-driven outflow (RV11). The spectral
properties of the X-ray emitting gas at the location of this tidal arc
are discussed in \S 3.2.3; the results from this spectral analysis
confirm the presence of star-forming complexes at this location.
The false color maps shown on the bottom row of Figure 1 indicate a
trend with distance from the nucleus where the extended emission is
distinctly softer than the nuclear and circumnuclear emission. This is
illustrated more clearly in Figures 2 and 3, where the
azimuthally-averaged radial profile and two-dimensional map of the
hardness ratio [defined as ($HX - SX$) / ($HX + SX$), where HX and SX
are the 2 -- 8 keV and 0.5 -- 2 keV fluxes, respectively] are
presented. Also shown in Figure 2 are the azimuthally-averaged radial
profiles of the background-subtracted soft and hard X-ray emission fit
with a two-component $\beta$-model:
$$\Sigma(R) = \Sigma_0 [1 + (R/R_0)^2]^{-3\beta + 0.5},$$
where $\Sigma(R)$ is the azimuthally-averaged surface brightness
profile, $\Sigma_0$ is the central surface brightness, $R_0$ is the
core radius, and $\beta$ is the power-law index that quantifies the
slope at large radii. Note the dependence of the surface brightness
profile on X-ray energies, resulting in variations of the hardness
ratio. While the hardness ratio profile within $\sim$2 kpc is somewhat
affected by the energy dependence of the central PSF associated with
the AGN (the PSF is broader at higher energies; Wang et al. 2014), the
small values of the hardness ratios at $R \ga$ 2 kpc, where the flux
contribution from the central PSF is considerably smaller, reflect the
softness of the extended nebula. The right panel in Figure 3 also
suggests the presence of a slight asymmetry in the hardness ratio map
of the central region ($R \la$ 2 kpc), with higher hardness ratios
measured west of the nucleus. This result will be revisited in \S 3.2,
taking into account the spectral responses of {\em Chandra} at various
epochs.
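The hardness-ratio definition and the two-component $\beta$-model used in Figure 2 can be written compactly as follows (a sketch; the parameter values in the example are illustrative, not the fitted ones):

```python
def beta_model(r, sigma0, r0, beta):
    """Single beta-model surface brightness:
    Sigma(R) = Sigma_0 * [1 + (R/R_0)^2]^(-3*beta + 0.5)."""
    return sigma0 * (1.0 + (r / r0) ** 2) ** (-3.0 * beta + 0.5)

def two_component_beta(r, comp1, comp2):
    """Sum of two beta-model components, as in the radial-profile fits;
    each component is a (sigma0, r0, beta) tuple."""
    return beta_model(r, *comp1) + beta_model(r, *comp2)

def hardness_ratio(hx, sx):
    """(HX - SX) / (HX + SX), with HX the 2-8 keV and SX the
    0.5-2 keV flux; -1 is purely soft, +1 purely hard."""
    return (hx - sx) / (hx + sx)

# Illustrative values: with beta = 0.5 the profile falls to half its
# central value at the core radius R_0.
print(beta_model(1.0, 1.0, 1.0, 0.5))   # 0.5
print(hardness_ratio(3.0, 1.0))         # 0.5
```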
\subsection{Spectral Analysis}
For each observation, individual spectra were extracted from
pre-determined extraction regions (see Figure 4) and then combined
together into a merged spectrum for each region using the ``combine''
keyword in the {\em specextract} function in CIAO 4.5. All spectra
used the same background extraction region to assure that differences
are not due to changes in the background since we expect the
variability in the background to be small from observation to
observation. All spectra were binned to at least 15 counts per bin,
unless noted otherwise (e.g., when the counts were high, we binned by
S/N). XSPEC version 12.8.0k was used for this analysis. The errors
quoted below are at the 90\% confidence level.
The different regions were modeled {\em simultaneously} using a
procedure that is similar to that used to fit the quasars in Teng \&
Veilleux (2010). We initially started with the nuclear portion of the
{\em NuSTAR} model (hereafter called the nuclear {\em NuSTAR} model)
i.e., the model that was found by Teng et al. (2014) to best fit the
combined {\em NuSTAR} + {\em Chandra} nuclear data. In this model, the
direct AGN emission is absorbed and scattered by a patchy torus with
$N_{\rm H,~AGN}$ = $1.12^{+0.29}_{-0.24} \times 10^{23}$
cm$^{-2}$. More specifically, the best-fit model includes a leaky
MYTorus component for the AGN emission (Murphy \& Yaqoob 2009), an Fe
line at 6.7 keV from He-like Fe~XXV, and a heavily obscured ($N_{\rm
H,~nuclear~HMXB}$ = 2.4 $\times$ 10$^{23}$ cm$^{-2}$) component from
high-mass X-ray binaries (HMXBs) associated with the nuclear star
formation (SFR of $\sim$140 $M_\odot$ yr$^{-1}$). The {\em NuSTAR}
model of Teng et al. (2014; see Table 1 in that paper for an equation
form of the best-fit {\em NuSTAR} model) also includes a two-component
MEKAL (Mewe-Kaastra-Liedahl) model of the emission from the hot diffuse
gas within the NuSTAR aperture (recall that the half-power diameter of
NuSTAR is $\sim$58$\arcsec$). Since our {\em Chandra} observations
resolve this diffuse emission, these MEKAL components are treated
separately from the nuclear component in the present paper (their
intensities and temperatures are allowed to vary independently of the
nuclear {\em NuSTAR} model; see below).
The flux of the nuclear {\em NuSTAR} model was allowed to vary with
position in the nebula, but the relative intensities of the various
components of this model were held fixed. Previous studies (e.g.,
Gallagher et al. 2002; Teng et al. 2014) have shown that the hot
diffuse gas is best described as the sum of up to two thermal (MEKAL)
plasma components. For completeness, we also explore in this paper the
possibility that the hot diffuse gas is shocked plasma. We confirm
that the best-fit models of our deeper {\em Chandra} data require at
least one MEKAL component or one shock component to explain the soft
X-ray emission. In all fits, we chose to tie the temperature(s) of the
MEKAL or shock component(s) to be the same in all regions and look for
spatial variations in its (their) intensity (intensities). At a few
locations in the nebula, excess hard X-ray emission was detected and
attributed to HMXBs from circumnuclear star-forming regions. This
contribution was modeled as a cutoff power-law model with cutoff
energy of 10 keV and a fixed $\Gamma$ of 1.1, following Braito et
al. (2004). We assumed no intrinsic absorption to the nebula, i.e.,
the hydrogen column density outside of the nucleus was fixed to the
Galactic value. Finally, lines were identified using the ATOMDB
database (atomdb.org). Line identification was carried out by choosing
the line in the energy range with the expected highest relative
intensity.
In summary, the best-fit model can be described in equation form as:
\begin{eqnarray*}
{\rm Model} &=& N_{\rm H,~Galactic} \times \{f_{\rm nuclear~NuSTAR} \times N_{\rm
H,~nucleus} \times ({\rm MYTorus}[N_{\rm H,~AGN}, PL_{\rm AGN}] \\
& & +~f_{\rm C-thin} \times PL_{\rm AGN} + N_{\rm H,~nuclear~HMXB} \times
PL^{\rm cutoff}_{\rm nuclear~HMXB}) \\
& & +~(N_{\rm H,~host~HMXB} \times PL^{\rm cutoff}_{\rm host~HMXB} + {\rm line[1-4] + Nebula})\},
\end{eqnarray*}
where $N_{\rm H,~Galactic}$ = 1.26 $\times$ 10$^{20}$ cm$^{-2}$, the
Galactic column density in the direction of Mrk~231 (Dickey \& Lockman
1990), $f_{\rm nuclear~NuSTAR}$ is the fraction ($< 1$) of the NuSTAR
spectrum of Teng et al. (2014), minus the two MEKAL models of the
nuclear diffuse emission, within the given aperture, $N_{\rm
H,~nucleus}$ is an additional absorbing column in the line of sight
toward the nucleus, not seen in the shallower\footnote{Note that the
{\em Chandra} data used by Teng et al. (2014) is much shallower than
what we use here since Teng et al. (2014) only used the {\em
Chandra} data that were strictly simultaneous with the NuSTAR
observations -- less than 50 ksec -- to avoid issues associated with
AGN variability.} {\em Chandra} spectrum of Teng et al. (2014),
$N_{\rm H,~AGN}$ is the absorbing column of the AGN emission
calculated as part of the MYTorus model and $PL_{\rm AGN}$ is the
direct AGN emission within the MYTorus model that also includes the
scattered fraction and Fe lines, $f_{\rm C-thin}$ is the fraction (=
0.19$^{+0.04}_{-0.03}$) of the ``leaked'' direct AGN emission, $N_{\rm
H,~nuclear~HMXB}$ $\times$ $PL_{\rm nuclear~HMXB}$ is the highly
obscured emission from HMXBs in the nucleus, $N_{\rm H,~host~HMXB}$
$\times$ $PL_{\rm host~HMXB}$ is the emission from HMXBs outside of
the nucleus (only detected in annulus \#3), line[1 -- 4] are
Gaussian fits to the emission lines, and Nebula = MEKAL$_1$ +
MEKAL$_2$ or vpshock to reproduce the emission from the hot diffuse
gas (vpshock is a constant temperature, plane-parallel shock plasma
model; e.g., Borkowski, Lyerly, \& Reynolds 2001).
\subsubsection{Radial Profile}
The annular spectral extraction regions were defined in the following
fashion: (1) Nucleus -- defined to be $R <$ 1.0 kpc
($\sim$1$\farcs$15). This is a conservatively large radius to make
sure that most of the nuclear emission is included in this region (see
Figure 2). (2) Annulus \#1 (1.0 -- 2.0 kpc) -- this corresponds to the
outflow region from RV11, approximately matching the radial extent of
the Na~I~D outflow, but avoiding the bright star-forming arc to the
south. (3) Annulus \#2 (2.0 -- 6.0 kpc) -- this corresponds to the
host galaxy region and includes all of the emission from the southern
star-forming arc. The outer radius was chosen to include only the
portion of the nebula where the PSF wings of the central AGN still
contribute to the hard X-ray flux. This region also corresponds
roughly with the brighter, more relaxed portion of the merger remnant
(e.g., Veilleux et al. 2002, 2006). (4) Annulus \#3 (6.0 -- 16.0 kpc)
-- the outer radius corresponds roughly to the optical edge of the
merger remnant. (5) Annulus \#4 (16.0 -- 40.0 kpc) -- this includes
all of the soft X-ray emission that is outside of the optical remnant
but still largely within the optical tidal complex shown in Koda et
al. (2009).
The extracted X-ray spectra and their best-fit models are shown in
Figure 5 and the derived properties are listed in Table 2. The shapes
of the spectra show a significant dependence on radial distance,
confirming the radial gradient in the hardness ratio displayed in
Figures 2 -- 3. Our best-fit models translate this hardness ratio
gradient into a radial dependence on the intensity of the central AGN
component, consistent with the expected PSF and our image analysis (\S
3.1), relative to the soft X-ray emission from the extended
nebula. The drop in hardness ratio with increasing radial distance
from the nucleus out to annulus \#2 is due entirely to this effect.
In annulus \#3 ($R$ = 6 -- 16 kpc), the flux from the PSF wings of the
central AGN is negligible, yet hard ($>$ 2 keV) X-rays are still
detected. The high-energy portion of this spectrum was fitted using a
cutoff power-law component meant to represent the HMXBs from
circumnuclear star-forming regions. The implied SFR in this annulus is
$\sim$10 $M_\odot$ yr$^{-1}$. This component is not needed in the
outer halo (annulus \#4). The soft X-ray continuum emitted by the
nebula is fit equally well with thermal (MEKAL) and shock models (all
have $\chi^2$/d.o.f. $\la$1.1; see notes to Tables 2a and 2b).
The thermal models require a second MEKAL component with lower
temperature (0.27 keV) to reproduce the softer spectra beyond $R$ = 6
kpc (annuli \#3 and \#4). The absence of a low-temperature MEKAL
component inside of 6 kpc may be due to extra heating in this region
from the circumnuclear star formation, the central quasar
activity, and/or the galactic-scale AGN-driven outflow. We return to
this issue in the next section, where we examine the azimuthal
spectral variations in the inner region of Mrk~231.
The spectra in Figure 5 were also examined for absorption and emission
features. No obvious sign of O~VII 0.7 keV and O~VIII 0.9 keV
absorption edges is seen in any of these spectra. To quantify this
statement, we added an edge at 0.7 keV and another one at 0.9 keV in
the fits to simulate the O~VII and O~VIII features, respectively. The
difference in $\chi^2_\nu$ was found to be insignificant (0.01 for two
degrees of freedom). The tentative detection of the 0.7 keV warm
absorber in the 2000 {\em Chandra} spectrum of the nucleus of Mrk~231
(Figure 9 of Gallagher et al. 2002) is therefore not confirmed here
despite the factor of $\sim$2 smaller uncertainties on the equivalent
width measurements. However, a number of emission lines are present
and provide constraints on the temperature and abundances of O, Mg,
Si, and Fe in the nebula (using the abundances of Wilms, Allen, \&
McCray 2000). The features around 1.3 and 1.9 keV are identified as
Mg XI 1.352 keV and Si XIII 1.864 keV, respectively. Both of these
features are present outside of the nucleus: the Si XIII line is
detected in the spectra of annuli \#1, \#2, \#3, and \#4, while Mg XI
is convincingly detected in annuli \#3 and \#4. In the nuclear
spectrum, there are also two lines near 6.4 and 6.7 keV with
relatively small equivalent widths, consistent with previous data
(e.g., Teng \& Veilleux 2010) and the new analysis of Teng et
al. (2014). These two lines are neutral or weakly ionized Fe~K$\alpha$
and He-like Fe~XXV~K$\alpha$, respectively. The Fe~XXV 6.7 keV line
may also be present outside of the nucleus (Figure 5). We return to
this point in \S 3.2.5, where we present narrowband line images
derived from our data.
Overall, the results of the fits listed in Table 2 suggest subsolar Fe
abundances and solar or supersolar Si abundances throughout the nebula
(particularly in the inner region; annulus \#1), while O and Mg
abundances fall between these two extremes. These abundances apply
only to the dominant ($\sim$90\%) warmer thermal component of the
halo. The abundances of the cooler component could not be constrained
separately because this component is too faint. In Table 2, the
abundances of the cooler component were fixed to those of the warmer
component. The absolute values of these abundances should be treated
with caution [e.g., Kim 2012; although our use of a two-temperature
model to fit the halo spectra should reduce the so-called ``Fe bias''
(Buote \& Fabian 1998; Buote 2000)], as implied by the large error
bars on these measurements in Table 2.
However, the relative abundances are considerably more robust. The
relative abundances of the $\alpha$-elements (Si, O, and Mg) with
respect to iron are $\sim$ 2 -- 4 $\times$ solar. Similar supersolar
$\alpha$-element abundances were recently found in the warm thermal
component of the halo of NGC~6240 (Nardini et al. 2013), a galaxy
merger that is caught in the pre-coalescence stage, i.e. at an earlier
stage of evolution than Mrk~231.
We discuss the implications of these results in \S 4.2.
\subsubsection{Neutral Outflow Region}
The spatially resolved Na~I~D outflow detected by Rupke et al. (2005c)
and mapped by RV11 and RV13 extends to at least $\sim$2.0 -- 2.5 kpc,
depending on azimuth angle. There is a region $\sim$1.0 -- 2.5 kpc
due north from the nucleus of Mrk~231 where the Na~I~D outflow
velocities are noticeably higher than along other directions (see
right panels in Figure 4 of RV11). RV11 argue that the outflowing
neutral gas in this region may be given an extra kick by the radio jet,
seen on smaller scale along this same direction (e.g., Carilli et
al. 1998; Ulvestad et al. 1999). With this in mind, we extracted
spectra from four separate quadrants within annulus \#2 ($R = 1.0 -
2.0$ kpc): one coinciding roughly with the possibly jet-influenced
region (PA = $-$45 -- $+$45$^\circ$) and three other regions of the
same radial and azimuthal extent but due east, south, and west (PA =
45 -- 135, 135 -- 225, and 225 -- 315$^\circ$, respectively). Note
that the radial extent of these regions was limited to 2.0 kpc,
instead of 2.5 kpc, to avoid contaminating emission from the
star-forming arc beyond 2 kpc which would affect the spectra in the
southern comparison region.
The fact that these spectral regions are at the same distance from the
nucleus eliminates any effects that may be due to the radial gradient
of the hardness ratio associated with the wings of the AGN component
discussed in the previous section. The four spectra were binned to at
least 25 counts per bin and fitted simultaneously, using the same
fitting method as that used for the annular regions.
As in \S 3.2.1, the soft X-ray emission
was modeled using either a single thermal MEKAL component with fixed
$kT$ or a single shock component with fixed $kT$. The strength of this
soft component was left as a free parameter to reflect variations
between the various regions, while the flux from the wings of the PSF
was held fixed. The results are shown in Figure 6 and tabulated in
Table 3 for the various models.
The hardness ratios tabulated in Table 3 indicate that the emission
from the eastern quadrant is slightly softer than that from the other
three regions. RV11 and RV13 have pointed out the presence of a
prominent H~II region $\sim$ 1 kpc due east of the nucleus (see Figure
20 of RV13); this H~II region likely contributes to the excess soft
X-ray emission detected in the eastern spectrum. A fainter H~II region
may also be present in the northern quadrant (see the
low-[N~II]/H$\alpha$ ``blob'' in the middle top panel of Figure 20 of
RV13) and may be contributing to the soft X-ray emission in this
region, although the hardness ratio at that location is not
significantly different from that in the western region. We see no
evidence (e.g., higher temperatures in Table 3) for jet interaction
with the ISM in the northern region. The spectral fits reveal the
presence of a line at 1.9 keV in the eastern quadrant and perhaps also
in the northern quadrant. This line is identified as Si~XIII 1.864
keV.
The fits require super-solar Si abundances that may reflect
$\alpha$-element enhancement from star formation at these particular
locations.
Most notably, the fits indicate a clear deficit of soft X-rays in the
western outflow region. This region shows the faintest H$\alpha$
emission and some of the largest Na~ID equivalent widths (Figures 20
and 18d of RV13, respectively). X-ray absorption by the column of
neutral material at this location ($N_H \sim 10^{22}$ cm$^{-2}$ from
RV13) may explain the weaker soft X-ray flux. Such photoelectric
absorption would harden the soft X-ray spectrum by preferentially
removing the softest X-rays and would thus translate into higher fit
temperatures. To test this possibility, we searched for azimuthal
temperature variations by allowing the temperature of the MEKAL
component to vary for the fits in Table 3. In this case, all quadrants
were assumed to have the same abundances, resulting in Si =
4.7$^{+4.1}_{-2.3}$ solar, Fe = 0.3$^{+0.3}_{-0.2}$ solar, and the
other elements held fixed at solar. The resulting temperatures were
$kT = 0.80^{+0.27}_{-0.19}$ keV, $0.70^{+0.19}_{-0.20}$ keV,
$0.26^{+0.42}_{-0.17}$ keV, and $0.88^{+0.29}_{-0.19}$ keV for the
eastern, southern, western, and northern regions, respectively (the
temperature in the western region is harder to constrain due to the
lower counts in this region). These numbers are all consistent with
each other within the (rather large) uncertainties of these
measurements. The deficit of soft X-rays in the western outflow region
therefore appears to be intrinsic rather than due to photoelectric
absorption by the neutral outflow. We return to this result in \S 4.1.
\subsubsection{Southern Tidal Arc}
Figure 4 shows the extraction windows used to characterize the X-ray
emission from the arc and three comparison regions of the same size
and radial extent (2.0 -- 4.5 kpc) as the arc region but located at
different position angles. The results of the spectral analysis are
shown in Figure 7 and tabulated in Table 4. The soft X-ray emission in
the western arc region is significantly weaker than in the other arc
regions. In contrast, the hard X-ray emission is very nearly constant,
resulting in a higher hardness ratio in the western region. This soft
X-ray flux deficit is analogous to the behavior at smaller radii (\S
3.2.2) and may thus be physically related to the outflow.
The only other significant difference between these spectra is the
detection of emission lines near $\sim$1.8 keV and perhaps also at
$\sim$1.2 keV in the arc region but not in the other regions. These
features are identified as Si XIII 1.864 keV and Ne X 1.211 keV (or Fe
XIX 1.258 keV), respectively. This excess line emission may reflect
$\alpha$-element enhancement due to the starburst in the arc region.
\subsubsection{Northern Tidal Tail}
A polygonal extraction region (Figure 8) was used to extract the 0.5
-- 2 keV spectrum in the region blocked by the northern tidal
tail. The same polygonal footprint was used to extract a comparison
spectrum in a region near the tidal tail, at approximately the same
distance from the center. The best-fit model from annulus \#4 is
scaled to fit the comparison spectrum. Then an absorption component is
added to the best-fit model, with increments of $\Delta N_H$ = 5
$\times$ 10$^{20}$ cm$^{-2}$ at a time, until the total 0.5 -- 2 keV
counts in the simulated spectrum correspond to the total counts from
the region affected by the tidal tail. The hydrogen column density
derived in this manner is 2.5 $\times$ 10$^{21}$ cm$^{-2}$. This
column density is formally a lower limit since foreground emission
would imply a larger column density. The value of this lower limit is
slightly smaller than the column density derived by Iwasawa et
al. (2011) in the tidal tail of Mrk~273, 6 $\times$ 10$^{21}$
cm$^{-2}$, but similar to the typical HI column density of an edge-on
galaxy disk (e.g., Begeman 1989; see also Barber et al. 1996 for
measurements of disk shadowing against the extragalactic X-ray
background). This difference in column densities between the tidal
tails of Mrk~231 and Mrk~273 may simply be an orientation effect: the
tidal tail in Mrk~273 appears thinner and less diffuse than the
northern tidal tail in Mrk~231, perhaps indicating a more edge-on orientation.
\subsubsection{Narrowband Line Images}
Guided by the detection of emission lines outside of the nucleus
(Figures 5, 6, and 7), we extracted narrowband line images to
investigate the two-dimensional spatial distribution of the
line-emitting gas. The results for Si XIII 1.864 keV, neutral or
slightly ionized Fe K$\alpha$ 6.4 keV, and the sum of He-like Fe XXV
6.7 keV and H-like Fe XXVI 6.9 keV are shown in Figure 9 (Mg XI 1.352
keV was too faint for this exercise). The derived strengths of the
line emission are unreliable in the central 3.0 kpc diameter region
due to Poisson noise from the very strong underlying AGN continuum;
this region is therefore masked in all three panels of Figure 9.
We find clear evidence for Si XIII emission outside of the nucleus,
extending at least $\sim$5 kpc south of the nucleus. Some of the
strongest Si XIII emission coincides with the southern
star-forming arc, confirming the results from our spectral analysis
(\S 3.2.3).
We do not find convincing evidence for Fe K$\alpha$ emission outside of
the nucleus. The small extension to the north-west in the middle panel
of Figure 9 is not statistically significant. A slightly more
significant extension is seen in the Fe XXV 6.7 keV + Fe XXVI 6.9 keV
line emission map (right panel). The spectrum extracted from a
2-arcsec diameter region centered on the brightest part of that
extension is shown in black in Figure 10. The spectrum in red is from
an equal-size region on the opposite side of the nucleus where there
is no obvious Fe XXV + Fe XXVI emission in the line emission map.
There is a difference of 16 counts between these two spectra in the
0.5-7 keV band, but some of this excess in counts could be due to a
slightly higher continuum level. A line seems to be present in the
black spectrum but not in the red one. Formally the fit shown in
Figure 10 gives $E$(line) = 6.629$^{+0.063}_{-0.044}$ keV, $EW$(line)
= 2.042$^{+1.693}_{-1.743}$ keV, and a narrow width ($\sigma$ = 0 keV
formally). This is thus a tentative detection of Fe XXV 6.7 keV
outside of the nucleus.
The fits to the annular regions discussed in \S 3.2.3 (Figure 5, Table
2) suggested the presence of Fe XXV 6.7 keV in annulus \#3 ($R = 6 -
16$ kpc), where the AGN does not contribute any continuum
emission. This line emission is not immediately obvious in the Fe XXV
+ Fe XXVI emission map. The spectrum extracted from annulus \#3 is
shown in more detail in Figure 11 using a finer binning than in Figure
5. The best fit ($\chi^2_\nu$ = 1.18) shown in this figure constrains
the equivalent width of the Fe XXV 6.7 keV line to be
2.09$^{+1.94}_{-1.40}$ keV. Given the large error bars, the detection
of this line is thus again only tentative.
\section{Discussion}
\subsection{The Quasar-Driven Wind of Mrk~231}
The main objective of the new {\em Chandra} data set was to provide
enough counts to allow us to carry out a detailed spectral imaging
analysis of the galactic-scale outflow region. The results of this
analysis were discussed in detail in \S 3.2. Everywhere in the
nebula, we find that thermal (MEKAL) models fit the data equally well
as simple shock models. The gas in the outer ($R > 6$ kpc) nebula is
characterized by a dominant ($\sim$90\%) thermal component at $kT \sim
0.7$ keV and a fainter ($\sim$10\%) component at $kT \sim 0.3$ keV. The
cooler component is not present in the inner ($R <$ 6 kpc) region. No
sign of O~VII 0.7 keV and O~VIII 0.9 keV absorption edges (``warm
absorbers'') is found in any of the spectra, further indicating the
absence of cool gas in the central region of Mrk~231. This result is
somewhat surprising since warm absorbers are ubiquitous in Seyferts
(e.g., Crenshaw et al. 2003) and QSOs (e.g., Piconcelli et al. 2005;
Teng \& Veilleux 2010).
A more detailed study of the outflow region, where we sliced the
annular region between $R$ = 1 and 2 kpc into four equal-size
quadrants (N, S, E, and W), reveals significant variations in hardness
ratios and intensities. The spectrum in the eastern outflow region is
found to be significantly softer and shows a larger Si/Fe abundance
ratio than the other outflow regions. This is likely due to the
presence of an HII region visible in the optical data (RV13). A
fainter HII region may also be affecting the spectrum in the northern
quadrant. A deficit of soft X-ray emission is present in the western
quadrant, coincident with fainter H$\alpha$ emission and some of the
largest neutral-gas Na~ID absorption columns. This is the strongest
evidence in our data that the hot X-ray emitting gas ``knows'' about
the massive neutral outflow in this object. There is also tentative
evidence for Fe XXV 6.7 keV line emission extending up to $\sim$3 kpc
north-west of the nucleus, but this is only a 2-sigma detection so it
is not very significant (Figures 9 -- 10).
It is perhaps surprising that our data do not show any obvious
temperature enhancements in the neutral outflow region given that the
velocities of the outflowing neutral gas are typically $\sim$1000 km
s$^{-1}$. One would naively expect this neutral outflow to interact
with the ambient ISM and produce strong ionizing shocks. For
non-radiative, strong shocks in a fully ionized monoatomic gas, the
post-shock temperature $T_{sh} = 3 \mu v^2_{sh}/16 k$, where $\mu$ is
the mean mass per particle and $k$ is the Boltzmann constant (McKee \&
Hollenbach 1980). For shock velocities of $\sim$1000 km s$^{-1}$, we
expect hot gas with $T \sim$ 1.6 $\times$ 10$^7$ K or $kT \sim$ 1.40
keV. The tentative detection of extended Fe XXV 6.7 keV emission would
imply shock velocities of $\ga$2000 km s$^{-1}$, if produced by a
collisionally ionized plasma with a temperature $T \sim 7 \times 10^7$
K as in NGC~6240 (Wang et al. 2014). These high velocities are not
seen in the neutral gas. Our fits do not formally rule out the
possibility that HMXBs at this location may be responsible for this
extended Fe XXV emission.
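For reference, the quoted numbers follow directly from the shock relation, assuming a mean mass per particle $\mu \approx 0.7\, m_p$ (our assumption; the adopted value is not stated explicitly in the text):
\[
kT_{sh} = \frac{3 \mu v_{sh}^2}{16} \approx
\frac{3 \times 0.7\, m_p \times (10^3\ {\rm km\ s^{-1}})^2}{16}
\approx 2.2 \times 10^{-9}\ {\rm erg} \approx 1.4\ {\rm keV},
\]
i.e., $T_{sh} \approx 1.6 \times 10^7$ K, as quoted above.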
These simple back-of-the-envelope arguments emphasize the fact that
one has to be very cautious when using the properties of the neutral
outflow to predict those of the hot X-ray emitting material. This may
not be too surprising given the huge disparity in temperatures
($<$1000 K versus $\sim$10$^7$ K) and densities ($>$1 cm$^{-3}$ versus
$\sim$10$^{-2}$ cm$^{-3}$; Table 3) between these two gas phases. It
would be more prudent to compare the properties of the X-ray emitting
gas with those of the warm ionized gas traced by H$\alpha$, which is
often found to be spatially correlated with the soft X-ray emission
(e.g., Cecil, Bland-Hawthorn, \& Veilleux 2002; Strickland et
al. 2004; Bolatto et al. 2013). Unfortunately, contaminating
H$\alpha$ emission from HII regions complicates the picture in Mrk~231
and the H$\alpha$ outflow is detected convincingly only within $\sim$1
kpc (RV13). Indeed, all of the evidence (RV11, RV13, Fischer et
al. 2010; Feruglio et al. 2010; Gonzalez-Alfonso et al. 2014) suggests
that the outflow of Mrk 231 is heavily mass-loaded with neutral and
molecular material but relatively little ionized gas; our new {\em
Chandra} data add support to this idea. The absence of hot-shocked
ionized gas in the wind region naively suggests that the outflow is
momentum-driven rather than energy-driven. However, this is not a
strong conclusion since we cannot strictly rule out the presence of a
very hot ($\gg$10$^7$ K) and tenuous wind fluid (as is present in the
energy-driven wind of M82; Griffiths et al. 2000; Stevens, Read, \&
Bravo-Guerrero 2003).
The absence of the cooler thermal component within $\sim$6 kpc
indicates either more efficient cooling on large scale or additional
heating on small scale. Adiabatic expansion of a free wind can
naturally explain negative temperature gradients but, as we just
mentioned, there is very little evidence that any of the soft X-ray
emitting gas detected within $\sim$6 kpc is directly associated with
the wind event. Moreover, there is no evidence at present that the
galactic wind in Mrk~231 extends beyond $\sim$3 kpc (RV11, RV13), let
alone $>$6 kpc (note however that this is purely a limitation of the
current optical data: Na ID absorption is not detected beyond $\sim$3
kpc because the galaxy continuum is too faint). Even if the wind did
extend beyond 6 kpc, it is not clear how the wind scenario could
explain the presence of the (dominant) warmer component on the largest
scale (out to $\sim$25 kpc from the nucleus). Finally, the drop in
X-ray surface brightness in Figure 2 is also less steep than the $\sim
R^{-3}$ profile expected in the case of a freely expanding wind (where
the gas and electron density profiles $n_g$ and $n_e$ go as $\sim
r^{-2}$, from mass conservation, and the X-ray surface brightness
$\Sigma_X \sim n_e^2 dV/dS$, where $dV$ stands for the volume element
and $dS$ is the surface element). We therefore favor the scenario
where the absence of the cooler component within 6 kpc is due to extra
heating.
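The $R^{-3}$ scaling invoked above can be recovered in one line: along a sight line at projected radius $R$, the emission integral is dominated by a path length of order $R$, so that
\[
\Sigma_X(R) \propto \int n_e^2\, dl \sim n_e^2(R)\, R \propto
\left(R^{-2}\right)^2 R = R^{-3}.
\]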
Known sources of heating in this region include the galactic-scale
wind, the circumnuclear starburst, and the quasar itself. We discuss
each in turn. The total kinetic energy of the AGN-driven
galactic-scale outflow in Mrk 231 is substantial ($\sim$ 6 $\times$
10$^{57}$ ergs) and dominated by the contributions from the neutral
and molecular components (RV11, RV13, Feruglio et al. 2010, Cicone et
al. 2012, and Gonzalez-Alfonso et al. 2014). For comparison, the {\em
total} thermal energy of the X-ray emitting gas in the inner ($R <$
6 kpc) region of Mrk~231 is $3 \eta n_e V k T \approx 10^{57}$ ergs,
where we used a filling factor $\eta \sim 1$ (volume-filling), an
electron density $n_e$ $\sim$ 2 $\times$ 10$^{-2}$ cm$^{-3}$ (Table 3;
$\propto \eta^{-1/2}$), a spherical volume $V \sim 2.4 \times 10^{67}$
cm$^3$, and a temperature of 8 $\times 10^6$ K (0.7 keV). The absence
of cooler ($\sim$0.3 keV) X-ray emitting gas in the inner region of
Mrk~231 may therefore be due to the influence of the AGN-driven
outflow on the ambient ISM, even if only a small fraction of this
kinetic energy is thermalized via shocks (e.g., Cox et al. 2006).
However, the lack of a clear signature of ionizing shocks in the
neutral outflow region and the current size limits on this outflow
($\ga$3 kpc) weaken this wind scenario.
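For completeness, the thermal-energy estimate above follows from the quoted values (with $\eta = 1$) as
\[
E_{th} \approx 3\,\eta\, n_e\, V\, kT \approx
3 \times (2 \times 10^{-2}\ {\rm cm^{-3}})
\times (2.4 \times 10^{67}\ {\rm cm^3})
\times (1.38 \times 10^{-16}\ {\rm erg\ K^{-1}})(8 \times 10^6\ {\rm K})
\approx 1.6 \times 10^{57}\ {\rm ergs}.
\]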
The star formation rate of Mrk~231 is $\sim$140 $M_\odot$ yr$^{-1}$
(Veilleux et al. 2009). This starburst therefore injects mechanical
energy in the surrounding medium at a rate $\sim$1 $\times$ 10$^{44}$
erg s$^{-1}$ (e.g., Veilleux et al. 2005). The age of this starburst
is not well constrained. Assuming a conservatively small value of $5
\times 10^6$ yrs for the starburst age, the starburst can contribute
up to 10$^{57}$ ergs if the thermalization efficiency is $>$10\%. This
is more than enough to explain the lack of a cool component in the
inner 6 kpc. However, there is no evidence that the starburst extends
much beyond $\sim$3 kpc, except in the southern tidal arc where the
starburst may be blowing its own low-velocity wind (RV11, RV13).
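The starburst energy budget sketched above, written out with the thermalization efficiency $\epsilon$ left explicit, is
\[
E_{SB} \approx \epsilon\, \dot{E}_{mech}\, t_{SB} \approx
\epsilon\, (10^{44}\ {\rm erg\ s^{-1}}) (5 \times 10^6\ {\rm yr})
\approx 1.6 \times 10^{58}\, \epsilon\ {\rm ergs},
\]
which exceeds $10^{57}$ ergs for $\epsilon \ga 0.1$, consistent with the statement above.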
The bolometric luminosity of the quasar in Mrk~231 is $\sim$3 $\times$
10$^{12}$ $L_\odot$ (Veilleux et al. 2009), but most of this energy is
emitted in the infrared and cannot contribute to heating the hot X-ray
emitting gas in the inner 6 kpc of the nucleus. The recent {\em
NuSTAR} data of Teng et al. (2014) indicate that Mrk~231 is
underluminous in the X-rays with an intrinsic absorption-corrected 0.5
-- 30 keV luminosity of 1.0 $\times$ 10$^{43}$ erg s$^{-1}$. Assuming
a quasar lifetime of 10$^7 - 10^8$ yrs (see, e.g., review by Martini
2004), only $\sim$2 -- 20\% of the 0.5 -- 30 keV total energy output
from the quasar in Mrk 231 needs to be absorbed within the inner 6 kpc
of the quasar to contribute $\sim$ 10$^{57}$ ergs to the heating of
the inner region. The analysis of the {\em NuSTAR} data indicates that
a patchy and Compton-thin ($N_H \sim 1.12^{+0.29}_{-0.24} \times
10^{23}$ cm$^{-2}$) column absorbs $\sim$80\% of the
intrinsic 0.5 -- 30 keV luminosity of the quasar in Mrk~231. Assuming
that our line of sight to the quasar is typical of other directions,
this energy is amply sufficient to explain the lack of a cool
component in the inner 6 kpc of Mrk 231. However, this energy is not
deposited at the right place. The absorbing column measured by {\em
NuSTAR} is spatially unresolved ($<$ 1 kpc) so the energy absorbed
by this material is deposited on a scale considerably smaller than 6
kpc. Perhaps some fraction of the quasar energy output that makes it
out of the nucleus is absorbed by material on $>$kpc scale, including
possibly the neutral atomic and molecular gas entrained in the
wide-angle outflow (large-scale ionization is seen in the
circumgalactic nebula of MR2251$-$178 for instance; Kreimeyer \&
Veilleux 2013). Another possibility is that the energy deposited on
sub-kpc scale is redistributed on larger scales via slower dynamical
processes, e.g., buoyant bubbles of hot gas mixing with the cooler
X-ray material, as seen in the intracluster medium of a growing number
of galaxy clusters (e.g., Hlavacek-Larrondo et al. 2012).
Our current data set does not allow us to discriminate between heating
by the galactic wind, the circumnuclear starburst, and the quasar.
Most likely, all three contribute at some level to the absence of the
cool thermal component within $\sim$6 kpc.
\subsection{Origin of the X-ray Halo}
The spectacular X-ray halo in Mrk~231 has many properties that are
similar to those of the halo in NGC~6240 (Nardini et al. 2013), a
galaxy merger at an earlier stage of evolution than Mrk~231. We argue
below that they likely have similar origins.
The total luminosity of the nebula around Mrk 231 is $\sim$2 $\times$
10$^{41}$ erg s$^{-1}$ (annuli \#1 -- \#4 in Table 2, excluding the
contribution from the wings of the AGN PSF and host HMXBs), similar to
that of the halo in NGC~6240 and, as pointed out by Nardini et al. (2013),
comparable to that of small groups of galaxies (Mulchaey 2000) and
giant ellipticals (Canizares et al. 1987; Mathews \& Brighenti 2003).
The full halo spectrum of NGC~6240 calls for two thermal (MEKAL)
components with temperatures $kT \sim$ 0.8 and 0.25 keV, while our
best-fit model for the halo in Mrk~231 involves two thermal components
with $kT$ $\sim$0.7 keV and $\sim$0.3 keV with the latter component
contributing only $\sim$10\% of the luminosity. Given the smaller
dimensions of the nebula around Mrk~231 ($\sim$65 $\times$ 50 kpc)
relative to that of NGC~6240 ($\sim$110 $\times$ 80 kpc), the
sound-crossing time, $D/c_s = D (5 kT/3 \mu m_p)^{-1/2}$, a rough
measure of the dynamical age of the halo, is proportionally smaller in
Mrk~231: $\sim$100 Myr versus $\sim$200 Myr. The thermal energy
content of the halo in Mrk 231, $E_{th} \sim 2 \times 10^{58}$ erg, is
also slightly smaller than that of the halo in NGC~6240 ($\sim$5
$\times$ 10$^{58}$ erg), but still considerable if we compare it to
the amount of kinetic energy deposited during the merger of two
identical progenitors, of order $M_g v_c^2/8$, where $M_g$ is the mass
of the X-ray emitting gas and $v_c$ is the relative speed during the
collision (Nardini et al. 2013). In the case of Mrk~231, $M_g = \eta
n_e V m_p \sim 7 \times 10^{9}$ $M_\odot$ so $v_c \sim 1400$ km
s$^{-1}$ is needed, which is substantially faster than typical head-on
collisions in non-cluster environments (e.g., Taffy Galaxies; Braine
et al. 2003). A comparison of the thermal energy content of the halo
in Mrk 231 with its X-ray luminosity implies a cooling timescale
$\sim$10$^{17}$ sec or $\sim$3 Gyr.
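These timescales follow from the quoted quantities. Taking $kT \approx 0.7$ keV (so $c_s \approx 430$ km s$^{-1}$ for an assumed mean mass per particle $\mu \approx 0.6\, m_p$) and $D \approx 65$ kpc,
\[
t_{dyn} \sim \frac{D}{c_s} \approx
\frac{2 \times 10^{23}\ {\rm cm}}{4.3 \times 10^{7}\ {\rm cm\ s^{-1}}}
\approx 5 \times 10^{15}\ {\rm s} \approx 150\ {\rm Myr},
\qquad
t_{cool} \sim \frac{E_{th}}{L_X} \approx
\frac{2 \times 10^{58}\ {\rm erg}}{2 \times 10^{41}\ {\rm erg\ s^{-1}}}
\approx 10^{17}\ {\rm s} \approx 3\ {\rm Gyr},
\]
of the same order as the values quoted above.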
As discussed in \S 4.1, the present-day starburst, AGN, and galactic
wind in the core of Mrk~231 no doubt have injected some energy in the
X-ray nebula and may be responsible for the absence of cool gas in the
inner ($<$ 6 kpc) region of the nebula. But, they are unlikely to have
supplied the entire thermal energy content of the X-ray halo given the
energetics of these processes (see \S 4.1). The large size of the
nebula also puts severe constraints on the duration of these events:
e.g., $\ga$100 Myr for an average velocity of $\la$500 km s$^{-1}$,
which seems at odds with the starburst age ($<$10$^7$ yrs) and the
dynamical age of the present-day galactic wind (a few $\times$ 10$^6$
yrs).
Independent constraints on the processes at work come from our
abundance analysis of the halo (annuli \#3 and \#4 in \S 3.2.1; Tables
2a and 2b). We measure $\alpha$-element (particularly Si, but also O
and Mg) abundances relative to iron that are $\sim$2 -- 4 $\times$
solar throughout the nebula without significant radial gradient. A
similar abundance pattern was found in the halo of NGC~6240 (Nardini
et al. 2013). Iron is mainly produced from type Ia supernovae (SNe;
i.e., exploded white dwarfs in close binary systems) on a timescale of
$\sim$0.1-1 Gyr, while the $\alpha$-elements come primarily from type
II SNe (i.e., core-collapsed massive stars) on timescales of a few
tens of Myr. Synthesis models for Type II SNe (e.g., Heger \& Woosley
2010; Nomoto et al. 2006, 2013) predict Si/Fe ratios of up to
$\sim$3-5 solar, while ratios of $\sim$0.5 solar are expected from
Type Ia SNe (e.g., Nagataki \& Sato 1998; Seitenzahl et al. 2013).
The supersolar Si/Fe ratios in the halos of Mrk~231 and NGC~6240
therefore suggest uniform enrichment by type II SNe out to the largest
scales ($\sim$65 $\times$ 50 kpc in the case of Mrk~231 and 110
$\times$ 80 kpc for NGC 6240). Supersolar Si/Fe ratios have been found
in the past [e.g., face-on spiral galaxies (Schlegel et al. 2003;
Soria \& Wu 2003), galaxy mergers like the Antennae (Baldi et
al. 2006a, 2006b), and the central regions of young elliptical
galaxies with sites of recent (a few tens of Myr) merger-induced star
formation (Kim et al. 2012)], but in all of these cases, the
supersolar Si/Fe ratios are measured on kpc scale, not several tens of
kpc as in Mrk~231 and NGC~6240.
In Mrk~231, we see evidence for $\alpha$-element enhancements near and
around the circumnuclear starburst (\S 3.2.2) but the on-going (age
$<$10$^7$ yrs) episode of active star formation ($SFR \sim$ 140
$M_\odot$ yr$^{-1}$) does not seem capable of explaining the
large-scale enhancements seen in the halo. The estimated mass in
silicon in the halo is $\sim$6 $\times$ 10$^6$ $M_\odot$, assuming a
solar silicon abundance (Table 2a).
The maximum silicon yield of type II SNe is $\sim$0.1 -- 0.3 $M_\odot$
for a massive-star progenitor with $Z \le 0.02$ (e.g., yield tables in
Nomoto et al. 2013) and thus implies the need for $\sim$3 $\times$
10$^7$ type II SNe or a sustained star formation rate of $\sim$140
$M_\odot$ yr$^{-1}$ over $\ge$10$^7$ yrs. Moreover, the metals
produced in the circumnuclear starburst need to be redistributed over
the entire $\sim$65 $\times$ 50 kpc nebula to be consistent with the
observations. In some cases, there is direct evidence that galactic
winds help carry the $\alpha$-element enhanced material produced by
starburst into the halos of galaxies (e.g., M82: Tsuru et al. 2007;
Ranalli et al. 2008; Konami et al. 2011 and references therein), but
the present-day galactic wind in Mrk~231 cannot be responsible for the
supersolar Si/Fe ratios throughout the halo of Mrk 231 unless the wind
actually extends $\sim$1 order of magnitude further than currently
measured ($\ga$3 kpc, RV11, RV13). Another, more likely, scenario is
that the $\alpha$-elements produced near the center have been
redistributed on larger scale by previous outflow events.
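The supernova budget behind these numbers can be written out explicitly. Assuming a per-event silicon yield $y_{Si} \approx 0.2\ M_\odot$ (the middle of the quoted range) and, as an illustrative IMF-dependent figure, one type II SN per $\sim$50 $M_\odot$ of stars formed,
\[
N_{SNII} \sim \frac{M_{Si}}{y_{Si}} \approx
\frac{6 \times 10^{6}\ M_\odot}{0.2\ M_\odot} = 3 \times 10^{7},
\qquad
M_* \sim 50\, N_{SNII} \approx 1.5 \times 10^{9}\ M_\odot,
\]
which corresponds to a star formation rate of $\sim$150 $M_\odot$ yr$^{-1}$ sustained over $10^7$ yrs, consistent with the requirement stated above.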
In NGC~6240, the warmer ($kT_1 \sim$ 0.8 keV) component of the halo is
distinctly more metal-rich ($Z_\alpha \sim$ 0.5 solar) than the cooler
($kT_2 \sim 0.25$ keV) component (fixed at $Z = 0.1$ solar). Nardini
et al. (2013) associate the first component with chemically-evolved,
starburst-processed gas and the second component with
gravitationally-bound, pre-existing halo material. The cooler halo
component in Mrk~231 is $\sim$10$\times$ weaker than the warmer
component so, unfortunately, our {\em Chandra} data do not have
sufficient counts to constrain the metal abundance in this fainter
component, hence its origin\footnote{For instance, we determined that
models with a low metal abundance of $Z = 0.1$ solar in the cooler
component fit the data equally well as models with $Z = 1$
solar.}. However, regardless of the exact origin of the cooler halo
component in Mrk~231, we mentioned in the previous paragraph that the
sheer amount of metals in the warmer component and its widespread
supersolar Si/Fe ratios point to enhanced star formation over
timescales comparable to the dynamical time ($\sim$100 Myr) to both
produce these metals and redistribute them across the halo via outflow
events. The merger itself may also help in redistributing the
metals. Mrk~231 is in the final throes of a major merger of two
gas-rich disk galaxies with masses similar to those of the Milky Way and
Andromeda (Cox \& Loeb 2008). The colliding gas in the parent disk
galaxies of the merger is shock-heated to X-ray-emitting temperatures
and eventually mixed with the pre-existing halo material to contribute
to the observed X-ray halo (Jog \& Solomon 1992; Cox et al. 2006).
The lack of systematic Si/Fe ratio variations across the halo of
Mrk~231 suggests that the collision and outflow events were efficient
at erasing abundance gradients.
Some fraction of the X-ray gas present in the halo of Mrk~231 will
likely be retained by the merger remnant and become the X-ray-emitting
halo of the resulting young elliptical galaxy. While solar Si/Fe
ratios are typically seen in the X-ray halos of present-day large
elliptical galaxies (e.g., Humphrey \& Buote 2006; Loewenstein \&
Davis 2010, 2012; Konami et al. 2014), supersolar Si/Fe ratios are
often measured in the stellar components of $>$L$^*$ ellipticals
(e.g., Worthey 1998; Graves et al. 2010; Conroy et al. 2014). The
stellar abundances are generally explained by invoking a short
timescale of star formation (efficient quenching before the onset of
type Ia SNe) or variations in the IMF. Given the mass outflow rate
and gas content of Mrk~231, the implied gas depletion time scale is
only 10 -- 20 Myr (Sturm et al. 2011; Gonzalez-Alfonso et
al. 2014). The outflow in Mrk~231 could thus quench star formation on
this short time scale, if the ejected gas does not return to the
center to form stars. This type of life-changing outflow event appears
to be common among local ULIRGs (Veilleux et al. 2013; Spoon et
al. 2013; Cicone et al. 2014). Assuming Mrk 231-like outflow events
are also common in major mergers in the early universe, they might
help explain the enhancement of $\alpha$-elements, relative to iron,
observed in the stellar population of local large elliptical galaxies.
\section{Conclusions}
We have combined a deep 400-ksec ACIS-S observation of Mrk~231 with
120-ksec archival data acquired with the same instrument and setup to
allow us to carry out the first spatially resolved spectral analysis
of a hot X-ray emitting circumgalactic nebula around a quasar. Mrk~231
is particularly well suited for this study since it is the nearest
quasar and is well known to host a powerful galactic-scale
outflow. The main results from our study are the following:
\begin{itemize}
\item The morphology of the $\sim$65 $\times$ 50 kpc X-ray nebula does
not resemble that of the tidal complex of Mrk~231 seen at optical
wavelengths. The only spatial match is the small tidal arc located
$\sim$3.5 kpc south of the nucleus, where excess soft X-ray
continuum emission and Si XIII 1.8 keV line emission are detected,
consistent with star formation and its associated $\alpha$-element
enhancement, respectively. We also detect a deficit in the soft
X-ray flux map at the position of the northern tidal tail,
suggesting that this structure casts an X-ray shadow due to a
hydrogen column density of at least 2.5 $\times$ 10$^{21}$
cm$^{-2}$.
\item The soft X-ray spectrum of the nebula beyond 6 kpc is best
described as the sum of two thermal components of temperatures
$\sim$3 and $\sim$8 million K with spatially uniform super-solar
$\alpha$ element abundances, relative to iron. A similar result was
recently found in the X-ray halo of the pre-merger NGC~6240.
Enhanced star formation activity over an extended period of time
($\sim$10$^8$ yrs) is needed to produce the vast amount of $\alpha$
elements detected in the nebula of Mrk~231. Multiple outflow events,
such as the on-going quasar-driven galactic wind, may help carry the
$\alpha$ elements produced in the circumnuclear region out to the
largest scales. Such wind-driven metal transport is directly seen to
take place in the nearby starburst M~82, albeit on a considerably
smaller scale. The stirring of the gas associated with the merger
itself may also redistribute the metals across the nebula and help
erase remaining abundance gradients.
\item The hard X-ray continuum emission in the inner ($\le$6 kpc)
nebula is consistent with being due entirely to the bright central
quasar and the wings of the {\em Chandra} point spread function.
The $\sim$3 million K thermal component detected in the halo is not
present within 6 kpc of the nucleus. No sign of O~VII 0.7 keV and
O~VIII 0.9 keV absorption edges (``warm absorbers'') is found in any
of the spectra, further indicating a lack of cool X-ray detectable
  gas in the central region of Mrk~231. Energetically, heating from
  the circumnuclear starburst, the central quasar, and the
  wide-angle quasar-driven outflow are each capable of explaining this
lack of cool gas. The strongest evidence in our data that the hot
X-ray emitting gas ``knows'' about the massive neutral/molecular
outflow in this object is a deficit of soft X-ray emission in the
western quadrant extending 1 -- 2 kpc (perhaps as far as 2 -- 4.5
kpc) from the nucleus. This region coincides with fainter H$\alpha$
emission and some of the largest columns of outflowing neutral gas
probed by observations of the Na I optical doublet. Shocks created
by the interaction of the wind with the ambient ISM may heat the gas
to high temperatures at this location. Indeed, there is tantalizing
(2-$\sigma$) evidence for Fe XXV 6.7 keV line-emitting gas extending
up to $\sim$3 kpc north-west of the nucleus. If produced by a
collisionally ionized plasma with a temperature $T \sim 7 \times
10^7$ K, as in the case of NGC~6240, this would imply shock
velocities $\ga$2000 km s$^{-1}$. HMXBs may also be responsible for
this extended line emission.
\end{itemize}
\acknowledgements We thank the anonymous referee for thoughtful and
constructive comments that improved this paper. Support for this work
was provided by NASA through {\em Chandra} contract GO2-13129X (S.V.)
and the NASA Postdoctoral Program (NPP) Fellowship (S.H.T., S.V.).
S.V. acknowledges support from the Alexander von Humboldt Foundation
for a ``renewed visit'' to Germany, and thanks the host institution,
MPE Garching, where a portion of this paper was written. He is also
grateful to R. Mushotzky for discussions of the interpretation of the
elemental abundances. This work has made use of NASA's Astrophysics
Data System Abstract Service and the NASA/IPAC Extragalactic Database
(NED), which is operated by the Jet Propulsion Laboratory, California
Institute of Technology, under contract with the National Aeronautics
and Space Administration.
\clearpage
\section{Introduction}
Fluid flows play an important role in engineering science. To understand the complex phenomena that arise in them, computing analytical solutions may be helpful. Indeed, an analytical solution may serve as a simplified model of the flow in a particular configuration. It may also be used to calibrate or test numerical schemes or turbulence models.
Several methods exist for finding particular solutions of partial differential equations (separation of variables, assuming axisymmetry, ...). Most of them consist in choosing an ansatz and reducing the equations accordingly. However, finding ansätze which effectively reduce the equations is not an easy task. A tool which provides a systematic way of finding such ansätze is Lie symmetry-group theory \cite{olver86,ibragimov}.
In incompressible fluid mechanics, the Lie group theory has been used to find analytical solutions of the Navier--Stokes equations \cite{fushchych94}, and particularly vortex-like ones \cite{grassi00}, to study turbulence \cite{unal97,oberlack01,ejm07} or to model particular flows such as boundary layers \cite{khujadze04} and thin isothermal and non-isothermal shear layers \cite{nova09}. In this article, we deal with the compressible case, where only a few exact solutions are available. The reader may refer to papers such as \cite{colonius91,curle71,tsangaris00,sachdev05} for analytical solutions. Most of them are either one-dimensional or axisymmetric. The aim of the present work is to carry out a group theoretic analysis of the governing equations and to propose a wide class of analytical solutions.
In section \ref{section:equation}, the hypotheses on the flow are listed and the velocity-pressure-density formulation of the compressible Navier--Stokes equations is presented. In section \ref{symmetry}, the Lie method of symmetry computation is briefly recalled. The Lie point-symmetry group of the equations and its Lie algebra are then analysed. It will be shown that the radical of the Lie algebra is solvable. The symmetry group will be used to find ansätze and reduce the equations in the subsequent sections. We do not intend to be exhaustive. Rather, some interesting self-similar solutions, such as vortex-like ones, are presented, in order to complete the set of solutions available in the literature. In sections \ref{steady}, \ref{unsteady} and \ref{3d}, steady bidimensional, unsteady bidimensional and three-dimensional cases are respectively considered.
\section{Model equations\label{section:equation}}
The motion of a fluid is governed by the Navier-Stokes equations \cite{batchelor_2000}:
\begin{equation}\begin{cases}
\td{\rho}{t}+\rho\div\u=0
\\\\
\rho\td{\u}{t}= \div\sigma
\\\\\displaystyle
\rho\td{}{t}\left(e+\cfrac{\u^2}{2}\right)=\div(\sigma\u+\kappa\nabla T)
\end{cases}\label{comp0}
\end{equation}
in the absence of body force. In these equations, $\rho$, $\u=(u,v,w)$, $e$ and $T$ are respectively the density, the velocity, the internal energy and the temperature. $\kappa$ is the thermal diffusion coefficient, taken constant.
With the hypothesis of a Newtonian fluid, the stress tensor $\sigma$ writes:
\begin{equation}
\sigma=\left(-p-\cfrac{2\mu}{3}\,\div\u\right)\tsr{I_d}+2\mu\tsr{S}.
\end{equation}
where $p$ is the pressure field, $\mu$ is the dynamic viscosity, $\tsr{I_d}$ is the three-dimensional identity matrix and $\tsr{S}$ is the strain rate tensor
\begin{equation}
\tsr{S}=\cfrac{\nabla\vt{u}+\tpleft{\,\nabla\vt{u}}}{2}.
\end{equation}
The variation of $\mu$ with the temperature is neglected. Equations (\ref{comp0}) can also be formulated as follows
\begin{equation}\begin{cases}
\td{\rho}{t}+\rho\div\u=0
\\\\
\rho\td{\u}{t}= \div\sigma
\\\\\displaystyle
\rho\td{e}{t}=\sigma:\tsr{S}+\kappa\Delta T
\end{cases}\label{comp}
\end{equation}
where the double dot sign stands for the Frobenius inner product:
\[ \sigma:\tsr S=\operatorname{tr}(\tpleft{\sigma}\ \tsr S)=\sum_{i,j}\sigma_{ij}S_{ij}. \]
Assume that the fluid is an ideal gas. We then have:
\begin{equation}
p=\rho TR \label{p}
\end{equation}
where $R$ is the gas constant. Moreover, the internal energy is proportional to the temperature, that is
\begin{equation}
e=C_vT \label{e}
\end{equation}
where $C_v$ is the (constant) specific heat at constant volume.
Inserting relations (\ref{e}) and (\ref{p}) in the energy equation of (\ref{comp}) and using the mass balance equation, we get the density-velocity-pressure formulation (see also \cite{toutant17,zhang92,garnier09}):
\begin{equation}\begin{cases}
\td{\rho}{t}+\rho\div\u=0
\\\\
\rho\td{\u}{t}= \div\sigma
\\\\
\cfrac{C_v}{R}\left(\td{p}{t}+p\div \u\right)=\sigma:\tsr{S}+\cfrac{\kappa}{R}\ \Delta \left(\cfrac{p}{\rho}\right)
\end{cases}\label{nsc}
\end{equation}
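The substitution leading to the last equation of (\ref{nsc}) can be written out explicitly. With $e=C_vT$ and $T=p/(\rho R)$, the left-hand side of the energy equation of (\ref{comp}) becomes
\[ \rho\td{e}{t}=\cfrac{C_v}{R}\ \rho\td{}{t}\left(\cfrac{p}{\rho}\right) =\cfrac{C_v}{R}\left(\td{p}{t}-\cfrac{p}{\rho}\ \td{\rho}{t}\right) =\cfrac{C_v}{R}\left(\td{p}{t}+p\div\u\right), \]
the last equality following from the mass balance equation. The right-hand side is treated similarly, using $\kappa\Delta T=\cfrac{\kappa}{R}\ \Delta(p/\rho)$.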
Expliciting material derivatives with the relation
\begin{equation}
\td{}{t}=\pd{}{t}+(\u\cdot\nabla),
\end{equation} one gets a (five-dimensional) partial differential equation
\begin{equation}
\mathbf{E}\left(t,\vt{x},\u_{(2)},p_{(2)},\rho_{(2)}\right)=0
\label{nsc3}
\end{equation}
where $\u_{(2)}$, $p_{(2)}$ and $\rho_{(2)}$ gather $u$, $v$, $w$, $p$, $\rho$ and all of their partial derivatives up to second order. A componentwise expression of equation (\ref{nsc3}) is given in appendix \ref{componentwise}, equation (\ref{nsc2}).
\section{Lie symmetry group and Lie algebra\label{symmetry}}
In this section, we study the Lie symmetry group admitted by equations (\ref{nsc}) and its Lie algebra. The suited framework to compute Lie symmetry groups is the language of jet space but to simplify the presentation, we avoid its introduction. More details can be found in \cite{olver86,bluman02,hydon00a,ibragimov}.
A transformation
\begin{equation}
T\ :\ (t,\vt{x},\u,p,\rho)\longmapsto(\hat{t},\hat{\vt{x}},\hat{\u},\hat{p},\hat{\rho})
\label{transformation}\end{equation}
is called a (point-)symmetry of equations (\ref{nsc3}) if it transforms any solution of (\ref{nsc3}) into another solution, that is
\begin{equation}
\mathbf E (t,\vt{x},\u_{(2)},p_{(2)},\rho_{(2)})=0\quad\quad\Longrightarrow\quad\quad \mathbf E(\hat{t},\hat{\vt{x}},\hat{\u_{(2)}},\hat{p_{(2)}},\hat{\rho_{(2)}})=0.\label{sym}
\end{equation}
where $\hat{\u_{(2)}}$, $\hat{p_{(2)}}$ and $\hat{\rho_{(2)}}$ represent the transforms $\hat{\u}$, $\hat{p}$, $\hat{\rho}$ of $\u$, $p$, $\rho$ and their partial derivatives with respect to $\hat t$ and $\hat{\vt{x}}$ up to second order. Our aim is to find all the local Lie symmetry groups of (\ref{nsc}), that is, families of symmetries
\[ G=\{T_{\epsilon} : (t,\vt{x},\u,p,\rho)\longmapsto(\hat{t},\hat{\vt{x}},\hat{\u},\hat{p},\hat{\rho})\ |\ \epsilon\in I\subset\mathbb{R},\, T_{\epsilon} \text{ symmetry of (\ref{nsc})}\}\]
having the structure of a local Lie group. For the sake of simplicity, we assume that the group is additive. In particular, $0\in I$ and $T_{\epsilon=0}$ is the identity transformation.
Computing local Lie symmetry groups is generally easier if symmetry condition (\ref{sym}) is linearized in the vicinity of the identity. To this aim, consider the vector field
\begin{equation}X=\xi^t\pd{}{t}+\xi^x\pd{}{x}+\xi^y\pd{}{y}+\xi^z\pd{}{z} +\xi_u\pd{}{u}+\xi_v\pd{}{v}+\xi_w\pd{}{w} +\xi_p\pd{}{p}+\xi_\rho\pd{}{\rho} \label{generator}\end{equation}
where the components
\begin{equation}
\xi^r(t,\vt{x},\u,p,\rho)=\left.\td{\hat{r}}{\epsilon}\right|_{\epsilon=0} ,\quad r=t,x,y,z,
\end{equation}
represent the infinitesimal variations of the independent variables under the action of $G$ and
\begin{equation}
\xi_q(t,\vt{x},\u,p,\rho)=\left.\td{\hat{q}}{\epsilon}\right|_{\epsilon=0} ,\quad q=u,v,w,p,\rho,
\end{equation}
represent those of the dependent variables. Vector field $X$ is called the generator of $G$. According to the Lie group theory \cite{olver86,ibragimov}, if $G$ is a Lie symmetry group of (\ref{nsc}) then
\begin{equation}
\left.pr^{(2)}X\cdot \mathbf E\right|_{\mathbf E=0}=0
\label{symcond}\end{equation}
where $pr^{(2)}X$ is the second-order prolongation of $X$. It writes:
\[ pr^{(2)}X=X+X^{(1)}+X^{(2)} \]
where
\begin{equation}
X^{(1)}=\sum_q\sum_{s}\xi^s_q\pdfrac{}{q_s}
\label{x1}
\end{equation}
takes into account the infinitesimal variation of first order partial derivatives under the action of $G$ (see also equation (\ref{x1components})) and
\begin{equation}
X^{(2)}=\displaystyle\sum_q\sum_{r,s}\xi^{rs}_q\pdfrac{}{q_{rs}}
\label{x2}
\end{equation}
acts on second order derivatives.
In (\ref{x1}) and (\ref{x2}), the sums are over all dependent variables $q=u,v,w,p,\rho$ and over all independent variables $r,s=t,x,y,z$.
The coefficients of $X^{(1)}$ and $X^{(2)}$ are linked to those of $X$ by the relations:
\begin{equation} \xi_q^s=D_s\xi_q-\sum_{m=t,x,y,z}q_mD_s\xi^m,\label{x1relation}\end{equation}
\begin{equation} \xi_q^{rs}=D_s\xi_q^r-\sum_{m=t,x,y,z}q_{rm}D_s\xi^m, \label{x2relation}\end{equation}
$D_r$ being the total derivation operator with respect to $r$.
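For instance, for the generator $X_{11}$ given below, $\xi^t=2t$, $\xi^x=x$ and $\xi_u=-u$, so that relation (\ref{x1relation}) yields
\[ \xi_u^x=D_x(-u)-u_tD_x(2t)-u_xD_x(x)=-2u_x, \]
in agreement with the transform $\hat u_{\hat x}=\e^{-2\epsilon}u_x$ of the first derivative under the corresponding scale transformation.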
Infinitesimal symmetry condition (\ref{symcond}) applied to (\ref{nsc}) leads to a system of partial differential equations on the $\xi^r$ and $\xi_q$. After solving this system, one finds that $X$ generates a Lie symmetry of (\ref{nsc}) if it is a linear combination of the following vector fields (see appendix \ref{componentwise}):
\begin{equation} X_1=\pd{}{t}, \qquad X_2=\pd{}{x}, \qquad X_3=\pd{}{y}, \qquad X_4=\pd{}{z}, \nonumber\end{equation}
\begin{equation} X_5=t\pd{}{x}+\pd{}{u}, \qquad X_6=t\pd{}{y}+\pd{}{v}, \qquad X_7=t\pd{}{z}+\pd{}{w}, \nonumber\end{equation}
\begin{equation} X_8=y\pd{}{z}-z\pd{}{y}+v\pd{}{w}-w\pd{}{v}, \qquad X_9=z\pd{}{x}-x\pd{}{z}+w\pd{}{u}-u\pd{}{w}, \nonumber\end{equation}
\begin{equation} X_{10}=x\pd{}{y}-y\pd{}{x}+u\pd{}{v}-v\pd{}{u}, \nonumber\end{equation}
\begin{equation} X_{11}=2t\pd{}{t}+x\pd{}{x}+y\pd{}{y}+z\pd{}{z}-u\pd{}{u}-v\pd{}{v}-w\pd{}{w}-2p\pd{}{p}, \nonumber\end{equation}
\begin{equation} X_{12}=x\pd{}{x}+y\pd{}{y}+z\pd{}{z}+u\pd{}{u}+v\pd{}{v}+w\pd{}{w}-2\rho\pd{}{\rho}. \nonumber\end{equation}
The generic element $T_{\epsilon}$ of the one-dimensional Lie symmetry group generated by each of these vector fields can be computed by solving the equations
\begin{equation}
\begin{cases}
\td{\hat r}{\epsilon}=\xi^r(\hat t,\hat{\vt{x}},\hat{\u},\hat p,\hat{\rho}),\quad r=t,x,y,z,\\\\
\td{\hat q}{\epsilon}=\xi_q(\hat t,\hat{\vt{x}},\hat{\u},\hat p,\hat{\rho}),\quad q=u,v,w,p,\rho,\\\\
\hat r(\epsilon=0)=r,\\\\
\hat q(\epsilon=0)=q.
\end{cases}
\end{equation}
These groups combine into the 12-dimensional Lie symmetry group $G$ of equations (\ref{nsc}), generated by the following point transformations:
\begin{itemize}
\item time translations, obtained from $X_1$:
\begin{equation} (t,\vt{x},\u,p,\rho)\mapsto(t+\epsilon,\vt{x},\u,p,\rho)\label{time},\end{equation}
\item space translations, encoded by $X_2$, $X_3$, and $X_4$:
\begin{equation} (t,\vt{x},\u,p,\rho)\mapsto(t,\vt{x}+\vt{\epsilon},\u,p,\rho),\end{equation}
\item Galilean transformations, corresponding to $X_5$, $X_6$, $X_7$:
\begin{equation} (t,\vt{x},\u,p,\rho)\mapsto(t,\vt{x}+\vt{\epsilon}t,\u+\vt{\epsilon},p,\rho), \end{equation}
\item rotations, induced by $X_8$, $X_9$ and $X_{10}$:
\begin{equation} (t,\vt{x},\u,p,\rho)\mapsto(t,\tsr{R}\vt{x},\tsr{R}\u,p,\rho), \end{equation}
\item and the two scale transformations, generated respectively by $X_{11}$ and $X_{12}$:
\begin{equation} (t,\vt{x},\u,p,\rho)\mapsto(\e^{2\epsilon}t,\e^{\epsilon}\vt{x},\e^{-\epsilon}\u,\e^{-2\epsilon}p,\rho), \end{equation}
\begin{equation} (t,\vt{x},\u,p,\rho)\mapsto(t,\e^{\epsilon}\vt{x},\e^{\epsilon}\u,p,\e^{-2\epsilon}\rho). \label{scale2}\end{equation}
\end{itemize}
In these expressions, $\epsilon$, $\vt{\epsilon}$ and $\tsr{R}$ are respectively an arbitrary scalar, an arbitrary vector and a 3D rotation matrix.
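A direct verification that the first scale transformation maps solutions to solutions can be made term by term. In the momentum balance, for instance,
\[ \hat\rho\ \td{\hat u}{\hat t}=\e^{-3\epsilon}\rho\td{u}{t}, \qquad \pd{\hat p}{\hat x}=\e^{-3\epsilon}\pd{p}{x}, \qquad \mu\pd{^2\hat u}{\hat x^2}=\e^{-3\epsilon}\mu\pd{^2u}{x^2}, \]
so that each term is multiplied by the same factor and the equation is left invariant; the mass and energy equations scale by $\e^{-2\epsilon}$ and $\e^{-4\epsilon}$ respectively.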
Vector fields $X_i$, $i=1,\cdots,12$, constitute a basis of the 12-dimensional Lie algebra $\mathfrak{g}$ of the Lie symmetry group $G$. The commutation table of $\mathfrak{g}$ is presented on Table \ref{commutationtable}. It shows that the subalgebra
\begin{equation}
\mathfrak{g}^\text{rad}=\operatorname{span}(X_1,X_2,X_3,X_4,X_5,X_6,X_7,X_{11},X_{12})
\end{equation} is solvable. Indeed, the derived series terminates in the zero algebra:
\begin{equation}\begin{array}{rll}
[\mathfrak{g}^\text{rad},\mathfrak{g}^\text{rad}]&=&\operatorname{span}(X_1,X_2,X_3,X_4,X_5,X_6,X_7),\\[5pt]
[[\mathfrak{g}^\text{rad},\mathfrak{g}^\text{rad}],\mathfrak{g}^\text{rad}]&=&\operatorname{span}(X_2,X_3,X_4),\\[5pt]
[[[\mathfrak{g}^\text{rad},\mathfrak{g}^\text{rad}],\mathfrak{g}^\text{rad}],\mathfrak{g}^\text{rad}]&=&\{0\}.
\end{array}\end{equation}
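Each entry of Table \ref{commutationtable} results from a direct computation. For instance,
\[ [X_1,X_5]=\left[\pd{}{t}\,,\ t\pd{}{x}+\pd{}{u}\right]=\pd{}{x}=X_2, \qquad [X_1,X_{11}]=2\pd{}{t}=2X_1, \]
which illustrates in particular that $\mathfrak{g}^\text{rad}$ is closed under the Lie bracket.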
\begin{table}\small
$$\begin{array}{c|cccc|ccc|ccc|cc}
&X_1&X_2&X_3&X_4&X_5&X_6&X_7&X_8&X_9&X_{10}&X_{11}&X_{12}
\\\hline
X_1&0&0&0&0&X_2&X_3&X_4&0&0&0&2X_1&0
\\
X_2&0&0&0&0&0&0&0&0&-X_4&X_3&X_2&X_2
\\
X_3&0&0&0&0&0&0&0&X_4&0&-X_2&X_3&X_3
\\
X_4&0&0&0&0&0&0&0&-X_3&X_2&0&X_4&X_4
\\\hline
X_5&-X_2&0&0&0&0&0&0&0&-X_7&X_6&-X_5&X_5
\\
X_6&-X_3&0&0&0&0&0&0&X_7&0&-X_5&-X_6&X_6
\\
X_7&-X_4&0&0&0&0&0&0&-X_6&X_5&0&-X_7&X_7
\\\hline
X_8&0&0&-X_4&X_3&0&-X_7&X_6&0&-X_{10}&X_9&0&0
\\
X_9&0&X_4&0&-X_2&X_7&0&-X_5&X_{10}&0&-X_8&0&0
\\
X_{10}&0&-X_3&X_2&0&-X_6&X_5&0&-X_9&X_8&0&0&0
\\\hline
X_{11}&-2X_1&-X_2&-X_3&-X_4&X_5&X_6&X_7&0&0&0&0&0
\\
X_{12}&0&-X_2&-X_3&-X_4&-X_5&-X_6&-X_7&0&0&0&0&0
\end{array}$$\caption{Commutation table of $\mathfrak{g}$}\label{commutationtable}
\end{table}
The subalgebra $\mathfrak{g}^\text{s}=\operatorname{span}(X_8,X_9,X_{10})$ of infinitesimal rotations is semi-simple. The Levi decomposition of $\mathfrak{g}$ is then
\begin{equation} \mathfrak{g}=\mathfrak{g}^\text{rad}\oplus \mathfrak{g}^\text{s}
\end{equation}
$\mathfrak{g}^\text{rad}$ being the radical.
In the next sections, successive reductions are applied to the equations and some self-similar solutions are computed.
We begin with steady bidimensional solutions.
\section{Steady bidimensional solutions}\label{steady}
In this section, we take $w=0$ and look for solutions invariant under both $X_1$ and $X_4$, that is, steady and bidimensional solutions. We write:
\begin{equation*}u(t,x,y)=u_1(x,y), \qquad v(t,x,y)=v_1(x,y), \qquad p(t,x,y)=p_1(x,y),\end{equation*}
\begin{equation} \rho(t,x,y)=\rho_1(x,y). \end{equation}
The reduced equations write:
\begin{eqnarray}\begin{cases}
\pd{\rho_1 u_1}{x}+\pd{\rho_1 v_1}{y}=0 ,
\\[10pt]
\rho_1\left(u_1\pd{u_1}{x}+v_1\pd{u_1}{y}\right) +\pd{p_1}{x}=
\cfrac{\mu}{3}\left(4\pd{^2u_1}{x^2}+\ppd{v_1}{x}{y} +3\pd{^2u_1}{y^2}\right),
\\[10pt]
\rho_1\left(u_1\pd{v_1}{x}+v_1\pd{v_1}{y}\right) +\pd{p_1}{y}=
\cfrac{\mu}{3}\left(\ppd{u_1}{x}{y}+4\pd{^2v_1}{y^2} + 3\pd{^2 v_1}{x^2}\right),
\\[10pt]
\cfrac{C_v}{R}\left(\pd{p_1u_1}{x}+\pd{p_1v_1}{y}\right)=\sigma:\tsr{S}+\cfrac{\kappa }{R}
\left(\pd{^2p_1/\rho_1}{x^2}+\pd{^2p_1/\rho_1}{y^2}\right).
\end{cases}\label{eqx4x1}\end{eqnarray}
To find solutions, we reduce these new equations further. The Lie symmetry algebra of equations (\ref{eqx4x1}) is spanned by:
\begin{equation}\begin{array}{l}
Y_1=\pd{}{x}, \hspace{6mm} Y_2=\pd{}{y}, \hspace{6mm} Y_3=x\pd{}{y}-y\pd{}{x}+u_1\pd{}{v_1}-v_1\pd{}{u_1}, \\[10pt]
Y_4=x\pd{}{x}+y\pd{}{y}-u_1\pd{}{u_1}-v_1\pd{}{v_1}-2p_1\pd{}{p_1}, \\[10pt]
Y_5=x\pd{}{x}+y\pd{}{y}+u_1\pd{}{u_1}+v_1\pd{}{v_1}-2\rho_1\pd{}{\rho_1}.
\end{array}\end{equation}
\subsection{Reduction with $Y_1$}
Reduction under vector field $Y_1$ suggests a solution of the form
\begin{eqnarray}u_1(x,y)=u_2(y), \qquad v_1(x,y)=v_2(y), \\\\ p_1(x,y)=p_2(y), \qquad \rho_1(x,y)=\rho_2(y).\end{eqnarray}
Inserting these expressions in equations (\ref{eqx4x1}), it follows:
\begin{equation}\begin{cases}
(\rho_2v_2)'=0, \\
\rho_2v_2u_2'=\mu u_2'',\\
3\rho_2v_2v'_2+3p_2'=4\mu v_2'',\\
3C_v(p_2v_2)'=R(4\mu v_2'^2-3v_2'p_2+3\mu u_2'^2)+3\kappa (p_2/\rho_2)''.
\end{cases}\label{x2x1y1}\end{equation}
In equations (\ref{x2x1y1}) and in the rest of the document, a prime symbol designates the derivative of a function of a single variable. One solution of (\ref{x2x1y1}) is
\begin{equation}\rho_2(y)=\cfrac{a}{v_2(y)},\qquad u_2(y)=u_3+u_4\e^{\frac{ay}{\mu}},\qquad v_2(y)=v_3\e^{\frac{ay}{\mu}}, \qquad p_2(y)=p_3\e^{\frac{ay}{\mu}}\end{equation}
where $a$, $u_3$, $u_4$, $v_3$ and $p_3$ are constants linked by the relations
\begin{equation}p_3=\cfrac{av_3}{3} \eqspc{and} u_4^2=\left(\cfrac{2C_v}{3R}-\cfrac{4\kappa}{3R\mu}-1\right)v_3^2.\end{equation}
We deduce the following class of solutions of (\ref{nsc}):
\begin{equation}\begin{mybox}\\[-10pt]\displaystyle
u(t,x,y)=u_3\pm\sqrt{\cfrac{2C_v\mu-4\kappa-3R\mu}{3R\mu}}\ v_3\e^{\frac{ay}{\mu}}, \qquad v(t,x,y)=v_3\e^{\frac{ay}{\mu}},
\\[10pt]\displaystyle
p(t,x,y)=\cfrac{av_3}{3}\e^{\frac{ay}{\mu}}, \qquad \rho(t,x,y)=\cfrac{a}{v_3}\e^{-\frac{ay}{\mu}}. \\[-10pt]
\end{mybox}\label{solx1x2x4}\end{equation}
It can be observed that the pressure and the density are respectively proportional and inversely proportional to $v$.
When $u_3=0$, the flow is parallel, as can be observed in Figure \ref{figx1x2x4}, left. Indeed, in a suitable orthogonal frame,
$$u=u_4\e^{\frac a\mu(cx+dy)},\hspc v=0$$
for some constants $u_4$, $c$ and $d$. This solution exhibits an exponential growth of the velocity norm in both $x$ and $y$ directions.
\begin{figure}[ht]\centering
\includegraphics[width=37mm]{x1x2x40}\hfill
\includegraphics[width=37mm]{x1x2x41}\hfill
\includegraphics[width=37mm]{x1x2x42}
\caption{Left: $\u=(\e^y,\e^y)$. Center: $\u=(1+\e^y,\e^y)$. Right: $\u=(1-\e^y,\e^y)$}
\label{figx1x2x4}
\end{figure}
Some other simple solutions belonging to class (\ref{solx1x2x4}) are plotted in Figure \ref{figx1x2x4}.
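Solution (\ref{solx1x2x4}) can be checked directly on system (\ref{x2x1y1}): the product $\rho_2v_2=a$ is constant, so the first equation holds, and
\[ \rho_2v_2u_2'=a\ \cfrac{au_4}{\mu}\ \e^{\frac{ay}{\mu}}=\mu u_2'', \]
so the second one is satisfied as well; the two remaining equations provide the algebraic relations on $p_3$ and $u_4$.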
\subsection{Reduction with $Y_3$}
Self-similar solutions under the infinitesimal rotation $Y_3$ write:
\begin{equation} u_1=u_2(r)\cos \theta-v_2(r)\sin \theta, \qquad v_1=u_2(r)\sin \theta+v_2(r)\cos \theta,
\nonumber\end{equation} \begin{equation}p_1=p_2(r),\qquad \rho_1=\rho_2(r) \end{equation}
where $(r,\theta)$ are the polar coordinates. The equations are reduced into
\begin{equation}\begin{cases}
(r\rho_2 u_2)'=0
\\[5pt]
\rho_2\left(u_2u'_2-\cfrac{v_2^2}{r}\right)+p_2'=\cfrac{4\mu}{3} \left(u_2''+\cfrac{u_2'}{r}-\cfrac{u_2}{r^2}\right)
\\[15pt]
\rho_2\left(u_2v_2'-\cfrac{u_2v_2}{r}\right)=\mu
\left(v_2''+\cfrac{v_2'}{r}-\cfrac{v_2}{r^2}\right)
\\[15pt]
\cfrac{C_v}{R}\left((p_2u_2)'+\cfrac{p_2u_2}{r}\right)=E+\cfrac{\kappa}{R} \left( (p_2/\rho_2)''+\cfrac{(p_2/\rho_2)'}{r}\right)
\end{cases}\label{x2x1y2}\end{equation}
with
\begin{equation}E= -\cfrac{p_2(ru_2)'}{r}+\cfrac{\mu}{3r^2} \left( 4u_2^2+3v_2^2-4ru_2u_2'-6rv_2v_2'+4r^2u_2'^2+3r^2v_2'^2 \right). \nonumber\end{equation}
These equations admit the following infinitesimal symmetry
\begin{equation}
r\pd{}{r}-2u_2\pd{}{u_2}-2v_2\pd{}{v_2}+\rho_2\pd{}{\rho_2}-3p_2\pd{}{p_2}.
\end{equation}
A self-similar solution under this symmetry verifies:
\begin{equation}
u_3=r^2u_2(r), \qquad v_3=r^2v_2(r), \qquad p_3=r^3p_2(r), \qquad \rho_3=\cfrac{\rho_2(r)}{r}.
\label{varx2x1y2}\end{equation}
where $u_3$, $v_3$, $p_3$ and $\rho_3$ are constants. Inserting (\ref{varx2x1y2}) into equations (\ref{x2x1y2}) leads to the following algebraic relations between these constants:
\begin{equation}
\begin{cases}
\rho_3u_3=-\mu\\
\rho_3v_3^2+3p_3+2\mu u_3=0\\
3p_3(4C_v\mu+R\mu-16\kappa)=\mu R\rho_3(28u_3^2+27v_3^2).
\label{eqx2x1y22}\end{cases}
\end{equation}
The velocity components verify:
\[u(r,\theta)=\cfrac{u_3\cos \theta-v_3\sin \theta}{r^2}, \quad
v(r,\theta)=\cfrac{u_3\sin \theta+v_3\cos \theta}{r^2}.\]
If $\vt{\mathrm{e}}_r$ and $\vt{\mathrm{e}}_\theta$ designate the unitary radial and angular vectors, then the solution, in polar coordinates, is
\begin{equation}
\begin{mybox} \vt u(r,\theta)=\cfrac{1}{r^2}(u_3\vt{\mathrm{e}}_r+v_3\vt{\mathrm{e}}_\theta),\hspc p(r,\theta)=\cfrac{p_3}{r^3}, \hspc \rho(r,\theta)=\rho_3r \\[-10pt]\end{mybox}
\label{solx2x1y22}\end{equation}
This solution represents a steady vortex flow. It is sketched in Figure \ref{vortex_sink}. The constant $u_3$ being negative (since $\rho_3u_3=-\mu$), the origin is a sink. The sign of $v_3$ determines the direction of rotation. The velocity magnitude increases as $r^{-2}$ towards the sink.
\begin{figure}\centering
\includegraphics[width=4cm]{vortex_sink_r2}
\caption{Steady vortex sink. $\|\u\|\propto r^{-2}$}\label{vortex_sink}
\end{figure}
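As a verification, inserting ansatz (\ref{varx2x1y2}) into the third equation of (\ref{x2x1y2}), with $u_2=u_3/r^2$, $v_2=v_3/r^2$ and $\rho_2=\rho_3r$, gives
\[ \rho_2\left(u_2v_2'-\cfrac{u_2v_2}{r}\right)=-\cfrac{3\rho_3u_3v_3}{r^4} \eqspc{and} \mu\left(v_2''+\cfrac{v_2'}{r}-\cfrac{v_2}{r^2}\right)=\cfrac{3\mu v_3}{r^4}, \]
from which the first relation $\rho_3u_3=-\mu$ of (\ref{eqx2x1y22}) follows.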
\subsection{Reduction with $Y_4$}
A basis of invariants under the scale transformation generated by $Y_4$ is
\begin{equation} \eta=\cfrac{y}{x}, \hspc u_2(\eta)=xu_1(x,y), \hspc v_2(\eta)=xv_1(x,y),\nonumber \end{equation}
\begin{equation} p_2(\eta)=x^2p_1(x,y), \hspc \rho_2(\eta)=\rho_1(x,y).\end{equation}
The reduced equations are
\begin{equation}
\begin{cases}
(\rho_2v_2-\rho_2u_2\eta)'=0
\\ \displaystyle
\rho_2\left(v_2u'_2-u_2(u_2\eta)'\right)-p_2'\eta-2p_2+\cfrac{\mu}{3}\left(v_2\eta-3u_2-4u_2\eta^2\right)''=0
\\ \displaystyle
\rho_2\left(v_2v'_2-u_2(v_2\eta)'\right)+p_2'+\cfrac{\mu}{3}\left(u_2\eta-4v_2-3v_2\eta^2\right)''=0
\\ \displaystyle
C_v\left(p_2(v_2-u_2\eta)\right)'-2C_vp_2u_2+Rp_2(v_2-u_2\eta)'-R\mu\tsr S_2+
\\\displaystyle
\qquad\qquad\qquad\qquad\qquad \kappa\left(\left(\cfrac{p_2}{\rho_2}\right)''\eta^2+6\left(\cfrac{p_2}{\rho_2}\right)'\eta+6\,\cfrac{p_2}{\rho_2}\right)=0
\end{cases}
\label{eqx1x4y4}
\end{equation}
where
\[ \tsr S_2=\cfrac43(v'_2-u'_2\eta-u_2)^2+\cfrac43v'_2u_2+(u'_2-v'_2\eta-v_2)^2.\]
A solution of system (\ref{eqx1x4y4}) is
\begin{equation}
u_2(\eta)=\cfrac{(\rho_3-v_3\eta)u_3}{\rho_3(1+\eta^2)},\hspc v_2(\eta)=u_2(\eta)\eta+\cfrac{v_3}{\rho_2(\eta)}, \nonumber\end{equation}
\begin{equation} p_2(\eta)=\cfrac{(\rho_3^2+v_3^2)u_2(\eta)}{2(v_3\eta-\rho_3)}, \hspc \rho_2(\eta)=\cfrac{\rho_3-v_3\eta}{(1+\eta^2)u_2(\eta)}
\end{equation}
where $u_3$ and $v_3$ are constants and
\begin{equation}\rho_3=\cfrac{-2\kappa+4\mu R}{C_v}.\end{equation}
As a result,
\begin{equation}\begin{mybox}\\
u(t,x,y)=\cfrac{(\rho_3x-v_3y)u_3}{(x^2+y^2)\rho_3}, \hspc v(t,x,y)=\cfrac{(\rho_3y+v_3x)u_3}{(x^2+y^2)\rho_3}, \\\\ p(t,x,y)=\cfrac{-(\rho_3^2+v_3^2)u_3}{2(x^2+y^2)\rho_3}, \hspc \rho(t,x,y)=\cfrac{\rho_3}{u_3}
\\\end{mybox}\end{equation}
This is an incompressible solution, also representing a vortex, but with a velocity magnitude proportional to $r^{-1}$. If $u_3>0$, it models a swirling source and when $u_3<0$, the origin is a sink. These two cases are represented in Figure \ref{vortex_r1} with $|u_3|=1$ and $v_3/\rho_3=1$.
\begin{figure}[ht]\centering
\includegraphics[width=4cm]{vortex_sink_r1}\qquad\quad
\includegraphics[width=4cm]{vortex_source_r1}
\caption{Vortex sink (left) and source (right). $\|\u\|\propto r^{-1}$}\label{vortex_r1}
\end{figure}
\subsection{Reduction with a linear combination of $Y_1$ and $Y_2$}
Invariant solutions under a linear combination $-bY_1+aY_2$, for some constants $a$ and $b$, can be written as follows:
\begin{eqnarray}
u_1(x,y)=u_2(\eta), \qquad v_1(x,y)=v_2(\eta), \\[5pt]p_1(x,y)=p_2(\eta), \qquad \rho_1(x,y)=\rho_2(\eta),
\label{mby1ay2}\end{eqnarray}
where the self-similarity variable is
\[
η=ax+by.
\]
Relations (\ref{mby1ay2}) transform equations (\ref{eqx4x1}) into
\begin{equation}
\begin{cases}
a(\rho_2u_2)'+b(\rho_2v_2)'=0
\vspace{5pt}\\
3\rho_2\left( au_2u_2'+bv_2u_2'\right) +3ap_2'= \mu \big( (4a^2+3b^2)u_2''+abv_2''\big)
\vspace{5pt}\\
3\rho_2\left(au_2v_2'+bv_2v_2'\right) +3bp_2'= \mu \big((4b^2+3a^2)v_2''+abu_2''\big)
\vspace{4pt}\\
\cfrac{C_v}{R}\big(a(p_2u_2)'+b(p_2v_2)'\big) =\sigma:\tsr{S} +(a^2+b^2)\cfrac{\kappa}{R} (p_2/\rho_2)''
\end{cases}
\end{equation}
One solution can easily be found if $u_2$ and $v_2$ are linear. In this case
\begin{eqnarray}
u_2(\eta)=u_3\eta, \qquad v_2(\eta)=\cfrac{-au_3\eta}{b}, \qquad p_2(\eta)=p_3\\\\
\rho_2(\eta)=\cfrac{-2\kappa p_3b^2}{\mu u_3^2R\eta^2(a^2+b^2)+2\kappa p_3(\rho_3-\rho_4\eta^2)}.
\end{eqnarray}
where $u_3$, $p_3$, $\rho_3$ and $\rho_4$ are constants. We get:
\begin{equation}
\begin{mybox}
u(t,x,y)=(ax+by)u_3, \quad v(t,x,y)=-\cfrac{a}{b}(ax+by)u_3, \quad p(t,x,y)=p_3, \\[10pt]
\rho(t,x,y)=\cfrac{-2\kappa p_3b^2}{\mu u_3^2R(ax+by)^2(a^2+b^2)+2\kappa p_3(\rho_3-\rho_4(ax+by)^2)}, \\[-10pt]\end{mybox}
\end{equation}
This solution represents a parallel flow whose direction reverses across the line $ax+by=0$, with a uniform pressure. It is represented in Figure \ref{parallel} in the case $a>0$ and $b>0$. When $a=0$, the flow is parallel to the $x$-axis.
\begin{figure}\centering
\includegraphics[width=4cm]{parallel}
\caption{Parallel flow}\label{parallel}
\end{figure}
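The mass balance is satisfied in a noteworthy way here: the velocity field is divergence-free,
\[ \div\u=\pd{u}{x}+\pd{v}{y}=au_3-au_3=0, \]
and $\u\cdot\nabla\rho=0$, since the velocity is everywhere orthogonal to $\nabla\rho$, which is parallel to $(a,b)$.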
In the next section, we seek unsteady solutions.
\section{Unsteady bidimensional solutions}\label{unsteady}
In the case of unsteady bidimensional flow, the equations are
\begin{eqnarray}\begin{cases}
\pd{\rho}{t}+\pd{\rho u}{x}+\pd{\rho v}{y}=0
\\[10pt]
\rho\left(\pd{u}{t}+u\pd{u}{x}+v\pd{u}{y}\right) +\pd{p}{x}=
\cfrac{\mu}{3}\left(4\pd{^2u}{x^2}+\ppd{v}{x}{y} +3\pd{^2u}{y^2}\right)
\\[10pt]
\rho\left(\pd{v}{t}+u\pd{v}{x}+v\pd{v}{y}\right) +\pd{p}{y}=
\cfrac{\mu}{3}\left(\ppd{u}{x}{y}+4\pd{^2v}{y^2} + 3\pd{^2 v}{x^2}\right)
\\[10pt]
\cfrac{C_v}{R}\left(\pd{p}{t}+\pd{pu}{x}+\pd{pv}{y}\right)=\sigma:\tsr{S}+\cfrac{\kappa }{R}
\left(\pd{^2p/\rho}{x^2}+\pd{^2p/\rho}{y^2}\right)
\end{cases}
\label{unsteady_bidim}
\end{eqnarray}
The Lie algebra of these equations is spanned by $X_1$, $X_2$, $X_3$, $X_5$, $X_6$, $X_{10}$, $X_{11}$
and $X_{12}$ (without the terms in $\pd {}z$ and $\pd {}w$). Let us begin with solutions homogeneous in the $x$ direction, {\it i.e.} invariant under $X_2$.
\subsection{Reduction with $X_2$}
$X_2$-invariant solutions are of the form
\begin{equation}u(t,x,y)=u_1(t,y), \qquad v(t,x,y)=v_1(t,y), \nonumber\end{equation} \begin{equation}p(t,x,y)=p_1(t,y), \qquad \rho(t,x,y)=\rho_1(t,y).\end{equation}
In this case, equations (\ref{unsteady_bidim}) reduce into
\begin{eqnarray}\begin{cases}
\pd{\rho_1}{t}+\pd{\rho_1 v_1}{y}=0
\\[10pt]
\rho_1\left(\pd{u_1}{t}+v_1\pd{u_1}{y}\right)\! =
\mu\pd{^2u_1}{y^2}
\\[10pt]
\rho_1\left(\pd{v_1}{t}+v_1\pd{v_1}{y}\right)\! =-\pd{p_1}{y}+
\cfrac{4\mu}{3}\ \pd{^2v_1}{y^2}
\\[10pt]
\cfrac{C_v}{R}\!\left(\!\pd{p_1}{t}+\pd{p_1v_1}{y}\!\right)\!\!=\!-p_1\pd{v_1}{y}
\!+\!\cfrac{4\mu}{3}\!\left(\!\pd{v_1}{y}\!\!\right)^2 \!\!+\!\mu\!\left(\!\pd{u_1}{y}\!\!\right)^2\!\! +\!\cfrac{\kappa }{R}
\pd{^2(p_1/\rho_1)}{y^2}
\end{cases}\label{eqx2}\end{eqnarray}
These equations admit the following infinitesimal symmetries
\[Z_1=\pd{}{t}, \hspc Z_2=\pd{}{y}, \hspc Z_3=\pd{}{u_1}, \hspc Z_4=t\pd{}{y}+\pd{}{v_1}\]
\[Z_5=2t\pd{}{t}+y\pd{}{y}-u_1\pd{}{u_1}-v_1\pd{}{v_1}-2p_1\pd{}{p_1}, \]
\[Z_6=y\pd{}{y}+u_1\pd{}{u_1}+v_1\pd{}{v_1}-2\rho_1\pd{}{\rho_1}.\]
A basis of invariants under $Z_4$ is
\begin{equation}v_2(t)=v_1(t,y)-\cfrac{y}{t}, \qquad u_1(t,y)=u_2(t), \nonumber\end{equation}\begin{equation} p_1(t,y)=p_2(t), \qquad \rho_1(t,y)=\rho_2(t).\end{equation}
The corresponding solution for $t>0$ is
\begin{equation}\begin{mybox}
u(t,x,y)=u_3, \qquad\qquad v(t,x,y)=\cfrac{v_3+y}{t}, \\[5pt] p(t,x,y)=\cfrac{4\mu+p_3\ t^{-R/C_v}}{3t}, \qquad
\rho(t,x,y)=\cfrac{\rho_3}{t}, \\[-10pt]\end{mybox}\label{eq:unsteady_2d_y5}\end{equation}
where $u_3$, $v_3$, $p_3$ and $\rho_3$ are constants. The pressure and density are uniform in space but time-dependent. The flow is sketched in Figure \ref{unsteady_2d_y5} for a fixed $t>0$ and $v_3=0$. The $x$-axis can be seen as a wall. When $u_3=0$, the flow is parallel to the $y$-axis.
\begin{figure}[ht]\centering
\includegraphics[width=4cm]{unsteady_2d_y5_0}\qquad\quad
\includegraphics[width=4cm]{unsteady_2d_y5_1}
\caption{Solution (\ref{eq:unsteady_2d_y5}). Left: $u_3=0$. Right $u_3>0$}\label{unsteady_2d_y5}
\end{figure}
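Solution (\ref{eq:unsteady_2d_y5}) can be verified directly. The mass balance reduces to
\[ \pd{\rho}{t}+\pd{\rho v}{y}=-\cfrac{\rho_3}{t^2}+\cfrac{\rho_3}{t^2}=0, \]
the momentum equations hold because the material accelerations $\td{u}{t}$ and $\td{v}{t}$ vanish while the pressure is uniform in space, and the energy equation then fixes the time dependence of $p$.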
Invariants under the vector field $Z_5$ are
\begin{equation}
\eta=\cfrac{y}{\sqrt{t}}, \qquad u_2(\eta)=yu_1(t,y), \qquad v_2(\eta)=yv_1(t,y), \nonumber\end{equation}\begin{equation} p_2(\eta)=y^2p_1(t,y), \qquad\rho_2(\eta)=\rho_1(t,y).
\end{equation}
These invariants reduce equations (\ref{eqx2}) into
\[
\begin{cases}
-\eta^3\rho_2'+2(v_2\rho_2'+v_2'\rho_2)\eta-2\rho_2v_2=0 \\[5pt]
-\eta^3\rho_2u'_2+2\rho_2v_2(u_2'\eta-u_2)+2\mu(-\eta^2u''_2+2\eta u'_2-2u_2)=0 \\[5pt]
-\eta^3\rho_2v_2'+2\rho_2(v_2v'_2\eta-v_2^2)+2\eta p_2'-4p_2+\cfrac{8\mu}{3}(-\eta^2v_2''+2\eta v_2'-2v_2)=0 \\[5pt]
\cfrac{C_v}{R}\big( -\eta^3p_2'+2v_2p'_2\eta-6v_2p_2 +2\eta p_2v_2'\big) = 2p_2\big(-\eta v'_2+ v_2\big)
\\
\qquad +2\mu\big(\eta^2u_2'^2-2\eta u_2u'_2+u_2^2 \big) +\cfrac{8\mu}{3} \big( \eta^2v_2'^2 -2\eta v_2'v_2+v_2^2\big)
+\cfrac{2\kappa\eta^4}{R}\bigg( \cfrac{p_2}{\eta^2\rho_2}\bigg)''
\end{cases}
\]
A solution of these equations can be found with the ansatz:
\begin{equation}
u_2(\eta)=u_3\eta^2, \qquad v_2(\eta)=v_3\eta^2, \qquad p_2(\eta)=p_3\eta^2, \qquad \rho_2=\rho_3\eta^{-2}
\end{equation}
where $u_3$, $v_3$, $p_3$ and $\rho_3$ are constants.
Inserting these relations into the equations implies relations on these constants and leads to the following parallel flow:
\begin{equation}\begin{mybox}
u(t,x,y)=\cfrac{u_3y}{t}, \qquad\qquad v(t,x,y)=\cfrac{y}{t}, \\[10pt] p(t,x,y)=\cfrac{\mu\rho_3R(4+3u_3^2)}{3t(\rho_3R-2\kappa)}, \qquad
\rho(t,x,y)=\cfrac{\rho_3t}{y^2}.
\\[-10pt]\end{mybox}\label{eq:unsteady_2d_x2y7}\end{equation}
The flow is plotted in Figure \ref{unsteady_2d_x2y7-y8}, left.
\begin{figure}[ht]\centering
\includegraphics[width=4cm]{unsteady_2d_x2y7}\qquad\quad
\includegraphics[width=4cm]{unsteady_2d_x2y8}
\caption{Left: solution (\ref{eq:unsteady_2d_x2y7}). Right: solution (\ref{eq:unsteady_2d_x2y8})}\label{unsteady_2d_x2y7-y8}
\end{figure}
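In solution (\ref{eq:unsteady_2d_x2y7}), the material acceleration vanishes,
\[ \pd{v}{t}+v\pd{v}{y}=-\cfrac{y}{t^2}+\cfrac{y}{t}\ \cfrac{1}{t}=0, \]
and the pressure, uniform in space, is fixed by the balance between the viscous dissipation $\sigma:\tsr{S}$ and the heat diffusion term in the energy equation.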
Self-similar solutions of equations (\ref{eqx2}) under $Z_6$ have the form
\begin{equation} u_1(t,y)=yu_2(t), \qquad v_1(t,y)=yv_2(t), \nonumber\end{equation}\begin{equation}
p_1(t,y)=p_2(t),\qquad \rho_1(t,y)=\rho_2(t)/y^2.\end{equation}
This change of variables simplifies the equations into
\begin{equation}\begin{cases}
-\rho_2'+v_2\rho_2=0\\
u_2'+u_2v_2=0\\
v_2'+v_2^2=0\\
\cfrac{3C_v}{R} (p_2'+p_2v_2)=-3p_2v_2+\mu(3u_2^2+4v_2^2)+\cfrac{6\kappa p_2}{R\rho_2}
\end{cases}\label{eqx2y8}\end{equation}
One solution of equations (\ref{eqx2y8}) is
\begin{equation}
u_2(t)=u_3, \qquad v_2(t)=0, \qquad \rho_2(t)=\rho_3, \nonumber\end{equation}\begin{equation} p_2(t)=p_3\exp\left(\frac{2\kappa t}{\rho_3C_v}\right) -\cfrac{\rho_3\mu Ru_3^2}{2\kappa},
\end{equation}
where $u_3$, $p_3$ and $\rho_3$ are arbitrary constants. We get the following self-similar solution under $X_2$ and $Z_6$:
\begin{equation}
\begin{mybox}\\[-10pt]
u(t,x,y)=u_3y, \hspc p(t,x,y)=p_3\exp\left(\cfrac{2\kappa t}{\rho_3C_v}\right) -\cfrac{\rho_3\mu Ru_3^2}{2\kappa}, \\[10pt]
v(t,x,y)=0, \hspc \rho(t,x,y)=\cfrac{\rho_3}{y^2}.
\\[-10pt]\end{mybox}\label{eq:unsteady_2d_x2y8}
\end{equation}
This solution is graphically presented in Figure \ref{unsteady_2d_x2y7-y8}, right. If one restricts the domain to $y\in[0,h]$ for some constant $h$, it may represent a Couette flow, with a uniform but time-dependent pressure field and a density depending on the wall distance.
Another solution of (\ref{eqx2y8}) is
\[
u_2(t)=\cfrac{u_3}{t+v_3},\qquad v_2=\cfrac{1}{t+v_3},\qquad \rho_2(t)=(t+v_3)\rho_3, \]
\[ p_2(t)=p_3\exp\left(\cfrac{2\kappa-\rho_3(C_v+R)}{\rho_3C_v}\ln (t+v_3)\right) +\cfrac{\rho_3R\mu(4+3u_3^2)}{3(t+v_3)(\rho_3R-2\kappa)}.
\]
where $u_3$, $v_3$, $p_3$ and $\rho_3$ are constants. Hence,
\begin{equation}
\begin{mybox}
u(t,x,y)=\cfrac{u_3y}{t+v_3}, \qquad
v(t,x,y)=\cfrac{y}{t+v_3}, \qquad\rho(t,x,y)=\cfrac{(t+v_3)\rho_3}{y^2}, \\[10pt]
p(t,x,y)=p_3\exp\!\left(\!\cfrac{2\kappa-\rho_3(C_v+R)}{\rho_3C_v}\ln (t+v_3)\!\!\right) \!+\!\cfrac{\rho_3R\mu(4+3u_3^2)}{3(t+v_3)(\rho_3R-2\kappa)}.
\\[-10pt]\end{mybox}\end{equation}
This solution has the same profile as solution (\ref{eq:unsteady_2d_x2y7}) which is plotted in the left part of Figure \ref{unsteady_2d_x2y7-y8}, but with an algebraic time evolution of the pressure instead of a hyperbolic one.
Still in the bidimensional case, we reduce equations (\ref{unsteady_bidim}) under the infinitesimal Galilean transformation $X_5$ and under infinitesimal scale transformations.
\subsection{Galilean transformation}
Self-similar solutions of (\ref{unsteady_bidim}) under $X_5$ can be expressed as follows
\begin{equation}
u(t,x,y)=u_1(t,y)+\cfrac{x}{t}, \qquad v(t,x,y)=v_1(t,y), \nonumber\end{equation}\begin{equation} p(t,x,y)=p_1(t,y), \qquad \rho(t,x,y)=\rho_1(t,y).
\end{equation}
The reduced equations read:
\begin{equation}
\begin{cases}
\pd{\rho_1}{t}+\cfrac{\rho_1}{t}+\pd{\rho_1 v_1}{y}=0
\\[5pt]
\rho_1\left(\pd{u_1}{t}+\cfrac{u_1}{t}+v_1\pd{u_1}{y}\right) =
\mu\left(\pd{^2u_1}{y^2}\right)
\\[10pt]
\rho_1\left(\pd{v_1}{t}+v_1\pd{v_1}{y}\right) +\pd{p_1}{y}=
\cfrac{4\mu}{3}\ \pd{^2v_1}{y^2}
\\[10pt]
\cfrac{C_v}{R}\left(\pd{p_1}{t}+\cfrac{p_1}{t}+\pd{p_1v_1}{y}\right)=\sigma:\tsr{S}+\cfrac{\kappa }{R}
\left(\pd{^2p_1/\rho_1}{y^2}\right)
\end{cases}\label{eqx5}
\end{equation}
with
\begin{equation}
\cfrac{\sigma:\tsr{S}}{\mu}=-\cfrac{p_1}{\mu}\left(\cfrac{1}{t}+\pd{v_1}{y} \right)+\left(\pd{u_1}{y}\right)^2
+\cfrac{4}{3}\left(\pd{v_1}{y}\right)^2+\cfrac{4}{3t^2} -\cfrac{4}{3t}\ \pd{v_1}{y} .
\end{equation}
The symmetries of (\ref{eqx5}) are generated by
\begin{multiequation} W_1=\pd{}{y}, \qquad W_2=\cfrac{1}{t}\pd{}{u_1}, \qquad W_3=t\pd{}{y}+\pd{}{v_1}, \\\\
W_{4}=2t\pd{}{t}+y\pd{}{y}-u_1\pd{}{u_1}-v_1\pd{}{v_1}-2p_1\pd{}{p_1}, \\\\ W_5=y\pd{}{y}+u_1\pd{}{u_1}+v_1\pd{}{v_1}-2\rho_1\pd{}{\rho_1}. \end{multiequation}
Solutions of (\ref{eqx5}) which are invariant under $W_1$ are $y$-independent. Solving equations (\ref{eqx5}) with this constraint leads to
\begin{equation}
u_1(t,y)=\cfrac{u_2}{t}, \qquad v_1(t,y)=v_2,\qquad p_1(t,y)=p_2t^{(-1-R/C_v)}+\cfrac{4\mu}{3t}, \nonumber\end{equation}\begin{equation} \rho_1(t,y)=\cfrac{\rho_2}{t}.
\end{equation}
where $u_2$, $v_2$, $p_2$ and $\rho_2$ are constants. Consequently,
\begin{equation}
\begin{mybox}
u(t,x,y)=\cfrac{x+u_2}{t}, \hspc
v(t,x,y)=v_2, \\[5pt]
p(t,x,y)=p_2t^{(-1-R/C_v)}+\cfrac{4\mu}{3t}, \hspc\rho(t,x,y)=\cfrac{\rho_2}{t}.
\\[-10pt]\end{mybox}\label{eq:unsteady_2d_x5y1}\end{equation}
This solution is a rotation of solution (\ref{eq:unsteady_2d_y5}). It is presented in Figure \ref{fig:unsteady_2d_x5y1}.
\begin{figure}[ht]
\centering
\includegraphics[width=4cm]{unsteady_2d_x5y1}
\caption{Solution (\ref{eq:unsteady_2d_x5y1}) with $t>0$ and $u_2=0$}
\label{fig:unsteady_2d_x5y1}
\end{figure}
If instead of $W_1$, we consider $W_3$ then the self-similar solutions of equations (\ref{eqx5}) are of the form:
\[ u_1(t,y)=u_2(t), \quad v_1(t,y)=v_2(t)+÷yt,\quad p_1(t,y)=p_2(t),\quad \rho_1(t,y)=ρ_2(t).\]
With these relations, the solutions to equations (\ref{eqx5}) is
\[ u_2=÷{u_3}t,\quad v_2=÷{v_3}t,\quad ρ_2=÷{ρ_3}{t^2},\quad p_2=÷{4Rµ}{3(2R+C_v)t}+p_3t^{-2-2R/C_v}\]
where $u_3,v_3,p_3$ and $ρ_3$ are arbitrary scalars. The corresponding solution of (\ref{nsc}) is
\begin{equation}
\begin{mybox}
u(t,x,y)=\cfrac{x+u_3}{t}, \qquad\qquad
v(t,x,y)=\cfrac{v_3+y}{t}, \\[5pt]\displaystyle
p(t,x,y)=÷{4Rµ}{3(2R+C_v)t}+p_3t^{-2-2R/C_v}, \hspc\rho(t,x,y)=\cfrac{\rho_3}{t^{2}}.
\\[-10pt]\end{mybox}\label{eq:unsteady_2d_x5y3}\end{equation}
It represents a source flow around the point $(-u_3,-v_3)$. The velocity field is plotted in Figure \ref{fig:unsteady_2d_x5y3}.
\begin{figure}[ht]
\centering
\includegraphics[width=4cm]{unsteady_2d_x5y3}
\caption{Solution (\ref{eq:unsteady_2d_x5y3}) with $t>0$}
\label{fig:unsteady_2d_x5y3}
\end{figure}
To find self-similar solutions under infinitesimal scale transformation $W_5$, we set:
\begin{multiequation}
u_1(t,y)=u_2(t)y, \qquad v_1(t,y)=v_2(t)y, \\\\ p_1(t,y)=p_2(t), \qquad\rho_1(t,y)=\cfrac{\rho_2(t)}{y^2}.
\end{multiequation}
It follows from the reduced equations that
\begin{equation}
v_2(t)=\cfrac{\delta}{t+v_3}, \qquad u_2(t)=\cfrac{u_3}{t(t+v_3)^\delta}, \qquad \rho_2(t)=\cfrac{(t+v_3)^\delta\rho_3}{t}
\end{equation}
where $u_3$ and $\rho_3$ are constants and $\delta=0$ or $1$. Thus,
\begin{equation}
\begin{mybox}\\[-10pt]
u(t,x,y)=\cfrac{u_3y}{t(t+v_3)^\delta}+\cfrac{x}{t}, \quad v(t,x,y)=\cfrac{y\delta }{t+v_3}, \quad
\rho(t,x,y)=\cfrac{(t+v_3)^\delta\rho_3}{ty^2}.\\[-10pt]\end{mybox}\label{eq:unsteady_2d_x5y5}
\end{equation}
The pressure $p(t)$ is the solution of the ordinary differential equation:
\begin{equation}
\cfrac{C_v}{R}p' +\cfrac{(C_v+R)(2t+v_3)^\delta\rho_3-2\kappa t^2}{\rho_3Rt(t+v_3)^\delta}\ p=\mu\cfrac{3u_3^2+4(t^2+v_3t+v_3^2)^\delta}{3t^2(t+v_3)^{2\delta}}.
\end{equation}
If $u_3=0$ and $δ=0$, the velocity field is similar to that of Figure \ref{unsteady_2d_y5}, but rotated by 90 degrees.
In other cases, the flow is graphically presented in Figure \ref{fig:unsteady_2d_x5y5}.
\begin{figure}[t]
\centering
\includegraphics[width=37mm]{unsteady_2d_x5y5_2}\hfill
\includegraphics[width=37mm]{unsteady_2d_x5y5_0}\hfill
\includegraphics[width=37mm]{unsteady_2d_x5y5_1}
\caption{Solutions (\ref{eq:unsteady_2d_x5y5}) with $t>0$ and $v_3=0$. \\Left: $u_3=0,δ=1$. Center: $u_3>0,δ=0$. Right: $u_3>0,δ=1$}
\label{fig:unsteady_2d_x5y5}
\end{figure}
Another solution of equations (\ref{eqx5}) can be found by setting $v_1$ constant and $u_1$ linear in $y$. This leads to the following solution of (\ref{nsc})
\begin{equation}
\begin{mybox}
u(t,x,y)=\cfrac{x+y+u_2}{t}-v_2, \hspc
v(t,x,y)=v_2, \\[5pt]
\rho(t,x,y)=\cfrac{1}{t}\ \cfrac{1}{ (y-v_2t)^2\rho_3+(y-v_2t)\rho_4+\rho_5}
\\[-10pt]\end{mybox}\end{equation}
where $u_2$, $v_2$, $\rho_3$, $\rho_4$ and $\rho_5$ are constants. $p(t)$ is the solution of
\begin{equation}
\cfrac{C_v}{R}p'+\cfrac{C_v+R-2\kappa \rho_3t^2}{Rt}\ p-\cfrac{7\mu}{t^2}=0.\label{eq:unsteady_2d_x5}
\end{equation}
\begin{figure}[ht]
\centering
\includegraphics[width=4cm]{unsteady_2d_x5}
\caption{Solution (\ref{eq:unsteady_2d_x5}) with $t=1$, $v_2=1$ and $u_2=0$}
\label{fig:unsteady_2d_x5}
\end{figure}
In the next subsection, we calculate bidimensional solutions of (\ref{nsc}) which are self-similar under scale transformations.
\subsection{Scale transformations}
Infinitesimal symmetry $X_{11}$ suggests a change of variables:
\begin{multiequation}
u(t,x,y)=\cfrac{u_1(\chi,\eta)}{\sqrt{t}}, \qquad v(t,x,y)=\cfrac{v_1(\chi,\eta)}{\sqrt{t}}, \qquad p(t,x,y)=\cfrac{p_1(\chi,\eta)}{t}, \\
\rho(t,x,y)=\rho_1(\chi,\eta) \eqspc{with}\chi=\cfrac{x}{\sqrt{t}}, \qquad \eta=\cfrac{y}{\sqrt{t}}.
\end{multiequation}
The equations of the new variables are:
\[\begin{cases}
-\cfrac{\chi}{2}\pd{\rho_1}{\chi}-\cfrac{\eta}{2}\pd{\rho_1}{\eta}+\pd{\rho_1u_1}{\chi}+\pd{\rho_1v_1}{\eta}=0
\\\\
\rho_1\left(-\cfrac{\chi}{2}\pd{u_1}{\chi}-\cfrac{\eta}{2}\pd{u_1}{\eta}-\cfrac{1}{2}u_1+u_1\pd{u_1}{\chi} +v_1\pd{u_1}{\eta}\right) =
\\\hspace{7.5em}-\pd{p_1}{\chi}+\cfrac{\mu}{3}\left(4\pd{^2u_1}{χ^2}+\ppd{v_1}{\chi}{\eta}+3\pd{^2u_1}{η^2}\right)
\\[10pt]
\rho_1\left(-\cfrac{\chi}{2}\pd{v_1}{\chi}-\cfrac{\eta}{2}\pd{v_1}{\eta}-\cfrac{1}{2}v_1+u_1\pd{v_1}{\chi} +v_1\pd{v_1}{\eta}\right) =
\\\hspace{7.5em}-\pd{p_1}{\eta}+\cfrac{\mu}{3}\left(4\pd{^2v_1}{η^2}+\ppd{u_1}{\chi}{\eta}+3\pd{^2v_1}{χ^2}\right)
\\[10pt]
\cfrac{C_v}{R}\left(-\cfrac{\chi}{2}\pd{p_1}{\chi}-\cfrac{\eta}{2}\pd{p_1}{\eta}-p_1+\pd{p_1u_1}{\chi} +\pd{p_1v_1}{\eta}\right) =
\\\hspace{7.5em}-p_1⦅\pd {u_1}{χ}+\pd {v_1}{η}⦆+µS'+\cfrac{\kappa}{R} ⦅\pd{^2}{χ^2}+\pd{^2}{η^2}⦆\cfrac{p_1}{\rho_1}
\end{cases}\]
where
\[
S'=÷43⟦⦅\pd {u_1}{χ}⦆^2+⦅\pd {v_1}{η}⦆^2-\pd {u_1}{χ}\pd {v_1}{η}⟧+⦅\pd {u_1}{η}+\pd {v_1}{χ}⦆^2.
\]
A solution to these equations is
\begin{equation}
\begin{array}{l}
u_1(\chi,\eta)=u_2\chi, \qquad v_1(\chi,\eta)=0, \qquad \rho_1(\chi,\eta)=\cfrac{\rho_2}{\chi^2}, \\ p_1(\chi,\eta)=\cfrac{4\mu\rho_2R}{3\rho_2R-6\kappa}
\end{array}
\end{equation}
for some constants $u_2$ and $ρ_2$. We deduce that
\begin{equation}
\begin{mybox}
u(t,x,y)=\cfrac{u_2x}{t}, \hspc
v(t,x,y)=0, \\[10pt]
p(t,x,y)=\cfrac{4\mu\rho_2R}{3(\rho_2R-2\kappa)t}, \hspc \rho(t,x,y)=\cfrac{\rho_2t}{x^2}.
\\[-10pt]\end{mybox}\end{equation}
To get self-similar solutions of the bidimensional equations (\ref{unsteady_bidim}) under $X_{12}$, one makes the change of variables:
\begin{eqnarray}\eta=\cfrac{y}{x}, \qquad u_1(t,\eta)=\cfrac{u(t,x,y)}{x}, \qquad v_1(t,\eta)=\cfrac{v(t,x,y)}{x}, \\[10pt] p_1(t,\eta)=p(t,x,y), \qquad \rho_1(t,\eta)=x^2\rho(t,x,y).\end{eqnarray}
The equations become:
\begin{equation}\begin{cases}
\pd{\rho_1}{t}+\pd {}{η}\bigg[\rho_1(v_1-u_1\eta)\bigg]=0 \\[10pt]
\rho_1\left[\pd{u_1}{t}+u_1^2+(v_1-u_1\eta) \pd{u_1}{η}\right]-\eta\pd {p_1}{η} =\mu\left(\cfrac{4\eta^2}{3}+1\right) \ppd{u_1}{η}{η}-\mu\cfrac{η}{3} \ppd{v_1}{η}{η}
\\[10pt]
\rho_1\left[\pd{v_1}{t}+u_1v_1+(v_1-u_1η)\pd{v_1}{η}\right]+\pd {p_1}{η}=-µ\cfrac{η}{3} \ppd{u_1}{η}{η}+ \mu \left(\cfrac{4}{3}+\eta^2\right) \ppd{v_1}{η}{η}
\\[10pt]\displaystyle
\cfrac{C_v}{R}\left[\pd{p_1}{t}+(v_1-ηu_1)\pd {p_1}{\eta}\right]= -⦅÷{C_v}{R}+1⦆p_1⦅u_1-η\pd {u_1}{η}+\pd {v_1}{η}⦆\\
\hspace{35mm} +µ\tsr D+\cfrac{\kappa}{R}\left[(1+\eta^2)\ppd{}{η}{η}\cfrac{p_1}{\rho_1} -2\eta\pd{}{η}\cfrac{p_1}{\rho_1}+2\cfrac{p_1}{\rho_1}\right]
\end{cases}\label{eqx2x12}\end{equation}
where
\[
\begin{array}{r}\displaystyle
\tsr D=-÷23⦅u_1-η\pd {u_1}{η}+\pd {v_1}{η}⦆^2+2⦅u_1-η\pd {u_1}{η}⦆^2+2⦅\pd {v_1}{η}⦆^2
\\\displaystyle
+⦅\pd {u_1}{η}-η\pd {v_1}{η}+v_1⦆^2
\end{array}
\]
A solution of (\ref{eqx2x12}) is
\begin{equation} u_1(t,\eta)=\cfrac{1}{t}, \qquad v_1(t,\eta)=\cfrac{\eta}{t}, \qquad p_1(t,\eta)=p_2(t), \qquad \rho_1(t,\eta)=\rho_2 \end{equation}
where $\rho_2$ is a constant, and $p_2(t)$ is the solution of
\begin{equation} 3C_v\rho_2t^2p'_2(t)+6t(C_v\rho_2-\kappa t+\rho_2R)p_2(t)-4\mu\rho_2R=0 \label{whitt}\end{equation}
With the original variables, we get:
\begin{equation}\begin{mybox}u(t,x,y)=\cfrac{x}{t}, \hspc v(t,x,y)=\cfrac{y}{t}, \\ p(t,x,y)=p_2(t), \hspc \rho(t,x,y)=\cfrac{\rho_2}{x^2}.\\[-10pt]\end{mybox}\end{equation}
The pressure $p_2(t)$ can be written in terms of Whittaker $M$ functions \cite{olver10}. For instance, when $C_v\rho_2=1$, if we call $b=\rho_2R$ then
\begin{equation} p_2(t)=\frac{2b\mu \e^{\kappa t} M_{b,b+1/2}(2\kappa t)}
{3(2b+1)\kappa t^2(2\kappa t)^b}+p_3\cfrac{\e^{2\kappa t}}{t^{2+2b}}
\end{equation}
$p_3$ being a constant.
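A sketch of a symbolic check: with $C_v\rho_2=1$ and $b=\rho_2R$, equation (\ref{whitt}) reads $3t^2p_2'+6t(1+b-\kappa t)p_2-4\mu b=0$, so the second term of $p_2(t)$ must solve its homogeneous part. This can be confirmed with SymPy:

```python
import sympy as sp

t, kappa, b, p3 = sp.symbols('t kappa b p_3', positive=True)

# Homogeneous part of (whitt) with C_v*rho_2 = 1 and b = rho_2*R:
#   3*t**2*p' + 6*t*(1 + b - kappa*t)*p = 0
ph = p3*sp.exp(2*kappa*t)/t**(2 + 2*b)
residual = sp.simplify(3*t**2*sp.diff(ph, t) + 6*t*(1 + b - kappa*t)*ph)
print(residual)   # → 0
```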
Other solutions of (\ref{eqx2x12}) can be obtained knowing that these equations admit the following infinitesimal symmetry:
$$(1+\eta^2)\pd{}{\eta}+(\eta u_1-v_1)\pd{}{u_1}+(\eta v_1+u_1)\pd{}{v_1}-2\rho_1\eta\pd{}{\rho_1}.$$
It suggests the change of variables:
\begin{equation}
\begin{array}{l}
u_1(t,\eta)=u_2(t)\eta+v_2(t), \qquad v_1(t,\eta)=v_2(t)\eta-u_2(t), \\[5pt] p_1(t,\eta)=p_2(t), \qquad \rho_1(t,\eta)=\cfrac{\rho_2(t)}{1+\eta^2},
\end{array}
\label{vareqx2x12y3}\end{equation}
corresponding to a velocity field
\[
\vt u(t,r,θ)=v_2(t)r\vt e_r+u_2(t)r\vt e_{θ}.
\]
Inserting these relations into equations (\ref{eqx2x12}), we obtain:
\[
v_2(t)=÷{f'(t)}{2f(t)},\qquad u_2(t)=±√{v_2^2(t)+v_2'(t)},\qquad ρ_2(t)=ρ_3
\]
\[
p_2(t)=÷{µR}3\,÷{h(t)}{f(t)^{1+R/C_v}}\ ∫÷{f(t)^{R/C_v-1}f'(t)^2}{h(t)}dt
\]
with $f(t)=at^2+2bt+2$, $a$, $b$ and $ρ_3$ being constants such that $2a\geq b^2$. It follows that
\begin{equation}
\begin{mybox}
\\[-10pt]\displaystyle
\vt u(t,r,θ)=÷{at+b}{at^2+2bt+2}r\vt{\mathrm{e}}_r±÷{√{2a-b^2}}{at^2+2bt+2}r\vt{\mathrm{e}}_{θ}, \qquad ρ(t,r,θ)=÷{ρ_3}{r^2},
\\[10pt]\displaystyle
p(t,r,θ)=÷{4µR\e^{4κt/C_vρ_3}}{(at^2+2bt+2)^{R/C_v+1}} ∫÷{(at^2+2bt+2)^{R/C_v-1}(at+b)^2}{3C_v\e^{4κt/C_vρ_3}}dt.
\\[-10pt]
\end{mybox}
\label{eq:unsteady_2d_x5z}
\end{equation}
If $a=b^2/2$ then the flow is radial, with $\vt u=\cfrac r{t+2/b}\vt e_r$. The velocity field is similar to that of solution (\ref{eq:unsteady_2d_x5y3}), plotted in Figure \ref{fig:unsteady_2d_x5y3}, for a fixed $t$ but the pressure and density fields are different. The evolution of the flow with time can be visualized in Figure \ref{fig:unsteady_2d_x5z} in the case $b=0$, $a=1$ and $u_2(t)>0$.
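The intermediate relation $u_2(t)=\pm\sqrt{v_2^2+v_2'}$ indeed reduces to $\pm\sqrt{2a-b^2}/f(t)$; a quick SymPy confirmation:

```python
import sympy as sp

t, a, b = sp.symbols('t a b', real=True)
f = a*t**2 + 2*b*t + 2
v2 = sp.diff(f, t)/(2*f)                      # v_2 = f'/(2f)
u2sq = sp.simplify(v2**2 + sp.diff(v2, t))    # u_2^2 = v_2^2 + v_2'
print(sp.simplify(u2sq - (2*a - b**2)/f**2))  # → 0
```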
\begin{figure}[ht]
\centering
\includegraphics[width=4cm]{unsteady_2d_x5z_-1}\qquad
\includegraphics[width=4cm]{unsteady_2d_x5z_0}
\vspace{\baselineskip}
\includegraphics[width=4cm]{unsteady_2d_x5z_1}\qquad
\includegraphics[width=4cm]{unsteady_2d_x5z_10}
\caption{Solution (\ref{eq:unsteady_2d_x5z}) with $b=0$ and $a=1$, at $t=-1$ (top left), $t=0$ (top right), $t=1$ (bottom left) and $t=10$ (bottom right)}
\label{fig:unsteady_2d_x5z}
\end{figure}
In the following section, we compute some three-dimensional solutions of equations (\ref{nsc}).
\section{3D-solutions}\label{3d}
We first consider solutions invariant under $X_1$, $X_2$ and $X_4$. Such solutions depend only on $y$:
\begin{eqnarray}u(t,x,y,z)=u_1(y), \qquad v(t,x,y,z)=v_1(y), \qquad w(t,x,y,z)=w_1(y), \\\\
p(t,x,y,z)=p_1(y), \qquad \rho(t,x,y,z)=\rho_1(y).\end{eqnarray}
The reduced equations are:
\begin{equation}\begin{cases}
(\rho_1v_1)'=0 ,\\
\rho_1v_1u_1'=\mu u_1'',\\
3\rho_1v_1v'_1+p_1'=4\mu v_1'',\\
\rho_1v_1w_1'=\mu w_1'',\\
3C_v(p_1v_1)'=R(4\mu v_1'^2-3v_1'p_1+3\mu u_1'^2+3\mu w_1'^2)+3\kappa (p_1/\rho_1)''.
\end{cases}\end{equation}
We find the following solution which is an extension of (\ref{solx1x2x4}):
\begin{equation}\begin{mybox}\\[-10pt]\displaystyle
u(t,y)=u_3\pm\sqrt{\frac{(2C_v\mu-4\kappa-3R\mu)v_3^2-3R\mu w_3^2}{3R\mu}}\ \e^{ay/µ},
\\[15pt] v(t,y)=v_3\e^{ay/µ},\qquad w(t,y)=w_3\e^{ay/µ},
\\[5pt]
p(t,y)=\cfrac{av_3}{3}\e^{ay/µ}, \qquad \rho(t,y)=\cfrac{a}{v_3}\e^{-ay/µ}, \\[-10pt]
\end{mybox}\end{equation}
In these expressions, $a$, $u_3$, $v_3$ and $w_3$ are arbitrary constants.
The generator $(X_2+X_5)+(X_3+X_6)$ leads to the ansatz:
\begin{equation}
\begin{array}{l}u(t,x,y,z)=u_1(t,z)+\cfrac{xu_0}{tu_0+a}, \qquad v(t,x,y,z)=v_1(t,z)+\cfrac{yv_0}{tv_0+b}, \\[10pt] w(t,x,y,z)=w_1(t,z), \quad p(t,x,y,z)=p_1(t,z), \quad \rho(t,x,y,z)=\rho_1(t,z)
\end{array}
\end{equation}
where $a$, $b$, $u_0$ and $v_0$ are arbitrary constants. Equations (\ref{nsc}) reduce into:
\begin{equation}\begin{cases} \pd{\rho_1}{t}+w_1\pd{\rho_1}{z}+\rho_1δ=0,
\\[10pt]
\rho_1\left(\pd{u_1}{t}+\cfrac{u_0u_1}{u_0t+a}+w_1\pd{u_1}{z}\right)=\mu\pd{^2u_1}{z^2} ,
\\[10pt]
\rho_1\left(\pd{v_1}{t}+\cfrac{v_0v_1}{v_0t+b}+w_1\pd{v_1}{z}\right)=\mu\pd{^2v_1}{z^2} ,
\\[10pt]
\rho_1\left(\pd{w_1}{t}+w_1\pd{w_1}{z}\right)+\pd{p_1}{z}=\cfrac{4\mu}{3}\ \pd{^2w_1}{z^2} ,
\\[10pt]
\cfrac{C_v}{R}\left(\pd{p_1}{t}+w_1\pd{p_1}{z}\right)=-⦅\cfrac{C_v}R+1⦆δ p_1+µ\tsr S_2+\cfrac{\kappa}{R}\pd{^2}{z^2}⦅\cfrac{p_1}{\rho_1}⦆
\end{cases}
\label{eqx5x6}\end{equation}
where
\[
δ=\cfrac{u_0}{u_0t+a}+\cfrac{v_0}{v_0t+b}+\pd{w_1}{z}
\]
is the divergence of $\u$ and
\[
\tsr S_2=-÷23δ^2+÷{2u_0^2}{(tu_0+a)^2}+÷{2v_0^2}{(tv_0+b)^2}+⦅\pd {u_1}z⦆^2\!+⦅\pd {v_1}z⦆^2\!+2⦅\pd {w_1}z⦆^2\!.
\]
The infinitesimal symmetries of these equations are
\begin{equation}
\begin{array}{l}
R_1=\pd{}{z}, \qquad R_2=t\pd{}{z}+\pd{}{w_1}, \qquad R_3=\cfrac{1}{u_0t+a}\pd{}{u_1},
\\[10pt]
R_4=\cfrac{1}{v_0t+b}\pd{}{v_1}, \qquad R_5=z\pd{}{z}+u_1\pd{}{u_1}+v_1\pd{}{v_1}+w_1\pd{}{w_1}-2\rho_1\pd{}{\rho_1}.
\end{array}
\end{equation}
Without loss of generality, assume that $u_0=v_0=1$.
Symmetry $R_1$ reduces equations (\ref{eqx5x6}) into:
\begin{equation}\begin{cases}
(t+a)(t+b)\rho_2'+(2t+a+b)\rho_2=0\\
(t+a)u_2'+u_2=0\\
(t+b)v_2'+v_2=0\\
w_2'=0\\
C_v(p_2'+δ p_2)=R\ \sigma:\tsr{S}
\end{cases}\end{equation}
where
\begin{eqnarray} u_2(t)=u_1(t,z), \qquad v_2(t)=v_1(t,z), \qquad w_2(t)=w_1(t,z), \\[10pt] p_2(t)=p_1(t,z), \qquad\rho_2(t)=\rho_1(t,z). \end{eqnarray}
The resolution of the equations gives the bidimensional solution
\begin{equation}\begin{mybox}
u(t,x,y,z)=\cfrac{x+u_3}{t+a}, \qquad v(t,x,y,z)=\cfrac{y+v_3}{t+b},
\\[5pt]
w(t,x,y,z)=w_3, \qquad
\rho(t,x,y,z)=\cfrac{\rho_3}{h(t)},
\\[10pt]\displaystyle
p(t,x,y,z)=\cfrac{4R\mu}{3C_v}h(t)^{-R/C_v-1}
\int \cfrac{h(t)+(a-b)^2}{h(t)^{1-R/C_v}}\d t
\end{mybox}\label{3d_x2x5_x3x6_y1}\end{equation}
where $u_3$, $v_3$, $w_3$ and $ρ_3$ are arbitrary constants and
\[
h(t)=(t+a)(t+b).
\]
In the particular case where $a=b$, the expression of $p$ in (\ref{3d_x2x5_x3x6_y1}) simplifies into
\[ p(t,x,y,z)= \cfrac{4R\mu}{3(2R+C_v)(t+a)}+p_3(t+a)^{-2R/C_v-2},\]
$p_3$ being a constant.
An invariant solution of equations (\ref{eqx5x6}) under $R_2$ has the following form:
\begin{equation}
\begin{array}{l} u_1(t,z)=u_2(t),\quad v_1(t,z)=v_2(t),\quad w_1(t,z)=w_2(t)+\cfrac{zw_0}{tw_0+c}
\\[10pt]
p_1(t,z)=p_2(t),\quad ρ_1(t,z)=ρ_2(t).
\end{array}
\label{varx5x6r2}
\end{equation}
Assume that $w_0=1$. A straightforward solution of (\ref{eqx5x6}), when $a=0$, is
\begin{equation}\begin{mybox}
u(t,x,y,z)=\cfrac{x+u_3}{t}, \qquad v(t,x,y,z)=\cfrac{y+v_3}{t+b},
\\[10pt]
w(t,x,y,z)=\cfrac{z+w_3}{t+c}, \qquad \rho(t,x,y,z)=\cfrac{\rho_3}{h(t)},\\[10pt]
p(t,x,y,z)=\cfrac{4R\mu}{3C_v}\ h(t)^{-1-\frac{R}{C_v}} \displaystyle\int ÷{t^2(b-c)^2+bc(t+b)(t+c)}{h(t)^{1-\frac{R}{C_v}}}\d t.
\\[-10pt]\end{mybox}
\label{eqx5x6r2_1}
\end{equation}
where $u_3$, $v_3$, $w_3$, $\rho_3$ are constants and
\begin{equation}
h(t)=t(t+b)(t+c).
\end{equation}
Another solution of (\ref{eqx5x6}) in the form (\ref{varx5x6r2}), with $a=b=c=t_0$, is
\begin{equation}\begin{mybox}
u(t,x,y,z)=\cfrac{x+u_3}{t+t_0}, \qquad v(t,x,y,z)=\cfrac{y+v_3}{t+t_0},
\\[10pt]
w(t,x,y,z)=\cfrac{z+w_3}{t+t_0}, \qquad \rho(t,x,y,z)=\cfrac{\rho_3}{(t+t_0)^3},\\[10pt]
p(t,x,y,z)=p_3(t+t_0)^{-3-\frac{3R}{C_v}}
\\[-10pt]\end{mybox}
\label{eqx5x6r2_2}
\end{equation}
where $u_3$, $v_3$, $w_3$, $\rho_3$ are constants. In (\ref{eqx5x6r2_1}) and (\ref{eqx5x6r2_2}), the flow is fully three-dimensional. It is purely radial around the source point $(-u_3,-v_3,-w_3)$. The pressure and the density are uniform but time-dependent.
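Since the velocity field of (\ref{eqx5x6r2_2}) is linear in the space variables while $p$ and $\rho$ are uniform, the solution can be checked directly against the full three-dimensional system; a SymPy sketch, again with the energy equation in its convective form:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
t0, u3, v3, w3, rho3, p3 = sp.symbols('t_0 u_3 v_3 w_3 rho_3 p_3', positive=True)
mu, kappa, R, Cv = sp.symbols('mu kappa R C_v', positive=True)

tau = t + t0
X = (x, y, z)
U = ((x + u3)/tau, (y + v3)/tau, (z + w3)/tau)
rho = rho3/tau**3
p = p3*tau**(-3 - 3*R/Cv)

div = sum(sp.diff(U[i], X[i]) for i in range(3))
def lap(f): return sum(sp.diff(f, xi, 2) for xi in X)
def mat(f): return sp.diff(f, t) + sum(U[i]*sp.diff(f, X[i]) for i in range(3))

eqs = [sp.diff(rho, t) + sum(sp.diff(rho*U[i], X[i]) for i in range(3))]   # mass
for i in range(3):                                                         # momentum
    eqs.append(rho*mat(U[i]) + sp.diff(p, X[i])
               - mu*(lap(U[i]) + sp.diff(div, X[i])/3))
S = [[(sp.diff(U[i], X[j]) + sp.diff(U[j], X[i]))/2 for j in range(3)]
     for i in range(3)]
Phi = 2*mu*sum(S[i][j]**2 for i in range(3) for j in range(3)) \
      - sp.Rational(2, 3)*mu*div**2
eqs.append((Cv/R)*mat(p) + (Cv/R + 1)*p*div - Phi - (kappa/R)*lap(p/rho))  # energy

print([sp.simplify(e) for e in eqs])   # → [0, 0, 0, 0, 0]
```

The pure-dilation strain makes the viscous dissipation vanish, which is why a uniform pressure obeying a simple power law suffices here.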
A solution of (\ref{eqx5x6}) invariant under $R_5$ takes the form:
\begin{eqnarray}u_1(t,z)=u_2(t)z, \qquad v_1(t,z)=v_2(t)z, \qquad w_1(t,z)=w_2(t)z, \\[10pt] p_1(t,z)=p_2(t),\qquad \rho_1(t,z)=\rho_2(t)z^{-2}. \end{eqnarray}
Inserting these expressions into (\ref{eqx5x6}) gives:
\begin{eqnarray} u_2(t)=\cfrac{u_3}{(t+a)(t+c)}, \qquad v_2(t)=\cfrac{v_3}{(t+b)(t+c)}, \qquad w_2(t)=\cfrac{1}{t+c}, \\[10pt]
\rho_2(t)=\cfrac{(t+c)\rho_3}{(t+a)(t+b)}\end{eqnarray}
where $u_3$, $v_3$ and $\rho_3$ are arbitrary constants. The pressure reads:
$$p_2(t)=\cfrac{\mu R\, h(t)^{-\frac{R}{C_v}-1}}{3C_v} h_c(t)f(t)\int \cfrac{h(t)^{\frac{R}{C_v}-1} g(t)}{h_c(t)f(t)} \d t$$
with
\begin{equation} h(t)=(t+a)(t+b)(t+c), \hspc h_c(t)=(t+c)^\frac{2\kappa(a-c)(b-c)}{\rho_3C_v}, \end{equation}
\begin{equation} f(t)=\exp\left({\frac{\kappa t(t+2a+2b-2c)}{\rho_3C_v}}\right) \end{equation}
and
\begin{eqnarray}
g(t)=4(a^2+b^2+c^2-ab-bc-ca)t^2 \\[5pt]\phantom{g(t)}
+4[a^2(b+c)+b^2(c+a)+c^2(a+b)-6abc]t \\[5pt]\phantom{g(t)}
+4[a^2b(b-c)+b^2c(c-a)+c^2a(a-b)]\\[5pt]\phantom{g(t)}
+3[v_3^2(t+a)^2+u_3^2(t+b)^2].\end{eqnarray}
At last,
\begin{equation}\begin{mybox}
u(t,x,y,z)=\cfrac{u_3z+x(t+c)}{(t+a)(t+c)}, \qquad v(t,x,y,z)=\cfrac{v_3z+y(t+c)}{(t+b)(t+c)}, \\[10pt]
w(t,x,y,z)=\cfrac{z}{t+c},
\qquad p(t,x,y,z)=p_2(t), \\[10pt] \rho(t,x,y,z)=\cfrac{\rho_3(t+c)}{(t+a)(t+b)z^2}.
\\[-10pt]\label{eqx5x6r5}\end{mybox}\end{equation}
The projections of $\vt u$ on $xy$-planes are sketched in Figure \ref{fig:x5x6r5} for $c=0$, $u_3=v_3=1$ at $t=1$. The center-point, at which the velocity is vertical, is located at
\begin{equation} x=\frac{-u_3z}{t+c}, \hspc y=\frac{-v_3z}{t+c}.
\label{eqx5x6r5_centerpoint}
\end{equation}
\begin{figure}[ht]\centering
\includegraphics[width=5cm]{x5x6r5_11}\qquad
\includegraphics[width=5cm]{x5x6r5_13}
\\[10pt]
\includegraphics[width=5cm]{x5x6r5_21}\qquad
\includegraphics[width=5cm]{x5x6r5_23}
\caption{Projections of solution (\ref{eqx5x6r5}) on $xy$-planes at $z=1/4$ (left) and at $z=3/4$ (right). Top: $a=b=1$, bottom: $a=1,b=10$}\label{fig:x5x6r5}
\end{figure}
As can be observed, the $xy$-plane-projected flow is radial around the center-point defined by equation (\ref{eqx5x6r5_centerpoint}) when $a=b=1$. This is no longer the case when $a≠b$. The value of $c$ changes the position of the center-point (\ref{eqx5x6r5_centerpoint}) but not the shape of the $xy$-projections. It has, however, more influence on the projection of $\vt u$ on the $xz$- or $yz$-plane, as can be seen in Figure \ref{fig:x5x6r5_xz}.
\begin{figure}[ht]\centering
\includegraphics[width=5cm]{x5x6r5_z0} \qquad
\includegraphics[width=5cm]{x5x6r5_z5} \\[10pt]
\includegraphics[width=5cm]{x5x6r5_z50}
\caption{Projection of solution (\ref{eqx5x6r5}) on $xz$-plane when $a=b=1$. Top left: $c=0$, top right: $c=5$, bottom: $c=50$}\label{fig:x5x6r5_xz}
\end{figure}
It is noteworthy that the generator
\begin{equation}\pd{}{z}+f(t)\pd{}{p} \label{y}\end{equation}
is an infinitesimal symmetry of the first four equations of (\ref{eqx5x6}) for any regular function $f$. It leads to the ansatz
\begin{eqnarray}
u_1(t,z)=u_2(t),\qquad v_1(t,z)=v_2(t),\qquad w_1(t,z)=w_2(t),\\\\ p_1(t,z)=f(t)z+p_2(t),\qquad \rho_1(t,z)=\rho_2(t).
\end{eqnarray}
From the first three equations of (\ref{eqx5x6}), this ansatz yields:
\begin{eqnarray}
u_2=\cfrac{u_3}{t+a}, \qquad v_2=\cfrac{v_3}{t+b}, \qquad \rho_2=\cfrac{\rho_3}{(t+a)(t+b)}, \\\\ f(t)=\cfrac{-\rho_3w_2'(t)}{(t+a)(t+b)}
\end{eqnarray}
where $u_3$, $v_3$ and $ρ_3$ are arbitrary constants.
The last equation of (\ref{eqx5x6}) becomes:
\begin{eqnarray}
-3\rho_3[C_vw_2''(t)(t+a)(t+b)+Rw'_2(t)(a+b+2t)]z+\Phi(t)=0
\end{eqnarray}
where $\Phi(t)$ is the $z$-independent rest of the equation. As seen, this equation still contains the variable $z$.
It has not been reduced because vector field (\ref{y}) is not a symmetry of the last equation of (\ref{eqx5x6}). Cancelling the coefficient of $z$ and $\Phi(t)$, we get:
\begin{equation}
w_2(t)=w_3\int h(t)^{-\frac{R}{C_v}}\d t \eqspc{with} h(t)=(t+a)(t+b),
\end{equation}
$w_3$ being a constant, and
\begin{equation}
p_2(t)=h(t)^{-\frac{R}{C_v}-1}\int \left[\rho_3w_3w_2(t)+\cfrac{ 4R\mu }{3C_v}\ g(t)
\
h(t)^{\frac{R}{C_v}-1}\right]\d t
\end{equation}
with
\begin{equation}
g(t)=(t+a)(t+b)+(a+b)^2.
\end{equation}
It follows that:
\begin{equation}\begin{mybox}
u(t,x,y,z)=\cfrac{x+u_3}{t+a}, \qquad v(t,x,y,z)=\cfrac{y+v_3}{t+b}, \\[10pt]
w(t,x,y,z)=w_3\displaystyle\int h(t)^{-\frac{R}{C_v}}\d t, \qquad \rho(t,x,y,z)=\cfrac{\rho_3}{h(t)},
\\[10pt] p(t,x,y,z)=p_2(t)-\rho_3w_3zh(t)^{-1-\frac{R}{C_v}}.
\\[-10pt]\end{mybox}\end{equation}
Lastly, consider the symmetry generator $X_1-aX_2-bX_3-cX_4$ where $a$, $b$ and $c$ are constants. A basis of invariants under this vector field is
\[
ξ=t+ax+by+cz,\]\[ u_1=u,\qquad v_1=v,\qquad w_1=w,\qquad p_1=p,\qquad ρ_1=ρ.
\]
The equations can then be reduced with the ansatz
\[
u(t,x,y,z)=u_1(ξ),\qquad
v(t,x,y,z)=v_1(ξ),\qquad
w(t,x,y,z)=w_1(ξ),\]\[
p(t,x,y,z)=p_1(ξ),\qquad
ρ(t,x,y,z)=ρ_1(ξ).
\]
Indeed, system (\ref{nsc}) becomes
\begin{equation}
\begin{cases}
(ρ_1(1+D))'=0
\\\displaystyle
ρ_1(1+D)u_1'+ap_1'-µ⦅\frac a3D+Au_1⦆''=0
\\\displaystyle
ρ_1(1+D)v_1'+bp_1'-µ⦅\frac b3D+Av_1⦆''=0
\\\displaystyle
ρ_1(1+D)w_1'+cp_1'-µ⦅\frac c3D+Aw_1⦆''=0
\\\displaystyle
C_vp_1'(1+D)+(C_v+R)p_1D'-Rµ\tsr S_2-κA⦅÷{p_1}{ρ_1}⦆''=0
\end{cases}
\end{equation}
where $A=a^2+b^2+c^2$, $D$ is the function of $ξ$ defined by
\[
D=au_1+bv_1+cw_1
\]
and
\[
\begin{array}{l}\displaystyle
\tsr S_2=-÷23(D')^2+2(au'_1)^2+2(bv'_1)^2+2(cw'_1)^2
\\[10pt]\phantom{\tsr S_2=}
+(bu'_1+av'_1)^2+(cu'_1+aw'_1)^2+(cv'_1+bw'_1)^2.
\end{array}
\]
After integration, the first four reduced equations become
\begin{equation}
\begin{array}{l}
ρ_1(1+D)=ρ_2
\\\displaystyle
ρ_2u_1+ap_1-µ⦅\frac a3D+Au_1⦆'=ρ_2u_2
\\\displaystyle
ρ_2v_1+bp_1-µ⦅\frac b3D+Av_1⦆'=ρ_2v_2
\\\displaystyle
ρ_2w_1+cp_1-µ⦅\frac c3D+Aw_1⦆'=ρ_2w_2
\end{array}
\label{eq_travel}
\end{equation}
for some constants $ρ_2$, $u_2$, $v_2$ and $w_2$. These equations can be used to express $p_1$, $v_1$ and $w_1$ as functions of $u_1$. Using the remaining energy equation and going back to the original variables, we get a traveling wave solution of (\ref{nsc}):
\begin{equation}
\begin{mybox}\\[-10pt]
u=u_3\e^{÷{ρ_2ξ}{µA}}+u_4,
\\[10pt]\displaystyle
v=v_3\e^{÷{ρ_2ξ}{µA}}+v_2+b÷{u_4-u_2}a, \qquad w=w_3\e^{÷{ρ_2ξ}{µA}}+w_2+c÷{u_4-u_2}a,
\\[10pt]\displaystyle
p=ρ_2÷{(au_3+bv_3+cw_3)\e^{÷{ρ_2ξ}{µA}}}{3A}+ρ_2÷{u_2-u_4}{a},
\\[10pt]\displaystyle
ρ=÷{ρ_2}{(au_3+bv_3+cw_3)\e^{÷{ρ_2ξ}{µA}}+1+au_2+bv_2+cw_2 + A\cfrac{u_4-u_2}a }
\end{mybox}
\label{eq:travel}
\end{equation}
where
\[ξ=t+ax+by+cz.\]
In (\ref{eq:travel}), $u_3$, $u_4$, $v_3$ and $w_3$ are constants linked by
\begin{equation}
µAR(u_3^2+v_3^2+w_3^2)+(4κ-2µC_v)(au_3+bv_3+cw_3)^2=0
\label{eq:travel_cond1}
\end{equation}
and either
\begin{equation}
au_3+bv_3+cw_3=0
\label{eq:travel_cond2}
\end{equation}
which leads to $u_3=v_3=w_3=0$ and corresponds to a constant solution, or
\[
a(C_vµ-κ)(1+au_2+bv_2+cw_2)+A[(2C_v+3R)µ-2κ](u_2-u_4)=0.
\]
Disregarding condition (\ref{eq:travel_cond2}), it can be seen from condition (\ref{eq:travel_cond1}) that this traveling wave solution exists only when $2κ\leq µC_v$.
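A numerical illustration of this existence condition, with arbitrary assumed parameter values satisfying $2\kappa<\mu C_v$ (none of them taken from the text): condition (\ref{eq:travel_cond1}) then admits real solutions for $u_3$.

```python
import sympy as sp

u3 = sp.symbols('u_3', real=True)

# illustrative parameter values (assumptions), chosen so that 2*kappa < mu*C_v
mu, Cv, R, kappa = 1, 5, 2, sp.Rational(1, 10)
a, b, c = 3, 1, 1
A = a**2 + b**2 + c**2
v3, w3 = 1, -1

cond1 = mu*A*R*(u3**2 + v3**2 + w3**2) \
        + (4*kappa - 2*mu*Cv)*(a*u3 + b*v3 + c*w3)**2
roots = sp.solve(sp.Eq(cond1, 0), u3)
print(roots, all(r.is_real for r in roots))   # two real roots
```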
\section{Conclusion}
The infinitesimal Lie symmetries of the compressible Navier-Stokes equations were computed and the corresponding Lie group actions were presented. From the commutation table, the Levi decomposition of the Lie algebra was derived.
Self-similar solutions were computed from the symmetries of the equations by successive reductions. These solutions represent many types of model flows. One can cite, for example, flows representing bidimensional vortices, evolving as $r^{-1}$ or $r^{-2}$ from a source or toward a sink point. Bidimensional solutions depending exponentially on $y$ could also be obtained, as well as the three-dimensional vortex-like and traveling wave solutions.
Note that, since a Lie symmetry maps a solution into another one, many solutions other than those presented here can be obtained by composing them with transformations (\ref{time})--(\ref{scale2}).
In the present analysis, we did not intend to be exhaustive. Many other combinations of the presented infinitesimal generators may lead to completely new solutions.
\section{Introduction}
\label{sec:intro}
The subject of this paper is a new type of tiling of certain subsets $D$ of
$\mathbb{R}^{M}$. Such a domain $D$ is a fractal blow-up (as defined in
Section~\ref{defsec}) of certain similitude iterated function systems (IFSs);
see also \cite{manifold, strichartz}. For an important class of such tilings
it is the case that $D=\mathbb{R}^{M}$, as exemplified by the tiling of
Figure~\ref{fig:b} (on the right ) that is based on the \textquotedblleft
golden b" tile (on the left). We are also interested, however, in situations
where $D$ has non-integer Hausdorff dimension. The left panel in
Figure~\ref{sidebyside} shows the domain $D$, the right panel a tiling of $D$.
These examples are explored in Section~\ref{exsec}. In this work, tiles may be
fractals; pairs of distinct tiles in a tiling are required to be
non-overlapping, i.e., they intersect on a set whose Hausdorff dimension is
lower than that of the individual tiles. \begin{figure}[tbh]
\includegraphics[width=3cm, keepaspectratio]{GB.png} \hskip 10mm
\includegraphics[width=8cm, keepaspectratio]{EX2a.png} \caption{Golden b and
golden b tiling.}%
\label{fig:b}%
\end{figure}%
\begin{figure}[ptb]%
\centering
\includegraphics[
height=1.6475in,
width=5.2477in
]%
{sidebyside.png}%
\caption{The left image shows part of an infinite fractal blow-up $D$; the
right image shows part of a tiling of $D$ using a finite set of prototiles.
See Section \ref{exsec}.}%
\label{sidebyside}%
\end{figure}
These tilings come in families, one family for each similitude IFS whose
functions $f_{1},f_{2}\dots,f_{N}$ have scaling ratios that are integer powers
$s^{a_{1}},s^{a_{2}},\dots,s^{a_{N}}$ of a single real number $s$ and whose
attractor is non-overlapping. Each such family contains, in general, an
uncountable number of tilings. Each family has a finite set of prototiles.
The paper is organized as follows. Sections \ref{tilingsec} and \ref{defsec}
provide background and definitions relevant to tilings and to iterated
function systems. The construction of our tilings is given in Section
\ref{defsec}. The main theorems are stated precisely in Section \ref{defsec}
and proved in subsequent sections. Results appear in Section \ref{realabsec}
that define and discuss the relative and absolute addresses of tiles. These
concepts, useful towards understanding the relationships between different
tilings, are illustrated in Section \ref{exsec}. Also in Section~\ref{exsec}
are examples of tilings of $\mathbb{R}^{2}$ and of a quadrant of
$\mathbb{R}^{2}$. The Ammann (the golden b) tilings and related fractal
tilings are also discussed in that section, as is a blow-up of a Cantor set.
A subset $P$ of a tiling $T$ is called a \textit{patch} of $T$ if it is
contained in a ball of finite radius. A tiling $T$ is \textit{quasiperiodic}
(also called repetitive) if, for any patch $P$, there is a number $R>0$ such
that any disk of radius $R$ centered at a point contained in a tile of $T$
contains an isometric copy of $P$. Two tilings are \textit{locally isomorphic}
if any patch in either tiling also appears in the other tiling. A tiling $T$
is \textit{self-similar} if there is a similitude $\psi$ such that $\psi(t)$
is a union of tiles in $T$ for all $t\in T$. Such a map $\psi$ is called a
\textit{self-similarity}.
Let $\mathcal{F}$ be a similitude IFS whose functions have scaling ratios
$s^{a_{1}},s^{a_{2}},\dots,s^{a_{N}}$ as defined above. Let $[N]^{\ast}$ be
the set of finite words over the alphabet $[N]:=\{1,2,\dots,N\}$ and
$[N]^{\infty}$ be the set of infinite words over the alphabet $[N]$. For a
fixed IFS $\mathcal{F}$, our results show that:
\begin{enumerate}
\item For each $\theta\in[N]^{*}$, our construction yields a bounded tiling,
and for each $\theta\in[N]^{\infty}$, our construction yields an unbounded
tiling. In the latter case, the tiling, denoted $\pi(\theta)$, almost always
covers ${\mathbb{R}}^{M}$ when the attractor of the IFS has nonempty interior.
\item The mapping $\theta\mapsto\pi(\theta)$ is continuous with respect to the
standard topologies on the domain and range of $\pi$.
\item Under quite general conditions, the mapping $\theta\mapsto\pi(\theta)$
is injective.
\item For each such tiling, the prototile set is $\{sA, s^{2}A,\dots,
s^{a_{\max}} A\}$, where $A$ is the attractor of the IFS and $a_{\max} =
\max\{a_{1}, a_{2}, \dots, a_{N}\}$.
\item The constructed tilings, in the unbounded case, are repetitive
(quasiperiodic) and any two such tilings are locally isomorphic.
\item For all $\theta\in[N]^{\infty}$, if $\theta$ is eventually periodic,
then $\pi(\theta)$ is self-similar.
\item If $\mathcal{F}$ is strongly rigid, then how isometric copies of a pair of
bounded tilings can overlap is extremely restricted: if the two tilings are
such that their overlap is a subset of each, then one tiling must be contained
in the other.
\item If $\mathcal{F}$ is strongly rigid, then the constructed tilings have no
non-identity symmetry. In particular, they are non-periodic.
\end{enumerate}
The concept of a rigid and a strongly rigid IFS is discussed in
Section~\ref{strongsec}.
A special case of our construction (polygonal tilings, no fractals) appears in
\cite{polygon}, in which we took a more recreational approach, devoid of
proofs. Other references to related material are \cite{anderson, sadun}. This
work extends, but is markedly different from \cite{tilings}.
\section{\label{tilingsec}Tilings, Similitudes and Tiling Spaces}
Given a natural number $M$, this paper is concerned with certain tilings of
strict subsets of Euclidean space $\mathbb{R}^{M}$ and of $\mathbb{R}^{M}$
itself. A \textit{tile} is a perfect (i.e. no isolated points) compact
nonempty subset of $\mathbb{R}^{M}$. Fix a Hausdorff dimension $0<D_{H}\leq
M$. A \textit{tiling} in $\mathbb{R}^{M}$ is a set of tiles, each of Hausdorff
dimension $D_{H}$, such that every distinct pair is non-overlapping. Two tiles
are \textit{non-overlapping} if their intersection is of Hausdorff dimension
strictly less than $D_{H}$. The \textit{support} of a tiling is the union of
its tiles. We say that a tiling tiles its support. Some examples are presented
in Section~\ref{exsec}.
A \textit{similitude} is an affine transformation $f:{\mathbb{R}}%
^{M}\rightarrow{\mathbb{R}}^{M}$ of the form $f(x)=s\,O(x)+q$, where $O$ is an
orthogonal transformation and $q\in\mathbb{R}^{M}$ is the translational part
of $f(x)$. The real number $s>0$, a measure of the expansion or contraction of
the similitude, is called its \textit{scaling} \textit{ratio}. An
\textit{isometry} is a similitude of unit scaling ratio and we say that two
sets are isometric if they are related by an isometry. We write $\mathcal{E}$
to denote the group of isometries on $\mathbb{R}^{M}$.
The \textit{prototile set} $\mathcal{P}$ of a tiling $T$ is a minimal set of
tiles such that every tile in $T$ is an isometric copy of a tile in
$\mathcal{P}$. The tilings constructed in this paper have a finite prototile set.
Given a tiling $T$ we define $\partial T$ to be the union of the set of
boundaries of all of the tiles in $T$ and we let $\rho:\mathbb{R}%
^{M}\rightarrow\mathbb{S}^{M}$ be the usual $M$-dimensional stereographic
projection to the $M$-sphere, obtained by positioning $\mathbb{S}^{M}$ tangent
to $\mathbb{R}^{M}$ at the origin. We define the distance between tilings $T$
and $T^{\prime}$ to be%
\[
d_{\tau}(T,T^{\prime})=h(\overline{\rho(\partial T)},\overline{\rho(\partial
T^{\prime})})
\]
where the bar denotes closure and $h$ is the Hausdorff distance with respect
to the round metric on $\mathbb{S}^{M}$. Let $\mathbb{K}(\mathbb{R}^{M})$ be the set of
nonempty compact subsets of $\mathbb{R}^{M}$. It is well known that $d_{\tau}$
provides a metric on the space $\mathbb{K}({\mathbb{R}}^{M})$ and that
$(\mathbb{K}({\mathbb{R}}^{M}),d_{\tau})$ is a compact metric space.
This paper examines spaces consisting, for example, of $\pi(\theta)$ indexed
by $\theta\in\left[ N\right] ^{\ast}$ with metric $d_{\tau}$. Although we
are aware of the large literature on tiling spaces, we do not explore the
larger spaces obtained by taking the closure of orbits of our tilings under
groups of isometries as in, for example, \cite{anderson, sadun}. We focus on
the relationship between the addressing structures associated with IFS theory
and the particular families of tilings constructed here.
\section{\label{defsec} Definition and Properties of IFS Tilings}
Let $\mathbb{N=\{}1,2,\cdots\}$ and $\mathbb{N}_{0}=\{0,1,2,\cdots\}$. For
$N\in\mathbb{N}$, let $[N]=\{1,2,\cdots,N\}$. Let $[N]^{\ast}=\cup
_{k\in\mathbb{N}_{0}}[N]^{k}$, where $[N]^{0}$ is the empty string, denoted
$\varnothing$.
See \cite{hutchinson} for formal background on iterated function systems
(IFSs). Here we are concerned with IFSs of a special form: let $\mathcal{F}%
=\{{\mathbb{R}}^{M};f_{1},f_{2},\cdots,f_{N}\}$, with $N\geq2$, be an IFS of
contractive similitudes in which the scaling ratio of $f_{n}$ is $s^{a_{n}}$,
where $0<s<1$ and $a_{n}\in\mathbb{N}$. That is, for $x\in{\mathbb{R}}^{M}$,
the function $f_{n}:\mathbb{R}^{M}\rightarrow\mathbb{R}^{M}$ is defined by
\[
f_{n}(x)=s^{a_{n}}O_{n}(x)+q_{n}%
\]
where $O_{n}$ is an orthogonal transformation and $q_{n}\in{\mathbb{R}}^{M}$.
We assume, without loss of generality, that $\gcd\{a_{1},a_{2}%
,\cdots,a_{N}\}=1$.
It is convenient to define%
\[
a_{\max}=\max\{a_{i}:i=1,2,\dots,N\}.
\]
The \textit{attractor} $A$ of $\mathcal{F}$ is the unique solution in
$\mathbb{K(R}^{M})$ to the equation
\[
A=\bigcup\limits_{i\in\lbrack N]}f_{i}(A)\text{.}%
\]
It is assumed throughout that $A$ obeys the open set condition (OSC) with
respect to $\mathcal{F}$. As a consequence, each pair of distinct tiles in the
tilings that we construct either has empty intersection or intersects in a
relatively small set. More precisely, the OSC implies that the Hausdorff
dimension of $A$ is strictly greater than the Hausdorff dimension of the
\textit{set of overlap} $\mathcal{O}=\cup_{i\neq j}%
f_{i}(A)\cap f_{j}(A)$. Similitudes applied to subsets of the set of overlap
comprise the sets of points at which tiles may meet. See \cite[p.481]{bandt}
for a discussion concerning measures of attractors compared to measures of the
set of overlap.
In what follows, the space $[N]^{\ast}\cup\lbrack N]^{\infty}$ is equipped
with a metric $d_{[N]^{\ast}\cup\lbrack N]^{\infty}}$ such that it becomes
compact. First, define the \textquotedblleft length" $\left\vert
\theta\right\vert $ of $\theta\in\lbrack N]^{\ast}\cup\lbrack N]^{\infty}$ as
follows. For $\theta=\theta_{1}\theta_{2}\cdots\theta_{k}\in\lbrack N]^{\ast}$
define $\left\vert \theta\right\vert =k$, and for $\theta\in\lbrack
N]^{\infty}$ define $\left\vert \theta\right\vert =\infty$. Now define
$d_{[N]^{\ast}\cup\lbrack N]^{\infty}}(\theta,\omega)=0$ if $\theta=\omega,$
and
\[
d_{[N]^{\ast}\cup\lbrack N]^{\infty}}(\theta,\omega)=2^{-\mathcal{N}(%
\theta,\omega)}%
\]
if $\theta\neq\omega$, where $\mathcal{N}(\theta,\omega)$ is the index of the
first disagreement between $\theta$ and $\omega$ ($\theta$ and $\omega$
are understood to disagree at index $k$ if either $|\theta|<k$ or
$|\omega|<k$). It is routine to prove that $([N]^{\ast}\cup\lbrack N]^{\infty
},d_{[N]^{\ast}\cup\lbrack N]^{\infty}})$ is a compact metric space.
A point $\theta\in\lbrack N]^{\infty}$ is \textit{eventually periodic} if
there exists $m\in\mathbb{N}_{0}$ and $n\in\mathbb{N}$ such that $\theta
_{m+i}=\theta_{m+n+i}$ for all $i\geq1$. In this case we write $\theta
=\theta_{1}\theta_{2}\cdots\theta_{m}\overline{\theta_{m+1}\theta_{m+2}%
\cdots\theta_{m+n}}$.
For $\theta=\theta_{1}\theta_{2}\cdots\theta_{k}\in\lbrack N]^{\ast}$, the
following simplifying notation will be used:
\[
\begin{aligned} f_{\theta} &= f_{\theta_{1}} f_{\theta_{2}}\cdots f_{\theta_k} \\
f_{-\theta} &=f_{\theta_{1}}^{-1}f_{\theta_{2}}^{-1}\cdots f_{\theta_k}^{-1}=(f_{\theta_k\theta_{k-1}\cdots\theta_{1}})^{-1},
\end{aligned}
\]
with the convention that $f_{\theta}$ and $f_{-\theta}$ are the identity
function $id$ if $\theta=\varnothing$. Likewise, for all $\theta\in\lbrack
N]^{\infty}$ and $k\in\mathbb{N}_{0}$ define $\theta|k=\theta_{1}\theta
_{2}\cdots\theta_{k}$, and
\[
f_{-\theta|k}=f_{\theta_{1}}^{-1}f_{\theta_{2}}^{-1}\cdots f_{\theta_{k}}%
^{-1}=(f_{\theta_{k}\theta_{k-1}\cdots\theta_{1}})^{-1},
\]
with the convention that $f_{-\theta|0}=id$.
For $\sigma=\sigma_{1}\sigma_{2}\cdots\sigma_{k}\in\lbrack N]^{\ast}$ and with
$\left\{ a_{1},\dots,a_{N}\right\} $ the scaling powers defined above, let
\[
e(\sigma)=a_{\sigma_{1}}+a_{\sigma_{2}}+\cdots+a_{\sigma_{k}}\qquad
\text{and}\qquad e^{-}(\sigma)=a_{\sigma_{1}}+a_{\sigma_{2}}+\cdots
+a_{\sigma_{k-1}},
\]
with the conventions $e(\varnothing)=e^{-}(\varnothing)=0$. Let
\[
\Omega_{k}:=\{\sigma\in\lbrack N]^{\ast}:e(\sigma)>k\geq e^{-}(\sigma)\}
\]
for all $k\in\mathbb{N}_{0}$, and note that $\Omega_{0}=[N]$. We also write,
in some places, $\sigma^{-}=\sigma_{1}\sigma_{2}\cdots\sigma_{k-1}$ so that
\[
e^{-}(\sigma)=e(\sigma^{-}).
\]
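Aside (not part of the formal development): the sets $\Omega_{k}$ are finite and easy to enumerate by machine, which is convenient for checking the recursions proved later. The following Python sketch of ours writes strings as $1$-based tuples and grows them until $e$ first exceeds $k$.

```python
def e(sigma, a):
    """e(sigma) = a_{sigma_1} + ... + a_{sigma_k}, with e(()) = 0."""
    return sum(a[i - 1] for i in sigma)

def omega(k, a):
    """Omega_k = {sigma in [N]* : e(sigma) > k >= e^-(sigma)}.

    Since e is strictly increasing along prefixes, Omega_k is the set of
    strings whose e-value first exceeds level k; grow strings until that
    happens.
    """
    N = len(a)
    out, frontier = set(), [()]
    while frontier:
        sigma = frontier.pop()
        for i in range(1, N + 1):
            tau = sigma + (i,)
            if e(tau, a) > k:   # e^-(tau) = e(sigma) <= k, so tau is in Omega_k
                out.add(tau)
            else:
                frontier.append(tau)
    return out
```

For the exponents $a=(1,2)$ of the golden b IFS of Section \ref{exsec}, the cardinalities $\left\vert \Omega_{k}\right\vert =2,3,5,8,13,\dots$ follow a Fibonacci recursion, in line with Lemma \ref{lemma1}.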
\begin{definition}
\label{defONE} A mapping $\pi$ from $[N]^{\ast}\cup\lbrack
N]^{\infty}$ to collections of subsets of $\mathbb{R}^{M}$ is defined as
follows. For $\theta\in\lbrack N]^{\ast}$
\[
\pi(\theta):=\{f_{-\theta}f_{\sigma}(A):\sigma\in\Omega
_{e(\theta)}\},
\]
and for $\theta\in\lbrack N]^{\infty}$
\[
\pi(\theta):=\bigcup\limits_{k\in\mathbb{N}_{0}} \pi(\theta|k).
\]
Let $\mathbb{T}$ be the image of $\pi$, i.e.
\[
\mathbb{T}=\{\pi(\theta):\theta\in\lbrack N]^{\ast}\cup\lbrack N]^{\infty}\}.
\]
\end{definition}
It is a consequence of Theorem~\ref{theoremONE}, stated below, that the elements
of $\mathbb{T}$ are tilings. We refer to $\pi(\theta)$ as an
\textit{IFS tiling}, but usually drop the term \textquotedblleft IFS". It is a
consequence of the proof of Theorem \ref{theoremONE}, given in Section
\ref{ProofofONE}, that the support of $\pi(\theta)$ is what is
sometimes referred to as a \textit{fractal blow-up} \cite{manifold,
strichartz}. More exactly, if $F_{k}:=f_{-\theta|k}(A)$, then
\[
\text{support}\,(\pi(\theta))=\bigcup\limits_{k\in\mathbb{N}_{0}%
}F_{k}.
\]
Thus the support of $\pi(\theta)$ is the limit of an increasing union
of sets $F_{0}\subseteq F_{1}\subseteq F_{2}\subseteq\cdots$, each similar to
$A$.
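As a concrete illustration, consider a hypothetical one-dimensional example of ours (not one of the examples of Section \ref{exsec}): take $M=1$ and the IFS $f_{1}(x)=x/2$, $f_{2}(x)=x/2+1/2$, so that $A=[0,1]$, $s=1/2$ and $a_{1}=a_{2}=1$. Each $F_{k}=f_{-\theta|k}(A)$ is then an interval, and the nesting $F_{0}\subseteq F_{1}\subseteq\cdots$ can be checked directly:

```python
def inv_branch(i, x):
    # Inverse of f_i(x) = x/2 + (i - 1)/2; this toy IFS has attractor A = [0, 1].
    return 2.0 * x - (i - 1)

def F(theta, k):
    """F_k = f_{-theta|k}(A) = f_{theta_1}^{-1} ... f_{theta_k}^{-1}(A).

    The rightmost inverse acts first.  Every inverse branch is affine and
    increasing, so tracking the endpoints of A = [0, 1] suffices.
    """
    lo, hi = 0.0, 1.0
    for i in reversed(theta[:k]):
        lo, hi = inv_branch(i, lo), inv_branch(i, hi)
    return lo, hi
```

For $\theta=\overline{12}$ the first few blow-up sets are $F_{1}=[0,2]$, $F_{2}=[-2,2]$, $F_{3}=[-2,6]$; both endpoints diverge, so the support of $\pi(\theta)$ is unbounded, in line with Theorem \ref{theoremONE}.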
The theorems of this paper are summarized in the rest of this section. The
first two theorems, as well as a proposition in Section \ref{realabsec},
reveal general information about the tilings in $\mathbb{T}$ without the
rigidity condition that is assumed in the second two theorems. The proof of
the following theorem appears in Section~\ref{ProofofONE}.
\begin{theorem}
\label{theoremONE} Each set $\pi(\theta)$ in $\mathbb{T}$ is a tiling of a
subset of $\mathbb{R}^{M}$, the subset being bounded when $\theta\in[N]^{*}$
and unbounded when $\theta\in\lbrack N]^{\infty}$. For all $\theta\in\lbrack
N]^{\infty}$ the sequence of tilings $\left\{ \pi(\theta|k)\right\}
_{k=0}^{\infty}$ is nested according to%
\begin{equation}
\{f_{i}(A):i\in\lbrack N]\}=\pi(\varnothing)\subset\pi(\theta|1)\subset
\pi(\theta|2)\subset\pi(\theta|3)\subset\cdots\text{ .} \label{eqthmONE}%
\end{equation}
For all $\theta\in\lbrack N]^{\infty}$, the prototile set for $\pi
(\theta)$ is $\{s^{i}A:i=1,2,\cdots,a_{\max}\}$. Furthermore
\[
\pi:[N]^{\ast}\cup\lbrack N]^{\infty}\rightarrow\mathbb{T}%
\]
is a continuous map from the compact metric space $[N]^{\ast}\cup\lbrack
N]^{\infty}$ into the space $(\mathbb{K}({\mathbb{R}}^{M}),d_{\tau})$.
\end{theorem}
The proof of the following theorem is given in Section \ref{proofofTHREE}.
\begin{theorem}
\label{theoremTHREE}
\begin{enumerate}
\item Each tiling in $\mathbb{T}$ is quasiperiodic, and any two tilings
in $\mathbb{T}$ are locally isomorphic.
\item If $\theta$ is eventually periodic, then $\pi(\theta)$ is self-similar.
In fact, if $\theta=\alpha\overline{\beta}$ for some $\alpha,\beta\in\left[
N\right] ^{\ast}$ then $f_{-\alpha}f_{-\beta}\left( f_{-\alpha}\right)
^{-1}$ is a self-similarity of $\pi(\theta)$.
\end{enumerate}
\end{theorem}
In Section \ref{strongsec} the concept of \textit{rigidity} of an IFS is
defined. We postpone the definition because additional notation is required.
There are numerous examples of rigid $\mathcal{F}$, including the golden b IFS
in Section \ref{exsec}. The following theorem is proved in Section
\ref{strongsec}.
\begin{theorem}
\label{intersectthm} Let $\mathcal{F}$ be strongly rigid. If $\theta
,\theta^{\prime}\in\lbrack N]^{\ast}$ and $E\in\mathcal{E}$ are such that
$\pi(\theta)\cap E\pi(\theta^{\prime})$ is a nonempty common tiling, then
either $\pi(\theta)\subset E\pi(\theta^{\prime})$ or $E\pi(\theta^{\prime
})\subset\pi(\theta)$. If $e(\theta)=e(\theta^{\prime}),$ then $E\pi
(\theta^{\prime})=\pi(\theta).$
\end{theorem}
A \textit{symmetry} of a tiling is an isometry that takes tiles to tiles. A
tiling is \textit{periodic} if there exists a translational symmetry;
otherwise the tiling is \textit{non-periodic}. For example, any tiling of a
quadrant of $\mathbb{R}^{2}$ by congruent squares is periodic. The proof of
the following theorem is given in Section \ref{proofofTWO}.
\begin{theorem}
\label{theoremTWO}If $\mathcal{F}$ is strongly rigid, then there does not
exist any non-identity isometry $E\in\mathcal{E}$ and $\theta\in\lbrack
N]^{\infty}$ such that $E\pi(\theta)\subset\pi(\theta)$.
\end{theorem}
The following theorem is proved in Section~\ref{invertsec}.
\begin{theorem}
\label{1to1thm}If $\pi(i)\cap\pi(j)$ does not tile $\left( \text{support}%
\,\pi(i)\right) \cap\left( \text{support}\,\pi(j)\right) $ for all $i\neq j$,
then $\pi:[N]^{\ast}\cup\lbrack N]^{\infty}\rightarrow\mathbb{T}$ is one-to-one.
\end{theorem}
\section{Structure of $\{\Omega_{k}\}$ and Symbolic IFS Tilings}
The results in this section, which will be applied later, relate to a symbolic
version of the theory in this paper. The next two lemmas provide recursions
for the sequence $\Omega_{k}:=\{\sigma\in\lbrack N]^{\ast}:e(\sigma)>k\geq
e^{-}(\sigma)\}$. In this section the square union symbol $\bigsqcup$ denotes
a disjoint union.
\begin{lemma}
\label{lemma1} For all $k\geq a_{\max}$
\begin{equation}
\Omega_{k}={\bigsqcup_{i=1}^{N}i\, \Omega_{k-a_{i}}}. \label{indexformula}%
\end{equation}
\end{lemma}
\begin{proof}
For all $k\in\mathbb{N}_{0}$ we have%
\begin{align*}
i\,\Omega_{k} & =\{i\sigma:\sigma\in\lbrack N]^{\ast},e(\sigma) > k \geq
e^{-}(\sigma)\}\\
& =\{\omega:\omega\in\lbrack N]^{\ast},e(\omega) > k+a_{i} \geq e^{-}%
(\omega),\omega_{1}=i\}\\
& =\Omega_{k+a_{i}}\cap i[N]^{\ast}\text{.}%
\end{align*}
It follows that
\[
i\,\Omega_{k-a_{i}}=\Omega_{k}\cap i[N]^{\ast}%
\]
for all $k\geq a_{i}$, from which it follows that $\Omega_{k}={\bigsqcup
_{i=1}^{N}i\Omega_{k-a_{i}}}$ for all $k\geq a_{\max}$.
\end{proof}
\begin{lemma}
\label{lemstruc2} With $\Omega_{k}^{\prime} :=\{\omega\in\lbrack N]^{\ast
}:e(\omega)=k+1\}$, we have $\Omega_{k}^{\prime}\subset\Omega_{k}$ and
\[
\Omega_{k+1}=\{\Omega_{k}\backslash\Omega_{k}^{\prime}\} \, \bigsqcup\,
\left\{ {\bigsqcup_{i=1}^{N}\Omega_{k}^{\prime}i}\right\} .
\]
\end{lemma}
\begin{proof}
(i) We first show that $\{\Omega_{k}\backslash\Omega_{k}^{\prime}\} \,
\bigsqcup\, \left\{ {\bigsqcup_{i=1}^{N}\Omega_{k}^{\prime}i}\right\}
\subset\Omega_{k+1}$.
Suppose $\theta\in\Omega_{k}\backslash\Omega_{k}^{\prime}$. Then $e^{-}%
(\theta)\leq k<e(\theta)$ and $e(\theta)\neq k+1$. Hence $e^{-}(\theta)\leq
k+1<e(\theta)$ and so $\theta\in\Omega_{k+1}$.
Suppose $\theta\in\Omega_{k}^{\prime}i$ for some $i\in\lbrack N]$. Then
$\theta=\theta^{-}i$ where $\theta^{-}\in\Omega_{k}^{\prime}$,
$e^{-}(\theta)=e(\theta^{-})=k+1$ and $e(\theta)=e(\theta^{-}i)=k+1+a_{i}$.
Hence $e^{-}(\theta)\leq k+1<e(\theta)$, and so $\theta\in\Omega_{k+1}$.
(ii) We next show that $\Omega_{k+1}\subset\{\Omega_{k}\backslash\Omega
_{k}^{\prime}\}\, \bigsqcup\, \left\{ {\bigsqcup_{i=1}^{N}\Omega
_{k}^{\prime}i}\right\} $.
Let $\theta\in\Omega_{k+1}$. Then $e^{-}(\theta)=e(\theta^{-})\leq
k+1<e(\theta)$.
If $e(\theta^{-})=k+1$, then $\theta=\theta^{-}\theta_{\left\vert
\theta\right\vert }\in\Omega_{k}^{\prime}\theta_{\left\vert \theta\right\vert
}\subset\left\{ {\bigsqcup_{i=1}^{N}\Omega_{k}^{\prime}i}\right\} $.
If $e(\theta^{-})\neq k+1$, then $e(\theta^{-})\leq k<k+1<e(\theta)$, so
$\theta\in\Omega_{k}\backslash\Omega_{k}^{\prime}$.
\end{proof}
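Both recursions can be confirmed by machine for small parameters. The following self-contained Python sketch of ours, with strings as $1$-based tuples, checks Lemma \ref{lemma1} and Lemma \ref{lemstruc2} by direct enumeration:

```python
def check_recursions(a, kmax=10):
    """Verify the two Omega_k recursions for scaling exponents a = (a_1,...,a_N)."""
    e = lambda s: sum(a[i - 1] for i in s)
    N, amax = len(a), max(a)

    def omega(k):
        # Omega_k = {s : e(s) > k >= e(s minus its last symbol)}, by growing strings.
        out, stack = set(), [()]
        while stack:
            s = stack.pop()
            for i in range(1, N + 1):
                t = s + (i,)
                if e(t) > k:
                    out.add(t)
                else:
                    stack.append(t)
        return out

    for k in range(amax, kmax):
        # First recursion: Omega_k is the disjoint union of the sets i Omega_{k-a_i}.
        parts = [{(i,) + s for s in omega(k - a[i - 1])} for i in range(1, N + 1)]
        assert sum(len(p) for p in parts) == len(set().union(*parts))  # disjointness
        assert set().union(*parts) == omega(k)
    for k in range(kmax):
        # Second recursion: split off exactly the addresses with e(w) = k + 1.
        om = omega(k)
        omp = {w for w in om if e(w) == k + 1}
        grown = (om - omp) | {w + (i,) for w in omp for i in range(1, N + 1)}
        assert grown == omega(k + 1)
    return True
```

The check passes, for instance, for the exponent vectors $(1,2)$, $(2,3)$ and $(1,1,2)$, each of which has $\gcd$ equal to $1$.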
For all $\theta\in\lbrack N]^{\ast},$ define $c(\theta)=\{\omega\in\lbrack
N]^{\infty}:\omega_{1}\omega_{2}\cdots\omega_{\left\vert \theta\right\vert
}=\theta\}$. (Such sets are sometimes called \textit{cylinder sets}.) With the
metric on $[N]^{\infty}$ defined to be $d_{0}(\theta,\omega)=2^{-\min
\{k:\theta_{k}\neq\omega_{k}\}}$ for $\theta\neq\omega$, the diameter of
$c(\theta)$ is $2^{-(\left\vert \theta\right\vert +1)}$. The following lemma
tells us how $\{c(\theta):\theta\in\Omega_{k}\}$ may be considered as a tiling
of the symbolic space $[N]^{\infty}$.
\begin{lemma}
\label{lemstruc3} For each $k\in\mathbb{N}_{0}$ the collection of sets
$\{c(\theta):\theta\in\Omega_{k}\}$ forms a partition of $[N]^{\infty}$, each
part of which has diameter belonging to $\{s^{k+1},s^{k+2},\dots,s^{k+a_{\max
}}\}$ where $s=1/2$. That is,
\[
\left[ N\right] ^{\infty}=\bigsqcup\limits_{\theta\in\Omega_{k}}c(\theta)
\]
for all $k\in\mathbb{N}_{0}$.
\end{lemma}
\begin{proof}
Assume that $\omega\in[N]^{\infty}$. Since $e(\omega|j)$ is strictly
increasing in $j$, there is a unique $j$ such that $\omega|j
\in\Omega_{k}$. Letting $\theta= \omega|j$ we have $\omega\in c(\theta)
\subset[N]^{\infty}$. Therefore $[N]^{\infty} = \bigcup_{\theta\in\Omega_{k}}
c(\theta)$.
Assume that $\theta,\theta^{\prime}\in\Omega_{k}$. If $\omega\in c(\theta)\cap
c(\theta^{\prime})$, then by the definition of cylinder set either
$\theta=\theta^{\prime}$ or $|\theta|\neq|\theta^{\prime}|$. However, if
$|\theta|\neq|\theta^{\prime}|$, then $\omega\big ||\theta|=\theta\in
\Omega_{k}$ and $\omega\big ||\theta^{\prime}|=\theta^{\prime}\in\Omega_{k}$,
which would contradict the uniqueness of $j$. Therefore $[N]^{\infty
}=\bigsqcup_{\theta\in\Omega_{k}}c(\theta)$.
\end{proof}
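The partition can also be checked numerically: since $e(\omega|j)$ is strictly increasing in $j$, every sufficiently long string has exactly one prefix in $\Omega_{k}$. A small sketch of ours, in which long finite words stand in for elements of $[N]^{\infty}$:

```python
from itertools import product

def prefixes_in_omega(word, k, a):
    """Count the prefixes of `word` lying in Omega_k = {s : e(s) > k >= e^-(s)}."""
    e = lambda s: sum(a[i - 1] for i in s)
    return sum(1 for j in range(1, len(word) + 1)
               if e(word[:j]) > k >= e(word[:j - 1]))
```

For the exponents $a=(1,2)$ and words of length $8$, every word has exactly one prefix in $\Omega_{k}$ for each $k\leq 4$ (the crossing of level $k$ happens within the first $8$ symbols because $e(\omega|j)\geq j$).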
\section{A Canonical Sequence of Self-similar Tilings}
To facilitate the proofs of the theorems stated in Section~\ref{defsec},
another family of tilings is introduced, tilings isometric to those that are
the subject of this paper. Let
\[
A_{k}=s^{-k}A
\]
for all $k\in\mathbb{N}_{0}\cup\{-1,-2,\dots,-a_{\max}\}$, and define, for all
$k\in\mathbb{N}_{0}$, a sequence of tilings $T_{k}$ of $A_{k}$ by
\[
T_{k}=\{ s^{-k} f_{\sigma}(A):\sigma\in\Omega_{k}\}.
\]
The following lemma says, in particular, that $T_{k}$ is a non-overlapping
union of copies of $T_{k-a_{i}}$ for $i\in\lbrack N]$ when $k\geq a_{\max}$,
and that $T_{k}$ may be expressed as a non-overlapping union of copies of
$T_{k-e(\omega)}$ for $\omega\in\Omega_{{l}}$ when $k\geq l+a_{\max}$,
$l\in\mathbb{N}_{0}$. In this section the square union notation $\bigsqcup$
denotes a non-overlapping union.
\begin{lemma}
\label{lemma02} For all $k\in\mathbb{N}_{0}$ the support of $T_{k}$ is $A_{k}%
$. For all $\theta\in\lbrack N]^{\ast}$,
\[
\pi(\theta)=E_{\theta}T_{e(\theta)}%
\]
where $E_{\theta}$ is the isometry $f_{-\theta}s^{e(\theta)}$. Also
\begin{equation}
T_{k}={\bigsqcup_{i=1}^{N}}E_{k,i}T_{k-a_{i}} \label{Tkformula}%
\end{equation}
for all $k\geq a_{\max}$, where each of the mappings $E_{k,i}=s^{-k}\circ
f_{i}\circ s^{k-a_{i}}$ is an isometry. More generally,
\begin{equation}
T_{k}={\bigsqcup_{\omega\in\Omega_{l}}}E_{k,\omega}T_{k-e(\omega)},
\label{Tkformula2}%
\end{equation}
for all $k\geq l+a_{\max}$ and for all $l\in\mathbb{N}_{0}$, where each of the
mappings $E_{k,\omega} =s^{-k}\circ f_{\omega}\circ s^{k-e(\omega)}$ is an isometry.
\end{lemma}
\begin{proof}
It is well-known that if $\mathcal{P}$ is a partition of $[N]^{\infty},$ then
$A=%
{\textstyle\bigcup_{\omega\in\mathcal{P}}}
\phi(\omega)$ where $\phi:[N]^{\infty}\rightarrow A$ is the usual (continuous)
coding map defined by $\phi(\omega)=\lim_{k\rightarrow\infty}f_{\omega|k}(x)$
for any fixed $x\in A$. By Lemma \ref{lemstruc3} we can choose $\mathcal{P=}%
\{c(\theta):\theta\in\Omega_{k}\}$. Hence, the support of $T_{k}$ is
\begin{align*}
s^{-k}\{%
{\textstyle\bigcup}
\{f_{\sigma}(A) :\sigma\in\Omega_{k}\}\} & =s^{-k}\{%
{\textstyle\bigcup}
\{\phi(\omega):\omega\in\{c(\theta):\theta\in\Omega_{k}\}\}\}\\
& =s^{-k}A\text{.}%
\end{align*}
The expression $\pi(\theta)=E_{\theta}T_{e(\theta)}$ where $E_{\theta}$ is the
isometry $f_{-\theta}s^{e(\theta)}$ follows from the definitions of
$\pi(\theta)$ and $T_{k}$ on taking $k=e(\theta)$.
Equation (\ref{Tkformula}) follows from Lemma \ref{lemma1} according to these
steps.
\begin{align*}
T_{k} & = \{ s^{-k} f_{\sigma}(A):\sigma\in\Omega_{k}\}\text{ (by
definition)}\\
& =s^{-k}\{f_{\sigma}(A):\sigma\in{\bigsqcup_{i=1}^{N}i\Omega_{k-a_{i}}%
\}}\text{ (by Lemma \ref{lemma1})}\\
& =s^{-k}{\bigsqcup_{i=1}^{N}}\{f_{i\sigma}(A):\sigma\in\Omega{_{k-a_{i}}%
\}}\text{ (identity)}\\
& =s^{-k}{\bigsqcup_{i=1}^{N}}f_{i}(\{f_{\sigma}(A):\sigma\in\Omega
{_{k-a_{i}}\})}\text{ (identity)}\\
& ={\bigsqcup_{i=1}^{N}}E_{k,i}T_{k-a_{i}}\text{ (by definition)}%
\end{align*}
The function $E_{k,i}=s^{-k}\circ f_{i}\circ s^{k-a_{i}}$ is an isometry
because it is a composition of three similitudes, of scaling ratios $s^{-k}$,
$s^{a_{i}},$ and $s^{k-a_{i}}$. That the union is non-overlapping is
immediate: distinct tiles meet only in images under similitudes of the set of
overlap $\mathcal{O}=\cup_{i\neq j}f_{i}(A)\cap f_{j}(A)$.
Equation (\ref{Tkformula2}) can be proved by induction on $l,$ starting from
Equation (\ref{Tkformula}) and using Lemma \ref{lemstruc2}.
\end{proof}
The following definition, formalizing the notion of an ``isometric combination
of tilings", will be used later, but it is convenient to place it here.
\begin{definition}
Let $\{U_{i}:i\in\mathcal{I\}}$ be a collection of tilings. An
\textbf{isometric combination of the set of tilings} $\{U_{i}:i\in
\mathcal{I\}}$ is a tiling $V$ that can be written in the form
\[
V={\bigsqcup_{i=1}^{K}}E^{(i)}U^{(i)}%
\]
for some $K\in\mathbb{N}$, where $E^{(i)}\in\mathcal{E}$, $U^{(i)}\in
\{U_{i}:i\in\mathcal{I\}}$, for all $i\in\{1,2,\dots,K\}.$
\end{definition}
For example, Lemma \ref{lemma02} tells us that any $T_{k}$ can be written as
an isometric combination of any set of tilings of the form $\{T_{j},%
T_{j+1},\dots,T_{j+a_{\max}-1}\}$ when $k\geqslant j.$
\begin{proposition}
\label{lemmass} The sequence $\left\{ T_{k}\right\} $ of tilings is
self-similar in the following sense. Each of the sets in the magnified tiling
$s^{-1}T_{k}$ is a union of tiles in $T_{k+1}$.
\end{proposition}
\begin{proof}
This follows at once from Lemma \ref{lemstruc2}. The tiling $T_{k+1}$ is
obtained from $T_{k}$ by applying the similitude $s^{-1}$ and then splitting
those resulting sets that are isometric to $A$. By splitting we mean replacing
$EA$ by $\{Ef_{1}(A),$ $Ef_{2}(A),\dots, Ef_{N}(A)\}$; see Section
\ref{strongsec}.
\end{proof}
\section{Theorem \ref{theoremONE}: Existence and Continuity of
Tilings\label{ProofofONE}}
Let
\[
A_{-\theta|k}:=f_{-\theta|k}A
\]
for all $\theta\in\lbrack N]^{\infty}$. It is immediate from Definition
\ref{defONE} that the support of the tiling $\pi(\theta|k)$ is $A_{-\theta|k}$
and that $\pi(\theta|k)$ is isometric to the tiling $T_{e(\theta|k)}$ of
$A_{e(\theta|k)}$. We use this fact repeatedly in the rest of this paper.
\begin{flushleft}
\textbf{Theorem~\ref{theoremONE}.} Each set $\pi(\theta)$ in $\mathbb{T}$ is a
tiling of a subset of $\mathbb{R}^{M}$, the subset being bounded when
$\theta\in[N]^{*}$ and unbounded when $\theta\in\lbrack N]^{\infty}$. For all
$\theta\in\lbrack N]^{\infty}$ the sequence of tilings $\left\{ \pi
(\theta|k)\right\} _{k=0}^{\infty}$ is nested according to%
\begin{equation}
\{f_{i}(A):i\in\lbrack N]\}=\pi(\varnothing)\subset\pi(\theta|1)\subset
\pi(\theta|2)\subset\pi(\theta|3)\subset\cdots\text{ .}%
\end{equation}
For all $\theta\in\lbrack N]^{\infty}$, the prototile set for $\pi
(\theta)$ is $\{s^{i}A:i=1,2,\cdots,a_{\max}\}$. Furthermore
\[
\pi:[N]^{\ast}\cup\lbrack N]^{\infty}\rightarrow\mathbb{T}%
\]
is a continuous map from the compact metric space $[N]^{\ast}\cup\lbrack
N]^{\infty}$ into the space $(\mathbb{K}({\mathbb{R}}^{M}),d_{\tau})$.
\end{flushleft}
\begin{proof}
Using Lemma \ref{lemma02}, for $\theta=\theta_{1}\theta_{2}\cdots\theta_{l}%
\in\lbrack N]^{\ast}$ and $\theta^{-}=\theta_{1}\theta_{2}\cdots\theta_{l-1}%
$,
\begin{align*}
\pi(\theta) & =E_{\theta}T_{e(\theta)}={\bigsqcup_{i=1}^{N}%
}E_{\theta}E_{e(\theta),i}T_{e(\theta)-a_{i}}\\
& \supset E_{\theta}E_{e(\theta),\theta_{l}}T_{e(\theta)-a_{\theta_{l}}}%
=E_{\theta^{-}}T_{e(\theta^{-})}=\pi(\theta^{-})\text{.}%
\end{align*}
It follows that $\{\pi(\theta|k)\}$ is an increasing sequence of tilings for
all $\theta\in\lbrack N]^{\infty}$, as in Equation (\ref{eqthmONE}), and so
converges to a well-defined limit. Since the maps in the IFS are strict
contractions, their inverses are expansive, whence $\pi(\theta)$ is unbounded
for all $\theta\in\lbrack N]^{\infty}$.
The fact that the tiles here are indeed tiles as we defined them at the start
of this paper follows from three readily checked observations. (i) The tiles
are nonempty perfect compact sets because they are isometric to the attractor,
that is not a singleton, of an IFS of similitudes. (ii) There are only
finitely many tiles that intersect any ball of finite radius. (iii) Any two
tiles can meet only on a set that is contained in the image under a similitude
of the set of overlap.
Next we prove that there are exactly $a_{\max}$ distinct tiles, up to
isometry, in any tiling $\pi(\theta)$ for $\theta\in\lbrack N]^{\infty}$. The
tiles of $\pi(\theta)$ take the form $\{f_{-\theta|k}f_{\sigma}%
(A):\sigma\in\Omega_{e(\theta|k)}\}$ for some $k\in\mathbb{N}$. The mappings
here are similitudes whose scaling factors are $\{s^{e(\sigma)-e(\theta
|k)}:e(\sigma)-e(\theta|k)>0\geq e(\sigma)-e(\theta|k)-a_{\sigma_{\left\vert
\sigma\right\vert }}\},$ namely $\{s^{m}:m>0\geq m-a_{\sigma_{\left\vert
\sigma\right\vert }}\}$, so the possible values of $m$ lie in
$\{1,2,\dots,a_{\max}\}$. That all of these values occur for large
enough $k$ follows from $\gcd\{a_{i}:i=1,2,\dots, N\}=1$.
Next we prove that $\pi:[N]^{\ast}\cup\lbrack N]^{\infty}\rightarrow
\mathbb{T}$ is a continuous map from the compact metric space $[N]^{\ast}%
\cup\lbrack N]^{\infty}$ onto the space $(\mathbb{T},d_{\tau}).$ The map
$\pi|_{[N]^{\ast}}:[N]^{\ast}\rightarrow\mathbb{T}$ is continuous on the
discrete part of the space $([N]^{\ast},d_{[N]^{\ast}\cup\lbrack N]^{\infty}%
})$ because each point $\theta\in\lbrack N]^{\ast}$ possesses an open
neighborhood that contains no other points of $[N]^{\ast}\cup\lbrack
N]^{\infty}$. To show that $\pi$ is continuous at points of $[N]^{\infty}$ we
follow a similar method to the one in \cite{anderson}. Let $\varepsilon>0$ be
given and let $B(R)$ be the open ball of radius $R$ centered at the origin.
Choose $R$ so large that $h(\rho(\overline{B(R)}),\mathbb{S}^{M})<\varepsilon
$. This implies that if two tilings differ only where they intersect the
complement of $\overline{B(R)}$, then their distance $d_{\tau}$ apart is less
than $\varepsilon$. But geometrical consideration of the way in which
\textit{support}$(\pi(\theta_{1}\theta_{2}\theta_{3}\cdots\theta_{k}))$ grows with
increasing $k$ shows that we can choose $K$ so large that \textit{support}%
$(\pi(\theta_{1}\theta_{2}\theta_{3}\cdots\theta_{k}))\cap\overline{B(R)}$ is
constant for all $k\geq K$. It follows that%
\[
h(\rho(\pi(\theta_{1}\theta_{2}\cdots\theta_{k})),\rho(\pi(\theta_{1}\theta
_{2}\cdots\theta_{l})))\leq\varepsilon
\]
and as a consequence
\[
h(\rho(\partial\pi(\theta_{1}\theta_{2}\cdots\theta_{k})),\rho(\partial\pi
(\theta_{1}\theta_{2}\cdots\theta_{l})))\leq\varepsilon
\]
for all $k,l\geq K$. It follows that $h(\rho(\pi(\theta)),\rho(\pi
(\omega)))\leq\varepsilon$ whenever $\theta_{1}\theta_{2}\cdots\theta_{K}%
=\omega_{1}\omega_{2}\cdots\omega_{K}$. Hence $\pi$ is continuous.
\end{proof}
\section{\label{proofofTHREE}Theorem \ref{theoremTHREE}: When Do all Tilings
Repeat the Same Patterns?}
\begin{flushleft}
\textbf{Theorem~\ref{theoremTHREE}.}
\end{flushleft}
\begin{enumerate}
\item Each unbounded tiling in $\mathbb{T}$ is quasiperiodic and all tilings
in $\mathbb{T}$ have the local isomorphism property.
\item If $\theta$ is eventually periodic, then $\pi(\theta)$ is self-similar.
In fact, if $\theta=\alpha\overline{\beta}$ for some $\alpha,\beta\in\left[
N\right] ^{\ast},$ then $f_{-\alpha}f_{-\beta}\left( f_{-\alpha}\right)
^{-1}$ is a self-similarity of $\pi(\theta)$.
\end{enumerate}
\begin{proof}
(1) First we prove quasiperiodicity. This is related to the self-similarity of
the sequence of tilings $\left\{ T_{k}\right\} $ mentioned in Proposition
\ref{lemmass}.
Let $\theta\in\left[ N\right] ^{\infty}$ be given and let $P$ be a patch in
$\pi(\theta)$. There is a $K_{1}\in\mathbb{N}$ such that $P$ is contained in
$\pi(\theta|K_{1})$. Hence an isometric copy of $P$ is contained in $T_{K_{2}%
}$ where $K_{2}=e(\theta|K_{1})$. Now choose $K_{3}\in\mathbb{N}$ so that an
isometric copy of $T_{K_{2}}$ is contained in each $T_{k}$ with $k\geq K_{3}.$
That this is possible follows from the recursion (\ref{Tkformula2}) of Lemma
\ref{lemma02} and \textit{gcd}$\{a_{i}\}=1$. In particular, $T_{K_{2}}\subset
T_{K_{3}+i}$ for all $i\in\{1,2,...,a_{\max}\}$.
Now let $K_{4}=K_{3}+a_{\max}$. Then, for all $k\geq K_{4}$, the tiling
$T_{k}$ is an isometric combination of $\{T_{K_{3}+i}:$\textit{ }%
$i=1,2,...,a_{\max}\}$, and each of these tilings contains a copy of
$T_{K_{2}}$ and in particular a copy of $P$.
Let $D=\max\{\left\Vert x-y\right\Vert :x,y\in A\}$ be the diameter of $A$.
The support of $T_{k}$ is $s^{-k}A$, which has diameter $s^{-k}D.$ Hence
\textit{support}$(T_{k})\subset B(x,2s^{-k}D)$, the ball centered at $x$ of
radius $2s^{-k}D$, for all $x\in$ \textit{support}$(T_{k})$. It follows that
if $x\in$ \textit{support}$(\pi(\theta^{\prime}))$ for any $\theta^{\prime}%
\in\left[ N\right] ^{\infty}$, then $B(x,2s^{-K_{4}}D)$ contains a copy of
\textit{support}$(T_{K_{2}})$ and hence a copy of $P$. Therefore all unbounded
tilings in $\mathbb{T}$ are quasiperiodic.
In \cite{Rad} Radin and Wolff define a tiling to have the local isomorphism
property if for every patch $P$ in the tiling there is some distance $d(P)$
such that every sphere of diameter $d(P)$ in the tiling contains an isometric
copy of $P$. Above, we have proved a stronger property of the tilings of
fractal blow-ups defined here: given $P,$ there is a distance $d(P)$ such that
each sphere of diameter $d(P),$ centered at any point belonging to the support
of any unbounded tiling in $\mathbb{T}$, contains a copy of $P$.
(2) Let $\theta=\alpha\overline{\beta}=\alpha_{1}\alpha_{2}\cdots\alpha
_{l}\beta_{1}\beta_{2}\cdots\beta_{m}\beta_{1}\beta_{2}\cdots\beta_{m}%
\beta_{1}\beta_{2}\cdots\beta_{m}\cdots$. We have the equivalent increasing
unions
\[
\pi(\theta)=%
{\textstyle\bigcup\limits_{k\in\mathbb{N}}}
E_{\theta|k}T_{e(\theta|k)}=%
{\textstyle\bigcup\limits_{j\in\mathbb{N}}}
E_{\theta|(l+jm)}T_{e(\theta|(l+jm))}=%
{\textstyle\bigcup\limits_{j\in\mathbb{N}}}
E_{\theta|(l+jm+m)}T_{e(\theta|(l+jm+m))}%
\]
where, for all $k$,
\[
E_{\theta|k}=f_{-\theta|k}s^{e(\theta|k)}\text{.}%
\]
We can write
\[
\pi(\theta)=%
{\textstyle\bigcup\limits_{j\in\mathbb{N}}}
E_{\theta|(l+jm)}T_{e(\theta|(l+jm))}=f_{-\alpha}%
{\textstyle\bigcup\limits_{j\in\mathbb{N}}}
f_{-\beta}^{j}s^{e(\theta|(l+jm))}T_{e(\theta|(l+jm))},
\]
and also
\[
\pi(\theta)=%
{\textstyle\bigcup\limits_{j\in\mathbb{N}}}
E_{\theta|(l+jm+m)}T_{e(\theta|(l+jm+m))}=f_{-\alpha}f_{-\beta}%
{\textstyle\bigcup\limits_{j\in\mathbb{N}}}
f_{-\beta}^{j}s^{e(\theta|(l+jm+m))}T_{e(\theta|(l+jm+m))}\text{.}%
\]
Here $f_{-\beta}^{j}s^{e(\theta|(l+jm+m))}T_{e(\theta|(l+jm+m))}$ is a
refinement of $f_{-\beta}^{j}s^{e(\theta|(l+jm))}T_{e(\theta|(l+jm))}$. It
follows that $\left( f_{-\alpha}f_{-\beta}\right) ^{-1}\pi(\theta)$ is a
refinement of $\left( f_{-\alpha}\right) ^{-1}\pi(\theta)$, from which it
follows that $\left( f_{-\alpha}\right) \left( f_{-\alpha}f_{-\beta
}\right) ^{-1}\pi(\theta)$ is a refinement of $\pi(\theta)$. Therefore, every
set in $\left( f_{-\alpha}f_{-\beta}\right) \left( f_{-\alpha}\right)
^{-1}\pi(\theta)$ is a union of tiles in $\pi(\theta)$.
\end{proof}
\section{\label{realabsec} Relative and Absolute Addresses}
In order to understand how different tilings relate to one another, the
notions of relative and absolute addresses of tiles are introduced. Given an
IFS $\mathcal{F}$, the \textit{set of absolute addresses} is defined to be:
\[
\mathbb{A}:=\{\theta.\omega:\theta\in\lbrack N]^{\ast},\,\omega\in
\Omega_{e(\theta)},\,\theta_{\left\vert \theta\right\vert }\neq\omega_{1}\}.
\]
Define $\widehat{\pi}:\mathbb{A}\rightarrow\{t\in T:T\in\mathbb{T}\}$ by
\[
\widehat{\pi}(\theta.\omega)=f_{-\theta}.f_{\omega}(A).
\]
We say that $\theta.\omega$ is an \textit{absolute address} of the tile
$f_{-\theta}.f_{\omega}(A)$. It follows from Definition \ref{defONE} that the
map $\widehat{\pi}$ is surjective: every tile of $\{t\in
T:T\in\mathbb{T}\}$ possesses at least one absolute address. The condition
$\theta_{\left\vert \theta\right\vert }\neq\omega_{1}$ is imposed to make
cancellation unnecessary.
The \textit{set of relative addresses} is associated with the tiling $T_{k}$
of $A_{k}=s^{-k}A$ and is defined to be $\{.\omega:\omega\in\Omega_{k}\}$.
\begin{proposition}
\label{lembij}There is a bijection between the set of relative addresses
$\{.\omega:\omega\in\Omega_{k}\}$ and the tiles of $T_{k}$, for all
$k\in\mathbb{N}_{0}$.
\end{proposition}
\begin{proof}
This follows from the non-overlapping union
\[
A=%
{\textstyle\bigsqcup\limits_{\omega\in\Omega_{k}}}
f_{\omega}(A)\text{.}%
\]
This expression follows immediately from Lemma \ref{lemstruc3}; see the start
of the proof of Lemma \ref{lemma02}.
\end{proof}
Accordingly, we say that $.\omega$, or equivalently $\varnothing.\omega,$
where $\omega\in\Omega_{k}$, is \textit{the relative address} of the tile
$s^{-k}f_{\omega}(A)$ in the tiling $T_{k}$ of $A_{k}$. Note that a tile of
$T_{k}$ may share the same relative address as a different tile of $T_{l}$ for
$l\neq k$.
Define the \textit{set of labelled tiles} of $T_{k}$ to be%
\[
\mathcal{A}_{k}=\{(.\omega,s^{-k}f_{\omega}(A)):\omega\in\Omega_{k}\}
\]
for all $k\in\mathbb{N}_{0}$. A key point about relative addresses is that the
set of labelled tiles of $T_{k}$ for $k\in\mathbb{N}$ can be computed
recursively. Define
\[
\mathcal{A}_{k}^{\prime}=\{(.\omega,s^{-k}f_{\omega}(A))\in\mathcal{A}%
_{k}:e(\omega)=k+1\}\subset\mathcal{A}_{k}\text{.}%
\]
An example of the following inductive construction is illustrated in\ Figure
\ref{l-maps}, and some corresponding tilings $\pi(\theta)$ labelled by
absolute addresses are illustrated in Figure \ref{absolute}.
\begin{lemma}
\label{branchlem}For all $k\in\mathbb{N}_{0}$ we have%
\[
\mathcal{A}_{k+1}=\mathcal{L}(\mathcal{A}_{k}\backslash\mathcal{A}_{k}^{\prime}%
)\cup\mathcal{M}(\mathcal{A}_{k}^{\prime})
\]
where
\begin{align*}
\mathcal{L}(.\omega,s^{-k}f_{\omega}(A)) & =(.\omega,s^{-k-1}f_{\omega}(A)),\\
\mathcal{M}(.\omega,s^{-k}f_{\omega}(A)) & =\big \{(.\omega i,s^{-k-1}%
f_{\omega i}(A)):i\in\lbrack N]\big \}\text{.}%
\end{align*}
\end{lemma}
\begin{proof}
This follows immediately from Lemma \ref{lemstruc2}.
\end{proof}
\section{\label{strongsec}Strong Rigidity, Definition of ``Amalgamation and
Shrinking" Operation $\alpha$ on Tilings, and Proof of Theorem
\ref{intersectthm}.}
We begin this key section by introducing an operation, called
\textquotedblleft amalgamation and shrinking", that maps certain tilings into
tilings. This leads to the main result of this section, Theorem
\ref{intersectthm}, which, in turn, leads to Theorem \ref{theoremTWO}.
\begin{definition}
\label{rigiddef} Let $T_{0}=\{f_{i}(A):i\in\left[ N\right] \}$. The IFS
$\mathcal{F}$ is said to be \textbf{rigid} if (i) there exists no non-identity
isometry $E\in\mathcal{E}$ such that $T_{0}\cap ET_{0}$ is non-empty and tiles
$A\cap EA$, and (ii) there exists no non-identity isometry $E\in\mathcal{E}$
such that $A=EA$.
\end{definition}
\begin{definition}
Define $\mathbb{T}^{\prime}$ to be the set of all tilings using the set of
prototiles $\left\{ s^{i}A:i=1,2,\dots,a_{\max}\right\} $. Any tile that is
isometric to $s^{a_{\max}}A$ is called a \textbf{small tile}, and any tile
that is isometric to $sA$ is called a \textbf{large tile}. We say that a
tiling $P\in\mathbb{T}^{\prime}$ comprises a set of \textbf{partners }if
$P=ET_{0}$ for some $E\in\mathcal{E}$. Define $\mathbb{T^{\prime\prime}\subset
T}^{\prime}$ to be the set of all tilings in $\mathbb{T}^{\prime}$ such that,
given any $Q\in\mathbb{T^{\prime\prime}}$ and any small tile $t\in Q$, there
is a set of partners of $t$, call it $P(t)$, such that $P(t)\subset Q$. Given
any $Q\in\mathbb{T^{\prime\prime}}$ we define $Q^{\prime}$ to be the union of
all sets of partners in $Q$.
\end{definition}
\begin{definition}
Let $\mathcal{F}$ be a rigid IFS. The amalgamation and shrinking operation
$\alpha:\mathbb{T^{\prime\prime}\rightarrow T}^{\prime}$ is defined by
\[
\alpha Q=\{st:t\in Q\backslash Q^{\prime}\}\cup%
{\displaystyle\bigsqcup_{\{E\in\mathcal{E}:ET_{0}\subset Q^{\prime}\}}}
sEA\text{.}%
\]
\end{definition}
\begin{lemma}
\label{inverselemma} If $\mathcal{F}$ is rigid, the function $\alpha
:\mathbb{T^{\prime\prime}\rightarrow T}^{\prime}$ is well-defined and
bijective; in particular, $\alpha^{-1}:\mathbb{T}^{\prime}\rightarrow
\mathbb{T^{\prime\prime}}$ is well defined by
\[
\alpha^{-1}(Q)=\{\alpha_{Q}^{-1}(q):q\in Q\}
\]
where
\[
\alpha_{Q}^{-1}(q)=\left\{
\begin{array}
[c]{c}%
s^{-1}q\text{ if }q\in Q\text{ is not a large tile}\\
s^{-1}ET_{0}\text{ if }Eq\text{ is a large tile, some }E\in\mathcal{E}%
\end{array}
\right.
\]
\end{lemma}
\begin{proof}
Because $\mathcal{F}$ is rigid, there can be no ambiguity with regard to which
sets of tiles in a tiling are partners, nor with regard to which tiles are the
partners of a given small tile. Hence $\alpha:\mathbb{T^{\prime\prime
}\rightarrow T}^{\prime}$ is well defined. Given any $T^{\prime}\in
\mathbb{T}^{\prime}$ we can find a unique $Q\in\mathbb{T^{\prime\prime}}$ such
that $\alpha(Q)=T^{\prime},$ namely $Q=\alpha^{-1}(T^{\prime})$ as defined in the lemma.
\end{proof}
\begin{lemma}
\label{alphaTlem}Let $\mathcal{F}$ be rigid and $k\in\mathbb{N}$. Then
(i) $T_{k}\in\mathbb{T^{\prime\prime}}$;
(ii) $\alpha T_{k}=T_{k-1}$ and $\alpha^{-1}T_{k-1}=T_{k}$.
\end{lemma}
\begin{proof}
As described in Lemma \ref{branchlem}$,$ $T_{k}$ can be constructed in a
well-defined manner, starting from $T_{k-1}$, by scaling and splitting,
that is, by applying $\alpha^{-1}$. Conversely $T_{k-1}$ can be constructed
from $T_{k}$ by applying $\alpha$. Statements (i) and (ii) are consequences.
\end{proof}
\begin{lemma}
\label{srinterlem} If $\mathcal{F}$ is rigid, $L,M\in\mathbb{T^{\prime\prime}%
}$, and $L\cap M$ tiles support($L)\,\cap\,$support($M),$ then $L\, \cap\,
M\in\mathbb{T^{\prime\prime}}$. Moreover,
\[
\alpha(L\cap M)=\alpha(L)\cap\alpha(M),
\]
and $\alpha(L\cap M)$ tiles support$\, \alpha(L)\, \cap\, $support$\,
\alpha(M)$.
\end{lemma}
\begin{proof}
Since $L,M\in\mathbb{T^{\prime\prime}\subset T}^{\prime}$ lie in the range of
$\alpha^{-1},$ we can find unique $L^{\prime},M^{\prime}\in\mathbb{T}^{\prime
}$ such that
\[
L=\alpha^{-1}L^{\prime}\text{ and }M=\alpha^{-1}M^{\prime}.
\]
Note that $\alpha^{-1}(T^{\prime})=\left\{ \alpha^{-1}(t):t\in T^{\prime
}\right\} $ for all $T^{\prime}\in\mathbb{T}^{\prime}$, which implies that
$\alpha^{-1}$ commutes both with unions of disjoint tilings and also with
intersections of tilings whose intersections tile the intersections of their
supports. It follows that $L\cap M\in\mathbb{T^{\prime\prime}}$,
\begin{align*}
\alpha(L\cap M) & =\alpha(\alpha^{-1}L^{\prime}\cap\alpha^{-1}M^{\prime})\\
& =\alpha(\alpha^{-1}(L^{\prime}\cap M^{\prime}))\\
& =L^{\prime}\cap M^{\prime}\\
& =\alpha\left( L\right) \cap\alpha\left( M\right) \text{,}%
\end{align*}
and support $\alpha(L\cap M)=$\, support $\, \alpha\left( L\right) \cap
\,$support $\alpha\left( M\right) $.
\end{proof}
\begin{definition}
\label{strongdef}$\mathcal{F}$ is \textbf{strongly rigid} if $\mathcal{F}$ is
rigid and whenever $i,j\in\{0,1,2,\dots,a_{\max}-1\},E\in\mathcal{E}$, and
$T_{i}\cap ET_{j}$ tiles $A_{i}\cap EA_{j}$, either $T_{i}\subset ET_{j}$ or
$T_{i}\supset ET_{j}.$
\end{definition}
Section \ref{exsec} contains a few examples of strongly rigid IFSs.
\begin{lemma}
\label{intersectlemma} Let $\mathcal{F}$ be strongly rigid, $k,l\in
\mathbb{N}_{0}$, and $E\in\mathcal{E}$.
(i) If $ET_{k}\cap T_{k}$ is nonempty and tiles $EA_{k}\cap A_{k},$ then
$E=id$.
(ii) If $EA_{k}\cap A_{k+l}$ is nonempty and $ET_{k}\cap T_{k+l}$ tiles
$EA_{k}\cap A_{k+l}$, then $ET_{k}\subset T_{k+l}$.
\end{lemma}
\begin{proof}
Suppose $ET_{k}\cap T_{l}\neq\varnothing$ and t.i.s. (tiles the intersection of
the supports). Without loss of generality assume $k\leq l$; if not, apply
$E^{-1}$ and relabel $E^{-1}$ as $E$.
Both $ET_{k}$ and $T_{l}$ lie in the domain of $\alpha^{k}$, so we can apply
Lemma \ref{srinterlem} $k$ times, yielding
\begin{align}
\alpha^{k}(ET_{k}\cap T_{l}) & =s^{k}Es^{-k}T_{0}\cap T_{l-k} \label{aboveq}%
\\
& :=\widetilde{E}T_{0}\cap T_{l-k}\neq\varnothing,\nonumber
\end{align}
where $\widetilde{E}T_{0}\cap T_{l-k}$ t.i.s. Now observe that by Lemma
\ref{lemma02} we can write, for all $k^{\prime}\geq l^{\prime}+a_{\max}$,%
\[
T_{k^{\prime}}={\bigsqcup_{\omega\in\Omega_{l^{\prime}}}}E_{k^{\prime},\omega
}T_{k^{\prime}-e(\omega)}\left( =\left\{ E_{k^{\prime},\omega}T_{k^{\prime
}-e(\omega)}:\omega\in\Omega_{l^{\prime}}\right\} \right) ,
\]
where $E_{k^{\prime},\omega}\in\mathcal{E}$ for all $k^{\prime},\omega$.
Choose $l^{\prime}=k^{\prime}-a_{\max}$ and note that, for $\omega
\in\Omega_{l^{\prime}}=\Omega_{k^{\prime}-a_{\max}}$, we have $e(\omega
)\in\{k^{\prime}-a_{\max}+1,\dots,k^{\prime}\}.$ Therefore
$k^{\prime}-e(\omega)\in\{0,1,\dots,a_{\max}-1\}$ and we obtain the explicit
representation%
\[
T_{k^{\prime}}={\bigsqcup_{\omega\in\Omega_{k^{\prime}-a_{\max}}}}%
E_{k^{\prime},\omega}T_{k^{\prime}-e(\omega)}%
\]
which is an isometric combination of $\{T_{0},T_{1},\dots,T_{a_{\max}-1}\}$.
In particular, we can always reexpress $T_{l-k}$ in (\ref{aboveq}) as an
isometric combination of $\{T_{0},T_{1},\dots,T_{a_{\max}-1}\}$, and so there
is some $E^{\prime}$ and some $T_{m}\in\{T_{0},T_{1},\dots,T_{a_{\max}-1}\}$
such that
\[
\widetilde{E}T_{0}\cap E^{\prime}T_{m}\neq\varnothing\text{ and t.i.s.}%
\]
By the strong rigidity assumption, this implies $\widetilde{E}T_{0}\subset
E^{\prime}T_{m}$, which in turn implies
\[
\widetilde{E}T_{0}\subset T_{l-k}%
\]
and t.i.s. Now apply $\alpha^{-k}$ to both sides of this last inclusion to
obtain the conclusions of the lemma.
\end{proof}
\begin{flushleft}
\textbf{Theorem \ref{intersectthm}.} Let $\mathcal{F}$ be strongly rigid. If
$\theta,\theta^{\prime}\in\lbrack N]^{\ast}$ and $E\in\mathcal{E}$ are such
that $\pi(\theta)\cap E\pi(\theta^{\prime})$ is not empty and tiles
$A_{-\theta}\cap EA_{-\theta^{\prime}}$, then either $\pi(\theta)\subset
E\pi(\theta^{\prime})$ or $E\pi(\theta^{\prime})\subset\pi(\theta)$. In this
situation, if $e(\theta)=e(\theta^{\prime}),$ then $E\pi(\theta^{\prime}%
)=\pi(\theta).$
\end{flushleft}
\begin{proof}
This follows from Lemma \ref{intersectlemma}. If $\theta,\theta^{\prime}%
\in\lbrack N]^{\ast}$ and $E\in\mathcal{E}$ are such that $\pi(\theta)\cap
E\pi(\theta^{\prime})$ is not empty and tiles $A_{-\theta}\cap EA_{-\theta
^{\prime}}$, then $\theta,\theta^{\prime}\in\lbrack N]^{\ast}$ and
$E\in\mathcal{E}$ are such that $E_{\theta}T_{e(\theta)}\cap EE_{\theta
^{\prime}}T_{e(\theta^{\prime})}$ is not empty and tiles $E_{\theta
}A_{e(\theta)}\cap EE_{\theta^{\prime}}A_{e(\theta^{\prime})}$, where
$E_{\theta}=f_{-\theta}s^{e(\theta)}$ and $E_{\theta^{\prime}}=f_{-\theta
^{\prime}}s^{e(\theta^{\prime})}$ are isometries$.$ Assume, without loss of
generality, that $e(\theta)\leq e(\theta^{\prime})$ and apply $E_{\theta
^{\prime}}^{-1} E^{-1}$ to obtain that $\theta,\theta^{\prime}\in\lbrack
N]^{\ast}$ and $E^{\prime}=E_{\theta^{\prime}}^{-1}E^{-1} E_{\theta}%
\in\mathcal{E}$ are such that $E^{\prime}T_{e(\theta)}\cap T_{e(\theta
^{\prime})}$ is not empty and tiles $E^{\prime}A_{e(\theta)}\cap
A_{e(\theta^{\prime})}.$ By Lemma \ref{intersectlemma} it follows that
$E^{\prime}T_{e(\theta)}\subset T_{e(\theta^{\prime})}$, i.e. $E_{\theta
^{\prime}}^{-1}E^{-1}E_{\theta}T_{e(\theta)}\subset T_{e(\theta^{\prime})},$
i.e. $\pi(\theta)\subset E\pi(\theta^{\prime}).$ If also $e(\theta^{\prime
})\leq e(\theta)$ (i.e. $e(\theta^{\prime})=e(\theta)$), then also
$E\pi(\theta^{\prime})\subset\pi(\theta)$. Therefore $E\pi(\theta^{\prime
})=\pi(\theta).$
\end{proof}
\section{\label{proofofTWO}Theorem \ref{theoremTWO}: When is a Tiling
Non-Periodic?}
\begin{flushleft}
\textbf{Theorem \ref{theoremTWO}.} \textit{If }$F$\textit{ is strongly rigid,
then there does not exist any non-identity isometry }$E\in\mathcal{E}$\textit{
and }$\theta\in\lbrack N]^{\infty}$\textit{ such that }$E\pi(\theta)\subset
\pi(\theta)$\textit{.}
\end{flushleft}
\begin{proof}
Suppose there exists an isometry $E\in\mathcal{E}$ such that $E\pi(\theta)\subset\pi(\theta).$
Then we can choose $K\in\mathbb{N}_{0}$ so large that $E\pi(\theta|K)\cap
\pi(\theta|K)\neq\varnothing$ and $E\pi(\theta|K)\cap\pi(\theta|K)$ tiles
$EA_{-\theta|K}\cap A_{-\theta|K}.$ By Theorem \ref{intersectthm} it follows
that%
\[
E\pi(\theta|K)=\pi(\theta|K)
\]
This implies
\[
EE_{\theta}T_{e\left( \theta|K\right) }=E_{\theta}T_{e\left( \theta
|K\right) }%
\]
whence, because $E_{\theta}T_{e\left( \theta|K\right) }$ is in the domain of
$\alpha^{e\left( \theta|K\right) }$ and $\alpha^{e\left( \theta|K\right)
}T_{e\left( \theta|K\right) }=T_{0},$ we have by Lemma \ref{alphaTlem}
\begin{align*}
\alpha^{e\left( \theta|K\right) }E & E_{\theta}T_{e\left( \theta
|K\right) } =\alpha^{e\left( \theta|K\right) }E_{\theta}T_{e\left(
\theta|K\right) }\\
& \Longrightarrow s^{e\left( \theta|K\right) }EE_{\theta}s^{-e\left(
\theta|K\right) }\alpha^{e\left( \theta|K\right) }T_{e\left(
\theta|K\right) }=s^{e\left( \theta|K\right) }E_{\theta}s^{-e\left(
\theta|K\right) }\alpha^{e\left( \theta|K\right) }T_{e\left(
\theta|K\right) }\\
& \Longrightarrow s^{e\left( \theta|K\right) }EE_{\theta}s^{-e\left(
\theta|K\right) }T_{0}=s^{e\left( \theta|K\right) }E_{\theta}s^{-e\left(
\theta|K\right) }T_{0}\\
& \Longrightarrow s^{e\left( \theta|K\right) }EE_{\theta}s^{-e\left(
\theta|K\right) }=s^{e\left( \theta|K\right) }E_{\theta}s^{-e\left(
\theta|K\right) }\text{ (using rigidity)}\\
& \Longrightarrow E=id\text{.}%
\end{align*}
\end{proof}
It follows that if $\mathcal{F}$ is strongly rigid, then $\pi(\theta)$ is
non-periodic for all $\theta$.
\section{\label{invertsec}When is $\pi:[N]^{\ast}\cup\lbrack N]^{\infty
}\rightarrow\mathbb{T}$ invertible?}
\begin{lemma}
\label{noninjlem}For all $\mathcal{F}$ the restricted mapping $\pi|_{\left[
N\right] ^{\ast}}:[N]^{\ast}\rightarrow\mathbb{T}$ is injective.
\end{lemma}
\begin{proof}
To simplify notation, write $\pi=\pi|_{\left[ N\right] ^{\ast}}$. We
show how to calculate $\theta$ given $\pi\left( \theta\right) $ for
$\theta\in\left[ N\right] ^{\ast}.$ By Lemma~\ref{lemma02} we have
$\pi(\theta)=E_{\theta}T_{e(\theta)}$, where $E_{\theta}$ is the isometry $f_{-\theta
}s^{e(\theta)}$. Given $\pi(\theta)$, we can calculate
\[
e(\theta)=\frac{\ln\left\vert A\right\vert -\ln\left\vert \pi(\theta
)\right\vert }{\ln s},
\]
where $\left\vert U\right\vert $ denotes the diameter of the set $U$.
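To see why this formula recovers $e(\theta)$ (here $\left\vert \pi
(\theta)\right\vert $ is understood as the diameter of the support of
$\pi(\theta)$), note that $E_{\theta}$ is an isometry, so

```latex
\[
\left\vert \pi(\theta)\right\vert =\left\vert E_{\theta}T_{e(\theta)
}\right\vert =\left\vert A_{e(\theta)}\right\vert =s^{-e(\theta)}\left\vert
A\right\vert ,
\]
hence $\ln\left\vert A\right\vert -\ln\left\vert \pi(\theta)\right\vert
=e(\theta)\ln s$, and dividing by $\ln s$ gives $e(\theta)$.
```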
We next show that $E_{\theta}\neq E_{\theta^{\prime}}$ for all $\theta
\neq\theta^{\prime}$ with $e(\theta)=e(\theta^{\prime})$. To do this, suppose that $E_{\theta
}=E_{\theta^{\prime}}$. This implies that $f_{-\theta}=f_{-\theta^{\prime}}$
which implies
\[
\left( f_{-\theta^{\prime}}\right) ^{-1}f_{-\theta}=id\text{,}%
\]
which is not possible when $\theta\neq\theta^{\prime},$ as we prove next. The
similitude $\left( f_{-\theta^{\prime}}\right) ^{-1}f_{-\theta}$ maps
$\left( f_{-\theta}\right) ^{-1}(A)\subset A$ to $\left( f_{-\theta
^{\prime}}\right) ^{-1}(A)\subset A$, and these two subsets of $A$ are
distinct for all $\theta,\theta^{\prime}\in\left[ N\right] ^{\ast}$ with
$\theta\neq\theta^{\prime}$, as we now show.
Let $\omega,\omega^{\prime}$ denote the two strings $\theta,\theta^{\prime}$
written in inverse order, so that $\theta\neq\theta^{\prime}$ is equivalent to
$\omega\neq\omega^{\prime}$. First suppose $\left\vert \omega\right\vert
=\left\vert \omega^{\prime}\right\vert =m$ for some $m\in\mathbb{N}.$ Then
use
\[
A=%
{\displaystyle\bigsqcup\limits_{\omega\in\left[ N\right] ^{m}}}
f_{\omega}(A),
\]
which tells us that $f_{\omega}(A)$ and $f_{\omega^{\prime}}(A)$ are disjoint.
Since $\left( f_{-\theta^{\prime}}\right) ^{-1}f_{-\theta}$ maps
\linebreak$\left( f_{-\theta}\right) ^{-1}(A)=f_{\omega}(A)$ to the distinct
set $\left( f_{-\theta^{\prime}}\right) ^{-1}(A)=f_{\omega^{\prime}}(A)$, we
must have $\left( f_{-\theta^{\prime}}\right) ^{-1}f_{-\theta}\neq id$.
Now suppose $\left\vert \omega\right\vert =m<\left\vert \omega^{\prime
}\right\vert =m^{\prime}.$ If both strings $\omega$ and $\omega^{\prime}$
agree through the first $m$ places, then $f_{\omega^{\prime}}(A)$ is a strict subset of
$f_{\omega}(A)$ and again we cannot have $\left( f_{-\theta
^{\prime}}\right) ^{-1}f_{-\theta}=id$. If the strings $\omega$ and
$\omega^{\prime}$ do not agree through the first $m$ places, then let $p\leq m$ be
the index of their first disagreement. Then we find that $f_{\omega}(A)\ $is a
subset of $f_{\omega|p}(A)$, while $f_{\omega^{\prime}}(A)$ is a subset of the
set $f_{\omega^{\prime}|p}(A)$, which is disjoint from $f_{\omega|p}(A)$.
Since $\left( f_{-\theta^{\prime}}\right) ^{-1}f_{-\theta}$ maps $f_{\omega
}(A)$ to $f_{\omega^{\prime}}(A)$, we again have that $\left( f_{-\theta
^{\prime}}\right) ^{-1}f_{-\theta}\neq id$.
\end{proof}
We are going to need a key property of certain shift maps on tilings, defined
in the next lemma.
\begin{lemma}
The mappings $S_{i}:\{\pi(\theta):\theta\in\lbrack N]^{l}\cup\lbrack
N]^{\infty},l\geq a_{i}\}\rightarrow\mathbb{T}^{\prime}$ for $i\in\lbrack N]$
are well-defined by%
\[
S_{i}=f_{i}s^{-a_{i}}\alpha^{a_{i}}\text{.}%
\]
It is true that%
\[
S_{\theta_{1}}\pi(\theta)=\pi(S\theta)
\]
for all $\theta\in\lbrack N]^{l}\cup\lbrack N]^{\infty}$ where $l\geq
a_{\theta_{1}}$.
\end{lemma}
\begin{proof}
We only consider the case $\theta\in\lbrack N]^{\infty}$. The case $\theta
\in\lbrack N]^{l}$ is treated similarly. A detailed calculation, outlined
next, is needed. The key idea is that $\pi\left( \theta\right) $ is broken
up into a countable union of disjoint tilings, each of which belongs to the
domain of $\alpha^{k}$ for all $k\leq K$ for any $K\in\mathbb{N}$. For all
$K\in\mathbb{N}$ we have:%
\[
\pi\left( \theta\right) =E_{\theta|K}T_{e\left( \theta|K\right) }%
{\textstyle\bigsqcup}
{\textstyle\bigsqcup_{k=K}^{\infty}}
E_{\theta|k+1}T_{e\left( \theta|k+1\right) }\backslash E_{\theta
|k}T_{e\left( \theta|k\right) }\text{.}%
\]
The tilings on the r.h.s. are indeed disjoint, and each set belongs to the
domain of $\alpha^{e\left( \theta|K\right) }$, so we can use Lemma
\ref{srinterlem} applied countably many times to yield%
\[
S_{\theta_{1}}\pi\left( \theta\right) =S_{\theta_{1}}\left( E_{\theta
|K}T_{e\left( \theta|K\right) }\right)
{\textstyle\bigsqcup_{k=K}^{\infty}}
S_{\theta_{1}}\left( E_{\theta|k+1}T_{e\left( \theta|k+1\right) }\right)
\backslash S_{\theta_{1}}\left( E_{\theta|k}T_{e\left( \theta|k\right)
}\right) \text{.}%
\]
Evaluating, we obtain successively
\begin{align*}
S_{\theta_{1}}\pi\left( \theta\right) & =f_{\theta_{1}}s^{-a_{\theta_{1}}%
}\alpha^{a_{\theta_{1}}}\left( E_{\theta|K}T_{e\left( \theta|K\right)
}\right)
{\textstyle\bigsqcup_{k=K}^{\infty}}
f_{\theta_{1}}s^{-a_{\theta_{1}}}\alpha^{a_{\theta_{1}}}\left( E_{\theta
|k+1}T_{e\left( \theta|k+1\right) }\right) \backslash f_{\theta_{1}%
}s^{-a_{\theta_{1}}}\alpha^{a_{\theta_{1}}}\left( E_{\theta|k}T_{e\left(
\theta|k\right) }\right) ,\\
S_{\theta_{1}}\pi\left( \theta\right) & =f_{\theta_{1}}E_{\theta
|K}s^{-a_{\theta_{1}}}\alpha^{a_{\theta_{1}}}T_{e\left( \theta|K\right) }%
{\textstyle\bigsqcup_{k=K}^{\infty}}
f_{\theta_{1}}E_{\theta|k+1}s^{-a_{\theta_{1}}}\alpha^{a_{\theta_{1}}%
}T_{e\left( \theta|k+1\right) }\backslash f_{\theta_{1}}E_{\theta
|k}s^{-a_{\theta_{1}}}\alpha^{a_{\theta_{1}}}T_{e\left( \theta|k\right)
},\\
S_{\theta_{1}}\pi\left( \theta\right) & =f_{\theta_{1}}E_{\theta
|K}s^{-a_{\theta_{1}}}T_{e\left( S\theta|K-1\right) }%
{\textstyle\bigsqcup_{k=K}^{\infty}}
f_{\theta_{1}}E_{\theta|k+1}s^{-a_{\theta_{1}}}T_{e\left( S\theta|k\right)
}\backslash f_{\theta_{1}}E_{\theta|k}s^{-a_{\theta_{1}}}T_{e\left(
S\theta|k-1\right) },\\
S_{\theta_{1}}\pi\left( \theta\right) & =E_{S\theta|\left( K-1\right)
}T_{e\left( S\theta|K-1\right) }%
{\textstyle\bigsqcup_{k=K}^{\infty}}
E_{S\theta|k}T_{e\left( S\theta|k\right) }\backslash E_{S\theta
|k-1}T_{e\left( S\theta|k-1\right) }=\pi\left( S\theta\right) .
\end{align*}
\end{proof}
\begin{flushleft}
\textbf{Theorem \ref{1to1thm}.} If $\pi(i)\cap\pi(j)$ does not tile $\left(
\text{support }\pi(i)\right) \cap\left( \text{support }\pi(j)\right) $ for
all $i\neq j$, then $\pi:[N]^{\ast}\cup\lbrack N]^{\infty}\rightarrow
\mathbb{T}$ is one-to-one.
\end{flushleft}
\begin{proof}
The map $\pi$ is one-to-one on $[N]^{\ast}$ by Lemma \ref{noninjlem}, so we
restrict attention to points in $[N]^{\infty}$. If $\theta$ and $\theta
^{\prime}$ are such that $\theta_{1}=i\neq j=\theta_{1}^{\prime}$, then the
result is immediate because $\pi(\theta)$ contains $\pi(i)$ and $\pi
(\theta^{\prime})$ contains $\pi(j)$. If $\theta$ and $\theta^{\prime}$ agree
through their first $K$ terms with $K\geq1$ and $\theta_{K+1}\neq$
$\theta_{K+1}^{\prime}$, then $\pi(S^{K}\theta)\neq\pi(S^{K}\theta^{\prime})$.
Now apply $S_{\theta_{1}}^{-1}S_{\theta_{2}}^{-1}\cdots S_{\theta_{K}}^{-1}$ to
obtain $\pi(\theta)\neq\pi(\theta^{\prime})$. (We can do this last step
because $S_{i}^{-1}=\left( f_{i}s^{-a_{i}}\alpha^{a_{i}}\right) ^{-1}%
=\alpha^{-a_{i}}s^{a_{i}}f_{i}^{-1}$ has as its domain all of $\mathbb{T}%
^{\prime}$ and maps $\mathbb{T}^{\prime}$ into $\mathbb{T}^{\prime}$.)
\end{proof}
\section{\label{exsec}Examples}
\subsection{Golden b tilings}
A \textit{golden b} $G\subset{\mathbb{R}}^{2}$ is illustrated in Figure
\ref{golden-01}. This hexagon is the only rectilinear polygon that can be
tiled by a pair of differently scaled copies of itself \cite{S, Sch}.
Throughout this subsection the IFS is
\[
\mathcal{F}=\{\mathbb{R}^{2};f_{1},f_{2}\}
\]
where
\[
f_{1}(x,y)=%
\begin{pmatrix}
0 & s\\
-s & 0
\end{pmatrix}%
\begin{pmatrix}
x\\
y
\end{pmatrix}
+%
\begin{pmatrix}
0\\
s
\end{pmatrix}
,\quad f_{2}(x,y)=%
\begin{pmatrix}
-s^{2} & 0\\
0 & s^{2}%
\end{pmatrix}%
\begin{pmatrix}
x\\
y
\end{pmatrix}
+%
\begin{pmatrix}
1\\
0
\end{pmatrix}
,
\]
where the scaling ratios $s$ and $s^{2}$ obey $s^{4}+s^{2}=1$, which tells us
that $s^{-2}=\alpha^{-2}$ is the golden mean. The attractor of $\mathcal{F}$
is $A=G$. It is the union of two prototiles $f_{1}(G)$ and $f_{2}(G)$. Copies
of these prototiles are labelled $L$ and $S$. In this example, note that
$e(\theta)=\theta_{1}+\theta_{2}+\cdots+\theta_{\left\vert \theta\right\vert
}$ for $\theta\in\lbrack2]^{\ast}$.
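For completeness, here is the elementary computation behind the statement
about the scaling ratios (the notation $\varphi$ for the golden mean is ad hoc
and used only here):

```latex
Setting $u=s^{2}$, the relation $s^{4}+s^{2}=1$ becomes $u^{2}+u-1=0$, whose
positive root is
\[
u=\frac{\sqrt{5}-1}{2}=\frac{1}{\varphi},\qquad\varphi=\frac{1+\sqrt{5}}{2}
\approx1.618.
\]
Hence $s^{2}=1/\varphi$, so that $s^{-2}=\varphi$ is the golden mean and
$s\approx0.786$.
```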
\begin{figure}[h]
\centering
\includegraphics[width=5cm, keepaspectratio]{golden01.png}\caption{A golden b
is a union of two tiles, a small one and its partner, a large one. The
vertices of this golden b are located at $(0,0)$ $(1,0)$ $(1,\alpha^{3})$
$(\alpha^{2},\alpha^{3})$ $(\alpha^{2},\alpha)$ $(0,\alpha)$ in
counterclockwise order, starting at the lower left corner, where $\alpha^{-2}$
is the golden mean. This picture also represents a tiling $T_{0}%
=\pi(\varnothing)$. }%
\label{golden-01}%
\end{figure}
The figures in this section illustrate some earlier concepts in the context of
the golden b. Using some of these figures, it is easy to check that
$\mathcal{F}$ is strongly rigid, so the tilings $\pi(\theta)$ have all of the
properties ascribed to them by the theorems in the earlier sections.
\begin{figure}[ptb]
\centering
\includegraphics[width=8cm, keepaspectratio]{GS-C.png} \caption{Structures of
$A_{\theta_{1}\theta_{2}\cdots\theta_{k}1}$ and $A_{\theta_{1}\theta_{2}%
\cdots\theta_{k}2}$ relative to $A_{\theta_{1}\theta_{2}\cdots\theta_{k}}$.}%
\label{construction}%
\end{figure}
\begin{figure}[ptb]
\centering
\vskip 7mm \includegraphics[width=8cm, keepaspectratio]{GS-fixedagain.png}
\caption{Some of the sets $A_{\theta_{1}\theta_{2}\theta_{3}..\theta_{k}}$ and
the corresponding tilings $\pi(\theta_{1}\theta_{2}\theta_{3}..\theta_{k})$.
The recursive organization is such that $\pi(\varnothing)\subset\pi(\theta
_{1})\subset\pi(\theta_{1}\theta_{2})\subset\cdots$ regardless of the choice
$\theta_{1}\theta_{2}\theta_{3}..\in\{1,2\}^{\infty}$. }%
\label{img_1015x}%
\end{figure}
\begin{figure}[ptb]
\centering
\includegraphics[width=8cm, keepaspectratio
]{L-maps.png}\caption{Relative addresses, the addresses of the tiles that
comprise the tilings $T_{0},T_{1},T_{2},T_{3}$ of $A_{0},A_{1},A_{2}%
,A_{3}\text{.}$}%
\label{l-maps}%
\end{figure}
\begin{figure}[ptb]
\centering
\vskip 9mm \includegraphics[width=8cm, keepaspectratio]{GS-absolute.png}
\caption{Absolute addresses associated with the golden b.}%
\label{absolute}%
\end{figure}
\begin{figure}[ptb]
\centering
\vskip 9mm \includegraphics[width=8cm, keepaspectratio]{IMG_1013.png}
\caption{The boundaries of the tilings $\pi(\emptyset)$, $\pi(1)$, $\pi(2)$,
with the parts of the boundaries of the tiles in $\pi(1)$ that are not parts
of the boundaries of tiles in $\pi(2)$ superimposed in red on the rightmost
image.}%
\label{img1013}%
\end{figure}
The relationships between $A_{\theta_{1}\theta_{2}\cdots\theta_{k}1}$ and
$A_{\theta_{1}\theta_{2}\cdots\theta_{k}2}$ relative to $A_{\theta_{1}%
\theta_{2}\cdots\theta_{k}}$ are illustrated in Figure \ref{construction}.
Figure \ref{img_1015x} illustrates some of the sets $A_{\theta_{1}\theta
_{2}\theta_{3}..\theta_{k}}$ and the corresponding tilings $\pi(\theta
_{1}\theta_{2}\theta_{3}..\theta_{k})$.
In Section \ref{realabsec}, procedures were described by which the relative
addresses of tiles in $T(\theta|k)$ and the absolute addresses of tiles in
$\pi(\theta|k)$ may be calculated recursively. Relative addresses for some
golden b tilings are illustrated in Figure \ref{l-maps}. Figure \ref{absolute}
illustrates absolute addresses for some golden b tilings.
The map $\pi:[2]^{\ast}\cup\lbrack2]^{\infty}\rightarrow\mathbb{T}$ is 1-1 by
Theorem \ref{1to1thm}, because $\pi(1)\cap\pi(2)$ does not tile the
intersection of the supports of $\pi(1)$ and $\pi(2),$ as illustrated in
Figure \ref{img1013}.
We note that $\pi(\overline{12})$ and $\pi(\overline{21})$ are aperiodic
tilings of the upper right quadrant of $\mathbb{R}^{2}$.
\subsection{Fractal tilings with non-integer dimension}
The left-hand image in Figure \ref{gold8map} shows the attractor of an IFS
with 8 maps, represented by the differently coloured regions; it provides an
example of a strongly rigid IFS. The right-hand image represents the attractor
of the same IFS with one of the maps removed, also strongly rigid, but in this
case the dimension of the tiles is less than two and greater than one. Figure
\ref{sidebyside} (in Section~\ref{sec:intro}) illustrates a part of a fractal
blow-up of a different but related 7-map IFS, also strongly rigid, and the
corresponding tiling.%
\begin{figure}[ptb]%
\centering
\includegraphics[
height=2.5728in,
width=5.2477in
]%
{gold8map2.png}%
\caption{See text.}%
\label{gold8map}%
\end{figure}
Figure \ref{beetile} (left) shows a tiling associated with the IFS $\mathcal{F}$
represented on the left in Figure \ref{gold8map}, while the tiling on the
right is another example of a fractal tiling, obtained by dropping one of the
maps of $\mathcal{F}$.%
\begin{figure}[ptb]%
\centering
\includegraphics[
height=2.5728in,
width=5.2468in
]%
{beetiling.png}%
\caption{See text.}%
\label{beetile}%
\end{figure}
\subsection{Tilings derived from Cantor sets}
Our results apply to the case where $\mathcal{F}=\{\mathbb{R}^{M}%
;f_{i}(x)=s^{a_{i}}O_{i}x+q_{i},i\in\lbrack N]\}$ where $\{O_{i},q_{i}%
:i\in\lbrack N]\}$ is fixed in general position, the $a_{i}$s are positive
integers, and $s$ is chosen small enough to ensure that the attractor is a
Cantor set. In this situation the set of overlap is empty and it is to be
expected that $\mathcal{F}$ is strongly rigid, in which case all tilings (by a
finite set of prototiles, each a Cantor set) will be non-periodic. We can then
take $s$ to be the supremum of the values for which the set of overlap is
empty, to yield interesting ``just touching" tilings.
For a smooth projective variety $X$ defined over a number field, one can ask whether the set of rational points is dense.
It is expected that the set of rational points reflects the positivity of the canonical bundle $\omega_{X}$ (cf. the Bombieri-Lang conjecture) and it is tempting to study the intermediate case $\omega_{X}=\mathcal{O}_{X}$.
In this case, $X$ belongs to the class of special varieties introduced by Campana \cite{C} which is conjecturally the same as that of varieties on which rational points are potentially dense, that is, Zariski dense
after passing to some finite field extension.
An interesting subcase is given by abelian varieties, for which potential density is well-known.
It is challenging to consider the subcase given by Calabi-Yau varieties in a {\it strict sense},
i.e., smooth projective varieties $X$ with $\omega_{X}=\mathcal{O}_{X}$ and $H^{0}(X,\Omega_{X}^{i})=0$ for all $0<i<\dim X$, which are simply connected over $\mathbb{C}$.
For elliptic K3 surfaces and K3 surfaces with infinite automorphism groups, potential density holds due to Bogomolov-Tschinkel \cite{BT3}.
Moreover, there are several works on K3 surfaces over the rational numbers with Zariski dense sets of rational points; for instance, quartic K3 surfaces have long been studied \cite{E, LMvL, SD1, SD2}.
Very little is known in higher dimensions.
It is stated by Tschinkel \cite[after Problem 3.5]{T} that it would be worthwhile to find non-trivial examples of Calabi-Yau threefolds over number fields with
Zariski
dense sets of rational points.
It is only recent that the first such examples were actually obtained:
Bogomolov-Halle-Pazuki-Tanimoto \cite{BHPT} studied Calabi-Yau threefolds with abelian surface fibrations
and showed potential density for threefolds (including
simply connected ones) constructed by Gross-Popescu \cite{GP}.
However, it is not immediately clear whether their method can be used to determine the minimal field extensions over which rational points become Zariski dense.
In this short note, we construct higher-dimensional Calabi-Yau varieties in a strict sense defined over a given number field with Zariski dense sets of rational points.
We give two elementary constructions in arbitrary dimensions (Section \ref{C1}, \ref{C2}) as well as another construction in dimension three which involves certain Calabi-Yau threefolds containing an Enriques surface (Section \ref{C3}).
The constructions also show that potential density holds for
(sufficiently)
general members of the families.
The third construction is a by-product of the author's attempt to analyze in detail the recent construction due to Ottem-Suzuki \cite{OS} of a pencil of Enriques surfaces with non-algebraic integral Hodge classes.
For the third construction,
our example
contains no abelian surface and is a unique minimal model in its birational equivalence class,
thus a theorem of Bogomolov-Halle-Pazuki-Tanimoto \cite[Theorem 1.2]{BHPT} cannot be applied.
For all the constructions, elliptic fibration structures are crucial.
We work over a number field unless otherwise specified.
\begin{ack*}
The author wishes to thank Lawrence Ein,
Yohsuke Matsuzawa,
John Christian Ottem,
Ramin Takloo-Bighash, and Burt Totaro for interesting discussions.
\end{ack*}
\section{Construction I}\label{C1}
The first idea is to construct elliptic Calabi-Yau varieties in a strict sense (i.e., $H^{0}(X,\Omega^{i}_{X})=0$ for all $0<i<\dim X$ and simply connected over $\mathbb{C}$) whose base spaces are rational and which admit infinitely many sections.
For that purpose, we introduce variants of Schoen's Calabi-Yau fiber products of rational elliptic surfaces \cite{Sch}.
We let $S\subset \mathbb{P}^{1}\times \mathbb{P}^{2}$ be a smooth hypersurface of bi-degree $(1,3)$.
Then the first projection defines an elliptic fibration $f\colon S\rightarrow \mathbb{P}^{1}$.
Moreover, via the second projection, $S$ is the blow-up of $\mathbb{P}^{2}$ along the nine points given by the intersection of two cubic curves, hence rational.
For $n\geq 3$, we let $Y\subset \mathbb{P}^{1}\times \mathbb{P}^{n-1}$ be a smooth hypersurface of bi-degree $(1,n)$.
The first projection restricts to a fibration into Calabi-Yau hypersurfaces $g\colon Y\rightarrow \mathbb{P}^{1}$ and $Y$ is again rational via the second projection.
We assume that over any point in $\mathbb{P}^{1}$ either $f$ or $g$ is smooth, which is satisfied if $Y$ is general with respect to $S$.
We define $X=S\times_{\mathbb{P}^{1}}Y$ and let $\pi\colon X\rightarrow \mathbb{P}^{1}$ be the natural projection.
\begin{lem}\label{lCYI}
The fiber product $X$ is a Calabi-Yau $n$-fold in a strict sense.
\end{lem}
\begin{rem}
If $S_{\mathbb{C}}$ is very general, it is classical that the elliptic fibration $f\colon S_{\mathbb{C}}\rightarrow \mathbb{P}^{1}_{\mathbb{C}}$ admits infinitely many sections,
which implies that the same holds for the natural projection $X_{\mathbb{C}}\rightarrow Y_{\mathbb{C}}$.
This provides examples of Calabi-Yau varieties in a strict sense defined over $\mathbb{C}$ containing infinitely many rational divisors in all dimensions $\geq 3$.
We will construct below such an example over a given number field.
\end{rem}
\begin{proof}[Proof of Lemma \ref{lCYI}]
It is immediate to see that $X$ is smooth.
The fiber product $X$ is a complete intersection in $\mathbb{P}^{1}\times \mathbb{P}^{2}\times \mathbb{P}^{n-1}$ of hypersurfaces of tri-degree $(1,3,0)$ and that of tri-degree $(1,0,n)$.
We have $\omega_{X}=\mathcal{O}_{X}$ by the adjunction formula and an easy computation shows that $H^{i}(X,\mathcal{O}_{X})=0$ for all $0<i<n$.
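The adjunction computation here is routine; spelled out, it reads as follows.

```latex
Write $P=\mathbb{P}^{1}\times\mathbb{P}^{2}\times\mathbb{P}^{n-1}$, so that
$\omega_{P}=\mathcal{O}_{P}(-2,-3,-n)$. Since $X\subset P$ is a complete
intersection of hypersurfaces of tri-degrees $(1,3,0)$ and $(1,0,n)$,
adjunction gives
\[
\omega_{X}=\left( \omega_{P}\otimes\mathcal{O}_{P}(1,3,0)\otimes
\mathcal{O}_{P}(1,0,n)\right) |_{X}=\mathcal{O}_{P}(0,0,0)|_{X}
=\mathcal{O}_{X}.
\]
```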
For simple connectedness over $\mathbb{C}$,
Schoen proved this result for $n=3$ (see \cite[Lemma 1.1]{Sch0} for the strategy).
In fact, his method works for $n\geq 3$ and the argument goes as follows.
Let $U\subset \mathbb{P}^{1}_{\mathbb{C}}$ be the open subset over which $\pi\colon X_{\mathbb{C}}\rightarrow \mathbb{P}^{1}_{\mathbb{C}}$ is smooth and let $V=\pi^{-1}(U)$.
The natural map $\pi|_{V}\colon V\rightarrow U$ is topologically locally trivial and we let $F$ be a fiber.
We note that $\pi\colon X_{\mathbb{C}}\rightarrow \mathbb{P}^{1}_{\mathbb{C}}$ admits a section since $f$ and $g$ do; hence so does $\pi|_{V}\colon V\rightarrow U$.
Then we have a commutative diagram
\[
\xymatrix{
\pi_{1}(F)\ar[r] &\pi_{1}(V)\ar@{->>}[r]\ar@{->>}[d] &\pi_{1}(U)\ar@{->>}[d]\ar@/_1pc/[l]\\
& \pi_{1}(X_{\mathbb{C}})\ar@{->>}[r]& \pi_{1}(\mathbb{P}^{1}_{\mathbb{C}}) \ar@/_1pc/[l],
}
\]
where the upper row is exact by the homotopy long exact sequence.
Chasing the diagram and using the fact that $\pi_{1}(\mathbb{P}^{1}_{\mathbb{C}})$ is trivial, we are reduced to showing that $\pi_{1}(F)$ has the trivial image in $\pi_{1}(X_{\mathbb{C}})$.
Writing $F=F_{1}\times F_{2}$, where $F_{1}$ is a fiber of $f$ and $F_{2}$ is a fiber of $g$,
we see that the van Kampen theorem implies
\[\pi_{1}(F)=\pi_{1}(F_{1})\times \pi_{1}(F_{2}).
\]
Now it is enough to verify that the images of $\pi_{1}(F_{1})$ and $\pi_{1}(F_{2})$ in $\pi_{1}(X_{\mathbb{C}})$ are trivial.
This is immediate since $\pi_{1}(F_{1})\rightarrow \pi_{1}(X_{\mathbb{C}})$ (resp. $\pi_{1}(F_{2})\rightarrow \pi_{1}(X_{\mathbb{C}})$) factors through the fundamental group of a section of $X_{\mathbb{C}}\rightarrow S_{\mathbb{C}}$ (resp. $X_{\mathbb{C}}\rightarrow Y_{\mathbb{C}}$) and
since $S_{\mathbb{C}}$ (resp. $Y_{\mathbb{C}}$) is simply connected (it is rational).
The proof is complete.
\end{proof}
We give a construction of $X$ defined over $\mathbb{Q}$ such that $X(\mathbb{Q})$ is Zariski dense.
We start by constructing $S$ defined over $\mathbb{Q}$ such that the elliptic fibration $f\colon S\rightarrow \mathbb{P}^{1}$ admits infinitely many sections over $\mathbb{Q}$, or equivalently,
the generic fiber $E/\mathbb{Q}(t)$ admits a $\mathbb{Q}(t)$-rational point and the Mordell-Weil group $E(\mathbb{Q}(t))$ has a positive rank.
The construction is as follows.
Let $C\subset \mathbb{P}^{2}$ be an elliptic curve defined over $\mathbb{Q}$ with a Zariski dense set of $\mathbb{Q}$-rational points.
Let $O\in C(\mathbb{Q})$ (resp. $P\in C(\mathbb{Q})$) be the origin (resp. a non-torsion point).
Let $D\subset \mathbb{P}^{2}$ be another elliptic curve defined over $\mathbb{Q}$ which goes through both $O$ and $P$ and which intersects transversally with $C$.
Let $S\subset \mathbb{P}^{1}\times \mathbb{P}^{2}$ be the hypersurface of bi-degree $(1,3)$ defined over $\mathbb{Q}$ corresponding to the pencil of elliptic curves generated by $C$ and $D$.
A zero-section of $f\colon S\rightarrow \mathbb{P}^{1}$ is given by the $(-1)$-curve over $O$
and the Mordell-Weil group $E(\mathbb{Q}(t))$ has a positive rank since the image of the specialization homomorphism $E(\mathbb{Q}(t))\rightarrow C(\mathbb{Q})$ has a positive rank.
We conclude that $S$ has the desired property.
Let $Y$ be smooth, defined over $\mathbb{Q}$, and general so that over any point in $\mathbb{P}^{1}$ either $f$ or $g$ is smooth.
Then $X=S\times_{\mathbb{P}^{1}}Y$ is a Calabi-Yau $n$-fold in a strict sense defined over $\mathbb{Q}$.
Moreover the elliptic fibration $X=S\times_{\mathbb{P}^{1}}Y\rightarrow Y$ admits infinitely many sections over $\mathbb{Q}$ by construction.
We note that the set $Y(\mathbb{Q})$ is Zariski dense since $Y$ is rational over $\mathbb{Q}$.
The following theorem is now immediate:
\begin{thm}\label{t1}
The set $X(\mathbb{Q})$ is Zariski dense.
\end{thm}
\section{Construction II}\label{C2}
We let $X\subset (\mathbb{P}^{1})^{n+1}$ be a smooth hypersurface of multi-degree $(2^{n+1})$.
The following is an immediate consequence of the Lefschetz hyperplane section theorem:
\begin{lem}
If $n\geq 2$, then $X$ is a Calabi-Yau $n$-fold in a strict sense.
\end{lem}
We give a construction of $X$ defined over $\mathbb{Q}$ such that $X(\mathbb{Q})$ is Zariski dense.
We recall that a multi-section of an elliptic fibration is {\it saliently ramified} if it is ramified at a point lying on a smooth elliptic fiber.
The following theorem is due to Bogomolov-Tschinkel:
\begin{thm}[\cite{BT1,BT2}]\label{BT}
Let $\phi\colon \mathcal{E} \rightarrow B$ be an elliptic fibration over a number field $K$.
If there exists a saliently ramified multi-section $\mathcal{M}$ such that $\mathcal{M}(K)$ is Zariski dense,
then $\mathcal{E}(K)$ is Zariski dense.
\end{thm}
\begin{proof}
We sketch the proof for the convenience of the reader.
It is obvious that the subset $\phi(\mathcal{M}(K))\subset B$ is Zariski dense.
Let $\phi_{\mathcal{J}}\colon \mathcal{J}\rightarrow B$ be the corresponding Jacobian fibration.
We consider a rational map
\[
\tau\colon \mathcal{E}\dashrightarrow \mathcal{J}, \, p\mapsto dp - \mathcal{M}_{\phi(p)},
\]
where $d=\deg(\mathcal{M}/B)$.
Then $\tau(\mathcal{M})$ cannot be contained in the $m$-torsion part $\mathcal{J}[m]$ for any positive integer $m$.
Now Merel's theorem (or Mazur's theorem when $K=\mathbb{Q}$) implies
that there exists a non-empty Zariski open subset $U\subset B$ such that, for any $b\in \phi(\mathcal{M}(K))\cap U$, a rational point $p_{b}\in \mathcal{M}(K)$ lying over $b$ is non-torsion on the fiber $\mathcal{J}_{b}$.
Finally, the fiberwise action of the Jacobian fibration on $\mathcal{E}$ translates rational points on $\mathcal{M}$, which concludes the proof.
\end{proof}
We start by constructing a smooth hypersurface $X_{1}\subset \mathbb{P}^{1}\times \mathbb{P}^{1}$ of bi-degree $(2,2)$ defined over $\mathbb{Q}$ such that $X_{1}(\mathbb{Q})$ is Zariski dense.
We set
\[
\mathbb{P}^{1}\times \mathbb{P}^{1}=\Proj \mathbb{Q}[S_{1},T_{1}]\times \Proj \mathbb{Q}[S_{2},T_{2}].
\]
For instance, we can take $X_{1}$ to be the hypersurface defined by the equation
\[
S_{1}^2S_{2}T_{2}+S_{1}T_{1}(2S_{2}^2+2S_{2}T_{2}+3T_{2}^{2})+T_{1}^2T_{2}(S_{2}+T_{2})=0.
\]
Then $X_{1}$ is an elliptic curve with a non-torsion point defined over $\mathbb{Q}$, hence $X_{1}(\mathbb{Q})$ is Zariski dense.
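As a sanity check independent of the argument (a Python sketch, not part of the proof), one can verify that the displayed bi-degree $(2,2)$ equation does carry $\mathbb{Q}$-rational points; the two points below were found by inspection, while the non-torsion claim itself requires a Weierstrass reduction that we do not reproduce here:

```python
from itertools import product

# The bi-degree (2,2) form defining X_1 in the coordinates of the text.
def F(S1, T1, S2, T2):
    return (S1**2 * S2 * T2
            + S1 * T1 * (2*S2**2 + 2*S2*T2 + 3*T2**2)
            + T1**2 * T2 * (S2 + T2))

# Two rational points found by inspection, in homogeneous coordinates
# ([S1 : T1], [S2 : T2]).
pts = [((-1, 3), (0, 1)), ((0, 1), (-1, 1))]
on_curve = all(F(S1, T1, S2, T2) == 0 for (S1, T1), (S2, T2) in pts)

# A small integer search confirms further Q-points on the curve.
found = [(S1, T1, S2, T2)
         for S1, T1, S2, T2 in product(range(-4, 5), repeat=4)
         if (S1, T1) != (0, 0) and (S2, T2) != (0, 0)
         and F(S1, T1, S2, T2) == 0]
```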
For $n>1$, we set
\[
(\mathbb{P}^{1})^{n+1}=\Proj \mathbb{Q}[S_{1},T_{1}]\times \cdots \times \Proj \mathbb{Q}[S_{n+1},T_{n+1}]
\]
and inductively define $X_{n} \subset (\mathbb{P}^{1})^{n+1}$ to be a general hypersurface of multi-degree $(2^{n+1})$ defined over $\mathbb{Q}$ containing $X_{n-1}$ as the fiber of the projection $pr_{n+1}\colon X_{n}\rightarrow \mathbb{P}^{1}$ over $T_{n+1}=0$.
\begin{lem}\label{l0}
The hypersurface $X_{n}$ is smooth and $X_{n-1}$ is a saliently ramified multi-section of the elliptic fibration $pr_{1,\cdots,n-1}\colon X_{n}\rightarrow (\mathbb{P}^{1})^{n-1}$.
\end{lem}
\begin{proof}
For smoothness, it is enough to show that $X_{n}$ is smooth around $X_{n-1}$.
This is obvious because $X_{n-1}$ is a fiber of the flat proper morphism $pr_{n+1}\colon X_{n}\rightarrow \mathbb{P}^{1}$ and $X_{n-1}$ is smooth by induction.
We are reduced to showing the second assertion.
Let $B\subset (\mathbb{P}^{1})^{n-1}$ be the branch locus, that is, the set of critical values of $pr_{1,\cdots,n-1}\colon X_{n-1}\rightarrow (\mathbb{P}^{1})^{n-1}$.
We note that this morphism is generically finite, but not finite when $n>3$.
We only need to prove that the fiber of $pr_{1,\cdots, n-1}\colon X_{n}\rightarrow (\mathbb{P}^{1})^{n-1}$ over a general point in $B$ is smooth.
Let $\Sigma\subset X_{n}$ be
the set of critical points
of $pr_{1,\cdots, n-1}\colon X_{n}\rightarrow (\mathbb{P}^{1})^{n-1}$.
By generality, it is sufficient to show that $\dim \Sigma\cap X_{n-1}=n-3$.
This can be checked by a direct computation as follows.
The equation of $X_{n}\subset (\mathbb{P}^{1})^{n+1}$ can be written as
\[
S_{n+1}^{2}F+S_{n+1}T_{n+1}G+T_{n+1}^{2}H=0,
\]
where $F=0$ defines $X_{n-1}\subset (\mathbb{P}^{1})^{n}$.
If we write $F$ as
\[F=S_{n}^{2}F_{1}+S_{n}T_{n}F_{2}+T_{n}^{2}F_{3},
\]
then the set $\Sigma\cap X_{n-1}$ is defined by
\[
T_{n+1}=2S_{n}F_{1}+T_{n}F_{2}=S_{n}F_{2}+2T_{n}F_{3}=G=0.
\]
The first three equations define the ramification locus, that is, the set of critical points of $pr_{1,\cdots, n-1}\colon X_{n-1}\rightarrow (\mathbb{P}^{1})^{n-1}$, which is of dimension $n-2$,
thus the four equations together define a closed subset of dimension $n-3$ by generality, as we wanted.
The proof is complete.
\end{proof}
Now Theorem \ref{BT} implies:
\begin{thm}\label{t2}
The set $X_{n}(\mathbb{Q})$ is Zariski dense for any $n\geq 1$.
\end{thm}
\begin{rem}
For a smooth
hypersurface $X$ in $(\mathbb{P}^{1})^{n+1}$ of multi-degree $(2^{n+1})$, the birational automorphism group $\Bir(X_{\mathbb{C}})$ is infinite by Cantat-Oguiso \cite{CO}.
It
would be possible
to give a proof of the
density of rational points by using the action of $\Bir(X_{\mathbb{C}})$.
\end{rem}
\section{Calabi-Yau threefolds containing an Enriques surface}\label{CY3}
In this section, we work over the complex numbers.
We construct a Calabi-Yau threefold containing an Enriques surface and prove basic properties of the threefold, which will be used in Section \ref{C3}.
Let $\mathbb{P}=\mathbb{P}_{\mathbb{P}^{2}}(\mathcal{O}^{\oplus 3}\oplus \mathcal{O}(1))$.
On the projective bundle $\mathbb{P}$, we consider a map of vector bundles
\[
u\colon \mathcal{O}^{\oplus 3}\rightarrow \mathcal{O}(2H_{1})\oplus \mathcal{O}(2H),
\]
where $H_{1}$ (resp. $H$) is the pull-back of the hyperplane section class on $\mathbb{P}^{2}$ (resp. the tautological class on $\mathbb{P}$).
Let $X$ be the rank one degeneracy locus of $u$.
\begin{lem}\label{l1}
If $u$ is general, $X$ is a Calabi-Yau threefold.
We have the topological Euler characteristic $\chi_{\Top}(X)=c_{3}(T_{X})=-84$ and the Hodge numbers $h^{1,1}(X)=2$ and $h^{1,2}(X)=44$.
\end{lem}
\begin{proof}
Since the vector bundle $\mathcal{O}(2H_{1})\oplus \mathcal{O}(2H)$ is globally generated, $X$ is a smooth threefold by the Bertini theorem for degeneracy loci.
Another projective model of $X$ is defined as the zero set of a naturally defined section of $\mathcal{O}(1)^{\oplus 3}$ on the projective bundle $\widetilde{\mathbb{P}}=\mathbb{P}_{\mathbb{P}}(\mathcal{O}(2H_{1})\oplus \mathcal{O}(2H))$.
Let $\widetilde{H}$ be the tautological class on $\widetilde{\mathbb{P}}$.
The adjunction formula gives $\omega_{X}=\mathcal{O}_{X}(\widetilde{H}-2H)$.
On the other hand, $\widetilde{H}-2H$ is the class of the intersection $X\cap \mathbb{P}_{\mathbb{P}}(\mathcal{O}(2H_{1}))$, which is empty, thus we have $\mathcal{O}_{X}(\widetilde{H}-2H)=\mathcal{O}_{X}$.
It follows that $\omega_{X}=\mathcal{O}_{X}$.
The rest of the statement is a consequence of
a direct computation using the conormal exact sequence and the Koszul resolution of the ideal sheaf of $X$ in $\widetilde{\mathbb{P}}$.
The proof is complete.
\end{proof}
We assume that $u$ is general in what follows.
\begin{lem}\label{l2}
The threefold $X$ admits an elliptic fibration $\phi \colon X\rightarrow \mathbb{P}^{2}$.
Moreover, $X$ contains an Enriques surface $S$ and the linear system $|2S|$ defines a K3 fibration $\psi\colon X\rightarrow \mathbb{P}^{1}$.
\end{lem}
\begin{rem}
There are several examples of Calabi-Yau threefolds containing an Enriques surface.
For instance, see Borisov-Nuer \cite{BN}.
\end{rem}
\begin{proof}[Proof of Lemma \ref{l2}]
A natural projection $\mathbb{P}\rightarrow \mathbb{P}^{2}$ restricts to a surjection $\phi\colon X\rightarrow \mathbb{P}^{2}$ with the geometric generic fiber a complete intersection of two quadrics in $\mathbb{P}^{3}$, which is an elliptic curve.
The morphism $\phi$ has equidimensional fibers, hence it is flat.
Moreover, the map $u$ restricts to a map of vector bundles
\[
v\colon \mathcal{O}^{\oplus 3}\rightarrow \mathcal{O}(2,0)\oplus \mathcal{O}(0,2)
\]
on $\mathbb{P}_{\mathbb{P}^{2}}(\mathcal{O}^{\oplus 3})=\mathbb{P}^{2}\times \mathbb{P}^{2}$.
The intersection $X\cap \mathbb{P}_{\mathbb{P}^{2}}(\mathcal{O}^{\oplus 3})$ is the rank one degeneracy locus of $v$, which is an Enriques surface $S$
by generality
(see \cite[Lemma 2.1]{OS}).
Then the linear system $|2S|$ defines a K3 fibration $\psi\colon X\rightarrow \mathbb{P}^{1}$ by \cite[Proposition 8.1]{BN}.
The proof is complete.
\end{proof}
\begin{cor}
The threefold $X$ contains no abelian surface.
\end{cor}
\begin{proof}
If $X$ contains an abelian surface $A$, then it is easy to see that the linear system $|A|$ defines an abelian surface fibration $\eta \colon X\rightarrow \mathbb{P}^{1}$.
Then pulling back $\NS(\mathbb{P}^{1}\times \mathbb{P}^{1})_{\mathbb{R}}$ by $(\psi, \eta)\colon X\rightarrow \mathbb{P}^{1}\times \mathbb{P}^{1}$ defines a two-dimensional linear subspace of $\NS(X)_{\mathbb{R}}$ which does not contain an ample divisor.
Therefore we would have $\rho(X)\geq 3$.
This is impossible since we have $\rho(X)=h^{1,1}(X)=2$ by Lemma \ref{l1}.
\end{proof}
\begin{lem}\label{l3}
$\Nef(X)=\overline{\Eff(X)}=\mathbb{R}_{\geq 0}[H_{1}]\oplus \mathbb{R}_{\geq 0}[S]$.
\end{lem}
\begin{proof}
Since the linear systems $|H_{1}|$ and $|2S|$ define the fibrations $\phi\colon X\rightarrow \mathbb{P}^{2}$ and $\psi\colon X\rightarrow \mathbb{P}^{1}$ respectively,
the divisors $H_{1}$ and $S$ are semi-ample but not big, so their classes span extremal rays of both $\Nef(X)$ and $\overline{\Eff(X)}$.
Since $\rho(X)=2$, the two cones coincide, which finishes the proof.
\end{proof}
\begin{cor}
The threefold $X$ is the unique minimal model in its birational equivalence class.
Moreover, $\Bir(X)=\Aut(X)$ and these groups are finite.
\end{cor}
\begin{proof}
Let $f\colon X\dashrightarrow X'$ be a birational map with $X'$ a minimal model.
Then $f$ can be decomposed into a sequence of flops by a result of Kawamata \cite[Theorem 1]{Ka2}.
Note that any flopping contraction of a Calabi-Yau variety is given by a codimension one face of the nef cone (see \cite[Theorem 5.7]{Ka1}).
Since the codimension one faces $\mathbb{R}_{\geq 0}[H_{1}]$ and $\mathbb{R}_{\geq 0}[S]$ of $\Nef(X)$ give fibrations, it follows that $f$ is in fact an isomorphism.
For the last statement, we note that Oguiso proved that the automorphism group of any odd-dimensional Calabi-Yau variety in a wider sense with $\rho=2$ is finite.
The proof is complete.
\end{proof}
\begin{prop}\label{p}
The threefold $X$ is simply connected.
\end{prop}
\begin{proof}
Applying \cite[Lemma 5.2.2]{K} to the K3 fibration $\psi\colon X\rightarrow \mathbb{P}^{1}$, whose smooth fibers are simply connected, we are reduced to showing that $2S$ is the only multiple fiber of $\psi$.
The K3 fibration $\psi\colon X\rightarrow \mathbb{P}^{1}$ and the morphism $X\rightarrow \mathbb{P}^{5}$ given by the linear system $|H|$ induce a morphism $X\rightarrow \mathbb{P}^{1}\times \mathbb{P}^{5}$,
which is the blow-up of a non-normal complete intersection $X_{0}$ in $\mathbb{P}^{1}\times \mathbb{P}^{5}$ of three hypersurfaces of bi-degree $(1,2)$ along the non-normal locus with the exceptional divisor $S$.
Now we only need to prove that the first projection $pr_{1}\colon X_{0}\rightarrow \mathbb{P}^{1}$ admits no multiple fibers.
This follows from the Lefschetz hyperplane section theorem.
The proof is complete.
\end{proof}
\section{Construction III}\label{C3}
We recall a system of affine equations for an Enriques surface introduced by Colliot-Th\'el\`ene--Skorobogatov--Swinnerton-Dyer \cite{CTSSD}.
\begin{prop}[\cite{CTSSD}, Proposition 4.1, Example 4.1.2; \cite{L}, Proposition 1.1]\label{CTSSD}
Let $k$ be a field of characteristic zero.
Let
$c, d, f\in k[t]$
be polynomials of degree $2$ such that $c\cdot d\cdot (c-d)\cdot f$ is separable.
Let $S^{0}$ be the affine variety in
$\mathbb{A}^{4}=\Spec k[t, u_{1}, u_{2}, u_{3}]$
defined by
\[
u_{1}^{2}-c(t) = f(t)u_{2}^{2}, \,u_{1}^{2}-d(t) = f(t)u_{3}^{2}.
\]
Then a minimal smooth projective model $S$ of $S^{0}$ is an Enriques surface.
The projection
$S^{0} \rightarrow \mathbb{A}^{1}=\Spec k[t]$
induces an elliptic fibration $S\rightarrow \mathbb{P}^{1}$ with reduced discriminant $c\cdot d \cdot (c-d)\cdot f$
which admits double fibers over $f=0$ whose reductions are smooth elliptic curves.
\end{prop}
Now we apply Proposition \ref{CTSSD} to
$k=\mathbb{Q}$ and
\[c=-3t^{2}-2t+8,\, d=\frac{t^{2}-15t+16}{2},\, f=\frac{t^{2}+1}{2}.\]
Let $S$ be the corresponding Enriques surface.
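Before proceeding, one can check numerically (a Python sketch using SymPy, not part of the proof) that this choice satisfies the separability hypothesis of Proposition \ref{CTSSD}, and that $c-d$ agrees with the factor $(-7t^{2}+11t)/2$ of the reduced discriminant used later:

```python
from sympy import symbols, expand, gcd, diff

t = symbols('t')
c = -3*t**2 - 2*t + 8
d = (t**2 - 15*t + 16) / 2
f = (t**2 + 1) / 2

# c - d equals the factor (-7 t^2 + 11 t)/2 of the reduced discriminant.
difference_ok = expand(c - d - (-7*t**2 + 11*t)/2) == 0

# Separability of c*d*(c-d)*f: the product is coprime to its derivative,
# i.e. it has no repeated roots.
P = expand(c * d * (c - d) * f)
separable = gcd(P, diff(P, t)) == 1
```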
We prove:
\begin{lem}\label{l6}
The surface $S$ has a Zariski dense set of $\mathbb{Q}$-rational points.
\end{lem}
\begin{proof}
We follow the strategy of the proof of \cite[Proposition 5]{Sko}.
It will be easier to work on the K3 double cover $\widetilde{S}$.
By setting
\[w^{2}=\frac{t^{2}+1}{2},\, v_{2}=wu_{2},\, v_{3}=wu_{3},\]
we obtain the defining equations of its affine model $\widetilde{S}^{0}$ in
$\mathbb{A}^{5}=\Spec \mathbb{Q}[t,u_{1},v_{2},v_{3},w]$:
\[
u_{1}^{2}-(-3t^{2}-2t+8) = v_{2}^{2}, \, u_{1}^{2}-\frac{t^{2}-15t+16}{2} = v_{3}^{2},\, \frac{t^{2}+1}{2}=w^{2}.
\]
The projection
$\widetilde{S}^{0}\rightarrow C^{0}=\Spec\mathbb{Q}[t,w]/(\frac{t^{2}+1}{2}-w^{2})$
defines an elliptic fibration $\widetilde{S}\rightarrow C$ with reduced discriminant $c\cdot d\cdot (c-d)$.
We consider the curve $E^{0}\subset \widetilde{S}^{0}$ cut out by
\[
u_{1}=t-3,\,v_{2}=2t-1,
\]
which is isomorphic to the affine curve in
$\mathbb{A}^{3}=\Spec \mathbb{Q}[t,v_{3}, w]$
defined by
\[
\frac{1}{2}(t+1)(t+2)=v_{3}^{2},\, \frac{t^{2}+1}{2}=w^{2}.
\]
Then $E^{0}$ gives an elliptic curve $E$.
We prove that $E(\mathbb{Q})$ is dense.
Let $O\in E$ be the point given by
\[
t=-1,\, v_{3}=0, \, w=-1
\]
and $P\in E$ be the point given by
\[
t=7,\, v_{3}=-6,\, w=5.
\]
We only need to prove that $P-O\in \Pic^{0}(E)$ is of infinite order.
It is a simple matter to check that the map
\[
(t,v_{3},w)\mapsto \left(\frac{2(3t-4w-1)}{t-2w-1},\frac{8v_{3}}{t-2w-1}\right)
\]
gives a transformation into the Weierstrass model
\[
y^{2}=x^{3}-52x+144
\]
and sends $O$ (resp. $P$) to the point at infinity (resp. $(x,y)=(0,12)$).
Thus it is enough to verify that $(x,y)=(0,12)$ defines a point of infinite order.
This follows from a theorem of Lutz and Nagell \cite[VIII. Corollary 7.2]{S}
since the $y$-value $12$ is non-zero and $12^{2}$ does not divide $4\cdot (-52)^{3}+27\cdot 144^{2}$.
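The computations in this step are elementary and can be verified mechanically; the following Python sketch (not part of the proof) checks that $O$ and $P$ lie on $E^{0}$, that the displayed map sends $P$ to $(x,y)=(0,12)$ and $O$ to the point at infinity, and that the Lutz-Nagell hypothesis holds:

```python
from fractions import Fraction as Fr

# The change of variables from the text, landing in the Weierstrass model.
def to_weierstrass(t, v3, w):
    den = t - 2*w - 1
    return (Fr(2 * (3*t - 4*w - 1), den), Fr(8 * v3, den))

# Membership tests for E^0 and for the Weierstrass curve y^2 = x^3 - 52x + 144.
def on_E0(t, v3, w):
    return (t + 1) * (t + 2) == 2 * v3**2 and t**2 + 1 == 2 * w**2

def on_weierstrass(x, y):
    return y**2 == x**3 - 52*x + 144

P = (7, -6, 5)             # lies on E^0 and maps to (0, 12)
img = to_weierstrass(*P)

O = (-1, 0, -1)            # lies on E^0; the denominator t - 2w - 1 vanishes,
O_den = O[0] - 2*O[2] - 1  # so O goes to the point at infinity

# Lutz-Nagell: (0, 12) is non-torsion because y = 12 is non-zero
# and 12^2 does not divide 4*(-52)^3 + 27*144^2.
N = 4 * (-52)**3 + 27 * 144**2
```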
Moreover, $E$ is a saliently ramified multi-section of the elliptic fibration $\widetilde{S}\rightarrow C$.
Indeed, it is easy to verify that $E\rightarrow C$ is branched over
\[t=-1,\, w=\pm 1,
\]
while $t=-1$ is not a root of the reduced discriminant
\[
c\cdot d\cdot (c-d)= (-3t^{2}-2t+8)\left(\frac{t^{2}-15t+16}{2}\right)\left(\frac{-7t^{2}+11t}{2}\right).
\]
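Again these evaluations can be checked mechanically (a Python sketch, not part of the proof): the factors of the reduced discriminant are all non-zero at $t=-1$, and the two branch points $t=-1$, $w=\pm 1$ indeed lie on $E^{0}$ with $v_{3}=0$:

```python
from fractions import Fraction as Fr

# Factors of the reduced discriminant c * d * (c - d).
def c(t): return -3*t**2 - 2*t + 8
def d(t): return Fr(t**2 - 15*t + 16, 2)
def c_minus_d(t): return Fr(-7*t**2 + 11*t, 2)

# At the branch point t = -1 all three factors are non-zero, so the
# ramification of E over C occurs over a smooth elliptic fiber.
vals = (c(-1), d(-1), c_minus_d(-1))

# The branch points (t, w) = (-1, +/-1) lie on E^0 with v3 = 0.
branch_on_E0 = all((t + 1) * (t + 2) == 0 and t**2 + 1 == 2 * w**2
                   for t, w in [(-1, 1), (-1, -1)])
```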
Now Theorem \ref{BT} shows that $\widetilde{S}(\mathbb{Q})$ is dense.
This in turn implies that $S(\mathbb{Q})$ is dense.
The proof is complete.
\end{proof}
We define a compactification of $S^{0}$ in $\mathbb{P}^{2}\times \mathbb{P}^{2}$ as follows.
We set
\[
\mathbb{P}^{2}\times \mathbb{P}^{2}=\Proj\mathbb{Q}[X_{0},X_{1},X_{2}]\times \Proj\mathbb{Q}[Y_{0},Y_{1},Y_{2}].
\]
On $\mathbb{P}^{2}\times \mathbb{P}^{2}$, we consider the map of vector bundles
\[
v\colon \mathcal{O}^{\oplus 3}\rightarrow \mathcal{O}(2,0)\oplus \mathcal{O}(0,2)
\]
given by the $2\times 3$ matrix
\[
\left(
\begin{array}{ccc}
X_{0}^{2}& X_{1}^{2}& X_{2}^{2}\\
2Y_{0}^{2}+6Y_{1}^{2}+4Y_{1}Y_{2}-16Y_{2}^{2} & 2Y_{0}^{2}-Y_{1}^{2}+15Y_{1}Y_{2}-16Y_{2}^{2} & Y_{1}^{2}+Y_{2}^{2}
\end{array}
\right).
\]
Let $S'$ be the rank one degeneracy locus of $v$.
It is straightforward to see that $S'$ is indeed a compactification of $S^{0}$.
The surface $S'$ is a local complete intersection, so in particular, Gorenstein.
The surface $S'$ has isolated singular points, and blowing them up gives a crepant resolution $S\rightarrow S'$.
Finally, we give the construction of the Calabi-Yau threefold.
On the projective bundle $\mathbb{P}=\mathbb{P}_{\mathbb{P}^{2}}(\mathcal{O}^{\oplus 3}\oplus \mathcal{O}(1))$, we let
\[
u\colon \mathcal{O}^{\oplus 3}\rightarrow \mathcal{O}(2H_{1})\oplus \mathcal{O}(2H)
\]
be
general among all maps defined over $\mathbb{Q}$ which restrict to $v$ on $\mathbb{P}_{\mathbb{P}^{2}}(\mathcal{O}^{\oplus 3})=\mathbb{P}^{2}\times \mathbb{P}^{2}$.
We define $X$ to be the rank one degeneracy locus of $u$.
\begin{thm}\label{D}
The threefold $X$ is a Calabi-Yau threefold in a strict sense defined over $\mathbb{Q}$ with a Zariski dense set of $\mathbb{Q}$-rational points.
Moreover, $X$ satisfies the following geometric properties:
\begin{enumerate}
\item $X_{\mathbb{C}}$ admits K3 and elliptic fibrations;
\item $X_{\mathbb{C}}$ contains no abelian surface;
\item $X_{\mathbb{C}}$ is the unique minimal model in its birational equivalence class;
\item $\Bir(X_{\mathbb{C}})=\Aut(X_{\mathbb{C}})$ and these groups are finite.
\end{enumerate}
\end{thm}
\begin{proof}
One can check that $X$ is smooth and the proofs in Section \ref{CY3} still go through.
It remains to show that $X(\mathbb{Q})$ is Zariski dense.
By a similar argument to that in Lemma \ref{l0}, $S'$ is a saliently ramified multi-section of the elliptic fibration $\phi \colon X\rightarrow \mathbb{P}^{2}$.
Since $S'(\mathbb{Q})$ is dense by Lemma \ref{l6}, the result follows from Theorem \ref{BT}.
The proof is complete.
\end{proof}
One can be more explicit.
We fix a non-zero element $Z\in H^{0}(\mathbb{P},\mathcal{O}_{\mathbb{P}}(H-H_{1}))$.
Let $u$ be given by the matrix
\[
\left(
\begin{array}{ccc}
P_{1}& Q_{1}& R_{1}\\
P_{2} & Q_{2}& R_{2}
\end{array}
\right),
\]
where
\[
P_{1}=X_{0}^{2},\, Q_{1}=X_{1}^{2},\, R_{1}=X_{2}^{2}
\]
and
\begin{align*}
P_{2}&=(X_{1}^{2}+X_{2}^{2})Z^{2}+(X_{1}Y_{1}+X_{2}Y_{2})Z+2Y_{0}^{2}+6Y_{1}^{2}+4Y_{1}Y_{2}-16Y_{2}^{2},\\
Q_{2}&=(X_{0}^{2}+X_{2}^{2})Z^{2}+(X_{0}Y_{0}+X_{2}Y_{2})Z+2Y_{0}^{2}-Y_{1}^{2}+15Y_{1}Y_{2}-16Y_{2}^{2},\\
R_{2}&=(X_{0}^{2}+X_{1}^{2})Z^{2}+(X_{0}Y_{0}+X_{1}Y_{1})Z+Y_{1}^{2}+Y_{2}^{2}.
\end{align*}
Then {\it Macaulay 2} shows that the corresponding $X$ is smooth and that the discriminant locus $\Delta\subset \mathbb{P}^{2}$ of $\phi\colon X\rightarrow \mathbb{P}^{2}$ and the branch locus $B\subset \mathbb{P}^{2}$ of $\phi|_{S}\colon S\rightarrow \mathbb{P}^{2}$ meet properly.
Consequently, the set $X(\mathbb{Q})$ is Zariski dense.
Microreversibility expresses the symmetry of the microscopic Hamiltonian dynamics of a system under the time-reversal transformation \cite{deGr}. Such a symmetry holds both for classical and quantum systems in the absence or the presence of an external magnetic field $\bm{B}$. In the latter case, the time-reversal symmetry applies to the total system that includes the charged particles in the external circuits whose electric currents generate the magnetic field $\bm{B}$. This symmetry lies at the heart of the study of nonequilibrium systems, as a large number of important results of nonequilibrium statistical physics appear to be fundamentally rooted in microreversibility.
Nonequilibrium systems are characterized by the occurrence of net currents, e.g., of energy or particles. The latter describe the response of the system to the nonequilibrium constraints to which it is subjected. Such constraints can be mechanical or thermodynamic forces, commonly referred to as affinities \cite{deGr,deDon,Prig} and rising, in particular, from differences of temperatures or chemical potentials. Close to equilibrium, the currents are proportional to the affinities. This is for instance the case in Fick's law of diffusion \cite{deGr} or Ohm's law \cite{Imry}. In this linear response regime, microreversibility manifests itself into the well-known Onsager-Casimir reciprocity relations satisfied by the linear response coefficients \cite{Ons31a,Ons31b,Cas45}, the Green-Kubo formulae \cite{Gre52,Gre54,Kub57}, or the fluctuation-dissipation theorem \cite{CW51,Kub66}.
However, many nonequilibrium systems operate in regimes where the currents have nonlinear dependences on the nonequilibrium constraints. This is for instance the case in mesoscopic electronic circuits where large voltage differences can be implemented, hence inducing deviations from Ohm's law \cite{NYH10,NYH11}. Remarkably, microreversibility greatly influences the nonlinear transport properties of a nonequilibrium system as well, as was first noted for Hamiltonian systems or stochastic processes \cite{BK77,S92,S94}. The more recent \textit{fluctuation relations} (or fluctuation theorems) are also deeply rooted in microreversibility \cite{ECM93,ES94,GC95,Jar97,Kur98,LS99,Cro99,ZC03,Kur00,AGM09,EHM09,CHT11,HPPG11,Gas13_1, Gas13_2}. The latter stand as exact results that remain valid arbitrarily far from equilibrium, and typically express a particular symmetry relation satisfied by the probability distribution of the nonequilibrium currents.
We recently studied the fluctuation relation for open systems, connected to reservoirs of energy and particles, in nonequilibrium steady states both in the absence \cite{BG18} and the presence \cite{BG19} of an external magnetic field $\bm{B}$. We showed that the fluctuation relation reduces by about half the total number of statistical cumulants, and of their responses to the affinities, that need to be known in order to fully describe the statistical properties of the nonequilibrium currents. Interestingly, the analysis performed in \cite{BG18,BG19} made extensive use of Euler's polynomials (see e.g. \cite{AbrSteg,GradRyz}). In particular, we showed in \cite{BG19} that the cumulants that are constrained by the fluctuation relation can be expressed as linear combinations of the unconstrained ones, the coefficients of which are precisely the coefficients of Euler's polynomials. Surprisingly, however, the latter have not previously been of much use in nonequilibrium statistical physics. Further investigating the connection between Euler's polynomials and fluctuation relations, one of the most important quantitative tools of nonequilibrium statistical physics, is thus a question of clear theoretical interest.
In this paper, we demonstrate that Euler's polynomials are fundamentally rooted in the fluctuation relation, and this for systems both in the absence and the presence of external magnetic fields. In our case, the fluctuation relation consists in a symmetry property satisfied by the generating function of the statistical cumulants of the nonequilibrium currents. We show that it can be adequately rewritten in a form that explicitly involves the coefficients of Euler's polynomials. This alternative expression of the fluctuation relation allows us to readily express some of the cumulants and their responses as linear combinations of the remaining ones, and thus to unambiguously recover results that we previously obtained in \cite{BG19}. By introducing Euler's polynomials at the level of the fluctuation relation itself, the present work hence clarifies the fundamental connection, touched upon in \cite{BG18,BG19}, between these particular polynomials and microreversibility.
We first state in section~\ref{FR_sec} the fluctuation relation that we consider throughout this work, before we discuss in section~\ref{Euler_sec} some of the properties of Euler's polynomials. We then reformulate in section~\ref{alt_FR} the fluctuation relation by means of Euler's polynomials, and introduce the statistical cumulants in section~\ref{cumul_sec}. Here we show that the alternative form of the fluctuation relation obtained in section~\ref{alt_FR} readily yields results previously obtained in \cite{BG19}. Concluding remarks are drawn in section~\ref{conclusion_sec}.
\section{Fluctuation relation}\label{FR_sec}
We consider an open system connected to reservoirs of energy and particles in the presence of an external magnetic field $\bm{B}$. The system is assumed to reach a nonequilibrium steady state after a long enough time. The statistical properties of the nonequilibrium currents that cross the system can then be described by the generating function $Q \left( \bm{\lambda} , \bm{A} ; \bm{B} \right)$ of the statistical cumulants. It is a function of the counting parameters $\bm{\lambda}$ associated with the currents, the affinities $\bm{A}$ that drive them away from equilibrium, and the magnetic field (the latter being treated as a parameter in the sequel). It is worth specifying that the dimension of the vectors $\bm{\lambda}$ and $\bm{A}$ is equal to the total number of currents. The function $Q$ is known \cite{AGM09,Gas13_1,Gas13_2,SU08} to satisfy the multivariate fluctuation relation
\begin{eqnarray}
Q \left( \bm{\lambda} , \bm{A} ; \bm{B} \right) = Q \left( \bm{A} - \bm{\lambda} , \bm{A} ; -\bm{B} \right)
\label{fluct_rel}
\end{eqnarray}
as a consequence of microreversibility.
After the substitution $\bm{\lambda}\to -\bm{\lambda}$, and assuming the generating function $Q$ to be analytic so that the action of the derivatives $\partial_{\bm{\lambda}} \equiv \partial / \partial \bm{\lambda}$ is well defined, the fluctuation relation~\eref{fluct_rel} can be written as
\begin{eqnarray}
Q \left( -\bm{\lambda} , \bm{A} ; \bm{B} \right) = {\rm e}^{\bm{A}\cdot\partial_{\bm{\lambda}}} \, Q \left( \bm{\lambda} , \bm{A} ; -\bm{B} \right)
\label{fluct_rel_alt}
\end{eqnarray}
in terms of the translation operator ${\rm e}^{\bm{A}\cdot\partial_{\bm{\lambda}}}$, acting as ${\rm e}^{\bm{A}\cdot\partial_{\bm{\lambda}}}g(\bm{\lambda})=g(\bm{\lambda}+\bm{A})$ on any function $g(\bm{\lambda})$.
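The operator identity underlying equation~\eref{fluct_rel_alt} can be illustrated on a polynomial test function, for which the exponential series of the translation operator truncates exactly; the following sketch (in Python with SymPy, not part of the argument) uses an arbitrarily chosen quartic:

```python
from sympy import symbols, diff, factorial, expand

lam, A = symbols('lambda A')

# A stand-in polynomial for the generating function at fixed affinities and field.
g = 3*lam**4 - 2*lam**2 + lam + 5

# exp(A d/dlambda) acting on a degree-4 polynomial: the series truncates
# exactly after the fourth derivative.
translated = sum(A**n / factorial(n) * diff(g, lam, n) for n in range(5))

# The result coincides with the translated function g(lambda + A).
shift_identity = expand(translated) == expand(g.subs(lam, lam + A))
```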
\section{Euler's polynomials}\label{Euler_sec}
The generating function of Euler's polynomials $E_n(x)$ has the form
\begin{eqnarray}
\frac{2\, {\rm e}^{xt}}{{\rm e}^t+1} = \sum_{n=0}^{\infty} \frac{1}{n!} \, E_n(x) \, t^n
\label{Euler_poly_GF}
\end{eqnarray}
for $x$ a real number \cite{AbrSteg,GradRyz}, where the $n^{\mathrm{th}}$ order polynomial $E_n(x)$ can be written in the form
\begin{eqnarray}
E_n(x) = \sum_{i=0}^{n} e_i^{(n)} \, x^i \, .
\label{Euler_poly_def}
\end{eqnarray}
Taking $x=0$ in \eref{Euler_poly_GF} gives
\begin{eqnarray}
\frac{2}{{\rm e}^t+1} = \sum_{n=0}^{\infty} \frac{1}{n!} \, e_0^{(n)} \, t^n \, ,
\label{Euler_poly_GF_0}
\end{eqnarray}
hence generating the coefficients
\begin{eqnarray}
e_0^{(n)} \equiv E_n(0) \, .
\label{e0n}
\end{eqnarray}
Setting $x=1$ in \eref{Euler_poly_GF}, we instead get that
\begin{eqnarray}
\frac{2\, {\rm e}^t}{{\rm e}^t+1} = \sum_{n=0}^{\infty} \frac{1}{n!} \, E_n(1) \, t^n \, .
\label{Euler_poly_GF_1}
\end{eqnarray}
Since
\begin{eqnarray}
\frac{2\, {\rm e}^{t}}{{\rm e}^t+1} = \frac{2}{{\rm e}^{-t}+1} = \sum_{n=0}^{\infty} \frac{(-1)^n}{n!} \, e_0^{(n)} \, t^n \, ,
\end{eqnarray}
we find that
\begin{eqnarray}
e_0^{(n)}= E_n (0) = (-1)^n \, E_n(1) \, .
\label{e0n_0+1}
\end{eqnarray}
Adding together~\eref{Euler_poly_GF_0} and~\eref{Euler_poly_GF_1} leads to
\begin{eqnarray}
\sum_{n=0}^{\infty} \frac{1}{n!} \, [E_n(0)+E_n(1)] \, t^n = 2 \, ,
\label{Euler_nbers_GF_0+1}
\end{eqnarray}
so that
\begin{eqnarray}
E_0(0)+E_0(1) =2 \, , \qquad\mbox{and} \qquad
E_n (0) + E_n(1) = 0 \quad\mbox{for} \quad n\ge 1 \, .
\label{Euler_nbers_rel}
\end{eqnarray}
As a consequence of~\eref{e0n_0+1} and~\eref{Euler_nbers_rel}, we recover the properties that
\begin{eqnarray}
&& e_0^{(0)}=E_0(0)= E_0(1)= 1 \, ,
\end{eqnarray}
and
\begin{eqnarray}
&& e_{0}^{(2j-1)} = E_{2j-1}(0)= - E_{2j-1}(1) \, , \\
&& e_{0}^{(2j)} = E_{2j}(0)= E_{2j}(1) = 0 \, ,
\end{eqnarray}
for $j\ge 1$ \cite{AbrSteg,GradRyz}.
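These parity properties are classical and can be confirmed symbolically; the following sketch (using SymPy's \texttt{euler}, not part of the argument) checks them for the first few polynomials:

```python
from sympy import euler

# E_0(0) = E_0(1) = 1.
base = (euler(0, 0), euler(0, 1))

# E_{2j}(0) = E_{2j}(1) = 0 for j >= 1.
even_vanish = all(euler(2*j, 0) == 0 and euler(2*j, 1) == 0
                  for j in range(1, 6))

# E_{2j-1}(0) = -E_{2j-1}(1) for j >= 1 (e.g. E_1(0) = -1/2).
odd_flip = all(euler(2*j - 1, 0) == -euler(2*j - 1, 1)
               for j in range(1, 6))

# The general reflection relation E_n(0) = (-1)^n E_n(1).
reflection = all(euler(n, 0) == (-1)**n * euler(n, 1) for n in range(10))
```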
Now, since the hyperbolic tangent can be expressed as
\begin{eqnarray}
\tanh \frac{t}{2} = \frac{{\rm e}^{t/2}-{\rm e}^{-t/2}}{{\rm e}^{t/2}+{\rm e}^{-t/2}} = \frac{{\rm e}^{t}-1}{{\rm e}^{t}+1} = \frac{1}{{\rm e}^{-t}+1} - \frac{1}{{\rm e}^t+1} \, ,
\label{tanh}
\end{eqnarray}
we find the following relation between the hyperbolic tangent and the constant terms of Euler's polynomials:
\begin{eqnarray}
\tanh \frac{t}{2} = - \sum_{j=1}^{\infty} \frac{e_0^{(2j-1)}}{(2j-1)!} \, t^{2j-1}
\, .
\label{tanh-e0n}
\end{eqnarray}
It is also worth noting that the constant terms $e_0^{(n)}$ of Euler's polynomials are related to Bernoulli's numbers $B_n$ according to
\begin{eqnarray}
e_0^{(n)} = E_n(0) = - \frac{2}{n+1} \, (2^{n+1}-1) \, B_{n+1} \, ,
\label{e0n-Bernoulli}
\end{eqnarray}
for any $n>0$, so that we recover the known power series expansion
\begin{eqnarray}
\tanh \frac{t}{2} = \sum_{j=1}^{\infty} c_{2j-1}\, t^{2j-1} \qquad \mbox{with}\qquad c_{2j-1} = \frac{2}{(2j)!} \, (2^{2j}-1) \, B_{2j}
\label{tanh-Bn}
\end{eqnarray}
(see e.g. equation 4.5.64 of reference \cite{AbrSteg}).
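As an independent check (again an illustration, not part of the derivation), the coefficients $c_{2j-1}$ of~\eref{tanh-Bn} can be generated from the standard Bernoulli recurrence and compared against $\tanh(t/2)$ numerically:

```python
from fractions import Fraction
from math import comb, factorial, tanh

def bernoulli_numbers(nmax):
    """B_0..B_nmax from the recurrence sum_{k=0}^{n-1} C(n+1,k) B_k = -(n+1) B_n."""
    B = [Fraction(1)]
    for n in range(1, nmax + 1):
        B.append(-sum(comb(n + 1, k) * B[k] for k in range(n)) / (n + 1))
    return B

B = bernoulli_numbers(12)
assert B[2] == Fraction(1, 6) and B[4] == Fraction(-1, 30)

def c(j):
    # series coefficient of t^(2j-1) in tanh(t/2), as in (tanh-Bn)
    return 2 * (2 ** (2 * j) - 1) * B[2 * j] / factorial(2 * j)

t = 0.3
partial = sum(float(c(j)) * t ** (2 * j - 1) for j in range(1, 7))
assert abs(partial - tanh(t / 2)) < 1e-10
```

The first coefficient is $c_1 = 1/2 = -e_0^{(1)}$, consistent with~\eref{tanh-e0n} and~\eref{e0n-Bernoulli}.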
\section{Alternative forms of the fluctuation relation}\label{alt_FR}
We introduce the functions
\begin{eqnarray}
Q_{\pm}(\bm{\lambda},\bm{A};\bm{B}) \equiv \frac{1}{2} \left[ Q(\bm{\lambda},\bm{A};\bm{B}) \pm Q(-\bm{\lambda},\bm{A};\bm{B}) \right] \, ,
\label{Q_pm}
\end{eqnarray}
giving the parts of the cumulant generating function that are even ($Q_{+}$) or odd ($Q_{-}$) in the counting parameters $\bm{\lambda}$ and such that
\begin{eqnarray}
Q(\pm \bm{\lambda},\bm{A};\bm{B}) = Q_{+}(\bm{\lambda},\bm{A};\bm{B}) \pm Q_{-}(\bm{\lambda},\bm{A};\bm{B}) \, .
\end{eqnarray}
Besides, the symmetric and antisymmetric parts of an arbitrary function $f$ of $\bm{B}$ are defined as
\begin{eqnarray}
f^{\mathrm{S},\mathrm{A}}(\bm{B}) \equiv \frac{1}{2} \left[ f( \bm{B}) \pm f( -\bm{B}) \right] \, ,
\label{sym_antisym_gen_def}
\end{eqnarray}
so that
\begin{eqnarray}
f( \pm \bm{B}) = f^{\mathrm{S}}(\bm{B}) \pm f^{\mathrm{A}}(\bm{B}) \, .
\label{fct_from_sym_antisym}
\end{eqnarray}
Now, substituting the fluctuation relation~\eref{fluct_rel_alt} into the definition~\eref{Q_pm} of $Q_{\pm}$ yields
\begin{eqnarray}
Q_{\pm}(\bm{\lambda},\bm{A};\bm{B}) = \frac{1}{2} \left[ Q(\bm{\lambda},\bm{A};\bm{B}) \pm {\rm e}^{\bm{A}\cdot\partial_{\bm{\lambda}}} \, Q(\bm{\lambda},\bm{A};-\bm{B}) \right] \, .
\label{FR_Q_pm}
\end{eqnarray}
Moreover, taking the symmetric and antisymmetric parts of equations~\eref{FR_Q_pm} in the magnetic field $\bm{B}$ gives the four following relations:
\begin{eqnarray}
&& Q_{+}^{\mathrm{S}}(\bm{\lambda},\bm{A};\bm{B}) = \frac{1}{2} \left( 1 + {\rm e}^{\bm{A}\cdot\partial_{\bm{\lambda}}} \right) \, Q^{\mathrm{S}}(\bm{\lambda},\bm{A};\bm{B}) \, , \label{FR_Q+S} \\
&& Q_{-}^{\mathrm{S}}(\bm{\lambda},\bm{A};\bm{B}) = \frac{1}{2} \left( 1 - {\rm e}^{\bm{A}\cdot\partial_{\bm{\lambda}}} \right) \, Q^{\mathrm{S}}(\bm{\lambda},\bm{A};\bm{B}) \, , \label{FR_Q-S} \\
&& Q_{+}^{\mathrm{A}}(\bm{\lambda},\bm{A};\bm{B}) = \frac{1}{2} \left( 1 - {\rm e}^{\bm{A}\cdot\partial_{\bm{\lambda}}} \right) \, Q^{\mathrm{A}}(\bm{\lambda},\bm{A};\bm{B}) \, , \label{FR_Q+A}\\
&& Q_{-}^{\mathrm{A}}(\bm{\lambda},\bm{A};\bm{B}) = \frac{1}{2} \left( 1 + {\rm e}^{\bm{A}\cdot\partial_{\bm{\lambda}}} \right) \, Q^{\mathrm{A}}(\bm{\lambda},\bm{A};\bm{B}) \, . \label{FR_Q-A}
\end{eqnarray}
Multiplying~\eref{FR_Q+S} and~\eref{FR_Q-A} by $(1 - {\rm e}^{\bm{A}\cdot\partial_{\bm{\lambda}}})$, and \eref{FR_Q-S} as well as~\eref{FR_Q+A} by $(1 + {\rm e}^{\bm{A}\cdot\partial_{\bm{\lambda}}})$, shows that we have the identities
\begin{eqnarray}
&& \left( 1 - {\rm e}^{\bm{A}\cdot\partial_{\bm{\lambda}}} \right) \, Q_{+}^{\mathrm{S}}(\bm{\lambda},\bm{A};\bm{B}) = \left( 1 + {\rm e}^{\bm{A}\cdot\partial_{\bm{\lambda}}} \right) \, Q_{-}^{\mathrm{S}}(\bm{\lambda},\bm{A};\bm{B})\, , \label{FR_Q+-S} \\
&& \left( 1 + {\rm e}^{\bm{A}\cdot\partial_{\bm{\lambda}}} \right) \,Q_{+}^{\mathrm{A}}(\bm{\lambda},\bm{A};\bm{B}) = \left( 1 - {\rm e}^{\bm{A}\cdot\partial_{\bm{\lambda}}} \right) \, Q_{-}^{\mathrm{A}}(\bm{\lambda},\bm{A};\bm{B}) \, . \label{FR_Q+-A}
\end{eqnarray}
Inverting $(1 + {\rm e}^{\bm{A}\cdot\partial_{\bm{\lambda}}})$ and using~\eref{tanh}, these two relations equivalently read
\begin{eqnarray}
&& Q_{-}^{\mathrm{S}}(\bm{\lambda},\bm{A};\bm{B}) = - \tanh\left( \frac{1}{2} \bm{A}\cdot\partial_{\bm{\lambda}} \right) \, Q_{+}^{\mathrm{S}}(\bm{\lambda},\bm{A};\bm{B})\, , \label{FR_Q+-S-th} \\
&& Q_{+}^{\mathrm{A}}(\bm{\lambda},\bm{A};\bm{B}) = - \tanh\left( \frac{1}{2} \bm{A}\cdot\partial_{\bm{\lambda}} \right) \, Q_{-}^{\mathrm{A}}(\bm{\lambda},\bm{A};\bm{B}) \, . \label{FR_Q+-A-th}
\end{eqnarray}
Finally, combining~\eref{FR_Q+-S-th}-\eref{FR_Q+-A-th} with the expansion~\eref{tanh-e0n} of the hyperbolic tangent in terms of the constant terms~\eref{e0n} of Euler's polynomials, we thus obtain
\begin{eqnarray}
&& Q_{-}^{\mathrm{S}}(\bm{\lambda},\bm{A};\bm{B}) = \sum_{j=1}^{\infty} \frac{e_0^{(2j-1)}}{(2j-1)!} \left( \bm{A}\cdot\partial_{\bm{\lambda}} \right)^{2j-1} \, Q_{+}^{\mathrm{S}}(\bm{\lambda},\bm{A};\bm{B})\, , \label{FR_Q+-S-Euler} \\
&& Q_{+}^{\mathrm{A}}(\bm{\lambda},\bm{A};\bm{B}) = \sum_{j=1}^{\infty} \frac{e_0^{(2j-1)}}{(2j-1)!} \left( \bm{A}\cdot\partial_{\bm{\lambda}} \right)^{2j-1} \, Q_{-}^{\mathrm{A}}(\bm{\lambda},\bm{A};\bm{B}) \, , \label{FR_Q+-A-Euler}
\end{eqnarray}
which are equivalent to the original fluctuation relation~\eref{fluct_rel}.
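For illustration only, the leading terms of~\eref{FR_Q+-S-Euler} can be written out explicitly using the values $e_0^{(1)}=E_1(0)=-1/2$ and $e_0^{(3)}=E_3(0)=1/4$:

```latex
\begin{eqnarray}
Q_{-}^{\mathrm{S}}(\bm{\lambda},\bm{A};\bm{B}) = -\frac{1}{2} \left( \bm{A}\cdot\partial_{\bm{\lambda}} \right) Q_{+}^{\mathrm{S}}(\bm{\lambda},\bm{A};\bm{B}) + \frac{1}{24} \left( \bm{A}\cdot\partial_{\bm{\lambda}} \right)^{3} Q_{+}^{\mathrm{S}}(\bm{\lambda},\bm{A};\bm{B}) + \ldots \nonumber
\end{eqnarray}
```

and similarly for~\eref{FR_Q+-A-Euler} with $Q_{+}^{\mathrm{A}}$ and $Q_{-}^{\mathrm{A}}$ interchanged.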
\section{Cumulants and their responses}\label{cumul_sec}
The cumulants and their nonequilibrium responses around equilibrium are defined by
\begin{eqnarray}
Q_{\alpha_1 \cdots \alpha_m \, , \, \beta_1 \cdots \beta_n} (\bm{B}) \equiv \frac{\partial^{m+n} Q}{\partial \lambda_{\alpha_1} \cdots \partial \lambda_{\alpha_m} \partial A_{\beta_1} \cdots \partial A_{\beta_n}} \left( \bm{0}, \bm{0} ; \bm{B} \right) \, ,
\label{m_cumulant_n_resp_def}
\end{eqnarray}
and they can be obtained by expanding the cumulant generating function $Q$ in powers of the counting parameters $\bm{\lambda}$ and the affinities $\bm{A}$ according to
\begin{eqnarray}
\fl Q \left( \pm \bm{\lambda} , \bm{A} ; \bm{B} \right) = \sum_{m , n = 0}^{\infty} \frac{(\pm 1)^m}{m! \, n!} \, Q_{\alpha_1 \cdots \alpha_m \, , \, \beta_1 \cdots \beta_n} (\bm{B})\, \lambda_{\alpha_1} \cdots \lambda_{\alpha_m} A_{\beta_1} \cdots A_{\beta_n}
\label{Q_exp_count_par_and_aff}
\end{eqnarray}
with Einstein's convention of summation over repeated indices. The expansion~\eref{Q_exp_count_par_and_aff} readily allows us to obtain the corresponding power series of the functions $Q_{\pm}^{\mathrm{S},\mathrm{A}}$ by means of their definitions~\eref{Q_pm} and~\eref{sym_antisym_gen_def}.
Now, we have that
\begin{eqnarray}
\fl \left( \bm{A} \cdot \partial_{\bm{\lambda}} \right)^k Q \left( \pm \bm{\lambda} , \bm{A} ; \bm{B}\right) = \sum_{m=0}^{\infty}\sum_{n=k}^{\infty} \frac{(\pm 1)^{m+k}}{m! \, n!} \, Q_{\alpha_1 \cdots \alpha_m \, , \, \beta_1 \cdots \beta_n}^{\{k\}}(\bm{B}) \, \lambda_{\alpha_1} \cdots \lambda_{\alpha_m} A_{\beta_1} \cdots A_{\beta_n} \nonumber \\
\label{A_d_Q}
\end{eqnarray}
for any $k \geqslant 0$. In~\eref{A_d_Q}, the quantities $Q^{\{k\}}$ are defined by
\begin{eqnarray}
Q_{\alpha_1 \cdots \alpha_m \, , \, \beta_1 \cdots \beta_n}^{\{k\}}(\bm{B}) &\equiv& \sum_{j = 1}^{n} Q_{\alpha_1 \cdots \alpha_m \beta_j \, , \, \beta_1 \cdots \beta_{j-1} \beta_{j+1} \cdots \beta_n}^{\{k-1\}}(\bm{B}) \nonumber\\[0.2cm]
&=& \sum_{j_1 = 1}^{n} \sum_{j_{2}=1 \atop j_{2} \neq j_{1}}^{n} \cdots \sum_{j_{k}=1 \atop j_{k} \neq j_1 , \ldots , j_{k-1}}^{n} Q_{\alpha_1 \cdots \alpha_m \beta_{j_1} \cdots \beta_{j_k} \, , \, (\boldsymbol{\cdot})}(\bm{B})
\label{Q_k_def}
\end{eqnarray}
for $k \geqslant 1$, with $Q^{\{0\}} \equiv Q$, and where $(\boldsymbol{\cdot})$ denotes the set of all subscripts $\beta$ that are different from the subscripts $\beta$ present on the left of the comma, i.e., $\beta_{j_1} , \ldots , \beta_{j_k}$. The results~\eref{A_d_Q}-\eref{Q_k_def} can be shown by induction on the integer $k$ \cite{BG19}, and by noting that the differential operator $\bm{A} \cdot \partial_{\bm{\lambda}} = A_{\gamma} \, \partial_{\lambda_{\gamma}}$ acts as
\begin{eqnarray}
\fl \left( \bm{A} \cdot \partial_{\bm{\lambda}} \right) \, Q_{\alpha_1 \cdots \alpha_m \, , \, \beta_1 \cdots \beta_n}^{\{k\}} (\bm{B}) \, \lambda_{\alpha_1} \cdots \lambda_{\alpha_m} = m \, Q_{\alpha_1 \cdots \alpha_{m-1} \gamma \, , \, \beta_1 \cdots \beta_n}^{\{k\}} (\bm{B}) \, \lambda_{\alpha_1} \cdots \lambda_{\alpha_{m-1}} A_{\gamma} \, ,
\label{A_d_action_def}
\end{eqnarray}
where we used the invariance of the cumulants~\eref{m_cumulant_n_resp_def} under any permutation of the subscripts either on the left or the right of the comma and again Einstein's convention for repeated indices. One can thus see that the summation over the dummy indices $\gamma,\beta_1, \ldots, \beta_n$ implied by Einstein's convention can be adequately rewritten (by means of a mere change of indices) so as to obtain
\begin{eqnarray}
\fl \left( \bm{A} \cdot \partial_{\bm{\lambda}} \right) \, Q_{\alpha_1 \cdots \alpha_m \, , \, \beta_1 \cdots \beta_n}^{\{k\}} (\bm{B}) \, \lambda_{\alpha_1} \cdots \lambda_{\alpha_m} A_{\beta_1} \cdots A_{\beta_n} \nonumber \\
= m \, Q_{\alpha_1 \cdots \alpha_{m-1} \beta_1 \, , \, \beta_2 \cdots \beta_{n+1}}^{\{k\}} (\bm{B}) \, \lambda_{\alpha_1} \cdots \lambda_{\alpha_{m-1}} A_{\beta_1} \cdots A_{\beta_{n+1}} \, .
\label{A_d_action_full}
\end{eqnarray}
The fact that the results~\eref{A_d_Q}-\eref{Q_k_def} remain true for the integer $k+1$ then readily follows from~\eref{A_d_action_full}.
In addition, it has been shown in reference~\cite{BG19} that
\begin{eqnarray}
Q_{\alpha_1 \cdots \alpha_m \, , \, \beta_1 \cdots \beta_n}^{\{k\}}(\bm{B}) = k! \, Q_{\alpha_1 \cdots \alpha_m \, , \, \beta_1 \cdots \beta_n}^{(k)}(\bm{B})
\label{Q_k_alt_expr}
\end{eqnarray}
in terms of the quantities
\begin{eqnarray}
Q_{\alpha_1 \cdots \alpha_m \, , \, \beta_1 \cdots \beta_n}^{(k)}(\bm{B}) \equiv \sum_{j_1 = 1}^{n} \sum_{j_{2}=1 \atop j_{2} > j_{1}}^{n} \cdots \sum_{j_{k}=1 \atop j_{k} > j_{k-1}}^{n} Q_{\alpha_1 \cdots \alpha_m \beta_{j_1} \cdots \beta_{j_k} \, , \, (\bm{\cdot})}(\bm{B})
\label{Q_k_expr}
\end{eqnarray}
for $k \geqslant 1$, with again $Q^{(0)} \equiv Q$.
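The identity~\eref{Q_k_alt_expr} is purely combinatorial: summing a quantity that is symmetric in the selected subscripts over ordered $k$-tuples of distinct indices, as in~\eref{Q_k_def}, yields $k!$ times the sum over strictly increasing $k$-tuples, as in~\eref{Q_k_expr}. A minimal numerical check, with a hypothetical symmetric summand standing in for the cumulants:

```python
from itertools import combinations, permutations
from math import factorial

# Hypothetical test function, symmetric under permutation of the selected
# subscripts (it depends only on the index set), standing in for a cumulant.
def summand(js):
    return sum(j * j for j in js) + 1

n, k = 6, 3
indices = range(1, n + 1)
ordered = sum(summand(p) for p in permutations(indices, k))      # as in Q^{{k}}
increasing = sum(summand(c) for c in combinations(indices, k))   # as in Q^{(k)}
assert ordered == factorial(k) * increasing
```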
Substituting the power series~\eref{Q_exp_count_par_and_aff}-\eref{A_d_Q} into~\eref{FR_Q+-S-Euler} and using~\eref{Q_k_alt_expr} generates the identity
\begin{eqnarray}
\fl \frac{1}{2} \sum_{m,n=0}^{\infty} \frac{1 - (- 1)^{m}}{m! \, n!} \, Q_{\alpha_1 \cdots \alpha_m \, , \, \beta_1 \cdots \beta_n}^{\mathrm{S}}(\bm{B}) \, \lambda_{\alpha_1} \cdots \lambda_{\alpha_m} A_{\beta_1} \cdots A_{\beta_n} \nonumber \\
\fl = \frac{1}{2} \sum_{m=0}^{\infty} \sum_{j=1}^{\infty} \sum_{n=2j-1}^{\infty} e_0^{(2j-1)} \, \frac{1 + (- 1)^{m+2j-1}}{m! \, n!} \, Q_{\alpha_1 \cdots \alpha_m \, , \, \beta_1 \cdots \beta_n}^{(2j-1) \, \mathrm{S}}(\bm{B}) \, \lambda_{\alpha_1} \cdots \lambda_{\alpha_m} A_{\beta_1} \cdots A_{\beta_n} \, .
\label{Q_min_S_Q_plus_S}
\end{eqnarray}
Identifying the terms with the same powers of $\lambda_{\alpha}$ and $A_{\beta}$ on both sides of~\eref{Q_min_S_Q_plus_S}, and noting that we have
\begin{eqnarray}
\sum_{j=1}^{\infty} \sum_{n=2j-1}^{\infty} (\cdot) = \sum_{n=1}^{\infty} \sum_{j=1}^{\mathbb{E} \left( \frac{n+1}{2} \right)} (\cdot) \, ,
\label{double_sum_identity}
\end{eqnarray}
where $\mathbb{E}(x)$ denotes the integer part of the positive real number $x$ (i.e., the unique integer $k \geqslant 0$ such that $k \leqslant x < k + 1$), we find for $m$ odd that
\begin{eqnarray}
Q_{\alpha_1 \cdots \alpha_m \, , \, \beta_1 \cdots \beta_n}^{\mathrm{S}} (\bm{B}) = \sum_{j=1}^{\mathbb{E} \left( \frac{n+1}{2} \right)} e_0^{(2j-1)} \, Q_{\alpha_1 \cdots \alpha_m \, , \, \beta_1 \cdots \beta_n}^{(2j-1)\, \mathrm{S}} (\bm{B}) \, .
\label{coeff_S_Euler}
\end{eqnarray}
This result holds for any odd integer $m \geqslant 1$ and any $n \geqslant 1$. In the case where the index $n$ is even, i.e., $n=2l$ with $l \geqslant 1$, hence making the total number $\mathcal{N} \equiv m+n$ odd, the result~\eref{coeff_S_Euler} is equivalent to equation~(58) of reference~\cite{BG19}. On the other hand, for $n$~odd, i.e., $n=2l-1$ with $l \geqslant 1$, now making the total number $\mathcal{N} \equiv m+n$ even, the result~\eref{coeff_S_Euler} is equivalent to equation~(59) of reference~\cite{BG19}.
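The reindexing~\eref{double_sum_identity} simply reorders the pairs $(j,n)$ satisfying $n \geqslant 2j-1$; it can be verified on a truncated sum with an arbitrary summand (exact rational arithmetic, so the two orders agree identically):

```python
from fractions import Fraction

N = 25  # truncation level; the identity is a pure reindexing, so it holds term by term

def term(j, n):
    # arbitrary test summand; any function of (j, n) would do
    return Fraction(1, (2 * j) ** 2 + n + 1)

lhs = sum(term(j, n) for j in range(1, N + 1) for n in range(2 * j - 1, N + 1))
rhs = sum(term(j, n) for n in range(1, N + 1) for j in range(1, (n + 1) // 2 + 1))
assert lhs == rhs  # E((n+1)/2) is (n + 1) // 2 for integer n >= 1
```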
Moreover, replacing~\eref{Q_exp_count_par_and_aff}-\eref{A_d_Q} into~\eref{FR_Q+-A-Euler} and using~\eref{Q_k_alt_expr} generates the identity
\begin{eqnarray}
\fl \frac{1}{2} \sum_{m,n=0}^{\infty} \frac{1 + (- 1)^{m}}{m! \, n!} \, Q_{\alpha_1 \cdots \alpha_m \, , \, \beta_1 \cdots \beta_n}^{\mathrm{A}}(\bm{B}) \, \lambda_{\alpha_1} \cdots \lambda_{\alpha_m} A_{\beta_1} \cdots A_{\beta_n} \nonumber \\
\fl = \frac{1}{2} \sum_{m=0}^{\infty} \sum_{j=1}^{\infty} \sum_{n=2j-1}^{\infty} e_0^{(2j-1)} \, \frac{1 - (- 1)^{m+2j-1}}{m! \, n!} \, Q_{\alpha_1 \cdots \alpha_m \, , \, \beta_1 \cdots \beta_n}^{(2j-1) \, \mathrm{A}}(\bm{B}) \, \lambda_{\alpha_1} \cdots \lambda_{\alpha_m} A_{\beta_1} \cdots A_{\beta_n} \, .
\label{Q_plus_A_Q_minus_A}
\end{eqnarray}
Again, identifying the terms with the same powers of $\lambda_{\alpha}$ and $A_{\beta}$ on both sides of~\eref{Q_plus_A_Q_minus_A} and using~\eref{double_sum_identity}, we obtain for $m$ even that
\begin{eqnarray}
Q_{\alpha_1 \cdots \alpha_m \, , \, \beta_1 \cdots \beta_n}^{\mathrm{A}} (\bm{B}) = \sum_{j=1}^{\mathbb{E} \left( \frac{n+1}{2} \right)} e_0^{(2j-1)} \, Q_{\alpha_1 \cdots \alpha_m \, , \, \beta_1 \cdots \beta_n}^{(2j-1)\, \mathrm{A}} (\bm{B}) \, , \label{coeff_A_Euler}
\end{eqnarray}
which is valid for any even integer $m \geqslant 0$ and any $n \geqslant 1$. Now, in the case of an even integer $n=2l$ (with $l \geqslant 1$), hence making the total number $\mathcal{N} \equiv m+n$ even, the result~\eref{coeff_A_Euler} is equivalent to equation~(101) of reference~\cite{BG19}, while for an odd integer $n=2l-1$ (with $l \geqslant 1$), hence yielding an odd $\mathcal{N} \equiv m+n$, the result~\eref{coeff_A_Euler} is equivalent to equation~(102) of reference~\cite{BG19}.
The results~\eref{coeff_S_Euler} and~\eref{coeff_A_Euler} generalize, to systems with a nonzero magnetic field, relations previously obtained in \cite{S92,AG07,AndPhD} in the absence of a magnetic field. Indeed, when $\bm{B}={\bf 0}$ only equation~\eref{coeff_S_Euler} holds, because equation~\eref{coeff_A_Euler} then gives $0=0$. In this case, expressions that can be found in \cite{S92,AG07,AndPhD} are recovered from~\eref{coeff_S_Euler} in view of equations~\eref{e0n-Bernoulli} and~\eref{tanh-Bn}.
\vskip 0.3 cm
\section{Conclusion}\label{conclusion_sec}
In this paper, we investigated the connection between fluctuation relations and Euler's polynomials. We considered a general open system, subjected to an external magnetic field $\bm{B}$, that reaches a nonequilibrium steady state in the long-time limit. The statistical properties of the nonequilibrium currents that take place within the system are then constrained by a fluctuation relation of the form~\eref{fluct_rel}.
The latter is a symmetry property satisfied by the generating function $Q\left( \bm{\lambda} , \bm{A} ; \bm{B} \right)$ of the statistical cumulants, which is a function of the counting parameters $\bm{\lambda}$ and the affinities $\bm{A}$, the magnetic field $\bm{B}$ being treated as a parameter. We reformulated this fluctuation relation in terms of the constant terms $e_0^{(n)}$ of Euler's polynomials $E_n(x)$, with $n \geqslant 1$ an integer that denotes the degree of $E_n(x)$. This could be done by separating the generating function $Q$ into components $Q_{\pm}$ that are even and odd with respect to $\bm{\lambda}$, as well as into symmetric and antisymmetric parts $Q^{\mathrm{S},\mathrm{A}}$ with respect to~$\bm{B}$. Indeed, the fluctuation relation~\eref{fluct_rel} is then mathematically equivalent to the two identities~\eref{FR_Q+-S-Euler} and~\eref{FR_Q+-A-Euler} satisfied by symmetric and antisymmetric parts, respectively. Finally, we showed that these identities yield the sets of relations~\eref{coeff_S_Euler} and~\eref{coeff_A_Euler} for symmetric and antisymmetric quantities, respectively.
A surprising aspect of our recent works \cite{BG18,BG19} has been the use of Euler's polynomials within our mathematical analysis of a fluctuation relation of the form~\eref{fluct_rel}, both in the absence \cite{BG18} and the presence \cite{BG19} of a magnetic field. In particular, we showed in \cite{BG19} that the fluctuation relation constrains about half of the (symmetric and antisymmetric parts of the) cumulants and their responses to the affinities. We then expressed these constrained quantities as linear combinations of the unconstrained ones, the coefficients of which turn out to be the constant terms $e_{0}^{(n)}$ of Euler's polynomials. These linear combinations precisely correspond to the relations~\eref{coeff_S_Euler} and~\eref{coeff_A_Euler} inferred in this paper from the alternative expressions~\eref{FR_Q+-S-Euler}-\eref{FR_Q+-A-Euler} of the fluctuation relation~\eref{fluct_rel}. Accordingly, the present work demonstrates that the occurrence of the constant terms of Euler's polynomials is a direct consequence of the symmetry property expressed by fluctuation relations, i.e., microreversibility.
\section*{Acknowledgments}
This research is financially supported by the Universit\'e Libre de Bruxelles (ULB) and the Fonds de la Recherche Scientifique~-~FNRS under the Grant PDR~T.0094.16 for the project ``SYMSTATPHYS".
\section*{References}
\bibliographystyle{unsrt}
\section{The Identification of Source Problem}
The goal of a forensic scientist is to help decide between two competing forensic hypotheses, one presented by the prosecution, denoted $H_p$, and one by the defense, denoted $H_d$. Sampling models corresponding to each of the forensic hypotheses are denoted $M_p$ and $M_d$, respectively. The statement of the forensic hypotheses and the sampling models depends upon the source identification question put to the scientist. The specific source identification question considers whether the trace originates from a fixed specific source. For the specific source identification problem, the corresponding forensic hypotheses and sampling model statements are given below\footnote{The specific source identification problem can be contrasted with the common source identification problem, which seeks to answer the question of whether or not two traces of unknown origin share a common source. In the common source case, the source is not fixed.}.
\begin{description}
\item[\textit{Forensic Hypotheses}]
\footnote{The forensic hypotheses for the common source problem are given by
\begin{description}
\item $H_p$: The two traces originated from the same source.
\item $H_d$: The two traces originated from two different sources.
\end{description}}
\begin{description}
\item $H_p$: The trace originated from the specific source.
\item $H_d$: The trace did not originate from the specific source, but from another source in the relevant alternative source population.
\end{description}
\pagebreak
\item [\textit{Sampling Models}]
\footnote{The sampling models for the common source problem are given by
\begin{description}
\item $M_p$: The two traces were generated by the same randomly selected source in the relevant alternative source population.
\item $M_d$: The two traces were generated by two different randomly selected sources in the relevant alternative source population.
\end{description}}
\begin{description}
\item $M_p$: The trace and the control samples were both generated by the specific source.
\item $M_d$: The trace was not generated by the specific source, but generated by some other randomly selected source in the relevant alternative source population.
\end{description}
\end{description}
Two types of information exist in a case to support either the defense hypothesis or the prosecution hypothesis: quantifiable and conceptual [\cite{AitkenTaroni}]. The conceptual information includes relevant background and circumstantial information, and is denoted $I$. All the pieces of quantifiable information gathered in relation to a specific trace are collectively called the evidence, denoted $E$. For statistical purposes, $E$ is a set of random elements, whereas $I$ is a set of constraints that determine the form of the sampling models for $E$.
The evidence in the specific source problem is composed of three elements, $E=\{E_u, \ E_s, \ E_a \}$, where $E_u$ denotes a set of observations made on objects from an unknown source, $E_s$ denotes a set of observations made on objects from a fixed specific source, and $E_a$ denotes a set of observations made on objects from a relevant population of possible alternative sources. For the specific source models, we are assuming that $E_u$, $E_s$, and $E_a$ are three independent samples drawn in the following way:
\begin{enumerate}
\item{$E_{s}$ is a simple random sample from a given specific source determined by $M_p$. Let $\theta_s$ denote the parameters necessary to describe this sampling induced distribution.}
\item{$E_{a}$ is constructed by first taking a simple random sample of sources from a given relevant population of alternative sources; then from each sampled source we have a simple random sample. Let $\theta_a$ denote the parameters necessary to describe this sampling induced distribution.}
\item{$E_{u}$ is a simple random sample from a single source. It is unknown whether the source of $E_{u}$ is the specific source determined by $M_p$ or a randomly selected source in the relevant population of possible alternative sources.
The sampling distribution of $E_{u}$ is characterized by either the parameters $\theta_s$ under $M_p$ or $\theta_a$ under $M_d$.}
\end{enumerate}
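To fix ideas, the three samples can be mimicked in a toy one-dimensional Gaussian setting. All distributions and parameter values below are illustrative assumptions, not part of the general framework:

```python
import random

random.seed(0)

# Hypothetical setting: theta_s = (mu_s, sigma) for the specific source;
# each alternative source has its own mean drawn from N(mu_a, tau).
mu_s, sigma = 2.0, 1.0
mu_a, tau = 0.0, 1.5

def sample_specific(n):
    # E_s: simple random sample from the fixed specific source
    return [random.gauss(mu_s, sigma) for _ in range(n)]

def sample_alternative_population(n_sources, n_per_source):
    # E_a: first sample sources from the alternative population, then within each source
    means = [random.gauss(mu_a, tau) for _ in range(n_sources)]
    return [[random.gauss(m, sigma) for _ in range(n_per_source)] for m in means]

def sample_trace(n, model):
    # E_u: from the specific source under M_p, from one random alternative source under M_d
    if model == "Mp":
        return [random.gauss(mu_s, sigma) for _ in range(n)]
    m = random.gauss(mu_a, tau)
    return [random.gauss(m, sigma) for _ in range(n)]

e_s = sample_specific(20)
e_a = sample_alternative_population(10, 5)
e_u = sample_trace(3, "Mp")
assert len(e_s) == 20 and len(e_a) == 10 and len(e_u) == 3
```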
Given the background information $I$, it is assumed that $E_s$ contains no further information about $\theta_a$ and, conversely, that $E_a$ contains no further information about $\theta_s$. For the purpose of this paper, we will limit ourselves to the models described above. When the specific source identification problem deviates from these models, for example when $\theta_a$ only contains information on the means of the sources in the alternative population, the statistical methods will tend to be more complicated than those developed in what follows.
In most approaches used to compare the prosecution and defense models, point estimates are used for some, or all, of the parameters describing the distribution of the evidence. In particular, several authors suggest calculating Bayes Factors for deciding between the two models [\cite{AitkenStoney}]. However, they assume that $\theta_a$ is known, or they estimate it from the data within a nominal degree of uncertainty. This approach leads to a Bayes Factor that is intractable and often infeasible to calculate, especially when the evidence is high-dimensional. In this paper, we propose a factorization of the Bayes Factor that accounts for the uncertainty in $\theta_a$ in a reasonable and coherent manner and that can be calculated in practical situations.
\section{Notational Conventions}
We use the following conventions for distinguishing between sampling-induced probability and probability used as a measure of belief:
\begin{enumerate}
\item Latin letters denote sampling induced probability measures; for example, $f(e_s|\theta_s)$ denotes the likelihood of observing the realized value of the sample from the specific source given the actual value of the specific source distribution parameters.
\item Greek letters denote a probability measure that is a measure of belief; for example, $\pi(\theta_s|e_s)$ denotes the posterior density of $\theta_s | e_s$, which describes our belief about the value of $\theta_s$ after observing a sample $E_s|\theta_s$.
\item When combining a belief with a sampling induced probability through Bayes theorem, the result is another belief that is informed or updated by the observed sample. We denote the resulting distribution with a $\pi$.
\end{enumerate}
In this setting, the stochastic nature of the evidence $E$ is characterized by an unknown but fixed set of parameters $\theta$. However, $\theta$ is usually of interest only in so far as knowledge of its value facilitates the quantification of the support that $E$ provides for either the prosecution hypothesis or the defense hypothesis. In this sense, having to estimate $\theta$ is a nuisance, and hence in statistical nomenclature such parameters are known as \textit{`nuisance parameters'}. The application of Bayesian methods to our problem requires us to specify priors for these nuisance parameters: one summarizing our belief about how the specific source generates evidence, $\pi(\theta_s)$, and another summarizing our prior belief about how the alternative source population stochastically generates evidence, $\pi(\theta_a)$.
\section{Statistical Methods}
\subsection{Introduction to the Bayes Factor}
Dating back to the 1970s, the specific source identification problem has been approached within the context of subjective Bayesian forensic hypothesis testing (see \cite{AitkenStoney}, \cite{Lindley}, and \cite{Shafer} for more details). Historically, the specific source identification problem has been limited to applications in which the data is inherently low-dimensional, the stochastic nature of the evidence can be characterized by a common parametric family of distributions, and the evidence from the alternative source population is sufficiently precise that it completely characterizes the stochastic nature of that population. By approaching the specific source identification problem as a Bayesian hypothesis test, the forensic statistician is tasked with providing a summary of the scientific evidence that is logical and coherent for updating a prior belief structure concerning the two competing hypotheses. This summary is typically known as a Bayes Factor [\cite{Good}]. Traditionally, it is used within the context of Bayes' rule as follows:
\begin{equation}\label{BF}
\underbrace{
\frac
{P\left( H_p | e, I\right)}
{P\left( H_d | e, I\right)}
}_\text{Posterior Odds}
=
\underbrace{
\frac
{P\left( e |H_p, I \right)}
{P\left( e |H_d, I \right)}
}_{
\begin{smallmatrix}
\text{Bayes Factor and/or} \\
\text{Likelihood Ratio}
\end{smallmatrix}
}
\times \
\underbrace{
\frac
{P\left( H_p | I \right)}
{P\left( H_d | I \right)}
}_\text{Prior Odds}
,
\end{equation}
where $e$ is the realization of the evidence, $H_p$ is the prosecution hypothesis, $H_d$ is the defense hypothesis, and $I$ is the relevant background information common to both hypotheses. The prior odds summarize our relative belief concerning the validity of the prosecution and defense forensic hypotheses before observing the evidence.
The Bayes Factor then allows us to update our belief following the observation of the evidence and arrive at the posterior odds concerning the relative validity of the two hypotheses. If the Bayes Factor (and the corresponding posterior odds) is sufficiently high, then we support the prosecution hypothesis; on the other hand, if it is sufficiently close to zero, we support the defense hypothesis. In effect, the Bayes Factor provides a numerical summary of the answers to the following two questions:
\begin{quote}
\textit{``What is my belief about the likelihood of observing the evidence under the prosecution hypothesis?"}
\end{quote}
versus
\begin{quote}
\textit{``What is my belief about the likelihood of observing the evidence under the defense hypothesis?"}
\end{quote}
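As a purely numerical illustration of Equation~\ref{BF} (all probabilities below are invented for the example):

```python
# Hypothetical numbers, for illustrating the odds form of Bayes' rule only.
p_e_given_Hp = 0.08      # belief about the likelihood of the evidence under H_p
p_e_given_Hd = 0.002     # belief about the likelihood of the evidence under H_d
prior_odds = 1 / 1000    # prior odds P(H_p | I) / P(H_d | I)

bayes_factor = p_e_given_Hp / p_e_given_Hd
posterior_odds = bayes_factor * prior_odds

assert abs(bayes_factor - 40.0) < 1e-12    # evidence multiplies the odds by 40
assert abs(posterior_odds - 0.04) < 1e-12  # yet H_d remains far more probable a posteriori
```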
It is extremely important to note that the use of a Bayes Factor in the context of formal Bayesian model selection requires two sets of probability measures. The first is the prior belief concerning the relative validity of the two competing models, described as the prior odds in Equation~\ref{BF}. The second is a set of priors that characterizes the belief about the parameters for the stochastic generation of the evidence under the prosecution and defense models. Since the main focus of this paper is to study the Bayes Factor for the specific source identification problem under certain conditions, we will only discuss the second set of priors, characterizing the parameters of the sampling models.
\subsection{Known Alternative Source Population Parameters}
As an introduction to the derivations associated with the Bayes Factor for the specific source identification problem, we will derive the value of evidence as presented by \cite{Lindley}. In this section we assume that we have a well-studied alternative source population with known parameters, $\theta_{a_{_0}}$. The only unknown parameters contributing to the uncertainty about the value of the evidence are those associated with the specific source, $\theta_s$. Let $e=\{e_s, \ e_u, \ e_a \}$ represent the realization of the random element $E$ for a specific case at hand. Let $\theta = \{ \theta_s, \ \theta_{a_{_0}} \}$ and let $\pi(\theta)=\pi(\theta_s)$ be a probability distribution describing our prior belief about $\theta$, noting that there is no uncertainty about $\theta_{a_{_0}}$.
Define the value of the evidence as
\begin{equation}\label{Vss}
V_{ss}(e) = \dfrac{\pi(e|H_p, I)}{\pi(e|H_d, I)}.
\end{equation}
Computing the value of evidence in this form involves evaluating the likelihood of the entire set of evidence $e$. This can be computationally intensive and often infeasible. To obtain a computationally tractable form of the value of evidence, \cite{AitkenTaroni} have proposed a factorization of $V_{ss}(e)$ which assumes that $\theta_a$ is known or can be estimated from the data. The factorization develops as follows:
\begin{align*}
V_{ss}(e) &= \dfrac{\pi(e|M_p)}{\pi(e|M_d)} &\text{see Note~\ref{prfV1} below}
\\ &= \dfrac{\int f(e|\theta, M_p) \pi(\theta|M_p) d\theta}{\int f(e|\theta,M_d) \pi(\theta|M_d) d\theta} &\text{see Note~\ref{prfVmarg} below}
\\ &= \dfrac{\int f(e_u|\theta, M_p) f(e_s|\theta, M_p) f(e_a|\theta, M_p) \pi(\theta) d\theta}{\int f(e_u|\theta, M_d) f(e_s|\theta, M_d) f(e_a|\theta, M_d) \pi(\theta) d\theta} &\text{see Note~\ref{prfV2} below}
\\ &= \dfrac{\int f(e_u|\theta_s) f(e_s|\theta_s) f(e_a|\theta_{a_{_0}}) \pi(\theta_s) d\theta_s}{\int f(e_u|\theta_{a_{_0}}) f(e_s|\theta_s) f(e_a|\theta_{a_{_0}}) \pi(\theta_s) d\theta_s} &\text{see Note~\ref{prfV3.1} below}
\\ &= \dfrac{f(e_a|\theta_{a_{_0}}) \int f(e_u|\theta_s) f(e_s|\theta_s) \pi(\theta_s) d\theta_s}{f(e_u|\theta_{a_{_0}}) f(e_a|\theta_{a_{_0}}) \int f(e_s|\theta_s) \pi(\theta_s) d\theta_s}
\\ &= \dfrac{1}{f(e_u|\theta_{a_{_0}}) } \times \dfrac{\int f(e_u|\theta_s) f(e_s|\theta_s) \pi(\theta_s) d\theta_s}{\int f(e_s|\theta_s) \pi(\theta_s) d\theta_s} &\text{see Note~\ref{prfV5} below}
\\ &= \dfrac{1}{f(e_u|\theta_{a_{_0}}) } \times \int f(e_u|\theta_s) \dfrac{f(e_s|\theta_s) \pi(\theta_s)}{\int f(e_s|\theta_s) \pi(\theta_s) d\theta_s} d\theta_s
\\ &= \dfrac{\int f(e_u|\theta_s) \pi(\theta_s|e_s) d\theta_s}{f(e_u|\theta_{a_{_0}})} &\text{see Note~\ref{prfVpostBelief} below}
\\ &= \dfrac{\pi(e_u|e_s, M_p, I)}{f(e_u|\theta_{a_{_0}})} &\text{see Note~\ref{prfVpostPred} below}
\end{align*}
\begin{note}\label{prfV1}
We can drop the conditional notation on $I$ since the background information will be the same for both the prosecution and the defense and the relevant information has been considered in the models.
\end{note}
\begin{note}\label{prfVmarg}
The definition of the marginal belief of $X$ given some parameter $\phi$ is $\pi(x)=\int f(x|\phi) \pi(\phi) d\phi$.
\end{note}
\begin{note}\label{prfV2}
$f(e|\theta)$ is the likelihood function for observing $e$. Therefore, $f(e|\theta) = f(e_u|\theta) f(e_s|\theta) f(e_a|\theta)$. Also, $\pi(\theta)$ does not depend on $M_p$ or $M_d$ so the conditional notation can be dropped.
\end{note}
\begin{note}\label{prfV3.1}
By definition of $\pi(\theta)=\pi(\theta_s)$. The parameters for $E_s$ and $E_a$ are fixed so they will be the same for both $M_p$ and $M_d$. Under the prosecution model, $M_p$, $E_u$ is characterized by $\theta_s$ since the prosecution believes the specific source is the origin of the trace. Therefore, $f(e_u|\theta, M_p) = f(e_u|\theta_s)$. Under the defense model, $M_d$, $E_u$ is completely characterized by $\theta_{a_{_0}}$ since the defense believes the trace came from a source in the alternative source population. Therefore, $f(e_u|\theta, M_d) = f(e_u|\theta_{a_{_0}})$.
\end{note}
\begin{note}\label{prfV5}
It should be noted that $f(e_a|\theta_{a_{_0}})$ cancels from the value of evidence. This means that since $\theta_{a_{_0}}$ is known, $e_a$ is irrelevant to the resulting value of the evidence.
\end{note}
\begin{note}\label{prfVpostBelief}
The definition of the posterior belief of $\phi$ given $X$ is $\pi(\phi|x) = \dfrac{f(x|\phi) \pi(\phi)}{f(x)} = \dfrac{f(x|\phi) \pi(\phi)}{\int f(x|\phi) \pi(\phi) d\phi}.$
\end{note}
\begin{note}\label{prfVpostPred}
The definition of posterior predictive belief of $Y$ given $X$ is $\pi(y|x) = \int f(y|\phi)\pi(\phi|x) d\phi.$
\end{note}
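In practice, the posterior predictive belief defined above is rarely available in closed form and is instead approximated by averaging the likelihood over draws from the posterior. The following sketch (our own illustration, not part of the original analysis; all numbers are synthetic) demonstrates the estimator $\pi(y|x) \approx \frac{1}{N}\sum_k f(y|\phi_k)$, $\phi_k \sim \pi(\phi|x)$, for a univariate normal model with known variance:

```python
import numpy as np

def posterior_predictive_mc(y, likelihood, posterior_draws):
    """Monte Carlo estimate of pi(y|x) = E_{phi ~ pi(phi|x)}[ f(y|phi) ]."""
    return np.mean([likelihood(y, phi) for phi in posterior_draws])

def normal_lik(y, mu, sd=1.0):
    # Univariate normal density f(y | mu, sd).
    return np.exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(0)
# Stand-in posterior draws of the mean: pi(mu | x) taken to be N(0, 0.1^2).
draws = rng.normal(loc=0.0, scale=0.1, size=10_000)
est = posterior_predictive_mc(0.0, normal_lik, draws)
```

For this toy model the exact predictive density at $y=0$ is that of $N(0, 1.01)$, roughly $0.397$, which the Monte Carlo average recovers closely.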
By assuming we know (or we are certain that we know) the value $\theta_{a_{_0}}$, the denominator reduces to evaluating the sampling distribution of $e_u$; in effect the denominator does not contain any belief measures when the alternative source population parameters are known. We will refer to
\begin{equation}\label{VforAknown}
V_{ss}(e|\theta_{a_{_0}}) = \dfrac{\pi(e_u|e_s, M_p, I)}{f(e_u|\theta_{a_{_0}})}
\end{equation}
as the factored form of the specific source value of evidence when $\theta_a$ is known.
\subsection{Unknown Alternative Source Population Parameters}
In this section, we propose a factorization of $V_{ss}$ that does not assume that $\theta_a$ is known, but that is still computationally tractable. Let $e=\{e_s, \ e_u, \ e_a \}$ represent the realization of the random element $E$ for a specific case at hand. Let $\theta = \{ \theta_s, \ \theta_a \}$. Due to the uncertainty in the parameters $\theta$ we will need to characterize our belief about it using the prior. Let $\pi(\theta)=\pi(\theta_s)\pi(\theta_a)$ be the probability distribution used to describe our prior belief about $\theta$. Note that we are choosing to restrict ourselves to priors on $\theta_s$ and $\theta_a$ that are \textit{independent of each other}. Starting from the value of the evidence given by Equation~\ref{Vss}, and using similar methods as the previous case, we can rewrite $V_{ss}$ as follows:
\pagebreak
\begin{align}
V_{ss}(e; \theta_s, \theta_a) &= \dfrac{\pi(e|M_p)}{\pi(e|M_d)} &\text{see Note~\ref{prfV1} above} \notag
\\ &= \dfrac{\int f(e|\theta, M_p) \pi(\theta|M_p) d\theta}{\int f(e|\theta,M_d) \pi(\theta|M_d) d\theta} &\text{see Note~\ref{prfVmarg} above} \notag
\\ &= \dfrac{\int f(e_u|\theta, M_p) f(e_s|\theta, M_p) f(e_a|\theta, M_p) \pi(\theta) d\theta}{\int f(e_u|\theta, M_d) f(e_s|\theta, M_d) f(e_a|\theta, M_d) \pi(\theta) d\theta} &\text{see Note~\ref{prfV2} above} \notag
\\ &= \dfrac{\int f(e_u|\theta_s) f(e_s|\theta_s) f(e_a|\theta_a) \pi(\theta) d\theta}{\int f(e_u|\theta_a) f(e_s|\theta_s) f(e_a|\theta_a) \pi(\theta) d\theta} &\text{see Note~\ref{prfV3.2} below} \notag
\\ &= \dfrac{\int f(e_u|\theta_s) f(e_s|\theta_s) \pi(\theta_s) d\theta_s \int f(e_a|\theta_a) \pi(\theta_a) d\theta_a}{\int f(e_s|\theta_s) \pi(\theta_s) d\theta_s \int f(e_a|\theta_a) f(e_u|\theta_a) \pi(\theta_a) d\theta_a} &\text{see Note~\ref{prfV4} below} \notag
\\ &= \dfrac{\int f(e_u|\theta_s) f(e_s|\theta_s) \pi(\theta_s) d\theta_s}{\int f(e_s|\theta_s) \pi(\theta_s) d\theta_s} \times \dfrac{\int f(e_a|\theta_a) \pi(\theta_a) d\theta_a}{\int f(e_a|\theta_a) f(e_u|\theta_a) \pi(\theta_a) d\theta_a} \notag
\\ &= \dfrac{\int f(e_u|\theta_s) f(e_s|\theta_s) \pi(\theta_s) d\theta_s}{\int f(e_s|\theta_s) \pi(\theta_s) d\theta_s} \Bigg/ \dfrac{\int f(e_a|\theta_a) f(e_u|\theta_a) \pi(\theta_a) d\theta_a}{\int f(e_a|\theta_a) \pi(\theta_a) d\theta_a} \notag
\\ &= \dfrac{\int f(e_u|\theta_s) \pi(\theta_s|e_s) d\theta_s}{\int f(e_u|\theta_a) \pi(\theta_a|e_a) d\theta_a} &\text{see Note~\ref{prfVpostBelief} above} \notag
\\ &= \dfrac{\pi(e_u|e_s, M_p)}{\pi(e_u|e_a, M_d)} &\text{see Note~\ref{prfVpostPred} above} \notag
\end{align}
\begin{note}\label{prfV3.2}
The parameters for $E_s$ and $E_a$ are fixed so they will be the same for both $M_p$ and $M_d$. Under the prosecution model, $M_p$, $E_u$ is characterized by $\theta_s$ since the prosecution believes the specific source is the origin of the trace. Therefore, $f(e_u|\theta, M_p) = f(e_u|\theta_s)$. Under the defense model, $M_d$, $E_u$ is characterized by $\theta_{a}$ since the defense believes the trace came from a source in the alternative source population. Therefore, $f(e_u|\theta, M_d) = f(e_u|\theta_{a})$.
\end{note}
\begin{note}\label{prfV4}
Since $\pi(\theta)=\pi(\theta_s)\pi(\theta_a)$, which means that $\theta_s$ and $\theta_a$ are independent, we can factor the integral apart with respect to $\theta$ into the product of the integrals for $\theta_s$ and $\theta_a$.
\end{note}
We will refer to
\begin{equation}\label{VforAunknown}
V_{ss}(e) = \dfrac{\pi(e_u|e_s, M_p)}{\pi(e_u|e_a, M_d)}\end{equation}
as the factored form of the specific source value of evidence when $\theta_a$ is unknown.
\subsection{Results of the Factored Forms for the Value of Evidence}
When the value of evidence is given by Equation~\ref{Vss}, the computation involves evaluating the likelihood of the entire set of evidence $E$. This can be very computationally intensive, and often unfeasible. In the factored forms given by Equation~\ref{VforAknown} and Equation~\ref{VforAunknown}, the computation only involves the posterior predictive belief of the unknown evidence. Therefore, the factored forms create a computationally feasible method of evaluating the Bayes Factor for the specific source identification problem that can be calculated using Monte Carlo integration techniques [\cite{KassRaf}]. When the specific source value of evidence is factored into the form given by Equation~\ref{VforAknown} (the case when $\theta_{a_{_0}}$ is known or estimated from the data), $e_a$ is irrelevant to the value of the evidence. This considerably simplifies the computational complexity of the calculation of $V_{ss}$; however, the uncertainty in the estimation of $\theta_a$ is not formally accounted for. This uncertainty is taken into account when the value of evidence is calculated using the factorization presented in Equation~\ref{VforAunknown} (the case when the parameters for the alternative source population are unknown). In this case, Monte Carlo integration techniques have to be used for the calculation of the denominator. This involves an additional layer of computational complexity; however, modern computers can cope with the required number of computations in a reasonable amount of time.
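As a toy illustration of how the factored form of Equation~\ref{VforAunknown} lends itself to Monte Carlo integration, the univariate sketch below (entirely synthetic; not the glass analysis of the next section) estimates the numerator and denominator by averaging likelihoods over stand-in posterior draws:

```python
import numpy as np

def normal_pdf(y, mu, sd):
    return np.exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(1)
# Stand-in posterior draws for the two mean parameters (data sd fixed at 1).
theta_s_draws = rng.normal(0.0, 0.05, 20_000)  # specific source, centred near 0
theta_a_draws = rng.normal(3.0, 0.05, 20_000)  # alternative population, near 3

e_u = 0.1  # a trace measurement lying close to the specific source

numerator = np.mean(normal_pdf(e_u, theta_s_draws, 1.0))    # ~ pi(e_u | e_s, M_p)
denominator = np.mean(normal_pdf(e_u, theta_a_draws, 1.0))  # ~ pi(e_u | e_a, M_d)
V_ss = numerator / denominator
```

Because the trace lies near the specific source and far from the alternative population, the resulting value of evidence is much larger than one, as expected.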
\section{Glass Example}
In order to compare the value of the evidence obtained using both factorizations of $V_{ss}$, we use a collection of samples of glass fragments studied by \cite{AitkenLucy}. The dataset consists of three classes of windows, containing 16, 16, and 30 windows, respectively, with 5 glass fragments per window. Following \cite{AitkenLucy}, we consider the logarithm of the measurements for the ratios of elemental compositions on each glass fragment: $\log(Ca/K)$ is represented by the second variable (V2), $\log(Ca/Si)$ by the third variable (V3), and $\log(Ca/Fe)$ by the fourth variable (V4).
As an illustrative example of computing the value of evidence for the specific source identification problem we focus only on the first class of 16 windows, and we consider two scenarios. For the first scenario, $e_u$ and $e_s$ will share a fixed source, with the $4^{th}$ window playing the role of the specific source. The first three fragments from window 4 will serve as $e_s$ and the last two fragments from window 4 will serve as $e_u$. For the second scenario, $e_u$ and $e_s$ will have different sources. The first three fragments from window 4 will serve as $e_s$ and two fragments from the $2^{nd}$ window will serve as $e_u$. In both scenarios, the remaining 70 glass fragments divided among the 14 remaining windows will serve as $e_a$. The pairwise scatter plots of the evidence under each scenario can be seen in Figure~\ref{GlassPlots}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=1\textwidth]{glass_scatter1.pdf}
\includegraphics[width=1\textwidth]{glass_scatter2.pdf}
\caption{\label{GlassPlots}In the pairwise scatterplots of the evidence $E$, the blue asterisks represent $E_u$, the red diamonds represent $E_s$, and the gray dots represent $E_a$. The large black dots are the mean values for each window and the gray lines show the deviation from that mean for each fragment from the window.}
\end{center}
\end{figure}
\vspace{0.05in}
The specific source identification question for this example can be stated as
\begin{quote}
\textit{``Did the glass fragments of unknown source come from the fourth window?"}
\end{quote}
The resulting forensic hypotheses are
\begin{description}
\item $H_p$: The glass fragments from the unknown source came from the fourth window.
\item $H_d$: The glass fragments from the unknown source did not come from the fourth window, but from some other window.
\end{description}
The corresponding sampling models are
\begin{description}
\item $M_p$: The glass fragments from the unknown source came from the fourth window. Under this model, $E_u$ can be characterized by the same model which characterizes $E_s$, described in detail below.
\item $M_d$: The glass fragments from the unknown source came from a randomly selected window in the alternative source population. Under this model, $E_u$ can be characterized by the same model which characterizes $E_a$, described in detail below.
\end{description}
\subsection{Sampling Model for the Evidence from the Specific Source}
First, we will assume that the measurements on the glass fragments composing $e_s$ are an independent and identically distributed (i.i.d.) sample from a multivariate normal distribution with mean vector $\mu_s$ and covariance matrix $\Sigma_s$. Let $y_{sj}$ denote the vector of measurements on the $j^{th}$ fragment from the specific source evidence for $j=1,2, \ldots, m$ (for this example $m=3$); then $y_{sj}$ follows a multivariate normal distribution with mean vector $\mu_s$ and covariance matrix $\Sigma_s$, which we denote as $y_{sj} \sim MVN(\mu_s, \Sigma_s)$. For this example, the specific source population parameters are $\theta_s = \{ \mu_s, \Sigma_s \}$, so we need to specify priors for both $\mu_s$ and $\Sigma_s$. We will use a relatively non-informative multivariate normal prior on $\mu_s$ with a zero mean vector and a diagonal covariance matrix with diagonal elements equal to $3000$. The prior for $\Sigma_s$ is an Inverse Wishart distribution (denoted $W^{-1}$) with three degrees of freedom, centered at a diagonal covariance matrix $\Phi$ with diagonal elements $0.01$, $0.00005$, and $0.0005$. These diagonal elements were chosen based on the approximate precision of the measurements for the evidence. The full model for $E_s$, with supporting prior beliefs, is summarized below.
$$y_{sj} \sim MVN(\mu_s, \Sigma_s)$$
$$\mu_s \sim MVN(0, 3000I)$$
$$\Sigma_s \sim W^{-1}(\Phi, 3)$$
It should be noted that any number of reasonable priors can be chosen for $\mu_s$ and $\Sigma_s$. The numerators for the value of evidence, $\pi(e_u|e_s, M_p)$, under scenario 1 ($Exp \ 1$) and scenario 2 ($Exp \ 2$) are given in Tables~\ref{known} and~\ref{unknown} below.
\subsection{Sampling Model for the Evidence in the Alternative Source Population}
Next, we assume that the measurements on the glass fragments composing $e_a$ follow a hierarchical multivariate normal model with the assumption that all windows in the alternative source population have a mean $\mu_a$, a within-covariance matrix $\Sigma_w$, and a between source covariance matrix $\Sigma_b$. Let $y_{ij}$ denote the vector of measurements on the $j^{th}$ fragment, for $j = 1, 2, \ldots, m_i$ (for this example, $m \equiv m_i = 5$ for all $i$) from the $i^{th}$ window, for $i=1, 2, \ldots, n$ (for this example, $n=14$). The hierarchical multivariate model in this case is a simple random effects model where the between-source effects $a_i$ are i.i.d. multivariate normal random vectors with a mean vector of zero and a covariance matrix $\Sigma_b$. The within-source effects $w_{ij}$ are assumed to be i.i.d. multivariate normal vectors with a mean vector of zero and a covariance matrix of $\Sigma_w$. We will compare the results of both scenarios described above under two different conditions. First, the alternative source population parameters are assumed to be known. Secondly, the alternative source population parameters are assumed to be unknown.
When the alternative source population parameters $\theta_a = \{ \mu_a, \Sigma_b, \Sigma_w \}$ are assumed to be known, we use the estimates for the parameters as suggested in \cite{AitkenLucy}. The estimates $\hat{\theta}_a = \{ \hat{\mu}_a, \hat{\Sigma}_b, \hat{\Sigma}_w \}$ of the known parameters are summarized below.
$$\hat{\mu}_a = \dfrac{1}{mn} \sum_{i=1}^{n} \sum_{j=1}^m y_{ij}$$
$$\bar{y}_i = \dfrac{1}{m} \sum_{j=1}^m y_{ij}$$
$$\hat{\Sigma}_w = \dfrac{1}{n(m-1)} \sum_{i=1}^n \sum_{j=1}^m (y_{ij}-\bar{y}_i)(y_{ij}-\bar{y}_i)^T$$
$$\hat{\Sigma}_b = \left[ \dfrac{1}{n-1} \sum_{i=1}^n (\bar{y}_i-\hat{\mu}_a)(\bar{y}_i-\hat{\mu}_a)^T \right] - \left[ \dfrac{1}{m}\hat{\Sigma}_w \right]$$
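These estimators translate directly into code; the sketch below uses synthetic data in place of the glass measurements (the dimensions match the example: $n=14$ windows, $m=5$ fragments per window, $3$ variables):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, p = 14, 5, 3               # windows, fragments per window, variables (V2-V4)
y = rng.normal(size=(n, m, p))   # synthetic stand-in for the e_a measurements

mu_hat = y.mean(axis=(0, 1))     # grand mean over all n*m fragments
ybar = y.mean(axis=1)            # per-window means, shape (n, p)

# Within-source covariance: pooled deviations from each window mean.
resid = y - ybar[:, None, :]
Sigma_w_hat = np.einsum('ijk,ijl->kl', resid, resid) / (n * (m - 1))

# Between-source covariance: window-mean scatter minus the within-source part.
dev = ybar - mu_hat
Sigma_b_hat = np.einsum('ik,il->kl', dev, dev) / (n - 1) - Sigma_w_hat / m
```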
The denominators for the value of evidence, $f(e_u|\hat{\theta}_{a_{_0}})$, under scenario 1 ($Exp \ 1$) and scenario 2 ($Exp \ 2$) are given in Table~\ref{known} below.
\begin{table}[h!]
\centering
\caption{\label{known}Alternative Source Population Parameters Known}
\begin{tabular}{| l | c | c |}
\hline
& $Exp \ 1$ & $Exp \ 2$ \\
\hline
$\pi(e_u|e_s, M_p)$ & $119740.3$ & $2.316277$ \\
$f(e_u|\hat{\theta}_{a_{_0}})$ & $582.6974$ & $144.1683$ \\
$V_{ss}(e|\hat{\theta}_{a_{_0}})$ & $205.4931$ & $0.01606648$ \\
\hline
\end{tabular}
\end{table}
When the alternative source population parameters $\theta_a = \{ \mu_a, \Sigma_b, \Sigma_w \}$ are unknown, our prior for $\Sigma_w$ is the same that is used for $\Sigma_s$, and the prior for $\mu_a$ is the same as that used for $\mu_s$. We use an Inverse Wishart prior for $\Sigma_b$ centered at the identity covariance matrix with three degrees of freedom. The full model for $E_a$ with supporting prior beliefs is summarized below.
$$\text{For } i=1, 2, \ldots, 14 \text{ and }j = 1, 2, 3, 4, 5 :$$
\begin{center}
\begin{tabular}{ccc}
$y_{ij} = \mu_a + a_i + w_{ij}$ && $\mu_a \sim MVN(0, 3000I)$ \\
$a_i \overset{iid}{\sim} MVN(0, \Sigma_b)$ && $\Sigma_b \sim W^{-1}(I, 3)$ \\
$w_{ij} \overset{iid}{\sim} MVN(0, \Sigma_w)$ && $\Sigma_w \sim W^{-1}(\Phi, 3)$
\end{tabular}
\end{center}
The denominators for the value of evidence, $\pi(e_u|e_a, M_d)$, under scenario 1 ($Exp \ 1$) and scenario 2 ($Exp \ 2$) are given in Table~\ref{unknown} below.
\begin{table}[h!]
\centering
\caption{\label{unknown}Alternative Source Population Parameters Unknown}
\begin{tabular}{| l | c | c |}
\hline
& $Exp \ 1$ & $Exp \ 2$ \\
\hline
$\pi(e_u|e_s, M_p)$ & $119740.3$ & $2.316277$ \\
$\pi(e_u|e_a, M_d)$ & $30.17140$ & $209.5902$ \\
$V_{ss}(e)$ & $3968.669$ & $0.01105146$ \\
\hline
\end{tabular}
\end{table}
The computations were performed on a 2012 MacBook Pro with an OS X 10.8.5 operating system, a 2.3 GHz Intel Core i7 processor, and 16 GB of 1600 MHz memory, using R version 3.0.2. The posterior distributions of the parameters are sampled using the ``MCMCglmm" package in R [\cite{mcmcglmm}]. Using the parameter values sampled from these posterior distributions, we estimated the posterior predictive beliefs of $e_u$ for the values of evidence using a standard Monte Carlo average integration technique. The Monte Carlo estimates were based on a sample size of $29,000$ ($30,000$ iterations with a burn-in of $1000$). We also made use of the matrix form of the multivariate simple random effects model as presented by \cite{Miller} (see the Appendix for details).
\section{Discussion}
In the illustrative example described in Section 4, the behavior of $V_{ss}$ is consistent between the calculations when the alternative source population parameters are assumed to be known (results for this experiment can be found in Table~\ref{known}) and when the alternative source population parameters are unknown (results for this experiment can be found in Table~\ref{unknown}). The value of evidence for the first scenario suggests in both cases that the evidence is more likely to have been generated according to the prosecution model than by the defense model (since the values of evidence are significantly greater than one); the value of the evidence for the second scenario suggests in both cases that the evidence is more likely to have arisen under the defense model. These results were expected by the design of the experiment. However, by contrasting the likelihood of the evidence under the defense model in Table~\ref{known} and Table~\ref{unknown}, we observe that in the first scenario the evidence is more likely when the alternative source population parameters are known than when they are not, while in the second scenario the evidence is more likely when there is uncertainty on the alternative source population parameters. When the number of observations in the alternative source population is small, there is a marked difference between these values which suggests that using the estimates of the parameters is not a reasonable surrogate for calculating the value of the evidence in the presence of uncertainty in the alternative source population parameters. We expect that the difference between $V_{ss}(e; \hat{\theta}_{a_{_0}})$ and $V_{ss}(e)$ will go to zero as the amount of evidence about the alternative source population becomes large (the rate of convergence is currently being investigated by the authors). 
In practice, the use of estimates of the known parameters for the alternative source population may lead to grossly over- or under-estimating the value of the evidence. The direction of the misleading effect will depend upon the rarity of the characteristics of $e_u$ in the alternative source population. This effect may ultimately mislead the criminal justice system.
It should be noted that choosing different priors for the model parameters can result in radically different values of evidence. Additionally, when the alternative source population parameters are unknown, there is less freedom in choosing the priors than when the parameters are known. In order for the factorization of the value of evidence to hold when there is uncertainty in the alternative source population parameters, the prior for the specific source parameters, $\pi(\theta_s)$, must be chosen to be independent of the prior for the alternative source parameters, $\pi(\theta_a)$. This precludes the use of the popular ``random man" prior for the specific source parameters, in which the specific source is believed to be typical of the alternative source population.
\section{Conclusion}
Computing the value of evidence when it is given by its raw form (Equation~\ref{Vss}) requires evaluating the likelihood of the entire set of evidence. In most situations, this evaluation is computationally unfeasible. However, when the value of evidence is given in the factored forms (Equation~\ref{VforAknown} and Equation~\ref{VforAunknown}), it can be approximated using Monte Carlo integration techniques [\cite{KassRaf}]. In the rare setting when there is no uncertainty about the alternative source population parameters (Equation~\ref{VforAknown}), it is not surprising that including $e_a$ in the calculation of the value of evidence has no impact. However, in traditional forensic settings, there is rarely sufficient information about the alternative source population parameters to assume that it is possible to estimate them accurately; hence, such situations require a novel approach.
In this paper, we develop a logical and coherent method which formally incorporates the uncertainty on the alternative source population parameters into the calculation of the Bayes Factor. This is a major departure from the ad-hoc approaches to include uncertainty about the background population in the value of the evidence that are currently available in the forensic statistics literature. These methods typically entail the construction of a confidence or credible interval for the likelihood ratio represented by $V_{ss}(e; \hat{\theta}_{a_0})$. By formally incorporating uncertainty about $e_a$ into the value of evidence we can guarantee that the resulting value is statistically rigorous and that the decisions based on it will be admissible in a statistical decision theoretic sense. To avoid potentially misleading decisions in the court system, the authors suggest replacing the use of ad-hoc methods with the statistically rigorous methods presented here when there is uncertainty in the alternative source population parameters.
\section{Introduction}
\label{seq:intro}
Text-guided image generation has seen tremendous success in recent years, primarily due to the breathtaking development in Language-Image models~\cite{radford2021learning, jia2021scaling,li2022blip} and diffusion models~\cite{rombach2021highresolution, imagen, ramesh2021zeroshot, dhariwal2021diffusion,ho2020denoising,ramesh2022hierarchical}.
These breakthroughs have also resulted in fast progression for text-guided shape generation~\cite{zeng2022lion,michel2022text2mesh,chen2022tango}. Most recently, it has been shown~\cite{poole2022dreamfusion} that one can directly use score distillation from a 2D diffusion model to guide the generation of a 3D object represented as a Neural Radiance Field (NeRF)~\cite{mildenhall2021nerf}.
While Text-to-3D can generate impressive results, the process is inherently unconstrained and offers little ability to guide or enforce a specific 3D structure.
In this paper, we show how to introduce shape-guidance to the generation process to guide it toward a specific shape, thus allowing increased control over the generation process.
Our method builds upon two models, a NeRF model \cite{mildenhall2021nerf}, and a Latent Diffusion Model (LDM)~\cite{rombach2021highresolution}.
Latent Models, which apply the entire diffusion process in a compact latent space, have recently gained popularity due to their efficiency and publicly available pretrained checkpoints.
As score distillation has previously been applied only to RGB diffusion models, we first present two key modifications to the NeRF model that pair better with guidance from a latent model. First, instead of representing our NeRF in the standard RGB space, we propose a \textit{Latent-NeRF}, which operates directly in the latent space of the LDM, thus avoiding the burden of encoding a rendered RGB image into the latent space at every guidance step.
Secondly, we show that after training, one can easily transform a Latent-NeRF back into a regular NeRF. This allows further refinement in RGB space, where we can also introduce shading constraints or apply further guidance from RGB diffusion models~\cite{imagen}. This is achieved by introducing a learnable linear layer that can be optionally added to a trained Latent-NeRF, where the linear layer is initialized using an approximate mapping between the latent and RGB values~\cite{linear_latent_approximation}.
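As a concrete sketch of such a layer, the snippet below maps the four latent channels to RGB with a $3\times4$ matrix; the weight values here are placeholders for illustration only, not the actual approximate latent-to-RGB initialization of~\cite{linear_latent_approximation}:

```python
import numpy as np

# Placeholder 3x4 weights; in practice this layer would be initialized from an
# approximate latent-to-RGB mapping and then fine-tuned, so treat these
# numbers as illustrative only.
W = np.array([[0.30, 0.19, -0.16, -0.18],
              [0.21, 0.29,  0.19, -0.27],
              [0.21, 0.17,  0.26, -0.47]])

def latent_to_rgb(latent):
    """Map an (H, W, 4) latent feature image to an (H, W, 3) RGB image."""
    return np.tensordot(latent, W, axes=([-1], [-1]))

rgb = latent_to_rgb(np.zeros((64, 64, 4)))
```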
Our first form of shape-guidance is applied using a coarse 3D model, which we call a \textit{\meshsketch{}}.
Given a \meshsketch{}, we apply soft constraint during the NeRF optimization process to guide its occupancy based on the given shape. Easily combined with Latent-NeRF optimization, the additional constraint can be tuned to meet a desired level of strictness.
Using a \meshsketch{} allows users to define their base geometry, where Latent-NeRF then refines the shape and introduces texture based on a guiding prompt.
We further present \textit{\latentpaint}, another form of shape-guidance where the generation process is applied directly on a given 3D mesh, and we have not only the structure but also the exact parameterization of the input mesh. This is achieved by representing a texture map in the latent space and propagating the guidance gradients directly to the texture map through the rendered mesh. By doing so, we allow for the first time to colorize a mesh using guidance from a pretrained diffusion model and enjoy its expressiveness.
We evaluate our different forms of guidance under a variety of scenarios and show that together with our latent-based guidance, they offer a compelling solution for constrained shape and texture generation.
\section{Related Work}
\label{seq:related}
\paragraph{3D Shape Generation}
3D shape synthesis is a longstanding problem in computer graphics and computer vision.
In recent years, with the emergence of neural networks, the research in 3D modeling has immensely advanced.
The most conventional supervision type is applied directly with 3D shapes, through different representations such as implicit functions~\cite{IMNET,park2019deepsdf,hertz2022spaghetti}, meshes~\cite{gao2019sdm, yang2022dsg} or point clouds~\cite{yang2019pointflow, li2018point}.
As 3D supervision is often difficult to obtain, other works use images to guide the generative task~\cite{ Chan2021piGANPI, chan2022efficient, Niemeyer2021CAMPARICD}. In fact, even when 3D data is available, 2D renderings are sometimes chosen as the supervising primitive~\cite{chen2021ngp, gao2022get3d, bautista2022gaudi}.
For example, in GET3D~\cite{gao2022get3d}, two generators are trained, one generates a 3D SDF, and the other a texture field. The output textured mesh is then obtained in a differentiable manner by utilizing DMTet~\cite{shen2021dmtet}. These generators are adversarially trained with a dataset of 2D images.
In \cite{Watson2022Novel}, a diffusion model was used to generate multiple views of a given input image. However, it was trained in a supervised manner on a multi-view dataset, unlike our work, which does not require a dataset.
\paragraph{Text-to-3D with 2D Supervision}
Recently, the success of text-guided synthesis in numerous domains~\cite{patashnik2021styleclip, gal2022stylegan, tevet2022motionclip, avrahami2022blended_latent, bar2022text2live}, has motivated a surge of works that use Language-Image models to guide 3D scenes representations.
CLIP-Forge~\cite{sanghi2021clipforge} consists of two separate components, an implicit autoencoder conditioned on shape codes, and a normalizing flow model that is trained to generate shape codes according to CLIP embeddings. CLIP-Forge exploits the fact that CLIP has a joint text-image embedding space to train on image embeddings and infer on text embeddings, achieving text-to-shape capabilities.
Text2Mesh~\cite{michel2022text2mesh} introduced mesh colorization and geometric fine-tuning by optimizing an initial mesh through differential rendering and CLIP~\cite{radford2021learning} guidance.
TANGO~\cite{chen2022tango} follows a similar optimization scheme, while improving results by considering an explicit shading model.
CLIP-Mesh~\cite{khalid2022clipmesh} optimizes an initial spherical mesh according to a target text prompt, using a modified CLIP loss that accounts for the gap and ambiguity between image/text CLIP embeddings. Similarly to our method, they also use UV texture mapping to bake colors into the mesh.
DreamFields~\cite{jain2021dreamfields} employs CLIP guidance as well, but uses NeRFs to represent the 3D object instead of an explicit triangular mesh, together with a dedicated sparsity loss.
CLIPNeRF~\cite{wang2022clip} pretrains a disentangled NeRF representation network on rendered object datasets, which is then used to constraint a NeRF scene optimization under CLIP loss, between random renderings of the NeRF and target image or text CLIP embedding.
DreamFusion~\cite{poole2022dreamfusion} introduced, for the first time, the use of largely successful pretrained 2D diffusion models for text-guided 3D object generation. DreamFusion uses a proprietary 2D diffusion model~\cite{imagen} to supervise the generation of 3D objects represented by NeRFs.
To guide a NeRF scene using the pretrained diffusion model, the authors derive a \textit{Score-Distillation} loss, see Section~\ref{sec:prelim} for more details.
\paragraph{Neural Rendering}
The recent rapid progression of neural networks has immensely advanced the performance of differentiable renderers. In particular, NeRFs~\cite{mildenhall2021nerf, barron2021mip, mueller2022instant} have shown astounding performance on novel view generation and relighting, also extending to other applications such as 3D reconstruction~\cite{yariv2021volume}.
Thanks to their differentiable nature, it has been recently shown that one can introduce different neural objectives during training to guide the 3D modeling.
\section{Method}
\label{sec:method}
Here we present our shape-guidance solution. We describe the Latent-NeRF framework, presented in Figure~\ref{fig:overview_latent_nerf}, and then introduce different guidance controls that can be combined with Latent-NeRF for controlling its generation. Yet, before showing our solution, we provide a quick overview of two recently proposed techniques that are highly relevant to our method.
\subsection{Preliminaries}
\label{sec:prelim}
A {\bf latent diffusion model (LDM)}~\cite{rombach2021highresolution} is a specific form of a diffusion model that is trained to denoise \textit{latent codes} of a pretrained autoencoder, instead of the high-resolution images directly.
First, an autoencoder composed of an encoder $\mathcal{E} $, and a decoder $\mathcal{D} $ is trained to reconstruct natural images $x \sim X$, where $X$ is the image training dataset, in the following form: $\Tilde{x}=\mathcal{D}(\mathcal{E}(x))$.
The autoencoder is trained with a reconstruction loss, perceptual loss~\cite{zhang2018unreasonable} and a patch-based adversarial loss~\cite{isola2017image}.
Then, given the trained autoencoder, a denoising diffusion probabilistic model (DDPM)~\cite{ho2020denoising} is trained to generate a spatial latent $z$ from noise, according to the distribution $z=\mathcal{E}(x)~s.t.~x\sim X$.
In order to generate a novel image, a latent $\Tilde{z}$ is sampled from this learned distribution, using the trained DDPM, and passed to the decoder to obtain the final image $\mathcal{D}(\Tilde{z})$.
Operating in the latent space requires less compute and leads to faster training and sampling, which makes LDMs widely popular. In fact, the recent Stable Diffusion model is also an LDM.
\textbf{Score Distillation} is a method that enables using a diffusion model as a critic, \ie, using it as a loss without explicitly back-propagating through the diffusion process. It has been introduced in DreamFusion~\cite{poole2022dreamfusion} for guiding 3D generation using the Imagen model \cite{imagen}.
To perform score distillation, noise is first added to a given image (e.g., one view of the NeRF's output). Then, the diffusion model is used to predict the added noise from the noised image. Finally, the difference between the predicted and added noises is used for calculating per-pixel gradients.
For NeRF, the gradients are back-propagated for updating the 3D NeRF model.
Going into more detail, at each iteration of the score distillation optimization, a rendered image $x$ is noised to a randomly drawn time step $t$,
\begin{equation}
x_t=\sqrt{\Bar{\alpha_t}} x + \sqrt{1 - \Bar{\alpha_t}} \epsilon,
\label{eq:noise_step}
\end{equation}
where $\epsilon \sim \mathcal{N}(0, I)$, and $\Bar{\alpha_t}$ is a time-dependent constant specified by the diffusion model.
Then, the per-pixel score distillation gradients are taken to be
\begin{equation}
\nabla_xL_{SDS}=w(t) (\epsilon_\phi(x_t,t,T)-\epsilon),
\label{eq:sds_loss}
\end{equation}
where $\epsilon_\phi$ is the diffusion model's denoiser (which approximates the noise to be removed), $\phi$ are the denoiser's parameters, $T$ is an optional guiding text prompt, and $w(t)$ is a constant multiplier that depends on $\alpha_t$.
During training, gradients are propagated from the pixel gradients to the NeRF parameters and gradually change the 3D object.
Please refer to~\cite{poole2022dreamfusion} for the complete details and derivation of Score Distillation.
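Putting Equations~\ref{eq:noise_step} and~\ref{eq:sds_loss} together, a single score-distillation step can be sketched as follows; the denoiser here is a dummy stand-in for the diffusion model's $\epsilon_\phi$, and $w(t)$ is taken as a constant:

```python
import numpy as np

rng = np.random.default_rng(3)

def sds_grad(x, alpha_bar_t, denoiser, w_t=1.0):
    """Per-pixel score-distillation gradients for a rendered image x."""
    eps = rng.normal(size=x.shape)
    # Noise the rendering to time step t (the noising equation above).
    x_t = np.sqrt(alpha_bar_t) * x + np.sqrt(1.0 - alpha_bar_t) * eps
    eps_pred = denoiser(x_t)          # stands in for eps_phi(x_t, t, T)
    # Gradient is the weighted difference of predicted and added noise.
    return w_t * (eps_pred - eps)

dummy_denoiser = lambda x_t: np.zeros_like(x_t)  # placeholder for the LDM denoiser
g = sds_grad(np.zeros((64, 64, 4)), alpha_bar_t=0.5, denoiser=dummy_denoiser)
```

In an actual optimization loop, `g` would be back-propagated from the rendered pixels (or latents) to the NeRF parameters.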
Note that DreamFusion uses the proprietary Imagen~\cite{imagen} model, which is very computationally demanding. We instead rely on the publicly available Stable Diffusion model and the public re-implementation of DreamFusion~\cite{stabledreamfusion} (which operates in the RGB space, not in the latent space as we propose).%
\input{figures/3_method/overview}
\subsection{Latent-NeRF}
\label{sec:latent_nerf}
We now turn to describe our Latent-NeRF approach.
In this method, a NeRF model is optimized to render 2D feature maps in Stable Diffusion's latent space $\mathcal{Z}$.
Latent-NeRF outputs four pseudo-color channels, $(c_1, c_2, c_3, c_4)$, corresponding to the four latent features that stable diffusion operates over, and a volume density $\sigma$. Figure~\ref{fig:overview_latent_nerf} illustrates this process.
Representing the scene using NeRF implicitly imposes spatial consistency between different views, due to the spatial radiance field and rendering equation.
Still, the fact that $\mathcal{Z}$ \textbf{can} be represented by a NeRF with spatial consistencies is non-trivial. Previous works~\cite{avrahami2022blended_latent, linear_latent_approximation}
showed that super-pixels in $\mathcal{Z}$ depend mainly on individual patches in the output image.
This can be attributed to the high resolution ($64\times64$) and low channel-wise depth ($4$) of this latent space, which encourages local dependency over the autoencoder's image and latent spaces.
Assuming $\mathcal{Z}$ is a near patch-level representation of its corresponding RGB image makes the latents nearly equivariant to spatial transformations of the scene, which justifies the use of NeRFs for representing 3D scenes in this space.
\paragraph{Text Guidance}
The vanilla form of Latent-NeRF is text-guided, with no other constraints on the scene generation. In this setting, we employ the following loss:
\begin{equation*}
L = \lambda_{SDS}L_{SDS} + \lambda_{sparse}L_{sparse},
\end{equation*}
where $L_{SDS}$ is the Score-Distillation loss depicted in Figure~\ref{fig:overview_latent_nerf}.
Note that the exact value of this loss is not accessible. Instead, the gradients it implies are approximated by a single forward pass through the denoiser and passed directly to the autograd solver.
The loss $L_{sparse}=BE(w_{blend})$ suggested in~\cite{stabledreamfusion} prevents floating ``radiance clouds'' by penalizing the binary entropy of ill-defined background masks $w_{blend}$. Namely, it encourages a strict blending of the object NeRF and background NeRF.
\input{figures/3_method/finetune}
\paragraph{RGB Refinement}
Using Latent-NeRF, one may successfully learn to represent 3D scenes even when optimizing solely in latent space. Still, in some cases, it could be beneficial to further refine the model by fine-tuning it in pixel space, and have the NeRF model operate directly in RGB.
To do so, we must convert the NeRF that was trained in latent space to a NeRF that operates in RGB.
This requires converting the MLP's output from the four latent channels to three RGB channels, such that the initially rendered RGB image is close to the output of the decoder applied to the rendered latent of the original model.
Interestingly, it has been shown~\cite{linear_latent_approximation} that a linear approximation is sufficient to predict plausible RGB colors given a single four-channel latent super pixel, via the following transformation
\begin{equation}
\begin{pmatrix}
\hat{r} \\
\hat{g} \\
\hat{b}
\end{pmatrix}
=
\begin{pmatrix}
0.298 & 0.187 & -0.158 & -0.184\\
0.207 & 0.286 & 0.189 & -0.271\\
0.208 & 0.173 & 0.264 & -0.473\\
\end{pmatrix}
\begin{pmatrix}
c_1 \\
c_2 \\
c_3 \\
c_4
\end{pmatrix},
\label{eq:latent_preview}
\end{equation}
which was calculated using pairs of RGB images and their corresponding latent codes over a collection of natural images.
As our NeRF model is already composed of a set of fully connected layers, we simply add another linear layer that is initialized using the weights in Equation~\ref{eq:latent_preview}.
This converts our Latent-NeRF to operate in pixel space and ensures that our refinement process starts from a valid result.
The additional layer is then fine-tuned together with the rest of the model, to create the refined and final output.
The overall fine-tuning procedure is illustrated in Figure~\ref{fig:rgb_finetune_overview}.
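The initialization of this refinement can be sketched with the matrix of Equation~\ref{eq:latent_preview}; the following NumPy snippet is an illustration of that linear map, not the paper's implementation.

```python
import numpy as np

# Linear latent-to-RGB approximation (Equation eq:latent_preview); each row
# maps the four latent channels (c1, c2, c3, c4) to one of (r, g, b).
LATENT_TO_RGB = np.array([
    [0.298, 0.187, -0.158, -0.184],
    [0.207, 0.286,  0.189, -0.271],
    [0.208, 0.173,  0.264, -0.473],
])

def latent_to_rgb(latent):
    """Map an (H, W, 4) latent feature map to an (H, W, 3) RGB preview."""
    return latent @ LATENT_TO_RGB.T
```

In Latent-NeRF this matrix initializes the added linear layer, so RGB refinement starts from a valid, already-plausible rendering.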
\input{figures/3_method/overview_latent_mesh}
\subsection{\meshsketch{} Guidance}
Next, we introduce a novel technique for guiding the Latent-NeRF generation based on a coarse geometry, which we call a \meshsketch.
A \meshsketch~is an abstract coarse alignment of simple 3D primitives like spheres, boxes, cylinders, etc., that together depict an outline of a more complex object. Figures~\ref{fig:skecth_mesh_animals}, \ref{fig:skecth_mesh_house}, \ref{fig:general_sketchshapes} illustrate such simple shapes.
Ideally, we would like the output density of our MLP to match that of the \meshsketch, such that the output Latent-NeRF result resembles the input shape. Nevertheless, we would also like the new NeRF to have the capacity to create new details and geometries that match the input text prompt and improve the fidelity of the shape.
To achieve this lenient constraint, we encourage the NeRF's occupancy to match the winding-number~\cite{jacobson2013robust, barill2018fast} indicator of the \meshsketch,~but with decaying importance near the surface to allow new geometries. This loss reads as
\vspace{-0.2cm}
\begin{equation}
L_{\meshsketch} = CE(\alpha_{NeRF}(p), \alpha_{GT}(p)) \cdot (1- e^{-\frac{d^2}{2\sigma_S}}).
\label{eq:lenient_constraint}
\end{equation}
This loss implies that the occupancy should be well constrained away from the surface, and free to be set by score distillation near the surface. This loss is applied in addition to the Latent-NeRF loss, over the entire point set $p$ that is used by the NeRF's volumetric rendering. $d$ represents the distance of $p$ from the surface, and $\sigma_S$ is a hyperparameter that controls how lenient the loss is, \textit{i.e.}, lower $\sigma_S$ values imply a tighter constraint to the input \meshsketch.
Applying the loss only on the sampled point set $p$ makes it more efficient, as these points are already evaluated as part of the Latent-NeRF rendering process.
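A minimal NumPy sketch of this loss follows; here we take $CE$ to be binary cross-entropy against the (binary) winding-number indicator, which is our reading of the formula rather than a detail stated in the text.

```python
import numpy as np

def sketch_shape_loss(alpha_nerf, alpha_gt, d, sigma_s, eps=1e-6):
    """Lenient occupancy loss: binary cross-entropy between the NeRF
    occupancy and the winding-number indicator, down-weighted near the
    surface (small d) so score distillation is free to add geometry there."""
    a = np.clip(alpha_nerf, eps, 1.0 - eps)
    ce = -(alpha_gt * np.log(a) + (1.0 - alpha_gt) * np.log(1.0 - a))
    weight = 1.0 - np.exp(-d**2 / (2.0 * sigma_s))
    return ce * weight
```

Note how a smaller $\sigma_S$ shrinks the region where the weight is near zero, i.e., tightens the constraint to the input shape.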
\subsection{\latentpaint{} of Explicit Shapes}
\label{sec:latent_mesh}
We now move to a more strict constraint, where the guidance is based on an exact structure of a given shape, e.g., provided in the form of a mesh. We call this approach \latentpaint, which leads to the generation of novel textures for a given shape.
Our method generates texture over a UV texture map, which can either be supplied by the input mesh, or calculated on-the-fly using XAtlas~\cite{xatlas}. To color a mesh, we first initialize a random latent texture image of size $H \times W \times 4$, where $H$ and $W$ can be chosen according to the desired texture granularity. We set them both to be $128$ in our experiments.
Figure~\ref{fig:overview_latent_mesh} presents the training process. At each score distillation iteration, we render the mesh with a differentiable renderer~\cite{KaolinLibrary} to obtain a $64 \times 64 \times 4$ feature map that is \textit{pseudo-colored} by the latent texture image.
Then, we apply the score distillation loss from Equation~\ref{eq:sds_loss} in the same way it is applied for Latent-NeRF. Yet, instead of back-propagating the loss to the NeRF's MLP parameters, we optimize the deep texture image by back-propagating through the differentiable renderer.
To get the final RGB texture image, we simply pass the \textbf{latent texture} image once through Stable Diffusion's decoder $\mathcal{D}$ to obtain a larger, high-quality RGB texture.
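The optimization loop can be sketched with a mocked renderer: below, the differentiable renderer is replaced by a nearest-texel UV lookup (\texttt{render\_uv}) and \texttt{sds\_grad} stands in for the score-distillation gradients, so all names are hypothetical and this is only a sketch of the data flow.

```python
import numpy as np

def latent_paint_step(texture, render_uv, sds_grad, lr=0.1):
    """One Latent-Paint update: sample the latent texture into a rendered
    feature map (64x64x4 in the paper), obtain SDS gradients there, and
    accumulate them back onto the sampled texels."""
    feat = texture[render_uv[..., 0], render_uv[..., 1]]   # mocked "rendering"
    grad = sds_grad(feat)                                  # per-pixel SDS gradients
    # Gradient-descent update, scattered back through the (mock) renderer.
    np.subtract.at(texture, (render_uv[..., 0], render_uv[..., 1]), lr * grad)
    return texture
```

After convergence, the latent texture would be decoded once with $\mathcal{D}$ to obtain the RGB texture.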
\section{Evaluation}
\input{figures/4_exp/latent_nerf/text2nerf_directions/fig.tex}
\label{sec:experiments}
We now validate the effectiveness of our different forms of guidance through a variety of experiments.
\paragraph{Implementation Details}
We use the Stable Diffusion implementation by HuggingFace Diffusers, with the \texttt{v1-4} checkpoint. For score distillation, we use the code-base provided by \cite{stabledreamfusion}, with Instant-NGP~\cite{mueller2022instant} as our NeRF model. Latent-NeRF usually takes less than 15 minutes to converge on a single V100, while using an RGB-NeRF with Stable Diffusion~\cite{stabledreamfusion} takes about 30 minutes, due to the increased overhead from encoding into the latent space. Note that DreamFusion~\cite{poole2022dreamfusion} takes about 1.5 hours on 4 TPUs. This clearly shows the computational advantage of Latent-NeRF.
\subsection{Text-Guided Generation}
\input{figures/4_exp/latent_nerf/text2nerf_comparison/fig.tex}
\paragraph{Qualitative Results}
We begin by demonstrating the effectiveness of the latent rendering approach with Latent-NeRF.
In \cref{teaser,fig:text2nerf_directions,fig:rgb_finetune}, we show several results obtained by our method.
In the supplementary material we provide additional results of different objects, including video visualizations.
In Figure~\ref{fig:text2nerf_directions}, we show the consistency of our learned shapes from several viewpoints.
Next, we use the baseline set by DreamFusion~\cite{poole2022dreamfusion} to qualitatively compare our approach (with the proposed RGB refinement) against other methods.
As can be seen in Figure~\ref{fig:text2nerf_comparison}, Latent-NeRF achieves significantly better results than DreamFields~\cite{jain2021dreamfields} and CLIPMesh~\cite{khalid2022clipmesh}.
We believe that the better quality of DreamFusion can be attributed to the high quality of Imagen~\cite{imagen}, but unfortunately, we cannot validate this as the model is not publicly available to the community.
\paragraph{RGB Refinement}
Figure~\ref{fig:rgb_finetune} shows the quality improvement achieved by our RGB refinement method. It reveals that RGB refinement is mostly useful for complex objects or for regions with detailed textures.
Refinement iterations in RGB space are about $2\times$ slower than iterations in latent space, increasing the runtime to more than 30 minutes. Thanks to our linear mapping from a Latent-NeRF to an RGB-NeRF, practitioners may apply the refinement only after the 3D shape has already converged under the more efficient latent training. This allows a fast exploration of 3D shapes, with an optional polishing step of RGB refinement.
\input{figures/4_exp/rgb_finetune/fig.tex}
\input{figures/4_exp/textual_inversion/cat.tex}
\paragraph{Textual-Inversion}
As our Latent-NeRF is supervised by Stable-Diffusion, we can also use \textit{Textual Inversion}~\cite{gal2022textual_inversion} tokens as part of the input text prompt. This allows conditioning the object generation on specific objects and styles, defined only by input images. Results using \textit{Textual Inversion} are presented in Figure~\ref{fig:textual_inversion}.
\input{figures/4_exp/sketch_mesh/animals/fig.tex}
\input{figures/4_exp/sketch_mesh/house/fig.tex}
\input{figures/4_exp/sketch_mesh/general/fig}
\input{figures/4_exp/latent-mesh/misc/misc}
\input{figures/4_exp/latent-mesh/shoes2}
\input{figures/4_exp/sketch_mesh/ablation/fig}
\input{figures/4_exp/latent-mesh/fish}
\input{figures/4_exp/limitations/fig.tex}
\input{figures/4_exp/sketch_mesh/weight_ablation/fig}
\subsection{\meshsketch~Guidance}
Figure~\ref{fig:skecth_mesh_animals} shows different \meshsketch~results with the same conditioning mesh. The different text prompts are able to guide the shape toward refined geometries that better match the text prompt. The rough \meshsketch{} in this figure, was quickly designed in Blender~\cite{blender} and allows us to easily define a coarse shape that guides the Latent-NeRF.
Moreover, Figure~\ref{fig:skecth_weight_ablation} depicts an ablation over the lenient parameter $\sigma_S$ from \cref{eq:lenient_constraint}.
When $\sigma_S$ is set to $0.05$, the generated mesh takes the form of the conditioning shape (shown in Figure~\ref{fig:skecth_mesh_animals}).
As $\sigma_S$ grows, more details are added on top of the base shape, until little to no resemblance is observed at $\sigma_S=1.5$.
Figure~\ref{fig:skecth_mesh_house} contains additional results of different shapes generated with the same conditioning mesh, here a coarse house shape. The normals visualization (bottom row) shows that our method can add fine geometric details.
Figure~\ref{fig:general_sketchshapes} demonstrates that our proposed approach can successfully work with a variety of different \meshsketch s. Notice that our method handles a variety of different shapes and also works well with shapes extruded from 2D sketches.
We also exhibit in Figure~\ref{fig:ablation_prompt} the effectiveness of shape-guidance, by showing results of the same prompts with and without the shape loss.
\subsection{\latentpaint{} Generation}
We tested \latentpaint{} on a variety of input shapes shown in \cref{fig:latent_mesh_misc,fig:latent_mesh_shoe}.
As all of the shapes in these figures do not contain precomputed UV parameterization, we use XAtlas~\cite{xatlas} to compute such parameterization automatically.
In contrast, the fish mesh in Figure~\ref{fig:latent_mesh_fish} (obtained from~\cite{keenan3D}) already contains a high-quality UV parameterization, which we are also able to work with.
Figure~\ref{fig:latent_mesh_shoe} compares our \latentpaint{} approach to two closely related methods, Tango~\cite{chen2022tango} and CLIPMesh~\cite{khalid2022clipmesh}. As can be seen, our approach achieves more precise textures thanks to the guidance from the diffusion model.
Note that \latentpaint{} can work without assuming or computing any UV-map by simply optimizing per-face latent attributes. Yet, we found it better to use a UV-map for two main reasons: (i) the UV map makes the texture granularity independent of the geometric resolution, \textit{i.e.}, coarse geometries do not imply coarse colorization; and (ii) texture maps are easier to use with downstream applications like MeshLab~\cite{meshlab} and Blender~\cite{blender}.
\section{Limitations}
The technique presented here is still a preliminary step towards the challenging goal of a comprehensive text-to-shape model that uses no 3D supervision.
Still, the proposed latent framework has its limitations.
To attain a plausible 3D shape, we use the same ``prompt tweaking'' used by DreamFusion~\cite{poole2022dreamfusion}, \ie, adding a directional text prompt (e.g., ``front'', ``side'', with respect to the camera) to the input text prompt. We find that this assistance tends to fail with our approach when applied to certain objects.
Moreover, we find that even Stable Diffusion tends to generate unsatisfactory images when specifying the desired direction as shown in Figure~\ref{fig:limitations}.
Additionally, similar to most works that employ diffusion models, there is a stochastic behavior to the results, such that the quality of the results may significantly vary between different seeds.
\section{Conclusions}
\label{sec:conclusion}
In this work, we introduced a latent framework for generating 3D shapes and textures using different forms of text and shape guidance. We first adapted the score distillation loss for LDMs, enabling the use of recent, powerful and publicly available text-to-image generation models for 3D shape generation.
Successfully applying score distillation on LDMs results in a fast and flexible object generation framework. We then introduced shape-guided control on the generated model. We showed two versions of shape-guided generation, \meshsketch{} and \latentpaint{}, and demonstrated their effectiveness for providing additional control over the generation process.
Typically, the notion of rendering refers to generating an image in pixel space. Here, we have presented a method that renders directly into the latent space of a neural model. We believe that our Latent-NeRF approach opens the avenue for more latent space rendering methods, which can gain from a compact and effective latent representation, and advance the use of neural models that operate in latent space rather than pixel space.
Furthermore, we hope that the ease of use of our approach will encourage further research toward effective text-guided shape generation.
\subsection{Acknowledgements}
We thank Yuval Alaluf, Rinon Gal and Kfir Goldberg for their insightful comments.
We would also like to thank Ido Richardson for his excellent mesh model designs used throughout the paper.
\section{Introduction}\label{introduction}
The study of the order of zeros of $L$-functions is one of the central problems in number~theory. It is conjectured that all nontrivial zeros of the Riemann zeta function lie on the critical line $\Re(s) = \frac{1}{2}$ and are simple. On the other hand, there exist number fields $L$ such that the Dedekind zeta function $\zeta_L(s)$ has nontrivial zeros of higher order. This is due to the decomposition of Dedekind zeta functions as a product of Artin $L$-functions. The Artin holomorphy conjecture predicts that Artin $L$-functions associated to nontrivial irreducible representations are entire. Assuming this conjecture, if $L/K$ is a nonabelian Galois extension of number~fields, then $\zeta_L(s)$ has infinitely many nontrivial zeros of higher order.
If $L/\mathbb{Q}$ is Galois and we further assume that no two Artin $L$-functions associated to irreducible characters of $\Gal(L/\mathbb{Q})$ share nontrivial zeros (with the possible exception of $s = \frac{1}{2}$), as well as that all such zeros are simple, then one can be more precise: the highest order~nontrivial zeros of $\zeta_L(s)$ away from $s = \frac{1}{2}$ have order equal to the greatest degree of any~irreducible representation of $\Gal(L/\mathbb{Q})$ and there are infinitely many such zeros.
Since the Artin holomorphy conjecture is known for specific Galois groups, it has long~been known that there are infinitely many~cases of zeros of higher order away from $s = \frac{1}{2}$. Browkin \cite{browkin-2013} studied one such family of Galois groups. His example concerns the affine group over~a finite field, namely the matrix group
\begin{equation}\label{eq:aff-gp-def}
\AGL_1(\mathbb{F}_q) \coloneqq \bigg\{
\begin{pmatrix}
a & b \\ 0 & 1
\end{pmatrix} \bigg|\,\, a \in \mathbb{F}_q^\times, b \in \mathbb{F}_q \bigg\}.
\end{equation}
Each such group possesses only one irreducible character of degree greater than one (of degree $q-1$), induced from a one-dimensional character of a subgroup of~$\AGL_1(\mathbb{F}_q)$. As~a result (see~\cref{cor:hol-artin-induct}), the Artin holomorphy conjecture is known for the Artin $L$-function corresponding to this character. Hence, for every Galois extension $L/K$ of number fields~with Galois group~$\AGL_1(\mathbb{F}_q)$, the Dedekind zeta function $\zeta_{L}(s)$ has infinitely many zeros of~multiplicity at least $q - 1$ in the critical strip~$0 < \Re(s) < 1$. This led Browkin to ask whether $\zeta_L(s)$ always has higher order nontrivial zeros whenever $L/K$ is nonabelian \cite[Section 7]{browkin-2013}.
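For prime $q$ (so that $\mathbb{F}_q = \mathbb{Z}/q\mathbb{Z}$), this character-degree structure is easy to verify computationally: the sketch below builds $\AGL_1(\mathbb{F}_q)$ from pairs $(a, b)$, counts its conjugacy classes by brute force, and one can then check that the $q$ irreducible characters must consist of $q-1$ linear ones and a single character of degree $q-1$.

```python
from itertools import product

def agl1_data(q):
    """Return (|G|, number of conjugacy classes) for G = AGL_1(F_q), q prime,
    with elements encoded as pairs (a, b) for the matrix [[a, b], [0, 1]]."""
    G = [(a, b) for a, b in product(range(1, q), range(q))]
    mul = lambda g, h: ((g[0] * h[0]) % q, (g[0] * h[1] + g[1]) % q)
    inv = lambda g: (pow(g[0], -1, q), (-pow(g[0], -1, q) * g[1]) % q)
    classes = set()
    for g in G:
        # The conjugacy class of g, as a frozenset, deduplicated by the set.
        classes.add(frozenset(mul(mul(h, g), inv(h)) for h in G))
    return len(G), len(classes)
```

Since $|G| = q(q-1)$ decomposes as the sum of squared character degrees, $q-1$ linear characters leave exactly $(q-1)^2$ for the one remaining irreducible, confirming its degree is $q-1$.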
An alternative, more general approach is to study the holomorphy of a family of Artin $L$-functions at a given point. This is the approach taken by Stark \cite[Theorem 3]{stark1974some} who~showed that if $L/K$ is Galois and $\rho$ is a simple zero of $\zeta_L(s)$, then $L(s, \chi, L/K)$ is holomorphic at~$s = \rho$ for every character $\chi$. Stark's result has been refined by Foote and Kumar Murty \cite{foote_murty_1989} and by~Foote and Wales \cite{footeorder2} to treat higher order zeros when $L/K$ is a solvable extension. As a corollary of the work of Stark, we obtain the following theorem.
\begin{theorem}\label{thm:mainthm}
If $L/K$ is a nonabelian Galois extension of number fields, the Dedekind zeta function $\zeta_{L}(s)$ has~infinitely many nontrivial zeros with multiplicity greater than $1$.
\end{theorem}
The goal of this paper is to prove this result by wholly different means: we non-constructively consider the class of Galois groups for~which the conclusion of \cref{thm:mainthm} holds~and establish that this encompasses all nonabelian finite groups.
Although the example of Browkin allows for explicit lower bounds for the multiplicities~of the nontrivial zeros of interest, neither proof of \cref{thm:mainthm} is able to prescribe such multiplicities predicted by the Artin holomorphy conjecture. However, we can match this prediction~for zeros of order 3.
\begin{theorem}\label{thm:degree3}
Let $L/K$ be a Galois extension of number fields. If $\Gal(L/K)$ has an irreducible representation of degree at least 3, then the Dedekind zeta function $\zeta_{L}(s)$ has~infinitely many nontrivial zeros with multiplicity greater than $2$.
\end{theorem}
The solvable case of \cref{thm:degree3} can be proved quite easily by the results of Foote and Wales \cite{footeorder2}, though the non-solvable case does not appear to be easily addressed by Stark-like theorems. We elaborate on this in \cref{sec:degree-3-proof}.
In forthcoming work \cite{hkms}, we apply \cref{thm:mainthm} to establish that an analogue of the Mertens conjecture fails for certain number fields. This is our main~motivation to work with the order of zeros of Dedekind zeta functions.
\subsection*{Acknowledgements}
We are deeply grateful to Peter Humphries for supervising this project and to Ken Ono for his valuable suggestions. We would also like to thank Robert Lemke Oliver and Samit Dasgupta for helpfully directing us to the work of Stark. Finally, we are grateful for the generous support of the National Science Foundation (Grants DMS 2002265 and DMS 205118),
National Security Agency (Grant H98230-21-1-0059), the Thomas Jefferson Fund at the University of Virginia, and the Templeton~World Charity Foundation.
\section{Preliminaries}\label{preliminaries}
First of all, we recall the definition of an Artin $L$-function and some of its properties. These can be found in \cite[pp.~211,~220--222]{heilbronn1967zeta}.
\begin{definition}\label{def:artin-L} Given a Galois extension $L/K$ of number fields and a (complex linear)~representation $(\rho,V)$ of $\operatorname{Gal}(L/K)$ with character $\chi$, the \emph{Artin $L$-function} $L(s,\chi,L/K)$ is defined as a product of local factors, one for each prime ideal $\mathfrak p\subset\mathcal O_K$. For an unramified prime $\mathfrak p$,~the factor is
\begin{equation*}
\det\left(I-N(\mathfrak p)^{-s}\rho(\frob(\mathfrak p))\right)^{-1},
\end{equation*}
where $\frob(\mathfrak p)$ is the Frobenius element of $\mathfrak p$ defined up to conjugacy in $\operatorname{Gal}(L/K)$. For~ramified $\mathfrak p$, the matrix is restricted to the subspace of $V$ fixed by the inertia group of $\mathfrak p$. As $\frob(\mathfrak p)$ is only defined up to an element of the inertia group, the restriction and corresponding determinant are only well-defined on this subspace.
\end{definition}
\begin{lemma}\label{lem:artin-L}
Let $L/K$ be a Galois extension of number fields with Galois group $G$.
\begin{enumerate}[label=(\alph*), font=\normalfont]
\item If $\chi$ is a one-dimensional character of $G$, then $L(s,\chi,L/K)$ is a Hecke $L$-function~and thus holomorphic on the whole complex plane $\mathbb{C}$, unless $\chi$ is the trivial character of $G$, in which case it is holomorphic except for a pole at $s = 1$.
\item If $\chi$ is a character of some subgroup $H \subset G$, then $L(s, \Ind_{H}^{G} \chi, L/K) = L(s, \chi, L/L^{H})$.
\item If $1$ is the trivial character of $G$, then $L(s, 1, L/K) = \zeta_{K}(s)$, and if $r_{G}$ is the character corresponding to the regular representation of $G$, then $L(s, r_{G}, L/K) = \zeta_{L}(s)$.
\item If $\chi_{1}$ and $\chi_{2}$ are characters of $G$, then $L(s, \chi_{1}+\chi_{2}, L/K) = L(s, \chi_{1}, L/K) L(s, \chi_{2}, L/K)$.
\end{enumerate}
\end{lemma}
As a consequence, we have an explicit factorization of Dedekind zeta~functions.
\begin{corollary}
\label{cor:dedekind-zeta-factorization}
Let $L/K$ be a Galois extension of number fields. Then
\begin{equation*}
\zeta_L(s) = \zeta_K(s) \prod_{\chi} L(s, \chi, L/K)^{\dim \chi},
\end{equation*}
where the product runs over all nontrivial irreducible characters $\chi$ of $\Gal(L/K)$.
\end{corollary}
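As a concrete illustration (our example, not part of the original statement): when $\Gal(L/K) \cong S_3$, the irreducible characters are the trivial character, the sign character $\varepsilon$, and a single character $\chi_2$ of degree $2$, so the factorization reads

```latex
\begin{equation*}
\zeta_L(s) = \zeta_K(s)\, L(s, \varepsilon, L/K)\, L(s, \chi_2, L/K)^{2}.
\end{equation*}
```

In particular, every nontrivial zero of $L(s, \chi_2, L/K)$ contributes a zero of $\zeta_L(s)$ of multiplicity at least $2$, provided no pole of another factor cancels it.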
Another corollary is the entireness of certain Artin $L$-functions.
\begin{corollary}\label{cor:hol-artin-induct}
Let $L/K$ be a Galois extension of number fields with Galois group $G$ and $\chi$ be a character of $G$ induced from a nontrivial one-dimensional character $\psi$ of a subgroup $H$ of $G$. Then $L(s, \chi, L/K)$ is entire.
\end{corollary}
\begin{proof}
This is a direct consequence of \cref{lem:artin-L}~(a) and (b).
\end{proof}
We may also combine \cref{cor:hol-artin-induct} with a representation-theoretic theorem to obtain some stronger results on the entireness of certain $L$-functions. For a group $G$, we let $r_G$ denote the character of the regular representation of $G$ and $1=1_G$ denote the character of the trivial representation of $G$.
\begin{lemma}[Aramata--Brauer~\cite{aramata}]\label{lem:aramata-brauer}
Let $G$ be a finite group. There exist positive rational~numbers $\lambda_{i}$ and characters $\chi_{i}$ of $G$ such that
\begin{equation*}
r_{G} = 1+\sum_{i} \lambda_{i} \chi_{i},
\end{equation*}
where each $\chi_{i}$ is the induction of a one-dimensional character of some cyclic subgroup of $G$.
\end{lemma}
\begin{corollary}\label{cor:hol-quotient}
If $L/K$ is a Galois extension of number fields, $\zeta_{L}(s)/\zeta_{K}(s)$ is holomorphic.
\end{corollary}
\begin{proof}
Since Dedekind zeta functions are holomorphic (except for a pole of order $1$ at $s = 1$), the~quotient $\zeta_L(s)/\zeta_K(s)$ is meromorphic. To prove that it is holomorphic, we need only show that it has no poles.~Let $G = \mathrm{Gal}(L/K)$. Then, if
\begin{equation*}
r_{G}-1 = \sum_{i} \lambda_{i} \chi_{i},
\end{equation*}
\cref{lem:artin-L} renders that
\begin{equation*}
\left(\frac{\zeta_{L}(s)}{\zeta_{K}(s)}\right)^{N} = \prod_{i} L(s, \chi_{i}, L/K)^{N \lambda_{i}},
\end{equation*}
where $N$ is a positive integer such that $N \lambda_{i} \in \mathbb{Z}$ for each $i$. Via \cref{cor:hol-artin-induct}, the right-hand side is a holomorphic function, hence $\zeta_{L}(s)/\zeta_{K}(s)$ is as well.
\end{proof}
\begin{remark}\label{rmk:easy-sol-idea}
If some $\lambda_i$ in the decomposition of \cref{lem:aramata-brauer} exceeds $1$, then \cref{thm:mainthm} has a more direct proof, as each of the infinitely many nontrivial zeros of $L(s,\chi_i,L/K)$ would have multiplicity at least $\lambda_i$ and thus at least $\lceil\lambda_i\rceil\geq 2$. Unfortunately, at least in the explicit decomposition given in \cite{aramata}, this does not hold in general, even when identical characters are grouped appropriately.
\end{remark}
Before proving our main result, we need a class of groups where the Artin holomorphy~conjecture is known.
\begin{definition}
A group $G$ is \emph{monomial} if all of its irreducible characters are induced from characters of degree 1.
\end{definition}
By \cref{cor:hol-artin-induct}, the Artin holomorphy conjecture is known for all monomial Galois extensions of number fields. The following lemma due to Huppert presents an easily verifiable sufficient criterion for a group to be monomial which will be used subsequently.
\begin{lemma}[Huppert~{\cite{huppert-m-groups}}]
\label{lem:huppert}
Let $G$ be a finite group and let $N \lhd G$ be a proper normal subgroup for which $N$ is solvable and $G/N$ is supersolvable. If all Sylow subgroups of $N$ are abelian,~then~$G$ is monomial.
\end{lemma}
We also need a standard group-theoretic lemma. For completeness, we reproduce the~proof.
\begin{lemma}\label{lem:simple-group-nonabelian-subgp}
Any finite simple group $G$ besides $\mathbb{Z}/p\mathbb{Z}$ has a nonabelian proper subgroup.
\end{lemma}
\begin{proof}
Assume that this is not the case. Consider any maximal proper subgroup $H$ of $G$. Its normalizer is contained between $H$ and~$G$, so it must be either $H$ or $G$. If it were $G$, then $H$ would be normal, contradicting the simplicity~of $G$, so $H$ must be its own normalizer. If $H_{1}$ and $H_{2}$ are any two distinct maximal proper~subgroups of $G$, the normalizer of $H_{1} \cap H_{2}$ must contain both $H_{1}$ and $H_{2}$, since both $H_{1}$ and $H_{2}$ are abelian. As a consequence, it must be $G$ itself; this forces $H_1\cap H_2$ to be the trivial subgroup, since a nontrivial $H_1 \cap H_2$ would then be a proper normal subgroup, contradicting the simplicity of $G$. Hence, any maximal proper subgroup $H$ of $G$ has $|G|/|H|$ distinct conjugates, the union of which comprises exactly
\begin{equation*}
1+\frac{|G|}{|H|}(|H|-1)=|G|-\frac{|G|}{|H|}+1
\end{equation*}
elements. Since every group of non-prime order has a proper nontrivial subgroup, there is~some element $x\in G$ not counted in any conjugate of $H$. Thus there must exist some~maximal~proper subgroup $H'$ (the maximal proper subgroup containing $x$, say) of $G$ that is not a conjugate of $H$. The conjugates of this subgroup comprise $|G|-|G|/|H'|+1$ elements, of which only~the identity can be in any conjugate of $H$. Then we have that
\begin{equation*}
|G| \geq \left(|G|-\frac{|G|}{|H|}+1 \right)+\left(|G|-\frac{|G|}{|H'|}+1 \right)-1,
\end{equation*}
which implies $\min(|H|, |H'|) < 2$, a contradiction.
\end{proof}
\section{A Theorem of Stark}\label{sec:stark}
Stark \cite{stark1974some}, Foote and Murty \cite{foote_murty_1989}, and Foote and Wales \cite{footeorder2} considered the~holomorphy of Artin $L$-functions at a given point. Just as the Artin holomorphy conjecture implies the existence of higher order nontrivial zeros of Dedekind zeta functions, this more local phenomenon may be used to establish the existence of higher order zeros in certain circumstances. To elucidate this point, we recall a theorem of Stark to produce a first proof of \cref{thm:mainthm}.
\begin{lemma}[{Stark \cite[Theorem 3]{stark1974some}}]
\label{thm:stark}
Let $L/K$ be a Galois extension of number fields. Let $\rho \neq 1$ be such that the order of vanishing of $\zeta_L(s)$ at $s = \rho$ is at most 1. Then $L(s, \chi, L/K)$~is holomorphic at $s = \rho$ for all characters $\chi$.
\end{lemma}
The proof of this theorem is largely representation theoretic, using Frobenius reciprocity, \cref{lem:artin-L} and \cref{cor:hol-quotient} to show that the virtual character
\begin{equation*}
\theta \coloneqq \sum_{\chi} \chi \cdot \ord_{s = \rho}L(s, \chi, L/K)
\end{equation*}
is a genuine character, and hence $L(s, \chi, L/K)$ is holomorphic at $s = \rho$ for all $\chi$. We now~give an initial proof of \cref{thm:mainthm}.
\begin{proof}[First Proof of \cref{thm:mainthm}]
Assume that $L/K$ is Galois but not abelian. Then $\Gal(L/K)$ has an irreducible representation $\chi$ of degree at least 2. Let $\rho$ be a zero or pole of $L(s, \chi, L/K)$ in the critical strip. It is known that infinitely many such $\rho$ exist.
Suppose for the sake of contradiction that $\ord_{s = \rho} \zeta_L(s) \leq 1$. Then by \cref{thm:stark}, the Artin $L$-function $L(s, \chi, L/K)$ is holomorphic at $\rho$ for each $\chi$. In particular, $L(\rho, \chi, L/K) = 0$. By \cref{cor:dedekind-zeta-factorization}, $\zeta_L(s)$ has a zero of order at least $\chi(1) > 1$ at $\rho$. Thus, $\zeta_L(s)$ has infinitely many nontrivial zeros of order at least 2.
\end{proof}
\section{A New Proof of \cref{thm:mainthm}}\label{sec:Proof-of-Theorem-1.1}
Let $\mathcal{S}_n$ be the set of finite groups $G$ with the property that for any Galois extension $L/K$ of number fields with Galois~group $G$, the Dedekind zeta function $\zeta_{L}(s)$ has infinitely many nontrivial zeros with multiplicity at least $n$. We establish by contradiction that all nonabelian groups $G$ are in $\mathcal S_2$. First of all, we show that this holds for all nonabelian monomial groups.
\begin{lemma}\label{lem:monomial-in-s}
Let $G$ be a finite nonabelian monomial group. Then $G \in \mathcal{S}_2$.
\end{lemma}
\begin{proof}
Suppose that $L/K$ is a Galois extension of number fields with monomial Galois group $G$. Then, by \cref{cor:dedekind-zeta-factorization},
\begin{equation*}
\zeta_{L}(s) = L(s, r_{G}, L/K) = \zeta_K(s) \cdot \prod_{\chi} L(s, \chi, L/K)^{\dim \chi},
\end{equation*}
where the product is over all nontrivial irreducible characters $\chi$ of $G$. Since $G$ is nonabelian, some such $\chi$ has degree greater than $1$. By \cref{cor:hol-artin-induct} and the definition of a monomial group,~each $L(s, \chi, L/K)$ is an entire~function with infinitely many nontrivial zeros. If $\dim \chi > 1$, then the infinitely many nontrivial zeros~of $L(s, \chi, L/K)$ occur with multiplicity at least $2$ as zeros of $\zeta_{L}(s)$. Hence we conclude that $G \in \mathcal{S}_2$.
\end{proof}
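As a concrete illustration of \cref{lem:monomial-in-s} (our example, not part of the original argument), take the smallest nonabelian monomial group $G = S_3$, whose irreducible characters have degrees $1$, $1$ and $2$:

```latex
% S_3 is supersolvable, hence monomial; writing sgn for the sign
% character and chi for the degree-2 character, the factorization reads
\begin{equation*}
\zeta_L(s) = \zeta_K(s)\, L(s, \operatorname{sgn}, L/K)\, L(s, \chi, L/K)^{2}.
\end{equation*}
```

Here $\chi$ is induced from a nontrivial character of the cyclic subgroup $A_3$, so $L(s, \chi, L/K)$ is entire with infinitely many nontrivial zeros, each of which is at least a double zero of $\zeta_L(s)$.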
Next, we show that membership in $\mathcal{S}_n$ is inherited by a group from its subgroups and from its quotients by normal subgroups.
\begin{lemma}\label{lem:subgp-and-quotient}
Let $G$ be a finite group and $n$ be any positive integer.
\begin{enumerate}[label=(\arabic*), font=\normalfont]
\item If $H$ is a subgroup of $G$ and $H \in \mathcal S_n$, then $G \in \mathcal S_n$.
\item If $N \lhd G$ is a normal subgroup and $G/N \in \mathcal S_n$, then $G \in \mathcal S_n$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $L/K$ be any Galois extension of number fields with $\Gal(L/K) = G$.
\begin{enumerate}
\item If $H \in \mathcal{S}_n$ is a subgroup of $G$, then $L/L^{H}$ is a Galois extension with Galois group~$H$. Since $H \in \mathcal{S}_n$, this means that $\zeta_{L}(s)$ has infinitely many nontrivial zeros with multiplicity at least $n$. Hence we have $G \in \mathcal{S}_n$.
\item If $N \lhd G$ is a normal subgroup for which $G/N \in \mathcal{S}_n$, then $L^{N}/K$ is a Galois extension with Galois group $G/N$. The Dedekind zeta function $\zeta_{L^N}(s)$ thus has infinitely many nontrivial zeros of~multiplicity at least $n$. By \cref{cor:hol-quotient},
\begin{equation*}
\frac{\zeta_{L}(s)}{\zeta_{L^{N}}(s)}
\end{equation*}
is holomorphic, meaning that $\zeta_{L}$ also has infinitely many nontrivial zeros of multiplicity at least $n$. Hence we have $G \in \mathcal{S}_n$. \qedhere
\end{enumerate}
\end{proof}
We are now ready to establish~\cref{thm:mainthm}.
\begin{proof}[Second Proof of Theorem \ref{thm:mainthm}]
Suppose for the sake of contradiction that there is a finite nonabelian group not in $\mathcal S_2$; let $G$ be such a group of minimal order.
By \cref{lem:subgp-and-quotient}, such a group~$G$ may only have abelian subgroups or quotients, as otherwise this would contradict the minimality of $G$. However, by \cref{lem:simple-group-nonabelian-subgp}, $G$ cannot be simple,~since nonabelian simple groups contain some nonabelian proper subgroup. Therefore, $G$ must have some nontrivial proper normal subgroup; take $N$ to be such a subgroup. Both $N$ and $G/N$ are abelian, meaning that they are supersolvable. Hence, by \cref{lem:huppert}, $G$ is monomial,~which by \cref{lem:monomial-in-s} means $G \in \mathcal{S}_2$, as desired.
\end{proof}
\section{Proof of~\cref{thm:degree3}}\label{sec:degree-3-proof}
Although \cref{thm:mainthm} is strictly weaker than Stark's theorem, the method demonstrated in \cref{sec:Proof-of-Theorem-1.1} is much more amenable to collections of less well-behaved groups, like the collection of finite non-solvable groups. We will utilize this to prove \cref{thm:degree3} in the non-solvable~case.
First, however, we address the case where the Galois group is solvable; this proof may be completed in two ways. One method exploits the straightforward order-two generalization of Stark's result as proven by Foote and Wales.
\begin{lemma}[{Foote--Wales \cite{footeorder2}}]\label{FooteWales}
Let $L/K$ be a solvable extension of number~fields and let $\rho \neq 1$ be such that the order of vanishing of $\zeta_L(s)$ at $s = \rho$ is at most $2$. Then $L(s, \chi, L/K)$ is holomorphic at $s = \rho$ for all characters $\chi$.
\end{lemma}
With Lemma~\ref{FooteWales} in mind, one replicates the proof in \cref{sec:stark} directly to show the~solvable case of \cref{thm:degree3}. Alternatively, it is also possible to treat the solvable case using methods similar to those in \cref{sec:Proof-of-Theorem-1.1}. We now give the proof.
\begin{proof}[Second proof of the solvable case of \cref{thm:degree3}]
For the sake of contradiction, we consider a solvable group $G\not\in \mathcal S_3$ with an irreducible representation of dimension at least $3$, and assume that $G$ is of minimal order subject to these constraints. Since any subgroup or quotient of a solvable group is solvable, \cref{lem:subgp-and-quotient} along with the minimality of $G$ implies that $G$ has no subgroup or quotient with an irreducible representation of dimension greater than $2$.
As a result, $G$ must possess a normal subgroup $N$ with $G/N\cong \mathbb{Z}/p\mathbb{Z}$ for some prime $p$, and via minimality $N$ may only have irreducible representations of dimension $1$ or $2$. Such groups are explicitly classified in \cite[Theorem 3]{amitsur}; each has an abelian normal subgroup with abelian quotient, and so $N$ is monomial by \cref{lem:huppert}. This means that Artin holomorphy holds for all Artin characters of $L/L^N$.
Given a positive integer $n$, define the auxiliary meromorphic $L$-functions
$$L_n(s, L/K) \coloneqq \prod_{\dim \chi = n} L(s, \chi, L/K).$$
Using Clifford theory, the induction $\Ind_N^G(\psi)$ of any irreducible character $\psi$ of $N$ splits into irreducible characters $\chi$ of $G$ of equal degree (either $\dim\psi$ or $p\dim\psi$), so one can write $L_n(s, L/K)^n$ as a product of Artin $L$-functions whose characters are induced from $N$. Hence, $L_n(s, L/K)^n$ is holomorphic away from $s=1$ for each positive integer $n$; since $L_n(s, L/K)$ is meromorphic, this implies that $L_n(s, L/K)$ is holomorphic away from $s=1$. Moreover, $L_n(s, L/K)$ is itself an Artin $L$-function, and hence has infinitely many nontrivial zeros. Observing that $\zeta_L(s) = \prod_n L_n(s, L/K)^n$, it then follows that $\zeta_L(s)$ has infinitely many zeros with multiplicity at least 3.
\end{proof}
Hence, only the non-solvable case remains. In this context, we shall see that minimal~counterexamples would be minimal simple groups.
\begin{definition}\label{def:minimal-simple}
A \emph{minimal simple group} is a nonabelian finite simple group such that all proper subgroups are solvable.
\end{definition}
The classification of minimal simple groups was completed by Thompson. In what follows, $\PSL_n(q)$ denotes the projective special linear group of degree $n$ over the field $\mathbb{F}_q$ and $\Sz(2^{2k+1})$ denotes a Suzuki group.
\begin{lemma}\textup{(Thompson \cite[Corollary 1]{classification-of-minimal-simple-groups})}\label{lem:classification-of-minimal-simple-groups}
Let $G$ be a minimal simple group. Then $G$ is isomorphic to one of the following:
\begin{enumerate}[before=\normalfont]
\item $\PSL_2(2^p)$ for some prime $p$.
\item $\PSL_2(3^p)$ for some odd prime $p$.
\item $\PSL_2(p)$ for some $p > 3$ prime where $p \equiv 2, 3 \mod 5$.
\item $\PSL_3(3)$.
\item $\Sz(2^p)$ for some odd prime $p$.
\end{enumerate}
\end{lemma}
Our immediate goal is to show that all such groups belong to $\mathcal{S}_3$. For this, we will require the following standard facts concerning the groups presented in \cref{lem:classification-of-minimal-simple-groups}. Here, $\AGL_1(q)$ signifies the affine group over $\mathbb{F}_q$ as defined in \eqref{eq:aff-gp-def}, and $\AGL'_1(q)$ denotes the subgroup of $\AGL_1(q)$ formed by restricting the entry $a$ in \eqref{eq:aff-gp-def} to those elements of $\mathbb{F}_q^\times$ that are squares.
\begin{lemma}\label{lem:group-theoretic-lemmas}
The following statements hold:
\begin{enumerate}[before=\normalfont]
\item $\PSL_2(3) \cong \AGL_1(4)$.
\item $\Sz(2) \cong \AGL_1(5)$.
\item $\AGL_1(2^n) \leq \PSL_2(2^n)$ for any $n$.
\item $\AGL_1'(q) \leq \PSL_2(q)$ for $q = p^n$ odd.
\end{enumerate}
\begin{proof}
Parts (1) and (2) are routine calculations. For parts (3) and (4), consider the subgroup of upper-triangular matrices ${\SUT_2(q) \leq \SL_2(q)}$ and its image $\PSUT_2(q) \leq \PSL_2(q)$. Observe that
$$
\PSUT_2(q) = \left\{\pm a^{-1}\begin{pmatrix}
a^2 & b \\
0 & 1
\end{pmatrix} \,\bigg\vert\, a \in \mathbb{F}_q^\times, b \in \mathbb{F}_q \right\}.
$$
Noting the similarities in the definitions of $\PSUT_2(q)$ and $\AGL_1(q)$, we see that $\PSUT_2(q) \cong \AGL'_1(q)$. When $q = 2^n$, the map $a\mapsto a^2$ is the Frobenius automorphism on $\mathbb{F}_q^\times$, and we may further confirm that ${\PSUT_2(q) \cong \AGL_1(q)}$.
\end{proof}
\end{lemma}
Browkin \cite{browkin-2013} establishes that $\AGL_1(q) \in \mathcal{S}_{q - 1}$. We extend this result to the index-two subgroup $\AGL'_1(q)$.
\begin{lemma}\label{lem:semidirect-computation}
Let $q = p^n$ for $p \geq 3$. Then $\AGL'_1(q) \in \mathcal{S}_{(q - 1)/2}$.
\begin{proof}
Observe that $[\AGL_1(q): \AGL'_1(q)] = 2$; in particular, $\AGL'_1(q)$ is normal in $\AGL_1(q)$. By computations of Browkin \cite{browkin-2013}, $\AGL_1(q)$ is monomial with an irreducible representation $\chi$ of degree $q - 1$. The subgroup $\AGL'_1(q)$ is also monomial by \cref{lem:huppert}, and Clifford~theory gives that $\chi$ restricts to a representation of $\AGL'_1(q)$ whose constituent irreducible representations have degree at least $(q - 1)/2$. Hence, $\AGL'_1(q) \in \mathcal{S}_{(q-1)/2}$.
\end{proof}
\end{lemma}
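As a quick computational sanity check of the lemma for $q = 7$ (our own illustration, not part of the proof), one can realize $\AGL_1(7)$ and $\AGL'_1(7)$ as affine maps $x \mapsto ax + b$ over $\mathbb{F}_7$ and count conjugacy classes: five classes force character degrees $1, 1, 1, 3, 3$, matching $(q-1)/2 = 3$.

```python
# Sanity check (ours): build AGL_1(7) and its index-two subgroup
# AGL'_1(7) as affine maps x -> a*x + b over F_7 and count conjugacy
# classes of AGL'_1(7).  Five classes force degrees 1, 1, 1, 3, 3:
# three linear characters come from the abelianization Z/3Z, and the
# remaining 21 - 3 = 18 = 3^2 + 3^2 forces two degree-3 characters.
from itertools import product

p = 7
squares = {(x * x) % p for x in range(1, p)}  # nonzero squares in F_7

def compose(g, h):
    # (a1, b1) o (a2, b2): x -> a1*(a2*x + b2) + b1
    a1, b1 = g
    a2, b2 = h
    return ((a1 * a2) % p, (a1 * b2 + b1) % p)

def inverse(g):
    a, b = g
    ainv = pow(a, p - 2, p)
    return (ainv, (-ainv * b) % p)

AGL = [(a, b) for a, b in product(range(1, p), range(p))]
AGLp = [(a, b) for a, b in AGL if a in squares]
assert len(AGL) == p * (p - 1) and len(AGLp) == len(AGL) // 2

# conjugacy classes of AGL'_1(7)
classes = {frozenset(compose(compose(h, g), inverse(h)) for h in AGLp)
           for g in AGLp}
print(len(classes))  # 5 classes -> irreducible degrees 1, 1, 1, 3, 3
```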
Note that $\mathcal{S}_n \subseteq \mathcal{S}_3$ for $n \geq 3$. Using these results, we show that all minimal simple groups belong to $\mathcal{S}_3$:
\begin{lemma}\label{lem:minimal-simple-case}
Let $G$ be a minimal simple group. Then $G \in \mathcal{S}_3$.
\begin{proof}
By \cref{lem:classification-of-minimal-simple-groups}, there are five cases to prove. In each case, by \cref{lem:subgp-and-quotient}, it suffices to find a subgroup belonging to $\mathcal{S}_3$. Such subgroups are given in \cref{lem:group-theoretic-lemmas}. In particular,
\begin{enumerate}
\item $\AGL_1(2^p) \leq \PSL_2(2^p)$, where $p$ is prime.
\item $\AGL'_1(3^p) \leq \PSL_2(3^p)$, where $p$ is an odd prime.
\item $\AGL'_1(p) \leq \PSL_2(p)$, where $p \geq 7$.
\item $\AGL_1(4) \cong \PSL_2(3) \leq \PSL_3(3)$.
\item $\AGL_1(5) \cong \Sz(2) \leq \Sz(2^p)$.
\end{enumerate}
From the work of Browkin \cite{browkin-2013}, one has $\AGL_1(q) \in \mathcal{S}_3$ when $q \geq 4$. By \cref{lem:semidirect-computation}, it follows that $\AGL_1'(q) \in \mathcal{S}_3$ for $q \geq 7$. In all cases, we may conclude that $G \in \mathcal{S}_3$.
\end{proof}
\end{lemma}
We are now ready to complete the proof of \cref{thm:degree3}. Since we have already shown the result in the case where $\Gal(L/K)$ is solvable, we need only treat the non-solvable case.
\begin{proof}[Proof of \cref{thm:degree3}]
Suppose for the sake of contradiction that there is a finite non-solvable group not in $\mathcal S_3$; let $G$ be such a group of minimal order.
By \cref{lem:subgp-and-quotient} and the minimality of $G$, it follows that all proper subgroups and nontrivial quotients of $G$ are solvable. We initially wish to show that $G$ is simple. Suppose for the sake of contradiction that $G$ is not simple. Then $G$ has a maximal proper nontrivial normal subgroup $N$, so that $G/N$ is simple. If $G/N$ is abelian, by the solvability of $N$, $G$ is also solvable, contradicting our initial assumptions on $G$. If $G/N$ is nonabelian, then $G/N$ is non-solvable, contradicting the fact that $G$ only has solvable nontrivial quotients. Thus, no such $N$ exists, meaning $G$ is simple.
Since all proper subgroups of $G$ are solvable, $G$ is a minimal simple group, which by \cref{lem:minimal-simple-case} means $G \in \mathcal{S}_3$ as desired.
\end{proof}
\section{Conclusion and conjectures}\label{conclusion-and-conjectures}
Theorems \ref{thm:mainthm} and \ref{thm:degree3} show that, up through order $3$, the predictions of the Artin holomorphy conjecture on the orders of zeros of $\zeta_L(s)$ for Galois $L/K$ hold unconditionally. However, Artin holomorphy implies results that are both broader (which apply to non-Galois extensions) and stronger (which guarantee even larger multiplicities). To this end, we present two conjectures as possible extensions of our work.
\begin{conjecture} Let $L/K$ be an extension of number fields, $M$ be the Galois closure of $L/K$ and write $H=\Gal(L/K)\subset G=\Gal(M/K)$. If $\Ind_H^G(1_H)$ contains in its decomposition an irreducible representation of $G$ with nontrivial multiplicity, then $\zeta_L(s)$ has infinitely many nontrivial zeros of multiplicity greater than $1$.
\end{conjecture}
\cref{thm:mainthm} implies this conjecture in some non-obvious cases, e.g.~whenever there~is some $K\subset K'\subset L$ with $L/K'$ a nonabelian Galois extension. Nevertheless, there are cases in~which $\Ind_H^G(1_H)$ may have irreducible components with nontrivial multiplicity even if $H$ is a maximal subgroup of $G$, i.e.~if there are no fields between $K$ and $L$. These cases should be~particularly difficult to treat using methods similar to ours, since there is no obvious way to replace the extension by a smaller one.
\begin{conjecture} \label{conj:higher-mult} Let $L/K$ be a Galois extension of number fields and $G=\operatorname{Gal}(L/K)$. If $G$ has an irreducible representation of degree $m$, then $\zeta_L(s)$ has infinitely many nontrivial zeros of multiplicity at least $m$.
\end{conjecture}
This is known when $G$ is a monomial group, in which case the necessary special cases of the Artin holomorphy conjecture hold unconditionally. On the other hand, both Theorems \ref{thm:mainthm} and \ref{thm:degree3} use particular information about groups with representations of small dimension. To generalize this method to larger $m$, more work is necessary in studying groups with representations of bounded dimension.
\bibliographystyle{alpha}
\section{Introduction}
\label{sec:int}
To study the gravitational field of slowly and uniformly rotating, slightly deformed relativistic objects, Hartle developed in his original work \cite{hartle1967} a method based on the slow-rotation approximation, extending the well-known exterior and interior Schwarzschild solutions. The method allows one to investigate the physical properties of rotating stellar objects in hydrostatic equilibrium. It was first applied to real astrophysical objects by Hartle and Thorne \cite{hartlethorne1968}, employing the Harrison-Wheeler, Tsuruta-Cameron and Harrison-Wakano-Wheeler equations of state. Soon after, the method became known as the Hartle-Thorne
approach, and a series of research papers followed, extending, modifying and improving the original approach by including higher-order multipole moments, corrections in the angular momentum, etc. \cite{stergioulas2003, yagi2014}. Furthermore, the Hartle formalism was tested, compared and contrasted with numerical computations in full general relativity \cite{berti2004, berti2005}. As a result, it was shown that the Hartle formalism can be safely used to study stellar objects with intermediate rotation periods. Only for higher angular velocities, close to the mass-shedding limit, does it show noticeable discrepancies from full general relativistic simulations \cite{stergioulas2003, berti2004, berti2005}.
Similar approaches were developed by Bradley et al. in \cite{bradley2000, fodor2002, bradley2007, bradley2009}, where the slow rotation approximation is used in order to construct interior and exterior solutions to the Einstein field equations. Unlike Hartle, Bradley et al. solved the six independent Einstein equations without involving the integral of the equation of hydrostatic equilibrium for uniformly rotating configurations. Moreover, the Darmois-Israel procedure was applied to match the interior and exterior solutions. In some particular cases, Bradley et al. \cite{bradley2000, fodor2002, bradley2007, bradley2009} included the electric charge by solving the Einstein-Maxwell equations.
In addition, Konno et al. \cite{konno1999} generalized Hartle's approach in the static case to include the deformation of relativistic stars due to the presence of magnetic fields. Afterwards, Konno and coworkers \cite{konno2000} calculated the ellipticity of stars deformed by both magnetic field and rotation, extending their previous results. This method has become popular and found astrophysical applications in the physics of all types of magnetic stars
\cite{konno2001, ioka, colaiuda2008, mallick2014, folomeev}.
On the other hand, independently of Hartle, Sedrakyan and Chubaryan \cite{sedrakyan1968} formulated their own distinctive approach for calculating the exterior gravitational field of equilibrium rigidly rotating superdense stars in the small angular velocity approximation, though it is not well known in the scientific community.
The corresponding interior solution, together with the matching procedure, was obtained in their subsequent paper \cite{sedrakyan21968}.
The manner of solving the Einstein equations was markedly different from Hartle's approach. Further applications of the Sedrakyan-Chubaryan solution to white dwarfs and neutron stars were considered in a number of papers, e.g. \cite{arutyunyan1971, arutyunyan21971, arutyunyan1973}. Numerical results obtained by Arutyunyan et al. \cite{arutyunyan21971} were in agreement with the ones computed by Hartle and Thorne \cite{hartlethorne1968}, implying that there is no contradiction between the two solutions.
Besides, the exterior Sedrakyan-Chubaryan solution was written in an analytic form
\cite{sedrakyan1968, sedrakyan21968, arutyunyan1971, arutyunyan21971, arutyunyan1973} that still required the additional integration of one of the metric functions, under a careful consideration of the boundary conditions. This may be one of the main reasons why the Sedrakyan-Chubaryan solution remains less known in the scientific community; indeed, in this form it cannot be straightforwardly compared and contrasted with the exterior Hartle-Thorne solution. The main goal of the present work is to derive explicitly the exterior Sedrakyan-Chubaryan solution and to establish
its relationship with the Hartle-Thorne solution. In fact, we will show that they are related by means of a coordinate transformation, whose non-trivial part includes only the radial coordinate, and a redefinition of the parameters entering the solution.
This paper is organized as follows. In Sec. \ref{sec:ht}, we present the explicit form of the exterior Hartle-Thorne metric. In Sec. \ref{sec:sc}, we use a particular
line element to derive explicitly all the components of the stationary Sedrakyan-Chubaryan metric up to the second order in the angular velocity. In Sec. \ref{sec:tra},
we find the transformation that establishes the equivalence of the two metrics under consideration. Finally, in Sec. \ref{sec:con}, we review our results.
We will follow the notation of \cite{sedrakyan1968}, and use the geometric units with $G=c=1$ throughout the paper.
\section{The Hartle-Thorne approximate solution}
\label{sec:ht}
The exterior Hartle-Thorne metric describes the gravitational field of a slowly rotating slightly deformed source in vacuum.
In geometric units, the metric is given by \cite{hartle1967}
\begin{eqnarray}\label{HT}
ds^2&=&-\left(1-\frac{2{ M }}{r}\right)\left[1+2k_1P_2(\cos\theta)+2\left(1-\frac{2{M}}{r}\right)^{-1}\frac{J^{2}}{r^{4}}(2\cos^2\theta-1)\right]dt^2 \nonumber \\
&&+\left(1-\frac{2{M}}{r}\right)^{-1}\left[1-2k_2P_2(\cos\theta)-2\left(1-\frac{2{M}}{r}\right)^{-1}\frac{J^{2}}{r^4}\right]dr^2\\
&&+r^2[1-2k_3P_2(\cos\theta)](d\theta^2+\sin^2\theta d\phi^2)-\frac{4J}{r}\sin^2\theta dt d\phi \nonumber\,
\end{eqnarray}
where
\begin{eqnarray}\label{HTk1}
k_1&=&\frac{J^{2}}{{M}r^3}\left(1+\frac{{M}}{r}\right)+\frac{5}{8}\frac{Q-J^{2}/{M}}{{M}^3}Q_2^2\left(\frac{r}{{M}}-1\right), \quad k_2=k_1-\frac{6J^{2}}{r^4}, \\ \nonumber
k_3&=&k_1+\frac{J^{2}}{r^4}+\frac{5}{4}\frac{Q-J^{2}/{M}}{{M}^2r}\left(1-\frac{2{M}}{r}\right)^{-1/2}Q_2^1\left(\frac{r}{M}-1\right), \nonumber\\
P_{2}(x)&=&\frac{1}{2}(3x^{2}-1),\nonumber \\
Q_{2}^{1}(x)&=&(x^{2}-1)^{1/2}\left[\frac{3x}{2}\ln\frac{x+1}{x-1}-\frac{3x^{2}-2}{x^{2}-1}\right],\nonumber\\ Q_{2}^{2}(x)&=&(x^{2}-1)\left[\frac{3}{2}\ln\frac{x+1}{x-1}-\frac{3x^{3}-5x}{(x^{2}-1)^2}\right].\nonumber
\end{eqnarray}
Here $P_{2}(x)$ is the Legendre polynomial of the first kind, $Q_l^m$ are the associated Legendre functions of the second kind, and the constants ${M}$, ${J}$ and ${Q}$ are the total mass, angular momentum and quadrupole moment of the rotating source, respectively.
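As a quick numerical illustration of the multipole interpretation (our own check, not from the original papers), one can verify that for $r \gg M$ the function $k_1$ above tends to $Q/r^3$, so the leading quadrupole correction to $g_{tt}$ is $2k_1 P_2 \approx (2Q/r^3) P_2$. The parameter values below are arbitrary test numbers.

```python
# Numerical check (ours) that k1 defined above behaves as Q/r^3 for
# r >> M.  M, J, Q are arbitrary test values in geometric units.
import math

M, J, Q = 1.0, 0.3, 0.8

def Q22(x):
    # associated Legendre function of the second kind Q_2^2
    return (x**2 - 1) * (1.5 * math.log((x + 1) / (x - 1))
                         - (3 * x**3 - 5 * x) / (x**2 - 1)**2)

def k1(r):
    return (J**2 / (M * r**3) * (1 + M / r)
            + (5.0 / 8.0) * (Q - J**2 / M) / M**3 * Q22(r / M - 1))

for r in (50.0, 200.0):
    # k1 * r^3 -> J^2/M + (Q - J^2/M) = Q, up to O(M/r) corrections
    assert abs(k1(r) * r**3 / Q - 1) < 0.1
print("ok")
```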
Unlike many other solutions of the Einstein equations, the Hartle-Thorne solution has an interior counterpart, which makes it more practical than purely exterior exact solutions. All the interior functions are matched to the exterior ones: the total mass, angular momentum and quadrupole moment of a rotating star are determined from the constants obtained by numerically integrating both the interior and exterior solutions and applying the matching conditions
on the surface of the star.
\section{The Sedrakyan-Chubaryan solution}
\label{sec:sc}
In this section, we derive the approximate Sedrakyan-Chubaryan solution \cite{sedrakyan1968} in detail. We will limit ourselves to the
exterior solution for which we derive all the metric functions.
Following the procedure presented in \cite{sedrakyan1968}, we consider the line element for axially symmetric rotating stars in the form
\begin{equation}
ds^2 = \left( \omega^2 e^\mu \sin^2 \theta - e^\nu \right)dt^2+e^\lambda dr^2+ e^\mu \left( d\theta^2 + \sin^2 \theta d\phi^2 \right)+ 2 \ \omega \ e^\mu \sin^2 \theta \ d \phi dt ,
\label{lel}
\end{equation}
where $\lambda=\lambda(r,\theta)$, \ $\mu =\mu (r,\theta)$, \ $\omega=\omega(r,\theta)$ and $\nu=\nu(r,\theta)$ are functions of the radial coordinate $r$ and the
polar angle $\theta$. Note that $\omega$ is proportional to odd powers of the angular velocity $\Omega$, whereas the remaining functions are proportional to even powers of $\Omega$. We consider here an approximation up to the second order in $\Omega$, and demand that the above
metric satisfies the vacuum Einstein equations in the form
\begin{equation}
G_{\beta}^{\alpha}=R_{\beta}^{\alpha} - \frac{1}{2} R \delta_{\beta}^{\alpha} = 0 \ .
\label{eins}
\end{equation}
In the limiting case of a static star, the angular velocity $\Omega=0$ and the function $\omega=0$; then, $\lambda, \nu$ and $\mu$ are functions of the radial coordinate $r$ only. Obviously, for this special case we automatically obtain the exterior Schwarzschild solution
\begin{eqnarray}
e^\nu&=&e^{\nu_{0}}= \left( 1-\frac{2 m}{r} \right), \\
e^\lambda&=& e^{\lambda_{0}}= \left( 1-\frac{2 m}{r} \right)^{-1},\\
e^\mu&=&e^{\mu_{0}}= r^2 ,
\end{eqnarray}
where $m$ is the static mass.
We now consider the line element (\ref{lel}) for a slowly rotating relativistic star. In this case, we can expand the functions $\lambda$, $\mu$, $\nu$ and $\omega$ in powers of the angular velocity of the star $\Omega$, assuming that $\Omega$ is small. As a parameter for Taylor expanding the metric tensor components, it is convenient to introduce the dimensionless quantity $\beta=\Omega^2/(8 \pi \rho_c)$, where $\rho_c$ is the central density of the configuration. Thus, we define the metric functions as
\begin{eqnarray}
e^{\nu \left(r, \theta \right)}&=& e^{\nu_{0}}\left[1+ \beta\Phi \left(r, \theta \right)\right], \\
e^{\lambda \left(r, \theta \right)}&=& e^{\lambda_{0}}\left[ 1- \beta f \left(r, \theta \right)\right], \\
e^{\mu \left(r, \theta \right)}&=& e^{ \mu _{0}}\left[1+ \beta U \left(r, \theta \right) \right], \\
\omega \left(r, \theta \right) &=& \sqrt{\beta} q \left(r \right) \ ,
\end{eqnarray}
where the functions \ $\mu_{0}$,\ $\nu_{0}$ and $\lambda_{0}$ represent the Schwarzschild solution, and $U$, $\Phi$, and $f$ are unknown functions.
To find the independent differential equations from the Einstein field equations, we make use of the following combinations
\begin{equation}
G_{1}^{1}-G_{0}^{0}=0, \ \ \ G_{2}^{2}+G_{3}^{3}=0, \ \ \ G_{2}^{1}=0, \ \ \ G_{0}^{3}=0.
\label{con_eqn}
\end{equation}
In order to solve each component of Eq.(\ref{con_eqn}), all the metric functions are expanded in spherical harmonics. In turn, this procedure allows one to separate variables. Retaining only the terms responsible for the quadrupolar deformation, we have
\begin{eqnarray}
\Phi\left(r,\theta \right)&=&\sum_{l=0}^\infty \Phi_l \left(r \right) P_l \left( \cos \theta \right)\approx \Phi_{0} \left(r \right)P_{0} \left(\cos \theta \right)+\Phi_{2} \left(r \right) P_{2} \left(\cos \theta \right),
\label{spher_Phi}\\
f\left(r,\theta \right)&=&\sum_{l=0}^\infty f_l \left(r \right) P_l \left( \cos \theta \right)\approx f_{0} \left(r \right)P_{0} \left(\cos \theta \right)+f_{2} \left(r \right) P_{2} \left(\cos \theta \right),
\label{spher_f}\\
U\left(r,\theta \right)&=&\sum_{l=0}^\infty U_l \left(r \right) P_l \left( \cos \theta \right)\approx U_{0} \left(r \right)P_{0} \left(\cos \theta \right)+U_{2} \left(r \right) P_{2} \left(\cos \theta \right),
\label{spher_U}
\end{eqnarray}
where $P_{0} (\cos \theta)$ and $P_{2} (\cos \theta)$ are the Legendre polynomials of the first kind
\begin{eqnarray}
P_{0} (\cos \theta)=1, \ \ P_{2} (\cos \theta)= -\frac{1}{2} \left(1-3 \cos^2 \theta \right).
\label{P02}
\end{eqnarray}
Note that, because the axis of symmetry is oriented along the rotation axis, the expansion in spherical harmonics contains only even values of $l$. Moreover, in the slow rotation approximation $l$ takes only two values: $l=0$ and $l=2$.
The component $G_{3}^{0}$ (equivalently $G_{0}^{3}$) of the Einstein tensor yields a differential equation proportional to $\Omega$,
\begin{equation}
q_{,rr}+ \frac{4 \ q_{,r}}{r}=0,
\end{equation}
where $q_{,r}= \frac{\partial q}{\partial r}$, etc. The solution to the last equation is
\begin{equation}
q(r)=\frac{C_q}{r^3},
\label{solomega}
\end{equation}
where $C_{q}$ is a constant to be determined from the matching between the interior and exterior solutions.
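This solution can be verified directly (a check of ours, not in the original text): with $q = C_q/r^3$ one has $q_{,r} = -3C_q/r^4$ and $q_{,rr} = 12C_q/r^5$, so the two terms of the equation cancel identically.

```python
# Direct check (ours) that q(r) = C_q / r^3 solves q'' + 4 q'/r = 0,
# using the exact derivatives q' = -3 C_q / r^4 and q'' = 12 C_q / r^5.
def residual(C, r):
    dq = -3.0 * C / r**4     # q'
    d2q = 12.0 * C / r**5    # q''
    return d2q + 4.0 * dq / r

for r in (2.0, 5.0, 10.0):
    assert abs(residual(1.7, r)) < 1e-14
print("ok")
```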
Now we can substitute Eqs. (\ref{spher_Phi}), (\ref{spher_f}) and (\ref{spher_U}) in Eq.(\ref{con_eqn}). The resulting equation can then be expanded up to the first order in $\beta$ for the different values of $l$. In the case $l=0$, we obtain the following differential equations
\begin{eqnarray}
&&U_{0,rr}+ \frac{1}{r}\bigg[ 2 U_{0,r} + f_{0,r}-\Phi_{0,r} \bigg] =0,
\label{10eq0} \\
&&\Phi_{0,rr}+U_{0,rr}+ \frac{1}{r \left(r-2m \right)}\bigg[\left(m+r\right) \Phi_{0,r} +\left( r-m\right) \left(f_{0,r}+2U_{0,r} \right)-\frac{6}{r^4 } C^2_{q} \bigg] =0.
\label{l0eq2}
\end{eqnarray}
In general, it is not possible to solve the above system of equations, because the number of unknown functions is greater than the number of differential equations.
It is therefore necessary to impose an additional equation that closes the system of differential equations. Several possibilities are available. An analysis
of the line element (\ref{lel}) shows that in the lowest approximation of a spherically symmetric field, the metric components $g_{tt}$ and $g_{rr}$ satisfy the
relationship $g_{rr}=-1/g_{tt}$. Consequently, we can assume the following condition $f_{0} \left(r \right)=\Phi_{0} \left(r \right)$ which allows one to easily solve the system of equations \cite{sedrakyan1968}. In addition, if at large distances, we impose the conditions $U_{0} \left(r \rightarrow \infty \right)=0$,
$\Phi_{0} \left( r \rightarrow\infty \right)=0$ and $f_{0} \left( r \rightarrow\infty \right)=0$, we find
\begin{eqnarray}
&&U_{0} \left(r \right) = \frac{C_{U_{0}}}{r} ,
\label{solu0}\\
&&\Phi_{0}\left(r \right) = f_{0} \left(r \right) = \frac{ C^2_{q} +2C_{U_{0}} \ m r^2- 2C_{f_{0}} r^3 }{2 \ r^3 \left(r-2m \right)},
\label{soluf0}
\end{eqnarray}
where $C_{U_{0}}$ and $C_{f_{0}}$ are the integration constants of the corresponding functions.
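The closed forms above can be tested numerically (our own sanity check, not part of the original derivation) by substituting them back into Eqs. (\ref{10eq0}) and (\ref{l0eq2}) with $f_0 = \Phi_0$; the constants below are arbitrary test values in units with $m = 1$.

```python
# Numerical check (ours): U0 = C_U0/r and Phi0 = f0 from (soluf0)
# satisfy Eqs. (10eq0) and (l0eq2).  Derivatives are approximated by
# central finite differences; constants are arbitrary test values.
m, Cq, CU0, Cf0 = 1.0, 0.7, 1.3, -0.4

def U0(r):
    return CU0 / r

def Phi0(r):
    return (Cq**2 + 2 * CU0 * m * r**2 - 2 * Cf0 * r**3) / (2 * r**3 * (r - 2 * m))

def d(f, r, h=1e-4):   # first derivative
    return (f(r + h) - f(r - h)) / (2 * h)

def d2(f, r, h=1e-4):  # second derivative
    return (f(r + h) - 2 * f(r) + f(r - h)) / h**2

for r in (3.0, 5.0, 9.0):
    # (10eq0): the f0' and Phi0' terms cancel since f0 = Phi0
    eq1 = d2(U0, r) + 2 * d(U0, r) / r
    # (l0eq2) with f0 = Phi0
    eq2 = (d2(Phi0, r) + d2(U0, r)
           + ((m + r) * d(Phi0, r)
              + (r - m) * (d(Phi0, r) + 2 * d(U0, r))
              - 6 * Cq**2 / r**4) / (r * (r - 2 * m)))
    assert abs(eq1) < 1e-6 and abs(eq2) < 1e-6
print("ok")
```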
From Eqs.(\ref{con_eqn}), we can reduce the field equations with the $l=2$ terms to:
\begin{eqnarray}
&&U_{2,rr}+ \frac{1}{r} \bigg[2 U_{2,r} +f_{2,r}- \Phi_{2,r}+\frac{3}{r-2 m} \left(\Phi_{2}+f _{2} \right) \bigg]=0 ,
\label{q2_U} \\
&&\Phi_{2,rr}+U_{2,rr} +\frac{r-m}{r \left(r-2m \right)} \bigg[2 U_{2,r} + f_{2,r} + \frac{1}{r-m} \left( \left(r+m \right) \Phi_{2,r} +3\left(f_2-\Phi_2 \right) + \frac{6 C^2_{q}}{r^4} \right) \bigg]=0 ,
\label{q2_Urr} \\
&&\Phi_{2,r}+U_{2,r}-\frac{1}{r \left(r-2m \right)} \bigg[ \left(r-3m \right) \Phi_{2} - \left(r-m \right) f_{2} \bigg]=0 .
\label{q2_UP}
\end{eqnarray}
To solve this system of equations, we isolate $U_{2,r}$ from Eq.(\ref{q2_UP}), then we calculate $U_{2,rr}$ and substitute the resulting expressions in
Eq.(\ref{q2_Urr}). This gives the relationship
\begin{equation}
f_{2}\left(r \right)= \Phi_{2}\left(r \right)- \frac{3 \ C^2_{q}}{r^4}\ .
\label{f2_Phi2}
\end{equation}
Subsequently, the solutions of Eqs.(\ref{q2_U}), (\ref{q2_Urr}) and (\ref{q2_UP}) can be expressed as
\begin{eqnarray}
&&\Phi_2 \left(r \right)=\frac{C^2_{q}}{2}\left(\frac{1}{m r^3}+\frac{1}{r^4}\right) - \frac{3 C_{\Phi_2}}{4}
\left(1-\frac{2m}{r} \right)r^2 \ln \left(1- \frac{2 m}{r} \right)-\frac{ \left(3r^2-6mr-2m^2 \right) \left( r-m\right) m C_{\Phi_2}}{2 r\left(r- 2m\right)}
\label{phi2}, \\
&&f_2 \left(r \right)= \frac{C^2_{q}}{2}\left(\frac{1}{m r^3}-\frac{5}{r^4}\right)- \frac{3 C_{\Phi_2}}{4} \left(1-\frac{2m}{r} \right)r^2 \ln \left(1- \frac{2m}{r} \right)-\frac{ \left(3 r^2-6 m r-2 m^2\right)\left( r-m\right) m C_{\Phi_2}}{2 r\left(r- 2m\right)}
\label{f2}, \\
&&U_{2} \left(r \right)=-\frac{C^2_{q}}{2}\left(\frac{1}{m r^3}+\frac{2}{r^4}\right)+ \frac{3 C_{\Phi_2}}{4} \left(1- \frac{2m^2}{r^2} \right) r^2 \ln \left(1- \frac{2m}{r} \right)+ \frac{\left(3 r^2+3 m r-2 m^2\right)m C_{\Phi_2}}{2r}
\label{U2}.
\end{eqnarray}
Note that, due to the asymptotic flatness condition $U_{2} \left( r\rightarrow\infty \right)\rightarrow0$, the integration constant of (\ref{U2}) is related to
$C_{\Phi_2}$ as
\begin{equation}
C_{\Phi_2}=-\frac{C_{U_{2}}}{3m^2}.
\end{equation}
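The asymptotic flatness of (\ref{U2}) can also be confirmed numerically (our own check): the $O(r)$ and $O(1)$ parts of the two $C_{\Phi_2}$ terms cancel, leaving $U_2 = O(1/r^3)$ at large $r$. The test values below set $m = C_q = C_{\Phi_2} = 1$.

```python
# Numerical check (ours) that U2 in (U2) vanishes at large r like 1/r^3
# (asymptotic flatness), with arbitrary test values m = Cq = CPhi2 = 1.
import math

m, Cq, CPhi2 = 1.0, 1.0, 1.0

def U2(r):
    log = math.log1p(-2 * m / r)   # ln(1 - 2m/r), accurate for large r
    return (-(Cq**2 / 2) * (1 / (m * r**3) + 2 / r**4)
            + (3 * CPhi2 / 4) * (1 - 2 * m**2 / r**2) * r**2 * log
            + (3 * r**2 + 3 * m * r - 2 * m**2) * m * CPhi2 / (2 * r))

assert abs(U2(1e2)) < 2e-6   # leading behaviour ~ -1.3 / r^3
assert abs(U2(1e3)) < 2e-9
print("ok")
```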
Finally, we can rewrite the metric tensor components of the line element (\ref{lel}) as:
\begin{eqnarray}
g_{00}
&=&\omega^2 e^\mu \sin^2 \theta - e^\nu\approx \beta \left(\frac{C_q}{r^3} \right)^2r^2\sin^2 \theta -\left( 1-\frac{2 m}{r} \right)\left[1+ \beta \langle \Phi_{0} \left(r \right)+\Phi_{2} \left(r \right) P_{2} \left(\cos \theta \right) \rangle \right] \\ \nonumber
&=&- \left(1- \frac{2m}{r} \right)
\bigg\{1+ \beta \bigg\langle \frac{ C^2_{q} +2C_{U_{0}} \ m r^2- 2C_{f_{0}} r^3 }{2 \ r^3 \left(r-2m \right)}
\\ \nonumber
&+& \bigg[\frac{C^2_{q}}{2}\left(\frac{1}{m r^3}+\frac{1}{r^4}\right) - \frac{3 C_{\Phi_2}}{4}
\left(1-\frac{2m}{r} \right)r^2 \ln \left(1- \frac{2 m}{r} \right) \\ \nonumber
&-& \frac{ \left(3r^2-6mr-2m^2 \right) \left( r-m\right) m C_{\Phi_2}}{2 r\left(r- 2m\right)}
\bigg]P_{2}\left(\cos \theta \right)-\left(1- \frac{2m}{r} \right)^{-1}\frac{C^2_{q}}{r^4}\sin^2\theta \bigg\rangle \bigg\}
\label{g00}, \\
g_{11}&=&e^\lambda\approx\left( 1-\frac{2 m}{r} \right)^{-1}\left[ 1- \beta \langle f_{0} \left(r \right)+f_{2} \left(r \right) P_{2} \left(\cos \theta \right)\rangle \right] \\ \nonumber
&=&\left( 1-\frac{2m}{r} \right)^{-1}
\bigg\{1-\beta \bigg\langle \frac{ C^2_{q} +2C_{U_{0}} \ m r^2- 2C_{f_{0}} r^3 }{2 \ r^3 \left(r-2m \right)}\\ \nonumber
&-&\bigg[\frac{C^2_{q}}{2}\left(\frac{1}{m r^3}-\frac{5}{r^4}\right)- \frac{3 C_{\Phi_2}}{4} \left(1-\frac{2m}{r} \right)r^2 \ln \left(1- \frac{2m}{r} \right)\\ \nonumber
&-&\frac{ \left(3 r^2-6 m r-2 m^2\right)\left( r-m\right) m C_{\Phi_2}}{2 r\left(r- 2m\right)}\bigg] P_{2}\left(\cos \theta \right) \bigg\rangle \bigg\}, \\
g_{22}&=&e^\mu\approx r^2\left[1+\beta \langle U_{0} \left(r \right)+U_{2} \left(r \right) P_{2} \left(\cos \theta \right) \rangle \right] \\ \nonumber
&=&r^2 \bigg\{1+\beta \bigg\langle \frac{C_{U_{0}}}{r} + \bigg[
-\frac{C^2_{q}}{2}\left(\frac{1}{m r^3}+\frac{2}{r^4}\right) + \frac{3 C_{\Phi_2}}{4} \left(1- \frac{2m^2}{r^2} \right) r^2 \ln \left(1- \frac{2m}{r} \right)\\ \nonumber
&+& \frac{\left(3 r^2+3 m r-2 m^2\right)m C_{\Phi_2}}{2r} \bigg]P_{2}\left(\cos \theta \right) \bigg\rangle
\bigg\}, \\
g_{33}&=&g_{22}\sin^2 \theta, \\
g_{30}&=&g_{03}=\omega e^\mu \sin^2 \theta \approx\frac{C_{q}\sqrt{\beta}}{r}\sin^2 \theta.
\end{eqnarray}
All the constants are to be determined by matching the corresponding interior solution on the surface of the star.
\section{The relation between the Hartle-Thorne and the Sedrakyan-Chubaryan metrics}
\label{sec:tra}
In general, to establish the equivalence between two spacetimes in an invariant way it is necessary to perform a detailed analysis of the corresponding
curvature tensors and their covariant derivatives \cite{mac15}. The problem can be simplified significantly, if it is possible to find the explicit diffeomorphism
that relates the two spacetimes. In the case that the spacetimes are approximate solutions of the field equations, the problem simplifies even further because the coordinate transformation must be valid only approximately. This is the case we are analyzing in the present work.
In order to compare the Sedrakyan-Chubaryan solution with the Hartle-Thorne solution, we will find a coordinate transformation so that both solutions are written in the same coordinates. A close examination of the Sedrakyan-Chubaryan solution shows that it is indeed possible when one chooses the radial coordinate transformation of the type
\begin{equation}
r\rightarrow r\left(1-\frac{\beta}{2} U_0(r)\right)\ ,
\end{equation}
and keeps the remaining coordinates unchanged. Notice that the practical effect of this transformation is to absorb the function $ U_0(r)$. This means that,
without loss of generality, we can set $U_0(r)=0$ (or, equivalently, $C_{U_0}=0$) in the Sedrakyan-Chubaryan solution and thus it becomes equivalent to the
Hartle-Thorne solution, up to a redefinition of the constants entering the metric. Indeed, we now have only three integration constants, namely,
$C_{f_0}$, $C_{q}$ and $C_{\Phi_2}$ which are directly related to the total mass, angular momentum and quadrupole moment of the Hartle-Thorne solution.
In fact, by comparing the $g_{tt}$ and $g_{t\phi}$ components of the metric tensor, we obtain
\begin{equation}
M=m+ \frac{\beta}{2}C_{f_0},
\label{M}
\end{equation}
\begin{equation}
J=-\frac{\sqrt{\beta}}{2}C_{q},
\label{J}
\end{equation}
\begin{equation}
Q=\frac{\beta}{2}\bigg(\frac{C_{q}^2}{2m}-\frac{4m^5C_{\Phi_2}}{5} \bigg).
\label{Q}
\end{equation}
Notice that $M$ in the Hartle-Thorne solution is actually composed of two terms, $M=m+\delta m$, where $m$ is the ``static mass''
and $\delta m$ is the contribution due to the rotation
of the source. This means that in fact the last equations relate four constants of the
Hartle-Thorne solution with four constants of the Sedrakyan-Chubaryan
solution, implying that the inverse transformation is well defined. This proves the mathematical and physical equivalence of the two spacetimes up to the
first order in the quadrupole moment $Q$ and the second order in the angular momentum $J$.
There is an additional way to prove the equivalence of two spacetimes, namely, in terms of their multipole moments.
In fact, it has been proved that two spacetimes with the same set of
multipole moments are isomorphic to each other (see, for instance, \cite{quev90} for a review on this issue). In the case of approximate solutions, one can assert that two spacetimes are isomorphic if they have the same set of multipole moments up to the validity order of the approximation. For the approximate spacetimes under consideration in this work, this means that the Hartle-Thorne solution and the Sedrakyan-Chubaryan solution are equivalent if their mass, angular momentum and quadrupole moment are the same.
In \cite{George} and \cite{Ryan1995}, it has been shown that in the approximate case, the moments can be derived from the explicit expression of the $g_{tt}$ metric component. For the Hartle-Thorne metric we obtain
\begin{equation}
g^{HT}_{tt}= -1+\frac{2M}{R}+\frac{2}{3}\frac{J^2}{R^4}+
\left[ \frac{2\left( Q-\frac{2J^2}{M}\right)}{R^3}+\frac{2\left( QM-\frac{4J^2}{3}\right)}{R^4}\right] P_{2}\left(\cos \Theta \right),
\label{mHTtt}
\end{equation}
whereas the corresponding expression for the Sedrakyan-Chubaryan metric reduces to
\begin{equation}
g^{SC}_{tt}=-1+\frac{2m+\beta C_{f_0}}{R}-\frac{\beta m C_{U_0}}{R^2}+\frac{1}{6}\frac{\beta C^{2}_{q}}{R^4}- \beta \left(
\frac{\frac{1}{2}\frac{C^2_{q}}{m}+\frac{4}{5}m^{5} C_{\Phi_2}}{R^3}+
\frac{\frac{1}{6}C^2_{q}+\frac{4}{5}m^{6} C_{\Phi_2}}{R^4}
\right) P_{2}\left(\cos \Theta \right).
\label{mSCtt}
\end{equation}
A comparison of Eqs.(\ref{mHTtt}) and (\ref{mSCtt}) shows that if we define the constants as
\begin{equation}
M=m+ \frac{\beta}{2}C_{f_0},
\label{multiM}
\end{equation}
\begin{equation}
C_{U_0}=0,
\label{multiCU0}
\end{equation}
\begin{equation}
J=-\frac{\sqrt{\beta}}{2}C_{q},
\label{multiJ}
\end{equation}
\begin{equation}
Q=\frac{\beta}{2}\bigg(\frac{C_{q}^2}{2m}-\frac{4m^5C_{\Phi_2}}{5} \bigg)\ ,
\label{multiQ}
\end{equation}
then the moments are exactly the same. This proves the equivalence of the two metrics up to the fourth order in $1/R$.
\section{Conclusions}
\label{sec:con}
In this work, we reviewed the original papers by Hartle (1967) and Hartle and Thorne (1968), and discussed their main properties, extensions and modifications.
We revisited the results of Sedrakyan and Chubaryan (1968) for the metric that describes the exterior field of an axially symmetric mass distribution.
Using a perturbation procedure, we derived the Sedrakyan-Chubaryan solution explicitly which includes several integration constants. Instead of
using the interior Sedrakyan-Chubaryan solution in order to find the integration constants, we compare the exterior metric with the
exterior Hartle-Thorne spacetime solution in the same coordinates. As a result, we obtain a set of simple algebraic expressions relating
the main parameters of the Hartle-Thorne metric with the integration constants of the Sedrakyan-Chubaryan solution. Alternatively,
we calculated the relevant multipole moments of both solutions, and showed that they are the same.
In this way,
we also proved the mathematical and physical equivalence of the two spacetimes.
We conclude that the Sedrakyan-Chubaryan solution can be considered as an alternative approach to describe the gravitational field of
a slightly deformed stationary axially symmetric mass distribution in the slow rotation approximation.
Moreover, the Sedrakyan-Chubaryan solution, with its internal counterpart, can be applied to various astrophysical problems together
with the Hartle-Thorne solution on an equal footing.
On the other hand, in a previous work \cite{bqr12} it was shown that the Hartle-Thorne formalism for the approximate
description of rotating mass distributions is equivalent to the Fock-Abdildin approach. The latter, however,
allows us to interpret the parameters of the interior solution in terms of physical quantities like the rotational kinetic
energy or the mutual gravitational attraction between the particles of the source. Therefore, the results obtained in this work
imply that it should be possible to find a direct relationship between the interior Sedrakyan-Chubaryan solution
and the corresponding counterpart in the Fock-Abdildin approach.
It is interesting that different approaches that were developed independently in different places and under diverse circumstances
turn out to be equivalent from a mathematical point of view. It would be interesting to perform a more detailed analysis of
all the physical characteristics of each approach in order to propose a unique formalism that would incorporate the advantages
of all the known approaches.
All analytical computations in this work have been performed with the help of the computer algebra package Maple 18.
\section*{Acknowledgements}
We thank an anonymous referee for interesting remarks regarding multipole moments.
The authors acknowledge the support of the grants No. 3101/GF4 IPC-11 and No. F.0679 0073/PTsF, and K.B. acknowledges the grants for the university best teachers-2015 and talented young scientists 2015-2016 of the Ministry of Education and Science of the Republic of Kazakhstan. K.B. is grateful to Academician D.M.~Sedrakyan for providing a copy of his doctoral dissertation. This work was partially supported by DGAPA-UNAM, Grant No. 113514, and Conacyt, Grant No. 166391.
\subsection{GENERAL DEFINITIONS AND PROPERTIES}
In \cite{schwinger}, two basic unitary $d\times d$ matrices $U,\ V$ are introduced.
Let
\begin{equation}} \def\edq{\end{equation}
\label{q}
q:= \exp\left(\frac{2i\pi}{d}\right)
\edq
They are of the following form:
\begin{equation}} \def\edq{\end{equation}
\label{U}
U:={\rm Diag}(1,q,q^2,..., q^{d-1})
\edq
\begin{equation}} \def\edq{\end{equation}
\label{V}
V:= \left(
\begin{array}{cccccc}
0&1&0&.&.&0\\
0&0&1&.&.&0\\
.&.&.&.&.&.\\
.&.&.&.&.&.\\
0&0&0&.&.&1\\
1&0&0&.&.&0
\end{array}\right)
\edq
\begin{lemma}
(i) $U,\ V$ obey the ``q-commutation rule'':
\begin{equation}} \def\edq{\end{equation}
\label{qcom}
VU=qUV
\edq
(ii) The Vandermonde matrix $P_{0}$ whose matrix elements for $j,k \in \left\{0,1,...,
d-1\right\}$ are defined by
\begin{equation}} \def\edq{\end{equation}
\label{VDM}
(P_{0})_{j,k}:= d^{-1/2}q^{jk}
\edq
is such that
\begin{equation}} \def\edq{\end{equation}
\label{diagV}
V=P_{0}UP_{0}^*
\edq
\end{lemma}
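As a quick numerical sanity check (a plain-Python sketch, not part of the original text; the dimension $d=5$ is purely illustrative), one can verify both the q-commutation rule and the diagonalization of $V$ by the Vandermonde matrix $P_{0}$:

```python
import cmath

d = 5  # any dimension d >= 2 works; 5 is just an illustration
q = cmath.exp(2j * cmath.pi / d)

# U = diag(1, q, ..., q^{d-1}) and the cyclic-shift matrix V
U = [[q**a if a == b else 0 for b in range(d)] for a in range(d)]
V = [[1 if b == (a + 1) % d else 0 for b in range(d)] for a in range(d)]
# Vandermonde matrix (P_0)_{ab} = q^{ab} / sqrt(d)
P0 = [[q**(a * b) / d**0.5 for b in range(d)] for a in range(d)]

def mul(A, B):
    return [[sum(A[i][l] * B[l][j] for l in range(d)) for j in range(d)]
            for i in range(d)]

def dag(A):  # conjugate transpose
    return [[A[b][a].conjugate() for b in range(d)] for a in range(d)]

def close(A, B):
    return all(abs(A[a][b] - B[a][b]) < 1e-10 for a in range(d) for b in range(d))

# q-commutation rule: VU = q UV
assert close(mul(V, U), [[q * x for x in row] for row in mul(U, V)])
# diagonalization of V: V = P_0 U P_0^*
assert close(V, mul(P0, mul(U, dag(P0))))
print("Schwinger-pair identities verified")
```

Plain nested lists are used instead of a linear-algebra library so that the check stays self-contained.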
\begin{definition}
For any $k\in \left\{0,1,...,d-1\right\}$ we define:
\begin{equation}} \def\edq{\end{equation}
\label{Vk}
V_{k}:=VU^k = \left(
\begin{array}{cccccc}
0&q^k&0&.&.&0\\
0&0&q^{2k}&.&.&0\\
.&.&.&.&.&.\\
.&.&.&.&.&.\\
0&0&0&.&.&q^{k(d-1)}\\
1&0&0&.&.&0
\end{array}\right)
\edq
\end{definition}
\begin{remark}
The matrices $V_{k}$ have been first introduced in the study of MUB by Kibler-Planat
\cite{kipla}.
\end{remark}
\begin{definition}
(i) We say that a $d \times d$ unitary matrix $A$ is ``unbiased'' if all its matrix elements $A_{j,k}$
satisfy
\begin{equation}} \def\edq{\end{equation}
\label{biased}
\vert A_{j,k}\vert = d^{-1/2}, \ \forall j,k \in \left\{0,1,..., d-1\right\}
\edq
(ii) We say that two $d \times d$ unitary matrices $A,\ B$ are ``mutually unbiased'' if the
matrix $A^*B$ is unbiased.
\end{definition}
Thus finding MUB's in dimension $d$ amounts to exhibiting a set, which we call a MUM (a set of mutually
unbiased matrices), of the following form:
\begin{equation}} \def\edq{\end{equation}
\label{MUM}
\left\{ {\rm 1\mskip-4.5mu l} _{d}, P_{0}, P_{1},..., P_{m}\right\}
\edq
(where ${\rm 1\mskip-4.5mu l} _{d}$ denotes the identity $d\times d$ matrix) such that $P_{j}, \ j \in
\left\{0,1,...,m\right\}$ are ``unbiased'', and $P_{j},\ P_{k},\ j,k \in \left\{
0,1,...,m\right\},\ j\ne k$ are ``mutually unbiased''.
\begin{proposition}
(i) Let, for any $k \in \left\{0,1,..., d-1\right\}$, $P_{k}$ be a unitary $d \times d$ matrix, and
$D_{k}$ be the unitary diagonal matrix such that
\begin{equation}} \def\edq{\end{equation}
\label{diagVk}
V_{k}= P_{k}D_{k}P_{k}^*
\edq
Then all matrices $P_{k}$ are ``unbiased matrices''.\\
(ii) Furthermore $D_{0}\equiv U$.
\end{proposition}
\begin{lemma}
For any $k \in \left\{0,1,...,d-1\right\}$ one has
\begin{equation}} \def\edq{\end{equation}
\label{UkP0}
U^k P_{0}= P_{0}(V^*)^k
\edq
\end{lemma}
Proof: It is known \cite{berndt} (and easy to check) that
$P_{0}^2=W$ where $W\equiv W^*$ is the permutation matrix
\begin{equation}} \def\edq{\end{equation}
\label{perm}
W:= \left(
\begin{array}{ccccccc}
1&0&0&0&.&.&0\\
0&0&0&0&.&.&1\\
.&.&.&.&.&.&.\\
.&.&.&.&.&.&.\\
0&0&1&0&.&.&0\\
0&1&0&0&.&.&0
\end{array}\right)
\edq
We want to prove that:
$$V^*=P_{0}^*UP_{0}$$
But using Lemma 2.2, this is equivalent to:
$$V^*= P_{0}^{*2}VP_{0}^2\equiv WVW$$
which follows immediately from the property of the selfadjoint matrix $W$ that:
$$WV^*=VW$$
Thus we have proven the Lemma for $k=1$. The general statement follows by induction
since:
\begin{equation}} \def\edq{\end{equation}
\label{100}
U^kP_{0}= U U^{k-1}P_{0}= UP_{0}(V^*)^{k-1}= P_{0}(P_{0}^*UP_{0})(V^*)^{k-1}
=P_{0}(V^*)^k
\edq
\begin{proposition}
For {\bf any dimension $d\ge 2$}, if $P_{1}$ is a unitary $d \times d$ matrix such that
$$V_{1}= P_{1}D_{1}P_{1}^*$$
then the matrices $P_{1},\ P_{0}$ are mutually unbiased $d \times d $ matrices.
\end{proposition}
Proof: One has, using Lemma 2.1 (ii) and Lemma 2.6 for $k=1$ that:
$$P_{0}^*V_{1}P_{0} =P_{0}^* VUP_{0}= P_{0}^*VP_{0}V^*= UV^*$$
Thus
$$P_{0}^*P_{1}D_{1}P_{1}^*P_{0}= UV^*$$
which means that all column vectors of $P_{0}^*P_{1}$ are eigenstates of $UV^*$ with
eigenvalues being the diagonal elements of $D_{1}$ which are all of modulus 1. Since
any eigenstate $v:= (v_{0},v_{1},..., v_{d-1})$ of the matrix $UV^*$ satisfies $\vert v_{j}
\vert =\vert v_{k}\vert, \ \forall j,k \in \left\{0,1,..., d-1\right\}$ and $P_{0}^*P_{1}$
is unitary, this implies the result.\\
\hbox {\rlap{$\sqcap$}$\sqcup$}\\
\begin{corollary}
For any integer $d\ge 2$, there are {\bf at least three MUB's} given by the bases defined by ${\rm 1\mskip-4.5mu l} _{d}, P_{0}
, P_{1}$.
\end{corollary}
The existence of at least 3 MUB's in any dimension is proven in \cite{klaro}.
\subsection{THE EVEN CASE}
Let $d$ be {\bf even}. Then the determinants of both $U$ and $V$ equal $-1$. Indeed
$$\det U = q^{\frac{d(d-1)}{2}}=e^{i\pi(d-1)}=-1,$$
while $V$, being a cyclic permutation of length $d$, has $\det V=(-1)^{d-1}=-1$.
\\
The matrix $V_{1}=VU$ has thus determinant $+1$, which means that it is
unitarily equivalent to $\omega U$, where
unitarily equivalent to $\omega U$, where
$$\omega:= \exp\left(\frac{i\pi}{d}\right)$$
The eigenstate $v^{(1)}:= (1,a_{1}, a_{2},..., a_{d-1})$ of $V_{1}$ with eigenvalue
$\omega$ is such that $a_{1}= \omega^{-1}= a_{d-1}$, and obeys the recurrence relation
$$a_{k}= \omega^{1-2k}a_{k-1}$$
Thus, solving the recurrence relation, we have:
$$a_{k}= \omega^{\sum_{j=1}^k (1-2j)}= \omega^{k-k(k+1)}= \omega^{-k^2}$$
More generally the eigenstate $v^{(j)}:= (1, b_{1}, ..., b_{k}, ..., b_{d-1})$ of $V_{1}$
with eigenvalue $\omega^{2j+1}$ is such that
$$b_{k}= \omega^{2jk-k^2}\equiv q^{jk}\omega^{-k^2}$$
This implies:
\begin{proposition}
(i) The matrix $P_{1}$ defined by:
$$P_{1}= D' P_{0}$$
with
$$D':= {\rm diag}(1,\omega^{-1},..., \omega^{-k^2},..., \omega^{-1})$$
diagonalizes $V_{1}$, namely $D_{1}\equiv \omega U$:
$$V_{1}= \omega P_{1}UP_{1}^*$$
(ii) The property already shown that $P_{0},\ P_{1}$ are mutually unbiased reflects itself
in the identity
$$\vert {\rm Tr}D'\vert=\left\vert \sum_{k=0}^{d-1}\omega^{k^2}\right\vert = \sqrt d$$
\end{proposition}
The proof of (i) is obvious.
Furthermore (ii) results from a known property in number theory
\cite{berndt}, that if $d$ is even, then
$$\sum_{k=0}^{d-1}\exp\left(k^2\frac{i\pi}{d}\right)= \sqrt d \exp\left(
\frac{i\pi}{4}\right)$$
\hbox {\rlap{$\sqcap$}$\sqcup$}\\
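This identity is easy to confirm numerically; the following plain-Python check (not part of the original text) evaluates the sum for several even dimensions:

```python
import cmath, math

# for even d:  sum_{k=0}^{d-1} exp(i pi k^2 / d) = sqrt(d) exp(i pi / 4)
for d in (2, 4, 6, 8, 10, 12):
    S = sum(cmath.exp(1j * math.pi * k * k / d) for k in range(d))
    expected = math.sqrt(d) * cmath.exp(1j * math.pi / 4)
    assert abs(S - expected) < 1e-9, (d, S)
print("even-d quadratic Gauss sums verified")
```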
\noindent
For $d$ even but {\bf not a power of 2}, the maximum number of MUB's is not
known. For example, for $d=6$ it is conjectured that $N(6)=3$ (see Section 6
where an explicit set of 3 MUB's is constructed). For $d=0\ (\mbox{mod 4})$,
it is known that the ``tensor-product method'' provides sets of more than 3 MUB's (see \cite{klaro}).
In Section 7, we make explicit this construction of 4 (resp 5) MUB's in the case $d=12$
(resp $d=20$).
\subsection{THE ODD CASE}
\begin{definition}
Let us define $F_{d}:= \mathbb Z / d\mathbb Z$, the ring of residues of
$n\ (\mbox{mod}\ d)$ (a field when $d$ is prime).
\end{definition}
\begin{theorem}
Let $d \in \mathbb N$ be an {\bf odd} number. Define the unitary diagonal matrix $D$ as
\begin{equation}} \def\edq{\end{equation}
\label{D}
D:= {\rm diag}(1,q, q^3,...,q^{\frac{j(j+1)}{2}}, ...,1)
\edq
Then we have:\\
(i) The matrices $V_{k}, \ k \in F_{d}$ are all unitarily equivalent to $U$.\\
(ii) Let $P_{k}:= D^{-k}P_{0}$; then, for all $k \in F_{d}$ one has:
$$P_{k}^*V_{k}P_{k}= U$$
In other words if $P_{0}= (v_{0}, v_{1},...,v_{d-1})$, then
$$P_{k}^*= (v_{0}, q^kv_{d-1}, ..., q^{kj(j+1)/2}v_{d-j},..., v_{1})$$
(iii) $\forall k \in F_{d}, \ \mbox{such that}\ d,\ k$ are co-prime, one has
\begin{equation}} \def\edq{\end{equation}
\label{tr}
\vert {\rm Tr}D^{k}\vert = \sqrt d
\edq
\end{theorem}
Proof: (i) is a consequence of (ii). Let us prove (ii):\\
It is enough to check that
$$U= P_{0}^*D^kVU^kD^{-k}P_{0}$$
But $U^k$ and $D^{-k}$ being diagonal commute, so that we are left with
$$D^kVD^{-k}U^k =P_{0}UP_{0}^*$$
this in turn is equivalent to
$$D^kVD^{-k}= VU^{-k}\equiv V_{-k}$$
or to the equation
$$D^kV=V_{-k}D^k$$
which follows easily from the fact that both sides are unitary matrices whose only non-vanishing
elements are $a_{d-1,0}=1$ and
$$a_{j,j+1}= q^{\frac{kj(j+1)}{2}},\ \forall j\in \left\{0,1,...,d-2
\right\}$$
Now let us prove (iii). We need the following proposition:
\begin{proposition}
Let $k \in F_{d},\ \mbox{such that}\ k,\ d $ are co-prime. Then the matrix
$P_{0}^*P_{k}$ is unbiased.
\end{proposition}
Proof: It follows from equ. (\ref{100}) that
$$V_{k}P_{0}\equiv VU^k P_{0}= VP_{0}(V^*)^k$$
and thus
\begin{equation}} \def\edq{\end{equation}
\label{101}
P_{0}^*P_{k}UP_{k}^*P_{0}= U(V^*)^k
\edq
(since by definition $V_{k}= P_{k}UP_{k}^*$)\\
But:
\begin{lemma}
If $d,\ k \in F_{d}$ are co-prime, the matrix $(V^*)^k$ is a permutation matrix with a single cycle of length
$d$, and thus all normalized eigenstates of $U(V^*)^k$ have coordinates of equal modulus, namely
$d^{-1/2}$.
\end{lemma}
Proof: This is standard. For any $d,\ k \in F_{d}$ co-prime, there exists a cyclic permutation
$\sigma_{k}$ (that means a permutation with cycle of length $d$)
of $F_{d}$ such that for any $v \in \mathbb C^d$, the element $w \in \mathbb C^d$
defined by:
$$(V^*)^kv \equiv w$$
is such that
$$w_{j} = v_{\sigma_{k}(j)}, \ \forall j \in F_{d}$$
\hbox {\rlap{$\sqcap$}$\sqcup$}\\
\begin{remark}
The idea that the eigenvectors of $V_{k}$ are ``cyclically shifted'' modulo a phase if $d$
is a prime number has already been put forward in \cite{band}.\\
\end{remark}
End of Proof of Proposition 2.12:\\
Let us denote by $v^{(k)}$ the successive column vectors of $P_{0}^*P_{k}$. Then
$$P_{0}^*P_{k}U= (q^0v^{(0)}, qv^{(1)},..., q^jv^{(j)},...,q^{d-1}v^{(d-1)})$$
This means that $v^{(j)}$ is eigenvector of the matrix $U(V^*)^k$ with eigenvalue $q^j$.
Therefore we have that
$\vert v^{(j)}_{l}\vert = \vert v^{(j)}_{0}\vert,\ \forall l \in \left\{0,1,...,
d-1\right\}$, as a consequence of Lemma 2.13 above. Since $\Vert v^{(j)} \Vert =1$, this
implies $\vert v^{(j)}_{l}\vert = d^{-1/2}$.
It follows that for all $k \in F_{d}$ relatively prime to $d$, the matrix
$P_{k}^*P_{0}$ is unbiased.\\
\hbox {\rlap{$\sqcap$}$\sqcup$}\\
Proof of Theorem 2.11 (iii):\\
Let $d,\ k \in F_{d}$ be co-prime. Let us call $v_{k}$ the normalized
eigenvector of $V_{k}$ with eigenvalue 1.
We obviously have
$$(v_{k})_{j}= \frac{1}{\sqrt d}\left(q^{\frac{j(j+1)}{2}}\right)^k$$
Now using that $P_{0}^*P_{k}$ is unbiased we have
$\vert v_{0}\cdot v_{k}\vert= d^{-1/2}$ and
$$v_{0}\cdot v_{k}\equiv d^{-1}\sum_{j=0}^{d-1}(q^{\frac{j(j+1)}{2}})^k
\equiv d^{-1}{\rm Tr}(D^k)$$
which yields the result.
\hbox {\rlap{$\sqcap$}$\sqcup$}\\
\begin{corollary}
Let d be an {\bf odd number}. Then for any $k \in F_{d}$ co-prime with $d$, we have:
$$\left\vert \sum_{j=0}^{d-1}q^{\frac{kj(j+1)}{2}}\right\vert =
\sqrt d$$
\end{corollary}
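Both the exact diagonalization of Theorem 2.11 (ii) and the Gauss-sum Corollary lend themselves to a direct numerical check. The sketch below (plain Python, not from the paper; the odd dimension $d=9$ is illustrative) verifies $P_{k}^* V_{k}P_{k}=U$ for all $k$, and the Corollary for $k$ co-prime with $d$:

```python
import cmath, math

d = 9  # an odd dimension, chosen for illustration
q = cmath.exp(2j * cmath.pi / d)
P0 = [[q**(a * b) / math.sqrt(d) for b in range(d)] for a in range(d)]

def mul(A, B):
    return [[sum(A[i][l] * B[l][j] for l in range(d)) for j in range(d)]
            for i in range(d)]

for k in range(d):
    # V_k = V U^k (cyclic shift with phases) and P_k = D^{-k} P_0,
    # where D = diag(q^{j(j+1)/2})
    Vk = [[q**(k * b) if b == (a + 1) % d else 0 for b in range(d)] for a in range(d)]
    Pk = [[q**(-k * a * (a + 1) // 2) * P0[a][b] for b in range(d)] for a in range(d)]
    Pk_dag = [[Pk[b][a].conjugate() for b in range(d)] for a in range(d)]
    M = mul(Pk_dag, mul(Vk, Pk))
    for a in range(d):
        for b in range(d):
            target = q**a if a == b else 0  # M must equal U
            assert abs(M[a][b] - target) < 1e-9, (k, a, b)

# Gauss-sum corollary: |Tr D^k| = sqrt(d) whenever gcd(k, d) = 1
for k in range(1, d):
    if math.gcd(k, d) == 1:
        S = sum(q**(k * j * (j + 1) // 2) for j in range(d))
        assert abs(abs(S) - math.sqrt(d)) < 1e-9
print("odd-case checks passed for d = 9")
```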
\begin{remark}
Corollary 2.12 is strongly related to the property of Gauss Sums. In \cite{berndt}, the
following result is established: define, for $a,b,d \in \mathbb Z,\ \mbox{with}\ ad+b\
\mbox{even, and}\ ad\ne 0$
$$S(a,b,d):= \sum_{n=0}^{d-1}\exp\left(\frac{i\pi(an^2+bn)}{d}\right)$$
Then the following ``reciprocity theorem for quadratic Gauss sums'' yields that:
\begin{equation}} \def\edq{\end{equation}
\label{recipr}
S(a,b,d)= \left\vert \frac{d}{a}\right\vert^{1/2}\exp\left(\frac{i\pi}{4}
(\mbox{sgn}(ad)-b^2/ad)\right)S(-d,-b,a)
\edq
Applying it with $d$ {\bf odd} and $a=b=1$, we have
$$S(1,1,d)= \sqrt d \exp\left(\frac{i\pi}{4}(1-\frac{1}{d})\right)$$
since $S(-d,1,1)=1$.\\
Thus arithmetics gives not only the modulus of ${\rm Tr}D$ which equals $\sqrt d$
but also the phase. A similar result holds for ${\rm Tr}D^k$ provided $d,\ k\in F_{d}$
are co-prime.
\end{remark}
If $d$ is not a prime number, and if the greatest common divisor of $d$ and $k$ is 1, then the
matrices $P_{0},\ P_{k}$ have been shown to be mutually unbiased. In the {\bf odd case},
when $d$ is not a prime number, this turns out to be very useful for finding more than 3 MUB's.
\begin{proposition}
Let $d$ be an odd integer. If $E:=\left\{ k_{j}\right\}\subset \left\{0,1,..., d-1\right\}$
is such that the greatest common divisor of $d$ and $k_{j}-k_{j'}$ is 1 for all distinct $k_{j},\ k_{j'}
\in E$, then the set
$$\left\{ {\rm 1\mskip-4.5mu l} _{d}, P_{k_{j}}\right\}_{k_{j}\in E}$$
defines a MUB.
\end{proposition}
Proof: The proof is quite simple and uses Theorem 2.11 (ii). Namely, since $P_{k}= D^{-k}P_{0}$,
we have:
\begin{equation}} \def\edq{\end{equation}
\label{biased}
P_{k}^* P_{j}= P_{0}^* D^{k-j}P_{0}= P_{k-j}^*P_{0}
\edq
Now, this follows from Proposition 2.12.\\
\hbox {\rlap{$\sqcap$}$\sqcup$}\\
\begin{corollary}
Let $d=mn,\ \mbox{with}\ n, m\in \mathbb N \ \mbox{prime numbers, and}\ n<m$.
Then the maximal number $N(d)$ of mutually unbiased bases in dimension $d$ satisfies:
$$N(d)\ge N(n)\equiv n+1$$
\end{corollary}
Proof: For n=2, we are in the {\bf even case} studied in the previous subsection. It has
already been established that $N(d)\ge 3$ (Corollary 2.8). If $n$ is {\bf odd}, (then so is $m$),
the matrices $P_{k}$ for $k \in F_{n}$ are all mutually unbiased. Thus we can choose as a MUM the set
$$\left\{{\rm 1\mskip-4.5mu l} _{d}, P_{0}, P_{1},..., P_{n-1}\right\}$$
\hbox {\rlap{$\sqcap$}$\sqcup$}\\
\begin{remark}
A similar, but apparently more general result, has been proven in \cite{klaro}.
\end{remark}
\bigskip
\noindent
{\bf EXAMPLE 1: d=15 :} There are sets of 4 MUB's, defined for instance by any of
$$\left\{{\rm 1\mskip-4.5mu l} _{15}, P_{0}, P_{1}, P_{2}\right\}\
\left\{{\rm 1\mskip-4.5mu l} _{15}, P_{0}, P_{2}, P_{4}\right\}\
\left\{{\rm 1\mskip-4.5mu l} _{15}, P_{0}, P_{1}, P_{8}\right\}\
\left\{{\rm 1\mskip-4.5mu l} _{15}, P_{0}, P_{4}, P_{8}\right\}\
\left\{ {\rm 1\mskip-4.5mu l} _{15}, P_{0}, P_{7}, P_{14}\right\}$$
\noindent
{\bf EXAMPLE 2 : d=21 :} There are sets of 4 MUB's, defined for example by
$$\left\{{\rm 1\mskip-4.5mu l} _{21}, P_{0}, P_{1}, P_{2}\right\}$$
Of course we do not know whether or not this is the maximum number of MUB's in these cases.
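The $d=15$ example can be checked numerically; the following plain-Python sketch (not part of the paper) verifies that $P_{0},P_{1},P_{2}$ are pairwise mutually unbiased, so that together with the canonical basis they give a set of 4 MUB:

```python
import cmath, math

d = 15
q = cmath.exp(2j * cmath.pi / d)
P0 = [[q**(a * b) / math.sqrt(d) for b in range(d)] for a in range(d)]

def P(k):
    # P_k = D^{-k} P_0 with D = diag(q^{j(j+1)/2})
    return [[q**(-k * a * (a + 1) // 2) * P0[a][b] for b in range(d)]
            for a in range(d)]

def unbiased(A, B):
    # check |(A^* B)_{ab}| = 1/sqrt(d) for all a, b
    for a in range(d):
        for b in range(d):
            s = sum(A[r][a].conjugate() * B[r][b] for r in range(d))
            if abs(abs(s) - 1 / math.sqrt(d)) > 1e-9:
                return False
    return True

# the pairwise differences 1-0, 2-0, 2-1 are all co-prime with 15
for (j, k) in [(0, 1), (0, 2), (1, 2)]:
    assert unbiased(P(j), P(k))
print("d = 15: set of 4 MUB verified")
```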
\subsection{THE PRIME NUMBER CASE}
\begin{proposition}
Let us assume that $d$ is a {\bf prime number} $\ge 3$. Then all unitary $d \times d$ matrices
$P_{0}^*P_{k}, \ k \in \left\{1,...,d-1\right\}$ are unbiased.
\end{proposition}
Proof: Any prime number $\ge 3$ being odd, the result is a consequence of Lemma 2.13,
since then any $k \in F_{d}$ is relatively prime to $d$.\\
\hbox {\rlap{$\sqcap$}$\sqcup$}\\
\begin{theorem}
for $d$ a {\bf prime number}, the following set of matrices
$$\left\{{\rm 1\mskip-4.5mu l} _{d}, D^{-k}P_{0},\ k=0,1,...,d-1\right\}$$
defines a maximal set of MUM.
\end{theorem}
Proof: We use Theorem 2.11: thus $P_{k}= D^{-k}P_{0}$, so that
$$P_{k}^*P_{j}= P_{0}^*D^{k-j}P_{0}= P_{k-j}^*P_{0}$$
so that if $j \ne k$ the result follows from Proposition 2.12.\\
\hbox {\rlap{$\sqcap$}$\sqcup$}\\
\begin{remark}
The fact that in dimension $d$ there are at most $d+1$ MUB's, and exactly $d+1$ when $d$ is a
prime number, has been known for a long time. See for example \cite{wootters} and the references
contained therein.
\end{remark}
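For a prime $d$ the whole maximal set can be verified at once; the sketch below (plain Python, not from the paper, with the illustrative choice $d=5$) checks that the $d$ matrices $P_{k}=D^{-k}P_{0}$ are unbiased and pairwise mutually unbiased:

```python
import cmath, math

d = 5  # a prime dimension
q = cmath.exp(2j * cmath.pi / d)
P = []
for k in range(d):
    # P_k = D^{-k} P_0,  D = diag(q^{j(j+1)/2}),  (P_0)_{ab} = q^{ab}/sqrt(d)
    P.append([[q**(-k * a * (a + 1) // 2 + a * b) / math.sqrt(d)
               for b in range(d)] for a in range(d)])

def unbiased_pair(A, B):
    return all(abs(abs(sum(A[r][a].conjugate() * B[r][b] for r in range(d)))
                   - 1 / math.sqrt(d)) < 1e-9
               for a in range(d) for b in range(d))

# each P_k is itself unbiased (i.e. unbiased w.r.t. the canonical basis) ...
for k in range(d):
    assert all(abs(abs(P[k][a][b]) - 1 / math.sqrt(d)) < 1e-9
               for a in range(d) for b in range(d))
# ... and the P_k are pairwise mutually unbiased: together with the
# canonical basis this gives the maximal number d + 1 = 6 of MUB
for j in range(d):
    for k in range(j + 1, d):
        assert unbiased_pair(P[j], P[k])
print("d = 5: maximal set of 6 MUB verified")
```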
\mysection{THE CASE WHERE d IS THE SQUARE OF A PRIME NUMBER}
Consider the Tensor-Product $d^2 \times d^2$ matrices introduced by Kibler-Planat \cite
{kipla}, (here restricted to two-tensor products):
\begin{equation}} \def\edq{\end{equation}
\label{W}
W_{j,k}:= V_{j}^{(d)}\otimes V_{k}^{(d)},\ j,k \in \left\{0,1,...,d-1\right\}
\edq
where $d$ is a {\bf prime number greater than or equal to 3}, and $V_{j}^{(d)}$ are the
corresponding $d \times d$ matrices, for $j \in \left\{0,1,...,d-1\right\}$.\\
Let $U^{(d)}:= {\rm diag}(1,q,...,q^j,...,q^{d-1})$ where $q$ is defined by (\ref{q}), and
$U$ be the $d^2\times d^2$ diagonal unitary matrix
$$U:= U^{(d)}\otimes U^{(d)}$$
Consider the unitary matrices $P_{k}^{(d)}$ constructed in the previous section, and define,
for $j,k \in \left\{0,1,...,d-1\right\}$, the $d^2\times d^2$ unitary matrices
$$P_{j,k}:= P_{j}^{(d)}\otimes P_{k}^{(d)}$$
Then we have:
\begin{proposition}
$$W_{j,k}P_{j,k}= P_{j,k}U$$
\end{proposition}
Proof: This immediately follows (omitting the superscript d for simplicity) from:
$$(V_{j}\otimes V_{k})\ (P_{j}\otimes P_{k})= (V_{j}P_{j})\ \otimes\ (V_{k}P_{k})
= (P_{j}U)\ \otimes (P_{k}U)\equiv P_{j,k}U$$
\hbox {\rlap{$\sqcap$}$\sqcup$}\\
\begin{theorem}
(i) The matrices $P_{j,k}$ are unbiased $d^2\times d^2$ matrices, $\forall j,k \in
\left\{0,1,...,d-1\right\}$.\\
(ii) For any $j,k \in \left\{0,1,...,d-1\right\},\ k\ne j$, we have that
$P_{j,j},\ P_{k,k}$ are mutually unbiased $d^2\times d^2$ matrices.
\end{theorem}
Proof: Recall that the ``tensor-product formalism'' enables us to write the $d^2 \times d^2$
matrices in $2 \times 2$ block form of $d \times d$ matrices. Namely $\forall j,k \in F_{d}$,
$$W_{j,k}\equiv \left(
\begin{array}{cc}
0& (-i)^j V_{k}\\
V_{k}&0
\end{array}\right)\quad P_{j,k}\equiv \frac{1}{\sqrt 2}\left(
\begin{array}{cc}
P_{k}&P_{k}\\
i^jP_{k}&-i^jP_{k}\end{array}\right)\quad U_{j,k}\equiv \left(
\begin{array}{cc}
(-i)^jU_{k}&0\\
0& -(-i)^jU_{k}\end{array}\right)$$
with $U_{k}$ diagonal matrices such that
$$V_{k}= P_{k}U_{k}P_{k}^*$$
Then the result follows from Proposition 2.20.\\
\hbox {\rlap{$\sqcap$}$\sqcup$}\\
\begin{remark}
The above result provides only $d(d-1)/2$ MUB's. But it is known (see \cite{wootters}, \cite{kipla})
that the maximum number, which is here $d^2+1$, is attained. There is a ``trick'', not explained
here, which allows one to construct the ``missing'' bases, not only for the {\bf square of prime
numbers}, but more generally for {\bf any power of a prime number.} We shall give the
explicit construction for $d=4$ in Chapter 5.
\end{remark}
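The tensor-product construction is straightforward to test numerically; the following plain-Python sketch (not part of the paper, using $d=3$) checks Theorem 3.2 (ii) for the resulting $9\times 9$ matrices:

```python
import cmath, math

d = 3  # prime; the tensor products act in dimension d*d = 9
q = cmath.exp(2j * cmath.pi / d)

def P(k):  # the d x d matrices P_k = D^{-k} P_0 of Section 2
    return [[q**(-k * a * (a + 1) // 2 + a * b) / math.sqrt(d)
             for b in range(d)] for a in range(d)]

def kron(A, B):  # Kronecker product of two d x d matrices
    n = d * d
    return [[A[i // d][j // d] * B[i % d][j % d] for j in range(n)]
            for i in range(n)]

D2 = d * d

def unbiased_pair(A, B):  # |(A^* B)_{ab}| = 1/sqrt(d^2) for all a, b
    return all(abs(abs(sum(A[r][a].conjugate() * B[r][b] for r in range(D2)))
                   - 1 / math.sqrt(D2)) < 1e-9
               for a in range(D2) for b in range(D2))

# the 9 x 9 matrices P_{j,j} = P_j (x) P_j are pairwise mutually unbiased
for j in range(d):
    for k in range(j + 1, d):
        assert unbiased_pair(kron(P(j), P(j)), kron(P(k), P(k)))
print("tensor-product MUB in dimension d^2 = 9 verified")
```

The check works because entries of $(P_{j}\otimes P_{j})^*(P_{k}\otimes P_{k})$ are products of two entries of modulus $d^{-1/2}$, hence of modulus $1/d = (d^2)^{-1/2}$.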
\mysection{DIMENSIONS 2 AND 3}
For $d=2$, the matrices of Section 2 read (up to phases):
$$P_{0}= \frac{1}{\sqrt 2}\left(
\begin{array}{cc}
1&1\\
1&-1
\end{array}\right)\qquad P_{1}= \frac{1}{\sqrt 2}\left(
\begin{array}{cc}
1&1\\
i&-i
\end{array}\right)$$
\begin{proposition}
(i) The sets $$E_{2}:= \left\{{\rm 1\mskip-4.5mu l} _{2}, P_{0}, P_{1}\right\},
\qquad E'_{2}:= \left\{{\rm 1\mskip-4.5mu l} _{2}, P_{1}, P_{1}^*\right\}$$
are complete MUM in dimension d=2.\\
(ii) The bases in $\mathbb C^2$ defined by $E_{2}\ \mbox{and}\ E'_{2}$ are the same
MUB in dimension d=2.
\end{proposition}
Proof: (i) results from Propositions 2.5 and 2.7, for $E_{2}$, and for $E'_{2}$ from the fact that
$P_{1}^2$ is unbiased (in other words $P_{1}$ is mutually unbiased to itself). Namely:
$$P_{1}^3 =e^{i\pi/4}{\rm 1\mskip-4.5mu l} _{2} $$
which implies that $P_{1}^2= e^{i\pi/4}P_{1}^*$, which is unbiased.\\
(ii) Denote by $e_{1}:= \left(
\begin{array}{c}1\\
0\end{array}\right)$ and $e_{2}:= \left(
\begin{array}{c}0\\
1\end{array}\right)$ the natural basis in $\mathbb C^2$. Then the MUB defined by $E_{2}
,\quad E'_{2}$ are $\left\{B_{0}, B_{1}, B_{2}\right\}$ where
$$B_{0}:= \left\{e_{1}, e_{2}\right\}\quad B_{1}:=\left\{\frac{1}{\sqrt 2}
(e_{1}\pm e_{2})\right\}\quad B_{2}:= \left\{\frac{1}{\sqrt 2}(e_{1}\pm ie_{2})
\right\}$$
\hbox {\rlap{$\sqcap$}$\sqcup$}\\
For the case of dimension $d=3$ we simply use Theorem 2.8 (ii) for the construction
of $P_{j},\ j\in \left\{0,1,2\right\}$:\\
Let $q= \exp(\frac{2i\pi}{3})$
$$P_{0}= \frac{1}{\sqrt 3}\left(
\begin{array}{ccc}
1&1&1\\
1&q&q^2\\
1&q^2&q
\end{array}\right)\quad
P_{1}= \frac{1}{\sqrt 3}\left(
\begin{array}{ccc}
1&1&1\\
q^2&1&q\\
1&q^2&q
\end{array}\right)\quad
P_{2}= \frac{1}{\sqrt 3}\left(
\begin{array}{ccc}
1&1&1\\
q&q^2&1\\
1&q^2&q
\end{array}\right)$$
\begin{proposition}
(i) The set $E_{3}:= \left\{{\rm 1\mskip-4.5mu l} _{3}, P_{0}, P_{1}, P_{2}\right\}$ defines a maximal
MUM for d=3.
\\
(ii) Define:
$$ P'_{1}:= \frac{1}{\sqrt 3}\left(
\begin{array}{ccc}
1&1&q\\
1&q&1\\
q&1&1
\end{array}\right )$$
Then the set $E'_{3}:= \left\{{\rm 1\mskip-4.5mu l} _{3}, P_{0}, P_{1}', P_{1}^{'*}\right\}$ defines a maximal
MUM in dimension d=3.
\end{proposition}
Proof: (i) simply follows from Theorem 2.14. Furthermore $E'_{3}$ defines the same
MUB as $E_{3}$, which establishes (ii).\\
\hbox {\rlap{$\sqcap$}$\sqcup$}\\
\mysection{THE CASE OF DIMENSION 4}
There is nothing new in the results of this section (see \cite{band}, \cite{kipla},
\cite{wootters}). The only point is that we construct explicit matrices that allow us to
complete the set of MUM provided in Section 3.\\
According to Theorem 3.2, we have that $P_{0,0},\ P_{1,1}$ are mutually unbiased matrices.
\\
However $P_{0,1}$ and $P_{1,0}$ are not mutually unbiased, neither to each other nor to the
two previous ones. The trick is to consider that the eigenspaces of $W_{0,1},\ W_{1,0}$
with eigenvalues $\pm i$ are degenerate, so that vectors of these eigenspaces can be recombined
to build MUB's.\\
Namely take
$$P'_{0,1}:= \frac{1}{\sqrt 2}\left(
\begin{array}{cc}
P_{0}&P_{0}\\
-iP'_{0}&iP'_{0}
\end{array}\right)\quad P'_{1,0}:= \frac{1}{\sqrt 2}\left(
\begin{array}{cc}
P_{1}&P_{1}\\
-P'_{1}&P'_{1}
\end{array}\right)\quad P'_{0,0}\equiv P_{0,0}\quad P'_{1,1}\equiv P_{1,1}$$
with
$$P'_{0}:= \frac{1}{\sqrt 2}\left(
\begin{array}{cc}
1&1\\
-1&1
\end{array}\right)\quad P'_{1}:= \frac{1}{\sqrt 2}\left(
\begin{array}{cc}
1&1\\
-i&i
\end{array}\right)$$
Actually, defining the unitary $4\times 4$ matrix (that commutes with $U_{1,0}\
\mbox{and}\ U_{0,1}$) as
$$A:= \frac{e^{-i\pi/4}}{\sqrt 2}\left(
\begin{array}{cccc}
1&0&0&i\\
0&1&i&0\\
0&i&1&0\\
i&0&0&1
\end{array}\right)$$
we have:
$$P_{1,0}= P'_{1,0}A\qquad P_{0,1}= P'_{0,1}A^*$$
Then
\begin{proposition}
$$W_{0,1}P'_{0,1}= P'_{0,1}U_{0,1}, \quad W_{1,0}P'_{1,0}= P'_{1,0}U_{1,0}$$
and $P_{i,j}^{'*}P'_{k,l}$ are unbiased matrices $\forall (i,j)\ne (k,l)\ i,j,k,l \in \left\{
0,1\right\}$.
\end{proposition}
Proof: We check that $P_{0,1}^{'*}P_{1,0}$ is an unbiased matrix. We have:
$$P_{0,1}^{'*}P_{1,0}=\frac{1}{2}\left(
\begin{array}{cc}
( P_{0}^*-iP_{0}^{'*})P_{1}&(P_{0}^*+iP_{0}^{'*})P_{1}\\
(P_{0}^*+iP_{0}^{'*})P_{1}& (P_{0}^* -iP_{0}^{'*})P_{1}
\end{array}\right )$$
But
$$(P_{0}^*-iP_{0}^{'*})P_{1}= \left(
\begin{array}{cc}1&-i\\
-i&1
\end{array}\right)\qquad (P_{0}^*+iP_{0}^{'*})P_{1}= \left(
\begin{array}{cc}
i&1\\
1&i
\end{array}\right)$$
The other cases can be shown similarly.\\
\hbox {\rlap{$\sqcap$}$\sqcup$}\\
\mysection{THE CASE OF DIMENSION 6}
It is the smallest even dimension which is not {\bf a power of a prime number}.
Let $j := \exp(\frac{2i\pi}{6})$. Then
$$P_{0}= \frac{1}{\sqrt 6}\left(
\begin{array}{cccccc}
1&1&1&1&1&1\\
1&j&j^2&-1&-j&-j^2\\
1&j^2&-j&1&j^2&-j\\
1&-1&1&-1&1&-1\\
1&-j&j^2&1&-j&j^2\\
1&-j^2&-j&-1&j^2&j
\end{array}\right)
\quad P_{1}= \frac{1}{\sqrt 6}\left(
\begin{array}{cccccc}
1&1&1&1&1&1\\
-ij^2&i&ij&ij^2&-i&-ij\\
1&j^2&-j&1&j^2&-j\\
-i&i&-i&i&-i&i\\
j^2&1&-j&j^2&1&-j\\
-i&ij^2&ij&i&-ij^2&-ij
\end{array}\right)$$
\begin{lemma}
Let $\tilde D$ be the following unitary diagonal matrix:
$$\tilde D := {\rm diag}(1, -ij^2, 1, -i, j^2, -i)$$
Then we have:
$$P_{1}= \tilde D P_{0}$$
\end{lemma}
\begin{proposition}
The set $E_{6}:= \left\{{\rm 1\mskip-4.5mu l} _{6}, P_{0}, P_{1}\right\}$ defines a MUM in dimension
d=6.
\end{proposition}
Proof: This follows simply from Proposition 2.5 and Proposition 2.7. Moreover we have:
$$P_{0}^*VP_{0}= U\qquad P_{1}^* V_{1}P_{1}= iU$$
\hbox {\rlap{$\sqcap$}$\sqcup$}\\
\begin{remark}
The fact that $N(6)=3$ is the maximum number of MUB in dimension 6 is a conjecture
apparently due to Zauner \cite{zau}. Some progress has been recently made in dimension 6
by M. Grassl \cite{gra}.
\end{remark}
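A numerical check of the three bases is immediate. The plain-Python sketch below (not from the paper) uses the even-case construction $P_{1}=D'P_{0}$ of Proposition 2.10, which may differ from the explicit $P_{1}$ displayed above by phases and by the ordering of columns:

```python
import cmath, math

d = 6
j6 = cmath.exp(2j * cmath.pi / d)   # the primitive 6th root of unity
w = cmath.exp(1j * cmath.pi / d)    # the omega of the even case (12th root)
P0 = [[j6**(a * b) / math.sqrt(d) for b in range(d)] for a in range(d)]
# even-case construction: P_1 = D' P_0 with D' = diag(w^{-k^2})
P1 = [[w**(-a * a) * P0[a][b] for b in range(d)] for a in range(d)]

def unbiased(M):
    return all(abs(abs(M[a][b]) - 1 / math.sqrt(d)) < 1e-9
               for a in range(d) for b in range(d))

# P_0^* P_1 must also be unbiased for {1, P_0, P_1} to be a MUM
prod = [[sum(P0[r][a].conjugate() * P1[r][b] for r in range(d))
         for b in range(d)] for a in range(d)]
assert unbiased(P0) and unbiased(P1) and unbiased(prod)
print("d = 6: a set of 3 MUB verified")
```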
\mysection{THE CASE OF DIMENSIONS 12 AND 20}
Let $d= 4m$ where $m$ is an odd number $\ge 3$. Then consider the $4 \times 4$ matrices
$W_{k}, \ k = 0,1,...,3$ constructed in Section 3, together with the set of matrices $V_{k},
\ k\in F_{m}$
constructed in Subsection 2.3. Denote by $Q_{j},\ j=0,1,...,3$ the unitary $ 4 \times 4$
matrices
$P_{k,l},\
k,l \in \left\{0,1\right\}$, (in lexicographic order) provided in Section 5 for $d=4$,
and by $P_{j}, \ j\in F_{m}$ the $m \times m$ unitary matrices constructed
in Subsection 2.3. Then one has:
\begin{lemma}
For any $j= 0,1,..., {\rm Inf}(4, m+1)$, there exists a {\bf diagonal matrix} $U_{j}$ such that
$$(W_{j}\otimes V_{j})\ (Q_{j}\otimes P_{j})= (Q_{j}\otimes P_{j})U_{j} $$
\end{lemma}
The proof is very similar to the one provided in Section 3. Furthemore the idea of tensor-product
methods in this situation is already present in \cite{klaro}.\\
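The mechanism behind the lemma is the mixed-product property $(A\otimes B)(C\otimes D)=(AC)\otimes(BD)$: if $W_{j}Q_{j}=Q_{j}U_{j}^{W}$ and $V_{j}P_{j}=P_{j}U_{j}^{V}$ with diagonal $U_{j}^{W}, U_{j}^{V}$, the stated identity holds with $U_{j}=U_{j}^{W}\otimes U_{j}^{V}$. A quick numerical sketch, with random unitaries standing in for the actual matrices (all names below are placeholders, not the matrices of the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    """A random unitary via QR decomposition (columns phase-fixed)."""
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def random_diag_unitary(n):
    return np.diag(np.exp(2j * np.pi * rng.random(n)))

# Stand-ins (m = 3, so d = 4 * 3 = 12):
Q, P = random_unitary(4), random_unitary(3)
Uw, Uv = random_diag_unitary(4), random_diag_unitary(3)
W = Q @ Uw @ Q.conj().T        # so that W Q = Q Uw
V = P @ Uv @ P.conj().T        # so that V P = P Uv

U = np.kron(Uw, Uv)            # diagonal, as the lemma asserts
lhs = np.kron(W, V) @ np.kron(Q, P)
rhs = np.kron(Q, P) @ U
assert np.allclose(lhs, rhs)   # (W x V)(Q x P) = (Q x P) U
```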
\noindent
Actually the new ingredient in this Section is to establish explicit $4m\times 4m$ matrices
$R_{j}:= Q_{j}\otimes P_{j} $ in $4 \times 4$ or $m \times m$
block forms; let us specify them for $m=3,\ m=5$:
\begin{lemma}
(i) Let $d=12$. Thus $m=3$ and denoting by $q$ the 3rd root of unity $q:=\exp(2i\pi/3)$,
we have:
$$R_{0}:= \frac{1}{\sqrt 3}\left(
\begin{array}{ccc}
Q_{0}&Q_{0}&Q_{0}\\
Q_{0}&qQ_{0}&q^2Q_{0}\\
Q_{0}&q^2Q_{0}&qQ_{0}
\end{array}\right)\quad R_{1}:= \frac{1}{\sqrt 3}\left(
\begin{array}{ccc}
Q_{1}&Q_{1}&Q_{1}\\
q^2Q_{1}& Q_{1}&qQ_{1}\\
Q_{1}&q^2Q_{1}&qQ_{1}
\end{array}\right)$$
$$R_{2}:= \frac{1}{\sqrt 3}\left(
\begin{array}{ccc}
Q_{2}&Q_{2}&Q_{2}\\
qQ_{2}&q^2Q_{2}&Q_{2}\\
Q_{2}&q^2Q_{2}&qQ_{2}
\end{array}\right)$$
The matrices $R_{j},\ j=0,1,2$ are obviously unbiased unitary matrices and are mutually
unbiased. Thus the set $\left\{{\rm 1\mskip-4.5mu l} _{12}, R_{0}, R_{1}, R_{2}\right\}$ defines a
set of 4 MUB's for $d=12$. Furthermore any choice of $Q_{j}$'s among the 4 matrices
$P_{j,k}, j,k \in \left\{0,1\right\}$ (not necessarily the lexicographic order)
gives the same result, but not the same MUB's.\\
(ii) Let $d=20$, thus $m=5$. Take 4 unitary $5 \times 5$ matrices among the 6 possible
$P_{j}$'s
in dimension 5. Then we have:
$$R'_{0}:= \frac{1}{2}\left(
\begin{array}{cccc}
P_{0}& P_{0}&P_{0}&P_{0}\\
P_{0}&-P_{0}&P_{0}&-P_{0}\\
P_{0}&P_{0}&-P_{0}&-P_{0}\\
P_{0}&-P_{0}&-P_{0}&P_{0}\end{array}\right )\quad R'_{1}:=\frac{1}{2}\left(
\begin{array}{cccc}
P_{1}&P_{1}&P_{1}&P_{1}\\
P_{1}&-P_{1}&P_{1}&-P_{1}\\
-iP_{1}&-iP_{1}&iP_{1}&iP_{1}\\
iP_{1}&-iP_{1}&-iP_{1}&iP_{1}
\end{array}\right)$$
$$R'_{2}:= \frac{1}{2}\left(
\begin{array}{cccc}
P_{2}&P_{2}&P_{2}&P_{2}\\
iP_{2}&-iP_{2}&iP_{2}&-iP_{2}\\
-P_{2}&-P_{2}&P_{2}&P_{2}\\
iP_{2}&-iP_{2}&-iP_{2}&iP_{2}\\
\end{array}\right)\quad R'_{3}:= \frac{1}{2}\left(
\begin{array}{cccc}
P_{3}&P_{3}&P_{3}&P_{3}\\
iP_{3}&-iP_{3}&iP_{3}&-iP_{3}\\
iP_{3}&iP_{3}&-iP_{3}&-iP_{3}\\
-P_{3}&P_{3}&P_{3}&-P_{3}
\end{array}\right)$$
Then the $20 \times 20$ unitary matrices $R'_{j},\ j=0,1,...3$ are
unbiased and mutually unbiased. Thus the set
$$\left\{{\rm 1\mskip-4.5mu l} _{20}, R'_{0}, R'_{1}, R'_{2}, R'_{3}\right\}$$
defines a set of 5 MUB's.
\end{lemma}
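The $d=20$ claim can be checked numerically. The $5\times 5$ matrices $P_{j}$ of Subsection 2.3 are not reproduced here, so the sketch below substitutes the standard quadratic-phase MUB construction in prime dimension 5 as stand-ins (an assumption: any four pairwise mutually unbiased unitary $P_{j}$'s with entries of modulus $1/\sqrt 5$ would do); the $4\times 4$ block-coefficient patterns are exactly those of the lemma:

```python
import numpy as np

w = np.exp(2j * np.pi / 5)
# Stand-ins for the P_j's: quadratic-phase MUBs in prime dimension 5,
# P_j[k, l] = w^(j k^2 + k l) / sqrt(5).
P = [np.array([[w ** (j * k * k + k * l) for l in range(5)]
               for k in range(5)]) / np.sqrt(5) for j in range(4)]

# Block-coefficient patterns of R'_0 ... R'_3, read off the lemma:
patterns = [
    np.array([[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]),
    np.array([[1, 1, 1, 1], [1, -1, 1, -1], [-1j, -1j, 1j, 1j], [1j, -1j, -1j, 1j]]),
    np.array([[1, 1, 1, 1], [1j, -1j, 1j, -1j], [-1, -1, 1, 1], [1j, -1j, -1j, 1j]]),
    np.array([[1, 1, 1, 1], [1j, -1j, 1j, -1j], [1j, 1j, -1j, -1j], [-1, 1, 1, -1]]),
]
R = [np.kron(patterns[j], P[j]) / 2 for j in range(4)]

for j in range(4):
    assert np.allclose(R[j].conj().T @ R[j], np.eye(20))      # unitary
    assert np.allclose(np.abs(R[j]), 1 / np.sqrt(20))         # unbiased w.r.t. 1l_20
    for k in range(j + 1, 4):                                 # pairwise unbiased
        assert np.allclose(np.abs(R[j].conj().T @ R[k]), 1 / np.sqrt(20))
```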
{\bf Acknowledgements :} It is a pleasure to thank M. Kibler for teaching me everything
about MUB's, providing me with \cite{kipla} before publication and for his careful reading
of this manuscript. I am also indebted
to F. Moulin and J. Marklof for useful information and comments about Gauss Sums.
\section{Introduction}
\label{introduction}
\begin{table}[htb]
\caption{Mean absolute error of the predicted saliency maps (MAE$_{global}$) and edge areas (MAE$_{edge}$) of two state-of-the-art methods over three datasets. MAE$_{edge}$ is much larger than MAE$_{global}$, demonstrating that edge prediction is more difficult.}
\label{Saliency&Edge}
\small
\renewcommand\tabcolsep{2.9pt}
\renewcommand\arraystretch{1}
\begin{tabular}{c|ccc|ccc}
\hline
\hline
\multirow{2}{*}{} & \multicolumn{3}{c|}{EGNet~\cite{EGNet}} & \multicolumn{3}{c}{SCRN~\cite{SCRN}}\\
& ECSSD & DUTS & DUT-O & ECSSD & DUTS & DUT-O \\
\hline
$MAE_{global}$ & 0.037 & 0.039 & 0.053 & 0.037 & 0.040 & 0.056\\
$MAE_{edge}$ & 0.289 & 0.292 & 0.298 & 0.299 & 0.297 & 0.302\\
\hline
\hline
\end{tabular}
\end{table}
\begin{figure}[htb]
\centering
\subfigure[EGNet~\cite{EGNet}]{\includegraphics[width=0.48\linewidth]{EGNet.pdf}}
\subfigure[ SCRN~\cite{SCRN} ]{\includegraphics[width=0.48\linewidth]{SCRN.pdf}}
\caption{Distribution of prediction error with respect to the distance from a pixel to its nearest edge. The horizontal axis represents the distance, normalized to [0,1], and the vertical axis is the prediction error. As can be seen, the closer a pixel is to an edge, the more difficult it is to predict.}
\label{error&distance}
\end{figure}
Salient object detection (SOD)~\cite{AchantaHES09,ChengMHTH15,FanWCS19,Fan_2018_ECCV,Fan_2019_CVPR} aims at identifying the most visually attractive objects or parts in an image or video, which is widely applied as a pre-processing procedure in downstream computer vision tasks~\cite{Survey, xin2018reverse}. During the past decades, researchers have proposed hundreds of SOD methods based on hand-crafted features ({\it e.g.,} color, texture and brightness)~\cite{Survey}. However, these features cannot capture high-level semantic information, which restricts their applications in complex scenes. Recently, convolutional neural networks (CNNs) have demonstrated a powerful capability for feature representation and greatly promoted the development of SOD. Many CNN-based methods~\cite{DSS, Amulet, RFCN,SRM, DGRL, PAGR, RAS, PICANet, R3Net, PFA, RADF, MLMSNet} have achieved remarkable performance by designing different decoders to aggregate multi-level CNN features. To get better feature representations, these methods focus on mining more context information and devising more effective feature fusion strategies. Besides, introducing boundary information is another key point in SOD. Existing methods attempt to take edges as supervision to train SOD models, which significantly improves the accuracy of saliency maps \cite{BASNet,PoolNet,EGNet,SCRN,SIBA,AFNet}.
However, the imbalance between edge pixels and non-edge ones makes it hard to get good edge predictions. Therefore, directly taking edges as supervision may lead to suboptimal solutions. To better elaborate this statement, we calculate the mean absolute error (MAE) of two state-of-the-art methods ({\it i.e.,} EGNet~\cite{EGNet} and SCRN~\cite{SCRN}) over three SOD datasets ({\it i.e.,} ECSSD~\cite{ECSSD}, DUTS~\cite{DUTS} and DUT-O~\cite{DUTO}) in Tab.~\ref{Saliency&Edge}. Though both methods achieve low error in global saliency prediction, they perform much worse in edge prediction, which shows that edge pixels are more difficult to predict than others. To further explore the prediction difficulties of pixels, we analyse how the prediction error of EGNet and SCRN is distributed with respect to the distance to the nearest edge in Fig.~\ref{error&distance}.
In Fig.~\ref{error&distance}, the prediction error curves gradually increase from far away from the edge to close to it ({\it i.e.,} from the right to the left of the horizontal axis). When the distance is larger than 0.4, these curves rise slowly. However, when the distance gets smaller than 0.4, these curves begin to rise quickly. Based on this observation, we can divide each of the curves into two parts according to pixel distance from the nearest edge. Pixels near the edges correspond to much larger prediction errors than far-away pixels. These pixels with high prediction errors consist of both edge pixels and many other pixels close to edges that are ignored by recent edge-aware methods. Most of the hard pixels that can greatly improve the performance of SOD are not fully used, while using only edge pixels leads to difficulties because of the imbalanced distribution between edge pixels and background ones. In contrast, pixels far away from edges have relatively low prediction errors and are much easier to classify. However, traditional saliency labels treat all pixels inside the salient object equally, which may cause pixels with low prediction errors to suffer distraction from those near edges.
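The analysis of Fig.~\ref{error&distance} can be reproduced with a short sketch (a hypothetical helper of ours, brute-force and meant only for illustration): it extracts ground-truth edge pixels, computes every pixel's normalized distance to the nearest edge, and averages the absolute prediction error inside each distance bin:

```python
import numpy as np

def error_vs_distance(pred, gt, n_bins=10):
    """Mean |pred - gt| binned by normalized distance to the nearest
    ground-truth edge pixel. pred, gt: 2-D arrays in [0, 1], gt binary."""
    centres = (np.arange(n_bins) + 0.5) / n_bins
    # Edge pixels: foreground pixels with at least one background 4-neighbour.
    pad = np.pad(gt, 1, mode="edge")
    nmin = np.min(np.stack([pad[:-2, 1:-1], pad[2:, 1:-1],
                            pad[1:-1, :-2], pad[1:-1, 2:]]), axis=0)
    edges = np.argwhere((gt == 1) & (nmin == 0))
    if len(edges) == 0:
        return centres, np.zeros(n_bins)
    # Distance of every pixel to its nearest edge pixel (brute force),
    # normalized to [0, 1].
    yy, xx = np.indices(gt.shape)
    pts = np.stack([yy.ravel(), xx.ravel()], axis=1)
    dist = np.sqrt(((pts[:, None, :] - edges[None, :, :]) ** 2).sum(-1)).min(1)
    if dist.max() > 0:
        dist = dist / dist.max()
    # Average the absolute error inside each distance bin.
    err = np.abs(pred - gt).ravel()
    bins = np.minimum((dist * n_bins).astype(int), n_bins - 1)
    means = np.array([err[bins == b].mean() if np.any(bins == b) else 0.0
                      for b in range(n_bins)])
    return centres, means
```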
We propose a label decoupling framework (LDF) to address the above problems. LDF mainly consists of a label decoupling (LD) procedure and a feature interaction network (FIN). As shown in Fig.~\ref{sketch}, a saliency label is decomposed into a body map and a detail map by LD. Different from a pure edge map, the detail map consists of both edges and nearby pixels, which makes full use of pixels near edges and thus has a more balanced pixel distribution.
The body map mainly concentrates on pixels far away from edges. Without the disturbance of pixels near edges, the body map can supervise the model to learn better representations. Accordingly, FIN is designed with two branches to adapt to body map and detail map respectively.
The two complementary branches in FIN are fused to predict the saliency map, which is then used to refine the two branches again. This iterative refinement procedure helps obtain gradually more accurate saliency predictions.
We conduct experiments on six popular SOD datasets and demonstrate the superior performance of LDF. In summary, our contributions are as follows:
\begin{itemize}
\item We analyse the shortcomings of edge-based SOD methods and propose a label decoupling procedure to decompose a saliency label into a body map and a detail map, which supervise the model separately.
\item We design a feature interaction network to make full use of the complementary information between branches. Both branches will be enhanced by iteratively exchanging information to produce more precise saliency maps.
\item Extensive experiments on six SOD datasets show that our model outperforms state-of-the-art models by a large margin. In particular, we demonstrate the good performance of LDF in different challenging scenes in the SOC dataset~\cite{SOC}.
\end{itemize}
\section{Related Work}
\begin{figure*}[htb]
\centering
\includegraphics[scale=0.515]{Framework.pdf}
\caption{An overview of our proposed label decoupling framework (LDF). LDF is based on ResNet-50~\cite{Resnet} with supervision from the body map, detail map and saliency map. LDF consists of two encoders and two decoders, {\it i.e.}, a backbone encoder for feature extraction, an interaction encoder for exchanging information, and a body decoder and a detail decoder to generate the body map and detail map respectively. The interaction encoder is not involved until the body decoder and detail decoder have output their features.}
\label{framework}
\end{figure*}
During the past decades, a huge body of traditional methods has been developed for SOD. These methods~\cite{WangJYCHZ17, BorjiI12, ECSSD} mainly rely on intrinsic cues ({\it e.g.}, color and texture) to extract features. However, these features cannot capture high-level semantic information and are not robust to variations, which limits their applications in complex scenarios. Recently, deep learning based models have achieved remarkable performance, which can be divided into aggregation-based models and edge-based models.
\subsection{Aggregation-based Models}
Most of the aggregation-based models adopt the encoder-decoder framework, where the encoder is used to extract multi-scale features and the decoder is used to integrate the features to leverage context information of different levels. Hou {\it et al.}~\cite{DSS} constructed shortcut connections on fully convolutional networks~\cite{FCN} and integrated features of different layers to output more accurate maps. Chen {\it et al.}~\cite{RAS} proposed a reverse attention network, which erased the current predicted salient regions to expect the network to mine out the missing parts. Deng {\it et al.}~\cite{R3Net} designed an iterative strategy to learn the residual map between the prediction and ground truth by combining features from both deep and shallow layers. Wu {\it et al.}~\cite{CPD} found that features of shallow layers greatly increased the computation cost, but only brought little improvement in final results. Liu {\it et al.}~\cite{PoolNet} utilized simple pooling and a feature aggregation module to build fast and accurate model. Zhao {\it et al.}~\cite{PFA} introduced the channel-wise attention and spatial attention to extract valuable features and suppress background noise. Wang {\it et al.}~\cite{TDBU} designed a top-down and bottom-up workflow to infer the salient object regions with multiple iterations. Liu {\it et al.}~\cite{PICANet} proposed a pixel-wise contextual attention network to learn the context of each pixel, and combined the global context and local context for saliency prediction. Zhang {\it et al.}~\cite{BMPM} designed a bi-directional message passing model for better feature selection and integration.
\subsection{Edge-based Models}
In addition to saliency masks, edge labels are also introduced to SOD in~\cite{BASNet, SCRN, PAGE, PoolNet, Amulet, PFA} to assist the generation of saliency maps. Zhang {\it et al.}~\cite{Amulet} and Zhao {\it et al.}~\cite{PFA} directly built the edge loss with binary cross-entropy to emphasize the importance of boundaries. Qin {\it et al.}~\cite{BASNet} designed a hybrid loss to supervise the training process of SOD on pixel-level, patch-level and map-level. Liu {\it et al.}~\cite{PoolNet} used an additional edge dataset for joint training of both edge detection and SOD models. Feng {\it et al.}~\cite{AFNet} applied a boundary-enhanced loss to generate sharp boundaries and distinguish the narrow background margins between two foreground areas. Li {\it et al.}~\cite{C2SNet} used a two-branch network to simultaneously predict the contours and saliency maps, which can automatically convert a trained contour detection model to an SOD model. Wu {\it et al.}~\cite{SCRN} investigated the logical inter-relations between segmentation and edge maps, which are then promoted to bidirectionally refine multi-level features of the two tasks. Although these methods take into account the relationship between edges and saliency maps, edge prediction is a hard task because of the imbalanced pixel distribution. In this paper, we explicitly decouple the saliency label into a body map and a detail map, as shown in Fig.~\ref{sketch}. The detail map helps the model learn better edge features and the body map decreases the distraction from pixels near edges to central ones.
\section{Methodology}
In this section, we first introduce the label decoupling method and give the specific steps to decompose the saliency map into body map and detail map. Then, to take advantage of the complementarity between features, we introduce FIN which facilitates the iterative information exchange between branches. The overview of the proposed model is shown in Fig.~\ref{framework}.
\subsection{Label Decoupling}
As described in Sec.~\ref{introduction}, the prediction difficulty of a pixel is closely related to its position. Because of the cluttered background, pixels near the edge are more prone to be mispredicted. In comparison, central pixels have higher prediction accuracy due to the internal consistency of the salient target. Instead of treating these pixels equally, it is more reasonable to deal with them according to their respective characteristics. Accordingly, we propose to decouple the original label into a body label and a detail label, as shown in Fig.~\ref{sketch}. To achieve this goal, we introduce Distance Transformation (DT), a traditional image processing algorithm, to decouple the original label. Given a distance function, DT converts a binary image into a new image where each foreground pixel has a value equal to its minimum distance from the background.
Specifically, the input of DT is a binary image $I$, which can be divided into two groups ({\it i.e.}, foreground $I_{fg}$ and background $I_{bg}$). For each pixel $p$, $I(p)$ is its corresponding value. If $p \in I_{fg}$, $I(p)$ equals 1, and 0 if $p \in I_{bg}$. To get the DT result of image $I$, we define the metric function $f(p, q)=\sqrt{(p_x-q_x)^2+(p_y-q_y)^2}$ to measure the distance between pixels. If pixel $p$ belongs to the foreground, DT will first look up its nearest pixel $q$ in the background and then use $f(p,q)$ to calculate the distance between pixels $p$ and $q$. If pixel $p$ belongs to the background, its minimum distance is set to zero. We use $f(p,q)$ as the pixel values of a newly generated image, and the distance transformation can be expressed as
\begin{flalign}
I^{'}(p) &= \left\{
\begin{aligned}
\min\limits_{q \in I_{bg}} f(p, q), \qquad p \in I_{fg} \\
0, \qquad p \in I_{bg}
\end{aligned}
\right. \label{DistTrans}
\end{flalign}
After the distance transformation, the original image $I$ has been transformed into $I^{'}$ where the pixel value $I^{'}(p)$ no longer equals 0 or 1. We normalize the pixel values in $I^{'}$ using a simple linear function $I^{'}=\frac{I^{'}-min(I^{'})}{max(I^{'})-min(I^{'})}$ to map the original values to [0, 1]. Compared with the original image $I$ which treats all pixels equally, the pixel value of $I^{'}$ not only depends on whether it belongs to foreground or background, but is also related to its relative position. Pixels located in the center of the object have the largest values and those far away from the center or in the background have the smallest values. So $I^{'}$ represents the body part of the original image, which mainly focuses on the central pixels that are relatively easy to predict. We use it as the body label in the following experiments. Correspondingly, by removing the body image $I^{'}$ from the original image $I$, we get the detail image, which is regarded as the detail label in subsequent experiments and mainly concentrates on pixels near the object boundary. In addition, we multiply the newly generated labels with the original binary image $I$ to remove the background interference as
\begin{flalign}
Label\Rightarrow \left\{
\begin{aligned}
BL &= I*I^{'} \\
DL &= I*(1-I^{'})
\end{aligned}
\right. \label{detail}
\end{flalign}
where $BL$ means the body label and $DL$ represents the detail label. Now the original label has been decoupled into two different kinds of supervision to assist the network to learn both the body and detail features with different characteristics respectively.
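Eq.~\ref{DistTrans} and Eq.~\ref{detail} translate directly into code. The sketch below is a brute-force illustration (helper names are ours; a real implementation would use a fast distance-transform routine):

```python
import numpy as np

def distance_transform(mask):
    """Each foreground pixel gets its Euclidean distance to the nearest
    background pixel; background pixels get 0 (brute-force version)."""
    fg, bg = np.argwhere(mask == 1), np.argwhere(mask == 0)
    out = np.zeros(mask.shape, dtype=float)
    if len(fg) == 0 or len(bg) == 0:
        return out
    # Pairwise distances between every foreground and background pixel.
    diff = fg[:, None, :] - bg[None, :, :]
    out[tuple(fg.T)] = np.sqrt((diff ** 2).sum(-1)).min(axis=1)
    return out

def decouple_label(mask):
    """Split a binary label I into body label BL and detail label DL."""
    dist = distance_transform(mask)
    if dist.max() > dist.min():                 # min-max normalization to [0, 1]
        dist = (dist - dist.min()) / (dist.max() - dist.min())
    body = mask * dist                          # BL = I * I'
    detail = mask * (1.0 - dist)                # DL = I * (1 - I')
    return body, detail
```

Note that $BL + DL = I$, so the body and detail labels exactly partition the original mask.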
\subsection{Feature Extraction}
\label{featureextraction}
\begin{figure}
\centering
\includegraphics[scale=0.57]{sketch.pdf}
\caption{Some examples of label decoupling. (c) represents the body label of the ground truth, where pixels close to the center of the target have larger values. (d) means the detail label of the ground truth, where pixels near the boundary of the target have larger values. The sum of (c) and (d) is equal to (b).}
\label{sketch}
\end{figure}
As suggested by~\cite{DGRL,SRM,PICANet}, we use ResNet-50~\cite{Resnet} as our backbone network. Specifically, we remove the fully connected layer and retain all convolutional blocks. Given an input image with shape $H \times W$, this backbone will generate five scales of features with spatial resolution decreasing by stride 2 due to downsampling. We denote these features as $F=\{F_i | i=1,2,3,4, 5\}$. The size of the $i$-th feature is $\frac{W}{2^i}\times \frac{H}{2^i} \times C_i$, where $C_i$ is the channel of the $i$-th feature. It has been shown that low-level features greatly increase computation cost, but bring limited performance improvement~\cite{CPD}. So we only utilize features from $\{F_i | i=2,3,4,5\}$, as shown in Fig.~\ref{framework}. Two convolution layers are applied to these features to adapt them separately to the body prediction task and the detail prediction task. Then we get two groups of features $B=\{B_i | i=2,3,4,5\}$ and $D=\{D_i | i=2,3,4,5\}$, which have all been squeezed to 64 channels and are sent to the decoder network for saliency map generation.
\subsection{Feature Interaction Network}
Feature interaction network is built to adapt to the label decoupling, as shown in Fig.~\ref{framework}. With label decoupling, the saliency label has been transformed into the body map and the detail map, both of which are taken as supervision for model learning. FIN is designed as a two-branch structure, each of which is responsible for one label kind. Since both the body map and detail map are derived from the same saliency label, there exists a certain level of similarity and complementarity between the features from two branches. We introduce feature interaction between the complementary branches for information exchanging.
On the whole, the proposed framework is made up of one backbone encoder network, one interaction encoder network, one body decoder network and one detail decoder network. As discussed in Sec.~\ref{featureextraction}, ResNet-50~\cite{Resnet} is used as the backbone network to extract multi-level features $B=\{B_i | i=2,3,4,5\}$ and $D=\{D_i | i=2,3,4,5\}$. For features $B$, a body decoder network is applied to generate body maps. Similarly, for features $D$, a detail decoder network is applied to generate detail maps. After getting the output features of these two branches, the simplest way to deal with them is to concatenate these features and apply a convolutional layer to get final saliency maps. However, this way ignores the relationship between branches. To explicitly promote the information exchange between branches, an interaction encoder network is introduced.
More specifically, the interaction encoder takes the concatenated features of the body decoder and detail decoder as input. It stacks multiple convolutions to extract multi-level features. Then $3\times3$ convolution layers are applied to these multi-level features to make them appropriate for the body decoder and detail decoder respectively. Direct addition is used to fuse the interaction features with features from the backbone encoder to produce more accurate saliency maps. On the surface, the whole network looks unusual since branch outputs are fed back into the preceding decoders. But in fact, feature interaction consists of multiple iterations. At the first iteration, the two branches output features without exchanging information. From the second iteration on, interaction is involved between branches.
\subsection{Loss Function}
Our training loss is defined as the summation of the losses over all iterations,
\begin{small}
\begin{flalign}
\label{totalloss}
\mathcal{L} = \sum_{k=1}^K\alpha_k\ell^{(k)},
\end{flalign}
\end{small}
where $\ell^{(k)}$ is the loss of the $k$-th iteration, $K$ denotes the total number of iterations and $\alpha_k$ is the weight of each iteration. To simplify the problem, we set $\alpha_k=1$ to treat all iterations equally. For each iteration, we will get three outputs ({\it i.e.}, body, detail and segmentation) and each of them corresponds to one loss. So $\ell^{(k)}$ can be defined as the combination of three losses as follows:
\begin{flalign}
\label{singleloss}
\ell^{(k)} = \ell^{(k)}_{body}+\ell^{(k)}_{detail}+\ell^{(k)}_{segm},
\end{flalign}
where $\ell^{(k)}_{body}$, $\ell^{(k)}_{detail}$ and $\ell^{(k)}_{segm}$ denote body loss, detail loss and segmentation loss, respectively. We directly utilize binary cross entropy (BCE) to calculate both $\ell^{(k)}_{body}$ and $\ell^{(k)}_{detail}$. BCE is a widely used loss in binary classification and segmentation, which is defined as:
\begin{footnotesize}
\begin{flalign}
\label{bce}
\ell_{bce}\!=\!-\!\sum\limits_{(x,y)}[g(x,\!y)log(p(x,\!y))\!+\!(1\!-\!g(x,\!y))log(1\!-\!p(x,\!y))],
\end{flalign}
\end{footnotesize}
where $g(x,y) \in [0, 1]$ is the ground truth label of the pixel $(x,y)$ and $p(x,y) \in [0,1]$ is the predicted probability of being salient. However, BCE calculates the loss for each pixel independently and ignores the global structure of the image. To remedy this problem, as suggested by~\cite{BASNet}, we utilize the IoU loss to calculate $\ell^{(k)}_{segm}$, which can measure the similarity of two images as a whole rather than at a single pixel. It is defined as:
\begin{small}
\begin{eqnarray}
\label{iou}
\ell_{iou} = 1-\frac{\sum\limits_{(x,y)} [g(x,y)*p(x,y)]}{\sum\limits_{(x,y)} [g(x,y)+p(x,y)-g(x,y)*p(x,y)]},
\end{eqnarray}
\end{small}
where the notations are the same as Eq.~\ref{bce}. We do not apply IoU loss on $\ell^{(k)}_{body}$ and $\ell^{(k)}_{detail}$, because IoU loss requires the ground truth to be binary or it will result in wrong predictions, while body label and detail label do not satisfy this requirement.
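The losses of Eq.~\ref{singleloss}--Eq.~\ref{iou} can be sketched as follows (illustrative NumPy, not the training code; the helper names are ours):

```python
import numpy as np

def bce_loss(g, p, eps=1e-7):
    """Pixel-wise binary cross-entropy, summed over pixels."""
    p = np.clip(p, eps, 1.0 - eps)              # avoid log(0)
    return -np.sum(g * np.log(p) + (1.0 - g) * np.log(1.0 - p))

def iou_loss(g, p):
    """Soft IoU loss, a map-level similarity measure."""
    inter = np.sum(g * p)
    union = np.sum(g + p - g * p)
    return 1.0 - inter / union

def iteration_loss(body_gt, body_p, detail_gt, detail_p, seg_gt, seg_p):
    """One iteration's loss: BCE supervises body and detail (their labels
    are soft), IoU supervises the binary segmentation map."""
    return (bce_loss(body_gt, body_p) + bce_loss(detail_gt, detail_p)
            + iou_loss(seg_gt, seg_p))
```

The total loss of Eq.~\ref{totalloss} is then the sum of this iteration loss over the $K$ iterations (with $\alpha_k=1$).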
\section{Experiments}
\begin{table*}
\caption{Performance comparison with state-of-the-art methods on six datasets. MAE (smaller is better), mean $F$-measure ($mF$, larger is better) and $E$-measure ($E_\xi$, larger is better) are used to measure the model performance. '-' means the author has not provided corresponding saliency maps. The best and the second best results are highlighted in {\color{red}red} and {\color{blue}blue} respectively.}
\label{Performance}
\renewcommand\tabcolsep{2.35pt}
\renewcommand\arraystretch{1}
\centering
\begin{tabular}{l|ccc|ccc|ccc|ccc|ccc|ccc}
\hline
\hline
\multirow{3}{*}{\textbf{Algorithm}} & \multicolumn{3}{c|}{\textbf{ECSSD}} & \multicolumn{3}{c|}{\textbf{PASCAL-S}} & \multicolumn{3}{c|}{\textbf{DUTS-TE}} & \multicolumn{3}{c|}{\textbf{HKU-IS}} & \multicolumn{3}{c|}{\textbf{DUT-OMRON}} & \multicolumn{3}{c}{\textbf{THUR15K}}\\
& \multicolumn{3}{c|}{1,000 images} & \multicolumn{3}{c|}{850 images} & \multicolumn{3}{c|}{5,019 images} & \multicolumn{3}{c|}{4,447 images} & \multicolumn{3}{c|}{5,168 images} & \multicolumn{3}{c}{6,232 images}\\
\cline{2-19}
& MAE & $mF$ &$E_\xi$& MAE & $mF$ &$E_\xi$& MAE & $mF$ &$E_\xi$& MAE & $mF$ &$E_\xi$& MAE & $mF$ &$E_\xi$& MAE & $mF$ &$E_\xi$ \\
\hline
\hline
BMPM~\cite{BMPM} & .044 & .894 & .914 & .073 & .803 & .838 & .049 & .762 & .859 & .039 & .875 & .937 & .063 & .698 & .839 & .079 & .704 & .803 \\
DGRL~\cite{DGRL} & .043 & .903 & .917 & .074 & .807 & .836 & .051 & .764 & .863 & .037 & .881 & .941 & .063 & .709 & .843 & .077 & .716 & .811 \\
R$^3$Net~\cite{R3Net} & .051 & .883 & .914 & .101 & .775 & .824 & .067 & .716 & .827 & .047 & .853 & .921 & .073 & .690 & .814 & .078 & .693 & .803 \\
RAS~\cite{RAS} & .055 & .890 & .916 & .102 & .782 & .832 & .060 & .750 & .861 & .045 & .874 & .931 & .063 & .711 & .843 & .075 & .707 & .821 \\
PiCA-R~\cite{PICANet} & .046 & .867 & .913 & .075 & .776 & .833 & .051 & .754 & .862 & .043 & .840 & .936 & .065 & .695 & .841 & .081 & .690 & .803 \\
AFNet~\cite{AFNet} & .042 & .908 & .918 & .070 & .821 & .846 & .046 & .792 & .879 & .036 & .888 & .942 & .057 & .738 & .853 & .072 & .730 & .820 \\
BASNet~\cite{BASNet} & .037 & .880 & .921 & .076 & .775 & .847 & .048 & .791 & .884 & {\color{blue}.032} & .895 & .946 & .056 & {\color{blue}.756} & {\color{blue}.869} & .073 & .733 & .821 \\
CPD-R~\cite{CPD} & .037 & .917 & .925 & .072 & .824 & .849 & .043 & .805 & .886 & .034 & .891 & .944 & .056 & .747 & .866 & .068 & .738 & .829 \\
EGNet-R~\cite{EGNet} & .037 & .920 & .927 & .074 & .823 & .849 & {\color{blue}.039} & {\color{blue}.815} & .891 & {\color{blue}.032} & .898 & .948 & {\color{blue}.053} & .755 & .867 & .067 & {\color{blue}.741} & .829 \\
PAGE~\cite{PAGE} & .042 & .906 & .920 & .077 & .810 & .841 & .052 & .777 & .869 & .037 & .882 & .940 & .062 & .736 & .853 & - & - & - \\
TDBU~\cite{TDBU} & .041 & .880 & .922 & .071 & .779 & .852 & .048 & .767 & .879 & .038 & .878 & .942 & .061 & .739 & .854 & - & - & - \\
SCRN~\cite{SCRN} & .037 & .918 & .926 & {\color{blue}.064} & {\color{blue}.832} & {\color{blue}.857} & .040 & .808 & .888 & .034 & .896 & .949 & .056 & .746 & .863 & {\color{blue}.066} & {\color{blue}.741} & {\color{blue}.833} \\
SIBA~\cite{SIBA} & {\color{blue}.035} & {\color{blue}.923} & {\color{red}.928} & .070 & .830 & .855 & .040 & {\color{blue}.815} & {\color{blue}.892} & {\color{blue}.032 }& {\color{blue}.900} & {\color{blue}.950} & .059 & .746 & .860 & .068 & {\color{blue}.741} & .832 \\
PoolNet~\cite{PoolNet} & .039 & .915 & .924 & .074 & .822 & .850 & .040 & .809 & .889 & {\color{blue}.032} & .899 & .949 & .056 & .747 & .863 & .070 & .732 & .822 \\
\hline
\textbf{LDF(ours)} & {\color{red}.034} & {\color{red}.930} & {\color{blue}.925} & {\color{red}.060} & {\color{red}.848} & {\color{red}.865} & {\color{red}.034} & {\color{red}.855} & {\color{red}.910} & {\color{red}.027} & {\color{red}.914} & {\color{red}.954} & {\color{red}.051} & {\color{red}.773} & {\color{red}.873} & {\color{red}.064} & {\color{red}.764} & {\color{red}.842} \\
\hline
\hline
\end{tabular}
\end{table*}
\begin{figure*}
\centering
\includegraphics[scale=0.505]{PRCurve.pdf}
\caption{Performance comparison with state-of-the-art methods on five datasets. The first row shows precision-recall curves. The second row shows $F$-measure curves with different thresholds.}
\label{PRCurve}
\end{figure*}
\begin{figure*}[htb]
\centering
\includegraphics[scale=0.515]{Sample.pdf}
\caption{Visual comparison of different algorithms. Each row represents one image and corresponding saliency maps. Each column represents the predictions of one method. Apparently, our method is good at dealing with cluttered background and producing more accurate and clear saliency maps.}
\label{Sample}
\end{figure*}
\subsection{Datasets and Evaluation Metrics}
To evaluate the proposed method, six popular benchmark datasets are adopted, including ECSSD~\cite{ECSSD} with 1000 images, PASCAL-S~\cite{PASCALS} with 850 images, HKU-IS~\cite{HKUIS} with 4447 images, DUT-OMRON~\cite{DUTO} with 5168 images, DUTS~\cite{DUTS} with 15572 images and THUR15K~\cite{THUR15K} with 6232 images. Among them, \textbf{DUTS} is the largest saliency detection benchmark, which contains 10,553 training images (\textbf{DUTS-TR}) and 5,019 testing images (\textbf{DUTS-TE}). \textbf{DUTS-TR} is used to train the model, and the other datasets are used for evaluation. In addition, we also measure the model performance on the challenging SOC dataset~\cite{SOC} of different attributes. Five metrics are used to evaluate the performance of our model and existing state-of-the-art methods. The first metric is the mean absolute error (MAE), as shown in Eq.~\ref{mae}, which is widely adopted in~\cite{RAS,DSS,C2SNet,PICANet}. Mean $F$-measure ($mF$), $E$-measure ($E_\xi$)~\cite{Emeasure}, weighted $F$-measure ($F_\beta^\omega$) and $S$-measure ($S_\alpha$) are also widely used to evaluate saliency maps. In addition, precision-recall (PR) and $F$-measure curves are drawn to show the overall performance.
\begin{eqnarray}
MAE = \frac{1}{H \times W} \sum_{i=1}^{H}\sum_{j=1}^{W}|P(i,j)-G(i,j)|
\label{mae}
\end{eqnarray}
where $P$ is the predicted map and $G$ is the ground truth.
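In code, Eq.~\ref{mae} is simply (a one-line sketch):

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error, averaged over all H x W pixels."""
    return float(np.mean(np.abs(pred - gt)))
```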
\subsection{Implementation Details}
The proposed model is trained on DUTS-TR and tested on the six datasets mentioned above. For data augmentation, we use horizontal flip, random crop and multi-scale input images. ResNet-50, pretrained on ImageNet, is used to initialize the backbone ({\it i.e.}, block1 to block5) and other parameters are randomly initialized. We set the maximum learning rate to 0.005 for the ResNet-50 backbone and 0.05 for other parts. Warm-up and linear decay strategies are used. The whole network is trained end-to-end by stochastic gradient descent (SGD). Momentum and weight decay are set to 0.9 and 0.0005, respectively. Batch size is set to 32 and the maximum epoch is set to 48. During testing, each image is simply resized to $352 \times 352$ and then fed into the network to get the prediction without any post-processing. It is worth noting that the output saliency maps are used as the predictions rather than the sum of the predicted body and detail maps.
\begin{table*}
\caption{Performance on SOC~\cite{SOC} of different attributes. Each row represents one attribute and we report the mean $F$-measure scores of LDF and state-of-the-art methods. The last row shows the whole performance on the SOC dataset. The best and the second best results are highlighted in {\color{red}red} and {\color{blue}blue} respectively.}
\label{Attribute}
\renewcommand\tabcolsep{4.0pt}
\renewcommand\arraystretch{1.1}
\centering
\begin{tabular}{cccccccccccccc}
\hline
\hline
Attr & PiCA-R & BMPM & R$^3$Net & DGRL & RAS & AFNet & BASNet & PoolNet & CPD-R & EGNet-R & SCRN & Ours\\
\hline
\hline
AC & 0.721 & 0.727 & 0.659 & 0.744 & 0.664 & 0.763 & {\color{blue}0.773} & 0.746 & 0.765 & 0.739 & 0.770 & {\color{red}0.774}\\
BO & 0.706 & 0.802 & 0.637 & {\color{red}0.847} & 0.654 & {\color{blue}0.824} & 0.780 & 0.677 & 0.821 & 0.743 & 0.743 & 0.803\\
CL & 0.703 & 0.708 & 0.667 & 0.735 & 0.616 & 0.740 & 0.721 & 0.723 & 0.741 & 0.707 & {\color{blue}0.751} & {\color{red}0.772}\\
HO & 0.727 & 0.738 & 0.683 & 0.773 & 0.682 & {\color{blue}0.778} & 0.769 & 0.768 & 0.766 & 0.747 & 0.775 & {\color{red}0.807}\\
MB & 0.779 & 0.757 & 0.669 & 0.809 & 0.687 & 0.794 & 0.791 & 0.784 & 0.810 & 0.741 & {\color{blue}0.815} & {\color{red}0.840}\\
OC & 0.692 & 0.711 & 0.625 & 0.724 & 0.608 & 0.730 & 0.721 & 0.713 & {\color{blue}0.741} & 0.699 & 0.732 & {\color{red}0.756}\\
OV & 0.778 & 0.783 & 0.677 & 0.797 & 0.666 & {\color{blue}0.805} & 0.802 & 0.774 & 0.799 & 0.768 & 0.801 & {\color{red}0.820}\\
SC & 0.678 & 0.702 & 0.626 & 0.725 & 0.645 & 0.711 & 0.713 & 0.723 & 0.726 & 0.708 & {\color{blue}0.738} & {\color{red}0.774}\\
SO & 0.569 & 0.588 & 0.546 & 0.618 & 0.560 & 0.615 & 0.619 & 0.631 & 0.635 & 0.605 & {\color{blue}0.639} & {\color{red}0.676}\\
\hline
Avg & 0.662 & 0.673 & 0.611 & 0.698 & 0.608 & 0.700 & 0.697 & 0.694 & 0.709 & 0.680 & {\color{blue}0.710} & {\color{red}0.739}\\
\hline
\hline
\end{tabular}
\end{table*}
\subsection{Ablation Studies}
\textbf{Number of Feature Interactions.} Tab.~\ref{iteration} shows the performance with different numbers of feature interactions. Compared with the baseline that has no feature interaction (Number=0), the model with one feature interaction achieves better results. When the number grows larger, performance degrades, because repeated feature interaction makes the network deeper and harder to optimize. Therefore, in all the following experiments, we set the number to 1 to balance model optimization and performance.
\textbf{Different Combinations of Supervision.} Tab.~\ref{supervision} shows the performance with different combinations of supervision. From this table, combinations that include the detail label perform better than those that include the edge label, which demonstrates that the detail label is more effective than the edge label. In addition, combinations that include the body label perform better than those that include the saliency label (Sal). This confirms that, without the interference of edges, center pixels can learn better feature representations.
\begin{figure*}[htb]
\centering
\subfigure[ ECSSD ]{\includegraphics[width=0.24\linewidth]{ECSSD.pdf}}
\subfigure[ DUTS ]{\includegraphics[width=0.24\linewidth]{DUTS.pdf}}
\subfigure[HKU-IS ]{\includegraphics[width=0.24\linewidth]{HKUIS.pdf}}
\subfigure[THUR15K]{\includegraphics[width=0.24\linewidth]{THUR15K.pdf}}
\caption{Error-Distance distribution of different methods. The proposed method has the smallest error across all distances; especially around edge areas, it performs much better.}
\label{ErrorComparsion}
\end{figure*}
\subsection{Comparison with State-of-the-arts}
\textbf{Quantitative Comparison.}
To demonstrate the effectiveness of the proposed method, we compare against 14 state-of-the-art SOD methods, including BMPM~\cite{BMPM}, DGRL~\cite{DGRL}, R$^3$Net~\cite{R3Net}, RAS~\cite{RAS}, PiCA-R~\cite{PICANet}, AFNet~\cite{AFNet}, BASNet~\cite{BASNet}, CPD-R~\cite{CPD}, EGNet-R~\cite{EGNet}, PAGE~\cite{PAGE}, TDBU~\cite{TDBU}, SCRN~\cite{SCRN}, SIBA~\cite{SIBA} and PoolNet~\cite{PoolNet}. For a fair comparison, we evaluate all the saliency maps provided by the authors with the same evaluation code. We compare the proposed method with the others in terms of MAE, $mF$ and $E_\xi$, as shown in Tab.~\ref{Performance}. The best results are highlighted in red. Compared with the other counterparts, our method outperforms previous state-of-the-art methods by a large margin. Besides, Fig.~\ref{PRCurve} presents the precision-recall curves and $F$-measure curves on five datasets. As can be seen, the curves of the proposed method consistently lie above the others. In addition, we calculate the Error-Distance distribution of different methods in Fig.~\ref{ErrorComparsion}, where the predictions produced by the proposed method have the minimum error across all distances, especially around edge areas.
\begin{table}
\centering
\caption{Performance with different numbers of feature interactions. Number=0 means the two branches have no feature interaction.}
\label{iteration}
\renewcommand\tabcolsep{3.7pt}
\renewcommand\arraystretch{1}
\begin{tabular}{c|ccc|ccc}
\hline
\hline
\multirow{2}{*}{Number} & \multicolumn{3}{c|}{THUR15K} & \multicolumn{3}{c}{DUTS-TE} \\
& MAE & $mF$ & $E_\xi$ & MAE & $mF$ & $E_\xi$ \\
\hline
0 & 0.069 & 0.751 & 0.834 & 0.038 & 0.839 & 0.897 \\
1 & 0.064 & 0.764 & 0.842 & 0.034 & 0.855 & 0.910 \\
2 & 0.066 & 0.756 & 0.837 & 0.035 & 0.849 & 0.903 \\
3 & 0.068 & 0.753 & 0.834 & 0.037 & 0.842 & 0.897 \\
\hline
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Comparison on different combinations of supervision. Body, detail, saliency and edge maps are used, respectively.}
\label{supervision}
\renewcommand\tabcolsep{2.1pt}
\renewcommand\arraystretch{1}
\begin{tabular}{c|ccc|ccc}
\hline
\hline
\multirow{2}{*}{Label} & \multicolumn{3}{c|}{THUR15K} & \multicolumn{3}{c}{DUTS-TE} \\
& MAE & $mF$ & $E_\xi$ & MAE & $mF$ & $E_\xi$ \\
\hline
Body + Detail & 0.064 & 0.764 & 0.842 & 0.034 & 0.855 & 0.910 \\
Body + Edge & 0.066 & 0.758 & 0.836 & 0.036 & 0.850 & 0.904 \\
Sal + Detail & 0.066 & 0.756 & 0.835 & 0.037 & 0.848 & 0.901 \\
Sal + Edge & 0.070 & 0.752 & 0.827 & 0.039 & 0.844 & 0.895 \\
\hline
\hline
\end{tabular}
\end{table}
\textbf{Visual Comparison.}
Some prediction examples of the proposed method and other state-of-the-art approaches are shown in Fig.~\ref{Sample}. We observe that the proposed method not only highlights the correct salient object regions clearly, but also effectively suppresses background noise. It is robust in dealing with various challenging scenarios, including cluttered backgrounds, man-made structures and low-contrast foregrounds. Compared with the other counterparts, the saliency maps produced by the proposed method are clearer and more accurate.
\textbf{Performance on SOC of Different Attributes.}
SOC~\cite{SOC} is a challenging dataset with multiple attributes. Images with the same attribute share certain similarities and reflect common challenges in the real world. We use this dataset to test the robustness of the model under different scenes. Specifically, we evaluate the mean $F$-measure scores of our model as well as 11 state-of-the-art methods. Each model receives nine scores, one for each attribute. In addition, an overall score is calculated to measure the performance across all scenes. Tab.~\ref{Attribute} shows the scores. The proposed model achieves the best results on most attributes except ``BO", which indicates the good generalization ability of the proposed method; it can be applied in different challenging scenes.
\section{Conclusion}
In this paper, we propose the label decoupling framework for salient object detection. After empirically showing that edge prediction is a challenging part of saliency prediction, we propose to decouple the saliency label into a body map and a detail map. The detail map helps the model learn better edge features, while the body map avoids the distraction from pixels near edges. Supervised by these two kinds of maps, the proposed method achieves better performance than direct supervision with saliency maps. Besides, a feature interaction network is introduced to make full use of the complementarity between body and detail maps. Experiments on six datasets demonstrate that the proposed method outperforms state-of-the-art methods under different evaluation metrics.
\section{Acknowledgement}
This work was supported in part by the National Key R\&D Program of China under Grant 2018AAA0102003, in part by the National Natural Science Foundation of China: 61672497, 61620106009, 61836002, 61931008 and U1636214, and in part by the Key Research Program of Frontier Sciences, CAS: QYZDJ-SSW-SYS013. The authors would like to thank Kingsoft Cloud for their helpful discussion and free GPU cloud computing resource support.
{
\small
\bibliographystyle{bibstyle}
\section{Introduction}
\label{sec:intro}
The next generation of robots, that can operate in and adapt to unstructured and dynamic environments, must possess a diverse set of skills. However, it is implausible to pre-program robots with a library of all required skills. Learning from Demonstration (LfD) \cite{argall2009survey,billard2008robot} is a paradigm that aims to equip robots with the ability to learn efficiently from demonstrations provided by humans. Existing work in trajectory-based LfD has contributed a wide range of mathematical representations that encode skills from human demonstrations and then reproduce the learned skills at runtime. Proposed representations include Spring-damper systems with forcing functions \cite{pastor2009learning}, Gaussian Mixture Models (GMMs) \cite{calinon2007learning,khansari2011learning,ravichandar2018learning}, Neural Networks (NNs) \cite{neumann2013neural,levine2014learning}, Gaussian Processes (GPs) \cite{schneider2010robot,rana2017towards,umlauft2017learning}, and geometric objects \cite{ahmadzadeh2017generalized}, among others. Each of these representations is used to encode the demonstrations in a predefined space or coordinate system (e.g., Cartesian coordinates). In other words, a single best coordinate system for any given skill is assumed to both exist and be known. However, as we show in this work, the assumption that a single best coordinate system exists for each task does not hold. Further, encoding in only a single coordinate system prohibits the model from capturing some of the geometric features that underlie a demonstrated skill.
\begin{figure}
\centering
\includegraphics[trim={6cm 1.1cm 6cm 1.2cm}, clip, width=\columnwidth]{converging_curves.pdf}
\caption{\small{A comparison of reproductions generated by considering different coordinates, illustrating the need for cost balancing.}}
\label{fig:converging_traj}
\end{figure}
In this work, we contribute a learning framework that encodes demonstrations simultaneously in multiple coordinates, and balances the relative influences of the learned models in generating reproductions. The proposed framework, named Multi-Coordinate Cost Balancing (MCCB), encodes demonstrations in three differential coordinates: Cartesian, tangent, and Laplacian (Section \ref{subsec:coordinate_transformations}). Simultaneously learning in these three coordinates allows our method to capture all of the underlying geometric properties that are central to a given skill. MCCB encodes the joint density of the time index and the demonstrations in each differential coordinate frame using a separate statistical model. Thus, given any time instant, we are able to readily obtain the conditional mean and covariance in each coordinate system (Section \ref{subsec:GMMs}). MCCB generates reproductions by solving an optimization problem with a blended cost function that consists of one term per coordinate. Each term penalizes deviations from the norm, weighted by the inverse of the expected variance in the corresponding coordinate system (Section \ref{subsec:optimization}). Further, we subject the optimization problem to linear constraints on the reproductions, such as initial, target, and via point constraints. Our constrained optimization problem is convex with respect to the reproduction and hence can be solved efficiently.
A major hurdle in learning a wide variety of skills, without significant parameter tweaking, is that the relative importance of each differential coordinate (or the geometric feature) in encoding a given task is unknown ahead of time. For instance, consider the problem of encoding the demonstrations illustrated in Fig. \ref{fig:converging_traj}. Using any one coordinate system in isolation, even when the most suitable one is known, does not yield good reproductions (the red, brown, and green dashed lines). To alleviate this problem, MCCB preferentially weights the costs defined in each coordinate (Fig.~\ref{fig:graph}). Importantly, MCCB learns the optimal weights directly from the demonstrations without making task-dependent assumptions. To this end, MCCB solves a meta optimization problem that aims to minimize reproduction errors (Section \ref{subsec:cost_balancing}). As shown by the solid blue lines in Fig. \ref{fig:converging_traj}, a cost function that optimally balances the costs in each coordinate yields better reproductions than any single-coordinate method.
In summary, we contribute a unified task-independent learning framework that (1) encodes demonstrations simultaneously in multiple differential coordinates, (2) defines a blended cost function that incentivizes conformance to the norm in each coordinate system while considering expected variance, and (3) learns optimal weights directly from the demonstrations to balance the relative influence of each differential coordinate in generating reproductions. Further, MCCB is compatible with and complementary to several existing LfD methods that utilize different statistical representations and coordinate systems \cite{calinon2014task,paraschos2013probabilistic,ahmadzadeh2017generalized,umlauft2017learning,rana2017towards,osa2017guiding,nierhoff2016spatial}.
\begin{figure}
\centering
\includegraphics[trim={0.5cm 0.5cm 0.5cm 0.5cm}, clip, width=\columnwidth]{flow_diagram.pdf}
\caption{\small{A flow diagram illustrating MCCB.}}
\label{fig:graph}
\end{figure}
\section{Related Work}
\label{sec:relatedwork}
Learning from demonstration has attracted a lot of attention from researchers in the past few decades. While several categories of LfD methods exist \cite{argall2009survey}, our work falls under the category of trajectory-based LfD. In this category, demonstrations take the form of trajectories and the methods aim to synthesize trajectories that accurately reproduce the demonstrations.
Dynamical systems-based trajectory learning methods, such as \cite{khansari2011learning,ravichandar2018learning,neumann2013neural}, encode demonstrations using statistical dynamical systems and generate reproductions by forward propagating the dynamics. While such deterministic methods exhibit several advantages, such as convergence guarantees and robustness to perturbations, they are restricted to learning in a single coordinate system and ignore inherent uncertainties in the demonstrations. They incentivize conformance to the norm even when demonstrations exhibit high variance.
Trajectory optimization methods, such as \cite{ratliff2009chomp} and \cite{schulman2014motion}, focus on geometric features by minimizing costs specified using predefined norms. An optimization framework proposed in \cite{dragan2015movement} attempts to adapt multiple demonstrations to new initial and target locations by minimizing the distance between the demonstrations and the reproduction according to a learned Hilbert space norm. Indeed, learning an appropriate Hilbert space norm is related to finding an appropriate coordinate system based on the demonstrations. However, similar to the dynamical systems-based methods, the methods in \cite{ratliff2009chomp,schulman2014motion,dragan2015movement} are restricted to a single predefined or learned coordinate system and do not explicitly model and utilize the inherent time-dependent variations in the demonstrations.
Probabilistic trajectory-learning methods, such as \cite{rana2017towards,umlauft2017learning} and \cite{paraschos2013probabilistic}, on the other hand, capture and utilize the variation observed in the demonstrations. However, these methods are also restricted to encoding demonstrations in a single predefined coordinate system that is assumed to be known.
Our design of the costs in each differential coordinate is inspired by the minimal intervention principle \cite{calinon2014task} that takes variance into account. While the approach in \cite{calinon2014task} does encode demonstrations in different frames of references, all the frames are restricted to Cartesian coordinates or orientation space. Furthermore, all the relevant frames for a given task are also expected to be provided by the user.
The motion planning framework in \cite{osa2017guiding}, complementary to our approach, utilizes a blended cost function, the construction of which is guided by probability distributions learned from the demonstrations. This framework incentivizes factors such as smoothness, manipulability, and obstacle avoidance, but is restricted to the Cartesian coordinate system. MCCB, on the other hand, encodes demonstrations in multiple differential coordinates and learns to optimally balance their relative influences, but does not consider factors such as manipulability and obstacle avoidance.
Differential coordinates have been extensively used in the computer graphics community \cite{lipman2004differential,levy2006laplace}. Prior work in trajectory learning that incorporates differential coordinates includes the Laplacian trajectory editing (LTE) algorithm \cite{nierhoff2016spatial}. Using Laplacian coordinates, the LTE algorithm adapts a single demonstration to new initial, target, and via points while preserving the shape. However, the LTE algorithm does not reason about the relative importances of multiple coordinates.
\section{Methodology}\label{sec:method}
This section describes the technical details of MCCB and its workflow, as illustrated in Fig. \ref{fig:graph}.
\subsection{Differential Coordinate Transformations} \label{subsec:coordinate_transformations}
In this section, we define the differential coordinates and their corresponding transformations used in MCCB.
\textit{Cartesian:} Let a discrete finite-length trajectory in $n$-dimensional \textit{Cartesian coordinates} be denoted by $\bm{X} = [x(1) \ x(2) \cdots x(T)]^\top \in \mathbb{R}^{T \times n}$ and let $x(t) \in \mathbb{R}^n$ denote a discrete sample at time index $t$. This trajectory can be represented using a graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$ where $\mathcal{V}$ is the set of vertices representing the samples in the trajectory and $\mathcal{E}$ is the set of edges that represent the connections between the samples in the trajectory. The neighborhood $\mathcal{N}_t$ of each vertex $\mathcal{V}_t$ is defined by the set of adjacent vertices $\mathcal{V}_t'$. In the case of discrete-time trajectories, the edges between any given vertex and its two neighbors are assumed to carry unit weights, while all other edges carry zero weights.
\textit{Laplacian:} It is known that the discrete Laplace-Beltrami operator for the trajectory $\bm{X}$ provides the \textit{Laplacian coordinate} $\delta(t)$ as
${
\delta(t) \triangleq \frac{1}{|\mathcal{N}_t|} \sum_{t' \in \mathcal{N}_t} (x(t) - x(t'))
}$ \cite{lipman2004differential}.
Note that the above relationship can be written as a linear differential operator in matrix form
\begin{equation}\label{eq:Delta}
\bm{\Delta} = \bm{L}\bm{X}
\end{equation}
where $\bm{\Delta} = [\delta(1) \ \delta(2) \cdots \delta(T)]^\top \in \mathbb{R}^{T \times n}$ is the trajectory in the Laplacian coordinates, and $\bm{L} \in \mathbb{R}^{T \times T}$, called the graph Laplacian, is given by
\begin{equation}
\bm{L} = \left[\begin{smallmatrix}
1 & -1 & 0 & \dots & \dots & 0 \\
-0.5 & 1 & -0.5 & 0 & \dots & 0 \\
0 & -0.5 & 1 & -0.5 & \dots & 0 \\
\vdots & & \ddots & \ddots & \ddots & \vdots \\
0 & \dots & 0 & -0.5 & 1 & -0.5 \\
0 & \dots & \dots & 0 & -1 & 1
\end{smallmatrix}\right]
\end{equation}
As pointed out in \cite{nierhoff2016spatial}, the Laplacian coordinates have meaningful geometric interpretations. Specifically, the Laplacian coordinates can be seen as the discrete approximations of the derivative of the unit tangent vectors of an arc-length parametrized continuous trajectory. In other words, the Laplacian coordinates measure the deviation of each sample from the centroid of its neighbors.
\textit{Tangent:} While the Laplacian coordinates are discrete approximations of second order differential transformations, a discrete approximation of the first differential transformation is possible. Consider such a first order transformation using first order finite differences defined as
${
\gamma(t) \triangleq (x(t+1) - x(t))
}$,
where $\gamma(t)$ is called the \textit{tangent coordinate}. The matrix form of the above relationship results in a linear differential operator given by
\begin{equation} \label{eq:Gamma}
\bm{\Gamma} = \bm{G}\bm{X}
\end{equation}
where $\bm{\Gamma} = [\gamma(1) \ \gamma(2) \cdots \gamma(T)]^\top \in \mathbb{R}^{T \times n}$ is the trajectory in the tangent coordinates and $\bm{G} \in \mathbb{R}^{T \times T}$, called the graph incidence matrix, is given by
\begin{equation}
\bm{G} = \left[\begin{smallmatrix}
-1 & 1 & 0 & \dots & \dots & 0 \\
0 & -1 & 1 & 0 & \dots & 0 \\
0 & 0 & -1 & 1 & \dots & 0 \\
\vdots & & \ddots & \ddots & \ddots & \vdots \\
0 & \dots & 0 & 0 & -1 & 1 \\
0 & \dots & \dots & 0 & 0 & -1
\end{smallmatrix}\right]
\end{equation}
Similar to the Laplacian coordinates, the tangent coordinates have geometric interpretations. Specifically, the tangent coordinates can be seen as discrete approximations of the un-normalized tangent vectors of an arc-length parametrized continuous trajectory, i.e., the tangent coordinates measure the local direction of motion at each sample of the trajectory.
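Both differential operators are sparse banded matrices that can be built directly from the chain neighborhood structure. The following sketch (an illustration, not the authors' implementation) constructs $\bm{L}$ and $\bm{G}$ for an open chain of $T$ samples and applies them to a toy trajectory; on a straight line the interior Laplacian coordinates vanish, consistent with their interpretation as deviations from the neighbor centroid.

```python
import numpy as np

def graph_laplacian(T):
    """Graph Laplacian L for an open chain of T samples."""
    L = np.eye(T)
    for t in range(T):
        nbrs = [s for s in (t - 1, t + 1) if 0 <= s < T]
        for s in nbrs:
            L[t, s] = -1.0 / len(nbrs)  # endpoints have a single neighbor
    return L

def incidence(T):
    """First-order forward-difference operator G (tangent coordinates)."""
    G = -np.eye(T)
    G[np.arange(T - 1), np.arange(1, T)] = 1.0
    return G

# Toy 2-D straight-line trajectory: x(t) = (t, 2t).
X = np.column_stack([np.arange(5.0), 2.0 * np.arange(5.0)])
Delta = graph_laplacian(5) @ X   # Laplacian coordinates
Gamma = incidence(5) @ X         # tangent coordinates
```

Here `Delta[1:4]` is zero because each interior sample coincides with the centroid of its neighbors, while `Gamma` reads off the constant direction of motion $(1, 2)$ at every interior step.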
In our work, we assume that a set of $N$ demonstrations in the Cartesian coordinates are available. Let the $j$th demonstration be denoted by $\bm{X}_d^j = [x_d^j(1) \ x_d^j(2) \cdots x_d^j(T)]^\top \in \mathbb{R}^{T \times n}$. Note that if the raw demonstrations are of varying duration in time, we perform time alignment using dynamic time warping. MCCB transforms each obtained demonstration $\bm{X}_d^j$ into a trajectory in the tangent coordinates (denoted by $\bm{\Gamma}_d^j$) and a trajectory in Laplacian coordinates (denoted by $\bm{\Delta}_d^j$) using (\ref{eq:Delta}) and (\ref{eq:Gamma}), respectively.
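The time-alignment step can be sketched with the classic dynamic-time-warping recursion. The function below is a minimal $O(T_a T_b)$ illustration of the idea, not the exact alignment code used in the experiments.

```python
import numpy as np

def dtw_cost(A, B):
    """Accumulated dynamic-time-warping cost between trajectories.

    A, B: arrays of shape (Ta, n) and (Tb, n). Returns the (Ta, Tb)
    accumulated-cost matrix; the optimal warping path can be recovered
    by backtracking from the bottom-right entry.
    """
    Ta, Tb = len(A), len(B)
    D = np.full((Ta + 1, Tb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            cost = np.linalg.norm(A[i - 1] - B[j - 1])
            # Standard recursion: match, insertion, or deletion.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[1:, 1:]
```

Aligning each raw demonstration against a reference demonstration with this recursion yields the common time base assumed in the rest of the derivation.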
\subsection{Encoding in Multiple Differential Coordinates}\label{subsec:GMMs}
This section defines the costs associated with each coordinate. With the demonstrations available in all three differential coordinates, we employ three independent Gaussian mixture models (GMMs)\footnote{MCCB does not rely on the use of GMMs and any statistical representation that can provide the conditional estimates will suffice.} to approximate the joint probability densities of time and the samples in each coordinate system.
The GMM associated with the Cartesian coordinates attempts to approximate the joint density of $t$ and $x$ using a finite number of Gaussian basis functions as follows
${
\mathcal{P}(t,x;\theta_{C}) = \sum_{k=1}^{K_C} \mathcal{P}(k) \mathcal{P}(t,x|k)
}$,
where $K_C$ is the number of Gaussian basis functions, $\mathcal{P}(k) = \pi_{C}^k$ is the prior associated with the $k$th basis function, $\theta_{C} = \{\mu_{C}^1 \cdots \mu_{C}^{K_{C}}, \Sigma_{C}^1 \cdots \Sigma_{C}^{K_{C}}, \pi_{C}^1 \cdots \pi_{C}^{K_{C}}\}$ is the set of parameters of the GMM, and $\mathcal{P}(t,x|k)$ is the conditional probability density given by
${
\mathcal{P}(t,x|k) \sim \mathcal{N}\left(
\begin{bmatrix} t \\ x \end{bmatrix};
\mu_{C}^k,\Sigma_{C}^k\right)
}$,
where $\mu_{C}^k = \begin{bmatrix} \mu_t^k \\ \mu_{x}^k \end{bmatrix}$ is the mean and $\Sigma_{C}^k = \begin{bmatrix} \Sigma_{t}^k & \Sigma_{t,x}^k \\ \Sigma_{x,t}^k & \Sigma_{x}^k \end{bmatrix}$ is the covariance matrix of the $k$th Gaussian basis function.
We learn the parameters $\theta_C$ of the model using the Expectation-Maximization algorithm based on the demonstrations $\{\bm{X}_d^j\}_{j=1}^{N}$. Given the learned model and a time instant, the expected value of the conditional density $\mathcal{P}(x|t)$ is given by Gaussian mixture regression (GMR) \cite{cohn1996active} as follows
\begin{equation}\label{eq:x_hat}
\hat{x}(t) = \mathbb{E}[x|t] = \sum_{k=1}^{K_C} h_C^k(t) (A_C^k t + b_C^k)
\end{equation}
where $h_C^k(t) = \frac{\mathcal{P}(k) \mathcal{P}(t|k)}{\sum_{i=1}^{K_C}\mathcal{P}(i) \mathcal{P}(t|i)}$, $A_C^k = \Sigma_{x,t}^k (\Sigma_{t}^k)^{-1}$, $b_C^k = \mu_{x}^k - A_C^k \mu_t^k$, and the conditional covariance is given by
\begin{equation}\label{eq:Sigma_x_hat}
\hat{\Sigma}_{x}(t) = \mathrm{Var}[x|t] = \sum_{k=1}^{K_C} ({h_C^k})^2 \ (\Sigma_{x}^k - \Sigma_{x,t}^k(\Sigma_{t}^k)^{-1}\Sigma_{t,x}^k)
\end{equation}
Similar to the GMM learned in the Cartesian coordinates, we learn a second GMM in the tangent coordinates based on the demonstrations $\{\bm{\Gamma}_d^j\}_{j=1}^{N}$, and a third GMM in the Laplacian coordinates based on the demonstrations $\{\bm{\Delta}_d^j\}_{j=1}^{N}$. The expected values of the conditional densities $\mathcal{P}(\gamma|t)$ and $\mathcal{P}(\delta|t)$ are given by
\begin{align}
\hat{\gamma}(t) = & \mathbb{E}[\gamma|t] = \sum_{k=1}^{K_G} h_G^k(t) (A_G^k t + b_G^k) \label{eq:gamma_hat} \\
\hat{\delta}(t) = & \mathbb{E}[\delta|t] = \sum_{k=1}^{K_L} h_L^k(t) (A_L^k t + b_L^k) \label{eq:delta_hat}
\end{align}
and the corresponding conditional covariances are given by
\begin{align}
\hat{\Sigma}_{\gamma}(t) =& \mathrm{Var}[\gamma|t] = \sum_{k=1}^{K_G} ({h_G^k})^2 \ (\Sigma_{\gamma}^k -\Sigma_{\gamma,t}^k(\Sigma_{t}^k)^{-1}\Sigma_{t,\gamma}^k) \label{eq:Sigma_gamma_hat} \\
\hat{\Sigma}_{\delta}(t) =& \mathrm{Var}[\delta|t] = \sum_{k=1}^{K_L} ({h_L^k})^2 \ (\Sigma_{\delta}^k -\Sigma_{\delta,t}^k(\Sigma_{t}^k)^{-1}\Sigma_{t,\delta}^k) \label{eq:Sigma_delta_hat}
\end{align}
where the variables in (\ref{eq:gamma_hat})-(\ref{eq:Sigma_delta_hat}) with subscripts $G$ and $L$ correspond to the tangent and Laplacian coordinates, respectively, and are defined similarly to the ones in (\ref{eq:x_hat})-(\ref{eq:Sigma_x_hat}).
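The conditional estimates in (\ref{eq:x_hat})-(\ref{eq:Sigma_delta_hat}) follow the standard Gaussian mixture regression recipe and can be sketched for a single joint GMM over $(t, x)$. The function below is an illustrative implementation; it assumes the time index occupies the first row and column of each component mean and covariance.

```python
import numpy as np

def gmr(t, priors, means, covs):
    """Gaussian mixture regression: E[x|t] and Var[x|t] for a joint GMM.

    means[k] = [mu_t, mu_x_1, ...]; covs[k] is partitioned with time first.
    Implements the conditional mean A^k t + b^k and conditional covariance
    weighted by the squared responsibilities h^k(t).
    """
    K = len(priors)
    # Responsibilities h^k(t) from the 1-D marginal densities P(t | k).
    h = np.array([priors[k]
                  * np.exp(-0.5 * (t - means[k][0]) ** 2 / covs[k][0, 0])
                  / np.sqrt(2.0 * np.pi * covs[k][0, 0]) for k in range(K)])
    h /= h.sum()
    n = means[0].shape[0] - 1
    mean, var = np.zeros(n), np.zeros((n, n))
    for k in range(K):
        A = covs[k][1:, :1] / covs[k][0, 0]           # Sigma_{x,t} (Sigma_t)^-1
        mean += h[k] * (means[k][1:] + (A * (t - means[k][0])).ravel())
        var += h[k] ** 2 * (covs[k][1:, 1:] - A @ covs[k][:1, 1:])
    return mean, var
```

Evaluating `gmr` at each time index of the trajectory yields the conditional means and covariances used to build the cost functions in the next subsection.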
\subsection{Imitation via Optimization}\label{subsec:optimization}
In this section, we explain the design of our multi-coordinate cost function. MCCB generates reproductions by solving a constrained optimization problem given by
\begin{align}
\bm{X}_r = & \arg \min_{\bm{X}} \ w_C J_C(\bm{X}) + w_G J_G(\bm{X}) \nonumber \\
& \qquad \qquad \qquad + w_L J_L(\bm{X}) \label{eq:X_opt} \\
&\ \mathrm{s.t.} \qquad \ P_x \bm{X} = \bm{X}^* \label{eq:X_opt_P}
\end{align}
where $\bm{X}_r \in \mathbb{R}^{T \times n}$ is the reproduction; $w_C,\ w_G,\ w_L \in \mathbb{R}^+$ are positive weights; $J_C,\ J_G,\ J_L: \mathbb{R}^{T \times n} \rightarrow \mathbb{R}^+$ are cost functions in the Cartesian, tangent, and Laplacian coordinates, respectively; and $P_x \in \mathbb{R}^{m \times T}$ and $\bm{X}^* \in \mathbb{R}^{m \times n}$ define $m \in \mathbb{Z}^+$ linear constraints on $\bm{X}_r$. In practice, $m \ll T$, and we use the linear constraints to enforce constraints on initial, target, and via points.
We define the cost function in each coordinate system as follows
\begin{align}
J_C(\bm{X}) = & (\bm{X}(:) - \hat{\bm{X}}(:))^\top (\bm{\hat{\Sigma}_{\bm{X}}})^{-1} (\bm{X}(:) - \hat{\bm{X}}(:)) \label{eq:Cartesian_cost} \\
J_G(\bm{X}) = & (\bm{\Gamma}(:) - \hat{\bm{\Gamma}}(:))^\top (\bm{\hat{\Sigma}_{\Gamma}})^{-1} (\bm{\Gamma}(:) - \hat{\bm{\Gamma}}(:)) \label{eq:tangent_cost} \\
J_L(\bm{X}) = & (\bm{\Delta}(:) - \hat{\bm{\Delta}}(:))^\top (\bm{\hat{\Sigma}_{\Delta}})^{-1} (\bm{\Delta}(:) - \hat{\bm{\Delta}}(:)) \label{eq:Laplacian_cost}
\end{align}
where $\bm{\hat{\Sigma}_{\bm{X}}}, \bm{\hat{\Sigma}_{\Gamma}}, \bm{\hat{\Sigma}_{\Delta}} \in \mathbb{R}^{nT \times nT}$ denote the block diagonal matrices formed with the conditional covariances $\hat{\Sigma}_x (t), \hat{\Sigma}_\gamma (t),$ and $\hat{\Sigma}_\delta (t)$, respectively, for all values of $t$. Further, the notation $(:)$ denotes vectorization; for instance, $\bm{X}(:), \hat{\bm{X}}(:) \in \mathbb{R}^{nT}$ denote the vectorized trajectories formed by vertically stacking $x(t)$ and $\hat{x}(t)$ for all values of $t$, respectively. Note that we construct the trajectories $\bm{\Gamma}$ and $\bm{\Delta}$ in (\ref{eq:tangent_cost}) and (\ref{eq:Laplacian_cost}) from $\bm{X}$ via the linear operators defined in (\ref{eq:Gamma}) and (\ref{eq:Delta}), respectively. MCCB penalizes deviations from the conditional mean in each coordinate system. However, deviations are penalized less (more) severely if high (low) variance is observed in the demonstrations at any given time.
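Because each term in (\ref{eq:Cartesian_cost})-(\ref{eq:Laplacian_cost}) is quadratic in $\bm{X}$, the equality-constrained problem reduces to a linear KKT system. The sketch below illustrates this with identity conditional covariances, a simplification of the variance-weighted costs above; `Ms` holds the differential operators (identity, $\bm{G}$, $\bm{L}$) and `targets` the corresponding conditional means.

```python
import numpy as np

def solve_blended_qp(Ms, targets, weights, P, b):
    """Minimize sum_i w_i ||M_i x - m_i||^2 subject to P x = b.

    Identity-covariance sketch of the imitation step: the stationarity
    condition 2 H x + P^T lam = 2 g together with P x = b forms one
    linear KKT system.
    """
    n = Ms[0].shape[1]
    H = sum(w * M.T @ M for w, M in zip(weights, Ms))
    g = sum(w * M.T @ m for w, M, m in zip(weights, Ms, targets))
    m = P.shape[0]
    KKT = np.block([[2.0 * H, P.T], [P, np.zeros((m, m))]])
    rhs = np.concatenate([2.0 * g, b])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n]  # drop the Lagrange multipliers
```

With the true conditional covariances, only `H` and `g` change (each term is weighted by the inverse block-diagonal covariance), so the same one-shot linear solve applies.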
\subsection{Automated Cost Balancing}\label{subsec:cost_balancing}
In order to obtain reproductions that successfully imitate demonstrations of a wide variety of skills, the weights $w_C,\ w_G,$ and $w_L$ have to be chosen with care. Indeed, they preferentially weight the costs defined in each differential coordinate and thereby manipulate the relative incentive for successful imitation in each coordinate system.
We learn these weights directly from the available demonstrations. Note that, for known weights, the constrained optimization problem in (\ref{eq:X_opt}) is convex in $\bm{X}$. We estimate the weights in the following form
\begin{equation}
\hat{w}_C = \frac{\alpha_C}{\beta_C}; \quad \hat{w}_G = \frac{\alpha_G}{\beta_G}; \quad \hat{w}_L = \frac{\alpha_L}{\beta_L}
\end{equation}
where $\beta_C,\ \beta_G,\ \beta_L \in ( 0, 1 ]$, such that $\sum_i \beta_i = 1$, are positive scaling factors used to correct for inherent differences in the magnitudes of the costs, and $\alpha_C,\ \alpha_G,\ \alpha_L \in [ 0, 1 ]$, such that $\sum_i \alpha_i = 1$, are positive weights used to preferentially weight the cost defined in each coordinate system. MCCB estimates the scaling factors $\beta_i$'s as follows
\begin{equation}
\beta_i = \frac{\sum_{j=1}^{N} J_i(\bm{X}_d^j)}{\sum_{l \in \{C,G,L\}} \sum_{j=1}^{N} J_l(\bm{X}_d^j)}, \quad \forall i \in \{C,G,L\}
\end{equation}
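Computing the scaling factors amounts to normalizing the per-coordinate demonstration costs; a minimal sketch:

```python
import numpy as np

def scaling_factors(costs_per_coordinate):
    """beta_i = (sum over demos of J_i) / (total cost over all coordinates).

    costs_per_coordinate: dict mapping 'C', 'G', 'L' to the per-demonstration
    cost values J_i(X_d^j) evaluated in that coordinate system.
    """
    totals = {i: float(np.sum(c)) for i, c in costs_per_coordinate.items()}
    grand = sum(totals.values())
    return {i: totals[i] / grand for i in totals}
```

By construction the factors sum to one, and a coordinate whose costs are inherently large receives a large $\beta_i$, which shrinks its weight $\alpha_i / \beta_i$ accordingly.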
With the scaling factors compensating for the inherent scale differences in the costs, we compute the preferential weighting factors $\alpha_i$ that minimize the reproduction error. To this end, we formulate the following meta optimization problem
\begin{align}
\{\alpha_C, \alpha_G, \alpha_L\} = & \arg \min_{\alpha_C, \alpha_G, \alpha_L} \sum_{j=1}^N \mathrm{SSE}(\bm{X}_r^j,\bm{X}_d^j) \\
& \mathrm{s.t.} \ \sum_{i \in \{C,G,L\}} \alpha_i = 1
\end{align}
where $\mathrm{SSE}(\cdot)$ denotes the sum of squared errors computed over time, and $\bm{X}_r^j$ is the solution to the following optimization problem
\begin{align}
\bm{X}_r^j = & \arg \min_{\bm{X}} \ \left(\frac{\alpha_C}{\beta_C}\right) J_C(\bm{X}) + \left(\frac{\alpha_G}{\beta_G}\right) J_G(\bm{X}) \nonumber \\
& \qquad \qquad \qquad + \left(\frac{\alpha_L}{\beta_L}\right) J_L(\bm{X}) \label{eq:X_j_opt} \\
&\ \mathrm{s.t.} \qquad \ P_x \bm{X} = \bm{X}_j^* \label{eq:X_j_opt_P}
\end{align}
where $P_x \bm{X} = \bm{X}_j^*$ denotes specific linear constraints pertaining to the demonstration $\bm{X}_d^j$, such as initial, target, and via points. Solving the above meta-optimization problem results in the preferential weights $\alpha_i$'s that minimize reproduction errors of the solutions generated by the original constrained optimization problem in (\ref{eq:X_opt})-(\ref{eq:X_opt_P}).
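Since only two of the three weights are free, the meta problem can be approximated by a coarse search over the probability simplex. The sketch below assumes a caller-supplied `reproduction_error` callable that solves the inner problem for a given weight triple and returns the summed SSE over demonstrations; the grid step is an illustrative choice.

```python
import itertools
import numpy as np

def grid_search_alphas(reproduction_error, step=0.1):
    """Search the 2-simplex for (alpha_C, alpha_G, alpha_L) minimizing SSE.

    reproduction_error(alphas) is assumed to generate the reproductions
    X_r^j for those weights and return the total sum of squared errors.
    """
    best, best_err = None, np.inf
    ticks = np.arange(0.0, 1.0 + 1e-9, step)
    for a_c, a_g in itertools.product(ticks, ticks):
        a_l = 1.0 - a_c - a_g
        if a_l < -1e-9:
            continue  # outside the simplex
        cand = (a_c, a_g, max(a_l, 0.0))
        err = reproduction_error(cand)
        if err < best_err:
            best, best_err = cand, err
    return best
```

A gradient-free local refinement around the best grid point could further sharpen the weights, but even this coarse search captures the relative importance of the three coordinates.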
\section{Experimental Evaluation}\label{sec:experiments}
This section describes the design and discusses the results of four experiments conducted to evaluate MCCB. In each experiment, we compared the performances of the following approaches:
\begin{enumerate}
\item\textit{Cartesian-coordinates}: $w_C = 1$, $w_G=0$, $w_L=0$
\item \textit{Tangent-coordinates}: $w_C = 0$, $w_G=1$, $w_L=0$
\item \textit{Laplacian-coordinates}: $w_C = 0$, $w_G=0$, $w_L=1$
\item \textit{Uniform weighting}: $w_C = 1/3$, $w_G=1/3$, $w_L=1/3$
\item \textit{MCCB}: $w_C = \hat{w_C}$, $w_G=\hat{w_G}$, $w_L=\hat{w_L}$
\end{enumerate}
\begin{figure}
\centering
\includegraphics[trim={17cm 6.8cm 17cm 3.3cm}, clip, width=\columnwidth]{LASA_qualitative_matlab.pdf}
\caption{\small{Qualitative performance of MCCB on the LASA handwriting dataset. Demonstration (gray), reproductions (blue), and expected mean position (dashed red) are shown.}}
\label{fig:LASA_qualitative}
\end{figure}
We measured the performance of each approach by the following geometric and kinematic metrics: \textit{Swept Error Area (SEA)} \cite{khansari2014learning}, \textit{Sum of Squared Errors (SSE)}, \textit{Dynamic Time Warping Distance (DTWD)}, and \textit{Frechet Distance (FD)} \cite{frechet1906quelques}. These metrics allow us to evaluate different aspects of each method's performance. The SEA and SSE metrics penalize both spatial and temporal misalignment, and thus evaluate kinematic performance. On the other hand, the DTWD and FD metrics penalize spatial misalignment while disregarding time misalignment, and thus evaluate geometric performance. Further, the SEA, SSE, and DTWD metrics evaluate aggregate performance by summing over or averaging across all the samples of each reproduction. The FD metric, on the other hand, computes the shortest possible cord length required to connect the demonstration and the reproduction in space while allowing time re-parametrization of either trajectory, and thus measures maximal deviation in space. Note that the SEA metric is restricted to 2-dimensional data, so we only report it for one of our experiments.
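The FD metric can be computed with the standard discrete Frechet recursion; the function below is a minimal sketch of that metric for sampled trajectories.

```python
import numpy as np

def discrete_frechet(A, B):
    """Discrete Frechet distance between trajectories A (Ta, n) and B (Tb, n).

    Standard dynamic-programming recursion: the maximal pointwise deviation
    along the best simultaneous (monotone) traversal of both trajectories.
    """
    Ta, Tb = len(A), len(B)
    ca = np.zeros((Ta, Tb))
    d = lambda i, j: np.linalg.norm(A[i] - B[j])
    ca[0, 0] = d(0, 0)
    for i in range(1, Ta):
        ca[i, 0] = max(ca[i - 1, 0], d(i, 0))
    for j in range(1, Tb):
        ca[0, j] = max(ca[0, j - 1], d(0, j))
    for i in range(1, Ta):
        for j in range(1, Tb):
            ca[i, j] = max(min(ca[i - 1, j], ca[i, j - 1], ca[i - 1, j - 1]),
                           d(i, j))
    return ca[-1, -1]
```

Unlike the aggregate SSE and DTWD metrics, this value reports a single worst-case spatial deviation, which is why it complements the summed measures above.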
In all the experiments, we used the position constraints in (\ref{eq:X_opt_P}) to enforce both initial and end point constraints uniformly across all the methods being compared. Further, we uniformly set the number of Gaussian basis functions to five across all the coordinates and all the experiments.
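As a hedged illustration of the basis representation (assuming Gaussian radial basis functions with centers evenly spaced on a normalized time axis; the paper's exact parametrization may differ), one trajectory dimension can be fit with five basis functions by least squares:

```python
import numpy as np

def gaussian_basis(t, n_basis=5, width=0.1):
    """Evaluate n_basis Gaussian RBFs with centers evenly spaced on [0, 1]."""
    centers = np.linspace(0.0, 1.0, n_basis)
    return np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

# Least-squares fit of one trajectory dimension by the five basis functions.
t = np.linspace(0.0, 1.0, 100)
y = np.sin(2.0 * np.pi * t)          # stand-in for a demonstrated coordinate
Phi = gaussian_basis(t)              # (100, 5) design matrix
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ w                      # reconstructed trajectory dimension
```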
\subsection{Handwriting Skill}\label{subsec:LASA}
This experiment evaluates MCCB on the publicly available LASA human handwriting library \cite{khansari2011learning}, which consists of handwriting motions collected from pen input on a Tablet PC. The library contains a total of 25 handwriting motions, each with 7 demonstrations.
\begin{figure}
\centering
\includegraphics[trim={6.5cm 4cm 9cm 0cm}, clip, width=\columnwidth]{LASA_aggregate_evaluation_colored.pdf}
\caption{\small{Box plots, with mean (brown star) and median (red line), illustrate the performance of each approach on the handwriting task.}}
\label{fig:LASA_quantitative}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim={1cm 1cm 1cm 2cm}, clip, width=\columnwidth]{RAIL_data_snapshots.pdf}
\caption{\small{Snapshots illustrating the experimental setup for the picking (left), pressing (center), and pushing (right) skills.}}
\label{fig:RAIL_snapshots}
\end{figure}
\begin{figure*}
\centering
\includegraphics[trim={0cm 4cm 0cm 6cm}, clip, width=0.80\textwidth]{RAIL_qualitative.pdf}
\caption{\small{Qualitative performance of MCCB on the picking, pressing, and pushing datasets. Demonstration (gray), reproductions (blue), expected mean position (dashed red), initial (black squares), and target (black stars) are shown.}}
\label{fig:RAIL_qualitative}
\end{figure*}
Fig. \ref{fig:LASA_qualitative} shows that MCCB yields reproductions that are qualitatively similar to the demonstrations while satisfying the end-point constraints across all motions. As shown in Fig. \ref{fig:LASA_quantitative}, quantitative analysis indicates that MCCB ($\bar{\alpha}_C = 0.1814,\ \bar{\alpha}_G = 0.4958,\ \bar{\alpha}_L = 0.3228$)\footnote{Weighting factors averaged over all 25 skills in the LASA dataset} and three of the four baselines performed comparably with respect to the SEA, FD, SSE, and DTWD metrics, while the Cartesian baseline performed poorly in comparison. This is consistent with the fact that the demonstrations within the LASA dataset emphasize strong similarities in shape.
\subsection{Picking Skill}\label{subsec:RAIL_picking}
The second experiment evaluates the performance of MCCB in a picking task (Fig. \ref{fig:RAIL_snapshots}). The data consists of six kinesthetic demonstrations, each a 3-dimensional robot end-effector position trajectory recorded as a human guided the robot in picking up two magnets atop two blocks. We enforced two via-point constraints (one at each picking point) in addition to the end-point constraints.
\begin{figure}
\centering
\includegraphics[trim={5cm 4cm 4cm 1cm}, clip, width=\columnwidth]{subject_5_picking_evaluation_colored.pdf}
\caption{\small{Box plots, with mean (brown star) and median (red line), illustrate the performance of each approach on the picking dataset.}}
\label{fig:RAIL_picking_quantitative}
\end{figure}
As shown in Fig. \ref{fig:RAIL_qualitative}(a), MCCB generated reproductions that are qualitatively similar to the demonstrations while satisfying all the position constraints. Quantitative evaluations reveal that learning in tangent coordinates yielded better reproductions than learning in Cartesian and Laplacian coordinates (Fig. \ref{fig:RAIL_picking_quantitative}). This was expected since the demonstrations of this task, much like the LASA dataset, emphasize shape similarity. Further, MCCB ($\alpha_C = 0.2362,\ \alpha_G = 0.5451,\ \alpha_L = 0.2187$) yielded the best performance, with respect to all three metrics. In fact, uniform weighting yielded poorer results, with respect to all three metrics, than when considering only the tangent coordinates. The results of this experiment show that while multi-coordinate methods can yield strong performance, it is critical that we balance the weights appropriately.
\subsection{Pressing Skill}\label{subsec:RAIL_pressing}
In this experiment, we evaluated MCCB's ability to learn pressing skills (Fig. \ref{fig:RAIL_snapshots}). The data consists of six kinesthetic demonstrations, each a 3-dimensional robot end-effector position trajectory recorded as a human guided the robot in pressing two cylindrical pegs into their respective holes.
\begin{figure}
\centering
\includegraphics[trim={5cm 4cm 4cm 1cm}, clip, width=\columnwidth]{subject_5_pressing_evaluation_colored.pdf}
\caption{\small{Box plots, with mean (brown star) and median (red line), illustrate the performance of each approach on the pressing dataset.}}
\label{fig:RAIL_pressing_quantitative}
\end{figure}
As shown in Fig. \ref{fig:RAIL_qualitative}(b), MCCB successfully reproduced the demonstrations. Note that MCCB automatically captures and reproduces the consistencies across the demonstrations in certain regions even without any position constraints. Fig. \ref{fig:RAIL_pressing_quantitative} illustrates the performance of MCCB and the baselines with respect to three different metrics. Learning in Cartesian coordinates resulted in better performance than learning in tangent and Laplacian coordinates. Quantitative evaluations further demonstrate that MCCB ($\alpha_C = 0.6735,\ \alpha_G = 0.2034,\ \alpha_L = 0.1231$) consistently yielded the best performance with respect to all three metrics. The results of this experiment, in light of the results in Section \ref{subsec:RAIL_picking}, suggest that the relative importance of each differential coordinate varies across skills.
\subsection{Pushing Skill}\label{subsec:RAIL_pushing}
\vspace{-.2cm}
The final experiment evaluates the performance of MCCB in a pushing task (Fig. \ref{fig:RAIL_snapshots}). The data consists of six kinesthetic demonstrations, each a 3-dimensional robot end-effector position trajectory recorded as a human guided the robot in sliding closed the lid of a wooden box.
\begin{figure}
\centering
\includegraphics[trim={5cm 3.9cm 4cm 1cm}, clip, width=\columnwidth]{subject_5_pushing_evaluation_colored.pdf}
\caption{\small{Box plots, with mean (brown star) and median (red line), illustrate the performance of each approach on the pushing dataset.}}
\label{fig:RAIL_pushing_quantitative}
\end{figure}
As shown in Fig. \ref{fig:RAIL_qualitative}(c), MCCB successfully generated reproductions that are similar to the demonstrations. As evidenced by the quantitative evaluations in Fig. \ref{fig:RAIL_pushing_quantitative}, encoding demonstrations in the Laplacian coordinates yielded better performance, with respect to all three metrics, than learning in either of the other two coordinates alone, while MCCB ($\alpha_C = 0.0123,\ \alpha_G = 0.045,\ \alpha_L = 0.9427$) consistently outperformed all the other approaches. Note that learning in the Laplacian coordinates alone resulted in better performance than uniform weighting of all the coordinates. These results are consistent with the results from the previous sections and indicate that MCCB yields consistently good performance. The results are summarized in Table \ref{tab:summary}.
\begin{table}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|c"c|c|c"c|c|}
\hline
& \multicolumn{3}{c"}{Single Coordinate} & \multicolumn{2}{c|}{Multi-Coordinate} \\ \hline
& Cartesian & Tangent & Laplacian & Uniform W. & MCCB \\ \hline
Handwriting & & {\color[HTML]{CE6301} \checkmark} {\color[HTML]{009901} \checkmark} & {\color[HTML]{CE6301} \checkmark} {\color[HTML]{009901} \checkmark} & {\color[HTML]{009901} \checkmark} & {\color[HTML]{009901} \checkmark} \\ \hline
Picking & & {\color[HTML]{CE6301} \checkmark} & & & {\color[HTML]{009901} \checkmark} \\ \hline
Pressing & {\color[HTML]{CE6301} \checkmark} & & & & {\color[HTML]{009901} \checkmark} \\ \hline
Pushing & & & {\color[HTML]{CE6301} \checkmark} & & {\color[HTML]{009901} \checkmark} \\ \hline
\end{tabular}
}
\caption{\small{Orange check marks denote the most relevant coordinate and green check marks denote the best performing method.}}\label{tab:summary}
\end{table}
\section{Discussion and Conclusion}\label{sec:conclusion}
\vspace{-.1cm}
We introduced MCCB, a learning framework for encoding demonstrations in multiple differential coordinates, and automated balancing of costs defined in those coordinates. As shown in Table \ref{tab:summary}, we demonstrated that the relative effectiveness of each coordinate system is not consistent across a variety of tasks since any given skill might be better suited for learning in one (or more) coordinate system(s). Furthermore, uniform weighting of costs in different coordinates does not consistently yield the best results across different skills. Indeed, uniform weighting, in some cases, yielded poorer performances compared to when only one coordinate system was used. On the other hand, MCCB learned to balance the costs and consistently yielded the best performance. Since the weights are learned directly from the demonstrations, MCCB makes no task-specific assumptions and does not require tedious parameter tuning. Note that although we used GMMs as the base representation in this work, MCCB is agnostic to the statistical model used to encode the demonstrations in each coordinate system, and thus can be combined with other techniques, such as \cite{calinon2014task,paraschos2013probabilistic,ahmadzadeh2017generalized,umlauft2017learning,rana2017towards,osa2017guiding,nierhoff2016spatial}. Furthermore, MCCB can be extended to include more coordinate systems that capture additional trajectory features.
\section*{ACKNOWLEDGMENT}
This research is supported in part by NSF NRI 1637758.
\bibliographystyle{IEEEtran}
\section{Introduction}
If $G$ is a virtually torsion-free group, the virtual cohomological
dimension $\vcd G$, is defined to be the cohomological dimension of
a torsion-free finite-index subgroup $H\leq G$; a lemma due to
Serre shows that this is well defined~\cite[VIII.3.1]{brownco}.
Now suppose that $X$ is a contractible $G$-CW-complex that is
{\em proper}, in the sense that all cell stabilizers are finite.
In this case any torsion-free subgroup $H$ will act freely on $X$
and so $X/H$ is a classifying space or Eilenberg-Mac~Lane space
$BH$ for $H$. In particular, $\vcd G$ provides a lower bound
for the dimension of any such $X$. K.~S.~Brown asked whether
this lower bound is always attained~\cite[ch.~2]{brownwall} or
\cite[VIII.11]{brownco}: \\
\noindent {\bf Brown's Question ({\normalfont Weak Form}).}
Does every virtually torsion-free group $G$ admit a contractible proper
$G$-CW-complex of dimension $\vcd G$? \\
\noindent Until now, this form of Brown's question has remained unanswered.
We give examples of groups $G$ with $\vcd G=2$ that do not admit
any 2-dimensional contractible proper $G$-CW-complex in Theorem~\ref{acycthm}
below.
One reason why this question has been so elusive is that there
are many different equivariant homotopy types of contractible
proper $G$-CW-complexes. The most natural example is the
classifying space for proper $G$-actions, $\underline{E}G$,
which plays the same role in the homotopy category of proper
$G$-CW-complexes as $EG$ plays for free $G$-CW-complexes.
A model for $\ebar G$ is a proper $G$-CW complex $X$
such that for any finite $F\leq G$, the $F$-fixed point set
$X^F$ is contractible. Such an $X$ always exists, and is unique
up to equivariant homotopy equivalence. Let $\gdbar G$ denote the
minimal dimension of any model for $\ebar G$.
The version of Brown's question that concerns
$\underline{E}G$~\cite[ch.~2]{brownwall} or
\cite[VIII.11]{brownco} is usually asked in the form: \\
\noindent {\bf Brown's Question ({\normalfont Strong
Form}).}
Does $\gdbar G=\vcd G$ for every virtually
torsion-free $G$? \\
\noindent We prefer to split this question into two separate
questions. There is an algebraic dimension
$\cdbar G$ that bears a close relationship to $\gdbar G$, analogous
to the relationship between cohomological dimension and the
minimal dimension of an Eilenberg-Mac~Lane space. It can be
shown that $\cdbar G= \gdbar G$ except that there may exist
$G$ for which $\cdbar G=2$ and $\gdbar G=3$, and $\cdbar G$
is an upper bound for the cohomological dimension of any
torsion-free subgroup of $G$~\cite{LuckMeintrup}. In view
of this we may split the strong form of Brown's question into
two parts, one geometric and one algebraic.
\begin{itemize}
\item[]
Does there exist $G$ for which $\gdbar G\neq \cdbar G$?
\item[]
Does there exist virtually torsion-free
$G$ for which $\cdbar G> \vcd G$?
\end{itemize}
Examples of virtually torsion-free groups $G$ for which $\gdbar G=3$
and $\cdbar G=2$ were given in~\cite{BLN}. These groups $G$ are
Coxeter groups. Examples of $G$ for which $\cdbar G>\vcd G$ were
given in~\cite{VFG}, and more recently in~\cite{CMP,DDNP}. The
advantage of the examples in~\cite{CMP,DDNP} is that in some sense
they have the least possible torsion. For any virtually torsion-free
$G$, it can be shown that $\cdbar G$ is bounded by the
sum $\vcd G + \ell(G)$, where $\ell(G)$ is the maximal length of a
chain of non-trivial finite subgroups of $G$~\cite[6.4]{Luck1}.
This bound is attained for the examples in~\cite{CMP,DDNP} but not for
the examples in~\cite{VFG}.
To date, all constructions of groups $G$ for which $\cdbar G>\vcd G$
have used finite extensions of Bestvina-Brady groups~\cite{BB}, and
none of these groups $G$ admit a cocompact model for $\ebar G$. One
of our main results is the construction of virtually torsion-free $G$
admitting a cocompact $\ebar G$ for which $\cdbar G>\vcd G$.
Amongst our examples, the easiest to describe are extensions of
a right-angled Coxeter group by a cyclic group of prime order.
By taking instead a cyclic extension of a torsion-free finite
index subgroup of the same Coxeter group we obtain examples
with cocompact $\ebar G$ and for which $\cdbar G = \vcd G+ \ell(G)$.
We say that a simplicial action of a group on a simplicial complex is
{\it admissible} if the setwise stabilizer of each simplex equals to
its pointwise stabilizer. Many of the other terms used in the
statements of our main theorems will be defined below.
\begin{theorem}\label{th:ksbq}Let $L$ be a finite $n$-dimensional
acyclic flag complex with an admissible simplicial action of a finite
group $Q$, and let $W_L$ be the corresponding right-angled Coxeter
group so that $Q$ acts as a group of automorphisms of $W_L$. Let $N$
be any finite-index normal subgroup of $W_L$ such that $N$ is
normalized by $Q$, and let $G$ be the semidirect product $N\semi Q$.
This $G$ admits a cocompact model for $\ebar G$. Let $L^\sing$ denote
the subcomplex of $L$ consisting of points with non-trivial stabilizer
in $Q$.
\[\hbox{If}\,\,\, H^n(L,L^\sing)\neq 0,\,\,\,\hbox{then}\,\,\,
\cdbar G=n+1 \,\,\,\hbox{and}\,\,\, \vcd G\leq n.\]
Now suppose that $L_i$, $Q_i$, $n_i$, $N_i$, $W_i$ and $G_i$ are
as above for $i=1,\ldots,m$, and let $\Gamma :=G_1\times \cdots
\times G_m$. As before, there is a cocompact model for $\ebar\Gamma$,
and
\[\hbox{if}\,\,\, \bigotimes_{i=1}^m H^{n_i}(L_i,L_i^\sing) \neq 0,
\,\,\,\hbox{then}\,\,\, \cdbar \Gamma =
m+\sum_{i=1}^m n_i \,\,\,\hbox{and}\,\,\,
\vcd \Gamma\leq \sum_{i=1}^mn_i.\]
Furthermore $\vcd G=n$ if either $L$ is a barycentric subdivision or
$L^\sing$ is a full subcomplex of $L$. Similarly, $\vcd \Gamma=\sum_i
n_i$ if for each $i$, either $L_i$ is a barycentric subdivision or
$L_i^\sing$ is a full subcomplex of $L_i$.
\end{theorem}
\begin{corollary}\label{cora} For each $m\geq 1$ there exists a
virtually torsion-free group $\Gamma_m$ admitting a cocompact
$\ebar \Gamma_m$ and such that $$\cdbar \Gamma_m = 3m
> \vcd \Gamma_m =2m.$$
For each $m\geq 1$ there exists a virtually torsion-free
group $\Lambda_m$ admitting a cocompact $\ebar \Lambda_m$ and such
that $$\cdbar \Lambda_m = 4m =
\vcd \Lambda_m +\ell(\Lambda_m) > \vcd\Lambda_m = 3m.$$
Furthermore, $\Lambda_m$ may be chosen so that either every finite subgroup is
cyclic or, for any fixed prime $q$, every nontrivial finite subgroup is
abelian of exponent $q$.
\end{corollary}
In contrast to the above results, Degrijse and Mart\'\i{n}ez-P\'erez
have shown that $\vcd G=\cdbar G$ for a large class of groups that
contains all (finitely generated) Coxeter groups~\cite{DDMP}.
\begin{theorem}\label{acycthm}
Suppose that $L$ is a finite 2-dimensional acyclic flag complex
such that the fundamental group of $L$ admits a non-trivial
unitary representation $\rho:\pi_1(L)\rightarrow U(n)$ for
some $n$. Then $\vcd W_L=\cdbar W_L=2$,
there is a cocompact 3-dimensional model for $\ebar W_L$, and
yet there exists no proper 2-dimensional contractible $W_L$-CW-complex.
\end{theorem}
Theorem~\ref{acycthm} strengthens a result from~\cite{BLN}, and
gives the first negative answer to the weak form
of Brown's question. A different argument was used in~\cite{BLN} to
show that $\cdbar W_L=2 < \gdbar W_L=3$ for some of the flag complexes
$L$ that appear in Theorem~\ref{acycthm}.
We remark that finitely generated Coxeter groups are linear over
$\Z$~\cite{brown}, and that this property passes to subgroups and to
finite extensions. Hence all of the groups appearing in the above
statements are linear over $\Z$. As will be seen from the proofs,
each group appearing in our statements acts properly and cocompactly
on a CAT(0) cube complex; in particular they are all CAT(0) groups. A
right-angled Coxeter group $W_L$ is (Gromov) hyperbolic if and only if
$L$ satisfies the flag-no-square condition. Hyperbolicity passes to
finite index subgroups and finite extensions. Since any 2- or
3-dimensional flag complex admits a flag-no-square
subdivision~\cite{dran,prsw} it follows that the groups $\Gamma_1$ and
$\Lambda_1$ in Corollary~\ref{cora} and the groups $W_L$ in
Theorem~\ref{acycthm} may be taken to be hyperbolic (and CAT($-1$), a
possibly stronger property) in addition to their other stated
properties.
\section{Classifying spaces and Bredon cohomology}
The algebraic analogs of the geometric finiteness properties exhibited
by classifying spaces of groups for families of subgroups are
formulated using Bredon cohomology. This cohomology theory was
introduced by Bredon in \cite{Bredon} for finite groups and was
generalised to arbitrary groups by L\"{u}ck (see \cite{Luck}).
Let $G$ be a discrete group. A family $\mathcal{F}$ of subgroups
of $G$ is a non-empty set of subgroups which is closed under
conjugation and taking subgroups, in the sense that if $H\in \mathcal{F}$,
$g\in G$ and $K\leq H$, then $K\in\mathcal{F}$ and $gHg^{-1}\in
\mathcal{F}$. The \emph{orbit category} $\orb$ is
the category with objects the left cosets $G/H$ for all $H \in
\mathcal{F}$ and morphisms all $G$-equivariant functions between
the objects. In $\orb$, every morphism $\varphi: G/H \rightarrow G/K$
is completely determined by $\varphi(H)$, since
$\varphi(xH)=x\varphi(H)$ for all $x \in G$. Moreover, there exists a
morphism \[G/H \rightarrow G/K : H \mapsto xK\] if and only if
$x^{\scriptscriptstyle -1}Hx \subseteq K$.
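The morphism criterion can be made concrete in a small example. The following illustrative Python sketch (not from the paper) represents $G=S_3$ by permutation tuples and checks whether some $x$ conjugates $H$ into $K$:

```python
from itertools import permutations

# Elements of S_3 as tuples p, where p[i] is the image of i.
G = list(permutations(range(3)))

def compose(p, q):          # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def morphism_exists(H, K):
    """Is there a G-map G/H -> G/K, i.e. some x with x^{-1} H x <= K?"""
    return any(all(compose(inverse(x), compose(h, x)) in K for h in H)
               for x in G)

H = {(0, 1, 2), (1, 0, 2)}             # subgroup generated by (0 1), order 2
K = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}  # subgroup generated by (0 1 2), order 3
# No conjugate of an order-2 subgroup lies in an order-3 subgroup, so
# there is no morphism G/H -> G/K, while G/H -> G/H always exists (x = e).
```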
An \emph{$\orb$-module} is a contravariant functor $M: \orb
\rightarrow \mathbb{Z}\mbox{-mod}$. The \emph{category of
$\orb$-modules} is denoted by $\orbmod$. By definition, it has as objects
all $\orb$-modules and as morphisms all natural
transformations between these objects. The category $\orbmod$ is an
abelian category that contains enough projectives and
so one can construct bi-functors
$\mathrm{Ext}^{n}_{\orb}(-,-)$ that have all the usual properties. The
\emph{$n$-th Bredon cohomology of $G$} with coefficients $M \in
\orbmod$ is by definition
\[ \mathrm{H}^n_{\mathcal{F}}(G;M)=
\mathrm{Ext}^{n}_{\orb}(\underline{\mathbb{Z}},M), \] where
$\underline{\mathbb{Z}}$ is the constant functor, which sends each
object to $\Z$ and each morphism to the identity map on $\Z$. There
is also a notion of \emph{Bredon cohomological dimension} of $G$ for
the family $\mathcal{F}$, denoted by $\mathrm{cd}_{\mathcal{F}}(G)$
and defined by
\[ \mathrm{cd}_{\mathcal{F}}(G) = \sup\{ n \in \mathbb{N} \ | \
\exists M \in \orbmod : \mathrm{H}^n_{\mathcal{F}}(G; M)\neq 0 \}. \]
When $\mF$ is the family of finite subgroups, then
$\rH^*_{\mF}(G,M)$ and $\mathrm{cd}_{\mF}G$ are
denoted by $\underline{\rH}^*(G,M)$ and $\underline{\mathrm{cd}}G$,
respectively. Since the augmented cellular chain complex of any model
for $E_{\mF}G$ yields a projective resolution of
$\underline{\Z}$ that can be used to compute
$\mathrm{H}_{\mF}^{\ast}(G;-)$, it follows that
$\mathrm{cd}_{\mathcal{F}}(G) \leq
\mathrm{gd}_{\mathcal{F}}(G)$. Moreover, it is known (see for
example~\cite[0.1]{LuckMeintrup})
that \[\mathrm{cd}_{\mathcal{F}}(G) \leq
\mathrm{gd}_{\mathcal{F}}(G) \leq \max\{3,
\mathrm{cd}_{\mathcal{F}}(G) \}.\]
For any $\Z G$-module $M$, one may define an $\orb$-module $\underline{M}$
by $$\underline{M}(G/H)=\Hom_G(\Z [G/H],M);$$ note that this is compatible with
the notation $\underline{\Z}$ introduced earlier and that this functor is isomorphic to the fixed-point functor
$$M^{\overline{\;\;}}:\orb\to \mathbb{Z}\mbox{-mod}: G/H\mapsto M^H.$$ For any $G$-CW-complex
$X$ with stabilizers in $\mF$, it can be shown that Bredon cohomology
with coefficients in $\underline{M}$ is naturally isomorphic to the
ordinary equivariant cohomology of $X$ with coefficients in $M$:
$H^*_\mF(X;\underline{M})\cong H^*_G(X;M)$. This follows because
the adjointness of the restriction and coinduction functors between
$\Z G$-mod and $\orbmod$ associated to the functor $$G=\orbe\to \orb
:G/{\{e\}}\mapsto G/{\{e\}}$$ gives an isomorphism of cochain
complexes
\[\Hom_\mF(C^\mF_*(X),\underline{M})\cong \Hom_G(C_*(X), M).\]
A {\it subfamily} of a family $\mF$ of subgroups of $G$ is another
family $\mG\subseteq \mF$. For a subfamily $\mG$ and a $G$-CW-complex
$X$ with stabilizers in $\mF$, the $\mG$-singular set $X^{\mG\mhyph\sing}$
is the subcomplex consisting of points of $X$ whose stabilizer is
not contained in $\mG$. When $\mG$ consists of just the trivial
subgroup this is the usual singular set and we write $X^\sing$ for
$X^{\mG\mhyph\sing}$.
Given a $\Z G$-module $M$ and a subfamily $\mG$ of $\mF$, we
define two further $\orb$-modules: a submodule $\underline{M}_{\leq\mG}$ of
$\underline{M}$, and the corresponding quotient module
$\underline{M}_{> \mG}$. These are defined by
\[ \underline{M}_{>\mG}:
G /H \mapsto
\left\{\begin{array}{lll} \Hom_G(\Z [G/H],M) & \mbox{if}& H\notin\mG,\\
0 & \mbox{if}& H\in\mG, \end{array}
\right. \]
\[ \underline{M}_{\leq\mG}:
G /H \mapsto
\left\{\begin{array}{lll} 0 & \mbox{if}& H\notin\mG,\\
\Hom_G(\Z [G/H],M) & \mbox{if}& H\in\mG. \end{array}
\right. \]
By construction there is a short exact sequence of $\orb$-modules
\[\underline{M}_{\leq\mG} \rightarrowtail \underline{M}
\twoheadrightarrow \underline{M}_{> \mG}.\]
Hence, there is a
short exact sequence of cochain complexes
\[0\to \Hom_\mF(C^\mF_*(X),\underline{M}_{\leq\mG})\to
\Hom_\mF(C^\mF_*(X),\underline{M})\to
\Hom_\mF(C^\mF_*(X),\underline{M}_{> \mG})\to 0\]
which gives rise to a long exact sequence in Bredon cohomology.
By construction $$\Hom_\mF(C^\mF_*(X),\underline{M}_{> \mG})\cong
\Hom_\mF(C^\mF_*(X^{\mG\mhyph\sing}),\underline{M}),$$
and by adjointness isomorphism noted
earlier $$\Hom_\mF(C^\mF_*(X^{\mG\mhyph\sing}), \underline{M})\cong
\Hom_G(C_*(X^{\mG\mhyph\sing}), M).$$
It follows that there is a natural identification between Bredon
cohomology with coefficients in $\underline{M}_{>\mG}$ and
the equivariant cohomology of $X^{\mG\mhyph\sing}$ with
coefficients in $M$:
\[H_\mF^*(X;\underline{M}_{>\mG})\cong H_G^*(X^{\mG\mhyph\sing};M).\]
Similarly, we deduce that the Bredon cohomology
with coefficients in $\underline{M}_{\leq \mG}$ is isomorphic
to the equivariant cohomology of the pair $(X,X^{\mG\mhyph\sing})$
with coefficients in $M$:
\[H_\mF^*(X;\underline{M}_{\leq\mG}) \cong H_G^*(X,X^{\mG\mhyph\sing};M).\]
Hence we obtain the following.
\begin{proposition} \label{bredon_prop}
Let $\mF$ be a family of subgroups of $G$, with $\mG$ a subfamily,
let $X$ be any model for $E_\mF G$, and let $H$ be a finite-index
subgroup of $G$. There exists an $\orb$-module $\mathcal{C}$
such that the Bredon cohomology of the group $G$ with coefficients
in $\mathcal{C}$ computes the ordinary cohomology of the pair
$(X/H,X^{\mG\mhyph\sing}/H)$:
\[H_\mF^*(G;\mathcal{C}) \cong H^*(X/H,X^{\mG\mhyph\sing}/H;\Z).\]
Furthermore, each abelian group $\mathcal{C}(G/K)$ is finitely
generated.
\end{proposition}
\begin{proof}
Let $M$ be the permutation module $\Z [G/H]$, and let
$\mathcal{C}:=\underline{M}_{\leq \mG}$. Then
\begin{align*}H_\mF^*(G;\mathcal{C})&\cong H_\mF^*(X;\mathcal{C})
\cong H_G^*(X,X^{\mG\mhyph\sing};\Z [G/H]) \\
&\cong
H_H^*(X,X^{\mG\mhyph\sing};\Z)\cong
H^*(X/H,X^{\mG\mhyph\sing}/H;\Z),
\end{align*}
where the first two isomorphisms follow from the discussion above
and the third because $H$ has finite index in $G$.
\end{proof}
\section{Right-angled Coxeter groups}
In this section we describe the results that we require concerning
right-angled Coxeter groups and the Davis complex. The material up
to and including Corollary~\ref{corequiv} is standard; for more
details we refer the reader to \cite{davis2}~or~\cite{davis1}.
A {\it right-angled Coxeter system} consists of a group $W$ called a
{\it right-angled Coxeter group} together with a set $S$ of
involutions that generate $W$, subject to only the relations that
certain pairs of the generators commute. We will always assume that
$S$ is finite. The defining relators have the forms $s^2=1$ and
$stst=1$ where $s\in S$ and $t$ ranges over some subset of $S$
depending on $s$. Since each relator has even length as a word in the
elements of $S$, one may define a group homomorphism from $W$ to the
cyclic group $\Z/2$ by $w\mapsto [l(w)]\in \Z/2$, where $l(w)\in \Z$
denotes the length of $w$ as a word in $S$, and $[l(w)]$ its image in
$\Z/2$. The kernel of this homomorphism will be denoted $W^\ev$, and
consists of the elements of $W$ that are expressible as words of even
length in the elements of $S$. The right-angled Coxeter system
$(W,S)$ is determined by the graph $L^1(W,S)$ with vertex set $S$ and
edges those pairs of vertices that commute. Equivalently, the
right-angled Coxeter system is determined by the flag complex $L(W,S)$
with vertex set $S$ and simplices the cliques in the graph $L^1(W,S)$.
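As an illustrative sketch (not from the paper), the simplices of the flag complex $L(W,S)$ can be enumerated as the cliques of the commuting graph $L^1(W,S)$:

```python
from itertools import combinations

def flag_complex(vertices, edges):
    """Nonempty simplices of the flag complex on a graph: all cliques.

    `edges` is a set of frozensets {s, t}, one per commuting generator pair.
    (The empty spherical subset, i.e. the empty simplex, is omitted here.)
    """
    simplices = []
    for k in range(1, len(vertices) + 1):
        for subset in combinations(vertices, k):
            # A subset spans a simplex iff every pair of its vertices commutes.
            if all(frozenset(p) in edges for p in combinations(subset, 2)):
                simplices.append(frozenset(subset))
    return simplices

# Example: S = {a, b, c} with [a, b] = [b, c] = 1 but a, c not commuting.
S = ['a', 'b', 'c']
E = {frozenset('ab'), frozenset('bc')}
# L(W,S) is a path: three vertices and the edges {a,b}, {b,c}; no 2-simplex.
```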
Given a right-angled Coxeter system $(W, S)$, the {\it Davis complex}
$\Sigma(W, S)$ can be realized as either a cubical complex, or as a
simplicial complex which is the barycentric subdivision of the cubical
complex. The simplicial structure is easier to describe,
so we consider this first. A {\it spherical} subset $T$ of $S$ is a
subset whose members all commute; equivalently $T$ is either the
empty set, or a subset of $S$ that spans a simplex of $L(W,S)$.
A {\sl special parabolic} subgroup of $W$ is the subgroup of $W$
generated by a spherical subset $T$. We denote the special
parabolic subgroup generated by $T$ by $W_T$. A {\sl parabolic}
subgroup of $W$ is a conjugate of a special parabolic subgroup. The set
of cosets of all special parabolic subgroups forms a poset, ordered by
inclusion, and the simplicial complex $\Sigma(W,S)$ is the
realization of this poset. By construction, $W$ acts admissibly and
simplicially on $\Sigma(W,S)$ in such a way that each stabilizer
subgroup is parabolic.
If $T$ is a spherical subset of $S$, then
the subposet of cosets contained in $W_T$ is
equivariantly isomorphic to the poset of faces of the standard
$|T|$-cube $[-1,1]^T$, with the group $W_T \cong C_2^T$
acting via reflections in the coordinate hyperplanes.
In this way we obtain a cubical structure on $\Sigma(W,S)$,
in which the $n$-dimensional subcubes correspond to cosets
$wW_T$ with $|T|=n$. The setwise stabilizer of the cube
$wW_T$ is the parabolic subgroup $wW_Tw^{-1}$, which acts
on the cube in such a way that the natural generators
$wtw^{-1}$ act as reflections in the coordinate hyperplanes.
The simplicial complex described above is the barycentric
subdivision of this cubical complex.
If we view every simplicial complex as containing a unique
$-1$-simplex corresponding to the empty subset of its vertex set,
then we get a natural bijective correspondence between
the $W$-orbits of cubes in $\Sigma(W,S)$ and the
simplices of $L(W,S)$ which preserves incidence (the
empty simplex corresponds to the 0-cubes).
Hence we obtain:
\begin{proposition}\label{natbij}
There is a natural bijection between subcomplexes of the
simplicial complex $L(W,S)$ and non-empty $W$-invariant
subcomplexes of the cubical complex $\Sigma(W,S)$.
\end{proposition}
To show that $\Sigma(W,S)$ is a model for $\ebar W$, metric
techniques are helpful. There is a natural CAT(0)-metric on
$\Sigma(W,S)$, which is best understood in terms of the cubical
structure. The length of a
piecewise linear path in $\Sigma(W,S)$ is defined using the standard
Euclidean metric on each cube, and the distance between two points of
$\Sigma(W,S)$ is the infimum of the lengths of PL-paths connecting
them. According to Gromov's criterion~\cite{gromov}, $\Sigma(W,S)$
is locally CAT(0) because the link of every vertex is isomorphic to
$L(W,S)$ which is a flag complex (see \cite{davis2, gromov}). It is
easy to see that $\Sigma(W,S)$ is simply connected (for example, because
its 2-skeleton is a version of the Cayley complex for $W$), and
it follows that $\Sigma(W,S)$ is CAT(0)~\cite[Theorem II.4.1]{BridHaef}.
Given that $W$ acts isometrically with finite stabilizers on $\Sigma(W,S)$
it follows that $\Sigma(W,S)$ is a model for $\ebar W$ via the
Bruhat-Tits fixed point theorem~\cite[p.~179]{BridHaef}~or~\cite[Prop.~3]{BLN}.
\begin{lemma}\label{conjugacy} Every finite subgroup of $W$ is a
subgroup of a parabolic subgroup of $W$. In particular,
there are finitely many conjugacy classes of finite subgroups of $W$
and every finite subgroup is isomorphic to a direct product
$(\Z/2)^k$ for some $0\leq k \leq n$ where $n$ is the dimension of
$\Sigma(W, S)$.
\end{lemma}
\begin{proof} Let $F$ be a finite subgroup of $W$. By the Bruhat-Tits
fixed point theorem $F$ fixes some point of $\Sigma$, and hence $F$
is a subgroup of a point stabilizer. Every such subgroup is
parabolic, and each is conjugate to one of the finitely many
special parabolics.
\end{proof}
Recall that a group is said to be of {\it type $F$} if it admits a
compact classifying space.
\begin{corollary} The commutator subgroup $W'$ of $W$ is a
finite-index torsion-free subgroup of type $F$.
\end{corollary}
\begin{proof}
The abelianization of $W$ is naturally isomorphic to $C_2^S$.
Every parabolic subgroup of $W$ maps injectively into $C_2^S$.
It follows that $W'$ acts freely on the finite-dimensional
contractible space $\Sigma$. Hence $\Sigma/W'$ is a compact
$K(W',1)$, from which it follows that $W'$ is both type $F$ and
torsion-free.
\end{proof}
\begin{lemma}[\cite{BLN}]\label{modW} The quotient of the pair
$(\Sigma, \Sigma^\sing)$ by $W$ is isomorphic to the pair
$(CL', L')$, i.e., the pair consisting of the cone on the
barycentric subdivision of $L$ and its base. This isomorphism is
natural for automorphisms of $L$. If $L$ is acyclic
then so is $\Sigma^\sing$. If $L$ is simply-connected, then
so is $\Sigma^\sing$.
\end{lemma}
\begin{proof}
The first part is clear from the simplicial description of $\Sigma$.
Now let $V$ be the unique free $W$-orbit of vertices in the simplicial
description of $\Sigma$. The star of each $v\in V$ is a copy
of the cone $CL'$, with $v$ as its apex. The subcomplex of
$\Sigma$ consisting of all simplices not containing any vertex
of $V$ is $\Sigma^\sing$. Hence $\Sigma$ is obtained from
$\Sigma^\sing$ by attaching cones to countably many subcomplexes
isomorphic to $L'$.
In the case when $L$ is acyclic, attaching a cone to a copy
of $L'$ does not change homology. It follows that $\Sigma^\sing$
must be acyclic since $\Sigma$ is. Similarly, if $L$ is
simply-connected, then attaching a cone to a copy of $L'$ does
not change the fundamental group, so $\Sigma^\sing$ must be
simply-connected since $\Sigma$ is.
\end{proof}
Now suppose a finite group $Q$ acts by automorphisms on $L(W,S)$.
This defines an action of $Q$ on $W$, and hence a semidirect product
$G=W\semi Q$.
\begin{lemma} \label{equivariant}
There is an admissible simplicial $G$-action on $\Sigma(W,S)$
extending the action of $W$, and $\Sigma(W,S)$ becomes a
cocompact model for $\ebar G$.
\end{lemma}
\begin{proof} The action of $Q$ on the poset underlying $\Sigma(W,S)$
is defined in such a way that $q\in Q$
sends the coset $wW_T$ to the coset $q(w)W_{q(T)}$. This combines
with the $W$-action to give an admissible $G$-action on $\Sigma(W,S)$.
Since $\Sigma(W,S)$ is CAT(0) and the stabilizers are finite it
follows that $\Sigma(W,S)$ is a model for $\ebar G$.
\end{proof}
\begin{corollary}\label{corequiv}
Any finite-index subgroup $H$ of $G$ as above admits a cocompact
model for $\ebar H$ and is virtually torsion-free.
\end{corollary}
\begin{remark}\label{remstabs}
For the action of $G$ on $\Sigma$, the stabilizer of the vertex $W_T$
is the semidirect product $W_T\semi Q_T$, where $Q_T:=\{q\in Q: q(T)=T\}$.
If $Q$ acts admissibly on $L$ then $Q_T$ fixes each element of $T$ and
the stabilizer is the direct product $W_T\times Q_T$. Similarly, the
stabilizer of the vertex $wW_T$ is the direct product
$wW_Tw^{-1}\times wQ_Tw^{-1}$. Note in particular that the image
of the stabilizer under the quotient map $G\to G/W\cong Q$ depends only
on $T$, and not on $w$.
\end{remark}
\begin{lemma} \label{mainlem}
Let $N$ be a finite index normal subgroup of $W^\ev$. There is an
isomorphism $\psi$ from the relative chain complex $C_*(CL',L')$ to a
direct summand of the simplicial chain complex $C_*(\Sigma/N)$. This
isomorphism is natural for automorphisms of $L$ that preserve $N$. It
is also natural for the inclusion of subcomplexes in $L$ and the
corresponding $W/N$-invariant subcomplexes of the cubical structure on
$\Sigma/N$.
\end{lemma}
\begin{proof}
The cone $CL'$ is the realization of the poset of spherical
subsets of $S$, with cone point the empty set $\emptyset$.
For $\sigma$ a simplex of $CL'$, $\psi(\sigma)$ in
$C_*(\Sigma/N)$ will be the signed sum of its $|W/N|$
inverse images under the map $\Sigma/N\rightarrow
\Sigma/W=CL'$. The signs will ensure that simplices
of $CL'$ that do not contain $\emptyset$ as a vertex map
to zero.
In more detail, fix a transversal $w_1,\ldots,w_m$ to
$N$ in $W$, and for $\sigma$ a simplex of $CL'$, viewed
as a chain $\sigma=(T_0<T_1<\cdots<T_r)$ of spherical subsets,
define
\[\psi(\sigma) = \sum_{i=1}^m (-1)^{l(w_i)} w_i\sigma
= \sum_{i=1}^m (-1)^{l(w_i)} (w_iW_{T_0}<\cdots<w_iW_{T_r}).\]
Here $l(w)$ denotes the length of $w$ as a word in $S$. For
any $n\in N$, $l(wn)-l(w)$ is even, and so the sum above
does not depend on the choice of transversal. The above
formula clearly describes a chain map from $C_*(CL')$ to
$C_*(\Sigma/N)$. Now if $T$ is a non-empty spherical subset
of $S$, then $W_T$ contains equal numbers of words of odd and
even length, since right multiplication by any fixed $s\in T$ is a
parity-reversing bijection of $W_T$; hence so does its image
$W_T/(W_T\cap N)\leq W/N$.
Equivalently, any transversal to $W_T\cap N$ in $W_T$ contains equal
numbers of words of odd and even length.
It follows that if $T_0\neq \emptyset$, then $\psi(\sigma)=0$.
Hence the formula given above defines a chain map
$\psi: C_*(CL',L')\rightarrow C_*(\Sigma/N)$. This
clearly has the claimed naturality properties.
It remains to exhibit a splitting map $\phi: C_*(\Sigma/N)\rightarrow
C_*(CL',L')$. This uses a `simplicial excision map'. Let $v$ be the
image of $W_\emptyset\in\Sigma$ in $\Sigma/N$, and let $X$ be the
subcomplex of $\Sigma/N$ consisting of all simplices that do not have
$v$ as a vertex. There is a natural bijection between simplices of
$CL'$ containing the cone vertex and simplices of $\Sigma/N$
containing $v$. This induces an isomorphism $C_*(\Sigma/N,X) \cong
C_*(CL',L')$ and $\phi$ is defined as the composite of this with the
map $C_*(\Sigma/N)\rightarrow C_*(\Sigma/N,X)$. To check that
$\phi\circ\psi$ is the identity map on $C_*(CL',L')$, let
$\sigma=(T_0<T_1<\cdots<T_r)$ be any $r$-simplex of $CL'$. If
$T_0\neq \emptyset$ then we already know that
$\phi\circ\psi(\sigma)=\phi(0)=0$, and so the given formula for
$\phi\circ\psi$ does define a self-map of $C_*(CL',L')$. On the other
hand, if $T_0=\emptyset$ then $\psi(\sigma)$ contains $m=|W:N|$
distinct signed simplices, exactly one of which has $v=W_\emptyset$ as
a vertex rather than some other coset of $W_\emptyset$; furthermore
this simplex appears with sign $+1$. It follows that in this case
$\phi\circ\psi(\sigma)=\sigma$, confirming that $\phi\circ\psi$ is
the identity map of $C_*(CL',L')$.
\end{proof}
\begin{corollary}\label{cormainlem}
With notation as above, let $K$ be a subcomplex of $L$, and
let $\Sigma(K)$ be the (barycentric subdivision of the)
cubical $W_L$-subcomplex of $\Sigma$ associated to $K$. There
is a natural isomorphism $\psi$ from the relative chain complex
$C_*(CL',L'\cup CK')$ to a direct summand of the relative
simplicial chain complex $C_*(\Sigma/N,\Sigma(K)/N)$.
\end{corollary}
\begin{proof}
$C_*(CK',K')$ is a subcomplex of $C_*(CL',L')$ and the
corresponding quotient is $C_*(CL',L'\cup CK')$. Similarly,
$C_*(\Sigma(K)/N)$ is a subcomplex of $C_*(\Sigma/N)$ with
$C_*(\Sigma/N,\Sigma(K)/N)$ the corresponding quotient.
By naturality of $\psi$ and $\phi$ we get a diagram as follows,
in which the two left-hand squares with the same label on both
vertical sides commute and such that the two composites labelled
$\phi\circ\psi$ are equal to the relevant identity maps. A
diagram chase shows that there are unique maps $\psi$~and~$\phi$
corresponding to the dotted vertical arrows that make the right-hand
squares with the same label on both
vertical sides commute, and that these maps also satisfy $\phi\circ\psi=1$.
\[\xymatrix{
0\ar[r]&C_*(CK',K')\ar@/^/[d]^\psi \ar[r]& C_*(CL',L')\ar@/^/[d]^\psi
\ar[r]&C_*(CL',L'\cup CK')\ar@{-->}@/^/[d]^\psi\ar[r]& 0\\
0\ar[r]&C_*(\Sigma(K)/N)\ar@/^/[u]^\phi \ar[r]& C_*(\Sigma/N)\ar@/^/[u]^\phi
\ar[r]&C_*(\Sigma/N,\Sigma(K)/N)\ar@{-->}@/^/[u]^\phi\ar[r]&0\\
}
\]
\end{proof}
\section{Proof of Theorem~\ref{th:ksbq}}
As in the statement of Theorem~\ref{th:ksbq}, let $L$ be a finite
$n$-dimensional flag complex equipped with an admissible simplicial
action of a finite group $Q$, let $(W,S)=(W_L,S_L)$ be the associated
right-angled Coxeter system, let $N$ be a finite-index subgroup of $W$
that is normalized by $Q$, and let $G$ be the semidirect product
$G=N\semi Q$.
\begin{proof}[Proof of Theorem~\ref{th:ksbq}]
Note that $G$ may be viewed as a finite-index
subgroup of the semidirect product $W\semi Q$. Under these
hypotheses, we already see from Lemma~\ref{equivariant} and Corollary~\ref{corequiv} that
the Davis complex $\Sigma$ is a cocompact $(n+1)$-dimensional
model for $\ebar G$, and that $G$ is virtually torsion-free.
Using the hypothesis that $L$ is acyclic, we see that the subcomplex
$\Sigma^{\sing(W)}$ of $\Sigma$ consisting of those points whose
stabilizer in $W$ is non-trivial is acyclic by Lemma~\ref{modW}. In
this case any finite-index torsion-free subgroup of $G$ acts freely on
the acyclic $n$-dimensional complex $\Sigma^{\sing(W)}$, which
implies that $\vcd G\leq n$.
For the remainder of the proof it will be convenient to define
$K:=L^\sing$, the subcomplex of $Q$-singular points in $L$.
(We warn the reader that our use of `$K$' is different
to that in~\cite[ch.~7--8]{davis2}.)
Next we show that $\cdbar G\geq n+1$. Since $W^\ev$ has index~2
in $W$ and is clearly $Q$-invariant, we see that $(N\cap W^\ev)\semi
Q$ is a subgroup of $G$ of index at most~2. Hence without loss of
generality we may assume that $N\leq W^\ev$. Now consider the family
$\mW$ of finite subgroups of $G$, consisting of those finite subgroups
that are contained in $N$, or equivalently the finite subgroups that
map to the trivial subgroup under the factor map $G\rightarrow Q$.
The stabilizers in $W\semi Q$ of vertices of $\Sigma$ are described in
Remark~\ref{remstabs}, and by intersecting with $G=N\semi Q$ we get a
similar description of stabilizers in $G$: the stabilizer of the
vertex $wW_T$ is the direct product of the intersection $N\cap
wW_Tw^{-1}$ and a subgroup that maps isomorphically to $Q_T$, the
stabilizer in $Q$ of the vertex $T$ of $CL'$. It follows that
$\Sigma^{\mW\mhyph\sing}$ is equal to the inverse image in $\Sigma$ of
the $Q$-singular set $CK'$ in $\Sigma/W=CL'$. Hence
$\Sigma^{\mW\mhyph\sing}$ is the $W$-invariant subcomplex of the
cubical structure on $\Sigma$ that corresponds (under the map of
Proposition~\ref{natbij}) to $K=L^\sing$. Using
Corollary~\ref{cormainlem} applied in this case, we see that
$H^{n+1}(\Sigma/N,\Sigma^{\mW\mhyph\sing}/N)$ admits a split
surjection onto $H^{n+1}(CL',L'\cup CK')$, which is isomorphic to
$H^n(L,K)=H^n(L,L^\sing)$ by excision. Proposition \ref{bredon_prop}
finishes the argument.
To show that $\vcd G=n$ when $L$ is a barycentric subdivision, we use
the calculation of the cohomology of $W$ with free coefficients as
described in~\cite[section~8.5]{davis2}. If $v$ is a vertex of $L$
that corresponds to the barycentre of an $n$-dimensional cell, then
$L-v$ is homotopy equivalent to the subcomplex obtained from $L$ by
removing the (interior of the) $n$-dimensional cell. Hence we see that
$H^{n-1}(L-v)\cong \Z$, and so by~\cite[cor.~8.5.3]{davis2}, $H^n(W;\Z
W)$ contains a free abelian summand.
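A short justification of the claim $H^{n-1}(L-v)\cong\Z$, not spelled out above (a standard excision argument): writing $e$ for the $n$-cell whose barycentre is $v$, excision identifies $H^n(L,L-v)$ with the local cohomology $H^n(D^n,D^n-\{0\})\cong\Z$, and since $L$ is acyclic the long exact sequence of the pair $(L,L-v)$ reads

```latex
% L is acyclic, so the outer terms vanish and the middle map is an
% isomorphism:
\[
0 = H^{n-1}(L) \longrightarrow H^{n-1}(L-v) \longrightarrow
H^{n}(L, L-v) \cong \Z \longrightarrow H^{n}(L) = 0 ,
\]
% whence H^{n-1}(L-v) is infinite cyclic, as used in the text.
```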
Now we show that $\vcd G=n$ in the case when $K=L^\sing$ is a full
subcomplex of $L$. From the long exact sequence for the pair $(L,K)$
we see that $H^{n-1}(K)\neq 0$, and hence $H^n(CK,K)\neq 0$.
Lemma~\ref{mainlem} applied to the Coxeter group $W_K$ and its
finite-index torsion-free subgroup $W'_K$ shows that $H^n(W'_K;\Z)=
H^n(\Sigma_K/W'_K)$ contains a summand isomorphic to $H^n(CK,K)$ and
so is not zero. (Here we use $\Sigma_K$ to denote $\Sigma(W_K,S_K)$
since we reserve $\Sigma$ to stand for $\Sigma(W_L,S_L)$.) Since $K$
is a full subcomplex of $L$, the Coxeter group $W_K$ is naturally a
subgroup of $W_L\leq G$, and hence $\vcd G\geq \vcd W_K \geq n$.
The general case of Theorem~\ref{th:ksbq} follows from the
case described above, but since we will make extensive use
of the K\"unneth theorem it is helpful to work with cohomology
with coefficients in a finite field rather than integral cohomology.
Since $L$ is finite and $n$-dimensional, the hypothesis that
$H^n(L,L^\sing)\neq 0$ is equivalent to the existence of a prime~$p$
for which the mod-$p$ cohomology group $H^n(L,L^\sing;\F_p)\neq 0$.
Similarly, the hypothesis that
$\bigotimes_{i=1}^m H^{n_i}(L_i,L_i^\sing)\neq 0$
is equivalent to the existence of a single prime $p$ such that for
each $i$, $H^{n_i}(L_i,L_i^\sing;\F_p)\neq 0$. For the remainder of
the proof, we fix such a prime. The mod-$p$ analogues of
Proposition~\ref{bredon_prop} and Corollary~\ref{cormainlem} are
easily deduced from the integral versions.
Now let each $G_i$ be defined as above
in terms of $L_i$, $Q_i$, $n_i$ and $N_i\leq W_i$, and define
$\Gamma:=G_1\times\cdots\times G_m$, $Q=Q_1\times\cdots\times Q_m$,
and $W:=W_1\times \cdots\times W_m$. Finally, let $n:=\sum_{i=1}^m n_i$.
The direct product $\Sigma:=\Sigma_1\times \cdots\times \Sigma_m$ is a
cocompact model for $\ebar\Gamma$ of dimension $m+n$, and so
$\cdbar \Gamma\leq m+n$. Also
the direct product $\Sigma_1^\sing\times\cdots\times \Sigma_m^\sing$
is an acyclic $n$-dimensional simplicial complex admitting a proper
$\Gamma$-action, which implies that $\vcd\Gamma\leq n$.
The lower bounds also work just as in the case $m=1$; first we
consider $\cdbar\Gamma$. If we define
$\mW$ to be the family of finite subgroups of $\Gamma$ that are contained
in $W$, then a point
$x=(x_1,\ldots,x_m)\in \Sigma= \Sigma_1\times\cdots\times \Sigma_m$
is in $\Sigma^{\mW\mhyph\sing}$ if and only if there is an $i$ so
that $x_i\in \Sigma_i^{\mW_i\mhyph\sing}$. Hence we see that
if we define $N:=N_1\times\cdots\times N_m$, then
$$\Sigma^{\mW\mhyph\sing}=\bigcup_{i=1}^m\Sigma_1\times\cdots
\times\Sigma_i^{\mW_i\mhyph\sing}\times\cdots\times \Sigma_m,$$
and by the relative K\"unneth Formula
$H^{n+m}(\Sigma/N,\Sigma^{\mW\mhyph\sing}/N;\F_p)$ contains a direct
summand isomorphic to
$\bigotimes_{i=1}^m
H^{n_i+1}(\Sigma_i/N_i,\Sigma_i^{\mW_i\mhyph\sing}/N_i;\F_p),$
which is non-zero since it contains a
summand isomorphic to
\[\bigotimes_{i=1}^m H^{n_i+1}(CL_i,L_i\cup CK_i;\F_p) \cong
\bigotimes_{i=1}^m H^{n_i}(L_i,K_i;\F_p)=
\bigotimes_{i=1}^m H^{n_i}(L_i,L_i^\sing;\F_p).\]
To give a lower bound for $\vcd\Gamma$, start by considering the two
extra hypotheses separately for each $i$. If $L_i$ is a barycentric
subdivision, then as above $H^{n_i}(W_i;\Z W_i)$ contains a free
abelian summand, and so by the universal coefficient theorem
$H^{n_i}(W_i;\F_p W_i)\neq 0$. If instead $K_i:=L_i^\sing$ is a full
subcomplex of $L_i$, then $H^{n_i}(W_i';\F_p)\neq 0$ as above. There
is a surjective homomorphism from $\F_pW'_i$ onto $\F_p$ and hence a
short exact sequence of $\F_pW'_i$ modules
\[0\rightarrow I\rightarrow
\F_pW'_i\rightarrow \F_p \rightarrow 0\]
for suitable $I$. The
corresponding long exact sequence in cohomology implies that
$H^{n_i}(W'_i;\F_pW'_i)\rightarrow H^{n_i}(W'_i;\F_p)$ is surjective,
since its cokernel is contained in $H^{n_i+1}(W'_i;I)=0$. It follows
that $H^{n_i}(W_i;\F_p W_i)\cong H^{n_i}(W'_i;\F_pW'_i)\neq 0$. Since
$W$ acts cocompactly on $\Sigma=\Sigma_1\times\cdots\times \Sigma_m$,
the universal coefficient theorem for cohomology with compact supports
may be applied~\cite[8.5.9]{davis2}. Hence
\[H^n(W;\F_pW) \cong \bigotimes_{i=1}^m H^{n_i}(W_i;\F_pW_i) \neq 0,\]
showing that $\vcd\Gamma\geq n$ as required.
\end{proof}
\section{Examples}
In this section we construct sufficiently many examples of finite
groups $Q$ and $Q$-CW-complexes $L$ to establish
Corollary~\ref{cora}. First we collect some results concerning
triangulations.
\begin{proposition} Any finite $Q$-CW-complex is equivariantly
homotopy equivalent to a finite simplicial complex of the same dimension
with an admissible $Q$-action. If $L$ is any simplicial complex
with $Q$-action, the $Q$-action on the barycentric subdivision $L'$ of $L$ is
admissible. For any admissible action of $Q$ on
$L$, $L^\sing$ is a subcomplex. If $M\leq L$ is any subcomplex of a
simplicial complex $L$, then its barycentric subdivision $M'$ is a
full subcomplex of the flag complex $L'$.
\end{proposition}
\begin{proof}
The first claim follows easily from the simplicial approximation
theorem. Simplices of $L'$ correspond to chains in the poset of
simplices of $L$; since $Q$ acts as automorphisms of this poset the
action on $L'$ is admissible. For an admissible action of $Q$ on $L$, a
simplex of $L$ is fixed by $H\leq Q$ if and only if each of its
vertices is fixed. Hence each $L^H$ is the full subcomplex on the
$H$-fixed vertices, and $L^\sing=\bigcup_{1<H\leq Q}L^H$ is a
subcomplex. Finally if $M$ is any subcomplex of $L$, the poset of
simplices of $M$ is a subposet of the poset of simplices of $L$, and
so $M'$ is a full subcomplex of $L'$.
\end{proof}
\noindent
{\bf Example 1.} Let $Q$ be the alternating group $A_5$, and define a
$Q$-CW-complex as follows. For the 1-skeleton $L^1$ of $L$ take the
complete graph on five vertices, with the natural action of $Q=A_5$.
In $A_5$, the 24 elements of order five split into two conjugacy
classes of size 12, and any element $g$ of order 5 is conjugate to
$g^{-1}$ (but is not conjugate to $g^2$ or $g^3$). Define $L$ by
using one of the two conjugacy classes of 5-cycles to describe
attaching maps for six pentagonal 2-cells. By construction there is a
$Q$-action without a global fixed point and it is easily checked that
$L$ is acyclic. In fact, $\pi_1(L)$ is isomorphic to $SL(2,5)$, the
unique perfect group of order 120, and $L$ is isomorphic to the
2-skeleton of the Poincar\'e homology sphere~\cite[I.8]{Bredon}. The
singular set for the $Q$-action consists of the 1-skeleton and the
five lines of symmetry of each pentagonal 2-cell. Equivalently the
singular set is the 1-skeleton of the barycentric subdivision of $L$
(i.e., the simplicial complex with 21 vertices coming from the poset
of faces of $L$). In particular $H^2(L,L^\sing)\neq 0$. For this
21-vertex triangulation, $L^\sing$ is not a full subcomplex of $L$.
This could be rectified by taking a finer triangulation, but instead
note that $L$ is the barycentric subdivision of a polygonal complex.
By taking each $Q_i$ to be $A_5$ and each $L_i$ to be this 21-vertex
triangulation of $L$, we obtain groups $\Gamma_m$ having the
properties stated in the first part of Corollary~\ref{cora}.
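As a quick consistency check, not part of the argument above, one can verify the Euler characteristic and the vertex count for this complex:

```latex
% L has 5 vertices, binom(5,2) = 10 edges (the complete graph on five
% vertices), and 6 pentagonal 2-cells, so
\[
\chi(L) \;=\; 5 - 10 + 6 \;=\; 1,
\]
% as must hold for an acyclic complex. The 5 + 10 + 6 = 21 cells of L
% are precisely the vertices of the barycentric subdivision mentioned
% in the text.
```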
\medskip\noindent
{\bf Example 2.} Fix distinct primes $p$ and $q$, and let
$Q$ be cyclic of order $q$, generated by $g$. For the
$Q$-fixed point set $L^Q$, take a mod-$p$ Moore space
$M(1,p)$. This space has a CW-structure with 1-skeleton
a circle and one 2-cell $f$. The 2-cell $f$ is attached
to the circle via a map of degree~$p$. Now define $L^2$
by adding on a free $Q$-orbit of 2-cells $f_0,\ldots,f_{q-1}$,
where $f_i=g^if_0$, so that each $f_i$ is attached to the
circle by a degree one map. $L^2$ is simply connected,
and $H_2(L^2)$ is a free $\Z Q$-module of rank one,
since it has a $\Z$-basis given by the elements
$$e_j:=f - \sum_{i=0}^{p-1} g^i f_j = f-\sum_{i=0}^{p-1} g^{i+j}f_0$$
for $0\leq j<q$, and $g^je_0= e_j$ for each $j$. Make
$L$ by attaching a free $Q$-orbit of 3-cells to kill each
$e_j$, so that $L$ is acyclic (and also contractible).
The long exact sequence for the pair $(L,L^Q)=(L,L^\sing)$
implies that $H^3(L,L^Q)\cong \Z/p$.
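The computation behind this last claim, spelled out (standard long exact sequence and universal coefficients):

```latex
% L is acyclic, so H^k(L) = 0 for k > 0 and the long exact sequence of
% the pair (L, L^Q) gives H^3(L, L^Q) \cong H^2(L^Q). Since the Moore
% space L^Q = M(1,p) has H_2(L^Q) = 0 and H_1(L^Q) \cong \Z/p, the
% universal coefficient theorem yields
\[
H^3(L, L^Q) \;\cong\; H^2(L^Q) \;\cong\;
\operatorname{Ext}(H_1(L^Q), \Z) \;\cong\; \Z/p .
\]
```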
To establish the second part of Corollary~\ref{cora}
we take each $Q_i$ to be cyclic of order $q_i$, take
each $L_i$ to be a suitable triangulation of the above
$Q_i$-CW-complex for some fixed choice of $p$,
and take $N_i$ to be the commutator subgroup of the
Coxeter group $W_i:=W_{L_i}$. For any such choice,
we obtain a group $\Lambda_m$ as in the statement.
To ensure that $\Lambda_m$ contains only cyclic finite
subgroups we must take the primes $q_i$ all distinct,
whereas to ensure that $\Lambda_m$ contains only abelian
finite subgroups of exponent $q$ we take $q_i=q$ for all~$i$.
\section{Contractibility and acyclicity}
In~\cite{BLN}, it was shown that certain right-angled Coxeter groups
$W$ have the property that $\vcd W=\cdbar W =2<\gdbar W=3$. In this section
we improve this result by showing that for these same groups there
is no 2-dimensional contractible proper $W$-CW-complex.
We will use a few subsidiary results in the proof. Results similar to
Propositions \ref{propsubcx}~and~\ref{propacyc} appear
in~\cite{casadicks}, and with extra hypotheses in~\cite{segev}.
Proposition~\ref{kervair} is a corollary of the celebrated
Gerstenhaber-Rothaus theorem~\cite{GR}.
\begin{proposition} If $Y$ is a subcomplex of a 2-dimensional
acyclic complex, then $H_2(Y)=0$ and each $H_i(Y)$ is
free abelian. \label{propsubcx}
\end{proposition}
\begin{proof}
If $Y$ is any subcomplex of an $n$-dimensional acyclic
complex $Z$, then consideration of the homology long exact
sequence for the pair $(Z,Y)$ shows that $H_n(Y)$ is
trivial and that $H_{n-1}(Y)$ is free abelian. Since
$H_0$ is always free abelian, the case $n=2$ gives the
claimed result.
\end{proof}
\begin{proposition}\label{propacyc}
Let $Q$ be a finite soluble group and let $X$ be a 2-dimensional
acyclic $Q$-CW-complex. Then the fixed point set $X^Q$ is also acyclic.
\end{proposition}
\begin{proof}
The finite soluble group $Q$ has a normal subgroup $N$ of prime index,
the factor group $Q/N$ acts on the $N$-fixed point set $X^N$, and
the equality $X^Q=(X^N)^{Q/N}$ holds. Hence it suffices to consider
the case in which $Q$ has prime order.
By the P.~A.~Smith theorem, $X^Q$ is mod-$p$ acyclic in the case
when $Q$ has order $p$. By the previous proposition, $H_i(X^Q)$
is free abelian for all $i$. By
the universal coefficient theorem, the rank of the $i$th mod-$p$
homology group of $X^Q$ is equal to the rank of $H_i(X^Q)$.
Hence $X^Q$ must be acyclic.
\end{proof}
\begin{proposition}\label{kervair} Let $\Gamma$ be a group and
$\rho:\Gamma\to U(n)$ be a unitary representation of $\Gamma$.
Define $\widetilde \G:=\Gamma \ast \langle x_1, \dots,
x_r\rangle/{\langle\langle w_1, \dots, w_r\rangle\rangle}$ where
each $w_i$ is a word in elements of $\G$ and $x_1, \dots, x_r$. Let
$d_{ij}$ be the total exponent of $x_j$ in $w_i$ and set
$d=\det(d_{ij})$. If $d\ne 0$, then $\rho$ extends to a representation
$\tilde \rho: \widetilde \Gamma\to U(n)$.
\end{proposition}
\begin{proof} Extending $\rho$ to a representation of $\widetilde
\Gamma$ is equivalent to finding solutions $\overline{x}_i\in U(n)$
to the system of equations $\overline{w}_1=\cdots=\overline{w}_r=1$.
Here $\overline{w}_i$ is the word in elements of $U(n)$ and
variables $\overline{x}_1,\ldots,\overline{x}_r$ corresponding to
the word $w_i$. In more detail, the elements
of $U(n)$ appearing in $\overline{w}_i$ are obtained by applying
$\rho$ to the elements of $\Gamma$ appearing in the word $w_i$,
while each occurrence of $x_i$ is replaced by $\overline{x}_i$.
When such a solution has been found, we may define
$\widetilde{\rho}(x_i):= \overline{x}_i$. The existence of a
solution to this system is established in~\cite[theorem~1]{GR}.
\end{proof}
We recall that the {\sl nerve} of a covering is the simplicial complex whose
vertices are the sets in the cover, and whose simplices are the finite
collections with a non-empty intersection~\cite[section~3.3]{hatcher}.
\begin{lemma} \label{lemmahelly}
Let $X$ be a CW-complex, let $S$ be a finite indexing
set, and let $X(s)$ be a subcomplex of $X$ such that each
$X(s)$ is acyclic and each intersection of $X(s)$'s is either
empty or acyclic. Define
$$X^\#:= \bigcup_{s\in S} X(s),$$
and let $|\mathcal{N}|$ be the realization of the nerve of the
covering of $X^\#$ by the subcomplexes $X(s)$. There is a map
$f:X^\#\rightarrow |\mathcal{N}|$ which is a homology isomorphism
and induces a surjection of fundamental groups.
\end{lemma}
\begin{proof}
In the case when each intersection of $X(s)$'s is either contractible
or empty, it is well-known that there is a homotopy equivalence
$f:X^\#\rightarrow |\mathcal{N}|$~\cite[4.G, Ex.~4]{hatcher}.
We use Quillen's plus construction to reduce to this case.
For $T\subseteq S$,
define $X(T)$ to be the intersection $X(T)=\bigcap_{s\in T} X(s)$.
Suppose that $U\subseteq S$ is such that
$X(U)$ is non-empty. In this case, since $X(U)$ is
acyclic we can find a set $A_U$ of 2-cells with attaching
maps from the boundary of the 2-cell to $X(U)$ so that
each attaching map represents a conjugacy class of
commutators in $\pi_1(X(U))$ and so that the fundamental
group of the resulting complex $\widehat{X}(U)$ is trivial. Moreover, there
is a set $B_U$ of 3-cells and attaching maps from the boundary
of the 3-cell to $\widehat{X}(U)$ so that the resulting complex
$X_U$ contains $X(U)$ as a subcomplex, is simply-connected,
and such that the inclusion of $X(U)$ into $X_U$ is a homology
isomorphism. Define $Y$ by attaching to $X$ 2- and 3-cells
indexed by $\coprod_{U}A_U$ and $\coprod_U B_U$ respectively.
Define a subcomplex $Y(s)$ of $Y$ by attaching to $X(s)$ the 2- and
3-cells indexed by $\coprod_{s\in U} A_U$ and $\coprod_{s\in U} B_U$
respectively. Finally define $Y(T):=\bigcap_{s\in T}Y(s)$,
and $Y^\#:= \bigcup_{s\in S}Y(s)$. The nerve of the covering
of $Y^\#$ by the subcomplexes $Y(s)$ is naturally isomorphic to
$\mathcal{N}$. A Mayer-Vietoris spectral sequence argument
shows that the inclusion $X^\#\rightarrow Y^\#$ is a homology
isomorphism, and this map induces a surjection $\pi_1(X^\#)
\rightarrow \pi_1(Y^\#)$ because the 1-skeleta of $X^\#$ and
$Y^\#$ are equal.
\end{proof}
It is also possible to prove the above result directly using the
Mayer-Vietoris spectral sequence and the van Kampen theorem to keep
track of the homology and fundamental group respectively.
\begin{proof}[Proof of Theorem~\ref{acycthm}]
The Davis complex $\Sigma=\Sigma(W_L,S_L)$ is a cocompact
3-dimensional model for $\ebar W_L$. Since $L$ is acyclic, $\Sigma^\sing$
is a 2-dimensional acyclic proper $W_L$-CW-complex in which the fixed
point set for any non-trivial finite subgroup is contractible. This
suffices to show that $\cdbar W_L=2$.
Now suppose that $X$ is any contractible proper 2-dimensional
$W_L$-CW-complex. Let $S=S_L$, and define $X^\#$ as the union
of the fixed point sets $X^s$: $X^\#:=\bigcup_{s\in S}X^s$.
By construction, the realization of the nerve of the covering of $X^\#$ by the
sets $X^s$ is equal to $L$. By Proposition~\ref{propacyc},
for each $T\subseteq S$ that spans a simplex of $L$ the subset
$$X(T):=\bigcap_{s\in T}X^s= X^{\langle T\rangle}$$
is acyclic, and for each $T$ that does not span a simplex of
$L$, $X(T)$ is empty. By Lemma~\ref{lemmahelly}, it follows
that $X^\#$ is acyclic and that there is a natural surjection
$\phi:\pi_1(X^\#)\rightarrow \pi_1(L)$.
Define $\rho':=\rho\circ \phi:\pi_1(X^\#)\rightarrow U(n)$, a
non-trivial unitary representation of $\pi_1(X^\#)$.
We use this representation to obtain a contradiction.
Pick $g\in \pi_1(X^\#)$ so that $\rho'(g)\neq 1$.
Since $X$ is contractible, there exists a connected subcomplex
$X_1$ of $X$ with $X^\# \subseteq X_1$ such that $X_1-X^\#$ comprises
only finitely many cells, and such that $g$ maps to the identity
element of $\pi_1(X_1)$. By Proposition~\ref{propsubcx},
$H_2(X_1)=0$, and $H_1(X_1)$ is free abelian. In general $X_1-X^\#$ will
contain some 0-cells; by contracting some of the 1-cells in $X_1-X^\#$
we may get rid of these extra 0-cells without changing the homotopy
type. In this way we replace $X_1$ by a complex $X_2$ with the
following properties: $X^\#\subseteq X_2$; $H_1(X_2)$ is free abelian
and $H_2(X_2)=\{0\}$; $X_2$ consists of $X^\#$ with finitely many
1- and 2-cells added; $g$ is in the kernel of the map
$\pi_1(X^\#)\rightarrow \pi_1(X_2)$. Unlike $X_1$, $X_2$ is not a
subcomplex of $X$ but this is irrelevant. Since $X_2$ is made by
attaching finitely many cells to the acyclic complex $X^\#$, note that
$H_1(X_2)$ is free abelian of finite rank. Now make $X_3$ by attaching
2-cells to exactly kill $H_1(X_2)$. Thus $X_3$ is an acyclic
2-complex, obtained by attaching the same finite number, $r$ say,
of 1- and 2-cells to $X^\#$. If we write $\Gamma=\pi_1(X^\#)$ and
$\widetilde\Gamma:=\pi_1(X_3)$, then the relationship between $\Gamma$
and $\widetilde\Gamma$ is exactly as in the hypotheses of
Proposition~\ref{kervair}. Here, the group generator
$x_i\in\widetilde\Gamma$ corresponds to a based loop in $X_3$ that
remains in $X^\#$ except that it travels once along the $i$th of
the $r$ new 1-cells, and the word $w_i$ spells out the attaching map
for the $i$th of the $r$ new 2-cells as a word in the elements of $\Gamma$
and the new loops $x_j$. Moreover, since $X^\#$ and $X_3$ are both
acyclic, the relative homology groups $H_i(X_3,X^\#)$ all vanish,
which tells us that the determinant $d$ appearing in the statement of
Proposition~\ref{kervair} is equal to $\pm 1$. Now
Proposition \ref{kervair} can be applied and tells us that the
representation $\rho':\pi_1(X^\#)\rightarrow U(n)$ extends to
a representation $\tilde \rho:\pi_1(X_3)\rightarrow U(n)$.
However, this contradicts the fact that $\rho'(g)\neq 1$,
while $g$ maps to the identity in $\pi_1(X_3)$.
\end{proof}
\begin{remark}
As an example of a suitable $L$, take a flag triangulation
of the 2-skeleton of the Poincar\'e homology sphere (which
was discussed in the previous section); here
there is a faithful representation
$\rho:\pi_1(L)\cong SL(2,5)\rightarrow U(2)$.
\end{remark}
\begin{remark}
There is a version of Brown's question that remains open:
for $m>2$, is there a virtually torsion-free group $G$
such that $\vcd G= m$ but there exists no contractible
$m$-dimensional proper $G$-CW-complex?
\end{remark}
\section{Introduction}
{\renewcommand*{\thetheorem}{\Alph{theorem}}
A normal complex surface singularity $(X,0)$ can be resolved either by a sequence of normalized point blowups, following seminal work of Zariski \cite{Zariski1939} from the late nineteen thirties, or by a sequence of normalized Nash transforms, as was done half a century later by Spivakovsky \cite{Spivakovsky1990}.
The main goal of this paper is to shed some light on the relationship between these two resolution algorithms, which despite their importance and their centrality in modern mathematics is still quite mysterious, providing some evidence of a duality between the two which was initially observed by L\^e \cite[\S4.3]{Le2000}.
While the blowup $\mathrm{Bl}_0X$ of the maximal ideal of $(X,0)$ is the minimal transformation which resolves the family of generic hyperplane sections of $(X,0)$, the Nash transform $\nu$ of $(X,0)$ is the minimal transformation that resolves the family of the polar curves associated with the generic plane projections of $(X,0)$.
Therefore, the study of the duality of resolution algorithms translates into the study of the relative positions on $(X,0)$ of those two families of curves.
This is the viewpoint we adopt in this paper. Our main theorem roughly states that, once the topology of $(X,0)$ is fixed (that is, the homeomorphism class of its link), there are, up to homeomorphism, only finitely many possible relative positions between these families of curves.
In order to give a precise statement of our result we need to introduce some additional notation.
Let $\pi\colon X_\pi \to X$ be a good resolution of $(X,0)$, by which we mean a proper bimeromorphic
morphism from a smooth surface $X_\pi$ to $X$ which is an isomorphism outside of a simple normal crossing divisor $E=\pi^{-1}(0)$, and denote by $V(\Gamma_\pi)$ the set of vertices of the dual graph $\Gamma_\pi$ of $\pi$, so that every element $v$ of $V(\Gamma_\pi)$ corresponds to an irreducible component $E_v$ of $E$.
We weight $\Gamma_\pi$ by attaching to each vertex $v$ the genus $g(v)\geq 0$ of the complex curve $E_v$ and the self-intersection $e(v)<0$ of $E_v$.
For each $v$ in $V(\Gamma_\pi)$, we denote by $l_v$ the intersection multiplicity of the zero locus $h^{-1}(0)$ of a generic hyperplane section $h\colon(X,0)\to(\C,0)$ with $E_v$, and we call \emph{$\cal L$-vector} of $(X,0)$ the vector $L_\pi=(l_v)_{v\in V(\Gamma_\pi)}\in\Z_{\geq0}^{V(\Gamma_\pi)}$.
Whenever $\pi\colon X_\pi \to X$ factors through $\mathrm{Bl}_0X$, the strict transform of such a generic hyperplane section via $\pi$ consists of a disjoint union of smooth curves that intersect $E$ transversely at smooth points of $E$, and $l_v$ is the number of such curves passing through the component $E_v$. Similarly, we denote by $p_v$ the intersection multiplicity of the strict transform of the polar curve of a generic plane projection $\ell\colon(X,0)\to(\C^2,0)$ with $E_v$ and we call \emph{$\cal P$-vector} of $(X,0)$ the vector $P_\pi=(p_v)_{v\in V(\Gamma_\pi)}\in\Z_{\geq0}^{V(\Gamma_\pi)}$. Whenever $\pi\colon X_\pi \to X$ factors through $\nu$, such a strict transform consists of smooth curves intersecting $E$ transversely at smooth points, and $p_v$ equals the number of such curves through $E_v$. We can now give a precise statement of our main result:
\begin{theorem} \label{thm:main}
Let $M$ be a real $3$-manifold.
There exist only finitely many triples $(\Gamma,L,P)$, where $\Gamma$ is a weighted graph and $L$ and $P$ are vectors in $(\Z_{\geq0})^{V(\Gamma)}$, such that there exists a normal surface singularity $(X,0)$ satisfying the following conditions:
\begin{enumerate}
\item The link of $(X,0)$ is homeomorphic to $M$.
\item $(\Gamma,L,P) = (\Gamma_\pi, L_\pi, P_\pi)$, where $\pi \colon X_{\pi} \to X$ is the minimal good resolution of $(X,0)$ which factors through the blowup of the maximal ideal and the Nash transform of $(X,0)$.
\end{enumerate}
\end{theorem}
Recall that the \emph{link} of a normal surface singularity $(X,0)$, which is defined by embedding $(X,0)$ in a suitable smooth germ $(\mathbb{C}^N,0)$ and intersecting it with a small sphere, is, up to homeomorphism, a well defined real 3-manifold which determines and is determined by the homeomorphism class of the germ $(X,0)$ thanks to the Conical Structure Theorem.
Equivalently, the topological type of $(X,0)$ can be completely described in terms of the weighted dual graph $\Gamma_{\pi}$ of any good resolution $\pi$ of $(X,0)$, since $\Gamma_{\pi}$ is a plumbing graph of the link of $(X,0)$.
Conversely, Neumann \cite{Neumann1981} proved that the weighted dual graph of the minimal good resolution of $(X,0)$ is determined by the topology of the surface germ.
Adopting this point of view, the datum of the 3-manifold $M$ of Theorem~\ref{thm:main} is equivalent to that of a weighted dual graph $\Gamma$, and as a consequence of the theorem we obtain the following:
\begin{corollary}\label{cor:main}
Let $\Gamma$ be a weighted graph.
Then there exist finitely many pairs $(L,P)$ of vectors $L$ and $P$ in $(\Z_{\geq0})^{V(\Gamma)}$ such that there exist a normal surface singularity $(X,0)$ and a good resolution $\pi$ of $(X,0)$ satisfying
\[
(\Gamma,L,P)=(\Gamma_{\pi}, L_{\pi}, P_{\pi}).
\]
\end{corollary}
One of the ingredients of the proof of Theorem~\ref{thm:main} is the fact that the topological type of a normal surface singularity gives a bound on the multiplicity of the germs realizing it.
We believe this statement to be of independent interest:
\begin{proposition}\label{prop:main}
Let $M$ be a real $3$-manifold.
Then there exists a natural number $n_M$ that only depends on the homeomorphism type of $M$ and such that, if $(X,0)$ is a normal surface singularity whose link is homeomorphic to $M$, the multiplicity $m(X,0)$ of $(X,0)$ is at most $n_M$.
\end{proposition}
Moreover, an explicit value for the bound $n_M$ can be computed in terms of the topology of $M$.
Our proof of this result makes use of a construction of Caubel, Popescu-Pampu, and the third author \cite{CaubelNemethiPopescu-Pampu2006}.
Given a weighted graph $\Gamma$, Proposition~\ref{prop:main} would then be sufficient to prove the finiteness of the set of the $\cal L$-vectors $L$ such that the pair $(\Gamma,L)$ can be realized by a surface singularity $(X,0)$.
By a procedure that we call \emph{gardening}, we then obtain the finiteness of pairs $(\Gamma,L)$ such that $\Gamma$ is the graph of the minimal good resolution factoring through the blowup of the maximal ideal.
In order to obtain the finiteness of the $\cal P$-vectors, we then use the well-known L\^e--Greuel--Teissier formula \cite{LeTeissier1981} to deduce from Proposition~\ref{prop:main} a bound on the multiplicity of the polar curve of $(X,0)$ in terms of $n_M$ and of the Euler characteristic of the Milnor--L\^e fiber of a generic linear form on $(X,0)$, which can be computed in terms of the graph $\Gamma$.
While the argument above would suffice to deduce Corollary~\ref{cor:main}, in order to prove Theorem~\ref{thm:main} we also need to prove that the topological type of $(X,0)$ provides a bound on the number of point blowups necessary to go from any good resolution of $(X,0)$ to one factoring through the Nash transform of $(X,0)$.
We do this by considering a set of invariants, the so-called \emph{Mather discrepancies} introduced by de Fernex, Ein, and Ishii \cite{FernexEinIshii2008}, and proving that they are bounded from above by another invariant $\nu_v$ which only depends on the topological type of $(X,0)$.
We conclude by showing that the Mather discrepancies grow faster than the $\nu_v$ do when we perform any blowup necessary to achieve factorization through the Nash transform, which permits us to set up an inductive argument.
The key technical result allowing us to do this is Theorem~\ref{thm:sheaf_2-forms}, which proves the existence of a suitable sheaf of K\"ahler 2-forms that only depends on the topological type of $(X,0)$, leading to the definition of the invariants $\nu_v$.
\cdiamond
While Theorem~\ref{thm:main} provides a finite list of possibly realizable pairs of $\cal L$- and $\cal P$-vectors $L$ and $P$, the list produced by its proof could still be fairly long.
In the final section of the paper we discuss how to sharpen this bound by studying additional restrictions on the relative positions of generic hyperplane sections and polar curves.
We recast this problem in the framework of \emph{polar exploration}, that is, the quest of determining the $\cal P$-vectors $P$ which can be realized by a normal surface singularity realizing a fixed pair $(\Gamma,L)$.
In order to approach this question, we build on the so-called \emph{Laplacian formula} of a normal surface singularity.
This result, proven by three of the authors of the present paper in \cite{BelottodaSilvaFantiniPichon2019}, can be thought of as a local version of the L\^e--Greuel--Teissier formula referred to above.
It describes the behavior of an infinite family of metric invariants of a normal surface singularity $(X,0)$, called its \emph{inner rates}, that appeared naturally in the study of the Lipschitz geometry of $(X,0)$ in the foundational work \cite{BirbrairNeumannPichon2014}.
This tool has been used in the previous work \cite[Theorem~1.1]{BelottodaSilvaFantiniPichon2020} to prove that the problem of polar exploration admits a unique solution for a specific class of surface singularities, those that are \emph{Lipschitz Normally Embedded}.
Additional restrictions on the relative hyperplane and polar positions can be derived from the topology of Milnor--L\^e fibers; this is discussed in Lemma~\ref{lem:hurwitz}.
We conclude the paper by discussing in detail an example from \cite{MaugendreMichel2017}, for which by combining the Laplacian formula with the topological constraints from Lemma~\ref{lem:hurwitz} we obtain a unique solution to the problem of polar exploration (see Example~\ref{ex:MaugendreMichel2017}).
\subsection*{Acknowledgments}
We would like to thank Hussein Mourtada and Ana Reguera for useful discussions.
This work has been partially supported by the project \emph{Lipschitz geometry of singularities (LISA)} of the \emph{Agence Nationale de la Recherche} (project ANR-17-CE40-0023).
The second author has also been partially supported by a \emph{Research Fellowship} of the \emph{Alexander von Humboldt Foundation}, while the third author has been partially supported by the NKFIH Grant “Élvonal (Frontier)” KKP 126683.
}
\section{Preliminaries on Lipman cones}
In this section we begin by recalling the notion of Lipman cone, and then prove an adaptation of a result of Caubel, Popescu-Pampu, and the third author from \cite{CaubelNemethiPopescu-Pampu2006} which will be useful in the remaining part of the paper.
A more thorough discussion of the basic objects described in this section can be found in \cite{Nemethi1999}.
\medskip
Let $\Gamma$ be a finite connected graph without loops and such that each vertex $v\in V(\Gamma)$ is weighted by two integers $g(v)\geq0$, called genus, and $e(v)\leq -1$, called self-intersection.
We assume that the incidence matrix induced by the self-intersections of the vertices of $\Gamma$, that is the matrix $I_\Gamma\in \Z^{V(\Gamma)\times V(\Gamma)}$ whose $(v,v')$-th entry is $e(v)$ if $v=v'$, and the number of edges of $\Gamma$ connecting $v$ to $v'$ otherwise, is negative definite.
Let $E=\bigcup_{v\in V(\Gamma)}E_v$ be a configuration of curves whose dual graph is $\Gamma$, so that $I_{\Gamma} = (E_v \cdot E_{v'})$, and consider the free additive group $\cal G$ generated by the irreducible components of $E$, that is
\[
\cal G = \bigg\{D =\sum_{v \in V(\Gamma)} d_v E_v \,\bigg|\, d_v \in \Z\bigg\}.
\]
By a slight abuse of notation, we refer to the elements of $\cal G$ as \emph{divisors on $\Gamma$}.
On $\cal G$ there is a natural intersection pairing $D\cdot D'$, described by the incidence matrix $I_{\Gamma}$, and a natural partial ordering given by setting $\sum d_v E_v \leq \sum d'_v E_v$ if and only if $d_v \leq d'_v$ for every $v \in V(\Gamma)$.
The {\it Lipman cone} of $\Gamma$ is the semi-group $\cal E^+$ of $\cal G$ defined as
\[
\cal E^+ = \big\{D \in \cal G \,\big|\, D \cdot E_v \leq 0 \text{ for all } v \in V(\Gamma) \big\}.
\]
\begin{remark}
By looking at the coefficients of a divisor we can identify $\cal G$ with the additive group $\Z^{V(\Gamma)}$.
Then the Lipman cone $\cal E^+$ of $\Gamma$ is naturally identified with the cone $\Z_{\geq0}^{V(\Gamma)}\cap-I_\Gamma^{-1}\big(\Q_{\geq0}^{V(\Gamma)}\big)$, since by definition a divisor $\sum d_vE_v$ belongs to $\cal E^+$ if and only if the vector $I_\Gamma\cdot(d_v)_{v\in V(\Gamma)}$ belongs to $\Z_{\leq0}^{V(\Gamma)}$.
\end{remark}
A cardinal property of the Lipman cone $\cal E^+$, proven in \cite[Proposition 2]{Artin1966}, is that it has a unique nonzero minimal element $Z_{\min}^{\Gamma}$, called the {\it fundamental cycle} of $\Gamma$, and that moreover $Z_{\min}^{\Gamma} \succ 0$, that is the coefficients of $Z_{\min}^{\Gamma}$ are all strictly positive.
Observe that the existence of the fundamental cycle and the fact that $Z_{\min}^{\Gamma}\succ 0$ are equivalent to the fact that $D\succ 0$ for every nonzero divisor $D$ in $\cal E^+$.
Assume from now on that $\Gamma$ is the dual graph of a good resolution $\pi$ of a normal surface singularity $(X,0)$.
Notice that the Lipman cone, and therefore its fundamental cycle, only depend on the graph $\Gamma$, that is on the topology of $(X,0)$, and not on the complex geometry of $(X,0)$; the fundamental cycle $Z_{\min}^{\Gamma}$ can be computed from $\Gamma$ by using Laufer's algorithm from \cite[Proposition 4.1]{Laufer1972}.
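As recalled above, Laufer's algorithm computes $Z_{\min}^{\Gamma}$ from $\Gamma$ alone: starting from $Z = E_{v_0}$ for an arbitrary vertex $v_0$, as long as $Z \cdot E_v > 0$ for some vertex $v$, one replaces $Z$ by $Z + E_v$; negative definiteness guarantees termination, and the final cycle is $Z_{\min}^{\Gamma}$. The following Python sketch illustrates this (the function name and the encoding of $I_\Gamma$ as a list of lists are ours):

```python
def fundamental_cycle(I):
    """Laufer's algorithm: given the negative definite intersection matrix I
    of a connected weighted graph, start from Z = E_{v0} and, while
    Z . E_v > 0 for some vertex v, replace Z by Z + E_v.
    Returns the coefficient vector of the fundamental cycle Z_min."""
    n = len(I)
    Z = [0] * n
    Z[0] = 1  # start from an arbitrary vertex, here v_0
    while True:
        # Z . E_v is the v-th entry of I applied to the coefficient vector of Z
        products = [sum(I[v][w] * Z[w] for w in range(n)) for v in range(n)]
        bad = [v for v in range(n) if products[v] > 0]
        if not bad:
            return Z
        Z[bad[0]] += 1

# A_3 chain of (-2)-curves
A3 = [[-2, 1, 0], [1, -2, 1], [0, 1, -2]]

# D_4 star: central (-2)-vertex 0 joined to three (-2)-legs 1, 2, 3
D4 = [[-2, 1, 1, 1], [1, -2, 0, 0], [1, 0, -2, 0], [1, 0, 0, -2]]
```

On the $A_3$ chain the algorithm returns $Z_{\min} = E_1+E_2+E_3$, and on the $D_4$ star it returns $2E_0+E_1+E_2+E_3$, in accordance with the classical fundamental cycles of rational double points.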
Consider now a germ of analytic function $f \colon (X, 0) \to (\C, 0)$.
The \emph{total transform} of $f$ by $\pi$ is the divisor $(f) = (f)_{\Gamma} + f^*$ on $X_\pi$, where $f^*$ is the strict transform of $f$ and $(f)_{\Gamma} = \sum_{v \in V(\Gamma)} m_v(f) E_v$ is the divisor supported on $E$ such that $m_v(f)$ is the multiplicity of $f \circ \pi$ along $E_v$.
By \cite[Theorem 2.6]{Laufer1971}, we have
\begin{equation}\label{eq:IdentityTotalTransform}
(f) \cdot E_v=0 \quad \text{for all }v \in V(\Gamma).
\end{equation}
In particular, $(f)_\Gamma$ belongs to the Lipman cone $\cal E^+$ of $\Gamma$, and therefore the semi-group $\cal A_X^+ = \{ (f)_{\Gamma} \;|\; f \in \cal O_{X,0}\}$\label{def:A+} of $\cal G$ is contained in $\cal E^+$; it has a unique nonzero minimal element $Z_{\max}^{\Gamma}(X,0)$, which is called the {\it maximal ideal divisor} of $(X,0)$.
Observe that $Z_{\max}^{\Gamma}(X,0)$ coincides with the cycle $(h)_{\Gamma}$ of a generic linear form $h \colon (X,0) \to (\C,0)$, so that $l_v = - Z_{\max}^{\Gamma}(X,0) \cdot E_v$ for all $v$ in $V(\Gamma)$, and, by the definition of the fundamental cycle, $Z_{\min}^{\Gamma} \leq Z_{\max}^{\Gamma}(X,0)$.
In general the inclusion $\cal A_X^+ \subset \cal E^+$ is strict and thus we may have $Z_{\min}^{\Gamma} \neq Z_{\max}^{\Gamma}(X,0)$.
However, given a weighted graph $\Gamma$ with negative definite intersection matrix, for all $D \in \cal E^+$, there exists a normal complex surface singularity $(X,0)$ and a resolution $\pi \colon X_{\pi} \to X$ such that $\Gamma_{\pi}=\Gamma$ and $D \in \cal A_X^+$ (\cite{Pichon2001} or \cite{NemethiNeumannPichon2011}).
So in particular, there exists $(X,0)$ such that $Z_{\min}^{\Gamma} = Z_{\max}^{\Gamma}$. However, when $D$ is sufficiently big, it can be obtained as the cycle of an analytic function on any surface singularity realizing the weighted graph $\Gamma$, as shown in \cite[Theorem~4.1]{CaubelNemethiPopescu-Pampu2006}.
The following proposition is an adaptation of that result.
\begin{proposition}
\label{thm:CaubelNemethiPopescu-Pampu2006}
Let $\Gamma$ be a weighted graph and let $D\geq0$ be a nonzero effective divisor on $\Gamma$.
Then:
\begin{enumerate}\label{inequality_intersection_divisor}
\item Assume that for every vertex $v$ of $\Gamma$ we have
\begin{equation}\label{eq:Ineq1}
D \cdot E_v + \val_\Gamma(v) + 2g(v) \leq 0,
\end{equation}
where $\val_\Gamma(v)$ denotes the valency of $v$ in $\Gamma$.
Then, for every normal surface singularity $(X,0)$ and every good resolution $\pi \colon (X_{\pi},E) \to (X,0)$ of $(X,0)$ whose weighted dual graph is $\Gamma$, there exists a function $f \in \mathfrak M_{X,0}$ with an isolated singularity at $0$ such that $(f)=(f)_{\Gamma} + f^{\ast}$ is a normal crossing divisor on $X_{\pi}$ and $(f)_{\Gamma} = D$.
Moreover, the line bundle $\mathcal O_{X_\pi}(-D)$ has no basepoints (that is, for every point $\pa\in E$ there exists a global section $s\in H^{0}\big(X_{\pi},\mathcal{O}_{X_{\pi}}(-D)\big)$ such that $s(\pa) \neq 0$).
\item If moreover the stronger inequality
\begin{equation}\label{eq:Ineq2}
D \cdot E_v + \val_\Gamma(v) + 2g(v) +2 \leq 0
\end{equation}
holds for every vertex $v$ of $\Gamma$, then for every free point $\pa$ in $E$ (that is, $\pa$ is not a double point of $E$) we can find a function $f\in \mathfrak M_{X,0}$ with an isolated singularity at $0$ such that $(f)=(f)_{\Gamma} + f^{\ast}$ is a normal crossing divisor on $X_{\pi}$ and $(f)_{\Gamma} = D$ and such that its strict transform $f^*$ via $\pi$ intersects $E$ nontrivially at $\pa$.
\end{enumerate}
\end{proposition}
We remark that the reason why the second part requires a stronger inequality is because we want the inequality of the first part to hold also after blowing up a free point of $E$.
\begin{proof}
What is missing from \cite[Theorem~4.1]{CaubelNemethiPopescu-Pampu2006} with respect to the first part of our statement is the basepoint-freeness of $\mathcal{O}_{X_\pi}(-D)$.
To see this, we recall the following two facts which are obtained in the proof of \cite[Theorem~4.1]{CaubelNemethiPopescu-Pampu2006}: first, the natural map $H^0\big(\mathcal{O}_{X_{\pi}}(-D)\big) \to H^0\big(\mathcal{O}_E(-D)\big)$ is surjective \cite[Page 685, second to last paragraph]{CaubelNemethiPopescu-Pampu2006}; second, for every point $\pa \in E$, there exists a global section of $\mathcal{O}_E(-D)$ which is non-zero at $\pa$ \cite[Page 685, last paragraph]{CaubelNemethiPopescu-Pampu2006}.
Now fix a point $\pa \in E$ and consider the set
\[
H_{\pa}= \big\{ s \in H^{0}( E,\mathcal{O}_E(-D)) \,\big|\, s(\pa) =0\big\}
\]
which is the kernel of the linear map $H^{0}\big(E,\mathcal{O}_E(-D)\big) \to \mathbb{C}$ given by the evaluation of the sections at $\pa$.
By the second fact, it is a proper subspace of $H^{0}\big(E,\mathcal{O}_E(-D)\big)$ of codimension at least one. Finally, the function $f$ of the statement of the theorem is taken as a global section of $\mathcal{O}_{X_{\pi}}(-D)$ whose projection to $\mathcal{O}_E(-D)$ is generic \cite[Page 686, second paragraph]{CaubelNemethiPopescu-Pampu2006}.
We conclude that we can suppose that the strict transform of $f$ does not pass through $\pa$. This proves $(i)$.
In order to prove part $(ii)$, let $D $ be an effective cycle satisfying inequality \eqref{eq:Ineq2} for every vertex $v\in V(\Gamma)$.
Fix a free point $\pa \in E$ and consider the blowup $\sigma \colon ( X_{\pi'},E') \to (X_{\pi},E)$ with center $\pa$.
We denote by $\pi' = \pi \circ \sigma$, and by $E_w'$ the irreducible component of the exceptional divisor created by $\sigma$.
Consider the cycle $D' = \sigma^{\ast}(D) + E_w'$.
Note that $D' \cdot E'_w =-1 = - \val_{\Gamma_{\pi'}}(w) - 2g(w)$ and, if $v \neq w$ then $v$ is also a vertex of $\Gamma_{\pi}$, which implies that
\[
D' \cdot E_v' = D \cdot E_v + E'_w \cdot E_v' \leq D\cdot E_v + 1,
\]
and we conclude that $D'$ satisfies inequality \eqref{eq:Ineq1} for every vertex $v\in \Gamma_{\pi'}$.
It follows from part $(i)$ that there exists a function $f \in \mathfrak M_{X,0}$ with an isolated singularity at $0$ such that $(f)=(f)_{\Gamma_{\pi'}} + f^{\ast}_{\pi'}$ is a normal crossing divisor on $X_{\pi'}$, $(f)_{\Gamma_{\pi'}} = D'$, and $f^{\ast}_{\pi'} \cdot E'_w = \val_{\Gamma_{\pi'}}(w) + 2g(w) =1$.
We conclude that $(f)=(f)_{\Gamma_{\pi}} + f^{\ast}_{\pi}$ is such that $(f)_{\Gamma_{\pi}} = D$, and $(f)$ is a normal crossing divisor on $X_{\pi}$ outside a neighborhood of $\pa$.
At the point $\pa$, we know that the order of $f^{\ast}_{\pi}$ must be one, implying that it is smooth (since, after the blowup, $(f)_{\Gamma_{\pi'}}=D'= \sigma^{\ast}(D) + E_w'$).
Furthermore, $f^{\ast}_{\pi}$ must be transverse to the exceptional divisor, since its strict transform via $\sigma$ meets $E'_w$ transversely at a free point.
We therefore conclude that $(f)$ is a normal crossing divisor on $X_{\pi}$ and that $f^{\ast}_{\pi}$ intersects $E$ at the free point $\pa$.
This proves $(ii)$.
\end{proof}
\section{Bound on multiplicities}
\label{sec:bound_multiplicity}
In this section we prove Proposition~\ref{prop:main} using the construction of Proposition~\ref{thm:CaubelNemethiPopescu-Pampu2006} and some basic commutative algebra.
\medskip
As mentioned in the introduction, thanks to \cite{Neumann1981} the datum of a homeomorphism class of a real 3-manifold $M$ that can be realized as the link of a normal surface singularity is equivalent to the datum of a weighted connected graph $\Gamma$ with negative definite self-intersection matrix. Moreover, for a given $M$, there is a unique such graph $\Gamma$ that is minimal, in the sense that it has no vertex of genus 0 and valency at most two with self-intersection $-1$. Let us therefore fix such a minimal graph $\Gamma$,
let $(X,0)$ be a normal surface singularity, and let $\pi \colon (X_{\pi},E) \to (X,0)$ be the minimal good resolution of $(X,0)$, and assume that the weighted dual graph of $\pi$ is $\Gamma$.
Let $D$ be an integral effective divisor satisfying the inequality~\eqref{eq:Ineq1} of Proposition~\ref{thm:CaubelNemethiPopescu-Pampu2006} for every vertex $v$ of $\Gamma$, so that the line bundle $\mathcal{O}_{X_\pi}(-D)$ is basepoint-free.
Since $(X,0)$ is normal, reasoning as at the beginning of the proof of Proposition~\ref{thm:CaubelNemethiPopescu-Pampu2006}, this implies the existence of two functions $f,\, g \in \mathfrak M_{X,0}$, with an isolated singularity at $0$, whose total transforms by $\pi$ are given by
\[
(f) = (f)_{\Gamma} +f^{\ast} = D + f^{\ast}, \quad (g) = (g)_{\Gamma} +g^{\ast} = D + g^{\ast},
\]
where the strict transforms $f^{\ast}$ and $g^{\ast}$ are smooth disjoint curves.
Consider the map $\Psi= (f,g)\colon (X,0) \to (\mathbb{C}^2,0)$, which is a finite morphism.
Then the degree $\deg(\Psi)$ of $\Psi$ can be computed as the number of points in a general fiber of $\Psi$, that is the number $\dim\big(\mathcal{O}_{X,0} / (f,g)\big)$ of intersection points $(f= \epsilon) \cap (g=\delta)$ for sufficiently small $\epsilon$ and $\delta$ in a neighborhood of $0$ in $ \mathbb{C}$.
Since $f^{\ast} \cdot g^{\ast}=0$, we conclude that this intersection multiplicity is equal to $D \cdot g^{\ast}$.
By \cite[Theorem 2.6]{Laufer1971} we have $(g) \cdot D = \big((g)_{\Gamma} + g^{\ast}\big) \cdot D = 0$, and so $\deg(\Psi)= g^{\ast} \cdot D = -D^2$, which implies that $\dim\big(\mathcal{O}_{X,0} / (f,g)\big)= -D^2$.
It now follows from \cite[Theorem~14.10]{Matsumura1980} that $\dim\big(\mathcal{O}_{X,0} / (f,g)\big) \geq m\big((f,g),\mathcal{O}_{X,0}\big)$, and from \cite[Formula~14.4]{Matsumura1980} that $m\big((f,g),\mathcal{O}_{X,0}\big) \geq m(\mathfrak{M}_{X,0},\mathcal{O}_{X,0}) = m(X,0)$. We deduce that $m(X,0) \leq -D^2$, which concludes the proof of Proposition~\ref{prop:main}. \hfill\qed
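As a concrete illustration of the bound (the graph and divisor below are a toy example of ours, not taken from the text above): on the $A_3$ chain of $(-2)$-curves, with valencies $(1,2,1)$ and all genera zero, the divisor $D = 2E_1 + 3E_2 + 2E_3$ satisfies inequality \eqref{eq:Ineq1}, so every normal surface singularity realizing this graph has multiplicity at most $-D^2 = 10$. A Python sketch of the verification:

```python
def intersect(I, A, B):
    """Intersection number A . B of two divisors on the graph,
    given the intersection matrix I."""
    n = len(I)
    return sum(A[v] * I[v][w] * B[w] for v in range(n) for w in range(n))

def satisfies_ineq1(I, val, genus, D):
    """Check D . E_v + val(v) + 2 g(v) <= 0 for every vertex v,
    i.e. the hypothesis guaranteeing that O(-D) is basepoint-free."""
    n = len(I)
    unit = lambda v: [1 if w == v else 0 for w in range(n)]
    return all(intersect(I, D, unit(v)) + val[v] + 2 * genus[v] <= 0
               for v in range(n))

# A_3 chain of (-2)-curves: valencies (1, 2, 1), all genera 0
I = [[-2, 1, 0], [1, -2, 1], [0, 1, -2]]
val, genus = [1, 2, 1], [0, 0, 0]
D = [2, 3, 2]  # satisfies the inequality; the reduced cycle (1,1,1) does not
```

Here $-D^2 = 10$ bounds the multiplicity; it is of course a crude bound, since the cyclic quotient singularity with this resolution graph has multiplicity $2$.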
\section{Bounding the number of $\mathcal L$-vectors}
The results of the previous sections are sufficient to prove the following weaker version of Theorem~\ref{thm:main}.
\begin{proposition}
\label{prop:weaker_version_main_theorem}
Let $M$ be a real $3$-manifold.
There exist finitely many pairs $(\Gamma,L)$, where $\Gamma$ is a weighted graph and $L$ is a vector in $(\Z_{\geq0})^{V(\Gamma)}$, such that there exists a normal surface singularity $(X,0)$ satisfying the following conditions:
\begin{enumerate}
\item The link of $(X,0)$ is homeomorphic to $M$.
\item $(\Gamma,L) = (\Gamma_\pi, L_\pi)$, where $\pi \colon X_{\pi} \to X$ is the minimal good resolution of $(X,0)$ which factors through the blowup of the maximal ideal of $(X,0)$.
\end{enumerate}
\end{proposition}
\begin{proof}
As discussed at the beginning of Section~\ref{sec:bound_multiplicity}, the homeomorphism class of $M$ determines a minimal weighted graph $\Gamma_0$.
Let $(X,0)$ be a normal surface singularity whose minimal good resolution $\pi\colon X_\pi\to X$ has weighted dual graph $\Gamma_0$ and denote by $L=(l_v)_{v\in V(\Gamma_0)}$ the corresponding $\cal L$-vector and by $(m_v)_{v\in V(\Gamma_0)}$ the corresponding multiplicities. Denote now by $\pi'\colon X_{\pi'}\to X$ the minimal good resolution of $(X,0)$ that factors through the blowup of its maximal ideal.
We claim that there are finitely many possibilities for the weighted dual graph $\Gamma_{\pi'}$ of $\pi'$.
Indeed, the map $\pi'$ factors through $\pi$, and the resulting map $\alpha\colon X_{\pi'}\to X_\pi$ is a sequence of point blowups, each of which is centered at a basepoint of the family of generic hyperplane sections of $(X,0)$.
In particular, under each such blowup, the sum $\sum_v m_v l_v$ increases.
Since we have $\sum_{v\in V(\Gamma)} m_vl_v = - Z_{\max}^{\Gamma}(X,0)^2 \leq m(X,0)$ by \cite[Theorem 2.7]{Wagreich1970} (see also \cite[Theorem 2.18]{Nemethi1999}), and the latter is bounded by the integer $n_M$ from Proposition \ref{prop:main}, this implies that $\alpha$ consists of at most $n_M$ point blowups, which is sufficient to describe a finite list of graphs to which $\Gamma_{\pi'}$ belongs.
Moreover, since $m_v\geq1$ for all vertices $v$ of $\Gamma_{\pi'}$, we deduce that $l_v\leq n_M$ for all $v$, which proves that finitely many vectors in $\Z^{V(\Gamma_{\pi'})}$ can be realized as $\cal L$-vectors of a normal surface singularity.
\end{proof}
\begin{remark}\label{rem:bound_L}
While we have striven to make the proof above as simple as possible, sharper bounds on the number of realizable vectors $L$ can be found via a more careful approach.
For instance, not all vectors $L$ produced by the proof above can be obtained as the solution of a linear system of the form $L = -Z_{\max}^{\Gamma}(X,0) \cdot E$, since $I_\Gamma^{-1}\cdot L$ need not have integer coordinates in general.
One can obtain a shorter list of possibly realizable $\cal L$-vectors by considering the smallest possible integral divisor $D$ to which Proposition~\ref{thm:CaubelNemethiPopescu-Pampu2006} applies (this divisor can be found very easily using the dual basis of the Lipman cone with respect to the intersection matrix of $\Gamma$),
so that $Z_{\max}^{\Gamma}(X,0) \leq D$, which gives us a finite list of candidates for the maximal ideal divisor of any normal surface singularity realizing $\Gamma$, and therefore a much shorter list of possibilities for the vector $L$. We can then also reduce the list of possibilities for the graph $\Gamma_{\pi'}$ by only considering the possible maximal ideal divisors and $\cal L$-vectors above, which greatly reduces the number and combinatorics of the blowups in the morphism $\alpha$ appearing in our proof.
\end{remark}
\section{K\"ahler differentials and valuative invariants}
\label{sec:kahler}
In this section we introduce two valuative invariants associated with the sheaf of 2-forms on a normal surface singularity and prove some results that will allow us to use them to prove Theorem~\ref{thm:main} in Section~\ref{sec:proof_thm_A}.
\medskip
Given a normal surface singularity $(X,0)$, we consider the sheaf of \emph{K\"ahler 2-forms} $\Omega^2_X$ on $X$.
We refer the reader to \cite[$\S$16]{Eisenbud1995} for the general definition; in this work it is enough to work with the following local description of its pullback to a resolution, proven in $\S$20.2 of \emph{loc.\ cit.}: given a local embedding $i \colon (X,0) \hookrightarrow (\mathbb{C}^N,0)$
and a good resolution $\pi\colon(X_{\pi},E) \to (X,0)$, then we have $\pi^{\ast}\Omega^2_X = (i \circ \pi)^{\ast} \Omega_{\mathbb{C}^N}^2$, where $\Omega_{\mathbb{C}^N}^2$ denotes the sheaf of differential $2$-forms on $\mathbb{C}^N$.
Now, since $X_{\pi}$ is smooth, the sheaf of 2-forms $\Omega_{X_{\pi}}^2$ is locally free of rank one.
Consider the $\mathcal{O}_{X_{\pi}}$-subsheaf generated by the image of
\(
\pi^{\ast} \colon \Omega_{X}^2 \to \Omega_{X_{\pi}}^2.
\)
This sheaf is of the form $\mathcal{F}_0(\pi) \,\Omega_{X_{\pi}}^2$, where $\mathcal{F}_0(\pi)$ is an ideal sheaf called the \emph{$0$-Fitting ideal} associated with $\pi$ (see \cite[$\S$~20.2]{Eisenbud1995} or \cite[$\S$2.3]{BelottodaSilvaBierstoneGrandjeanMilman2017}).
We now recall the definition of a natural invariant associated with the sheaf $\Omega_X^2$, introduced in \cite[Definition~1.9]{FernexEinIshii2008} (see also \cite[Page~1259]{IshiiReguera2013} or \cite[\S~2.1]{FernexDocampo2014} for a point of view closer to the one we adopt here).
Given a vertex $v$ of $V(\Gamma_{\pi})$, the \emph{Mather discrepancy} $\hat k_v$ of $(X,0)$ along $v$ is defined as
\[
\hat{k}_v = \ord_v \big(\pi^{\ast}(\Omega_{X}^2)\big)= \ord_v\big(\mathcal{F}_0(\pi)\big).
\]
We also consider the \emph{residual ideal sheaf} $\mathcal{R}_0(\pi)$ of $\mathcal{F}_0(\pi)$ \cite[Definition~4.1]{BelottodaSilvaBierstoneGrandjeanMilman2017}, which is defined stalk-wise by setting
\begin{equation}\label{eq:Residual}
\mathcal{F}_0(\pi)_{\pa} = \mathcal{R}_0(\pi)_{\pa} \, \prod_{v \in V(\Gamma)\text{ s.t. } \pa \in E_v} {x_{v,\pa}}^{\hat{k}_v}
\end{equation}
for every closed point $\pa$ of $E$,
where $x_{v,\pa}$ is a reduced local equation for $E_v$ at $\pa$.
Note that the order of vanishing $\mbox{ord}_{\pa}(\mathcal{R}_0(\pi))$ of the ideal sheaf $\mathcal{R}_0(\pi)$ at a closed point $\pa$ of $E$ is zero except at at most finitely many points $\pa$; in particular its order along $E_v$ is zero for each $v \in V(\Gamma_{\pi})$.
Moreover, $\mathcal{R}_0(\pi)$ is trivial if and only if $\mathcal{F}_0(\pi)$ is principal, which is equivalent to the fact that $\pi$ factors through the Nash transform of $(X,0)$ (this result can be found in \cite[Theorem 2.5]{BelottodaSilvaBierstoneGrandjeanMilman2017}, but also seems to be implicit in other references, such as \cite{FernexEinIshii2008}).
In our context, it can also be seen from \cite[III, Theorem~1.2]{Spivakovsky1990} that $\pi$ factors through the Nash transform of $(X,0)$ if and only if the family of the polar curves of the generic plane projections has no basepoints on $X_\pi$.
The relation between the 0-Fitting ideal and generic polar curves can be seen explicitly as follows.
Let $(\Pi_{\cal D})_{\cal D \in \Omega}$ be the family of polar curves associated with the generic plane projections $\ell_{\cal D} \colon (X,0) \to (\C^2,0)$ of $(X,0)$ (with the notations of \cite[Section 2.2]{BelottodaSilvaFantiniPichon2019}, so that in particular it is an equisingular family in the sense of strong simultaneous resolution).
Consider a closed point $\pa$ of $E$.
For all $\cal D$ in $\Omega$, denote by $(\Pi_{\cal D, \pa}, 0)$ the union of the irreducible components of $(\Pi_{\cal D},0)$ whose strict transforms pass through $\pa$, and let $\Omega_{\pa}$ be the maximal Zariski open and dense subset of $\Omega$ such that the family $(\Pi_{\cal D, \pa})_{\cal D \in \Omega_{\pa}}$ is equisingular.
Note that the latter condition is equivalent to asking that the number of irreducible components of $\Pi_{\cal D, \pa}$ is constant for every $\cal D$ in $\Omega_{\pa}$, and that, by definition, $\Pi_{\cal D, \pa}$ is nonempty if and only if $\pa$ is a basepoint of the family of polar curves $(\Pi_{\cal D})_{\cal D \in \Omega}$.
Then the ideal $\mathcal{R}_0(\pi)_{\pa}$ defines exactly the family consisting of the strict transforms via $\pi$ of the curves $\Pi_{\cal D, \pa}$, for $\cal D$ in $\Omega_\pa$.
Indeed, suppose that $\pa$ is a free point of $E$ (the case when $\pa$ is a double point is completely analogous) belonging to the component $E_v$ and that $(x,y)$ are local coordinates for $X_\pi$ at $\pa$ such that $E_v$ is locally defined by the equation $x=0$.
If locally at $\pa$ we write $\phi_{\cal D}(x,y)=(\ell_{\cal D} \circ \pi)(x,y)=\big(z_1(x,y),z_2(x,y)\big)$, then $\cal F_0(\pi)_\pa$ is generated by the forms
\(
\phi_{\cal D}^* (dz_1 \wedge dz_2)_\pa,
\)
where $\cal D$ varies in $\Omega_\pa$, $\ell_{\cal D}\colon (X,0) \to (\C^2,0)$ is the associated plane projection, and $dz_1 \wedge dz_2$ is a standard volume form on $(\C^2,0)$.
Observe that we can write
\[
\phi_{\cal D}^* (dz_1 \wedge dz_2)_\pa = \mbox{Jac}\big(\phi_{\cal D}(x,y)\big)(dx \wedge dy) = x^{\hat k_v}f_{\cal D}(x,y)(dx \wedge dy)
\]
for some $f_{\cal D}$ in $\cal O_{X_\pi,\pa}$.
Since $\pi$ is an isomorphism outside of $E$, it follows that $f_{\cal D}(x,y)=0$ is the local equation at $\pa$ of the strict transform via $\pi$ of the critical locus of $\ell_{\cal D}$, which is by definition the polar curve of $\ell_{\cal D}$.
In other words, $f_{\cal D}(x,y)=0$ is the local equation of the curve $(\Pi_{\cal D, \pa}, 0)$ defined above.
We refer to \cite[Chapter~3, Section~1]{Spivakovsky1990} for further details.
Although the Mather discrepancies $\hat k_v$ depend on the analytic structure of $(X,0)$, we will show that the weighted dual graph $\Gamma_{\pi}$ of a good resolution of singularities $\pi\colon(X_{\pi},E) \to (X,0)$ is enough to bound their growth under blowups. We start with the following result.
\begin{theorem}
\label{thm:sheaf_2-forms}
Let $\Gamma$ be a weighted graph.
Then there exists a divisor $D$ on $\Gamma$ such that, for every normal surface singularity $(X,0)$ and every good resolution $\pi \colon (X_{\pi},E) \to (X,0)$ of $(X,0)$ whose weighted dual graph is $\Gamma$, the line bundle $\mathcal{O}_{X_{\pi}}(-D)$ is basepoint-free and there exists a subsheaf $\Omega \subset \Omega^2_{X}$ of the sheaf of K\"ahler differentials of $X$, such that the pullback $\pi^{\ast}(\Omega)$ generates a $\mathcal{O}_{X_{\pi}}$-subsheaf of $\Omega^2_{X_{\pi}}$ of the form $\mathcal{J}(\pi) \Omega^{2}_{X_{\pi}}$, where $\mathcal{J}(\pi)$ is a principal ideal sheaf equivalent to $\mathcal{O}_{X_{\pi}}(-D)$.
\end{theorem}
Observe that if we had only been interested in the existence of a basepoint-free subsheaf of $\Omega_{X_{\pi}}^2$ we could have obtained it from \cite[Theorem 3.1]{Laufer1983}.
However, for our applications, and namely to prove Lemma~\ref{lem:compatibility} below, it is important to require that this subsheaf is the pullback of a subsheaf of $\Omega_{X}^{2}$.
\begin{proof}
Let $D_1 = \sum_{v \in V(\Gamma)} \alpha_v E_v$ and $D_2= \sum_{v \in V(\Gamma)} \beta_v E_v$ be two integral effective divisors satisfying inequality \eqref{eq:Ineq2} for every vertex $v \in V(\Gamma)$, and which verify the following open condition in the Lipman cone:
\(
\alpha_v \beta_w - \beta_v \alpha_w \neq 0,
\) for all vertices $v$ and $w$ of $V(\Gamma)$ which are connected by an edge. We set $D = D_1 + D_2 - E$.
Note that by Proposition \ref{thm:CaubelNemethiPopescu-Pampu2006}(i), both line bundles $\mathcal{O}(-D_1)$ and $\mathcal{O}(-D_2)$ have no basepoints.
This fact together with the normality of $(X,0)$ implies the existence of two ideals $\mathcal{I}_1$ and $\mathcal{I}_2$ of $\mathfrak M_{X,0}$ whose pullbacks by $\pi$ are equivalent to $\mathcal{O}(-D_1)$ and $\mathcal{O}(-D_2)$ respectively.
By considering a local embedding $i \colon (X,0)\hookrightarrow (\mathbb{C}^N,0)$, we may consider without loss of generality that $\mathcal{I}_1$ and $\mathcal{I}_2$ are ideals of $\mathfrak M_{(\mathbb{C}^N,0)}$.
Consider the $\mathcal{O}_{(\mathbb{C}^N,0)}$-submodule of $2$-forms $\Omega \subset \Omega_{\mathbb{C}^N}^2$ generated by all $2$-forms of the form $df_1 \wedge df_2$ with $f_1$ in $\mathcal{I}_1$ and $f_2$ in $\mathcal{I}_2$. We claim that $i^{\ast}\Omega$ is the desired sheaf.
Indeed, since $X_{\pi}$ is a smooth surface, the sheaf $\Omega^2_{X_{\pi}}$ is a locally free $\mathcal{O}_{X_{\pi}}$-module of rank $1$, and the pullback $(\pi \circ i)^{\ast}\Omega$ generates a sheaf of the form $\mathcal{J} (\pi) \Omega^2_{X_{\pi}}$, where $\mathcal{J}(\pi)$ is an ideal sheaf.
We now verify that $\mathcal{J}(\pi)$ is a principal ideal equivalent to $\mathcal{O}(-D)$; note that it is enough to verify this statement at each point $\pa \in E$.
We divide the proof into two parts, depending on the nature of $\pa$, as follows.
Suppose first that $\pa$ is a free point of $E_v$.
There exists a local coordinate system $(x,y)$ centered at $\pa$ such that $E_v$ is locally equal to $(x=0)$. By Proposition \ref{thm:CaubelNemethiPopescu-Pampu2006}, there are two functions $f_1$ in $\mathcal{I}_1$ and $f_2$ in $\mathcal{I}_2$ such that, apart from a change of coordinates, $(f_1 \circ \pi) (x,y) = x^{\alpha_v} y$ and $(f_2\circ \pi)(x,y) = x^{\beta_v}$. It follows that
\[
\pi^{\ast} (df_1 \wedge df_2) = \beta_v x^{\alpha_v + \beta_v -1} dx \wedge dy,
\]
which implies that $\mathcal{J}(\pi)_{\pa} \supset (x^{\alpha_v + \beta_v -1})$; the other inclusion easily follows from the construction, so $\mathcal{J}(\pi)_{\pa} = (x^{\alpha_v + \beta_v -1})$.
Suppose now that $\pa \in E_v \cap E_w$ is a double point of $E$.
There exists a local coordinate system $(x,y)$ centered at $\pa$ such that $E_v= (x=0)$ and $E_w=(y=0)$. By Proposition \ref{thm:CaubelNemethiPopescu-Pampu2006}, there are two functions $f_1 \in \mathcal{I}_1$ and $f_2 \in \mathcal{I}_2$ such that $(f_1 \circ \pi) (x,y) = x^{\alpha_v} y^{\alpha_w}U_1(x,y)$ and $(f_2\circ \pi)(x,y) = x^{\beta_v}y^{\beta_w}U_2(x,y)$ where $U_1(0) \neq 0$ and $U_2(0)\neq 0$.
Since $\alpha_v \beta_w - \alpha_w\beta_v \neq 0$, up to a change of coordinates we may suppose that $U_1 \equiv U_2 \equiv 1$, so that
\[
\pi^{\ast} (df_1 \wedge df_2) = (\alpha_v \beta_w - \alpha_w\beta_v ) \, x^{\alpha_v + \beta_v -1} y^{\alpha_w + \beta_w -1} dx \wedge dy,
\]
which implies that $\mathcal{J}(\pi)_{\pa} \supset (x^{\alpha_v + \beta_v -1} y^{\alpha_w + \beta_w -1})$; the other inclusion follows easily from the construction, so that $\mathcal{J}(\pi)_{\pa} = (x^{\alpha_v + \beta_v -1} y^{\alpha_w + \beta_w -1})$.
This concludes the proof.
\end{proof}
Given a graph $\Gamma$, let us fix once and for all a divisor $D$ on $\Gamma$ provided by Theorem~\ref{thm:sheaf_2-forms}, and write
\[
D = \sum_{v\in V(\Gamma)} \nu_vE_v.
\]
Note that, if $(X,0)$ is a normal surface singularity, $\pi \colon (X_{\pi},E) \to (X,0)$ is a good resolution of $(X,0)$ whose weighted dual graph is $\Gamma$, and $\Omega$ is the subsheaf of $\Omega^{2}_X$ given by Theorem~\ref{thm:sheaf_2-forms}, (so that the pullback $\pi^{\ast}(\Omega)$ generates a sheaf of the form $\mathcal{J}(\pi) \, \Omega_{X_{\pi}}^2$, where $\mathcal{J}(\pi)$ is a principal ideal equivalent to the basepoint-free line bundle $\mathcal{O}(-D)$), we also have
\(
\nu_v = \ord_v(\pi^{\ast}(\Omega)) = \ord_v(\mathcal{J}(\pi)) .
\)
This leads to a natural extension of the definition of these invariants to a vertex $v$ of the dual graph $\Gamma_{\pi'}$ of any good resolution $\pi'$ of $(X,0)$ that factors through $\pi$, by setting
\(
\nu_v = \ord_v(\pi'^{\ast}(\Omega)) = \ord_v(\mathcal{J}(\pi')) .
\)
Note that this definition only depends on the combinatorics of the sequence of blowups required to pass from $\pi$ to $\pi'$.
This is made clear by the following lemma, and also explains how the Mather discrepancies behave after blowups.
\begin{lemma}\label{lem:bound}
Let $\Gamma$ and $D$ be as above.
Let $(X,0)$ be a normal surface singularity, let $\pi$ be a good resolution of $(X,0)$ whose weighted dual graph is $\Gamma$, let $\pi'$ be another good resolution that factors through $\pi$, and let $\pi'' = \pi' \circ \sigma$ be the composition of $\pi'$ with a blowup $\sigma \colon (X_{\pi''},E'') \to (X_{\pi'},E')$ centered at a closed point $\pa$ of $E'$.
Write $\sigma^{-1}(\pa)=E_w$.
Then we have:
\begin{enumerate}
\item if $\pa $ is a smooth point of $E'$ contained in the component $E_v$ we have
\[
\nu_w = \nu_v+1 \quad\quad \text{and} \quad\quad \hat k_w = \hat k_v + 1 + \ord_{\pa}\big(\mathcal{R}_0(\pi')\big);
\]
\item if $\pa$ is a double point of $E'$ lying at the intersection of two components $E_{v}$ and $E_{v'}$ we have
\[
\nu_w = \nu_{v} + \nu_{v'} +1 \quad\quad \text{and} \quad\quad \hat k_w = \hat k_v + \hat k_{v'} +1 +
\ord_{\pa}\big(\mathcal{R}_0(\pi')\big).
\]
\end{enumerate}
\end{lemma}
\begin{proof}
We divide the proof into two cases depending on the nature of $\pa \in E'$ and perform two computations similar to those of \cite[$\S$2.4]{BelottodaSilvaBierstoneGrandjeanMilman2017}.
Suppose first that $\pa$ is a free point of $E'$.
Then there exists a local coordinate system $(x,y)$ centered at $\pa$ such that $E_v$ is defined locally by $(x=0)$.
Consider a 2-form germ $\omega$ at $\pa$, and write \(\omega = x^{\alpha} f(x,y) dx \wedge dy\).
Note that the order of the pullback form $\sigma^{\ast}(\omega)$ over $E_w$ may be computed at any general point of $E_w$.
We consider the origin of the $x$-chart $(x,y) = (\tilde{x},\tilde{x}\tilde{y})$, where we obtain
\[
\begin{aligned}
\sigma^{\ast}(\omega) &= \tilde{x}^{\alpha} f(\tilde{x},\tilde{x}\tilde{y}) \, d\tilde{x} \wedge d(\tilde{x}\tilde{y})\\
& = \tilde{x}^{\alpha+1 + \ord_{\pa}(f)} \tilde{f}(\tilde{x},\tilde{y})\, d\tilde{x} \wedge d\tilde{y},
\end{aligned}
\]
where $\tilde{f}(\tilde{x},\tilde{y})$ is such that $\tilde{f}(0,\tilde{y}) \not\equiv 0$ and $E_w = (\tilde{x}=0)$. It follows that
\[
\ord_w\big(\sigma^{\ast}(\omega)\big) = \alpha + 1 + \ord_{\pa}(f).
\]
Now, note that $\mathcal{J}(\pi')$ is a principal ideal, so that $\mathcal{J}(\pi')_{\pa} = (x^{\nu_v})$.
In other words, the differential form \(\omega = x^{\nu_v} dx \wedge dy\) belongs to the subsheaf generated by the pullback $\pi'^{\ast}(\Omega)$, and we easily conclude that \(\nu_w = \nu_v+1.\)
Finally, note that $\mathcal{F}_0(\pi')_{\pa} = x^{\hat{k}_v}(f_1,\ldots,f_k)$, where $f_1,\ldots,f_k$ are generators of $\mathcal{R}_{0}(\pi')_{\pa}$.
In particular, the differential forms \(\omega_i = x^{\hat{k}_v} f_i(x,y) dx \wedge dy\) belong to the subsheaf generated by the pullback $\pi'^{\ast}(\Omega_{X}^2)$, and we easily conclude that \(\hat k_w = \hat{k}_v + 1 + \ord_{\pa}(\mathcal{R}_0(\pi')).\)
Suppose now that $\pa \in E_v \cap E_{v'}$ is a double point of $E'$.
Then there exists a local coordinate system $(x,y)$ centered at $\pa$ such that $E_v =(x=0)$ and $E_{v'} = (y=0)$.
Consider a 2-form germ $\omega$ at $\pa$, and write \(\omega = x^{\alpha} y^{\beta} f(x,y) dx \wedge dy\).
Note that the order of the pullback form $\sigma^{\ast}(\omega)$ over $E_w$ may be computed at any generic point of $E_w$.
We consider the origin of the $x$-chart $(x,y) = (\tilde{x},\tilde{x}\tilde{y})$, where we get
\[
\begin{aligned}
\sigma^{\ast}(\omega) &= \tilde{x}^{\alpha} (\tilde{x}\tilde{y})^{\beta} f(\tilde{x},\tilde{x}\tilde{y}) \, d\tilde{x} \wedge d(\tilde{x}\tilde{y})\\
& = \tilde{x}^{\alpha+\beta+1 + \ord_{\pa}(f)} \tilde{y}^{\beta}\tilde{f}(\tilde{x},\tilde{y})\, d\tilde{x} \wedge d\tilde{y}
\end{aligned}
\]
where $\tilde{f}(\tilde{x},\tilde{y})$ is such that $\tilde{f}(0,\tilde{y}) \not\equiv 0$ and $E_w = (\tilde{x}=0)$.
It follows that
\[
\ord_w\big(\sigma^{\ast}(\omega)\big) = \alpha + \beta + 1 + \ord_{\pa}(f).
\]
Now, note that $\mathcal{J}(\pi')$ is a principal ideal, so that $\mathcal{J}(\pi')_{\pa} = (x^{\nu_v} y^{\nu_{v'}})$.
In other words, the differential form \(\omega = x^{\nu_v} y^{\nu_{v'}} dx \wedge dy\) belongs to the subsheaf generated by the pullback $\pi'^{\ast}(\Omega)$, and we easily conclude that \(\nu_w = \nu_v+ \nu_{v'}+1.\)
Finally, note that $\mathcal{F}_0(\pi')_{\pa} = x^{\hat{k}_v}y^{\hat{k}_{v'}}(f_1,\ldots,f_k)$, where $f_1,\ldots,f_k$ are generators of $\mathcal{R}_{0}(\pi')_{\pa}$.
In particular, the differential forms \(\omega_i = x^{\hat{k}_v} y^{\hat{k}_{v'}} f_i(x,y) dx \wedge dy\) belong to the subsheaf generated by the pullback $\pi'^{\ast}(\Omega_{X}^2)$, and thus \(\hat k_w = \hat k_v + \hat k_{v'} + 1 + \ord_{\pa}(\mathcal{R}_0(\pi')).\)
\end{proof}
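To fix ideas, here is a small numerical illustration of Lemma~\ref{lem:bound}, with invented values not attached to any particular singularity: if $\pa \in E_v \cap E_{v'}$ is a double point with $\nu_v = 2$ and $\nu_{v'} = 3$, then the exceptional curve $E_w$ of the blowup centered at $\pa$ satisfies
\[
\nu_w = \nu_v + \nu_{v'} + 1 = 6,
\]
and a further blowup at a free point of $E_w$ creates a component with $\nu$-invariant $\nu_w + 1 = 7$. In particular, the invariants $\nu$ of the newly created components grow under blowups, by an amount that is determined purely combinatorially.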
Observe that, while the divisor $D$ that we have chosen, and therefore the integers $\nu_v$, only depend on the graph $\Gamma$ and not on the choice of $(X,0)$ and $\pi$, the sheaf $\mathcal F_0(\pi)$, and therefore the integers $\hat k_v$, depend on the analytic structure of $(X,0)$.
However, as a consequence of Theorem~\ref{thm:sheaf_2-forms} we deduce that $\hat k_v$ is bounded from above by the topological invariant $\nu_v$:
\begin{lemma}\label{lem:compatibility}
Let $\pi' \colon (X_{\pi'},E') \to (X,0)$ be a good resolution of $(X,0)$ which factors through $\pi \colon (X_{\pi},E) \to (X,0)$. Then for every vertex $v$ of $\Gamma_{\pi'}$ we have
\[
\hat{k}_v \leq \nu_v.
\]
Moreover, if this inequality is an equality for some vertex $v$ of $\Gamma_{\pi'}$, then the family of the polar curves of the generic plane projections of $(X,0)$ has no basepoint at any free point of $E_{v}$; if the inequality is an equality for every vertex $v$, then this family has no basepoint at all.
\end{lemma}
\begin{proof}
Let us denote by $\mathcal{J}(\pi') \, \Omega_{X_{\pi'}}^2$ and $\mathcal{F}_0(\pi')\, \Omega_{X_{\pi'}}^2$ the subsheaves generated by the pullbacks of $\Omega$ and $\Omega_{X}^2$ respectively. Since $\Omega \subset \Omega_X^2$, we conclude that $\mathcal{J}(\pi') \subset \mathcal{F}_0(\pi')$, which implies the desired inequality.
Now, suppose that the inequality is an equality over a vertex $v\in \Gamma_{\pi'}$.
Since $\pi'$ factors through $\pi$, and $\mathcal{J}(\pi)$ is a principal ideal, we conclude that $\mathcal{J}(\pi')$ is also a principal ideal (indeed, $\mathcal{J}(\pi')$ is the pullback of $\mathcal{J}(\pi)$ multiplied by the Jacobian determinant of a sequence of point blowups, see Lemma~\ref{lem:bound}).
It therefore follows that $\mathcal{J}(\pi')_{\pa} = \mathcal{F}_0(\pi')_{\pa}$ for every free point $\pa \in E_v$. Moreover, if the inequality is an equality over two vertices $v, \, v'\in \Gamma_{\pi'}$ that are connected by an edge, then $\mathcal{J}(\pi')_{\pa} = \mathcal{F}_0(\pi')_{\pa}$ over any point $\pa \in E_v \cap E_{v'}$.
We conclude by the results \cite[III, Theorem 1.2]{Spivakovsky1990} and \cite[Theorem 2.5]{BelottodaSilvaBierstoneGrandjeanMilman2017} already discussed above.
\end{proof}
\section{Proof of Theorem~\ref{thm:main}}
\label{sec:proof_thm_A}
The goal of this section is to complete the proof of our main result, Theorem~\ref{thm:main}.
\medskip
As we can rely on Proposition~\ref{prop:weaker_version_main_theorem}, we now turn our attention to the polar invariants, and we can assume without loss of generality that $\pi\colon X_\pi \to X$ is a good resolution of the normal surface singularity $(X,0)$ which factors through the blowup of its maximal ideal, and that the $\cal{L}$-vector $L=\{l_v\}_{v\in V(\Gamma_\pi)}$ is given. By the L\^e--Greuel--Teissier formula \cite[Theorem~5.1.1]{LeTeissier1981} (see also \cite[Proposition~5.1]{BelottodaSilvaFantiniPichon2019}), the multiplicity $m(\Pi,0)$ of the polar curve $\Pi$ of a generic projection $\ell\colon (X, 0) \to (\C^2, 0)$ is equal to $m(X,0)-\chi(F_t)$, where $F_t$ denotes the Milnor--L\^e fiber of a generic linear form on $(X,0)$.
Now, for each vertex $v$ of $V(\Gamma_{\pi})$, let $N(E_v)$ be a small tubular neighborhood of the corresponding component $E_v$ of $\pi^{-1}(0)$ obtained as the total space of a normal disc bundle on $E_v$, and set
\[
\cal{N}(E_v) = \overline{N(E_v) \setminus \bigcup_{E_w\neq E_v} N(E_w)}.
\]
Then by additivity of the Euler characteristic, we can compute $\chi(F_t)$ as
\begin{align*}
\chi(F_t) & = \sum_{v\in V(\Gamma_{\pi})}\chi\big(\cal N(E_v) \cap F_t\big) = \sum_{v\in V(\Gamma_{\pi})}m_v\big(\chi\big(\cal N(E_v)\cap E_v\big)-l_v\big) \\
& =\sum_{v\in V(\Gamma_{\pi})}m_v\big(2-2g(E_v)-\val_{\Gamma_{\pi}}(v)-l_v\big),
\end{align*}
where the second equality makes use of the fact that $\pi$ factors through the blowup of the maximal ideal of $(X,0)$, and therefore the strict transform via $\pi$ of a generic hyperplane section of $(X,0)$ consists of $l_v$ curvettes on each component $E_v$.
This shows that $\chi(F_t)$ only depends on the weighted graph $\Gamma_{\pi}$ and on the $\cal L$-vector $L$, so that we obtain an explicit bound for $m(\Pi,0)$ as well.
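As a sanity check of this computation, consider the ordinary double point $(X,0)=\{z^2=xy\}\subset(\C^3,0)$: its minimal resolution $\pi$, which coincides with the blowup of the maximal ideal, has a single exceptional component $E_v$, rational with $E_v^2=-2$, and here $m_v=1$ and $l_v=2$. The formula above gives
\[
\chi(F_t)=m_v\big(2-2g(E_v)-\val_{\Gamma_{\pi}}(v)-l_v\big)=1\cdot(2-0-0-2)=0,
\]
consistently with the fact that the Milnor--L\^e fiber $\{xy=t^2\}$ of the generic linear form $z$ is an annulus, and therefore $m(\Pi,0)=m(X,0)-\chi(F_t)=2$, in agreement with the multiplicity of the polar curve $\{z=0,\,xy=0\}$ of the projection $(x,y,z)\mapsto(x,y)$.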
Observe that what we have proven so far suffices to deduce Corollary~\ref{cor:main}, since $m(\Pi,0)$ is equal to the sum $\sum_{v\in V(\Gamma_{\pi})}m_vp_v$, and so the value $p_v$ for any vertex $v$ of $\Gamma_{\pi}$ is bounded as well.
What is considerably harder to show, and requires the invariants introduced in Section~\ref{sec:kahler}, is the fact that the topology of $(X,0)$ determines a finite family of dual graphs for the minimal resolution factoring through the Nash transform of $(X,0)$.
In order to do this, we introduce an auxiliary numerical invariant based on the following local version of the invariants $p_v$.
Following the notation of the previous section, given a closed point $\pa$ of $E$ pick $\cal D$ in $\Omega_\pa$, consider the associated curve germ $(\Pi_{\cal D, \pa}, 0)$, and set $p_v(\pa) = \Pi^*_{\cal D, \pa} \cdot E_v$, where $^*$ denotes the strict transform under $\pi$.
Then, since $\Pi_{\cal D, \pa}$ is nonempty if and only if $\pa$ is a basepoint of the family of polar curves $(\Pi_{\cal D})_{\cal D \in \Omega}$, we deduce that $p_v(\pa) \neq 0$ if and only if $\pa$ is a basepoint which belongs to $E_v$.
\begin{definition}
Given a point $\pa \in E$, consider the quadruple of integers
\[
\mathrm{Aux}(\pa)=\big(m(\Pi_{\cal D, \pa}), \epsilon_1(\pa),\epsilon_2(\pa),\beta(\pa)\big)
\]
whose last three entries are defined as
\begin{align*}
\epsilon_1(\pa) = \sum_{v}p_v(\pa),
\quad \quad
\epsilon_2(\pa) = \max_v\big\{p_v(\pa)\big\},
\quad \quad
\beta(\pa)= \max_v\big\{\nu_{v} - \hat k_v\big\},
\end{align*}
where the sum and the two maxima run over the set of vertices $v$ of $\Gamma_\pi$ such that $\pa$ belongs to $E_v$.
\end{definition}
In the rest of the section we consider the invariants $\Aux(\pa)$ as elements of $\Z^4$ equipped with the lexicographic order.
Note that all the entries of $\Aux(\pa)$ are nonnegative integers: this is immediate for the first three, and a direct consequence of Lemma~\ref{lem:compatibility} for $\beta(\pa)$.
Moreover, observe that we have
\begin{equation}\label{eq:top_bound_for_Aux}
\Aux(\pa) \leq \big(m(\Pi,0), m(\Pi,0), m(\Pi,0), \max \{\nu_v\,|\,v\in V(\Gamma)\} \big),
\end{equation}
that is, the invariant $\Aux(\pa)$ is bounded from above by a quadruple which does not depend on the choice of $(X,0)$ and $\pi$, but only on the topology of $(X,0)$.
We are now ready to see that, as a consequence of the computations of Lemma~\ref{lem:bound}, the auxiliary invariant always drops after a blowup.
\begin{lemma}
\label{lem:induction}
Let $\pi\colon (X_{\pi},E) \to (X,0)$ be a good resolution of $(X,0)$ that factors through the blowup of its maximal ideal, let $\pa$ be a closed point of $E$ such that $m(\Pi_{\cal D, \pa}) \neq 0$, and let $\pi' = \pi \circ \sigma$ be the composition of $\pi$ with the blowup $\sigma \colon (X_{\pi'},E') \to (X_{\pi},E)$ with center $\pa$.
Then, for every closed point $\pb$ of $\sigma^{-1}(\pa)$ we have
\[
\mathrm{Aux}(\pb) < \mathrm{Aux}(\pa).
\]
\end{lemma}
\begin{proof}
Set $E_w=\sigma^{-1}(\pa)$.
Since $\pi$ factors through the blowup of the maximal ideal of $(X,0)$, the multiplicity $m(\Pi_{\cal D, \pa},0)$ of $\Pi_{\cal D, \pa}$ at $0$ can be computed as the sum
\(
m(\Pi_{\cal D, \pa},0) = \sum_{v \in V(\Gamma)} m_v p_v(\pa).
\)
If $\pb_1, \ldots,\pb_r$ are the basepoints of $(\Pi_{\cal D})_{\cal D \in \Omega}$ on $E_w$, then for all $i = 1,\ldots, r,$ we have $\Omega_{\pb_i} \subset \Omega_{\pa}$ and thus $(\Pi_{\cal D, \pb_i}, 0) \subset (\Pi_{\cal D, \pa}, 0)$.
Moreover, for all $\cal D \in \bigcap_{i=1}^r \Omega_{\pb_i}$ and all $i, j$ with $i \neq j$, the curve germs $(\Pi_{\cal D, \pb_i}, 0)$ and $(\Pi_{\cal D, \pb_j}, 0)$ have no irreducible components in common, which implies that
\(
m(\Pi_{\cal D, \pa},0) \geq \sum_{i=1}^r m(\Pi_{\cal D, \pb_i},0).
\)
It follows that if $r>1$, then $m(\Pi_{\cal D, \pb_i}) < m(\Pi_{\cal D, \pa})$ for every $i=1,\ldots,r$. We may therefore suppose that there is a unique closed point $\pb$ of $E_w$ which is a basepoint for the family of polar curves, that is, that $r=1$.
Moreover, if $m(\Pi_{\cal D, \pb}) < m(\Pi_{\cal D, \pa})$ then there is nothing to prove, and so we may further suppose that $m(\Pi_{\cal D, \pb}) = m(\Pi_{\cal D, \pa})$. We now divide the proof into four parts, depending on the nature of $\pa$ and $\pb$.
First, suppose that $\pa$ is a double point belonging to $E_{v_1} \cap E_{v_2}$ and that $\pb$ is a free point of $E_w$.
Then we have
\[
m_{v_1}p_{v_1}(\pa) + m_{v_2}p_{v_2}(\pa) = m_{w} p_{w}(\pb) = (m_{v_1} + m_{v_2})p_{w}(\pb).
\]
It easily follows that $\epsilon_1(\pb) = p_{w}(\pb) < p_{v_1}(\pa)+p_{v_2}(\pa) = \epsilon_1(\pa)$.
Second, suppose that $\pa$ is a double point belonging to $E_{v_1} \cap E_{v_2}$ and that $\pb$ is a double point; without loss of generality, say that $\pb$ belongs to $E_{v_1} \cap E_w$, and note that
\[
m_{v_1}p_{v_1}(\pa) + m_{v_2}p_{v_2}(\pa) = m_{v_1}p_{v_1}(\pb) + (m_{v_1}+m_{v_2}) p_{w}(\pb),
\]
so that we have $\epsilon_1(\pb) <\epsilon_1(\pa)$.
Third, suppose that $\pa$ is a free point of $E_{v}$ and that $\pb$ is a double point, say that $\pb$ belongs to $E_v \cap E_w$. Then we have
\[
m_v p_{v}(\pa) = m_v p_{v}(\pb) + m_w p_{w}(\pb) = m_v \big(p_{v}(\pb) + p_{w}(\pb)\big),
\]
and therefore $\epsilon_1(\pa) = \epsilon_1(\pb)$ and $\epsilon_2(\pb) < \epsilon_2(\pa)$.
Finally, suppose that $\pa$ is a free point of $E_{v}$ and that $\pb$ is a free point of $E_w$, so that we have
\[
m_vp_{v}(\pa) = m_w p_{w}(\pb) = m_v p_{w}(\pb),
\]
which implies that $\epsilon_1(\pa)=\epsilon_1(\pb)$ and $\epsilon_2(\pa)=\epsilon_2(\pb)$. Now, recall that $\beta(\pa) >0$ by the hypothesis that $m(\Pi_{\cal D, \pa})>0$ and by Lemma \ref{lem:compatibility}. Since $\pa$ is a basepoint for the family of generic polar curves, we conclude that it is contained in the zero locus of the residual ideal sheaf $\mathcal{R}_0(\pi)$, see \eqref{eq:Residual}, so that $\ord_{\pa}(\mathcal{R}_0(\pi)) >0$. It now follows from Lemma~\ref{lem:bound} that
\(
\beta(\pb) < \beta(\pa),
\)
finishing the proof.
\end{proof}
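The combinatorial mechanism behind the termination argument that follows is simply that a strictly decreasing sequence in $\Z_{\geq0}^4$, ordered lexicographically, must be finite. The following Python sketch illustrates this with an invented update rule (a toy dynamic, not the actual geometric one): later coordinates may grow again after an earlier one drops, and yet the descent stops.

```python
# Toy illustration (invented dynamics, unrelated to the actual geometry):
# strictly decreasing sequences in Z_{>=0}^4 for the lexicographic order
# are finite, which is why repeatedly blowing up basepoints must stop.

def descend(aux, step):
    """Iterate `step` while it strictly decreases `aux` lexicographically."""
    count = 0
    while True:
        nxt = step(aux)
        if not nxt < aux:  # Python compares tuples lexicographically
            return aux, count
        aux, count = nxt, count + 1

def toy_step(aux):
    # Decrease the last nonzero coordinate; when an earlier coordinate
    # drops, later ones are allowed to grow again, mimicking the four
    # cases of the proof.
    m, e1, e2, b = aux
    if b > 0:
        return (m, e1, e2, b - 1)
    if e2 > 0:
        return (m, e1, e2 - 1, 3)
    if e1 > 0:
        return (m, e1 - 1, 3, 3)
    if m > 0:
        return (m - 1, 3, 3, 3)
    return aux

final, steps = descend((2, 1, 1, 3), toy_step)
print(final)  # (0, 0, 0, 0)
```

Here the update rule `toy_step` is purely illustrative; in the proof of the lemma, the role of the four coordinates is played by $m(\Pi_{\cal D,\pa})$, $\epsilon_1(\pa)$, $\epsilon_2(\pa)$ and $\beta(\pa)$.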
We have now collected all the ingredients we need to show that the number of blowups needed from any given resolution of $(X,0)$ to achieve factorization through its Nash transform admits an upper bound that only depends on the topology of $(X,0)$.
To see this, let $\Gamma$ be any weighted graph which can be realized as the dual graph of some good resolution of $(X,0)$ factoring through the blowup of its maximal ideal.
Then the auxiliary invariants of the closed points of the exceptional divisor of such a resolution are always bounded by the 4-tuple
\[
\mathrm{Aux}(\Gamma) = \max_{X',\pi',\pa} \{\Aux(\pa)\}
\]
where the maximum is taken over the set of triples $\big((X',0),\pi',\pa\big)$, where $(X',0)$ is a normal surface singularity, $\pi' \colon X'_{\pi'}\to X'$ is a good resolution of $(X',0)$ realizing the weighted graph $\Gamma$, and $\pa$ is a closed point of the exceptional divisor $(\pi')^{-1}(0)$ of $\pi'$.
Observe that, as noted in \eqref{eq:top_bound_for_Aux}, the invariant $\mathrm{Aux}(\Gamma)$ is bounded from above by an element of $\Z^4$ which only depends on the topology of $(X,0)$.
It follows immediately from Lemma~\ref{lem:induction} that if $\pi \colon X_\pi\to X$ is a good resolution of $(X,0)$ which does not factor through its Nash transform and $\pi'\colon X_{\pi'}\to X$ is the good resolution of $(X,0)$ obtained by blowing up once every basepoint of the family of generic polar curves of $(X,0)$, then we have
\[
\max_{\pb\in(\pi')^{-1}(0)}\{\mathrm{Aux}(\pb)\}
<
\max_{\pa\in\pi^{-1}(0)}\{\mathrm{Aux}(\pa)\} \leq \mathrm{Aux}(\Gamma).
\]
The result we are after follows now immediately by induction, since the weighted dual graph $\Gamma_{\pi'}$ belongs to a finite family of dual graphs which only depends on the weighted dual graph $\Gamma_\pi$ and on $\mathrm{Aux}(\Gamma)$.
Indeed, the family of generic polar curves of $(X,0)$ can have at most $m(\Pi,0)$ basepoints, and we have already proven that $m(\Pi,0)$ is bounded by the topology of $(X,0)$, so that the number of blowups, and hence the combinatorics of $\Gamma_{\pi'}$, is itself bounded by the topology of $(X,0)$.
This concludes the proof of Theorem~\ref{thm:main}.\hfill\qed
\section{Polar exploration}
We have discussed in Remark~\ref{rem:bound_L} how to give sharper bounds on the number of realizable $\cal L$-vectors on a given resolution graph $\Gamma$.
On the other hand, the bound on the number of $\cal P$-vectors in Theorem~\ref{thm:main}, being solely based on the polar multiplicities given by the L\^e--Greuel--Teissier formula, gives no information on the position of polar curves relative to the hyperplane sections.
We now discuss restrictions on these relative positions, thus providing a sharper bound on the number of realizable $\cal P$-vectors and a better understanding of the polar geometry of singularities realizing $\Gamma$.
For this, we shift our focus to the following situation, which we refer to as the problem of \emph{polar exploration}: given a resolution graph $\Gamma$ and an $\cal L$-vector $L$ on $\Gamma$, what geometric conditions on a $\cal P$-vector $P$ must hold for the triple $(\Gamma,L,P)$ to be realizable?
In this section we describe two distinct tools that can be very effective in addressing this question.
\subsection{Inner rates and the Laplacian formula}
Assume that $(X,0)$ is a normal surface germ realizing the pair $(\Gamma,L)$.
The first tool we make use of is a result on the structure of this germ with respect to its \emph{inner metric} $d_{\inn}$, which is defined up to a bi-Lipschitz homeomorphism by embedding $(X,0)$ in a smooth germ $(\C^n,0)$ and considering the arc-length on $X$ induced by the usual Hermitian metric of $\C^n$.
Denote by $S_{\epsilon}$ the sphere in $\C^n$ having center $0$ and radius $\epsilon>0$.
Given two distinct curve germs $(\gamma,0)$ and $(\gamma',0)$ on $(X,0)\subset(\C^n,0)$, the \emph{inner contact} between $\gamma$ and $\gamma'$ is the rational number $q_{\inn}=q_{\inn}(\gamma, \gamma')$ defined by
\[
d_{\inn} \big(\gamma \cap S_{\epsilon}, \gamma' \cap S_{\epsilon}\big) = \Theta(\epsilon^{q_{\inn}}),
\]
where given two function germs $f,g\colon \big([0,\infty),0\big)\to \big([0,\infty),0\big)$ we write $f(t) = \Theta \big(g(t)\big)$ if there exist real numbers $\eta>0$ and $K >0$ such that ${K^{-1}}g(t) \leq f(t) \leq K g(t)$ for all $t\geq0$ satisfying $f(t)\leq \eta$.
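For instance, taking $(X,0)=(\C^2,0)$ with its standard Hermitian metric, the curves $\gamma=\{y=0\}$ and $\gamma'=\{y=x^2\}$ satisfy $d_{\inn}\big(\gamma \cap S_{\epsilon}, \gamma' \cap S_{\epsilon}\big) = \Theta(\epsilon^2)$, since the distance between the points $(\epsilon,0)$ and $(\epsilon,\epsilon^2)$ is comparable to $\epsilon^2$, so that $q_{\inn}(\gamma,\gamma')=2$; by contrast, any two smooth curves meeting transversally have inner contact $1$.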
Let $\pi\colon X_\pi \to X$ be a good resolution of $(X,0)$ and let $E_v$ be an irreducible component of $\pi^{-1}(0)$.
Then the \emph{inner rate} $q_v$ of $E_v$ is defined as the inner contact $q_{\inn}(\gamma,\gamma')$, where $\gamma$ and $\gamma'$ are two curve germs on $(X,0)$ that pull back via $\pi$ to two curvettes at distinct points of $E_v$.
This definition only depends on the exceptional component $E_v$ and not on the choice of a good resolution on which the component appears (see \cite[Lemma~3.2]{BelottodaSilvaFantiniPichon2019}).
We now recall a deep result, the so-called \emph{Laplacian formula} for the inner rate function from \cite{BelottodaSilvaFantiniPichon2019}.
In order to state it we will introduce two additional vectors indexed by the vertices of the dual graph $\Gamma_\pi$ of a good resolution $\pi\colon X_\pi\to X$ of $(X,0)$.
For every vertex $v$ of $\Gamma_\pi$, set $k_v=\val_{\Gamma_\pi}(v)+2g(v)-2$ and $a_v=m_vq_v$, and consider the vectors $K_\pi=(k_v)_{v\in V(\Gamma_\pi)}$ and $A_\pi=(a_v)_{v\in V(\Gamma_\pi)}$. Let $L_\pi$ and $P_\pi$ be respectively the $\cal L$- and the $\cal P$-vector of $(X,0)$ as before.
Then the following equality holds:
\begin{equation}\label{equation:laplacian_formula_effective}
A_\pi = I_{\Gamma}^{-1}\cdot(K_\pi + L_\pi - P_\pi) \,.
\end{equation}
This equality is an effective version (see \cite[Proposition~5.3]{BelottodaSilvaFantiniPichon2019}) of the main result of \emph{loc.\ cit.}
Observe that in our situation $I_{\Gamma}$ and $K_\pi + L_\pi$ are known, while $A_\pi$ and $P_\pi$ are not, but each of the latter two determines the other thanks to the formula above.
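As a sanity check, consider the ordinary double point $\{z^2=xy\}$, whose minimal resolution has dual graph a single vertex $v$ with $I_{\Gamma}=(-2)$, $k_v=\val_{\Gamma_\pi}(v)+2g(v)-2=-2$, $l_v=2$, $m_v=1$ and $p_v=2$. Formula~\eqref{equation:laplacian_formula_effective} then gives
\[
a_v = \left(-\tfrac{1}{2}\right)(-2+2-2)=1,
\]
so that $q_v = a_v/m_v = 1$, as expected since $v$ is the $\cal L$-node of $(X,0)$.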
In what follows, we argue that in general there is only a very limited number of possible values of $A_\pi$, therefore restricting the number of possible configurations of $P_\pi$.
Since the vector $P_\pi$ has positive coordinates, $-I_{\Gamma}^{-1} \cdot P_\pi$ is an element of the Lipman cone $\cal E^+$ of $\Gamma$, and so $A_\pi$ belongs to the translate $I_{\Gamma}^{-1}\cdot(K_\pi + L_\pi) + \cal E^+$ of $\cal E^+$.
Moreover, if a vertex $v_0$ of $\Gamma$ is an $\cal L$-node of $(X,0)$, that is if $l_{v_0}\neq0$, we know that its inner rate $q_{v_0}$ must be equal to 1 (see the paragraph preceding Proposition~3.9 of \cite{BelottodaSilvaFantiniPichon2019}).
This implies that $A_\pi$ belongs to the intersection, over the set of $\cal L$-nodes $v_0$ of $(X,0)$, of the hyperplanes of $\Z^{V(\Gamma)}$ whose $v_0$-th coordinate is equal to $m_{v_0}$.
Since $D>0$ for every nonzero element $D$ of the Lipman cone $\cal E^+$, the intersection of these hyperplanes with the translated cone above is finite,
and in fact rather small.
Therefore, the vector $P_\pi$ can take only finitely many values, fewer than those allowed by the proof of Theorem~\ref{thm:main}.
This construction is illustrated in Figure~\ref{figure:cones} below.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1]
\draw[thick,->] (-3.5,0) -- (6.5,0);
\draw[thick,->] (1,-1.35) -- (1,5.5);
\draw[very thick,color=blue] (1,0) -- (3.65,5.3);
\draw[very thick,color=blue] (1,0) -- (6,1.666);
\draw[very thick,color=red] (-3,-1) -- (7,2.333);
\draw[very thick,color=red] (-3,-1) -- (.15,5.3);
\draw[very thick,color=ForestGreen,dashed] (-3.5,2) -- (1,2);
\draw[ultra thick,color=ForestGreen] (1,2) -- (6,2);
\draw[very thick,color=ForestGreen,dashed] (5.9,2) -- (7,2);
\foreach \x in {-3,-3,-2,-1,0,1,2,3,4,5,6}
\foreach \y in {-1,0,1,2,3,4,5}
\draw[fill ] (\x,\y)circle(1pt);
\draw[fill,color=blue] (1,0)circle(1.7pt);
\draw[fill,color=blue] (2,1)circle(1.7pt);
\draw[fill,color=red] (-3,-1)circle(1.7pt);
\draw[fill] (1,2)circle(1.7pt);
\draw[fill,color=ForestGreen] (3,2)circle(1.7pt);
\begin{small}
\node(a)at(3.7,4.5){{\color{blue}$\mathcal E^+$}};
\node(a)at(-.6,.6){{\color{red}$I_{\Gamma}^{-1}\cdot(K_\pi + L_\pi)+\mathcal E^+$}};
\node(a)at(-2.4,4.5){$\mathbb Z^{V(\Gamma)}$};
\end{small}
\draw[thick,color=ForestGreen,->] (4.6,3.2) -- (4.4,2.1);
\begin{scriptsize}
\node(a)at(-2.6,-1.3){{\color{red}$I_{\Gamma}^{-1}\cdot(K_\pi + L_\pi)$}};
\node(a)at(2.3,1.2){{\color{blue}$Z_{\min}^{\Gamma}$}};
\node(a)at(4.7,3.4){{\color{ForestGreen}possible values for $A_\pi$}};
\node(a)at(3,2.3){{\color{ForestGreen}$A_\pi$}};
\node(a)at(.48,1.7){$(m_{v_0},0)$};
\end{scriptsize}
\end{tikzpicture}
\caption{Observe that, since $Z_{\min}^{\Gamma}\succ 0$, the Lipman cone $\cal E^+$ (in blue), and thus its translate $I_{\Gamma}^{-1}\cdot(K_\pi + L_\pi)+\mathcal E^+$ (in red), contain no horizontal line. Only six values of $A_\pi$ are possible in this example.}
\label{figure:cones}
\end{figure}
\subsection{Topological constraints}
\label{subsec:topological_constraints}
Additional restrictions may be derived from the topological properties of the germ $(X,0)$, and more specifically from the \emph{local degrees} associated with a generic projection $\ell\colon(X,0)\to(\C^2,0)$. Let $\pi\colon X_\pi\to X$ be a good resolution of $(X,0)$ and let $\ell\colon(X,0)\to(\C^2,0)$ be a generic projection.
Let $\sigma_{\ell} \colon Y_{\ell} \to \C^2$ be a sequence of point blowups of $(\C^2,0)$ such that the rational map $\sigma_\ell^{-1}\circ\ell\circ\pi$ maps each component of $\pi^{-1}(0)$ surjectively to a component of $\sigma_\ell^{-1}(0)$ (that is, no $E_v$ is contracted), so that $\ell$ induces a map $\tilde\ell\colon \Gamma_\pi\to \Gamma_{\sigma_\ell}$ between (the topological spaces underlying) the graphs $\Gamma_\pi$ and $\Gamma_{\sigma_\ell}$.
Let ${\pi_\ell} \colon X_{\pi_\ell} \to X$ be a good resolution of $(X,0)$ such that $\pi_\ell\circ \ell$ factors through $\sigma_\ell$, let $v$ be a vertex of $\Gamma_{\pi}$ and let $v_1, \ldots, v_r$ be the vertices of $\Gamma_{\pi_\ell}$ that are adjacent to $v$ and contained in $\Gamma_\pi$.
Let $\Gamma_v$ be the subgraph of $\Gamma_{\pi_\ell}$ defined as the closure in $\Gamma_{\pi_\ell}$ of the connected component of $\Gamma_{\pi_\ell}\setminus\widetilde\ell^{-1}\big(\widetilde\ell(\{v_1,\ldots,v_r\})\big)$ containing $v$, and consider the subgraph of $\Gamma_{\sigma_\ell}$ defined as $T_v=\widetilde\ell(\Gamma_v)$.
Set
\[
\cal N(\Gamma_v) = \overline{ \bigcup_{w \in V(\Gamma_v)} N(E_w) \setminus \bigcup_{w' \in V(\Gamma_{\pi_\ell}) \setminus V( \Gamma_v)} N(E_{w'})}
\]
and
\[
\cal N(T_v) = \overline{\bigcup_{w \in V(T_v)} N(E_w) \setminus \bigcup_{w' \in V(\Gamma_{\sigma_\ell}) \setminus V( T_v)} N(E_{w'})}.
\]
Adjusting the fiber bundles $N(E_w)$ if necessary, by restricting $\ell$ to $\pi_\ell\big(\cal N(\Gamma_v)\big)$ we obtain a cover
\(
\ell|_{\pi_\ell( \cal N(\Gamma_v))} \colon \pi_\ell\big( \cal N(\Gamma_v)\big) \to \sigma_{\ell}\big(\cal N(T_v)\big).
\)
Following \cite[Definition~4.16]{BelottodaSilvaFantiniPichon2019}, we call \emph{local degree of $\ell$ at $v$} the degree of this cover, and we denote it by $\deg(v)$. Now pick a generic linear form $h\colon (X,0)\to (\C,0)$ on $(X,0)$, denote by $\widehat{F}_v$ the intersection of $\pi_\ell \big(\cal N(\Gamma_{v})\big)$ with the Milnor fiber $X\cap\{h=t\}$ of $h$, and set $\widehat{F}'_v=\ell(\widehat{F}_v)$.
Restricting $\ell$ again, we get a cover $\ell|_{\widehat{F}_v}\colon \widehat{F}_v\to \widehat{F}_v'$ of degree $\deg(v)$.
The Hurwitz formula applied to this cover yields the following equality:
\begin{lemma}\label{lem:hurwitz}
\(
\chi(\widehat{F}_v)+m_vp_v = \deg(v) \chi(\widehat{F}'_v).
\)
\end{lemma}
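For instance, for the ordinary double point $\{z^2=xy\}$ and the single vertex $v$ of its minimal resolution, we have $\chi(\widehat{F}_v)=0$, as $\widehat{F}_v$ is the whole Milnor fiber of a generic linear form, which is an annulus, while $m_vp_v = 2$ and $\deg(v)=2$, since the dual graph has a single vertex and a generic projection $\ell$ has degree $m(X,0)=2$. Lemma~\ref{lem:hurwitz} then forces $\chi(\widehat{F}'_v)=1$, that is, $\widehat{F}'_v$ is a disc.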
\begin{remark}
It is worth pointing out that the map $\tilde{\ell}$ and the local degree $\deg(v)$ can be defined more intrinsically, without the need of choosing a modification of $(\C^2,0)$, by working with suitable valuation spaces.
We refer the reader to \cite{BelottodaSilvaFantiniPichon2019}, and in particular to sections 2.1, 2.2, and 4.6 of \emph{loc.\ cit.}, for a more thorough discussion.
\end{remark}
\section{An example of polar exploration}
\begin{example}\label{ex:MaugendreMichel2017}
We will discuss in detail Example 3 from the paper \cite{MaugendreMichel2017}, showing that we can determine its $\cal P$-vector completely.
Consider the hypersurface $(X,0)$ in $(\C^3,0)$ defined by the equation $z^2=(y+x^3)(y+x^2)(x^{34}-y^{13})$.
The dual graph of the minimal resolution of $(X,0)$ which factors through the blowup of its maximal ideal is given in Figure~\ref{exMM_figure 1}.
All exceptional curves are rational, the arrow represents the strict transform of a generic linear form, and the negative numbers are the self-intersections of the irreducible components of $\pi^{-1}(0)$.
The multiplicities $m_v$, which are computed from this data using the equalities~\eqref{eq:IdentityTotalTransform} (page \pageref{eq:IdentityTotalTransform}), are within parentheses in the figure.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\draw[fill ] (0,0)circle(2pt);
\draw[fill ] (1,0)circle(2pt);
\draw[fill ] (1.8,0.8)circle(2pt);
\draw[fill ] (1.8,-0.8)circle(2pt);
\draw[fill ] (1.8,0.8)circle(2pt);
\draw[fill ] (2.8,-0.8)circle(2pt);
\draw[fill ] (2.8,0.8)circle(2pt);
\draw[fill ] (3.5,0)circle(2pt);
\draw[fill ] (4.5,0)circle(2pt);
\draw[fill ] (5.5,0)circle(2pt);
\draw[fill ] (6.5,0)circle(2pt);
\draw[thin ](0,0)--(1,0);
\draw[thin ](1,0)--(1.8,0.8);
\draw[thin ](1.8,0.8)--(2.8,0.8);
\draw[thin ](2.8,0.8)--(3.5,0);
\draw[thin ](1,0)--(1.8,-0.8);
\draw[thin ](1.8,-0.8)--(2.8,-0.8);
\draw[thin ](2.8,-0.8)--(3.5,0);
\draw[thin ](3.5,0)--(6.5,0);
\draw[thin,>-stealth,->](0,0)--+(-0.7,0.7);
\begin{scriptsize}
\node(a)at(0,-0.3){$(2)$};
\node(a)at(.9,-0.3){$(1)$};
\node(a)at(1.85,.5){$(2)$};
\node(a)at(2.75,.5){$(5)$};
\node(a)at(1.85,-.5){$(2)$};
\node(a)at(2.75,-.5){$(5)$};
\node(a)at(3.65,-0.3){$(13)$};
\node(a)at(4.5,-0.3){$(3)$};
\node(a)at(5.5,-0.3){$(2)$};
\node(a)at(6.5,-0.3){$(1)$};
\node(a)at(0,0.45){$-1$};
\node(a)at(.9,0.45){$-6$};
\node(a)at(1.8,1.3){$-3$};
\node(a)at(2.8,1.3){$-3$};
\node(a)at(1.8,-1.4){$-3$};
\node(a)at(2.8,-1.4){$-3$};
\node(a)at(3.6,0.45){$-1$};
\node(a)at(4.5,0.45){$-5$};
\node(a)at(5.5,0.45){$-2$};
\node(a)at(6.5,0.45){$-2$};
\node(a)at(0,0.2){$v_1$};
\node(a)at(.9,0.2){$v_2$};
\node(a)at(1.8,1){$v_3$};
\node(a)at(2.8,1){$v_4$};
\node(a)at(1.8,-1.05){$w_3$};
\node(a)at(2.8,-1.05){$w_4$};
\node(a)at(3.6,0.2){$v_5$};
\node(a)at(4.5,0.2){$v_6$};
\node(a)at(5.5,0.2){$v_7$};
\node(a)at(6.5,0.2){$v_8$};
\end{scriptsize}
\end{tikzpicture}
\caption{The graph $\Gamma_\pi$, decorated with the self-intersections and multiplicities.}\label{exMM_figure 1}
\end{figure}
Let us now focus on determining the $\cal P$-vector of $(X,0)$ from the data we have available. Observe that in the following discussion we will not derive any information from the analytic type of the singularity $(X,0)$, but only from the weighted graph $\Gamma_\pi$ and from the multiplicities of its vertices, so that our conclusion will hold for any singularity $(S,0)$ realizing the same data.
This means in particular that the resolution of $(S,0)$ whose weighted dual graph is $\Gamma_\pi$ will be required to factor through the blowup of the maximal ideal of $(S,0)$, so that the multiplicity $m(S,0)$ of $(S,0)$ is the sum of the products $m_v l_v$ over the vertices $v$ of $\Gamma_\pi$, which in this case is equal to $2$.
Applying \cite[Proposition~3.9]{BelottodaSilvaFantiniPichon2019} to the vertex $v_8$, we deduce that the inner rates are strictly increasing along a path from $v_1$ to $v_8$.
In particular, if $\widetilde\ell$ is the graph morphism of subsection~\ref{subsec:topological_constraints}, one of the two chains going from $v_2$ to $v_5$ must be sent by $\widetilde\ell$ surjectively onto a chain $\gamma$ in the tree $\widetilde\ell(\Gamma_\pi)$.
It follows that the image through $\widetilde\ell$ of the second chain from $v_2$ to $v_5$ must contain $\gamma$ as well; since the multiplicity of the surface is $2$ and both chains are sent onto $\gamma$, the local degree on each of them cannot exceed $1$, so that it must be equal to $1$, and therefore the image of both chains is precisely $\gamma$.
In particular the two strings are also strings of the graph $\Gamma_{\pi_\ell}$, where $\pi_\ell$ is the resolution of $(X,0)$ introduced in the discussion of subsection~\ref{subsec:topological_constraints}.
Then, the connected components of $\Gamma_{\pi} \setminus \{v_5\}$ and $\Gamma_{\pi_\ell} \setminus \{v_5\}$ containing $v_1$ coincide, and thus we can determine $p_v$ for each vertex $v$ of this subgraph by applying Lemma~\ref{lem:hurwitz}.
We obtain $p_{v_1} = p_{v_3} = p_{v_4} = p_{w_3} = p_{w_4} = 0$ and $p_{v_2}=1$.
Since the inner rate function achieves its maximum strictly at the vertex $v_8$, the vertex $v_8$ keeps valency one in the graph $\Gamma_{\pi_\ell}$, and applying Lemma~\ref{lem:hurwitz} again we obtain $\chi(F_{v_8})+m_{v_8}p_{v_8} = \deg(v_8) \chi(F'_{v_8})$.
Since $\chi(F_{v_8})=1$, we obtain that $\chi(F'_{v_8}) \geq 1$, so that $\chi(F'_{v_8}) =1$ since $F'_{v_8}$ has one boundary component.
Therefore, $p_{v_8} = \deg(v_8) -1$, so $p_{v_8} \in \{0,1\}$ since $ \deg(v_8) $ cannot exceed the multiplicity of $(S,0)$, which is $2$.
Notice that at this point we do not know whether $\Gamma_{\pi_\ell}$ has an extra edge adjacent to one of the vertices $v_5$, $v_6$, or $v_7$ whose image through $\widetilde\ell$ is contained in $\widetilde\ell(\Gamma_\pi)$.
Applying Lemma~\ref{lem:hurwitz} again, we then have four cases: $(p_{v_5}, p_{v_6}, p_{v_7}, p_{v_8}) \in \{ (1,0,0,1), (1,0, 1,0), (1,1, 0,0), (2,0,0,0) \}$, with the first case corresponding to $\Gamma_{\pi} =\Gamma_{\pi_\ell}$.
We now use the Laplacian formula recalled in equation~\eqref{equation:laplacian_formula_effective} (page \pageref{equation:laplacian_formula_effective}) to eliminate some of these possibilities for $P$ and to compute the inner rates.
Writing the formula for every vertex $v \in \{ v_1, v_2, v_3,v_4 , w_3,w_4\}$, for which we know $p_v$, and using the fact that $q_{v_1}=1$, we obtain the corresponding inner rates $q_v$ and the inner rate $q_{v_5}$, which are as follows: $q_{v_2} = 2, q_{v_3} =q_{w_3} = \frac{5}{2}, q_{v_4} =q_{w_4} = \frac{13}{5}$, and $q_{v_5} = \frac{34}{13}$.
Now, the Laplacian formula for $v_5$ yields $-13q_{v_5} + 5(q_{v_4} + q_{w_4}) + 3 q_{v_6} = 1-p_{v_5}$.
Therefore $3 q_{v_6} + p_{v_5} = 9$, where $q_{v_6} \in \frac{1}{3} \N$ and $q_{v_6} > q_{v_5} = \frac{34}{13}$.
Therefore either $q_{v_6}=3$ and $p_{v_5} = 0$, or $q_{v_6}=\frac{8}{3}$ and $p_{v_5} = 1$.
But we know from the Hurwitz arguments above that $p_{v_5} = 1$ or $2$.
Therefore, the only possibility is $q_{v_6}=\frac{8}{3}$ and $p_{v_5} = 1$.
The Laplacian formula for $v_6$ gives $ -15q_{v_6} + 13 q_{v_5} +2q_{v_7} = -p_{v_6}$.
Then $2q_{v_7} + p_{v_6} = 6$, with $q_{v_7} \in \frac{1}{2} \N$, $q_{v_7} > \frac{8}{3}$, and $p_{v_6} \leq 1$.
The unique possibility is $q_{v_7} = 3$ and $p_{v_6} = 0$.
The Laplacian formula for $v_7$ gives $ q_{v_8} +p_{v_7} =4 q_{v_7}-3 q_{v_6} = 4 $, with $q_{v_8} \in \N$, $q_{v_8} > 3$, and $p_{v_7} \leq 1$.
The unique possibility is $q_{v_8} = 4$ and $p_{v_7} = 0$.
Among the four possibilities for $(p_{v_5}, p_{v_6}, p_{v_7}, p_{v_8})$ identified above, the unique possibility is then $(1,0,0,1)$, so $p_{v_8}=1$.
This is indeed compatible with the Laplacian formula for $v_8$.
We have obtained a unique possibility for $P$ and the inner rates, as shown in Figure~\ref{figure 3}.
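The elimination above is pure arithmetic in the multiplicities and self-intersections of Figure~\ref{exMM_figure 1}. As an illustrative sanity check (not part of the argument), the three Laplacian identities at $v_5$, $v_6$, and $v_7$ can be verified with exact rational arithmetic in a few lines of Python:

```python
from fractions import Fraction as F

# Inner rates and P-vector entries obtained in the example.
q5, q4 = F(34, 13), F(13, 5)   # q_{v_5}, and q_{v_4} = q_{w_4}
q6, p5 = F(8, 3), 1
q7, p6 = F(3), 0
q8, p7 = F(4), 0

# Laplacian formula at v_5:  -13 q_{v_5} + 5(q_{v_4} + q_{w_4}) + 3 q_{v_6} = 1 - p_{v_5}
assert -13*q5 + 5*(q4 + q4) + 3*q6 == 1 - p5
# Laplacian formula at v_6:  -15 q_{v_6} + 13 q_{v_5} + 2 q_{v_7} = -p_{v_6}
assert -15*q6 + 13*q5 + 2*q7 == -p6
# Laplacian formula at v_7:  -4 q_{v_7} + 3 q_{v_6} + q_{v_8} = -p_{v_7}
assert -4*q7 + 3*q6 + q8 == -p7
```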
Observe that since there are no edges joining two vertices with nonzero $p_{v}$, the strict transform $\Pi^*$ of the polar curve $\Pi$ by $\pi$ meets the exceptional divisor $E = \pi^{-1}(0)$ at smooth points.
Moreover, since each $p_{v_i}$ equals either zero or one, $\pi$ is a good resolution of $\Pi$, that is, the total transform $\pi^{-1}(\Pi)$ is a simple normal crossing divisor.
The arrows in Figure~\ref{figure 3} represent the strict transform of a generic polar curve.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\draw[thin,>-stealth,->](1,0)--+(-0.7,-0.7);
\draw[thin,>-stealth,->](3.5,0)--+(+0.7,-0.7);
\draw[thin,>-stealth,->](6.5,0)--+(+0.7,-0.7);
\draw[fill ] (0,0)circle(2pt);
\draw[fill ] (1,0)circle(2pt);
\draw[fill ] (1.8,0.8)circle(2pt);
\draw[fill ] (1.8,-0.8)circle(2pt);
\draw[fill ] (1.8,0.8)circle(2pt);
\draw[fill ] (2.8,-0.8)circle(2pt);
\draw[fill ] (2.8,0.8)circle(2pt);
\draw[fill ] (3.5,0)circle(2pt);
\draw[fill ] (4.5,0)circle(2pt);
\draw[fill ] (5.5,0)circle(2pt);
\draw[fill ] (6.5,0)circle(2pt);
\draw[thin ](0,0)--(1,0);
\draw[thin ](1,0)--(1.8,0.8);
\draw[thin ](1.8,0.8)--(2.8,0.8);
\draw[thin ](2.8,0.8)--(3.5,0);
\draw[thin ](1,0)--(1.8,-0.8);
\draw[thin ](1.8,-0.8)--(2.8,-0.8);
\draw[thin ](2.8,-0.8)--(3.5,0);
\draw[thin ](3.5,0)--(6.5,0);
\node(a)at(0,0.3){$\mathbf 1$};
\node(a)at(.9,0.3){$\mathbf 2$};
\node(a)at(1.8,1.2){$ \frac{\mathbf 5}{\mathbf 2}$};
\node(a)at(2.8,1.2){$\frac{\mathbf{13}}{\mathbf 5}$};
\node(a)at(1.8,-1.2){$ \frac{\mathbf 5}{\mathbf 2}$};
\node(a)at(2.8,-1.2){$ \frac{\mathbf{13}}{\mathbf 5}$};
\node(a)at(3.6,0.4){$ \frac{\mathbf{34}}{\mathbf{13}}$};
\node(a)at(4.5,0.4){$ \frac{\mathbf{8}}{\mathbf{3}}$};
\node(a)at(5.5,0.3){$\mathbf{3}$};
\node(a)at(6.5,0.3){$\mathbf{4}$};
\end{tikzpicture}
\caption{The graph $\Gamma_\pi$, decorated with the inner rates of its vertices and arrows corresponding to the components of a generic polar curve.}\label{figure 3}
\end{figure}
\end{example}
\begin{remark}
Observe that at this stage in the previous example it is not possible to know whether the $\cal P$-nodes of $(S,0)$ are $v_2,v_5$, and $v_8$, since in principle the resolution whose weighted dual graph is $\Gamma_\pi$ might not factor through the Nash transform of $(S,0)$.
For example, in the case of the hypersurface $(X,0)$ we started with, none of those three vertices is a $\cal P$-node.
\end{remark}
\bibliographystyle{alpha}
\section{Introduction}
Shortly after the discovery of pulsars and the realization that they are
rotating neutron stars, deformations of rotating neutron stars were proposed as
sources of continuous gravitational radiation~\cite{Shklovskii1969,
Ostriker1969, Ferrari1969, Melosh1969}; see~\cite{Press1972} for an early
review.
Searches for such radiation are an ongoing concern of the LIGO and Virgo
gravitational wave detectors~\cite{LIGO_psrs2010, LIGO_CasA, LIGO_Vela}; see~\cite{Owen2009, Pitkin, Astone} for recent
reviews.
It is thus of
great interest to know the maximum
quadrupolar deformation that a neutron star could sustain, in order to motivate
further searches and help interpret upper limits or detections.
In the case of elastic (as opposed to magnetic) deformations, the main factor
influencing the answer is whether the neutron star contains particles more
exotic than neutrons~\cite{OwenPRL, Owen2009}.
However, the structure of the star also plays an important role.
While there are
relativistic calculations of the quadrupole deformations due to magnetic
fields (e.g.,~\cite{IS,CFG,YKS, FR,CR}), all the computations involving elastic
deformations have used Newtonian gravity. Moreover, all but two of these computations
have used the integral expression obtained in the Cowling approximation (i.e.,
neglecting the self-gravity of the perturbation) by
Ushomirsky, Cutler, and
Bildsten (UCB)~\cite{UCB}; see \cite{OwenPRL, Lin, KS, Horowitz}.
Haskell, Jones, and Andersson (HJA)~\cite{HJA} dropped the Cowling approximation using
a somewhat different formalism than UCB's; there is a further application of
their results in~\cite{Haskelletal}.
We improve these treatments by generalizing the UCB integral to relativistic
gravity with no Cowling approximation. We also provide a
similar generalization for the Newtonian no-Cowling case, as a warm-up. In
addition to providing a simpler formalism for performing computations than the
more general Newtonian gravity treatment in HJA, the integrals we obtain allow us to verify that a
maximal uniform
strain continues to yield the maximum quadrupole deformation in the Newtonian
and relativistic no-Cowling cases. (UCB showed this to be true for an arbitrary equation of state
in the Newtonian Cowling approximation case; we are able to verify that it is
true in the more general cases for each background stellar model we consider.)
We then apply our calculation to the standard case of quadrupoles supported by
shearing the lattice of nuclei in the crust, as well as the cases where the
quadrupole is supported by the hadron--quark mixed phase lattice in the core, or a crystalline color superconducting phase throughout a solid strange quark star. For the crustal
quadrupoles, we calculate the shear modulus following HJA, using the equation
of state (EOS) and composition results of Douchin and Haensel~\cite{DH} and
the effective shear modulus calculated by Ogata and Ichimaru~\cite{OI}. (There are recent
improvements to the Ogata and Ichimaru result~\cite{HH, Baiko,BaikoCPP}, but these only reduce their
shear modulus by $<10\%$.) For the hadron--quark mixed phase, we use our recent calculations of the EOS and
shear modulus~\cite{J-MO1} for a variety of parameters. (We also consider the range of surface tensions for
which the mixed phase is favored.) For crystalline quark matter, we use the shear modulus calculated by Mannarelli, Rajagopal, and Sharma~\cite{MRS}, and the EOS given by
Kurkela, Romatschke, and Vuorinen~\cite{KRV}.
In all cases, we use a breaking strain of $0.1$, comparable to that
calculated by Horowitz and Kadau~\cite{HK} using molecular dynamics
simulations. (Hoffman and Heyl~\cite{HoHe} have recently obtained
very similar values over more of parameter space.) This result is directly applicable to the crustal lattice, at least for the outer crust, above neutron drip (though see Chugunov and
Horowitz~\cite{ChHo} for caveats). We also
feel justified in applying it to the inner crust, as well as to the mixed phase and crystalline quark matter, since the primary source of
the high breaking strain appears to be the system's large pressure.
But one can apply our results to any breaking strain using the linear scaling of
the maximum quadrupole with breaking strain.
In our general relativistic calculation, we use the relativistic theory of elasticity given by Carter and
Quintana~\cite{CQ} and placed in a more modern guise by
Karlovini and Samuelsson~\cite{KaSaI}.
However, all we need from it is the relativistic form of the elastic stress-energy tensor, which can be obtained by simple
covariance arguments, as noted by Schumaker and Thorne~\cite{ST}. We also use the standard
\citet{TC} Regge-Wheeler gauge~\cite{RW} formalism for perturbations
of static relativistic stars, following
Hinderer's recent calculation~\cite{Hinderer} of the quadrupole moment of a
tidally deformed relativistic star (first discussed in~\citet{FH}), and the classic calculation by \citet{Ipser}.
Even though we are interested in the gravitational radiation emitted by
rotating stars, it is sufficient for us to calculate the static quadrupole deformation.
As discussed by
Ipser~\cite{Ipser}, and then proved for more general situations by
Thorne~\cite{Thorne}, this static quadrupole (obtained
from the asymptotic
form of the metric) can be inserted into the quadrupole formula to obtain the emitted
gravitational radiation in the fully relativistic,
slow-motion limit. [This
approximation has uncontrolled remainders of order $(\omega/\omega_K)^2$, where
$\omega$ and $\omega_K$ are the star's angular velocity and its maximum---i.e.,
Kepler---angular velocity, respectively.
This ratio is $\lesssim 10^{-2}$ for the pulsars for which LIGO has been able
to beat the spin-down limit~\cite{LIGO_psrs2010}.]
We shall generally show the gravitational constant $G$ and speed of light $c$
explicitly, though we shall take $G = c =1$ in most of Sec.~\ref{GR}, only restoring them in our
final expressions.
The relativistic calculation was aided by use of the computer algebra
system {\sc{Maple}}
and the associated tensor manipulation package {\sc{GRTensorII}}~\cite{GRTensor}.
We used {\sc{Mathematica}}~7 to perform numerical computations.
The paper is structured as follows: In Sec.~\ref{Newt}, we review UCB's formalism
and extend it by introducing a Green function to compute the maximum Newtonian quadrupole deformation without
making the Cowling approximation. In Sec.~\ref{GR}, we further generalize to
the fully relativistic case, and compare the various approximations for the maximum quadrupole.
In Sec.~\ref{results2}, we show the maximum quadrupoles for three different cases: first crustal quadrupoles,
then hadron--quark hybrid quadrupoles, and finally solid strange quark star quadrupoles. We also describe
the modifications to our formalism needed to treat solid strange quark stars. We discuss all these
results in Sec.~\ref{discussion}, and summarize and conclude in
Sec.~\ref{concl2}.
In the Appendix, we show that the mixed phase is favored by global energy arguments even for surface tensions large enough that it is disfavored by local energy arguments.
\section{Newtonian calculation of the maximum quadrupole}
\label{Newt}
We first demonstrate how to compute the maximum Newtonian quadrupole without making the Cowling approximation. This provides
a warm-up before we tackle the full relativistic case, and also allows us to verify some of the
statements made by UCB and HJA. We use
the basic formalism of UCB, modeling the star as nonrotating, with the
stress-energy tensor of a perfect fluid plus
shear terms, and treating the shear contributions as a first-order
perturbation of hydrostatic equilibrium.
This perturbative treatment should be quite a good approximation: The maximum shear stress to energy density ratio we consider in the crustal and hybrid star cases is $\lesssim 0.05\%$ (and the maximum shear stress to pressure ratio is
$\lesssim 0.3\%$). (Here we have taken the shear stress to
be $\mu\bar{\sigma}_\mathrm{max}$, which is good up to factors of order unity.) And even in the case of solid strange quark stars, the
maximum shear stress to energy density ratio is still only at most $\sim 0.2\%$.
[We have already discussed the effects of rotation in the relativistic case, above; UCB note at
the beginning of their Sec.~4 that rotation also only modifies the perturbative Newtonian results for the static deformations we and they consider at the $O([\omega/\omega_K]^2)$ level.]
It is convenient to start by writing the quadrupole moment
in terms of the surface value of the perturbation to the star's
Newtonian potential. We start from UCB's definition of
\<
\label{Q22}
Q_{22} := \int_0^\infty\delta\rho(r)r^4dr
\?
[where the (Eulerian) density perturbation $\delta\rho$ and all similar
perturbed quantities have only an $l = m = 2$ spherical harmonic component].
[Note that this
quadrupole moment differs by an overall constant from the one defined by
Thorne~\cite{Thorne}---e.g., his Eq.~(5.27a).] We then recall that
the perturbed Poisson equation for the $l = 2$ part of the perturbed
gravitational potential is
\<\label{Poisson}
(\triangle_2\delta\Phi)(r) := \frac{1}{r^2}[r^2\delta\Phi'(r)]' - \frac{6}{r^2}\delta\Phi(r) = 4\pi G\delta\rho
\?
($\triangle_2$ is the $l = 2$ radial part of the Laplacian), with boundary conditions of
\<
\label{PoissonBCs}
\delta\Phi(0) = 0, \qquad R\delta\Phi'(R) = -3\delta\Phi(R),
\?
where $R$ is the radial coordinate of the star's surface.
[See, e.g., Eqs.~(2.15) and (2.16) in \cite{LMO}---their $\Phi_{22}$ is our
$\delta\Phi$. Note also that the primes denote derivatives with respect to $r$. Additionally, we shall
continue to be inconsistent with our inclusion of the functional dependence of quantities---e.g., $\delta\rho$
depends upon $r$, even though we do not always indicate this explicitly. We will eventually stop displaying $\delta\Phi$'s
explicit functional dependence on $r$, for instance.]
If we now substitute Eq.~\eqref{Poisson} into Eq.~\eqref{Q22} and integrate by
parts using the boundary conditions~\eqref{PoissonBCs}, we obtain
\<\label{Q22N}
Q_{22} = -\frac{5R^3}{4\pi G}\delta\Phi(R).
\?
This
sort of expression is more commonly seen in the relativistic case, where it is
necessary to obtain the quadrupole in this manner by looking at the
perturbation's asymptotic behavior---see the discussion in Sec.~\ref{GR}.
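Equation~\eqref{Q22N} can be checked on a toy profile: any $\delta\Phi$ satisfying the boundary conditions~\eqref{PoissonBCs} determines $\delta\rho$ through Eq.~\eqref{Poisson}, and the integral definition~\eqref{Q22} must then agree with the surface formula. A short Python sketch (the profile and the values $a = R = G = 1$ are our own illustrative choices, not from the text):

```python
import numpy as np

G, R, a = 1.0, 1.0, 1.0
r = np.linspace(1e-6, R, 200001)

# Toy potential satisfying dPhi(0) = 0 and R dPhi'(R) = -3 dPhi(R):
dPhi = a*r**2*(1 - 5*r/(6*R))
# Perturbed Poisson equation: triangle_2 dPhi = 4 pi G drho,
# using triangle_2[r^2] = 0 and triangle_2[r^3] = 6 r:
drho = -5*a*r/(4*np.pi*G*R)

f = drho*r**4
Q22_integral = float(np.sum((f[1:] + f[:-1])*np.diff(r))/2)  # trapezoid rule for the integral definition
Q22_surface = -5*R**3/(4*np.pi*G)*dPhi[-1]                   # surface formula, Eq. (Q22N)
assert abs(Q22_integral - Q22_surface) < 1e-8
```

Both expressions give $-5aR^5/(24\pi G)$ for this profile, as they must.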
We now wish to obtain an equation for $\delta\Phi$ in terms of the shear
stresses. We follow UCB in decomposing the perturbed stress tensor as
[see their Eqs.~(59) and (61)]
\<\label{deltatau}
\begin{split}
\delta\tau_{ab} &= -\delta pY_{lm}g_{ab} + t_{rr}Y_{lm}(\hat{r}_a\hat{r}_b - e_{ab}/2) +
t_{r\perp}f_{ab}\\
&\quad + t_\Lambda(\Lambda_{ab} + Y_{lm}e_{ab}/2).
\end{split}
\?
Here $\delta p$ is the (Eulerian) pressure perturbation; $Y_{lm}$ is a spherical
harmonic; $\hat{r}_a$ is the radial unit vector; $t_{rr}$, $t_{r\perp}$, and
$t_\Lambda$ are the components of the shear stresses; and
$g_{ab}$ denotes the metric of flat, $3$-dimensional Euclidean space.
(Following UCB, we will generally write out $l$ and $m$ explicitly, even
though we only consider $l = m = 2$ here.)
Also [Eqs.~(40) in UCB],
\begin{subequations}
\begin{align}
e_{ab} &:= g_{ab} - \hat{r}_a\hat{r}_b,
\\
f_{ab} &:= 2r\hat{r}_{(a}\nabla_{b)}Y_{lm}/\beta,
\\
\beta &:= \sqrt{l(l+1)} = \sqrt{6},
\\
\label{Lambda}
\Lambda_{ab} &:= r^2\nabla_a\nabla_bY_{lm}/\beta^2 + f_{ab}/\beta.
\end{align}
\end{subequations}
(We have corrected the dropped factor of $\beta^{-1}$ multiplying $f_{ab}$ in
UCB's definition of $\Lambda_{ab}$---this was also noticed by HJA.)
We also have
\begin{equation}
\label{ts}
t_{ab} = 2\mu\sigma_{ab},
\end{equation}
where $\mu$ is the shear modulus and $\sigma_{ab}$ is the strain tensor.
(This is a factor-of-$2$ correction to the expression in UCB, as noted in
\cite{OwenPRL}.)
Now, a convenient expression can be obtained from the perturbed
equation of hydrostatic equilibrium
\<\label{hydro}
\nabla^a\delta\tau_{ab} = \delta\rho g(r)\hat{r}_b + \rho\nabla_b\delta\Phi
\?
($\nabla_a$ denotes the flat-space covariant derivative), by
substituting for $\delta\rho$ using the Poisson equation~\eqref{Poisson}
and projecting along $\hat{r}^b$, yielding
\<\label{deltaPhieq1}
\begin{split}
\frac{\triangle_2\delta\Phi}{4\pi G} + \frac{\rho}{g(r)}\delta\Phi' &= \frac{\hat{r}^b\nabla^a\delta\tau_{ab}}{g(r)}\\
&= \frac{1}{g(r)}\left[-\delta p' + t_{rr}' + \frac{3}{r}t_{rr} - \frac{\beta}{r}t_{r\perp}\right].
\end{split}
\?
We then project Eq.~\eqref{hydro} along $\nabla^bY_{lm}$ to express $\delta p$ in terms of the shear stresses $t_{rr}$, $t_{r\perp}$, and $t_\Lambda$,
along with $\rho$ and $\delta\Phi$, giving
\<
\delta p = -\rho\delta\Phi - \frac{t_{rr}}{2} + \frac{r}{\beta}t_{r\perp}' + \frac{3}{\beta}t_{r\perp} + \left(\frac{1}{\beta^2} - \frac{1}{2}\right)t_\Lambda.
\?
Substituting this into Eq.~\eqref{deltaPhieq1}, we thus obtain
\<\label{deltaPhieq2}
\begin{split}
\triangle_2\delta\Phi - \frac{4\pi G}{g(r)}\rho'\delta\Phi &=
\frac{4\pi G}{g(r)}\biggl[\frac{3}{2}t_{rr}' - \frac{4}{\beta}t_{r\perp}' - \frac{r}{\beta}t_{r\perp}''\\
&\quad - \left(\frac{1}{\beta^2} - \frac{1}{2}\right)t_\Lambda' + \frac{3}{r}t_{rr} - \frac{\beta}{r}t_{r\perp}\biggr].
\end{split}
\?
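The elimination of $\delta p$ leading from Eq.~\eqref{deltaPhieq1} to Eq.~\eqref{deltaPhieq2} is mechanical and can be confirmed symbolically. In the following SymPy sketch (all function names are ours, introduced only for this check) the two forms of the perturbed equilibrium equation agree identically:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
G = sp.symbols('G', positive=True)   # Newton's constant
pi = sp.pi
beta = sp.sqrt(6)

rho, g, dPhi, trr, trp, tL = (sp.Function(n)(r)
                              for n in ('rho', 'g', 'dPhi', 'trr', 'trp', 'tL'))

# l = 2 radial part of the Laplacian:
lap2 = lambda u: sp.diff(r**2*sp.diff(u, r), r)/r**2 - 6*u/r**2

# delta p from the tangential projection of hydrostatic equilibrium:
dp = (-rho*dPhi - trr/2 + r*trp.diff(r)/beta + 3*trp/beta
      + (1/beta**2 - sp.Rational(1, 2))*tL)

# Eq. (deltaPhieq1):
lhs1 = lap2(dPhi)/(4*pi*G) + rho*dPhi.diff(r)/g
rhs1 = (-dp.diff(r) + trr.diff(r) + 3*trr/r - beta*trp/r)/g

# Eq. (deltaPhieq2):
lhs2 = lap2(dPhi) - (4*pi*G/g)*rho.diff(r)*dPhi
rhs2 = (4*pi*G/g)*(sp.Rational(3, 2)*trr.diff(r) - 4*trp.diff(r)/beta
                   - r*trp.diff(r, 2)/beta
                   - (1/beta**2 - sp.Rational(1, 2))*tL.diff(r)
                   + 3*trr/r - beta*trp/r)

# The two equations are equivalent once delta p is eliminated:
assert sp.simplify(4*pi*G*(lhs1 - rhs1) - (lhs2 - rhs2)) == 0
```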
We now wish to obtain an integral expression for $Q_{22}$ that generalizes
UCB's Eq.~(64) to the case where we do not make the Cowling approximation.
We shall do this by obtaining the Green function for the left-hand side of
Eq.~\eqref{deltaPhieq2} and then integrating by parts.
We will be able to
discard all of the boundary terms, since the
stresses vanish at the star's surface (we assume that the shear modulus vanishes there) and the integrand vanishes at the star's center. We can obtain the
Green function using the standard Sturm-Liouville expression in terms of the solutions of the
homogeneous equation [e.g., Eq.~(10.103) in Arfken and Weber~\cite{AW}~].
We obtain the appropriate solution to the homogeneous equation numerically
for a given background stellar model (EOS and mass). The equation for the Green function is
[multiplying the left-hand side of Eq.~\eqref{deltaPhieq2} by $r^2$ to improve its regularity]
\<\label{Leq}
\begin{split}
(\mathcal{L}_N\mathcal{G})(r,\bar{r}) &:= \frac{\partial}{\partial r}\left[r^2\frac{\partial}{\partial r}\mathcal{G}(r,\bar{r})\right] - \left[6 + \frac{4\pi G r^2}{g(r)}\rho'\right]\mathcal{G}(r,\bar{r})\\
&\,= \delta(r - \bar{r})
\end{split}
\?
[$\delta(r - \bar{r})$ is the Dirac delta function], with boundary conditions (at the star's center and surface) of
\<\label{Newt_BC}
\mathcal{G}(0,\bar{r})=0, \qquad R\partial_1\mathcal{G}(R,\bar{r}) = -3\mathcal{G}(R,\bar{r}),
\?
where $\partial_1$ denotes a partial derivative taken with respect to the
first ``slot'' of the function.
If we then write [using Eq.~\eqref{Q22N}, the factor of $r^2$ from the Green function equation~\eqref{Leq}, and the
prefactor on the right-hand side of Eq.~\eqref{deltaPhieq2}]
\begin{equation}\label{GN}
G_N(r) := -5R^3r^2\mathcal{G}(R,r)/g(r),
\end{equation}
we have
\begin{widetext}
\<\label{Q22N1}
\begin{split}
Q^N_{22} &= \int_0^RG_N(r)\left[\frac{3}{2}t_{rr}' - \frac{4}{\beta}t_{r\perp}' - \frac{r}{\beta}t_{r\perp}'' -
\left(\frac{1}{\beta^2} - \frac{1}{2}\right)t_\Lambda' + \frac{3}{r}t_{rr} - \frac{\beta}{r}t_{r\perp}\right]dr\\
&= -\int_0^R\biggl\{\left[\frac{3}{2}G_N'(r) - \frac{3}{r}G_N(r)\right]t_{rr} + \left[\frac{r}{\beta}G_N''(r) - \frac{2}{\beta}G_N'(r) + \frac{\beta}{r}G_N(r)\right]t_{r\perp}
+ \left(\frac{1}{2} - \frac{1}{\beta^2}\right)G_N'(r)t_\Lambda\biggr\}dr.
\end{split}
\?
\end{widetext}
We have freely integrated by parts in obtaining the second expression, noting that the boundary terms are zero since $G_N(r)$ vanishes sufficiently rapidly as $r \to 0$ and
the stresses are zero at the surface of the star (since we assume that the shear modulus vanishes at the star's surface).\footnote{We shall treat the case where the stresses do \emph{not} vanish at the surface of the star when we consider solid strange quark stars in Sec.~\ref{SQM_computation}. Also, note that HJA claim that UCB's
expression does not include distributional contributions due to sudden changes
in the shear modulus. This is not the case---these are included due to UCB's
integration by parts (cf.\ the definition of the distributional derivative). All that the UCB derivation requires is,
e.g., that the shear modulus vanish outside of the crust, not that
it do so continuously.}
This reduces to UCB's Eq.~(64) if we take the Cowling approximation
\begin{equation}
\label{CowlingGN}
G_N(r) \to r^4/g(r),
\end{equation}
corresponding to dropping the second term on the left-hand side of
Eq.~\eqref{deltaPhieq2}.
To obtain an analogue of the expression for the maximum quadrupole
given in Eq.~(5) of Owen~\cite{OwenPRL}, we note that
UCB's argument about maximum uniform strain leading to the maximum quadrupole
still holds here for the stars we consider, since the coefficients of the stress
components in the integrand are all uniformly positive. (We have checked this numerically for each background
stellar model we consider.)
The strain tensor components are
\begin{subequations}
\label{sNewt}
\begin{align}
\label{srr}
\sigma_{rr} &= (32\pi/15)^{1/2}\bar{\sigma}_\mathrm{max},
\\
\label{srp}
\sigma_{r\perp} &= (3/2)^{1/2}\sigma_{rr},
\\
\label{sL}
\sigma_\Lambda &= 3\sigma_{rr}
\end{align}
\end{subequations}
in the case where the
star is maximally (and uniformly) strained---see Eqs.~(67) in UCB. The
breaking strain $\bar{\sigma}_\mathrm{max}$ is given by the von Mises
expression,
\begin{equation}
\label{vonMises}
\sigma_{ab}\sigma^{ab} = 2\bar{\sigma}_\mathrm{max}^2.
\end{equation}
It thus
corresponds to assuming that the lattice yields when it has stored a certain maximum
energy density. We then have
\<\label{Q22N2}
\frac{|Q_{22}^{\mathrm{max}, N}|}{\bar{\sigma}_\mathrm{max}} = \sqrt{\frac{32\pi}{15}}\int_0^R\mu(r)\left[rG_N''(r) + 3G_N'(r)\right]dr.
\?
This reduces to Eq.~(5) in Owen~\cite{OwenPRL} if we use the Cowling
approximation~\eqref{CowlingGN}.
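The bookkeeping from Eq.~\eqref{Q22N1} to Eq.~\eqref{Q22N2} amounts to inserting $t_{ab} = 2\mu\sigma_{ab}$ with the uniform-strain components~\eqref{sNewt} and collecting derivatives of $G_N$; this can be verified symbolically. In the following SymPy sketch (our own variable names; the symbol \texttt{G} stands for $G_N(r)$, and the overall sign is absorbed into the absolute value):

```python
import sympy as sp

r, mu, sbar = sp.symbols('r mu sigma', positive=True)
G = sp.Function('G')(r)   # stands for G_N(r)
beta = sp.sqrt(6)

# Uniform maximal strain, Eqs. (sNewt), with t_ab = 2 mu sigma_ab:
s = sp.sqrt(sp.Rational(32, 15)*sp.pi)*sbar      # sigma_rr
t_rr = 2*mu*s
t_rp = 2*mu*sp.sqrt(sp.Rational(3, 2))*s         # t_{r perp}
t_L  = 2*mu*3*s                                  # t_Lambda

# Integrand of the second line of Eq. (Q22N1):
integrand = -((sp.Rational(3, 2)*G.diff(r) - 3*G/r)*t_rr
              + (r*G.diff(r, 2)/beta - 2*G.diff(r)/beta + beta*G/r)*t_rp
              + (sp.Rational(1, 2) - 1/beta**2)*G.diff(r)*t_L)

# Collecting terms reproduces the integrand of the maximum-quadrupole formula:
target = -sp.sqrt(sp.Rational(32, 15)*sp.pi)*sbar*mu*(r*G.diff(r, 2) + 3*G.diff(r))
assert sp.simplify(integrand - target) == 0
```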
Note that there is no direct contribution from $\rho'$ to $G_N''$ in the
no-Cowling case, despite what one might expect from Eq.~\eqref{Leq}: Writing
$\bar{\mathcal{G}}(r) := \mathcal{G}(R,r)$ for notational simplicity, the $\rho'$ contribution from
\<
\bar{\mathcal{G}}''(r) = -(2/r)\bar{\mathcal{G}}'(r) +
[6/r^2 + 4\pi G\rho'(r)/g(r)]\bar{\mathcal{G}}(r)
\?
is exactly canceled by one from
\<
g''(r) = 6Gm(r)/r^4 - 8\pi G\rho(r)/r + 4\pi G\rho'(r)
\?
in
\<
\begin{split}
G_N''(r) &= -5R^3r^2[\bar{\mathcal{G}}''(r)/g(r) - \bar{\mathcal{G}}(r)g''(r)/\{g(r)\}^2\\
&\quad + \{\text{terms with no }\rho'\}].
\end{split}
\?
However, there is a direct contribution from
$\rho'$ to $G_N''$ (via $g''$) if we make the Cowling
approximation [Eq.~\eqref{CowlingGN}].
We shall see that this leads to a
significant
difference in the resulting contributions to the quadrupole moment from regions
of the star surrounding a sudden change in density (e.g., near the crust-core
interface, which will be relevant for the quadrupoles supported by crustal
elasticity considered by UCB and others).
Numerically, we compute $G_N$ using the standard expression for the Green function in terms of the
two independent solutions to the homogeneous equation [see, e.g., Eq.~(10.103) in Arfken and Weber~\cite{AW}~]. Since we are solely interested in the Green function evaluated at the star's surface, we can
eliminate one of the homogeneous solutions using the boundary conditions there, and only consider the
homogeneous solution that is regular at the origin, which we call $F$.
In terms of $F$, the Green function is given by
\<\label{cG_F}
\mathcal{G}(R,r) = -\frac{F(r)}{3RF(R) + R^2F'(R)}.
\?
We thus solve $\mathcal{L}_N F = 0$ [with the operator $\mathcal{L}_N$ given by Eq.~\eqref{Leq}] with the boundary conditions
$F(r_0) = 1$ and $F'(r_0) = 2/r_0$, where $r_0$ is the small inner radius used in the solution of the
OV equations, as discussed at the end of Sec.~\ref{GR}. [These boundary conditions come from
regularity at the origin, which implies that $F(r) = O(r^2)$ there.]
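As a toy illustration of this procedure (a uniform-density Newtonian star, not one of the stellar backgrounds used below): with $\rho' = 0$ in the interior, the regular homogeneous solution is $F \propto r^2$, and Eq.~\eqref{cG_F} then gives $\mathcal{G}(R,r) = -r^2/(5R^3)$, which the numerical integration reproduces. A Python sketch with SciPy (the values of $R$ and $r_0$ are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

R, r0 = 1.0, 1e-3

def homogeneous(r, y):
    # L_N F = (r^2 F')' - [6 + 4 pi G r^2 rho'/g] F = 0, with rho' = 0 (uniform density)
    F, dF = y
    return [dF, (6*F - 2*r*dF)/r**2]

# Regular solution: F(r0) = 1, F'(r0) = 2/r0, i.e. F(r) = O(r^2) at the origin.
sol = solve_ivp(homogeneous, [r0, R], [1.0, 2.0/r0],
                rtol=1e-10, atol=1e-12, dense_output=True)
F, dF = sol.sol(R)
G_surf = lambda rr: -sol.sol(rr)[0]/(3*R*F + R**2*dF)   # Eq. (cG_F)

# Analytic check: F = (r/r0)^2, so G(R, r) = -r^2/(5 R^3).
assert abs(G_surf(0.5) + 0.5**2/(5*R**3)) < 1e-6
```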
Our Green function method for obtaining the maximum quadrupole numerically may
seem more complicated than existing methods because it introduces extra steps.
However, this method is ideal for showing that maximum stress gives the maximum
quadrupole and for seeing how much stresses at different radii contribute to
the total quadrupole.
It also appears to be the simplest way of dealing with any potential
distributional contributions from the derivatives of the shear modulus, since
they are automatically taken care of by the integration by parts.
\section{General relativistic calculation of the maximum quadrupole}
\label{GR}
Here we compute the maximum quadrupole moment in general relativity, using the Regge-Wheeler
gauge~\cite{RW} relativistic stellar perturbation theory developed by \citet{TC}, as in
the similar calculation of the tidal Love number of a relativistic star by
\citet{Hinderer}.
We
start by writing down the line element corresponding to a static, even-parity,
$l = 2$ first-order perturbation of a static,
spherical, relativistic star in the Regge-Wheeler gauge [cf.\ Eq.~(14) in Hinderer~\cite{Hinderer}~]:
\<\label{ds2}
\begin{split}
ds^2 &= -[1 + H_0(r)Y_{lm}]f(r)dt^2 + [1 + H_2(r)Y_{lm}]h(r)dr^2\\
&\quad + [1 + K(r)Y_{lm}]r^2(d\theta^2 + \sin^2\theta d\phi^2).
\end{split}
\?
Here we have used the notation of Wald~\cite{Wald} for the background, so that $f$ and $h$ are the standard
Schwarzschild functions for the unperturbed star, with $f = e^{2\phi}$, where
\<
\phi'(r) = \frac{m(r) + 4\pi r^3 p}{r[r - 2m(r)]},
\?
with $\phi(R) = \log(1 - 2M/R)/2$, and
\<
h(r) = \left[1 - \frac{2m(r)}{r}\right]^{-1}.
\?
In these expressions,
\begin{equation}
m(r) := 4\pi \int_0^r \rho(\bar{r})\bar{r}^2d\bar{r}.
\end{equation}
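These background equations can be checked against the constant-density (interior Schwarzschild) solution, for which both $p(r)$ and $\phi(r)$ are known in closed form; integrating $\phi'$ outward must then reproduce the surface value $\phi(R) = \log(1 - 2M/R)/2$. A Python sketch with the illustrative values $R = 1$, $M = 0.2$ (our choice, not a model used in the text):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Constant-density (interior Schwarzschild) toy star, G = c = 1.
R, M = 1.0, 0.2
rho = 3*M/(4*np.pi*R**3)
sR = np.sqrt(1 - 2*M/R)
s = lambda rr: np.sqrt(1 - 2*M*rr**2/R**3)
p = lambda rr: rho*(s(rr) - sR)/(3*sR - s(rr))     # closed-form pressure
phi_exact = lambda rr: np.log(1.5*sR - 0.5*s(rr))  # f(r) = e^{2 phi(r)}

def dphi(rr, y):
    m = 4*np.pi*rho*rr**3/3
    return [(m + 4*np.pi*rr**3*p(rr))/(rr*(rr - 2*m))]

r0 = 1e-6
sol = solve_ivp(dphi, [r0, R], [phi_exact(r0)], rtol=1e-10, atol=1e-12)

# Integrating phi' reproduces the surface value phi(R) = log(1 - 2M/R)/2:
assert abs(sol.y[0][-1] - 0.5*np.log(1 - 2*M/R)) < 1e-6
```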
Also, recall that we write our spherical harmonics in terms of $l$ and $m$, following
UCB, even though we specialized to $l = m = 2$, and that we are now taking $G = c = 1$.
The metric perturbation is determined by $H_0$, $H_2$, and $K$, which here are sourced by the
perturbation to the star's stress-energy tensor. The appropriate stress-energy
tensor can be obtained directly
from the standard Newtonian expression~\eqref{deltatau} by simple covariance arguments, as in
Schumaker and Thorne~\cite{ST}, or from the detailed relativistic elasticity
theory of Carter and Quintana~\cite{CQ} [see their Eq.~(6.19); this is also given in Eq.~(128) of
Karlovini and Samuelsson~\cite{KaSaI}~]. All
we really need for our purposes is to note that the shear contribution is
tracefree with respect to the background metric, so that we can use the obvious
covariant generalization of the decomposition given by UCB,\footnote{Of course, this assumes that it is
possible to obtain \emph{any} symmetric tracefree tensor from the detailed relativistic expression, but---as would be expected (and can easily be seen from the expressions)---this is indeed the case, at least if one only works to first order in the perturbation, as we
do here.
Also, it is instructive to note that we do not need to know the specifics of the matter displacements that generate
the quadrupoles we consider, only that there is a tracefree contribution to the star's
stress-energy tensor whose maximum value is given by the material's shear
modulus and von Mises breaking strain.} yielding
\<\label{deltaT}
\begin{split}
\delta T_{ab} &= [\delta\rho\hat{t}_a\hat{t}_b + \delta p(g_{ab} + \hat{t}_a\hat{t}_b) -
t_{rr}(\hat{r}_a\hat{r}_b - q_{ab}/2)]Y_{lm}\\
&\quad -
t_{r\perp}f_{ab} - t_\Lambda(\tilde{\Lambda}_{ab} + h^{1/2}Y_{lm}q_{ab}/2),
\end{split}
\?
with the full stress-energy tensor given by
\<
T_a{}^b = \rho\hat{t}_a\hat{t}^b + p(\delta_a{}^b + \hat{t}_a\hat{t}^b) + \delta T_a{}^b.
\?
Here, indices now run over all four spacetime dimensions and $g_{ab}$ denotes
the background (spacetime) metric (which we use to raise and lower indices).
Additionally, we have introduced the background temporal and radial unit vectors $\hat{t}_a$ and $\hat{r}_a$;
$q_{ab}$ is the induced metric on the unit $2$-sphere;
$f_{ab} := 2r\hat{r}_{(a}\nabla_{b)}Y_{lm}/\beta$;
and
$\tilde{\Lambda}_{ab} := r^2h^{1/2}\nabla_a\nabla_bY_{lm}/\beta^2 + f_{ab}/\beta$.
Here $\hat{r}_a$ and $\nabla_a$ now have their curved-space meanings.
Our $\tilde{\Lambda}_{ab}$
differs from the Newtonian $\Lambda_{ab}$ [from UCB, given in our Eq.~\eqref{Lambda}] due to the insertion of $h^{1/2}$. This insertion is necessary
for $\tilde{\Lambda}_{ab}$ to be transverse and orthogonal to $f_{ab}$ (with respect to the background spacetime metric).
The same logic leads to the introduction of the factor of $h^{1/2}$ multiplying $q_{ab}$ in the $t_\Lambda$ term in
Eq.~\eqref{deltaT}; it is there so that the $t_\Lambda$ term is orthogonal to the $t_{rr}$ term. We
have used UCB's convention
for the relative sign between the perfect fluid and shear portions of the stress-energy tensor, though we
have reversed the overall sign. (However, we used the UCB convention proper
in Sec.~\ref{Newt}.)
The factor of $h^{1/2}$ in the coefficient of $t_\Lambda$ leads to a factor
of $h^{-1}$ in the
strain $\sigma_\Lambda$ that corresponds to the von Mises
breaking strain~\eqref{vonMises}.
We thus replace the Newtonian Eq.~\eqref{sL} with
\begin{equation}
\label{sLGR}
\sigma_\Lambda = 3\sigma_{rr}/h,
\end{equation}
leaving Eqs.~\eqref{srr} and~\eqref{srp} unchanged.
One can now obtain an equation for $H_0$ from the perturbed Einstein equations,
as in Ipser~\cite{Ipser}. (The other two metric functions, $H_2$ and $K$, can be expressed in terms of $H_0$;
these expressions are given by Ipser.) The concordance for notation
is $\nu = 2\phi$, $e^\nu = f$, $\lambda = 2\psi$, $e^\lambda = h$, $\rho_1 = -\delta\rho$, $p_1 = -\delta p$,
$\mathfrak{P}_2 = t_{rr}$, $\mathfrak{Q}_1 = h^{1/2}t_{r\perp}/\beta$, and $\mathfrak{S} = h^{1/2}t_\Lambda/\beta^2$. Additionally, Ipser's $H_0$ is the
negative of ours. The relevant result is given in Ipser's Eqs.~(27)--(28), and
is (in our notation)
\<
H_0'' + \left(\frac{2}{r} + \phi' -\psi'\right)H_0' + \mathcal{P}(r)H_0= 8\pi h^{1/2}\mathcal{S}(r),
\?
where
\<\label{cP}
\mathcal{P}(r) := 2\phi'' + 2\phi'\left(\frac{3}{r} - \phi' -\psi'\right) + \frac{2\psi'}{r} - \frac{\beta^2}{r^2}h
\?
and
\<\label{cS}
\begin{split}
\mathcal{S}(r) &:=
h^{1/2}(\delta\rho + \delta p - t_{rr}) +
2\biggl\{(3 - r\phi')\frac{t_{r\perp}}{\beta} + r\frac{t_{r\perp}'}{\beta}\\
&\,\quad + [r^2\phi'' + r\phi'(5 - r\phi') + r\psi' -\beta^2h/2 + 1]\frac{t_\Lambda}{\beta^2}\\
&\,\quad + r^2\phi'\frac{t_\Lambda'}{\beta^2}\biggr\}\\
&=: h^{1/2}(\delta\rho + \delta p) + \mathcal{S}_{[t]}(r).
\end{split}
\?
Here we have defined $\psi := (1/2)\log h$ and written $\mathcal{S}_{[t]}$
for the contributions from shear stresses. (The ``$=:$'' notation indicates that the quantity
being defined is on the right-hand side of the equality.)
We now wish to eliminate $\delta\rho$ and $\delta p$ in favor of the shear stresses, as in the Newtonian calculation. We use the same projections of stress-energy conservation as in the Newtonian case (projecting onto the quantities defined by
the background spacetime, for simplicity) along with the Oppenheimer-Volkov
(OV) equations, giving
\<
\begin{split}
\delta\rho + \delta p &= \frac{1}{\phi'}\biggl[-\frac{H_0'}{2}(\rho + p)
-\delta p' + t_{rr}' + \left(\frac{3}{r} + \phi'\right)t_{rr}\\
&\quad - \frac{\beta}{r}h^{1/2}t_{r\perp}\biggr]
\end{split}
\?
and
\<
\begin{split}
\delta p &= -\frac{H_0}{2}(\rho + p) - \frac{t_{rr}}{2} + \frac{1}{\beta h^{1/2}}\left[(3 + r\phi')t_{r\perp} + rt_{r\perp}'\right]\\
&\quad + h^{1/2}\left(\frac{1}{\beta^2} - \frac{1}{2}\right)t_\Lambda.
\end{split}
\?
Using the second expression to substitute for $\delta p'$ in the first, we have
\<\label{drplusdp}
\begin{split}
\delta\rho + \delta p &= \frac{1}{\phi'}\biggl\{\frac{H_0}{2}(\rho' + p') + \left[\frac{3}{r} + \phi'\right]t_{rr} + \frac{3}{2}t_{rr}'\\
&\quad - \frac{1}{\beta h^{1/2}}\biggl[\left(\frac{\beta^2 h}{r} + \phi' + r\phi'' - \psi'[3 + r\phi'] \right)t_{r\perp}\\
&\quad + (4 + r[\phi' - \psi'])t_{r\perp}' + rt_{r\perp}''\biggr] +
\left(\frac{1}{2} - \frac{1}{\beta^2}\right)\\
&\quad\times h^{1/2}(\psi't_\Lambda + t_\Lambda')\biggr\}\\
&=: \frac{H_0}{2\phi'}(\rho' + p') + \frac{\mathcal{S}_{[\delta\rho,\delta p]}(r)}{\phi'}.
\end{split}
\?
The equation for $H_0$ thus becomes
\<\label{cLGR}
\begin{split}
(\mathcal{L}_\mathrm{GR} H_0)(r) &:= H_0'' + \left(\frac{2}{r} + \phi' -\psi'\right)H_0'\\
&\,\quad + \left[\mathcal{P}(r) - 4\pi h\frac{\rho' + p'}{\phi'}\right]H_0\\
&\,= 8\pi h^{1/2}[h^{1/2}\mathcal{S}_{[\delta\rho,\delta p]}(r)/\phi' + \mathcal{S}_{[t]}(r)].
\end{split}
\?
[$\mathcal{P}(r)$ and $\mathcal{S}_{[t]}(r)$ are given in Eqs.~\eqref{cP} and~\eqref{cS}, respectively.]
As expected, this
reduces to Eq.~\eqref{deltaPhieq2} in the Newtonian
limit [where we have $H_0 \to 2\delta\Phi$ and $\phi' \to g(r)$].
We now want to write the equation for $H_0$ in Sturm-Liouville form in order to obtain its Green
function easily. To do this, we note that the appropriate ``integrating factor'' (for the first
two terms) is $r^2(f/h)^{1/2}$, which gives
\begin{multline}
\label{H0_S-L}
[r^2(f/h)^{1/2}H_0']' + r^2(f/h)^{1/2}\left[\mathcal{P}(r) - 4\pi h\frac{\rho' + p'}{\phi'}\right]H_0\\
= 8\pi r^2f^{1/2}[h^{1/2}\mathcal{S}_{[\delta\rho,\delta p]}(r)/\phi' + \mathcal{S}_{[t]}(r)].
\end{multline}
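The integrating-factor identity used here, namely that $w(r) = r^2(f/h)^{1/2}$ satisfies $w' = w\,(2/r + \phi' - \psi')$ since $f = e^{2\phi}$ and $h = e^{2\psi}$, can be spot-checked numerically. The profiles in this sketch are arbitrary smooth test functions, not a stellar solution:

```python
# Numerical spot check (illustration only) that w(r) = r^2 sqrt(f/h)
# obeys w' = w * (2/r + phi' - psi'), using f = e^(2 phi), h = e^(2 psi).
# The phi and psi profiles below are arbitrary smooth test functions.
import math

phi = lambda r: -0.3 * math.exp(-r)        # arbitrary test profile
psi = lambda r: 0.2 * r**2 / (1 + r**2)    # arbitrary test profile

def w(r):
    return r**2 * math.exp(phi(r) - psi(r))   # r^2 sqrt(f/h)

def deriv(g, r, eps=1e-6):
    """Central finite difference."""
    return (g(r + eps) - g(r - eps)) / (2.0 * eps)

r = 1.3
lhs = deriv(w, r)
rhs = w(r) * (2.0 / r + deriv(phi, r) - deriv(psi, r))
```

The residual `lhs - rhs` is zero up to finite-difference error, confirming that the first two terms of the $H_0$ equation are indeed a total derivative after multiplication by $w$.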
We also need the boundary conditions, which are given by matching $H_0$ onto a
vacuum solution at the surface of the star. The vacuum
solution that is regular at infinity is given by Eq.~(20) in Hinderer~\cite{Hinderer} with $c_2 = 0$,
viz.,
\<\label{H0_BC}
\begin{split}
H_0(R) &= c_1\biggl[\left(\frac{2}{\mathcal{C}} - 1\right)\frac{\mathcal{C}^2/2 + 3\mathcal{C} - 3}{1 - \mathcal{C}}\\
&\quad + \frac{6}{\mathcal{C}}\left(1 - \frac{1}{\mathcal{C}}\right)\log\left(1 - \mathcal{C}\right)\biggr],
\end{split}
\?
where we have evaluated this at the star's surface ($r = R$) and
defined the star's compactness
\<
\label{compactness}
\mathcal{C} := 2GM/Rc^2
\?
(now returning to showing factors of $G$ and $c$ explicitly).
We require that $H_0$
and $H_0'$ be continuous at the star's surface. The value of $c_1$ obtained from
this matching of the internal and external solutions gives us
the quadrupole moment. If we use the quadrupole moment amplitude that reduces
to the UCB integral [given in our Eq.~\eqref{Q22}] in the Newtonian limit, we have
\<\label{Q22rel}
Q_{22} = \frac{G^2}{c^4}\frac{M^3c_1}{\pi}.
\?
[This expression comes from inserting a pure $l = m = 2$ density perturbation
into Eq.~(2) in Hinderer~\cite{Hinderer}, contracting the free indices with
unit position vectors, performing the angular integral, for
which the expressions in Thorne~\cite{Thorne} are useful,
and noting that the result is $(8\pi/15)Y_{22}$ times
our Eq.~\eqref{Q22}. The
given result then follows immediately from Hinderer's Eqs.~(7), (9), and (22);
we reverse the overall sign since we have reversed the UCB sign convention for the
stress-energy tensor.]
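For reference, the conversion~\eqref{Q22rel} from the matching constant $c_1$ to $Q_{22}$ is a one-line computation in CGS units. The helper below is a hypothetical convenience function, with the $M^3$ scaling made explicit; the constant values are standard CGS figures:

```python
# Hedged sketch of Eq. (Q22rel) in CGS units: Q22 = G^2 M^3 c1 / (pi c^4).
# The helper name and sample inputs are illustrative, not from the text.
import math

G_CGS = 6.674e-8   # Newton's constant, cm^3 g^-1 s^-2
C_CGS = 2.998e10   # speed of light, cm s^-1
MSUN = 1.989e33    # solar mass, g

def Q22_from_c1(M_solar, c1):
    """Quadrupole moment in g cm^2 from the matching constant c1."""
    M = M_solar * MSUN
    return G_CGS**2 * M**3 * c1 / (math.pi * C_CGS**4)
```

Note the strong $M^3$ dependence: doubling the mass at fixed $c_1$ increases $Q_{22}$ by a factor of $8$.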
We then have a Green function for $Q_{22}$ of
\<\label{cG_GR}
\begin{split}
\mathcal{G}_\mathrm{GR}(R,r) &= \left(\frac{2GM}{c^2}\right)^3\left(1 - \frac{2GM}{Rc^2}\right)^{-1}\\
&\quad\times\frac{\mathcal{U}(r)}{c^2R^2[\mathcal{U}'(R)H_0(R) - \mathcal{U}(R)H_0'(R)]}
\end{split}
\?
(including the overall factor of $8\pi G/c^4$ that
multiplies the source).
Here $\mathcal{U}$ is given by $\mathcal{L}_\mathrm{GR}\mathcal{U} = 0$ [$\mathcal{L}_\mathrm{GR}$ is given in
Eq.~\eqref{cLGR}], with boundary conditions $\mathcal{U}(r_0) = 1$ and $\mathcal{U}'(r_0) = 2/r_0$.
[Compare Eq.~(10.103) in Arfken and Weber~\cite{AW}, as well as our Newtonian
version above.]
Additionally, $H_0(R)$ and $H_0'(R)$ are given by the boundary conditions~\eqref{H0_BC}
with $c_1 \to 1$. [One obtains this expression by first computing the Green function
for $H_0(R)$ following Arfken and Weber, then dividing through by the quantity in brackets in Eq.~\eqref{H0_BC}
to obtain $c_1$, and finally using Eq.~\eqref{Q22rel} to obtain $Q_{22}$. We have also noted that
$1/f \to h \to 1/(1 - 2GM/Rc^2)$ at the star's surface.]
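The structure of this Green function construction can be illustrated in a simpler setting: for the flat-space $l = 2$ radial operator $u'' + (2/r)u' - 6u/r^2$ (integrating factor $r^2$), the homogeneous solutions regular at the center and at infinity are $r^2$ and $r^{-3}$, and the Arfken--Weber recipe assembles the Green function from them. This sketch is an analogy only, not the relativistic operator $\mathcal{L}_\mathrm{GR}$:

```python
# Illustration of the Sturm-Liouville Green function recipe on the
# flat-space l = 2 radial operator L u = u'' + (2/r) u' - 6 u / r^2,
# with integrating factor p(r) = r^2. This is a simplified analogue,
# not the relativistic operator in the text.

def u_in(r):   return r**2      # homogeneous solution regular at r = 0
def u_out(r):  return r**(-3)   # homogeneous solution regular at infinity

def wronskian_times_p(r):
    """p(r) W(r) = r^2 (u_in u_out' - u_in' u_out); r-independent."""
    du_in, du_out = 2.0 * r, -3.0 * r**(-4)
    return r**2 * (u_in(r) * du_out - du_in * u_out(r))

def green(r, rp):
    """G(r, r') = u_in(r_<) u_out(r_>) / [p W]."""
    lo, hi = min(r, rp), max(r, rp)
    return u_in(lo) * u_out(hi) / wronskian_times_p(1.0)

# check L G = 0 away from the source point, by finite differences
eps, r, rp = 1e-4, 0.7, 0.3
d2 = (green(r + eps, rp) - 2 * green(r, rp) + green(r - eps, rp)) / eps**2
d1 = (green(r + eps, rp) - green(r - eps, rp)) / (2 * eps)
residual = d2 + (2.0 / r) * d1 - 6.0 * green(r, rp) / r**2
```

The relativistic calculation replaces $r^2$ and $r^{-3}$ with the numerically computed $\mathcal{U}$ and the exterior solution, but the assembly of the Green function from the two solutions and the (constant) weighted Wronskian is the same.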
We thus define, for notational simplicity, two relativistic generalizations
of $G_N(r)$: One,
\<\label{GGR}
G_\mathrm{GR}(r) := \frac{r^2(fh)^{1/2}\mathcal{G}_\mathrm{GR}(R,r)}{\phi'},
\?
for
the contributions from $\mathcal{S}_{[\delta\rho,\delta p]}$, and one,
\<\label{GbGR}
\bar{G}_\mathrm{GR}(r) := r^2f^{1/2} \mathcal{G}_\mathrm{GR}(R,r),
\?
for the contributions
from $\mathcal{S}_{[t]}$.
With these definitions, the integral expression for the quadrupole in terms
of the stresses and the
structure of the background star is
\<\label{Q22GR}
\begin{split}
Q_{22} &= \int_0^R\left[G_\mathrm{GR}(r)\mathcal{S}_{[\delta\rho,\delta p]}(r)+\bar{G}_\mathrm{GR}(r)\mathcal{S}_{[t]}(r)\right]dr\\
&= \int_0^R(\mathcal{C}_{rr}t_{rr} + \mathcal{C}_{r\perp}t_{r\perp} + \mathcal{C}_\Lambda t_\Lambda)dr,
\end{split}
\?
where
\begin{subequations}\label{cCs}
\begin{gather}
\mathcal{C}_{rr} := \left(\frac{3}{r} + \phi'\right)G_\mathrm{GR}(r) - \frac{3}{2}G_\mathrm{GR}'(r) -
h^{1/2}\bar{G}_\mathrm{GR}(r),\\
\begin{split}
\mathcal{C}_{r\perp} &:= -\frac{\beta h^{1/2}}{r}G_\mathrm{GR}(r) + \frac{2 + r(\phi' + \psi')}{\beta h^{1/2}}G_\mathrm{GR}'(r)\\
&\,\quad - \frac{r}{\beta h^{1/2}}G_\mathrm{GR}''(r) + \frac{4 - 2r\phi'}{\beta}\bar{G}_\mathrm{GR}(r) - 2\frac{r}{\beta}\bar{G}'_\mathrm{GR}(r),
\end{split}\\
\begin{split}
\mathcal{C}_\Lambda &:= \left(\frac{1}{\beta^2} - \frac{1}{2}\right)h^{1/2}G_\mathrm{GR}'(r)\\
&\,\quad + \frac{2r\phi'(3 - r\phi') + 2r\psi' - \beta^2h + 2}{\beta^2}\bar{G}_\mathrm{GR}(r)\\
&\,\quad - \frac{2r^2\phi'}{\beta^2}\bar{G}_\mathrm{GR}'(r),
\end{split}
\end{gather}
\end{subequations}
and we have integrated by parts twice to obtain the second equality in
Eq.~\eqref{Q22GR}, using the same argument as in our Newtonian calculation.
We now look at the maximum quadrupole. This is still given by the uniformly
maximally strained case: We have checked numerically that the
coefficients of the three
stress terms are always negative for all the background stars we consider. We thus
have a maximum quadrupole
given by inserting Eqs.~\eqref{ts}, \eqref{srr}, \eqref{srp}, and
\eqref{sLGR} into Eq.~\eqref{Q22GR}, yielding
\begin{widetext}
\<\label{Q22GR2}
\frac{|Q_{22}^\mathrm{max, GR}|}{\bar{\sigma}_\mathrm{max}} = \sqrt{\frac{32\pi}{15}}\int_0^R\mu(r)\biggl\{\left[\frac{6}{r}(h^{1/2} - 1) - 2\phi'\right]G_\mathrm{GR}(r) + \left[3 - \frac{r}{h^{1/2}}(\phi' + \psi')\right]G_\mathrm{GR}'(r)
+ \frac{r}{h^{1/2}}G_\mathrm{GR}''(r) + \mathcal{Q}^\mathrm{stress}\biggr\}dr,
\?
where
\<
\mathcal{Q}^\mathrm{stress} := 2\left[\frac{r\phi'(r\phi' - 3) - r\psi' - 1}{h} + r\phi' + h^{1/2} + 1\right]\bar{G}_\mathrm{GR}(r) + 2r\left(\frac{r\phi'}{h} + 1\right)\bar{G}_\mathrm{GR}'(r)
\?
\end{widetext}
is the contribution from the stresses' own gravity. We have split it off both for
ease of notation and because it is negligible except for the
most massive and compact stars, as illustrated
below. The contributions from the density and pressure perturbations are so much larger due to the factor of $1/\phi'$ present in $G_\mathrm{GR}$ [cf.\ Eqs.~\eqref{GGR} and~\eqref{GbGR}]. It is easy to see that Eq.~\eqref{Q22GR2} reduces to
Eq.~\eqref{Q22N2} in the Newtonian limit, where $h\to 1$,
and we can neglect the contributions involving $\phi'$, $\psi'$, and $\mathcal{Q}^\mathrm{stress}$.
\begin{figure}[htb]
\begin{center}
\epsfig{file=Q22_integrands_SLy_h0_7e19.eps,width=8cm,clip=true}
\end{center}
\caption[$Q_{22}$ integrands for the SLy EOS and a
$0.500\,M_{\odot}$ star]{\label{Q22_integrands_SLy_2} The $Q_{22}$ integrands (without the factor of $\mu\bar{\sigma}_\mathrm{max}$) for the SLy EOS and a
$0.500\,M_{\odot}$ star with a compactness of $0.12$.}
\end{figure}
\begin{figure}[htb]
\begin{center}
\epsfig{file=Q22_integrands_SLy_h0_2e20.eps,width=8cm,clip=true}
\end{center}
\caption[$Q_{22}$ integrands for the SLy EOS and a
$1.40\,M_{\odot}$ star]{\label{Q22_integrands_SLy_4} The $Q_{22}$ integrands (without the factor of $\mu\bar{\sigma}_\mathrm{max}$) for the SLy EOS and a
$1.40\,M_{\odot}$ star with a compactness of $0.35$.}
\end{figure}
\begin{figure}[htb]
\begin{center}
\epsfig{file=Q22_integrands_SLy_h0_7e20.eps,width=8cm,clip=true}
\end{center}
\caption[$Q_{22}$ integrands for the SLy EOS and a maximum mass star]{\label{Q22_integrands_SLy_6} The $Q_{22}$ integrands (without the factor of $\mu\bar{\sigma}_\mathrm{max}$) for the SLy EOS and a maximum mass,
$2.05\,M_{\odot}$ star, with a compactness of $0.60$.}
\end{figure}
We now show how the relations between the different maximal-strain $Q_{22}$ Green functions
[given by the integrands in Eqs.~\eqref{Q22N2} and~\eqref{Q22GR2} without the factors of
$\mu$ (but with the overall prefactor)] vary with
EOS, as well as with the mass of the star for a given EOS.
This gives
an indication of how much difference the various approximations make in different situations. We start with the unified SLy EOS~\cite{DH}, obtained by Haensel
and Potekhin~\cite{HP}
(using the table provided by
the Ioffe group~\cite{HPY} at~\cite{Ioffe}), which is a standard choice for making predictions about crustal
quadrupoles (e.g., in Horowitz~\cite{Horowitz}, HJA, and our Sec.~\ref{results_crust}). Here we illustrate the
changes in the Green functions with mass for stars with masses ranging from $0.5\,M_{\odot}$ to the EOS's maximum mass of $2.05\,M_{\odot}$;
see Figs.~\ref{Q22_integrands_SLy_2}, \ref{Q22_integrands_SLy_4}, and \ref{Q22_integrands_SLy_6}. (All three Green functions agree extremely closely for stars around the EOS's minimum mass of $0.094\,M_{\odot}$, so we do not show this case, particularly because such
low-mass neutron stars are of unclear astrophysical relevance.)
These stars' compactnesses [defined in Eq.~\eqref{compactness}]
range from $0.12$ to
$0.6$. Note that Fig.~\ref{Q22_integrands_SLy_6} has a different vertical scale than the other two plots, due to the suppression of the quadrupole for massive, compact stars (discussed below).
\begin{figure}[htb]
\begin{center}
\epsfig{file=Q22_integrand_Hy1_h0_max_ratios.eps,width=8cm,clip=true}
\end{center}
\caption[Ratios of the $Q_{22}$ integrands for the Hy1 EOS and a maximum mass
star]{\label{Q22_integrands_Hy1_ratios} Ratios of the $Q_{22}$ integrands
with the Newtonian Cowling approximation integrand for the Hy1 EOS and a maximum mass,
$2.06\,M_{\odot}$ star, with a compactness of $0.49$. Note that the top and bottom plots have completely
separate vertical axis scalings.}
\end{figure}
We illustrate the ratios of the various $Q_{22}$ Green functions to the
Newtonian Cowling approximation one for the maximum mass ($2.05\,M_{\odot}$) hybrid
star using the Hy1 EOS (see Table~I in~\cite{J-MO1}) in Fig.~\ref{Q22_integrands_Hy1_ratios}.\footnote{As
discussed in~\cite{J-MO1}, for our low-density EOS, we use the same combination of
the Baym, Pethick, and Sutherland (BPS)~\cite{BPS} EOS for $n_B < 0.001\text{ fm}^{-3}$ and
the Negele and Vautherin~\cite{NV} EOS for $0.001\text{ fm}^{-3} < n_B < 0.08\text{ fm}^{-3}$
used by Lattimer and Prakash~\cite{LP2001} ($n_B$ is the baryon number density).
These were obtained from the table provided by
Kurkela~\emph{et al.}~\cite{Kurkelaetal} at~\cite{Kurkelaetal_URL}.
Bulk quantities of hybrid stars such as the mass and quadrupole moment (from core deformations) do not depend much on the precise choice of low-density EOS.} We see the overestimate of the Newtonian
no Cowling approximation calculation for perturbations in the core, particularly compared with the
general relativistic (GR) version, and also see the
overestimate of the Newtonian Cowling approximation version for crustal perturbations. (We do not make
a similar plot for the solid strange quark star case, since the expressions for the maximum
quadrupole in this case end up being rather different than the integrated-by-parts ones presented in the previous sections, as we shall see in Sec.~\ref{SQM_computation}.)
In all these cases, we compute the stellar background fully relativistically, using
the OV equations and identifying the OV equations' Schwarzschild
radial coordinate
with the Newtonian radial coordinate when necessary. We have used the enthalpy form of the OV
equations given by Lindblom~\cite{Lindblom} and implemented the inner boundary
condition by taking the star to have an inner core of radius $r_0 =
100\text{ cm}$, whose mass is given by $(4/3)\pi r_0^3\epsilon_0$, where
$\epsilon_0$ is the energy density corresponding to the central enthalpy that
parametrizes the solution. (The spike near the origin seen in the bottom plot in Fig.~\ref{Q22_integrands_Hy1_ratios} is due to this implementation of the inner boundary condition and has a
negligible effect on the computed maximum quadrupoles.) In all cases, we have used {\sc{Mathematica}}~7's
default methods to solve the differential equations, find roots, etc. We have computed as many derivatives
as possible analytically, to aid numerical accuracy, e.g., using the OV equations to substitute for derivatives of the pressure, and also using the Green function equations to
express second derivatives of the Green functions in terms of the functions themselves and their first derivatives.
\section{Results}
\label{results2}
\subsection{Maximum $Q_{22}$ for crustal deformations}
\label{results_crust}
Here we consider the maximum quadrupoles from elastic deformations of a
nonaccreted
crust in three possible situations, following HJA. In particular, we use the SLy EOS
(as do Horowitz~\cite{Horowitz} and HJA, though they do not refer to it by that name) and
impose two comparison crustal thicknesses to ascertain how much this
affects the maximum quadrupole. Here we use the
same rough
model for the crust's shear modulus used by HJA. We
also consider the more detailed model for the shear modulus
obtained using the crustal composition provided by Douchin and
Haensel~\cite{DH} (also used by Horowitz~\cite{Horowitz} and HJA).
Here the crust's thickness is fixed to the value given in that work. In this case,
we also consider a different high-density EOS that yields much less compact stars with larger crusts.
Specifically, the two comparison crustal thicknesses are given by taking the base of the
crust to occur at densities of $2.1\times10^{14} \text{ g cm}^{-3}$ (thick
crust, for comparison with UCB) or $1.6\times10^{14} \text{ g cm}^{-3}$ (thin
crust, following a suggestion by Haensel~\cite{Haensel}), while Douchin and
Haensel place the bottom of the crust at a density of
$1.28\times10^{14} \text{ g cm}^{-3}$.
For the two comparison cases, we take the shear modulus to be
$10^{16}\text{ cm}^2\text{ s}^{-2}$ times the star's density (in $\text{g cm}^{-3}$). As
illustrated in HJA's Fig.~2, this underestimates the shear modulus by less than $50\%$, except at the very extremes of the density range considered.\footnote{Note that Fig.~3 in HJA is not in agreement with their Fig.~2. When we reproduce those figures, we
find that the ratio $\mu/\rho$ is considerably closer to $10^{16}\text{ cm}^2\text{ s}^{-2}$ over all the density range than the trace shown in HJA's Fig.~3, so their approximation is better than it would appear from that figure.}
We plot the quadrupole moment and ellipticity for these two cases for masses between
$\sim 1.2\,M_{\odot}$ (around the minimum observed neutron star mass---see~\cite{Lattimer_table})
and the SLy EOS's maximum mass of $2.05\,M_{\odot}$ in Fig.~\ref{Q22s_vs_M_SLy}.
\begin{figure}[htb]
\begin{center}
\epsfig{file=Q22s_vs_M_SLy_c16.eps,width=8cm,clip=true}
\end{center}
\caption{\label{Q22s_vs_M_SLy} The Newtonian Cowling, Newtonian no
Cowling, and full relativistic (including stress contributions) values for the
maximum quadrupole deformations (and fiducial ellipticity) due to crustal
stresses versus mass for two choices of crustal thickness. These are computed using the SLy EOS with the rough HJA recipe for the shear
modulus and a breaking strain of $0.1$.}
\end{figure}
In addition to the quadrupole moments, we also show the fiducial ellipticity
$\epsilon_\mathrm{fid} = \sqrt{8\pi/15}Q_{22}/I_{zz}$
[e.g., Eq.~(2) of~\cite{OwenPRL}~].
Here $I_{zz}$ is the star's principal moment of
inertia, for which we use the fiducial value of $I_{zz} = 10^{38} \text{ kg m}^2 =
10^{45} \text{ g cm}^2$ used in the LIGO/Virgo papers rather than the true value for a given mass and EOS,
which can be greater by a factor of a few.
We do this for easy comparison with the observational
papers, since they frequently quote
results in terms of this fiducial ellipticity instead of the quadrupole
moment, which is the quantity truly measured.
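Concretely, the conversion to the fiducial ellipticity is a single multiplication; in this minimal sketch the input quadrupole is purely illustrative:

```python
# Minimal sketch of the fiducial-ellipticity conversion quoted in the
# text: epsilon_fid = sqrt(8*pi/15) * Q22 / I_zz, with the fiducial
# I_zz = 1e45 g cm^2. The Q22 value used below is purely illustrative.
import math

I_ZZ_FID = 1.0e45  # g cm^2, fiducial moment of inertia (LIGO/Virgo)

def epsilon_fid(Q22_cgs):
    """Fiducial ellipticity for a quadrupole moment Q22 in g cm^2."""
    return math.sqrt(8.0 * math.pi / 15.0) * Q22_cgs / I_ZZ_FID

eps = epsilon_fid(1.0e39)  # an illustrative quadrupole of 1e39 g cm^2
```

Since $\sqrt{8\pi/15} \simeq 1.29$, a quadrupole of $10^{39}\text{ g cm}^2$ corresponds to a fiducial ellipticity of $\sim 1.3\times10^{-6}$.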
\emph{Nota bene} (N.B.):\ We present these fiducial ellipticities \emph{only} for comparison with LIGO/Virgo
results, not to give
any indication of the size of the deformation. While the true ellipticity
gives a measure of the size of the deformation in the Newtonian case (up to ambiguities from the fact that the true density distribution is nonuniform), it does not do so in
any obvious way in the relativistic case. Nevertheless, the relativistic shape of the star's surface can be obtained from its quadrupole deformation, as shown in~\cite{NKJ-M_shape}. However, if one wished to know, for instance, how much the star is deformed as a function of radius, one would need to calculate this using a detailed relativistic theory of elasticity to relate the stresses to the matter displacements, as in Penner~\emph{et al.}~\cite{Penneretal}.
In the more detailed
case, we use the HJA version of the Ogata and Ichimaru~\cite{OI} shear modulus,
combined with the Douchin and Haensel~\cite{DH} results for the crust's
composition.
This is [correcting a typo in HJA's Eq.~(20)],
\<\label{mueff}
\mu_\mathrm{eff} = 0.1194\left(\frac{4\pi}{3}\right)^{1/3}\left(\frac{1-X_n}{A}n_b\right)^{4/3}(Ze)^2,
\?
where $X_n$ is the fraction of neutrons outside of nuclei, $A$ and $Z$ are the
atomic and proton number of the nuclei, respectively, $n_b$ is the baryon number density, and
$e$ is the fundamental charge.
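Equation~\eqref{mueff} is straightforward to evaluate in CGS units. In the sketch below, the composition values ($Z$, $A$, $X_n$, $n_b$) are rough assumptions for illustration near the base of the inner crust, not entries from the Douchin and Haensel table:

```python
# Hedged numerical sketch of the effective shear modulus formula:
# mu_eff = 0.1194 * (4*pi/3)^(1/3) * ((1 - X_n) n_b / A)^(4/3) * (Z e)^2.
# The composition values below are rough assumptions for illustration.
import math

E_CHARGE = 4.803e-10  # fundamental charge in esu (CGS)

def mu_eff(Z, A, X_n, n_b):
    """Effective shear modulus in erg/cm^3; n_b in cm^-3."""
    n_nuclei = (1.0 - X_n) * n_b / A   # number density of nuclei
    prefac = 0.1194 * (4.0 * math.pi / 3.0) ** (1.0 / 3.0)
    return prefac * n_nuclei ** (4.0 / 3.0) * (Z * E_CHARGE) ** 2

# assumed inner-crust values: Z = 40, A = 200, X_n = 0.75,
# n_b = 0.08 fm^-3 = 8e37 cm^-3
mu = mu_eff(40, 200, 0.75, 8.0e37)
```

With these assumed inputs the result is of order $10^{30}\text{ erg cm}^{-3}$, the familiar scale for the shear modulus near the base of the crust; note the $n_b^{4/3}$ scaling with density at fixed composition.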
Since HJA's study, there have been a few improvements
to the Ogata and Ichimaru result: Horowitz and Hughto~\cite{HH} have computed
the effects of charge screening, finding a $\sim 7\%$ reduction in the shear
modulus. Baiko~\cite{BaikoCPP} has also considered a relativistic model
for the electron polarizability and arrived at similar conclusions. Indeed, Baiko's results suggest that screening will yield an even smaller
correction in the innermost portion of the crust, where the shear modulus is the largest, and the
electrons are the most relativistic, with a relativity parameter over an order of magnitude larger than the largest
Baiko considers. (However, the ion charge numbers are also almost always somewhat greater than the largest Baiko considers,
particularly at the very innermost portion of the crust, which will tend to increase the effect.)
Baiko~\cite{Baiko} has also recently computed quantum corrections, and finds
that they reduce the shear modulus by up to $\sim 18\%$ in some regimes.
However, in our case, the reduction will be much smaller, based on the scaling
of $\rho^{1/6}/(ZA^{2/3})$ given near the end of Baiko's Sec.~6. Even though
our densities are over an order of magnitude greater, the nuclei we
consider are also over an order of magnitude more massive than the ${}^{12}$C
composition Baiko considers, so
the quantum mechanical effects end up being reduced by about an order of
magnitude from the number Baiko quotes.
We thus use the same Ogata and Ichimaru result used by HJA, noting that the
resulting quadrupoles might be reduced by less than $10\%$ due to charge screening
and quantum effects---an error which is small compared to other
uncertainties, such as crust thickness and the composition of dense matter. Indeed, there
is a factor of $\sim 2$ uncertainty in the shear modulus due to angle averaging (even
disregarding whether the implicit assumption of a polycrystalline structure for the crust is
warranted): As shown by
Hill~\cite{Hill}, the Voigt average used by Ogata and Ichimaru is an upper bound on the
true shear modulus of a polycrystal. A lower bound is given by the Reuss average (also
discussed in Hill~\cite{Hill}), for which the prefactor in Eq.~\eqref{mueff} would be
$0.05106$.
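The two prefactors follow from Hill's bounds for a cubic crystal with shear constants $c' = (C_{11}-C_{12})/2$ and $C_{44}$. The bcc Coulomb-crystal constants used in this sketch [in units of $n_N(Ze)^2/a$] are assumed for illustration; they reproduce the two quoted prefactors:

```python
# Sketch of the Voigt (upper) and Reuss (lower) bounds on the shear
# modulus of a cubic polycrystal (Hill), with shear constants
# c' = (C11 - C12)/2 and C44. The bcc Coulomb-crystal values below
# (in units of n_N (Ze)^2 / a) are assumed for illustration.

def voigt(cp, c44):
    """Voigt (upper) bound on the polycrystal shear modulus."""
    return (2.0 * cp + 3.0 * c44) / 5.0

def reuss(cp, c44):
    """Reuss (lower) bound on the polycrystal shear modulus."""
    return 5.0 * cp * c44 / (2.0 * c44 + 3.0 * cp)

cp, c44 = 0.02454, 0.1827  # bcc Coulomb crystal (dimensionless, assumed)
mu_V = voigt(cp, c44)      # ~0.1194, the Ogata-Ichimaru prefactor
mu_R = reuss(cp, c44)      # ~0.0511, the Reuss prefactor quoted above
```

In the isotropic limit $c' = C_{44}$ the two bounds coincide, and $\mu_R \le \mu_V$ always, which is the source of the factor of $\sim 2$ uncertainty discussed here.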
Note that there would be even
further corrections to the shear modulus due to pasta phases (see~\cite{PP}), but such phases are not present in the Douchin and Haensel model~\cite{DH}. We also note that the Douchin and Haensel results only include the very
innermost portion of the outer crust. However, this lack of coverage has a negligible effect on the final
results for the quadrupoles, since the neglected region has at most half the radial extent of the inner crust and the shear modulus in this region is orders of magnitude below its maximum value at the bottom of the inner crust. We have checked this explicitly using the detailed calculations
of the outer crust composition due to R{\"u}ster, Hempel, and Schaffner-Bielich~\cite{RHS-B}, available at~\cite{HempelURL}.
We plot the maximum quadrupole and ellipticity in the three approximations for the detailed shear modulus
model in Fig.~\ref{Q22s_vs_M_SLy_DH_HJA}. Here we show these for the SLy EOS proper, and also for
a high-density EOS that yields much less compact stars (and a crust that is $\sim 2$ times as thick), and thus larger maximum quadrupoles. For the latter EOS, we have chosen (for simplicity) the LKR1 hybrid EOS from~\cite{J-MO1}---the maximum compactnesses for the two EOSs are $0.60$ (SLy) and $0.43$ (LKR$1$). (We show the much larger quadrupoles that could be
supported by the mixed phase in the core for the LKR$1$ EOS in Fig.~\ref{Q22_vs_M_EOSs}, but here just show the crustal quadrupoles using the Douchin and Haensel model for the crust.)
\begin{figure}[htb]
\begin{center}
\epsfig{file=Q22s_vs_M_crustal.eps,width=8cm,clip=true}
\end{center}
\caption{\label{Q22s_vs_M_SLy_DH_HJA} The Newtonian Cowling, Newtonian no
Cowling, and full relativistic (including stress contributions) values for the
maximum quadrupole deformations (and fiducial ellipticity) due to crustal
stresses versus mass, for the SLy EOS with the detailed Douchin and Haensel + Ogata and
Ichimaru model for the shear modulus and a breaking strain of $0.1$, plus the crustal quadrupoles for the LKR$1$ EOS with the same crustal model.
}
\end{figure}
In all of these crustal results, in addition to the expected relativistic suppression of the quadrupole (which
becomes quite dramatic for compact, high-mass stars), we also find that the
Newtonian Cowling approximation slightly overestimates the quadrupole (by
$\sim 25$--$50\%$), as observed by HJA (though they found the overestimate to be
considerably greater, at least a factor of a few). This overestimate is due to the cancellation of
contributions from $\rho'$ when one drops
the Cowling approximation (see the discussion at the end of Sec.~\ref{Newt}).
The overall decrease in the maximum crustal quadrupole with mass is due primarily to the fact that the crust thins by a factor of $\sim 4$ (SLy) or $\sim 2$ (LKR1) in going from a $1\,M_{\odot}$ star to
the maximum mass star, though the quadrupole itself receives even further suppressions with mass due to relativistic
effects and an increased gravitational field.
\subsection{Maximum $Q_{22}$ for hybrid stars}
\label{results_hybrid}
\begin{figure}[htb]
\begin{center}
\epsfig{file=Q22s_vs_M_Hy1_sigma80_mu_corr2.eps,width=8cm,clip=true}
\end{center}
\caption{\label{Q22s_vs_M_Hy1_sigma80} The Newtonian Cowling, Newtonian no
Cowling, and full relativistic (including stress contributions) values for the
maximum quadrupole deformations (and fiducial ellipticity) of hybrid stars versus
mass, using the Hy1 EOS with a surface tension of
$\sigma = 80\text{ MeV fm}^{-2}$ and a breaking strain of $0.1$.}
\end{figure}
Here we display the maximum quadrupole deformations as a
function of stellar mass for each of the hybrid EOS parameter sets considered
in~\cite{J-MO1}. (N.B.: Most of the results from~\cite{J-MO1} we use or refer to here were
corrected in the erratum to that paper.) We start by showing these values calculated in the various
approximations using the Hy1 EOS (with a surface tension of
$\sigma = 80\text{ MeV fm}^{-2}$; see Table~I in~\cite{J-MO1}) in
Fig.~\ref{Q22s_vs_M_Hy1_sigma80}, and then
restrict our attention to the relativistic results. (The relation between the results of the different approximations is roughly
the same for all the hybrid EOSs we consider.)
Here the maximum quadrupoles increase with mass, since the volume of mixed phase increases with mass, and this is more than enough to offset the suppressions due to relativity and the increased gravitational field.
\begin{figure}[htb]
\begin{center}
\epsfig{file=Q22s_vs_M_Hy1_sigmas_mu_corr2.eps,width=8cm,clip=true}
\end{center}
\caption{\label{Q22_vs_M_Hy1_sigmas} The full relativistic maximum quadrupole deformations (and fiducial ellipticity) of hybrid
stars versus mass, using the Hy1 EOS with various surface tensions $\sigma$
and a breaking strain of $0.1$.}
\end{figure}
We also show how the maximum relativistic quadrupole
varies with the surface tension for the Hy1 EOS in
Fig.~\ref{Q22_vs_M_Hy1_sigmas}. The slightly larger quadrupoles for lower surface tensions
at low masses are expected, due to a slightly larger shear modulus at low
pressures for lower surface tensions---see Fig.~10 in~\cite{J-MO1}. In fact, despite differences of
close to an order of magnitude in the high-pressure shear modulus for the Hy$1$ EOS
in going from a surface tension of $20\text{ MeV fm}^{-2}$ to one of
$80\text{ MeV fm}^{-2}$ (see Fig.~10 in~\cite{J-MO1}), the
differences in the resulting maximum quadrupoles are at most
a factor of a few (for large masses). This is not unexpected: These quantities
are dominated by the portions of the mixed phase further out in the star, where
the shear moduli have a much weaker dependence on the surface tension.
(Additionally, the fact that larger surface tensions lead to smaller shear moduli at low
pressures helps to minimize the effect, though the maximum
quadrupoles still increase with increasing surface tension for high masses, as
expected.)
\begin{figure}[htb]
\begin{center}
\epsfig{file=Q22s_vs_M_EOSs_mu_corr2.eps,width=8cm,clip=true}
\end{center}
\caption{\label{Q22_vs_M_EOSs} The full relativistic maximum quadrupole deformations (and fiducial ellipticity) of hybrid
stars versus mass, using the EOSs from Table~I in~\cite{J-MO1}, all with a surface tension of
$\sigma = 80\text{ MeV fm}^{-2}$
and a breaking strain of $0.1$.}
\end{figure}
Finally, we show the maximum quadrupoles for different
hybrid EOSs in Fig.~\ref{Q22_vs_M_EOSs}. (Note that these curves start somewhat above the minimum
masses for which the mixed phase is present, since we are mostly interested in the significantly larger
maximum quadrupoles possible for larger masses.) The
considerable differences are due primarily to the substantial variations with EOS parameters in the extent
of the mixed phase in stable stars, as well as to the
EOS dependence of the stars'
compactnesses (see Table~I in~\cite{J-MO1}), not to variations in the magnitude of the shear modulus for a
given quark matter fraction (compared in Fig.~12 in~\cite{J-MO1}). In particular, the LKR1 EOS produces stars with a very large region of mixed
phase---up to $72.5\%$ of the star's radius---and a (relatively) small maximum
compactness---only $0.433$. (Note that our quadrupole curve for
the LKR1 EOS ends slightly short of the EOS's maximum mass of $1.955\,M_{\odot}$,
only going to $1.948\,M_{\odot}$, due to problems with the numerics.)
N.B.:\ These maximum quadrupoles may all be overly optimistic. First, as was discussed in Sec.~\ref{results_crust}, the averaging used to obtain the effective shear modulus only gives an upper bound on the true shear modulus of a
polycrystal. (We do not quote results for the Reuss lower
bound here, since it is only straightforward to obtain for the three-dimensional droplet phases. However,
we note that preliminary investigations, using the Reuss bound for the droplet phases and the
Voigt bound for the rest, give reductions in the maximum quadrupoles by factors of up to $\sim 5$ for lower masses.)
Second, the relatively large value we have chosen for the surface tension also increases the maximum
quadrupoles, while recent calculations place the surface tension on the low side ($\sim 10$--$30\text{ MeV fm}^{-2}$)---see~\cite{PKR} for the latest results. Nevertheless, as we show in the Appendix, the mixed phase is still favored by global energy
arguments even for these large surface tensions. The maximum quadrupoles are also affected by the
method of EOS interpolation and the lattice contributions to the EOS, as is illustrated in the Appendix, though
the largest change is only $\sim40\%$ (at least for the LKR$1$ and Hy$1'$ EOSs, the two EOSs that yield the largest quadrupoles).
Note that LIGO's current upper limits on fiducial ellipticity
in the most interesting cases (the Crab pulsar, PSR~J0537--6910, and
Cas~A)~\cite{LIGO_psrs2010, LIGO_CasA} are $\sim 10^{-4}$, corresponding to a
quadrupole moment of $\sim 10^{41} \text{ g cm}^2$.
The first hybrid star estimate by \citet{OwenPRL} was an order of magnitude
lower.
Thus our new results here show that current LIGO upper limits are interesting
not only for quark stars but also for hybrid stars, at least high-mass ones.
Indeed, the most extreme case we consider, the LKR$1$ EOS with high surface tensions, gives maximum
quadrupoles of a $\text{few} \times 10^{42}\text{ g cm}^2$, which are above and therefore relevant to the
limits set by Virgo for the Vela pulsar~\cite{LIGO_Vela}.
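The conversion between quadrupole moment and fiducial ellipticity used in these comparisons can be sketched in a few lines. This is a minimal sketch assuming the common fiducial convention $\epsilon = \sqrt{8\pi/15}\,Q_{22}/I_{zz}$ with $I_{zz} = 10^{45}\text{ g cm}^2$, which reproduces the correspondence between $\sim 10^{-4}$ and $\sim 10^{41}\text{ g cm}^2$ quoted above:

```python
from math import pi, sqrt

# Fiducial moment of inertia; the value 1e45 g cm^2 is the conventional
# choice (an assumption here, not taken from this section).
I_ZZ_FIDUCIAL = 1.0e45  # g cm^2

def fiducial_ellipticity(q22):
    """Convert an l = m = 2 quadrupole moment Q22 (g cm^2) to a fiducial
    ellipticity, assuming epsilon = sqrt(8*pi/15) * Q22 / I_zz."""
    return sqrt(8.0 * pi / 15.0) * q22 / I_ZZ_FIDUCIAL

# A quadrupole of ~1e41 g cm^2 then corresponds to an ellipticity of
# ~1.3e-4, consistent with the LIGO upper limits discussed in the text.
print(fiducial_ellipticity(1.0e41))
```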
\subsection{Maximum $Q_{22}$ for crystalline color superconducting quark stars}
\label{SQM_computation}
Here we consider stars made of crystalline color superconducting quark matter, for which the
shear modulus has been estimated by Mannarelli, Rajagopal, and Sharma~\cite{MRS}.\footnote{This
estimate is not angle averaged, but Mannarelli, Rajagopal, and Sharma's calculation has relatively large uncontrolled remainders, so we do not worry about the effects of angle averaging here.}
[See Eq.~(1) in Haskell~\emph{et al.}~\cite{Haskelletal} for the expression in cgs units.] Such
stars have also been treated (with varying degrees of sophistication) by Haskell~\emph{et al.}~\cite{Haskelletal}, \citet{Lin}, and~\citet{KS}. However, only Lin considers the case of a solid quark star, as we will do here, and does so using quite a rough model. (The others consider crystalline color superconducting
cores in hybrid stars.)
Since strange quark stars have a nonzero surface density---and solid quark stars have a nonzero
surface shear modulus, with the standard density-independent treatment of the superconducting
gap---we have to make some changes to our previously obtained expressions
in order to treat them.
First, the outer boundary condition changes. The potential (in the Newtonian case) and metric
perturbation (in the GR case) are no longer continuous at the star's surface, due to the presence
of $\rho'$ in both equations [see Eqs.~\eqref{Leq} and~\eqref{H0_S-L}]. As discussed in Hinderer~\emph{et al.}~\cite{Hindereretal} (following Damour and Nagar~\cite{DN}), one can obtain the distributional contribution to the boundary conditions [Eqs.~\eqref{Newt_BC} and~\eqref{H0_BC}] using the usual procedure of integrating the defining
differential equation over $[R-\epsilon,R+\epsilon]$ and taking the limit $\epsilon\searrow 0$.
In the Newtonian case, this gives [defining $\rho_-$ as the density immediately inside the star's
surface and $R^-$ to mean evaluation at $R-\epsilon$ in the limit $\epsilon\searrow 0$]
\<
\delta\Phi'(R^-) = \left[\frac{4\pi G}{g(R)}\rho_- - \frac{3}{R}\right]\delta\Phi(R),
\?
and in the GR case, we have (with $G = 1$)
\<
H_0'(R^-) = H_{0,\mathrm{old}}'(R) + \frac{4\pi h}{\phi'(R)}\rho_-H_0(R),
\?
where $H_{0,\mathrm{old}}'(R) $ is computed using Eq.~\eqref{H0_BC}.
We thus make the replacement $3RF(R) \to [3 - 4\pi G \rho_- R/g(R)]RF(R)$ in the expression
for the Newtonian Green function [Eq.~\eqref{cG_F}], and the replacement
$H_0'(R) \to H_{0,\mathrm{old}}'(R) + 4\pi h\rho_-H_0(R)/\phi'(R)$ in the GR case [Eq.~\eqref{cG_GR}].
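To make the origin of the Newtonian boundary condition explicit, one can carry out the integration just described. Assuming the distributional piece of the source in Eq.~\eqref{Leq} enters as $(4\pi G/g)\rho'\delta\Phi$ (only this term survives the $\epsilon\searrow 0$ limit), with $\rho' \supset -\rho_-\delta(r - R)$ at the surface, and using the $l = 2$ exterior solution $\delta\Phi \propto r^{-3}$, so that $\delta\Phi'(R^+) = -(3/R)\delta\Phi(R)$, integrating over $[R-\epsilon,R+\epsilon]$ gives
\<
\delta\Phi'(R^+) - \delta\Phi'(R^-) = -\frac{4\pi G}{g(R)}\rho_-\delta\Phi(R),
\?
which rearranges to the Newtonian boundary condition quoted above.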
These changes in the boundary conditions increase the maximum quadrupole by a factor of $\lesssim 2$ in the example case considered below; the largest effect is for the least massive stars considered.
Second, we would have to keep the boundary terms at the outer boundary when integrating by parts to
obtain the expressions for the maximum quadrupole, since the shear modulus no longer vanishes at the
star's surface. However, since here the shear modulus is smooth, it is numerically preferable not to perform any integration by parts, thus avoiding potential problems with large cancellations between the surface and integrated terms. In this case, the expressions for the quadrupole assuming the UCB maximum
uniform strain are [cf.\ Eqs.~\eqref{Q22N2} and \eqref{Q22GR2}]
\<
\label{Q22UCBs}
\frac{|Q_{22}^{\text{UCB strain}, N}|}{\bar{\sigma}_\mathrm{max}} = \sqrt{\frac{32\pi}{15}}\int_0^RG_N(r)[r\mu''(r) - \mu'(r)]dr
\?
and
\<
\begin{split}
\frac{|Q_{22}^\text{UCB strain, GR}|}{\bar{\sigma}_\mathrm{max}} &= \sqrt{\frac{32\pi}{15}}\int_0^R\bigl[G_\mathrm{GR}(r)\mathcal{I}_{[\delta\rho,\delta p]}^\mathrm{UCB}(r)\\
&\quad + \bar{G}_\mathrm{GR}(r)\mathcal{I}_{[t]}^\mathrm{UCB}(r)\bigr]dr,
\end{split}
\?
where
\begin{widetext}
\begin{subequations}
\begin{align}
\mathcal{I}_{[\delta\rho,\delta p]}^\mathrm{UCB}(r) &:= \left[\frac{6}{r}(h^{1/2} - 1) - 2\phi' + \frac{r\phi'' + \phi'(1 - r\psi') - \psi'}{h^{1/2}}\right]\mu(r) + \left[\frac{2 + r(\phi' - \psi')}{h^{1/2}} - 3\right]\mu'(r) + \frac{r\mu''(r)}{h^{1/2}},\\
\mathcal{I}_{[t]}^\mathrm{UCB}(r) &:= 2\left\{\frac{r\phi'[r(\phi' + 2\psi') - 5] - r\psi' - r^2\phi'' - 1}{h} + r\phi' + h^{1/2}\right\}\mu(r) - 2r\left(\frac{r\phi'}{h} + 1\right)\mu'(r).
\end{align}
\end{subequations}
\end{widetext}
However, these expressions will not actually yield the maximum quadrupole in this case, due to an important
difference between the cases where the shear modulus vanishes at the star's surface and those where it does not. It is simplest to see this in the
Newtonian case for a star with a constant shear modulus: Since the UCB maximum strain expression~\eqref{Q22UCBs} only
depends upon derivatives of the shear modulus, it
predicts a \emph{zero} maximum quadrupole, which seems absurd. One can, however, make a small
adjustment to the form of the maximum strain one considers to yield a nonzero quadrupole in this case. This modification also yields considerably larger maxima in the realistic case we
consider, where the shear modulus is close to constant---it decreases by less than a factor of $2$ in going from
the star's center to its surface in the example case we consider below.
Specifically, in the case of a slowly varying shear modulus, with $\mu(r)\gg|r\mu'(r)|,|r^2\mu''(r)|$, appropriate for strange quark stars, we want the terms involving $\mu$ itself to be
largest. The appropriate choice for the strain in this case is most readily apparent
from inspection of the Newtonian expression for the maximum quadrupole in terms of the stress tensor components, Eq.~\eqref{Q22N1}. We want the maximum contribution from the undifferentiated terms, which implies that we want $t_{rr}$ and $-t_{r\perp}$ to be as large as possible. For $t_\Lambda$, we note that since $\mu'(r)<0$,
we also want $-t_\Lambda$ to be as large as possible. Realizing that we can freely change the sign of any of
the $\sigma_\bullet$ that give maximum uniform strain [given for the Newtonian case in Eqs.~\eqref{sNewt}; cf.\ Eq.~(65) in UCB], we thus reverse the sign of $\sigma_{r\perp}$ and $\sigma_\Lambda$. [The same logic holds for the more involved GR case, as well, where
the appropriate expression for $\sigma_\Lambda$ will be the negative of Eq.~\eqref{sLGR}.]
The resulting expressions for the putative maximum quadrupole in this case are thus
\<\begin{split}
\frac{|Q_{22}^{\text{mod.\ strain}, N}|}{\bar{\sigma}_\mathrm{max}} &= \sqrt{\frac{32\pi}{15}}\int_0^RG_N(r)\biggl[\frac{12}{r}\mu(r) + 5\mu'(r)\\
&\quad +r\mu''(r)\biggr]dr
\end{split}
\?
and
\<
\begin{split}
\frac{|Q_{22}^{\text{mod.\ strain, GR}}|}{\bar{\sigma}_\mathrm{max}} &= \sqrt{\frac{32\pi}{15}}\int_0^R\bigl[G_\mathrm{GR}(r)\mathcal{I}_{[\delta\rho,\delta p]}^\mathrm{mod}(r)\\
&\quad + \bar{G}_\mathrm{GR}(r)\mathcal{I}_{[t]}^\mathrm{mod}(r)\bigr]dr,\end{split}
\?
where
\begin{widetext}
\begin{subequations}
\begin{align}
\mathcal{I}_{[\delta\rho,\delta p]}^\mathrm{mod}(r) &:= \left[\frac{6}{r}(h^{1/2} + 1) + 2\phi' + \frac{r\phi'' + \phi'(1 - r\psi') - \psi'}{h^{1/2}}\right]\mu(r) + \left[\frac{2 + r(\phi' - \psi')}{h^{1/2}} + 3\right]\mu'(r) + \frac{r\mu''(r)}{h^{1/2}},\\
\mathcal{I}_{[t]}^\mathrm{mod}(r) &:= - 2\left\{\frac{ r\phi'[5- r(\phi'+2\psi')] + r\psi' + r^2\phi'' + 1}{h} - r\phi' + h^{1/2}\right\}\mu(r)
- 2r\left(\frac{r\phi'}{h} + 1\right)\mu'(r).
\end{align}
\end{subequations}
\end{widetext}
In principle, these merely give a lower bound on the
maximum quadrupole, unlike the case in which the shear modulus vanishes below the surface, where there is a firm argument that maximum uniform strain maximizes the quadrupole. However, even if they do not give the absolute
maximum, they should be quite close for cases like the one we consider here, where the shear modulus varies quite slowly.
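As a quick check of why this modification is needed, consider again the Newtonian case with a strictly constant shear modulus, so that $\mu'(r) = \mu''(r) = 0$: the UCB-strain expression~\eqref{Q22UCBs} vanishes identically, while the modified-strain expression above reduces to
\<
\frac{|Q_{22}^{\text{mod.\ strain}, N}|}{\bar{\sigma}_\mathrm{max}} = 12\sqrt{\frac{32\pi}{15}}\,\mu\int_0^R\frac{G_N(r)}{r}dr,
\?
which is manifestly nonzero.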
\begin{figure}[htb]
\begin{center}
\epsfig{file=Q22s_vs_M_SQM1_final.eps,width=8cm,clip=true}
\end{center}
\caption{\label{Q22s_vs_M_SQM1} The Newtonian Cowling, Newtonian no
Cowling, and full relativistic (including stress contributions) values for the
quadrupole deformations (and fiducial ellipticity) of maximally strained strange quark stars versus
mass, using the EOS discussed in the text with a breaking strain of $0.1$. We show these both for the
standard UCB uniform maximum strain, and our modification that yields significantly larger quadrupoles in this case.}
\end{figure}
Applying these expressions to a specific case, we use the strange quark matter EOS calculated by Kurkela,
Romatschke, and Vuorinen (KRV)~\cite{KRV}, generating an EOS for the parameter values of
interest using the {\sc{Mathematica}} notebooks available at~\cite{KRV_URL}.
The relevant parameters are the values of the $\overline{\mathrm{MS}}$ renormalization point,
$\Lambda_{\overline{\mathrm{MS}}}$, and the strange quark mass, $m_s$, both at a scale of $2$~GeV, along with the coefficient in the relation between the renormalization scale and the
quark chemical potential, $X$, the color superconductivity gap parameter, $\Delta$ (taken to be
independent of density),\footnote{Note that $\Delta$ enters the KRV EOS through a color flavor locked (CFL)
pressure term. This is not quite appropriate for the crystalline color superconducting phase we consider here, since it assumes that all the quarks pair, while only some of them pair in the crystalline phase. However, as
discussed in Sec.~VI~B of~\cite{Alford2008}, the condensation energy of the crystalline phases is easily
$1/3$ to $1/2$ that of the CFL phase with zero strange quark mass, which is the pressure
contribution used by KRV. We have thus not altered this term in our calculations, since the contribution is
already approximate, in that it assumes a density-independent gap parameter.
Moreover, we only consider a fairly low value of $\Delta$,
while Knippel and Sedrakian~\cite{KS} suggest that the crystalline phase might be favored up to $\Delta = 100$~MeV. Our EOS may thus simply correspond to a slightly larger value of $\Delta$, which would
increase the maximum quadrupole, since the shear modulus scales as $\Delta^2$.} and the
minimal quark chemical potential at which strange quark matter exists, $\mu_{q,\mathrm{min}}$.
We consider the EOS obtained by choosing $\Lambda_{\overline{\mathrm{MS}}} = 355$~MeV, $m_s = 70$~MeV, $X=4$, $\Delta = 10$~MeV, and $\mu_{q,\mathrm{min}} = 280$~MeV.
This parameter set yields a maximum mass of $2.45\,M_{\odot}$, with a maximum compactness of $0.467$.
These parameter choices were generally inspired by those considered at~\cite{KRV_URL}, though with a smaller value of $\Delta$, to place us well within the crystalline
superconducting regime. However, as Knippel and Sedrakian~\cite{KS} suggest, the crystalline phase
could still be favored for considerably larger $\Delta$s, up to $\sim 100$~MeV, for the low-temperature
case relevant for neutron stars. We thus note that increasing $\Delta$ decreases the maximum mass, and increases the
maximum quadrupole, though the latter is increased by considerably less than the na{\"\i}ve scaling of $\Delta^2$ one would
expect from the scaling of the shear modulus, likely due to the increased compactness of the stars with
larger $\Delta$s: For $\Delta = 100$~MeV, we have a maximum mass and compactness of $2.12\,M_{\odot}$ and $0.508$, respectively, and a maximum
quadrupole of $\sim 3.5\times 10^{45}\text{ g cm}^2$ for a $1.4\,M_{\odot}$ star, $\sim 20$ times that for $\Delta = 10$~MeV. However, one must bear in
mind that our perturbative treatment starts to become questionable with such large gap parameters, for which the maximum shear stresses are more than $10\%$ of the background's energy density. The uncontrolled remainders in the Mannarelli,
Rajagopal, and Sharma~\cite{MRS} calculation of the shear modulus also increase as the gap parameter
increases.
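The gap between the na{\"\i}ve scaling and the computed result can be made concrete with a line of arithmetic; this is a minimal sketch using only the numbers quoted above:

```python
# Naive expectation: the shear modulus scales as Delta^2, so going from
# Delta = 10 MeV to Delta = 100 MeV would suggest a (100/10)^2 = 100-fold
# increase in the maximum quadrupole.
naive_factor = (100.0 / 10.0) ** 2

# The computed increase quoted above for a 1.4 solar-mass star is only
# ~20, so the larger compactness at high Delta removes most of the naive
# enhancement.
quoted_factor = 20.0

print(naive_factor, naive_factor / quoted_factor)  # 100.0 5.0
```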
We show the quadrupole for a maximally uniformly strained star in the three approximations (Newtonian
Cowling, Newtonian no Cowling, and GR) for both the UCB and modified maximum strain choices for this EOS in Fig.~\ref{Q22s_vs_M_SQM1}. Here we have used a breaking strain of $0.1$, by the same high
pressure argument as in the mixed phase case. (While the very outermost portions of the star are at low
pressure, the parts that are at a lower pressure than the crustal case for which the $0.1$ breaking strain
was calculated make negligible
contributions to the quadrupole.)
N.B.:\ To obtain the EOS used for this figure, we made some slight modifications to the KRV {{\sc{EoScalc}}} {\sc{Mathematica}} notebook so that it would output particle number densities on a denser mesh for low strange quark chemical potentials. This then gave an EOS table with better low-pressure coverage than their default settings produced. We still needed to perform
an extrapolation of the EOS to zero pressure, where we found that a linear extrapolation of the energy
density and quark chemical potential in terms of the pressure using the lowest two entries of the table provided a good fit.
(More involved approaches involving fitting to more points and/or a quadratic extrapolation produce very similar results.)
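The zero-pressure extrapolation described above can be sketched as follows. The table entries here are purely illustrative placeholders, not actual KRV output:

```python
def linear_extrapolate_to_zero_pressure(p, y):
    """Linearly extrapolate y(p) to p = 0 using the two lowest-pressure
    table entries (p[0] < p[1]); returns the intercept y(0)."""
    slope = (y[1] - y[0]) / (p[1] - p[0])
    return y[0] - slope * p[0]

# Illustrative (made-up) lowest two table entries: pressure, energy
# density, and quark chemical potential.
p_table = [1.0e32, 2.0e32]      # dyn cm^-2 (placeholder values)
eps_table = [4.00e14, 4.05e14]  # g cm^-3 (placeholder values)
mu_table = [280.0, 281.0]       # MeV (placeholder values)

eps0 = linear_extrapolate_to_zero_pressure(p_table, eps_table)
mu0 = linear_extrapolate_to_zero_pressure(p_table, mu_table)
print(eps0, mu0)
```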
Additionally, it is worth pointing out that applying the KRV results to compact stars
pushes their second-order perturbative calculation towards the edge of its domain of validity. However,
in our case, the smallest value of the quantum chromodynamics (QCD) renormalization scale we consider is
$1.12$~GeV, at which value the QCD coupling constant is $\sim 0.45$. Thus, the uncontrolled remainders in
the expansion are suppressed by at least a factor of $\sim 0.1$. (While Rajagopal and Shuster~\cite{RS} find that perturbative QCD calculations of the color superconducting gap are only reliable at energy scales of
$\gtrsim 10^5$~GeV, the specifics of this calculation are rather different from the calculation of the EOS we
are considering here, where the gap is taken as an input parameter.) While it is unreasonable to expect
this calculation to be a truly accurate description of strange quark matter, it is not clear that any of the alternative descriptions of strange quark matter are \emph{a priori} guaranteed to be a better description of the physics, given the very considerable uncertainties associated
with this phase of matter.
\section{Discussion}
\label{discussion}
Previous studies of the tidal and magnetic deformations of compact
stars have found similar relativistic suppressions of quadrupole moments with
compactness.
In the tidal case, see the Love number computations in~\cite{Hinderer, BP, DN, Hindereretal, PPL}.
In the case of magnetic deformations, the expected suppressions are seen in,
e.g., \cite{IS, CFG, YKS, FR}.
In fact, since the largest compactness considered in these latter papers is only
$0.48$ (in~\cite{IS}), one imagines that they overestimate the maximum quadrupoles
by at least a factor of a few for more compact stars (for a fixed magnitude of magnetic
field).
As was argued by Damour and Nagar~\cite{DN} in the tidal case, all these suppressions are primarily related to the ``no-hair'' property of black holes: The largest relativistic suppression we find comes from the boundary conditions
[through the $H_0(R)$ and $H_0'(R)$ in the Green function's denominator---see Eq.~\eqref{cG_GR}], where
one matches on to the external vacuum spacetime. For instance, for the SLy EOS's maximum
compactness of $0.6$, $H_0(R)$ and $H_0'(R)$ are $\sim 3.5$ and $\sim 6$ times their
Newtonian values
[which can be obtained from the first term of Eq.~(21) in Hinderer~\cite{Hinderer}~].
In fact, these ratios go to infinity in the formal black hole limit, where the compactness
approaches unity, as required by the
no-hair property, and discussed by Damour and Nagar~\cite{DN} (see
their Secs.~IV~C and VII~A, but note that their definition for the compactness is half of ours). This implies that the stiffness of spherically symmetric
curved vacuum spacetime suppresses the quadrupole. The quadrupole is also
suppressed by a larger effective gravitational acceleration (given by $\phi'$), which appears
in the denominator of $G_\mathrm{GR}$, replacing the Newtonian $g(r)$ [cf.\ Eqs.~\eqref{GN} and~\eqref{GGR}]. (But
recall that we always compute the background stellar structure relativistically, so this
larger acceleration \emph{only} affects the perturbation equations, and not, e.g., the thickness of
the crust for a given mass and EOS, which is the same in both the Newtonian and
relativistic calculations of the quadrupole.)
Our results imply that nearly all of the Newtonian computations of quadrupoles due to elastic
deformations of relativistic stars overestimate the quadrupole moment, often by at
least a factor of a few. The only exceptions we have found are
for low-to-mid mass strange quark stars and for elastic stresses in the cores of neutron stars around $0.5\,M_{\odot}$. In
both of these cases, the Newtonian Cowling
approximation is a slight underestimate for contributions to the quadrupole, though the Newtonian no Cowling
version is still an overestimate. See Fig.~\ref{Q22_integrands_SLy_2} for an illustration in the core case;
but note that neutron stars with such low masses are not known to exist in nature.
The overestimate from performing a Newtonian Cowling approximation calculation
can be a factor of $\sim 6$ for massive stars whose quadrupole is
being generated by an elastic deformation near the
crust-core interface, as considered by UCB and others. This is due in part to the
sudden changes in density at that interface entering directly through $g''$, as
discussed at the end of Sec.~\ref{Newt}.
However, the calculations by Horowitz~\cite{Horowitz} for crustal deformations of very low mass stars only receive negligible
corrections (of $\lesssim5\%$), since he considers
compactnesses of $\sim 0.01$. In fact, one makes even
smaller errors in using the Cowling approximation to treat these stars,
since the changes in density in the crust (times $4\pi G r^2$) are much smaller than the star's gravitational field there.
No neutron stars with such low masses have ever been observed (nor is there
a compelling mechanism for forming them). Nevertheless, Horowitz remarks that
the detection of gravitational waves from elastically deformed neutron stars will,
\emph{ceteris paribus}, be biased towards low(er) mass neutron stars, if one
considers deformations generated by crustal stresses. This is an important
point, particularly when considering the astronomical interpretation of
detections (or even upper limits), and the results we present here make the
bias against high-mass stars even stronger. (This bias also applies to solid
quark stars, though there it is rather weak. It does \emph{not} apply to hybrid
stars, however, where it is high-mass stars that can sustain the largest quadrupoles.)
Of course, one must remember that all of these values are maxima, assuming a
maximally strained star, while there is no reason, \emph{a priori}, for a given star to be
maximally strained.
Moreover, as UCB and HJA note, these calculations assume
that all the strain goes into the $l = m = 2$ perturbation, though strain in
other modes (e.g., the $l = 2$, $m = 0$ mode due to rotation) can push the
lattice closer to its breaking strain while not
increasing the $l = m = 2$ quadrupole.
\section{Conclusions and outlook}
\label{concl2}
We have presented a method for calculating the maximum elastic quadrupole
deformation of a relativistic star with a known shear modulus and breaking strain.
We then applied this method to stars whose elastic deformations are
supported by a shear modulus from the Coulomb lattice of nuclei in the crust, a hadron--quark mixed phase in the core, or crystalline superconducting strange quark matter throughout the star. (In the last case, we have made the requisite changes to the method so that it is valid when the star has a nonzero surface
density and the shear modulus does not vanish at the star's surface.) In all but the strange quark case, we find that the
relativistic quadrupole is suppressed, compared with the standard,
Newtonian Cowling approximation calculation of the quadrupole, at least for
stars with masses of $\gtrsim 1\,M_{\odot}$ (corresponding to the observed masses
of neutron stars) and the EOSs we have investigated. These suppressions can be
as large as a factor of $\sim 4$ in the hybrid case, and
$\sim 6$ in the crustal case. In the strange quark star case, the Newtonian Cowling approximation
calculation slightly underestimates the quadrupole (by tens of percent) for low-to-standard-mass stars, but still overestimates it by a factor of $\sim 2$ at higher
masses.
These suppressions strengthen the Horowitz~\cite{Horowitz} argument that
searches for gravitational waves from elastically deformed neutron stars
supported by crustal stresses are biased towards lower-mass stars. The same argument also
applies to strange quark stars, though there the suppressions with increasing mass are less
severe (and the maximum quadrupoles are all considerably larger). However,
this argument does not apply to quadrupole deformations of hybrid stars, since
the increase in the size of the region of mixed phase with increasing mass
dominates the various suppressions.
Our results also imply that many of the
previous calculations of elastic quadrupoles (e.g.,~\cite{Lin, Haskelletal,
KS, UCB, HJA}) will need their results revised downwards. (While we find much larger maximum quadrupoles for solid strange quark stars than did Lin~\cite{Lin}, this is only because we assume a breaking
strain $10$ times that assumed by Lin. If we take the same $10^{-2}$ breaking strain as does Lin, then we find a suppression of a factor of a few, though this is very likely within the uncertainties of Lin's calculation,
which assumed a uniform density, incompressible star with a uniform shear modulus.)
It is instructive to compare our results with the numbers quoted in Pitkin's review~\cite{Pitkin}. All of these were obtained by Pitkin using scalings given in the aforementioned papers,
sometimes updating to the Horowitz and Kadau~\cite{HK} breaking strain, and provide a good overview of the standard Newtonian predictions.
None of our detailed calculations for maximum crustal quadrupoles approach the
high values Pitkin obtained using UCB's fitting formula (as corrected by
\citet{OwenPRL}). However, our very largest hybrid star quadrupoles are an order of magnitude
above Pitkin's quoted maximum, even if one only assumes a breaking strain of $10^{-2}$, as does Pitkin. Additionally, our estimates for maximum solid quark star quadrupoles ($\sim 10^{44}\text{ g cm}^2$ for $1.4\,M_{\odot}$ stars) are considerably larger than the ones quoted by Pitkin (based on a different shear modulus model), even if we reduce them by an order of magnitude due to scaling the breaking strain to Pitkin's $10^{-2}$. In fact, they
are in the same range as those Pitkin quotes for a model for crystalline superconducting hybrid stars (with an optimistic gap parameter $5$ times the one we used for solid quark stars, leading to a shear modulus $\sim 40$ times our shear modulus's maximum value).
Even with the relativistic suppressions, we obtain maximum quadrupole deformations of
$\text{a few}\times10^{42}\text{ g cm}^2$
in the hybrid case for a very stiff hadronic EOS, and
$\text{a few}\times10^{41}\text{ g cm}^2$ for more realistic cases. In both
situations, the largest maximum quadrupoles are given by the most massive stars. These
values are proportional to the breaking strain and assume that the Horowitz and Kadau~\cite{HK}
breaking strain of about $0.1$ is applicable to the mixed phase. Such large
quadrupole deformations were previously thought only to be possible for solid
quark stars (see~\cite{OwenPRL, Lin, Haskelletal, KS}), or from crustal deformations in the very
low-mass neutron stars considered by Horowitz~\cite{Horowitz}. These large
deformations (corresponding to fiducial ellipticities of
$\text{a few}\times10^{-3}$ in the extreme case, and
$\sim 5\times10^{-4}$ in a more realistic case)
could be detected by current LIGO searches for gravitational waves from certain known
neutron stars~\cite{LIGO_psrs2010, LIGO_CasA, LIGO_Vela}. (However, we must note that there is no reason to assume that
such isolated stars are anywhere near maximally strained, even neglecting the
uncertainties in the description of their interiors.)
The prospects for
crustal quadrupoles are now somewhat less optimistic, and definitely favor
lower-mass stars. However, for a canonical $1.4\,M_{\odot}$ neutron star, we find that the maximum
relativistic crustal quadrupole is in the range $\sim\text{(1--6)}\times 10^{39}\text{ g cm}^2$ [corresponding to
fiducial ellipticities of $\sim\text{(1--8)}\times 10^{-6}$], depending on the model used for the crust and the
high-density EOS. (Note that the fully consistent Douchin and Haensel model with its associated high-density EOS yields the lowest numbers. Additionally, there is the possibility of a further reduction of up to $\sim 2$ due to the angle averaging procedure used to obtain the shear modulus.) On the high side, these numbers are consistent
with those given previously for breaking strains of $0.1$ by Horowitz~\cite{HK, Horowitz},\footnote{But recall
that the results from Horowitz~\cite{Horowitz} were obtained using the SLy EOS and crustal composition results, so they
are the same as our Newtonian Cowling approximation SLy predictions, given in Fig.~\ref{Q22s_vs_M_SLy_DH_HJA}, except $\sim7\%$ lower, since Horowitz is using the Horowitz and Hughto~\cite{HH} result for the shear modulus. In the fully relativistic case, one requires a thicker crust than provided by the pure SLy results to obtain values for the maximum quadrupole comparable to those given by Horowitz.} though they are a factor of $\sim 5$ lower than the maximum Pitkin~\cite{Pitkin} obtained using scalings of previous results and the maximum
value given by HJA (scaled to this breaking strain). For stars around $2\,M_{\odot}$, the relativistic suppressions
lead to maximum quadrupoles that are nearly an order of magnitude smaller than those for a $1.4\,M_{\odot}$ star in the compact SLy case: ${\sim\text{(1--5)}}\times 10^{38}\text{ g cm}^2$ [corresponding to
fiducial ellipticities of $\sim\text{(1--6)}\times 10^{-7}$]; and even in the much less compact LKR$1$ case, there is a suppression of $\sim 5$. Previous Newtonian studies (see Fig.~3 in~\cite{Horowitz}) had only
found suppressions of around a factor of $4$, due to the thinning of the crust and the increase
in Newtonian gravity with increasing mass. It will be interesting to consider further models for the crustal
composition and EOS in this case, particularly the large
suite of crustal models including the pasta phases recently calculated by Newton, Gearheart, and Li~\cite{NGL}. (See~\cite{GNHL} for order-of-magnitude estimates of the maximum quadrupole for these models,
illustrating the sensitive dependence on the slope of the symmetry energy.)
One can also compare these maximum elastic quadrupoles with those generated by an internal magnetic field. Here the values depend,
of course, upon the equation of state, compactness, and---perhaps most crucially---magnetic field topology, as well as the quantity one
chooses to use to measure the magnitude of the magnetic field. But
sticking to order-of-magnitude numbers, and considering a canonical $1.4\,M_{\odot}$ neutron star, Frieben and
Rezzolla~\cite{FR} show that a toroidal internal
field of $\sim10^{15}$~G would generate a quadrupole of $\sim 10^{39}$--$10^{40}\text{ g cm}^2$, comparable to the
maxima we find for crustal quadrupoles. Similarly, quadrupoles of $\sim 10^{41}$--$10^{42}\text{ g cm}^2$, around the maxima we find for hybrid
stars, could come from magnetic fields of $\sim 10^{16}$~G, while the maximum quadrupoles of $\sim 10^{44}\text{ g cm}^2$ we find for crystalline strange quark stars could also be
generated by magnetic fields of $\sim 10^{17}$~G, close to the maximum allowed field strength. (But note that these magnetic deformations are all computed for ordinary, purely hadronic neutron stars. Additionally,
the quoted maximum elastic quadrupoles in the hybrid case are attained only for more massive stars than the $1.4\,M_{\odot}$
stars for which we are quoting the magnetic deformation results.)
The quoted values for magnetic quadrupoles come from the fits given in Sec.~7 of Frieben and Rezzolla~\cite{FR}, except for the final ones, which are obtained from inspection of their Fig.~5 and Table~3. All these values agree in order of magnitude with the
predictions for the twisted torus topology given by Ciolfi, Ferrari, and Gualtieri~\cite{CFG}, and with many other studies for various topologies cited in Frieben and Rezzolla~\cite{FR}. But note that very recent calculations by Ciolfi and Rezzolla~\cite{CR} show that the magnetic field required to obtain a given quadrupole deformation with the twisted torus topology could be reduced by about an order of magnitude if the toroidal contribution dominates.
One would also like to make relativistic calculations of the maximum energy that could be
stored in an elastic deformation.
This would be useful in properly computing the available
energy for magnetar flares, for instance.
(Using Newtonian scalings, \citet{CO} estimated that the hybrid case was especially
interesting compared to existing LIGO upper limits for gravitational wave emission from such flares.)
The basic expressions (at least in
the perfect fluid case) appear to be readily available in the literature
(see, e.g.,~\cite{Schutz2,DI}; \cite{ST, Finn} give related results including
elasticity). However, one cannot apply these directly to the crustal and hybrid cases, even in the Newtonian
limit, due to the
distributional nature of the density and pressure perturbations. Specifically, the
sudden change in shear modulus at the phase transitions gives delta functions in the
derivatives of the density and pressure perturbations. Since the energy expressions involve
squares of these derivatives, one would have to invoke some sort of regularization procedure, or apply a different
method. Developing appropriate expressions for this case
will be the subject of future work.
Returning to the quadrupoles, one might also want to consider the shape of the deformed star, particularly
in the relativistic case---the ellipticity is already only a rough indicator of the shape of the
deformation in the Newtonian case---as has now been done in~\cite{NKJ-M_shape}.
But the effects of the star's magnetic field are surely the most interesting to consider, from
its influence on the lattices that support elastic deformations, to the changes to the boundary conditions at the star's surface from an external magnetic field (particularly for magnetars), to the internal magnetic field's own contribution to the star's deformation.
One might also want to consider the lattice's full elastic modulus tensor in this case, instead of simply assuming a polycrystalline structure and angle averaging to obtain an effective isotropic shear modulus, as was
done here. (And even if one assumes a polycrystalline structure, one could use more involved, sharper
bounds on the shear modulus than the ones considered here---see~\cite{WDO} for a classic review of such bounds.)
\acknowledgments
We wish to thank S.~Bernuzzi, D.~I.~Jones, A.~Maas, R.~O'Shaughnessy, and the anonymous referee for helpful suggestions.
This work was supported
by NSF grants PHY-0855589 and PHY-1206027, the Eberly research funds of Penn State, and the DFG SFB/Transregio 7.
\section{INTRODUCTION}
The configuration-mediated directionality of non-covalent bonds between proteins explains their propensity to self-assemble into fibrils and filaments \cite{oosawa75,pollard86,dobson06,chandler05,irback03}. Protein filaments are ubiquitous in biology, forming inside the cells or in the extra-cellular matrix -- individually, in bundles, or in randomly crosslinked networks. They facilitate the propulsion in bacteria, they control the mechanical strength in cytoskeleton and the bending stiffness in axons, they allow positional control of organelles and provide the transport routes all around the cell \cite{oosawa75,dobson06,chandler05,irback03,niranjan03,adamcik10,tanaka06}. In a different situation, the self-assembly of proteins into amyloid fibrils impairs physiological activity and is the root cause of a number of organic dysfunctions \cite{dobson06,knowles09,fodera1,fodera2}. In yet another context, filaments are artificially or spontaneously assembled to achieve a specific function in the material, such as directed conductivity, plasmonic resonances, or just the mechanical strength in a fiber composite, with important technological applications \cite{colloid09,bonn98}. Finally, a conceptually related issue emerges in the denaturation of DNA~\cite{DiMicheleJACS}, for which the available theoretical framework \cite{peyrard89,mast13} cannot provide predictions about the topology of the disassembly process. The typical size of all these aggregates, and its time-evolution, are a non-trivial function of the rate at which bonds along the filament spontaneously dissociate due to the thermal motion of the assembled molecules. The dissociation rate and the distribution of fragments are important parameters which enter the master kinetic equation description of self-assembling filament size and populations.
Filament growth can be summarized by the reversible reaction: $A_n + A_1\rightleftharpoons A_{n+1}$, where the monomer subunit $A_1$ is added to an existing filament $n$ units long. For the forward reaction, it is commonly accepted that association proceeds by the addition of a single subunit -- as opposed to the joining of larger segments -- because of the greater abundance of monomers with respect to active fragments. In contrast, despite the importance of thermal breakup in many fields of colloid science and technology~\cite{wu1,wu2}, its basic understanding is far from satisfactory. Several studies have aimed to explain thermally activated filament breakup in physical terms and came to the conclusion that fibrils of any respective size can aggregate, while filament breakup can occur with equal probability anywhere along the length. In particular, Lee~\cite{lee09} has demonstrated that the thermal breakup occurs randomly along the chain, leading to daughter fragments of any size. In yet another classical model, based on equilibrium detailed balance between the various aggregation and breakup events, Hill~\cite{hill} found that the highest breakup probability is for two fragments of equal size, i.e. the breakup rate is maximum in the middle.
Theoretical models in the past have focused on the simplified case of chains of harmonically bonded particles (subunits), so that the binding force is linear in the inter-protein displacement~\cite{hill,lee09}. In this approximation the normal modes of vibration of the chain are decoupled, which makes the problem amenable to simpler analysis. Even in this case, previous theoretical models reached contradictory conclusions, with either a flat breakup distribution or a pronounced maximum in the middle. However, the physical bonds linking protein filament subunits (such as hydrogen bonds and hydrophobic attraction) are strongly anharmonic. Then the problem becomes one of coupled nonlinear oscillators as in the famous Fermi-Pasta-Ulam problem~\cite{fermi55}, for which the typical vibration modes are no longer delocalized periodic waves but solitons~\cite{kruskal65}. This is also consistent with the finding~\cite{oliveira,vilgis} that in a strained Lennard-Jones chain, the strain is not uniformly distributed, but localized around the bond which is going to break first.
The standard tools of chemical dynamics and stochastic rate theory~\cite{zaccone12,haenggi90}, all based on the harmonic approximation and on normal modes, are therefore inapplicable~\cite{wiggins13,vilgis2}.
Here we develop a systematic microscopic understanding of this process based on Brownian dynamics simulation and theoretical arguments, focusing on the nonequilibrium breakup phenomena. Hence we study the intrinsic breakup rates independent of any recombination phenomena, which may occur at later stages and eventually lead to an equilibrium size. First of all, we discover that the topology of filament breakup critically depends on the bending stiffness of the chain. Secondly, a clear connection is found between the anharmonicity of the subunit interaction and the fragment distribution resulting from thermal breakup. The anharmonic Lennard-Jones or Morse-like binding potential in stiff or semiflexible filaments inevitably leads to a very strong preference for the breakup to occur at the chain ends, while the uniform, flat fragment distribution is recovered in the limit of a harmonic (or any other symmetric) potential. Importantly, it is not the bare anharmonicity which controls this effect, but, more precisely, the \textit{asymmetry} of the bonding potential about the minimum (larger force for bond compression than for extension), which is inherent to the most common anharmonic potentials. As we will show below, it is precisely the asymmetry in the potential which ``breaks the symmetry'' between the dissociation rates at the middle of the filament and at the ends. Those rates are equal only for symmetric potentials like the harmonic one, and they always differ for asymmetric potentials.
In contrast, when the intermolecular interaction is purely of the central-force type, i.e. a fully flexible chain with no bending resistance, a bell-like distribution peaked in the middle is obtained in accord with the prediction of the Hill model. These findings can be understood with an argument based on counting the degrees of freedom per particle for the different potentials.
These results provide a fundamental link between the features of intermolecular interaction and the filament breakup rate and topology, and can be used in the future to predict, control and manipulate the filament length distribution in a variety of self-assembly processes in biological and nanomaterials applications.
\section{SIMULATIONS}
To model a non-covalently bonded filament we use a coarse-grained model of linear chains of Brownian particles (Fig.\ref{fig1}a) bonded by the truncated-shifted Lennard-Jones (LJ) potential,
\begin{equation}\label{1}
\frac{U_{LJ}}{k_BT} = \left\{ {\begin{array}{*{20}{c}}
{4\tilde \varepsilon [{{(\sigma /r)}^{12}} - {{(\sigma /r)}^6}] - {U_c},~~{\rm{ for }}~r < {R_c}}\\
{0, \qquad {\rm{ for }}~r \ge {R_c}}
\end{array}} \right.
\end{equation}
where $r$ is the distance between two neighbor proteins $i$ and $i+1$, $\sigma$ is the linear size of the monomer unit, and $U_c = 4\tilde \varepsilon [{(\sigma /{R_c})^{12}} - {(\sigma /{R_c})^6}]$. The parameter $\tilde \varepsilon = \varepsilon /\{ 4[{(\sigma /{R_c})^{12}} - {(\sigma /{R_c})^6}] + 1\}$ is set to maintain a constant well depth equal to $\varepsilon$, independently of $R_{c}$. The LJ potential is inherently anharmonic, except in the close proximity of its minimum. An alternative could be the Morse potential, and we have checked that the results do not change qualitatively with its use. Figure \ref{fig1}b explains what we mean by truncation: the attractive region stretches up to a distance $R_c$ (indicated by arrows in the plot and measured in terms of LJ length scale $\sigma$), while the depth of the potential well is kept independently fixed (measured by $\varepsilon$, in units of $k_BT$). The shorter the attraction range, the closer is the potential to its harmonic approximation. For all the data we use $\varepsilon=10$, which well approximates the strength of the most common physical interactions such as hydrogen bonds and hydrophobic attraction.
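As a numerical check of the bookkeeping in Eq.~(\ref{1}), the truncated-shifted potential and its depth normalization can be sketched as follows (a minimal Python illustration; the parameter values $\varepsilon=10$ and $R_c=4\sigma$ are those quoted in the text):

```python
def lj_truncated_shifted(r, eps=10.0, sigma=1.0, rc=4.0):
    """Truncated-shifted LJ potential of Eq. (1), in units of kT.

    The prefactor eps_tilde is chosen so that the well depth equals
    eps for any cutoff R_c.
    """
    if r >= rc:
        return 0.0
    # depth-normalizing prefactor: eps / (4[(s/Rc)^12 - (s/Rc)^6] + 1)
    eps_t = eps / (4.0 * ((sigma / rc) ** 12 - (sigma / rc) ** 6) + 1.0)
    u_c = 4.0 * eps_t * ((sigma / rc) ** 12 - (sigma / rc) ** 6)
    return 4.0 * eps_t * ((sigma / r) ** 12 - (sigma / r) ** 6) - u_c

# The minimum sits at r = 2^(1/6) sigma, with depth -eps by construction:
r_min = 2.0 ** (1.0 / 6.0)
print(lj_truncated_shifted(r_min))  # -10.0
```

The value at the minimum equals $-\varepsilon$ exactly, for any cutoff, which is the purpose of the $\tilde\varepsilon$ normalization.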
We also include in our analysis the local bending energy, in the form $\frac{1}{2} \sum_i K \, \theta_{i}^2$, where $\theta_{i}$ is the angle between the directions of bonds from the particle $i$ to the preceding ($i-1$) and the subsequent ($i+1$) subunits. Figure~\ref{fig1}d illustrates the way this effect is implemented by imposing pairs of equal and opposite forces on the joining bonds, providing a net torque on the junction. It is the same algorithm that is used in, e.g. LAMMPS `angle-harmonic' system \cite{lammps}. The bending modulus $K$, {in units of $k_{B}T$,} is directly related to the persistence length of the filament via the standard expression $l_p \approx K \sigma/k_BT$.
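The bond-bending term $\frac{1}{2}\sum_i K\,\theta_i^2$ can be accumulated directly from the bond angles; a minimal two-dimensional sketch (the positions and the value of $K$ below are illustrative, not taken from the production runs):

```python
import math

def bending_energy(positions, K=1000.0):
    """Sum of (1/2) K theta_i^2 over interior particles, where theta_i
    is the angle between adjacent bond directions (theta_i = 0 for a
    straight chain)."""
    E = 0.0
    for i in range(1, len(positions) - 1):
        # bond vectors into and out of particle i
        ax, ay = (positions[i][0] - positions[i - 1][0],
                  positions[i][1] - positions[i - 1][1])
        bx, by = (positions[i + 1][0] - positions[i][0],
                  positions[i + 1][1] - positions[i][1])
        cos_t = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
        theta = math.acos(max(-1.0, min(1.0, cos_t)))
        E += 0.5 * K * theta ** 2
    return E

# A straight chain costs no bending energy:
straight = [(float(i), 0.0) for i in range(5)]
print(bending_energy(straight))  # 0.0
```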
\begin{figure}
\includegraphics[width=.42\textwidth]{fig1-start}
\caption{(a) Scheme of the coarse-grained nanofilament as a sequence of subunits bonded by the truncated LJ potential. (b) The LJ pair interaction potential between two bonded subunits of size $\sigma$ in the chain, for several values of attractive range, measured by $R_c$ and indicated by arrows in the plot. (c) The contrast between a combined potential $W(r)$ felt by an inner subunit in the filament, bonded on both sides, and the end-subunit bonded by the regular LJ potential. (d) Scheme of the bond-bending force which opposes changes in the angle between two adjacent bonds by applying couples on each adjacent bond.}
\label{fig1}
\end{figure}
The dynamics of the chain of subunits is governed by the overdamped Langevin equation,
\begin{equation}\label{A1}
\gamma \frac{{d{\bf{r}}}}{{dt}} = - \nabla {V}({\bf{r}}) + {\bf{A}}(t)
\end{equation}
where $\mathbf{r}$ is the vector containing the positions of all molecules, $\gamma$ is the friction coefficient, the total potential force acting on a given particle, $- \nabla {V}$, has contribution from both the LJ and the bending couples, and
the Gaussian stochastic force defined such that $\langle {\bf{A}}(t)\rangle = 0$ and
$\langle {A_i}(t){A_j}(t')\rangle = 2{k_B}T \gamma \, {\delta_{ij}}\delta (t - t')$ , according to the fluctuation-dissipation theorem.
For numerical integration Eq. (\ref{A1}) is discretised in the form known as the Ermak-McCammon equation \cite{ermak1,ermak2}:
\begin{equation}\label{S2}
{\bf{r}}(t + \Delta t) = {\bf{r}}(t) - \frac{\nabla V({\bf{r}})}{\gamma}\Delta t + \Gamma \sqrt {\frac{{2 k_\mathrm{B}T}}{\gamma }\Delta t},
\end{equation}
where $\Gamma$ is randomly extracted from a normal distribution with zero average and unit standard deviation. The discrete time step is taken as $\Delta t = 5\cdot 10^{-5} \tau$, where the reduced time unit is defined as $\tau=\sigma^2/D$, and $D=k_\mathrm{B}T/\gamma$ is the diffusion coefficient. For a typical globular protein (e.g. Lysozyme), with diameter $\sigma \simeq 5$~nm and diffusion coefficient $D\simeq 10^{-10}$ m$^2$/s \cite{burne1}, we obtain $\tau\simeq 0.25$~$\mu$s. Therefore $\Delta t \simeq 13$~ps.
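A minimal one-dimensional sketch of the Ermak--McCammon update of Eq.~(\ref{S2}) reads as follows. The harmonic force routine below is an illustrative stand-in for $-\nabla V$ (the production runs use the truncated-shifted LJ force plus bending couples), and the spring constant is arbitrary:

```python
import random

def ermak_mccammon_step(x, force, dt=5e-5, gamma=1.0, kT=1.0):
    """One overdamped Langevin step:
    x(t+dt) = x(t) + F(x)/gamma * dt + Gamma * sqrt(2 kT dt / gamma),
    with Gamma a standard normal deviate per coordinate."""
    amp = (2.0 * kT * dt / gamma) ** 0.5
    f = force(x)
    return [xi + fi / gamma * dt + random.gauss(0.0, 1.0) * amp
            for xi, fi in zip(x, f)]

def harmonic_chain_force(x, k=100.0, r0=2.0 ** (1.0 / 6.0)):
    """Illustrative nearest-neighbour harmonic stand-in for -grad V."""
    f = [0.0] * len(x)
    for i in range(len(x) - 1):
        ext = (x[i + 1] - x[i]) - r0   # bond extension
        f[i] += k * ext                # pulls particle i toward i+1
        f[i + 1] -= k * ext
    return f

# a short trajectory of a 5-particle 1D chain, started at equilibrium:
x = [i * 2.0 ** (1.0 / 6.0) for i in range(5)]
for _ in range(1000):
    x = ermak_mccammon_step(x, harmonic_chain_force)
```

With $\gamma = k_BT = 1$ the step reduces to the dimensionless units used in the simulations, and the bonds fluctuate about the equilibrium spacing $2^{1/6}\sigma$.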
Each run is initialized with the equilibrium interparticle distance ${r_i} - {r_{i + 1}} = {2^{1/6}}\sigma $, as a straight chain (all $\theta_i=0$), corresponding to the minimum of all interaction potentials. A dissociation event is assumed to take place when one of the bonds exceeds the cut-off length ($R_c$), i.e. $\left| {{\mathbf{r}_i} - {\mathbf{r}_{i + 1}}} \right| > {R_c}$, at which point the simulation is terminated and the location of the rupture recorded. To generate the probability distributions plotted in Figs. \ref{fig3} and \ref{fig4}, $N$ independent runs are performed and the normalised breakup probability is calculated as $P(s) = N(s)/N$, where $N(s)$ is the total number of recorded breakup events for the bond $s \in 1...n$. For most data we have reached $N \geq {10^4}$; since the runs are independent, the $N(s)$ are binomially (Bernoulli) distributed and the error bars are estimated as $\sqrt{{P(s)[1 - P(s)]/N}}$, which always stayed below 10\% of the value for $P(s)$.
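The normalisation and the error estimate amount to simple counting; a minimal sketch (the list of break-bond indices below is illustrative input, not simulation data):

```python
from collections import Counter

def breakup_statistics(break_bonds, n_bonds):
    """Normalised breakup probability P(s) = N(s)/N for each bond
    s = 1..n_bonds, with the Bernoulli (binomial) standard error
    sqrt(P(1-P)/N) used for the error bars."""
    N = len(break_bonds)
    counts = Counter(break_bonds)
    P = [counts.get(s, 0) / N for s in range(1, n_bonds + 1)]
    err = [(p * (1.0 - p) / N) ** 0.5 for p in P]
    return P, err

# e.g. ten independent runs of a 4-bond chain, breaking mostly at the ends:
P, err = breakup_statistics([1, 1, 4, 4, 1, 4, 2, 1, 4, 4], n_bonds=4)
print(P)  # [0.4, 0.1, 0.0, 0.5]
```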
\begin{figure}
\includegraphics[width=.4\textwidth]{fig2-chains}
\caption{An illustration of the role of bond-bending in the potential. The chains of $n=20$ subunits bonded with $\varepsilon = 10$ and $R_c=4$ were initialized from a straight conformation and allowed to fluctuate for $5000$ ts. The resulting snapshots, for each value of $K$ indicated on the image, show the effect of differing persistence length. }
\label{figK}
\end{figure}
\section{RESULTS}
\subsection*{Breakup statistics along the filament}
Figure \ref{fig3} shows the main result of our Brownian dynamics simulation: on increasing the bending stiffness of the filament, the highly inhomogeneous normalized probability $P(s)$ changes from a bell-shaped distribution peaked in the middle (reminiscent of the Hill model), to a completely opposite shape, with a strong preference for single subunits to dissociate from the ends.
\begin{figure}
\includegraphics[width=.42\textwidth]{fig3-K}
\caption{The normalized probability of the first breakup as a function of the position along the filament, for a chain of $n=20$ subunits and LJ parameters $\varepsilon=10$, $R_c=4$. The effect of changing bending stiffness (increasing persistence length) is evident: for the chain with essentially no bending penalty (lowest $K$) the distribution of fragment sizes is bell-shaped, with a maximum in the middle of the filament.
Stiff chains, with a strong bond-bending penalty, instead feature a nearly homogeneous, flat distribution of fragment sizes -- with an increasingly pronounced enhancement of the breakup rate at the ends. There is a broad range of semiflexible filaments that behave in exactly the same way: as ``stiff'' chains.}
\label{fig3}
\end{figure}
The conclusion arising from this data is clear: there is a broad range of what one could collectively interpret as `stiff' filaments, for which the nature of the bond-breaking statistics is exactly the same. These are filaments with a bending stiffness of $K \gtrsim 1000$, and their behaviour does not differ from the last dataset in Fig.~\ref{fig3} (labelled `stiff'), corresponding to the strictly 1-dimensional filament where only the motions along the chain were permitted. For these stiff or semiflexible filaments there is a very strong preference to dissociate a single subunit from the chain ends, which diminishes for more symmetric potentials, as demonstrated in Fig. \ref{fig4} below. However, as the chain becomes increasingly flexible, the ratio of breaking rates at the ends and in the middle gradually reverses, and for a very flexible chain ($K = 0.1$ in the plot) the breakup probability resembles the prediction of the Hill model. One can qualitatively understand this effect: for a stiff filament (as shown in Figs. \ref{fig1}a and \ref{figK}), in order to develop a thermal fluctuation large enough to stretch a bond beyond $R_c$, a whole sequence of bonded particles has to move in a correlated fashion; this leads to an effectively harmonic potential acting on the middle particles, and diminishes their breaking rate very significantly. On the other hand, since the particles in a flexible chain are free to move perpendicular to the bond axis, this coherent motion is not required and the bond-breaking statistics is dominated by the single-bond equilibrium.
Most protein filaments are quite stiff. The F-actin has the quoted persistence length $l_p \sim 16 \, \mu$m \cite{actin1,actin2}, and the insulin amyloid filaments: $l_p \sim 4 \, \mu$m \cite{craig}. Interestingly, if one measures $l_p$ in the units of constituent protein size (as the parameter $\sigma$ in our case), these very different filaments all have $l_p$ between 3000 and 6000 units. We therefore choose the bending stiffness modulus $K=1000$ in all subsequent analysis, which is within the class of `stiff' filaments according to the data in Fig. \ref{fig3}.
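The conversion between persistence length and the dimensionless bending modulus, $l_p \approx K\sigma/k_BT$, is a one-line inversion; the subunit size $\sigma \simeq 5$~nm used below is an illustrative globular-protein value, not a measured quantity for F-actin:

```python
def bending_modulus_kT(lp, sigma):
    """Dimensionless bending modulus K (in units of k_B T), inverted
    from the persistence-length relation l_p ~ K * sigma / (k_B T)."""
    return lp / sigma

# F-actin, l_p ~ 16 um, with an assumed subunit size sigma ~ 5 nm:
print(bending_modulus_kT(16e-6, 5e-9))  # ~3200
```

The result lands in the 3000--6000 range quoted in the text, comfortably inside the `stiff' regime of Fig.~\ref{fig3}.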
\begin{figure}
\includegraphics[width=.42\textwidth]{fig4-Rc}
\caption{The normalized probability of the first breakup event as a function of the position along the filament for a chain $n=20$ long, bonded by potentials with $\varepsilon =10$ and $K=1000$. The harmonic potential with the same depth has a uniform (homogeneous) probability of breakup, while at increasing $R_c$ (and the degree of potential asymmetry about the minimum) the ends of the chain are increasingly more prone to single-particle depolymerization. The ratio of the probability of breaking at the end (dissociation) to the fragmentation in the middle $P_\mathrm{end}/P_\mathrm{mid}=5.46$ for $R_c=4$. }
\label{fig4}
\end{figure}
This distribution of breaking points along the chain is equivalent to the distribution of fragment sizes resulting upon breakup. Figure \ref{fig4} shows how this distribution depends on the nature of physical bond between subunits. As we have seen in the illustration, Fig. \ref{fig1}b, changing the cutoff distance $R_c$ while keeping the depth of the attractive potential well constant ($\varepsilon$) effectively alters the degree of potential asymmetry: the larger the $R_c$, the more asymmetric the potential is. We have also independently tested the breaking statistics in an explicitly harmonic potential of the same depth and curvature at the minimum. In the limit of harmonic chain, we recover a completely uniform (flat) distribution of fragments, with a very high accuracy. This is in agreement with the theory of Lee~\cite{lee09}, who assumed harmonic bonds. On the other hand, Fig. \ref{fig4} clearly demonstrates that, with increasing asymmetry, the breakup probability $P(s)$ displays an increasingly strong preference for depolymerization from the ends. For the highly asymmetric (and also highly anharmonic) potential with $R_c=4$, the breakup probability of the outer bonds is over $5$ times larger than the one of the innermost bonds.
\begin{figure}
\includegraphics[width=.42\textwidth]{fig5-N}
\caption{Relative probability of the first breakup event upon varying the total length $n$, with the parameters of the bonding potentials $\varepsilon = 10$, $R_c=2.5$ and $K=1000$.
The range of enhanced breakup probability $\Delta s $ at each end remains constant for all $n$.}
\label{fig5}
\end{figure}
Another important result is shown in Fig.\ref{fig5}, where for a given level of LJ potential depth and asymmetry, and a stiff filament with $K=1000$, as usual, we study the effect of filament length (the total number of bonded subunits, $n$). It is more difficult to normalise the breakup probability $P(s)$ this time, because for longer filaments there are more and more `plateau values' of the constant (low) breakup frequency in the middle, which participate in the original normalisation by the total number of runs, $N$ (effectively uniformly suppressing the values of $P(s)$ and thus masking the characteristic ratio $P_\mathrm{end}/P_\mathrm{mid}$). We therefore chose to scale all datasets by their maximum value of $P_\mathrm{end}$, such that the different curves are comparable. It is clear that with the filament length increasing past $n=20$ there is no further change in the characteristic ratio $P_\mathrm{end}/P_\mathrm{mid}$ -- simply the region of `chain middle' becomes extended. Perhaps one may regard this as an effective confirmation of the Lee model \cite{lee09}, since for very long and very stiff filaments a very large middle section has an effectively harmonic bonding, and therefore a uniform breakup rate.
It appears that the range of enhanced probability near the ends is relatively constant, $\Delta s \lesssim 10$. Shorter filaments have the middle region elevated simply because the two end effects start overlapping.
The finding that, for stiff filaments with asymmetric interaction potentials, the dissociation rate at the end can be substantially larger than the rate of fragmentation in the middle, may be important in the self-assembly kinetics of actin filaments \cite{oosawa75,pollard86}. There, and in many other cases, the tendency to depolymerize at the end is amplified by the presence of multiple bonds in the interior of the filament, due to the double-stranded helical structure in the case of actin.
\section{Probability of first breakup}
In addition, we studied the probability of the first breakup (irrespective of its position along the filament), upon varying $R_c$ and the filament length $n$, for the case of a stiff filament (limit of large $K$) which also approximates the case of a 1D aggregate. From the results plotted in log-linear fashion in Fig.\ref{fig6} it is clear that the probability for the chain to fracture depends exponentially on time, $P(t) = \textrm{const} \cdot e^{-\lambda t}$, with a characteristic breakup time $\lambda^{-1}$ increasing upon increasing the attraction range $R_c$ (and the asymmetry of the bonding potential with that). The average first-breakup time, irrespective of the location on the chain, is defined by $\lambda^{-1} = \int_0^\infty P(\tau) \tau \, \textrm{d}\tau$, upon normalizing $P(\tau) = \lambda \, e^{-\lambda \tau}$. This exponential dependence can be understood from the analysis of the many-particle Fokker-Planck equation,
\begin{equation}\label{2}
\frac{{\partial \rho ({\bf{r}},t)}}{{\partial t}} = {\hat L_s}\rho ({\bf{r}},t)
\end{equation}
with the Smoluchowski operator defined as~\cite{doi86,vankampen97}
\begin{equation}\label{3}
{\hat L_s}(...) = {D{\nabla _{{\textbf{r}}}} \cdot [{\nabla _{{\textbf{r}}}}(...) + \beta \nabla {U_{LJ}}(\textbf{r} )(...)]}
\end{equation}
acting on the many-particle probability density $\rho ({\bf{r}},t)$, where, in supervector notation, ${\bf{r}} = \{ {r_1},...,{r_n}\} $ is the set of interparticle coordinates. The probability as a function of time that all bonds remain within the cutoff at a time $t$, that is, the probability that the chain does not break within a time $t$, is given by
$Q (t) = \int_{ - \infty }^{{{\bf{R}}_c}} {\rho ({\bf{r}},t)d{\bf{r}}} $. We shall recall that, in supervector notation, the condition ${\bf{r}}<\bf{R}_c$ means that \textit{all} bond vectors (relative particle coordinates) in the chain are within an extension smaller than the cutoff $R_c$. Furthermore, ${U_{LJ}}(\textbf{r})$ represents the multi-dimensional potential energy landscape given by the superposition of the Lennard-Jones potentials acting on pairs of molecules.
The first passage/breakup time probability density is defined as the change of $Q(t)$ between the time $t$ and $t + dt$, and is thus given by $P(t) = - dQ (t)/dt$. Combining these equations, with some manipulations (see e.g. Ref.~\cite{ebeling}), it is possible to show that the first-passage time probability density is exactly equal to
$P(t) = - D{\left. {{\nabla _r}\rho ({\bf{r}},t)} \right|_{{{\bf{R}}_c}}}$. The mean first-breakup time is then defined as the first moment of the first-breakup time probability density,
${\lambda ^{ - 1}} = \int_0^\infty {t \cdot P(t)dt} = \int_0^\infty {t \cdot [ - D{{\left. {{\nabla _r}\rho ({\bf{r}},t)} \right|}_{{{\bf{R}}_c}}}]dt} $, which is the same quantity as measured from the exponential fits in simulations. The exponential dependence on time can be understood from the analysis of the many-particle Fokker-Planck equation, Eqs. (\ref{2})-(\ref{3}). Its general solution is
$\rho ({\bf{r}},t) = \sum_{p} {{\phi _p}} ({\bf{r}}){e^{ - D{\lambda _p}t}}$, where $p$ labels the eigenfunctions $\phi _p$ and eigenvalues $\lambda_{p}$ of the many-body operator $\hat L_s$.
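The characteristic rate $\lambda$ quoted in Fig.~\ref{fig6} is extracted from simple exponential fits; a minimal sketch of such a fit, as a standard log-linear least-squares regression on synthetic survival data (not the simulation output):

```python
import math

def fit_breakup_rate(times, probs):
    """Extract lambda from P(t) = const * exp(-lambda t) by a
    least-squares linear fit of log P(t) against t."""
    logs = [math.log(p) for p in probs]
    n = len(times)
    tbar = sum(times) / n
    lbar = sum(logs) / n
    slope = (sum((t - tbar) * (l - lbar) for t, l in zip(times, logs))
             / sum((t - tbar) ** 2 for t in times))
    return -slope

# synthetic survival data with lambda = 2e-3 per timestep:
ts = [1000 * i for i in range(1, 20)]
ps = [math.exp(-2e-3 * t) for t in ts]
print(fit_breakup_rate(ts, ps))  # ~0.002
```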
\begin{figure}
\includegraphics[width=.4\textwidth]{fig6-T}
\caption{The probability of first breakup of a filament of fixed $n=20$, normalized such that it is equal to unity at $t=0$, is plotted against simulation time measured in timesteps (ts). Different data sets represent the different attraction range $R_c$, which is our measure of potential asymmetry. The fitted lines are all simple exponentials, from which we extract the characteristic rate of the first breakup, $\lambda$. }
\label{fig6}
\end{figure}
\begin{figure}
\includegraphics[width=.32\textwidth]{fig7-lam}
\caption{The mean time of the filament breakup is plotted for different filament lengths, indicating an almost linear increase. In simple terms, taking from Fig.\ref{fig4} the probability to break in the middle as $p_\textrm{mid} \approx 0.035$ and the one at the end as $p_\textrm{end} \approx 0.145$, with $\Delta s \approx 6$ subunits from each end affected, the total rate can be estimated as $\lambda = (n-2\Delta s) p_\textrm{mid} +2\Delta s \, p_\textrm{end}$, which is the solid line in the plot with only a single fitting normalisation factor: $6.3 \cdot 10^{-7}\mathrm{ts}^{-1}$; the deviations at small $n$ are clearly due to the overlapping end effects (see Fig.\ref{fig5}).}
\label{fig7}
\end{figure}
According to the ground-state dominance principle, the time evolution for long filaments ($n \gg 1$) is dominated by the smallest non-zero eigenvalue $\lambda_{\mathrm{min}}$, such that, recalling the expression for $P(t)$, the time dependence of the first-breakup probability is given by
$P(t) = - D{\left. {{\nabla _{\bf{r}}}\rho ({\bf{r}},t)} \right|_{{{\bf{R}}_c}}}\sim{e^{ - {\lambda _\mathrm{min}}t}}$. Hence the breakup probability is indeed exponential in time with a characteristic frequency-scale given by the smallest finite eigenvalue $\lambda_{\mathrm{min}}$ of the many-body operator $\hat L_s$. This result explains the exponential dependence on time of the breakup probability observed in the simulations in Fig. \ref{fig6}. Also, combining the expressions for $P(t)$ and for $\lambda^{-1}$, it is possible to show that $\lambda\approx\lambda_{\mathrm{min}}$, which confirms that the ground-state of the many-body Fokker-Planck equation indeed sets the time scale of breakup.
Furthermore, the rate $\lambda$ grows roughly linearly with the chain length $n$, which is demonstrated in Fig. \ref{fig7}. This particular dependence $\lambda \propto n$ arises because the number of escape attempts increases with the chain size. One can show by means of the standard supersymmetric transformation of the Fokker-Planck equation into the Schr\"{o}dinger equation~\cite{ebeling}, that $\lambda(n)$ is analogous to the quantum ground-state energy of an ensemble of $(n-1)$ bound states, and the ground state energy is extensive ($\propto n$) within the quasiparticle approximation~\cite{nozieres}.
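The additive estimate quoted in the caption of Fig.~\ref{fig7} is easily tabulated; the values of $p_\textrm{mid}$, $p_\textrm{end}$, $\Delta s$ and the normalisation factor below are those read off in that caption:

```python
def breakup_rate_estimate(n, p_mid=0.035, p_end=0.145, ds=6, norm=6.3e-7):
    """Additive estimate of the total breakup rate (Fig. 7 caption):
    lambda = norm * [(n - 2*ds) * p_mid + 2*ds * p_end],
    valid when n > 2*ds, so the two end regions do not overlap."""
    return norm * ((n - 2 * ds) * p_mid + 2 * ds * p_end)

# the rate grows linearly with chain length, slope norm * p_mid:
for n in (20, 40, 80):
    print(n, breakup_rate_estimate(n))
```

The linear-in-$n$ growth is manifest: each added interior subunit contributes the same plateau rate $\mathrm{norm}\cdot p_\textrm{mid}$.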
\section{Discussion}
\subsection*{`Phase diagram' of first breakup locations}
A useful representation of these results is a map covering the whole $K$--$R_c$ parameter space, showing how the location of first-breakup events along the filament changes upon varying both the stiffness $K$ and the cutoff (asymmetry) $R_c$.
The results can be represented as a contour plot for the ratio $P_{end}/P_{middle}$ as a function of $K$ and $R_c$.
The contour plot is shown in Fig. \ref{fig8}. The bottom-left corner, corresponding to flexible (low-$K$) filaments with a short-ranged potential close to harmonic (low $R_c$), represents conditions where the filament breaks in the middle and the fragment distribution is bell-shaped, in conformity with the predictions of Hill's model. Upon increasing both $K$ and $R_c$ at the same time, breakup in the middle becomes less favourable and the distribution tends to flatten out. Eventually, for very stiff filaments and asymmetric potentials with large $R_c$, the opposite limit of U-shaped fragment distributions with preferential breakup at the filament ends is recovered. This occurs in the top-right region of the map in Fig. \ref{fig8}. For symmetric binding potentials close to harmonic (low $R_c$: along the $K$ axis of the contour plot), the bell-shaped distribution persists longer upon increasing $K$, eventually transforming into a flat distribution with $P_{end}/P_{middle}=1$ for stiff filaments. On the other side of the map, where $R_c$ is increased for flexible chains, the bell-shaped distribution persists up to $R_c \rightarrow\infty$, which corresponds to the LJ potential with no cutoff.
In general, the most dramatic change in the breakup location and fragment-distribution shape occurs along the path of steepest ascent, defined as the path parallel to the gradient of the surface. Based on our results, the path of steepest ascent and most dramatic evolution in the breakup topology is approximately identified by the line $\log (K/k_BT) = (7/5) R_c/\sigma$.
\begin{figure}
\includegraphics[width=.42\textwidth]{fig8-map}
\caption{Contour plot showing the ratio $P_{end}/P_{middle}$ as a function of filament stiffness $K$ and the asymmetry parameter $R_c$. The bottom-left lagoon (dark) represents conditions where filaments break in the middle (bell-shaped distribution, according to Hill \cite{hill}), while the upper-right (light) region of this map represents conditions where filaments break at the ends (the U-shaped distribution) and negligibly in the inner locations. Arrows on the top signify that these geodesic lines extrapolate towards $K\rightarrow \infty$. Arrows to the right indicate that there is little further change past $R_c = 4-5$. The dashed geodesic line marks the $P_{end}/P_{middle}=1$ condition, separating the regions of bell- and U-shape distributions. }
\label{fig8}
\end{figure}
\subsection*{Bond-bending stiffness controls the filament breakup/recombination equilibrium}
In Figs. \ref{fig3} and \ref{fig4} we have shown that depending on the relative extent of bond-bending and central forces in the intermolecular interaction, the fragment size distribution can change from a U-shaped distribution in the limit of large bond-bending rigidity, to a bell-shaped distribution with opposite curvature in the limit of a purely central-force Lennard-Jones potential. Intermediate bending stiffness values yield distributions with shape in between the two limiting cases.
It is first important to understand the microscopic origin of this qualitative difference upon varying the bending stiffness in the intermolecular interaction. Since the flexible-chain breakup statistics closely resemble the prediction of Hill~\cite{hill}, we take a similar approach and consider the fragment-size dependence of the breakup rate within a chemical equilibrium assumption and for the special simplifying case of harmonic bonds. We have checked that with harmonic bonds the same trend as in Fig. \ref{fig3} is reproduced, with the only difference that for stiff filaments the $P(s)$ distribution is flat (as indeed proven by Lee~\cite{lee09}) instead of U-shaped (as the last curve in Fig. \ref{fig4} shows). That is, the Hill-like bell-shaped $P(s)$ is the universal result for fully flexible chains.
The equilibrium constant for a dissociation reaction $n \leftrightarrows n_{1} + n_{2}$ of a filament $n$ into two fragments $n_{1}, \, n_{2}$ takes the form: $K_\mathrm{eq}=V^{-1} Z({n_{1}}) Z({n_{2}})/Z({n})=K_{n1, n2}^{-}/K_{n1, n2}^{+}$, where $Z({n_{1}})$ is the partition function of fragment $n_1$. $K_{n1, n2}^{-}$ is the dissociation rate, while $K_{n1, n2}^{+}$ is the recombination rate of these two fragments. The latter can be estimated from the diffusion-controlled collision rate of two linear chains, upon accounting for the diffusion coefficient of the two chains (Kirkwood-Riseman approximation \cite{kirkwood}) and for the encounter efficiency of end-to-end collisions of the two chains. In this way, the size-dependence was found to be $K_{n1, n2}^{+} \propto (n_{2} \ln n_{1} + n_{1} \ln n_{2})/n_{1}n_{2}n$~\cite{hill}.
The size-dependence of the dissociation rate (and hence the fragment size-distribution) can be obtained by replacing this form for the association rate in the expression for the equilibrium constant, and upon evaluating the fragment-size dependence of the partition functions in the numerator of $K_\mathrm{eq}$.
From classical statistical mechanics, rigid-body translational degrees of freedom of the chain contribute to the partition function a factor $\sim n^{3/2}$, and rigid-body rotational degrees of freedom contribute an extra factor $\sim n^3$, since the overall mass of the filament is $\propto n$. Together these two factors give a partition function $\sim n^{9/2}$.
The vibrational contributions of the monomers in the chain factorise in the partition function, as for a chain of harmonic oscillators, resulting in standard factors of the type $\sim (k_BT/\hbar \omega)^n$, where $\omega$ is the Einstein frequency. Clearly these factors do not contribute to $K_\mathrm{eq}$ because the corresponding terms in the numerator and denominator cancel.
A full consideration of the normal modes of the linear chain with free ends, beyond the Einstein model, leads to an additional nontrivial size-dependence $\sim n^{-1/2}$, for vibrations of harmonic spheres in 1D, and to $\sim n^{-3/2}$ for vibrations in a flexible 3D chain~\cite{abraham,lothe}. In simple terms, upon increasing the chain length, more low-energy modes can be accommodated in the spectrum, which causes the partition function to decrease. The importance of this effect was first recognized by J. Frenkel~\cite{frenkel} in the context of nucleation phenomena. Hence with purely central-force interaction in 3D (flexible chain) the overall contribution is $\sim n^{9/2-3/2} = n^{3}$.
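The $\sim n^{-1/2}$ scaling for vibrations in 1D can be checked directly. As a minimal numerical sketch (not part of the original analysis, with unit masses and spring constants chosen for illustration): for a free chain of $n$ unit masses connected by unit harmonic springs, the normal-mode frequencies are $\omega_j = 2\sin(j\pi/2n)$, $j=1,\dots,n-1$, and the product $\prod_j \omega_j^{-1}$, to which the classical vibrational partition function is proportional, equals exactly $n^{-1/2}$:

```python
import math

def vib_partition_1d(n):
    """Product of inverse normal-mode frequencies of a free-free 1D chain of
    n unit masses and unit springs, omega_j = 2 sin(j pi / 2n), j = 1..n-1
    (the j = 0 rigid-translation mode is excluded)."""
    prod = 1.0
    for j in range(1, n):
        prod /= 2.0 * math.sin(j * math.pi / (2.0 * n))
    return prod

for n in (10, 100, 1000):
    print(n, vib_partition_1d(n) * math.sqrt(n))  # stays at 1: Z_vib ~ n^(-1/2)
```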
Akin to covalent bonds in molecular physics, the bending stiffness introduces additional degrees of freedom for rotations about the bond symmetry axis, which then leads to an overall dependence $\sim n^{9/2-3} = n^{3/2}$. One should note that with spheres and purely central-force bonds there is no such axis of symmetry for the rotations, and the three translational degrees of freedom per particle suffice to describe the vibrational behavior.
Including all these considerations, the dissociation rate will have a dependence on the fragment sizes given by
\begin{equation}\label{k1}
K_{n1, n2}^{-} \sim (n_{1}n_{2})^{x-1}(n_{2} \ln n_{1} + n_{1} \ln n_{2})/n.
\end{equation}
The exponent $x$, which collects all size-dependent contributions of the partition function, is different depending on whether the interaction is purely central-force, or has a bond-bending stiffness. For central forces, $x=3$, whereas with semiflexible or stiff chains one has $x=3/2$.
The leading contribution is then $\sim (n_{1}n_{2})^{2}$, with a pronounced bell-shape peaked in the middle for the exclusively central-force flexible chain, and $\sim (n_{1}n_{2})^{0.5}$, leading to a much flatter distribution for a chain with bond-bending penalty. {The fact that the slightly U-shaped distribution observed in simulations for stiff filaments is not recovered by this model should be attributed to the various approximations (Kirkwood-Riseman for chain diffusion, detailed balance, etc.) involved in the model, and also to the harmonic approximation of independent linear oscillators underlying the factorization of partition functions. }
This argument, however, explains, qualitatively, that a flatter distribution of fragments is to be expected in the presence of bond-bending, due to the additional rotational degrees of freedom about the stiff intermolecular bond symmetry axis, which is absent with purely central-force interactions.
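The difference between the two exponents can be made concrete with a small numerical sketch (arbitrary chain length, not part of the original analysis) that evaluates Eq. (\ref{k1}) and compares the dissociation rate for breakup in the middle with that near the ends, for $x=3$ (central-force) and $x=3/2$ (bond-bending):

```python
import math

def k_minus(n1, n2, x):
    """Fragment-size dependence of the dissociation rate, Eq. (k1):
    K^-_{n1,n2} ~ (n1 n2)^(x-1) (n2 ln n1 + n1 ln n2) / n."""
    n = n1 + n2
    return (n1 * n2) ** (x - 1.0) * (n2 * math.log(n1) + n1 * math.log(n2)) / n

n = 100
for x in (3.0, 1.5):  # central-force (x = 3) vs bond-bending (x = 3/2)
    ratio = k_minus(n // 2, n // 2, x) / k_minus(2, n - 2, x)
    print(f"x = {x}: middle-to-end rate ratio = {ratio:.1f}")
```

The ratio is strongly peaked in the middle for $x=3$ and much closer to flat for $x=3/2$, as stated above.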
\subsection*{Possible roles of electrostatics and temperature in amyloid fibril breakup}
We can briefly comment on the qualitative predictions of this model for the distribution of breakup fragments in realistic amyloid fibrils. {Realistic intermolecular forces which bind proteins in amyloid fibrils crucially depend on both electrostatics and temperature. We shall start considering the role of electrostatics first.}
{Electrostatic repulsion between two bound proteins in a filament is ubiquitous except for solutions at very high ionic strength. Electrostatic repulsion acts to ``lift up'' the bonding minimum, and it may also contribute an additional small energy barrier to the total interaction $U$, with a maximum $U_{max}$ co-existing or competing with the new lifted attractive minimum. We denote the new attractive minimum as $U_{min}^{*}<\epsilon$. Due to the fact that the electrostatic energy decreases with $r$, and the maximum is typically at $r>r_{min}$, the lifting up of the bonding minimum by the electrostatic repulsion is not entirely compensated by the energy barrier (the new maximum in $U$). Hence the total energy to be overcome for the particle to escape from the bonding minimum is $U_{max}-U_{min}^{*}<\epsilon$.
This consideration points towards a role of electrostatics which promotes breakup, or at least, restructuring into a different morphology where the electrostatic energy density is reduced. This outcome of our analysis is compatible with recent experimental observations where an increased electrostatic repulsions (e.g. at lower ionic strengths) is responsible for fission or scission phenomena of larger compact aggregates into smaller and more anisotropic aggregates~\cite{Dehsorkhi,fodera1}.}
Our simulations show a crossover from a U-shaped fragment distribution into a bell-shaped distribution upon going from high values of bond-bending stiffness $K$ to lower values. In our simulations, $K$ is fixed and set independently of $T$, the latter being kept constant while $K$ is varied.
In reality, however, $K$ and $T$ may not be decoupled for a realistic model of amyloid fibrils. The reason is that the inter-protein bending stiffness $K$ originates, microscopically, from the strength of $\beta$-sheets which bind two adjacent proteins in the fibril. The mechanism is known: due to the planar, sheet-like, nature of two hydrogen-bonded $\beta$-sheets, there is an intrinsic bending resistance against sliding or rolling of the two proteins past each other. The same mechanism provides bending rigidity when two surfaces bonded by many anchored central-force springs are displaced tangentially apart.
Upon increasing $T$, the hydrogen and hydrophobic bonds which keep the two $\beta$-sheets together start to dissociate, leading to lower bending stiffness and lower values of $K$.
Hence, based on our simulation results, we can predict that the fragment distribution function of realistic amyloid fibrils should evolve from a U-shaped distribution at low temperature $T$, where the $\beta$-sheets of two adjacent proteins are tightly bound, into a bell-shaped distribution at higher $T$ where the $\beta$-sheet bonding becomes looser, which makes the bending stiffness $K$ decrease. This prediction seems to be confirmed by preliminary experiments~\cite{morbidelli}, and future work using ab-initio simulations should focus on identifying the relationship between $K$ and $T$, which controls the evolution of the fragment distribution with $T$. {In future research it will be important to combine all these effects into a general coarse-grained approach along the lines of~\cite{Knowles,Assenza}, to achieve a bottom-up description of realistic filaments and their size evolution.
}
\subsection*{Anharmonicity controls depolymerization from the ends in stiff filaments}
When the bending rigidity of the chain is high, the probability of spontaneous bond breaking is flat when the bond potential is harmonic~\cite{lee09} -- yet it adopts a distinct, strongly biased U-shape when the asymmetry of the potential increases (Fig.\ref{fig4}). How can we quantitatively explain why the asymmetry of the interaction potential between any two bonded subunits leads to higher breakup rates at the chain ends, and much smaller breakup rates in the middle? For a high bending modulus one can treat the bond at the filament end as a classical diatomic molecule, and a subunit in the middle of the chain as the inner particle in a linear triatomic molecule. In the latter case, the combined potential $W$ felt by the particle in the middle is sketched in Fig.\ref{fig1}c.
One would be tempted to explain the difference between the higher dissociation rate at the filament end and the lower one in the middle by referring to the overall lower energy (deeper potential well) felt by the particle in the middle sitting in the minimum of the combined potential $W(r)$. Applying a Kramers-type escape-rate argument would then lead to an Arrhenius dependence of the escape rate on the depth of the energy well, and to an overall large difference between the two rates. However, such an approach cannot explain the observation that the rate is the same in the middle and at the end for the case of the harmonic potential; in that case the same argument about $W$ applies, hence one would expect a lower rate in the middle, which is not observed, in agreement with previous calculations~\cite{lee09}. What is different in the case of the harmonic potential is the fact that the asymmetry of the bonding potential is removed for the particle at the end of the chain (while the subunits in the middle effectively experience the harmonic potential in both cases).
It is in fact this asymmetry which facilitates dissociation at the termini of the chain, where less resistance is encountered by the particle escaping outwardly from the bound state. In order to verify that this is indeed the right physics, we also ran a test simulation with a quartic potential $U\propto (r-r_{min})^{4}$, which is anharmonic yet fully symmetric about the minimum, just like the harmonic potential. In this case too we found a completely flat distribution of fragments, as for the harmonic potential, which supports the proposed claim.
It is therefore the asymmetry, in the case of anharmonic potentials, which plays the major role in facilitating the preferential bond breakup at the chain ends. The explanation can be found in the different values of the mean thermal fluctuation from the equilibrium position (energy minimum) for the particle sitting in the asymmetric LJ potential at the chain end, and the particle moving in the more symmetric combined potential $W(r)$ in the middle of the chain. An analysis of the mean thermal fluctuation, carried out long ago by J. Frenkel~\cite{frenkel}, shows that the mean thermal fluctuation of a particle feeling the anharmonic/asymmetric potential at the end is typically larger because of the shallower slope of the potential in the outward direction. For the particle in the middle, the situation is different because the combined potential $W(r)$ does not become shallower as the particle moves away from one of its two neighbours, due to the presence of the interaction with the other neighbour.
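This fluctuation argument can be illustrated by a one-dimensional Boltzmann average (a numerical sketch with an arbitrarily chosen $\epsilon/k_BT$, not part of the original simulations): the mean position of a particle in a bare LJ well is shifted outward from $r_{min}$, while in the symmetric combined potential $W(r)=U(r)+U(2r_{min}-r)$ the shift vanishes:

```python
import math

def lj(r):
    """Lennard-Jones pair potential with epsilon = sigma = 1."""
    return 4.0 * (r**-12 - r**-6)

R_MIN = 2.0 ** (1.0 / 6.0)   # position of the LJ minimum
BETA = 5.0                   # epsilon / (k_B T), chosen arbitrarily

def mean_pos(potential, lo, hi, steps=20000):
    """Boltzmann-weighted mean position <r> over the well [lo, hi]."""
    dr = (hi - lo) / steps
    num = den = 0.0
    for i in range(steps + 1):
        r = lo + i * dr
        w = math.exp(-BETA * potential(r))
        num += r * w
        den += w
    return num / den

# End particle: bare, asymmetric LJ well -> mean position shifted outward.
end = mean_pos(lj, 0.95, 1.8)
# Middle particle: combined potential W(r) = U(r) + U(2 r_min - r),
# symmetric about r_min by construction -> no shift.
mid = mean_pos(lambda r: lj(r) + lj(2.0 * R_MIN - r), R_MIN - 0.35, R_MIN + 0.35)
print(end - R_MIN, mid - R_MIN)
```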
\section{CONCLUSIONS}
By means of Brownian dynamics simulations, we have shown that thermal breakup rates and breakup topology of model protein filaments (and other linear nanoparticle aggregates) are strongly affected by the presence of bond-bending stiffness in the interaction between subunits, and by the degree of asymmetry of the anharmonic binding potential. With stiff chains bonded by inter-particle forces with anharmonicity and asymmetry of the potential typical for intermolecular interaction potentials (van der Waals, hydrophobic attraction etc), we find a strongly preferential breakup at the chain ends, and an overall U-shaped fragment distribution.
In contrast, with purely central-force interactions between subunits, that is, fully flexible chains -- the fragment size distribution is bell-shaped, with a pronounced peak in the middle (symmetric breakup), and the lowest breakup rate is found at the ends of the chain.
While the preferential breakup at the end of stiff chains (filament depolymerization) can be explained in terms of the larger thermal fluctuations at the chain-end associated with potential anharmonicity/asymmetry in a perfectly stiff quasi-1D chain model, the dramatic change of breakup topology upon varying the strength of bond-bending interaction is more subtle. In this case we found a tentative explanation upon considering the degrees of freedom associated with the vibrational partition function of the fragments. In general, breakup into two equal fragments is favoured with purely central-force bonds because the product of the partition functions of two fragments is maximised (which is intuitive if one considers that the classical partition function for rigid body motions increases strongly with the fragment size). The vibrational partition function, instead, decreases with fragment size because more low-energy modes can be accommodated in longer fragments. This effect becomes stronger in the case of bond-bending, where the total number of vibrational degrees of freedom is larger due to the rotation axis of the stiff bond. As a result of this compensation between the size dependencies of the vibrational and rigid-body partition functions, the size-dependence of fragmentation rate with bond-bending is much weaker compared to the central-force case.
Hence, we have found some general laws which govern the fragmentation behavior of model linear aggregates, as a function of the relative importance of central-force and bond-bending interactions between subunits. These findings are important towards achieving a bottom-up control over the length and time-evolution of filament populations, both in biological problems (actin, amyloid fibrils, etc.) and in nanoparticle self-assembly for photonic applications.
\begin{acknowledgments}
\noindent We are grateful for many discussions and input of
T.P.J. Knowles, T. Michaels, C.M. Dobson and A. Bausch. This work has been supported by the
Ernest Oppenheimer Fellowship at Cambridge (AZ, LD) and by the Technische Universit\"{a}t M\"{u}nchen Institute for Advanced Study, funded by the German Excellence Initiative and the EU 7th Framework Programme under grant agreement nr. 291763 (AZ). LD also acknowledges the Marie Curie ITN-COMPLOIDS grant no. 234810.
\end{acknowledgments}
\section{Introduction and motivation}
\setcounter{equation}{0}
In modern mathematical physics, hypergeometric and
$q$-hypergeometric functions have found their applications in the
development of the theory of difference equations and in quantum and
non commutative geometry. And in many results, like the theory of
lattice integrable models, Bethe ansatz and Toda systems
\cite{FR92}-\cite{Z92} for instance, they
are formulated or realized in connection with these types of
mathematical functions. In this context and to illustrate a physical
application, we cite ref.\cite{MV93} where a representation of
$q$-hypergeometric functions of one variable was found in terms of
correlators of vertex operators made out of free scalar fields
propagating on the Riemann sphere. Among these basic functions, or
$q$-functions, there are polynomials which are organized into
schemes. An interesting one, the so-called Askey scheme \cite{KS} of
hypergeometric orthogonal polynomials, consists of all known sets of
orthogonal polynomials which can be defined in terms of a
hypergeometric function and their interrelations. The
$q$-Askey scheme is the quantum version of the former; however, a given
hypergeometric orthogonal polynomial may admit several
$q$-analogues. Only a few of these $q$-orthogonal polynomials possess
generating functions written in terms of $q$-exponential functions.
It is for this reason that we deal with these $q$-polynomials in this
paper. Our interest here was motivated by the results of reference
\cite{Q}, where it was established that the two Jackson
$q$-exponentials
\begin{equation}
e_q(z)=\sum_{k\in
\mathbb{N}}\frac{1}{(q;q)_k}z^k=\frac{1}{(z;q)_{\infty}},\qquad
E_q(z)=\sum_{k\in
\mathbb{N}}\frac{q^{k(k-1)/2}}{(q;q)_k}z^k=(-z;q)_{\infty},
\end{equation}with $(a;q)_0=1$, $(a;q)_k=\prod_{j=0}^{k-1}(1-aq^j)$,
and $(a;q)_\infty=\prod_{j=0}^{\infty}(1-aq^j)$, could be expressed
respectively as the exponential of series as follows
\begin{equation}\label{quesne1}
e_q(z)=\exp\left(\sum_{k\in
\mathbb{N}^{*}}\frac{z^k}{k(1-q^k)}\right)
\end{equation}
and
\begin{equation}\label{quesne2}
E_q(z)=\exp\left(\sum_{k\in
\mathbb{N}^{*}}\frac{(-1)^{k+1}z^k}{k(1-q^k)}\right)
\end{equation}
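Both identities are straightforward to verify numerically. The following sketch (with arbitrarily chosen $q$ and $z$, $|q|<1$ and $|z|<1$) compares the truncated infinite products defining $e_q$ and $E_q$ with the exponentials of the truncated series (\ref{quesne1}) and (\ref{quesne2}):

```python
import math

def e_q(z, q, terms=200):
    """e_q(z) = 1/(z; q)_infinity, with the infinite product truncated."""
    prod = 1.0
    for j in range(terms):
        prod *= 1.0 - z * q**j
    return 1.0 / prod

def E_q(z, q, terms=200):
    """E_q(z) = (-z; q)_infinity, truncated."""
    prod = 1.0
    for j in range(terms):
        prod *= 1.0 + z * q**j
    return prod

def e_q_series(z, q, terms=200):
    """exp( sum_{k>=1} z^k / (k (1-q^k)) ), first Quesne identity."""
    return math.exp(sum(z**k / (k * (1.0 - q**k)) for k in range(1, terms)))

def E_q_series(z, q, terms=200):
    """exp( sum_{k>=1} (-1)^(k+1) z^k / (k (1-q^k)) ), second identity."""
    return math.exp(sum((-1) ** (k + 1) * z**k / (k * (1.0 - q**k))
                        for k in range(1, terms)))

q, z = 0.6, 0.35
print(e_q(z, q), e_q_series(z, q))   # the two columns agree
print(E_q(z, q), E_q_series(z, q))
```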
Furthermore, in reference \cite{CJN}, the multiplicative series
form of the $q$-exponential was exploited to derive a new nonlinear
connection formula between $q$-orthogonal polynomials and their
classical versions, namely $q$-Hermite, $q$-Laguerre and
$q$-Gegenbauer polynomials. Their results are expressed in compact
form and some explicit examples are given. Also, the authors of
\cite{CJN} emphasized the possibility to extend their work for other
$q$-orthogonal polynomials such as little $q$-Jacobi ones. In the
present work we will take benefit of their idea to compute the
connection formula between other $q$-orthogonal polynomials,
appearing in the $q$-Askey scheme \cite{KS}, and their classical
counterparts, namely the continuous $q$-Laguerre, the continuous big
$q$-Hermite and the $q$-Meixner-Pollaczek polynomials and we give an
alternative
connection
formula for the $q$-Gegenbauer polynomials distinct from the one
given in \cite{CJN}. To our knowledge, these cases have not been treated
before. To proceed, similarly to the work of Chakrabarti
{\em
et al} \cite{CJN}, we first give the generating functions of any
$q$-polynomials cited above and use, on one side, the series
development. On the other side, the Quesne formulae allow us to
express the $q$-exponential function as a product series of the
classical exponential function. This finally leads to the
connection formula, up to the resolution of a Diophantine partition
equation appearing during our computation in each examined case.
As all the $q$-polynomials constituting the $q$-Askey scheme can be
defined in terms of the basic hypergeometric series ${}_r\phi_s$, we
recall here their expression (see for example \cite{GR}):
\begin{equation}
{}_r\phi_s(a_1,a_2,...,a_r;b_1,b_2,...,b_s;q;z)=
\sum_{n=0}^{\infty}\frac{(a_1;q)_n(a_2;q)_n...(a_r;q)_n}{(q;q)_n(b_1;q)_n(b_2;q)_n...(b_s;q)_n}\left[
(-1)^nq^{\frac{n(n-1)}{2}}\right]^{1+s-r}z^n
\end{equation}with
$q\neq0$. The ratio test shows that for generic values of the
parameters the radius of convergence is $\infty$, $1$ or $0$ for
$r<s+1$, $r=s+1$ or $r>s+1$ respectively. Since $(q^{-n};q)_k=0$ for
$k=n+1, n+2,...$, the series ${}_r\phi_s$ terminates if one of the
numerator parameters $\{a_i\}$ is of the form $q^{-n}$ with
$n=0,1,2,...$ and $q\neq 0$. The ${}_r\phi_s$ function is the
$q$-analogue of the hypergeometric function defined by
\begin{equation}
{}_rF_s(a_1,a_2,...,a_r;b_1,b_2,...,b_s;z)=\sum_{n=0}^{\infty}\frac{(a_1)_n(a_2)_n...(a_r)_n}{(b_1)_n(b_2)_n...(b_s)_n}
\frac{z^n}{n!}\end{equation} where $(a)_n$ denotes the Pochhammer
symbol defined by $$ (a)_0=1, \textrm{and} ~~
(a)_k=a(a+1)(a+2)...(a+k-1), k=1,2,...$$ When one of the numerator
parameters $a_i$ equals $-n$ where $n$ is a nonnegative integer this
hypergeometric series is a polynomial in $z$. Otherwise the radius
of convergence is $\infty$, $1$ or $0$ for $r<s+1$, $r=s+1$ or
$r>s+1$ respectively.
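A direct implementation of the truncated ${}_r\phi_s$ series is a useful sanity check for the computations below; the following sketch (arbitrary parameter values, not from the original text) tests it against the $q$-binomial theorem ${}_1\phi_0(a;-;q;z)=(az;q)_\infty/(z;q)_\infty$ for $|z|<1$:

```python
def q_poch(a, q, n):
    """(a; q)_n = prod_{j=0}^{n-1} (1 - a q^j)."""
    p = 1.0
    for j in range(n):
        p *= 1.0 - a * q**j
    return p

def r_phi_s(a_list, b_list, q, z, terms=100):
    """Truncated basic hypergeometric series {}_r phi_s(a_1..a_r; b_1..b_s; q; z)."""
    r, s = len(a_list), len(b_list)
    total = 0.0
    for n in range(terms):
        num = 1.0
        for a in a_list:
            num *= q_poch(a, q, n)
        den = q_poch(q, q, n)
        for b in b_list:
            den *= q_poch(b, q, n)
        total += num / den * ((-1) ** n * q ** (n * (n - 1) // 2)) ** (1 + s - r) * z**n
    return total

# Sanity check against the q-binomial theorem (truncated products as infinity).
q, a, z = 0.5, 0.2, 0.3
print(r_phi_s([a], [], q, z), q_poch(a * z, q, 300) / q_poch(z, q, 300))
```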
\vskip 0.5cm
\section{Continuous $q$-Laguerre polynomials}
\setcounter{equation}{0}
The continuous $q$-Laguerre polynomials have appeared in the
rational solutions of the $q$-analogue of the
Painlev\'e V differential equation \cite{masuda}, namely as the
entries of its associated determinant.
They are defined by: \cite{KS}
\begin{eqnarray}
&& P_n^{\alpha}(x|q)=\frac{(q^{\alpha+1};q)_n}{(q;q)_{n}}
{}_3\phi_2\left(q^{-n} ,q^{\frac{1}{ 2} \alpha+ \frac{1}{
4}}e^{i\theta} ,q^{\frac{1}{ 2} \alpha+ \frac{1}{
4}}e^{-i\theta};q^{\alpha+1},0;q;q
\right),\quad x=\cos\theta\\
&&=\frac{(q^{\frac{1}{ 2} \alpha+ \frac{3}{
4}}e^{-i\theta};q)_n}{(q;q)_{n}}q^{({\frac{1}{ 2} \alpha+ \frac{1}{
4}})n}e^{in\theta} {}_2\phi_1\left(q^{-n} ,q^{\frac{1}{ 2} \alpha+
\frac{1}{ 4}}e^{i\theta} ;q^{-{\frac{1}{ 2} \alpha+ \frac{1}{
4}}-n};q;q^{-{\frac{1}{ 2} \alpha+ \frac{1}{ 4}}}e^{-i\theta}\right)
\end{eqnarray}
The
generating function of the continuous $q$-Laguerre polynomials is
given by \cite{KS}
\begin{eqnarray}\label {genrating1}
&&\textsf{G}_q^{\alpha}(x;t)\equiv \frac{(q^{\alpha+\frac{1}{2}}t;q)_\infty(q^{\alpha+1}t;q)_\infty}{(q^{\frac{1}{2}\alpha+\frac{1}{4}}e^{i\theta}t;q)_\infty(q^{\frac{1}{2}\alpha+\frac{1}{4}}e^{-i\theta}t;q)_\infty}\nonumber\\
&&=E_q(-q^{\alpha+\frac{1}{2}}t)E_q(-q^{\alpha+1}t)e_q(q^{\frac{1}{2}\alpha+\frac{1}{4}}e^{i\theta}t)e_q(q^{\frac{1}{2}\alpha+\frac{1}{4}}e^{-i\theta}t)
=\sum_{n\geq 0}P_n^{\alpha}(x|q)t^n.
\end{eqnarray}
In the $q\rightarrow 1$ limit, when $x$ is replaced by
$q^{\frac{x}{2}}$ in the above function (\ref{genrating1}), we find
the generating function
\begin{equation}\label {genrating2}
\textsf{G}^{\alpha}(x,t)\equiv
(1-t)^{-\alpha-1}\exp\left(\frac{xt}{t-1}\right)=\sum_{n=0}^{\infty}L_n^{\alpha}(x)t^n
\end{equation}for the classical
Laguerre polynomials \cite{AAR}
\begin{equation}
L_n^{\alpha}(x)=\frac{(\alpha+1)_n}{n!}{}_1F_1\left(-n;\alpha+1;x\right).
\end{equation}
Using (\ref{quesne1}) and (\ref{quesne2}) the
left hand side of (\ref {genrating1}) is reformulated as
\begin{equation}\label{genrating4}
\textsf{G}_q^{\alpha}(x;t)=\prod_{k\in\mathbb{N}^*}\exp\left(\frac{-q^{(\alpha+\frac{1}{2})k}-q^{(\alpha+1)k}+2q^{(\frac{1}{2}\alpha+\frac{1}{4})k}\cos
k\theta}{k(1-q^k)}t^{k}\right).
\end{equation}
To express the continuous $q$-Laguerre generating function (\ref
{genrating1}) as a multiplicative series of the classical Laguerre
generating function, we introduce the following parameters
\begin{eqnarray}
x_k&=&\frac{-q^{(\alpha+\frac{1}{2})k}-q^{(\alpha+1)k}+2q^{(\frac{1}{2}\alpha+\frac{1}{4})k}\cos
k\theta}{k(1-q^k)}\nonumber\\
\tau_k&=&\frac{t^k}{t^k-1}
\end{eqnarray}
then we have
\begin{eqnarray}\label{geni}
\textsf{G}_q^{\alpha}(x,t)&=&\prod_{k\in\mathbb{N}^*}\left[ \textsf{G}^{\alpha_k}(x_k,\tau_k)(1-\tau_k)^{\alpha_k+1}\right]\\
&=&\sum_{\{n_k\}}\prod_{k\in\mathbb{N}^*}\left[L_{n_k}^{\alpha_k}(x_k)\tau_
k^{n_k}(1-\tau _k)^{\alpha_k+1}\right]
\end{eqnarray}
Here $\{\alpha_k\}$ is a family of generic parameters. We
rewrite
\begin{equation}
\tau_ k^{n_k}(1-\tau _k)^{\alpha_k+1}=(-1)^{n_k}\sum_{m_k\geq
0}\frac{(\alpha_k+n_k+1)_{m_k}}{m_k!}t^{k(n_k+m_k)}
\end{equation}
We obtain
\begin{eqnarray}\label{genrating3}
\textsf{G}_q^{\alpha}(x,t)=\sum_{\{n_k\}}\sum_{\{m_k\}}\prod_{k\in\mathbb{N}^*}\left[(-1)^{n_k}L_{n_k}^{\alpha_k}(x_k)\frac{(\alpha_k+n_k+1)_{m_k}}{m_k!}t^{k(n_k+m_k)}\right]
\end{eqnarray}
Inserting the series given in (\ref{genrating1}) into the latter
relation (\ref{genrating3}) and comparing coefficients of equal
power in $t$ on both sides, we obtain our connection formula for the
continuous $q$-Laguerre polynomials in terms of their classical
analogues
\begin{equation}\label{connection}
P_{n}^{\alpha}(x|q)=\sum_{\{n_k\}}\sum_{\{m_k\}}\prod_{k\in\mathbb{N}^*}\left[(-1)^{n_k}L_{n_k}^{\alpha_k}(x_k)\frac{(\alpha_k+n_k+1)_{m_k}}{m_k!}\right]
\delta_{\sum_{k\in\mathbb{N}^*}k(n_k+m_k),n}.
\end{equation}
The family $\{\alpha_k\}$ may consist of arbitrary real
parameters and, by construction, the right-hand side of
(\ref{connection})
must be independent of this family. Each set $\{\alpha_k\}$
provides
an expansion of the continuous $q$-Laguerre polynomials. The solutions of the Diophantine partition relation
\begin{equation}\label{partition}
\sum_{k\in\mathbb{N}^*}k(n_k+m_k)=n
\end{equation}
determine the set of classical Laguerre polynomials contributing to
the expansion of the continuous $q$-Laguerre polynomial. For an
explicit example we have used the connection formula
(\ref{connection}) to write the $P_{4}^{\alpha}(x|q)$ polynomial.
This is done after solving the Diophantine partition equation
(\ref{partition}) for $n=4$. In Table \ref{Table 1} we have listed
the corresponding solutions for this case together with their
respective classical Laguerre polynomials contributions to the
connection formula (\ref{connection}).
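As a cross-check of (\ref{connection}), the following sketch (all numerical parameter values arbitrary, not from the original text) enumerates the solutions of (\ref{partition}) and compares the connection-formula sum with the coefficient of $t^n$ in $\prod_k \exp(x_k t^k)$, i.e. with $P_n^{\alpha}(x|q)$ from (\ref{genrating4}); the result is indeed independent of the chosen $\{\alpha_k\}$:

```python
import math

def laguerre(n, alpha, x):
    """Generalized Laguerre L_n^{(alpha)}(x) via the three-term recurrence
    (k+1) L_{k+1} = (2k+1+alpha-x) L_k - (k+alpha) L_{k-1}."""
    l0, l1 = 1.0, 1.0 + alpha - x
    if n == 0:
        return l0
    for k in range(1, n):
        l0, l1 = l1, ((2 * k + 1 + alpha - x) * l1 - (k + alpha) * l0) / (k + 1)
    return l1

def poch(a, m):
    """Pochhammer symbol (a)_m."""
    p = 1.0
    for j in range(m):
        p *= a + j
    return p

def solutions(n):
    """Families {n_k}, {m_k} with sum_k k (n_k + m_k) = n, listed as the
    nonzero triples (k, n_k, m_k)."""
    out = []
    def rec(k, rem, acc):
        if rem == 0:
            out.append(acc)
            return
        if k > rem:
            return
        for nk in range(rem // k + 1):
            for mk in range((rem - k * nk) // k + 1):
                if nk == mk == 0:
                    continue
                rec(k + 1, rem - k * (nk + mk), acc + [(k, nk, mk)])
        rec(k + 1, rem, acc)  # k contributes nothing
    rec(1, n, [])
    return out

q, alpha, theta = 0.5, 1.0, 0.8   # arbitrary test values

def x_k(k):
    return (-q ** ((alpha + 0.5) * k) - q ** ((alpha + 1.0) * k)
            + 2.0 * q ** ((alpha / 2 + 0.25) * k) * math.cos(k * theta)) / (k * (1.0 - q**k))

def connection_rhs(n, alphas):
    """Right-hand side of the connection formula with a free family {alpha_k}."""
    tot = 0.0
    for sol in solutions(n):
        term = 1.0
        for k, nk, mk in sol:
            ak = alphas[k]
            term *= ((-1) ** nk * laguerre(nk, ak, x_k(k))
                     * poch(ak + nk + 1, mk) / math.factorial(mk))
        tot += term
    return tot

def coeff(n):
    """Coefficient of t^n in prod_k exp(x_k t^k), i.e. P_n^{alpha}(x|q)."""
    c = [1.0] + [0.0] * n
    for k in range(1, n + 1):
        new = [0.0] * (n + 1)
        for i, ci in enumerate(c):
            j = 0
            while i + k * j <= n:
                new[i + k * j] += ci * x_k(k) ** j / math.factorial(j)
                j += 1
        c = new
    return c[n]

print(len(solutions(4)))  # number of solutions of the partition relation for n = 4
print(connection_rhs(3, {1: 0.3, 2: 1.7, 3: -0.4}), coeff(3))
```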
\vskip 0.5cm
\section{Continuous big $q$-Hermite polynomials}
\setcounter{equation}{0}
The continuous big $q$-Hermite polynomials $H_n(x;a;q)$ appear in
many contexts of mathematical physics in particular in \cite {FTV}
where it was shown that they realize a basis for a representation
space of an extended $q$-oscillator algebra. They depend on one
parameter and are defined by \cite {KS}
\begin{eqnarray}
H_n(x;a;q)&=&a^{-n}{}_3\phi_2\left(q^{-n}, ae^{i\theta},ae^{-i\theta};0,0;q;q\right)\nonumber\\
&=& e^{in\theta}{}_2\phi_0\left(q^{-n},
ae^{i\theta};-;q;q^{n}e^{-2i\theta}\right),\quad x=\cos\theta.
\end{eqnarray}
And their generating function is given by
\begin{eqnarray}\label {gen1}
\textsc{G}_q(x,a,t)&=&\frac{(at;q)_{\infty}}{(e^{i\theta}t;q)_{\infty}(e^{-i\theta}t;q)_{\infty}},\qquad x=\cos\theta\nonumber\\
&=&E_q(-at)e_q(e^{i\theta}t)e_q(e^{-i\theta}t)
=\sum_{n=0}^{\infty}\frac{H_n(x;a;q)}{(q;q)_{n}}t^n.
\end{eqnarray}
Let's recall that the classical Hermite polynomials, defined by
\cite {AAR}
\begin{equation}
H_n(x)=(2x)^n {}_2F_0\left(-n/2,-(n-1)/2;-;-\frac{1}{x^2}\right),
\end{equation}
can be obtained from the continuous big $q$-Hermite polynomials by the following limit
\begin{eqnarray}
\lim_{q\rightarrow
1}(\frac{1-q}{2})^{-\frac{n}{2}}H_n\left(x(\frac{1-q}{2})^{\frac{1}{2}};a(2(1-q))^{\frac{1}{2}};q\right)=H_n(x-a).
\end{eqnarray}
Their generating function is given by
\begin{eqnarray}\label {gen2}
\textsc{G}(x,t)=\exp(2xt-t^2)=\sum_{n=0}^{\infty}\frac{H_n(x)}{n!}t^n.
\end{eqnarray}
In a similar construction, the two kinds of $q$-exponential
function in the deformed generating function (\ref{gen1}) are
substituted by their expressions given in (\ref{quesne1}) and
(\ref{quesne2}) to obtain the following expression
\begin{equation}
\textsc{G}_q(x,a,t)=\prod_{k\geq1}\exp\left(\frac{-a^k+2\cos(k\theta)}{k(1-q^k)}t^k\right)
\end{equation}
For our purpose we set
\begin{equation}
x_k=\frac{-a^k+2\cos(k\theta)}{2k(1-q^k)}
\end{equation}
then we can write the deformed generating function
$\textsc{G}_q(x,a,t)$ as an infinite product series of the classical
Hermite generating function:
\begin{equation}\label{gen3}
\textsc{G}_q(x,a,t)=\prod_{k\geq1}\left(\textsc{G}(x_k,t^k)e^{t^{2k}}\right)
\end{equation}
Inserting the series given in the rhs of (\ref{gen1}) and (\ref{gen2}),
together with the series of $e^{t^{2k}}$, into relation (\ref{gen3}), and
comparing coefficients of equal power in $t$ on both sides, we
obtain our connection formula for the continuous big $q$-Hermite
polynomials in terms of their classical analogues
\begin{equation}\label{connection2}
\frac{H_n(x;a;q)}{(q;q)_{n}}=\sum_{\{n_k\}}\sum_{\{m_k\}}\prod_{k\in\mathbb{N}^*}\frac{H_{n_k}(x_k)}{n_k!m_k!}\delta_{\sum_{k\in\mathbb{N}^*}k(n_k+2m_k),n}.
\end{equation}
Here again all the problem stands in finding the solutions of the
Diophantine partition equation
\begin{equation}\label{Diaph1}
\sum_{k\in\mathbb{N}^*}k(n_k+2m_k)=n. \end{equation} To illustrate
the connection formula (\ref{connection2}) we have listed in Table
\ref {Table 2} all the possible non-zero solutions of (\ref{Diaph1})
for $n=5$ and their corresponding classical Hermite polynomials
involved in the construction of $H_5(x;a;q)$.
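As a numerical cross-check of (\ref{connection2}) (a sketch with arbitrarily chosen $q$, $a$ and $\theta$, not part of the original text), one can enumerate the solutions of (\ref{Diaph1}) and compare the connection-formula sum with the coefficient of $t^n$ in $\prod_k \exp(2x_k t^k)$, which equals $H_n(x;a;q)/(q;q)_n$:

```python
import math

def hermite(n, x):
    """Classical Hermite H_n(x) via H_{n+1} = 2x H_n - 2n H_{n-1}."""
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def solutions(n):
    """Families {n_k}, {m_k} with sum_k k (n_k + 2 m_k) = n."""
    out = []
    def rec(k, rem, acc):
        if rem == 0:
            out.append(acc)
            return
        if k > rem:
            return
        for nk in range(rem // k + 1):
            for mk in range((rem - k * nk) // (2 * k) + 1):
                if nk == mk == 0:
                    continue
                rec(k + 1, rem - k * (nk + 2 * mk), acc + [(k, nk, mk)])
        rec(k + 1, rem, acc)  # k contributes nothing
    rec(1, n, [])
    return out

q, a, theta = 0.5, 0.4, 0.7   # arbitrary test values

def x_k(k):
    return (-a**k + 2.0 * math.cos(k * theta)) / (2.0 * k * (1.0 - q**k))

def connection_rhs(n):
    tot = 0.0
    for sol in solutions(n):
        term = 1.0
        for k, nk, mk in sol:
            term *= hermite(nk, x_k(k)) / (math.factorial(nk) * math.factorial(mk))
        tot += term
    return tot

def coeff(n):
    """Coefficient of t^n in prod_k exp(2 x_k t^k) = H_n(x; a; q)/(q; q)_n."""
    c = [1.0] + [0.0] * n
    for k in range(1, n + 1):
        new = [0.0] * (n + 1)
        for i, ci in enumerate(c):
            j = 0
            while i + k * j <= n:
                new[i + k * j] += ci * (2.0 * x_k(k)) ** j / math.factorial(j)
                j += 1
        c = new
    return c[n]

print(len(solutions(5)))  # number of solutions of the partition equation for n = 5
for n in range(6):
    print(n, connection_rhs(n), coeff(n))  # the two columns agree
```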
{\remark
The continuous $q$-Hermite polynomials, (see, for example,
\cite{KS}, section 3.26), can easily be obtained from the continuous
big $q$-Hermite polynomials $H_n(x;a;q)$ by setting $a=0$; then we
can derive their connection formula in terms of the classical
Hermite polynomials by taking $a=0$ in both sides of
(\ref{connection2}).}
\vskip 0.5cm
\section{$q$-Meixner-Pollaczek polynomials}
\setcounter{equation}{0}
In this section we treat the cases of the $q$-Meixner-Pollaczek
polynomials \cite{KS}
\begin{eqnarray}
P_n(x;\lambda;q)&=&q^{-n\lambda}e^{-in\phi}\frac{(q^{2\lambda};q)_n}{(q;q)_n}{}_3\phi_2\left(q^{-n},q^{\lambda}e^{i(\theta+2\phi)}, q^{\lambda}e^{-i\theta};q^{2\lambda},0;q;q\right),\quad x=\cos(\theta+\phi)\nonumber\\
&=&\frac{(q^{\lambda}e^{-i\theta};q)_n}{(q;q)_n}e^{in(\theta+\phi)}{}_2\phi_1\left(q^{-n},q^{\lambda}e^{i\theta};q^{1-\lambda-n}e^{i\theta};q;q^{1-\lambda}e^{-i(\theta+2\phi)}\right)
.
\end{eqnarray}
These are the $q$-analogue of the classical Meixner-Pollaczek
polynomials defined by \cite{AAR}
\begin{eqnarray}
P_n^{(\lambda)}(x;\phi)=\frac{(2\lambda)_n}{n!}e^{in\phi}{}_2F_1\left(-n,\lambda+ix;2\lambda;1-e^{-2i\phi}\right),\quad
\lambda >0,\quad 0<\phi<\pi.
\end{eqnarray}
It is clear from the last two expressions that the following limit
holds:
\begin{eqnarray}
\lim_{q\rightarrow 1}P_n(\cos(\ln
q^{-x}+\phi);\lambda;q)=P_n^{(\lambda)}(x;-\phi)
\end{eqnarray}
The generating function of the $q$-Meixner-Pollaczek polynomials is
given by
\begin{eqnarray}\label{gen4}
\textbf{G}_q^{\lambda}(x,t)&=&\left|\frac{(q^{\lambda}e^{i\phi}t;q)_{\infty}}{(e^{i(\theta+\phi)}t;q)_{\infty}}\right|^2
=\frac{E_q(-q^{\lambda}e^{i\phi}t)E_q(-q^{\lambda}e^{-i\phi}t)}{E_q(-e^{i(\theta+\phi)}t)E_q(-e^{-i(\theta+\phi)}t)}\\
&=&\sum_{n=0}^{\infty}P_n(x;\lambda;q)t^n,\qquad
x=\cos(\theta+\phi).\nonumber
\end{eqnarray}
Using (\ref{quesne2}), this can be written as
\begin{eqnarray}
\textbf{ G}_q^{\lambda}(x,t) &=&
\prod_{k\geq1}\exp\left(\frac{2}{k}\frac{-q^{k\lambda}\cos
k\phi+\cos k(\theta+\phi)}{1-q^k}t^k\right)
\end{eqnarray}
We set
\begin{equation}
x_k=\frac{2}{k}\frac{-q^{k\lambda}\cos k\phi+\cos k(\theta+\phi)}{1-q^k}
\end{equation}
then we can write $
\textbf{\textbf{G}}_q^{\lambda}(x,t) $ as
\begin{equation}\label{GeneratingM}
\textbf{G}_q^{\lambda}(x,t)
=\sum_{\{n_k\}}\prod_{k\geq1}\left[\frac{1}{n_k!}x_k^{n_k}t^{kn_k}\right]
\end{equation}
On the other hand, for any $x\in\mathbb{R}$ and $m\in \mathbb{N}$, we
can expand $x^m$ with respect of the classical Meixner-Pollaczek
polynomials in the following way
\begin{equation}\label {expansionM}
x^m=\sum_{l=0}^m A_{l,m}^{\lambda,\phi}P_{l}^{(\lambda)}(x;\phi)
\end{equation}
where the $A_{l,m}^{\lambda,\phi}$ satisfy, for $l=0,...,m$ , the
following recursion relation
\begin{equation}\label{}
\left\{
\begin{array}{l}
A_{0,0}^{\lambda,\phi}=1\\2\sin \phi
A_{l,m+1}^{\lambda,\phi}=(l+2\lambda)A_{l+1,m}^{\lambda,\phi}-2(l+\lambda)\cos\phi
A_{l,m}^{\lambda,\phi}+l A_{l-1,m}^{\lambda,\phi},
\end{array} \right.
\end{equation}
and for $l>m$, $A_{l,m}^{\lambda,\phi}=0$. Using this expansion in
the rhs of (\ref{GeneratingM}) to express $x_k^{n_k}$ in terms of
the classical Meixner-Pollaczek polynomials, and comparing
coefficients of equal powers of $t$ on both sides, we obtain our
connection formula for the $q$-Meixner-Pollaczek polynomials in
terms of their classical partners of lower degree:
\begin{equation}\label{connectionM}
P_n^{(\lambda)}(x;\phi;q)=\sum_{n_1,n_2,...=0}^{\infty}\sum_{{{0\leq
l_1\leq n_1,} \atop {0\leq l_2\leq
n_2,}}\atop{\ldots}}\prod_{k\geq1}\left[\frac{1}{n_k!}A_{l_k,n_k}^{\lambda_k,\phi_k
}P_{l_k}^{(\lambda
_k)}(x_k;\phi_k)\right]\delta_{\sum_{k\geq1}kn_k,n}
\end{equation}
Here, the same observation made above for the $\{\alpha_k\}$ family
of the continuous $q$-Laguerre polynomials applies to the
$\{\lambda_k\}$ and $\{\phi_k\}$ families in (\ref{connectionM}),
i.e., the latter connection formula remains independent of the
$\lambda_k$ and $\phi_k$ parameters. As an example, we list below
the first few $q$-Meixner-Pollaczek polynomials in terms of their
classical counterparts, after solving the partition equation
$\sum_{k\geq1}kn_k=n$ in each case.
\begin{eqnarray}
P_0^{(\lambda)}(x;\phi;q)&=&P_0^{(\lambda)}(x;\phi)=1 \nonumber\\
P_1^{(\lambda)}(x;\phi;q) &=& \frac{1}{2\sin\phi_1}\left[P_1^{(\lambda_1)}(x_1;\phi_1)-2\lambda_1\cos\phi_1 P_0^{(\lambda_1)}(x_1;\phi_1)\right] \nonumber\\
P_2^{(\lambda)}(x;\phi;q)&=&\frac{1}{4\sin^2\phi_1}\left[P_2^{(\lambda_1)}(x_1;\phi_1)-(2\lambda_1+1)\cos\phi_1P_1^{(\lambda_1)}(x_1;\phi_1)\right.\nonumber\\&+&\left.\lambda_1(2\lambda_1\cos^2\phi_1+1)P_0^{(\lambda_1)}(x_1;\phi_1)\right]\nonumber\\&+&\frac{1}{2\sin\phi_2}\left[P_1^{(\lambda_2)}(x_2;\phi_2)-2\lambda_2\cos\phi_2 P_0^{(\lambda_2)}(x_2;\phi_2)\right]\nonumber \\
P_3^{(\lambda)}(x;\phi;q)&=&\frac{1}{24\sin^3\phi_1}\left[3P_3^{(\lambda_1)}(x_1;\phi_1)-6(\lambda_1+1)\cos\phi_1P_2^{(\lambda_1)}(x_1;\phi_1)\right.\nonumber\\
&+&\left.((3\lambda_1+1)+2(3\lambda_1^2+3\lambda_1+1)\cos^2\phi_1)P_1^{(\lambda_1)}(x_1;\phi_1)\right.\nonumber\\&-&2\left.\lambda_1\cos\phi_1(3\lambda_1+1+2\lambda_1^2\cos^2\phi_1)P_0^{(\lambda_1)}(x_1;\phi_1)\right]\nonumber\\
&+&\frac{1}{4\sin\phi_1\sin\phi_2}\left[P_1^{(\lambda_1)}(x_1;\phi_1)-2\lambda_1\cos\phi_1
P_0^{(\lambda_1)}(x_1;\phi_1)\right]\times\nonumber\\
&\times&\left[P_1^{(\lambda_2)}(x_2;\phi_2)-2\lambda_2\cos\phi_2
P_0^{(\lambda_2)}(x_2;\phi_2)\right]\nonumber\\
&+&\frac{1}{2\sin\phi_3}\left[P_1^{(\lambda_3)}(x_3;\phi_3)-2\lambda_3\cos\phi_3
P_0^{(\lambda_3)}(x_3;\phi_3)\right]
\end{eqnarray}
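Both the recursion for $A_{l,m}^{\lambda,\phi}$ and the expansion (\ref{expansionM}) can be checked numerically; the sketch below (hypothetical helper names) evaluates the classical Meixner-Pollaczek polynomials through their standard three-term recurrence $(n+1)P_{n+1}^{(\lambda)}(x;\phi)=2[x\sin\phi+(n+\lambda)\cos\phi]P_n^{(\lambda)}(x;\phi)-(n+2\lambda-1)P_{n-1}^{(\lambda)}(x;\phi)$ and verifies $x^m=\sum_l A_{l,m}^{\lambda,\phi}P_l^{(\lambda)}(x;\phi)$ for generic parameters:

```python
import math

def mp_eval(x, lam, phi, nmax):
    # Classical Meixner-Pollaczek P_n^{(lam)}(x; phi), n = 0..nmax, via the
    # three-term recurrence
    # (n+1) P_{n+1} = 2[x sin(phi) + (n+lam) cos(phi)] P_n - (n+2lam-1) P_{n-1}.
    P = [1.0, 2.0 * (x * math.sin(phi) + lam * math.cos(phi))]
    for n in range(1, nmax):
        P.append((2.0 * (x * math.sin(phi) + (n + lam) * math.cos(phi)) * P[n]
                  - (n + 2.0 * lam - 1.0) * P[n - 1]) / (n + 1.0))
    return P[:nmax + 1]

def expansion_coeffs(mmax, lam, phi):
    # A[m][l] built from A_{0,0} = 1 and the recursion
    # 2 sin(phi) A_{l,m+1} = (l+2lam) A_{l+1,m}
    #                        - 2(l+lam) cos(phi) A_{l,m} + l A_{l-1,m}.
    A = [[0.0] * (mmax + 2) for _ in range(mmax + 1)]
    A[0][0] = 1.0
    for m in range(mmax):
        for l in range(m + 2):
            prev = A[m][l - 1] if l >= 1 else 0.0
            A[m + 1][l] = ((l + 2.0 * lam) * A[m][l + 1]
                           - 2.0 * (l + lam) * math.cos(phi) * A[m][l]
                           + l * prev) / (2.0 * math.sin(phi))
    return A

# check  x^m = sum_l A_{l,m} P_l(x)  for a generic parameter choice
lam, phi, x = 0.8, 1.1, 0.7
A, P = expansion_coeffs(4, lam, phi), mp_eval(x, lam, phi, 4)
err = max(abs(sum(A[m][l] * P[l] for l in range(m + 1)) - x ** m)
          for m in range(5))
assert err < 1e-9
```

In particular $A_{1,1}^{\lambda,\phi}=1/(2\sin\phi)$ and $A_{0,1}^{\lambda,\phi}=-\lambda\cos\phi/\sin\phi$, reproducing the coefficient pattern visible in the examples above.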
\vskip 0.5cm
\section{$q$-Gegenbauer polynomials}
\setcounter{equation}{0}
The $q$-Gegenbauer (or continuous $q$-ultraspherical or Rogers)
polynomials are given by \cite{GR}
\begin{eqnarray}
C_n^{(\lambda)}(x;q)&=&\frac{(q^{2\lambda};q)_n}{(q;q)_{n}}q^{-\frac{n\lambda}{2}}
{}_4\phi_3\left(q^{-n} ,q^{2 \lambda+ n},
q^{\frac{\lambda}{2}}e^{i\theta}, q^{\frac{\lambda}{2}}e^{-i\theta}
;q^{\lambda+\frac{1}{ 2} }, -q^{\lambda},-q^{\lambda+\frac{1}{ 2}
};q;q
\right)\\
&=&\frac{(q^{2\lambda};q)_n}{(q;q)_{n}}q^{-n\lambda}e^{-in\theta}
{}_3\phi_2\left(q^{-n} ,q^{ \lambda}, q^{\lambda}e^{2i\theta}
;q^{2\lambda}, 0;q;q
\right)\nonumber\\
&=&\frac{(q^{\lambda};q)_n}{(q;q)_{n}}e^{in\theta}
{}_2\phi_1\left(q^{-n} ,q^{ \lambda} ;q^{1-n-\lambda};
q;q^{1-\lambda}e^{-2i\theta} \right),\qquad x=\cos\theta.\nonumber
\end{eqnarray}
These polynomials can also be written as
\begin{equation}\label{geg}
C_n^{(\lambda)}(x;q)=\sum_{k=0}^{n}\frac{(q^{\lambda};q)_{k}(q^{\lambda};q)_{n-k}}{(q;q)_{k}(q;q)_{n-k}}e^{i(n-2k)\theta},\qquad x=\cos\theta.
\end{equation}
These are the $q$-analogues of the classical Gegenbauer (or
ultraspherical) polynomials \cite{AAR}
\begin{eqnarray}
C_n^{(\lambda)}(x)&=&\frac{(2\lambda)_n}{n!}{}_2F_1\left(-n,n+2\lambda;\lambda+\frac{1}{2};\frac{1-x}{2}\right),\quad
\lambda\neq0\\
&=&\sum_{k=0}^{n}\frac{({\lambda})_{k}({\lambda})_{n-k}}{k!(n-k)!}e^{i(n-2k)\theta},\qquad x=\cos\theta.
\end{eqnarray}
The generating function of the $q$-Gegenbauer polynomials is given
by
\begin{eqnarray}\label {generating8}
\verb"G"_q^{\lambda}(x;t)&\equiv
&\frac{(q^{\lambda}e^{i\theta}t;q)_\infty(q^{\lambda}e^{-i\theta}t;q)_\infty}{(e^{i\theta}t;q)_\infty(e^{-i\theta}t;q)_\infty},\qquad
x=\cos\theta.\nonumber\\
&=&\frac{E_q(-q^{\lambda}e^{i\theta}t)E_q(-q^{\lambda}e^{-i\theta}t)}{E_q(-e^{i\theta}t)E_q(-e^{-i\theta}t)}
=\sum_{n\geq 0}C_n^{(\lambda)}(x;q)t^n
\end{eqnarray}
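The generating function (\ref{generating8}) can be checked numerically against the explicit sum (\ref{geg}): truncating the infinite $q$-Pochhammer products at high order and summing a few dozen terms of the series already gives agreement to many digits (a sketch; helper names are hypothetical):

```python
import cmath

def qpoch(a, q, n):
    # finite q-Pochhammer (a; q)_n
    out = 1.0 + 0.0j
    for k in range(n):
        out *= 1.0 - a * q ** k
    return out

def qgeg(n, lam, theta, q):
    # C_n^{(lam)}(cos(theta); q) from the explicit sum formula
    a = q ** lam
    s = sum(qpoch(a, q, k) * qpoch(a, q, n - k)
            / (qpoch(q, q, k) * qpoch(q, q, n - k))
            * cmath.exp(1j * (n - 2 * k) * theta)
            for k in range(n + 1))
    return s.real

q, lam, theta, t = 0.5, 0.7, 0.9, 0.1
K = 200          # truncation order standing in for the infinite products
ei, emi = cmath.exp(1j * theta), cmath.exp(-1j * theta)
lhs = (qpoch(q ** lam * ei * t, q, K) * qpoch(q ** lam * emi * t, q, K)
       / (qpoch(ei * t, q, K) * qpoch(emi * t, q, K)))
rhs = sum(qgeg(n, lam, theta, q) * t ** n for n in range(40))
assert abs(lhs.real - rhs) < 1e-9 and abs(lhs.imag) < 1e-9
```

For $|t|<1$ the series converges quickly, so the partial sum suffices for the comparison.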
Again, by means of (\ref{quesne2}), the generating function
(\ref{generating8}) takes the following form
\begin{eqnarray}
\verb" G"_q^{\lambda}(x,t) &=&
\exp\left(\sum_{k\geq1}\frac{2}{k}\frac{1-q^{k\lambda}}{1-q^k}\cos
(k\theta)
t^k \right)
\end{eqnarray}
With the parametrization $ x_k=\cos(k\theta)$ and $
[\lambda]_{q^k}=\frac{1-q^{k\lambda}}{1-q^k}$ the deformed
generating function reads
\begin{equation}\label{generating10}
\verb"G"_q^{\lambda}(x,t)=\sum_{\{n_k\}}\prod_{k\geq1}\left[\frac{1}{n_k!}\left(\frac{2}{k}[\lambda]_{q^k}\right)^{n_k}x_k^{n_k}t^{kn_k}\right].
\end{equation}
Recall that for any $|x|<1$ and $m\in \mathbb{N}$, we can expand
$x^{m}$ in terms of the classical Gegenbauer polynomials as
\begin{equation}\label {expansion}
x^m=\frac{m!}{2^m}\sum_{l=0}^{[\frac{m}{2}]}a_{l,m}^{\lambda}C_{2l+s}^{(\lambda)}(x)
\end{equation}
with
\begin{equation}
a_{l,m}^{\lambda}=\frac{\Gamma(\lambda)(2l+s+\lambda)}{\Gamma([\frac{m}{2}]+l+s+\lambda+1)([\frac{m}{2}]-l)!}
\end{equation}
and $s=0$ (resp.\ $s=1$) for $m$ even (resp.\ $m$ odd);
$[\frac{m}{2}]$ denotes the largest integer smaller than or equal to
$\frac{m}{2}$. Using this expansion in the rhs of
(\ref{generating10}) to rewrite $x_k^{n_k}$ in terms of the classical
Gegenbauer polynomials, and comparing coefficients of equal powers of
$t$ on both sides, we obtain a new connection formula for the
$q$-Gegenbauer polynomials in terms of their classical analogues,
which is more general than the one found in \cite{CJN}:
\begin{equation}\label{connection4}
C_n^{(\lambda)}(x;q)=\sum_{n_1,n_2,...=0}^{\infty}\sum_{{{0\leq
l_1\leq n_1,}\atop{0\leq l_2\leq
n_2,}}\atop{\ldots}}\prod_{k\geq1}\left[\left(\frac{1}{k}[\lambda]_{q^k}\right)^{n_k}a_{l_k,n_k}^{\lambda
_k}C_{2l_k+s_k}^{(\lambda
_k)}(x_k)\right]\delta_{\sum_{k\geq1}kn_k,n}
\end{equation}
Here also the connection formula (\ref{connection4}) remains
independent of the $\lambda_k$ parameters: each real set
$\{\lambda_k\}$ provides an expansion of the $q$-Gegenbauer
polynomials. As an illustration, we list below the first six
$q$-Gegenbauer polynomials in terms of their classical
counterparts.
\begin{eqnarray}
C_0^{(\lambda)}(x;q)&=& C_0^{(\lambda)}(x)=1. \nonumber\\
C_1^{(\lambda)}(x;q) &=& [\lambda]_{q}\frac{1}{\lambda_1} C_1^{(\lambda_1)}(x_1).\nonumber\\
C_2^{(\lambda)}(x;q) &=& [\lambda]_{q}^2\frac{1}{\lambda_1+1}\left( C_0^{(\lambda_1)}(x_1)+\frac{1}{\lambda_1}C_2^{(\lambda_1)}(x_1)\right)+[\lambda]_{q^2}\frac{1}{2\lambda_2}C_1^{(\lambda_2)}(x_2).\nonumber\\
C_3^{(\lambda)}(x;q) &=&[\lambda]_{q}^3\frac{1}{\lambda_1+2}\left(\frac{1}{\lambda_1}C_1^{(\lambda_1)}(x_1)+\frac{1}{\lambda_1(\lambda_1+1)}C_3^{(\lambda_1)}(x_1)\right)\nonumber \\
&+&\frac{1}{2}[\lambda]_{q}[\lambda]_{q^2}\frac{1}{\lambda_1\lambda_2}C_1^{(\lambda_1)}(x_1)C_1^{(\lambda_2)}(x_2)
+\frac{1}{3}[\lambda]_{q^3}\frac{1}{\lambda_3}C_1^{(\lambda_3)}(x_3).
\nonumber\\
C_4^{(\lambda)}(x;q)&=&[\lambda]_{q}^4\frac{1}{\lambda_1+1}\left(\frac{1}{(2\lambda_1+2)}C_0^{(\lambda_1)}(x_1)+\frac{1}{\lambda_1(\lambda_1+3)}C_2^{(\lambda_1)}(x_1)\right.\nonumber\\
&+&\left.\frac{1}{\lambda_1(\lambda_1+2)(\lambda_1+3)}C_4^{(\lambda_1)}(x_1)\right)\nonumber \\
&+&\frac{1}{2}[\lambda]_{q^2}[\lambda]_{q}^2\frac{1}{\lambda_2(\lambda_1+1)}\left(C_0^{(\lambda_1)}(x_1)+\frac{1}{\lambda_1}C_2^{(\lambda_1)}(x_1)\right)C_1^{(\lambda_2)}(x_2)\nonumber \\
&+&
\frac{1}{3}[\lambda]_{q}[\lambda]_{q^3}\frac{1}{\lambda_1\lambda_3}C_1^{(\lambda_1)}(x_1)C_1^{(\lambda_3)}(x_3)+\frac{1}{4}[\lambda]_{q^4}\frac{1}{\lambda_4}C_1^{(\lambda_4)}(x_4).\nonumber
\end{eqnarray}
\begin{eqnarray}
C_5^{(\lambda)}(x;q)&=&[\lambda]_{q}^5\frac{1}{\lambda_1(\lambda_1+2)}\left(\frac{1}{2(\lambda_1+3)}C_1^{(\lambda_1)}(x_1)+\frac{1}{(\lambda_1+1)(\lambda_1+4)}C_3^{(\lambda_1)}(x_1)\right.\nonumber\\&+&\left.\frac{1}{(\lambda_1+1)(\lambda_1+3)(\lambda_1+4)}C_5^{(\lambda_1)}(x_1)\right)
\nonumber\\&+&\frac{1}{2}[\lambda]_{q}^3[\lambda]_{q^2}\frac{1}{\lambda_1\lambda_2(\lambda_1+2)}\left(C_0^{(\lambda_1)}(x_1)+\frac{1}{\lambda_1+1}C_3^{(\lambda_1)}(x_1)\right)C_1^{(\lambda_2)}(x_2)\nonumber
\\&+&\frac{1}{3}[\lambda]_{q}^2[\lambda]_{q^3}\frac{1}{\lambda_3(\lambda_1+1)}\left(C_0^{(\lambda_1)}(x_1)+\frac{1}{\lambda_1}C_2^{(\lambda_1)}(x_1)\right)C_1^{(\lambda_3)}(x_3)\nonumber\\
&+&[\lambda]_{q}[\lambda]_{q^4}\frac{1}{\lambda_1\lambda_4}C_1^{(\lambda_1)}(x_1)C_1^{(\lambda_4)}(x_4)+\frac{1}{6}[\lambda]_{q^2}[\lambda]_{q^3}\frac{1}{\lambda_2\lambda_3}C_1^{(\lambda_2)}(x_2)C_1^{(\lambda_3)}(x_3)\nonumber\\
&+&\frac{1}{4}[\lambda]_{q}[\lambda]_{q^2}^2\frac{1}{\lambda_1(\lambda_2+1)}\left(C_0^{(\lambda_2)}(x_2)+\frac{1}{\lambda_2}C_2^{(\lambda_2)}(x_2)\right)C_1^{(\lambda_1)}(x_1)+\frac{1}{5}[\lambda]_{q^5}\frac{1}{\lambda_5}C_1^{(\lambda_5)}(x_5).\nonumber\\
\end{eqnarray}
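The first two examples above can be verified numerically against the explicit sum (\ref{geg}); in particular, the result must not depend on the auxiliary parameters $\lambda_1,\lambda_2$, which can be chosen arbitrarily (a sketch with hypothetical helper names):

```python
import cmath, math

def qpoch(a, q, n):
    # finite q-Pochhammer (a; q)_n
    out = 1.0 + 0.0j
    for k in range(n):
        out *= 1.0 - a * q ** k
    return out

def qgeg(n, lam, theta, q):
    # q-Gegenbauer C_n^{(lam)}(cos(theta); q) from the explicit sum formula
    a = q ** lam
    return sum(qpoch(a, q, k) * qpoch(a, q, n - k)
               / (qpoch(q, q, k) * qpoch(q, q, n - k))
               * cmath.exp(1j * (n - 2 * k) * theta)
               for k in range(n + 1)).real

def geg(n, lam, x):
    # classical Gegenbauer via (n+1) C_{n+1} = 2(n+lam) x C_n - (n+2lam-1) C_{n-1}
    if n == 0:
        return 1.0
    c0, c1 = 1.0, 2.0 * lam * x
    for m in range(1, n):
        c0, c1 = c1, (2.0 * (m + lam) * x * c1
                      - (m + 2.0 * lam - 1.0) * c0) / (m + 1.0)
    return c1

def qbracket(a, qq):
    # [a]_{qq} = (1 - qq^a) / (1 - qq)
    return (1.0 - qq ** a) / (1.0 - qq)

q, lam, theta = 0.6, 0.9, 0.8
lam1, lam2 = 0.37, 1.42       # arbitrary: the result must be independent of them
x1, x2 = math.cos(theta), math.cos(2.0 * theta)

c1 = qbracket(lam, q) / lam1 * geg(1, lam1, x1)
c2 = (qbracket(lam, q) ** 2 / (lam1 + 1.0)
      * (geg(0, lam1, x1) + geg(2, lam1, x1) / lam1)
      + qbracket(lam, q ** 2) / (2.0 * lam2) * geg(1, lam2, x2))
assert abs(c1 - qgeg(1, lam, theta, q)) < 1e-12
assert abs(c2 - qgeg(2, lam, theta, q)) < 1e-12
```

Repeating the check with different values of $\lambda_1,\lambda_2$ leaves both identities intact, which is the announced parameter independence.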
{\remark
Recall that the continuous $q$-Legendre polynomials, denoted
$P_n(x|q)$ (see \cite{KS}, subsection 3.10.2), are related to the
$q$-Gegenbauer polynomials by
\begin{equation}
P_n(x|q)=q^{\frac{n}{4}}C^{(\frac{1}{2})}_n(x;q).
\end{equation}
Since the classical Legendre polynomials are obtained from the
classical Gegenbauer polynomials by setting $\lambda=\frac{1}{2}$,
we can derive a connection formula between the continuous
$q$-Legendre and the classical Legendre polynomials by taking
$\lambda=\lambda_k=\frac{1}{2}$, $\forall k\in \mathbb{N}^*$, in
(\ref{connection4}) and multiplying both sides by
$q^{\frac{n}{4}}$.}
\vskip 0.5cm
\section{Conclusion and discussion}
In this work, we have derived the connection formulae of
some $q$-orthogonal polynomials appearing in the Askey scheme
\cite{KS}. The first were the continuous $q$-Laguerre
polynomials, one of the $q$-analogues of the
classical Laguerre polynomials. An explicit example,
$P_4^{\alpha}(x| q)$, was given. It follows from these results that
the solutions of the Diophantine equation fix the finite dependence
structure between the classical polynomials $ L_n^{\alpha}(x)$ and the
deformed polynomials $ P_n^{\alpha}(x| q)$ for any fixed $n$.
Obtaining the connection formulae was possible only because the
generating function of the continuous $q$-Laguerre polynomials is a
product of Jackson's $q$-exponentials, which can be expressed in the
more useful forms found by C. Quesne \cite{Q}. Our second sample in
the Askey scheme was the continuous big $q$-Hermite polynomials. In this case,
we used the same arguments and method as in the preceding
example, and our connection formula was supported by an explicit
example. The third polynomials treated in this work were the
$q$-Meixner-Pollaczek ones. The use of relations (\ref{quesne1})
and (\ref{quesne2}) obtained in \cite{Q}, together with series expansions,
allowed us to write a well-defined connection formula relating the
deformed polynomials to their classical counterparts. Several
examples were given. In the last section, we gave the connection formula for the
$q$-Gegenbauer polynomials in a more general form than the one given in \cite{CJN}.
In all cases, except for the big $q$-Hermite polynomials, the
generic family parameters appearing in the computation process drop
out by construction in the final results. This means that the quantum
deformation of such orthogonal polynomials is not bijective.
However, for the other polynomials in the Askey scheme, whose
generating functions are not expressed as products of $q$-exponentials,
the above prescription stops working. The case of Bessel functions,
which are not orthogonal polynomials, could be a good candidate for
writing connection formulae, since their generating function involves
Jackson's $q$-exponentials; but this is not an easy task, because the
derived Diophantine partition equation cannot be solved easily,
even in the simplest cases. Nevertheless, our results may be useful in
finding relations between matrix elements of the unitary
co-representations of some quantum groups associated with
$q$-orthogonal polynomials.
For instance, we note from \cite{FV} the
existence of simple relations between the matrix elements of the
metaplectic representation of $su_q(1,1)$ and a $q$-generalization of the Gegenbauer
polynomials slightly different from (\ref{geg}), i.e., the expressions (22) and (23) therein. For these polynomials, we can
easily compute the associated connection formula and
use it, after setting $\lambda=-(n+m), -(n+m+1)$ and
$\lambda_k=-(n_k+m_k), -(n_k+m_k+1)$, to get a new relation linking the
quantum matrix elements associated with $SU_q(1,1)$ to their classical analogues,
thus constituting an infinite-dimensional representation of
$SU_q(1,1)$. In some way, this relation may be viewed as a kind of
realization map of the standard deformation of the group from its
non-deformed form. What is more interesting is the case of a
generic value of $\lambda$: this provides a new continuous
representation, more general than the preceding ones.
Whether or not this representation fits with known
representations of $SU(1,1)$ is an open question.
\vskip 0.5cm \noindent{\bf Acknowledgements:} {\em
We would like to thank Prof.\ B. Abdesselam, L. Frappat and P. Sorba for their valuable help and useful discussions. We are also grateful to Prof.\ P. Aurenche for his hospitality at LAPTH (Annecy).
This work is supported by the CMEP program 04 MDU 615. }
\vskip 0.5cm
Decentralized decision-making in robotic networks is a ubiquitous problem, with applications as diverse as state estimation \cite{ROS:07}, formation control \cite{WR-RWB-EMA:07}, and cooperative task allocation \cite{MdW-BC:09}. In particular, the consensus problem, where the nodes in a robotic network have to agree on some common value, has received significant attention in the last decade following the works in \cite{AJ-JL-ASM:02, ROS-RMM:03c}. Most recent efforts in the Controls community have primarily focused on studying the properties and fundamental limitations of \emph{average-based} consensus, a subclass of the consensus problem where nodes average their state with their neighbors at each time step \cite{AO:10}. In these works, the dominant performance metric is time complexity, i.e., convergence time. In contrast, the computer science community has mainly focused its attention on the complementary notion of communication complexity, and ``communication-optimal" algorithms for selected consensus problems are now known \cite{NL:96}.
Despite the large interest in consensus problems in the last decade, little attention has been devoted to the problem of studying fundamental limitations of performance with respect to time and communication (in its broadest sense) complexity. In \cite{AO:10} the author proposes a lower bound on the time complexity of \emph{average-based} consensus algorithms; several lower bounds on the \emph{message} complexity of specific instances of the consensus problem are known in the CS literature \cite{NL:96}, but no comprehensive study for more general distributed decision-making problems is available.
This motivates our work: in this paper, we explore tight lower bounds on the complexity of a large class of consensus problems with respect to metrics relevant to robotic systems.
To appreciate the value of the time and communication complexity metrics, consider the following two scenarios.
Single-hop latency of robotic wireless communication protocols is often in excess of 10ms. The popular 802.15.4 protocol is bandwidth-limited: transmission of a 768 bit message containing the state of a 6DOF vehicle requires 7ms at the maximum allowable bit rate of 115200bps; latencies four to five times higher are typically observed in a laboratory environment. On the other hand, latency of high-bandwidth protocols such as 802.11 (WiFi) is greatly influenced by collisions: in presence of dozens of agents, latency is observed to consistently rise above 10ms.
Consider a typical network of 50 robotic agents in an arbitrary configuration. In Sec. \ref{sec:lbs} we note that the worst-case optimal time complexity of the consensus problem is $\Theta(n)$, with a constant factor close to one: thus, consensus can be achieved in approximately 500ms. On the other hand, the popular average-based consensus protocol has a convergence rate of $O(n^2)$ \cite{AO-JNT:07}: convergence can require as much as 25s even on a static network. Intuitively, choosing a suitably fast consensus algorithm allows one to track and control systems with orders-of-magnitude faster dynamic behavior.
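The gap between the two rates can be seen in a few lines of simulation (an illustrative sketch, not from the paper; the exact averaging round count depends on the chosen tolerance): synchronous max-consensus (flooding) versus Metropolis-weight averaging on a path graph, a worst case for diffusion.

```python
def flooding_rounds(x, nbrs):
    # max-consensus: each node keeps the max over its closed neighborhood;
    # on a path this terminates in exactly diameter = n-1 rounds
    r = 0
    while max(x) != min(x):
        x = [max([x[i]] + [x[j] for j in nbrs[i]]) for i in range(len(x))]
        r += 1
    return r

def averaging_rounds(x, nbrs, eps=1e-3):
    # Metropolis-weight averaging, run until the spread drops below eps
    n, r = len(x), 0
    deg = [len(nb) for nb in nbrs]
    while max(x) - min(x) > eps:
        y = list(x)
        for i in range(n):
            for j in nbrs[i]:
                y[i] += (x[j] - x[i]) / (1.0 + max(deg[i], deg[j]))
        x, r = y, r + 1
    return r

n = 50
path = [[j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)]
x0 = [1.0] + [0.0] * (n - 1)
rf, ra = flooding_rounds(list(x0), path), averaging_rounds(list(x0), path)
assert rf == n - 1 and ra > 10 * rf   # flooding: 49 rounds; averaging: thousands
```

Flooding finishes in 49 rounds (the diameter), while averaging needs orders of magnitude more rounds to shrink the spread below the tolerance, consistent with its $O(n^2)$ rate.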
On the other hand, energy consumption is also a concern in cyber-physical networks. Consider a swarm of autonomous underwater vehicles (AUV) tasked with performing a collaborative mission such as patrolling. Underwater ultrasonic communication has significantly higher energy demands than radio transmissions: for instance, in \cite{MS-TS-TT:92}, the authors use 18W to maintain a 16 kbps link (sufficient to stream rich telemetry or low-quality images) with $35^\circ$ antennas over 6500 m. For underwater operations, omnidirectional communication is unadvisable because of intersymbol interference caused by multipath propagation, a phenomenon exacerbated by wide-beam transceivers. Hence, underwater communication typically relies on directional antennas and communication with $n$ agents requires $n$ \emph{different} messages. Consider, now, an all-to-all communication scheme to reach consensus (as needed by flooding and average-based consensus algorithms): in this case, communication with 50 AUVs would require 900 W per vehicle, which is impractical (as a comparison, electric motors on modern Remotely Operated Vehicles (ROVs) such as NOAA's Autonomous Benthic Explorer only draw 100 W in cruise \cite{DRY-AMB-BBW:91}). Hence, in this setting, a time-optimal algorithm such as flooding cannot be implemented. In contrast, a communication-optimal algorithm such as GHS \cite{RGG-PAH-PMS:83} requires $O(\log_2 n)$ messages per AUV, resulting in a more practical power requirement of 102 W.
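The power figures above follow from simple arithmetic (a sketch; the 18 W-per-directional-link figure and the message counts are taken from the text):

```python
import math

n, p_link = 50, 18.0                     # AUVs, watts per directional link
p_all_to_all = (n - 1) * p_link          # one concurrent link per neighbor
p_ghs = math.log2(n) * p_link            # O(log2 n) messages per AUV (GHS)
# p_all_to_all ~ 882 W (~900 W in the text), p_ghs ~ 102 W
assert round(p_all_to_all) == 882 and round(p_ghs) == 102
```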
We mainly restrict our analysis to static networks, but also provide some extensions to time-varying network topologies. The contribution of our work is threefold: we extend results from the computer science literature to a broad class of distributed decision-making problems (collectively referred to as generalized consensus) relevant to the control systems and robotic community; we also present a unified complexity theory for generalized consensus on static networks, identifying lower bounds on performance for metrics relevant to robotic systems. Finally, we discuss algorithms that simultaneously make one or more of these bounds tight.
The paper is structured as follows. In Section \ref{sec:background} we propose a formal model for robotic networks, complexity measures and a rigorous definition of the consensus problem that encompasses problems including (weighed) mean as well as MAX, MIN and voting. In Section \ref{sec:lbs} we present lower bounds on the time, message, and byte complexity of the consensus problems for \emph{sparse} and \emph{dense} networks. In Section \ref{sec:upper} we show tightness of these lower bounds under mild assumptions. Finally, in Section \ref{sec:conclusions} we draw our conclusions and we discuss directions for future work. Lower bounds and optimality results are summarized in Table \ref{tab:consbounds}.
We remark on two rather interesting findings. On static networks, a modified version of the GHS minimum spanning tree algorithm solves the consensus problem with \emph{simultaneously} optimal time, message, byte and storage complexity. Intuitively, if ``few'' link reconfigurations are expected, algorithms based on the construction of a spanning tree are superior to nonhierarchical schemes.
Conversely, on time-varying networks, a simple \emph{flooding} algorithm Pareto-dominates average-based consensus in all metrics we investigate except for storage complexity (arguably a minor concern in modern robotic systems). However, in practical implementations average-based consensus may be preferable to flooding whenever time performance is not critical and \emph{bandwidth} (whose role we discuss in \cite{FR-MP:14b}) is limited.
\begin{table*}[h!tb]{\footnotesize
\begin{subtable}[h]{\textwidth}
\centering
\begin{tabular}{rcccc}
& Time & Message (S) &Message (D) & Msg. (broadcast) \\
\toprule
Lower bound & n& $n\log n$ & $n^2$ & $n\log n$\\
\hline
Flooding (no failures) & \textbf{n} & $n^2 \log n$ & $n^3$ &$ n^2$\\
GHS modified (no failures) &$\textbf{n}$ \cite{BAw:87} & $\mathbf{n\log n}$ & $\mathbf{n^2}$ & $\mathbf{n\log n}$\\
Avg. based (no failures, $\varepsilon$) & $n^2\log (1/\varepsilon)$& $n^3 \log n$& $n^4$& $n^3$\\
Hybrid clustering \cite{FR-MP:13} &$n\log (n/m)$& $ 2mn + k|E_c|m $ & $ 2mn + k|E_c|m $ & $n(m+\log(n/m)$\\
\hline
Flooding (D-connectivity) &$\mathbf{nD}$ & $n^2D \log n $&$n^3D$& $n^2D$\\
Avg.-based (D-connectivity) &$n^2D\log(1/\varepsilon)$&$n^3D\log n$&$n^4D $&$n^3D$\\
\bottomrule
\\
& Byte (S)&Byte (D)& Byte (broadcast) &Storage\\
\toprule
Lower bound & $(n\log n)(\log n + b)$ & $n^2(\log n + b)$ &$(n\log n)(\log n + b) $& $\log n + b$\\
\hline
Flooding (no failures) & $ n^2\log n (\log n +b)$ & $ n^3(\log n +b)$ &$n^2(\log n + b) $ & $n(\log n+b)$\\
GHS modified (no failures) &$\mathbf{(n\log n)(\log n + b)}$& $\mathbf{n^2(\log n + b)}$& $\mathbf{(n\log n)(\log n + b)}$ & $\mathbf{\log n + b}$\\
Avg. based (no failures, $\varepsilon$) & $n^3 \log n(\log n + b) $&$n^4(\log n + b) $& $n^3 (\log n + b) $ & $\mathbf{\log n + b}$\\
Hybrid clustering \cite{FR-MP:13} & $m(n+k|E_c|)(\log n+b)$ & $m(n+k|E_c|)(\log n+b)$ & $n(m+\log(n/m)\log n$ &$m(\log n + b)$ \\
\hline
Flooding (D-connectivity) &$n^3D\log n(\log n+b)$&$n^4D (\log n+b)$ & $n^3D(\log n + b)$ &$n(\log n + b)$\\
Avg.-based (D-connectivity) &$n^3D\log n(\log n+b)$&$n^4D (\log n+b)$ & $n^3 D (\log n + b)$ &$\mathbf{\log n + b}$\\
\bottomrule
\end{tabular}
\end{subtable}
\caption{\small Synoptic view of bounds for distributed consensus. Bounds on time complexity hold for all consensus functions, bounds on message complexity hold for locally-sensitive or extractive consensus functions, and, finally, bounds on byte and storage complexity hold for locally-sensitive or extractive consensus functions that are hierarchically computable. S and D denote, respectively, sparse and dense graphs. The number of agents is denoted as $n$ and the number of communication links is $|E|$. $m$ is a tuning parameter assuming values in $\{1,\ldots,n\}$. The parameter $|E_c|$ is $|E_c|=O(\min(E,m^2))$. Time-varying networks are $D$-connected \cite{AO:10}. For average-based algorithms, $\varepsilon$ is the convergence threshold for termination. We denote in bold face optimality results.\vspace{-.7cm}}
\label{tab:consbounds}
}
\end{table*}
\section{Problem Setup}
\label{sec:background}
In this section we discuss the network model and we define the distributed consensus problem we will study in this paper. A simplified version of our model has first been introduced by the authors in \cite{FR-MP:13}.
\subsection{Agent model}
An agent in a robotic network is modeled as an input/output (I/O) automaton, i.e., a labeled state transition system able to send messages, react to received messages and perform arbitrary internal transitions based on the current state and on any messages received.
A precise definition of I/O automaton is provided in \cite[pp. 200-204]{NL:96} and is omitted here in the interest of brevity. All nodes are identical except for a unique identifier (UID - for example, an integer). The time evolution of each node in the graph $G$ is characterized by two key assumptions:
\begin{itemize}
\item {\bf Fairness assumption}: the order in which transitions happen and messages are delivered is not fixed a priori. However, any enabled transition will \emph{eventually} happen and any sent message will \emph{eventually} be delivered.
\item {\bf Non-blocking assumption}: every transition is activated within $l$ time units of being enabled and every message is delivered within $d$ time units of being dispatched.
\end{itemize}
Essentially, the fairness assumption states that every node will have an opportunity to perform transitions, while the non-blocking assumption gives timing guarantees (but no synchronization). We refer the interested reader to \cite[pages 212-215]{NL:96} for a detailed discussion of these assumptions. We argue here that these are \emph{minimal} assumptions for most reliable real-world robotic networks.
\subsection{Network model}\label{subsec:model}
A \emph{robotic network} with $n$ agents is modeled as a \emph{connected}, \emph{undirected} graph $G = (V,E)$, where $V = \{1,\ldots, n\}$ is the node set, and $E\subset V\times V$, the edge set, is a set of \emph{unordered} node pairs modeling the availability of a communication channel. Two nodes $i$ and $j$ are neighbors if $(i, j)\in E$. The neighborhood set of node $i\in V$, denoted by $N_i$, is the set of nodes $j\in V$ neighbors of node $i$. Our model is \emph{asynchronous}, i.e., computation steps within each node and communication are, in general, asynchronous.
This paper focuses on \emph{static networks}, i.e., robotic networks where the edge set does not change during the execution of the algorithm. Given a node set with $n$ nodes, we will consider two classes of graphs:
\begin{itemize}
\item {\bf Sparse graphs}: that is graphs where the number of edges $|E|$ is less than $n\log n$.
\item {\bf Dense graphs}: that is graphs where the number of edges $|E|$ is larger than or equal to $n\log n$.
\end{itemize}
Henceforth, we will denote the set of sparse graphs as $\mathcal G_s$ and the set of dense graphs as $\mathcal G_d$.
Note that for any connected graph with $n$ nodes, $n-1 \leq |E|\leq {n\choose 2}$.
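A concrete version of this dichotomy (the base of the logarithm is left unspecified in the text; the sketch below assumes base 2, which does not affect the asymptotic classes):

```python
import math

def is_sparse(n, num_edges):
    # sparse iff |E| < n log n; any connected graph has n-1 <= |E| <= n(n-1)/2
    assert n - 1 <= num_edges <= n * (n - 1) // 2
    return num_edges < n * math.log2(n)

n = 100
assert is_sparse(n, n - 1)                 # a spanning tree is sparse
assert not is_sparse(n, n * (n - 1) // 2)  # the complete graph is dense
```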
From a practical standpoint, sparse and dense graphs manifest themselves in different robotic problems and give rise to different issues. In dense graphs (present e.g. in formation control and rendezvous problems) time complexity is typically not an issue; on the other hand, message and byte complexity have to be carefully kept under control to avoid excessive bandwidth utilization and minimize message collisions. Especially for large graphs, it is crucial to ensure that agents only communicate with a small subset of their neighbors, even if many are available. On the other hand, in sparse graphs (which typically manifest themselves in patrolling and deployment applications, where large inter-agent distances are desirable) message complexity is not an issue; efficient routing of information, on the other hand, is crucial to ensure good time performance.
\subsection{Model of computation}
At a general level, in this paper we focus on decision-making problems where each node $i$, $i\in\{1, \ldots, n\}$, in the robotic network is endowed with an initial value $x_i$ and should output the value of a function of all initial values. In other words, each agent, after exchanging messages (with any content) with its neighbors and performing internal state transitions, should output $f(x_1, \ldots, x_n)$ for some computable function $f$, referred to as a \emph{consensus} function. In the remainder of this section we formalize the notions of consensus functions and of decentralized algorithms.
\subsubsection{Consensus functions}
In this paper we consider functions defined over \emph{totally ordered sets}, that is we assume that the initial conditions $x_i$ belong to a set $\mathcal X$ equipped with a binary relation (denoted with $\leq$) which is transitive, antisymmetric, and total. Two sets of initial conditions $A=\{a_1, \ldots, a_n\}$ and $B=\{b_1, \ldots, b_n\}$ are said to be \emph{order-equivalent} if $a_i< a_j \Leftrightarrow b_i<b_j$.
The set of initial conditions $\mathcal X$ can be, for example, $\natural$, $\real$, and $\real^d$ (in the last case the total order could be the lexicographic order).
A consensus function is a \emph{computable}
function $f : \mathcal X^n \mapsto \real$ that depends on \emph{all} its arguments. More precisely, for each element $x = (x_1, \ldots, x_n) \in \mathcal X^n$ and for all $i\in\{1, \ldots, n\}$ one can find elements $x_{i}^{(1)}\in \mathcal X$ and $x_{i}^{(2)}\in \mathcal X$ such that
\[
f(x_1, \ldots, x_{i}^{(1)}, \ldots, x_n)\neq f(x_1, \ldots, x_{i}^{(2)}, \ldots, x_n).
\]
Loosely speaking, such choice of consensus function implies that each node is needed for the collective decision-making process.
For some of the results presented in this paper, we will need the following \emph{refinements} of the notion of consensus function:
\begin{itemize}
\item {\bf Locally-sensitive consensus function}: a consensus function that is sensitive to perturbations that preserve order. More precisely, let $\mathcal I = \{1,2, \ldots, n\}$ be the set of node indices and let $\sigma:\mathcal I \mapsto \mathcal I$ be a permutation
(i.e., a bijective correspondence) over $\mathcal I$. Then, for each permutation $\sigma$ over $\mathcal I$ there exists an element $x\in \mathcal X^n$ that is ordered with respect to $\sigma$, i.e., $x_{\sigma(i)} \leq x_{\sigma(j)}$ for all $i<j$ and such that for all $i\in \{1, \ldots, n\}$ there exist $x_{\sigma(i)}^{(1)}, x_{\sigma(i)}^{(2)} \in \mathcal X$, $x_{\sigma(i)}^{(1)} \neq x_{\sigma(i)}^{(2)}$ with the properties:
\begin{enumerate}
\item $x_{\sigma(i)}^{(1)}\leq x_{\sigma(j)}$ and $x_{\sigma(i)}^{(2)}\leq x_{\sigma(j)}$ for all $i<j$,
\item $x_{\sigma(j)}\leq x_{\sigma(i)}^{(1)}$ and $x_{\sigma(j)}\leq x_{\sigma(i)}^{(2)}$ for all $j < i$,
\item $f(x_1, \ldots, x_{\sigma(i)}^{(1)}, \ldots x_n) \!\neq \!f(x_1, \ldots, x_{\sigma(i)}^{(2)}, \ldots x_n)$.
\end{enumerate}
(The first two properties ensure that $x_{\sigma(i)}^{(1)}$ and $x_{\sigma(i)}^{(2)}$ preserve the order of $x$, while the last property reflects local sensitivity.)
\item {\bf Extractive consensus function}: a consensus function such that for all $x=(x_1, \ldots, x_n)\in \mathcal X^n$ one has $f(x_1,\ldots, x_n)=x_j$ for some $j\in\{1, \ldots, n\}$.
\end{itemize}
The classes of locally-sensitive and extractive consensus functions are neither mutually exclusive nor collectively exhaustive (however, they represent, arguably, a very broad class of consensus functions of interest to applications). Loosely speaking, locally-sensitive consensus functions model problems where the decision-making process depends \emph{continuously} on the initial values $\{x_i\}$, for example for average consensus
$
f(x) = c^T\, x,
$
where $\mathcal X =\real^n$, $x\in \mathcal X$, and $c$ is a vector in $\real^n$, or for
distributed optimization
\[
f(x) = \operatornamewithlimits{argmin}_{z\in \real^n} \, \sum_{i=1}^n \, \varphi_i(z,x_i)
\]
where the objective function is \emph{parametrized} by $x_i$, under certain conditions on $\varphi_i$ (consider for instance $\varphi_i$ a positive-semidefinite quadratic form parametrized by $x_i$).
On the other hand, extractive consensus functions model leader-election problems or problems where it is desired to extract some statistics from the data, e.g., MAX and MIN.
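Concretely (an illustrative sketch, not from the paper): the mean is locally-sensitive, since an order-preserving perturbation of one value changes the output; MAX is extractive, since the output is always one of the inputs; and the quadratic instance of the parametrized minimization reduces to a weighted mean.

```python
def mean(xs):
    # locally-sensitive: an order-preserving perturbation of one entry changes f
    return sum(xs) / len(xs)

def consensus_max(xs):
    # extractive: the output is always one of the inputs
    return max(xs)

def quad_argmin(xs, w):
    # argmin_z sum_i w_i (z - x_i)^2, i.e. a weighted mean (locally-sensitive)
    return sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)

x = [1.0, 3.0, 7.0]
assert mean([1.0, 4.0, 7.0]) != mean(x)   # nudging 3 -> 4 preserves 1 < . < 7
assert consensus_max(x) in x
assert abs(quad_argmin(x, [1.0, 1.0, 2.0]) - 4.5) < 1e-12
```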
Finally, we introduce a \emph{representation} property for consensus functions that will be instrumental to deriving fundamental limitations of performance in terms of amount of information exchanged. A locally-sensitive or extractive consensus function is hierarchically computable if it can be written as the composition of a commutative and associative binary operator $\ast$, that is
\[
f(x_1, x_2,\ldots, x_n) = x_1 \ast x_2\ast \ldots\ast x_n.
\]
(The name is inspired by the observation that hierarchically computable functions can be computed with messages of small size on a \emph{hierarchical} structure such as a tree.) All examples of consensus functions mentioned above are indeed hierarchically computable.
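As a sketch of this property (ours, not from the paper): MAX is hierarchically computable with $\ast = \max$, and the average becomes hierarchically computable after the standard lifting of each $x_i$ to a (sum, count) pair, on which the pairwise operator below is commutative and associative:

```python
from functools import reduce

def star(a, b):
    """Commutative, associative binary operator on (sum, count) pairs."""
    return (a[0] + b[0], a[1] + b[1])

def hierarchical_average(xs):
    """Lift each x_i to (x_i, 1), compose with star, and read off the mean."""
    s, c = reduce(star, [(x, 1) for x in xs])
    return s / c

def hierarchical_max(xs):
    """For MAX the operator is simply max itself."""
    return reduce(max, xs)

vals = [3.0, 1.0, 4.0, 2.0]
print(hierarchical_average(vals))  # 2.5
print(hierarchical_max(vals))      # 4.0
# Any grouping of the operator gives the same result, which is exactly what
# allows the computation to proceed up a tree with small messages.
print(star(star((3, 1), (1, 1)), star((4, 1), (2, 1))))  # (10, 4)
```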
\subsection{Model of communication}
Nodes can communicate with their neighbors according to two communication schemes: \emph{directional} and \emph{local broadcast}. In the directional communication scheme a node sends messages to each neighbor individually. This is the case when nodes in the network are equipped with narrow-band, high-gain mechanically or electronically steerable antennas. In the local broadcast communication scheme, a node sends a message to all its neighbors simultaneously. This is the typical case for nodes equipped with omnidirectional antennas.
\subsection{Distributed algorithms}
A distributed algorithm for a robotic network is, simply, a collection of \emph{local} algorithms, one for each node of the network (of course, nodes can exchange messages). Nodes execute the same logical code. Each node is initialized with an initial condition $x_i \in \mathcal X$.
A distributed algorithm correctly computes a given function if, given an input $x\in \mathcal X^n$, \emph{each} node outputs the correct value of the function $f(x_1, \ldots, x_n)$, and terminates \cite{NL:96} in \emph{finite} time. Specifically, the algorithm should output the correct value in at most $R$ rounds (for synchronous executions) or after $R\, (l+d)$ time units (for asynchronous executions). $R$ can be an arbitrarily large number, hence this assumption is not limiting from a practical standpoint; however, it will be key to deriving some of our results.
Note that, for asynchronous executions, termination is not simultaneous.
A particular class of distributed algorithms that will be instrumental to derive some of the results is represented by comparison-based algorithms, defined next.
\begin{definition}[Comparison-based algorithms (\cite{GF-NL:87, MS:85})]
\label{def:compbased}
A distributed algorithm is comparison-based if each local algorithm manipulates the subset of (totally ordered) initial values that are locally known only via the three Boolean operators $<$, $>$, and $=$ (recall that the set of initial values is a totally ordered set).
Accordingly, all internal transitions (including message generation) only depend on the \emph{order} of the initial values known to the local algorithm, as opposed to their numerical value.
\end{definition}
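A single transition of a comparison-based local algorithm can be sketched as follows (our fragment, in the style of ring leader election; not taken from the paper). The decision uses only the Boolean operator $>$, so any order-preserving relabeling of the initial values produces exactly the same decisions:

```python
def comparison_based_step(known_max, received):
    """Decide whether to forward a received value: the transition depends only
    on the *order* of the locally known values, never on their magnitude."""
    if received > known_max:      # only a Boolean comparison is used
        return received, True     # new local maximum; forward it
    return known_max, False       # not larger; drop it

print(comparison_based_step(7, 9))    # (9, True)
print(comparison_based_step(7, 5))    # (7, False)
# Doubling every value is order-preserving, so the decision is unchanged:
print(comparison_based_step(14, 18))  # (18, True)
```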
\subsection{Complexity measures}\label{subsec:com}
The following definitions naturally capture the notions of time and communication complexity and are widely used in the theory of distributed algorithms \cite{NL:96}.
Let $\mathcal G$ be a set of graphs with node set $V=\{1,\ldots, n\}$ (we are specifically interested in the class of sparse graphs $\mathcal G_s$ and in the class of dense graphs $\mathcal G_d$). For a given graph $G\in \mathcal G$, let $\mathcal F(a, x, G)$ be the set of \emph{fair executions} for an algorithm $a\in \mathcal A$ and a set of initial conditions $x \in \mathcal X^{n}$ (a fair execution is an execution of an algorithm that satisfies the fairness and non-blocking assumptions stated above).
\subsubsection{Time complexity} To measure execution time, we assume that a distributed algorithm starts at time $t=0$. Time complexity is defined as the infimum worst-case (over initial values and fair executions) completion time of an algorithm. Rigorously, the time complexity for a given consensus function $f$ with respect to the class of graphs $\mathcal G$ is
\[
\textrm{TC}(f, \mathcal G):=\inf_{a\in \mathcal A} \, \sup_{G\in \mathcal G}\, \sup_{x \in \mathcal X^{|G|}} \, \sup_{\alpha \in \mathcal F(a, x,G)} \, T(a, x, \alpha, G),
\]
where $T(a, x, \alpha, G)$ is the first time when all nodes have computed the correct value for the consensus function $f$ and have stopped.
The order of the inf-sup operands in the above expression is naturally induced by our definitions. By dropping the leading $\inf_{a\in \mathcal A}$, one recovers the time complexity of a given algorithm $a$ for a given consensus function $f$. In our asynchronous setting, time complexity is expressed in multiples of $l+d$, defined in Section \ref{subsec:model} (see also \cite{NL:96}). We will henceforth refer to $(l+d)$ as a \emph{time unit}. Note that $(l+d)$ is a (tight) upper bound on the actual time required to generate and deliver a message.
\subsubsection{Message complexity for directional communication} Message complexity is similarly defined as the infimum worst-case (over initial values and fair executions) \emph{number} of messages exchanged by an algorithm before completion. Rigorously, the message complexity for a given consensus function $f$ with respect to the class of graphs $\mathcal G$ is
\[
\text{MC}(f,\mathcal G):=\inf_{a\in \mathcal A} \, \sup_{G\in \mathcal G}\,\sup_{x \in \mathcal X^{|G|}} \, \sup_{\alpha \in \mathcal F(a, x,G)} \, M(a, x, \alpha,G),
\]
where $M(a, x, \alpha, G)$ is the number of messages exchanged between time $t=0$ and time $t = T(a, x, \alpha,G)$.
It is important to note that the \emph{type} of messages exchanged depends on the algorithm. In average-based consensus algorithms, nodes typically exchange their state, a real number. In algorithms such as the well-known Gallager, Humblet and Spira (GHS) algorithm \cite{RGG-PAH-PMS:83}, nodes exchange a wide range of logical commands establishing hierarchical relationships, informing neighbors about the progress of the algorithm, and requiring them to perform edge searches \cite{ BAw:87}. In flooding algorithms \cite{NL:96}, a single message may contain information from up to $n-1$ nodes. However, as far as message complexity is concerned, each message \emph{counts the same}, regardless of its type and size.
\subsubsection{Byte complexity for directional communication}
In many instances, message size plays a critical role in the energy needed for information transmission. To capture this aspect, in this paper we define byte complexity as the infimum worst-case (over initial values and fair executions) \emph{overall size} (in bytes) of all messages exchanged by an algorithm before its completion. Rigorously, the byte complexity for a given consensus function $f$ with respect to the class of graphs $\mathcal G$ is
\[
\textrm{BC}(f,\mathcal G):=\inf_{a\in \mathcal A} \, \sup_{G\in \mathcal G}\,\sup_{x \in \mathcal X^{|G|}} \, \sup_{\alpha \in \mathcal F(a, x,G)} \, B(a, x, \alpha,G),
\]
where $B(a, x, \alpha, G)$ is the overall size (in bytes) of all messages exchanged between time $t=0$ and time $t = T(a, x, \alpha,G)$.
\subsubsection{Message and byte complexity for local broadcast communication}
The definitions of message and byte complexity for local broadcast communication parallel those for directional communication.
Rigorously, the broadcast message complexity for a given consensus function $f$ with respect to a class of graphs $\mathcal{G}$ is
\[
\textrm{bMC}(f,\mathcal{G}):=\inf_{a\in\mathcal{A}}\,\sup_{G\in\mathcal{G}}\,\sup_{x\in\mathcal X^{|G|}}\,\sup_{\alpha\in\mathcal{F}(a,x,G)} \, bM(a,x,\alpha,G),
\]
where $bM(a,x,\alpha,G)$ is the overall number of \emph{broadcast} messages exchanged between time $t=0$ and time $t = T(a, x, \alpha, G)$ (a broadcast message is a message sent by a node to \emph{all} its neighbors). Analogously, the broadcast byte complexity for a given consensus function $f$ with respect to a class of graphs $\mathcal{G}$ is
\[
\textrm{bBC}(f,\mathcal{G}):=\inf_{a\in\mathcal{A}}\,\sup_{G\in\mathcal{G}}\,\sup_{x\in\mathcal X^{|G|}}\,\sup_{\alpha\in\mathcal{F}(a,x,G)} \, bB(a,x,\alpha,G),
\]
where $bB(a,x,\alpha,G)$ is the size (in bytes) of all \emph{broadcast} messages exchanged between $t=0$ and $t = T(a, x, \alpha, G)$.
\subsection{Discussion}
It is of interest to compare our model of distributed consensus with the models for the consensus problem developed, respectively, by the Computer Science community and the Controls and Robotics community. In the Computer Science community, distributed consensus is typically defined as the task of computing via a distributed algorithm \emph{any} function of a set of initial conditions such that the following three properties are fulfilled: \emph{agreement} (no two processes decide on different values), \emph{validity} (in absence of failures, if all agents start with the same value, then every agent decides on that value) and \emph{termination} (all processes eventually decide) \cite{NL:96}. In the Controls and Robotics community, on the other hand, consensus is essentially a synonym for \emph{average-based} consensus, where agents compute an asymptotic approximation of a weighted average of their initial conditions via local communication \cite{AO:10,AJ-JL-ASM:02, ROS-RMM:03c,ROS:07,WR-RWB-EMA:07,MdW-BC:09}. Our model, thus, is more restrictive than the model considered in the Computer Science community (since the consensus function is required to fulfill some mild requirements), while it is (significantly) more general than the model considered in the Controls and Robotics community (since average-based consensus is a particular example of locally-sensitive consensus function defined over $\real^n$).
We mention that we also studied \emph{storage complexity}, defined as the infimum worst-case (over initial values and fair executions) storage size required by every agent executing the consensus algorithm. We do not discuss this complexity notion here due to space limitation and because in most cases it is not a bounding factor. However, we report our results in the synoptic table (Table \ref{tab:consbounds}).
In the remainder of the paper we discuss fundamental limitations of performance of the distributed consensus problem, in terms of fundamental scalings of the different complexity measures with respect to the network size (i.e., the number of nodes $n$). We also discuss algorithms that, in many cases, recover such asymptotic bounds. Our asymptotic notation (e.g., $O(g(n))$ or $\Omega(g(n))$) is standard.
\section{Lower Bounds on Achievable Performance for Distributed Consensus}
\label{sec:lbs}
In this section we present lower bounds for the complexity measures introduced in Section \ref{subsec:com}. We will discuss the tightness of these bounds in Section \ref{sec:upper}.
\subsection{Time complexity}
A lower bound on time complexity can be obtained rather easily.
\begin{proposition}[Lower bound on time complexity]\label{prop:lbtime}
For a given consensus function $f$ and class of graphs $\mathcal G$ with $n$ nodes, $\textrm{TC}(f, \mathcal G) \in \Omega(n)$.
\end{proposition}
\begin{proof}
By contradiction.
Let us assume that there exists a consensus algorithm $a$ that terminates in $o(n)$ time units for all graphs $G\in\mathcal{G}$, initial conditions $x\in\mathcal{X}^{|G|}$ and executions $\alpha$.
We restrict our analysis to synchronous executions of the algorithm (since synchronous executions are a special case of asynchronous executions, a lower bound with respect to the former is also a bound for the asynchronous case). We also consider a specific graph $G$ where the maximal distance between any pair of nodes is $\text{Diam}(G)=\Theta(n)$ (the \emph{line} graph is an example of one such graph).
Then there exist two nodes $u$, $v$ at distance $\Theta(n)$, so that $\Theta(n)$ time units are required for \emph{any} information from agent $u$ to reach agent $v$, and vice versa.
Now, consider two executions of the consensus algorithm that only differ in the initial value of agent $v$. Rigorously, we consider two sets of initial values $x^{(1)}, x^{(2)}$ with $x^{(1)}_v\neq x^{(2)}_v$ and $x^{(1)}_k=x^{(2)}_k\,\forall k\neq v$.
Since the algorithm terminates in $o(n)$ time units, for $n$ large enough agent $u$ does not hear any information from agent $v$ in either execution: therefore its state is identical at the end of both executions. Then agent $u$ decides on the same consensus value in both executions. However, $f(x^{(1)})\neq f(x^{(2)})$ since $x^{(1)}_v\neq x^{(2)}_v$: we have reached a contradiction.
\end{proof}
\subsection{Message complexity}
\label{sec:mc}
In this section we restrict our attention to either locally-sensitive or extractive consensus functions. Our strategy is to find first a lower bound for ``dense'' graphs and then a lower bound for ``sparse'' graphs.
We start with the former case.
\begin{proposition}[Lower bound on message complexity for dense graphs]
\label{lemma:ccfull}
For a given locally-sensitive or extractive consensus function $f$, $\textrm{MC}(f, \mathcal G_d) \in \Omega(|E|)$.
\end{proposition}
\begin{proof}
Computation of any consensus function $f$ requires that \emph{at least} one message is sent along \emph{every} edge of a spanning subgraph of the network. If this were not true, there would exist two subsets of the nodes $V_1\subset V$ and $V_2=V\setminus V_1$ such that no messages are exchanged between nodes in $V_1$ and nodes in $V_2$. Then, nodes in $V_1$ would have no information about the initial values of nodes in $V_2$ and vice versa. Since $f$ depends on all initial values, this leads to a contradiction. Now, it can be shown that \emph{any} computation problem that requires the use of a \emph{spanning subgraph} (i.e., at least one message sent along each of its edges) may require $|E|-1$ messages (and therefore $\Omega(|E|)$ messages) on a certain class of \emph{almost complete} graphs \cite{EK-SM-SZ:84}. This concludes the proof.
\end{proof}
We now turn our attention to a lower bound that becomes tight for sparse graphs. We first consider comparison-based algorithms; we will relax this assumption in Proposition \ref{lemma:ccring_noncomp}.
\begin{lemma}[Lower bound on message complexity for sparse graphs and comparison-based algorithms]
\label{lemma:ccring_comp}
Let $f$ be a locally-sensitive or extractive consensus function.
Let $\mathcal A_c$ be the set of comparison-based algorithms that solve the distributed consensus problem and assume that one minimizes message complexity over the set $\mathcal A_c$ (we denote the result of such minimization as $\textrm{MC}(f, \mathcal G_s)_{|\mathcal A = \mathcal A_c}$). Then $\textrm{MC}(f, \mathcal G_s)_{|\mathcal A = \mathcal A_c} \in \Omega(n\log n)$.
\end{lemma}
\begin{proof}
Consider the restriction to synchronous executions (since synchronous executions are a special case of asynchronous executions, a lower bound with respect to synchronous executions translates into a bound for the more general asynchronous case).
The proof is inspired by \cite{GF-NL:87} and relies on the notion of $c$-symmetric rings. Consider a graph with a ring topology (i.e., the $n$ nodes are lined up along a circle). A segment $S$ on the ring is a sequence of consecutive nodes in the ring, in clockwise order. Two segments of the same length are said to be order-equivalent if the vectors of initial conditions of their respective nodes have the same relative order. Let $c$ be a positive constant. A ring is $c$-symmetric if, for every $l \in \natural$ such that $\sqrt{n}\leq l \leq n$, and for every segment $S$ of length $l$, there are at least $\lfloor cn/l \rfloor$ segments in the ring that are order-equivalent to $S$ (including $S$ itself). An example is shown in Fig. \ref{fig:csymmetric}.
We now study the message complexity of comparison-based algorithms on $c$-symmetric rings. To this purpose, without loss of generality\footnote{Any consensus algorithm on a ring can be simulated by an algorithm in this class, since we assume no bound on the nodes' computational power and no limitations on their internal transitions; therefore any lower bound on this class of algorithms applies to \emph{all} consensus algorithms on rings. }, we consider comparison-based algorithms where at each synchronous time step (recall that we are considering \emph{synchronous} executions)
a node decides whether to send a message to its right neighbor, whether to send a message to its left neighbor, and whether to stop execution and decide on a consensus value. Every received message is stored in the receiver node's state. Every sent message contains the sender's entire state. Nodes can perform arbitrary internal transitions and have unlimited computational power. At each time step, the state of a node contains its initial value, its UID, and the history of messages exchanged with the neighbors. The initial conditions \emph{include the nodes' UIDs}, endowed with a total ordering.
It can be shown that (see \cite{GF-NL:87}) (i) for any $n$, there exists a set of initial conditions such that there exists a $c$-symmetric ring for some $c>0$, (ii) if a comparison-based algorithm exchanges $o(n\log n)$ messages on a $c$-symmetric ring, then every node receives information from nodes within distance $k$ where $k<n/2$ (hence, from a \emph{subset} of all nodes), and (iii) if a comparison-based algorithm exchanges $o(n\log n)$ messages on a $c$-symmetric ring, then every node $i$'s state is order-equivalent to another node $j$'s state at the end of the execution. More precisely, the $k$-neighborhoods of agents $i$ and $j$ (defined as the set containing the node and the $2k$ neighbors closest to it) contain agents with UIDs in identical order: such neighborhoods (shown in the example in Fig. \ref{fig:csymmetric}) cannot be distinguished by a comparison-based algorithm.
\begin{figure}[h]
\centering
\includegraphics[width=.34\textwidth]{csymmetric}
\caption{\small A 1/2-symmetric ring. Each segment of length $l\leq4$ (and, in particular, $l=3$) is order-equivalent to another segment. Thus, each agent's 2-neighborhood is order-equivalent to another agent's.\vspace{-0.35cm}}
\label{fig:csymmetric}
\end{figure}
This implies that the leader election problem has message complexity $\Omega(n\log n)$ \cite{GF-NL:87}, since after $o(n\log n)$ messages at least two agents are in states that are indistinguishable by a comparison-based algorithm.
We now apply these results to distributed consensus problems with locally-sensitive or extractive consensus functions. The lower bound $\Omega(n\log n)$ on message complexity directly applies to extractive consensus functions: any distributed consensus algorithm capable of extracting the initial value of a node with $o(n\log n)$ messages would also solve the leader election problem with $o(n\log n)$ messages, which would contradict the aforementioned result for leader election.
Consider, now, locally-sensitive consensus functions. We proceed by contradiction, assuming that there exists a comparison-based algorithm that solves the consensus problem with a locally-sensitive consensus function with message complexity $o(n\log n)$. Consider a given set of initial conditions, $x$, and let the nodes compute the value of the consensus function. Then, consider a set of initial conditions $x^{\prime}$ identical to $x$ except that one of the node's initial values (and therefore the overall consensus value) is perturbed without changing the overall ordering of UIDs and node values (this is possible by the assumption of locally-sensitive consensus function). We next show that after $o(n\log n)$ messages at least one node would output the same consensus value it computed for initial condition $x$ -- a contradiction.
Let the nodes be arranged in a ring and let $x^{(1)}$ be an initial condition such that the ring is $c$-symmetric. By the contradiction hypothesis, after $o(n\log n)$ messages are exchanged, every node's state is order-equivalent to at least one other node's state and each node correctly outputs $f(x^{(1)})$ -- call this execution $\alpha^{(1)}$. Note that, by fact (ii) above, each node has only received information from $2k+1<n$ neighbors, including itself. Consider a pair $u$, $v$ of nodes in order-equivalent states: there exists one node $w$ that belongs to the $k$-neighborhood of $u$ but does not belong to the $k$-neighborhood of $v$. Consider now an initial condition $x^{(2)}$ identical to $x^{(1)}$ except that the initial value of $w$ is perturbed without changing the overall order of agents' values and UIDs, and such that $f(x^{(1)})\neq f(x^{(2)})$ (this is always possible when the consensus function is locally-sensitive). Given initial condition $x^{(2)}$, under any execution $\alpha^{(2)}$ with $o(n\log n)$ messages, $v$'s state will be identical (and not just order-equivalent) to its state under execution $\alpha^{(1)}$, and therefore $v$ will output $f(x^{(1)})$ as its consensus value (since by the contradiction hypothesis the algorithm terminates after $o(n\log n)$ messages). This is a contradiction.
\end{proof}
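The order-equivalence notion used in the proof above can be made concrete with a short sketch (ours, not from the paper): two segments of distinct values are order-equivalent exactly when their rank vectors coincide.

```python
def order_pattern(seg):
    """Rank vector of a segment of (distinct) initial values."""
    by_value = sorted(range(len(seg)), key=lambda i: seg[i])
    ranks = [0] * len(seg)
    for r, i in enumerate(by_value):
        ranks[i] = r
    return tuple(ranks)

def order_equivalent(s1, s2):
    """Two segments are order-equivalent iff their rank vectors coincide."""
    return order_pattern(s1) == order_pattern(s2)

# On a ring with values (1, 3, 2, 4), segments (1, 3) and (2, 4) are both
# "low, high", hence order-equivalent; (4, 2) is not.
print(order_equivalent([1, 3], [2, 4]))  # True
print(order_equivalent([1, 3], [4, 2]))  # False
```

A comparison-based algorithm, by definition, behaves identically on order-equivalent inputs, which is precisely what the $c$-symmetry argument exploits.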
The bound in Lemma \ref{lemma:ccring_comp} is instrumental in deriving the desired lower bound over the much more general class of distributed consensus algorithms. This bound requires that the set of initial values $\mathcal X$ is ``large enough''. This requirement is automatically satisfied whenever $\mathcal X$ has infinite cardinality.
\begin{proposition}[Lower bound on message complexity for sparse graphs]
\label{lemma:ccring_noncomp}
Let $f$ be a locally-sensitive or extractive consensus function. Then there exists a function $\psi(n,R)$ such that, if the cardinality of $\mathcal X$ is greater than or equal to $\psi(n,R)$, then $\textrm{MC}(f, \mathcal G_s) \in \Omega(n\log n)$.
\end{proposition}
\begin{proof}
The key idea (conceptually identical to that in \cite{GF-NL:87} and \cite{MS:85}) is to show that, perhaps surprisingly, if the cardinality of $\mathcal X$ is larger than a (very large) finite number, any distributed consensus algorithm in $\mathcal A$ executes on a small subset of $\mathcal X$ in a way that is indistinguishable from the execution of a comparison-based algorithm. Lemma \ref{lemma:ccring_comp} then applies and the claim follows.
As in the proof of Lemma \ref{lemma:ccring_comp}, without loss of generality, we consider synchronous executions and distributed consensus algorithms (referred to as \emph{elementary} algorithms) where at each time step a node decides whether to send a message to its right neighbor, whether to send a message to its left neighbor, and whether to stop execution and decide on a consensus value. Every received message is stored in the receiver node's state. Every sent message contains the sender's entire state. Nodes can perform arbitrary internal transitions and have unlimited computational power. At each time step, the state of a node contains its initial value, its UID, and the history of messages exchanged with the neighbors. The initial conditions \emph{include the nodes' UIDs} (totally ordered according to some binary relation $\leq$).
We introduce the definition of \emph{indistinguishable} initial values.
Consider two sets of initial conditions $x^{(1)}$ and $x^{(2)}$, whose elements are arranged in increasing order, that is $x^{(1)}_i \leq x^{(1)}_j $ and $x^{(2)}_i \leq x^{(2)}_j $ for $i\leq j$. Let $\sigma$ be a permutation over the set of indexes $\mathcal I=\{1,\ldots, n\}$. We say that $x^{(1)}$ and $x^{(2)}$ are \emph{indistinguishable} with respect to an algorithm $a\in \mathcal{A}$ if, for \emph{any} permutation $\sigma$, the trace\footnote{Informally, the trace is the history of an execution of the algorithm: for each time step, it records all agents' states and all messages exchanged. For a formal definition, we refer the reader to \cite{NL:96}.} of the execution with initial values $x^{(2)}$ (with indices permuted according to $\sigma$) can be obtained from the trace of the execution with initial values $x^{(1)}$ (with indices permuted according to $\sigma$) merely by substituting every occurrence of an element of $x^{(2)}$ with the element of corresponding order in $x^{(1)}$.
We claim that, if the set $\mathcal X$ of possible initial values is large enough, there exists a set $\mathcal W \subseteq \mathcal X$ with $|\mathcal W|\geq 2n-1$ such that \emph{any} two $n$-subsets $\mathcal U \subset \mathcal W$ of size $|\mathcal U|=n$ are indistinguishable with respect to a given consensus algorithm $a\in \mathcal A$. This claim follows from Ramsey's theorem \cite[Theorem B]{FPR:30}.
Specifically, color every subset $\mathcal U\subset \mathcal X$ of size $|\mathcal U|=n$ so that indistinguishable sets share the same color. For any elementary algorithm, there are finitely many classes of indistinguishable initial conditions. This is due to the facts that (i) there are finitely many permutations $\sigma$ (specifically, $n!$), (ii) at each time step each node can make a finite number of decisions (specifically, 8), and (iii) the algorithm must terminate within a number of rounds equal to $R$. Then, by Ramsey's theorem, there exists a number $\phi(n,R)$ such that, if $|\mathcal X|\geq \phi(n,R)$, then there exists at least one set $\mathcal W \subset \mathcal X$ of size $|\mathcal W|=2n-1$ such that all its $n$-subsets $\mathcal U\subset \mathcal W$, $|\mathcal U|=n$, share the same color. Note that $\phi(n,R)$ does not depend on the specific algorithm under consideration.
We next claim that there is a set $\bar {\mathcal U} \subset \mathcal W$ of size $|\bar {\mathcal U}|=n$ such that any elementary algorithm behaves like a comparison-based algorithm (i.e., one can find a comparison-based algorithm yielding identical executions) on initial conditions taken from $\bar {\mathcal U}$. Specifically, take $\bar {\mathcal U}$ as the set of the $n$ lowest values in $\mathcal W$. We claim that \emph{any} two order-equivalent sets of size $m\leq n$ in $\bar {\mathcal U}$ (corresponding to $m$-neighborhoods of agents executing the distributed algorithm) yield identical executions, thus implying that the algorithm effectively emulates a comparison-based algorithm on initial values from $\bar {\mathcal U}$. To prove the claim, consider two sets $\bar {\mathcal U}_1$, $\bar {\mathcal U}_2\subset \bar {\mathcal U}$ of size $m$. Now append to $\bar {\mathcal U}_1$ and $\bar {\mathcal U}_2$ the same $n-m$ elements of $\mathcal W\setminus \bar{\mathcal U}$. The two resulting $n$-sets are subsets of $\mathcal W$ and the elements of $\bar {\mathcal U}_1$ and $\bar {\mathcal U}_2$ appear in the same positions in both $n$-sets: therefore they are indistinguishable and, in particular, whenever the states of two nodes $u$ and $v$ contain sets $\bar {\mathcal U}_1$ and $\bar {\mathcal U}_2$, respectively, such nodes will output the same value for the consensus function.
The proof is then completed by using executions of a (non-comparison-based) elementary algorithm on $\bar {\mathcal U}$ to ``construct'' a comparison-based algorithm. Specifically, we construct a comparison-based algorithm whose transitions are identical to the transitions of a given elementary algorithm on $\bar{\mathcal U}$, based on the order of the elements in $\bar{\mathcal U}$. Since no comparison-based algorithm can solve the consensus problem with $o(n\log n)$ messages, the claim follows.
\end{proof}
We stress the fact that the assumptions of Proposition \ref{lemma:ccring_noncomp} are satisfied whenever the set of initial conditions has infinite cardinality (e.g., $\mathcal X = \real$ or $\mathcal X = \natural$).
\subsection{Byte complexity}
\label{ssec:bc}
To prove bounds on byte complexity, we must be a little more specific about the content of the messages. In particular, we will assume that messages carry the UID of the sender and/or of the receiver. In practice, virtually all wireless communication protocols require each message to carry a UID identifying the sender. In addition, in non-broadcast communication protocols (such as those considered in this section), messages also need to carry a receiver UID. The proofs of the following propositions are omitted: they follow quite easily from Propositions \ref{lemma:ccfull} and \ref{lemma:ccring_noncomp} (and from the fact that transmitting a UID requires $\log n$ bytes).
\begin{proposition}[Lower bound on byte complexity for sparse networks]
\label{lemma:bcring}
Assume messages carry the sender and/or the receiver UIDs. Let $f$ be a hierarchically computable and either locally-sensitive or extractive consensus function. There exists a function $\psi(n,R)$ such that, if the cardinality of $\mathcal X$ is greater than or equal to $\psi(n,R)$, then $\textrm{BC}(f, \mathcal G_s) \in \Omega\Bigl ((n\log n)\log n+n\, b \Bigr)$, where $b$ is the size (in bytes) of an initial condition in $\mathcal X$.
\end{proposition}
\begin{proposition}[Lower bound on byte complexity for dense networks]
\label{lemma:bcfull}
Assume messages carry the sender and/or the receiver UIDs.
Then $\textrm{BC}(f, \mathcal G_d) \in \Omega\Bigl (|E|\log n+n\, b \Bigr)$, where $b$ is the size (in bytes) of an initial condition in $\mathcal X$.
\end{proposition}
\subsection{Lower bounds for local broadcast algorithms}
The lower bounds on message and byte complexity in Sections \ref{sec:mc} and \ref{ssec:bc} are derived under the assumption of directional communication (the lower bound on time complexity is, instead, general). This section adapts those bounds to the case of local broadcast communication. The proofs of Propositions \ref{prop:byte1} and \ref{prop:byte2} are omitted: they are a simple consequence of the fact that on ring topologies local broadcasts only offer a twofold improvement in message complexity. Note that, in this case, we do \emph{not} make a distinction between dense and sparse graphs.
\begin{proposition}[Lower bound on broadcast message complexity]\label{prop:byte1}
Let $f$ be a locally-sensitive or extractive consensus function and $\mathcal G$ be a set of graphs with $n$ nodes. There exists a function $\psi(n,R)$ such that, if the cardinality of $\mathcal X$ is greater than or equal to $\psi(n,R)$, then $\textrm{bMC}(f, \mathcal G) \in \Omega(n\log n)$.
\end{proposition}
\begin{proposition}[Lower bound on broadcast byte complexity]\label{prop:byte2}
Assume messages carry the sender and/or the receiver UIDs. Let $f$ be a locally-sensitive or extractive consensus function and $\mathcal G$ be a set of graphs with $n$ nodes. There exists a function $\psi(n,R)$ such that, if the cardinality of $\mathcal X$ is greater than or equal to $\psi(n,R)$, then $\textrm{bBC}(f, \mathcal G) \in \Omega(n\log^2 n+nb)$.
\end{proposition}
Note that the lower bound on byte complexity for local broadcast schemes is lower than the corresponding lower bound for directional communication, as one might intuitively expect.
\section{Tightness of Lower Bounds on \\Achievable Performance}\label{sec:upper}
In this section we study the tightness of the bounds derived in Section \ref{sec:lbs}.
\subsubsection{Time complexity}
The bound on time complexity is tight and is achieved by a \emph{flooding algorithm}, in which each node repeatedly transmits its initial value and \emph{all} received information to its neighbors (this result is well known in the context of leader election -- its extension to our setting is straightforward).
\begin{proposition}[Tightness of time complexity]
\label{lemma:tctight}
For a given consensus function $f$ and class of graphs $\mathcal G$ with $n$ nodes, $\textrm{TC}(f, \mathcal G) \in \Theta(n)$.
\end{proposition}
\begin{proof}
In a flooding algorithm, information travels from a node $v$ to any node at distance $k$ from $v$ in no more than $k\, (l+d)$ time units (recall that, within our model, by the fairness and non-blocking assumptions, each node executing a flooding algorithm will transmit a message at least once every $l+d$ time units). Since the distance between any pair of nodes in $G$ is smaller than or equal to $n$, and since each node can correctly compute the value of $f$ once it has knowledge of all initial values, one can conclude that a flooding algorithm has time complexity $O(n)$. Comparison with the lower bound in Proposition \ref{prop:lbtime} immediately leads to the claim.
\end{proof}
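A synchronous flooding execution on the line graph can be sketched as follows (our sketch; it counts synchronous rounds rather than $(l+d)$ time units). Every node forwards all it knows each round, so full knowledge -- and hence the ability to evaluate $f$ -- is reached after exactly $\text{Diam}(G)$ rounds:

```python
def flood_rounds(n):
    """Rounds needed for flooding on the n-node line graph until every node
    knows every initial value (and can thus evaluate the consensus function)."""
    adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
    known = [{i} for i in range(n)]   # node i initially knows only x_i
    rounds = 0
    while any(len(k) < n for k in known):
        new = [set(k) for k in known]
        for i in range(n):
            for j in adj[i]:
                new[j] |= known[i]    # i floods everything it knows to j
        known, rounds = new, rounds + 1
    return rounds

print(flood_rounds(8))  # 7, the diameter of the 8-node line graph
```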
\subsubsection{Message complexity}
Remarkably, a slight variant of the GHS algorithm \cite{RGG-PAH-PMS:83, BAw:87} (which builds a minimum spanning tree) achieves message optimality both for dense and for sparse graphs.
\begin{proposition}
\label{lemma:mctight}
Let $f$ be a locally-sensitive or extractive consensus function. Then $\textrm{MC}(f,\mathcal G_d)\in \Theta(|E|)$. Furthermore, there exists a function $\psi(n,R)$ such that, if the cardinality of $\mathcal X$ is greater than or equal to $\psi(n,R)$, then $\textrm{MC}(f, \mathcal G_s) \in \Theta(n\log n)$.
\end{proposition}
\begin{proof}
Consider the following variant of the GHS algorithm. First, a rooted minimum spanning tree (MST) is constructed by executing the GHS algorithm. This operation requires $O(n\log n + |E|)$ messages. Note that at the end of the GHS algorithm the node that is the root of the MST is aware of this fact. The root node then requests from all nodes their initial values with a tree broadcast. After a node is contacted, it waits until all descendants (if any) have sent it their initial value; it then forwards its initial value and its descendants' to its parent. Finally, the root computes the consensus function and sends the consensus value to all nodes. Given the tree structure, tree broadcasts and information collection require exactly $n-1$ messages each. The claim then follows.
\end{proof}
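The collection and result-broadcast phases of this GHS variant can be sketched as follows. The MST construction itself is omitted, and the data structures (a children map describing the rooted tree and an init table of initial values) are illustrative assumptions, not part of the original algorithm description.

```python
def tree_consensus(children, root, init, f):
    """Sketch of the value-collection and result-broadcast phases run on an
    already-built rooted spanning tree (the GHS construction is omitted).
    Returns the consensus value and the number of messages exchanged,
    which is exactly 3*(n-1): requests down the tree, initial values up,
    and the consensus value down again."""
    msgs = 0
    n_edges = sum(len(c) for c in children.values())

    def collect(v):                       # convergecast toward the root
        nonlocal msgs
        vals = [init[v]]
        for c in children.get(v, []):
            msgs += 1                     # request message from v to child c
            vals.extend(collect(c))
            msgs += 1                     # c forwards its subtree's values to v
        return vals

    value = f(collect(root))
    msgs += n_edges                       # broadcast of the consensus value
    return value, msgs
```

On a star tree with one root and three leaves, for example, the sketch exchanges $3(n-1)=9$ messages.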
\subsubsection{Byte complexity}
To prove tightness of the byte complexity bound, we need to assume that the locally-sensitive or extractive consensus function is hierarchically computable.
\begin{proposition}
\label{lemma:bctight}
Assume messages carry the sender and/or the receiver UIDs. Let $f$ be a locally-sensitive or extractive consensus function that is hierarchically computable. Then $\textrm{BC}(f, \mathcal G_d) \in \Theta(|E|\log n + nb)$. Furthermore, there exists a function $\psi(n,R)$ such that, if $|\mathcal X| \geq \psi(n,R)$, then $\textrm{BC}(f, \mathcal G_s) \in \Theta((n\log n)\log n + nb)$.
\end{proposition}
\begin{proof}
Consider the same variant of the GHS algorithm introduced in the proof of Lemma \ref{lemma:mctight}. As discussed, the GHS algorithm computes a rooted MST in $O(n \log n + |E|)$ messages, and the consensus function is computed with further $O(n)$ messages. Each message exchanged by the GHS algorithm during the construction of the tree has size $O(\log n)$ \cite{RGG-PAH-PMS:83}. Furthermore, under the assumption that the consensus function is hierarchically computable, each message transmitted along the tree has size $O(b)$ and there are $O(n)$ such messages. The claim then follows.
\end{proof}
\subsubsection{Tightness of bounds for local broadcast communication}
The study of the tightness of the bounds for local broadcast communication hinges upon a slightly more intricate variation of the GHS algorithm. Specifically, the message complexity of the GHS algorithm is $O(n\log n + |E|)$; the $|E|$ factor is due exclusively to challenge-reject message pairs exchanged by nodes during the search for a minimum weight outgoing edge (MWOE). In the MWOE search phase, each node contacts the neighbor connected to its lowest-weight edge: the neighbor's reply is positive or negative depending on the two nodes' group IDs and can be delayed based on the two nodes' levels (we refer the interested reader to \cite{RGG-PAH-PMS:83} for an in-depth definition of this terminology).
Consider, now, a broadcast protocol, and let each node simply \emph{broadcast} its level and group ID every time these are updated (i.e., at most once per level). We remark that a node can assume at most $\log n$ levels during execution. Neighbor nodes locally record the broadcasts they receive and look them up when looking for the MWOE. It is easy to see that such an algorithm \emph{emulates} the execution of the GHS algorithm (and, hence, inherits its correctness).
\begin{proposition}[Broadcast message complexity of consensus]
\label{lemma:broadcastmc}
Let $f$ be a locally-sensitive or extractive consensus function and $\mathcal G$ be the set of graphs with $n$ nodes. There exists a function $\psi(n,R)$ such that, if the cardinality of $\mathcal X$ is greater than or equal to $\psi(n,R)$, then $\textrm{bMC}(f, \mathcal G) \in \Theta(n\log n)$.
\end{proposition}
\begin{proof}
Consider a distributed consensus algorithm that first \emph{emulates} the GHS algorithm to construct a rooted spanning tree (as discussed above) and then performs the sequence of initial value requests and routings outlined in the proof of Proposition \ref{lemma:mctight}. The overall number of broadcasts exchanged per level during the emulation of the GHS algorithm is $\Theta(n)$, since each agent only updates its level and group ID once per level, and the number of levels is $O(\log n)$. Given that the initial value requests and routings require $O(n)$ broadcast operations, the algorithm has a broadcast message complexity of $O(n\log n)$: the claim follows.
\end{proof}
The tightness of the bound on byte complexity follows immediately from Proposition \ref{lemma:broadcastmc} and its proof is omitted in the interest of brevity.
\begin{proposition}[Broadcast byte complexity of consensus]
\label{lemma:broadcastbc}
Assume messages carry the sender and/or the receiver UIDs. Let $f$ be a locally-sensitive or extractive consensus function and $\mathcal G$ the set of graphs with $n$ nodes. There exists a function $\psi(n,R)$ such that, if the cardinality of $\mathcal X$ is greater than or equal to $\psi(n,R)$, then
$\textrm{bBC}(f, \mathcal G) \in \Theta(n\log^2 n + nb)$.
\end{proposition}
\section{Discussion and conclusions}
\label{sec:conclusions}
In Table \ref{tab:consbounds} we provide a synoptic view of our results. Table \ref{tab:consbounds} also includes results for $D$-connected networks, that is, networks where the edge set is time-varying and there exists a constant $D\in \real_{>0}$ (possibly unknown to the nodes) such that, for every $t$, the union of all edges appearing in the time interval $[t, t+D)$ constitutes a \emph{connected} graph.
Due to page limitations, we omit the proofs for the bounds pertaining to $D$-connected networks: such bounds can be derived with techniques very similar to the ones used in this paper.
Table \ref{tab:consbounds} elucidates the relative advantages of different approaches to distributed computation. On static networks, the modified versions of the GHS algorithm discussed in this paper \emph{simultaneously} achieve optimal time, message, byte, and storage complexity under mild assumptions regarding the consensus function, both for directional and broadcast communication, and both for sparse and dense graphs. On the other hand, the GHS algorithm is not readily applicable to time-varying networks and is sensitive to single points of failure (since it relies on the construction of a spanning tree). In other words, a GHS algorithm has minimal robustness margins to the disruption of a communication channel.
A flooding algorithm is time-optimal but, as one might expect, has poor message and byte complexity; its storage complexity is also the worst among all considered algorithms. On the other hand, flooding is maximally robust to communication disruptions. Also, somewhat surprisingly, this study shows how a simple flooding algorithm outperforms an average-based algorithm (that solves the \emph{specific} consensus problem with consensus function $f(x) = \frac{1}{n}\mathbf{1}^T x$) with respect to all performance metrics except storage complexity (which is arguably a minor concern for modern embedded systems).
Finally, the hybrid clustering algorithm introduced by the authors in \cite{FR-MP:13} has performance intermediate between those of flooding and GHS, as a function of a tuning parameter $m$. The main advantage of this algorithm is to ``trade'' some of the optimality of GHS with a tunable ``degree of robustness''.
As discussed, GHS-like algorithms (and also the hybrid algorithm in \cite{FR-MP:13}) cannot be readily applied to dynamic settings. Note, however,
that the execution time of GHS is $O(n)$, an order of magnitude faster than average-based algorithms: if reconfigurations are infrequent (i.e., their frequency is much lower than $O(1/n)$), these algorithms can indeed be applied to nominally dynamic networks.
We conclude this paper with a discussion of the limitations of our analysis, which directly translate into a number of interesting directions for future research. First, optimal algorithms for $D$-connected networks are not currently known (of course, our lower bounds may not be tight), which represents a key area of study. Second, while locally-sensitive and extractive consensus functions represent a fairly large class of consensus functions, it is of interest to generalize our bounds on message, byte, and storage complexity even further. Third, we employ a worst-case approach over the classes of \emph{sparse} and \emph{dense} graphs. An interesting direction for future research would be (i) to derive bounds on a finer partition of the class of possible graphs, e.g., by parameterizing the graphs by their maximum node degree, and (ii) by embedding the problem within a probabilistic structure, to derive performance bounds with respect to sets of graphs randomly drawn from a given probability distribution, so that one can derive measures of average (as opposed to worst-case) performance. Fourth, our model essentially does not include the notion of robustness with respect to either stopping (i.e., malfunction) or byzantine (i.e., malicious) failures, which is an aspect of pivotal importance for the reliable deployment of cyber-physical systems. Finally, practical implementations of the algorithms discussed in this paper could shed additional light on the relative benefits of the different approaches.
Overall, we hope that this work will prompt researchers in the field of multi-agent systems to compare their results against the fundamental lower bounds derived in this paper to properly evaluate the relative benefits of their approach.
\vspace{-0.0445cm}
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
\begin{figure*}
\centering
\includegraphics[trim=0cm 1.5cm 1cm 15.5cm,clip,width=1.0\textwidth]{cmd_cubi.pdf}
\caption{Left: the brighter portion of the CMD of the Carina dwarf in the \hbox{\it V\/}\ vs \hbox{\bb--\vv\/}\ plane. The
large colored circles represent the spectroscopic targets
in this investigation. The color coding derives from the selection
criteria shown in the right panel. Right: \hbox{\it V\/}\ vs \hbox{{\it c}$_{\rm U,B,I}$\/}\ diagram for the
same stars. This particular color combination,
(\hbox{\uu--\bb\/})--(\hbox{\bb--\ii\/}), allows us to split the RGB into old and intermediate-age
populations \citep{monelli14}. \label{cmdcubi}}
\end{figure*}
Empirical evidence indicates that dwarf spheroidal galaxies (dSphs) and
ultra-faint dwarfs (UFDs) are the smallest stellar systems to be dominated by
dark matter (DM). This finding is supported by new and more
precise kinematic measurements \citep{walker09a,walker09b},
implying that dSphs and UFDs can provide
firm constraints on the smallest DM halos that can retain baryons. The
nearby systems have the added advantage that we can sample a significant
fraction of their stellar content. Therefore, these interesting stellar
systems offer a unique opportunity to simultaneously probe their stellar
content and their total mass budget \citep{walker09b}.
There is intriguing empirical evidence that
both low- and high-mass galaxies follow the stellar
mass-metallicity relation \citep{tinsley79}. However, current
extragalactic surveys indicate that large galaxies have flat gas-phase
metallicity gradients \citep{moran12}. Dwarf galaxies, instead, show
different peaks in the metallicity distribution
(e.g., Tucana, \citealt{monelli10tucana} and
Sculptor, \citealt{deboer12}), but still lack firm evidence of a
metallicity gradient \citep{vanzee06}. The available
evidence seems to suggest not only that dwarf galaxies appear
to be less efficient star formers, but also that their chemical
enrichment might have been different from that of massive
galaxies.
Cosmological models also suggest that dSphs and UFDs are the fossil
records of the Galactic halo \citep{helmi08}.
Therefore, their kinematic and chemical
properties can provide firm constraints on the formation and evolution
of the Milky Way (MW). However, recent measurements from
high-resolution spectra indicate that the $\alpha$-element abundances in
nearby dSphs are, for iron abundances larger than \hbox{[Fe/H]}$>$--2, typically
less enhanced than halo stars and Galactic globular clusters
where \hbox{[$\alpha$/Fe]}$\approx$0.4 \citep{tolstoy09}. This conclusion is supported by a
recent investigation based on medium-resolution spectra collected with
X-Shooter at VLT for seven either extremely (\hbox{[Fe/H]}$<$--3) or very
(\hbox{[Fe/H]}$<$--2) metal-poor stars: \citet{starkenburg13} found that the
$\alpha$ enhancement is similar in the mean to halo stars of similar metallicities,
but the spread around the mean is larger than around the halo stars.
Spectroscopic measurements of metal-poor stars in UFDs support
the same scenario, and indeed \citet{gilmore13}, using
high-resolution spectra for seven very metal-poor red giants (RGs) in
Bo\"{o}tes~I, found that their $\alpha$ enhancement is consistent with
halo stars, but showing a spread around the mean.
These findings indicate that the chemical enrichment in low-mass
dwarfs has been slower than in the Galactic halo \citep{cayrel04} and in
the Galactic bulge \citep{lagioia14}. This means that dwarf galaxies
might have played a minor role in building up the Galactic spheroid
\citep{leaman13,stetson14,fiorentino15}.
On the other hand, all the Galactic globular clusters (GCs) investigated
so far show a specific chemical fingerprint: the abundances of Na--O
and Mg--Al are anticorrelated \citep{carretta09uves,carretta09gir,carretta14}.
This signature becomes even more compelling
if we consider the fact that field halo stars do not show evidence of
these anticorrelations \citep{gratton00}.
Moreover, massive GCs with a large spread in iron abundance but clear
evidence of Na--O and Mg--Al anticorrelations
($\omega$-Cen, \citealt{johnson08};
M54, \citealt{carretta10m54}) have also been considered relic cores of
disrupted dwarf galaxies \citep{bekki03}. However, the current dwarf galaxies
for which we have a detailed knowledge of their chemical enrichment history
show a wide range in iron abundance, but no evidence of anticorrelations.
This evidence indicates that the role played by GCs and dwarf galaxies
in the early formation of the Galactic spheroid is still puzzling.
In this context, the Carina dSph can play a crucial role since it
is relatively close (DM$_0$=20.10~mag, \citealt{coppola13}),
it shows at least two clearly separated star-formation episodes, and
a wide range in iron that covers at least 1.5~dex.
The old stellar population has an age of 12~Gyr, while the
intermediate-age population has ages ranging from 4 to 8 Gyr \citep{monelli03}.
In the investigation based on high-resolution (R$\sim$20,000) spectra
collected with FLAMES at VLT for 35 RGs, \citet{lemasle12} found that the
old stellar component in Carina is metal-poor (\hbox{[Fe/H]}$<$--1.5) and slightly
$\alpha$-enhanced (\hbox{[Mg/Fe]}$>$0). On the other hand, the intermediate-age
population is metal-intermediate (--1.5$<$\hbox{[Fe/H]}$<$--1.2) and shows a broad
spread in $\alpha$ enhancement. Indeed, the stars range from being
$\alpha$-poor (\hbox{[Mg/Fe]}$<$--0.3) to $\alpha$-enhanced (\hbox{[Mg/Fe]}$\sim$0.3).
These findings have been independently supported by the detailed star
formation history performed by \citet{deboer14}. They found
evidence of different age-metallicity relations and different trends in
the $\alpha$-element distributions between old- and intermediate-age
subpopulations.
More recently, \citet{vdberg15} used the star formation history
provided by \citet{deboer14} and found that specific sets of
cluster isochrones, covering a broad range in iron and in
$\alpha$-element abundances, account for old horizontal branch (HB)
stars and red clump (RC) stars in
Carina.
It is worth mentioning that these analyses are typically based on
stellar ages estimated by comparing the position of the stars in
color-magnitude diagrams (CMDs)
with specific stellar isochrones. This approach is prone to
observational errors in distance determination, photometry, elemental
abundances, and interstellar reddening. It is also affected by
theoretical uncertainties as the efficiency of diffusive processes,
nuclear cross sections, and treatment of superadiabatic convection. For a more detailed discussion of the error budget we
refer to \citet{renzini91} and \citet{cassisi13book}.
A detailed spectroscopic analysis of Carina stars was also performed by
\citet{venn12} using high-resolution (R$\sim$40,000) spectra for nine bright RGs,
collected with UVES at VLT and with MIKE at Magellan. They found evidence of
inhomogenous mixing between the old and the intermediate-age population.
In particular, a broad spread in Mg was considered suggestive of poor mixing
in the gas from which the old population formed, while the offset in
$\alpha$-element abundance between the old and the intermediate-age population
suggested that the second broader star formation episode in Carina took place in
$\alpha$-enriched gas.
The present investigation of the chemical enrichment history of
this interesting system is based on the largest homogeneous data
set of Carina chemical abundances yet obtained.
Our motivation is twofold:\\
$(i)$ To distinguish old and intermediate-age Carina stars, we use the
\hbox{{\it c}$_{\rm U,B,I}$\/}=(\hbox{\uu--\bb\/})--(\hbox{\bb--\ii\/}) index \citep{monelli13,monelli14}.
Detailed photometric
investigations indicate that this index can remove the degeneracy
between age and metallicity along the red giant branch (RGB).
We note that one of the main
advantages of this index is that the separation of the two stellar
populations relies on a differential measurement. This means that it is
independent of uncertainties in the distance modulus, the
reddening, and the cluster isochrones.\\
$(ii)$ We secured high-resolution homogeneous spectra for 44 RGs
observed with either UVES or {FLAMES\slash GIRAFFE}-UVES with a nominal
spectral resolution of 40,000.
These spectra were supplemented with high- (R$\sim$20,000)
and medium-resolution (R$\sim$6,000) spectra collected with {FLAMES\slash GIRAFFE}.
Moreover, the latter spectra were also employed to investigate iron
and $\alpha$ abundances down to the luminosity of the RC
(\hbox{\it V\/}$\sim$20.5~mag).
The paper is organized as follows.
In Sect. \ref{sec:cubi} we introduce the photometric index \hbox{{\it c}$_{\rm U,B,I}$\/}\
and its use in separating the old and intermediate-age stellar populations
along the Carina RGB.
The three spectroscopic data sets adopted in the current investigation
are discussed in Sect. \ref{sec:observation}. In particular, we focus on spectral resolution, wavelength coverage, and the
signal-to-noise ratio of the different spectra.
In Sect. \ref{sec:stack} we describe the procedure adopted to
stack the {FLAMES\slash GIRAFFE}\ spectra in detail. This is a fundamental step for providing
accurate abundance determinations down to the RC magnitude level.
The techniques for measuring equivalent widths, for computing synthetic
spectra, and for estimating elemental abundances and their errors are
described in Sects. \ref{sec:ew} and \ref{sec:abund}. In these sections
we also present a comparison between the current results and those
available in the literature.
In Sect. \ref{sec:abund_vs_logg} we discuss the difference in iron and
magnesium abundances between the old and intermediate-age Carina
stellar populations.
The comparison between Carina's metallicity distribution and similar
abundances in Galactic halo stars and in Galactic and Magellanic globular
clusters are discussed in Sects. \ref{sec:abund_vs_mw} and
\ref{sec:abund_vs_GC}, respectively.
Comparisons between Carina's $\alpha$-element abundances and similar
abundances in dSph and UFD galaxies are presented
in Sect. \ref{sec:abund_vs_dwarf}.
In Sect. \ref{sec:na-o} we investigate the possible occurrence of
a correlation between Na and O abundances in Carina RGs.
Finally, in Sect. \ref{sec:summary} we summarize the results and outline the
future prospects of the Carina Project.
\section{ \hbox{{\it c}$_{\rm U,B,I}$\/}\ index and different stellar populations}
\label{sec:cubi}
\begin{figure}
\centering
\includegraphics[trim=2.3cm 1.5cm 8.cm 15.8cm,clip,width=0.95\columnwidth]{map_cubi.pdf}
\caption{Spatial distribution of our spectroscopic targets.
Symbols and colors are the same as in Fig.~\ref{cmdcubi}. The
dashed ellipses indicate the core and tidal radii of Carina
\citep{mateo98araa}.
\label{mapcubi}}
\end{figure}
\begin{figure*}[ht!]
\centering
\includegraphics[trim=0.cm 1.5cm 1.8cm 0.cm,clip,angle=90,width=0.9\textwidth]{spect_stack.pdf}
\caption{Left column: \hbox{\it I\/}\ vs \hbox{\bb--\ii\/}\ CMDs showing stars with
Giraffe HR (top) or MR (bottom) spectra. Red squares and blue diamonds show the
old and intermediate-age stellar components. Dotted lines mark the boundaries
of the gravity bins adopted for the spectrum-stacking procedure.
Two isochrones
from the BaSTI database \citep{pietrinferni04,pietrinferni06}
are shown as red dashed and blue dot-dashed lines. The adopted
distance modulus and reddening are indicated \citep{coppola13}.
Right column: colored symbols show the positions of the stacked spectra
in the stellar parameter \hbox{$\log g$}\ vs \hbox{T$_{\mathrm{eff}}$}\ plane. \label{cmdstack}}
\end{figure*}
Recent results have revealed that different stellar populations in
old Galactic globular clusters can be easily isolated along the
whole CMD, from the main sequence, up to the subgiant branch,
RGB, and even the HB from an
appropriate combination of broadband filters
\citep{marino08,sbordone11,milone12}.
\citet{monelli13}
showed that their \hbox{{\it c}$_{\rm U,B,I}$\/}\ index is a powerful tool for identifying
multiple stellar sequences in the RGB of old GCs, and that the
\hbox{{\it c}$_{\rm U,B,I}$\/}\ pseudo-color of RGB stars correlates with the chemical
abundances of light elements. Moreover, \citet{monelli14} have shown
that \hbox{{\it c}$_{\rm U,B,I}$\/}\ can also distinguish a significant
fraction of the RGB stars of Carina's two main populations: the
old stars ($\sim$12~Gyr) have a more negative \hbox{{\it c}$_{\rm U,B,I}$\/}\ pseudo-color than
the intermediate-age stars (4-8~Gyr).
Figure~\ref{cmdcubi} shows the \hbox{\it V\/}\ vs \hbox{\bb--\vv\/}\ (left) and \hbox{\it V\/}\ vs
\hbox{{\it c}$_{\rm U,B,I}$\/}\ (right) diagrams for stars brighter than \hbox{\it V\/}=21~mag: the
brighter portion of Carina's RGB, the RC,
and part of the HB, and contaminating field stars at
\hbox{\bb--\vv\/}$>$0.45~mag. We note that the main evolutionary features
in the \hbox{\it V\/}\ vs \hbox{{\it c}$_{\rm U,B,I}$\/}\ diagram are reversed, and the hottest stars attain
higher \hbox{{\it c}$_{\rm U,B,I}$\/}\ values. The distribution of Carina RGB stars in this
plane has been discussed by \citet{monelli14}, who showed that the
\hbox{{\it c}$_{\rm U,B,I}$\/}\ index largely removes the age-metallicity degeneracy
affecting the RGB stars. Following this analysis, the
right panel of Fig.~\ref{cmdcubi} shows a selection of old,
more metal-poor (red symbols) and intermediate-age, less
metal-poor stars (blue symbols). In particular, the red and blue
symbols identify stars with \hbox{{\it c}$_{\rm U,B,I}$\/}$<$--1.7~mag and
\hbox{{\it c}$_{\rm U,B,I}$\/}$>$--1.7~mag, respectively. We note that in the
classical \hbox{\it V\/}\ vs \hbox{\bb--\vv\/}\ plane these stars are mixed along the RGB.
The different symbols mark the position of the different spectroscopic
data sets (see labels and the discussion in
Sect.~\ref{sec:observation}).
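As a minimal illustration of this selection, the \hbox{{\it c}$_{\rm U,B,I}$\/}\ pseudo-color and the $-1.7$~mag cut quoted above can be sketched in Python. The magnitudes and field names below are hypothetical placeholders, not actual Carina photometry.

```python
def c_ubi(U, B, I):
    """Pseudo-color index c_{U,B,I} = (U-B) - (B-I)."""
    return (U - B) - (B - I)

def split_populations(stars, cut=-1.7):
    """Tag RGB stars as 'old' (c_UBI < cut) or 'intermediate' (c_UBI >= cut),
    following the -1.7 mag cut adopted for Carina. `stars` is a list of
    dicts with U, B, I magnitudes (values used here are illustrative)."""
    old, intermediate = [], []
    for s in stars:
        (old if c_ubi(s['U'], s['B'], s['I']) < cut else intermediate).append(s)
    return old, intermediate
```

Note that, being a color combination, the index is differential and hence insensitive to the adopted distance modulus, as stressed in the text.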
The anonymous referee suggested that we discuss in more detail whether the
\hbox{{\it c}$_{\rm U,B,I}$\/}\ index is either an age or a metallicity indicator. The
empirical evidence suggests that the \hbox{{\it c}$_{\rm U,B,I}$\/}\ index is mainly an age
diagnostic, as previously discussed in detail by \citet{monelli14}.
However, we address this question below to further support
the empirical framework we are developing concerning Carina stellar
populations.
Dating back to the seminal investigation by \citet{smecker96},
it became clear that Carina experienced two clearly separated star formation
episodes. However, optical and optical to near-infrared CMDs indicate that the two
subpopulations overlap along the RGB. The \hbox{{\it c}$_{\rm U,B,I}$\/}\ pseudo-color distribution
shows clear evidence of an asymmetric and possibly dichotomous distribution
of RGB stars.
It is plausible to assume that this distribution is correlated with the
difference in age of the two subpopulations. This is the reason why we
associated the red and the blue RGB stars with the old- and intermediate-age
subpopulations. However, we cannot exclude that the \hbox{{\it c}$_{\rm U,B,I}$\/}\ index is also
affected by heavy element abundances. This means that the \hbox{{\it c}$_{\rm U,B,I}$\/}\ distribution
might also be affected by a difference in CNO and/or in $\alpha$-element
abundances. The main conclusion of this investigation, that is, the presence of
two subpopulations that experienced two different chemical enrichment
histories, is not affected by the intrinsic parameters affecting the \hbox{{\it c}$_{\rm U,B,I}$\/}\
index.
In passing we note that the cut adopted to split old- and
intermediate-age subpopulations was fixed according to the \hbox{{\it c}$_{\rm U,B,I}$\/}\
distribution. It is arbitrary, but quantitative tests indicate that
plausible changes in the cut do not affect the conclusions concerning
the metallicity distributions of the two main subpopulations.
Finally, we note that the age-metallicity pairs found for individual
Carina stars by \citet{deboer14} and by \citet{lemasle12} cannot be
recovered in this analysis. The theoretical reasons that led us to
forgo the fit with individual cluster isochrones, and in turn
individual age estimates of RGB stars, have been discussed in
\citet{monelli14} and in Sect. \ref{sec:intro}.
\section{Observations and data analyses}
\label{sec:observation}
Our data were collected with two
spectrographs mounted at the UT2 (Kueyen) at the Very Large
Telescope (VLT) of the European Southern Observatory (ESO). The
Fibre Large Array Multi Element Spectrograph (FLAMES; \citealt{pasquini02})
multi-object spectrograph was used to collect high- and medium-resolution
spectra with both the Ultraviolet and Visual Echelle
Spectrograph (UVES; \citealt{dekker00}) and GIRAFFE fiber modes.
Moreover, we also included in our analysis spectra collected with
the slit-mode of UVES.
\subsection{UVES and FLAMES\slash UVES spectra}
We present an extension of the analysis for the high-resolution
(R$\sim$40,000) UVES and FLAMES\slash UVES red-arm spectra presented in
\citet[][Paper~V]{fabrizio12}, where we obtained the \hbox{Fe\,{\scriptsize I}}\ and
\hbox{Fe\,{\scriptsize II}}\ abundances of 44 red giant Carina stars (hereafter UVES).
The star symbols in Fig.~\ref{cmdcubi} represent the UVES
targets (five stars), while the circles are for stars
observed with FLAMES/UVES (39 stars). The numbers in parentheses
indicate the number of stars belonging to the old (2+8) and
intermediate-age (3+31) populations, respectively, based on the \hbox{{\it c}$_{\rm U,B,I}$\/}\ index.
Figure~\ref{mapcubi} shows the spatial distribution of our
spectroscopic targets with the same color coding and symbols. The
data reduction, radial velocity (RV) measurements, and
estimation of the stellar parameters of these spectra follow
the approach described in Paper~V. In particular, the
spectroscopic targets used in this analysis, with photometric,
astrometric, and stellar parameters, are listed in Table~1 of
Paper~V.
\subsection{{FLAMES\slash GIRAFFE}\ spectra}
To increase the spectroscopic data set and cover the whole
extent of the RGB up to the intermediate-age RC helium-burning
region (\hbox{\it V\/}$\sim$20.5 and \hbox{\bb--\vv\/}$\sim$0.6~mag), we included in our
analysis spectra collected with {FLAMES\slash GIRAFFE}. In particular,
we adopted both the high-
(HR10, HR13, and HR14A)\footnote{HR10: 5339$<$$\lambda$(\AA)$<$5619, R=19,800\\
HR13: 6120$<$$\lambda$(\AA)$<$6405, R=22,500\\
HR14A: 6308$<$$\lambda$(\AA)$<$6701, R=17,740}
and the medium-resolution (LR08)\footnote{LR08: 8206$<$$\lambda$(\AA)$<$9400, R=6,500}
spectra that were presented by \citet{koch06},
\citet{lemasle12}, and \citet[][Paper~IV]{fabrizio11}.
The stars with high-resolution spectra were selected using the
following criteria:
{\it (i)} their radial velocities are within 4$\sigma$ from the Carina velocity
peak (180$<$RV$<$260~\hbox{km\,s$^{-1}$}) and the precision on the individual RVs is
better than 10 \hbox{km\,s$^{-1}$}\ (71 stars);
{\it (ii)} they have been measured in at least three photometric bands (\hbox{\it U\/}, \hbox{\it B\/}, \hbox{\it I\/});
{\it (iii)} they have \hbox{\bb--\ii\/}\ colors that are typical of RGB stars at the same apparent
\hbox{\it I\/}-band magnitudes ($\Delta$(\hbox{\bb--\ii\/})$\le$0.25~mag).
We obtained a sample of 65 out of the 71 stars. Almost 50\%\ (35)
of the selected stars have previously been analyzed by \citet{lemasle12}.
The others are used here for the first time to estimate iron and
$\alpha$-element abundances. We note that selected stars adopted
in the stacked spectra have between two and eight individual spectra.
We refer to the end of Sect. \ref{sec:stack} for a more
detailed discussion concerning the number of stars per stacked spectrum.
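The three selection criteria above can be sketched as a simple filter. The ridge-line function and all field names below are hypothetical placeholders for the actual catalog quantities, and the numerical thresholds are those quoted in the text.

```python
def select_hr_targets(stars, ridge_bi, rv_lo=180.0, rv_hi=260.0,
                      rv_err_max=10.0, d_bi_max=0.25):
    """Sketch of the HR-target selection criteria:
    (i)   180 < RV < 260 km/s with RV precision better than 10 km/s,
    (ii)  U, B, I photometry available,
    (iii) B-I color within 0.25 mag of the RGB ridge line at the star's
          I-band magnitude (ridge_bi is a hypothetical callable)."""
    selected = []
    for s in stars:
        if not (rv_lo < s['rv'] < rv_hi and s['rv_err'] < rv_err_max):
            continue
        if any(s.get(band) is None for band in ('U', 'B', 'I')):
            continue
        if abs((s['B'] - s['I']) - ridge_bi(s['I'])) > d_bi_max:
            continue
        selected.append(s)
    return selected
```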
Similar criteria were also adopted to select 802 stars from the {FLAMES\slash GIRAFFE}\
medium-resolution sample. In particular, we obtained 483 stars along the
RGB out of a sample of 529 candidate Carina stars (91\%). In the RC region
we included 319 stars out of 407 candidate Carina stars (78\%). We
excluded anomalous Cepheids and bright RC stars. The selected stars, adopted
in the stacked spectra, have between two and 35 individual spectra.
The reduction of these spectra follows the approach described in Paper~IV.
\subsubsection{High resolution}
\label{sec:stack_HR}
The HR spectroscopic targets (hereafter GHR) are shown as colored
squares in Figs.~\ref{cmdcubi} and \ref{mapcubi}. The old
population includes 24 stars, while that with intermediate-age stars
includes 41 objects. The top left panel of
Fig.~\ref{cmdstack} shows these stars in the
\hbox{\it I\/}\ vs \hbox{\bb--\ii\/}\ CMD (red squares and blue diamonds).
Here, we overplotted
two isochrones (from the BaSTI database\footnote{\tt
http://www.oa-teramo.inaf.it/basti},
\citealt{pietrinferni04,pietrinferni06}), representing the
two main star-formation episodes of Carina. The adopted true
distance modulus and reddening values are from
\citet{coppola13} and are labeled in the figure, and we
used extinction coefficients from \citet{mccall04}. The
isochrones were used to divide the sample into four bins, using
iso-gravity loci (dotted lines). This approach produced four
subsamples of spectra that we stacked,
since the stars within each bin have quite similar stellar parameters.
The stellar parameters of each individual star were determined
following the procedure described in Paper~V. In Table~\ref{tab_param_stack}
we list the
mean values of effective temperature and surface gravity for each bin
with their uncertainties and in Col. 4 the number of individual
stars per stacked spectrum. We note that the uncertainties in the different
bins are the standard deviations of the individual stellar parameters
summed in quadrature.
The stacking procedure is described in Sect.~\ref{sec:stack}.
The top right panel of Fig.~\ref{cmdstack} shows the
position of stacked spectra in the \hbox{T$_{\mathrm{eff}}$}\ vs \hbox{$\log g$}\ plane
(see also Table~\ref{tab_param_stack}). The
bars indicate the range of stellar parameters covered by
individual spectra; they range from $\Delta\hbox{$\log g$}$$\sim$0.1~dex,
$\Delta\hbox{T$_{\mathrm{eff}}$}$$\sim$50~K to $\sim$0.25~dex, $\sim$200~K. The
signal-to-noise ratio (S/N) of the individual spectra
ranges from $\sim$10 to $\sim$50 for the brightest targets.
This data set was also adopted by \citet{lemasle12} to investigate
the chemical abundances of 35 Carina RG stars.
It is worth mentioning that the ranges in \hbox{$\log g$}\ and \hbox{T$_{\mathrm{eff}}$}\
covered by individual spectra belonging to the same gravity and
temperature
bin allow us to provide accurate abundance estimates. Indeed, the quoted
variations in \hbox{T$_{\mathrm{eff}}$}\ and \hbox{$\log g$}\ (see Sect. \ref{subsec:ab_uncert}) cause
an uncertainty on individual abundances of about 0.15~dex.
\input{tab_param_stack.tex}
\subsubsection{Medium resolution}
\label{sec:sec:stack_LR}
We repeated the approach described above with the LR08
spectra (hereafter GMR). This data set is the
combination of two observing runs in 2003 (GMR03) and
2008 (GMR08). The details of these samples and their
combination were discussed in Paper~IV. We obtained 157 stars in the old and 645 stars
in the intermediate-age population.
The bottom left panel of
Fig.~\ref{cmdstack} shows the CMD and iso-gravity loci.
The sample was
split into nine bins, plus a separate region enclosing the RC
stars.
The stellar parameters and their uncertainties were estimated
following the same approach discussed in Sect.~\ref{sec:stack_HR}
and listed in Table~\ref{tab_param_stack}.
The S/N of the individual spectra ranges from $\approx$10
to 50 for GMR03 (17$\lesssim$\hbox{\it V\/}$\lesssim$20.5~mag) and from
$\approx$5 to 15 for GMR08 (18.5$\lesssim$\hbox{\it V\/}$\lesssim$20.75~mag).
The positions of the stacked spectra in the \hbox{T$_{\mathrm{eff}}$}\ vs \hbox{$\log g$}\ plane
are shown in the bottom right panel of
Fig.~\ref{cmdstack} (see also Table~\ref{tab_param_stack}),
where we obtained values of variations from
$\Delta\hbox{$\log g$}$$\sim$0.1~dex, $\Delta\hbox{T$_{\mathrm{eff}}$}$$\sim$50~K to $\sim$0.25~dex,
$\sim$300~K. In this context, it is worth mentioning that the
GMR08 sample was previously used to constrain the kinematic
properties of Carina stars (Paper~IV). However, this is the first
time they are used to constrain the elemental abundances of RG
stars down to the RC magnitude level.
To trace back the identification of the adopted
spectra and stars, Table~\ref{tab_allstars} gives in the first three
columns the position ($\alpha$, $\delta$) and the current ID, based on
Paper~V. Columns 4 and 5 give the IDs of the UVES and GHR samples,
while Cols. 6 and 7 report the IDs of the GMR03 and GMR08 samples. The star
IDs adopted by \citet{venn12} are listed in Col. 8. Moreover, in
Table~\ref{tab_allstars} we also list the same information for the
individual GHR and GMR spectra adopted in the stacking of different
effective temperature and surface gravity bins (see next section).
\input{tab_allstars.tex}
\section{Stacking procedure for {FLAMES\slash GIRAFFE}\ spectra}
\label{sec:stack}
The individual spectra belonging to each bin and population were
stacked in a two-step procedure.\\ The first fundamental step is
estimating the continuum to gain individual normalized spectra.
By default, each spectrum is divided into 200 intervals. To
properly identify the continuum while avoiding lines, spikes, and
contaminants we calculated the biweight mean \citep{beers90} for
each interval using the inverse square-root of the signal as the
weight. The mean value was augmented by 75\%\ of the dispersion to
define the upper envelope of the signal. Then, the 200 local
estimates were connected using a running average with a fixed step
of 40. The resulting curve is a good approximation of the
continuum over the entire spectral range. The resulting normalized spectra can be visually checked
and, if the normalization
is problematic, the number of intervals and the averaging step
can be changed.\\ In the second step we averaged all normalized
spectra belonging to the different gravity bins of the two
populations. To do this, each spectrum was accurately rectified for
its radial velocity and then was rebinned with a fixed wavelength
step (depending on the resolution). Finally, a biweight mean was
applied to each wavelength step, averaging all spectra together.
Stacking 4-9 (OLD) and 9-14 (INT) individual targets increases
the S/N of the GHR spectra, in particular for the faintest targets
in the last bin, by a factor of 3-4. For the GMR data set,
stacking 8-32 (OLD) and 14-61 (INT) individual targets increases
the S/N by a factor of 4-8 (see Table~\ref{tab_param_stack}).
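The biweight stacking step can be sketched as follows. This is an illustrative Python outline, not our IDL implementation; the function names and the tuning constant $c=6.0$ are conventional choices, and the input spectra are assumed to be already normalized, shifted to rest wavelength, and rebinned onto a common grid:

```python
import numpy as np

def biweight_location(x, c=6.0, eps=1e-12):
    """Tukey's biweight estimate of the central location (cf. Beers et al. 1990)."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))      # median absolute deviation
    u = (x - med) / (c * mad + eps)       # scaled distances from the median
    mask = np.abs(u) < 1.0                # points beyond c*MAD get zero weight
    w = (1.0 - u[mask] ** 2) ** 2
    return med + np.sum(w * (x[mask] - med)) / np.sum(w)

def stack_spectra(flux_list):
    """Stack spectra sampled on a common wavelength grid: a biweight mean
    is taken at each wavelength step, down-weighting outlying pixels."""
    flux = np.vstack(flux_list)
    return np.array([biweight_location(flux[:, j]) for j in range(flux.shape[1])])
```

Because the biweight down-weights discrepant pixels, residual spikes or poorly normalized regions in a single spectrum do not bias the stacked flux the way a plain mean would.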
Figure~\ref{specstack} shows an example of the stacked spectrum
for an old and an intermediate-age star in the HR10 (top) and LR08
(bottom) grisms. The shaded area represents the dispersion of the
individual spectra, and the plots are centered on two \hbox{Fe\,{\scriptsize I}}\ lines
that are recognizable in the wavelength range.
\begin{figure}[ht!]
\centering
\includegraphics[trim=0.3cm 1.1cm 1cm 1.5cm,clip,width=1\columnwidth]{spect_cfr.pdf}
\caption{Examples of stacked spectra. Top: resulting stack of 7(14)
spectra belonging to the old (intermediate) stellar population,
collected with the Giraffe HR10 grism. The shaded area shows the
dispersion of individual spectra. We show a portion around an
\hbox{Fe\,{\scriptsize I}}\ line marked by the dashed line. Bottom: the same as the top
panel, but for spectra collected with Giraffe LR08 grism.
\label{specstack}}
\end{figure}
\section{Equivalent width measurement}
\label{sec:ew}
\subsection{Line list and atomic data}
We selected isolated and unblended iron, sodium, and $\alpha$-element
(\hbox{O\,{\scriptsize I}}, \hbox{Mg\,{\scriptsize I}}, \hbox{Si\,{\scriptsize I}}, \hbox{Ca\,{\scriptsize I}}, and \hbox{Ti\,{\scriptsize II}}) atomic
lines in the wavelength range of our spectra from different sources in the literature.
In particular, we merged
the line lists of \citet{shetrone03}, \citet{koch08},
\citet{fabrizio12}, \citet{lemasle12}, and \citet{venn12}. We updated the
atomic data for these lines from the
{\scriptsize VALD}\footnote{\tt
http://www.astro.uu.se/$\sim$vald/php/vald.php} data base
\citep{kupka00}.
The final line lists adopted for each data set are shown in
the first four columns of Tables~\ref{tab_UVES}, \ref{tab_HR},
\ref{tab_LRold}, and \ref{tab_LRint}. They list the line wavelength (Col.1),
element species (Col.2), excitation potential (Col.3), and $\log gf$ (Col.4).
\begin{figure*}
\centering
\includegraphics[trim=2.5cm 1.5cm 1cm 8.5cm,clip,width=0.8\textwidth]{ew.pdf}
\caption{Equivalent width comparisons for stars in common
with literature data sets: \citet{shetrone03} (top left), \citet{koch08}
(top right), \citet{lemasle12} (bottom left) and \citet{venn12} (bottom right).
The dashed line shows the bisector of the plane. The dotted
lines display the 10\%\ uncertainty convolved with a 10~m\AA\
error. The mean measurement errors are also displayed. \label{ew}}
\end{figure*}
\input{tab_UVES.tex}
\input{tab_HR.tex}
\input{tab_LR.tex}
\subsection{UVES equivalent widths}
\label{ew_meas}
The elemental abundances for the UVES and FLAMES/UVES
spectra were determined from equivalent
width (EW) measurements. EWs were measured with a proprietary
IDL\footnote{IDL is distributed by the Exelis Visual Information
Solutions.} interactive procedure, based on a Gaussian or Voigt fitting
routine. The user controls the continuum placement, the
profile of individual lines, and the contribution of the wings to the EW
values. Continuum estimation, in particular, is crucial for
the robustness of the final results.
To minimize any systematic bias in the
continuum estimate that is due to the subjectivity of the operator, three of
us have independently performed EW measurements on a sample of
selected lines (weak and strong, high and low S/N). The
internal dispersion is lower than 6~m\AA,\ and there is no evidence of
systematics. We also performed a sanity check on the profile measurement
using the IRAF\footnote{IRAF is distributed by the National Optical Astronomy
Observatory, which is operated by the Association of Universities for
Research in Astronomy, Inc., under cooperative agreement with the
National Science Foundation.} task {\tt splot}. The differences
are within a few percent.
We estimated the uncertainties in the equivalent widths
(EW$_{\rm rms}$) using the formula presented by \citet{cayrel88},
revisited by \citet{venn12}:
$$
{\rm EW_{rms}}=({\rm S/N})^{-1}\times\sqrt{1.5\times{\rm
FWHM}\times\delta x},
$$
where S/N is the signal-to-noise ratio per pixel, FWHM is the line full
width at half-maximum, and $\delta x$ is the pixel size. Following this
approach, we adopted a more conservative EW error:
$$
\epsilon{\rm EW}={\rm EW_{rms}}+0.1\times{\rm EW}.
$$
This conservative approach, which we consider robust, gives
EW$_{\rm rms}$$\approx$2~m\AA\ for the whole sample with a final error
$\epsilon{\rm EW}$$\approx$10~m\AA. Measured EWs with errors are listed
in Table~\ref{tab_UVES}.
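For concreteness, the two error formulas above can be evaluated numerically. In the sketch below the input values (S/N, FWHM, pixel size, EW, all chosen in m\AA) are representative numbers for illustration only, not the actual instrument parameters of our spectra:

```python
import math

def ew_error(snr, fwhm, dx, ew):
    """Cayrel (1988) noise floor plus the conservative 10% term.
    fwhm, dx, and ew are in mAA; returns the total EW error in mAA."""
    ew_rms = math.sqrt(1.5 * fwhm * dx) / snr
    return ew_rms + 0.1 * ew
```

For example, with snr=40, fwhm=120, dx=15, and ew=80, the noise floor is $\approx$1.3~m\AA\ and the total error $\approx$9.3~m\AA, of the same order as the EW$_{\rm rms}$$\approx$2~m\AA\ and $\epsilon{\rm EW}$$\approx$10~m\AA\ quoted above.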
To evaluate the precision of our EWs, we compared the measurements of
non-iron group lines (listed in Table~\ref{tab_UVES}) with those
available in the literature.
Specifically, we compared EWs from \citet{shetrone03}, \citet{koch08}, and
\citet{venn12}, based on UVES and on {FLAMES\slash GIRAFFE}-UVES spectra,
and from \citet{lemasle12}, based on {FLAMES\slash GIRAFFE}-HR spectra.
Figure~\ref{ew} shows the EW comparison for the four samples, with
our measurements always on the x-axis. The
top left panel represents the sample of \citet{shetrone03}, with which we
have five stars in common (one symbol per star).
The black dashed line represents equality, and the dotted lines show
a 10\%\ error convolved with the 10~m\AA\ error, following \citet{shetrone03}.
The error bars in the right bottom corner
display the mean errors of the two EW measurements. The mean difference
and the dispersion are also labeled (in units of m\AA). The comparison
shows that our estimates are higher on average by $\sim$8~m\AA, but
the measurements agree well within 10\%. We attribute
these systematic differences to the continuum normalization,
since a typical uncertainty of 10\%\ on the location of the
continuum causes a difference of 10\%\ in the EW.
The top right panel of Fig.~\ref{ew} shows the same
comparison for the sample of \citet{koch08} (ten stars). The higher dispersion ($\sim$20~m\AA) is
mainly due to the low S/N of these spectra, while the mean
difference is about 12~m\AA. Once again, we
overestimated the EWs. The bottom left panel shows the
comparison for eleven stars in common with the sample of \citet{lemasle12}.
In this case, the systematic difference is larger ($\sim$--19~m\AA),
but here our EW estimates are lower. The high dispersion
seems to be caused by the different spectral resolution (GHR$\sim$20,000 vs.
UVES$\sim$40,000), and the mean error on EWs decreases by almost
a factor of two (20.5 vs $\sim$12~m\AA).
The bottom right panel shows the comparison
with the recent work of \citet{venn12} (seven stars, six of them
reanalyzed by us). In this case, we obtain a difference of
$\sim$--10~m\AA\ with a dispersion of 18~m\AA, which is mainly due to the modest
S/N of these spectra (10-30).
In conclusion, the data plotted in Fig.~\ref{ew} indicate that
the current EWs agree on average with similar estimates
available in the literature, within 10-15\%.
\section{Abundances}
\label{sec:abund}
\subsection{Model atmospheres}
The individual model atmospheres come from the interpolation on the {\scriptsize MARCS}\
grid \citep{gustafsson08}, using a modified version of the interpolation
code developed by \citet{masseron06}. The individual models were computed
for the stellar parameters ($\hbox{T$_{\mathrm{eff}}$}, \log g$) listed in Table~1
of Paper~V and in Table~\ref{tab_param_stack}.
Moreover, we selected models with
spherical geometry, an $\alpha$-enhanced (\hbox{[$\alpha$/Fe]}=+0.4)
chemical mixture, a mass value of 1~\hbox{M$_{\odot}$}, and a constant
microturbulence velocity ($\xi$=2~\hbox{km\,s$^{-1}$}), as described in Paper~V. It is
noteworthy that we did not include lines shortward of 4800~\AA\
to avoid any possible continuum scattering effect in this
wavelength region \citep{sobeck11}.
\subsection{UVES abundances}
\label{ew_abund}
For the abundance determinations, we used the 2010 version of the
stellar abundance code {\scriptsize MOOG}\
\citep{sneden73}\footnote{\tt http://www.as.utexas.edu/$\sim$chris/moog.html},
in particular its {\it abfind} driver.
The abundances presented in the following sub-sections were computed
with a 1D LTE analysis. We chose the solar chemical
composition from \citet{grevesse07} to be consistent with the iron
abundances derived in Paper~V. The reference values adopted for the
individual species and abundance results are listed in Table~\ref{tab_abund}.
The anonymous referee suggested that we provide more quantitative estimates
concerning the upper limits on the abundances of weak lines (O, Na, Si)
in metal-poor stars. To constrain the above limits, we performed a series
of simulations using synthetic and observed spectra with S/N$\ge$40. We
found that we can measure lines with EWs larger than 11~m\AA\ for stars
with iron abundances ranging from \hbox{[Fe/H]}=--1.50 to \hbox{[Fe/H]}=--2.50. The quoted
limit implies upper limits in the abundance of the quoted elements of
about \hbox{[Na/Fe]}=--0.9$\div$0.2, \hbox{[O/Fe]}=--0.1$\div$0.6 and
\hbox{[Si/Fe]}=--0.6$\div$0.4. We performed the same test using spectra with
lower S/N and found that we can only measure lines with EWs larger
than 20~m\AA. This means upper limits in the abundances of
\hbox{[Na/Fe]}=--0.3$\div$0.8, \hbox{[O/Fe]}=0.2$\div$0.9 and \hbox{[Si/Fe]}=--0.3$\div$0.7.
\subsubsection{Comparison with literature values}
Figure~\ref{abcompare} shows the individual UVES abundance results
obtained in this work compared to literature values (rescaled to the
same solar reference abundances). In particular, each panel of
Fig.~\ref{abcompare} shows the $\Delta\hbox{[X/H]}$=\hbox{[X/H]}$_{\rm UVES}$--\hbox{[X/H]}$_{\rm
Other}$ as a function of \hbox{[Fe/H]}\ for the stars in common with
\citet[][black circles]{shetrone03}, \citet[][blue
squares]{koch08}, \citet[][red diamonds]{venn12}, and
\citet[][green triangles]{lemasle12}.
The error bars plotted in this figure were estimated by summing in
quadrature current uncertainties with uncertainties evaluated by the quoted
authors.
The current abundances agree, within 1$\sigma$, with
high-resolution abundances available in the literature, namely
\citet{shetrone03}, \citet{venn12}, and \citet{lemasle12}.
The abundances by \citet{koch08} show a systematic offset and a large
scatter when compared with our measurements. The quoted discrepancy
appears to be caused by the differences in the measured EWs
(see Sect. \ref{ew_meas} and also Fig.~\ref{ew}) and in the adopted
stellar parameters. Their surface gravities are higher on average by
0.5~dex than current ones. The difference seems to be due to
the different approaches adopted to estimate the gravity:
spectroscopic gravities forced by the ionization balance between
\hbox{Fe\,{\scriptsize I}}\ and \hbox{Fe\,{\scriptsize II}}\ vs. photometric gravities.
A more detailed discussion is reported in Sect. 5.2 of Paper~V.
Owing to the lack of evident trends and significant
systematics with the estimates available in the literature, we did not
apply any correction to our UVES abundances.
\begin{figure}
\centering
\includegraphics[trim=0.5cm 0cm 4cm 0.5cm,clip,width=0.5\textwidth]{cfr_ALPHA.pdf}
\caption{Comparison of UVES abundances with the literature
data samples indicated, $\Delta$\hbox{[X/H]}=\hbox{[X/H]}$_{\rm UVES}$--\hbox{[X/H]}$_{\rm Other}$.
\label{abcompare}}
\end{figure}
\subsection{{FLAMES\slash GIRAFFE}-HR abundances}
The approach described in Sects. \ref{ew_meas} and
\ref{ew_abund} was also used to measure the EWs (see Table~\ref{tab_HR})
and obtain
chemical abundances for the stacked {FLAMES\slash GIRAFFE}-HR spectra
(see Table~\ref{tab_abund}).
To check the validity of our measurements on the stacked spectra
and to avoid any systematics, we performed the same analysis
on the individual spectra for the stars in common with the UVES
sample. We compare the abundances of \hbox{Fe\,{\scriptsize I}}, \hbox{Ca\,{\scriptsize I}}\ and \hbox{Mg\,{\scriptsize I}}\ as
functions of \hbox{[Fe/H]}\ in Fig.~\ref{uves-hr}.
The agreement is good, within 1$\sigma$ (see labeled values), for most
measurements and without evidence of a drift as a function of \hbox{[Fe/H]}.
The bottom panel of Fig.~\ref{uves-hr} shows that four objects display a
difference in \hbox{Mg\,{\scriptsize I}}\ abundance that is larger than 1$\sigma$. In
particular, the difference for the most metal-poor (Car40) and the most
metal-rich (Car51) is about 2$\sigma$. We double-checked these objects
together with Car27 and Car33, located at \hbox{[Fe/H]}$\approx$--2,
and we found that they are the faintest targets in the UVES data sample,
meaning they have the lowest signal-to-noise ratios.
Moreover, the continuum in the region bracketing the only available Mg line
($\approx$5528~\AA) is relatively noisy. The EWs based on UVES data
show a mean difference of $\sim$50~m\AA\ with those based on HR spectra.
We note that these differences do not affect the results of
this investigation.
\subsection{Abundance uncertainties}
\label{subsec:ab_uncert}
The abundance errors were estimated as the maximum of two
values. The first comes from propagation
of the errors in the EW measurements
($\epsilon$EW), estimated following the approach described in
Sect.~\ref{ew_meas}, to obtain a $\sigma$(EW) for each
line. When the quoted error was asymmetric, the
average value was adopted. The second error value was based on the
standard deviation of the abundances if more than three lines of the
element were available $\sigma(X)$. Otherwise, we set
$\sigma(X)$=$\sigma(\hbox{Fe\,{\scriptsize I}})$. Moreover, to account for the
uncertainties in the stellar parameters, we added in quadrature
the contributions coming from the following error budget:
we computed the abundance variations by changing,
one at a time, the temperature ($\pm$75~K), gravity
($\pm$0.2~dex), microturbulence ($\pm$0.25~\hbox{km\,s$^{-1}$}), equivalent width
($\pm$10~m\AA), $\log gf$ ($\pm$0.15), metallicity ($\pm$0.2~dex),
and $\alpha$-content ($\pm$0.4~dex). We note that we used
generous estimates for the uncertainties in the atmospheric
parameters (see Paper~V) to include the differences
between our set of parameters, models, and atomic data as compared
to the literature ones. The estimation was performed on the star
Car12, since its effective temperature ($\sim$4400~K) and surface
gravity ($\sim$0.80~dex) can be considered representative of
the entire sample. The results are listed in Table~\ref{tab_err}.
For the {FLAMES\slash GIRAFFE}-HR stacked spectra, the dispersion of
individual spectra (see the top panel of Fig.~\ref{specstack}) produces
an uncertainty in the measured EWs of about 10\%. In terms of
abundances, this effect results in an uncertainty of $\sim$0.15~dex.
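The error-budget combination described above can be sketched as follows. The function is an illustrative simplification; the numerical shifts in the test inputs are hypothetical examples, not the actual Car12 sensitivities listed in Table~\ref{tab_err}:

```python
import math

def combined_error(line_term, param_shifts):
    """Total abundance error: the line-based term (the maximum of the
    EW-propagated error and the line-to-line scatter) summed in quadrature
    with the abundance shifts obtained by varying each stellar parameter
    (Teff, logg, microturbulence, EW, log gf, [M/H], [alpha/Fe]) one at a time."""
    return math.sqrt(line_term ** 2 + sum(s * s for s in param_shifts))
```

Since the shifts are added in quadrature, a single dominant sensitivity (typically the effective temperature for these cool giants) sets the scale of the total error.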
\input{tab_err.tex}
\subsection{{FLAMES\slash GIRAFFE}-MR abundances}
The spectral features in the {FLAMES\slash GIRAFFE}-MR data are severely
affected by the blending effect that is caused by the medium resolution of
the spectra (R$\sim$6,000). Equivalent width measurements are
thus not reliable; to distinguish the contribution of the various blends,
synthetic spectra need to be computed. For this, we used
the \textit{synth} driver of {\scriptsize MOOG}. The synthetic spectra were
convolved with a Gaussian broadening function to reproduce the low
instrumental resolution. We excluded the effect of stellar rotation.
The synthetic spectra were computed for various
abundances of iron, magnesium, and calcium. Then they
were compared, line by line, with the observed spectra. The
resulting abundance for each line was measured from the
minimum of the residual function.
The uncertainties for individual lines were estimated as the
sum in quadrature of three contributions: the abundance step adopted in
spectral synthesis computations, the error in the quadratic fit used to
interpolate the residual function, and the resulting uncertainty in the
abundances ($\sim$0.15~dex) that is due to the dispersion of individual spectra
(see bottom panel of Fig.~\ref{specstack} and Sect. \ref{subsec:ab_uncert}).
Measured abundances with
errors are listed in Tables~\ref{tab_LRold} and \ref{tab_LRint}.
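The abundance determination from the minimum of the residual function can be illustrated with a minimal parabolic-minimum finder. This is a sketch of the interpolation step only (assuming a residual value per synthetic abundance on a regular grid), not our actual pipeline:

```python
import numpy as np

def best_abundance(abund_grid, residuals):
    """Quadratic fit around the grid minimum of the residual function;
    returns the abundance at the vertex of the fitted parabola."""
    abund_grid = np.asarray(abund_grid, dtype=float)
    residuals = np.asarray(residuals, dtype=float)
    i = int(np.argmin(residuals))
    i = min(max(i, 1), len(residuals) - 2)   # keep a 3-point stencil on the grid
    a, b, _ = np.polyfit(abund_grid[i - 1:i + 2], residuals[i - 1:i + 2], 2)
    return -b / (2.0 * a)
```

Fitting a parabola through the three points bracketing the minimum recovers an abundance finer than the synthesis grid step, and the curvature of the fit is what feeds the quadratic-fit term of the error budget quoted above.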
To verify the validity
of our measurements on the stacked spectra and to avoid any
systematics, we performed the same analysis on the individual
spectra for the stars in common with the UVES sample. We
compare the abundances of \hbox{Fe\,{\scriptsize I}}\ and \hbox{Mg\,{\scriptsize I}}\ as functions of \hbox{[Fe/H]}\ in
Fig.~\ref{uves-lr}.
The agreement is good, within 1$\sigma$ (see labeled values), for most
measurements and without evidence of a trend as a function of \hbox{[Fe/H]}.
Figure~\ref{hr-lr} shows the comparison between the resulting
abundances of \hbox{Fe\,{\scriptsize I}}\ and \hbox{Mg\,{\scriptsize I}}\ from stacked {FLAMES\slash GIRAFFE}-HR and -MR
spectra. We do not find any significant systematic trends
between the two data sets.
We note that the two objects in the bottom panel that display a difference
of about 2$\sigma$ are once again Car27 and Car33, that is, the faintest
tail of the UVES targets.
\begin{figure}
\centering
\includegraphics[trim=0.cm 6.5cm 1cm 0.5cm,clip,width=0.95\columnwidth]{cfr_UVES-HR.pdf}
\caption{Comparison of \hbox{Fe\,{\scriptsize I}}, \hbox{Ca\,{\scriptsize I}},\ and \hbox{Mg\,{\scriptsize I}}\ abundances between
UVES and individual Giraffe HR spectra,
$\Delta$\hbox{[X/H]}=\hbox{[X/H]}$_{\rm UVES}$--\hbox{[X/H]}$_{\rm GHR}$.
Red squares and blue diamonds show abundances of old and intermediate-age stars.
\label{uves-hr}}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim=0cm 13cm 1cm 0.5cm,clip,width=0.95\columnwidth]{cfr_UVES-LR.pdf}
\caption{Comparison of \hbox{Fe\,{\scriptsize I}} and \hbox{Mg\,{\scriptsize I}}\ abundances between UVES and
individual Giraffe MR spectra,
$\Delta$\hbox{[X/H]}=\hbox{[X/H]}$_{\rm UVES}$--\hbox{[X/H]}$_{\rm GMR}$.
\label{uves-lr}}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim=0cm 13cm 1cm 0.5cm,clip,width=0.95\columnwidth]{cfr_LR-HRstack.pdf}
\caption{Comparison of \hbox{Fe\,{\scriptsize I}}\ and \hbox{Mg\,{\scriptsize I}}\ abundances between stacked
Giraffe HR and LR08 spectra,
$\Delta$\hbox{[X/H]}=\hbox{[X/H]}$_{\rm GHR}$--\hbox{[X/H]}$_{\rm GMR}$.
\label{hr-lr}}
\end{figure}
\section{Abundances of old and intermediate-age stars}
\label{sec:abund_vs_logg}
The resulting abundances for individual and stacked spectra are
listed in Table~\ref{tab_abund}. Figure~\ref{ab-logg} shows the
\hbox{Fe\,{\scriptsize I}}\ and \hbox{Mg\,{\scriptsize I}}\ abundances as a function of gravity for the whole
data set. As usual, the red squares are used for the
old and the blue diamonds for the intermediate-age
population. The plots show an evident dichotomy in the abundances that
covers the entire gravity range, from the top of the RGB (\hbox{$\log g$}$\simeq$0.5~dex)
to the RC level ($\sim$2.5~dex).
This figure presents several interesting features. \\
{\it (i)} Iron abundances (top panel) based on UVES, GHR, and GMR
spectra show that the old stellar population is, over the entire
gravity range, systematically more metal-poor than the
intermediate-age stellar population. The mean iron abundances
based on the three different sets of spectra are listed in
Table~\ref{tab_cl}. The weighted total mean for the old population
is \hbox{[Fe/H]}=--2.15$\pm$0.06 ($\sigma$=0.28), while for the
intermediate-age population it is \hbox{[Fe/H]}=--1.75$\pm$0.03
($\sigma$=0.21). The difference is slightly larger than 1$\sigma$.
To provide a more quantitative estimate, we smoothed the
metallicity distributions of the old and intermediate-age
data sets with a Gaussian kernel with unitary weight and sigma
equal to the individual abundance uncertainties. We performed a
$\chi^2$ comparison of the two distributions and the confidence levels (CL)
are listed in Col. 4 of Table~\ref{tab_cl}. These data
indicate that the iron abundances of the two stellar
populations differ with a confidence level that ranges from
75\%\ (global sample) to 84\%\ (GHR).\\
{\it (ii)} Magnesium abundances plotted in the bottom panel of
Fig.~\ref{ab-logg}
display a similar
trend. The mean abundances for the different
spectroscopic samples are also listed in Table~\ref{tab_cl}. The
mean magnesium abundance for the old population based on the
entire sample is \hbox{[Mg/H]}=--1.91$\pm$0.05 ($\sigma$=0.22), while for
the intermediate-age population it is \hbox{[Mg/H]}=--1.35$\pm$0.03
($\sigma$=0.22). The difference is slightly larger than
1$\sigma$. We followed the same approach adopted for the iron
abundances and found that they differ with a confidence level
that ranges from 80\%\ (GHR) to 91\%\ (GMR).\\
{\it (iii)} The iron and the magnesium abundances based on GHR and
GMR spectra agree in the overlapping surface gravity regime,
with individual abundances based on UVES spectra.\\
{\it (iv)} The largest surface gravity bin (\hbox{$\log g$}=2.48) shows
the Fe and the Mg abundances of RC stars. The abundances are,
within the errors, similar to the other intermediate-age
abundances. This further confirms the difference between the
two subpopulations, since RC stars are reliable tracers of the
intermediate-age population \citep{cassisi13book}.
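The kernel smoothing used in the population comparison of point {\it (i)} can be sketched as follows; the grid and the example inputs are illustrative, and the subsequent $\chi^2$ comparison of the two smoothed distributions is omitted for brevity:

```python
import numpy as np

def smoothed_mdf(feh, errors, grid):
    """Metallicity distribution smoothed with one unit-weight Gaussian per
    star, with sigma equal to that star's individual abundance uncertainty."""
    pdf = np.zeros_like(grid)
    for mu, sig in zip(feh, errors):
        pdf += np.exp(-0.5 * ((grid - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))
    return pdf / len(feh)   # normalized to unit area
```

Smoothing each star by its own uncertainty ensures that poorly measured stars contribute broad, low contrast to the distribution, so the confidence levels quoted in Table~\ref{tab_cl} are not driven by the noisiest measurements.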
\begin{figure*}[!ht]
\centering
\includegraphics[trim=0cm 9cm 1cm 0.5cm,clip,width=0.7\textwidth]{cfr_elements.pdf}
\caption{Top: \hbox{[Fe/H]}\ abundances based on individual and stacked spectra. Red squares
and blue diamonds represent abundances of old and intermediate-age stars.
Abundances based on individual high-resolution UVES spectra are
displayed as small squares and diamonds, without bars.
The error bars plotted in the bottom right
corner of the panel display the typical uncertainty for the UVES
abundances and on surface gravities (see also Paper~V).
Abundances based
on GHR spectra are marked by medium squares and diamonds,
while those based on GMR spectra are marked by large squares/diamonds.
The vertical bars represent the uncertainty in
iron while the horizontal ones show the gravity ranges adopted in
Fig.~\ref{cmdstack}.
Bottom: same as the top, but for the \hbox{[Mg/H]}\ abundances.
\label{ab-logg}}
\end{figure*}
\input{tab_abund.tex}
\input{tab_confidence.tex}
\section{Comparison with the Galactic halo}
\label{sec:abund_vs_mw}
\input{tab_mean_compare.tex}
Figure~\ref{mwelem} displays the abundance trends of five $\alpha$-elements,
including Na,
for the entire sample of
old- (red squares) and intermediate-age (blue diamonds)
stars. For a detailed comparison with field halo stars, the
large sample of
elemental abundances compiled by
\citet{frebel10mw} is shown as purple dots. These abundances are
based on high-resolution spectra of field stars of all
evolutionary stages. We note that these measurements have been
rescaled to the same solar elemental abundances adopted in this
investigation.\\
The [Na/Fe] abundances are only available for a limited sample
(ten) of intermediate-age stars. The mean weighted
abundance---[Na/Fe]=0.18 ($\sigma$=0.27)---appears slightly
larger than the abundances of
field halo stars in the iron range covered by
Carina stars---[Na/Fe]=--0.11 ($\sigma$=0.29).
However, the difference is within 1$\sigma$ (see
Table~\ref{tab_mean_compare}). We note that the field value is
based on a large sample (72) and shows an intrinsic dispersion
that is higher than the individual measurements (see the error
bars plotted in the top right corner).
Moreover, intermediate-age Carina stars attain either solar or
slightly supersolar Na abundances. The \hbox{[Na/Fe]}\ abundances provided by
\citet{venn12} are on average subsolar. The discrepancy for the stars
with \hbox{[Fe/H]}$>$--2.0 is caused by the difference in the mean iron
abundance $\Delta$(our--Venn)=--0.37$\pm$0.11~dex (see Sect.~5.2 and
Fig.~3 in Paper~V). In passing we note that the plausibility
of the current \hbox{[Na/H]}\ abundances is supported by the mild difference with
similar abundances provided by \citet{shetrone03}, \citet{venn12}, and
\citet{koch08} (see panel (a) of Fig.~\ref{abcompare}).\\
[O/Fe] abundances are available for a few old (four) and for a
good sample of intermediate-age (20) stars. They are O enhanced
and attain very similar abundances within the errors (see
Table~\ref{tab_mean_compare}). The mean weighted [O/Fe] abundance
of the entire sample---[O/Fe]=0.63 ($\sigma$=0.23)---agrees quite
well with similar abundances---[O/Fe]=0.55 ($\sigma$=0.33)---for
field halo stars (57) in the same iron interval.
We note that for several metal-poor objects in our sample both O and Si
display very weak lines and their EWs have modest or poor precision.\\
[Mg/Fe] abundances are available for a sizable sample of both old
and intermediate-age stars (see Sect. \ref{sec:abund_vs_logg}). They
are Mg enhanced and agree---[Mg/Fe]=0.29 ($\sigma$=0.28) vs
[Mg/Fe]=0.40 ($\sigma$=0.22)---within the errors. We note that old
and intermediate-age Carina stars show more similar [Mg/Fe]
abundances than \hbox{[Mg/H]}\ because the old sample is systematically
more iron-poor than the younger one. The mean
weighted [Mg/Fe] abundance of the entire sample---[Mg/Fe]=0.36
($\sigma$=0.24)---agrees very well with similar
abundances---[Mg/Fe]=0.34 ($\sigma$=0.19)---for field halo stars
(581) in the same metallicity interval.
This finding supports early results obtained by \citet{idiart00}
concerning the Mg abundances of field halo stars. The
non-LTE corrections for the \hbox{Mg\,{\scriptsize I}}\ abundances of both halo and
Carina stars were not taken into account. However, \citet{merle11}
found that the non-LTE corrections to the EWs of two Mg lines at
5711 and 5528~\AA\ are smaller than 10\%.\\
The [Si/Fe] abundances are available for a sizable sample of
intermediate-age (16) stars but for only one old star. They are Si
enhanced and the mean weighted abundance of the entire
sample---[Si/Fe]=0.56 ($\sigma$=0.25)---is larger than the mean
abundance---[Si/Fe]=0.27 ($\sigma$=0.25)---of field halo stars
(87). They agree within 1$\sigma$. The mean Si abundance
decreases to 0.54~dex ($\sigma$=0.22) when the old star is excluded.\\
The [Ca/Fe] abundances of old (14) and intermediate-age (38)
Carina stars agree quite well---[Ca/Fe]=0.18 ($\sigma$=0.33) vs
[Ca/Fe]=0.27 ($\sigma$=0.12)---with each other. The weighted
mean [Ca/Fe] abundance of the entire sample---[Ca/Fe]=0.25
($\sigma$=0.17)---agrees very well with similar
abundances---[Ca/Fe]=0.20 ($\sigma$=0.13)---for field halo stars
(540) in the same iron interval. We excluded the non-LTE corrections
to the EWs of \hbox{Ca\,{\scriptsize I}}\ lines for both halo and Carina stars from
the comparison. \citet{merle11} found that the non-LTE corrections to the EWs of
the two adopted \hbox{Ca\,{\scriptsize I}}\ lines (6122, 6166~\AA) are smaller than 10\%.
The anonymous referee noted the paucity of subsolar [Mg/Fe] and
[Ca/Fe] abundance ratios, plotted in panels (c) and (e) of Fig.~\ref{mwelem},
when compared with similar abundances provided by \citet{lemasle12}.
The good agreement between the two different data sets has already
been discussed in Sect.~\ref{sec:abund}. The above difference is
mainly caused by a difference of --0.27$\pm$0.09~dex in iron abundance.
We refer
to Paper~V for a more detailed discussion.\\
The [Ti/Fe] abundances are based on \hbox{Ti\,{\scriptsize II}}. The abundances of old
and intermediate-age Carina stars are enhanced and agree quite
well---[Ti/Fe]=0.40 ($\sigma$=0.24) vs [Ti/Fe]=0.24
($\sigma$=0.28). The former sample includes nine stars,
while the latter contains almost three dozen stars. The mean
weighted [Ti/Fe] abundance of the entire sample---[Ti/Fe]=0.28
($\sigma$=0.30)---agrees very well with similar
abundances---[Ti/Fe]=0.34
($\sigma$=0.15)---for field halo stars
(515) in the same iron interval.
The abundances for neutral \hbox{Ti\,{\scriptsize I}}\ are not used here to
avoid non-LTE effects that cause an ionization imbalance in
this species, as shown by \citet{bergemann11} and
\citet{bergemann14}. It is noteworthy that the correction of
$+0.25$~dex for \hbox{Ti\,{\scriptsize I}}, suggested by \citet{bergemann11} and based
on the metal-poor RGB star HD~122563 (\hbox{[Fe/H]}=--2.5), agrees very
well with the difference we found in our stars
\hbox{Ti\,{\scriptsize I}}--\hbox{Ti\,{\scriptsize II}}=+0.28~dex.\\
To further constrain the [$\alpha$/Fe] abundance of Carina stars,
we also summed the individual $\alpha$-elements with
reliable measurements. The top panel of Fig.~\ref{mwalpha} shows
[Mg+Ca/2Fe] as a function of the iron abundance. The
old and the intermediate-age subpopulations have, once again,
very similar abundances. They also agree quite well with similar
abundances for field halo stars (see also
Table~\ref{tab_mean_compare}). The same result is found for the
[Mg+Ca+Ti/3Fe] $\alpha$-element abundances plotted in
the bottom panel of that figure.
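For definiteness, the combined ratios plotted in Fig.~\ref{mwalpha} are assumed here to be plain averages of the individual abundance ratios (a common convention, stated explicitly since the text does not define them):

```latex
\left[\frac{\mathrm{Mg+Ca}}{2\,\mathrm{Fe}}\right]
  = \frac{[\mathrm{Mg/Fe}]+[\mathrm{Ca/Fe}]}{2},
\qquad
\left[\frac{\mathrm{Mg+Ca+Ti}}{3\,\mathrm{Fe}}\right]
  = \frac{[\mathrm{Mg/Fe}]+[\mathrm{Ca/Fe}]+[\mathrm{Ti/Fe}]}{3}.
```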
The standard deviations of the Carina subpopulations are, as noted
by the anonymous referee, larger than the standard deviations of the
halo sample. The difference is mainly due to the sample size. We
performed a number of tests and found that
the Mg distributions of Carina and halo stars agree at the 95\%\ CL.
We found a similar agreement for the Ca distributions (90\%\ CL),
while for Ti the agreement is only at the 50\%\ CL.
These findings are soundly supported by the mean of the
$\alpha$-elements plotted in Fig.~\ref{mwalpha} and listed in
Table~\ref{tab_mean_compare}. The sums of Mg and Ca agree at the
99\%\ CL, while the sums of the three $\alpha$-elements (bottom panel of
Fig.~\ref{mwalpha}) agree at the 75\%\ CL.
This comparison highlights two relevant findings.\\
{\it (i)} The [$\alpha$/Fe] abundances of old and intermediate-age
Carina stars are enhanced. They do not show
any significant difference within the errors.\\
{\it (ii)} The current mean weighted [$\alpha$/Fe] abundances
agree quite well with similar abundances of field halo stars in
the same range in iron as covered by Carina RG stars.
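The confidence levels quoted above come from statistical tests that are not named in the text; one standard choice is a two-sample Kolmogorov--Smirnov comparison, sketched below on hypothetical [Mg/Fe] arrays (the sample sizes mimic ours, the values do not).

```python
import numpy as np

def ks_2samp(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the two empirical cumulative distribution functions."""
    a = np.sort(np.asarray(a, dtype=float))
    b = np.sort(np.asarray(b, dtype=float))
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return np.max(np.abs(cdf_a - cdf_b))

# Hypothetical [Mg/Fe] samples; the sizes mimic the Carina and halo samples.
rng = np.random.default_rng(1)
carina_mg = rng.normal(0.35, 0.25, size=40)
halo_mg = rng.normal(0.31, 0.14, size=540)

d = ks_2samp(carina_mg, halo_mg)
# Critical distance at the 5% significance level, c(0.05) ~ 1.358:
# d below d_crit_95 means the two samples agree at the 95% CL.
n, m = carina_mg.size, halo_mg.size
d_crit_95 = 1.358 * np.sqrt((n + m) / (n * m))
```

A distance $d$ below the critical value corresponds to agreement at the quoted confidence level.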
\begin{figure}
\centering
\includegraphics[trim=0.5cm 0cm 4cm 0.5cm,clip,width=0.5\textwidth]{mw1.pdf}
\caption{Element abundances as function of \hbox{[Fe/H]}. The open red squares and blue diamonds
are the measurements based on UVES spectra of this work for the old and intermediate-age populations.
The crossed squares and diamonds show the measurements based on Giraffe-HR spectra,
while small solid symbols are for the Giraffe-MR sample.
The purple dots show the Milky Way halo stars from \citet{frebel10mw}.
\label{mwelem}}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim=0.5cm 17.5cm 4cm 0.5cm,clip,width=0.5\textwidth]{mw2.pdf}
\caption{Same as Fig.~\ref{mwelem}, but for the element
combination indicated. \label{mwalpha}}
\end{figure}
\section{Comparison with globular clusters}
\label{sec:abund_vs_GC}
The comparison between Carina's
elemental abundances and
abundances in the Galactic halo is partially hampered by the fact that
the latter abundances are derived from
spectra with different
spectral resolutions and different wavelength ranges. To further constrain
the $\alpha$-element abundances of Carina stars, we repeated the
comparison using abundances of RG stars in Galactic
\citep{pritzl05,carretta09uves,carretta09gir,carretta10uves} and
Magellanic \citep{mucciarelli10,colucci12} globular clusters.
This sample has several distinct differences compared to the field stars:
{\it (i)} a significant fraction of the abundances rely on high-resolution spectra
similar to those of the Carina stars. They also cover very similar
wavelength ranges and therefore similar line lists.
{\it (ii)} A significant fraction of the abundances are on a homogeneous
$\alpha$-element scale.
{\it (iii)} The spectroscopic targets include only cluster RG stars.
{\it (iv)} They show distinctive spectroscopic features (anticorrelations) when
compared with field stars, thus suggesting a different chemical enrichment history.
\begin{figure}
\centering
\includegraphics[trim=.5cm 0cm 4cm 0.5cm,clip,width=0.5\textwidth]{gc1.pdf}
\caption{Element abundances as functions of \hbox{[\fei/H]}\ for Galactic and some LMC
globular clusters. Red squares and blue diamonds show abundances of
old and intermediate-age Carina stars. The small cyan asterisks are mean
abundances for Galactic globular clusters from \citet{pritzl05}.
The colored small symbols are data for LMC clusters from
\citet[][NGC~1786, NGC~2210, and NGC~2257]{mucciarelli10} and
\citet[][NGC~1916, NGC~2005, and NGC~2019]{colucci12}.
The gray dots are individual
abundances for 19 Galactic globular clusters from
\citet{carretta09uves,carretta09gir,carretta10uves}.
The gray error bars in the bottom left corner of each
panel show the mean abundance errors in GCs. The two dot-dashed lines
show the limiting positions of the Milky Way halo stars from \citet{frebel10mw}.
\label{gcelem}}
\end{figure}
Panel (a) of Fig.~\ref{gcelem} shows that the Na abundances of
Carina's intermediate-age RGs agree quite well with cluster stars.
However, in the metallicity range they cover, Carina RGs
attain Na abundances that are slightly lower than the
cluster abundances. They appear, indeed, to agree better
with the Na abundances of field halo stars (see Table~\ref{tab_mean_compare}).
The two dot-dashed lines plotted in Fig.~\ref{gcelem} display the limiting
position of Milky Way halo stars according to \citet{frebel10mw}. To avoid
spurious fluctuations in the range of elemental abundances covered by
field stars, we ranked the entire sample as a function of the iron
abundance. We then estimated the running average by using a box containing
the first 100 objects in the list, computing the mean abundances
(iron, element) and the standard deviations of the subsample. The same
quantities were estimated by stepping the box one object at a time through
the ranked list, until the box included the last 100 objects in the sample. We
performed several tests changing both the number of objects included in
the box and the step size. We found that the limiting
positions are minimally affected by plausible variations.\\
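A minimal sketch of the ranking-and-box procedure just described, with a box of 100 stars stepped one star at a time; the input arrays stand in for the actual field-star catalog.

```python
import numpy as np

def running_limits(feh, elem, box=100, nsig=1.0):
    """Rank stars by [Fe/H], then slide a box of `box` stars one star at a
    time; for each box return the mean [Fe/H] together with the
    mean -/+ nsig*sigma band of the element abundance."""
    order = np.argsort(feh)
    feh = np.asarray(feh, dtype=float)[order]
    elem = np.asarray(elem, dtype=float)[order]
    rows = []
    for i in range(feh.size - box + 1):
        fe_box, el_box = feh[i:i + box], elem[i:i + box]
        rows.append((fe_box.mean(),
                     el_box.mean() - nsig * el_box.std(),
                     el_box.mean() + nsig * el_box.std()))
    return np.array(rows)  # columns: <[Fe/H]>, lower limit, upper limit
```

The two dot-dashed curves in Fig.~\ref{gcelem} then correspond to the second and third columns plotted against the first.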
The comparison between Carina and cluster O abundances is shown in panel (b)
of the same figure. Here, the situation is reversed:
they attain O abundances that are slightly enhanced
compared with cluster stars. The (anti-)correlation Na--O of Carina stars
is discussed in more detail in Sect. \ref{sec:na-o}.\\
The Mg abundances of Carina RGs agree quite well with cluster Mg abundances.
They show, within the errors, very similar enhancements over the
entire metallicity range covered by both globular and Carina samples. \\
The same conclusion applies to globular and Carina Si abundances
(see panel (d) of Fig.~\ref{gcelem}).\\
The comparison between globular and Carina Ca abundances appears to be more complex.
Panel (e) of Fig.~\ref{gcelem} shows that Carina's intermediate-age
subpopulation agrees quite well with globular Ca abundances.
On the other hand, Carina's old subpopulation shows a slightly broader
spread when compared with cluster stars (see Table~\ref{tab_mean_compare})
and with the
intermediate-age subpopulation. The internal difference appears
reliable ($\sigma$=0.33 vs 0.12~dex), since it is differential and based
on GHR and UVES spectra. However, more accurate Ca abundances of Carina
old-population stars are required to confirm this preliminary
evidence.
In passing we note that the current findings
support previous results by \citet{thevenin01} for Mg and Ca abundances
of seven turn-off stars in the metal-poor Galactic globular cluster NGC~6397.\\
The bottom panel of Fig.~\ref{gcelem} shows the comparison between globular
cluster and Carina \hbox{Ti\,{\scriptsize II}}\ abundances. The two samples agree quite well over
the entire metallicity range. There is mild evidence that a fraction of
Carina stars might be slightly underabundant in \hbox{Ti\,{\scriptsize II}}\ for \hbox{[Fe/H]}=--1.8,
but the difference is within the intrinsic dispersion of the two samples
(see error bars).
\begin{figure}
\centering
\includegraphics[trim=0.5cm 17.5cm 4cm 0.5cm,clip,width=0.5\textwidth]{gc2.pdf}
\caption{Same as Fig.~\ref{gcelem}, but for the element
combination indicated. \label{gcalpha}}
\end{figure}
The top and bottom panels of Fig.~\ref{gcalpha} reveal, even on
cursory inspection, that the sum of Mg and Ca and
the sum of Mg, Ca, and \hbox{Ti\,{\scriptsize II}}\ agree quite well with the mean $\alpha$-element
abundances of globular cluster stars. This indicates that the
$\alpha$-element enrichments are quite similar. This
evidence is quite compelling because it applies not only to the old,
but also to the intermediate-age subpopulation.
In passing we note that this comparison also suggests
that nearby stellar systems and field halo stars attain very similar $\alpha$ enhancements
in the metallicity range they cover.
This further supports the evidence that $\alpha$-elements, in contrast
with {\it s}- and {\it r}-elements, are poor diagnostics to
constrain possible differences in chemical enrichment between old
and intermediate-age stellar populations \citep{cescutti08,matteucci14}.
\section{Comparison with nearby dwarfs}
\label{sec:abund_vs_dwarf}
\input{tab_dwarf.tex}
\begin{figure}
\centering
\includegraphics[trim=0.5cm 0cm 4cm 0.5cm,clip,width=0.5\textwidth]{dwarf1.pdf}
\caption{Element abundances as functions of \hbox{[\fei/H]}\ for the dwarf spheroidal galaxies
\citep[Draco:][pluses]{shetrone01,fulbright04,cohen09} -
\citep[Fornax:][circles]{shetrone03,tafelmeyer10,letarte10,hendricks14} -
\citep[LeoI:][stars]{shetrone03} -
\citep[Sculptor:][triangles]{shetrone03,geisler05,frebel10scl,starkenburg13} -
\citep[Sextans:][upside-down triangles]{shetrone01,aoki09,tafelmeyer10} -
\citep[Ursa Minor:][crosses]{shetrone01}.
\label{dwarfelem}}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim=0.5cm 0cm 4cm 0.5cm,clip,width=0.5\textwidth]{ufd1.pdf}
\caption{$\alpha$-element abundances as functions of \hbox{[\fei/H]}\ for the ultra-faint dwarf galaxies:
\citep[Bootes:][pluses]{feltzing09,norris10} -
\citep[Ursa Major:][circles]{frebel10ufd} -
\citep[ComaBer:][stars]{frebel10ufd} -
\citep[Hercules:][triangles]{koch08her} -
\citep[LeoIV:][upside-down triangles]{simon10}.
\label{ufdelem}}
\end{figure}
To further characterize the chemical enrichment history of Carina's old
and intermediate-age subpopulations, we extended the comparison
to other nearby dSphs and UFDs.
The dSphs included in the current comparison---Draco,
Fornax, LeoI, Sculptor, Sextans, and Ursa Minor---have accurate
elemental abundances from high-resolution spectra,
covering a broad range in iron abundances (see Table~\ref{tab_dwarf}).
Moreover, they show quite different star formation histories,
but they all host a clearly defined old (t$\sim$12~Gyr) subpopulation.
Panels (a) and (b) of Fig.~\ref{dwarfelem}
display the comparison between
Na and O abundance in Carina and the
selected dSphs.
These data show that Na and O abundances in
nearby dSphs agree within the errors with abundances in
field halo stars over the entire metallicity range covered by
dSphs. The only exception is Fornax. This is the most metal-rich
system and has Na abundances (purple asterisks) that are systematically
lower by $\sim$0.3-0.5~dex than field halo stars and
the few metal-rich stars in Sculptor (cyan triangles). A similar
underabundance in Na was also found by \citet{mcwilliam13} in
RGs of the metal-rich Sagittarius dSph galaxy.
This is a metallicity regime in which Na abundances might be affected
by non-LTE effects \citep{gratton99,carretta10m54}, but the
detailed spectroscopic analysis performed by \citet{fulbright07}
among K-type
giants and FGK-type dwarfs in the Galactic disk indicates that
the non-LTE effects are weak (see also \citealt{mcwilliam13}).
Panels (c), (d), and (e) show the comparison between Mg, Si, and Ca
abundances in Carina and other nearby dwarfs. Stars in dSphs are all
enhanced in these elements and agree with each other over
the entire metallicity range. They also agree quite well with
abundances in field halo stars (dashed lines).
The only exception is, once again, Fornax, showing a well-defined
underabundance in the quoted $\alpha$ elements. There are a few
metal-rich stars in Sculptor showing mild underabundances, but the
possible difference is within 1$\sigma$.
The bottom panel (f) shows that \hbox{Ti\,{\scriptsize II}}\ abundances
in nearby dSphs are on average enhanced over the entire metallicity
range. Moreover, they agree quite well with each other and with
field halo stars. The same agreement is also found for
Fornax stars. There is weak evidence that the dispersion in \hbox{Ti\,{\scriptsize II}}\
abundances is, at fixed metal content, slightly higher in dwarfs
than in the field (see also dispersion values listed in
Table~\ref{tab_mean_compare} and \ref{tab_dwarf}).
The insight emerging from these comparisons does not
allow us to reach firm conclusions concerning the chemical enrichment
history of Carina and nearby dwarfs. Indeed, O, Mg and Na are mainly produced by
massive stars during hydrostatic burning phases, and they appear to have
similar abundances in nearby dSphs and among field halo stars.
On the other hand, the most metal-rich systems (Fornax and Sagittarius)
appear to be underabundant in these three elements.
The scenario becomes even more surprising for the explosive $\alpha$-elements,
namely Si, Ca, and Ti. Si and Ca abundances in field halo stars and
in nearby dwarfs, except for Fornax, agree quite well. Once again, metal-rich systems
show either solar or slightly underabundant Si and Ca abundances.
On the other hand, Ti abundances agree quite well over the entire metallicity
range covered by the nearby dSphs.
We performed the same comparisons with RGs in five nearby UFDs
(Bo\"otes, Ursa Major, Coma Ber, Hercules, and Leo IV) in
Fig.~\ref{ufdelem}. The results are similar to the results
found for metal-poor dSphs (see Fig.~\ref{dwarfelem} and Table~\ref{tab_dwarf}).
However, the sample of stars is still too limited to reach firm conclusions.
In conclusion, we are left with the following empirical evidence:
$\alpha$-element abundances in nearby dwarfs are similar to those of Galactic
field halo stars and globular clusters in the metal-poor regime
(\hbox{[Fe/H]}$<$--1.5). The difference is smaller on average than 1$\sigma$ (see
Tables~\ref{tab_mean_compare} and \ref{tab_dwarf}).
There is a change in the trend when moving into the more metal-rich regime
(\hbox{[Fe/H]}$>$--1.5). The Fornax \hbox{[$\alpha$/Fe]}\ abundance ratios are on average
underabundant when compared with halo stars. Sculptor appears to be
a transitional stellar system, since the \hbox{[$\alpha$/Fe]}\ abundance ratios are
slightly higher or lower than solar.
\begin{figure}
\centering
\includegraphics[trim=0.5cm 17.5cm 4cm 0.5cm,clip,width=0.5\textwidth]{dwarf2.pdf}
\caption{Same as Fig.~\ref{dwarfelem}, but for the element
combination indicated. \label{dwarfalpha}}
\end{figure}
\subsection{Hydrostatic vs. explosive}
To further investigate the difference between hydrostatic and explosive
elements, the top panel of Fig.~\ref{dwarfalpha} shows the comparison
of the sum of Mg and O between the nearby dwarfs and field halo
stars. In particular, Mg is produced in hydrostatic core C and O
burning, while Ca is a product of
explosive Si burning during the supernova type II (SN~II) explosion.
They overlap quite well until the metal-rich regime. The bottom panel
shows the comparison of the sum of the explosive $\alpha$-elements (Si,
Ca, and Ti). The agreement is quite good in the metal-poor and in the
metal-intermediate iron regimes.
The depletion of the quoted sum for Fornax stars in the metal-rich
regime is somehow mitigated by the inclusion of titanium. The
depletion might have been even stronger if we had only summed Si and Ca
abundances of Fornax stars.
Figure~\ref{mgca} shows the comparison between the abundances of two
elements, Mg and Ca, as yields of SN~II events. The
different ratios of these elements are due to the progenitor mass of the
SN~II \citep{iwamoto99}. For Carina, the ratio [Mg/Ca] shows
a weaker enhancement than in the MW stars (0.15 vs 0.03~dex, top
panel, see also Table~\ref{tab_mean_compare}), but it is well within 1$\sigma$
(0.27~dex). The same behavior is found in the comparison between individual
abundances of \hbox{[Mg/H]}\ and \hbox{[Ca/H]}\ (bottom panel).
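Since the bracket notation is logarithmic, the quantity in the top panel is simply the difference of the individual ratios, which is why the bottom panel compares \hbox{[Mg/H]}\ and \hbox{[Ca/H]}\ against the bisector:

```latex
[\mathrm{Mg/Ca}]
  = [\mathrm{Mg/H}] - [\mathrm{Ca/H}]
  = [\mathrm{Mg/Fe}] - [\mathrm{Ca/Fe}].
```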
\begin{figure}
\centering
\includegraphics[trim=0.5cm 13.7cm 4.cm .5cm,clip,width=0.5\textwidth]{mg_ca.pdf}
\caption{Top panel: the ratio [Mg/Ca] as a function of \hbox{[Fe/H]}
for Carina as compared to MW halo stars. Symbols and colors are
the same as in Fig.~\ref{mwelem}. The abundances of \hbox{[Mg/H]}\ and
\hbox{[Ca/H]}\ are compared in the bottom panel, where the dashed line
shows the bisector of the plane.
\label{mgca}}
\end{figure}
\section{Carina chemical enrichment}
\label{sec:na-o}
Data plotted in Figs.~\ref{mwelem} and \ref{mwalpha}
show that Carina's chemical
enrichment history is quite complex. Similar conclusions were also
reached by \citet{lemasle12} and \citet{venn12}, who found
evidence that the metal-poor subpopulation is less $\alpha$-enhanced
than the metal-rich one. This result was independently
supported by \citet{deboer14}, who performed a detailed star formation
history of the Carina dSph galaxy.
On the other hand, the current individual (Fig.~\ref{mwelem})
and mean (Fig.~\ref{mwalpha})
$\alpha$ abundance ratios of the two subpopulations are very
similar within
1$\sigma$. Data listed in Table~\ref{tab_mean_compare}
indicate that the difference is at most on the order of 0.1~dex.
However, \hbox{[Mg/Fe]}\ (panel (c) Fig.~\ref{mwelem}) and \hbox{[Ca/Fe]}\ (panel (e) Fig.~\ref{mwelem})
abundance ratios of the old subpopulation appear to be less
$\alpha$-enhanced than the intermediate-age subpopulation in the iron
range (--2.3$<$\hbox{[Fe/H]}$<$--1.9) they have in common (see also the top panel of
Fig.~\ref{mwalpha}). The comparison for the other $\alpha$-elements is hampered by
statistics and by the limited range in iron abundance in common between
the two subpopulations.
The mean $\alpha$-abundance ratios plotted
in the bottom panel of Fig.~\ref{mwalpha} show that the sum of Mg, Ca, and Ti does
not display any significant difference between the old and the
intermediate-age subpopulations. The main difference between the current
analysis and previous investigations available in the literature is in
the sample size. We worked with $\alpha$-element abundances for 67
stars, 46 of which belong to the intermediate-age subpopulation.
The sample discussed by \citet{lemasle12} is a factor of two smaller
(35 objects). The difference in the sample size becomes on the order of
20\%\ (55 objects) if we also include abundances on high-resolution
spectra provided by \citet{shetrone03}, \citet{koch08}, and
\citet{venn12}.
This evidence indicates that homogeneous $\alpha$-element
abundances for a sizable sample of RGB stars do not show a clear
difference between old- and intermediate-age subpopulations. The same
outcome applies to the possible occurrence of a "knee" either in the
metal-poor (\hbox{[Fe/H]}=--2.5) or in the metal-rich subpopulation (\hbox{[Fe/H]}=--1.6).
There are three (Car45, Car27, and Car19) stars in the top panel and two
(Car27 and Car19) in the bottom panel of Fig.~\ref{mwalpha} that
show less
enhanced $\alpha$ abundance ratios. However, the difference is either
within or slightly larger than 1$\sigma$.
To further constrain the chemical enrichment history of Carina,
we also investigated the (anti-)correlation between Na and
O. There is solid evidence that evolved and unevolved cluster
stars display a well-defined anticorrelation in Na--O and in Mg--Al
\citep{carretta09uves,carretta09gir,carretta14}.
We note that the environment appears to play a minor role, if any, in these
cluster star anticorrelations, and indeed, they have also been identified
in globulars belonging to LG dwarf galaxies (LMC, \citealt{mucciarelli10};
Fornax, \citealt{letarte06}).
The occurrence of light-element anticorrelations in GCs is considered
to be the consequence of deep potential wells that are able to retain the
ejecta of candidate stellar polluters, such as intermediate-mass
asymptotic giant branch stars and/or fast-rotating massive stars (see
\citealt{cassisi13book} and references therein). Nearby dwarf galaxies
typically have low central stellar densities
\citep{mateo98araa,mcconnachie12}, therefore a correlation between Na
and O is expected. However, we still lack detailed spectroscopic
investigations of nearby dSphs that are characterized by high central densities
(LeoI, Draco, Ursa Minor). Accurate light element abundances in these
systems are required before reaching firm conclusions concerning the
environmental impact on their chemical enrichment history.
The top panel of Fig.~\ref{nao} shows that Carina stars have a
(positive) correlation between Na and O. Moreover, the correlation is
quite similar to the correlation of field halo stars found by
\citet{frebel10mw}. The current data soundly support previous results
obtained by \citet{carretta10m54} and \citet{mcwilliam13} for
Sagittarius stars. The key advantage of the current comparison is that
we investigate the correlation for a system that is significantly
more metal-poor than Sagittarius ($\sim-2.0$ vs $\sim-0.6$~dex). To
define the difference with cluster stars on a more quantitative basis,
the bottom panel shows the comparison between the current sample and the
entire sample of cluster stars investigated by
\citet{carretta09uves,carretta09gir}. The difference is quite clear:
in the regime of \hbox{[O/Fe]}\ abundances in which \hbox{[Na/Fe]}\ in Galactic
globulars becomes progressively lower, Carina stars display a steady
increase in \hbox{[Na/Fe]}. Unfortunately, we cannot constrain whether the candidate
old stars show the same trend, since the Na abundance measurements for
those stars are lacking.
\begin{figure}
\centering
\includegraphics[trim=0.5cm 13cm 11.5cm 0.5cm,clip,width=0.4\textwidth]{na_o.pdf}
\caption{Comparison of Carina stars with MW halo stars (top)
and with 1958 stars of 19 Galactic globular clusters
\citep[bottom,][]{carretta09uves,carretta09gir} in the classical Na vs O diagram.
\label{nao}}
\end{figure}
\section{Summary and final remarks}
\label{sec:summary}
We have presented a new spectroscopic investigation of Carina RG stars. The
abundance analysis was focused on Na plus five $\alpha$-elements:
O, Mg, Si, Ca, and Ti. The current approach, when compared with similar
spectroscopic investigations available in the literature, has two
distinct features.\\
{\it (i)} We used spectroscopic data collected with UVES (high spectral
resolution) and with {FLAMES\slash GIRAFFE}\ (high- and medium-resolution) at the VLT.
The current spectroscopic data sets cover a significant fraction of
Carina's RGB and, for the first time, reach the red clump stars
(\hbox{\it V\/}$\sim$20.5, \hbox{\bb--\vv\/}=0.6 mag), which are reliable tracers of the intermediate-age
stellar population. We obtained accurate abundance analyses
for 44 RGs based on UVES spectra that were previously analyzed in the literature
\citep{koch08,venn12,fabrizio12}. They were supplemented with 65
(high-resolution, \citealt{lemasle12,fabrizio11}) and 802
(medium-resolution,
\citealt{koch06,fabrizio11}) {FLAMES\slash GIRAFFE}\ spectra. The
abundance analyses of 46\%\ of the former sample and 84\%\ of the
latter are discussed here for the first time.\\
{\it (ii)} We took advantage of the new photometry index \hbox{{\it c}$_{\rm U,B,I}$\/}\
introduced by \citet{monelli13,monelli14} as an age and probably
a metallicity indicator to split stars along Carina's RGB.
It is noteworthy that the main conclusion of this investigation, that is, the presence of
two subpopulations that experienced two different chemical enrichment
histories, is not affected by the intrinsic parameters affecting the \hbox{{\it c}$_{\rm U,B,I}$\/}\
index.
To improve the accuracy of the abundance analysis in the faint
magnitude limit, we devised a new data reduction strategy. The
entire {FLAMES\slash GIRAFFE}\ data set was divided into ten surface gravity and
effective temperature bins. The spectra of the stars belonging to
the same gravity and temperature bin are characterized by similar stellar
parameters and were stacked together.
This allowed us to increase the
signal-to-noise ratio in the faint magnitude limit (\hbox{\it V\/}$\ge$20.5~mag)
by at least a factor of five.
In this context we note that the spectra of the stars
belonging to the same gravity and temperature bin are quite similar
because of the modest variation in the intrinsic parameters. This translates
into an improvement in the accuracy of individual abundance estimates. Moreover,
this approach allowed us to control possible systematics (surface gravity
and effective temperature dependence of non-LTE effects) between the old
and intermediate-age stellar populations.
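The binning-and-stacking step can be sketched as follows, assuming spectra resampled onto a common wavelength grid with uncorrelated noise, so that stacking $N$ spectra improves the signal-to-noise ratio roughly as $\sqrt{N}$; the bin edges and array shapes below are illustrative only.

```python
import numpy as np

def stack_by_bin(logg, teff, spectra, g_edges, t_edges):
    """Group stars into (log g, Teff) bins and average the spectra in each bin."""
    g_idx = np.digitize(logg, g_edges)
    t_idx = np.digitize(teff, t_edges)
    stacks = {}
    for key in set(zip(g_idx, t_idx)):
        sel = (g_idx == key[0]) & (t_idx == key[1])
        # Uncorrelated noise averages down roughly as 1/sqrt(N stars in the bin).
        stacks[key] = spectra[sel].mean(axis=0)
    return stacks

# Thirty synthetic flat spectra (continuum = 1) with per-pixel noise of 0.5,
# all falling into a single (log g, Teff) bin.
rng = np.random.default_rng(4)
spectra = 1.0 + rng.normal(0.0, 0.5, size=(30, 1000))
stacks = stack_by_bin(np.full(30, 1.0), np.full(30, 4500.0),
                      spectra, g_edges=[0.5, 2.5], t_edges=[4000.0, 5000.0])
```

With a few dozen spectra per bin, this yields the factor of five quoted above.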
On the basis of these data sets, we have performed the
largest and the most homogeneous abundance analysis of the Carina dSph
galaxy. The abundances were estimated using both EWs (high-resolution
spectra) and spectrum synthesis (medium-resolution stacked spectra).
The main results of the current analysis are listed below.\\
$\bullet$ There is increasing evidence that Carina's old and
intermediate-age stellar populations display two distinct
\hbox{[Fe/H]}\ and \hbox{[Mg/H]}\ distributions. The dichotomy
is present over the entire gravity range (0.5$<$\hbox{$\log g$}$<$2.5); this
means from the tip of the RGB down to the RC stars. Specifically,
we found that the old stellar population has a mean iron abundance
of --2.15$\pm$0.06~dex ($\sigma$=0.27), while the intermediate-age population
has a mean iron abundance of --1.75$\pm$0.03~dex ($\sigma$=0.21). The two
distributions differ at the 75\%\ level. This agrees quite well with preliminary
results by \citet{monelli14} based on data available in the literature
and with \citet{lemasle12}, using a subsample of the current spectroscopic data set.
Moreover, we found that the old and intermediate-age
stellar populations have mean \hbox{[Mg/H]}\ abundances of
--1.91$\pm$0.05~dex ($\sigma$=0.22)
and of --1.35$\pm$0.03~dex ($\sigma$=0.22). They differ at the 83\%\ level.\\
$\bullet$ The individual \hbox{[$\alpha$/Fe]}\ abundances of Carina's old and intermediate-age
evolved stars are enhanced over the entire iron range.\\
$\bullet$ Carina's $\alpha$-element abundances and abundances
for Galactic halo stars agree quite well (1$\sigma$) over the entire
iron range covered by Carina stars. The same conclusion applies to the
comparison between $\alpha$-element abundances in Carina and in Galactic
and Magellanic globular clusters. However, Na and O abundances display
different trends.\\
$\bullet$ Carina's $\alpha$-element abundances also agree within
1$\sigma$ with similar abundances for LG dwarf spheroidals and
ultra-faint dwarf galaxies in the iron range we considered.\\
$\bullet$ We found evidence of a clear correlation between
Na and O abundances. Carina's correlation agrees quite well with
the typical Na--O correlation of MW halo stars. This supports
previous findings by \citet{carretta10m54} and
\citet{mcwilliam05}.
These results support the evidence of a close similarity in
the chemical enrichment history of field halo and Carina stars
\citep{idiart00}.
The stacked spectra will also allow us to investigate the abundances
of several {\it s}- and {\it r}-elements.
Of course, the data reduction strategy we devised
to stack the spectra in gravity and temperature bins opens the path
to a detailed spectroscopic investigation of the old HB stars
(\hbox{\it V\/}$\sim$21-21.5~mag), the most reliable tracers of the Carina old stellar
population.
\begin{acknowledgements}
It is a pleasure to acknowledge the anonymous referee
for her/his comments and suggestions that improved the content and
the readability of our manuscript.
M.F. acknowledges financial support from the PO FSE Abruzzo 2007-2013
through the grant "Spectro-photometric characterization of stellar populations
in Local Group dwarf galaxies" prot.89/2014/OACTe/D (PI:~S.~Cassisi).
He also thanks ESO for support as science visitor (DGDF12-42, F.~Primas).
This work was partially supported by PRIN-INAF 2011 "Tracing the formation and
evolution of the Galactic halo with VST" (P.I.: M. Marconi), by PRIN-MIUR
(2010LY5N2T) "Chemical and dynamical evolution of the Milky Way and Local Group
galaxies" (P.I.: F. Matteucci) and by the Education and Science Ministry of
Spain (grant AYA2010-16717). S.C. and R.B. warmly thank for the financial support from
PRIN-INAF 2014 "The kaleidoscope of stellar populations in Galactic Globular Clusters
with Hubble Space Telescope" (PI: S. Cassisi).
\end{acknowledgements}
\section{Introduction}
One of the main goals of theoretical condensed-matter physics is to achieve a systematic understanding of the interplay between symmetry and topology in many-body systems. The topological properties of noninteracting band insulators can be characterized by various kinds of winding numbers, such as Berry phases and Chern numbers, of Bloch wave-functions as a function of the single-particle momentum~\cite{RMP_TI,RevModPhys.83.1057}. This picture remains valid even when interactions are perturbatively taken into account~\cite{hohenadler2013correlation,XLQ2010,PhysRevB.83.085426,PhysRevB.87.121113}. However, in the nonperturbative regime one needs an alternative approach.
One possible solution to this problem can be formulated in terms of the twisted boundary condition. This is a generalization of the more standard periodic boundary condition, in which pairs of surfaces on opposite sides of the system are identified. In the twisted boundary condition, a U(1) phase factor is applied to one surface before it is identified with its pair (the more precise definition is given in Sec.~\ref{sec:tbc}). The twisted phase $e^{i\theta_i}$ ($i=1,2,\ldots,d$) can be assigned independently for each direction. It has been empirically known that the set of angles $\vec{\theta}=(\theta_1,\theta_2,\ldots,\theta_d)$ often serves as the many-body generalization of the single-particle momentum $\vec{k}=(k_1, k_2,\ldots,k_d)$. For example, the pumped charge in the Thouless pump~\cite{Thouless,NiuThouless} and the quantized Hall conductance of the quantum Hall effect~\cite{NTW,AvronSeiler} in interacting systems can be characterized by a Chern number formulated in terms of $\theta_i$ instead of $k_i$. There are also many studies defining the $\mathbb{Z}_2$ index for the many-body quantum spin Hall insulator using the twisted boundary condition~\cite{KaneMele,XLQ2006,Sheng2006,FuKane,RyuPRL,Hirano,XLQ2010,XGW2014,Chiu2016}.
There is, however, a fundamental difference between $\vec{k}$ and $\vec{\theta}$.
The single-particle momentum $\vec{k}$ can be varied over the first Brillouin zone even under a fixed boundary condition.
Thus topological invariants written in terms of Bloch wavefunctions are properly defined for each Hamiltonian. In contrast, varying $\vec{\theta}$ changes the Hamiltonian itself, implying that many-body topological invariants that involve integration(s) over $\theta_i$ are only defined for a series of Hamiltonians parametrized by $\vec{\theta}$. For example, the Hall conductance $\sigma_{12}(\vec{\theta})$ can be computed using linear response theory for each $\vec{\theta}=(\theta_1,\theta_2)$. Neither the quantization of this quantity nor its connection to the Chern number is obvious in this form. The prescription proposed by Refs.~\onlinecite{NiuThouless,NTW} is to take an average of $\sigma_{12}(\vec{\theta})$ over all possible values of $\vec{\theta}$, \emph{assuming} that the $\vec{\theta}$-dependence of $\sigma_{12}(\vec{\theta})$ is negligibly small~\footnote{Ref.~\onlinecite{NTW} proposed a wrong power-law scaling ($\partial_{\theta_1}\sigma_{12}(\vec{\theta})\sim\frac{\xi}{L_1}$) in its appendix, rather than the exponential suppression we show in this work.}. Then the resulting integral takes the form of the Chern number and the quantization to integers becomes apparent. There are several recent studies that present an alternative proof of the quantization without performing such an average~\cite{HastingsMichalakis,Koma,Bachmann}. Note that the $\theta_i$-independence has been a common assumption behind countless subsequent works~\cite{AvronYaffe,Titus2011,LiuZhao,Zeng2015,Matsugatani}.
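For orientation, the averaging prescription of Refs.~\onlinecite{NiuThouless,NTW} can be summarized as follows (conventions assumed here; $|\Psi(\vec{\theta})\rangle$ denotes the many-body ground state under the twisted boundary condition):

```latex
\bar{\sigma}_{12}
 = \frac{1}{(2\pi)^2}\int_0^{2\pi}\!d\theta_1\int_0^{2\pi}\!d\theta_2\,
   \sigma_{12}(\vec{\theta})
 = \frac{e^2}{h}\,C,
\qquad
C = \frac{1}{2\pi}\int_0^{2\pi}\!\!\int_0^{2\pi}
    \left(\partial_{\theta_1}\mathcal{A}_2
         -\partial_{\theta_2}\mathcal{A}_1\right)
    d\theta_1\,d\theta_2 \;\in\; \mathbb{Z},
```

where $\mathcal{A}_i(\vec{\theta}) = i\langle\Psi(\vec{\theta})|\partial_{\theta_i}\Psi(\vec{\theta})\rangle$ is the many-body Berry connection.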
Another prominent application of the twisted boundary condition in the context of the topology in many-body systems is the generalization of the Lieb-Schultz-Mattis theorem~\cite{Lieb1961,Affleck1986,Affleck1988,OYA1997,Yamanaka} to higher dimensions~\cite{Oshikawa2000,Hastings2004,NachtergaeleSims}. The theorem states that, in a translation invariant system with the particle-number conservation, the filling (the average number of particles per unit cell) has to be an integer in order to realize a unique ground state with a nonzero excitation gap. An immediate consequence of this theorem is that any symmetric gapped phase with a fractional filling has to develop a ``topological order," which is usually accompanied by a fractionalization of particle statistics. The first proof of the Lieb-Schultz-Mattis theorem in dimensions greater than one was given by Oshikawa~\cite{Oshikawa2000}, who interpreted the twist operator in the original one-dimensional argument~\cite{Lieb1961} as the large gauge translation operator. In the proof, he considered an adiabatic change of the twisted angle $\theta_x$ from $0$ to $2\pi$, \emph{assuming} that the excitation gap does not close in the process. The recent refinement of the Lieb-Schultz-Mattis theorem in nonsymmorphic space groups~\cite{Sid2013,PNAS,WatanabePRB} essentially relies on the same assumption. In fact, the stability of the gap against an increase of $\theta_x$ cannot, in general, be taken for granted. For example, the excitation gap in the Kitaev chain vanishes at some values of $\theta_x$~\cite{Kawabata}. Hastings then gave an alternative proof without such an assumption~\cite{Hastings2004,NachtergaeleSims}, but instead relying on a `reality condition'~\footnote{See \emph{Condition LSM6} in Sec. 1.2 of Ref.~\onlinecite{NachtergaeleSims} that corresponds to the footnote [19] in Ref~\onlinecite{Hastings2004}.}.
To summarize, the $\theta$-independence of bulk properties such as the excitation gap and the linear response coefficients has been an assumption in pioneering studies on the many-body topological invariants and the multi-dimensional Lieb-Schultz-Mattis theorem. Although there have been follow-up works that discuss alternative derivations circumventing the assumption for each problem, it is desirable to have a general and direct verification of the assumption itself, as it may lead to new applications of the twisted boundary condition. In this paper we give a general proof of the insensitivity of bulk properties to the twisted angle $\vec{\theta}$, assuming (i) the locality and the U(1) symmetry of the Hamiltonian and (ii) a non-zero excitation gap and the uniqueness of the ground state for one value of $\vec{\theta}$ (e.g., $\vec{\theta}=\vec{0}$). Our argument coherently applies to expectation values [see Eq.~\eqref{eq:statement1}], static susceptibilities, the Thouless pump and the Hall conductance, and many other bulk response properties [see Eq.~\eqref{eq:statement2}].
As a by-product, we prove the exponential decay of several new types of correlation functions [see Eqs.~\eqref{decay2}, \eqref{bounddecay3}, and \eqref{bounddecay4}].
The organization of this paper is as follows. In Sec.~\ref{sec:decay}, we summarize the general behavior of correlation functions in gapped phases. In Sec.~\ref{sec:tbc}, we review the definition of the twisted boundary condition and its understanding in terms of the magnetic flux.
With these preparations, we prove that various quantities in many-body systems do not depend on the twisted angle of the boundary condition in the limit of a large system size. We start from the expectation value of charge-conserving operators in Sec.~\ref{sec:exp}, and then move on to the static responses and topological transport properties in Sec.~\ref{sec:response}, and finally discuss the excitation gap in Sec.~\ref{sec:gap}.
Then we conclude in Sec.~\ref{sec:conclude}.
\section{Exponential decay of correlation functions}
\label{sec:decay}
\subsection{Assumptions: the locality and the gap}
Consider a quantum system in $d$ spatial dimensions. To discuss a finite-size system without a boundary, we impose the periodic boundary condition with the linear dimension $L_i$ in the $i$-th direction ($i=1,2,\ldots,d$). Suppose that the Hamiltonian $\hat{H}$ of the system is given as a sum of \emph{local} terms:
\begin{equation}
\label{eq:hamiltonian}
\hat{H}=\sum_{\vec{x}}\,\hat{H}_{\vec{x}}.
\end{equation}
We say $\hat{H}_{\vec{x}}$ is local when its range $r$ is finite and does not scale with the system size. Namely, $\hat{H}_{\vec{x}}$ does not affect the local Hilbert space at $\vec{y}$ whenever $|\vec{y}-\vec{x}|>r$~\footnote{The assumption of finite-range interactions could be relaxed to exponentially decaying or even algebraically decaying interactions. This, in turn, requires a more careful and mathematically elaborate treatment~\cite{HastingsKoma}.}. For example, the term $\hat{H}_{\vec{x}}=\sum_{\vec{y}}t_{\vec{x},\vec{y}}c_{\vec{x}}^\dagger c_{\vec{y}}+\text{h.c.}$ in the tight-binding model is local when $t_{\vec{x},\vec{y}}=0$ for $|\vec{y}-\vec{x}|>r$. The support of an operator $\hat{H}_{\vec{x}}$ is the set of $\vec{y}$ at which $\hat{H}_{\vec{x}}$ acts nontrivially. Thus, the support is a subset of the ``ball" of radius $r$ centered at $\vec{x}$. For continuum models, the sum in Eq.~\eqref{eq:hamiltonian} should be replaced by an integral.
Throughout the paper, we assume that the ground state $|0\rangle$ of $\hat{H}$ is unique and that the excitation gap $\Delta$ does not vanish in the limit of large system size. We will comment on the case with a finite ground-state degeneracy at the end of the paper. We focus on zero temperature $T=0$ and $\langle\hat{O}\rangle$ denotes the expectation value $\langle 0|\hat{O}|0\rangle$ with respect to the ground state. Furthermore, $\delta\hat{O}$ represents the fluctuation $\hat{O}-\langle \hat{O}\rangle$ and the time-evolution of an operator is defined by $\hat{O}(t)\equiv e^{i\hat{H}t}\hat{O}e^{-i\hat{H}t}$.
\subsection{The behavior of correlation functions}
Let $\hat{O}$ and $\hat{V}$ be local operators and let $R\equiv\text{dist}(\hat{O},\hat{V})$ be the minimum distance between their supports [Fig.~\ref{fig} (a)]. We assume $R>0$; in other words, the supports of $\hat{O}$ and $\hat{V}$ do not overlap. In gapped phases, it is well known, and is also rigorously proven~\cite{HastingsPRL,HastingsKoma}, that the equal-time (connected) correlation function decays exponentially with the distance:
\begin{equation}
F_0\equiv\langle\delta\hat{O}\,\delta\hat{V}\rangle,\quad |F_0| \leq C_0e^{-\frac{R}{\xi}}.\label{decay1}
\end{equation}
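As a purely illustrative numerical check (not part of the argument), Eq.~\eqref{decay1} can be verified by exact diagonalization of a small gapped system. The model, the coupling $g$, and the size $L$ in the following Python sketch are our own arbitrary choices of a gapped Hamiltonian (a transverse-field Ising ring deep in its paramagnetic phase), not anything assumed in the text:

```python
import numpy as np

# Single-site Pauli matrices
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def embed(site, pauli, L):
    """Tensor-product embedding of a single-site operator into an L-site chain."""
    out = np.array([[1.]])
    for j in range(L):
        out = np.kron(out, pauli if j == site else I2)
    return out

L, g = 8, 4.0   # ring length and transverse field; g > 1 gives a gapped phase
H = sum(-embed(j, Z, L) @ embed((j + 1) % L, Z, L) - g * embed(j, X, L)
        for j in range(L))

evals, evecs = np.linalg.eigh(H)
gs = evecs[:, 0]                       # unique gapped ground state

def connected_zz(r):
    """Connected correlator <Z_0 Z_r> - <Z_0><Z_r> in the ground state."""
    z0, zr = embed(0, Z, L), embed(r, Z, L)
    return gs @ (z0 @ zr) @ gs - (gs @ z0 @ gs) * (gs @ zr @ gs)

corrs = [abs(connected_zz(r)) for r in range(1, L // 2 + 1)]
print(corrs)   # magnitudes shrink rapidly with distance, as in Eq. (decay1)
```

The rapid decrease of `corrs` with the separation is the finite-size counterpart of the exponential bound $C_0e^{-R/\xi}$.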
In fact, a similar argument proves that the correlation function of the following form also decays exponentially:
\begin{equation}
F_{n}\equiv\langle\delta\hat{O}\frac{1}{(\hat{H}-E)^{n}}\delta\hat{V}\rangle,\quad |F_{n}|\leq C_n R^{\frac{n}{2}}e^{-\frac{R}{\xi}}\label{decay2}.
\end{equation}
Here, $n=1,2,\ldots$ is an arbitrary natural number and $E$ is the ground state energy. The proof for $F_{2}$ can be found in Ref.~\onlinecite{Koma}, although it is buried in a long, mathematically elaborate paper. In Appendix A, we present the simplest version of the proof in a way applicable to all $n$. The key tool of the proof is the Lieb-Robinson bound~\cite{LRB,HastingsReview}
\begin{equation}
\label{eq:LRB}
\|[\hat{O},\hat{V}(t)]\|\leq Ce^{-\frac{R}{\xi_0}}(e^{\frac{v|t|}{\xi_0}}-1).
\end{equation}
Here $\|\hat{O}\|\equiv\text{sup}_{|\psi\rangle, \langle\psi|\psi\rangle=1}\|\hat{O}|\psi\rangle\|$ denotes the norm of the operator $\hat{O}$, and the constants $\xi_0$ and $v$ depend on the Hamiltonian $\hat{H}$ but are independent of the choice of the operators $\hat{O}$ and $\hat{V}$. The Lieb-Robinson bound intuitively estimates the spread of the operator $\hat{V}(t)$ as time evolves. For example, at $t=0$, the right-hand side of Eq.~\eqref{eq:LRB} vanishes. This is because the operators $\hat{O}$ and $\hat{V}$ themselves commute (recall our assumption $R>0$). As time goes on, the support of the operator $\hat{V}(t)$ expands and eventually overlaps with the support of $\hat{O}$. The Lieb-Robinson bound gives an upper limit on the velocity $v$ of this spread.
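This behavior is easy to observe numerically. The following hedged sketch (again with an illustrative transverse-field Ising ring of our own choosing) evaluates $\|[\hat{O},\hat{V}(t)]\|$ for two operators at distance $L/2$: the norm vanishes at $t=0$, grows with $|t|$, and can never exceed the trivial bound $2\|\hat{O}\|\,\|\hat{V}\|$:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def embed(site, pauli, L):
    """Tensor-product embedding of a single-site operator into an L-site chain."""
    out = np.array([[1.]])
    for j in range(L):
        out = np.kron(out, pauli if j == site else I2)
    return out

L, g = 8, 2.0
H = sum(-embed(j, Z, L) @ embed((j + 1) % L, Z, L) - g * embed(j, X, L)
        for j in range(L))
evals, evecs = np.linalg.eigh(H)

def heisenberg(A, t):
    """A(t) = e^{iHt} A e^{-iHt}, computed from the spectral decomposition of H."""
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    return U.conj().T @ A @ U

O, V = embed(0, Z, L), embed(L // 2, Z, L)    # supports at distance L/2
norms = []
for t in [0.0, 0.25, 0.5, 1.0]:
    Vt = heisenberg(V, t)
    norms.append(np.linalg.norm(O @ Vt - Vt @ O, 2))  # operator (spectral) norm
print(norms)   # ~0 at t = 0, then grows, always below 2||O|| ||V|| = 2
```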
\begin{figure}
\begin{center}
\includegraphics[width=0.99\columnwidth]{fig.pdf}
\caption{\label{fig} (a) The spatial configuration of local operators $\hat{O}$, $\hat{O}'$ and $\hat{V}$. The red shades represent their support. (b) The flux $\theta$ piercing the ring. }
\end{center}
\end{figure}
The correlation length $\xi$ in Eqs.~\eqref{decay1} and \eqref{decay2} is given by $\xi\equiv\xi_0+\frac{2v}{\Delta}$, where the constants $\xi_0$ and $v$ are those appearing in Eq.~\eqref{eq:LRB}. When the gap $\Delta$ is small, the correlation length $\xi$ is dominated by $\frac{2v}{\Delta}$ and diverges in the limit of $\Delta\rightarrow+0$ as expected.
The correlation functions in Eqs.~\eqref{decay1} and \eqref{decay2} are about two operators at a distance. Let us now consider correlations involving more operators, e.g., $G_{00}\equiv\langle\delta\hat{O}\,\delta\hat{O}'\,\delta\hat{V}\rangle$. We assume that the support of $\hat{V}$ is well separated from that of $\hat{O}$ and $\hat{O}'$, while assuming nothing about the distance between the support of $\hat{O}$ and $\hat{O}'$ [Fig.~\ref{fig} (a)]. In this case, one can simply regard the product $\hat{O}\hat{O}'$ as a single operator and apply Eq.~\eqref{decay1} to get a bound $|G_{00}|\leq C_{00}e^{-\frac{R'}{\xi}}$, where $R'$ is the smaller of $\text{dist}(\hat{O},\hat{V})$ and $\text{dist}(\hat{O}',\hat{V})$. In contrast, the following correlations cannot be evaluated directly through Eqs.~\eqref{decay1} or \eqref{decay2},
\begin{eqnarray}
G_{mn}&\equiv&\langle\delta\hat{O}\frac{1}{(\hat{H}-E)^{m}}\delta\hat{O}'\frac{1}{(\hat{H}-E)^{n}}\delta\hat{V}\rangle,\label{decay3}\\
G'_{mn}&\equiv&\langle\delta\hat{O}\frac{1}{(\hat{H}-E)^{m}}\delta\hat{V}\frac{1}{(\hat{H}-E)^{n}}\delta\hat{O}'\rangle,\label{decay4}
\end{eqnarray}
because the product $\hat{O}\frac{1}{(\hat{H}-E)^{m}}\hat{O}'$ ($m=1,2,\ldots$) is not necessarily local even when $\hat{O}$ and $\hat{O}'$ are~\footnote{This is because $(\hat{H}-E)^{-m}$ is not necessarily a sum of local operators. This might be understood easily by recalling that, even if a matrix $M$ is almost diagonal, $M^{-1}$ can be far from diagonal.}. Nevertheless, we can prove (see Appendix B)
\begin{eqnarray}
|G_{mn}|&\leq& C_{mn} {R'}^{\frac{n+m}{2}}e^{-\frac{R'}{\xi'}},\label{bounddecay3}\\
|G_{mn}'|&\leq& C_{mn}' {R'}^{\frac{n+m+1}{2}}e^{-\frac{R'}{\xi'}},\label{bounddecay4}
\end{eqnarray}
where $\xi'\equiv\xi_0+\frac{4v}{\Delta}$ and $R'$ is defined above Eq.~\eqref{decay3}.
\subsection{Perturbation at distance}
The properties of correlation functions summarized above have many valuable implications that do not seem to have been fully explored. As a simple example, let us show that a perturbation at a long distance never affects the expectation value of a local operator. We consider a Hamiltonian $\hat{H}(h)=\hat{H}-h\hat{V}$ with a local perturbation $\hat{V}$. Let $|h\rangle$ be the unique ground state of $\hat{H}(h)$. Then, differentiating the defining equation $\hat{H}(h)|h\rangle=E(h)|h\rangle$, one gets
\begin{equation}
\hat{Q}(h)\partial_h|h\rangle=-\frac{1}{\hat{H}(h)-E(h)}\delta\hat{V}|h\rangle,\label{derivative}
\end{equation}
where $\hat{Q}(h)\equiv1-|h\rangle\langle h|$ is the projection onto excited states. For the expectation value $O(h)\equiv\langle h|\hat{O}|h\rangle$ of a local Hermitian operator $\hat{O}$, the derivative $\partial_hO(h)$ is thus given in the form of $F_1$:
\begin{equation}
\partial_hO(h)=-\langle h|\delta\hat{O}\frac{1}{\hat{H}(h)-E(h)}\delta\hat{V}|h\rangle+\text{c.c.},\label{susceptibility}
\end{equation}
which is exponentially small when $\hat{O}$ and $\hat{V}$ are well-separated, as suggested by Eq.~\eqref{decay2}:
\begin{equation}
|\partial_hO(h)|\leq C \sqrt{R}e^{-\frac{R}{\xi}}
\end{equation}
where $C$ is a constant and $R$ is the distance between $\hat{O}$ and $\hat{V}$.
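A hedged numerical illustration of this statement (our own toy setup, not part of the proof): in a gapped transverse-field Ising ring, the shift of $\langle \hat{Z}_0\rangle$ caused by a local field $-h\hat{Z}_j$ is dramatically smaller when the perturbed site $j$ is far from site $0$:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def embed(site, pauli, L):
    """Tensor-product embedding of a single-site operator into an L-site chain."""
    out = np.array([[1.]])
    for j in range(L):
        out = np.kron(out, pauli if j == site else I2)
    return out

L, g, h = 8, 4.0, 0.2
H0 = sum(-embed(j, Z, L) @ embed((j + 1) % L, Z, L) - g * embed(j, X, L)
         for j in range(L))

def gs_expval(H, A):
    """Ground-state expectation value <0|A|0>."""
    _, evecs = np.linalg.eigh(H)
    v = evecs[:, 0]
    return v @ A @ v

O = embed(0, Z, L)
O0 = gs_expval(H0, O)
# Perturb with a local field -h Z_j at a neighboring site and at the antipode
shift_near = abs(gs_expval(H0 - h * embed(1, Z, L), O) - O0)
shift_far  = abs(gs_expval(H0 - h * embed(L // 2, Z, L), O) - O0)
print(shift_near, shift_far)   # the far perturbation barely moves <Z_0>
```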
\section{Twisted boundary condition and U(1) symmetry}
\label{sec:tbc}
As a preparation for discussing more nontrivial applications of the exponential decay of correlation functions, in this section we review the basics of the twisted boundary condition and its connection to magnetic flux.
\subsection{Twisted boundary condition}
\label{subsec:tbc}
Suppose that the Hamiltonian $\hat{H}=\int d^dx\,\hat{H}_{\vec{x}}$ is written in terms of the creation (annihilation) operator $\hat{c}_{\vec{x}}^\dagger$ ($\hat{c}_{\vec{x}}$). The total number operator $\hat{N}\equiv\int d^dx\,\hat{n}_{\vec{x}}$ is the integral of the number density operator $\hat{n}_{\vec{x}}\equiv\hat{c}_{\vec{x}}^\dagger \hat{c}_{\vec{x}}$ and the global U(1) phase rotation is described by $e^{i\phi\hat{N}}$.
Let $\hat{T}_{\vec{v}}$ be the operator that describes the translation by $\vec{v}$ and let $\hat{x}_i$ be the unit vector along the $i$-th axis ($i=1,2,\ldots,d$) of the Cartesian coordinate. Recall that the periodic boundary condition is set by identifying two surfaces $x_i=0$ and $x_i=L_i$. In other words, we identify the translation operator $\hat{T}_{L_i\hat{x}_i}$ as the identity operator:
\begin{equation}
\hat{T}_{L_i\hat{x}_i}=1.
\end{equation}
The extension to the twisted boundary condition can be done simply by setting instead the product of the translation operator $\hat{T}_{L_i\hat{x}_i}$ and the phase rotation operator $e^{i\theta_i\hat{N}}$ as the identity:
\begin{equation}
\label{eq:tbc}
\hat{T}_{L_i\hat{x}_i}e^{i\theta_i\hat{N}}=1.
\end{equation}
Under this identification, the creation operator $\hat{c}_{\vec{x}}^\dagger$, for example, satisfies
\begin{equation}
\hat{c}_{\vec{x}+L_i\hat{x}_i}^\dagger=e^{-i\theta_i}\hat{c}_{\vec{x}}^\dagger
\end{equation}
for every $i=1,2,\ldots,d$. We denote by $\hat{H}(\vec{\theta})$ the resulting Hamiltonian written in terms of operators $\hat{c}_{\vec{x}}^\dagger$ and $\hat{c}_{\vec{x}}$ in the range $\vec{x}\in[0,L_1)\times[0,L_2)\times\ldots\times[0,L_d)$.
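As a minimal (noninteracting, single-particle) illustration of this definition, consider one particle hopping on a ring: the twist can be placed entirely on the bond crossing the boundary, and the exact spectrum is obtained from plane waves with the shifted momenta $k_n=(2\pi n+\theta)/L$. The hopping amplitude and the parameter values below are our own illustrative choices:

```python
import numpy as np

L, t, theta = 12, 1.0, 0.7          # illustrative ring size, hopping, twist
H = np.zeros((L, L), dtype=complex)
for x in range(L - 1):              # bulk hoppings
    H[x, x + 1] = H[x + 1, x] = -t
# the boundary bond carries the twist phase e^{i theta}
H[L - 1, 0] = -t * np.exp(1j * theta)
H[0, L - 1] = -t * np.exp(-1j * theta)

numeric = np.sort(np.linalg.eigvalsh(H))
# Plane waves with shifted momenta k_n = (2*pi*n + theta)/L diagonalize H
analytic = np.sort([-2 * t * np.cos((2 * np.pi * n + theta) / L)
                    for n in range(L)])
err = np.max(np.abs(numeric - analytic))
print(err)   # agreement to machine precision
```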
\subsection{U(1) symmetry and magnetic flux}
\label{subsec:u1}
There is a distinct but equivalent view of $\theta_i$ in terms of the magnetic flux when the Hamiltonian has the global U(1) symmetry. Let us start with the Hamiltonian under the periodic boundary condition $\hat{H}(\vec{0})$.
Let us consider a unitary operator $\hat{U}_\chi\equiv e^{i\int d^dx\,\chi(\vec{x})\hat{n}_{\vec{x}}}$ that multiplies $\hat{c}_{\vec{x}}^\dagger$ by a position-dependent phase $e^{i\chi(\vec{x})}$. Here, $\chi(\vec{x})$ is an arbitrary piecewise smooth function of $\vec{x}$, and the Hamiltonian is not necessarily invariant under such a local U(1) rotation. We introduce a \emph{non-dynamical} gauge field $\vec{A}(\vec{x})$ in such a way that (i) $\hat{H}[\vec{A}]=\int d^dx\,\hat{H}_{\vec{x}}[\vec{A}]$ transforms as
\begin{equation}
\hat{U}_\chi\hat{H}_{\vec{x}}[\vec{A}]\hat{U}_\chi^\dagger=\hat{H}_{\vec{x}}[\vec{A}'],\quad \vec{A}'(\vec{x})\equiv \vec{A}(\vec{x})-\partial_{\vec{x}}\chi(\vec{x})\label{Htrans}
\end{equation}
and (ii) $\hat{H}[\vec{A}]$ reduces to $\hat{H}(\vec{0})$ when $\vec{A}(\vec{x})=\vec{0}$. We can always introduce $\vec{A}$ with this property as long as the Hamiltonian $\hat{H}(\vec{0})$ has the global U(1) symmetry (i.e., commutes with the number operator $\hat{N}$). The simplest example of $\hat{H}_{\vec{x}}[\vec{A}]$ may be
\begin{equation}
\hat{H}_{\vec{x}}[\vec{A}]=\hat{c}_{\vec{x}}^\dagger\left[-\tfrac{1}{2m}(\partial_{\vec{x}}+i\vec{A}(\vec{x}))^2+U(\vec{x})\right]\hat{c}_{\vec{x}}+\hat{H}_{\vec{x}}^{\text{int}},
\end{equation}
where $U(\vec{x})$ is the single-particle potential and $\hat{H}_{\vec{x}}^{\text{int}}$ describes the many-body interactions. A bad example would be the (mean-field) BCS Hamiltonian, which lacks the U(1) symmetry due to the presence of terms proportional to $\hat{c}\hat{c}$ or $\hat{c}^\dagger \hat{c}^\dagger$. In this case, there is no way to introduce $\vec{A}(\vec{x})$ satisfying Eq.~\eqref{Htrans}.
We realize the ``magnetic flux" $\theta_i\equiv\int_0^{L_i} dx_i\, A_i(\vec{x})$ by choosing a position-independent vector potential
\begin{equation}
\vec{A}(\vec{x})=(\tfrac{\theta_1}{L_1},\tfrac{\theta_2}{L_2},\ldots,\tfrac{\theta_d}{L_d}).
\end{equation}
We write the resulting Hamiltonian as $\hat{H}'(\vec{\theta})=\hat{H}[\vec{A}]$. Note that we did not actually apply any real ``magnetic field" to the system. The magnetic flux $\theta_i$ is pierced through the hole of the ``ring" formed by the $x_i$ axis under the boundary condition identifying $x_i=L_i$ and $x_i=0$. See Fig.~\ref{fig} (b) for the illustration in the case of $d=1$.
\subsection{Equivalence of $\hat{H}(\vec{\theta})$ and $\hat{H}'(\vec{\theta})$}
The Hamiltonian $\hat{H}(\vec{\theta})$ under the twisted boundary condition in Sec.~\ref{subsec:tbc} and the Hamiltonian $\hat{H}'(\vec{\theta})$ under the magnetic flux in Sec.~\ref{subsec:u1} are, in fact, unitarily equivalent to each other. Therefore, they describe physically the same system; in particular, their spectra and the properties of their correlation functions are the same. The two Hamiltonians are related by $\hat{U}_{\chi}$ with $\chi(\vec{x})=\sum_{i=1}^d\theta_i\frac{x_i}{L_i}$:
\begin{equation}
\hat{U}_{\chi}\hat{H}'(\vec{\theta})\hat{U}_{\chi}^\dagger=\hat{H}(\vec{\theta}).
\end{equation}
Note that the function $\chi(\vec{x})$ is discontinuous at the boundary jumping from $\theta_i$ at $x_i=L_i$ to $0$ at $x_i=0$. Using Eq.~\eqref{Htrans}, we find that
\begin{equation}
A_i'(\vec{x})=\tfrac{\theta_i}{L_i}-\partial_{x_i}\chi(\vec{x})=\theta_i\delta(x_i),
\end{equation}
where the $\delta$-function originates from the discontinuity of $\chi$ at the boundary. This means that the Hamiltonian $\hat{H}(\vec{\theta})$ under the twisted boundary condition can be interpreted as the Hamiltonian subjected to a $\delta$-function-type vector potential localized at the boundary. This also clarifies that we can freely move the position of the $\delta$-function peak in the system by performing a proper gauge transformation. This is actually what we do in the following sections [e.g., see Eq.~\eqref{gaugey}].
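The unitary equivalence is exact and can be checked directly in the simplest noninteracting setting (a one-particle hopping ring; all parameters below are our own illustrative choices). The diagonal matrix $U_\chi=\text{diag}(e^{i\theta x/L})$ maps the uniform-flux Hamiltonian onto the boundary-twist one, so their spectra coincide identically:

```python
import numpy as np

L, t, theta = 12, 1.0, 1.3
# Boundary-twist gauge: the phase sits entirely on the bond crossing the cut
H_twist = np.zeros((L, L), dtype=complex)
for x in range(L - 1):
    H_twist[x, x + 1] = H_twist[x + 1, x] = -t
H_twist[L - 1, 0] = -t * np.exp(1j * theta)
H_twist[0, L - 1] = -t * np.exp(-1j * theta)

# Uniform-flux gauge: every bond carries the Peierls phase e^{i theta/L}
H_flux = np.zeros((L, L), dtype=complex)
for x in range(L):
    H_flux[x, (x + 1) % L] = -t * np.exp(1j * theta / L)
    H_flux[(x + 1) % L, x] = -t * np.exp(-1j * theta / L)

# The gauge transformation U_chi = diag(e^{i chi(x)}) with chi(x) = theta*x/L
# maps one Hamiltonian exactly onto the other
U = np.diag(np.exp(1j * theta * np.arange(L) / L))
gauge_diff = np.max(np.abs(U @ H_flux @ U.conj().T - H_twist))
spec_diff = np.max(np.abs(np.sort(np.linalg.eigvalsh(H_twist))
                          - np.sort(np.linalg.eigvalsh(H_flux))))
print(gauge_diff, spec_diff)   # both vanish to machine precision
```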
\section{Insensitivity of expectation values}
\label{sec:exp}
With these preparations, let us now demonstrate that the expectation values of a wide class of operators do not depend on $\vec{\theta}$ in the limit of large $L_i$.
To simplify the notation, here we focus on 1D systems (and thus drop the subscript ``1"). This is actually sufficient to prove the same claim in higher dimensions, since we can apply the 1D argument to each direction separately.
Let us consider an operator $\hat{O}=\int_0^{L}dx\,\hat{O}_x$ that is given as an integral of local terms $\hat{O}_x$ and commutes with $\hat{N}$.
We can then introduce $A$ so that $\hat{O}[A]=\int_0^{L}dx\,\hat{O}_x[A]$ transforms in the same way as $\hat{H}[A]$ does in Eq.~\eqref{Htrans}. The operator $\hat{O}$ can be the Hamiltonian $\hat{H}$ itself, but it may also be, for example, the polarization operator $\hat{P}=\int_0^{L}dx\,x\hat{n}_x$ or the current operator.
Now we choose the uniform vector potential $A(x)=\frac{\theta}{L}$. We denote the unique ground state of $\hat{H}'(\theta)=\hat{H}[\frac{\theta}{L}]$ by $|\theta\rangle$. Our claim is that the $\theta$-dependence of the expectation value
\begin{equation}
O(\theta)\equiv\langle\theta|\hat{O}[\tfrac{\theta}{L}]|\theta\rangle=\int_0^{L} dx\langle\theta|\hat{O}_x[\tfrac{\theta}{L}]|\theta\rangle\label{expectation}
\end{equation}
is suppressed for a large $L$ by a factor $L^{3/2}e^{-\frac{L}{2\xi}}$. When $\hat{O}=\hat{H}$, the statement is the flatness of the ground state energy $E_\theta$ as a function of $\theta$, which was numerically observed before, e.g., in Ref.~\onlinecite{Misguich}. Later we will also argue that the excitation gap is independent of $\theta$ in the limit of large $L$.
To prove the claim, let us define a function of $x$ labeled by $y\in[0,L]$. It reads
\begin{equation}
\label{gaugey}
\chi_{y}(x)=
\begin{cases}
\tfrac{\theta}{L}x& (0\leq x<y)\\
\tfrac{\theta}{L}(x-L)& (y\leq x<L).
\end{cases}
\end{equation}
The corresponding unitary operator $\hat{U}_{\chi_{y}}=e^{i\int_0^{L}dx\,\chi_{y}(x)\hat{n}_x}$ induces the gauge transformation
\begin{equation}
A(x)=\tfrac{\theta}{L}\,\,\,\rightarrow\,\,\,A_{y}(x)\equiv \theta\delta(x-y).\label{Gauge}
\end{equation}
In this gauge, one can say $\theta$ is the U(1) phase of the twisted boundary condition at the new boundary $x=y$.
The key observation is that, thanks to the assumed locality, \emph{$\hat{O}_x[A_y]$ is independent of $\theta$} and thus is identical to $\hat{O}_x[0]$ when $y$ is out of the range of $\hat{O}_x$. Namely, if we denote by $r$ the maximum range of $\hat{O}_x$ over all $x\in[0,L]$, then we have
\begin{equation}
\hat{O}_x[A_y]=\hat{O}_x[0]\quad\text{if}\quad|y-x|>r.\label{eq:trick}
\end{equation}
For example, in the case of $\hat{O}_x[A]=t\hat{c}_{x+r}^\dagger e^{-i\int_{x}^{x+r}dz A(z)}\hat{c}_x$,
\begin{equation}
\hat{O}_x[A_y]=t\hat{c}_{x+r}^\dagger e^{-i\theta\int_{x}^{x+r}dz \delta(z-y)}\hat{c}_x=t\hat{c}_{x+r}^\dagger\hat{c}_x
\end{equation}
is independent of $\theta$ as long as $|y-x|>r$. It follows that the local terms of the Hamiltonian $\hat{H}_x[A_{y}]$ do not depend on $\theta$ either unless $y$ is within the range of $\hat{H}_x$.
Inserting $\hat{U}_{\chi_{y}}^\dagger\hat{U}_{\chi_{y}}=1$ to the last expression in Eq.~\eqref{expectation} and writing $|\theta_{y}\rangle\equiv\hat{U}_{\chi_{y}}|\theta\rangle$, we get
\begin{equation}
O(\theta)=\int_0^{L}dx\,\langle\theta_{y}|\hat{O}_x[A_y]|\theta_{y}\rangle.
\end{equation}
Note that the value of $y$ here is arbitrary and can be chosen \emph{depending on $x$}. Thus we can freely set $y$ to be far away from $x$ so that $\hat{O}_x[A_y]=\hat{O}_x[0]$ [Fig.~\ref{fig} (b)]. For example, take the opposite point of $x$ on the ring with $|x-y|=\frac{L}{2}$:
\begin{equation}
O(\theta)=\int_0^{L}dx\,\langle\theta_{y}|\hat{O}_x[0]|\theta_{y}\rangle,\quad |x-y|=\tfrac{L}{2}>r.
\end{equation}
Then, intuitively, the twisted angle $\theta$ does not affect the expectation value $\langle\theta_{y}|\hat{O}_x[0]|\theta_{y}\rangle$ since $|\theta_{y}\rangle=\hat{U}_{\chi_{y}}|\theta\rangle$ is the ground state of $\hat{H}[A_{y}]$ twisted only near $y$, far away from $x$.
In fact, using Eq.~\eqref{derivative} for $h=\theta$, we can express $\partial_\theta O(\theta)$ in the form of $F_1$:
\begin{eqnarray}
\label{eq:step}
\partial_\theta O(\theta)=&&-\int_0^{L} dx\Big(\langle\theta_{y}|\delta\hat{O}_x[0]\frac{1}{\hat{H}[A_{y}]-E_\theta}\delta\hat{J}[A_{y}]|\theta_{y}\rangle\notag\\
&&\quad\quad+\langle\theta_{y}|\delta\hat{J}[A_{y}]\frac{1}{\hat{H}[A_{y}]-E_\theta}\delta\hat{O}_x[0]|\theta_{y}\rangle\Big),
\end{eqnarray}
where
\begin{equation}
\hat{J}[A_{y}]\equiv \partial_\theta\hat{H}[A_{y}]
\end{equation}
is the local current operator at $y$. Therefore, one can apply Eq.~\eqref{decay2} for $R=\frac{L}{2}$ to the integrand and get the desired bound
\begin{equation}
|\partial_\theta O(\theta)|< C L^{3/2}e^{-\frac{L}{2\xi}}
\end{equation}
with a constant $C$. In higher dimensions, the same argument leads to
\begin{equation}
\label{eq:statement1}
|\partial_{\theta_i} O(\vec{\theta})|< C V L_i^{1/2}e^{-\frac{L_i}{2\xi}}
\end{equation}
for each direction $i=1,2,\dots,d$. Here, $V=L_1\cdots L_d$ is the volume of the system, which originates from the integral in Eq.~\eqref{eq:step}.
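The flatness of the ground state energy $E_\theta$ discussed above can be illustrated even in a minimal free-fermion example. The following hedged sketch (a half-filled dimerized ring with a finite single-particle gap; the hopping amplitudes and system sizes are our own arbitrary choices) shows $|E_\pi-E_0|$ shrinking rapidly with the system size, consistent with the exponential suppression in Eq.~\eqref{eq:statement1}:

```python
import numpy as np

def ground_energy(ncell, theta, t1=1.0, t2=0.3):
    """Half-filled ground-state energy of a dimerized ring with a boundary twist."""
    n = 2 * ncell                          # number of sites
    H = np.zeros((n, n), dtype=complex)
    for x in range(n - 1):
        hop = t1 if x % 2 == 0 else t2     # alternating strong/weak bonds
        H[x, x + 1] = H[x + 1, x] = -hop
    H[n - 1, 0] = -t2 * np.exp(1j * theta)   # twisted boundary bond
    H[0, n - 1] = -t2 * np.exp(-1j * theta)
    e = np.linalg.eigvalsh(H)
    return e[e < 0].sum()                  # fill all negative-energy levels

sizes = [4, 8, 12, 16]
diffs = [abs(ground_energy(N, np.pi) - ground_energy(N, 0.0)) for N in sizes]
print(diffs)   # shrinks rapidly (exponentially) with the number of cells
```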
\section{Insensitivity of bulk responses}
\label{sec:response}
Let us move on to the discussion of $\vec{\theta}$-independence of bulk responses. Specifically, we will focus on the class of responses that can be characterized by the correlation function of the form
\begin{equation}
G_n(\theta)=\langle\theta|\delta\hat{O}[\tfrac{\theta}{L}]\frac{1}{(\hat{H}[\tfrac{\theta}{L}]-E_\theta)^{n}}\delta\hat{O}'[\tfrac{\theta}{L}]|\theta\rangle.
\end{equation}
For example, the \emph{static} susceptibility, in general, takes the form of $G_1(\theta)$, as demonstrated in Sec.~\ref{sec:decay} [see Eq.~\eqref{susceptibility}]. The simplest instance is the static magnetic susceptibility, corresponding to the choice $\hat{O}=\hat{O}'=\hat{S}_z$. As we will now see, the correlation $G_2(\theta)$ is related to topological transport.
\subsection{Thouless pump}
When the Hamiltonian has an adiabatic and periodic time dependence, the phenomenon known as the Thouless pump takes place, and a certain amount of charge is transported through the system over each cycle. According to Ref.~\onlinecite{NiuThouless}, the pumped charge of a weakly time-dependent Hamiltonian over one cycle $T$ is given by
\begin{equation}
\Delta Q(\theta)=i\int_{0}^Tdt (\partial_t\langle\theta|\partial_\theta|\theta\rangle-\partial_\theta\langle\theta|\partial_t|\theta\rangle).\label{ThoulessPump}
\end{equation}
Here $|\theta\rangle$ is the ground state of the snapshot Hamiltonian $\hat{H}[\frac{\theta}{L}]$. Using Eq.~\eqref{derivative}, we can rewrite $\Delta Q(\theta)$ in the form of $G_2(\theta)$:
\begin{eqnarray}
\Delta Q(\theta)=&&i\int_{0}^Tdt\Big(\langle\theta|\delta(\partial_t\hat{H}[\tfrac{\theta}{L}])\frac{1}{(\hat{H}[\tfrac{\theta}{L}]-E_\theta)^{2}}\delta\hat{J}[\tfrac{\theta}{L}]|\theta\rangle\notag\\
&&\quad-\langle\theta|\delta\hat{J}[\tfrac{\theta}{L}]\frac{1}{(\hat{H}[\tfrac{\theta}{L}]-E_\theta)^{2}}\delta(\partial_t\hat{H}[\tfrac{\theta}{L}])|\theta\rangle
\Big).\label{ThoulessPump2}
\end{eqnarray}
Note that the $\theta$-integral is missing in Eqs.~\eqref{ThoulessPump} and \eqref{ThoulessPump2}.
\subsection{Hall conductance}
The Hall conductance can be formulated in a similar manner. Following Refs.~\onlinecite{NTW,AvronSeiler}, let us introduce the constant vector potential $\vec{A}(x,y)=(\frac{\theta_1}{L_1},\frac{\theta_2}{L_2})$. If we denote by $|\vec{\theta}\rangle$ the ground state of $\hat{H}[\vec{A}]$, the Hall conductance is given by~\cite{NTW,AvronSeiler}
\begin{equation}
\sigma_{12}(\vec{\theta})=\frac{e^2}{h}2\pi i(\partial_{\theta_2}\langle\vec{\theta}|\partial_{\theta_1}|\vec{\theta}\rangle-\partial_{\theta_1}\langle\vec{\theta}|\partial_{\theta_2}|\vec{\theta}\rangle),\label{Hall}
\end{equation}
which can be written in the form of $G_2(\vec{\theta})$ using Eq.~\eqref{derivative}:
\begin{eqnarray}
\sigma_{12}(\vec{\theta})=&&\frac{e^2}{h}2\pi i\Big(\langle\vec{\theta}|\delta\hat{J}_2[\vec{A}]\frac{1}{(\hat{H}[\vec{A}]-E_{\vec{\theta}})^{2}}\delta\hat{J}_1[\vec{A}]|\vec{\theta}\rangle\notag\\
&&\quad-\langle\vec{\theta}|\delta\hat{J}_1[\vec{A}]\frac{1}{(\hat{H}[\vec{A}]-E_{\vec{\theta}})^{2}}\delta\hat{J}_2[\vec{A}]|\vec{\theta}\rangle
\Big).\label{Hall2}
\end{eqnarray}
Again, the $\theta_{1,2}$ integrals are missing in Eqs.~\eqref{Hall} and \eqref{Hall2}, although they are key to identifying this quantity as the Chern number. As we will show in Sec.~\ref{sec:proofGn}, $G_n(\theta)$ is almost independent of $\theta$ in a large system. Thus one can approximate $\sigma_{12}(\vec{\theta})$ by its average $\bar{\sigma}_{12}$~\cite{NTW}:
\begin{eqnarray}
\bar{\sigma}_{12}&\equiv&\int_0^{2\pi}\frac{d\theta_1}{2\pi} \int_0^{2\pi}\frac{d\theta_2}{2\pi} \sigma_{12}(\vec{\theta})=\frac{e^2}{h}C,\\
C&\equiv&\int\frac{d^2\theta}{2\pi}i(\partial_{\theta_2}\langle\vec{\theta}|\partial_{\theta_1}|\vec{\theta}\rangle-\partial_{\theta_1}\langle\vec{\theta}|\partial_{\theta_2}|\vec{\theta}\rangle).
\end{eqnarray}
The connection to the Chern number is now evident~\cite{NTW}. We can perform the same trick to $\Delta Q(\theta)$ and relate it to a Chern number in the $t$-$\theta$ space~\cite{NiuThouless}.
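The integer quantization of such a Chern number can be illustrated numerically in the simplest single-particle analogue, where the roles of $(\theta_1,\theta_2)$ are played by the Bloch momenta. The sketch below (our own illustration; the two-band Qi-Wu-Zhang model and its parameters are not part of this paper's argument) uses the standard gauge-invariant lattice discretization of the Berry curvature and returns an integer in the topological phase and zero in the trivial one:

```python
import numpy as np

def lower_band(kx, ky, m):
    """Lower-band Bloch eigenvector of the two-band Qi-Wu-Zhang model."""
    dx, dy, dz = np.sin(kx), np.sin(ky), m + np.cos(kx) + np.cos(ky)
    h = np.array([[dz, dx - 1j * dy], [dx + 1j * dy, -dz]])
    _, v = np.linalg.eigh(h)
    return v[:, 0]                       # eigh sorts ascending: lower band first

def chern_number(m, N=24):
    """Lattice Chern number from Berry phases around Brillouin-zone plaquettes."""
    ks = 2 * np.pi * np.arange(N) / N
    u = [[lower_band(kx, ky, m) for ky in ks] for kx in ks]
    total = 0.0
    for i in range(N):
        for j in range(N):
            u1, u2 = u[i][j], u[(i + 1) % N][j]
            u3, u4 = u[(i + 1) % N][(j + 1) % N], u[i][(j + 1) % N]
            # gauge-invariant Berry phase around one plaquette
            total += np.angle(np.vdot(u1, u2) * np.vdot(u2, u3)
                              * np.vdot(u3, u4) * np.vdot(u4, u1))
    return total / (2 * np.pi)

c_topo = chern_number(1.0)   # topological phase of the model
c_triv = chern_number(3.0)   # trivial phase
print(c_topo, c_triv)
```

By construction, the sum of plaquette phases is $2\pi$ times an integer, which is why the output is quantized up to rounding errors.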
\subsection{Insensitivity of $G_n(\theta)$}
\label{sec:proofGn}
Motivated by these examples, let us now prove that the $\theta$-dependence of $G_n(\theta)$ is exponentially suppressed for a large system by a factor $L^{3+\frac{n}{2}}e^{-\frac{L}{4\xi'}}$ with $\xi'\equiv\xi_0+\frac{4v}{\Delta}$. Our proof proceeds in the same way as that for $O(\theta)$. Again we focus on one dimension.
We first write $G_n(\theta)$ in terms of the integral of local operators
\begin{equation}
G_n(\theta)=\int dxdx'\langle\theta|\delta\hat{O}_x[\tfrac{\theta}{L}]\frac{1}{(\hat{H}[\tfrac{\theta}{L}]-E_\theta)^{n}}\delta\hat{O}_{x'}'[\tfrac{\theta}{L}]|\theta\rangle
\end{equation}
and then insert $\hat{U}_{\chi_{y}}^\dagger\hat{U}_{\chi_{y}}=1$:
\begin{equation}
G_n(\theta)=\int dxdx'\langle\theta_y|\delta\hat{O}_x[0]\frac{1}{(\hat{H}[A_y]-E_\theta)^{n}}\delta\hat{O}_{x'}'[0]|\theta_y\rangle.\label{Gnp}
\end{equation}
In Eq.~\eqref{Gnp}, we have chosen $y\in[0,L]$ to be out of the range of $\hat{O}_x$, $\hat{O}_{x'}'$ as illustrated in Fig.~\ref{fig} (b) and used Eq.~\eqref{eq:trick}. In fact, for every $x,x'\in [0,L]$, we can always find $y$ on the ring such that $|x-y|\geq \frac{L}{4}$ and $|x'-y|\geq \frac{L}{4}$. Again using Eq.~\eqref{derivative}, we can express $\partial_\theta G_n(\theta)$ in terms of $G_{m,\ell}$ and $G_{m,\ell}'$ with $m+\ell=n+1$:
\begin{widetext}
\begin{eqnarray}
\partial_\theta G_n(\theta)=-\int_0^{L}dx\int_0^{L}dx'&&\Big(\sum_{m=1}^{n}\langle\theta_y|\delta\hat{O}_x[0]\frac{1}{(\hat{H}[A_y]-E_\theta)^{m}}\delta\hat{J}[A_{y}]\frac{1}{(\hat{H}[A_y]-E_\theta)^{n-m+1}}\delta\hat{O}_{x'}'[0]|\theta_y\rangle\notag\\
&&\quad\quad\quad\quad+\langle\theta_y|\delta\hat{O}_x[0]\frac{1}{(\hat{H}[A_y]-E_\theta)^n}\delta\hat{O}_{x'}'[0]\frac{1}{\hat{H}[A_y]-E_\theta}\delta\hat{J}[A_{y}]|\theta_y\rangle\notag\\
&&\quad\quad\quad\quad+\langle\theta_y|\delta\hat{J}[A_y]\frac{1}{\hat{H}[A_y]-E_\theta}\delta\hat{O}_x[0]\frac{1}{(\hat{H}[A_y]-E_\theta)^n}\delta\hat{O}_{x'}'[0]|\theta_y\rangle\Big).
\end{eqnarray}
\end{widetext}
Thus one can use Eqs.~\eqref{bounddecay3} and \eqref{bounddecay4} with $R'=\frac{L}{4}$ to get the stated bound,
\begin{equation}
|\partial_\theta G_n(\theta)|< C L^{3+\frac{n}{2}}e^{-\frac{L}{4\xi'}}
\end{equation}
with a constant $C$. In higher dimensions, the same argument suggests
\begin{equation}
\label{eq:statement2}
|\partial_{\theta_i} G_n(\vec{\theta})|< C V^2L_i^{1+\frac{n}{2}}e^{-\frac{L_i}{4\xi'}}.
\end{equation}
\section{Excitation energy}
\label{sec:gap}
So far we have only investigated the ground state properties. Here let us discuss what we can say about excitations.
\subsection{Energy expectation value of variational state}
Let us consider an operator $\hat{O}$ of the form $\hat{O}=\int d^d\vec{x}\,\hat{O}_{\vec{x}}$ with local operators $\hat{O}_{\vec{x}}$.
We construct a variational state $|O\rangle=\delta\hat{O}|0\rangle$, which is orthogonal to the ground state by definition. Its energy expectation value measured from the ground state energy is given by
\begin{eqnarray}
\Delta_O&\equiv&\frac{\langle O|\hat{H}|O\rangle}{\langle O|O\rangle}-E=\frac{\langle \delta\hat{O}^\dagger[\hat{H},\delta\hat{O}]\rangle}{\langle \delta\hat{O}^\dagger \delta\hat{O}\rangle}\notag\\
&=&\frac{\int d^d\vec{x}d^d\vec{y}\langle \delta\hat{O}_{\vec{x}}^\dagger [\hat{H},\delta\hat{O}_{\vec{y}}]\rangle}{\int d^d\vec{x}d^d\vec{y}\langle \delta\hat{O}_{\vec{x}}^\dagger \delta\hat{O}_{\vec{y}}\rangle}.\label{energy}
\end{eqnarray}
The denominator is proportional to the system size $V=L_1L_2\ldots L_d$ because of the exponential decay of the correlation function.
Similarly, the numerator is also proportional to $V$ since the commutator $[\hat{H},\delta\hat{O}_{\vec{y}}]$ is still local owing to the locality of the Hamiltonian. Therefore, the energy expectation value $\Delta_O$ can be at most $O(V^0)$~\footnote{In order to achieve higher energy states whose excitation energy grows as $O(V^\epsilon)$ with $\epsilon>0$, one needs a non-local operation rather than simply superposing local perturbations. In fact, when $\hat{O}=\hat{O}_1\hat{O}_2$ is a product of two well-separated local operators, we have $\Delta_O\simeq \Delta_{O_1}+\Delta_{O_2}$ and the correction decays exponentially with their distance. This implies that one can get a higher-energy state by creating many local excitations simultaneously.}.
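The variational character of $\Delta_O$ in Eq.~\eqref{energy} is easy to verify numerically: since $|O\rangle$ is orthogonal to the ground state, $\Delta_O$ can never fall below the true gap. The model and operator in the following sketch are our own illustrative choices (a gapped transverse-field Ising ring with $\hat{O}=\sum_j\hat{Z}_j$):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def embed(site, pauli, L):
    """Tensor-product embedding of a single-site operator into an L-site chain."""
    out = np.array([[1.]])
    for j in range(L):
        out = np.kron(out, pauli if j == site else I2)
    return out

L, g = 8, 2.0
H = sum(-embed(j, Z, L) @ embed((j + 1) % L, Z, L) - g * embed(j, X, L)
        for j in range(L))
evals, evecs = np.linalg.eigh(H)
gs, E0 = evecs[:, 0], evals[0]
gap = evals[1] - E0                        # true excitation gap

O = sum(embed(j, Z, L) for j in range(L))  # a sum of local operators
dO = O - (gs @ O @ gs) * np.eye(2 ** L)    # delta O = O - <O>
trial = dO @ gs                            # |O> = delta O |0>, orthogonal to |0>
delta_O = (trial @ H @ trial) / (trial @ trial) - E0
print(gap, delta_O)   # the variational energy lies at or above the true gap
```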
We show that the excitation energy of locally excited states is almost independent of the flux $\theta$. To this end,
suppose that the Hamiltonian $\hat{H}[A]$ has a U(1) symmetry satisfying Eq.~\eqref{Htrans}.
We assume the form $\hat{O}[A]=\int d^d\vec{x}\,\hat{O}_{\vec{x}}[A]$ with local operators $\hat{O}_{\vec{x}}[A]$ obeying Eq.~\eqref{Htrans}. We set $A(x)=\frac{\theta}{L}$ and construct the variational state $|O[\frac{\theta}{L}]\rangle=\delta\hat{O}[\frac{\theta}{L}]|\theta\rangle$. Now, note that the last expression of Eq.~\eqref{energy} is written in terms of the expectation value of local operators. Thus we can apply the result in Sec.~\ref{sec:exp}. Therefore, the derivative $\partial_\theta\Delta_{O[\frac{\theta}{L}]}$ is bounded by $F_1$ in Eq.~\eqref{decay2} with $R=\frac{L}{4}$.
\subsection{Insensitivity of excitation gap}
Now let us discuss the $\theta$-dependence of the true excitation gap.
More precisely, here $\Delta_\theta$ denotes the gap to the first excited state $|1\rangle_\theta$ in the same sector of the conserved U(1) charge. We assume that there exists a local operator $\hat{O}_0$ such that the state $\hat{O}_0|0\rangle$ has a nonzero overlap with $|1\rangle$, i.e., $|\langle1|\hat{O}_0|0\rangle|^2=w>0$. (The weight $w$ can be proportional to $L^{-\alpha}$ with $\alpha\geq0$. The expectation value of the excitation energy $\Delta_{O_0}$ can be much larger than $\Delta$.) Then, by applying the energy filter~\cite{Verstraete}, one can construct a low-energy local operator $\hat{O}$ from $\hat{O}_0$ such that, for any $\epsilon>0$, (i) the excitation energy $\Delta_O$ satisfies $\Delta\leq\Delta_O\leq\Delta(1+\epsilon)+\delta$, where $\delta=\frac{\tilde{C}}{w}(\tilde{R}/\xi_0)^\ell e^{-\epsilon\tilde{R}/\tilde{\xi}}$ is an exponentially small correction with some power $\ell$ and $\tilde{\xi}\equiv\frac{\sqrt{2}v}{\Delta}+\epsilon \xi_0$ and (ii) the support $\Omega$ of $\hat{O}$ is finite and includes the support of $\hat{O}_0$ inside. Here, $\tilde{R}=\text{dist}(\partial\Omega,\hat{O}_0)$ denotes the minimum distance between the boundary of $\Omega$ and the support of $\hat{O}_0$~\cite{Verstraete,Koma}. We reproduce the derivation in Appendix C.
Using this operator $\hat{O}$, we prove that $\Delta_\theta$ does not depend much on $\theta$ for a large system size. Our argument is a proof by contradiction.
Suppose that the gap becomes smaller at $\theta=\theta_0$ ($0<\theta_0<2\pi$) than the value $\Delta_0$ at $\theta=0$. Namely, there exists $\xi$ ($0<\xi<1$) such that
\begin{equation}
\Delta_{\theta_0}=\xi\Delta_{0}.\label{eq:assumption}
\end{equation}
By setting $\epsilon=\frac{1-\xi}{2\xi}$ and $\tilde{R}=\frac{L}{2}$, for example, we can construct a local operator $\hat{O}[\frac{\theta_0}{L}]$ such that
\begin{equation}
\Delta_{\theta_0}\leq \Delta_{O[\frac{\theta_0}{L}]}\leq\Delta_{\theta_0}(1+\epsilon)+\delta=\tfrac{1+\xi}{2}\Delta_{0}+\delta.
\end{equation}
Since $\Delta_{O[\frac{\theta}{L}]}$ does not depend much on $\theta$ as proven above, this in turn implies that
\begin{equation}
\Delta_{O[0]}<\tfrac{1+\xi}{2}\Delta_{0}+\delta+\delta'
\end{equation}
with another exponentially small correction $\delta'$. We can make $\delta+\delta'$ smaller than $\frac{1-\xi}{2}\Delta_{0}$ by choosing a sufficiently large $L$ so that
\begin{equation}
\Delta_{O[0]}<\Delta_{0}.
\end{equation}
This is a contradiction, since the energy expectation value of a variational state can never be smaller than the real excitation energy.
Therefore, the assumption in Eq.~\eqref{eq:assumption} must be wrong and $\Delta_{\theta_0}$ cannot be smaller than $\Delta_0$ by any finite amount. In fact, the excitation gap $\Delta$ can depend on $\theta$ at most by an exponentially small amount with respect to the system size.
This, in particular, indicates that the excitation energy $\Delta_\theta$ never vanishes if $\Delta_0$ is finite in the limit of large system size. This corollary completes, with one remaining assumption on the existence of the local operator $\hat{O}$, the proof of the higher-dimensional Lieb-Schultz-Mattis theorem by Oshikawa~\cite{Oshikawa2000}, without assuming the reality of the Hamiltonian.
\section{Concluding remarks}
\label{sec:conclude}
We demonstrated the $\theta$-independence of static responses among other things. In fact, one can replace $\hat{H}-E$ in Eqs.~\eqref{decay2}--\eqref{decay4} by $\hat{H}-E-\omega$ as long as $\omega<\Delta$, which simply gives the ``effective gap'' $\Delta-\omega$. Therefore, the dynamical susceptibility with a frequency lower than $\Delta$ can be covered by the method developed in this work.
The $\theta$-dependence of the ground state energy is related to the transport properties: the first derivative $\partial_\theta E_\theta$ represents the persistent current and the second derivative gives the Drude weight via the Kohn formula $D=\frac{\pi L^2}{V}\partial_\theta^2 E_\theta$~\cite{Kohn,Scalapino,OshikawaDrude}.
Our argument for expectation values and response properties proves that both of them are exponentially small with the system size in U(1) symmetric gapped phases.
In the derivation we assumed the uniqueness of the ground state. However, similar statements should hold even when a finite (quasi-)degeneracy originates from spontaneous breaking of discrete symmetries or the presence of topological orders~\cite{Koma}. Let us denote by $\{|0_\alpha\rangle\}_{\alpha=1}^q$ the $q$-fold (quasi-)degenerate ground states. In general, off-diagonal matrix elements $\langle0_\alpha|\hat{O}|0_\beta\rangle$ ($\alpha\neq\beta$) are expected to be exponentially small with the system size as long as the operator $\hat{O}=\sum_{\vec{x}}\hat{O}_{\vec{x}}$ is a sum (or integral) of local operators. They should be proportional to $e^{-\frac{V}{\xi^d}}$ in phases with discrete symmetry breaking and $e^{-\frac{L}{\xi}}$ for topologically ordered phases. Assuming this scaling, the degenerate case does not seem fundamentally different, but we will leave the concrete analysis to future work.
\begin{acknowledgments}
H. W. thanks Tohru Koma for fruitful discussions and for explaining Ref.~\onlinecite{Koma} in detail. This work is supported by JSPS KAKENHI Grant Number JP17K17678.
\end{acknowledgments}
\section{Introduction}\label{sec:introduction}
%
%
%
\IEEEPARstart{I}{n} recent years there has been a growing interest in studying the motion of large groups of objects, both in two and in three dimensions: animals, humans, automotive vehicles, cells and microorganisms in field or laboratory experiments, as well as tracer particles in turbulent fluid flows~\cite{adrian1991arfm,luthi2005jfm,ouellette2006eif,cullen1965ab,dahmen1984prsb,pomeroy1992auk,malik1993eif,doh2000eif,willneff2002istpdrm,betke2009iccv,wu2009Aiccv,wu2011oe,wu2011cvpr,liu2012eccv,ardekani2012jrs}. This kind of study requires tracking, the automated process of following in space and time individual objects using visual information from single- or multi-camera video sequences.
%
%
\begin{figure*
\includegraphics[width=1.99\columnwidth]{fig1}
\caption{\small Input and output data of our tracking algorithm. Top row: examples of original images extracted from the video sequences taken during four field experiments of flocking birds and swarming insects (from left to right, experimental events $E3$, $E6$, $E14$, and $E15$, respectively. See Section~\ref{sec:expdata} and Table~\ref{tab:exp} therein). The original images are slightly cropped and enhanced for the sake of readability. Bottom row: the 3D reconstruction of the full trajectories for each experimental event.}
\label{fig1}
\end{figure*}
%
%
Sometimes experimental data contains information on the objects' features, so that for example color- or pattern-matching strategies can be exploited to simplify the problem. This, however, is not our case: we focus here on the three-dimensional tracking problem using stereometric information only. Examples of input and output data of our tracking algorithm are shown in Fig.~\ref{fig1}.
There are two main reasons why tracking is hard. First, when the average inter-object distance in the images is small compared to the average displacement of the objects between two consecutive frames, ambiguities arise when identifying individual objects in time. This is easily solved using cameras with a sufficiently high temporal resolution.
A second and far more serious difficulty arises when the average inter-object distance in the images is small compared to the optical size of the objects, making optical occlusions highly likely. Each time an ambiguity due to an occlusion occurs, there is a high probability that the tracked trajectories of the objects involved are interrupted.
These interruptions are a minor problem when estimating velocity fields, but the situation becomes more problematic when we use the velocities to infer the inter-individual interactions within a group of animals. Several interruptions at any given time frame are equivalent to missing some of the individuals, which potentially biases the inferred interaction. The problem is even more serious when we measure observables that depend on the \textit{entire} individual trajectories such as diffusion properties~\cite{cavagna2013prslb} or the kinematics of turning~\cite{attanasi2014information} in collective animal behavior. Even in turbulence studies, the lack of complete trajectories can introduce serious statistical biases on some physical observables~\cite{biferale2008pof}.
In fact, interruptions are the best-case scenario when many optical occlusions occur. The worst case is the introduction of non-existent trajectories that mix the identities of two different objects, which is especially problematic for physical and biological analyses.
%
%
\subsection{Literature survey}\label{sec:literature}
%
%
Tracking algorithms differ according to how they exploit the information available in the images.
In the last thirty years, several algorithms have been developed in the field of fluid dynamics~\cite{adrian1991arfm,luthi2005jfm,ouellette2006eif}. In particular, the algorithm by Ouellette~\textit{et~al.}~\cite{ouellette2006eif}, in case of ambiguities, optimizes the solution for all tracked particles locally in time. In the field of collective animal behavior, different algorithms have been developed to reconstruct the 3D-positions of individual animals in groups~\cite{cullen1965ab,dahmen1984prsb,pomeroy1992auk}.
The technical shortcomings and limited results of these initial studies have been the catalyst for subsequent empirical investigations on collective animal phenomena, and led to the development of tracking algorithms tested on various animals such as fruit flies~\cite{straw2010multi,liu2012eccv,zou2009Aiccv}, mosquitoes~\cite{butail20113d, butail2012reconstructing}, bees~\cite{veeraraghavan2006motion}, bats~\cite{betke2009iccv,wu2011cvpr,wu2009tracking}, and fish~\cite{butail20103d}.
A recent significant breakthrough in the field is represented by the Multiple Hypothesis Tracking (MHT) approach~\cite{reid1979algorithm}, which finds objects correspondences across multiple views through an NP-hard multidimensional assignment problem.
MHT methods based on global optimization over the space of the tracked objects and/or time have been implemented with greedy approaches. Betke~\textit{et~al.}~\cite{betke2009iccv} developed an algorithm based on a multi-dimensional assignment problem solved with a greedy approximation. Zou~\textit{et~al.}~\cite{zou2009Aiccv} implemented a tracking algorithm which uses a global correspondence selection scheme, and applies Gibbs sampling locally in time to reduce the complexity of the algorithm. H.S. Wu~\textit{et~al.}~\cite{wu2009Biccv,wu2011oe} implemented a different algorithm based on three linear assignment problems, making use of ghost objects to partially solve the problem of short-term optical occlusions.
Liu~\textit{et~al.}~\cite{liu2012eccv} proposed a very efficient particle-filter algorithm able to exploit weak visual information to distinguish the identities of the tracked objects. More interesting is the approach proposed by Z. Wu~\textit{et~al.}~\cite{wu2011cvpr}, who recognized the importance of a global optimization over the full temporal sequence and over all the tracked objects, posing the problem in the form of a weighted set-cover.
Our efforts focus on 3D tracking of large groups of featureless objects for long temporal sequences, and the long-term optical occlusions typical of our experimental data need to be addressed in a different way. Therefore we aim at the globally optimal solution of the problem, and not at efficient and fast ways to approximate it with greedy approaches. The bottleneck of this strategy is the computational complexity, which grows exponentially with the duration of the acquisition.
%
%
\subsection{Our tracking approach}\label{sec:ourtrackingapproach}
%
%
We propose here a novel Global and Recursive Tracking Algorithm (GReTA), an approach which dramatically reduces the computational complexity of the global optimization problem thanks to a recursive divide and conquer strategy. Within this new framework, we can optimize the solution globally over longer temporal sequences. In order to preserve the global scope, we introduce a way to extend the temporal horizon over which ambiguous choices are made within the divide and conquer scheme.
Thanks to this method, we are able to resolve optical occlusions lasting up to dozens of consecutive frames, and therefore to distinguish the identities of the tracked objects without creating interruptions, even when the optical density in the images is very large.
The reconstructed trajectories have negligible fragmentation even in the presence of large optical density and frequent occlusions.
We validate our approach using synthetic data as ground-truth, and we test its potential by applying it to original experimental field data of flocking birds and swarming insects.
The rest of the paper is organized as follows. Section 2 explains the algorithm. In Section 3 we analyze the problem complexity. Sections 4 and 5 report the validation of our algorithm with synthetic and experimental field data, respectively. Section 6 presents a comparison with prior works, and Section 7 draws the conclusions.
%
%
%
%
\section{Methods}\label{sec:methods}
%
%
%
The main idea of our method is that when an occlusion occurs, we assign multiple temporal links and we use these links to create all possible paths running through the occlusion. Many of these paths will be non-physical, but certainly the paths corresponding to the real objects will also be there. Then, the information from all cameras is assembled and the selection of the physically meaningful paths, namely those that optimize multi-camera coherence, is performed globally in space (over the tracked objects) and in time (over the temporal sequence), making use of a recursive divide and conquer scheme.
%
%
\subsection{The basic steps of a tracking system}\label{sec:trackingtasks}
%
%
\begin{figure*
\includegraphics[width=1.99\columnwidth]{fig2}
\caption{\small Scheme illustrating the main steps of the full tracking system (image processing and tracking algorithms). A small crop of the original images extracted from the video sequences of three synchronized cameras are shown on the first row. The segmented images are shown on the second row using blue color for the object borders, and red for their centers of mass. On the third row, we show one example of a (trifocal) stereoscopic link connecting the views of the same object in the three cameras, and one example of temporal link connecting the same object between subsequent frames in each camera sequence. In the fourth row, we show a crop of the temporal graphs for each camera view, which represents a useful visualization of the full set of temporal links assigned for each camera.
The figures show only a small crop representing a few objects for $9$ time frames, and we indicated with grey color a cluster of paths stereoscopically linked across the three views. The fifth row illustrates the 2D paths reconstructed in the image space of each camera, obtained by simple propagation of the temporal links. The algorithm outputs all possible 2D paths at this step. Global optimization is used to match the correct 2D paths between the camera views, as shown on the sixth and last row, from which we finally obtain the 3D trajectory using standard stereoscopic geometry.}
\label{fig2}
\end{figure*}
The goal is to track individual objects in time while reconstructing their positions in 3D space. We use stereoscopic video-sequences of the target objects acquired via a synchronized and calibrated three-camera system. The data gathering procedures we used in our experiments are described in Appendix~A.
\\\\
\textbf{Image segmentation.} The first step of a tracking algorithm is the detection of the objects in the images, done by image segmentation, see Fig.~\ref{fig2}, first and second rows. Several approaches may be used to perform the segmentation, and the choice strongly depends on the type of objects. Our approach to image segmentation is not an essential part of the tracking system we propose, so we leave its description to Appendix~B.
\\\\
\textbf{Stereoscopic linking.} The second step is to compute the stereoscopic linking of the detected objects, which consists of matching the individual objects across the images acquired by different cameras at the same time, see Fig.~\ref{fig2}, third row. We assign multiple stereoscopic links between the object images as seen by three cameras using standard trifocal geometry~\cite{hartley2003book}. The details of the linking method are not essential here, and we describe the exact procedure in Appendix~C.
\\\\
\textbf{Temporal linking.} The third step of our algorithm is to assign multiple temporal links for each object, which consists of matching individual objects from one frame to the next one, as shown on the same third row of Fig.~\ref{fig2}. We use different prediction strategies according to the specific data we process. The precise details of the linking methods are not essential, and the exact procedures are described in Appendix~C.
\\\\
\textbf{Tracking.} Recent global optimization approaches, as the one we propose here, rely on the assumption that objects may be linked to several other objects (multi-linking instead of one-to-one linking), and the global optimization is performed over the space of these links to select the matches corresponding to real 3D trajectories. In the following sections, we explain the method and we present the formalisms of our tracking approach.
%
%
\subsection{Multi-path branching}\label{sec:multipath}
%
%
\begin{figure*
\includegraphics[width=0.99\textwidth]{fig3}
\caption{\small Scheme illustrating a real case of confined data. Two objects $A$ and $B$ occlude each other for two frames in the left view, while they always appear as separate objects in the right view. During the first iteration of the recursive divide and conquer approach, the event is divided into three intervals, $I_1$, $I_2$, and $I_3$. In the first and in the third intervals, there are no tracking ambiguities. The propagation of the temporal links results only in the two correct 2D paths, $A$ and $B$, in each camera. We define the cost of a pair of 2D paths as the sum of the costs of the links between them, i.e. the number of missing stereoscopic links. The global optimization selects the correct matches, $(A,A)$ and $(B,B)$.
In the second interval, there are still no ambiguities in the right view: the only 2D paths, $A$ and $B$, are correct. Instead, in the left view, we propagate the temporal links and we create $4$ 2D paths; the two correct ones (green lines) $AA$ and $BB$, and the two wrong ones (red lines) $AB$ and $BA$. Two of the possible set-cover solutions $\Gamma$ are: the correct one $G\equiv\{(AA,A), (BB,B)\}$ with a cost equal to $0$, and the wrong one $\{(AB,A), (BA,B)\}$ with a cost equal to $4$. Here, the global optimization is essential to select the correct solution.
All the matched 2D paths are then analyzed at the second iteration as meta-objects. At this iteration, there are no occlusions. Propagating the temporal links, we create two 2D paths in each camera view, and the tracking problem is correctly solved for the entire duration of the event.}
\label{fig3}
\end{figure*}
%
Let us consider the example shown in Fig.~\ref{fig3}, which illustrates a partial temporal sequence of two objects $A$ and $B$ as seen by two cameras. The two objects overlap in the image of the left camera for a few frames. Most prior work would assign only one temporal link to each detected object, so the points of occlusion belong to only one uninterrupted trajectory while the second recovered trajectory is broken, resulting in fragmented trajectories. Furthermore, assigning temporal links using information that is purely local has the drawback that the identities of occluding objects might be lost.
\\\\
To tackle these concerns, we use a path branching approach with global optimization. For the example in Fig.~\ref{fig3}, we assign multiple temporal links and create all possible paths running through the occlusion in the left camera view. In this case, there are four paths, of which two are real ($AA$ and $BB$) and two have hybrid object identities ($AB$ and $BA$). In order to build the set of all possible paths through the segmented objects, the temporal links are propagated for each camera view to build the temporal graph of each camera.
We then have to solve the problem of how to select the correct paths in the 2D graph of each camera and match them across cameras. The advantage of our approach is that at an early stage each object can have more than one path, which is what is needed to handle occlusions.
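As a concrete sketch of this branching step, the following Python fragment (our own illustration; node names are hypothetical) propagates multiple temporal links through the occlusion of Fig.~\ref{fig3} and enumerates all resulting 2D paths:

```python
def all_paths(links, start_nodes):
    """Enumerate every 2D path obtained by propagating the (possibly
    multiple) temporal links of one camera view.

    links: dict mapping a detected 2D object to the list of objects it
    is temporally linked to in the next frame; a missing entry
    terminates the path.
    """
    paths = []

    def extend(path):
        successors = links.get(path[-1], [])
        if not successors:
            paths.append(path)  # path reached the end of the interval
        else:
            for nxt in successors:  # branch on every temporal link
                extend(path + [nxt])

    for node in start_nodes:
        extend([node])
    return paths

# Occlusion of Fig. 3: A1 and B1 both link to the merged blob O,
# which in turn links to both A2 and B2 -> 4 candidate paths.
occlusion_links = {"A1": ["O"], "B1": ["O"], "O": ["A2", "B2"]}
paths = all_paths(occlusion_links, ["A1", "B1"])
```

The correct paths ($AA$, $BB$) and the hybrid ones ($AB$, $BA$) are all kept at this stage; the global optimization of Sec.~\ref{sec:globalopt} selects among them.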
%
%
\subsection{Global optimization}\label{sec:globalopt}
%
%
The selection of the correct matches between the 2D paths across cameras is the core of the tracking problem. We create all the possible 2D paths propagating the temporal links in the image space of each camera, while we choose how to match them using stereoscopic links. The assumption here is that the 2D paths representing the same 3D object are strongly linked stereoscopically. On the contrary, 2D paths corresponding to different 3D objects are loosely linked stereoscopically. We define a measure of the stereoscopic quality of each match, \textit{i.e.} a cost function, and we use a global optimization approach to retrieve the set of the correct matches. This is an NP-hard multidimensional assignment problem, and we solve it using Integer Linear Programming (ILP) techniques in order to find the globally optimal solution~\cite{fisher2004lagrangian}.
\\\\
\textbf{Definition of trajectory.} Consider a system of three cameras, and denote a trajectory $\gamma$ as a triplet of matched 2D paths, $\gamma=(\gamma_1,\gamma_2,\gamma_3)$. Each 2D path $\gamma_i$ represents a temporal sequence of 2D objects detected in the images of the $i$-th camera, and connected by temporal links. Moreover the triplet $(\gamma_1, \gamma_2, \gamma_3)$ is stereoscopically linked for at least one frame. Let $\Gamma$ be the set of all the possible trajectories. The goal is to find the correct subset of trajectories $G\subseteq\Gamma$, see Fig.~\ref{fig3}.
\\\\
\textbf{Cost function.} We evaluate the quality of each subset $\widehat{\Gamma}\subseteq\Gamma$ by defining a cost function $C(\widehat{\Gamma})$. Let $C(\widehat{\Gamma})=\sum_{\gamma\in\widehat{\Gamma}}c(\gamma)$, where $c(\gamma)$ is a cost associated to the trajectory $\gamma$ and based on the stereoscopic coherence (the higher the quality, the lower the cost), see Fig.~\ref{fig3}.
Let us formally define the cost function. Considering a three-camera system, for each $\gamma\in\Gamma$ and at each instant of time, the cost function $c(\gamma(t))=c(\gamma_1(t),\gamma_2(t),\gamma_3(t))$ is defined as the trifocal distance~\cite{hartley2003book} in the case of matched triplets, and as the epipolar distance~\cite{hartley2003book} in the case of pairs (corresponding to a missed detection in one camera). Whenever the cost exceeds a threshold value $c_{max}$, or in the case of absence of a stereoscopic link, we set $c=c_{max}$. The cost of a trajectory $\gamma$ is then defined as the temporal average:
\begin{equation}
c(\gamma)=\frac{\sum_{t\in T_{\gamma}}c(\gamma(t))}{|T_{\gamma}|},
\label{eqn:costfunction}
\end{equation}
where $T_\gamma$ is the set of time frames, $t$, for which $c(\gamma(t))$ is defined.
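A minimal sketch of the cost in Eq.~\ref{eqn:costfunction}, assuming the per-frame trifocal/epipolar distances have already been computed (data layout and names are ours, not from the paper):

```python
def trajectory_cost(frame_costs, c_max):
    """Temporal average of the per-frame stereoscopic cost.

    frame_costs maps each frame t in T_gamma to the trifocal (triplet)
    or epipolar (pair) distance, or to None when the stereoscopic link
    is absent at t; such frames, and any cost above c_max, are clipped
    to c_max as in the text.
    """
    clipped = [c_max if c is None else min(c, c_max)
               for c in frame_costs.values()]
    return sum(clipped) / len(clipped)
```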
%
\subsubsection{Formalization of the tracking problem}\label{sec:formalism}
%
Let us distinguish between two different types of input data.
\begin{description}
\item[{\it Confined data}:]\hspace{30pt} the objects are in the common field-of-view of the camera system at least for a short temporal sequence. Each segmented object belongs to at least one trajectory, and the solution of the tracking problem is a cover for the set of all the objects.
\item[{\it Non-confined data}:]\hspace{48pt} one or more objects never appear in the common field-of-view of the camera system, but they are seen by one camera only. Therefore they are far from the objects of interest in three dimensions, and they should not be matched as they do not belong to any trajectory. A typical example is represented by pollen particles passing in front of one camera only, and appearing as large blurred objects. The problem becomes more complex, and the covering condition needs to be relaxed to exclude these objects.
\end{description}
\noindent\textbf{Confined data, joint weighted set-cover.} When applied to confined data, the global optimization approach is equivalent to a joint weighted set-cover (as in~\cite{wu2011cvpr}). The tracking problem can be formulated as:
\begin{equation}
c(\Gamma_{opt})=\min_{\lbrace x\rbrace}\sum_{\gamma\in\Gamma}c(\gamma)x_{\gamma}~,
\label{eqn:minimum}
\end{equation}
with the constraint:
\begin{equation}
\forall p~,~~\sum_{\gamma\in\Gamma_p}x_{\gamma}\geq1~,
\label{eqn:constraint}
\end{equation}
where $x_\gamma$ is a boolean variable associated to $\gamma$, $p$ is a 2D object in the image space of a camera, and $\Gamma_p$ is the set of all trajectories passing by $p$. The retrieved set $\Gamma_{opt}\equiv\{\gamma~|~x_{\gamma}=1\}$ covers with the best weight the full set of segmented objects.
It can be proven that, under suitable conditions, the global optimization approach finds the correct solution. Indeed, in the case of confined data, when all the correct temporal and stereoscopic links are known and when some particular ambiguities are forbidden (for a formal definition, see Appendix~D), the correct solution of the tracking problem is the only set-cover minimizing the cost defined by Eq.~\ref{eqn:minimum} with the constraint in Eq.~\ref{eqn:constraint}. We refer the reader to Theorem~1 in Appendix~D for the exact list of hypotheses under which this statement holds, together with its proof.
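To make Eqs.~\ref{eqn:minimum}--\ref{eqn:constraint} concrete, here is a deliberately naive exhaustive solver applied to a toy instance modeled on Fig.~\ref{fig3} (the actual implementation uses ILP; object names and costs below are illustrative):

```python
from itertools import combinations

def weighted_set_cover(trajectories, objects):
    """Minimum-cost subset of trajectories covering every 2D object.

    trajectories: dict name -> (cost, set of 2D objects it covers).
    Exhaustive search, exponential in the number of trajectories;
    for illustration only.
    """
    names = list(trajectories)
    best, best_cost = None, float("inf")
    for r in range(1, len(names) + 1):
        for subset in combinations(names, r):
            covered = set().union(*(trajectories[n][1] for n in subset))
            if covered >= objects:  # covering constraint, Eq. (3)
                cost = sum(trajectories[n][0] for n in subset)
                if cost < best_cost:
                    best, best_cost = set(subset), cost
    return best, best_cost

# Fig. 3, interval I2: left-view paths AA, BB, AB, BA through the
# occlusion point O, matched to right-view paths A and B.
trajs = {
    "AA|A": (0.0, {"A1", "O", "A2", "rA"}),
    "BB|B": (0.0, {"B1", "O", "B2", "rB"}),
    "AB|A": (2.0, {"A1", "O", "B2", "rA"}),
    "BA|B": (2.0, {"B1", "O", "A2", "rB"}),
}
objects = {"A1", "A2", "B1", "B2", "O", "rA", "rB"}
solution, cost = weighted_set_cover(trajs, objects)
```

The optimum is the correct cover $\{(AA,A),(BB,B)\}$ with cost $0$; the hybrid cover $\{(AB,A),(BA,B)\}$ costs $4$ and is rejected.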
\\\\
\textbf{Non-confined data, relaxed joint weighted set-cover.} In the case of non confined data, not all the segmented objects can be tracked. We need to discard those objects not appearing in all the three cameras, therefore lacking stereoscopic correspondence, because they do not belong to the group of interest. To this aim, we need to relax the covering constraint in Eq.~\ref{eqn:constraint}. Let us introduce for each detected object $p$ a new boolean variable $y_p$. The relaxed joint weighted set-cover problem is then formalized as:
\begin{equation}
\min_{\lbrace x,y\rbrace}\left[\sum_{\gamma\in\Gamma}c(\gamma)x_{\gamma}+\frac{\lambda}{T}\sum_{p}\left(1-y_p\right)\right]~,
\label{eqn:relaxedminimum}
\end{equation}
where $T$ is the event duration, and with the constraint:
\begin{equation}
\forall p~,~~\sum_{\gamma\in\Gamma_p}x_{\gamma}\geq y_p~.
\label{eqn:relaxedconstraint}
\end{equation}
The contribution of a discarded object, for which $y_p=0$, to the global cost of the solution is $\lambda/T$. Assigning to $\lambda$ a value lower than the highest cost assigned to the stereoscopic links (the threshold value $c_{max}$), we manage to exclude from the solution the objects detected only in one camera view. We experimentally choose $\lambda=0.9c_{max}$. See the example sketched in Fig.~\ref{fig4}.
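The relaxed cover of Eqs.~\ref{eqn:relaxedminimum}--\ref{eqn:relaxedconstraint} can be illustrated the same way on a toy instance modeled on Fig.~\ref{fig4}; the costs and the value of $\lambda/T$ below are our assumptions:

```python
from itertools import chain, combinations

def relaxed_set_cover(trajectories, objects, penalty):
    """Relaxed joint weighted set-cover: each uncovered 2D object
    (y_p = 0) contributes `penalty` = lambda/T to the global cost, so
    objects seen by one camera only can be left out of the solution.
    Exhaustive search, for illustration only.
    """
    names = list(trajectories)
    best, best_cost = None, float("inf")
    for subset in chain.from_iterable(
            combinations(names, r) for r in range(len(names) + 1)):
        covered = set().union(set(), *(trajectories[n][1] for n in subset))
        cost = (sum(trajectories[n][0] for n in subset)
                + penalty * len(objects - covered))
        if cost < best_cost:
            best, best_cost = set(subset), cost
    return best, best_cost

# Fig. 4: tracked object A and a pollen grain B occlude in the left
# view (shared blob O); only A is visible in the right view (rA).
trajs = {
    "AA|A": (0.0, {"A1", "O", "A2", "rA"}),
    "BB|A": (4.0, {"B1", "O", "B2", "rA"}),
    "AB|A": (2.0, {"A1", "O", "B2", "rA"}),
    "BA|A": (2.0, {"B1", "O", "A2", "rA"}),
}
objects = {"A1", "A2", "B1", "B2", "O", "rA"}
solution, cost = relaxed_set_cover(trajs, objects, penalty=0.9)
```

With the penalty below the cost of a missing stereoscopic link, the optimum keeps only the true trajectory $(AA,A)$ and discards the pollen objects $B1$ and $B2$.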
\begin{figure}[!b]
\centering
\includegraphics[width=0.75\columnwidth]{fig4}
\caption{\small Scheme illustrating a real case of non-confined data, i.e. data corrupted by the images of objects in one camera view which do not belong to the group of interest and do not appear in the common field-of-view of all cameras (e.g., insects or birds flying in front of one camera only, or a pollen particle passing in front of a camera lens). A tracked object $A$ and a pollen particle $B$ occlude each other in the left view, while only $A$ is visible in the right view. There are only two possible set-cover solutions, both characterized by the same cost equal to $4$: $\{(AA,A), (BB,A)\}$ and $\{(AB,A), (BA,A)\}$. Both solutions would produce wrong trajectories, and the algorithm would fail to find the correct solution. Relaxing the set-cover constraint, the correct solution is found as the trajectory $(AA,A)$, while the objects belonging only to the 2D path $BB$ in the left view are discarded.}
\label{fig4}
\end{figure}
%
In our implementation, the optimization problem is solved using linear programming, for which we use the library in~\cite{cplex1994}.
%
%
%
\subsection{Recursive divide and conquer}\label{sec:recursion}
%
%
The computational complexity of the global optimization problem strongly limits the size of the datasets that can be processed. In order to reduce the complexity, the full temporal sequence can be divided into shorter intervals over which smaller optimization problems can be solved. A well-known method used to join the subtrajectories constructed within limited time windows is the sliding window approach~\cite{wu2009tracking}.
This approach matches the subtrajectories of the first interval with the ones of the second one, then the ones of the second with the ones of the third interval, repeating the procedure until the full trajectories are recovered. Such approach is very efficient and extremely powerful when applied to sparse data or when the tracked objects can be identified using features like shape, pattern, or color. Its weakness resides in the optimization which is not performed globally in time. For this reason, the identities of the objects can easily be lost whenever treating dense data of featureless objects.
The GReTA algorithm we propose here is based on a recursive divide and conquer strategy. We divide the acquisition into temporal intervals with length $\tau_1<T$. The optimization described in the previous section is performed in each time interval. Each path -- a temporal sequence of linked 2D objects -- belonging to a selected trajectory then becomes a \textit{meta}-object, which can be linked stereoscopically and in time to other meta-objects (paths). The procedure is then iterated. A new time interval $\tau_2$ is selected, where now $\tau_2$ enumerates the number of intervals of length $\tau_1$, \textit{i.e.} of \textit{meta}-frames. The procedure is applied recursively, until the product of the $\tau$'s of all the iterations equals the duration of the entire acquisition (\textit{e.g.}, $\tau_1\tau_2\tau_3=T$ for three iterations). Finally, the partial solutions retrieved at each iteration are combined into the solution of the full problem at the last iteration.
It is possible to prove that, when the conditions that guarantee the uniqueness of the solution (see Section~\ref{sec:formalism} and Appendix~D) are satisfied within each interval of length $\tau_1$, the solution obtained using the recursive approach coincides with the one obtained solving the problem over the entire temporal sequence. The reader is referred to Corollary~1 in Appendix~D.
Note that the recursive approach offers two key advantages over the classical sliding window one. First, it makes it possible to solve the optimization problem across several interval interfaces at once, giving it a more global scope. Second, it allows postponing ambiguous choices at each iteration to later ones, effectively extending the temporal horizon over which these choices are made.
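The control flow of the recursive scheme can be sketched as follows. Here \texttt{solve\_interval} is a hypothetical stand-in for the per-interval global optimization (it simply merges each chunk of frames into one meta-frame), so only the recursion structure is modelled, not the actual stereoscopic and temporal matching:

```python
# Schematic sketch of the recursive divide and conquer scheme.
# `solve_interval` stands in for the per-interval global optimization;
# here it merely concatenates the content of the frames it receives,
# producing one meta-frame per interval.

def solve_interval(frames):
    # Placeholder: a real implementation would run the ILP matching on
    # `frames` and return the matched paths as a single meta-frame.
    return [item for frame in frames for item in frame]

def recursive_track(frames, tau):
    """Recursively reduce `frames` in chunks of length `tau` until a
    single meta-frame (the full-sequence solution) remains."""
    while len(frames) > 1:
        frames = [solve_interval(frames[i:i + tau])
                  for i in range(0, len(frames), tau)]
    return frames[0]

# Example: 8 frames, chunks of tau = 2 -> three iterations (8 -> 4 -> 2 -> 1).
frames = [[f"obj{i}"] for i in range(8)]
result = recursive_track(frames, tau=2)
```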
%
%
\subsection{Making the algorithm robust against wrong or missing links}\label{sec:mods}
%
%
Dealing with real data, the sets of temporal and stereoscopic links are affected by noise, which results in missing links and fluctuations of the stereoscopic distances. In some particular situations, the optimization operated within finite intervals of time is not guaranteed to be equivalent to a truly global optimization over the entire temporal sequence.
We describe here two modifications of the algorithm that take into account such situations typical of experimental data: a way to postpone ambiguous choices to the next iteration, effectively extending the temporal horizon over which a choice is made, and a way to recognize and re-join fragments of the same trajectory.
%
\subsubsection{Postponing ambiguous choices to the following iterations}\label{sec:modsmps}
%
The absence of some correct links may lead to one or more trajectories representing unreal objects. These trajectories are characterized by at least one long time gap during which the stereoscopic links are absent.
We detect these cases by using a threshold over the maximum acceptable number of consecutive frames of missing stereoscopic links, and we discard them. We then run the optimization algorithm. Next, we propagate the links over all the discarded 2D objects, creating new 2D paths. These are then passed to the optimization algorithm at the following iteration together with all the matched 2D paths. In this way, we discard any ambiguous choice made locally within any interval at the current iteration, and postpone the decision to the following iteration, effectively extending the temporal window when necessary.
Such refined algorithm is applied at each iteration. At the last iteration, the trajectories lacking stereoscopic coherence are discarded. New 2D paths are obtained by propagating through the objects which belonged to the discarded trajectories, and they are added to the set of 2D paths. Finally, the set of all the 2D paths is passed to the optimization algorithm running for a second and last time over the full temporal sequence. This time, the trajectories lacking coherence will not be discarded.
%
\subsubsection{Joining trajectory fragments}\label{sec:modsscrondo}
%
There are two reasons for the algorithm to output correct but fragmented trajectories. First, when a temporal link is missing. Second, when the modification described above breaks a wrong trajectory and reconstructs two correct fragments.
In both cases, it is possible to re-join fragments of trajectories which are consecutive in time and stereoscopically coherent. We do so after each iteration by connecting fragmented 2D paths in each camera that are stereoscopically connected to the same full 2D path in another camera. Note that in our implementation, which is designed for a system of three cameras, we actually match fragments of paths in one camera only with matched pairs of paths in the other two cameras.
%
%
%
\subsection{Final quality check and 3D reconstruction}\label{sec:qualitycheck}
%
%
In the case of field experiments with freely moving animals, individuals happen to leave the field-of-view of one camera for long times during the recorded events. Furthermore, the noise present in real experimental images often results in errors of the segmentation routine, both missed detections and over-detections. For these reasons, it is not possible to correctly track all the objects for the entire temporal sequence, especially when the size of the problem is very large and the image data is heavily corrupted by noise.
To ameliorate these issues, we discard those few trajectories lacking stereoscopic coherence for a considerably long time gap. As shown in the following Section~\ref{sec:synthdata}, this amounts to roughly $4\%$ of the final trajectories for an average-sized dataset ($512$ objects and $500$ frames, see Table~\ref{tab:synth}). We then cut those trajectories, in order to save the fragments which do satisfy the stereoscopic coherence. Such operation results in a minor trajectory fragmentation.
Finally, we are left with a set of matched triplets of 2D paths. Given a triplet of 2D paths, we can reconstruct the corresponding trajectory in 3D by applying standard stereometric formulas~\cite{hartley2003book} to each triplets of synchronous 2D points belonging to the paths, as shown on the last row of Fig.~\ref{fig2}.
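As an illustration of the stereometric reconstruction step, the sketch below triangulates a 3D point as the midpoint of the shortest segment between two viewing rays. This is one standard closest-approach construction from the stereo literature, not necessarily the exact formulas used in our implementation; cameras are modelled simply by their optical centre and a viewing direction through the detected 2D point:

```python
# Minimal midpoint triangulation for two calibrated cameras (an
# illustration of standard stereometric reconstruction). Each camera is
# modelled by its centre c and a viewing ray direction d through the
# detected 2D point; the 3D point is the midpoint of the shortest
# segment joining the two rays.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate_midpoint(c1, d1, c2, d2):
    """Return the midpoint of the shortest segment joining the rays
    c1 + t*d1 and c2 + s*d2 (closest-approach triangulation)."""
    r = [x - y for x, y in zip(c1, c2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b          # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1 = [x + t * y for x, y in zip(c1, d1)]
    p2 = [x + s * y for x, y in zip(c2, d2)]
    return [(x + y) / 2 for x, y in zip(p1, p2)]

# Two cameras on the x axis, both looking at the point (0, 0, 10).
point = triangulate_midpoint(
    c1=(-1.0, 0.0, 0.0), d1=(0.1, 0.0, 1.0),
    c2=( 1.0, 0.0, 0.0), d2=(-0.1, 0.0, 1.0))
```

With three cameras, as in our setup, each pair of rays yields such a point and the three midpoints can be averaged or checked for consistency.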
%
%
%
%
%
\section{Complexity of the tracking problem}\label{sec:complexity}
%
%
%
The global optimization requires comparing all the possible solutions and selecting the one that minimizes the cost function. This is a multidimensional assignment problem, and it is NP-hard. We solve it by finding the globally optimal solution using ILP techniques. We compute the cost of each possible triplet of linked 2D paths $\gamma\in\Gamma$, \textit{i.e.} with at least one stereoscopic link. This implies that the number of variables to handle, $H$, corresponds to the number of possible trajectories, $|\Gamma|$. The parameter $H$ strictly depends on the number $P$ of 2D paths in the graph of each camera, obtained by propagation of the temporal links, and on the stereoscopic links between them. Therefore both $H$ and $P$ depend on the number of objects to be tracked, and they both grow exponentially with the temporal duration of the event. Let us analyze this trend in detail.
%
\begin{figure*}[!b]
\unitlength=1in
\centering
\psfrag{N}{\scriptsize{$\mathcal{N}$}}
\psfrag{H}{\footnotesize{$H$}}
\psfrag{Hf}{\footnotesize{$H_{full}$}}
\includegraphics[width=0.99\textwidth]{fig5}
\caption{\small The intrinsic complexity of the tracking problem is illustrated with synthetic data, and the performance with and without the recursive divide and conquer scheme is compared.
In panel ($a$), the number $P$ of possible paths for a synthetic dataset of $1024$ objects is shown as a function of the considered temporal duration $T$.
In panel ($b$), the dependence of $\beta$ on $T$ is shown.
In panel ($c$), the values of $H$ as a function of $T$ are plotted as yellow diamonds; these are compared to the values of $H$ obtained when the recursive divide and conquer strategy is applied (choosing $\tau_1=25$~frames), and plotted as green circles.
In panel ($d$), the computational time is plotted versus $H$ for several runs with different complexities, with and without recursion as in panel ($c$).
In panel ($e$), the MOTA (see Sec.~\ref{sec:synthdata}) values obtained on several runs with different complexities are plotted versus $H$, with and without recursion as in panel ($c$).}
\label{fig5}
\end{figure*}
%
%
%
\subsection{Complexity of the temporal graph of each camera space}
%
%
The number of 2D paths, $P$, for a certain camera depends on the temporal length of the acquisition and on the connectivity of the graph on that camera. This dependence can be described by introducing a bifurcation coefficient $\alpha\geq0$, which is an indirect measure of the number of occlusions per frame in each camera view. The higher the number of occlusions between the detected objects, the higher the average number of multiple links per object in the corresponding graph, and hence the higher the value of $\alpha$.
We can predict $P$ as a function of $\alpha$, of the number of tracked objects $N$, and of the event duration $T$, as:
\begin{equation}
P=Ne^{\alpha T}~.
\label{eqn:mathcalN}
\end{equation}
For $\alpha=0$, the number of paths is exactly equal to the number of real objects, hence $\alpha=0$ corresponds to the ideal case of zero occlusions.
Eq.~\ref{eqn:mathcalN} is confirmed by tests on synthetic data. In Fig.~\ref{fig5}($a$), the values of $P$ as a function of $T$ are plotted for a synthetic dataset of flocking birds ($N=1024$, see Section~\ref{sec:synthdata} for details). For each $T$, we propagate only the correct temporal links and we measure $P(T)$ for several intervals lasting $T$ frames. The mean value is plotted against $T$, and a linear fit is performed to retrieve the value of $\alpha$. Typically, our experimental data (birds and insects) are characterized by $\alpha\in[0.001,~0.2]$.
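The fitting procedure can be illustrated with a minimal sketch that generates path counts according to Eq.~\ref{eqn:mathcalN} and recovers $\alpha$ as the slope of a least-squares fit of $\log P$ versus $T$; the numerical values of $N$ and $\alpha$ here are hypothetical, chosen within the experimentally observed range:

```python
import math

# Sketch: recover the bifurcation coefficient alpha from measured path
# counts P(T) via a least-squares fit of log P = log N + alpha * T,
# following Eq. P = N exp(alpha T). The data below are synthetic,
# generated exactly from the model, so the fit is only illustrative.

def fit_alpha(Ts, Ps):
    ys = [math.log(p) for p in Ps]
    n = len(Ts)
    mt, my = sum(Ts) / n, sum(ys) / n
    num = sum((t - mt) * (y - my) for t, y in zip(Ts, ys))
    den = sum((t - mt) ** 2 for t in Ts)
    return num / den  # slope of log P vs T, i.e. alpha

N, alpha = 1024, 0.05          # hypothetical values; alpha in [0.001, 0.2]
Ts = [25, 50, 100, 200]
Ps = [N * math.exp(alpha * T) for T in Ts]
alpha_hat = fit_alpha(Ts, Ps)
```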
%
%
\subsection{Full computational complexity of the problem}
%
%
The number of paths $P$ is not by itself the computational bottleneck of the algorithm. What really matters is how many triplets built out of these $P$ paths have a nonzero probability to be stereoscopically connected to each other, because this is what actually enters the global optimization problem. We call $H$ the number of possible stereoscopic matches between the 2D paths across cameras. In the best case scenario, \textit{i.e.} when there are no stereoscopic ambiguities, each one of the 2D paths belongs to only one trajectory (\textit{i.e.}, $\gamma\in\Gamma$), and $H=P$. In the worst case scenario, each one of the 2D paths has at least one stereoscopic link with every other path in the image spaces of the other cameras and, for a three-camera system, $H=P^3$. We can therefore express $H$ as a function of $P$ in the following way,
\begin{equation}
H=P^\beta=\left(Ne^{\alpha T}\right)^\beta~,
\label{eqn:mathcalH}
\end{equation}
where the parameter $\beta\in[1,~3]$ gives the measure of the degree of hybridization between 2D paths on different cameras, which is in turn a function of the optical density of the objects and of the number of occlusions. The longer the temporal acquisition, the higher the probability that one 2D path intersects another one; this effect is large if there is high diffusion of the real 3D objects in the center of mass reference frame of the group. Therefore, we expect the exponent $\beta$ to grow with the time duration of the event. This growth of $\beta(T)$ is indeed what we find (see Fig.~\ref{fig5}($b$)); although the saturation limit $\beta\sim2$ is below the upper bound $\beta=3$, the growth of $\beta$ with time means that the exponential explosion of the computational complexity $H$ is rather severe.
In Fig.~\ref{fig5}($c$) we report the computational complexity $H$ as a function of the number of frames $T$ for the same synthetic dataset (yellow diamonds). The exponential growth of $H$ clearly shows that a multi-path branching algorithm by itself cannot solve the matching problem for long time intervals, however optimized it is and however large the memory resources are. Indeed, the very existence of optical occlusions, which is the reason to bifurcate paths in the first place, makes it impossible to reduce significantly the values of $\alpha$ and $\beta$.
%
%
\subsection{Reducing the complexity via recursive divide and conquer}
%
%
The modification through which we are able to drastically decrease the computational complexity of the problem is a recursive divide and conquer strategy.
The number of 2D paths created in each interval at the first iteration is $P_1=Ne^{\alpha\tau_1}$. The number of possible matches between these paths at the first iteration is $H_1=\left(Ne^{\alpha\tau_1}\right)^{\beta_1}$, where $\tau_1<T$ and $\beta_1=\beta(\tau_1)<\beta(T)$. At the end of the first iteration, the algorithm chooses which 2D paths are kept in memory, and discards the other ones. The number of paths passed to the following iteration is of the same order as $N$ ($N_1\simeq N$ meta-objects are created in each interval). Therefore,
\begin{equation}
P_2=N e^{\alpha\tau_2}~~\mathrm{and}~~H_2=\left(N e^{\alpha\tau_2}\right)^{\beta_2}~,
\label{eqn:mathcalN@iter2}
\end{equation}
where $\tau_2<\tau_1<T$ and $\beta_1<\beta_2<\beta(T)$. At the $n$-th iteration,
\begin{equation}
P_n=N e^{\alpha\tau_n}~~\mathrm{and}~~H_n=\left(N e^{\alpha\tau_n}\right)^{\beta_n}~,
\label{eqn:mathcalN@itern}
\end{equation}
where $\tau_n<\tau_{n-1}<\cdots<\tau_1<T$ and $\beta_1<\beta_2<\cdots<\beta_n\leq\beta(T)$. The crucial point is that $\tau_n\beta_n\ll T\beta(T)$, because we can decide and tune $\tau_n$ to tame the exponential explosion of the computational complexity. Moreover, such a strategy allows us to balance the increase of $\beta_i$ from iteration to iteration with a decrease of $\tau_i$. As a result, we can handle a very large number of objects $N$ for an arbitrarily long interval of time, $T$, regardless of the intrinsic complexity of the problem (expressed in terms of $\alpha$ and $\beta$).
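The reduction can be illustrated numerically. The sketch below compares the size of the single full-sequence problem against the summed sizes of the sub-problems solved by the recursive scheme; the values of $\beta$ at each scale are hypothetical placeholders (see Fig.~\ref{fig5} for measured ones):

```python
import math

# Illustrative comparison of problem sizes with and without recursion.
# N, alpha, T and the beta values are hypothetical, for illustration.

N, alpha, T = 1024, 0.05, 625

# Without recursion: one problem of size H_full = (N e^{alpha T})^beta(T).
beta_T = 2.0
H_full = (N * math.exp(alpha * T)) ** beta_T

# With recursion: tau_1 = tau_2 = 25 (so tau_1 * tau_2 = T). Iteration i
# solves T / (tau_1 ... tau_i) sub-problems, each of size
# (N e^{alpha tau_i})^beta_i, with beta_i growing from one iteration
# to the next.
taus, betas = [25, 25], [1.2, 1.5]
H_rec, n_intervals = 0.0, T
for tau, beta in zip(taus, betas):
    n_intervals //= tau
    H_rec += n_intervals * (N * math.exp(alpha * tau)) ** beta
```

Even with these modest parameters, the summed recursive cost is smaller than the full-sequence cost by many orders of magnitude.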
In Figure~\ref{fig5}($d$) we plot the computational time versus the problem complexity $H$, with and without the recursive divide and conquer strategy. Thanks to the recursive approach, the computational time is reduced by several orders of magnitude for large values of $H$. Note that, for small values of $H$, the non-recursive approach performs better and should be the preferred choice for small datasets.
In Figure~\ref{fig5}($e$) we report a quality indicator (MOTA, see next section for its definition) for several runs on synthetic datasets with different complexities $H$, and comparing the results obtained with the global optimization with and without recursive scheme. The plot reveals that the two approaches perform similarly in terms of tracking accuracy.
%
%
%
\section{Validation with synthetic data}\label{sec:synthdata}
%
%
%
We validate our algorithm by making use of synthetic datasets.
\\\\
\textbf{Synthetic data.} We simulate 3D trajectories of flocking birds by adopting a model of self-propelled particles~\cite{bialek2014social}. We use the positions projected onto 2D planes directly, rather than generating realistic renderings from them. We do this because we are not interested in testing the performance of the segmentation routine and because it remains hard to predictably simulate the interaction of camera noise and very small objects. Instead, we simulate the errors of the segmentation routine by adding white noise directly to the 2D coordinates of the projected objects, in terms of pixel displacements. We also simulate the formation of optical occlusions.
For the details concerning the generation of the synthetic data, the reader is referred to Appendix~E. Note though that our simulation still preserves the correspondences between 3D trajectories, the set of 2D projected paths, and the perturbed paths, which can all be used as ground-truth data.
\\\\
\textbf{Quality parameters.} Let $G$ be the ground-truth set of trajectories and let $N_G$ be the number of trajectories in $G$. The noisy 2D positions of the ground-truth trajectories are fed to our tracking algorithm, which outputs the set of trajectories $O$. The two sets $G$ and $O$ are compared, and the quality of the output is evaluated in terms of the following parameters:
\begin{description}
\item[\textit{MOTA}]: Multiple Object Tracking Accuracy~\cite{keni2008evaluating}, \textit{i.e.} the ratio of the number of correctly reconstructed 3D positions over the total number of 3D positions;
\item[$G_{90}$]: the ratio of the number of ground-truth trajectories correctly reconstructed for at least $90\%$ of the frames of the entire event, over $N_G$. For example, given an event lasting $100$ frames, $G_{90}$ represents the percentage of ground-truth trajectories correctly reconstructed for $90$ frames or more.
\end{description}
In the best case scenario -- \textit{i.e.} all the ground-truth trajectories are correctly reconstructed -- \textit{MOTA}$=1$ and $G_{90}=1$.
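For illustration, the two quality parameters can be computed as follows, under the simplifying assumptions that output trajectories are already paired with their ground-truth counterparts and that a 3D position counts as correct when it matches the ground truth exactly (the full definition of \textit{MOTA} in~\cite{keni2008evaluating} also accounts for false positives and identity switches):

```python
# Simplified quality metrics following the definitions in the text.
# Each trajectory is a dict mapping frame index -> 3D position, and
# output trajectories are assumed already paired with ground truth.

def mota(ground_truth, output):
    """Fraction of correctly reconstructed 3D positions."""
    total = sum(len(traj) for traj in ground_truth)
    correct = sum(
        1
        for g, o in zip(ground_truth, output)
        for frame, pos in g.items()
        if o.get(frame) == pos)
    return correct / total

def g90(ground_truth, output, n_frames):
    """Fraction of ground-truth trajectories reconstructed correctly
    for at least 90% of the event duration."""
    hits = sum(
        1
        for g, o in zip(ground_truth, output)
        if sum(o.get(f) == p for f, p in g.items()) >= 0.9 * n_frames)
    return hits / len(ground_truth)

# Toy example: 2 trajectories over 10 frames; the second one is lost
# after frame 7 (8 correct frames out of 10, i.e. 80% < 90%).
gt  = [{f: (f, 0.0) for f in range(10)}, {f: (f, 1.0) for f in range(10)}]
out = [{f: (f, 0.0) for f in range(10)}, {f: (f, 1.0) for f in range(8)}]
```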
%
\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{\small Summary of the synthetic datasets used to validate the new tracking software. For each dataset, we report its duration expressed in frames, the number of objects $N_G$ of the ground-truth set $G$, the number of output trajectories $N_O$, the value of the parameter $\xi$, and the values of the quality parameters \textit{MOTA} and $G_{90}$.}
\label{tab:synth}
\centering
\begin{tabular}{ccccccc}
\toprule
Synthetic & Duration & $N_G$ & $N_O$ & $\xi$ & \textit{MOTA} & $G_{90}$ \\
dataset & (frames) & & & & & \\
\midrule
$S1$ & $125$ & $256$ & $256$ & $0.19$ & $0.999$ & $1$ \\
$S2$ & $125$ & $512$ & $512$ & $0.24$ & $0.9989$ & $0.998$ \\
$S3$ & $125$ & $1024$ & $1024$ & $0.27$ & $0.9970$ & $0.987$ \\
$S4$ & $250$ & $256$ & $257$ & $0.19$ & $0.9975$ & $0.996$ \\
$S5$ & $250$ & $512$ & $513$ & $0.24$ & $0.9958$ & $0.988$ \\
$S6$ & $250$ & $1024$ & $1030$ & $0.27$ & $0.9931$ & $0.967$ \\
$S7$ & $500$ & $256$ & $257$ & $0.19$ & $0.9989$ & $0.988$ \\
$S8$ & $500$ & $512$ & $517$ & $0.24$ & $0.9944$ & $0.961$ \\
$S9$ & $500$ & $1024$ & $1033$ & $0.27$ & $0.9895$ & $0.928$ \\
$S10$ & $1000$ & $256$ & $257$ & $0.19$ & $0.9991$ & $0.984$ \\
$S11$ & $1000$ & $512$ & $526$ & $0.24$ & $0.9873$ & $0.902$ \\
$S12$ & $1000$ & $1024$ & $1060$ & $0.27$ & $0.9784$ & $0.869$ \\
\bottomrule
\end{tabular}
\end{table}\linespread{1}
%
\\\\
\textbf{Results on synthetic datasets.} Results for several synthetic datasets are shown in Table~\ref{tab:synth}.
The quality parameter \textit{MOTA} is always greater than $0.97$, and greater than $0.99$ for 9 datasets out of 12. The fraction of ground-truth trajectories correctly reconstructed over the full duration is greater than $0.786$. This fraction grows rapidly as soon as we consider the trajectories which are reconstructed correctly for more than $90\%$ of the total duration: $G_{90}\geq0.869$.
%
%
%
\section{Tests on experimental field data}\label{sec:expdata}
%
%
%
\begin{table*}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{\small Summary of the field events analyzed with the new tracking software. For each event, we indicate the object type, the estimated number of objects, the duration (in frames and seconds), the acquisition frame-rate, and the percentage of reconstructed trajectories whose length is greater than $90\%$ of the acquisition duration.}
\label{tab:exp}
\centering
\begin{tabular}{ccccccc}
\toprule
Experimental & ~Object~ & ~~Estimated~~ & ~~~Duration~~~ & ~Frame-rate~~ & ~$\%$ of trajectories~ \\
dataset & Type & $\#$ Objects & (frames~$|$~s) & (Hz) & with Length $>90\%$ \\
\midrule
$E1$ & birds & $179$ & $440~|~5.50$ & $80$ & $87.0\%$\\
$E2$ & birds & $551$ & $360~|~4.50$ & $80$ & $90.2\%$\\
$E3$ & birds & $365$ & $128~|~1.60$ & $80$ & $78.6\%$\\
$E4$ & birds & $120$ & $310~|~1.82$ & $170$ & $99.2\%$\\
$E5$ & birds & $50$ & $1000~|~5.88~$ & $170$ & $98.0\%$\\
$E6$ & birds & $482$ & $761~|~4.48$ & $170$ & $84.3\%$\\
$E7$ & birds & $117$ & $500~|~2.94$ & $170$ & $88.7\%$\\
$E8$ & birds & $110$ & $661~|~3.89$ & $170$ & $97.2\%$\\
$E9$ & birds & $381$ & $960~|~5.65$ & $170$ & $72.3\%$\\
$E10$ & birds & $168$ & $300~|~1.76$ & $170$ & $81.2\%$\\
$E11$ & birds & $1270$ & $300~|~1.76$ & $170$ & $87.6\%$\\
$E12$ & birds & $60$ & $609~|~3.58$ & $170$ & $89.8\%$\\
$E13$ & insects & $37$ & $2000~|~11.76$ & $170$ & $97.1\%$\\
$E14$ & insects & $332$ & $465~|~2.73$ & $170$ & $80.2\%$\\
$E15$ & insects & $115$ & $1000~|~5.88~$ & $170$ & $85.6\%$\\
$E16$ & insects & $147$ & $1000~|~5.88~$ & $170$ & $84.6\%$\\
$E17$ & insects & $210$ & $500~|~2.94$ & $170$ & $82.7\%$\\
$E18$ & insects & $124$ & $1024~|~6.02~$ & $170$ & $84.0\%$\\
$E19$ & insects & $633$ & $250~|~1.47$ & $170$ & $82.3\%$\\
\bottomrule
\end{tabular}
\end{table*}\linespread{1}
%
We also tested our algorithm using our experimental data of flocking birds and swarming insects acquired on the field, as well as using public benchmark datasets.
Testing the algorithm with our data, we analyzed $12$ events of starling flocks~\cite{attanasi2014information}, and $7$ events of swarming midges~\cite{attanasi2014collective,attanasi2014prl}, as summarized in Table~\ref{tab:exp}.
Fig.~\ref{fig1} shows the reconstructed trajectories for four events, the bird flocks labelled $E3$ and $E6$, and the midge swarms labelled $E14$ and $E15$ -- see Table~\ref{tab:exp}. The original video sequences of these four experimental events (in slow-motion, $0.15\times$~slower than the original speed), together with the reconstructed trajectories, are included as Supplemental Material.
Clearly, ground-truth trajectories are not available in the case of experimental data, and -- due to the size of our datasets -- manual inspection is not feasible, except for a limited number of ambiguous cases. The quality of the reconstructed trajectories is assessed in this case only in terms of trajectory fragmentation. In Table~\ref{tab:exp}, we report the percentage of trajectories longer than $90\%$ of the duration of the acquisition. The majority of the reconstructed trajectories are full-length, and trajectory fragmentation is negligible. Such high-quality data have been used to perform the analysis presented in~\cite{attanasi2014information} and~\cite{attanasi2014collective,attanasi2014prl}.
\begin{table}[!b]
\renewcommand{\arraystretch}{1.3}
\caption{\small Comparison of the quality of the output trajectories retrieved using GReTA and the ones retrieved using the algorithms MHT and SDD-MHT, as published by Z.~Wu~\textit{et~al.}~\cite{wu2014thermal} (see Table~IV therein) on the datasets labeled \textit{Davis-08 sparse} and \textit{Davis-08 dense}.}
\label{tab:wu}
\centering
\begin{tabular}{clccccc}
\toprule
Dataset & Algorithm & MT & ML & FM & IDS & \textbf{MOTA} \\
& & ($\%$) & ($\%$) & ($\#$) & ($\#$) & ($\%$) \\
\midrule
& MHT & $96.6$ & ~$0$ & $105$ & ~$97$ & ~$\mathbf{64.1}$ \\
\textit{sparse} & SDD-MHT & $95.2$ & ~$0$ & $145$ & $126$ & ~$\mathbf{78.9}$ \\
& GReTA & $83.1$ & $3.4$ & $188$ & ~~$9$ & ~$\mathbf{82.4}$ \\
\\
& MHT & $71.9$ & $2.5$ & $274$ & $355$ & $\mathbf{-32.0}$~ \\
\textit{dense~} & SDD-MHT & $61.1$ & $3.0$ & $454$ & $444$ & ~$\mathbf{44.9}$ \\
& GReTA & $78.8$ & $2.9$ & $335$ & ~~$8$ & ~$\mathbf{80.3}$ \\
\bottomrule
\end{tabular}
\end{table}\linespread{1}
%
%
To the best of our knowledge, the only public benchmark datasets for 3D-tracking of animal groups are the thermal infrared videos (the raw image sequences with the corresponding sets of ground-truth trajectories) published by Z. Wu and coworkers~\cite{wu2014thermal}. We tested our tracking algorithm on the two datasets \textit{Davis08-sparse} and \textit{Davis08-dense}, and we evaluated the output trajectories using the quality parameters defined in~\cite{wu2014thermal}: the numbers of Mostly Tracked ($MT\geq80\%$) trajectories, Mostly Lost ($ML\leq20\%$) trajectories, track fragmentations (FM) and Identity Switches (IDS), as well as the MOTA. In Table~\ref{tab:wu}, we report the quality of our output trajectories compared to the results on the same datasets published by Z. Wu~\textit{et~al.}~\cite{wu2014thermal}.
Our results exhibit better values of MOTA and IDS on both datasets, \textit{dense} and \textit{sparse}, revealing that the trajectories are characterized by low numbers of false positives, identity switches, and mismatches. In terms of MT, ML and FM, GReTA performs slightly worse on the \textit{sparse} dataset than MHT and SDD-MHT; on the other hand, when applied to the \textit{dense} dataset, its performance is comparable to that of the other methods.
This implies that MHT and SDD-MHT output a larger percentage of complete trajectories, which nevertheless are characterized by more identity switches and false positives -- as revealed by the lower values of MOTA and higher values of IDS. We were not surprised by this, as our tracking algorithm is intentionally designed to discard false positives, preferring short and correct trajectories to long but incorrect ones.
We believe that this benchmark demonstrates the performance advantages of GReTA, as well as its flexibility in processing very diverse experimental data.
%
%
%
\section{Comparison with prior work}\label{sec:comp}
%
%
%
In order to situate our algorithm in the 3D tracking landscape, Fig.~\ref{fig6} shows, for a number of 3D tracking results published in the literature, the number of tracked objects $N$ and the temporal duration $T$, estimated whenever explicit information was not published (the average trajectory length is used in case of objects entering and leaving the field-of-view). The points scattered on the plot have been classified according to the field of investigation for which the respective algorithms have been developed: fluid dynamics experiments ({\tiny$\blacksquare$}), biological experiments ($\bullet$), and the experimental data presented in this paper and listed in Table~\ref{tab:exp} (\textcolor{red}{$\triangle$}~and~\textcolor{red}{$\bigtriangledown$} for birds and insects, respectively). The numbers next to the symbols correspond to the references to the papers from which $T$ and $N$ have been estimated -- see Bibliography.
Given that $N$ and $T$ are both valid -- but qualitatively different -- criteria of evaluation, to compare different methods according to $N$ and $T$ we can use a multi-objective optimization approach, the simplest of which is defining the Pareto frontier~\cite{pareto1964book,messac2003smo} in the $\lbrace T,~N\rbrace$ plane.
We sketched with a dashed line the Pareto frontier for the plotted data-points in the 2D space of $N$ and $T$. The best tracking performance is given by the points closest to the frontier.
%
\begin{figure}[!b]
\centering
\includegraphics[width=0.90\columnwidth]{fig6}
\caption{\small Comparison of test-cases used for several tracking algorithms, quantified in terms of temporal duration $T$ and estimated number of tracked objects $N$. The largest datasets processed by different tracking algorithms define the Pareto frontier in the two-dimensional space $\lbrace T,~N\rbrace$. The points are classified according to the field of investigation for which the respective algorithms have been developed: fluid dynamics experiments ({\tiny$\blacksquare$}), biological experiments ($\bullet$), and the experimental data processed using GReTA and presented in this paper and listed in Table~\ref{tab:exp}
(\textcolor{red}{$\triangle$}~and~\textcolor{red}{$\bigtriangledown$} for birds and insects, respectively). The numbers next to the symbols correspond to the references to the papers from which $T$ and $N$ have been estimated -- see Bibliography.}
\label{fig6}
\end{figure}
The plot clearly shows that fluid dynamics tracking algorithms have been optimized to track large numbers of tracer particles for short times, and that previous tracking approaches restricted biological experiments to relatively small numbers of animals, albeit for longer times. The data processed using GReTA mark the Pareto frontier, demonstrating the important step forward in performance of the proposed algorithm, which proved suitable for tracking large groups of objects for considerably long durations, without suffering from frequent optical occlusions.
%
%
%
\section{Conclusions}\label{sec:conclusions}
%
%
%
We presented a novel Global and Recursive Tracking Algorithm (GReTA) in three dimensions able to reconstruct uninterrupted trajectories for large numbers of objects and long time intervals, even with frequent optical occlusions. This recursive divide and conquer algorithm is based on the idea of global optimization of the solution -- global in space as well as in time. The applicability of a global optimization is limited by the computational complexity, which grows exponentially fast with the time duration of the sequence.
Here we achieve a dramatic reduction of the computational complexity by making use of a recursive divide and conquer strategy, which allows us to first optimize the matches globally over shorter temporal intervals, and then iterate to cover the entire temporal sequence. In this way, the computational complexity is drastically reduced while preserving the global scope, making it possible to track very large datasets (large in terms of number of objects and of duration of the video acquisition). We further proposed several adaptations making the algorithm robust against wrong or missing links.
We implemented the algorithm; we validated it by making use of synthetic data with available ground-truth information; we tested it on new experimental field data of flocking birds and swarming insects; we compared its performance using public benchmark datasets. We showed that the algorithm is capable of reconstructing 3D-trajectories with negligible fragmentation, and that the quality of the trajectories is not affected by the recursive divide and conquer strategy. To the best of our knowledge, the results based on synthetic data and on the public datasets demonstrated the superior performance of the proposed tracking approach compared to other existing methods.
We processed bird flock data, insect swarm data, and bat data, despite these systems being very different from each other: insects in a swarm fly in a very jerky manner and occlude frequently in the images, but for very short times; the flight of birds in a flock is highly coordinated, so that occlusions are typically very long-lasting and can involve several birds at a time; bats exiting a cave continuously enter and leave the field-of-view. Because of this flexibility, we believe that the GReTA approach can be successfully applied to process the most diverse experimental data.
%
%
%
%
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
%
This work was supported by grants IIT--{Seed Artswarm}, ERC--{StG n.257126}, and AFOSR--{FA95501010250} (through the University of Maryland). F.~Pellacini was partially supported by Intel. We acknowledge the advice of Carlo Lucibello on multi-objective optimization.
%
\section{Introduction}
By its name, a waveguide serves for propagating waves which may be of
different physical origins: fluctuations of pressure in acoustics,
electromagnetic waves in optics, particle waves in quantum mechanics,
surface water waves in hydrodynamics, etc. The transmission
properties of a waveguide can be characterized by its resonance
frequencies or, equivalently, by the spectrum of an operator which
describes the waves motion (e.g., the Laplace operator in the most
usual case). For infinite waveguides, the spectrum consists of two
parts: (i) the essential (or continuous) spectrum for which the
related resonances are extended over the whole domain and thus have
infinite $L_2$ norms, and (ii) the discrete (or point-like) spectrum
for which the related eigenfunctions have finite $L_2$ norms and thus
necessarily ``trapped'' or ``localized'' in a region of the waveguide.
A wave excited at the frequency of the trapped eigenmode remains in
the localization region and does not propagate.
The existence of trapped, bound or localized eigenmodes in classical
and quantum waveguides has been thoroughly investigated (see reviews
\cite{Duclos95,Linton07} and also references in \cite{Olendski10}).
In the seminal paper, Rellich proved the existence of a localized
eigenfunction in a deformed infinite cylinder \cite{Rellich}. His
results were significantly extended by Jones \cite{Jones53}. Ursell
reported on the existence of trapped modes in surface water waves in
channels \cite{Ursell51,Ursell87,Ursell91}, while Parker observed
experimentally the trapped modes in locally perturbed acoustic
waveguides \cite{Parker66,Parker67}. Exner and Seba considered an
infinite bent strip of smooth curvature and showed the existence of
trapped modes by reducing the problem to Schr\"odinger operator in the
straight strip, with the potential depending on the curvature
\cite{Exner89}. Goldstone and Jaffe gave the variational proof that the
wave equation subject to Dirichlet boundary condition always has a
localized eigenmode in an infinite tube of constant cross-section in
any dimension, provided that the tube is not exactly straight
\cite{Goldstone92}. The problem of localization in acoustic
waveguides with Neumann boundary condition has also been investigated
\cite{Evans92,Evans94}. For instance, Evans {\it et al.} considered
a straight strip with an inclusion of arbitrary (but symmetric) shape
\cite{Evans94} (see \cite{Davies98} for further extension). Such an
inclusion obstructed the propagation of waves and was shown to result
in trapped modes. The effect of mixed Dirichlet, Neumann and Robin
boundary conditions on the localization was also investigated (see
\cite{Olendski10,Bulla97} and references therein). A mathematical
analysis of guided water waves was developed in \cite{Bonnet-Ben93}.
All the aforementioned works dealt with infinite waveguides for which
the Laplace operator spectrum is essential, with possible inclusion of
discrete points. Since these discrete points were responsible for
trapped modes, the major question was whether or not such discrete
points exist below the essential spectrum. It is worth noting that
the localized modes have to decay relatively fast at infinity in order
to guarantee the finite $L_2$ norm. But the same question about the
existence of rapidly decaying eigenfunctions may be formulated for
bounded domains (resonators) with long branches that we call ``finite
waveguides'' (Fig. \ref{fig:domain}). This problem is different in
many aspects. Since all eigenfunctions now have finite $L_2$ norms,
the definition of trapped or localized modes has to be revised. Quite
surprisingly, a rigorous definition of localization in bounded domains
turns out to be a challenging problem \cite{Sapoval91,Even99,Felix07}.
In the context of the present paper concerning finite waveguides, an
eigenmode is called trapped or localized if it decays exponentially
fast in prominent subregions (branches) of the bounded domain. The
exponential decay of an eigenfunction in the branch can be related to
the smallness of the associated eigenvalue in comparison to the
cut-off frequency, i.e. the first eigenvalue of the Laplace operator
in the cross-section of that branch \cite{Jackson}. In other words,
the existence of a trapped mode is related to ``smallness'' of the
eigenvalue, in full analogy to infinite waveguides. Using the
standard mathematical tools such as domain decomposition, explicit
representation of solutions of the Helmholtz equation and variational
principle, we aim at formalizing these ideas and providing a
sufficient condition on the branch lengths for getting a trapped mode.
The dependence of the localization character on the length of branches
is the main result of the paper and a new feature of finite waveguides
which was overlooked in the well-established theory of infinite
waveguides. Since in practice all quantum waveguides are finite, this
dependence may be important for microelectronic devices.
The paper is organized as follows. In Sec. \ref{sec:main}, we adapt
the method by Bonnet-Ben Dhia and Joly \cite{Bonnet-Ben93} in order to
reduce the original eigenvalue problem in the whole domain to the
nonlinear eigenvalue problem in the domain without branches. Although
the new problem is more sophisticated, its variational reformulation
provides a general framework for proving the trapping (or
localization) of eigenfunctions. We use it to derive the main result
of the paper: a sufficient condition (\ref{eq:cond_general}) on the
branch lengths for getting a trapped mode. In sharp contrast to an
infinite (non-straight) waveguide of a constant cross-section, for
which the first eigenfunction is always trapped and exponentially
decaying \cite{Goldstone92}, finite waveguides may {\it or may not}
have such an eigenfunction, depending on the length of branches. This
method is then illustrated in Sec. \ref{sec:examples} for several
finite waveguides (e.g., a bent strip and a crossing of two strips).
For these examples, we estimate the minimal branch length which is
sufficient for getting at least one localized mode. At the same time,
we provide an example of a waveguide, for which there is no
localization for any branch length. We also construct a family of
finite waveguides for which the minimal branch length varies
continuously. As a consequence, for a given (large enough) branch
length, one can construct two almost identical resonators, one with
and the other without localized mode. This observation may be used
for developing quantum switching devices.
\section{ Theoretical results }
\label{sec:main}
For the sake of clarity, we focus on planar bounded domains with
rectangular branches, while the extension to arbitrary domains in
${\mathbb R}^n$ with general cylindrical branches is straightforward and
provided at the end of this Section.
\begin{figure}
\begin{center}
\includegraphics[width=120mm]{figure1.eps}
\end{center}
\caption{
Two examples of a finite waveguide: (a) a planar bounded domain $D$
which is composed of a basic domain $\Omega$ of arbitrary shape and
three rectangular branches $Q_i$ of lengths $a_i$ and width $b=1$; (b)
a three-dimensional bounded domain with three general cylindrical
branches. }
\label{fig:domain}
\end{figure}
We consider a planar bounded domain $D$ composed of a basic domain
$\Omega$ of arbitrary shape and $M$ rectangular branches $Q_i$ of
lengths $a_i$ and width $b$ as shown on Fig. \ref{fig:domain}:
\begin{equation*}
D = \Omega\cup \bigcup\limits_{i=1}^M Q_i .
\end{equation*}
We denote $\Gamma_i = \partial \Omega \cap \partial Q_i$ the inner boundary
between the basic domain $\Omega$ and the branch $Q_i$ and $\Gamma =
\partial\Omega \backslash \bigcup_{i=1}^M \Gamma_i$ the exterior boundary
of $\Omega$. We study the eigenvalue problem for the Laplace operator
with Dirichlet boundary condition
\begin{equation}
\label{1}
-\Delta U = \lambda U , \quad U|_{\partial D}=0 .
\end{equation}
\subsection{ Solution in rectangular branches }
Let $u_i(x,y)$ denote the restriction of the solution $U(x,y)$ of the
eigenvalue problem (\ref{1}) to the branch $Q_i$. For convenience, we
take $b=1$ and assume that the coordinates $x$ and $y$ are chosen in
such a way that $Q_i = \{ (x,y)\in{\mathbb R}^2~:~ 0<x<a_i,~ 0<y<1 \}$ (the
final result will not depend on this particular coordinate system).
The eigenfunction $u_i(x,y)$ satisfying Dirichlet boundary condition
on $\partial Q_i$ has the standard representation:
\begin{equation}
\label{2}
u_i(x,y) \equiv U_{|Q_i} = \sum\limits_{n=1}^\infty c_n \sinh(\gamma_n (a_i-x)) \sin (\pi n y) ,
\end{equation}
where $\gamma_n = \sqrt{\pi^2 n^2 - \lambda}$ and $c_n$ are the
Fourier coefficients of the function $U$ at the inner boundary
$\Gamma_i$ (at $x = 0$):
\begin{equation}
\label{eq:cn}
c_n = \frac{2}{\sinh(\gamma_n a_i)} \int\limits_0^1 dy ~ U(0,y) \sin (\pi n y) .
\end{equation}
Substituting this relation into Eq. (\ref{2}) yields
\begin{equation}
\label{eq:u1u2}
u_i(x,y) = 2\sum\limits_{n=1}^{\infty} \bigl(U_{|\Gamma_i}, \sin (\pi n y)\bigr)_{L_2(\Gamma_i)}
\frac{ \sinh (\gamma_n(a_i - x))}{\sinh (\gamma_n a_i)} \sin (\pi n y) ,
\end{equation}
where the integral in (\ref{eq:cn}) was interpreted as the scalar
product in $L_2(\Gamma_i)$. The representation (\ref{eq:u1u2}) is of
course formal because its coefficients are still unknown.
Nevertheless, one can already distinguish two different cases.
(i) If $\lambda < \pi^2$, all $\gamma_n$ are real, and the
representation (\ref{eq:u1u2}) decays exponentially.
In fact, writing the squared $L_2$-norm of the function $u_i(x,y)$
along the vertical cross-section of the branch $Q_i$ at $x$,
\begin{equation*}
I_i(x) \equiv \int\limits_0^1 u_i^2(x,y) dy
= 2\sum \limits_{n=1}^{\infty} \bigl(U_{|\Gamma_i}, \sin (\pi n y)\bigr)^2_{L_2(\Gamma_i)}
\frac{\sinh^2(\gamma_n(a_i - x))}{\sinh^2(\gamma_n a_i)} ,
\end{equation*}
one can use the inequality $\sinh(\gamma_n (a_i-x)) \leq
\sinh(\gamma_n a_i) e^{-\gamma_1 x}$ to get
\begin{equation}
\label{eq:decay}
I_i(x) \leq 2 \sum\limits_{n=1}^{\infty} \bigl(U_{|\Gamma_i}, \sin(\pi n y)\bigr)^2_{L_2(\Gamma_i)} e^{-2 \gamma_1 x}
= I_i(0) e^{-2\gamma_1 x} \quad (0 < x < a_i).
\end{equation}
This shows the exponential decay along the branch with the decay rate
$2\gamma_1 = 2\sqrt{\pi^2 - \lambda}$.
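As a sanity check of the bound (\ref{eq:decay}), the truncated series can be evaluated numerically. The following Python sketch is illustrative only: the eigenvalue $\lambda$, the branch length and the Fourier coefficients are arbitrary choices (not data from this paper), and it verifies that $I_i(x) \leq I_i(0)\, e^{-2\gamma_1 x}$ when $\lambda < \pi^2$.

```python
import math

def branch_norm(coeffs, lam, a, x):
    """Squared cross-sectional L2 norm I(x) of the series (eq:u1u2);
    coeffs[n-1] plays the role of (U|_Gamma, sin(pi n y))."""
    total = 0.0
    for n, c in enumerate(coeffs, start=1):
        g = math.sqrt(math.pi**2 * n**2 - lam)  # gamma_n, real since lam < pi^2
        total += 2.0 * c**2 * (math.sinh(g * (a - x)) / math.sinh(g * a))**2
    return total

# Illustrative data: eigenvalue below the cutoff pi^2, two Fourier modes.
lam, a = 0.5 * math.pi**2, 3.0
coeffs = [1.0, 0.3]
gamma1 = math.sqrt(math.pi**2 - lam)
I0 = branch_norm(coeffs, lam, a, 0.0)
for x in [0.5, 1.0, 2.0]:
    # exponential-decay bound (eq:decay)
    assert branch_norm(coeffs, lam, a, x) <= I0 * math.exp(-2 * gamma1 * x)
```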
(ii) In turn, if $\lambda > \pi^2$, some $\gamma_n$ are purely
imaginary so that $\sinh(\gamma_n z)$ becomes $\sin(|\gamma_n|z)$, and
the exponential decay is replaced by an oscillating behavior.
One sees that the problem of localization of the eigenfunction in the
basic domain $\Omega$ is reduced to checking whether the eigenvalue
$\lambda$ is smaller or greater than $\pi^2$ (or $\pi^2/b^2$ in
general).
\subsection{ Nonlinear eigenvalue problem }
The explicit representation (\ref{eq:u1u2}) of the eigenfunction in
the branch $Q_i$ allows one to reformulate the original eigenvalue
problem (\ref{1}) in the whole domain $D$ as a specific eigenvalue
problem in the basic domain $\Omega$. In fact, the restriction of $U$
onto the basic domain $\Omega$, $u \equiv U_{|\Omega}$, satisfies the
following equations
\begin{equation}
\label{eq:eigen_basic0}
-\Delta u = \lambda u \quad \mathrm{in}~\Omega, \quad u|_{\Gamma}=0 , \quad u|_{\Gamma_i} = u_i|_{\Gamma_i},
\quad \frac{\partial u}{\partial n}|_{\Gamma_i} = - \frac{\partial u_i}{\partial n}|_{\Gamma_i} ,
\end{equation}
where $\partial/\partial n$ denotes the normal derivative directed outward from
the domain. The last two conditions ensure that the eigenfunction and
its derivative are continuous at the inner boundaries $\Gamma_i$ (the
minus sign accounts for the opposite orientations of the normal
derivatives on the two sides of the inner boundary). The normal
derivative of $u_i$
can be explicitly written by using Eq. (\ref{eq:u1u2}):
\begin{equation}
\label{eq:aux1}
\frac{\partial u_i}{\partial n}|_{\Gamma_i} = - \frac{\partial u_i}{\partial x}|_{x=0} =
2\sum\limits_{n=1}^{\infty} \gamma_n \coth(\gamma_n a_i) \bigl(U_{|\Gamma_i}, \sin (\pi n y)\bigr)_{L_2(\Gamma_i)} \sin (\pi n y) .
\end{equation}
Denoting by $T_i(\lambda)$ the operator acting from $H^{1/2}(\Gamma_i)$ to
$H^{-1/2}(\Gamma_i)$ (see \cite{Lions} for details) as
\begin{equation*}
T_i(\lambda) f \equiv 2\sum\limits_{n=1}^{\infty} \gamma_n \coth(\gamma_n a_i) \bigl(f, \sin (\pi n y)\bigr)_{L_2(\Gamma_i)} \sin (\pi n y) ,
\end{equation*}
the right-hand side of Eq. (\ref{eq:aux1}) can be written as
\begin{equation*}
\frac{\partial u_i}{\partial n}|_{\Gamma_i} = T_i(\lambda) U_{|\Gamma_i} .
\end{equation*}
The eigenvalue problem (\ref{eq:eigen_basic0}) thus admits the closed
representation
\begin{equation}
\label{eq:eigen_basic}
-\Delta u = \lambda u \quad \mathrm{in}~\Omega, \quad u|_{\Gamma}=0 , \quad
\frac{\partial u}{\partial n}|_{\Gamma_i} = - T_i(\lambda) u|_{\Gamma_i} .
\end{equation}
The presence of branches and their shapes are fully accounted for by
the operators $T_i$ which are somewhat analogous to
Dirichlet-to-Neumann operators.
Although this domain decomposition allows one to remove the branches
and get a closed formulation for the basic domain $\Omega$, the new
eigenvalue problem is {\it nonlinear} because the eigenvalue $\lambda$
appears also in the boundary condition through the operators
$T_i(\lambda)$. A trick to overcome this difficulty goes back to the
Birman-Schwinger method \cite{Birman61,Schwinger61} (see also
\cite{Aslanyan81}). Following \cite{Bonnet-Ben93}, we fix $\lambda$
and solve the {\it linear} eigenvalue problem
\begin{equation}
\label{eq:eigen_basic2}
-\Delta u = \mu(\lambda) u \quad \mathrm{in}~\Omega, \quad u|_{\Gamma}=0 , \quad
\frac{\partial u}{\partial n}|_{\Gamma_i} = - T_i(\lambda) u|_{\Gamma_i} ,
\end{equation}
where $\mu(\lambda)$ denotes the eigenvalue which is parameterized by
$\lambda$. The solution of the original problem is recovered when
$\mu(\lambda) = \lambda$.
From a practical point of view, a numerical solution of
Eqs. (\ref{eq:eigen_basic2}) with the subsequent resolution of the
equation $\mu(\lambda)=\lambda$ is in general much more difficult than
solving the original eigenvalue problem (see also \cite{Levitin08} for
possible numerical schemes). In turn, Eqs. (\ref{eq:eigen_basic2})
are convenient for checking whether the first eigenvalue $\lambda_1$
is smaller or greater than $\pi^2$, as explained below.
\subsection{ Variational formulation }
We search for a weak solution of the eigenvalue problem
(\ref{eq:eigen_basic2}) in the Sobolev space
\begin{equation*}
H^1_0(\Omega) = \{ v(x,y) \in L_2(\Omega), ~ \partial v/\partial x \in L_2(\Omega),~ \partial v/\partial y \in L_2(\Omega),~ v|_{\Gamma} = 0\} .
\end{equation*}
Multiplying Eq. (\ref{eq:eigen_basic2}) by a trial function $v\in
H^1_0(\Omega)$ and integrating by parts, one gets
\begin{equation*}
\mu(\lambda) \int\limits_{\Omega} v u = - \int\limits_{\Omega} v \Delta u = \int\limits_{\Omega} (\nabla v, \nabla u)
- \int\limits_{\partial \Omega} v \frac{\partial u}{\partial n} .
\end{equation*}
Since $v$ vanishes on $\Gamma$, the weak formulation of the problem
reads as
\begin{equation}
(\nabla u,\nabla v)_{L_2(\Omega)} + \sum\limits_{i=1}^M \bigl(T_i(\lambda) u, v\bigr)_{L_2(\Gamma_i)}
= \mu(\lambda) \bigl(u,v\bigr)_{L_2(\Omega)}~~ \forall v \in H^1_0(\Omega) .
\end{equation}
The first eigenvalue $\mu_1(\lambda)$ is then obtained from
Rayleigh's principle
\begin{equation}
\label{eq:nu1}
\mu_1(\lambda) = \inf\limits_{v\in H^1_0(\Omega),~ v\ne 0} \frac{\bigl(\nabla v,\nabla v\bigr)_{L_2(\Omega)}+
\sum\nolimits_{i=1}^M \bigl(T_i(\lambda) v, v\bigr)_{L_2(\Gamma_i)}}{\bigl(v,v\bigr)_{L_2(\Omega)}} .
\end{equation}
One can show that $\mu_1(\lambda)$ is a continuous, monotonically
decreasing function of $\lambda$ on the interval $(0, \pi^2]$. For
this purpose, one first computes explicitly the derivative of the
function
\begin{equation*}
h(\lambda) \equiv \gamma_n \coth(\gamma_n a_i) = \sqrt{\pi^2 n^2 - \lambda} \coth(\sqrt{\pi^2 n^2 - \lambda}~ a_i) ,
\end{equation*}
namely
\begin{equation*}
h'(\lambda) = - \frac{\sinh(2\gamma_n a_i) - 2\gamma_n a_i}{4\gamma_n \sinh^2(\gamma_n a_i)} ,
\end{equation*}
and checks that $h'(\lambda) < 0$ because $\sinh t > t$ for $t > 0$.
Now one can show that
$\mu_1(\lambda_1) \leq \mu_1(\lambda_2)$ if $\lambda_1 > \lambda_2$.
If some trial function $v_2$ minimizes the Rayleigh quotient
(\ref{eq:nu1}) for $\lambda_2$, one has
\begin{eqnarray*}
\mu_1(\lambda_1) & \leq & \frac{\bigl(\nabla v_2,\nabla v_2\bigr)_{L_2(\Omega)}+
\sum\nolimits_{i=1}^M \bigl(T_i(\lambda_1) v_2, v_2\bigr)_{L_2(\Gamma_i)}}{\bigl(v_2,v_2\bigr)_{L_2(\Omega)}} \\
& \leq & \frac{\bigl(\nabla v_2,\nabla v_2\bigr)_{L_2(\Omega)}+
\sum\nolimits_{i=1}^M \bigl(T_i(\lambda_2) v_2, v_2\bigr)_{L_2(\Gamma_i)}}{\bigl(v_2,v_2\bigr)_{L_2(\Omega)}} = \mu_1(\lambda_2) ,
\end{eqnarray*}
where the monotonic decrease of the function $h(\lambda)$ was used
(the mathematical proof of the continuity for an analogous functional
is given in \cite{Delitsyn04}).
Since the function $\mu_1(\lambda)$ is positive, continuous and
monotonically decreasing, the equation $\mu_1(\lambda) = \lambda$ has a
solution $0 < \lambda < \pi^2$ if and only if $\mu_1(\pi^2) < \pi^2$.
This is a necessary and sufficient condition for getting a trapped
mode of the original eigenvalue problem (\ref{1}).
\subsection{ Sufficient condition }
For any trial function $v\in H^1_0(\Omega)$, we define the Rayleigh
quotient
\begin{equation}
\mu(v) = \frac{\bigl(\nabla v,\nabla v\bigr)_{L_2(\Omega)} +
\sum\nolimits_{i=1}^M \bigl(T_i(\pi^2) v, v\bigr)_{L_2(\Gamma_i)}}{\bigl(v,v\bigr)_{L_2(\Omega)}} .
\end{equation}
Since $\gamma_n(\pi^2) = \pi \sqrt{n^2-1}$, one has $\gamma_1(\pi^2) =
0$, and the operators $T_i(\pi^2)$ can be decomposed into two parts so
that
\begin{eqnarray}
\mu(v) &=& \bigl(v,v\bigr)^{-1}_{L_2(\Omega)} \biggl\{\bigl(\nabla v,\nabla v\bigr)_{L_2(\Omega)} +
2\sum\limits_{i=1}^M \frac{1}{a_i} \bigl(v, \sin(\pi y)\bigr)^2_{L_2(\Gamma_i)} \nonumber \\
& & + 2\pi \sum\limits_{n=2}^\infty \sqrt{n^2-1} \sum\limits_{i=1}^M \coth(\pi a_i\sqrt{n^2-1})
\bigl(v, \sin(\pi ny)\bigr)^2_{L_2(\Gamma_i)}\biggr\} .
\end{eqnarray}
If one finds a trial function $v\in H^1_0(\Omega)$ for which $\mu(v) <
\pi^2$, then the first eigenvalue $\mu_1(\pi^2)$ necessarily satisfies
this condition because $\mu_1(\pi^2) \leq \mu(v)$. The inequality
$\mu(v) <\pi^2$ is thus a sufficient (but not necessary) condition.
Given that $\coth(\pi a_i \sqrt{n^2-1}) \leq \coth(\pi a_i\sqrt{3})$
for any $n\geq 2$, the sufficient condition can be written as
\begin{equation}
\label{eq:cond}
\sum\limits_{i=1}^M \frac{\sigma_i}{a_i} < \beta - \sum\limits_{i=1}^M \kappa_i \coth(\pi a_i\sqrt{3}) ,
\end{equation}
where
\begin{eqnarray}
\beta &=& \pi^2 \bigl(v,v\bigr)_{L_2(\Omega)} - \bigl(\nabla v,\nabla v\bigr)_{L_2(\Omega)} , \\
\sigma_i &=& 2\bigl(v, \sin(\pi y)\bigr)^2_{L_2(\Gamma_i)} , \\
\label{eq:kappa_i}
\kappa_i &=& 2\pi \sum\limits_{n=2}^\infty \sqrt{n^2-1} ~\bigl(v, \sin(\pi n y)\bigr)^2_{L_2(\Gamma_i)} .
\end{eqnarray}
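The inequality (\ref{eq:cond}) is straightforward to evaluate once the coefficients are known. The following Python sketch (an illustration, not part of the analysis) encodes the condition for a set of branches; the coefficient values in the example are those of the L-shape trial function derived in Sec.~\ref{sec:examples}, namely $\beta = 1$, $\sigma_i = 1/2$, $\kappa_i = 0$.

```python
import math

def trapped_mode_sufficient(beta, sigmas, kappas, lengths):
    """Check the sufficient condition (eq:cond) for a trapped mode:
    sum(sigma_i / a_i) < beta - sum(kappa_i * coth(pi * a_i * sqrt(3)))."""
    coth = lambda t: math.cosh(t) / math.sinh(t)
    lhs = sum(s / a for s, a in zip(sigmas, lengths))
    rhs = beta - sum(k * coth(math.pi * a * math.sqrt(3))
                     for k, a in zip(kappas, lengths))
    return lhs < rhs

# L-shape coefficients (beta = 1, sigma_i = 1/2, kappa_i = 0):
# branches of length 2 satisfy the condition, length 1/2 does not.
assert trapped_mode_sufficient(1.0, [0.5, 0.5], [0.0, 0.0], [2.0, 2.0])
assert not trapped_mode_sufficient(1.0, [0.5, 0.5], [0.0, 0.0], [0.5, 0.5])
```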
Before moving to examples, several remarks are in order.
(i) The shape of the branches enters through the operator
$T_i(\lambda)$. Although the above analysis was presented for
rectangular branches, its extension to bounded domains in ${\mathbb R}^n$ with
general cylindrical branches is straightforward and based on the
variable separation (in directions parallel and perpendicular to the
branch). In fact, the Fourier coefficients $(u, \sin(\pi
ny))_{L_2(\Gamma_i)}$ on the unit interval (i.e., the cross-section of
the rectangular branch) have to be replaced by a spectral
decomposition over the orthonormal eigenfunctions
$\{\psi_n(y)\}_{n=1}^\infty$ of the Laplace operator $\Delta_\perp$ in
the cross-section $\Gamma_i$ of the studied branch (in general,
$\Gamma_i$ is a bounded domain in ${\mathbb R}^{n-1}$):
\begin{equation}
\label{eq:psi}
\Delta_\perp \psi_n + \nu_n \psi_n = 0 \quad \mathrm{in}~\Gamma_i, \qquad \psi_n|_{\partial \Gamma_i} = 0.
\end{equation}
In particular, the operator $T_i(\lambda)$ becomes
\begin{equation*}
T_i(\lambda) f = \sum\limits_{n=1}^\infty \gamma_n \coth(\gamma_n a_i) \bigl(f, \psi_n)_{L_2(\Gamma_i)} \psi_n(y) ,
\end{equation*}
with $\gamma_n = \sqrt{\nu_n - \lambda}$. Repeating the above
analysis, one immediately deduces a sufficient condition for getting a
trapped mode:
\begin{equation}
\label{eq:cond_general}
\sum\limits_{i=1}^M \frac{\sigma_i}{a_i} < \beta - \sum\limits_{i=1}^M \kappa_i \coth(a_i\sqrt{\nu_2 - \nu_1}) ,
\end{equation}
with
\begin{eqnarray}
\label{eq:beta_gen}
\beta &=& \nu_1 \bigl(v,v\bigr)_{L_2(\Omega)} - \bigl(\nabla v,\nabla v\bigr)_{L_2(\Omega)} , \\
\label{eq:sigma_gen}
\sigma_i &=& \bigl(v, \psi_1\bigr)^2_{L_2(\Gamma_i)} , \\
\label{eq:kappa_gen}
\kappa_i &=& \sum\limits_{n=2}^\infty \sqrt{\nu_n-\nu_1} ~\bigl(v, \psi_n\bigr)^2_{L_2(\Gamma_i)} .
\end{eqnarray}
One retrieves the above results for rectangular branches by putting
$\psi_n(y) = \sqrt{2}\sin(\pi n y)$ and $\nu_n = \pi^2 n^2$.
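Indeed, substituting $\psi_n(y) = \sqrt{2}\sin(\pi n y)$ and $\nu_n = \pi^2 n^2$
into Eqs. (\ref{eq:sigma_gen}) and (\ref{eq:kappa_gen}) gives
\begin{equation*}
\sigma_i = 2\bigl(v, \sin(\pi y)\bigr)^2_{L_2(\Gamma_i)} , \qquad
\kappa_i = 2\pi \sum\limits_{n=2}^\infty \sqrt{n^2-1} ~\bigl(v, \sin(\pi n y)\bigr)^2_{L_2(\Gamma_i)} , \qquad
\sqrt{\nu_2 - \nu_1} = \pi\sqrt{3} ,
\end{equation*}
which are precisely the coefficients (\ref{eq:cond})--(\ref{eq:kappa_i}).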
The inequality (\ref{eq:cond_general}) is the main result of the
paper. Although there is no explicit recipe for choosing the trial
function $v$ (which determines the coefficients $\beta$, $\sigma_i$
and $\kappa_i$), this is a general framework for studying the
localization (or trapping) in domains with cylindrical branches.
(ii) If the branches are long enough (e.g., $a_i \gg
(\nu_2-\nu_1)^{-1/2}$), the values $\coth (a_i \sqrt{\nu_2-\nu_1})$
are very close to $1$ and can be bounded by $1+\epsilon$, where the
small parameter $\epsilon$ is fixed by the expected minimal branch
length. The inequality (\ref{eq:cond_general}) then becomes more
explicit in terms of $a_i$:
\begin{equation}
\label{eq:cond2}
\sum\limits_{i=1}^M \frac{\sigma_i}{a_i} < \beta - (1+\epsilon)\sum\limits_{i=1}^M \kappa_i .
\end{equation}
In the particular case when all $\sigma_i$ are the same, one can
introduce the threshold value $\eta$ as
\begin{equation}
\label{eq:eta}
\sum\limits_{i=1}^M \frac{1}{a_i} < \eta , \qquad \eta \equiv \frac{\beta}{\sigma_1} -
\frac{(1+\epsilon)}{\sigma_1}\sum\limits_{i=1}^M \kappa_i .
\end{equation}
For domains with identical branches, $a_i = a$, the above condition
determines the branch length $a_{\rm th} = M/\eta$ which is long
enough for the emergence of localization. This means that for any $a
> a_{\rm th}$ there is a localized eigenmode. Since $a_{\rm th}$ was
obtained from the sufficient condition (\ref{eq:cond_general}), the
opposite statement is not true: for $a < a_{\rm th}$, this condition
does not indicate whether the eigenfunction is localized or not. In
fact, $a_{\rm th}$ is the upper bound for the minimal branch length
$a_{\rm min}$ which may distinguish waveguides with and without
localized modes (see Sec. \ref{sec:examples}).
(iii) The trial function should be chosen to ensure the convergence of
the series in (\ref{eq:kappa_gen}). If the boundary of $\Omega$ is
smooth, the series in (\ref{eq:kappa_gen}) converges for any function
$v$ from $H^1_0(\Omega)$ according to the trace theorem \cite{Lions}.
In turn, the presence of corners or other singularities may require
additional verifications for the convergence, as illustrated in
Sec. \ref{sec:strip}.
(iv) The extension to rectangular branches of different widths $b_i$
is relatively straightforward (e.g., $\sin(\pi ny)$ is replaced by
$\sin(\pi ny/b_i)$, etc.). In order to guarantee the
exponential decay in all branches, one needs $\lambda < \pi^2/b_i^2$
for all $i$, i.e., $\lambda < \pi^2/\max \{b_i^2\}$. Rescaling the
whole domain in such a way that $\max \{b_i\} = 1$, one can use the
above conditions.
(v) According to Eq. (\ref{eq:decay}), the decay rate, $2\gamma_1$, is
determined by the eigenvalue $\lambda$. Since $\mu(v)$ is an upper
bound for the first eigenvalue, one gets a lower bound for the decay
rate:
\begin{equation*}
2\gamma_1 \geq 2\sqrt{\nu_1 - \mu(v)} = 2(v,v)^{-1/2}_{L_2(\Omega)}
\left(\beta - \sum\limits_{i=1}^M [\sigma_i/a_i + \kappa_i \coth(a_i\sqrt{\nu_2-\nu_1})]\right)^{1/2} .
\end{equation*}
\section{ Several examples }
\label{sec:examples}
As we already mentioned, there is no general recipe for choosing a
trial function $v$ from $H^1_0(\Omega)$. Of course, the best possible
choice is the eigenfunction on which $\mu(v)$ reaches its minimum.
Except for a few cases, the eigenfunction is not known but one can often
guess how it behaves for a given basic domain. Since the gradient of
the trial function $v$ enters into the coefficient $\beta$ with a
minus sign, slowly varying functions are preferred. In what follows,
we illustrate these concepts for several examples.
\begin{figure}
\begin{center}
\includegraphics[width=150mm]{figure2.eps}
\end{center}
\caption{
Three types of a bent waveguide: (a) L-shape, (b) bent strip, and (c)
truncated L-shapes parameterized by the length $\ell$ varying from $0$
(triangular basic domain) to $1$ (the original L-shape). }
\label{fig:bent}
\end{figure}
\subsection{ L-shape }
We start with the classical problem of localization in the L-shape
with two rectangular branches of lengths $a_1$ and $a_2$
(Fig. \ref{fig:bent}a), for which the basic domain is simply the unit
square. In the limiting case $a_1 = a_2 = 0$ (i.e., $D = \Omega$,
without branches), the first
eigenvalue $\lambda_1 = 2\pi^2 > \pi^2$ so that, according to our
terminology, there is no localization. Since $\lambda_1$ continuously
varies with $a$ ($a_1 = a_2 = a$), the inequality $\lambda_1 > \pi^2$
also remains true for relatively short branches. In turn, given that
$\lambda_1 < \pi^2$ for infinitely long branches, there should exist
the minimal branch length $a_{\rm min}$ such that $\lambda_1 = \pi^2$.
At this length, the first eigenfunction passes from non-localized
state ($a < a_{\rm min}$) to localized state ($a > a_{\rm min}$). In
what follows, we employ the sufficient condition (\ref{eq:cond}) in
order to get the upper bound for $a_{\rm min}$.
The most intuitive choice for the trial function would be the first
eigenfunction for the unit square with Dirichlet boundary condition,
$v(x,y) = \sin(\pi x)\sin(\pi y)$. However, one easily checks that
$\beta = -\pi^2 \bigl(v,v\bigr)_{L_2(\Omega)} < 0$ for this function
(since $v$ is the Dirichlet eigenfunction of $\Omega$ with eigenvalue
$2\pi^2$, so that $(\nabla v,\nabla v)_{L_2(\Omega)} = 2\pi^2
(v,v)_{L_2(\Omega)}$), while the left-hand side of (\ref{eq:cond}) and
the coefficients $\kappa_i$ are non-negative. As a consequence, the
condition (\ref{eq:cond}) is never satisfied for this trial function.
It simply means that the first choice was wrong.
For the trial function
\begin{equation}
\label{eq:trial_L-shape}
v(x,y) = (1+x) \sin(\pi y) + (1+y) \sin(\pi x) ,
\end{equation}
the explicit integration yields
\begin{eqnarray*}
&& \bigl(v,v\bigr)_{L_2(\Omega)} = \int\limits_{-1}^0 dx \int\limits_{-1}^0 dy \bigl[(1+x) \sin(\pi y)
+ (1+y) \sin(\pi x)\bigr]^2 = \frac{1}{3} + \frac{2}{\pi^2} , \\
&& \bigl(\nabla v,\nabla v\bigr)_{L_2(\Omega)} = \int\limits_{-1}^0 dx \int\limits_{-1}^0 dy
\biggl[\biggl(\frac{\partial v}{\partial x}\biggr)^2 + \biggl(\frac{\partial v}{\partial y}\biggr)^2\biggr] = \frac{\pi^2}{3} + 1 , \\
&& \bigl(v, \sin(\pi y)\bigr)_{L_2(\Gamma_1)} = \int\limits_{-1}^0 dy \bigl[(1+0) \sin(\pi y) + (1+y) \sin(\pi 0)\bigr] \sin (\pi y) = \frac12 , \\
&& \bigl(v, \sin(\pi n y)\bigr)_{L_2(\Gamma_1)} = \bigl(v, \sin(\pi n x)\bigr)_{L_2(\Gamma_2)} = 0 \qquad (n\geq 2) .
\end{eqnarray*}
From these values, one gets
\begin{equation*}
\beta = 1, \quad \sigma_1 = \sigma_2 = \frac12, \quad \kappa_1 = \kappa_2 = 0 .
\end{equation*}
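These closed-form values can be cross-checked numerically. The Python sketch below is purely illustrative: it applies a simple midpoint quadrature rule on $\Omega = (-1,0)^2$ to the trial function (\ref{eq:trial_L-shape}) and reproduces $\beta = 1$ and $\sigma_1 = 1/2$.

```python
import math

# Midpoint-rule quadrature on Omega = (-1,0)^2 for the trial function
# (eq:trial_L-shape); N is the number of grid cells per direction.
N = 400
h = 1.0 / N
xs = [-1.0 + (i + 0.5) * h for i in range(N)]

v  = lambda x, y: (1 + x) * math.sin(math.pi * y) + (1 + y) * math.sin(math.pi * x)
vx = lambda x, y: math.sin(math.pi * y) + math.pi * (1 + y) * math.cos(math.pi * x)
vy = lambda x, y: math.pi * (1 + x) * math.cos(math.pi * y) + math.sin(math.pi * x)

vv   = sum(v(x, y)**2 for x in xs for y in xs) * h * h                 # 1/3 + 2/pi^2
grad = sum(vx(x, y)**2 + vy(x, y)**2 for x in xs for y in xs) * h * h  # pi^2/3 + 1
beta = math.pi**2 * vv - grad

# Boundary term on Gamma_1 (at x = 0), where v(0, y) = sin(pi*y)
s = sum(v(0.0, y) * math.sin(math.pi * y) for y in xs) * h
sigma1 = 2 * s**2

assert abs(beta - 1.0) < 1e-2 and abs(sigma1 - 0.5) < 1e-2
```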
The condition (\ref{eq:cond}) reads as
\begin{equation}
\label{eq:cond_L-shape}
\frac{1}{a_1} + \frac{1}{a_2} < 2 .
\end{equation}
If the branches have the same length, $a_1 = a_2 = a$, then the upper
bound of the minimal branch length for getting a localized
eigenfunction is given by $a_{\rm th} = 1$.
\begin{figure}
\begin{center}
\includegraphics[width=120mm]{figure3.eps}
\end{center}
\caption{
The first eigenvalue $\lambda_1$ divided by $\pi^2$, as a function of
the branch length $a$ ($a_1 = a_2 = a$), for three bent waveguides
shown on Fig. \ref{fig:bent}: L-shape (solid line), bent strip (dashed
line) and truncated L-shape with $\ell = 0$ (dash-dotted line). For
the first two cases, the curves cross the level $1$ at $a_{\rm
min}\approx 0.84$ and $a_{\rm min}\approx 2.44$, respectively. In
turn, the third curve always remains greater than $1$ (see
explanations in Sec. \ref{sec:triangular}). For $a = 0$, $\lambda_1$
is respectively equal to $2\pi^2$, $4\alpha^2$ and $5\pi^2$, where
$\alpha \approx 2.4048$ is the first positive zero of the Bessel
function $J_0(z)$. }
\label{fig:lambda}
\end{figure}
We also solved the original eigenvalue problem (\ref{1}) for L-shape
with $a_1 = a_2 = a$ by a finite element method (FEM) implemented in
Matlab PDEtools. The first eigenvalue $\lambda_1$ as a function of
the branch length $a$ is shown by solid line on Fig. \ref{fig:lambda}.
One can clearly see a transition from non-localized ($\lambda_1 >
\pi^2$) to localized ($\lambda_1 < \pi^2$) states when $a$ crosses
the minimal branch length $a_{\rm min} \approx 0.84$. As expected,
the theoretical upper bound $a_{\rm th}$, which was obtained from a
{\it sufficient} condition, exceeds the numerical value $a_{\rm min}$.
In order to improve the theoretical estimate, one has to search for
trial functions which are closer to the true eigenfunction. At the
same time, $a_{\rm th}$ and $a_{\rm min}$ are close to each other, and
the accuracy of the theoretical result is judged as good. Similar
results for L-shape in three dimensions are derived in
\ref{sec:Lshape_3D}.
\subsection{ Cross }
Another example is a crossing of two perpendicular rectangular
branches (Fig. \ref{fig:cross}a), for which the basic domain is again
the unit square. Since the trial function (\ref{eq:trial_L-shape})
also satisfies the boundary condition for this problem, the previous
sufficient condition (\ref{eq:cond_L-shape}) remains applicable for
arbitrary lengths $a_3$ and $a_4$. This is not surprising because any
increase of the basic domain (here, viewing the cross as an L-shape
whose basic domain is enlarged by the branches $Q_3$ and $Q_4$)
decreases the
eigenvalue. A symmetry argument implies that other consecutive pairs
of branch lengths can be used in the condition
(\ref{eq:cond_L-shape}), e.g., the eigenfunction is localized if
$1/a_2 + 1/a_3 < 2$ for arbitrary $a_1$ and $a_4$. In turn, the
condition $1/a_1 + 1/a_3 < 2$ is not sufficient for localization (in
fact, taking $a_2 = a_4 = 0$ yields a rectangle without localization).
The specific feature of the cross is that the exterior boundary of the
basic domain $\Omega$ consists of 4 corner points. We suggest another
trial function
\begin{equation}
v(x,y) = x(1+x) + y(1+y) ,
\end{equation}
which satisfies the Dirichlet boundary condition at these points. The
direct integration yields
\begin{eqnarray*}
\beta &=& \frac{11}{90}\pi^2 - \frac{2}{3}, \qquad \sigma_i = \frac{64}{\pi^6} , \\
\kappa_i &=& 2\pi \sum\limits_{n=2}^\infty \sqrt{n^2-1} \left(2\frac{1-(-1)^n}{\pi^3 n^3}\right)^2 \approx 0.0029 .
\end{eqnarray*}
The condition (\ref{eq:cond}) reads now as
\begin{equation}
\label{eq:cross_cond}
\sum\limits_{i=1}^4 \frac{1}{a_i} < \frac{\beta}{\sigma_1} - \frac{\kappa_1}{\sigma_1} \sum\limits_{i=1}^4 \coth(\pi a_i \sqrt{3}) .
\end{equation}
If all the branches have the same length $a$, the upper bound of the
minimal branch length can be estimated by solving the equation
\begin{equation*}
\frac{4}{a_{\rm th}} = \frac{\beta}{\sigma_1} - \frac{4\kappa_1}{\sigma_1} \coth(\pi a_{\rm th}\sqrt{3}) ,
\end{equation*}
from which one gets $a_{\rm th}\approx 0.2407$. Note that this result
proves and further extends the prediction of localized eigenmodes in
the crossing of infinite rectangular strips which was made by Schult
{\it et al.} by numerical computation \cite{Schult89}. In that
reference, the importance of localized electron eigenstates in four
terminal junctions of quantum wires was discussed.
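From a computational standpoint, the transcendental equation for $a_{\rm th}$ is easily solved by bisection. The Python sketch below is an illustration with placeholder coefficients ($\beta$, $\sigma_1$, $\kappa_1$ and $M$ chosen arbitrarily, not the values computed above); only the form of the equation is taken from the text.

```python
import math

def threshold_length(beta, sigma, kappa, M, lo=1e-6, hi=100.0):
    """Solve M/a = beta/sigma - (M*kappa/sigma) * coth(pi*a*sqrt(3))
    for the threshold length a_th by bisection (identical branches)."""
    coth = lambda t: math.cosh(t) / math.sinh(t)
    f = lambda a: (M / a - beta / sigma
                   + (M * kappa / sigma) * coth(math.pi * a * math.sqrt(3)))
    # f(a) -> +inf as a -> 0+ and f(a) < 0 for large a, so a root exists.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Placeholder coefficients for two identical branches (M = 2).
a_th = threshold_length(beta=1.0, sigma=0.5, kappa=0.01, M=2)
assert 0 < a_th < 50
```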
\begin{figure}
\begin{center}
\includegraphics[width=140mm]{figure4.eps}
\end{center}
\caption{
(a) Crossing of two rectangular branches; (b) an extension of the
related basic domain $\Omega$; and (c) coupling between two waveguides
from Fig. \ref{fig:bent}c ($\ell = 0$) with an opening of size
$\varepsilon$. }
\label{fig:cross}
\end{figure}
Figure \ref{fig:cross_eigen} presents the first eigenfunctions for the
crossing of two rectangular branches with $a_i = 5$ (the second
eigenfunction, which looks similar to the third one, is not shown).
As predicted by the sufficient condition (\ref{eq:cross_cond}), the
first eigenfunction (with $\lambda_1 \approx 0.66\pi^2$) is clearly
localized in the basic domain and exponentially decaying in the
branches. In turn, all other eigenfunctions (with $\lambda_n >
\pi^2$) are not localized.
It is worth noting again that any increase of the basic domain (see
Fig. \ref{fig:cross}b) reduces the eigenvalue and thus favors the
localization.
\begin{figure}
\begin{center}
\includegraphics[width=120mm]{figure5.eps}
\end{center}
\caption{
First eigenfunctions for the crossing of two rectangular branches
($a_i = 5$). The associated eigenvalues are $\lambda_1 \approx
0.661\pi^2$, $\lambda_2 = \lambda_3 \approx 1.032
\pi^2$, $\lambda_4 \approx 1.036\pi^2$ and $\lambda_5\approx
1.044\pi^2$.}
\label{fig:cross_eigen}
\end{figure}
\subsection{ Bent strip }
\label{sec:strip}
In the previous examples, the basic domain was the unit square. We
consider another shape for which the analytical estimates can be
significantly advanced. This is a sector of the unit disk which can
be seen as a connector between two parts of a bent strip
(Fig. \ref{fig:bent}b). In contrast to the case of infinite stripes
for which Goldstone and Jaffe have proved the existence of a localized
eigenmode for any bending (except the straight strip)
\cite{Goldstone92}, there is a minimal branch length required for the
existence of a localized eigenmode in a finite bent strip. In order
to demonstrate this result, we consider the family of trial functions
\begin{equation}
v_\alpha(r) = \frac{\sin \pi r}{r^\alpha} \qquad (0 < \alpha < 1) .
\end{equation}
In \ref{sec:bent}, we derive Eqs. (\ref{eq:beta_bent},
\ref{eq:sigma_bent}, \ref{eq:kappa_bent}) for the coefficients
$\beta$, $\sigma_i$, and $\kappa_i$, respectively. Since all these
coefficients depend on $\alpha$, its variation can be used to maximize
the threshold $\eta$ given by Eq. (\ref{eq:eta}). The numerical
computation of these coefficients suggests that $\eta$ is maximized
for $\alpha$ around $1/3$: $\eta \approx 0.7154$. If $a_1 = a_2 = a$,
one gets the upper bound $a_{\rm th}$ of the minimal branch length
which ensures the emergence of the localized eigenfunction:
\begin{equation*}
a > a_{\rm th} = \frac{2}{\eta} \approx 2.7956 .
\end{equation*}
We recall that this is a sufficient, but not a necessary, condition.  The
numerical computation of the first eigenvalue in the bent strip (by
FEM implemented in Matlab PDEtools) yields $a_{\rm min} \approx 2.44$.
One can see that the upper bound $a_{\rm th}$ is relatively close to
this value. The behavior of the eigenvalue $\lambda_1$ as a function
of the branch length $a$ is shown by dashed line on
Fig. \ref{fig:lambda}.
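The numbers above can be cross-checked directly; a minimal sketch using only the reported values $\eta \approx 0.7154$ (trial function with $\alpha \approx 1/3$) and the FEM result $a_{\rm min} \approx 2.44$:

```python
# Reported maximum of the threshold eta for the trial-function bound
eta = 0.7154
# Sufficient branch length a_th = 2/eta when a_1 = a_2 = a
a_th = 2 / eta
# Minimal length found numerically by FEM (Matlab PDEtools)
a_min = 2.44
print(round(a_th, 4), a_th > a_min)  # 2.7956 True
```

This confirms that the sufficient length $a_{\rm th}$ is indeed an upper bound on the numerically observed $a_{\rm min}$.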
\subsection{ Waveguide without localization }
\label{sec:triangular}
Any increase of the basic domain $\Omega$ reduces the eigenvalues and
thus preserves the localization. In turn, a decrease of $\Omega$ may
suppress the trapped mode.  For instance, the passage from the L-shape
($\Omega$ being the unit square) to the bent strip ($\Omega$ being the
quarter of the disk) led to a larger minimal length required for keeping
the localization ($a_{\rm min}\approx 2.44$ instead of $a_{\rm min}
\approx 0.84$).  In particular, when $a_1 = a_2 = 2$, the first
eigenfunction, which was localized in the L-shape, is not localized in
the bent strip (Fig. \ref{fig:bent_eigenf_small}). Further decrease
of the basic domain $\Omega$ may completely suppress the localization.
\begin{figure}
\begin{center}
\includegraphics[width=100mm]{figure6.eps}
\end{center}
\caption{
The first eigenfunction for three bent waveguides shown on
Fig. \ref{fig:bent} ($\ell = 0$), with $a = 2$.  The associated
eigenvalue $\lambda_1$ is equal to $0.9357 \pi^2$, $1.0086 \pi^2$ and
$1.1435\pi^2$, respectively.  Although only the first eigenmode is
localized, all three eigenfunctions look visually similar.}
\label{fig:bent_eigenf_small}
\end{figure}
In order to illustrate this point, we consider the truncated L-shape
shown on Fig. \ref{fig:bent}c with $\ell = 0$ for which
\begin{equation*}
\Omega = \{ (x,y)\in{\mathbb R}^2~:~ -1 < x < 0,~ -1 < y <0, ~ x+y > - 1 \}
\end{equation*}
is a triangle. It is easy to see that $u(x,y) = \cos(\pi x) +
\cos(\pi y)$ is the first eigenfunction of the following eigenvalue
problem in $\Omega$:
\begin{equation*}
- \Delta u = \tilde{\mu} u \quad \mathrm{in}~\Omega, \qquad u|_{\Gamma} = 0, \qquad \frac{\partial u}{\partial n}|_{\Gamma_i} = 0 ,
\end{equation*}
with the eigenvalue $\tilde{\mu}_1 =\pi^2$. From the variational
principle
\begin{equation*}
\tilde{\mu}_1 = \inf \limits_{v \in H^1_0(\Omega), v \ne 0}
\frac{\bigl(\nabla v, \nabla v\bigr)_{L_2(\Omega)}}{\bigl(v, v\bigr)_{L_2(\Omega)}} ,
\end{equation*}
it follows that
\begin{equation*}
\bigl(\nabla v, \nabla v\bigr)_{L_2(\Omega)} \geq \tilde{\mu}_1 \bigl(v, v\bigr)_{L_2(\Omega)}
= \pi^2 \bigl(v, v\bigr)_{L_2(\Omega)} \quad \forall v \in H^1_0(\Omega) .
\end{equation*}
Moreover, the Friedrichs-Poincar\'e inequality in the branches $Q_i$
implies \cite{Lions}
\begin{equation*}
\bigl(\nabla v, \nabla v\bigr)_{L_2(Q_i)} \geq \pi^2 \bigl(v,v\bigr)_{L_2(Q_i)} \quad \forall v \in H^1_0(Q_i) ,
\end{equation*}
from which
\begin{equation*}
\bigl(\nabla v, \nabla v\bigr)_{L_2(D)} \geq \pi^2 \bigl(v,v\bigr)_{L_2(D)} \quad \forall v \in H^1_0(D) .
\end{equation*}
As a consequence, all eigenvalues of the original eigenvalue problem
(\ref{1}) in $D$ exceed $\pi^2$ and the corresponding eigenfunctions
are not localized in the basic domain $\Omega$, whatever the length of
the branches.
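The key ingredient of this argument, namely that $u(x,y)=\cos(\pi x)+\cos(\pi y)$ solves the mixed eigenvalue problem with $\tilde{\mu}_1=\pi^2$, can be verified numerically; a minimal finite-difference sketch (the sample point and step size are arbitrary choices):

```python
import math

def u(x, y):
    return math.cos(math.pi * x) + math.cos(math.pi * y)

# Five-point Laplacian at an interior point of the triangle
h, x0, y0 = 1e-4, -0.3, -0.4
lap = (u(x0 + h, y0) + u(x0 - h, y0) + u(x0, y0 + h) + u(x0, y0 - h)
       - 4 * u(x0, y0)) / h**2

# Eigenvalue equation: -Delta u = pi^2 u
print(abs(-lap - math.pi**2 * u(x0, y0)) < 1e-5)   # True
# Dirichlet condition: u = 0 on the hypotenuse x + y = -1
print(abs(u(-0.25, -0.75)) < 1e-12)                # True
```

The Neumann condition on the interfaces $x=0$ and $y=0$ holds analytically, since $\partial u/\partial x = -\pi\sin(\pi x)$ vanishes at $x=0$ (and symmetrically in $y$).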
When one varies continuously $\ell$ in Fig. \ref{fig:bent}c, the basic
domain transforms from the unit square (Fig. \ref{fig:bent}a) to
triangle, so that one can get any prescribed minimal length $a_{\rm
min}$ between $0.84$ and infinity.  In other words, for any prescribed
branch length, one can always design such a basic domain (such $\ell$)
for which there are no localized eigenmodes.  As a consequence, the
localization may be very sensitive to the shape of the basic domain
and to the length of branches.  These effects, which were overlooked for
infinite waveguides, may be important for microelectronic
applications.
Figure \ref{fig:bent_eigenf} shows the first eigenfunction for three
bent waveguides from Fig. \ref{fig:bent} with $a = 20$.  The associated
eigenvalue $\lambda_1$ is equal to $0.9302 \pi^2$, $0.9879 \pi^2$ and
$1.0032 \pi^2$, respectively. Although the last two values are very
close to each other, the behavior of the associated eigenfunctions is
completely different. According to the sufficient condition, the
first two eigenfunctions are localized in the basic domain, while the
last one is not. One can clearly distinguish these behaviors on
Fig. \ref{fig:bent_eigenf}.
\begin{figure}
\begin{center}
\includegraphics[width=100mm]{figure7.eps}
\end{center}
\caption{
The first eigenfunction for three bent waveguides shown on
Fig. \ref{fig:bent} ($\ell = 0$), with $a = 20$.  The associated
eigenvalue $\lambda_1$ is equal to $0.9302 \pi^2$, $0.9879 \pi^2$ and
$1.0032 \pi^2$, respectively. Although the last two values are very
close to each other, the behavior of the eigenfunctions is completely
different.}
\label{fig:bent_eigenf}
\end{figure}
\subsection{ Two coupled waveguides }
The coupling of infinite waveguides has been intensively investigated
\cite{Exner96}. We consider a coupling of two finite crossing
waveguides through an opening of variable size $\varepsilon$ as shown on
Fig. \ref{fig:cross}c. When $\varepsilon = 0$, one has two decoupled
waveguides from Fig. \ref{fig:bent}c for which we proved in the
previous subsection the absence of localized eigenmodes. When $\varepsilon =
\sqrt{2}$, there is no barrier and the waveguides are fully coupled.
This is the case of crossing between two rectangular branches as shown
on Fig. \ref{fig:cross}a, for which we established the existence of
localized eigenmodes under weak conditions (\ref{eq:cond_L-shape}) or
(\ref{eq:cross_cond}). Varying the opening $\varepsilon$ from $0$ to
$\sqrt{2}$, one can continuously pass from one situation to the other.
This transition is illustrated on Fig. \ref{fig:cross_eigenf_eps}
which presents the first eigenfunction for two coupled waveguides
shown on Fig. \ref{fig:cross}c, with $a_i = 5$ and different coupling
(opening $\varepsilon$). The first two eigenfunctions, with $\varepsilon = 0$ (fully
separated waveguides) and $\varepsilon = 0.4 \sqrt{2}$ (opening $40\%$), are
not localized, while the last two eigenfunctions, with $\varepsilon = 0.5
\sqrt{2}$ (opening $50\%$) and $\varepsilon =
\sqrt{2}$ (fully coupled waveguides, i.e. a cross), are localized.
The critical coupling $\varepsilon_c$, at which the transition occurs (i.e.,
for which $\lambda_1 = \pi^2$) lies between $40\%$ and $50\%$.
Although numerical computation may allow one to estimate $\varepsilon_c$ more
accurately, we do not perform this analysis because the value $\varepsilon_c$
anyway depends on the branch lengths. In general, for any $a > a_{\rm
min} \approx 0.84$, there is a critical value $\varepsilon_c(a)$ for which
$\lambda_1 = \pi^2$.  For $\varepsilon < \varepsilon_c$, there are no localized modes,
while for $\varepsilon > \varepsilon_c$ there is at least one localized mode. The
high sensitivity of the localization character to the opening $\varepsilon$
and to the branch lengths can potentially be employed in quantum
switching devices (see \cite{Timp88} and references therein).
\begin{figure}
\begin{center}
\includegraphics[width=130mm]{figure8.eps}
\end{center}
\caption{
The first eigenfunction for two coupled waveguides shown on
Fig. \ref{fig:cross}c, with $a_i = 5$ and different coupling (opening
$\varepsilon$): $\varepsilon = 0$ (fully separated waveguides, zero coupling), $\varepsilon =
0.4 \sqrt{2}$ (opening $40\%$ of the diagonal), $\varepsilon = 0.5 \sqrt{2}$
(opening $50\%$ of the diagonal) and $\varepsilon = \sqrt{2}$ (fully coupled
waveguides, no barrier).  The associated eigenvalue $\lambda_1$ is
equal to $1.05\pi^2$, $1.02\pi^2$, $0.97\pi^2$, and $0.67\pi^2$,
respectively.  In the first two cases, the eigenmodes are not
localized. Changing the opening $\varepsilon$, one passes from non-localized
to localized eigenmodes. }
\label{fig:cross_eigenf_eps}
\end{figure}
\section*{Conclusion}
We investigated the problem of trapped or localized eigenmodes of the
Laplace operator in resonators with long branches that we called
``finite waveguides''. In this context, the localization was
understood as an exponential decay of an eigenfunction inside the
branches. This behavior was related to the smallness of the
associated eigenvalue $\lambda$ in comparison to the first eigenvalue
of the Laplace operator in the cross-section of the branch with
Dirichlet boundary condition. Using the explicit representation of an
eigenfunction in branches, we proposed a general variational formalism
for checking the existence of localized modes. The main result of the
paper is the sufficient condition (\ref{eq:cond_general}) on the
branch lengths for getting a trapped mode. In spite of the generality
of the formalism, a practical use of the sufficient condition relies
on an intuitive choice of the trial function in the basic domain
(without branches). This function should be as close as possible to
the (unknown) eigenfunction. Although there is no general recipe for
choosing trial functions, one can often guess an appropriate choice,
at least for relatively simple domains.
These points were illustrated for several typical waveguides,
including 2D and 3D L-shapes, crossing of the rectangular strips, and
bent strips.  For all these cases, the basic domain was simple enough
to guess an appropriate trial function in order to derive an explicit
sufficient condition for getting at least one localized mode. In
particular, we obtained the upper bound of the minimal branch length
which is sufficient for localization. We proved the existence of a
trapped mode in the finite L-shape, bent strip and cross of two strips
provided that their branches are long enough, with an accurate
estimate on the required minimal length. These results were confirmed
by a direct numerical resolution of the original eigenvalue problem by
finite element method implemented in Matlab PDEtools. The presented
method can be applied for studying the localization in many other
waveguides, e.g. smooth bent strip \cite{Olendski10}, sharply bent
strip \cite{Carini93} or Z-shapes \cite{Carini97}.
It is worth emphasizing that the distinction between localized and
non-localized modes is much sharper in infinite waveguides than in
finite ones. Although by definition the localized eigenfunction in a
finite waveguide decays exponentially, the decay rate may be
arbitrarily small. If the branch is not long enough, the localized
mode may be visually indistinguishable from a non-localized one, as
illustrated on Fig. \ref{fig:bent_eigenf_small}. In turn, the
distinction between localized and non-localized modes in infinite
waveguides is always present, whatever the value of the decay rate.
The main practical result is an explicit construction of two families
of waveguides (truncated L-shapes on Fig. \ref{fig:bent}c and coupled
waveguides on Fig. \ref{fig:cross}c), for which the minimal branch
length $a_{\rm min}$ for getting a trapped mode continuously depends
on the parameter $\ell$ or $\varepsilon$ of the basic domain. For any
prescribed (long enough) branch length, one can thus construct two
almost identical finite waveguides, one with and the other without a
trapped mode. The high sensitivity of the localization character to
the shape of the basic domain and to the length of branches may
potentially be used for switching devices in microelectronics.
\section*{Acknowledgments}
This work has been partly supported by the RFBR N 09-01-00408a grant
and the ANR grant ``SAMOVAR''.
\section{Introduction}
\noindent Most real-world data is multidimensional, i.e. it is a function of several independent variables, and is typically represented by a multidimensional array of numbers. These arrays are often referred to as tensors \cite{de2000multilinear}. For instance, a color image is a third-order tensor
defined by two indices for spatial variables and one index
for color mode. Similarly, a video comprised of color images is a
fourth-order tensor, with time as the fourth dimension in addition to the spatial and spectral ones.
Recently, tensors have received attention in the machine learning community, where given a collection of training tensors $\mathcal{Y} \in \mathbb{R}^{I_1\times\dots\times I_N\times K\times C}$ from $C$ classes each with $K$ samples, the goal is to extract low-dimensional features for subsequent classification tasks. Vectorizing high-dimensional inputs may result in poor classification performance due to overfitting when the training sample size is small relative to the feature vector dimension \cite{sidiropoulos2017tensor,li2014multilinear,tao2005supervised,tao2007general,yan2006multilinear}. For this reason, a variety of supervised tensor learning methods for feature extraction, selection, regression and classification have been proposed \cite{lu2008mpca,tao2007general,kotsia2012higher,hao2013linear,li2014multilinear,guo2016support}. Most of the existing work has utilized Tucker decomposition. However, for larger tensors, the Tucker representation can be exponential in storage requirements \cite{wang2019principal,chaghazardi2017sample}.
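As a rough illustration of this storage gap, one can count parameters for both models with hypothetical, equal mode sizes $I$ and ranks $R$ (the specific numbers below are illustrative assumptions, not taken from the experiments):

```python
# Illustrative parameter counts for an order-N tensor with all mode
# sizes equal to I and all ranks equal to R (hypothetical numbers)
N, I, R = 8, 10, 4
# Tucker: N factor matrices (I x R) plus a full R^N core
tucker = N * I * R + R**N
# Tensor-train: a chain of 3-mode cores (R_{n-1} x I x R_n), linear in N
tt = 1 * I * R + (N - 2) * R * I * R + R * I * R
print(tucker, tt)  # 65856 1160
```

The Tucker core alone grows as $R^N$, while the TT cores scale as $O(NIR^2)$.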
In order to address this issue of exponential storage requirements and high computational complexity, in this paper, we introduce a supervised subspace learning approach based on the tensor-train (TT) structure. In particular, we present a discriminant subspace learning approach using the TT model, namely the Tensor-Train Discriminant Analysis (TTDA). The proposed approach is based on linear discriminant analysis (LDA) and learns a tensor-train subspace (TT-subspace) \cite{wang2018tensor,wang2019principal} that maximizes the linear discriminant function. Although this approach provides an efficient structure for storing the learnt subspaces, it is computationally prohibitive. For this reason, we propose a multi-branch tensor network structure and develop computationally efficient, low storage complexity implementations of TTDA.
\subsection{Related Work}
The proposed work builds on two fundamental lines of research: 1) Linear supervised learning methods for tensors and 2) Tensor-Train subspace learning methods. In the area of supervised tensor learning, methods to learn discriminant subspaces from a set of labelled training examples have been proposed. These include extensions of Linear Discriminant Analysis (LDA) to Multilinear Discriminant Analysis (MDA) for face and gait recognition \cite{yan2006multilinear,tao2007general,li2014multilinear}; Discriminant Non-negative Tensor Factorization (DNTF) \cite{zafeiriou2009discriminant}; Supervised Tensor Learning (STL) where one projection vector along each mode of a tensor is learnt \cite{tao2005supervised,he2014dusk}. More recently, the linear regression model has been extended to tensors to learn multilinear mappings from a tensorial input space to a continuous output space \cite{guo2011tensor,yu2016learning}. Finally, a framework for tensor-based linear large margin classification was formulated as Support Tensor Machines (STMs), in which the parameters defining the separating hyperplane form a tensor \cite{kotsia2012higher,hao2013linear,guo2016support}. However, almost all of these methods are based on Tucker decomposition. For large tensors, these representations are computationally expensive and their storage requirements grow exponentially \cite{cichocki2017tensor,chaghazardi2017sample}.
In \cite{holtz2012manifolds,cichocki2016tensor}, it was shown that tensor-train representation can address these shortcomings. Tensor networks are factorizations of very large tensors into networks of smaller tensors with applications in applied mathematics, physics and machine learning \cite{orus2014practical}. The matrix product state (MPS) or tensor-train is one of the best understood tensor networks for which efficient algorithms have been developed \cite{oseledets2011tensor,oseledets2010approximation}. TT is a special case of a tensor network where a tensor with $N$ indices is factorized into a chain-like product of low-rank, three-mode tensors. This model provides better compression than Tucker models, especially for higher order tensors \cite{cichocki2014era}. Even though early applications of TT decomposition focused on compression and dimensionality reduction \cite{oseledets2010approximation,oseledets2011tensor}, more recently TT has been used in machine learning applications. In \cite{bengua2017matrix}, MPS is implemented in an unsupervised manner to first compress the tensor of training samples and then the resulting lower dimensional core tensors are used as features for subsequent classification. In \cite{wang2019principal}, TT decomposition is associated with a structured subspace model, namely the tensor-train subspace. Learning this structured subspace from training data is posed as a non-convex problem referred to as TT-PCA. Once the subspaces are learnt from the training data, the resulting low-dimensional subspaces are used to project and classify the test data. \cite{wang2018tensor} extends TT-PCA to manifold learning by proposing a tensor-train neighborhood preserving embedding (TTNPE). The classification is conducted by first learning a set of tensor subspaces from the training data and then projecting the training and testing data onto the learnt subspaces. 
Apart from employing TT for subspace learning, recent work has also considered the use of TT in classifier design. In \cite{chen2019support}, a support tensor train machine (STTM) is introduced to replace the rank-1 weight tensor in Support Tensor Machine (STM) \cite{tao2005supervised} by a tensor-train that can approximate any tensor with a scalable number of parameters.
\subsection{Contributions of the Proposed Work}
The contributions of the proposed work can be summarized as follows:
\begin{itemize}
\item This paper is the first that uses tensor-train decomposition to formulate LDA for supervised subspace learning.
Unlike recent work on TT-subspace learning \cite{bengua2017efficient,bengua2017matrix,wang2019principal,wang2018tensor} which focuses on dimensionality reduction for feature extraction, the proposed work learns discriminative TT-subspaces and uses them to extract features that will optimize the linear discriminant function.
\item A computationally efficient way to implement tensor-train decomposition is presented. The proposed multi-branch structure is akin to a hybrid between tensor-train and Tucker decompositions using the flexibility of tensor networks. This structure is not limited to LDA as it can also be utilized within other subspace learning tasks, e.g. PCA. A convergence analysis for the proposed algorithm to solve the resulting non-convex optimization problem is also provided.
\item A theoretical analysis of storage and computational complexity of this new framework is presented. A method to find the optimal implementation of the multi-branch TT model given the dimensions of the input tensor is also given.
\item The proposed method provides higher classification accuracy at a reduced storage complexity and reduces the computational complexity by a factor of $10^2$ especially at high compression ratios compared to Tucker based supervised learning methods. Moreover, the proposed method is able to learn more discriminative subspaces from a small number of training samples compared to MDA.
\end{itemize}
The rest of the paper is organized as follows. In Section \ref{sec:back}, we provide background on tensor operations, TT and Tucker decomposition, LDA and MDA. In Section \ref{sec:ttda}, we introduce an optimization problem to learn the TT-subspace structure that maximizes the linear discriminant function. In Section \ref{sec:mbttda}, we introduce multi-branch implementations of TTDA to address the issue of high computational complexity. In Section \ref{sec:compC}, we provide an analysis of storage cost, computational complexity and convergence for the proposed algorithms. We also provide a procedure to determine the optimal TT structure for minimizing storage complexity. In Section \ref{sec:Exp}, we compare the proposed methods with state-of-the-art tensor based discriminant analysis and subspace learning methods for classification applications.
\section{Background}
\label{sec:back}
\noindent Let $\mathcal{Y} \in \mathbb{R}^{I_1\times\dots\times I_N\times K\times C}$ be the collection of training tensors. For a given $\mathcal{Y}$ with $C$ classes and $K$ samples per class, define $\mathcal{Y}_c^k\in \mathbb{R}^{I_1 \times I_2 \times\dots\times I_N }$ as the sample tensors, where $c\in\{1,\dots,C\}$ is the class index and $k\in\{1,\dots,K\}$ is the sample index.
\subsection{Notation}
\noindent \textbf{Definition 1.} (Vectorization, Matricization and Reshaping) $\mathbf{V}(.)$ is a vectorization operator such that $\mathbf{V}(\mathcal{Y}_c^k)\in \mathbb{R}^{I_1I_2\dots I_N\times1}$. $\mathbf{T}_n(.)$ is a tensor-to-matrix reshaping operator defined as $\mathbf{T}_n(\mathcal{Y}_c^k)\in \mathbb{R}^{I_1\dots I_n\times I_{n+1}\dots I_N}$, and the inverse operator is denoted as $\mathbf{T}_n^{-1}(.)$.
\noindent \textbf{Definition 2.} (Left and right unfolding) The left unfolding operator creates a matrix from a tensor by taking all modes except the last mode as row indices and the last mode as column indices, i.e. $\mathbf{L}(\mathcal{Y}_c^k) \in \mathbb{R}^{I_1I_2\dots I_{N-1}\times I_N}$ which is equivalent to $\mathbf{T}_{N-1}(\mathcal{Y}_c^k)$. Right unfolding transforms a tensor to a matrix by taking all the first mode fibers as column vectors, i.e. $\mathbf{R}(\mathcal{Y}_c^k) \in \mathbb{R}^{I_1\times I_2I_3\dots I_N}$ which is equivalent to $\mathbf{T}_1(\mathcal{Y}_c^k)$. The inverse of these operators are denoted as $\mathbf{L}^{-1}(.)$ and $\mathbf{R}^{-1}(.)$, respectively.
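These reshaping and unfolding operators map directly onto array reshapes; a small NumPy sketch with illustrative sizes (NumPy's default row-major ordering is assumed, which may differ from the paper's index convention by a permutation):

```python
import numpy as np

I1, I2, I3 = 2, 3, 4
Y = np.arange(I1 * I2 * I3).reshape(I1, I2, I3)   # a third-order tensor

vec = Y.reshape(-1, 1)          # V(Y)   in R^{I1 I2 I3 x 1}
T2 = Y.reshape(I1 * I2, I3)     # T_2(Y) in R^{I1 I2 x I3}
L = Y.reshape(-1, I3)           # L(Y) = T_{N-1}(Y), left unfolding
R = Y.reshape(I1, -1)           # R(Y) = T_1(Y), right unfolding

# T_2^{-1} undoes T_2
print(vec.shape, L.shape, R.shape,
      np.array_equal(T2.reshape(I1, I2, I3), Y))
```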
\noindent \textbf{Definition 3.} (Tensor trace) Tensor trace is applied on matrix slices of a tensor and contracts them to scalars. Let $\mathcal{A} \in \mathbb{R}^{I_1\times I_2\times \dots \times I_N}$ with $I_{k'}=I_k$, then trace operation on modes $k'$ and $k$ is defined as:
\begin{gather}
\mathcal{D}=tr_{k'}^k(\mathcal{A})=\sum_{\substack{i_{k'}=i_k=1}}^{I_{k}}\mathcal{A}(:,\dots, i_{k'},:,\ldots, i_k,:,\dots, :),\nonumber
\end{gather}
where $\mathcal{D} \in \mathbb{R}^{I_1\times\dots\times I_{k'-1} \times I_{k'+1} \times\dots\times I_{k-1} \times I_{k+1}\times\dots\times I_N}$ is an $(N-2)$-mode tensor.
\noindent \textbf{Definition 4.} (Tensor Merging Product) Tensor merging product connects two tensors along some given sets of modes. For two tensors $\mathcal{A}\in \mathbb{R}^{I_1\times I_2\times\dots\times I_N}$ and $\mathcal{B}\in \mathbb{R}^{J_1\times J_2\times\dots\times J_M}$ where $I_n=J_m$ and $I_{n+1}=J_{m-1}$ for some $n$ and $m$, tensor merging product is given by \cite{cichocki2017tensor}:
\begin{equation}
\mathcal{C}=\mathcal{A}\times_{n,n+1}^{m,m-1}\mathcal{B}. \nonumber
\end{equation}
$\mathcal{C}\in\mathbb{R}^{I_1\times\dots \times I_{n-1}\times I_{n+2}\times \dots \times I_N\times J_1\times\dots\times J_{m-2}\times J_{m+1}\times\dots\times J_M}$ is a $(N+M-4)$-mode tensor that is calculated as:
\small
\begin{gather}
\mathcal{C}(i_1,\dots , i_{n-1}, i_{n+2}, \dots , i_N, j_1,\dots, j_{m-2}, j_{m+1},\dots, j_M)= \nonumber \\
\sum_{t_1=1}^{I_n}\sum_{t_2=1}^{J_{m-1}}\big[\mathcal{A}(i_1,\dots,i_{n-1},i_n=t_1,i_{n+1}=t_2,i_{n+2},\dots,i_N) \nonumber \\ \mathcal{B}(j_1,\dots,j_{m-2},j_{m-1}=t_2,j_m=t_1,j_{m+1},\dots,j_M)\big].\nonumber
\end{gather}
\normalsize
A graphical representation of tensors $\mathcal{A}$ and $\mathcal{B}$ and the tensor merging product defined above is given in Fig. \ref{fig:tmp}.
A special case of the tensor merging product arises when $I_n=J_n$ for all $n\in \{1,\dots,N-1\}$, with $M\geq N$. In this case, the tensor merging product across the first $N-1$ modes is defined as:
\begin{gather}
\mathcal{C}'=\mathcal{A}\times_{1,\dots,N-1}^{1,\dots,N-1}\mathcal{B},
\label{eq:cprime}
\end{gather}
where $\mathcal{C}' \in \mathbb{R}^{I_N\times J_N \times\dots\times J_M}$. This can equivalently be written as:
\begin{gather}
\mathbf{R}(\mathcal{C}')=\mathbf{L}(\mathcal{A})^\top\mathbf{T}_{N-1}(\mathcal{B}),
\label{eq:cprime2}
\end{gather}
where $\mathbf{R}(\mathcal{C}')\in \mathbb{R}^{I_N\times\prod_{m=N}^MJ_m}$.
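Both the general merging product and the special case (\ref{eq:cprime2}) can be expressed with `np.einsum`; a sketch with illustrative dimensions:

```python
import numpy as np
rng = np.random.default_rng(0)

# C = A x_{2,3}^{3,2} B with I2 = J3 = 3 and I3 = J2 = 4
A = rng.random((2, 3, 4))
B = rng.random((5, 4, 3))
C = np.einsum('iab,jba->ij', A, B)            # C in R^{2 x 5}

# Special case: merge the first N-1 = 2 modes of A2 and B2
A2 = rng.random((2, 3, 4))
B2 = rng.random((2, 3, 5, 6))
Cp = np.einsum('abi,abjk->ijk', A2, B2)       # C' = A2 x_{1,2}^{1,2} B2
lhs = Cp.reshape(4, -1)                        # R(C')
rhs = A2.reshape(6, 4).T @ B2.reshape(6, 30)   # L(A)^T T_{N-1}(B)
print(C.shape, np.allclose(lhs, rhs))
```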
\def.7{.5}
\tikzset{
net node/.style = {circle, minimum width=2*.7 cm, inner sep=0pt, outer sep=0pt, outer color=gray!50!cyan, inner color=cyan},
net connect/.style = {line width=1pt, draw=blue!50!cyan!25!black},
net thick connect/.style = {net connect, line width=2.5pt},
second node/.style = {circle, minimum width=2*.7 cm, inner sep=0pt, outer sep=0pt, outer color=green!25!gray!40!yellow, inner color=green!25!gray!40!yellow},
second connect/.style = {line width=1pt, draw=red!60!gray},
second thick connect/.style = {net connect, line width=2.5pt}
}
\begin{figure}
\begin{subfigure}[b]{.45\columnwidth}
\centering
\scalebox{0.7}{
\begin{tikzpicture}
\foreach \i in {1,...,2}{
\path (225+\i*45:1.5) node (b\i) {$I_\i$};}
\path (225:1.5) node (b3) {$I_{N}$};
\path (180:1.5) node (b4) {$I_{N-1}$};
\path (90:1.5) node (b5) {$\dots$};
\path (0:0) node (b6) [second node] {$\mathcal{A}$};
\node [rotate=120] at (45: 0.7) {$\dots$};
\node [rotate=60] at (135: 0.7) {$\dots$};
\foreach \i in {1,...,5}{
\path [second connect] (b\i) -- (b6);;}
\end{tikzpicture}
}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{.45\columnwidth}
\centering
\scalebox{.7}{%
\begin{tikzpicture}
\foreach \i in {1,...,2}{
\path (225+\i*45:1.5) node (b\i) {$J_\i$};}
\path (225:1.5) node (b3) {$J_{M}$};
\path (90:1.5) node (b4) {$\dots$};
\path (0:0) node (b6) [second node] {$\mathcal{B}$};
\node [rotate=120] at (45: 0.7) {$\dots$};
\node [rotate=60] at (135: 0.7) {$\dots$};
\foreach \i in {1,...,4}{
\path [second connect] (b\i) -- (b6);;}
\end{tikzpicture}
}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{.98\columnwidth}
\centering
\scalebox{.7}{%
\begin{tikzpicture}
\path (270:1.5) node (a1) {$I_{N}$};
\path (225:1.5) node (a3) {$I_{1}$};
\path (180:1.5) node (a4) {$I_{2}$};
\path (90:1.5) node (a5) {$\dots$};
\path (0:0) node (a6) [second node] {$\mathcal{A}$};
\node [rotate=120] at (60: 0.8) {$\dots$};
\node [rotate=-120] at (-60: 0.8) {$\dots$};
\node [rotate=60] at (135: 0.7) {$\dots$};
\foreach \i in {1,3,4,5}{
\path [second connect] (a\i) -- (a6);;}
\tikzset{every node/.append style={xshift=3cm}}
\foreach \i in {1}{
\path (270+\i*45:1.5) node (b\i) {$J_\i$};}
\path (270:1.5) node (b3) {$J_{M}$};
\path (90:1.5) node (b4) {$\dots$};
\path (0:0) node (b6) [second node] {$\mathcal{B}$};
\node [rotate=120] at (45: 0.7) {$\dots$};
\node [rotate=60] at (120: 0.8) {$\dots$};
\node [rotate=-60] at (-120: 0.8) {$\dots$};
\foreach \i in {1,3,4}{
\path [second connect] (b\i) -- (b6);;}
\draw [second connect] (a6) to[out=20, in=160] (b6);
\path [second connect] (a6) to[out=-20, in=-160] (b6);
\node at (-1.5, 0.7) {$I_n,J_m$};
\node at (-1.5, -0.7) {$I_{n+1},J_{m-1}$};
\path (0:1.5) node {\bf\large{=}};
\foreach \i in {1}
\path (260+\i*20:1.5) node (c\i) [right=3.5cm] {$I_{\i}$};
\path (260:1.5) node (c2) [right=3.5cm] {$J_{M}$};
\path (-10:1.5) node (c3) [right=3.5cm] {$I_{n-1}$};
\path (10:1.5) node (c4) [right=3.5cm] {$I_{n+2}$};
\path (80:1.5) node (c5) [right=3.5cm] {$I_N$};
\path (100:1.5) node (c6) [right=3.5cm] {$J_1$};
\path (170:1.5) node (c7) [right=3.5cm] {$J_{m-2}$};
\path (190:1.5) node (c8) [right=3.5cm] {$J_{m+1}$};
\path (0:0) node (c10) [second node] [right=3.4cm] {$\mathcal{C}$};
\node [rotate=120] at (20: 0.7) [above right=0.05cm and 4cm] {$\dots$};
\node [rotate=60] at (170: 0.7) [above right=0.05cm and 4cm] {$\dots$};
\node [rotate=-120] at (-20: 0.5) [above right=0.05cm and 4cm] {$\dots$};
\node [rotate=-60] at (-160: 1) [above right=0.05cm and 4cm] {$\dots$};
\foreach \i in {1,...,8}{
\path [second connect] (c\i) -- (c10);;}
\end{tikzpicture}
}
\caption{}
\end{subfigure}
\caption{Illustration of tensors and tensor merging product using tensor network notations. Each node represents a tensor and each edge represents a mode of the tensor. (a) Tensor $\mathcal{A}$, (b) Tensor $\mathcal{B}$, (c) Tensor Merging Product between modes $(n,m)$ and $(n+1,m-1)$.}
\label{fig:tmp}
\end{figure}%
\noindent \textbf{Definition 5.} (Tensor-Train Decomposition (TT)) Tensor-train decomposition represents each element of $\mathcal{Y}_c^k$ using a series of matrix products as:
\begin{gather}
\mathcal{Y}_c^k(i_1,i_2,\dots,i_N)= \nonumber \\ \mathcal{U}_1(1,i_1,:)\mathcal{U}_2(:,i_2,:) \dots \mathcal{U}_N(:,i_N,:)\mathbf{x}_c^k,
\label{eq:ttsc}
\end{gather}
where $\mathcal{U}_n\in \mathbb{R}^{R_{n-1}\times I_n \times R_n}$ are the three mode low-rank tensor factors, $R_n<I_n$ are the TT-ranks of the corresponding modes $n\in\{1,\dots,N\}$ and $\mathbf{x}_c^k\in \mathbb{R}^{R_N\times 1}$ is the projected sample vector. Using tensor merging product form, (\ref{eq:ttsc}) can be rewritten as
\begin{equation}
\mathcal{Y}_c^k=\mathcal{U}_1\times_3^1\mathcal{U}_2\times_3^1\dots \times_3^1\mathcal{U}_N\times_3^1\mathbf{x}_c^k.
\label{eq:proj}
\end{equation}
A graphical representation of (\ref{eq:proj}) can be seen in Fig. \ref{fig:ttd}. If $\mathcal{Y}_c^k$ is vectorized, another equivalent expression for (\ref{eq:ttsc}) in terms of matrix projection is obtained as:
\begin{equation}
\mathbf{V}(\mathcal{Y}_c^k)=\mathbf{L}(\mathcal{U}_1\times_3^1\mathcal{U}_2\times_3^1\dots \times_3^1\mathcal{U}_N)\mathbf{x}_c^k.\nonumber
\end{equation}
Let $U=\mathbf{L}(\mathcal{U}_1\times_3^1\mathcal{U}_2\times_3^1\dots \times_3^1\mathcal{U}_N)$ where $U\in \mathbb{R}^{I_1I_2\dots I_N\times R_N}$. When the $\mathbf{L}(\mathcal{U}_n)$ are left orthogonal, $U$ is also left orthogonal \cite{holtz2012manifolds}, i.e. $\mathbf{L}(\mathcal{U}_n)^\top\mathbf{L}(\mathcal{U}_n)=\mathbb{I}_{R_{n}}, \forall n$ implies $U^\top U =\mathbb{I}_{R_N}$, where $\mathbb{I}_{R}\in \mathbb{R}^{R\times R}$ denotes the identity matrix.
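The chain of contractions in (\ref{eq:proj}) and its equivalent matrix form can be reproduced with `np.einsum`; a sketch with small, illustrative TT-ranks:

```python
import numpy as np
rng = np.random.default_rng(0)

I, R = [2, 3, 4], [1, 2, 2, 3]   # mode sizes and TT-ranks (R0 = 1)
U = [rng.random((R[n], I[n], R[n + 1])) for n in range(3)]
x = rng.random(R[-1])

# Y(i1,i2,i3) = U1(1,i1,:) U2(:,i2,:) U3(:,i3,:) x
Y = np.einsum('aib,bjc,ckd,d->ijk', U[0], U[1], U[2], x)

# Equivalent matrix form: V(Y) = L(U1 x3^1 U2 x3^1 U3) x
Umat = np.einsum('aib,bjc,ckd->ijkd', U[0], U[1], U[2]).reshape(-1, R[-1])
print(Y.shape, np.allclose(Y.reshape(-1), Umat @ x))
```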
\def.7{.7}
\tikzset{
net node/.style = {circle, minimum width=2*.7 cm, inner sep=0pt, outer sep=0pt, outer color=green!25!gray!50!cyan, inner color=green!25!gray!50!cyan},
net connect/.style = {line width=1pt, draw=blue!50!cyan!25!black},
net thick connect/.style = {net connect, line width=2.5pt},
second node/.style = {circle, minimum width=2*.7 cm, inner sep=0pt, outer sep=0pt, outer color=green!25!gray!40!yellow, inner color=green!25!gray!40!yellow},
second connect/.style = {line width=1pt, draw=red!60!gray},
second thick connect/.style = {net connect, line width=2.5pt},
third connect/.style = {line width=1pt, draw=green},
third thick connect/.style = {net connect, line width=2.5pt},
third node/.style = {circle, minimum width=2*.7 cm, inner sep=0pt, outer sep=0pt, outer color=green!25!gray!40!red, inner color=green!25!gray!40!red}
}
\begin{figure}
\centering
\scalebox{0.47}{
\Large
\begin{tikzpicture}
\foreach \i in {1,...,2}
\path (225+\i*45:2) node (c\i) [left=7cm] {$I_{\i}$};
\path (225:2) node (c3) [left=7cm] {$I_{N}$};
\path (180:2) node (c4) [left=7cm] {$I_{N-1}$};
\path (90:2) node (c5) [left=7cm] {$\dots$};
\path (0:0) node (c6) [second node] [left=6.8cm] {$\mathcal{Y}_c^k$};
\node [rotate=120] at (45: 1) [above left=0.4cm and 7.3cm] {$\dots$};
\node [rotate=60] at (135: 1) [above left=0.25cm and 7.3cm] {$\dots$};
\path (180:6) node {\bf\large{=}};
\foreach \i in {1,2}{
\path (180:6-2*\i) node (n\i) [net node] {$\mathcal{U}_{\i}$};
\path (180:6-2*\i) node (b\i) [below=2cm] {$I_{\i}$};
}
\path (180:0) node (n3) {$\dots$};
\path (0:2) node (n4) [net node] {$\mathcal{U}_{N-1}$};
\path (0:2) node (b4) [below=2cm] {$I_{N-1}$};
\path (0:4) node (n5) [net node] {$\mathcal{U}_{N}$};
\path (0:5.5) node (n6) [second node] {$\mathbf{x}_{c}^k$};
\path (0:4) node (b5) [below=2cm] {$I_{N}$};
\foreach \i in {9,10}{
\path (180:16-2*\i) node (n\i) [below=2cm] {};}
\foreach \i in {1,...,5}{
\path [second connect] (c\i) -- (c6);;}
\path [net connect] (n1) -- (n2) -- (n3) -- (n4) -- (n5) -- (n6);;
\path [second connect] (n1) -- (b1) (n2) -- (b2) (n4)-- (b4) (n5) --(b5);;
\draw [net connect] (-5.5,0) node[anchor=south]{$R_0=1$} -- (n1);;
\end{tikzpicture}
\normalsize
}
\caption{Tensor-Train Decomposition of $\mathcal{Y}_c^k$ using tensor merging products.}
\label{fig:ttd}
\end{figure}
\noindent \textbf{Definition 6.} (Tucker Decomposition (TD)) If the number of modes of the projected samples, $\mathcal{X}_c^k$, is equal to the number of modes of the input tensors $\mathcal{Y}_c^k$, the TT-model becomes equivalent to Tucker decomposition. In this case, $\mathcal{X}_c^k$ is known as the core tensor. This is shown in Fig. \ref{fig:tckr} and given by:
\begin{equation}
\mathcal{Y}_c^k=\mathcal{X}_c^k \times_1^2 U_1 \times_2^2 U_2 \dots \times_N^2 U_N,\nonumber
\end{equation}
where $U_n\in \mathbb{R}^{I_n\times R_n}$ and $\mathcal{X}_c^k\in \mathbb{R}^{R_1\times R_2\times\dots \times R_N}$.
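As a concrete illustration (a minimal NumPy sketch of our own, not code from the paper), the Tucker mode products can be implemented with `tensordot`; when the factors $U_n$ have orthonormal columns, the core is recovered exactly by projecting back:

```python
import numpy as np

rng = np.random.default_rng(1)

def mode_products(core, factors):
    # Y = X x_1 U_1 x_2 U_2 ... x_N U_N via successive mode products.
    y = core
    for n, u in enumerate(factors):
        # Contract mode n of y with the second mode of u, then
        # move the new dimension back into position n.
        y = np.moveaxis(np.tensordot(u, y, axes=(1, n)), 0, n)
    return y

# Factors U_n in R^{I_n x R_n} with orthonormal columns.
shapes = [(4, 2), (5, 3), (3, 2)]
factors = [np.linalg.qr(rng.standard_normal(s))[0] for s in shapes]
core = rng.standard_normal((2, 3, 2))
Y = mode_products(core, factors)                  # shape (4, 5, 3)
core_rec = mode_products(Y, [u.T for u in factors])
```

Because $U_n^\top U_n=\mathbb{I}_{R_n}$, `core_rec` matches `core` up to floating-point error.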
\begin{figure}
\centering
\scalebox{0.60}{
\large
\begin{tikzpicture}
\foreach \i in {1,...,2}
\path (225+\i*45:2) node (c\i) [left=5.5cm] {$I_{\i}$};
\path (225:2) node (c3) [left=5.5cm] {$I_{N}$};
\path (180:2) node (c4) [left=5.5cm] {$I_{N-1}$};
\path (90:2) node (c5) [left=5.5cm] {$\dots$};
\path (0:0) node (c6) [second node] [left=5.2cm] {$\mathcal{Y}_c^k$};
\node [rotate=120] at (45: 1) [above left=0.4cm and 5.8cm] {$\dots$};
\node [rotate=60] at (135: 1) [above left=0.25cm and 5.8cm] {$\dots$};
\foreach \i in {1,...,5}{
\path [second connect] (c\i) -- (c6);;}
\path (180:4.5) node {\bf\large{=}};
\foreach \i in {1,...,2}{
\path (225+\i*45:2) node (b\i) [net node] {$U_{\i}$};
\path (225+\i*45:4) node (n\i) {$I_{\i}$};}
\path (225:2) node (b3) [net node] {$U_{N}$};
\path (225:4) node (n3) {$I_{N}$};
\path (180:2) node (b4) [net node] {$U_{N-1}$};
\path (180:4) node (n4) {$I_{N-1}$};
\path (90:2) node (b5) [net node] {$\dots$};
\path (0:0) node (b6) [second node] {$\mathcal{X}_c^k$};
\node [rotate=120] at (45: 1.2) {$\dots$};
\node [rotate=60] at (135: 1.2) {$\dots$};
\foreach \i in {1,...,4}{
\path [net connect] (b\i) -- (b6);;
\path [second connect] (n\i) -- (b\i);;}
\path [net connect] (b5) -- (b6);;
\end{tikzpicture}
\normalsize
}
\caption{Tensor network notation for Tucker decomposition.}
\label{fig:tckr}
\end{figure}
\vspace{-1em}
\subsection{Linear Discriminant Analysis (LDA)}
LDA for vectorized tensor data finds an orthogonal projection $U$ that maximizes the discriminability of the projections\footnote{The original formulation optimizes the trace ratio. Prior work showed the equivalence of the trace ratio to the trace difference used in this paper \cite{fukunaga2013introduction}.}:
\begin{gather}
U=\argmin_{\hat{U}}\left[tr(\hat{U}^\top S_W\hat{U})-\lambda tr(\hat{U}^\top S_B\hat{U})\right]=\nonumber \\\argmin_{\hat{U}}tr(\hat{U}^\top (S_W-\lambda S_B)\hat{U})=\argmin_{\hat{U}}tr(\hat{U}^\top S \hat{U}),
\label{eq:LDA}
\end{gather}
where $S=S_{W}-\lambda S_{B}$, $\lambda$ is the regularization parameter that controls the trade-off between $S_W$ and $S_B$ which are within-class and between-class scatter matrices, respectively, defined as:
\begin{gather}
S_W=\sum_{c=1}^C\sum_{k=1}^K \mathbf{V}(\mathcal{Y}_c^k-\mathcal{M}_c)\mathbf{V}(\mathcal{Y}_c^k-\mathcal{M}_c)^\top, \label{eq:LDAscat} \\
S_B=\sum_{c=1}^C\sum_{k=1}^K \mathbf{V}(\mathcal{M}_c-\mathcal{M})\mathbf{V}(\mathcal{M}_c-\mathcal{M})^\top,
\label{eq:LDAscat2}
\end{gather}
where $\mathcal{M}_c=\frac{1}{K}\sum_{k=1}^K\mathcal{Y}_c^k$ is the mean of class $c$ and $\mathcal{M}=\frac{1}{CK}\sum_{c=1}^C\sum_{k=1}^K\mathcal{Y}_c^k$ is the total mean of all samples. Since $U$ is an orthogonal projection, (\ref{eq:LDA}) is equivalent to minimizing the within-class scatter and maximizing the between-class scatter of the projections. The solution is the matrix $U\in \mathbb{R}^{\prod_{n=1}^NI_n\times R_N}$ whose columns are the eigenvectors of $S\in \mathbb{R}^{\prod_{n=1}^NI_n\times \prod_{n=1}^NI_n}$ corresponding to its $R_N$ smallest eigenvalues.
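A minimal NumPy sketch of this eigenvector solution (our own illustration with toy shapes) computes the scatter matrices of (\ref{eq:LDAscat}) and (\ref{eq:LDAscat2}) on vectorized samples and keeps the eigenvectors of $S$ associated with the smallest eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(2)

def lda_subspace(Y, lam, R):
    # Y: (C, K, D) -- K vectorized samples per class, D features.
    C, K, D = Y.shape
    class_means = Y.mean(axis=1)              # M_c, shape (C, D)
    total_mean = Y.mean(axis=(0, 1))          # M,   shape (D,)
    Sw = np.zeros((D, D))
    Sb = np.zeros((D, D))
    for c in range(C):
        Xc = Y[c] - class_means[c]            # centered class samples
        Sw += Xc.T @ Xc
        d = (class_means[c] - total_mean)[:, None]
        Sb += K * (d @ d.T)                   # the sum over k gives K
    S = Sw - lam * Sb
    # Eigenvectors for the R smallest eigenvalues minimize tr(U^T S U).
    _, V = np.linalg.eigh(S)                  # ascending eigenvalues
    return V[:, :R], S

Y = rng.standard_normal((3, 10, 5)) + 5.0 * rng.standard_normal((3, 1, 5))
U, S = lda_subspace(Y, lam=1.0, R=2)
```

By the Ky Fan inequality, $tr(U^\top S U)$ for this $U$ is no larger than for any other orthonormal $R_N$-frame.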
\subsection{Multilinear Discriminant Analysis (MDA)}
MDA extends LDA to tensors using TD by finding a subspace $U_n\in \mathbb{R}^{I_n\times R_n}$ for each mode $n\in\{1,\dots,N\}$ that maximizes the discriminability along that mode \cite{li2014multilinear,tao2007general, yan2005discriminant}. When the number of modes $N$ is equal to 1, MDA is equivalent to LDA. In the case of MDA, within-class scatter along each mode $n \in \{1,\dots, N\}$ is defined as:
\begin{gather}
S_W^{(n)}=\sum_{c=1}^C\sum_{k=1}^{K_c} \left[(\mathcal{Y}_c^k-\mathcal{M}_c)\prod_{\substack{m \in \{1,\dots,N\} \\ m\neq n}}\times_m^1 {U_{m}} \right]_{(n)} \nonumber \\ {\left[(\mathcal{Y}_c^k-\mathcal{M}_c)\prod_{\substack{m \in \{1,\dots,N\} \\ m\neq n}}\times_m^1 {U_{m}} \right]_{(n)}}^\top.\label{eq:scatMDA}
\end{gather}
Between-class scatter $S_B^{(n)}$ is found in a similar manner. Using these definitions, each $U_n$ is found by optimizing \cite{tao2007general}:
\begin{gather}
U_n=\argmin_{\hat{U}_n} tr(\hat{U}_n^\top (S_W^{(n)} - \lambda S_B^{(n)})\hat{U}_n).
\label{eq:mdamoden}
\end{gather}
Different implementations of multilinear discriminant analysis have been introduced, including Discriminant Analysis with Tensor Representation (DATER), Direct Generalized Tensor Discriminant Analysis (DGTDA) and Constrained MDA (CMDA). DATER minimizes the ratio ${tr(U_n^\top S_W^{(n)}U_n)}/{tr(U_n^\top S_B^{(n)}U_n)}$ \cite{yan2005discriminant} instead of (\ref{eq:mdamoden}). DGTDA, on the other hand, computes the scatter matrices without projecting the inputs on $U_m$, $m\neq n$, and finds an optimal $U_n$ \cite{li2014multilinear}. CMDA finds the solution in an iterative fashion \cite{li2014multilinear}, where each subspace is updated while all others are held fixed.
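To make the mode-$n$ scatter in (\ref{eq:scatMDA}) concrete, the following NumPy sketch (our own toy implementation, not the authors' code) projects each centered sample on every factor except $U_n$ and accumulates the mode-$n$ unfolded outer products:

```python
import numpy as np

rng = np.random.default_rng(3)

def mode_n_within_scatter(Y, means, U, n):
    # Y: (C, K, I1, ..., IN); U[m] in R^{I_m x R_m} with orthonormal
    # columns. Returns S_W^{(n)} of size I_n x I_n.
    C, K = Y.shape[:2]
    S = np.zeros((Y.shape[2 + n],) * 2)
    for c in range(C):
        for k in range(K):
            t = Y[c, k] - means[c]
            for m, Um in enumerate(U):
                if m == n:
                    continue
                # Mode-m product with U_m (contracts dimension I_m).
                t = np.moveaxis(np.tensordot(Um, t, axes=(0, m)), 0, m)
            Tn = np.moveaxis(t, n, 0).reshape(t.shape[n], -1)
            S += Tn @ Tn.T
    return S

Y = rng.standard_normal((2, 5, 4, 3, 2))
means = Y.mean(axis=1)
U = [np.linalg.qr(rng.standard_normal((i, 2)))[0] for i in (4, 3, 2)]
Sw0 = mode_n_within_scatter(Y, means, U, n=0)
```

The resulting scatter is symmetric positive semi-definite by construction, as a sum of outer products.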
\section{Tensor-Train Discriminant Analysis}
\label{sec:ttda}
When the data are higher-order tensors, LDA vectorizes them and finds an optimal projection as shown in (\ref{eq:LDA}). This is problematic as the intrinsic structure of the data is destroyed. Even though MDA addresses this problem, it is inefficient in terms of storage complexity \cite{cichocki2017tensor, chaghazardi2017sample} as it relies on TD. Thus, we propose to solve (\ref{eq:LDA}) by constraining $U=\mathbf{L}(\mathcal{U}_1\times_3^1\mathcal{U}_2\times_3^1\dots \times_3^1\mathcal{U}_N)$ to be a TT-subspace, which reduces the computational and storage complexity and yields a solution that preserves the inherent data structure. Consequently, the obtained $U$ will still provide discriminative features while having a TT-subspace structure.
The goal of TTDA is to learn left-orthogonal tensor factors $\mathcal{U}_n \in \mathbb{R}^{R_{n-1}\times I_n \times R_{n}}, n\in \{1,\dots,N\}$, using the TT-model such that the discriminability of the projections $\mathbf{x}_c^k, \forall c, k$, is maximized. The $\mathcal{U}_n$s can first be initialized using the TT decomposition proposed in \cite{oseledets2011tensor}. To optimize the $\mathcal{U}_n$s for discriminability, we need to solve (\ref{eq:LDA}) for each $\mathcal{U}_n$, which can be rewritten using the definition of $U$ as:
\begin{gather}
\mathcal{U}_n=\argmin_{\hat{\mathcal{U}}_n} tr\bigg[\mathbf{L}(\mathcal{U}_1\times_3^1\dots\times_3^1\hat{\mathcal{U}}_n\times_3^1\dots\times_3^1\mathcal{U}_N)^\top \nonumber \\ S \mathbf{L}(\mathcal{U}_1\times_3^1\dots\times_3^1\hat{\mathcal{U}}_n\times_3^1\dots\times_3^1\mathcal{U}_N)\bigg].
\label{eq:TTDA}
\end{gather}
Using the definitions presented in (\ref{eq:cprime}) and (\ref{eq:cprime2}), we can express (\ref{eq:TTDA}) in terms of tensor merging product:
\begin{gather}
\mathcal{U}_n=\argmin_{\hat{\mathcal{U}}_n} tr\bigg[(\mathcal{U}_1\times_3^1\dots \times_3^1\hat{\mathcal{U}}_n\times_3^1\dots\times_3^1\mathcal{U}_N) \times_{1,\dots,N}^{1,\dots,N} \mathcal{S} \nonumber \\ \times_{N+1,\dots,2N}^{1,\dots,N} (\mathcal{U}_1\times_3^1\dots\times_3^1\hat{\mathcal{U}}_n\times_3^1\dots\times_3^1\mathcal{U}_N)\bigg],
\label{eq:ScatTT}
\end{gather}
where $\mathcal{S}=\mathbf{T}_N^{-1}(S)\in\mathbb{R}^{I_1\times\dots \times I_N\times I_1\times\dots \times I_N}$. Let $\mathcal{U}_{n-1}^L=\mathcal{U}_1\times_3^1\mathcal{U}_2\times_3^1\dots\times_3^1\mathcal{U}_{n-1}$ and $\mathcal{U}_{n}^R=\mathcal{U}_{n+1}\times_3^1\dots\times_3^1\mathcal{U}_N$. By rearranging the terms in (\ref{eq:ScatTT}), we can first compute all merging products and trace operations that do not involve $\mathcal{U}_n$ as:
\begin{gather}
\mathcal{A}_n=tr_4^8\Bigg[\mathcal{U}_{n-1}^L\times_{1,\dots,n-1}^{1,\dots,n-1}\bigg(\mathcal{U}_{n}^R\times_{1,\dots,N-n}^{n+1,\dots,N} \nonumber \\ \left(\mathcal{U}_{n-1}^L\times_{1,\dots,n-1}^{N+1,\dots,N+n-1}(\mathcal{U}_{n}^R\times_{1,\dots,N-n}^{N+n+2,\dots,2N}\mathcal{S})\right)\bigg)\Bigg],
\label{eq:a}
\end{gather}
where $\mathcal{A}_n \in \mathbb{R}^{R_{n-1}\times I_n\times R_{n}\times R_{n-1}\times I_n\times R_{n}}$ (refer to Fig. \ref{fig:Asupp} for a graphical representation of (\ref{eq:a})). Then, we can rewrite the optimization in terms of $\mathcal{U}_n$:
\begin{equation}
\mathcal{U}_n=\argmin_{\hat{\mathcal{U}}_n}\left(\hat{\mathcal{U}}_n\times_{1,2,3}^{1,2,3}\left(\mathcal{A}_n\times_{4,5,6}^{1,2,3}\hat{\mathcal{U}}_n\right)\right).
\label{eq:utau}
\end{equation}
\begin{figure}
\centering
\scalebox{0.5}{
\Large
\begin{tikzpicture}
\path (0:0) node (n1) [third node] {$\mathcal{S}$};
\path (180:3) node (l1) [above=5*1 cm] {$I_{1}$};
\path (180:3) node (l2) [above=3*1 cm] {$I_{2}$};
\path (180:3) node (l3) [above=2*1 cm] {$\dots$};
\path (180:3) node (l4) {$I_{n}$};
\path (180:3) node (l5) [below=2*1 cm] {$\dots$};
\path (180:3) node (l6) [below=3*1 cm] {$I_{N-1}$};
\path (180:3) node (l7) [below=5*1 cm] {$I_{N}$};
\foreach \i in {1,...,7}{
\path [second connect] (l\i) -- (n1);}
\path (0:3) node (r1) [above=5*1 cm] {$I_{1}$};
\path (0:3) node (r2) [above=3*1 cm] {$I_{2}$};
\path (0:3) node (r3) [above=2*1 cm] {$\dots$};
\path (0:3) node (r4) {$I_{n}$};
\path (0:3) node (r5) [below=2*1 cm] {$\dots$};
\path (0:3) node (r6) [below=3*1 cm] {$I_{N-1}$};
\path (0:3) node (r7) [below=5*1 cm] {$I_{N}$};
\foreach \i in {1,...,7}{
\path [second connect] (r\i) -- (n1);}
\path (180:2*3) node (ul1) [above=2.7*1 cm] [net node] {$\mathcal{U}_{n-1}^L$};
\foreach \i in {1,...,3}{
\path [second connect] (l\i) -- (ul1);}
\path (180:2*3) node (lr1) [above=1*1 cm] {$R_{n-1}$};
\path [net connect] (ul1) -- (lr1);
\path (180:2*3) node (ul2) [below=2.7*1 cm] [net node] {$\mathcal{U}_{n}^R$};
\foreach \i in {5,...,7}{
\path [second connect] (l\i) -- (ul2);}
\path (180:2*3) node (lr2) [below=1*1 cm] {$R_{n}$};
\path (180:2*3) node (lrn) [below=6*1 cm] {$R_{N}$};
\path [net connect] (lrn) -- (ul2) -- (lr2);
\path (0:2*3) node (ur1) [above=2.7*1 cm] [net node] {$\mathcal{U}_{n-1}^L$};
\foreach \i in {1,...,3}{
\path [second connect] (r\i) -- (ur1);}
\path (0:2*3) node (rr1) [above=1*1 cm] {$R_{n-1}$};
\path [net connect] (ur1) -- (rr1);
\path (0:2*3) node (ur2) [below=2.7*1 cm] [net node] {$\mathcal{U}_{n}^R$};
\foreach \i in {5,...,7}{
\path [second connect] (r\i) -- (ur2);}
\path (0:2*3) node (rr2) [below=1*1 cm] {$R_{n}$};
\path (0:2*3) node (rrn) [below=6*1 cm] {$R_{N}$};
\path [net connect] (rrn) -- (ur2) -- (rr2);
\path [third connect] (rrn) -- (lrn);
\end{tikzpicture}
\normalsize
}
\caption{Tensor $\mathcal{A}_n$ is formed by first merging $\mathcal{U}_n^R$, $\mathcal{U}_{n-1}^L$ and $\mathcal{S}$, and then applying the trace operation across the $4^{th}$ and $8^{th}$ modes of the resulting tensor. The green line at the bottom of the diagram denotes the trace operator.}
\label{fig:Asupp}
\end{figure}
Let $A_n=\mathbf{T}_3(\mathcal{A}_n)\in\mathbb{R}^{R_{n-1} I_n R_{n}\times R_{n-1} I_n R_{n}}$, then (\ref{eq:utau}) can be rewritten as:
\begin{gather}
\mathcal{U}_n=\argmin_{\hat{\mathcal{U}}_n} \mathbf{V}(\hat{\mathcal{U}}_n)^\top A_n \mathbf{V}(\hat{\mathcal{U}}_n), \nonumber \\ \mathbf{L}(\hat{\mathcal{U}}_n)^\top \mathbf{L}(\hat{\mathcal{U}}_n)=\mathbb{I}_{R_{n}}.
\label{eq:TTDAU}
\end{gather}
This is a non-convex problem due to the orthogonality constraints and can be solved by the algorithm proposed in \cite{wen2013feasible}. The procedure described above for finding the subspaces is computationally expensive due to the cost of forming each $\mathcal{A}_n$ \cite{wang2018tensor}.
When $n=N$, (\ref{eq:TTDAU}) does not apply as $\mathcal{U}_N^R$ is not defined and the trace operation is defined on the third mode of $\mathcal{U}_N$. To update $\mathcal{U}_N$, the following can be used:
\vspace{-2em}
\begin{gather}
\mathcal{U}_N = \argmin_{\hat{\mathcal{U}}_N} tr\left(\hat{\mathcal{U}}_N\times_{1,2}^{1,2}\left(\mathcal{A}_N\times_{3,4}^{1,2}\hat{\mathcal{U}}_N\right)\right),\nonumber
\end{gather}
where $\mathcal{A}_N = \mathcal{U}_{N-1}^L\times_{1,\dots,N-1}^{1,\dots,N-1} \Big(\mathcal{U}_{N-1}^L\times_{1,\dots,N-1}^{N+1,\dots,2N-1}\mathcal{S}\Big)$. Once all of the $\mathcal{U}_{n}$s are obtained, they can be used to extract low-dimensional, discriminative features as $U^\top \mathbf{V}(\mathcal{Y}_c^k)$. The pseudocode for TTDA is given in Algorithm \ref{alg:TTDA}.
\algrenewcommand\algorithmicrequire{\textbf{Input:}}
\algrenewcommand\algorithmicensure{\textbf{Output:}}
\begin{algorithm}
\caption{Tensor-Train Discriminant Analysis (TTDA)}
\begin{algorithmic}[1]
\Require Input tensors $\mathcal{Y}_c^k \in \mathbb{R}^{I_1 \times I_2 \times \dots \times I_N }$ where $c \in \{1,\dots ,C\}$ and $k \in \{1,\dots,K\}$, initial tensor factors ${\mathcal{U}_n}, n \in \{1,\dots ,N\}$, $\lambda$, $R_1,\dots,R_N$, $MaxIter$
\Ensure $\mathcal{U}_n, n \in \{1,\dots ,N\}$, and $\mathbf{x}_c^k,\quad \forall c,k$
\State $\mathcal{S} \gets \mathbf{T}_N^{-1}(S_W-\lambda S_B)$, see eqns. (\ref{eq:LDAscat}), (\ref{eq:LDAscat2}).
\While{$iter<MaxIter$}
\For{$n=1$ : $N-1$}
\State Compute $\mathcal{A}_{n}$ using (\ref{eq:a}).
\State $\mathbf{V}(\mathcal{U}_n)\gets\hspace{-.3em}\smash{\displaystyle\argmin_{\substack{\hat{\mathcal{U}}_n, \\\mathbf{L}(\hat{\mathcal{U}}_n)^\top\mathbf{L}(\hat{\mathcal{U}}_n)=\mathbb{I}_{R_{n}}}}} \hspace{-.2em}\mathbf{V}(\hat{\mathcal{U}}_n)^\top \mathbf{T}_3(\mathcal{A}_n) \mathbf{V}(\hat{\mathcal{U}}_n)$.
\vspace{.7em}
\EndFor
\State $\mathcal{A}_{N} \gets \mathcal{U}_{N-1}^L\times_{1,\dots,N-1}^{1,\dots,N-1} \Big(\mathcal{U}_{N-1}^L\times_{1,\dots,N-1}^{N+1,\dots,2N-1}\mathcal{S}\Big)$
\State $\mathbf{L}(\mathcal{U}_N) \gets \hspace{-.7em}\smash{\displaystyle\argmin_{\substack{\hat{\mathcal{U}}_N,\\ \mathbf{L}(\hat{\mathcal{U}}_N)^\top\mathbf{L}(\hat{\mathcal{U}}_N)=\mathbb{I}_{R_{N}}}}} \hspace{-.7em}tr\Big(\mathbf{L}(\hat{\mathcal{U}}_N)^\top\mathbf{T}_2(\mathcal{A}_N)\mathbf{L}(\hat{\mathcal{U}}_N)\Big)$.
\vspace{1em}
\State $iter \gets iter+1$.
\EndWhile
\State $U=\mathbf{L}(\mathcal{U}_1\times_3^1\mathcal{U}_2\times_3^1\dots \times_3^1\mathcal{U}_N)$
\State $\mathbf{x}_c^k \gets U^\top \mathbf{V}(\mathcal{Y}_c^k)$, $\quad \forall c,k$.
\end{algorithmic}
\label{alg:TTDA}
\end{algorithm}
\vspace{-1em}
\section{Multi-Branch Tensor-Train Discriminant Analysis}
\label{sec:mbttda}
The TTDA algorithm described above is computationally expensive as it requires the computation of the tensors $\mathcal{A}_n$ through tensor merging products. For this reason, in this section we introduce computationally efficient tensor network structures for TTDA. These new algorithms are inspired by prior work in tensor networks on the benefits of reshaping high-dimensional vector- and matrix-type data into tensors and then processing them using TT decomposition \cite{cichocki2016tensor}. Several papers employed this idea to reshape matrices and vectors into tensors, known as ket augmentation and quantized TT (QTT), for better compression and higher computational efficiency \cite{wang2017efficient,wang2018tensor,cichocki2016tensor,bengua2017efficient, oseledets2010approximation,oseledets2009breaking}.
Inspired by this idea, we propose to tensorize the projected training samples rather than the original data in the learning framework. Using this structural approximation within TTDA formulation, we first propose to approximate 2D-LDA by TT and then generalize by increasing the number of modes (or branches) of the projected training samples.
\vspace{-.7em}
\subsection{Two-way Tensor-Train Discriminant Analysis (2WTTDA)}
Just as LDA finds a subspace $U$ that maximizes discriminability for vector-type data, 2D-LDA finds two subspaces $V_1, V_2$ that maximize discriminability for matrix-type data \cite{ye2005two}. If one considers the matricized version of $\mathcal{Y}_c^k$ along mode $d$, i.e. $\mathbf{T}_d(\mathcal{Y}_c^k)\in \mathbb{R}^{\prod_{i=1}^dI_i\times \prod_{i=d+1}^NI_i}$, where $1<d<N$, the equivalent orthogonal projection can be written as:
\begin{gather}
\mathbf{T}_d(\mathcal{Y}_c^k)=V_1X_c^kV_2^\top,
\label{eq:2dlda}
\end{gather}
where $V_1 \!\in \! \mathbb{R}^{\prod_{i=1}^dI_i\times R_d}, V_2\! \in\! \mathbb{R}^{\prod_{i=d+1}^NI_i\times \hat{R}_d}$, $X_c^k\!\in\!\mathbb{R}^{R_d\times \hat{R}_d}$.
In TTDA, since the projections $\mathbf{x}_c^k$ are considered to be vectors, the subspace $U=\mathbf{L}(\mathcal{U}_1\times_3^1\mathcal{U}_2\times_3^1\dots \times_3^1\mathcal{U}_N)$ is analogous to the solution of LDA with the constraint that the subspace admits a TT model. If we consider the projections and the input samples as matrices, now we can impose a TT structure to the left and right subspaces analogous to 2D-LDA. In other words, one can find two sets of TT representations corresponding to $V_1$ and $V_2$ in (\ref{eq:2dlda}). Using this analogy, (\ref{eq:2dlda}) can be rewritten as:
\vspace{-.5em}
\begin{equation}
\mathbf{T}_d(\mathcal{Y}_c^k)=\mathbf{L}(\mathcal{U}_1\times_3^1\dots\times_3^1 \mathcal{U}_d) X_c^k \mathbf{R}(\mathcal{U}_{d+1}\times_3^1\dots\times_3^1\mathcal{U}_N),
\label{eq:projtw2}
\end{equation}
which is equivalent to the following representation:
\begin{equation}
\mathcal{Y}_c^k=\mathcal{U}_1\times_3^1\dots \times_3^1\mathcal{U}_d\times_3^1 X_c^k\times_2^1\mathcal{U}_{d+1}\times_3^1\dots\times_3^1\mathcal{U}_N.\nonumber
\label{eq:projtw}
\end{equation}
This formulation is graphically represented in Fig. \ref{fig:ttdtw} where the decomposition has two branches, thus we refer to it as Two-way Tensor-Train Decomposition (2WTT).
To maximize discriminability using 2WTT, an optimization scheme that alternates between the two sets of TT-subspaces can be utilized. When forming the scatter matrices for one set, projections of the data onto the other set can be used instead of the full data, similar to (\ref{eq:scatMDA}). This reduces the computational complexity, as both the cost of computing the scatter matrices and the number of matrix multiplications needed to find $\mathcal{A}_n$ in (\ref{eq:a}) decrease. We propose the procedure given in Algorithm \ref{alg:2WTTDA} to implement this approach and refer to it as Two-way Tensor-Train Discriminant Analysis (2WTTDA), illustrated in Fig. \ref{fig:alg2w}. To determine the value of $d$ in (\ref{eq:projtw2}), we use a center-of-mass approach and find the $d$ that minimizes $|\prod_{i=1}^d I_i-\prod_{j=d+1}^N I_{j}|$. In this manner, the problem is separated into two parts with similar computational complexities.
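The mode-splitting rule above is straightforward to implement; the sketch below (our own illustration, not from the paper) picks the $d$ that balances the two dimension products:

```python
import math

def split_mode(dims):
    # d minimizing |prod(I_1..I_d) - prod(I_{d+1}..I_N)|: the
    # "center of mass" split defining 2WTTDA's two branches.
    gaps = [(abs(math.prod(dims[:d]) - math.prod(dims[d:])), d)
            for d in range(1, len(dims))]
    return min(gaps)[1]   # ties broken toward the smaller d
```

For example, for mode sizes $(2,3,4,5)$ the split is $d=2$, giving products $6$ and $20$.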
\algrenewcommand\algorithmicrequire{\textbf{Input:}}
\algrenewcommand\algorithmicensure{\textbf{Output:}}
\begin{algorithm}
\caption{Two-Way Tensor-Train Discriminant Analysis (2WTTDA)}
\begin{algorithmic}[1]
\Require Input tensors $\mathcal{Y}_c^k \in \mathbb{R}^{I_1 \times I_2 \times \dots \times I_N }$ where $c \in \{1,\dots ,C\}$ and $k \in \{1,\dots,K\}$, initial tensor factors ${\mathcal{U}_n}, n \in \{1,\dots ,N\}$, $d$, $\lambda$, $R_1,\dots,R_N$, $MaxIter$, $LoopIter$
\Ensure $\mathcal{U}_n, n \in \{1,\dots ,N\}$, and $X_c^k,\quad \forall c,k$
\While{ $iter<LoopIter$}
\State $\mathcal{Y}_L \gets \mathcal{Y}\times_{d+1,\dots,N}^{2,\dots,N-d+1}(\mathcal{U}_{d+1}\times_3^1\dots\times_3^1\mathcal{U}_N)$.
\State $[\mathcal{U}_i]\gets TTDA(\mathcal{Y}_L,\lambda, R_i, MaxIter) \forall i\in \{1,\dots,d\}$.
\State $\mathcal{Y}_R \gets \mathcal{Y}\times_{1,\dots,d}^{2,\dots,d+1}(\mathcal{U}_{1}\times_3^1\dots\times_3^1\mathcal{U}_d)$.
\State $[\mathcal{U}_i]\gets TTDA(\mathcal{Y}_R,\lambda, R_i, MaxIter) \forall i\in \{d+1,\dots,N\}$.
\State $iter=iter+1$.
\EndWhile
\State $\mathcal{X}_c^k \gets \mathbf{L}(\mathcal{U}_1\times_3^1\dots\times_3^1\mathcal{U}_d)^\top\mathbf{T}_d(\mathcal{Y}_c^k)\mathbf{R}(\mathcal{U}_{d+1}\times_3^1\dots\times_3^1\mathcal{U}_N)^\top$.
\end{algorithmic}
\label{alg:2WTTDA}
\end{algorithm}
\vspace{-2em}
\subsection{Three-way Tensor-Train Discriminant Analysis (3WTTDA)}
Elaborating on the idea of 2WTTDA, one can increase the number of modes of the projected samples, which increases the number of tensor factor sets, i.e. the number of subspaces to be approximated using the TT structure. For example, one may choose the number of modes of the projections as three, i.e. $\mathcal{X}_c^k \in \mathbb{R}^{R_{d_1}\times R_{d_2} \times \hat{R}_{d_2}}$, where $1<d_1<d_2<N$. This model, named Three-way Tensor-Train Decomposition (3WTT), is given in (\ref{eq:ttscthw}) and represented graphically in Fig. \ref{fig:ttdthw}.
\vspace{-.5em}
\begin{gather}
\mathcal{Y}_c^k= \Bigg( \bigg(\mathcal{X}_c^k \times_3^{N-d_2+2} \big(\mathcal{U}_{d_2+1} \times_3^1 \dots\times_3^1 \mathcal{U}_N\big)\bigg) \times_2^{d_2-d_1+2} \nonumber \\ \big(\mathcal{U}_{d_1+1}\times_3^1\dots \times_3^1 \mathcal{U}_{d_2}\big)\Bigg)\times_1^{d_1+2} \big(\mathcal{U}_{1}\times_3^1\dots \times_3^1\mathcal{U}_{d_1} \big).
\label{eq:ttscthw}
\end{gather}
To maximize discriminability using 3WTT, one can utilize an iterative approach as in Algorithm \ref{alg:2WTTDA}, where the inputs are projected on all tensor factor sets except the set to be optimized, and TTDA is then applied to the projections. \textcolor{black}{The flowchart for the corresponding algorithm is illustrated in Fig. \ref{fig:alg3w}}. This procedure can be repeated until a convergence criterion is met or a maximum number of iterations is reached. The values of $d_{1}$ and $d_{2}$ are chosen such that the product of the dimensions corresponding to each set is as close to $(\prod_{i=1}^NI_i)^{1/3}$ as possible. It is important to note that 3WTT is only meaningful for tensors of order three or higher; for three-mode tensors, 3WTT is equivalent to the Tucker model. When there are more than four modes, the number of branches can be increased accordingly.
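The choice of $d_1$ and $d_2$ can be made by exhaustive search over the few consecutive three-way splits, as in this sketch (our own illustration):

```python
import math

def split_modes_3way(dims):
    # Choose 1 <= d1 < d2 < N so that each branch's dimension
    # product is as close as possible to (prod dims)^(1/3).
    target = math.prod(dims) ** (1.0 / 3.0)
    candidates = []
    for d1 in range(1, len(dims) - 1):
        for d2 in range(d1 + 1, len(dims)):
            prods = (math.prod(dims[:d1]),
                     math.prod(dims[d1:d2]),
                     math.prod(dims[d2:]))
            cost = sum(abs(p - target) for p in prods)
            candidates.append((cost, d1, d2))
    _, d1, d2 = min(candidates)
    return d1, d2
```

For six modes of size $2$, for instance, the split is $(d_1,d_2)=(2,4)$, so each branch has dimension product $4=(2^6)^{1/3}$.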
\def120{120}
\def60{60}
\def90{90}
\tikzstyle{block} = [rectangle, draw, fill=white!80!cyan,
text width=5em, text centered, rounded corners, minimum height=4em]
\tikzstyle{line} = [draw, very thick, color=black!50, -latex']
\tikzstyle{cloud} = [draw, ellipse, fill=olive!20, minimum height=2em]
\tikzset{
left connect/.style = {line width=1pt, draw=blue!50!cyan!45!gray, -latex'},
right connect/.style = {line width=1pt, draw=red!40!gray!60!orange, -latex'},
middle connect/.style = {line width=1pt, draw=green!50!black!40!yellow, -latex'}
}
\begin{figure*}
\begin{subfigure}[b]{.98\columnwidth}
\centering
\scalebox{0.45}{
\Large
\begin{tikzpicture}
\foreach \i in {1,...,2}
\path (225+\i*45:1*2) node (c\i) [left=7*1 cm] {$I_{\i}$};
\path (225:1*2) node (c3) [left=7*1 cm] {$I_{N}$};
\path (180:1*2) node (c4) [left=7*1 cm] {$I_{N-1}$};
\path (90:1*2) node (c5) [left=7*1 cm] {$\dots$};
\path (0:1*0) node (c6) [second node] [left=6.8*1 cm] {$\mathcal{Y}_c^k$};
\node [rotate=120] at (45:1*1) [above left=0.4*1 cm and 7.3*1 cm] {$\dots$};
\node [rotate=60] at (135:1* 1) [above left=0.25*1 cm and 7.3*1 cm] {$\dots$};
\path (180:1*6) node {\bf\large{=}};
\path (180:1*4) node (n1) [net node] {$\mathcal{U}_{1}$};
\path (180:1*4) node (b1) [below=2*1 cm] {$I_1$};
\path (180:1*2.5) node (n2) {$\dots$};
\path (180:1*1) node (n3) [net node] {$\mathcal{U}_{d}$};
\path (180:1*1) node (b2) [below=2*1 cm] {$I_{d}$};
\path (0:1*2) node (n4) [net node] {$\mathcal{U}_{d+1}$};
\path (0:1*2) node (b4) [below=2*1 cm] {$I_{d+1}$};
\path (0:1*3.5) node (n7) {$\dots$};
\path (0:1*5) node (n5) [net node] {$\mathcal{U}_{N}$};
\path (0:1*0.5) node (n6) [second node] {$X_{c}^k$};
\path (0:1*5) node (b5) [below=2*1 cm] {$I_{N}$};
\foreach \i in {9,10}{
\path (180:1*16-1*2*\i) node (n\i) [below=1.5*1 cm] {};}
\foreach \i in {1,...,5}{
\path [second connect] (c\i) -- (c6);;}
\path [net connect] (n1) -- (n2) -- (n3) -- (n6) -- (n4) -- (n7) -- (n5);;
\path [second connect] (n1) -- (b1) (n3) -- (b2) (n4)-- (b4) (n5) --(b5);;
\end{tikzpicture}
\normalsize
}
\caption{}
\label{fig:ttdtw}
\end{subfigure}
\begin{subfigure}[b]{.98\columnwidth}
\scalebox{0.5}{
\Large
\begin{tikzpicture}
\foreach \i in {1,...,2}
\path (225+\i*45:2) node (c\i) [left=7cm] {$I_{\i}$};
\path (225:2) node (c3) [left=7cm] {$I_{N}$};
\path (180:2) node (c4) [left=7cm] {$I_{N-1}$};
\path (90:2) node (c5) [left=7cm] {$\dots$};
\path (0:0) node (c6) [second node] [left=6.8cm] {$\mathcal{Y}_c^k$};
\node [rotate=120] at (45: 1) [above left=0.4cm and 7.3cm] {$\dots$};
\node [rotate=60] at (135: 1) [above left=0.25cm and 7.3cm] {$\dots$};
\path (180:6) node {\bf\large{=}};
\path (0:0.5) node (n6) [second node] {$\mathcal{X}_{c}^k$};
\path (180:4) node (n1) [below=1cm] [net node] {$\mathcal{U}_{d_1+1}$};
\path (180:4) node (b1) [below=3cm] {$I_{d_1+1}$};
\path (180:2.5) node (n2) [below=1.5cm] {$\dots$};
\path (180:1) node (n3) [below=1cm] [net node] {$\mathcal{U}_{d_2}$};
\path (180:1) node (b2) [below=3cm] {$I_{d_2}$};
\path (0:2) node (n4) [below=1cm] [net node] {$\mathcal{U}_{d_2+1}$};
\path (0:2) node (b4) [below=3cm] {$I_{d_2+1}$};
\path (0:3.5) node (n7) [below=1.5cm] {$\dots$};
\path (0:5) node (n5) [below=1cm] [net node] {$\mathcal{U}_{N}$};
\path (0:5) node (b5) [below=3cm] {$I_{N}$};
\path (180:4) node (n8) [above=1cm] [net node] {$\mathcal{U}_{1}$};
\path (180:4) node (b8) [above=3cm] {$I_{1}$};
\path (180:2.5) node (n9) [above=1.5cm] {$\dots$};
\path (180:1) node (n10) [above=1cm] [net node] {$\mathcal{U}_{d_1}$};
\path (180:1) node (b10) [above=3cm] {$I_{d_1}$};
\foreach \i in {1,...,5}{
\path [second connect] (c\i) -- (c6);;}
\path [net connect] (n8) -- (n9) -- (n10) -- (n6);;
\path [second connect] (n8) -- (b8) (n10) -- (b10);;
\path [net connect] (n1) -- (n2) -- (n3) -- (n6) -- (n4) -- (n7) -- (n5);;
\path [second connect] (n1) -- (b1) (n3) -- (b2) (n4)-- (b4) (n5) --(b5);;
\end{tikzpicture}
\normalsize
}
\caption{}
\label{fig:ttdthw}
\end{subfigure}
\begin{subfigure}[b]{.98\columnwidth}
\centering
\begin{tikzpicture}[scale=1, node distance = 2.5cm, auto]
\node [cloud] (data) {$\mathcal{Y}, \lambda, d$};
\draw (0, 1.5) node [block] (ttpca) {Apply TT in \cite{oseledets2011tensor}};
\node [cloud, left of=data] (factorl) {${\mathcal{U}_1,\dots, \mathcal{U}_d}$};
\node [cloud, right of=data] (factorr) {${\mathcal{U}_{d+1},\dots, \mathcal{U}_N}$};
\node [block, below of = factorl] (lproj) {Project $\mathcal{Y}$ according to line 4.};
\node [block, below of = data] (TTDA1) {Apply TTDA};
\node [block, below of = factorr] (rproj) {Project $\mathcal{Y}$ according to line 2.};
\path [line] (data) -- (ttpca);
\path [left connect] (lproj) -- node [below, color=black] {$\mathcal{Y}_L$} (TTDA1);
\path [right connect] (rproj) -- node [color=black] {$\mathcal{Y}_R$} (TTDA1);
\path [left connect, dashed] (TTDA1) -- (factorr);
\path [right connect, dashed] (TTDA1) -- (factorl);
\path [left connect, dashed] (data) -- (lproj);
\path [left connect, dashed] (ttpca) -- (factorl);
\path [right connect, dashed] (data) -- (rproj);
\path [right connect, dashed] (ttpca) -- (factorr);
\path [left connect, dashed] (factorl) -- (lproj);
\path [right connect, dashed] (factorr) -- (rproj);
\end{tikzpicture}
\caption{}
\label{fig:alg2w}
\end{subfigure}
\begin{subfigure}[b]{.98\columnwidth}
\begin{tikzpicture}[scale=1, node distance = 2cm, auto]
\draw (-4, -.75) node [cloud] (data) {$\mathcal{Y}, \lambda, d$};
\draw (0, 2.5) node [block] (ttpca) {Apply TT in \cite{oseledets2011tensor}};
\draw (-2.5, 0) node [cloud] (factorl) {${\mathcal{U}_1,\dots, \mathcal{U}_{d_1}}$};
\draw (0, .75) node [cloud] (factorm) {${\mathcal{U}_{d_1+1},\dots, \mathcal{U}_{d_2}}$};
\draw (2.5, 0) node [cloud] (factorr) {${\mathcal{U}_{d_2+1},\dots, \mathcal{U}_N}$};
\draw (-2, -2.5) node [block] (proj) {Project $\mathcal{Y}$};
\draw (2, -2.5) node [block] (ttda) {Apply TTDA};
\path [line] (data) to [out=90 , in=180] (ttpca);
\path [left connect] (proj) to [out=-20,in=-160] node [below, color=black] {$\mathcal{Y}_L$} (ttda);
\path [right connect] (proj) to [out=20,in=160] node [below, color=black] {$\mathcal{Y}_R$} (ttda);
\path [middle connect] (proj) -- node [below, color=black] {$\mathcal{Y}_M$} (ttda);
\path [left connect] (ttda) -- (factorl);
\path [right connect] (ttda) -- (factorr);
\path [middle connect] (ttda) -- (factorm);
\path [left connect, dashed] (data) to [out=-70,in=120] (proj);
\path [line, dashed] (ttpca) -- (factorl);
\path [right connect,dashed] (data) to [out=-10,in=60] (proj);
\path [line,dashed] (ttpca) -- (factorr);
\path [middle connect,dashed] (data) to [out=-30,in=90] (proj);
\path [line,dashed] (ttpca) -- (factorm);
\path [right connect, dashed] (factorl) to [out=-40,in=60] (proj);
\path [middle connect, dashed] (factorl) to [out=-90, in=90] (proj);
\path [left connect, dashed] (factorm) to [out=-100,in=120] (proj);
\path [right connect, dashed] (factorm) to [out=-90,in=60] (proj);
\path [left connect, dashed] (factorr) to [out=-175,in=120] (proj);
\path [middle connect, dashed] (factorr) to [out=-170,in=90] (proj);
\end{tikzpicture}
\caption{}
\label{fig:alg3w}
\end{subfigure}
\caption{\textcolor{black}{Illustration of the proposed methods: (a) The proposed tensor network structure for 2WTT; (b) The proposed tensor network structure for 3WTT; (c) The flow diagram for 2WTTDA (Algorithm \ref{alg:2WTTDA}); (d) The flow diagram for 3WTTDA}}
\end{figure*}
\vspace{-.7em}
\section{Analysis of Storage, Training Complexity and Convergence}
\label{sec:compC}
In this section, we derive the storage and computational complexities of the aforementioned algorithms and provide a convergence analysis for TTDA.
\vspace{-.5em}
\subsection{Storage Complexity}
Let $I_n=I, n \in \{1,2,\ldots,N\}$, and $R_l=r, l\in \{2,\dots, N-1\}$. Assuming $N$ is a multiple of both 2 and 3,
the total storage complexities are:
\begin{itemize}
\item $\mathcal{O}((N-1)r^2I+rI+rCK)$ for TT Decomposition, where $R_1=1, R_N=r$;
\item $\mathcal{O}((N-2)r^2I+2rI+r^2CK)$ for Two-Way TT Decomposition, where $R_1=R_N=1$;
\item $\mathcal{O}((N-3)r^2I+3rI+r^3CK)$ for Three-Way TT Decomposition, where $R_1=R_{d_1}=R_N=1$;
\item $\mathcal{O}(NrI+r^NCK)$ for Tucker Decomposition, where $R_1=R_N=r$.
\end{itemize}
\begin{table}
\centering
\caption{Storage Complexities of Different Tensor Decomposition Structures}
\begin{tabular}{c|c|c}
Methods & Subspaces ($\mathcal{U}_{n}$s) ($\mathcal{O}(.)$) & Projections $\mathcal{X}_{c}^{k}$ ($\mathcal{O}(.)$) \\
\hline
TT & $(N-1)r^2I+rI$ & $rCK$ \\
2WTT & $(N-2)r^2I+2rI$ & $r^2CK$ \\
3WTT & $(N-3)r^2I+3rI$ & $r^3CK$ \\
TD & $NrI$ & $r^NCK$
\end{tabular}
\label{tab:s_comp}
\end{table}
These results show that as the number of modes of the projected samples increases, the storage cost of $\mathcal{X}_c^k$ grows exponentially while the cost of storing the $\mathcal{U}_n$s decreases. Using the above, one can easily find the number of modes for the projected samples that minimizes the storage complexity. Let the number of modes of $\mathcal{X}_c^k$ be denoted by $f$. The storage complexity of the decomposition is then $\mathcal{O}((N-f)r^2I+f rI+r^f CK)$. Treating $f$ as continuous, taking the derivative of this complexity with respect to $f$ and equating it to zero yields the optimal \[\hat{f}= round\left(\log_r\left(\frac{r^2 I- r I}{C K \ln(r)}\right)\right),\] where $round(.)$ rounds to the closest positive integer.
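For instance, $\hat{f}$ can be computed as in the sketch below (the dimension values are hypothetical, not taken from the paper's experiments):

```python
import math

def optimal_num_modes(r, I, C, K, N):
    # f_hat = round(log_r((r^2*I - r*I) / (C*K*ln r))),
    # clipped to the valid range [1, N].
    val = (r * r * I - r * I) / (C * K * math.log(r))
    if val <= 1.0:
        return 1
    return max(1, min(N, round(math.log(val, r))))

# Hypothetical example: r=2, I=100, C=10, K=10 gives
# log2(200 / (100 * ln 2)) = log2(2.885...) ~ 1.53, so f_hat = 2.
```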
\vspace{-.5em}
\subsection{Computational Complexity}
For all of the decompositions mentioned above, except for DGTDA and LDA, the $\mathcal{U}_n$s and $\mathcal{X}_c^k$ depend on each other, which makes these decompositions iterative. The number of iterations is denoted as $t_c$ for CMDA and $t_t$ for the TT-based methods. For the sake of simplicity, we also define $C_s=2CK$. The total cost of finding the $\mathcal{U}_n$s and $\mathcal{X}_c^k$ $\forall n, c, k$, where $r\ll I$, is of the order of:
\begin{itemize}
\item $\mathcal{O}\Big(I^N\big[(C_s+t_tr(r+N-1))I^{N}+t_tr^4(I+r^2I^{-1})\big]\Big)$ for TTDA;
\item $\mathcal{O}\Big(rI^{N}\frac{C_s}{2}+2I^{N/2}\big[(C_s+t_tr(r+N/2-1))I^{N/2}+t_tr^4I+t_tr^6I^{-1}\big]\Big)$ for 2WTTDA;
\item $\mathcal{O}\Big(rI^{N}\frac{C_s}{2}+3I^{N/3}\big[(C_s+t_tr(r+N/3-1))I^{N/3}+t_tr^4I+t_tr^6I^{-1}\big]\Big)$ for 3WTTDA.
\end{itemize}
\noindent If the convergence criterion is met within a small number of iterations, i.e. $t_tr(r+N/f-1)\ll C_s$, and $I^{N/f}\gg r^6$ for all $f$, the reduced complexities are as given in Table \ref{tab:c_comp}.
\begin{table}
\centering
\caption{Computational complexities of various algorithms. The number of iterations to find the subspaces is denoted as $t_c$ for CMDA and $t_t$ for TT-based methods. $C_s=2CK$. ($r\ll I$, $t_tr(r+N/f-1)\ll C_s$, and $I^{N/f}\gg r^6$)}
\begin{tabular}{c|c}
\hline
Methods & Order of Complexity ($\mathcal{O}(.)$) \\
\hline
LDA & $C_sI^{2N}+I^{3N}$\\
\hline
DGTDA & $3I^3+NC_sI^{N+1}$ \\
\hline
CMDA & $2t_c I^3+t_cN^2C_sI^N$\\
\hline
{TTDA} & $C_sI^{2N}$ \\
\hline
{2WTTDA} & $(r/2+2)C_sI^{N}$ \\
\hline
{3WTTDA} & $(rI^{N/3}/2+3)C_sI^{2N/3}$
\end{tabular}
\label{tab:c_comp}
\end{table}
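As a quick sanity check of the reduced orders in Table \ref{tab:c_comp}, the following sketch evaluates them at hypothetical parameter values (not those of the experiments) to illustrate that the cost decreases as the number of branches grows:

```python
# Reduced computational orders from Table tab:c_comp, evaluated at
# hypothetical parameter values (not those of the experiments).
def c_ttda(N, I, Cs):
    return Cs * I**(2 * N)

def c_2wttda(N, I, Cs, r):
    return (r / 2 + 2) * Cs * I**N

def c_3wttda(N, I, Cs, r):
    return (r * I**(N / 3) / 2 + 3) * Cs * I**(2 * N / 3)

N, I, r = 6, 8, 4       # six modes of dimension 8, TT-rank 4
Cs = 2 * 10 * 20        # C_s = 2CK with C = 10 classes, K = 20 samples
print(c_ttda(N, I, Cs) > c_2wttda(N, I, Cs, r) > c_3wttda(N, I, Cs, r))  # prints True
```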
We can see from Table \ref{tab:c_comp} that, as the number of branches increases, the TT-based methods become more efficient provided the algorithm converges in a small number of iterations. This is especially the case when the ranks of the tensor factors are low, as this reduces the dimensionalities of the optimal solutions and the search algorithm finds a solution to (\ref{eq:TTDAU}) faster. When this assumption holds, the complexity is dominated by the formation of the scatter matrices. Note that the ranks are assumed to be much lower than the dimensionalities and the number of modes is assumed to be sufficiently high. When these assumptions do not hold, the complexity of computing $\mathcal{A}_n$ might be dominated by terms with higher powers of $r$. This indicates that TT-based methods are most effective when the tensors have a large number of modes and the TT-ranks of the tensor factors are low. DGTDA has an advantage over all other methods, as it is not iterative and the solution for each mode does not depend on the other modes. On the other hand, the solution of DGTDA is not optimal, and there are no convergence guarantees except when the ranks equal the initial dimensions, i.e. when there is no compression.
\vspace{-.5em}
\subsection{Convergence}
To analyze the convergence of TTDA, we must first establish a lower bound for the objective function of LDA, as (\ref{eq:TTDA}) is lower bounded by the objective value of (\ref{eq:LDA}).
\begin{lemma}
Given that $\lambda \in \mathbb{R}_+$, \textit{i.e.} a nonnegative real number, the lower bound of $tr(U^\top S_W U)-\lambda tr(U^\top S_BU)$ is achieved when $U\in \mathbb{R}^{\prod_{n=1}^N I_n\times r}$ satisfies the following two conditions simultaneously:
\begin{enumerate}
\item The columns of $U$ are in the null space of $S_{W}$: $\mathbf{u}_{j} \in null(S_{W}), \forall j \in \{1,\ldots, r\}$.
\item $\{\mathbf{u}_{1},\mathbf{u}_{2},\ldots,\mathbf{u}_{r}\}$ are the top-$r$ eigenvectors of $S_{B}$.
\end{enumerate}
In this case, the minimum value is $tr(U^\top S_W U)-\lambda tr(U^\top S_BU)=-\lambda\sum_{i=1}^{r}\sigma_{i}$, where the $\sigma_{i}$s are the $r$ largest eigenvalues of $S_{B}$.
\end{lemma}
\begin{proof}
Since $S_W$ is positive semi-definite,
\begin{equation*}
0\leq \min_{U}tr(U^\top S_{W} U),
\end{equation*}
which implies that when the columns of $U$ are in the null space of $S_{W}$, i.e. $\mathbf{u}_{j} \in null(S_{W}), \forall j \in \{1,\ldots, r\}$, the minimum value will be achieved for the first part of the objective function.
To minimize the trace difference, we need to maximize $tr(U^\top S_BU)$ which is bounded from above as:
\begin{equation*}
\max_{U}tr(U^\top S_BU)\leq \sum_{i=1}^r\sigma_i.
\end{equation*}
$tr(U^\top S_BU)$ is maximized when the columns of $U$ are the top-$r$ eigenvectors of $S_B$. Therefore, the trace difference achieves the lower-bound when $\mathbf{u}_{j} \in null(S_{W}), \forall j \in \{1,\ldots, r\}$ and $\{\mathbf{u}_{1},\mathbf{u}_{2},\ldots,\mathbf{u}_{r}\}$ are the top-$r$ eigenvectors of $S_{B}$ and this lower-bound is equal to $-\lambda\sum_{i=1}^{r}\sigma_{i}$.
\end{proof}
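The lemma can also be verified numerically by constructing $S_W$ and $S_B$ so that both conditions can hold simultaneously, i.e. with $S_B$ supported on the null space of $S_W$. The sketch below (with arbitrary illustrative sizes $d=6$, $r=2$, $\lambda=0.5$) checks that the bound $-\lambda\sum_{i=1}^r \sigma_i$ is attained by the top-$r$ eigenvectors of $S_B$ and is not violated by a random orthonormal $U$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, lam = 6, 2, 0.5

# S_W: positive semi-definite with a 3-dimensional null space
# (the first three coordinates).
B = rng.standard_normal((3, 3))
S_W = np.zeros((d, d))
S_W[3:, 3:] = B @ B.T

# S_B: positive semi-definite and supported on null(S_W), so its leading
# eigenvectors satisfy both conditions of the lemma simultaneously.
Cm = rng.standard_normal((3, 3))
S_B = np.zeros((d, d))
S_B[:3, :3] = Cm @ Cm.T

sigma, V = np.linalg.eigh(S_B)   # eigenvalues in ascending order
U = V[:, -r:]                    # top-r eigenvectors of S_B
obj = np.trace(U.T @ S_W @ U) - lam * np.trace(U.T @ S_B @ U)
bound = -lam * np.sum(sigma[-r:])

# Any other orthonormal U cannot go below the bound.
Q, _ = np.linalg.qr(rng.standard_normal((d, r)))
obj_rand = np.trace(Q.T @ S_W @ Q) - lam * np.trace(Q.T @ S_B @ Q)
print(np.isclose(obj, bound), obj_rand >= bound)
```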
As shown above, the objective function of LDA is lower bounded. Thus, the solution to (\ref{eq:TTDA}) is also lower-bounded.
Let $f(\mathcal{U}_1,\mathcal{U}_2,\dots,\mathcal{U}_N)=tr\left(U^\top S U\right)$, where $U=\mathbf{L}(\mathcal{U}_1\times_3^1\dots\times_3^1\mathcal{U}_N)$ and $S$ is defined as in \eqref{eq:LDA}. If the function $f$ is non-increasing with each update of $\mathcal{U}_n$s, i.e.
\begin{gather}
f(\mathcal{U}_1^t,\mathcal{U}_2^t,\dots,\mathcal{U}_n^{t-1},\dots,\mathcal{U}_N^{t-1})\geq \nonumber \\ f(\mathcal{U}_1^t,\mathcal{U}_2^t,\dots,\mathcal{U}_n^{t},\dots,\mathcal{U}_N^{t-1}), \qquad \forall t,n \in \{1,2,\ldots, N\},\nonumber
\end{gather}
then we can claim that Algorithm \ref{alg:TTDA} converges to a fixed point as $t \to \infty$, since $f(.)$ is lower-bounded. In \cite{wen2013feasible}, an approach that regulates the step sizes of the search algorithm was introduced to guarantee global convergence. In this paper, this approach is used to update the $\mathcal{U}_n$s.
Thus, (\ref{eq:TTDAU}) can be optimized globally, and the objective value is non-increasing.
As the Multi-Branch extensions utilize TTDA in the update of each branch, proving the convergence of TTDA is sufficient to prove the convergence of 2WTTDA and 3WTTDA.
\section{Experiments}
\label{sec:Exp}
The proposed TT-based discriminant analysis methods are evaluated in terms of classification accuracy, storage complexity, training complexity and sample size. We compare our methods\footnote{Our code is available at https://github.com/mrsfgl/MBTTDA.} with linear supervised tensor learning methods, including LDA, DGTDA and CMDA \cite{li2014multilinear}\footnote{https://github.com/laurafroelich/tensor\_classification}, as well as with other tensor-train based learning methods such as MPS \cite{bengua2017matrix}, TTNPE \cite{wang2018tensor}\footnote{https://github.com/wangwenqi1990/TTNPE.} and STTM \cite{chen2019support}\footnote{https://github.com/git2cchen/KSTTM}. The experiments were conducted on four data sets: COIL-100, Weizmann Face, Cambridge and UCF-101. For all data sets and all methods, we evaluate the classification accuracy and training complexity with respect to storage complexity.
In this paper, classification accuracy is evaluated using a 1-NN classifier and quantified as $N_{true}/N_{test}$, where $N_{true}$ is the number of test samples which were assigned the correct label and $N_{test}$ is the total number of test samples. Normalized storage complexity is quantified as the ratio of the total number of elements in the learnt tensor factors ($\mathcal{U}_{n}, \forall n$) and projections ($\mathcal{X}_{c}^{k}, \forall c,k$) of training data, $O_s$, to the size of the original training data ($\mathcal{Y}_c^k, \forall c,k$): \[\frac{O_{s}}{CK\prod_{n=1}^NI_n}.\]
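The normalized storage cost can be computed directly from the shapes of the learnt factors and projections. The following sketch uses hypothetical 2WTT-style factor shapes purely for illustration; they are not the shapes used in the actual experiments:

```python
import numpy as np

def normalized_storage(factor_shapes, proj_shape, C, K, data_shape):
    """O_s / (C K prod(I_n)): total elements in the learnt factors plus the
    C*K projected training samples, relative to the original training data."""
    O_s = sum(int(np.prod(s)) for s in factor_shapes) \
        + C * K * int(np.prod(proj_shape))
    return O_s / (C * K * int(np.prod(data_shape)))

# Hypothetical 2WTT-style example: 8x8x8x8 samples, TT-rank r = 4, two
# branches of two TT factors each, and an r x r projected core per sample.
r = 4
factors = [(1, 8, r), (r, 8, r), (1, 8, r), (r, 8, r)]
cost = normalized_storage(factors, (r, r), C=100, K=20, data_shape=(8, 8, 8, 8))
print(round(cost, 4))
```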
Training complexity is the total runtime in seconds for learning the subspaces. All experiments were repeated 10 times with random selection of the training and test sets, and the average classification accuracies are reported.
The regularization parameter, $\lambda$, for each experiment was selected using a validation set composed of all of the samples in the training set and a small subset of each class from the test set \textcolor{black}{ (10 samples for COIL-100, 5 samples for Weizmann, 1 sample for Cambridge, and 10 samples for UCF-101)}. Utilizing a leave-$s$-out approach, where $s$ is the aforementioned subset size, 5 random experiments were conducted. The optimal $\lambda$ was selected as the value that gave the best average classification accuracy among a logarithmically spaced range of values from $0.1$ to $1000$.
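A minimal sketch of this selection procedure is given below; `fit_and_score` stands in for the actual training-and-validation routine, which is not specified here:

```python
import numpy as np

# Candidate grid: logarithmically spaced values from 0.1 to 1000, as above.
lambdas = np.logspace(-1, 3, num=9)

def select_lambda(splits, fit_and_score):
    """Return the lambda with the best classification accuracy averaged over
    the random leave-s-out validation splits (fit_and_score is user-supplied)."""
    mean_acc = [np.mean([fit_and_score(split, lam) for split in splits])
                for lam in lambdas]
    return lambdas[int(np.argmax(mean_acc))]

# Toy stand-in scorer that peaks at lambda = 10.
best = select_lambda(range(5), lambda split, lam: -abs(np.log10(lam) - 1.0))
print(best)
```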
CMDA\textcolor{black}{, TTNPE and MPS} do not utilize the $\lambda$ parameter while DGTDA utilizes eigendecomposition to find $\lambda$ \cite{li2014multilinear}. \textcolor{black}{STTM has an outlier fraction parameter which was set to $0.02$ according to the original paper \cite{chen2019support}.}
\subsection{Data Sets}
\subsubsection{COIL-100}
The dataset consists of 7,200 RGB images of 100 objects of size $128\times 128$. Each object has 72 images, where each image corresponds to a different pose angle ranging from 0 to 360 degrees in increments of 5 degrees \cite{nenecolumbia}. For our experiments, the images were converted to grayscale and downsampled to $64\times 64$. Each sample image was reshaped to create a tensor of size $8\times 8\times 8\times 8$. Reshaping the inputs into higher order tensors is common practice and was studied in prior work \cite{khoromskij2011dlog, oseledets2011tensor, zhao2016tensor, cichocki2017tensor, bengua2017efficient}. 20 samples from each class were selected randomly as training data, i.e. $\mathcal{Y} \in \mathbb{R}^{8\times 8\times 8\times 8 \times 20 \times 100}$, and the remaining 52 samples were used for testing.
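The tensorization step amounts to a simple reshape of each grayscaled, downsampled image, e.g.:

```python
import numpy as np

# One 64x64 grayscale COIL-100 image becomes a 4th-order tensor of size
# 8x8x8x8; stacking 20 training samples for each of the 100 classes gives Y.
img = np.zeros((64, 64))             # placeholder for a grayscaled, downsampled image
x = img.reshape(8, 8, 8, 8)          # tensorized sample
Y = np.zeros((8, 8, 8, 8, 20, 100))  # training tensor, as in the text
print(x.shape, Y.size == 20 * 100 * 64 * 64)
```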
\subsubsection{Weizmann Face Database}
The dataset includes RGB face images of size $512\times352$ belonging to 28 subjects taken from 5 viewpoints, under 3 illumination conditions, with 3 expressions \cite{weiz}. For our experiments, each image was grayscaled, and downsampled to $64\times44$. The images were then reshaped into 5-mode tensors of size $4\times4\times4\times4\times11$ as in \cite{wang2018tensor}. For each experiment, 20 samples were randomly selected to be the training data, i.e. $\mathcal{Y}\in \mathbb{R}^{4\times4\times4\times4\times11\times20\times28}$, and the remaining 25 samples were used in testing.
\subsubsection{Cambridge Hand-Gesture Database}
The dataset consists of 900 image sequences of 9 gesture classes, which are combinations of 3 hand shapes and 3 motions. For each class, there are 100 image sequences generated by the combinations of 5 illuminations, 10 motions and 2 subjects \cite{kim2007tensor}. Sequences consist of images of size $240\times 320$ and sequence length varies. In our experiments, we used grayscale versions of the sequences, downsampled each frame to $30\times 40$, and downsampled all sequences to length $30$. We also combined the 2 subjects and 5 illuminations into a fourth mode of size 10. Thus, we have 10 samples for each of the 9 classes, from which we randomly select 4 samples as the training set, i.e. $\mathcal{Y}\in \mathbb{R}^{30\times40\times30\times10\times4\times9}$, and the remaining 6 as the test set.
\subsubsection{UCF-101 Human Action Dataset}
UCF-101 is an action recognition dataset \cite{soomro2012ucf101}. There are 13320 videos of 101 actions, where each action category may have a different number of samples. Each sample is an RGB image sequence with frame size $240\times 320\times 3$. The number of frames differs for each sample. In our experiments, we used grayscale, downsampled frames of size $30\times 40$. From each class, we extracted 100 samples to balance the class sizes, where each sample consists of 50 frames obtained by uniformly sampling each video sequence. 60 randomly selected samples from each class were used for training, i.e. $\mathcal{Y} \in \mathbb{R}^{30\times40\times50\times60\times101}$, and the remaining 40 samples were used for testing.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.98\columnwidth}
\centering
\includegraphics[width=.98\columnwidth]{coil_full_acc.png}
\caption{}
\label{fig:clsacc}
\end{subfigure}
\begin{subfigure}[b]{.98\columnwidth}
\centering
\includegraphics[width=.98\columnwidth]{Weizmann5Dacc2.png}
\caption{}
\label{fig:weizclsacc}
\end{subfigure}
\begin{subfigure}[b]{.98\columnwidth}
\centering
\includegraphics[width=.98\columnwidth]{Cambridgeacc.png}
\caption{}
\label{fig:cambclsacc}
\end{subfigure}
\begin{subfigure}[b]{.98\columnwidth}
\centering
\includegraphics[width=.98\columnwidth]{ucf101acc.png}
\caption{}
\label{fig:ucfclsacc}
\end{subfigure}
\caption{\textcolor{black}{Classification accuracy vs. Normalized storage cost of the different methods for: a) COIL-100, b) Weizmann Face, c) Cambridge Hand Gesture and d) UCF-101. All TD based methods are denoted using 'x', TT based methods are denoted using '+' and proposed methods are denoted using '*'. STTM and LDA are denoted using '$\triangle$' and 'o', respectively.}}
\label{fig:acc_all}
\end{figure*}
\begin{figure*}
\centering
\begin{subfigure}[b]{.98\columnwidth}
\centering
\includegraphics[width=.98\columnwidth]{coil_full_subtime.png}
\caption{}
\label{fig:subtime}
\end{subfigure}
\begin{subfigure}[b]{.98\columnwidth}
\centering
\includegraphics[width=.98\columnwidth]{Weizmann5Dsubtime2.png}
\caption{}
\label{fig:weizsubtime}
\end{subfigure}
\begin{subfigure}[b]{.98\columnwidth}
\centering
\includegraphics[width=.98\columnwidth]{Cambridgesubtime.png}
\caption{}
\label{fig:cambsubtime}
\end{subfigure}
\begin{subfigure}[b]{.98\columnwidth}
\centering
\includegraphics[width=.98\columnwidth]{ucf101subtime.png}
\caption{}
\label{fig:ucfsubtime}
\end{subfigure}
\caption{\textcolor{black}{Training complexity vs. Normalized storage cost of the different methods for: a) COIL-100, b) Weizmann Face, c) Cambridge Hand Gesture, and d) UCF-101.}}
\label{fig:subtimeall}
\end{figure*}
\subsection{Classification Accuracy}
We first evaluate the classification accuracy of the different methods with respect to normalized storage complexity. The varying levels of storage cost are obtained by varying the ranks $R_{i}$ in the implementation of the tensor decomposition methods. By varying the truncation parameter $\tau \in (0,1]$, the singular values smaller than $\tau$ times the largest singular value are eliminated. The remaining singular values determine the ranks $R_i$ for both TT-based and TD-based methods. For TT-based methods, the ranks are selected using the TT-decomposition proposed in \cite{oseledets2011tensor}, while for TD-based methods the truncated HOSVD was used.
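The rank-selection rule based on $\tau$ can be sketched as follows; the singular values used here are arbitrary illustrative numbers:

```python
import numpy as np

def truncated_rank(singular_values, tau):
    """Number of singular values larger than tau times the largest one;
    this count is used as the rank R_i at truncation level tau."""
    s = np.asarray(singular_values, dtype=float)
    return int(np.sum(s > tau * s.max()))

s = [10.0, 5.0, 1.0, 0.2, 0.01]
print([truncated_rank(s, tau) for tau in (0.01, 0.1, 0.6)])  # ranks shrink as tau grows
```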
Fig. \ref{fig:clsacc} illustrates the classification accuracy of the different methods with respect to normalized storage complexity for the COIL-100 data set. For this particular dataset, we implemented all of the methods mentioned above. It can be seen that the proposed discriminant analysis framework in its original form, TTDA, gives the highest accuracy, followed by TTNPE. However, these two methods only operate at very low storage complexities, since the TT-ranks of the tensor factors are constrained to be smaller than the corresponding mode's input dimensions. We also implemented STTM, which does not provide compression rates similar to the other TT-based methods, as STTM needs to learn $\frac{C(C-1)}{2}$ classifiers with TT structure. Moreover, these methods have very high computational complexity, as will be shown in Section \ref{subsec:trcomp}. For this reason, they are not included in the comparisons for the other datasets. For a wide range of storage complexities, MPS and 2WTTDA perform the best and have similar accuracy. It can also be seen that the storage costs of MPS and 2WTTDA stop increasing after some point due to the rank constraints. This is in line with the theoretical storage complexity analysis presented in Section \ref{sec:compC}. Tucker based methods, such as CMDA and DGTDA, along with the original vector based LDA, have lower classification accuracy.
Fig. \ref{fig:weizclsacc} similarly illustrates the classification accuracy of the different methods on the Weizmann Face Database. For all storage complexities, the proposed 2WTTDA and 3WTTDA perform better than the other methods, including TT based methods such as MPS.
Fig. \ref{fig:cambclsacc} illustrates the classification accuracy for the Cambridge hand gesture database. In this case, 3WTTDA performs the best for most storage costs. As the number of samples for training, validation and testing is very low for the Cambridge dataset, the classification accuracy fluctuates with the dimensionality of the features at a normalized storage cost of $0.02$. Similar fluctuations can also be seen in the results of \cite{wang2018tensor}.
Finally, we tested the proposed methods on a more realistic, large sized dataset, UCF-101. For this dataset, TT-based methods perform better than the Tucker based methods. In particular, 2WTTDA performs very close to MPS at low storage costs, whereas 3WTTDA performs well for a range of normalized storage costs and provides the highest accuracy overall.
Even though our methods outperform MPS on most datasets, the classification accuracies are close for UCF-101 and COIL-100. This is due to the large number of classes in these datasets. As the number of classes increases, the number of scatter matrices that need to be estimated also increases, which results in a larger bias given a limited number of training samples. This improved relative performance of MPS for datasets with a large number of classes is also observed when MPS is compared to CMDA. Therefore, the similar performance of MPS and the proposed methods reflects a limitation of discriminant analysis rather than of the proposed tensor network structure.
\subsection{Training Complexity}
\label{subsec:trcomp}
In order to compute the training complexity, for the TT-based methods, each set of tensor factors is optimized until the change in the normalized difference between consecutive tensor factors is less than $0.1$ or $200$ iterations are completed. After updating the factors in a branch, no further optimization is done on that branch within that iteration. CMDA iteratively optimizes the subspaces for a given number of iterations (set to 20 in our experiments to reduce runtime) or until the change in the normalized difference between consecutive subspaces is less than $0.1$.
Figs. \ref{fig:subtime}, \ref{fig:weizsubtime}, \ref{fig:cambsubtime} and \ref{fig:ucfsubtime} illustrate the training complexity of the different methods with respect to normalized storage cost for the four datasets. In particular, Fig. \ref{fig:subtime} illustrates the training complexity of all the methods, including TTNPE, TTDA and STTM, for COIL-100. It can be seen that STTM has the highest computational complexity among all of the tested methods. This is due to the fact that, for a 100-class classification problem, STTM implements $(100)(99)/2=4950$ one-vs-one binary classifiers, increasing the computational complexity. Similarly, TTNPE has high computational complexity, as it learns the manifold projections through an eigendecomposition of the embedded graph. Among the remaining methods, LDA has the highest computational complexity, as it learns from vectorized samples, which increases the dimensionality of the covariance matrices. Among the tensor based methods, the proposed 2WTTDA and 3WTTDA have the lowest computational complexity, followed by MPS and DGTDA. In particular, for large datasets like UCF-101, the difference in computational complexity between our methods and existing TT-based methods such as MPS is more than a factor of $10^2$.
\begin{table*}[htb]
\centering
\caption{\textcolor{black}{Average classification accuracy (left) and training time (right) with standard deviation for various methods and datasets.}}
\begin{tabular}{l|c|c|c|c||c|c|c|c|c}
\hline
Methods & COIL-100 & Weizmann & Cambridge &UCF-101 & (s) & COIL-100 & Weizmann & Cambridge & UCF-101\\
\hline
3WTTDA &$\mathbf{ 95.6 \pm 0.4}$ & $93.6\pm 2$&$\mathbf{98.2\pm1.7}$ &$\mathbf{68.6\pm0.8}$ & & $\mathbf{0.09 \pm 0.005}$ & $\mathbf{0.05 \pm 0.02}$ &$\mathbf{ 0.11 \pm 0.01}$ &$\mathbf{0.67\pm0.02}$ \\
2WTTDA & $94.8 \pm 0.5$ & $\mathbf{97.6 \pm 1.2}$& $89.1\pm16.7$ &$67.7\pm0.9$ & & $0.24\pm 0.06$ & $0.09 \pm 0.02$ & $1.7 \pm 1.5$ &$0.853\pm 0.13$\\
MPS & $94.2 \pm 0.2$ & $87.5 \pm 2.3$& $56.2\pm9.8$ &${67.9\pm0.6}$ & & $1.4\pm 0.13$ & $0.13 \pm 0.01$ & $2.07 \pm 0.25$ &$56.4\pm 1.9$\\
CMDA & $86.3 \pm 0.7$ & $96.4 \pm 1.03$& $95\pm 2.8$ &$67.7\pm0.8$ & & $12.2\pm6.6$ & $2.6 \pm 0.3$ & $12.6 \pm 0.3$ & $413.5\pm24.1$\\
DGTDA & $76.6 \pm 0.9$ & $69.9 \pm 1.8$& $35.4\pm8.7$ &$57.3\pm2.7$ & & $0.7\pm0.06$ & $0.16 \pm 0.02$ & $0.7 \pm 0.04$ & $35.3\pm2.9$
\end{tabular}
\label{tab:acc}
\end{table*}
\subsection{Convergence}
In this section, we present an empirical study of the convergence of TTDA in Fig. \ref{fig:conv}, where we report the objective value of TTDA, i.e. the expression inside the argmin operator in (\ref{eq:TTDAU}), with random initialization of the projection tensors. This figure illustrates the convergence of the TTDA algorithm, which is at the core of both 2WTTDA and 3WTTDA, on the COIL-100 dataset. It can be seen that even for random initializations of the tensor factors, the algorithm converges in a small number of steps. The convergence rates for 2WTTDA and 3WTTDA are faster than that of TTDA, as they update smaller projection tensors, as shown in Section \ref{sec:compC}.
\begin{figure}
\centering
\includegraphics[width=.98\columnwidth]{convergence_COIL_3.png}
\caption{ \textcolor{black}{Convergence curve for TTDA on COIL-100. Objective value vs. the number of iterations is shown. }}
\label{fig:conv}
\end{figure}
\subsection{Effect of Sample Size on Accuracy}
We also evaluated the effect of training sample size on classification accuracy for the Weizmann dataset. In Fig. \ref{fig:weizsample}, we illustrate the classification accuracy with respect to training sample size for the different methods. It can be seen that 3WTTDA provides a high classification accuracy even for small training sets, e.g., with 15 training samples it provides an accuracy of $96\%$. This is followed by CMDA and 2WTTDA. It should also be noted that DGTDA is the most sensitive to sample size, as it does not reach a local optimum and more data allows it to learn better classifiers.
\begin{figure}
\centering
\includegraphics[width=.98\columnwidth]{WeizmannClassizeAcc.png}
\caption{ \textcolor{black}{Comparison of classification accuracy vs. training sample size for Weizmann Face Dataset for different methods.} }
\label{fig:weizsample}
\end{figure}
\subsection{Summary of Experimental Results}
\label{subsec:summary}
In Table \ref{tab:acc}, we summarize the performance of the different algorithms for the four datasets considered in this paper. In the left half of this table, we report the classification accuracy (mean $\pm$ std) of the different methods for a fixed normalized storage cost of about $2\times10^{-2}$ for COIL-100, $6\times10^{-3}$ for Weizmann Face, $2\times10^{-4}$ for Cambridge Hand Gesture and $10^{-3}$ for UCF-101. At the given compression rates, the proposed 3WTTDA and 2WTTDA perform better than the other tensor based methods on all datasets. In some cases, the improvement in classification accuracy is significant, e.g. for the Weizmann and Cambridge data sets. These results show that the proposed methods achieve the best trade-off between normalized storage complexity and classification accuracy.
Similarly, the right half of Table \ref{tab:acc} summarizes the average training complexity of the different methods at the same normalized storage costs. From this table, it can be seen that 3WTTDA is the most computationally efficient method for all datasets, followed by 2WTTDA. The difference in computational time becomes more significant as the size of the dataset increases, e.g. for UCF-101. Therefore, even when the other methods perform well on some of the datasets, the proposed methods provide higher accuracy at a computational complexity reduced by a factor of $10^{2}$.
\section{Conclusions}
In this paper, we proposed a novel approach to tensor-train based discriminant analysis for tensor object classification. The proposed approach first formulated linear discriminant analysis such that the learnt subspaces have a TT structure. The resulting framework, TTDA, reduces storage complexity at the expense of high computational complexity. This increase in computational complexity is then addressed by reshaping the projection vector into matrices and third-order tensors, resulting in 2WTTDA and 3WTTDA, respectively. A theoretical analysis of storage and computational complexity illustrated the trade-off between these two quantities and suggested a way to select the optimal number of modes in the reshaping of the TT structure. The proposed methods were compared with state-of-the-art TT-based subspace learning methods as well as tensor based discriminant analysis on four datasets. While providing reduced storage and computational costs, the proposed methods also yield higher or similar classification accuracy compared to state-of-the-art tensor based learning methods such as CMDA, STTM, TTNPE and MPS.
The proposed multi-branch structure can also be extended to unsupervised methods such as dictionary learning and subspace learning applications. The structure can also be optimized by permuting the modes in a way that the dimensions are better balanced than in the original order.
\bibliographystyle{IEEEtran}
\section{Introduction}
The small-scale dynamics of turbulent flows is governed by highly non-linear and non-local dynamical processes, whose statistics are strongly intermittent in space and time \citep{Yeung2012,Buaria2019}.
Moreover, the strong and intermittent small-scale dynamics can generate coherent structures at larger scales \citep{Majda2001}. Such small-scale dynamics is effectively characterized by the velocity gradient field, rather than the velocity field itself \citep{Tsinober2001}. Consequently, understanding and modelling the velocity gradient dynamics is of singular importance in the study of turbulence and has been the subject of many works in the literature. In particular, the Lagrangian description of the velocity gradient dynamics has proven to be especially fruitful for understanding and modeling \citep{Meneveau2011}.
The equation governing the velocity gradient tensor dynamics along a fluid particle trajectory is easily derived from the Navier-Stokes equation (NSE), but the equation is unclosed because of the anisotropic/non-local pressure Hessian and viscous terms. Developing closure models for these complex terms requires insight, and this work concentrates on the properties of the anisotropic pressure Hessian.
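For reference, a standard form of this unclosed equation (written in conventional notation, with $A_{ij}=\partial u_i/\partial x_j$ the velocity gradient, $p$ the pressure per unit density and $\nu$ the kinematic viscosity; this is the textbook form, not a quotation from the works cited) is

```latex
\begin{equation*}
  \frac{\mathrm{d}A_{ij}}{\mathrm{d}t}
  = -A_{ik}A_{kj}
  - \frac{\partial^{2} p}{\partial x_{i}\,\partial x_{j}}
  + \nu \nabla^{2} A_{ij},
\end{equation*}
```

where the second and third terms on the right-hand side are the unclosed pressure Hessian and viscous contributions.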
The pressure field can be expressed as a linear, non-local, functional of the second invariant of the velocity gradient tensor. Therefore, a strategy to infer the statistical properties of the pressure field consists in analyzing how the velocity gradient organizes in space.
A quantitative investigation of the correlation length of the velocity gradient magnitude shows that, in rotation-dominated regions, the pressure field is governed by a dissipation-scale neighbourhood while, in strain-dominated regions, the pressure is determined by an inertial-scale neighbourhood \citep{Vlaykov2019}. However, many works in the literature have shown that the pressure statistics can be described reasonably well by quasi-local approximations \citep{Chevillard2008,Lawson2015}. Indeed, the long-range effects to the pressure field are much smaller than expected due to partial cancellation of the competing contributions of the strain-rate and vorticity magnitude to the second invariant of the velocity gradient \citep{Vlaykov2019}.
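The partial cancellation referred to above can be made explicit through the incompressible pressure Poisson equation which, in the standard unit-density notation assumed here, ties the pressure to the second invariant of the velocity gradient via the competing strain-rate ($S_{ij}$) and vorticity ($\omega_i$) contributions:

```latex
\begin{equation*}
  \nabla^{2} p = -A_{ij}A_{ji}
  = \tfrac{1}{2}\,\omega_{i}\omega_{i} - S_{ij}S_{ij},
\end{equation*}
```

so that enstrophy and strain enter the source term with opposite signs.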
The information about the statistics of the pressure field can then be employed to develop closure models for the Lagrangian dynamics of the velocity gradient in turbulence. In the inviscid case, an early closure model by \cite{Vieillefosse1982} was derived by neglecting the non-local/anisotropic part of the pressure Hessian, while retaining its local/isotropic part. This model is usually referred to as the Restricted Euler (RE) model. It led to important insights, showing the tendency for the intermediate eigenvalue of the strain-rate to be positive, as well as the preferred alignment of the vorticity with the intermediate strain-rate eigenvector \citep{Cantwell1992}, as observed in Direct Numerical Simulations (DNS) of isotropic turbulence and homogeneous shear flows \citep{Ashurst1987}.
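In this approximation, the pressure Hessian is replaced by its isotropic part, $-\tfrac{1}{3}A_{kl}A_{lk}\,\delta_{ij}$, and viscous effects are dropped, so the Restricted Euler dynamics reads (a standard form, not a quotation from the works cited):

```latex
\begin{equation*}
  \frac{\mathrm{d}A_{ij}}{\mathrm{d}t}
  = -A_{ik}A_{kj} + \frac{A_{kl}A_{lk}}{3}\,\delta_{ij},
\end{equation*}
```

which preserves incompressibility ($A_{ii}=0$) but develops a finite-time singularity for almost all initial conditions.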
However, the RE flow exhibits a finite-time singularity for almost all initial conditions, indicating that a realistic model for the velocity gradient should take into account the anisotropic pressure Hessian, in addition to viscous contributions. Indeed, the anisotropic pressure Hessian is considered to play a major role in preventing such finite-time singularities, even for ideal fluids, and it has been analyzed in detail in several works \citep{Ohkitani1993,Nomura1998,Chevillard2008,Vlaykov2019}.
In an early work, the anisotropic pressure Hessian was modelled as a stochastic process, independent of the gradient dynamics, and the stochastic differential equations for the velocity gradient were constructed to satisfy isotropy constraints as well as empirical constraints such as the log-normality of the dissipation rate \citep{Girimaji1990a}. A more advanced phenomenological and stochastic model was constructed in \cite{Chertkov1991} by analyzing the Lagrangian dynamics using four tracer trajectories, forming a tetrad. The tetrad can be used to construct a scale-dependent filtered velocity gradient \citep{Naso2005}, and the closure of the model involves a direct relation between the local pressure and the velocity gradient on the tetrad.
The tetrad model provided a phenomenological basis for understanding how the anisotropic pressure Hessian acts to reduce non-linearity in the flow, a property that also emerges in more systematic closures for the pressure Hessian based on Gaussian random fields \citep{Wilczek2014}.
The deformation history of a fluid particle in the flow has been employed to model the anisotropic pressure Hessian and viscous terms using Lagrangian coordinate closures \citep{Chevillard2006}. In this model, only information on the recent fluid deformation (RFD) is retained, that is, the dynamics is affected by times up to the Kolmogorov timescale, $\tau_\eta$, in the past. A phenomenological closure is then constructed assuming that at a time $\tau_\eta$ in the past, the Lagrangian pressure Hessian was isotropic. This model does not exhibit the singularity associated with the RE, and was shown to capture many of the non-trivial features of the velocity gradient dynamics that are observed in experiments and Direct Numerical Simulations of the NSE. However, it displays unphysical behaviour for flows at large Reynolds number. A critical comparison with DNS data \citep{Chevillard2008} showed that while the closure model presented in \cite{Chevillard2006} can reproduce some of the non-trivial velocity gradient dynamics, it misses some important features of the pressure Hessian dynamics and statistical geometry in the flow.
\cite{Wilczek2014} proposed a closure for the Lagrangian velocity gradient equation by assuming that the velocity is a random field with Gaussian statistics. Closed expressions for the pressure Hessian and viscous terms conditioned on the velocity gradient are obtained by means of the characteristic functional of the Gaussian velocity field. The model produces qualitatively good results but, owing to the Gaussian assumption, it leads to quantitative predictions that are not in full agreement with DNS data. Therefore, to correct this aspect, the authors modified the closure such that the mathematical structure was retained, but the coefficients appearing in the model were prescribed using DNS data. This led to significant improvements, and the model provides interesting insights into the role of the anisotropic pressure Hessian in preventing the singularities arising in the RE model. However,
the enhanced model did not satisfy the kinematic relations for incompressible and isotropic flows \citep{Betchov1956}.
Another model has been developed by \cite{Johnson2016}, who combined the closure modelling ideas of both \cite{Chevillard2006} and \cite{Wilczek2014}. This model leads to improvements compared with the two models on which it is based, and it is formulated in such a way that, by construction, it satisfies the kinematic relations of \cite{Betchov1956}. However, a quantitative comparison with DNS data revealed some shortcomings in the ability of the model to properly capture the intermittency of the flow. Moreover, it runs into difficulties for high Reynolds number flows, like the model of \cite{Chevillard2006} from which it is partly derived. The capability to reproduce intermittency and high Reynolds number flow features is a major challenge for velocity gradient models. A recent development of velocity gradient models, based on a multiscale refined self-similarity hypothesis and proposed by \cite{Johnson2017}, seems to remove the Reynolds number limitations (at least in the sense that the model does not break down at high Reynolds numbers).
In summary, while significant progress has been made since the initial modelling efforts of \cite{Vieillefosse1982,Vieillefosse1984}, much remains to be done. A major difficulty in developing accurate closure approximations for the Lagrangian velocity gradient equation is that the dynamical effects of the anisotropic/non-local pressure Hessian on the flow are not yet fully understood and are difficult to approximate using simple closure ideas.
This fact is the motivation behind the present work which aims to improve the understanding of the anisotropic pressure Hessian, and in particular, its statistical geometry relative to the strain-rate and vorticity fields.
In the following, we present what appears to be a previously unrecognized gauge symmetry of the pressure Hessian: when a suitable gauge term is added to the pressure Hessian, the invariant dynamics of the velocity gradient tensor remains unchanged. We then exploit this gauge symmetry to perform a rank reduction on the anisotropic pressure Hessian.
Remarkably, this rank reduction can be performed everywhere in the turbulent flow, and yields the newly introduced rank-reduced anisotropic pressure Hessian, which lives on a two-dimensional manifold and exhibits striking alignment properties with respect to the strain-rate eigenframe and the vorticity vector. This dimensionality reduction, together with the evident preferential alignments of the rank-reduced anisotropic pressure Hessian, has implications for the understanding and modelling of turbulent flows.
\section{Theory}
In this section, the gauge symmetry for the invariant dynamics is derived from the equations for the velocity gradient written in the strain-rate eigenframe. The gauge is then exploited to reduce the rank of the anisotropic pressure Hessian, yielding a rank-reduced anisotropic pressure Hessian that is a two-dimensional object embedded in three-dimensional space.
\subsection{Equations for the fluid velocity gradient in the strain-rate eigenframe}
The three-dimensional flow of a Newtonian and incompressible fluid with unit density is described by the Navier-Stokes equations
\begin{equation}
D_t\bm{u} \equiv \partial_t\bm{u} + (\bm{u\cdot}\nabla) \bm{u} = -\nabla P + \nu\nabla^2\bm{u},\quad \nabla\bm{\cdot}\bm{u} = 0,
\label{eq_NS}
\end{equation}
where $\bm{u}(t,\bm{x})$, $P(t,\bm{x})$ are the fluid velocity and pressure fields and $\nu$ is the kinematic viscosity. By taking the gradient of \eqref{eq_NS}, the following equation for the velocity gradient tensor is obtained
\begin{align}
D_t\bm{A} = -\bm{A\cdot A} - \bm{H} + \nu \nabla^2 \bm{A},\quad \text{\textrm{Tr}}(\bm{A} )&= 0,
\label{eq_grad}
\end{align}
where $\bm{A}\equiv \bm{\nabla u}$ is the velocity gradient, and $\bm{H}\equiv \bm{\nabla\nabla}P$ is the pressure Hessian.
The pressure and viscous terms in equation \eqref{eq_grad} are not in closed form, since they cannot be expressed in terms of the velocity gradient along the fluid particle trajectory, $\bm{A}(t,\bm{x}(t))$. Models are necessary to define those terms and reliable modelling of them requires an understanding of their dynamical and statistical properties \citep{Meneveau2011}.
The tensor $\bm{A}$ is decomposed into its symmetric and anti-symmetric part, namely the strain-rate $\bm{S} \equiv (\bm{A} + \bm{A}^\top)/2$, and the rate-of-rotation $\bm{R} \equiv (\bm{A} - \bm{A}^\top)/2$, whose components are related to the vorticity $\bm{\omega}\equiv\bm{\nabla}\times\bm{u}$ as $R_{ij}=\epsilon_{ikj}\omega_k/2$.
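This decomposition is straightforward to check numerically. The following minimal sketch (using \texttt{numpy}; the random traceless matrix is only a stand-in for a sampled velocity gradient) verifies the component relation $R_{ij}=\epsilon_{ikj}\omega_k/2$, together with the identity $\bm{R\cdot\omega}=\bm{0}$ that is used later in the derivation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random traceless matrix standing in for a sampled velocity gradient A = grad(u).
A = rng.standard_normal((3, 3))
A -= np.trace(A) / 3 * np.eye(3)

S = 0.5 * (A + A.T)   # strain-rate tensor (symmetric part)
R = 0.5 * (A - A.T)   # rate-of-rotation tensor (anti-symmetric part)

# Vorticity components recovered from R via R_ij = eps_ikj omega_k / 2.
omega = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]])

# Levi-Civita symbol, used to rebuild R from omega.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

R_rebuilt = 0.5 * np.einsum('ikj,k->ij', eps, omega)
assert np.allclose(R, R_rebuilt)      # R_ij = eps_ikj omega_k / 2
assert np.allclose(R @ omega, 0.0)    # R.omega = 0 identically
```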
Using equation \eqref{eq_grad} the equations for $\bm{S}$ and $\bm{\omega}$ are obtained, and it is insightful to write these in the eigenframe of $\bm{S}$. The eigenvectors $\bm{v}_i$ of the strain-rate satisfy $\bm{v}_i\bm{\cdot v}_j=\delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta, and thus define an orthonormal basis. The strain-rate eigenvectors remain orthogonal so that the strain-rate basis undergoes rigid body rotation only, with rotation rate $\bm{w}$,
\begin{equation}
D_t\bm{v}_i = \bm{w}\times\bm{v}_i.
\end{equation}
The equations for the velocity gradient in the strain-rate eigenframe read
\begin{align}
\sum_{j=1}^3 \lambda_j &= 0 \label{eq_cont_eigen}\\
D_t{\lambda_{i}} &= -\lambda^2_{i} + \frac{1}{4}\left( \omega^2 - \widetilde{\omega}_i^2\right) - \widetilde{H}_{i(i)} +\widetilde{ \nu\nabla^2 S_{i(i)}},\label{eq_lambda}\\
\widetilde{W}_{ij}\left(\lambda_{(j)}-\lambda_{(i)}\right) &= -\frac{1}{4} \widetilde{\omega}_i\widetilde{\omega}_j - \widetilde{H}_{ij} + \widetilde{\nu\nabla^2{S}_{ij}}, \; j \ne i,\label{eq_alg}\\
D_t{\widetilde{\omega}_i} &= \lambda_{(i)}\widetilde{\omega}_i - \widetilde{W}_{ij}\widetilde{\omega}_j +\widetilde{ \nu\nabla^2\omega_i},\textrm{ for $i=1,2,3$}\label{eq_omega}
\end{align}
where $\lambda_i$ are the strain-rate eigenvalues, the tilde indicates tensor components in the strain-rate eigenframe, so that $\widetilde{\omega}_i=\bm{v}_i\bm{\cdot\omega}$ and $\widetilde{H}_{ij}=\bm{v}_i\bm{\cdot H \cdot v}_j$, and ${\omega}^2\equiv\widetilde{\omega}_i\widetilde{\omega}_i$.
In these equations, indices in brackets are not contracted.
The anti-symmetric tensor $\bm{W}$ is related to the eigenframe angular velocity $\bm{w}$ through
\begin{equation}
W_{ij}=\epsilon_{ikj}w_k
\end{equation}
and $\widetilde{W}_{ij}$ are the components of $\bm{W}$ in the strain-rate eigenframe. The eigenframe equations \eqref{eq_cont_eigen}, \eqref{eq_lambda}, \eqref{eq_alg} and \eqref{eq_omega} make explicit the interaction between local strain and vorticity, and have been studied in detail \citep{Vieillefosse1982,Dresselhaus1992,Nomura1998}.
\subsection{A new symmetry for the dynamics of the velocity gradient invariants}
The eigenframe equations satisfy basic symmetries.
They are naturally invariant under the transformation $\widetilde{\omega}_i \to-\widetilde{\omega}_i$, since the eigenvectors are only defined up to an arbitrary sign.
The inviscid equations are also invariant under time reversal $t\rightarrow -t$.
However, the equations also possess another kind of symmetry that does not appear to have been previously recognized.
This new symmetry arises from the fact that, in the equation governing $\widetilde{\omega}_i$, the strain-rate eigenframe rotation rate $\bm{w}$ enters only through the cross product $\widetilde{W}_{ij}\widetilde{\omega}_j$, and therefore its component along the vorticity direction, $\bm{w\cdot\omega}$, does not affect the time evolution of the velocity gradient invariants in any way.
To show this, we first define the transformation
\begin{eqnarray}
\bm{W} \to \bm{W} + \gamma\bm{R},
\label{eq_gauge_W}
\end{eqnarray}
that corresponds to adding to the rotation rate of the strain-rate eigenframe an additional rotation about the vorticity axis at rate $\gamma\omega/2$, where $\gamma(t,\bm{x})$ is a non-dimensional scalar field. If we introduce the transformation \eqref{eq_gauge_W} into the eigenframe equations, the equations governing the strain-rate eigenvalues \eqref{eq_lambda} and the vorticity components in the strain-rate eigenframe \eqref{eq_omega} remain unchanged. Indeed, the equation for $\lambda_i$ is not affected by the transformation \eqref{eq_gauge_W} since it does not contain $\bm{W}$. The equation for $\widetilde{\omega}_i$ is also unaffected since, by definition, $\bm{R\cdot\omega}=\bm{0}$ and, therefore,
\begin{equation}
D_t{\widetilde{\omega}_i} = \lambda_{(i)}\widetilde{\omega}_i - \left[\widetilde{W}_{ij}+\gamma \widetilde{R}_{ij}\right]\widetilde{\omega}_j + \widetilde{ \nu\nabla^2\omega_i}= \lambda_{(i)}\widetilde{\omega}_i - \widetilde{W}_{ij}\widetilde{\omega}_j + \widetilde{ \nu\nabla^2\omega_i}.
\label{eq_omega_mod}
\end{equation}
On the other hand, the off-diagonal algebraic equation \eqref{eq_alg} becomes
\begin{equation}
\widetilde{W}_{ij}\left(\lambda_{(j)}-\lambda_{(i)}\right) = -\frac{1}{4} \widetilde{\omega}_i\widetilde{\omega}_j - \widetilde{H}_{ij} -\gamma \widetilde{R}_{ij}\left(\lambda_{(j)}-\lambda_{(i)}\right) + \widetilde{\nu\nabla^2{S}_{ij}}, \; j\ne i.
\label{eq_off_diag_mod}
\end{equation}
This equation is not invariant under the transformation \eqref{eq_gauge_W}. However, while the transformation changes the orientation of the strain-rate eigenframe with respect to a fixed, arbitrary reference frame, it affects neither $\lambda_i$ nor $\widetilde{\omega}_i$.
Therefore, the transformation $\bm{W} \to \bm{W} + \gamma\bm{R}$ corresponds to a symmetry for the invariants of the velocity gradient tensor, which can be expressed in terms of $\lambda_i$ and $\widetilde{\omega}_i$. For example, the second and third invariants of the velocity gradient tensor can be written as
\begin{align}
Q = -\sum_i\lambda_i^2/2+\sum_i\widetilde{\omega}_i^2/4, &&
R = -\sum_i\lambda_i^3/3-\sum_i\lambda_i\widetilde{\omega}_i^2/4.
\label{eq_def_RQ}
\end{align}
It is important to note, however, that multi-time or multi-point invariants of the velocity gradients are not in general invariant under the gauge transformation. For example, $\bm{S}(t,\bm{x}(t))\bm{:S}(t',\bm{x}(t'))$ is affected by the gauge transformation since the transformation arbitrarily modifies the relative orientations of the eigenframes of $\bm{S}(t,\bm{x}(t))$ and $\bm{S}(t',\bm{x}(t'))$. Nevertheless, multi-time or multi-point products of $\lambda_i$ or $\widetilde{\omega}_i$ are invariant under the gauge transformation. In this paper, we focus on single-point and single-time quantities.
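The eigenframe expressions \eqref{eq_def_RQ} can be checked against the standard definitions $Q=-\text{Tr}(\bm{A}^2)/2$ and $R=-\text{Tr}(\bm{A}^3)/3$. A minimal numerical sketch (\texttt{numpy}; a random traceless matrix stands in for the velocity gradient, and the arbitrary signs of the eigenvectors are immaterial since only squares of $\widetilde{\omega}_i$ enter):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)); A -= np.trace(A) / 3 * np.eye(3)
S = 0.5 * (A + A.T); R = 0.5 * (A - A.T)
omega = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]])

lam, V = np.linalg.eigh(S)    # strain-rate eigenvalues and eigenvectors
om_t = V.T @ omega            # vorticity components in the eigenframe

# Eigenframe expressions for the invariants.
Q_eigen = -np.sum(lam**2) / 2 + np.sum(om_t**2) / 4
R_eigen = -np.sum(lam**3) / 3 - np.sum(lam * om_t**2) / 4

# Standard definitions: Q = -Tr(A^2)/2, R = -Tr(A^3)/3.
assert np.isclose(Q_eigen, -np.trace(A @ A) / 2)
assert np.isclose(R_eigen, -np.trace(A @ A @ A) / 3)
```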
\subsection{Gauge symmetry for the anisotropic pressure Hessian}
The anisotropic/non-local pressure Hessian is defined as
\begin{align}
\bm{\mathcal{H}}\equiv\bm{H}-\frac{1}{3}\mathbf{I}\text{\textrm{Tr}}(\bm{H})=\bm{H}+\frac{1}{3}\mathbf{I}(\bm{A : A}),
\end{align}
where $\mathbf{I}$ is the three-dimensional identity matrix. This anisotropic pressure Hessian satisfies $\text{\textrm{Tr}}(\bm{\mathcal{H}})=0$ and contains all of the non-local part of $\bm{H}$. It is also important to notice that its non-local dependence on the flow field is only through the second invariant of the velocity gradient $Q$ \citep{Majda2001}.
The invariance of the eigenframe dynamics under the transformation $\bm{W} \to \bm{W} + \gamma\bm{R}$ can be interpreted as a gauge symmetry for $\bm{\mathcal{H}}$. That is, the term $\gamma \widetilde{R}_{ij}\left(\lambda_{(j)}-\lambda_{(i)}\right)$ in equation \eqref{eq_off_diag_mod} is added to $\widetilde{\mathcal{H}}_{ij}$, defining $\bm{\mathcal{H}}_\gamma=\bm{\mathcal{H}}+\delta \bm{\mathcal{H}}$, without affecting the eigenframe dynamics, which is described through $\lambda_i$ and $\widetilde{\omega}_i$.
In particular, the gauge term
\begin{equation}
\delta\bm{\mathcal{H}} = \gamma \sum_{i,j}\widetilde{R}_{ij}\left(\lambda_{j}-\lambda_{i}\right)\bm{v}_i\bm{v}_j^\top
\end{equation}
can be written in terms of the commutator of the anti-symmetric and symmetric parts of the velocity gradient,
\begin{equation}
\delta\bm{\mathcal{H}} = \gamma\left[\bm{R},\bm{S}\right],
\label{eq_gauge_comm}
\end{equation}
where $\left[\bm{R},\bm{S}\right]\equiv\bm{R\cdot S}-\bm{S\cdot R}$. Then, the gauge symmetry consists in the fact that the single-point and single-time Lagrangian dynamics of the velocity gradient invariants is identical when $\bm{\mathcal{H}}$ is replaced by
\begin{equation}
\bm{\mathcal{H}}_\gamma = \bm{\mathcal{H}} + \gamma[\bm{R},\bm{S}].
\label{eq_H_gamma}
\end{equation}
The gauge symmetry holds for any real and finite multiplier $\gamma(t,\bm{x})$, which at this stage remains undetermined.
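That the eigenframe gauge term coincides with $\gamma[\bm{R},\bm{S}]$, and that the commutator has zero diagonal in the strain-rate eigenframe (so that it never enters the $\lambda_i$ equation), can be verified directly. A minimal \texttt{numpy} sketch, with a random traceless matrix standing in for the velocity gradient:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3)); A -= np.trace(A) / 3 * np.eye(3)
S = 0.5 * (A + A.T); R = 0.5 * (A - A.T)
lam, V = np.linalg.eigh(S)

comm = R @ S - S @ R                 # [R, S]
Rt = V.T @ R @ V                     # R in the strain-rate eigenframe

# Eigenframe components of the gauge term: R~_ij (lambda_j - lambda_i).
gauge_t = Rt * (lam[None, :] - lam[:, None])
assert np.allclose(V.T @ comm @ V, gauge_t)

# [R,S] is symmetric and traceless (a valid gauge for H), and its zero
# eigenframe diagonal means it does not enter the lambda_i equation;
# R.omega = 0 separately guarantees the omega~_i equation is unchanged.
assert np.allclose(comm, comm.T)
assert np.isclose(np.trace(comm), 0.0, atol=1e-12)
assert np.allclose(np.diag(gauge_t), 0.0)
```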
It is interesting to note that a term identical to that in equation \eqref{eq_gauge_comm} also arises from a closure of the pressure Hessian assuming a random velocity field with Gaussian statistics \citep{Wilczek2014}. In the framework of the Gaussian closure, the coefficient of $[\bm{R},\bm{S}]$ is the only one that requires specific knowledge of the spatial structure of the flow and must be prescribed by a phenomenological closure hypothesis, while all other coefficients of the model can be determined exactly. However, our analysis implies that the ability of the Gaussian closure to predict the invariants of the velocity gradient tensor is not impacted by this phenomenological closure hypothesis, since its contribution to the dynamics corresponds to the gauge term in equation \eqref{eq_gauge_comm}, which does not affect the velocity gradient invariants.
\subsection{Using the gauge symmetry for dimensionality reduction}\label{RR}
While any finite and real $\gamma$ provides a suitable $\bm{\mathcal{H}}_\gamma$, there may exist choices of $\gamma$ that generate representations of $\bm{\mathcal{H}}_\gamma$ living on a lower-dimensional manifold (in the sense that some of its eigenvalues are zero). If such configurations exist and are common, this could significantly aid the understanding and modelling of the anisotropic pressure Hessian in the turbulence dynamics. Seeking such lower-dimensional configurations is equivalent to seeking configurations in which a rank reduction of $\bm{\mathcal{H}}_\gamma$ can be performed. We denote such rank-reduced forms of $\bm{\mathcal{H}}_\gamma$ by $\bm{\mathcal{H}}_\gamma^*$. Notice that $\mathrm{rk}(\bm{\mathcal{H}}_\gamma^*)=1$ is not possible since $\text{\textrm{Tr}}(\bm{\mathcal{H}}_\gamma)=0$, and therefore either $\mathrm{rk}(\bm{\mathcal{H}}_\gamma^*)=2$ or $\bm{\mathcal{H}}_\gamma^*=\bm{0}$.
In seeking lower-dimensional representations, when $\bm{\mathcal{H}}$ is singular the gauge term is not needed, as $\bm{\mathcal{H}}$ already lives on a lower-dimensional manifold, and we take $\bm{\mathcal{H}}_\gamma^*=\bm{\mathcal{H}}$, corresponding to the choice $\gamma=0$.
On the other hand, when $\bm{\mathcal{H}}$ is not singular we seek a non-zero vector $\bm{z}_2$ such that $\bm{\mathcal{H}}_\gamma^*\bm{\cdot z}_2=\bm{0}$, where $\bm{z}_2$ corresponds to the eigenvector of $\bm{\mathcal{H}}_\gamma^*$ associated with its zero (and intermediate) eigenvalue. This is equivalent to the generalized eigenvalue problem
$\det\left(\bm{\mathcal{H}}_\gamma^*\right)=0$, that is,
\begin{align}
\det\left(\mathbf{I} + \gamma\bm{\mathcal{H}}^{-1}\left[\bm{R},\bm{S}\right]\right) = 0.
\label{eq_gen_evp}
\end{align}
Notice that $\bm{\mathcal{H}}$ can be safely inverted in equation \eqref{eq_gen_evp}, since the case of singular $\bm{\mathcal{H}}$ has been already taken into account and corresponds to $\gamma=0$.
If there exist finite and real values for $\gamma$ that solve equation \eqref{eq_gen_evp}, then those values of $\gamma$ generate a rank-two $\bm{\mathcal{H}}_\gamma^*$. Defining $\bm{\mathcal{E}}\equiv\bm{\mathcal{H}}^{-1}\left[\bm{R},\bm{S}\right]$, the characteristic equation governing $\xi\equiv -1/\gamma$ reads
\begin{align}
\xi^3 -c\xi^2-b\xi - a= 0,
\label{eq_psi}
\end{align}
with coefficients $a,b,c\in \mathbb{R}$ given by
\begin{align}
a\equiv \det(\bm{\mathcal{E}}), && b\equiv \frac{1}{2}\left(\bm{\mathcal{E}:\mathcal{E}}-\text{\textrm{Tr}}(\bm{\mathcal{E}})\text{\textrm{Tr}}(\bm{\mathcal{E}})\right), && c\equiv \text{\textrm{Tr}}(\bm{\mathcal{E}}).
\label{abc}
\end{align}
The properties of the roots of \eqref{eq_psi} are determined by the discriminant of the polynomial
\begin{align}
\mu\equiv b^2 c^2 + 4b^3 - 4c^3a - 27a^2 - 18abc.
\label{disc}
\end{align}
When $\mu=0$, all of the roots of \eqref{eq_psi} are real and at least two are equal, when $\mu>0$ there are three distinct real roots, and when $\mu<0$ there is one real root and two complex conjugate roots. In every case, there is at least one real root since all the coefficients are real and the degree of the characteristic polynomial is odd. Provided that $a\neq 0$, a real and finite $\gamma\equiv -1/\xi$ exists. When $a=0$, a real and finite $\gamma$ may or may not exist according to the value of the discriminant $\mu$.
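In practice, for given $\bm{\mathcal{H}}$, $\bm{S}$ and $\bm{R}$, the admissible multipliers follow from the roots of the cubic \eqref{eq_psi}. The following \texttt{numpy} sketch of the procedure uses random symmetric traceless matrices as stand-ins for the DNS fields, and reads $\bm{\mathcal{E}:\mathcal{E}}$ as $\text{Tr}(\bm{\mathcal{E}\cdot\mathcal{E}})$ in the coefficient $b$:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3)); A -= np.trace(A) / 3 * np.eye(3)
S = 0.5 * (A + A.T); R = 0.5 * (A - A.T)
comm = R @ S - S @ R                           # [R, S]

# Stand-in anisotropic pressure Hessian: symmetric, traceless, non-singular.
H = rng.standard_normal((3, 3)); H = 0.5 * (H + H.T)
H -= np.trace(H) / 3 * np.eye(3)

E = np.linalg.solve(H, comm)                   # E = H^{-1} [R, S]
c = np.trace(E)
b = 0.5 * (np.trace(E @ E) - c * c)            # E:E read as Tr(E.E)
a = np.linalg.det(E)
mu = b*b*c*c + 4*b**3 - 4*c**3*a - 27*a*a - 18*a*b*c   # discriminant

# Roots of xi^3 - c xi^2 - b xi - a = 0; keep the real, finite ones.
xi = np.roots([1.0, -c, -b, -a])
xi_rf = xi.real[(np.abs(xi.imag) < 1e-8) & (np.abs(xi.real) > 1e-12)]
assert xi_rf.size >= 1
assert xi_rf.size == (3 if mu > 0 else 1)      # generic cases, mu != 0

# Each gamma = -1/xi renders H + gamma*[R,S] singular (rank two).
for gamma in -1.0 / xi_rf:
    H_star = H + gamma * comm
    assert abs(np.linalg.det(H_star)) < 1e-8 * np.linalg.norm(H_star)**3
```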
This shows that configurations in which a rank-two $\bm{\mathcal{H}}_\gamma$ does not exist, that is, in which the pressure Hessian is intrinsically three-dimensional, may only occur when $a= 0$. Interestingly,
$a=\det[\bm{R},\bm{S}]/\det\bm{\mathcal{H}}$
and, since by hypothesis $\det\bm{\mathcal{H}}\ne 0$, the rank reduction of the anisotropic pressure Hessian may not be performed where $\det[\bm{R},\bm{S}]=0$. The determinant of the commutator is
\begin{equation}
\det[\bm{R},\bm{S}] = \frac{1}{4}(\lambda_2-\lambda_1)(\lambda_3-\lambda_2)(\lambda_1-\lambda_3)
\widetilde{\omega}_1\widetilde{\omega}_2\widetilde{\omega}_3,
\label{eq_detC}
\end{equation}
so that, when one or more of the vorticity components in the strain-rate eigenframe vanish, and/or the strain-rate configuration is axisymmetric, a singular $\bm{\mathcal{H}}_\gamma$ may not exist. However, since $\bm{S}$ and $\bm{\omega}$ have continuous probability distributions, the probability that $\det[\bm{R},\bm{S}] =0$ is in fact zero. Therefore, the rank reduction of $\bm{\mathcal{H}}_\gamma$ should be possible everywhere in the flow.
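Equation \eqref{eq_detC} is easy to verify for an arbitrary traceless matrix. In the \texttt{numpy} sketch below the check compares absolute values, since the overall sign of the right-hand side depends on the handedness and ordering convention adopted for the eigenvectors; the vanishing conditions (axisymmetric strain, or a zero eigenframe vorticity component) are convention independent:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3)); A -= np.trace(A) / 3 * np.eye(3)
S = 0.5 * (A + A.T); R = 0.5 * (A - A.T)
omega = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]])

lam, V = np.linalg.eigh(S)     # eigenvalues in ascending order
om_t = V.T @ omega             # vorticity in the strain-rate eigenframe

det_direct = np.linalg.det(R @ S - S @ R)
det_formula = 0.25 * (lam[1] - lam[0]) * (lam[2] - lam[1]) \
              * (lam[0] - lam[2]) * np.prod(om_t)
# Magnitudes agree; the sign depends on the eigenframe convention.
assert np.isclose(abs(det_direct), abs(det_formula))

# Vanishing case: axisymmetric strain (two equal eigenvalues).
S_axi = np.diag([1.0, 1.0, -2.0])
assert np.isclose(np.linalg.det(R @ S_axi - S_axi @ R), 0.0)
```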
Configurations in which multiple rank-reduced anisotropic pressure Hessians can be defined at the same point, that is, in which there exists more than one real and finite multiplier $\gamma$, admit an additional discrete symmetry which allows different $\bm{\mathcal{H}}_\gamma^*$ to generate the same dynamics of the velocity gradient invariants. We fix this additional gauge by choosing the $\gamma$ that provides the maximum alignment between the intermediate eigenvector of the rank-reduced anisotropic pressure Hessian and the vorticity. As will be shown in \S\ref{Results}, this is justified on the basis of the numerical results, which indicate a marked preferential alignment of the intermediate eigenvector of the rank-reduced anisotropic pressure Hessian with the vorticity.
The rank reduction of the anisotropic pressure Hessian, defined through equation \eqref{eq_gen_evp}, allows for a noticeable reduction of the complexity of the anisotropic pressure Hessian, leading to a better understanding of its dynamical effects. Indeed, the fully three-dimensional anisotropic pressure Hessian is specified by five real numbers, being a symmetric and traceless square matrix of size three. In particular, it takes two numbers to specify the normalized eigenvector $\bm{y}_1$, one additional number for $\bm{y}_2$ (then $\bm{y}_3$ is automatically determined), and two more numbers for the independent eigenvalues $\varphi_1$ and $\varphi_3$ (since $\sum_i\varphi_i=0$). Therefore, the anisotropic pressure Hessian can be written as
\begin{equation}
\bm{\mathcal{H}} = \sum_{i=1}^3\varphi_i\bm{y}_i\bm{y}_i^\top.
\end{equation}
We keep the standard convention $\varphi_1\ge\varphi_2\ge\varphi_3$. On the other hand, the rank-reduced anisotropic pressure Hessian is specified by only four real numbers, since it is a traceless and singular symmetric square matrix of size three. In particular, it takes two numbers to specify the plane orthogonal to the normalized eigenvector $\bm{z}_2$, an additional number to specify the orientation of $\bm{z}_1$ in the plane orthogonal to $\bm{z}_2$ (then $\bm{z}_3$ is determined), and one number for the single independent eigenvalue $\psi$.
Therefore, the rank-reduced anisotropic pressure Hessian can be written as
\begin{equation}
\bm{\mathcal{H}}_\gamma^* = \psi \left(\bm{z}_1\bm{z}_1^\top - \bm{z}_3\bm{z}_3^\top\right)
\end{equation}
since the intermediate eigenvalue is identically zero and the other two satisfy $\psi_1=-\psi_3=\psi$, with $\psi\ge 0$. The rank-reduced anisotropic pressure Hessian lives locally on the plane $\Pi_2$ orthogonal to $\bm{z}_2$, which is the tangent space to a more complex manifold. The tensor $\bm{\mathcal{H}}_\gamma^*$ acts on a generic vector $\bm{q}$ by amplifying its component along $\bm{z}_1$, cancelling its component along $\bm{z}_2$, and amplifying and flipping its component along $\bm{z}_3$. The rank-reduced anisotropic pressure Hessian is thus effective only on the plane $\Pi_2$.
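The four-parameter representation above can be confirmed numerically: eigendecomposing a rank-two $\bm{\mathcal{H}}_\gamma^*$ recovers $\psi$ and the frame $(\bm{z}_1,\bm{z}_2,\bm{z}_3)$. A \texttt{numpy} sketch, with random stand-in fields and one real, finite root of the cubic for $\xi$:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 3)); A -= np.trace(A) / 3 * np.eye(3)
S = 0.5 * (A + A.T); R = 0.5 * (A - A.T)
comm = R @ S - S @ R
H = rng.standard_normal((3, 3)); H = 0.5 * (H + H.T)
H -= np.trace(H) / 3 * np.eye(3)

# One real, finite root of det(H + gamma*[R,S]) = 0.
E = np.linalg.solve(H, comm)
c = np.trace(E); b = 0.5 * (np.trace(E @ E) - c * c); a = np.linalg.det(E)
xi = np.roots([1.0, -c, -b, -a])
xi_rf = xi.real[(np.abs(xi.imag) < 1e-8) & (np.abs(xi.real) > 1e-12)]
gamma = -1.0 / xi_rf[0]
H_star = H + gamma * comm

psi_vals, Z = np.linalg.eigh(H_star)   # ascending: psi_3 <= psi_2 <= psi_1
z3, z2, z1 = Z[:, 0], Z[:, 1], Z[:, 2]
psi = psi_vals[2]

# Tracelessness plus the zero eigenvalue force psi_1 = -psi_3 = psi >= 0.
assert np.isclose(psi_vals[1], 0.0, atol=1e-8 * max(psi, 1.0))
assert np.isclose(psi_vals[0], -psi)
assert np.allclose(H_star, psi * (np.outer(z1, z1) - np.outer(z3, z3)),
                   atol=1e-6 * max(psi, 1.0))
```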
The eigenvalue of the rank-reduced anisotropic pressure Hessian can be related to the full anisotropic pressure Hessian and the vorticity since
$
{\bm{\omega^\top \cdot \bm{\mathcal{H}}\cdot\omega} = \bm{\omega^\top\cdot\bm{\mathcal{H}}}_\gamma\bm{\cdot\omega}}
$
which implies
\begin{equation}
\psi = \frac{\sum_i\varphi_i(\bm{\omega\cdot y}_i)^2}{(\bm{\omega\cdot z}_1)^2-(\bm{\omega\cdot z}_3)^2}.
\label{eq_rel_psi1}
\end{equation}
Moreover, the tensors $\bm{\mathcal{H}}$ and $\bm{\mathcal{H}}_\gamma$ satisfy the relation
$
{\bm{\omega^\top \cdot S\cdot\bm{\mathcal{H}}\cdot\omega} = \bm{\omega^\top \cdot S\cdot\bm{\mathcal{H}}}_\gamma\bm{\cdot\omega}}
$
which yields another equation for the eigenvalue $\psi$,
\begin{equation}
\psi = \frac
{\sum_i\varphi_i\bm{\omega\cdot y}_i(\bm{S\cdot\omega})\bm{\cdot y}_i}
{\bm{\omega\cdot}\left[ \bm{z}_1 (\bm{S\cdot\omega})\bm{\cdot z}_1 - \bm{z}_3 (\bm{S\cdot\omega})\bm{\cdot z}_3\right]}.
\label{eq_rel_psi2}
\end{equation}
Equation \eqref{eq_rel_psi1} shows that a perfect alignment between $\bm{z}_2$ and $\bm{\omega}$ would result in an infinitely large $\psi$, unless the anisotropic pressure Hessian fulfills the condition $\bm{\omega^\top \cdot\bm{\mathcal{H}} \cdot\omega}=0$. For example, such a peculiar configuration occurs when the flow is exactly two-dimensional, for which $\bm{\mathcal{H}}_\gamma^*=\bm{\mathcal{H}}$. In general, a large eigenvalue $\psi$ corresponds to strong alignment between $\bm{z}_2$ and $\bm{\omega}$, as will be discussed in \S\ref{Results}.
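Relation \eqref{eq_rel_psi1} follows from the gauge invariance of $\bm{\omega^\top\cdot\bm{\mathcal{H}}\cdot\omega}$ (since $\bm{\omega^\top\cdot}[\bm{R},\bm{S}]\bm{\cdot\omega}=0$) and can be checked with the same kind of numerical sketch (\texttt{numpy}, random stand-in fields):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((3, 3)); A -= np.trace(A) / 3 * np.eye(3)
S = 0.5 * (A + A.T); R = 0.5 * (A - A.T)
omega = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]])
comm = R @ S - S @ R
H = rng.standard_normal((3, 3)); H = 0.5 * (H + H.T)
H -= np.trace(H) / 3 * np.eye(3)

E = np.linalg.solve(H, comm)
c = np.trace(E); b = 0.5 * (np.trace(E @ E) - c * c); a = np.linalg.det(E)
xi = np.roots([1.0, -c, -b, -a])
xi_rf = xi.real[(np.abs(xi.imag) < 1e-8) & (np.abs(xi.real) > 1e-12)]
H_star = H + (-1.0 / xi_rf[0]) * comm

phi, Y = np.linalg.eigh(H)             # eigenpairs (phi_i, y_i) of H
psi_vals, Z = np.linalg.eigh(H_star)
psi = psi_vals[2]
z1, z3 = Z[:, 2], Z[:, 0]

# omega.[R,S].omega = 0, hence omega.H.omega = omega.H_star.omega,
# which yields the expression for psi.
assert np.isclose(omega @ comm @ omega, 0.0, atol=1e-10)
num = np.sum(phi * (Y.T @ omega) ** 2)          # omega.H.omega
den = (omega @ z1) ** 2 - (omega @ z3) ** 2
assert np.isclose(psi, num / den, rtol=1e-4)
```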
\begin{figure}
\centering
\begin{overpic}[width=.49\textwidth]{{schema_S2}.pdf}
\put(0,75) {$-\left[\bm{S \cdot S} - \frac{1}{3}(\bm{S : S})\mathbf{I}\right]$}
\put(12,30) {$\bm{v}_1$}
\put(80,30) {$\bm{v}_2$}
\put(54,86) {$\bm{v}_3$}
\put(25,84) {(a)}
\end{overpic}
\begin{overpic}[width=.49\textwidth]{{schema_R2}.pdf}
\put(-2,75) {$-\left[\bm{R \cdot R} - \frac{1}{3}(\bm{R : R})\mathbf{I}\right]$}
\put(80,46) {$\Pi_{\bm{\omega}}$}
\put(54,86) {$\bm{\omega}$}
\put(25,84) {(b)}
\end{overpic}
\vfill
\begin{overpic}[width=.49\textwidth]{{schema_Hg}.pdf}
\put(30,75) {$-\bm{\mathcal{H}}_\gamma^*$}
\put(80,46) {$\Pi_2$}
\put(12,30) {$\bm{z}_3$}
\put(54,86) {$\bm{z}_2$}
\put(80,30) {$\bm{z}_1$}
\put(25,84) {(c)}
\end{overpic}
\begin{overpic}[width=.49\textwidth]{{schema_zv}.pdf}
\put(54,78) {$\bm{\omega}$}
\put(34,40) {$\bm{z}_3$}
\put(64,40) {$\bm{z}_1$}
\put(68,50) {$\bm{v}_1$}
\put(40,30) {$\bm{v}_3$}
\put(25,84) {(d)}
\end{overpic}
\caption{Schematic representation of the contributions of the terms on the right-hand side of equation \eqref{eq_lambda_iso}. (a) Strain term
$-\left[\bm{S \cdot S} - \mathbf{I}(\bm{S : S})/3\right]$
for the typical configuration $\lambda_1 =\lambda_2=-\lambda_3/2$. (b) Rotation term
$-\left[\bm{R \cdot R} - \mathbf{I}(\bm{R : R})/3\right]$
which isotropically produces stretching rate along the plane orthogonal to $\bm{\omega}$ and a compression parallel to $\bm{\omega}$. (c) Rank-reduced anisotropic pressure Hessian
$-\bm{\mathcal{H}}_\gamma^*$
which produces straining along the $\bm{z}_3$ direction, and hinders it along the $\bm{z}_1$ direction. (d) Typical configuration for the relative orientation of strain-rate eigenframe, vorticity and rank-reduced anisotropic pressure Hessian eigenframe. }
\label{fig_scheme}
\end{figure}
This rank reduction brings two-dimensional features into three-dimensional flows, and it is interesting to note that the equations for the velocity gradient already contain another two-dimensional flow feature. In particular, in equation \eqref{eq_lambda} the term $(\omega^2 - \widetilde{\omega}_i^2)/4$ arises from the eigenframe representation of $\bm{R\cdot R}=-\omega^2\bm{P_\omega}/4$, where $\bm{P_\omega}$ is the projection tensor onto the plane $\Pi_{\bm{\omega}}$ orthogonal to the vorticity vector $\bm{\omega}$. This term describes the straining motion in the plane orthogonal to $\bm{\omega}$ that is associated with the centrifugal force produced by the spinning of the fluid particle about its vorticity axis. As we will discuss later, this two-dimensional effect can be compared with the two-dimensional effect of $\bm{\mathcal{H}}_\gamma^*$ on the velocity gradient evolution, leading to interesting insights into their respective dynamical roles. Moreover, $\bm{\mathcal{H}}_\gamma^*$ is a two-dimensional object in a three-dimensional space, which opens the possibility of comparing pressure Hessian statistics between two-dimensional and three-dimensional flows. However, the tangent space to the manifold defined by $\bm{\mathcal{H}}_\gamma^*$ varies in space and time; therefore, the flow on $\Pi_2$ cannot be directly compared with Euclidean two-dimensional turbulence, but rather with flows in more complex geometries \citep{Falkovich2014}.
Using the dynamical equivalence of $\bm{\mathcal{H}}$ and $\bm{\mathcal{H}}_\gamma^*$, we may rewrite the equation governing $\lambda_i$ as (ignoring the viscous term)
\begin{equation}
D_t{\lambda_{i}} = -\left(\lambda^2_{i}-\frac{1}{3}\sum_j\lambda_j^2\right) - \frac{1}{4}\left( \widetilde{\omega}_i^2 - \frac{1}{3}\sum_j\widetilde{\omega}_j^2\right) -\widetilde{\mathcal{H}}^*_{\gamma,i(i)},
\label{eq_lambda_iso}
\end{equation}
and in figure \ref{fig_scheme} we provide a schematic to illustrate the role of each of the terms on the right hand side of \eqref{eq_lambda_iso}.
\section{Numerical results: rank-reduced anisotropic pressure Hessian}
\label{Results}
We now turn to assess the properties of $\bm{\mathcal{H}}_\gamma^*$. We do this using data from a Direct Numerical Simulation (DNS) of statistically stationary, isotropic turbulence. The DNS data used are those by \citet{Ireland2016a,Ireland2016b}, at a Taylor microscale Reynolds number $R_\lambda=597$. The data have been obtained through a pseudo-spectral method to solve the incompressible NSE on a three-dimensional, triperiodic cube discretized with $2048^3$ grid points. A deterministic forcing method that preserves the kinetic energy in the flow has been employed. A detailed description of the numerical method used can be found in \citet{Ireland2013}.
\subsection{Pressure Hessian rank reduction}
We first consider the properties of $\gamma$ as determined by the numerical solution of equation \eqref{eq_psi} with $\gamma\equiv -1/\xi_{RF}$ real and finite. At each grid point we solve the generalized eigenvalue problem \eqref{eq_gen_evp} to determine real and finite multipliers $\gamma$ for which $\bm{\mathcal{H}}_\gamma^*$ is singular.
The numerical solution of equation \eqref{eq_psi} is ill-conditioned when $\det[\bm{R},\bm{S}]$ is very small. Therefore, we skip the grid points at which $|\det[\bm{R},\bm{S}]|$ is less than a predefined numerical tolerance. We confirmed, however, that the results are only weakly sensitive to this small tolerance value.
Figure~\ref{fig_pdf_mu_gamma} shows the probability of the multiplicity of real and finite values of $\gamma$ obtained by solving \eqref{eq_psi}. The statistics are constructed by averaging over space and time; a total of ten snapshots spanning six eddy turnover times has been used.
The rank-reduced anisotropic pressure Hessian exists at the vast majority of the grid points: the configuration with no real and finite multipliers is observed at only about $0.1\%$ of the grid points and corresponds to very small $\det[\bm{R},\bm{S}]$. The most common case ($\sim60\%$ of the grid points) corresponds to three real and finite roots $\xi_{RF}$, and thus three real and finite multipliers $\gamma$. Therefore, in addition to the continuous symmetry which allows one to map $\bm{\mathcal{H}}$ into $\bm{\mathcal{H}}_\gamma^*$, there is a discrete symmetry, which allows for three dynamically equivalent pressure Hessians that generate the same dynamics of the velocity gradient invariants.
The next most common case ($\sim40\%$ of the grid points) is a single real and finite root $\xi_{RF}$ and so a single $\gamma$ and a single rank-two $\bm{\mathcal{H}}_\gamma^*$.
The case with two real and finite roots (and the third root asymptotically small compared with these) is rare ($\sim0.15\%$ of the grid points) and corresponds to $\det[\bm{R},\bm{S}]$ close to zero.
In the configurations in which there exist multiple $\gamma$'s, the multiplier which gives the highest alignment between the vorticity vector and the intermediate eigenvector of the rank-reduced anisotropic pressure Hessian is selected. Indeed, that preferential alignment is a clear feature of the rank-reduced anisotropic pressure Hessian, as we will see below.
\begin{figure}
\centering
\begin{overpic}[width=.49\textwidth]{{count_N_gamma}.pdf}
\put(48,5){{\large $\mu\of{\xi_{RF}}$}}
\put(3,36){\large \rotatebox{90}{Probability}}
\end{overpic}
\caption{Probability of multiplicity of real and finite roots of equation \eqref{eq_psi}.}
\label{fig_pdf_mu_gamma}
\end{figure}
The probability density function (PDF) of the multiplier $\gamma$ for which $\bm{\mathcal{H}}_\gamma$ has rank two is shown in figure~\ref{fig_pdf_gamma}(a). The PDF is highly non-Gaussian and the multiplier can be very large, albeit with small probability. This is due to the intermittency of the velocity field, that is, to large values of the coefficients of equation \eqref{eq_psi}, and also to the high probability of small $\det[\bm{R},\bm{S}]$. Indeed, in that case the matrix used for the reduction, $[\bm{R},\bm{S}]$, spans the whole three-dimensional space but has a very small eigenvalue in a certain eigendirection. As a consequence, the multiplier $\gamma$ must be large enough to compensate for the component of $\bm{\mathcal{H}}$ in that eigendirection, which can be large. The probability density function of $\det[\bm{R},\bm{S}]$ is shown in figure \ref{fig_pdf_gamma}(b). The results show that $\det[\bm{R},\bm{S}]$ is highly intermittent, being small throughout the vast majority of the flow, but exhibiting extreme fluctuations in very small regions. This can be understood in terms of the fact that, according to equation \eqref{eq_detC}, $\det[\bm{R},\bm{S}]$ is a high-order moment of the velocity gradient field. Moreover, the tendency for small values of $\det[\bm{R},\bm{S}]$ can also be understood in terms of the well-known fact that $\bm{\omega}$ tends to misalign with $\bm{v}_3$ \citep{Meneveau2011}, leading to small values of $\widetilde{\omega}_3$ and therefore to small values of $\det \left[\bm{R},\bm{S}\right ]$ via equation \eqref{eq_detC}.
We now investigate the flow features conditioned on $\det \left[\bm{R},\bm{S}\right ]$.
The high probability of observing small $\det \left[\bm{R},\bm{S}\right ]$ is consistent with the averages of the strain and rotation magnitudes conditioned on the local value of $\det \left[\bm{R},\bm{S}\right ]$, the results for which are shown in figure \ref{fig_pdf_gamma}(c).
The values of $\tau_\eta^2\|\bm{S}\|^2$ and $\tau_\eta^2\|\bm{R}\|^2$ when $\det[\bm{R},\bm{S}]\to 0$, where $\tau_\eta$ is the Kolmogorov timescale, are both slightly less than $1/2$, which is precisely the value of the unconditioned averages $\tau_\eta^2\langle\|\bm{S}\|^2\rangle=\tau_\eta^2\langle\|\bm{R}\|^2\rangle$ in isotropic turbulence. For larger values of $\det \left[\bm{R},\bm{S}\right ]$, $\|\bm{R}\|^2$ has a well-defined power-law scaling, $\|\bm{R}\|^2\sim\left|\det[\bm{R},\bm{S}]\right|^{1/3}$, as shown in the inset of figure \ref{fig_pdf_gamma}(c). The power-law exponent is consistent with simple dimensional analysis. On the other hand, while $\|\bm{S}\|^2$ also depends on $\det[\bm{R},\bm{S}]$ as a power law, the exponent is less than $1/3$ and cannot be predicted by simple dimensional analysis. This is somewhat reminiscent of the results in \cite{Buaria2019} for $\langle \|\bm{R}\|^2\big\vert\|\bm{S}\|^2\rangle$ and $\langle \|\bm{S}\|^2\big\vert\|\bm{R}\|^2\rangle$, where it was found that the former is well described by dimensional analysis (i.e.\ by Kolmogorov's 1941 theory, see \cite{pope}), while the latter is not.
The average of the second invariant of the velocity gradient tensor $Q$ conditioned on the local value of $\det[\bm{R},\bm{S}]$ is shown in figure~\ref{fig_pdf_gamma}(d). Interestingly, the region where $\det[\bm{R},\bm{S}]$ is small is slightly strain dominated (i.e.\ $Q<0$). On the other hand, in the regions where $|\det[\bm{R},\bm{S}]|$ is relatively large, the dynamics is clearly rotation-dominated. When the conditioned average of $Q$ is weighted with the PDF of $\det[\bm{R},\bm{S}]$ it must yield $\avg{Q}=0$ for isotropic turbulence, which indicates the very large relative weight of the flow regions in which $\langle Q\big\vert \det[\bm{R},\bm{S}]\rangle$ is negative and very small.
\begin{figure}
\begin{overpic}[width=.49\textwidth]{{pdf_gamma}.pdf}
\put(56,5){{\large $\gamma$}}
\put(2,44){{\large\rotatebox{90}{PDF}}}
\put(22,75) {(a)}
\end{overpic}
\hfill
\begin{overpic}[width=.49\textwidth]{{pdf_detC.two_axes_red}.pdf}
\put(45,3){\large$\tau_\eta^2\det[\bm{R},\bm{S}]$}
\put(2,44){\large\rotatebox{90}{PDF}}
\put(28,74) {(b)}
\end{overpic}
\begin{overpic}[width=.49\textwidth]{{en_cond_detC}.pdf}
\put(64,75) {$\|\bm{S}\|^2$}
\put(64,68.6){$\|\bm{R}\|^2$}
\put(40,75) {$1/3$}
\put(22,45){\includegraphics[width=.21\textwidth]{{en_cond_detC_log}.pdf}}
\put(46,3) {\large$\tau_\eta^6\det[\bm{R},\bm{S}]$}
\put(2,30){\rotatebox{90}{\large$\tau_\eta^2\avg{q\big|\det[\bm{R},\bm{S}]}$}}
\put(22,75) {(c)}
\end{overpic}
\hfill
\begin{overpic}[width=.49\textwidth]{{Q_cond_detC}.pdf}
\put(46,3) {\large$\tau_\eta^6\det[\bm{R},\bm{S}]$}
\put(2,30){\rotatebox{90}{\large$\tau_\eta^2\avg{Q\big|\det[\bm{R},\bm{S}]}$}}
\put(30,76) {(d)}
\end{overpic}
\caption{(a) Probability density function (PDF) of the real and finite multiplier $\gamma=-1/\xi_{RF}$. (b) PDF of the determinant of the commutator of anti-symmetric and symmetric part of the velocity gradient, $\det[\bm{R},\bm{S}]$,
the blue curve refers to the blue labels and represents the same PDF over a smaller range.
(c) Strain magnitude $\|\bm{S}\|^2$ and rotation magnitude $\|\bm{R}\|^2$ conditioned on $\det[\bm{R},\bm{S}]$, the same plot in logarithmic scale is in the inset.
(d) Second invariant of the velocity gradient tensor $Q$ conditioned on $\det[\bm{R},\bm{S}]$.
}
\label{fig_pdf_gamma}
\end{figure}
\subsection{Rank-reduced anisotropic pressure Hessian eigenvalue}
\begin{figure}
\begin{overpic}[width=.49\textwidth]{{pdf_phi}.pdf}
\put(70,75){$\varphi_1$}
\put(70,69){$\varphi_2$}
\put(70,62){$\varphi_3$}
\put(53,4) {\large$\tau_\eta^2\varphi_i$}
\put(3,44) {\large\rotatebox{90}{PDF}}
\put(25,75) {(a)}
\end{overpic}
\hfill
\begin{overpic}[width=.49\textwidth]{{pdf_ephi}.pdf}
\put(70,75){$\psi_1$}
\put(70,69){$\psi_2$}
\put(70,62){$\psi_3$}
\put(53,4) {\large$\tau_\eta^2\psi_i$}
\put(3,44) {\large\rotatebox{90}{PDF}}
\put(25,75) {(b)}
\end{overpic}
\vfill
\begin{overpic}[width=.49\textwidth]{{phi_cond_lam2}.pdf}
\put(39,75){$q=\varphi$}
\put(47.5,68){$\psi$}
\put(74,28){$\sim\|\bm{S}\|^2$}
\put(74,53){$\sim\|\bm{S}\|^{8/3}$}
\put(50,3) {\large$\tau_\eta^2\|\bm{S}\|^2$}
\put(1,36) {\large\rotatebox{90}{$\tau_\eta^2\avg{q \big| \|\bm{S}\|^2}$}}
\put(25,75) {(c)}
\end{overpic}
\hfill
\begin{overpic}[width=.49\textwidth]{{phi_cond_omg2}.pdf}
\put(39,75){$q=\varphi$}
\put(47.5,68){$\psi$}
\put(74,30){$\sim\|\bm{R}\|^2$}
\put(74,55){$\sim\|\bm{R}\|^{8/3}$}
\put(50,3) {\large$\tau_\eta^2\|\bm{R}\|^2$}
\put(1,36) {\large\rotatebox{90}{$\tau_\eta^2\avg{q \big| \|\bm{R}\|^2}$}}
\put(25,75) {(d)}
\end{overpic}
\caption{Probability density function of the eigenvalues of (a) $\bm{\mathcal{H}}$ and (b) eigenvalues of $\bm{\mathcal{H}}_\gamma^*$, normalized with the Kolmogorov timescale $\tau_\eta$. (c) Magnitude of the anisotropic pressure Hessian eigenvalues, $\varphi=\sqrt{\sum_i\varphi_i^2}$, and rank-reduced anisotropic pressure Hessian eigenvalue, $\psi$, conditioned on the local strain-rate magnitude and (d) on the rotation-rate magnitude.}
\label{Hess_eigenvalues}
\end{figure}
The rank-reduction of the anisotropic pressure Hessian corresponds to setting its intermediate eigenvalue to zero by means of the gauge term $\gamma[\bm{R},\bm{S}]$. Since the anisotropic pressure Hessian is traceless by definition, $\text{\textrm{Tr}}(\bm{\mathcal{H}})=0$, it has in general two non-zero principal invariants. On the other hand, the rank-reduced anisotropic pressure Hessian has only one non-zero principal invariant, namely $\text{\textrm{Tr}}((\bm{\mathcal{H}}_\gamma^*)^2)$, since $\det(\bm{\mathcal{H}}_\gamma^*)=0$.
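As a numerical illustration of this construction (with random traceless symmetric matrices standing in for $\bm{\mathcal{H}}$ and $[\bm{R},\bm{S}]$, not DNS fields): since $\det(\bm{\mathcal{H}}+\gamma[\bm{R},\bm{S}])$ is a cubic polynomial in $\gamma$, a real root always exists, and the resulting tensor has eigenvalues of the form $(\psi,0,-\psi)$.

```python
import numpy as np

rng = np.random.default_rng(1)

def traceless_symmetric(rng):
    """Random traceless symmetric 3x3 matrix (illustrative stand-in)."""
    M = rng.standard_normal((3, 3))
    M = (M + M.T) / 2
    return M - np.trace(M) / 3 * np.eye(3)

H = traceless_symmetric(rng)   # stand-in for the anisotropic pressure Hessian
C = traceless_symmetric(rng)   # stand-in for the gauge term [R, S]

# det(H + g*C) is a cubic in g: recover its coefficients exactly from four
# sample points, then pick a real root (one always exists for odd degree).
g_samples = np.array([-1.0, 0.0, 1.0, 2.0])
d_samples = [np.linalg.det(H + g * C) for g in g_samples]
coeffs = np.polyfit(g_samples, d_samples, 3)
roots = np.roots(coeffs)
gamma = roots[np.isreal(roots)].real[0]

H_g = H + gamma * C                 # rank-reduced tensor
w = np.sort(np.linalg.eigvalsh(H_g))
# Traceless with zero determinant: eigenvalues are (-psi, 0, psi).
assert abs(w[1]) < 1e-6
assert abs(w[0] + w[2]) < 1e-6
```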
Figures~\ref{Hess_eigenvalues}(a,b) show that whereas $\bm{\mathcal{H}}$ is in general a fully three-dimensional object with three non-zero eigenvalues $\varphi_i$ that satisfy $\sum_{i=1}^3\varphi_i=0$, $\bm{\mathcal{H}}_\gamma^*$ is a two-dimensional object with only two active eigenvalues that satisfy $\psi_1=-\psi_3=\psi$, the intermediate eigenvalue being identically zero, $\psi_2=0$. Note that here and throughout, all eigenvectors are unitary, and are ordered according to their corresponding eigenvalues, such that $\varphi_1 \ge \varphi_2 \ge \varphi_3$.
The distributions of the eigenvalues $\varphi_1\ge0$ and $\varphi_3\le 0$ of the anisotropic pressure Hessian display marked tails and are almost symmetric with respect to each other. On the contrary, the distribution of $\varphi_2$ has moderate tails and is positively skewed.
The eigenvalue of the rank-reduced anisotropic pressure Hessian, $\psi$, exhibits very large fluctuations. Its distribution has wide tails which show that $\psi$, albeit with small probability, can take extremely large values. This is in part due to the strong intermittency of the flow, giving rise to large values of $[\bm{R},\bm{S}]$ and $\gamma$ (although with small probability). Therefore, the geometrical simplification obtained by replacing the three-dimensional $\bm{\mathcal{H}}$ with the two-dimensional $\bm{\mathcal{H}}_\gamma^*$ also comes at the cost that the eigenvalue of $\bm{\mathcal{H}}_\gamma^*$ is far more intermittent than those of $\bm{\mathcal{H}}$.
The large values observed for $\psi$ are also closely related to the dimensionality reduction. In order to investigate this point we condition the eigenvalues of $\bm{\mathcal{H}}$ and $\bm{\mathcal{H}}_\gamma^*$ on the magnitude of the local strain and vorticity $\|\bm{S}\|^2$ and $\|\bm{R}\|^2$. For the anisotropic pressure Hessian we define
$\varphi = \sqrt{\sum_i\varphi_i^2}$ and compute the conditional averages
$\avg{\varphi \big| \|\bm{S}\|^2}$ and
$\avg{\varphi \big\vert \|\bm{R}\|^2 }$.
Similarly, for the rank-reduced anisotropic pressure Hessian we look at
$\avg{\psi \big\vert\|\bm{S}\|^2 }$ and $\avg{\psi \big\vert\|\bm{R}\|^2}$.
The results from the DNS are shown in figures~\ref{Hess_eigenvalues}(c,d). The results reveal a simple scaling $\avg{\varphi \big\vert \|\bm{S}\|^2}\sim\|\bm{S}\|^2$, as dimensional analysis suggests. This lends support to the model in \cite{Wilczek2014}, in which the pressure Hessian is a linear combination of $\bm{S}^2$, $\bm{R}^2$ and $[\bm{R},\bm{S}]$. The scaling $\avg{\varphi \big\vert \|\bm{S}\|^2}\sim\|\bm{S}\|^2$ is especially evident for large values of $\|\bm{S}\|^2$. This may reflect the idea that during large fluctuations, the lengthscale associated with $\bm{S}$ is smaller than in situations where $\bm{S}$ is small or moderate. If true, then the pressure Hessian is more localized during large fluctuations, giving rise to the scaling $\avg{\varphi \big\vert \|\bm{S}\|^2}\sim\|\bm{S}\|^2$ that reflects a local relationship between $\varphi$ and $\|\bm{S}\|^2$. On the other hand, for the rank-reduced anisotropic pressure Hessian eigenvalue we find $\avg{\psi\big\vert\|\bm{S}\|^2}\sim\|\bm{S}\|^{2\zeta}$ with $\zeta>1$ (in particular $\zeta$ between $4/3$ and $5/4$). Nevertheless, $\avg{\psi\big\vert\|\bm{S}\|^2}$ maintains a well-defined power-law trend, which has positive implications for modelling the anisotropic pressure Hessian using information inferred from the rank-reduced anisotropic pressure Hessian.
Due to the higher exponent, $\psi$ is on average much larger than $\varphi$ at fixed velocity gradient magnitude, especially when large gradients occur.
The scaling of the eigenvalues magnitude conditioned on $\|\bm{R}\|^2$ is very similar to the scaling of the same quantity conditioned on $\|\bm{S}\|^2$ for both $\bm{\mathcal{H}}$ and $\bm{\mathcal{H}}_\gamma^*$.
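Conditional averages of this kind are estimated from field samples by binning the conditioning variable; the following schematic sketch uses synthetic samples in place of the DNS fields, and the exponent $1.3$ below is fabricated purely for illustration, not a DNS value.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for per-grid-point samples of ||S||^2 and psi;
# the power 1.3 is an arbitrary illustrative choice.
s2 = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
psi = s2 ** 1.3 * rng.lognormal(sigma=0.3, size=s2.size)

# <psi | ||S||^2>: average psi within logarithmic bins of ||S||^2
edges = np.logspace(np.log10(s2.min()), np.log10(s2.max()), 30)
labels = np.digitize(s2, edges)
cond_avg = np.array([psi[labels == b].mean()
                     for b in range(1, len(edges)) if np.any(labels == b)])

# The conditional average grows with the conditioning variable (here by
# construction); in the DNS, the log-log slope gives the scaling exponent.
assert cond_avg.size > 0 and cond_avg[-1] > cond_avg[0]
```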
The different scaling of $\psi$ and $\varphi$ with respect to the velocity gradient magnitude can be deduced from equation \eqref{eq_rel_psi1}. Indeed, the denominator in equation \eqref{eq_rel_psi1} can be very small since the vorticity tends to align with $\bm{z}_2$, which, as we will see in the next section, induces large values of $\psi$. This is due to the constraint $\bm{\omega^\top\cdot\bm{\mathcal{H}}\cdot\omega}=\bm{\omega^\top\cdot\bm{\mathcal{H}}}_\gamma^*\bm{\cdot\omega}$.
From the viewpoint of dimensionality, the rank-reduced anisotropic pressure Hessian is a two-dimensional tensor which has to produce the same effect on the velocity gradient invariants as a fully three-dimensional tensor due to the gauge symmetry. Therefore the geometrical scaling of $\bm{\mathcal{H}}_\gamma^*$ is likely to differ from the scaling of $\bm{\mathcal{H}}$, which can span the whole three-dimensional embedding space.
In figure \ref{eigen_QR} we plot the conditioned averages $\langle\varphi\big\vert R,Q\rangle$ and $\langle\psi\big\vert R,Q\rangle$. The results show that $\langle\varphi\big\vert R,Q\rangle$ is quite large everywhere except for small $R,Q$, and its shape shares similarities with the sheared-drop shape of the joint PDF of the invariants $R,Q$ shown in figure \ref{fig_vorticity_S_alignment_RQ}(d). In contrast, $\langle\psi\big\vert R,Q\rangle$ is largest in the quadrants $Q>0,R<0$ and $Q<0,R>0$ (especially below the right Vieillefosse tail), corresponding to regions of enstrophy and strain production. Therefore, it is not only that the magnitudes of $\bm{\mathcal{H}}$ and $\bm{\mathcal{H}}_\gamma^*$ differ significantly, but also that they are most active in different regions of the flow. Indeed, $\bm{\mathcal{H}}_\gamma^*$ is most active in the regions where the velocity gradients are also most active, while $\bm{\mathcal{H}}$ is active and strong in many regions where the velocity gradients display relatively little activity (e.g.\ the quadrant $Q<0$, $R<0$). In this sense, one might say that $\bm{\mathcal{H}}_\gamma^*$ is more closely tied to the dynamics of the velocity gradients than $\bm{\mathcal{H}}$.
\begin{figure}
\includegraphics[width=1.\textwidth]{compr_fig_5-crop.pdf}
\caption{Results for (a) $\langle\varphi\big\vert R,Q\rangle$, where $\varphi=\sqrt{\sum_i\varphi_i^2}$, and (b) $\langle\psi\big\vert R,Q\rangle$ as functions of $R,Q$. Colors denote the magnitude of the terms, and black lines denote the Vieillefosse
tails.
}
\label{eigen_QR}
\end{figure}
\section{Numerical results: statistical geometry}
\begin{figure}
\begin{overpic}[width=.49\textwidth]{{pdf_omg_orient_H}.pdf}
\put(56,75) {$\bm{ \hat{\omega} \cdot y}_1$}
\put(56,68.6){$\bm{ \hat{\omega} \cdot y}_2$}
\put(56,61.8){$\bm{ \hat{\omega} \cdot y}_3$}
\put(52,3){\large$\cos\theta$}
\put(3,45){\rotatebox{90}{\large PDF}}
\put(25,75) {(a)}
\end{overpic}
\hfill
\begin{overpic}[width=.49\textwidth]{{pdf_omg_orient_eH}.pdf}
\put(56,75) {$\bm{ \hat{\omega} \cdot z}_1$}
\put(56,68.6){$\bm{ \hat{\omega} \cdot z}_2$}
\put(56,61.8){$\bm{ \hat{\omega} \cdot z}_3$}
\put(52,3){\large$\cos\theta$}
\put(3,45){\rotatebox{90}{\large PDF}}
\put(25,75) {(b)}
\end{overpic}
\caption{PDF of the orientation between the vorticity vector and (a) the eigenframe of the pressure Hessian, (b) the eigenframe of the rank-reduced anisotropic pressure Hessian. The alignment is expressed by the inner product between the normalized vorticity $\hat{\bm{\omega}}\equiv\bm{\omega}/\|\bm{\omega}\|$ and the normalized eigenvectors of $\bm{\mathcal{H}}$ ($\bm{y}_i$) and $\bm{\mathcal{H}}_\gamma^*$ ($\bm{z}_i$).}
\label{fig_omgH_align}
\end{figure}
\begin{figure}
\begin{overpic}[width=.49\textwidth]{{pdf_w1_orient_S}.pdf}
\put(36,75) {$\bm{y}_1\bm{\cdot}\bm{v}_1$}
\put(36,68.6){$\bm{y}_1\bm{\cdot}\bm{v}_2$}
\put(36,61.8){$\bm{y}_1\bm{\cdot}\bm{v}_3$}
\put(52,3){\large$\cos\theta$}
\put(3,45){\large\rotatebox{90}{PDF}}
\put(25,75) {(a)}
\end{overpic}
\hfill
\begin{overpic}[width=.49\textwidth]{{pdf_ew1_orient_S}.pdf}
\put(36,75) {$\bm{z}_1\bm{\cdot}\bm{v}_1$}
\put(36,68.6){$\bm{z}_1\bm{\cdot}\bm{v}_2$}
\put(36,61.8){$\bm{z}_1\bm{\cdot}\bm{v}_3$}
\put(52,3){\large$\cos\theta$}
\put(3,45){\large\rotatebox{90}{PDF}}
\put(25,75) {(b)}
\end{overpic}
\vfill
\begin{overpic}[width=.49\textwidth]{{pdf_w2_orient_S}.pdf}
\put(36,75) {$\bm{y}_2\bm{\cdot}\bm{v}_1$}
\put(36,68.6){$\bm{y}_2\bm{\cdot}\bm{v}_2$}
\put(36,61.8){$\bm{y}_2\bm{\cdot}\bm{v}_3$}
\put(52,3){\large$\cos\theta$}
\put(3,45){\large\rotatebox{90}{PDF}}
\put(25,75) {(c)}
\end{overpic}
\hfill
\begin{overpic}[width=.49\textwidth]{{pdf_ew2_orient_S}.pdf}
\put(36,75) {$\bm{z}_2\bm{\cdot}\bm{v}_1$}
\put(36,68.6){$\bm{z}_2\bm{\cdot}\bm{v}_2$}
\put(36,61.8){$\bm{z}_2\bm{\cdot}\bm{v}_3$}
\put(52,3){\large$\cos\theta$}
\put(3,45){\large\rotatebox{90}{PDF}}
\put(25,75) {(d)}
\end{overpic}
\vfill
\begin{overpic}[width=.49\textwidth]{{pdf_w3_orient_S}.pdf}
\put(36,75) {$\bm{y}_3\bm{\cdot}\bm{v}_1$}
\put(36,68.6){$\bm{y}_3\bm{\cdot}\bm{v}_2$}
\put(36,61.8){$\bm{y}_3\bm{\cdot}\bm{v}_3$}
\put(52,3){\large$\cos\theta$}
\put(3,45){\large\rotatebox{90}{PDF}}
\put(25,75) {(e)}
\end{overpic}
\hfill
\begin{overpic}[width=.49\textwidth]{{pdf_ew3_orient_S}.pdf}
\put(36,75) {$\bm{z}_3\bm{\cdot}\bm{v}_1$}
\put(36,68.6){$\bm{z}_3\bm{\cdot}\bm{v}_2$}
\put(36,61.8){$\bm{z}_3\bm{\cdot}\bm{v}_3$}
\put(52,3){\large$\cos\theta$}
\put(3,45){\large\rotatebox{90}{PDF}}
\put(25,75) {(f)}
\end{overpic}
\caption{PDF of the relative orientation between the pressure Hessian eigenframe and the strain eigenframe (a-c-e) and of the relative orientation between the rank-reduced anisotropic pressure Hessian eigenframe and the strain eigenframe (b-d-f). The orientation is expressed by the inner product of the eigenvectors of the strain-rate tensor $\bm{v}_i$ with the eigenvectors of $\bm{\mathcal{H}}$ ($\bm{y}_i$) and the eigenvectors of $\bm{\mathcal{H}}_\gamma^*$ ($\bm{z}_i$).}
\label{fig_HS_align}
\end{figure}
We now turn to consider the statistical geometry of the system.
In figure~\ref{fig_omgH_align} we consider the alignment between the vorticity $\bm{\omega}$ and the eigenframes of $\bm{\mathcal{H}}$ and $\bm{\mathcal{H}}_\gamma^*$.
While there is a strong preferential statistical alignment of the intermediate strain-rate eigenvector $\bm{v}_2$ with $\bm{\omega}$ \citep{Meneveau2011}, the preferential statistical alignment between $\bm{\omega}$ and the pressure Hessian eigenvectors $\bm{y}_i$ is very weak. There is only a moderate tendency for alignment between $\bm{y}_{2,3}$ and $\bm{\omega}$ \citep{Chevillard2008}. This constitutes an obstacle to understanding the role of the anisotropic pressure Hessian in turbulence.
On the other hand, the results in figure~\ref{fig_omgH_align} show a striking alignment between $\bm{\omega}$ and the rank-reduced anisotropic pressure Hessian eigenvectors $\bm{z}_i$. Indeed, there is a remarkable tendency for $\bm{z}_2$ to align with $\bm{\omega}$, which is consistent with the preferential alignment between $\bm{v}_2$ and $\bm{\omega}$ and between $\bm{z}_2$ and $\bm{v}_2$ (figure \ref{fig_HS_align}). As discussed in \S\ref{RR}, the contribution of the vorticity and rank-reduced anisotropic pressure Hessian to the straining motion in the fluid is confined to planes. In particular, the straining associated with the centrifugal force produced by the spinning of the fluid particle about the vorticity axis acts in the plane $\Pi_{\bm{\omega}}$, orthogonal to $\bm{\omega}$, while the contribution from the rank-reduced anisotropic pressure Hessian lies on the plane $\Pi_2$, orthogonal to its intermediate eigenvector $\bm{z}_2$. The results shown in figure \ref{fig_omgH_align}(b) indicate that these two planes tend to almost coincide.
However, the effects of $\bm{\omega}$ and $\bm{\mathcal{H}}_\gamma^*$ on the strain-rate dynamics are radically different.
The rotation of the fluid element generates a stretching rate of magnitude $\omega^2/4$ on the plane $\Pi_{\bm{\omega}}$ and its contribution is isotropic, since the eigenvalue of the projection tensor $\bm{P_\omega}$ is the same for all the eigenvectors that belong to the plane $\Pi_{\bm{\omega}}$, as in figure \ref{fig_scheme}(b).
On the other hand, the rank-reduced anisotropic pressure Hessian causes a stretching rate of magnitude $\psi$ in direction $\bm{z}_3$ and an equal and opposite compression in the direction $\bm{z}_1$, orthogonal to $\bm{z}_3$, as in figure \ref{fig_scheme}(c). This results in a marked anisotropy of the effect of $\bm{\mathcal{H}}_\gamma^*$ on the plane $\Pi_2$.
Since the planes $\Pi_{\bm{\omega}}$ and $\Pi_2$ tend to be almost parallel, the anisotropic pressure Hessian can be understood as supplying the anisotropy that the centrifugal forces produced by the vorticity lack in the $\Pi_{\bm{\omega}}$ plane, and this anisotropy is a key element in the prevention of the blow-up of the system.
Interestingly, the gauge term used in defining $\bm{\mathcal{H}}_\gamma^*$, equation \eqref{eq_H_gamma}, arises from a rotation of the strain-rate eigenframe about $\bm{\omega}$ and the results show that $\bm{\mathcal{H}}_\gamma^*$ lives on a two-dimensional manifold that statistically has a strong, but imperfect tendency to be orthogonal to $\bm{\omega}$. The dynamical significance of the slight misalignment is that it allows the anisotropic pressure Hessian to contribute to the eigenframe dynamics.
Indeed, if $\bm{\mathcal{H}}_\gamma^*$ were exactly orthogonal to $\bm{\omega}$, then the anisotropic pressure Hessian would make no direct contribution to the vorticity dynamics, and its only role would be to contribute to the strain-rate dynamics, described by equation \eqref{eq_lambda} and \eqref{eq_alg}.
It is known that in the inviscid case, the neglect of the anisotropic pressure Hessian in the eigenframe dynamics leads to a finite-time singularity \citep{Vieillefosse1982}. Therefore, assuming that the slight misalignment between $\bm{\mathcal{H}}_\gamma^*$ and $\bm{\omega}$ is not solely due to viscous effects, this misalignment must also play a role in regularizing the eigenframe dynamics, thus preventing the onset of singularities in the inviscid Euler system.
Figures~\ref{fig_HS_align}(a-c-e) present the statistical alignment of the eigenvectors $\bm{y}_i$ of $\bm{\mathcal{H}}$ with the strain-rate eigenvectors $\bm{v}_j$. The alignments between the pressure Hessian eigenframe and the strain-rate eigenframe do not reveal any strong preferences, with only weak tendencies towards $\bm{y}_1\bm{\cdot}\bm{v}_1\approx 0.71$ and $\bm{y}_{1,3}\bm{\cdot}\bm{v}_3\approx 0.71$. Therefore, there is a very mild tendency for $\bm{y}_1$ to form a $\pi/4$ angle with $\bm{v}_1$ and $\bm{v}_3$, and for $\bm{y}_3$ to form a $\pi/4$ angle with $\bm{v}_3$. These weak alignments make it difficult to model the directionality of $\bm{\mathcal{H}}$ in any simple way in terms of the eigenframe of the strain-rate tensor.
Figures~\ref{fig_HS_align}(b-d-f) show the alignments between the eigenvectors $\bm{z}_i$ of $\bm{\mathcal{H}}_\gamma^*$ and $\bm{v}_j$. The results show, in striking contrast to the corresponding plots for the alignment of $\bm{\mathcal{H}}$, that the eigenframe of $\bm{\mathcal{H}}_\gamma^*$ exhibits remarkable alignment properties, with a strong tendency to have $\bm{z}_{1,3}\bm{\cdot}\bm{v}_{1,3}\approx 0.71$, $\bm{z}_{2}\bm{\cdot}\bm{v}_2\approx 1$ and $\bm{z}_{2}\bm{\cdot}\bm{v}_3\approx 0$. This means that the tangent space $\Pi_2$ to the two-dimensional manifold on which $\bm{\mathcal{H}}_\gamma^*$ acts tends to be orthogonal to $\bm{v}_2$. On that plane, the eigenvectors $\bm{z}_1$ and $\bm{z}_3$ of $\bm{\mathcal{H}}_\gamma^*$ tend to be inclined at an angle of $\pi/4$ relative to both $\bm{v}_1$ and $\bm{v}_3$. This makes the rank-reduced anisotropic pressure Hessian suitable for modelling, since there is a well-defined most probable configuration for the orientation of $\bm{\mathcal{H}}_\gamma^*$ with respect to $\bm{S}$.
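These alignment PDFs are built from inner products of unit eigenvectors. A minimal sketch of how such direction cosines are accumulated is given below, with independent random matrices in place of the DNS fields; for independent frames the resulting statistics are flat, whereas in the DNS the PDF of $|\bm{z}_2\bm{\cdot}\bm{v}_2|$ peaks sharply at $1$.

```python
import numpy as np

rng = np.random.default_rng(3)

def sym_eigvecs(M):
    """Unit eigenvectors (as columns) of the symmetric part of M,
    ordered by descending eigenvalue."""
    w, v = np.linalg.eigh((M + M.T) / 2)
    return v[:, ::-1]          # eigh returns ascending order

cosines = []
for _ in range(1000):
    S = rng.standard_normal((3, 3))    # stand-in strain-rate sample
    Hg = rng.standard_normal((3, 3))   # stand-in rank-reduced Hessian sample
    V = sym_eigvecs(S)                 # columns v_1, v_2, v_3
    Z = sym_eigvecs(Hg)                # columns z_1, z_2, z_3
    cosines.append(abs(V[:, 1] @ Z[:, 1]))   # |v_2 . z_2|

cosines = np.array(cosines)
# Direction cosines of unit vectors lie in [0, 1] by construction.
assert np.all((cosines >= 0) & (cosines <= 1 + 1e-12))
```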
These clear preferential alignments of $\bm{\omega}$ and $\bm{S}$ with $\bm{\mathcal{H}}_\gamma^*$ also help in understanding how the anisotropic pressure Hessian prevents blow-up, as we will discuss in the next section.
\section{Numerical results: conditioned statistical geometry}
The simpler geometry of the rank-reduced anisotropic pressure Hessian, together with its well-defined preferential alignments, can facilitate the understanding of the effect of the pressure Hessian on the dynamics of the velocity gradient invariants.
In particular, the role of the anisotropic pressure Hessian in preventing the blow-up of the Restricted Euler system can be analyzed by considering how the statistical alignment properties of $\bm{\mathcal{H}}_\gamma^*$ depend on $\bm{S}$ and $\bm{\omega}$.
The finite-time singularity prevention mechanism can be safely tackled by using $\bm{\mathcal{H}}_\gamma^*$ instead of $\bm{\mathcal{H}}$, since this regularity problem is expressed in terms of invariants and is not linked to the orientation of the strain-rate eigenframe with respect to a fixed frame. The equations for the invariants dynamics, \eqref{eq_lambda} and \eqref{eq_omega}, show that there is a local stabilizing effect due to the reduction of the strain rates by the centrifugal force produced by the vorticity. However, it is known that this mechanism alone is not sufficient to prevent blow-up of the system \citep{Meneveau2011}, and the anisotropic pressure Hessian provides the additional contribution that stabilizes the dynamics. This can be understood more easily when the rank-reduced anisotropic pressure Hessian is employed instead of the full anisotropic pressure Hessian.
Indeed, $\bm{\mathcal{H}}_\gamma^*$ is effective only on a plane and the results show a clear tendency for $\bm{S}$ and $\bm{\omega}$ to preferentially align with $\bm{\mathcal{H}}_\gamma^*$, which is in striking contrast with their mild preferential alignment with $\bm{\mathcal{H}}$.
\subsection{Rank-reduced anisotropic pressure Hessian--strain-rate alignment}
The components of the rank-reduced anisotropic pressure Hessian, $\bm{\mathcal{H}}_\gamma^*$, in the strain-rate eigenframe can be expressed as
\begin{equation}
\bm{V}^\top\bm{\cdot}\bm{\mathcal{H}}_\gamma^*\bm{\cdot}\bm{V} = \bm{V}^\top\bm{\cdot} \bm{Z\cdot\psi \cdot Z}^\top \bm{\cdot}\bm{V}
\label{eq_H_components}
\end{equation}
where $\bm{V}$ and $\bm{Z}$ are the matrices whose columns contain the components of the strain-rate eigenvectors and of the rank-reduced pressure Hessian eigenvectors with respect to a Cartesian basis, that is, $V_{ij}\equiv\bm{e}_i\bm{\cdot v}_j$ and $Z_{ij}\equiv\bm{e}_i\bm{\cdot z}_j$.
The diagonal and singular matrix $\bm{\psi}$ contains the eigenvalues of the rank-reduced anisotropic pressure Hessian, $(\psi,0,-\psi)$.
The components of $\bm{\mathcal{H}}_\gamma^*$ in the strain-rate eigenframe can be explicitly computed,
\begin{equation}
\widetilde{\bm{\mathcal{H}}}_\gamma^* = \bm{V}^\top\bm{\cdot}\bm{\mathcal{H}}_\gamma^*\bm{\cdot}\bm{V} = \psi
\begin{bmatrix}
\widetilde{{z}}_{11}^2 - \widetilde{{z}}_{13}^2 &
\widetilde{{z}}_{11} \widetilde{{z}}_{21} - \widetilde{{z}}_{13}\widetilde{{z}}_{23} &
\widetilde{{z}}_{11} \widetilde{{z}}_{31} - \widetilde{{z}}_{13}\widetilde{{z}}_{33} \\
\widetilde{{z}}_{11} \widetilde{{z}}_{21} - \widetilde{{z}}_{13}\widetilde{{z}}_{23} &
\widetilde{{z}}_{21}^2 - \widetilde{{z}}_{23}^2 &
\widetilde{{z}}_{21} \widetilde{{z}}_{31} - \widetilde{{z}}_{23}\widetilde{{z}}_{33} \\
\widetilde{{z}}_{11} \widetilde{{z}}_{31} - \widetilde{{z}}_{13}\widetilde{{z}}_{33} &
\widetilde{{z}}_{21} \widetilde{{z}}_{31} - \widetilde{{z}}_{23}\widetilde{{z}}_{33} &
\widetilde{{z}}_{31}^2 - \widetilde{{z}}_{33}^2 \\
\end{bmatrix},
\label{eq_H_gamma_gen}
\end{equation}
where $\widetilde{z}_{ij}\equiv\bm{v}_i\bm{\cdot z}_j$ is the $i$-th strain-rate eigenframe component of the $j$-th eigenvector $\bm{z}_j$ and $\sum_i \widetilde{z}_{ij}^2=1$. Since $\bm{\mathcal{H}}_\gamma^*$ acts only on the plane $\Pi_2$, spanned by $\bm{z}_1$ and $\bm{z}_3$, the expression of $\bm{\mathcal{H}}_\gamma^*$ in the strain-rate eigenframe is simplified. The rank-reduction allows for separation of variables between the magnitude and orientation contributions.
The magnitude of the pressure Hessian is described solely by $\psi$ while the orientation depends on the dot products $\widetilde{z}_{ij}$.
The factorization into the product of a function only of the eigenvalue and a function only of the alignment of the eigenframes is a feature of two-dimensional traceless tensors, while in three dimensions such separation of variables is in general not possible \citep{Ballouz2018}.
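This factorization can be verified directly: writing the rank-two tensor as $\bm{Z}\,\mathrm{diag}(\psi,0,-\psi)\,\bm{Z}^\top$ and transforming to the strain-rate eigenframe reproduces the structure above, with $\psi$ factoring out of a matrix that depends only on the $\widetilde{z}_{ij}$. The sketch below uses arbitrary random orthonormal frames and an arbitrary value of $\psi$.

```python
import numpy as np

rng = np.random.default_rng(4)

def random_frame(rng):
    """Random orthonormal frame via QR decomposition."""
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    return Q

V = random_frame(rng)                    # stand-in strain-rate eigenframe
Z = random_frame(rng)                    # stand-in rank-reduced Hessian frame
psi = 2.7                                # arbitrary eigenvalue magnitude

Hg = Z @ np.diag([psi, 0.0, -psi]) @ Z.T    # rank-reduced tensor
Hg_tilde = V.T @ Hg @ V                     # components in the strain frame

zt = V.T @ Z                                # zt[i, j] = v_i . z_j
# Strain-frame components: psi * (zt_i1 zt_k1 - zt_i3 zt_k3),
# i.e. magnitude (psi) and orientation (zt) separate cleanly.
expected = psi * (np.outer(zt[:, 0], zt[:, 0]) - np.outer(zt[:, 2], zt[:, 2]))
assert np.allclose(Hg_tilde, expected)
```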
The diagonal components of $\bm{\mathcal{H}}_\gamma^*$ in the strain-rate eigenframe cause a variation of the strain-rate eigenvalues. Using equation \eqref{eq_H_gamma_gen} in \eqref{eq_lambda}, and neglecting the viscous contribution, gives
\begin{equation}
D_t{\lambda_{i}} = -\left(\lambda^2_{i}-\frac{1}{3}\sum_j\lambda_j^2\right) - \frac{1}{4}\left( \widetilde{\omega}_i^2 - \frac{1}{3}\sum_j\widetilde{\omega}_j^2\right) - \psi \left(\widetilde{{z}}_{i1}^2 - \widetilde{{z}}_{i3}^2\right).
\label{eq_lambda_z}
\end{equation}
It is known that the blow-up of the Restricted Euler model occurs in the quadrant $R>0,Q<0$, where the invariants $R$ and $Q$ are defined in equation \eqref{eq_def_RQ}. In particular, the blow-up is associated with $R\to+\infty$ and $Q\sim-(27R^2/4)^{1/3}\to-\infty$ \citep{Vieillefosse1982}. In this quadrant the straining field is in a state of bi-axial extension, with $\lambda_1>0,\lambda_2>0,\lambda_3<0$. Therefore, to explore how $\bm{\mathcal{H}}_\gamma^*$ prevents blow-up, we must consider its effects on the states where $\lambda_1>0,\lambda_2>0,\lambda_3<0$.
From equation \eqref{eq_lambda_z} we see that $\bm{\mathcal{H}}_\gamma^*$ will act to prevent blow-up in the quadrant $Q<0, R>0$ if $\widetilde{{z}}_{13}^2 - \widetilde{{z}}_{11}^2<0$, $\widetilde{{z}}_{23}^2 - \widetilde{{z}}_{21}^2<0$, and $\widetilde{{z}}_{33}^2 - \widetilde{{z}}_{31}^2>0$. To consider this, in figure \ref{blow_up} we show the conditioned averages $\langle\widetilde{{z}}_{i3}^2 - \widetilde{{z}}_{i1}^2\big\vert R,Q\rangle$. The results confirm that when $Q<0, R>0$, $\langle\widetilde{{z}}_{23}^2 - \widetilde{{z}}_{21}^2\big\vert R,Q\rangle<0$, and $\langle\widetilde{{z}}_{33}^2 - \widetilde{{z}}_{31}^2\big\vert R,Q\rangle>0$, showing that $\bm{\mathcal{H}}_\gamma^*$ acts to reduce $|\lambda_2|$ and $|\lambda_3|$. However, contrary to expectation, they also show that $\langle\widetilde{{z}}_{13}^2 - \widetilde{{z}}_{11}^2\big\vert R,Q\rangle>0$, such that $\bm{\mathcal{H}}_\gamma^*$ explicitly acts to increase $\lambda_1$ when $Q<0, R>0$. Nevertheless, since $\sum_i \lambda_i =0$, if $\bm{\mathcal{H}}_\gamma^*$ acts to reduce $|\lambda_3|$ when $Q<0, R>0$, then it also indirectly acts to reduce $\lambda_1$, since $\lambda_1\to\infty$ is not possible unless $|\lambda_3|\to\infty$ (noting $-\lambda_3\geq \lambda_2$). Therefore, the effect of $\bm{\mathcal{H}}_\gamma^*$ is somewhat subtle, directly acting to prevent blow-up of $\lambda_2$ and $\lambda_3$, and only indirectly acting to prevent the blow-up of $\lambda_1$. Interestingly, the direct amplification of $\lambda_1$ due to $\bm{\mathcal{H}}_\gamma^*$ becomes very small in a narrow region along the right Vieillefosse tail, as the colors in figure \ref{blow_up}(b) show. Therefore, this amplification mechanism is not effective in the phase space region in which the Restricted Euler system blows up.
\begin{figure}
\includegraphics[width=1.\textwidth]{compr_fig_8-crop.pdf}
\caption{Results for $\langle\widetilde{{z}}_{i3}^2 - \widetilde{{z}}_{i1}^2\big\vert R,Q\rangle$, (a) $i=1$, (c) $i=2$, (e) $i=3$. The color range has been truncated to $[-0.3,0.3]$ in order to highlight the trend of the variables around the most probable values.
Results for $\langle|\widetilde{{z}}_{i3}^2 - \widetilde{{z}}_{i1}^2|\big\vert R,Q\rangle$ in logarithmic scale, (b) $i=1$, (d) $i=2$, (f) $i=3$. Black lines denote the Vieillefosse
tails.
}
\label{blow_up}
\end{figure}
The scalar products $\widetilde{z}_{ij}$ preferentially lie in a very narrow interval around a few well defined values, as clearly indicated by the results in figure \ref{fig_HS_align}.
In particular, the eigenvectors $\bm{z}_{1,3}$ of $\bm{\mathcal{H}}_\gamma^*$ tend to form an angle of $\pi/4$ with the eigenvectors $\bm{v}_{1,3}$ of $\bm{S}$. Therefore a typical configuration for the relative orientation between $\bm{\mathcal{H}}_\gamma^*$ and $\bm{S}$ is
\begin{equation}
\bm{V}^\top\bm{\cdot Z} =
\begin{bmatrix}
\cos\left(\pi/4+\epsilon_{11}\right) & \sin\left(\epsilon_{12}\right) & \cos\left(\pi/4+\epsilon_{13}\right)\\
\sin\left(\epsilon_{21}\right) & \cos\left(\epsilon_{22}\right) & \sin\left(\epsilon_{23}\right)\\
\cos\left(\pi/4+\epsilon_{31}\right) & \sin\left(\epsilon_{32}\right) & \cos\left(\pi/4+\epsilon_{33}\right)\\
\end{bmatrix},
\label{eq_rot_VZ}
\end{equation}
where the quantities $\epsilon_{ij}$ represent the deviations of the angles from the idealized configuration considered, and the signs depend on the angle between $\bm{v}_1$ and $\bm{z}_1$, which can be $\pi/4$ or $3\pi/4$ (depending upon the sign chosen for the eigenvectors). This sign does not change the discussion below.
Considering only small deviations from the most probable alignment, that is, considering $|\epsilon_{ij}|\ll1$, the elements of the rotation matrix in equation \eqref{eq_rot_VZ} can be Taylor-expanded and, at first order in $\epsilon_{ij}$, the expression for the rank-reduced anisotropic pressure Hessian in the strain-rate eigenframe reduces to
\begin{equation}
\widetilde{\bm{\mathcal{H}}}_\gamma^* = \bm{V}^\top\bm{\cdot}\bm{\mathcal{H}}_\gamma^*\bm{\cdot V} \sim
\psi
\begin{bmatrix}
-2\epsilon_{11} & \epsilon_{32} & \pm 1 \\
\epsilon_{32} & 0 & \epsilon_{12} \\
\pm 1 & \epsilon_{12} & 2\epsilon_{11}\\
\end{bmatrix},
\label{eq_H_gamma_V}
\end{equation}
where the orthonormality constraint, $\bm{V\cdot V}^\top=\mathbf{I}$, has been used to relate the small perturbation angles.
It is the diagonal components of $\bm{\mathcal{H}}_\gamma^*$ that contribute directly to the rate of change of the strain-rate eigenvalues, as in equation \eqref{eq_lambda_z}, and the anisotropic pressure Hessian has no direct effect on the strain-rate eigenvalues when the most probable alignments, $\epsilon_{ij}=0$, occur.
At the level of this first order approximation, the effect of $\bm{\mathcal{H}}_\gamma^*$ on the first and third eigenvalue always has the opposite sign, which is consistent with the stabilizing effect of the pressure Hessian. Therefore, according to this first order approximation, the pressure Hessian tends to counteract both $\lambda_1$ and $\lambda_3$ by imposing a negative rate of change of $\lambda_1$ and a positive rate of change of $\lambda_3$, such that both the most positive and negative eigenvalues are pulled toward smaller magnitudes. The results in figure \ref{blow_up} confirm this prediction in the $Q>0,R>0$ quadrant,
where it is seen that ${\bm{\mathcal{H}}}_\gamma^*$ acts to suppress the magnitudes of both $\lambda_1$ and $\lambda_3$. That the linearized prediction fails in the region $Q<0,R>0$ is perhaps not surprising since that is the region of most intense nonlinear activity, and where ${\bm{\mathcal{H}}}_\gamma^*$ must be sufficiently large (and by implication $\epsilon_{ij}$ cannot be too small) in order to counteract the blow-up associated with the RE dynamics. The linearization also predicts that the influence of $\bm{\mathcal{H}}_\gamma^*$ on $\lambda_2$ is only a second order effect when $\epsilon_{ij}$ is small. However, this prediction is in general not supported by the DNS, since the results in figure \ref{blow_up} show that in most of the $Q,R$ plane, the rank-reduced anisotropic pressure Hessian strongly hinders the growth of positive $\lambda_2$.
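The structure of the linearized expression can be checked numerically: placing the eigenframe of $\bm{\mathcal{H}}_\gamma^*$ at the ideal $\pi/4$ configuration and perturbing it by a small rotation yields $O(\epsilon)$ outer diagonal entries, a higher-order middle diagonal entry, and an off-diagonal $(1,3)$ entry equal to $\pm\psi$ at leading order. The frames, perturbation axis and angle below are arbitrary illustrative choices.

```python
import numpy as np

def rot(axis, theta):
    """Rodrigues rotation matrix about a unit axis."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

psi = 1.0
V = np.eye(3)                      # take the strain frame as the lab frame
Z0 = rot([0, 1, 0], np.pi / 4)     # ideal frame: pi/4 rotation about v_2
eps = 1e-3
Z = rot([1, 1, 1], eps) @ Z0       # small misalignment (arbitrary axis)

Hg = Z @ np.diag([psi, 0.0, -psi]) @ Z.T
Hg_tilde = V.T @ Hg @ V            # components in the strain frame

# (1,3) entry stays at +-psi to first order in eps; diagonal entries are
# O(eps); the middle diagonal entry is O(eps^2), i.e. higher order.
assert abs(abs(Hg_tilde[0, 2]) - psi) < 10 * eps
assert np.all(np.abs(np.diag(Hg_tilde)) < 10 * eps)
assert abs(Hg_tilde[1, 1]) < 1e-4
```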
In order to fully quantify the effect of $\bm{\mathcal{H}}_\gamma^*$, its magnitude should also be considered together with its orientation. The average of the diagonal components of $-\bm{\mathcal{H}}_\gamma^*$ in the strain-rate eigenframe conditioned on the invariants $R,Q$ is shown in figure \ref{H_gamma_eigS}.
Despite the large magnitude of the rank-reduced anisotropic pressure Hessian eigenvalue, the contribution of $\bm{\mathcal{H}}_\gamma^*$ to the strain-rate eigenvalue dynamics is moderate on average.
Figure \ref{eigen_QR} shows that the eigenvalue of $\bm{\mathcal{H}}_\gamma^*$, namely $\psi$, is very large along the right Vieillefosse tail and in the quadrant $Q>0,R<0$.
Figures \ref{blow_up}(a--c) show that $\langle|\widetilde{{z}}_{i3}^2 - \widetilde{{z}}_{i1}^2|\big\vert R,Q\rangle$ is small along the right Vieillefosse tail, and these small values of $|\widetilde{z}_{i3}-\widetilde{z}_{i1}|$ compensate the large magnitude of $\psi$ in the same region.
In particular, the orientational contribution of $\bm{\mathcal{H}}_\gamma^*$ to the dynamics of $\lambda_1$, namely $|\widetilde{z}_{13}-\widetilde{z}_{11}|$, is very small along the right Vieillefosse tail.
This explains why the direct amplification of $\lambda_1$ due to $\bm{\mathcal{H}}_\gamma^*$ does not lead to blow-up: the amplification is strong for $R<0$, but is very weak along the right Vieillefosse tail where the RE dynamics blow up, as shown in figure \ref{H_gamma_eigS}(a).
As observed above, the rank-reduced anisotropic pressure Hessian tends to suppress positive values of $\lambda_2$ in the $R>0,Q<0$ quadrant, as displayed in figure \ref{H_gamma_eigS}(b). Interestingly, however, $\bm{\mathcal{H}}_\gamma^*$ contributes to the growth of positive $\lambda_2$ in the region $Q>0,R<0$, where $\bm{\omega}$ and $\bm{v}_2$ are also strongly aligned (see figure \ref{fig_vorticity_S_alignment_RQ} (b)). As such, $\bm{\mathcal{H}}_\gamma^*$ indirectly contributes to vortex stretching.
The results in figure \ref{H_gamma_eigS}(c) show that the rank-reduced anisotropic pressure Hessian strongly hinders $\lambda_3$ along the right Vieillefosse tail, contributing to its amplification only in a small region where $R<0$ and $Q>0$. This is a key way in which $\bm{\mathcal{H}}_\gamma^*$ acts to prevent blow-up in the region $R>0,Q<0$.
\begin{figure}
\includegraphics[width=1.\textwidth]{compr_fig_9-crop.pdf}
\caption{Results for $\langle-\mathcal{\widetilde{H}}^*_{\gamma,i(i)}\big\vert R,Q\rangle$, the average of the diagonal components of $-\bm{\mathcal{H}}_\gamma^*$ in the strain-rate eigenframe conditioned on the principal invariants $R,Q$. (a) $i=1$, (b) $i=2$, (c) $i=3$. Black lines denote the Vieillefosse tails.
}
\label{H_gamma_eigS}
\end{figure}
\subsection{Rank-reduced anisotropic pressure Hessian--vorticity alignment}
As shown earlier, $\bm{\mathcal{H}}_\gamma^*$ exhibits remarkable alignment properties with respect to the vorticity $\bm{\omega}$. In view of this, we now consider how this alignment impacts the way that $\bm{\mathcal{H}}_\gamma^*$ competes with the centrifugal term produced by vorticity to control the growth of the strain-rates. This can be explored by considering the strain-rates along the vorticity direction.
The statistical alignments of the vorticity vector with the strain-rate eigenvectors, quantified by $(\bm{v}_i\bm{\cdot \hat{\omega}})^2$, conditioned on the invariants $R$ and $Q$, are shown in figure \ref{fig_vorticity_S_alignment_RQ}.
The vorticity tends to align with the most extensional strain-rate eigenvector in the region $R<0$ and also, to a lesser extent, between the Vieillefosse tails. Alignment between the vorticity and the most compressional strain-rate eigenvector takes place in the region $R>0$ only, above the right Vieillefosse tail.
The vorticity vector strongly aligns with the intermediate strain-rate eigenvector in the region $Q>0$, close to the $R=0$ axis and along the right Vieillefosse tail.
The half-plane $Q>0$ and the vicinity of the right Vieillefosse tail correspond to the bulk of probability on the $Q,R$ plane \citep{Meneveau2011}, as shown in figure \ref{fig_vorticity_S_alignment_RQ}(d), and therefore preferential alignment between vorticity and the intermediate strain-rate eigenvector is observed. In the phase-space region in which the alignment between vorticity and the intermediate strain-rate eigenvector is strong, the contribution of $\bm{\mathcal{H}}_\gamma^*$ to the dynamics of $\lambda_2$ is larger. This is observed by comparing figure \ref{fig_vorticity_S_alignment_RQ}(b) and figure \ref{H_gamma_eigS}(b).
\begin{figure}
\includegraphics[width=1.\textwidth]{compr_fig_10-crop.pdf}
\caption{Results for $\langle (\hat{\bm{\omega}}\bm{\cdot}\bm{v}_i)^2\big\vert R,Q\rangle$, the statistical alignment between vorticity and eigenvectors of the strain-rate tensor, conditioned on the principal invariants $R,Q$. (a) $i=1$, (b) $i=2$, (c) $i=3$. Plot (d) shows the joint probability density of the principal invariants $R$ and $Q$. Black lines denote the Vieillefosse tails.}
\label{fig_vorticity_S_alignment_RQ}
\end{figure}
We now turn to the combined effects of $\bm{\mathcal{H}}_\gamma^*$ and $\bm{\omega}$ on the strain-rate dynamics.
The evolution equation for $\bm{S}$ may be written as (ignoring the viscous term)
\begin{equation}
D_t\bm{S}=-\left(\bm{S\cdot S}-\frac{\text{\textrm{Tr}}(\bm{S\cdot S})}{3}\mathbf{I} \right) -\frac{1}{4}\left(\bm{\omega}\bm{\omega}^\top- \frac{\omega^2}{3} \mathbf{I} \right)-\bm{\mathcal{H}}.
\end{equation}
We consider the projection of this equation along the instantaneous vorticity direction $\hat{\bm{\omega}}\equiv\bm{\omega}/\omega$, and along this direction, the contribution of the last two terms is
\begin{equation}
\hat{\bm{\omega}}\bm{\cdot} \left(-\frac{1}{4}\bm{\omega}\bm{\omega}^\top+ \frac{\omega^2}{12} \mathbf{I} -\bm{\mathcal{H}} \right) \bm{\cdot}\hat{\bm{\omega}} =
-\frac{1}{6}\omega^2 -\psi\left( (\hat{\bm{\omega}}\bm{\cdot}\bm{z}_1)^2-(\hat{\bm{\omega}}\bm{\cdot}\bm{z}_3)^2 \right),
\label{eq_H_gamma_R2}
\end{equation}
where the properties of $\bm{\mathcal{H}}_{\gamma}^*$ have allowed us to use $\bm{\mathcal{H}}_{\gamma}^*$ instead of $\bm{\mathcal{H}}$. Note that the term $-\omega^2/6$ comes entirely from the contribution of vorticity to the isotropic part of the pressure Hessian, since the centrifugal contribution does not act along the direction of vorticity, but only orthogonal to it.
Equation \eqref{eq_H_gamma_R2} shows that (noting $\psi\geq 0$) when the vorticity is more aligned with the extensional/compressional direction of $\bm{\mathcal{H}}_\gamma^*$, then $\bm{\mathcal{H}}_\gamma^*$ acts with/against the contribution from vorticity to oppose/aid the production of strain along the vorticity direction.
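The cancellation expressed by \eqref{eq_H_gamma_R2} can be checked directly. The following minimal sketch (synthetic random tensors, not DNS data) builds a rank-reduced Hessian with eigenvalues $(\psi,0,-\psi)$ in a random orthonormal frame $\{\bm{z}_1,\bm{z}_2,\bm{z}_3\}$ and verifies the projected identity:

```python
import numpy as np

rng = np.random.default_rng(1)

# random orthonormal frame (z1, z2, z3) and a rank-reduced Hessian psi*(z1 z1^T - z3 z3^T)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
z1, z2, z3 = Q.T
psi = abs(rng.standard_normal())                  # psi >= 0; eigenvalues are (psi, 0, -psi)
H = psi * (np.outer(z1, z1) - np.outer(z3, z3))   # symmetric, trace-free, rank 2

w = rng.standard_normal(3)                        # vorticity vector
w2 = w @ w
what = w / np.sqrt(w2)

lhs = what @ (-0.25 * np.outer(w, w) + w2 / 12.0 * np.eye(3) - H) @ what
rhs = -w2 / 6.0 - psi * ((what @ z1) ** 2 - (what @ z3) ** 2)
assert np.isclose(lhs, rhs)
```

The centrifugal term $-\frac{1}{4}\bm{\omega}\bm{\omega}^\top$ contributes $-\omega^2/4$ along $\hat{\bm{\omega}}$ and the isotropic correction contributes $+\omega^2/12$, which sum to the $-\omega^2/6$ term, while $\bm{\mathcal{H}}_\gamma^*$ supplies the orientation-weighted $\psi$ term.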
In figure \ref{fig_vorticity_H_gamma_alignment_RQ} we consider the DNS data for $\langle (\hat{\bm{\omega}}\bm{\cdot}\bm{z}_3)^2-(\hat{\bm{\omega}}\bm{\cdot}\bm{z}_1)^2 \big\vert R,Q\rangle$.
The results show that in $Q>0$ regions, the vorticity vector preferentially aligns with the most compressional eigenvector of the rank-reduced anisotropic pressure Hessian, so that $\langle (\hat{\bm{\omega}}\bm{\cdot}\bm{z}_3)^2-(\hat{\bm{\omega}}\bm{\cdot}\bm{z}_1)^2 \big\vert R,Q\rangle>0$. By contrast, in $Q<0$ regions $\langle (\hat{\bm{\omega}}\bm{\cdot}\bm{z}_3)^2-(\hat{\bm{\omega}}\bm{\cdot}\bm{z}_1)^2 \big\vert R,Q\rangle<0$. This striking behavior means that in vorticity-dominated regions, $\bm{\mathcal{H}}_\gamma^*$ acts to increase the strain-rate along the vorticity direction, and the opposite in strain-dominated regions.
\begin{figure}
\includegraphics[width=1.\textwidth]{compr_fig_11-crop.pdf}
\caption{Statistical alignment between vorticity and eigenvectors of the rank-reduced anisotropic pressure Hessian, conditioned on the principal invariants $R,Q$. Results for (a) $\langle (\hat{\bm{\omega}}\bm{\cdot}\bm{z}_3)^2-(\hat{\bm{\omega}}\bm{\cdot}\bm{z}_1)^2 \big\vert R,Q\rangle$ and (b) $\langle |(\hat{\bm{\omega}}\bm{\cdot}\bm{z}_3)^2-(\hat{\bm{\omega}}\bm{\cdot}\bm{z}_1)^2| \big\vert R,Q\rangle$ in logarithmic scale. Black lines denote the Vieillefosse tails.}
\label{fig_vorticity_H_gamma_alignment_RQ}
\end{figure}
\section{Conclusions}
In this paper a new symmetry for the Lagrangian dynamics of the velocity gradient invariants has been presented and it has been interpreted as a gauge for the anisotropic pressure Hessian.
This gauge arises because the dynamics of the strain-rate eigenvalues and vorticity components in the strain-rate eigenframe are unaffected by the angular velocity of the eigenframe along the vorticity direction. Using this symmetry, we have introduced a modified pressure Hessian, $\bm{\mathcal{H}}_\gamma^*$, that is the sum of the standard pressure Hessian and the gauge term. We then sought lower dimensional representations of the pressure Hessian by performing a rank-reduction on $\bm{\mathcal{H}}_\gamma^*$, allowed by the additional degree of freedom provided by the gauge symmetry.
Remarkably, this rank reduction is possible almost everywhere in the flow, so that a two-dimensional $\bm{\mathcal{H}}_\gamma^*$ may be defined that generates exactly the same eigenframe dynamics as the full three-dimensional pressure Hessian $\bm{\mathcal{H}}$.
We also showed that $\bm{\mathcal{H}}_\gamma^*$ exhibits remarkable alignment properties with respect to the strain-rate eigenframe and vorticity, that are not possessed by $\bm{\mathcal{H}}$.
In particular, the plane on which $\bm{\mathcal{H}}_\gamma^*$ acts tends to be almost orthogonal to the vorticity vector. Consistently, the intermediate eigenvector of $\bm{\mathcal{H}}_\gamma^*$ strongly aligns with the strain-rate intermediate eigenvector. Also, the most compressional/extensional eigenvectors of $\bm{\mathcal{H}}_\gamma^*$ preferentially form an angle of $\pi/4$ with the most compressional/extensional eigenvectors of the strain-rate tensor.
The rank-reduced anisotropic pressure Hessian offers promising applications. For example, the reduction in dimensionality, provided by replacing $\bm{\mathcal{H}}$ with $\bm{\mathcal{H}}_\gamma^*$ in the eigenframe equations, is a step towards more efficient modeling, since the rank-reduced anisotropic pressure Hessian can be specified by only four numbers instead of the five required for the fully three-dimensional $\bm{\mathcal{H}}$. The eigenvalues of $\bm{\mathcal{H}}_\gamma^*$ are also shown to be strongly related to the local strain-rate and vorticity in the flow, suggesting relatively simple ways to model these eigenvalues in Lagrangian models for the velocity gradient tensor. This property, together with the reduction in dimensionality and the remarkable alignment properties of $\bm{\mathcal{H}}_\gamma^*$, offers promising insights into ways in which the anisotropic pressure Hessian and its effects on the eigenframe dynamics can be modelled. The development of such a model will be the subject of future work.
\section*{Acknowledgements}
This work used the Extreme Science and Engineering Discovery Environment (XSEDE), supported by National Science Foundation grant ACI-1548562 \citep{xsede}.
\section{Declaration of Interests}
The authors report no conflict of interest.
\bibliographystyle{jfm}
\section{Introduction}
The small-scales of turbulent flows are characterized by the velocity gradient field, rather than the velocity field itself \citep{tsinober}. As a result, analyzing the dynamical and statistical properties of the velocity gradients provides an important way to explore the physics of the small-scales of turbulent flows, which are governed by highly non-linear and non-local dynamical processes, and whose statistics are strongly intermittent in space and time. Moreover, studying the velocity gradients in a Lagrangian reference frame has proven to be particularly fruitful for understanding and modeling \citep{meneveau11}.
The equation governing the velocity gradient tensor along a fluid particle trajectory can be derived from the Navier-Stokes equation (NSE). However, it is unclosed because of the pressure Hessian and viscous terms. In order to model these terms it is vital to understand their behavior, on the basis of which one can construct reasonable closure approximations. For the inviscid case, an early closure model by \cite{vieillefosse84} involved neglecting the non-local/anisotropic part of the pressure Hessian while retaining its local/isotropic part, leading to the Restricted Euler Model (REM). While this model led to important insights, its solution exhibits a finite time singularity. More recent models that also account for the viscous contribution have been developed. In \cite{chevillard06}, a closure was constructed based on the deformation history of a fluid particle in the flow. The closure model involved only retaining the information on the recent deformation history, that is, for times up to the Kolmogorov timescale $\tau_\eta$ in the past. Phenomenological closures were then constructed assuming that at a time $\tau_\eta$ in the past, the pressure Hessian was isotropic. This model does not exhibit the singularity associated with the REM, and was shown to capture many of the non-trivial features of the velocity gradient dynamics that are observed in experiments and Direct Numerical Simulations (DNS) of the NSE. However, it was not able to capture certain important features, and also had problems when used to model large Reynolds number flows. \cite{chevillard08} also showed that the closure model presented in \cite{chevillard06} misses some important features of the pressure Hessian dynamics and statistical geometry.
In \cite{wilczek14} a very different approach was used to close the Lagrangian velocity gradient equation. Namely, a closure was developed based upon the assumption that the velocity gradients are spatiotemporally correlated random fields with Gaussian statistics, and closed expressions for the pressure Hessian and viscous terms were obtained through the evaluation of the associated characteristic functionals. Owing to the Gaussian assumption, this model led to predictions that are not in full agreement with DNS data. To attempt to cure this, the authors modified the closure such that the mathematical structure was retained, but the coefficients appearing in the model were prescribed using DNS data. This led to significant improvements, and the model provides interesting insights into the role of the anisotropic pressure Hessian in preventing the singularities arising in the REM. However, again, the model failed to fully capture the true dynamics of the velocity gradients, and did not satisfy the kinematic relations of \cite{betchov56}, which ought to be satisfied for an incompressible, statistically homogeneous flow field (assuming ergodicity of the flow).
Another model has been developed by \cite{johnson16} that utilizes the closure modeling ideas of both \cite{chevillard06} and \cite{wilczek14}. This model leads to improvements compared with the two models on which it is based, and its parameters are specified in such a way that by construction the model satisfies the kinematic relations of \cite{betchov56}. However, a quantitative comparison with DNS data revealed shortcomings in the model's ability to properly capture the intermittency of the flow. Moreover, it, like that of \cite{chevillard06}, runs into difficulties for high Reynolds number flows. However, a more recent development in \cite{johnson17} solves this Reynolds number limitation of the model.
In summary, while significant progress has been made since the initial modelling efforts of \cite{vieillefosse84}, much remains to be done. Part of the significant difficulty of developing accurate closure approximations for the Lagrangian velocity gradient equation is that the anisotropic/non-local pressure Hessian and its dynamical effects on the flow are not fully understood and are difficult to approximate using simple closure ideas. This fact is part of the motivation behind the present work.
In the following, we present what appears to be a previously unrecognized gauge symmetry for the pressure Hessian, such that when this gauge is added to the pressure Hessian, the velocity gradient dynamics in the strain-rate eigenframe remain unchanged. We then exploit this gauge symmetry to perform a rank reduction on the non-local pressure Hessian. Remarkably, this modified pressure Hessian lives on a 2-dimensional manifold almost everywhere in the flow, and exhibits striking alignment properties with respect to the strain-rate eigenframe and the vorticity vector.
\section{Theory}
\subsection{Equations for the fluid velocity gradient in the strain-rate eigenframe}
The Navier-Stokes equation for a Newtonian, incompressible fluid with unitary density is
\begin{align}
\frac{D}{Dt}\bm{u}\equiv\partial_t \bm{u} +(\bm{u}\bm{\cdot}\nabla)\bm{u} = -\nabla P + \nu\nabla^2\bm{u},\quad \nabla\bm{\cdot}\bm{u} = 0,\, t\geq 0, \bm{x}\in\mathbb{R}^d,
\label{eq_NS}
\end{align}
where $\bm{u}\in\mathbb{R}^d,P\in\mathbb{R}$ are the fluid velocity and pressure, and $\nu$ is the kinematic viscosity. We will work in $d=3$; however, the analysis is easily extended to other cases.
By taking the gradient of \eqref{eq_NS} we obtain the equation for the velocity gradient tensor
\begin{align}
\frac{D}{Dt}\bm{A} = -\bm{A\cdot A} - \bm{H} + \nu \nabla^2 \bm{A},\quad \textrm{Tr}(\bm{A} )&= 0,
\label{eq_grad}
\end{align}
where $\bm{A}\equiv \bm{\nabla u}$ is the velocity gradient, and $\bm{H}\equiv \bm{\nabla\nabla}P$ is the pressure Hessian.
The pressure and viscous contributions to \eqref{eq_grad} are not in closed form, since they cannot be expressed in terms of $\bm{A}(\bm{x}(t),t)$, where $\bm{x}(t)$ is the trajectory of the fluid particle being followed. Models are necessary to define those terms and reliable modelling of them requires an understanding of their dynamical and statistical properties.
The tensor $\bm{A}$ may be decomposed into its symmetric and anti-symmetric part, namely the rate-of-strain $\bm{S} \equiv (\bm{A} + \bm{A}^\top)/2$, and the rate-of-rotation $\bm{R} \equiv (\bm{A} - \bm{A}^\top)/2$, whose components are related to the vorticity $\bm{\omega}\equiv\bm{\nabla}\times\bm{u}$ as $R_{ij}=(-1/2)\epsilon_{ijk}\omega_k$. Using \eqref{eq_grad} we may obtain the equations governing $\bm{S}$ and $\bm{\omega}$, and it is insightful to write these in the eigenframe of $\bm{S}$, with eigenvectors $\bm{v}_i$ (satisfying $\|\bm{v}_i\|\equiv 1$) and eigenvalues $\lambda_i$ for $i=1,2,3$. The equations in this eigenframe are ($\,\widetilde{\cdot}$ denotes a quantity in the eigenframe)
\begin{eqnarray}
\frac{D}{Dt}{\lambda_{i}} &=& -\lambda^2_{i} + \frac{1}{4}\left( \omega^2 - \widetilde{\omega}_i^2\right) - \widetilde{H}_{i(i)} +\widetilde{ \nu\nabla^2 S_{i(i)}},\label{eq_lambda}\\
\widetilde{W}_{ij}\left(\lambda_{(j)}-\lambda_{(i)}\right) &=& -\frac{1}{4} \widetilde{\omega}_i\widetilde{\omega}_j - \widetilde{H}_{ij} + \widetilde{\nu\nabla^2{S}_{ij}}, \; j \ne i,\label{eq_alg}\\
\frac{D}{Dt}{\widetilde{\omega}_i} &=& \lambda_{(i)}\widetilde{\omega}_i - \widetilde{W}_{ij}\widetilde{\omega}_j +\widetilde{ \nu\nabla^2\omega_i}.\label{eq_omega}
\end{eqnarray}
In these equations, the indices in parentheses are not summed over, $\widetilde{\omega}_i=\bm{v}_i\bm{\cdot}\bm{\omega}$ is the vorticity component in the eigenframe, and ${\omega}^2\equiv\widetilde{\omega}_i\widetilde{\omega}_i$. The term $\widetilde{W}_{ij}$ corresponds to the components of $W_{ij}$ in the eigenframe, where $W_{ij}$ is related to the eigenframe angular velocity $\bm{w}$ through $W_{ij}=(-1/2)\epsilon_{ijk}w_k$, with $(d/dt)\bm{v}_i=\bm{w}\times\bm{v}_i$. These eigenframe equations have been studied in some detail, e.g. \cite{vieillefosse84,dresselhaus92}.
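As a sanity check on the eigenframe variables, the invariants are frame independent: $\mathrm{Tr}(\bm{A})=\sum_i\lambda_i=0$, $\omega^2=\widetilde{\omega}_i\widetilde{\omega}_i$, and (a standard identity for incompressible flow, stated here as an assumption of the sketch) $\bm{A:A}=\sum_i\lambda_i^2-\omega^2/2$. A minimal numerical check with a random trace-free gradient:

```python
import numpy as np

rng = np.random.default_rng(5)

A = rng.standard_normal((3, 3))
A -= np.trace(A) / 3.0 * np.eye(3)               # trace-free velocity gradient
S, R = 0.5 * (A + A.T), 0.5 * (A - A.T)

eps = np.zeros((3, 3, 3))                        # Levi-Civita symbol
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
w = -np.einsum('ijk,ij->k', eps, R)              # vorticity, from R_ij = -(1/2) eps_ijk w_k

lam, V = np.linalg.eigh(S)                       # strain-rate eigenvalues/eigenvectors
wt = V.T @ w                                     # vorticity components in the eigenframe

assert np.isclose(lam.sum(), 0.0, atol=1e-12)                   # Tr(A) = 0
assert np.isclose(wt @ wt, w @ w)                               # w^2 = wt_i wt_i
assert np.isclose(np.einsum('ij,ji->', A, A),
                  (lam ** 2).sum() - 0.5 * (w @ w))             # A:A = sum(lam^2) - w^2/2
```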
\subsection{A new symmetry of the eigenframe equations}
The eigenframe equations are invariant under $\widetilde{\omega}_i \to-\widetilde{\omega}_i$, associated with the fact that the eigenvectors are only defined up to an arbitrary sign. The inviscid equations are also invariant under time reversal $t\rightarrow -t$. However, we now show that the equations also possess another kind of symmetry that does not appear to have been previously recognized.
We first define the transformation
\begin{eqnarray}
\bm{W} \to \bm{W} + \gamma\bm{R},
\end{eqnarray}
that corresponds to adding to the rotation-rate of the strain-rate eigenframe an additional rotation-rate about the vorticity axis, with $\gamma(\bm{x},t)$ a non-dimensional scalar field. If we introduce the transformation $\bm{W} \to \bm{W} + \gamma\bm{R}$ into the eigenframe equations, the equation governing $\lambda_i$ is unaffected since it does not contain $\bm{W}$, and the vorticity equation becomes
\begin{eqnarray}
\frac{D}{Dt}{\widetilde{\omega}_i} &=& \lambda_{(i)}\widetilde{\omega}_i - \left[\widetilde{W}_{ij}+\gamma \widetilde{R}_{ij}\right]\widetilde{\omega}_j + \widetilde{ \nu\nabla^2\omega_i}= \lambda_{(i)}\widetilde{\omega}_i - \widetilde{W}_{ij}\widetilde{\omega}_j + \widetilde{ \nu\nabla^2\omega_i},\label{eq_omega_mod}
\end{eqnarray}
since by definition, $\bm{R\cdot\omega}=\bm{0}$. Therefore, $\widetilde{\omega}_i$ is also unaffected by the transformation. On the other hand, the off-diagonal algebraic equation becomes
\begin{eqnarray}
\widetilde{W}_{ij}\left(\lambda_{(j)}-\lambda_{(i)}\right) &=& -\frac{1}{4} \widetilde{\omega}_i\widetilde{\omega}_j - \widetilde{H}_{ij} -\gamma \widetilde{R}_{ij}\left(\lambda_{(j)}-\lambda_{(i)}\right) + \widetilde{\nu\nabla^2{S}_{ij}}, \; j\ne i.\label{eq_alg_gauge}
\end{eqnarray}
This equation is affected by the transformation $\bm{W} \to \bm{W} + \gamma\bm{R}$. However, while the transformation changes the rotation-rate of the eigenframe, it affects neither $\lambda_i$ nor $\widetilde{\omega}_i$, since $\bm{W}$ enters the equation governing $\widetilde{\omega}_i$ only through the contraction $\widetilde{W}_{ij}\widetilde{\omega}_j$. Therefore, the transformation $\bm{W} \to \bm{W} + \gamma\bm{R}$ corresponds to a symmetry of the eigenframe dynamics.
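The two facts used above, namely that $\bm{R\cdot\omega}=\bm{0}$ and that $\bm{W}$ therefore enters the vorticity equation unchanged under $\bm{W} \to \bm{W} + \gamma\bm{R}$, can be verified with a few lines of linear algebra on a synthetic velocity gradient:

```python
import numpy as np

rng = np.random.default_rng(2)

A = rng.standard_normal((3, 3))
A -= np.trace(A) / 3.0 * np.eye(3)           # trace-free velocity gradient
S = 0.5 * (A + A.T)                          # strain-rate tensor
R = 0.5 * (A - A.T)                          # rotation-rate tensor

eps = np.zeros((3, 3, 3))                    # Levi-Civita symbol
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
w = -np.einsum('ijk,ij->k', eps, R)          # vorticity, from R_ij = -(1/2) eps_ijk w_k

# R annihilates its own axial vector, so the gauge term drops out of W . w
assert np.allclose(R @ w, 0.0, atol=1e-12)

W = rng.standard_normal((3, 3))
W = 0.5 * (W - W.T)                          # an arbitrary (antisymmetric) frame rotation-rate
gamma = rng.standard_normal()
assert np.allclose((W + gamma * R) @ w, W @ w)
```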
\subsection{Gauge for the anisotropic pressure Hessian}
The anisotropic/non-local pressure Hessian may be defined as
\begin{align}
\bm{\mathcal{H}}\equiv\bm{H}-\frac{1}{3}\mathbf{I}\mathrm{Tr}[\bm{H}]=\bm{H}+\frac{1}{3}\mathbf{I}(\bm{A : A}),
\end{align}
where $\mathbf{I}$ is the 3-dimensional identity matrix. This anisotropic pressure Hessian satisfies $\mathrm{Tr}(\bm{\mathcal{H}})=0$ and contains all of the non-local part of $\bm{H}$. It is also important to note that its non-local dependence on the flow field is only through the invariant $\bm{A : A}$.
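These definitions can be illustrated with a small spectral calculation. For a synthetic band-limited solenoidal field on a periodic box (an illustrative stand-in for DNS data, with grid size and spectrum chosen arbitrarily), the identity $\bm{\nabla\cdot}\big((\bm{u}\bm{\cdot}\bm{\nabla})\bm{u}\big)=\bm{A:A}$ yields the pressure Poisson equation $\nabla^2 P=-\bm{A:A}$, and the resulting $\bm{\mathcal{H}}$ is trace-free pointwise:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
k1 = np.fft.fftfreq(n, d=1.0 / n)                    # integer wavenumbers on a 2*pi periodic box
kx, ky, kz = np.meshgrid(k1, k1, k1, indexing='ij')
k = np.stack([kx, ky, kz])
k2 = (k ** 2).sum(0)
k2[0, 0, 0] = 1.0                                    # avoid 0/0 at the mean mode

# random solenoidal velocity field, band-limited so that quadratic products are alias-free
mask = (np.abs(kx) <= 3) & (np.abs(ky) <= 3) & (np.abs(kz) <= 3)
a_hat = np.stack([np.fft.fftn(rng.standard_normal((n, n, n))) * mask for _ in range(3)])
u_hat = a_hat - k * (k * a_hat).sum(0) / k2          # project onto divergence-free modes
u = np.real(np.fft.ifftn(u_hat, axes=(1, 2, 3)))

ddx = lambda f_hat, j: np.real(np.fft.ifftn(1j * k[j] * f_hat))
A = np.stack([[ddx(u_hat[i], j) for j in range(3)] for i in range(3)])  # A_ij = du_i/dx_j

AA = np.einsum('ij...,ji...->...', A, A)             # the invariant A : A

# div((u . grad)u) = A : A for an incompressible field, so lap(P) = -A : A
adv = np.einsum('j...,ij...->i...', u, A)            # (u . grad) u
adv_hat = np.fft.fftn(adv, axes=(1, 2, 3))
div_adv = np.real(np.fft.ifftn((1j * k * adv_hat).sum(0)))
assert np.allclose(div_adv, AA, atol=1e-8)

# pressure Hessian and its trace-free anisotropic part
P_hat = np.fft.fftn(AA) / k2                         # lap(P) = -A : A  =>  -k^2 P_hat = -AA_hat
H = np.stack([[np.real(np.fft.ifftn(-k[i] * k[j] * P_hat)) for j in range(3)] for i in range(3)])
calH = H + np.eye(3)[:, :, None, None, None] * AA / 3.0
assert np.allclose(calH[0, 0] + calH[1, 1] + calH[2, 2], 0.0, atol=1e-8)
```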
The invariance of the eigenframe dynamics under the transformation $\bm{W} \to \bm{W} + \gamma\bm{R}$ may be interpreted as a gauge symmetry for $\bm{\mathcal{H}}$. That is, the term $\gamma \widetilde{R}_{ij}\left(\lambda_{(j)}-\lambda_{(i)}\right)$ may be added to $\widetilde{\mathcal{H}}_{ij}$ without changing the eigenframe dynamics described through $\lambda_i$ and $\widetilde{\omega}_i$.
In coordinate free notation, the gauge term may be expressed as $\gamma[\bm{R},\bm{S}]$, where $[\bm{R},\bm{S}]\equiv\bm{R\cdot S}-\bm{S\cdot R}$ is the commutator of $\bm{R}$ and $\bm{S}$. Then, the gauge symmetry is reflected through the fact that the eigenframe dynamics are identical when $\bm{\mathcal{H}}$ is replaced with
\begin{equation}
\bm{\mathcal{H}}_\gamma = \bm{\mathcal{H}}+\gamma[\bm{R},\bm{S}].
\end{equation}
In order for $\bm{\mathcal{H}}_\gamma$ to be physically admissible, however, certain conditions must be placed on $\gamma$. In particular, while $\bm{\mathcal{H}}$ and $\bm{\mathcal{H}}_\gamma$ lead to identical eigenframe dynamics, in order to preserve the properties of the dynamical equations we require both that $\gamma\in\mathbb{R}$ and that $\gamma$ is finite. The former condition is required since $\bm{u}\in\mathbb{R}^d, P\in\mathbb{R}$, while the latter condition follows from the assumption that the solutions to the governing equations are regular.
\subsection{Using the gauge to reduce the dimensionality}
While any finite and real $\gamma$ provides a suitable $\bm{\mathcal{H}}_\gamma$, there may exist certain choices of $\gamma$ that generate a $\bm{\mathcal{H}}_\gamma$ that lives on a lower dimensional manifold (in the sense that some of its eigenvalues are zero). If such configurations exist and are common, this could significantly aid the understanding and modelling of the role of the anisotropic pressure Hessian in the turbulence dynamics. To seek such lower dimensional configurations, we may perform a rank-reduction on $\bm{\mathcal{H}}_\gamma$. We note that $\mathrm{rk}(\bm{\mathcal{H}}_\gamma)=1$ is not possible since $\mathrm{Tr}(\bm{\mathcal{H}}_\gamma)=0$, and therefore $\mathrm{rk}(\bm{\mathcal{H}}_\gamma)=2$ is the lowest rank possible. (For the same reason, no analogous rank reduction is available in incompressible two-dimensional turbulence, where a non-zero trace-free $\bm{\mathcal{H}}_\gamma$ necessarily has full rank.)
In seeking lower dimensional representations, when $\bm{\mathcal{H}}$ is singular we take $\bm{\mathcal{H}}_\gamma=\bm{\mathcal{H}}$, corresponding to the choice $\gamma=0$, since then the gauge term is not needed as $\bm{\mathcal{H}}$ already lives on a lower dimensional manifold. On the other hand, when $\bm{\mathcal{H}}$ is not singular we seek a non-zero vector $\bm{z}_2$ such that $\bm{\mathcal{H}}_\gamma\bm{\cdot z}_2=\bm{0}$, where $\bm{z}_2$ corresponds to the eigenvector of $\bm{\mathcal{H}}_\gamma$ associated with the intermediate eigenvalue, whose value would be zero. This is a generalized eigenvalue problem, with non-trivial solutions existing when
\begin{align}
\textrm{det}\Big(\bm{\mathcal{H}}_\gamma\Big)=\textrm{det}\Big(\bm{I} + \gamma\bm{\mathcal{H}}^{-1}\left[\bm{R},\bm{S}\right]\Big) = 0.
\end{align}
If there exist finite and real values for $\gamma$ that solve this equation, then those values of $\gamma$ generate a 2-dimensional $\bm{\mathcal{H}}_\gamma$. Defining $\bm{\mathcal{E}}\equiv\bm{\mathcal{H}}^{-1}\left[\bm{R},\bm{S}\right]$, we obtain a characteristic equation governing $\psi\equiv 1/\gamma$
\begin{align}
\psi^3 -c\psi^2-b\psi - a= 0,\label{psi_eq}
\end{align}
where $a,b,c\in \mathbb{R}$ are defined as
\begin{align}
a&\equiv \textrm{det}(\bm{\mathcal{E}}),\\
b&\equiv \frac{1}{2}\Big(\bm{\mathcal{E}:\mathcal{E}}-\mathrm{Tr}(\bm{\mathcal{E}})\mathrm{Tr}(\bm{\mathcal{E}})\Big),\\
c&\equiv \mathrm{Tr}(\bm{\mathcal{E}}).
\label{abc}
\end{align}
The properties of the roots of \eqref{psi_eq} are determined by the discriminant of the polynomial
\begin{align}
\mu\equiv b^2 c^2+4b^3-27a^2-18abc.
\label{disc}
\end{align}
When $\mu=0$, all of the roots of \eqref{psi_eq} are real and at least two are equal, when $\mu>0$ there are three distinct real roots, and when $\mu<0$ there is one real root and two complex conjugate roots. In every case, there is at least one real root since $a,b,c\in \mathbb{R}$. Moreover, since the product of the roots of \eqref{psi_eq} equals $a$, no root vanishes when $a\neq 0$, and so a real, finite root for $\gamma\equiv 1/\psi$ then always exists. When $a=0$, a real finite root for $\gamma$ exists if $\mu>0$, and may or may not exist if $\mu=0$.
This shows that points where $\bm{\mathcal{H}}_\gamma$ is 3D may only occur when $a= 0$, and it is interesting to note that
\begin{align}
a\propto\widetilde{\omega}_1(\lambda_2-\lambda_3)\widetilde{\omega}_2(\lambda_3-\lambda_1)\widetilde{\omega}_3(\lambda_1-\lambda_2).
\end{align}
Therefore, when either one or more of the vorticity components in the eigenframe is zero, and/or the straining field in the eigenframe is axisymmetric, $\bm{\mathcal{H}}_\gamma$ may be 3D.
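The proportionality of $a$ can be traced through $a=\mathrm{det}(\bm{\mathcal{E}})=\mathrm{det}[\bm{R},\bm{S}]/\mathrm{det}(\bm{\mathcal{H}})$: in the strain-rate eigenframe $[\bm{R},\bm{S}]$ is symmetric with zero diagonal, and its determinant evaluates to $\tfrac{1}{4}\widetilde{\omega}_1\widetilde{\omega}_2\widetilde{\omega}_3(\lambda_2-\lambda_3)(\lambda_3-\lambda_1)(\lambda_1-\lambda_2)$ (the prefactor $1/4$ is our own calculation, not stated in the text). A minimal numerical check, taking care that the pseudovector components are evaluated in a proper-rotation frame:

```python
import numpy as np

rng = np.random.default_rng(4)

S = rng.standard_normal((3, 3))
S = 0.5 * (S + S.T)
S -= np.trace(S) / 3.0 * np.eye(3)               # trace-free strain-rate tensor
w1, w2, w3 = w = rng.standard_normal(3)          # vorticity vector
R = 0.5 * np.array([[0.0, -w3, w2],
                    [w3, 0.0, -w1],
                    [-w2, w1, 0.0]])             # R_ij = -(1/2) eps_ijk w_k

lam, V = np.linalg.eigh(S)                       # numpy orders eigenvalues ascending;
if np.linalg.det(V) < 0:                         # the identity is label-consistent either way
    V[:, 0] *= -1.0                              # keep a proper rotation (w is a pseudovector)
wt = V.T @ w                                     # eigenframe vorticity components
l1, l2, l3 = lam

lhs = np.linalg.det(R @ S - S @ R)
rhs = 0.25 * wt[0] * wt[1] * wt[2] * (l2 - l3) * (l3 - l1) * (l1 - l2)
assert np.isclose(lhs, rhs)
```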
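The rank reduction itself is easily illustrated numerically. Rather than forming the cubic \eqref{psi_eq} explicitly, the sketch below (synthetic random tensors in place of DNS fields) reads the admissible gauges off the spectrum of $\bm{\mathcal{E}}$, using $\mathrm{det}(\bm{\mathcal{H}}_\gamma)=\mathrm{det}(\bm{\mathcal{H}})\,\mathrm{det}(\mathbf{I}+\gamma\bm{\mathcal{E}})$, which vanishes precisely when $-1/\gamma$ is an eigenvalue of $\bm{\mathcal{E}}$ (equivalent to \eqref{psi_eq} up to the sign convention relating $\psi$ and $\gamma$):

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic velocity gradient and a generic trace-free symmetric anisotropic Hessian
A = rng.standard_normal((3, 3))
A -= np.trace(A) / 3.0 * np.eye(3)
S, R = 0.5 * (A + A.T), 0.5 * (A - A.T)
H = rng.standard_normal((3, 3))
H = 0.5 * (H + H.T)
H -= np.trace(H) / 3.0 * np.eye(3)

C = R @ S - S @ R                           # gauge direction [R, S]: symmetric, trace-free
E = np.linalg.solve(H, C)                   # E = H^{-1} [R, S]

lam = np.linalg.eigvals(E)                  # a real 3x3 matrix has at least one real eigenvalue
real = lam.real[np.abs(lam.imag) < 1e-12]
mu = real[np.argmax(np.abs(real))]          # real eigenvalue of largest magnitude
gamma = -1.0 / mu                           # makes I + gamma*E singular

Hg = H + gamma * C                          # rank-reduced anisotropic Hessian
ev = np.sort(np.linalg.eigvalsh(Hg))
assert abs(ev[1]) < 1e-8 * abs(ev[2])       # intermediate eigenvalue vanishes
assert np.isclose(ev[0], -ev[2])            # remaining pair is (+psi, -psi)
```

The resulting $\bm{\mathcal{H}}_\gamma$ is symmetric and trace-free with a vanishing intermediate eigenvalue, i.e. eigenvalues $(\psi,0,-\psi)$, the structure reported for the DNS below.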
\section{Numerical Results}
We now turn to assess the amenability of $\bm{\mathcal{H}}_\gamma$ to rank-reduction, and to consider the statistical geometry of $\bm{\mathcal{H}}_\gamma$. We do this using data from a Direct Numerical Simulation (DNS). Our DNS data corresponds to that in \citet{ireland16a,ireland16b}, and we therefore refer the reader to those papers for the details of the DNS. Here we give a brief summary. The DNS uses a pseudo-spectral method to solve the incompressible NSE on a three-dimensional, triperiodic cube of length $\mathcal{L}$ with $N^3$ grid points. A deterministic forcing method is used that preserves the kinetic energy in the flow, and produces a statistically stationary, isotropic turbulent flow. A detailed description of the numerical methods used can be found in \citet{ireland13}. The data we use corresponds to $N=1024$ and the Taylor Reynolds number of the flow is $R_\lambda=398$.
We first consider the properties of $\gamma$ as determined by the numerical solution of \eqref{psi_eq} with $\gamma\equiv 1/\psi$. We note that the numerical solution of \eqref{psi_eq} is ill-conditioned when $\mathrm{det}[\bm{R},\bm{S}]$ is very small. Therefore, we set $\mathrm{det}[\bm{R},\bm{S}]$ to zero whenever its magnitude falls below a suitably small threshold.
\begin{figure}
\begin{overpic}[width=.49\textwidth]{{count_N_gamma}.pdf}
\put(52,120) {$\approx40\%$}
\put(137,120) {$\approx60\%$}
\put(60,10) {Root multiplicity of {\Large$\gamma$}}
\put(5,75){\rotatebox{90}{Probability}}
\end{overpic}
\hfill
\begin{overpic}[width=.49\textwidth]{{pdf_gamma}.pdf}
\put(105,10) {{\Large$\gamma$}}
\put(5,85){\rotatebox{90}{PDF}}
\end{overpic}
\caption{(a) Probability of root multiplicity of $\gamma$, (b) PDF of $\gamma$.}
\label{Gamma}
\end{figure}
Figure~\ref{Gamma}(a) shows the probability of the multiplicity of real, finite values for $\gamma$ obtained when solving \eqref{psi_eq}, where the statistics are constructed by averaging the flow over space and time. The results show that the most common case is that there is a single root for $\gamma$ with multiplicity three, while the next most common case is where the roots are all distinct. Figure~\ref{Gamma}(b) shows that the values of $\gamma$ for which $\bm{\mathcal{H}}_\gamma$ is 2D possess a PDF that is highly non-Gaussian.
\begin{figure}
\begin{overpic}[width=.49\textwidth]{{pdf_phi}.pdf}
\put(133,143) {$\varphi_1$}
\put(133,130){$\varphi_2$}
\put(133,116) {$\varphi_3$}
\put(70,7) {Eigenvalues of $\bm{\mathcal{H}}$}
\put(5,85){\rotatebox{90}{PDF}}
\end{overpic}
\hfill
\begin{overpic}[width=.49\textwidth]{{pdf_ephi}.pdf}
\put(131,143) {$\psi_1$}
\put(131,130){$\psi_2$}
\put(131,116) {$\psi_3$}
\put(70,7) {Eigenvalues of $\bm{\mathcal{H}}_\gamma$}
\put(5,85){\rotatebox{90}{PDF}}
\end{overpic}
\caption{Eigenvalues of (a) $\bm{\mathcal{H}}$, and (b) $\bm{\mathcal{H}}_\gamma$ normalized with the Kolmogorov timescale $\tau_\eta$.}
\label{Hess_eigen}
\end{figure}
Figure~\ref{Hess_eigen} shows that whereas $\bm{\mathcal{H}}$ is in general a fully 3D object with three active eigenvalues $\varphi_1,\varphi_2,\varphi_3$, $\bm{\mathcal{H}}_\gamma$ is a 2D object with only two active eigenvalues that satisfy $\psi=\psi_1=-\psi_3$, the intermediate eigenvalue being $\psi_2=0$. The eigenvalue $\psi$ exhibits extremely large fluctuations, the explanation for which we shall return to later. Therefore, the geometrical simplification obtained by replacing the 3D $\bm{\mathcal{H}}$ with the 2D $\bm{\mathcal{H}}_\gamma$ also comes with the cost that the eigenvalues of $\bm{\mathcal{H}}_\gamma$ are far more intermittent than those of $\bm{\mathcal{H}}$.
We now turn to consider the statistical geometry of the system. In Figure~\ref{Hess_eigen_align} we consider the alignment of the eigenframe of $\bm{\mathcal{H}}$, described by its eigenvectors $\bm{y}_i$, with the strain-rate eigenframe, described by the eigenvectors $\bm{v}_i$ (here and throughout, the eigenvectors are unitary, and the eigenvalues are ordered). The alignments do not reveal any strong preferences, only weak tendencies toward $\bm{y}_1\bm{\cdot}\bm{v}_1\approx 0.7$ and $\bm{y}_{1,3}\bm{\cdot}\bm{v}_3\approx 0.7$. These weak alignments make it difficult to model $\bm{\mathcal{H}}$ in any simple way in terms of the eigenframe of the strain-rate tensor.
\begin{figure}
\begin{overpic}[width=.46\textwidth]{{pdf_w1_orient_S}.pdf}
\put(65,134) {$\bm{y}_1\bm{\cdot}\bm{v}_1$}
\put(65,121){$\bm{y}_1\bm{\cdot}\bm{v}_2$}
\put(65,109) {$\bm{y}_1\bm{\cdot}\bm{v}_3$}
\put(93,10) {$\cos\theta$}
\put(6,80){\rotatebox{90}{PDF}}
\end{overpic}
\begin{overpic}[width=.46\textwidth]{{pdf_w2_orient_S}.pdf}
\put(65,134) {$\bm{y}_2\bm{\cdot}\bm{v}_1$}
\put(65,121) {$\bm{y}_2\bm{\cdot}\bm{v}_2$}
\put(65,109) {$\bm{y}_2\bm{\cdot}\bm{v}_3$}
\put(93,10) {$\cos\theta$}
\put(6,80){\rotatebox{90}{PDF}}
\end{overpic}
\centering
\begin{overpic}[width=.46\textwidth]{{pdf_w3_orient_S}.pdf}
\put(65,134) {$\bm{y}_3\bm{\cdot}\bm{v}_1$}
\put(65,121) {$\bm{y}_3\bm{\cdot}\bm{v}_2$}
\put(65,109) {$\bm{y}_3\bm{\cdot}\bm{v}_3$}
\put(93,10) {$\cos\theta$}
\put(6,80){\rotatebox{90}{PDF}}
\end{overpic}
\caption{PDF of the inner product between the eigenvectors of the strain tensor $\bm{v}_i$ and the eigenvectors of $\bm{\mathcal{H}}$, $\bm{y}_i$.}
\label{Hess_eigen_align}
\end{figure}
In Figure~\ref{Hess_gamma_eigen} we consider the alignments between the eigenframe of $\bm{\mathcal{H}}_\gamma$, described by its eigenvectors $\bm{z}_i$, with the strain-rate eigenframe. The results show, in striking contrast to those of Figure~\ref{Hess_eigen_align}, that the eigenframe of $\bm{\mathcal{H}}_\gamma$ exhibits remarkable alignment properties with $\bm{z}_{1,3}\bm{\cdot}\bm{v}_{1,3}\approx 0.7$, $\bm{z}_{2}\bm{\cdot}\bm{v}_2\approx 1$, $\bm{z}_{2}\bm{\cdot}\bm{v}_3\approx 0$. This means that the 2D manifold on which $\bm{\mathcal{H}}_\gamma$ lives tends to be orthogonal to $\bm{v}_2$ and inclined at an angle of $\approx 45^\circ$ relative to both $\bm{v}_1$ and $\bm{v}_3$.
\begin{figure}
\begin{overpic}[width=.46\textwidth]{{pdf_ew1_orient_S}.pdf}
\put(65,134) {$\bm{z}_1\bm{\cdot}\bm{v}_1$}
\put(65,121){$\bm{z}_1\bm{\cdot}\bm{v}_2$}
\put(65,109) {$\bm{z}_1\bm{\cdot}\bm{v}_3$}
\put(93,10) {$\cos\theta$}
\put(6,80){\rotatebox{90}{PDF}}
\end{overpic}
\hfill
\begin{overpic}[width=.46\textwidth]{{pdf_ew2_orient_S}.pdf}
\put(65,134) {$\bm{z}_2\bm{\cdot}\bm{v}_1$}
\put(65,121) {$\bm{z}_2\bm{\cdot}\bm{v}_2$}
\put(65,109) {$\bm{z}_2\bm{\cdot}\bm{v}_3$}
\put(93,10) {$\cos\theta$}
\put(6,80){\rotatebox{90}{PDF}}
\end{overpic}
\centering
\begin{overpic}[width=.46\textwidth]{{pdf_ew3_orient_S}.pdf}
\put(65,134) {$\bm{z}_3\bm{\cdot}\bm{v}_1$}
\put(65,121) {$\bm{z}_3\bm{\cdot}\bm{v}_2$}
\put(65,109) {$\bm{z}_3\bm{\cdot}\bm{v}_3$}
\put(93,10) {$\cos\theta$}
\put(6,80){\rotatebox{90}{PDF}}
\end{overpic}
\caption{PDF of the inner product between the eigenvectors of the strain tensor $\bm{v}_i$ and the eigenvectors of $\bm{\mathcal{H}}_\gamma$, $\bm{z}_i$.}
\label{Hess_gamma_eigen}
\end{figure}
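The alignment PDFs shown in Figures~\ref{Hess_eigen_align} and \ref{Hess_gamma_eigen} are obtained by diagonalizing the two tensors at each sample point and histogramming $|\cos\theta|$ between the chosen eigenvectors. A minimal sketch (NumPy assumed; demonstrated here on synthetic independent random symmetric tensors, for which the alignment PDF is flat, rather than on DNS data):

```python
import numpy as np

def alignment_pdf(A, B, i, j, bins=50):
    """PDF of |cos(theta)| between the i-th eigenvector of each tensor in A
    and the j-th eigenvector of the corresponding tensor in B.
    A, B: (N, 3, 3) arrays of symmetric tensors; eigenvalues ordered
    descending, matching the ordering convention in the text."""
    # np.linalg.eigh returns ascending eigenvalues; flip to descending
    _, va = np.linalg.eigh(A)
    _, vb = np.linalg.eigh(B)
    va, vb = va[:, :, ::-1], vb[:, :, ::-1]
    cos = np.abs(np.einsum('nk,nk->n', va[:, :, i], vb[:, :, j]))
    hist, edges = np.histogram(cos, bins=bins, range=(0.0, 1.0), density=True)
    return hist, 0.5 * (edges[:-1] + edges[1:])

# Synthetic demonstration: independent random symmetric tensors give
# isotropically distributed eigenframes, hence a flat PDF of |cos(theta)|.
rng = np.random.default_rng(1)
M = rng.standard_normal((10000, 3, 3))
S = 0.5 * (M + np.swapaxes(M, 1, 2))
M2 = rng.standard_normal((10000, 3, 3))
H = 0.5 * (M2 + np.swapaxes(M2, 1, 2))
pdf, centers = alignment_pdf(S, H, 0, 0)
```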
In Figure~\ref{Vort_eigen} we consider the alignments of the eigenframes of $\bm{S}$, $\bm{\mathcal{H}}$, and $\bm{\mathcal{H}}_\gamma$ with the vorticity $\bm{\omega}$. In agreement with previous results, we find a strong alignment between $\bm{v}_2$ and $\bm{\omega}$ \citep{meneveau11}, and a moderate tendency for alignment between $\bm{y}_{1,2}$ and $\bm{\omega}$ \citep{chevillard08}. The results also show a striking alignment between $\bm{z}_2$ and $\bm{\omega}$, consistent with the alignment between $\bm{z}_2$ and $\bm{v}_2$, and between $\bm{v}_2$ and $\bm{\omega}$.
It is interesting that the gauge term used in defining $\bm{\mathcal{H}}_\gamma$ arises from a rotation of the strain-rate eigenframe about $\bm{\omega}$, while the statistics show that $\bm{\mathcal{H}}_\gamma$ lives on a 2D manifold that has a strong, but imperfect tendency to be orthogonal to $\bm{\omega}$. While we do not currently fully understand the implications or significance of this, one point that can be made is the following. In the equation governing $\widetilde{\omega}_i$, the production of $\widetilde{\omega}_i$ associated with the rotation of the eigenframe is $\widetilde{W}_{ij}\widetilde{\omega}_j$, and with the gauge term we have
\begin{eqnarray}
\widetilde{W}_{ij}\widetilde{\omega}_j \left(\lambda_{(j)}-\lambda_{(i)}\right) &=& -\frac{1}{4} \widetilde{\omega}_i\widetilde{\omega}_j\widetilde{\omega}_j - \widetilde{\mathcal{H}}_{\gamma,ij}\widetilde{\omega}_j + \widetilde{\nu\nabla^2{S}_{ij}}\widetilde{\omega}_j , \; j \ne i.
\end{eqnarray}
The dynamical significance of the slight misalignment is therefore that it allows the anisotropic pressure Hessian to contribute to the eigenframe dynamics by making a finite contribution to $\widetilde{W}_{ij}\widetilde{\omega}_j$. Indeed, if $\bm{\mathcal{H}}_\gamma$ were exactly orthogonal to $\bm{\omega}$, then the anisotropic pressure Hessian would make no contribution to $\lambda_i$ or $\widetilde{\omega}_i$, and its only role would be to contribute to the rotation of the strain-rate eigenframe. It is known that in the inviscid case, the neglect of the anisotropic pressure Hessian in the eigenframe dynamics leads to a finite time singularity \citep{vieillefosse84}. Therefore, assuming that the slight misalignment between $\bm{\mathcal{H}}_\gamma$ and $\bm{\omega}$ is not solely due to viscous effects, this misalignment must also play a role in regularizing the eigenframe dynamics, potentially preventing singularities in the Euler system.
The point above also explains why the eigenvalues of $\bm{\mathcal{H}}_\gamma$ were found to be much larger than those of $\bm{\mathcal{H}}$ in Figure~\ref{Hess_eigen}. In particular, since under the gauge symmetry we have $\bm{\mathcal{H}}_\gamma\bm{\cdot\omega}=\bm{\mathcal{H}}\bm{\cdot\omega}$, then given that $\bm{\mathcal{H}}_\gamma$ is close to being orthogonal to $\bm{\omega}$ whereas $\bm{\mathcal{H}}$ is not, it is required that the eigenvalues of $\bm{\mathcal{H}}_\gamma$ are much larger than those of $\bm{\mathcal{H}}$.
Two questions remain open: how $\bm{\mathcal{H}}_\gamma\bm{\cdot\omega}$ depends upon $\textrm{det}\left[\bm{R},\bm{S}\right]$, and whether the large fluctuations of $\widetilde{\omega}_j$ are associated with $\bm{\mathcal{H}}_\gamma$ being 3D.
\begin{figure}
\begin{overpic}[width=.46\textwidth]{{pdf_omg_orient_S}.pdf}
\put(108,133) {$\bm{\hat{\omega}}\bm{\cdot}\bm{v}_1$}
\put(108,120){$\bm{\hat{\omega}}\bm{\cdot}\bm{v}_2$}
\put(108,108) {$\bm{\hat{\omega}}\bm{\cdot}\bm{v}_3$}
\put(93,10) {$\cos\theta$}
\put(6,80){\rotatebox{90}{PDF}}
\end{overpic}
\hfill
\begin{overpic}[width=.46\textwidth]{{pdf_omg_orient_H}.pdf}
\put(108,133) {$\bm{\hat{\omega}}\bm{\cdot}\bm{y}_1$}
\put(108,120) {$\bm{\hat{\omega}}\bm{\cdot}\bm{y}_2$}
\put(108,108) {$\bm{\hat{\omega}}\bm{\cdot}\bm{y}_3$}
\put(93,10) {$\cos\theta$}
\put(6,80){\rotatebox{90}{PDF}}
\end{overpic}
\centering
\begin{overpic}[width=.46\textwidth]{{pdf_omg_orient_eH}.pdf}
\put(106,133) {$\bm{\hat{\omega}}\bm{\cdot}\bm{z}_1$}
\put(106,120) {$\bm{\hat{\omega}}\bm{\cdot}\bm{z}_2$}
\put(106,108) {$\bm{\hat{\omega}}\bm{\cdot}\bm{z}_3$}
\put(93,10) {$\cos\theta$}
\put(6,80){\rotatebox{90}{PDF}}
\end{overpic}
\caption{PDF of the inner product between the normalized vorticity $\hat{\bm{\omega}}\equiv\bm{\omega}/\|\bm{\omega}\|$ and (a) $\bm{v}_i$, (b) $\bm{y}_i$, and (c) $\bm{z}_i$.}\label{Vort_eigen}
\end{figure}
Finally, as mentioned earlier, the only points in the flow where $\bm{\mathcal{H}}_\gamma$ may be 3D rather than 2D are points where $\textrm{det}\left[\bm{R},\bm{S}\right]= 0$. The results in Figure~\ref{Cond_detC} show that regions of the flow where $\textrm{det}\left[\bm{R},\bm{S}\right]\approx 0$ correspond to regions where both the strain and vorticity are relatively weak, and that the limit $\textrm{det}\left[\bm{R},\bm{S}\right]\to0$ corresponds to strain-dominated regions of the flow where $Q\equiv\|\bm{\omega}\|^2/4-\|\bm{S}\|^2/2<0$. (This raises the question of whether the tendency for $\bm{\mathcal{H}}_\gamma$ to be 2D is part of the reason the strain and vorticity are so intermittent; in other words, if it were 3D, would the eigenframe rotation contribution to the vorticity equation suppress large vorticity?) Moreover, the PDF of $\textrm{det}\left[\bm{R},\bm{S}\right]$ exhibits a sharp peak around $0$, showing that regions where $\textrm{det}\left[\bm{R},\bm{S}\right]$ is small are the most likely. Recalling that
\begin{align}
\textrm{det}\left[\bm{R},\bm{S}\right]\propto\widetilde{\omega}_1(\lambda_2-\lambda_3)\widetilde{\omega}_2(\lambda_3-\lambda_1)\widetilde{\omega}_3(\lambda_1-\lambda_2),
\end{align}
the tendency for $\textrm{det} \left[\bm{R},\bm{S}\right ]$ to be small may be understood in part in terms of the strong tendency for $\bm{\omega}$ to misalign with $\bm{v}_3$, leading to small values for $\widetilde{\omega}_3$.
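This proportionality can be verified directly in the strain eigenframe. Assuming the convention $R_{ij}=-\tfrac{1}{2}\epsilon_{ijk}\widetilde{\omega}_k$ (the constant of proportionality depends on this choice), the commutator $[\bm{R},\bm{S}]$ is symmetric with zero diagonal and one finds $\textrm{det}[\bm{R},\bm{S}]=\tfrac{1}{4}\widetilde{\omega}_1\widetilde{\omega}_2\widetilde{\omega}_3(\lambda_2-\lambda_3)(\lambda_3-\lambda_1)(\lambda_1-\lambda_2)$. A numerical check (NumPy assumed):

```python
import numpy as np

# Verify det[R,S] = (1/4) w1 w2 w3 (l2-l3)(l3-l1)(l1-l2) in the strain
# eigenframe, under the convention R_ij = -(1/2) eps_ijk w_k assumed here.
l1, l2, l3 = 1.0, 0.4, -1.4          # trace-free strain eigenvalues
w1, w2, w3 = 0.3, -1.1, 0.7          # eigenframe vorticity components

S = np.diag([l1, l2, l3])
R = 0.5 * np.array([[0.0, -w3,  w2],
                    [ w3, 0.0, -w1],
                    [-w2,  w1, 0.0]])

comm = R @ S - S @ R                  # symmetric, zero diagonal
lhs = np.linalg.det(comm)
rhs = 0.25 * w1 * w2 * w3 * (l2 - l3) * (l3 - l1) * (l1 - l2)
print(lhs, rhs)                       # equal up to round-off
```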
\begin{figure}
\begin{overpic}[width=.46\textwidth]{{Q_cond_detC}.pdf}
\put(80,10) {$\tau_\eta^{-2}\det[\bm{R},\bm{S}]$}
\put(8,60){\rotatebox{90}{$\avg{Q|\det[\bm{R},\bm{S}]}$}}
\end{overpic}
\hfill
\begin{overpic}[width=.46\textwidth]{{pdf_detC}.pdf}
\put(80,10) {$\tau_\eta^{-2}\det[\bm{R},\bm{S}]$}
\put(6,80){\rotatebox{90}{PDF}}
\end{overpic}
\centering
\vspace{-2pt}
\begin{overpic}[width=.48\textwidth]{{en_cond_detC}.pdf}
\put(90,138) {$\tau_\eta^{-2}\|\bm{S}\|^2$}
\put(90,127){$\tau_\eta^{-2}\|\bm{\omega}\|^2/2$}
\put(80,10) {$\tau_\eta^{-2}\det[\bm{R},\bm{S}]$}
\put(4,60){\rotatebox{90}{$\avg{x^2|\det[\bm{R},\bm{S}]}$}}
\end{overpic}
\caption{Plot of (a) $Q\equiv\|\bm{\omega}\|^2/4-\|\bm{S}\|^2/2$ conditioned on $\det[\bm{R},\bm{S}]$, (b) PDF of $\det[\bm{R},\bm{S}]$, (c) $\|\bm{S}\|^2$ and $\|\bm{\omega}\|^2/2$ conditioned on $\det[\bm{R},\bm{S}]$.}
\label{Cond_detC}
\end{figure}
\section{Conclusions}
In this paper we have presented a new symmetry for the velocity gradient dynamics in the strain-rate eigenframe, which we have interpreted as a gauge symmetry for the anisotropic pressure Hessian. Using this symmetry we have introduced a modified pressure Hessian, $\bm{\mathcal{H}}_\gamma$, that is the sum of the standard pressure Hessian and the gauge term. We then sought lower-dimensional representations of the pressure Hessian by performing a rank reduction on $\bm{\mathcal{H}}_\gamma$. Remarkably, we found that the rank reduction is possible almost everywhere in the flow, and consequently almost everywhere in the flow a 2D $\bm{\mathcal{H}}_\gamma$ may be defined that generates exactly the same eigenframe dynamics as the full 3D pressure Hessian $\bm{\mathcal{H}}$. We also showed that $\bm{\mathcal{H}}_\gamma$ exhibits remarkable alignment properties with respect to the strain-rate eigenframe and vorticity that are not possessed by $\bm{\mathcal{H}}$.
There remain many questions concerning the dynamical interpretation of $\bm{\mathcal{H}}_\gamma$, and its implications for understanding the physics of the velocity gradients in the strain-rate eigenframe. Moreover, the reduction in dimensionality provided by replacing $\bm{\mathcal{H}}$ with $\bm{\mathcal{H}}_\gamma$ in the eigenframe equations, together with the remarkable alignment properties of $\bm{\mathcal{H}}_\gamma$, may offer promising insights into ways in which the anisotropic pressure Hessian and its effects on the eigenframe dynamics can be modelled. We are currently working on these points, which will be the subject of a future paper.
\section*{Acknowledgements}
This work used the Extreme Science and Engineering Discovery Environment (XSEDE), supported by National Science Foundation grant ACI-1548562 \citep{xsede}.
\bibliographystyle{jfm}
\section{Introduction}
Gaussianity plays a central role in current
theories of structure formation \cite{Peebles2}.
Inflationary theories are normally
invoked to justify Gaussianity \cite{inflCMBR} but, historically,
simplicity was perhaps what first motivated this assumption.
As data has started to flood cosmology, however, the problem of
testing Gaussianity has reappeared both in Cosmic Microwave Background
(CMB) analysis \cite{kogut},
and galaxy survey analysis \cite{lascampanas}. A trend in data
analysis has been established which relies on Gaussianity and a
lingering feeling exists that the whole thing might fall through
should the data prove to be non-Gaussian in the first place. Furthermore
structure formation theories exist which in one way or another predict
non-Gaussian primordial fluctuations. Cosmic strings and
textures \cite{VS} provide two such examples. Pinning down
what precise non-Gaussian predictions such theories can make
is a task crying for a comprehensive formalism for quantifying
general non-Gaussianity. Finally even if the ``signal'' is Gaussian
it may happen that a non-Gaussian noise component is present, e.g.
unresolved point sources \cite{catpeople}.
A precise prediction of their observational
properties could then assist in their subtraction from data,
before the final theoretical analysis is performed.
One is therefore left with the problem of how to test Gaussianity,
and how to quantitatively specify the most general non-Gaussian theory.
Several tests for non-Gaussianity have been proposed in the past.
Peak statistics \cite{BE,Jusk}, topological tests \cite{Pcoles,gott},
the 3-point correlation function \cite{kogut,gangui},
skewness and kurtosis \cite{scaramelo,periv}, and temperature and temperature
gradient histograms \cite{coulson} are the most topical examples. In some
cases these tests were only shown to be applicable for rather
artificial non-Gaussian distributions \cite{Pcoles}. In other cases the
tests were applied only to extremely non-Gaussian signals,
or the eroding effects of Gaussian noise were not explored \cite{pst}.
These tests however, are by no means exhaustive.
One can always devise a non-Gaussian theory which evades detection
by any of these tests, even when the hard realities of experiment
do not fully erase signal non-Gaussianity.
The only way to fully ascertain Gaussianity is to apply to data
a comprehensive formalism for encoding non-Gaussianity in its broadest
generality.
The $n$-point correlation function provides such a framework,
and it has long been used in cosmology \cite{Peebles1} and other branches of
physics \cite{randfields}. Computing the $n$-point function for large $n$
is however a practical impossibility. Taking the COBE data as an example
\cite{kogut}, only the 3-point function has been computed, and even in that
case attention was restricted to the pseudo-collapsed
and equilateral slices.
In section 2 we start off by showing how the $n$-point correlation
functions for $n$ up to any $N>3$ contain redundant information.
For Gaussian fields all the $N>2$ correlators can be determined
from the 2-point correlator. We show that even for the most general
non-Gaussian theory information encoded in the
$N>3$ correlator is dependent on information in lower order correlators.
Furthermore we show that one can never be sure that
by truncating the infinite correlator series at some $N$ one has
all the information about the most general non-Gaussian theory.
Strongly non-Gaussian theories may be written down which
have Gaussian moments up to any given order $N$.
The $n$-point function formalism then appears to have two drawbacks:
redundancy and impractical complexity. We shall argue that these two
drawbacks are due to each other, and that they may be eliminated
altogether.
In this paper, in Section~\ref{newstats}, we propose an alternative
formalism for comprehensively encoding non-Gaussianity. In the guise
to be used in this paper the formalism lives naturally
in Fourier space, and we have chosen to highlight non-Gaussianity
other than in the phases.
The idea of looking for non-Gaussianity in Fourier space has been
disfavoured in the past. It is argued that localized non-Gaussianity
in real space
(such as what is produced in cosmic string or textures scenarios)
will be obscured in Fourier space
due to the central limit theorem. It is also often assumed that
a Gaussian field can be accurately modelled as the Fourier transform
of a field whose randomness is solely in the phases. However, as we
argue in section 2, looking in Fourier space allows us to probe
the non-Gaussian nature of the field at specific scales, a fact which
is particularly useful when one can model the field as combination
of a Gaussian field which dominates on certain scales and a
non-Gaussian field which dominates on others.
Another very strong reason for considering Fourier space statistics
seriously is the fact that the highest resolution measurements
of CMB anisotropies will be performed by interferometric devices,
which naturally measure quantities in Fourier space (the ``$uv$ plane'').
Therefore, ignoring prejudice,
in section 3 we define a set of ``non-Gaussian spectra'' in terms of the
Fourier transform of the temperature anisotropies. Our
definitions follow up the proposals in \cite{mag1}, but
they are substantially more practical. We then characterize
the probability distribution function of these spectra in Gaussian theories
and in Appendix II give a physical interpretation of the qualities which
they measure. We set these quantities up so that while
they contain all the informational degrees of freedom, they
do away with any redundancy. As a result we come up with a formalism which
shares with the $n$-point correlators the property of being comprehensive,
but with the advantage that it is computable and non-redundant.
Within the large set of statistics considered in this paper we
concentrate on a set of statistics which only use the information in
the absolute value of the Fourier modes. These are grouped in
two types of spectra: the ring spectrum and the inter-ring spectrum.
For the sake of maximal originality we leave to a future publication the
investigation of the role played by the more prosaic phase information.
In section 4 we consider three different applications. Firstly
we consider the case of a point source which is obscured
by Gaussian fluctuations. Secondly we consider the realistic
temperature anisotropy induced by a cosmic string, including
both the post-recombination Kaiser-Stebbins effect and the
Gaussian fluctuations at the surface of last scattering.
Finally we construct a strongly non-Gaussian theory, a theory
which produces skies which have a zero probability of
occurring in a Gaussian theory. To all these examples we apply
a battery of conventional statistics and show that they
evade any detection of non-Gaussianity. We show however that our
statistics reveal the non-Gaussian nature of the skies.
In section 5 we conclude by discussing the limitations of
these statistics and possible extensions.
\section{The $n$-point correlation function}
We start by reviewing the $n$-point correlation function formalism.
We then introduce the concept of $uv$-plane invariants, that is
quantities which are made up of Fourier modes $a({\bf k})$,
and which are invariant under rotations and translations.
We show how the Fourier transform of the $n$-point correlation
function is made up of $uv$-plane multilinear invariants.
One may then count the number of degrees of freedom in
the Fourier modes for a given sky coverage. By doing so we
show that the $n$-point correlators for $n$ up to a certain
$N$ contains information which can only be redundant.
This will set the tone for the next Section: trying to do
away with the redundancy and complexity of the $n$-point correlation
function.
\subsection{The $n$-point correlation function and its transform}
We consider CMB data in the small angle limit, when projecting onto
a planar patch is suitable. Since data may come in either real or
Fourier space we hope to address the problem of non-Gaussianity
in terms of these two descriptions. In this paper however
we will concentrate on the Fourier space description, and thus
produce statistics better suited to interferometers.
We shall use the convention:
\begin{equation}\label{fourier}
\frac{\Delta T({\bf x})}{T}
={\int {d{\bf k}\over 2\pi}a({\bf k})e^{i{\bf k}\cdot{\bf x}}}
\end{equation}
The $n$-point correlation function is defined as the expectation
value of the product of any $n$ temperatures. Translational and
rotational invariance make redundant the position of one of the points
and the direction of another.
Hence the $n$-point function may be written as a function of
$(x_2, {\bf x}_3,\cdots,{\bf x}_n)$ in the form
\begin{equation}
C^n(x_2, {\bf x}_3,\cdots,{\bf x}_n)={\langle
\frac{\Delta T({\bf x}_1)}{T}...\frac{\Delta T({\bf x}_n)}{T}
\rangle}
\end{equation}
The 2-point correlation function and its Fourier transform, the
angular power spectrum $C(k)$, are well-known.
They fully specify Gaussian fluctuations. For Gaussian fluctuations
non-vanishing higher order correlation functions exist,
but they are redundant as they can be obtained from
the two-point correlation function.
This is not the case in non-Gaussian theories, for which the $n$-point
correlators act not only as a non-Gaussianity indicator,
but are also an indispensable fluctuation qualifier, as much as the
power spectrum.
The angular power spectrum may be generalized for $n>2$ by
Fourier analysing the $n$-point function
\begin{eqnarray}\label{cn1}
&C^n&(x_2, {\bf x}_3,\cdots,{\bf x}_n)= \nonumber \\
&&\int \frac{dk_2}{(2\pi)^{1/2}}
\cdots \frac{d{\bf k}_n}{2\pi}C^n(k_2, \cdots,{\bf k}_n)
{e^{ik_2 x_2} \cdots e^{i{\bf k}_n\cdot{\bf x}_n}}
\end{eqnarray}
In general $C(k)$ is more predictive than $C(x)$, as it tells us how much
power exists on a given scale. In the same way one may expect
the transform $C^n(k_2, \cdots,{\bf k}_n)$ to be more predictive
than its configuration space counterpart, as it
tells us how much non-Gaussianity exists on each scale.
We shall call $C^n(k_2, \cdots,{\bf k}_n)$ a non-Gaussian
spectrum. One may also define Gaussian spectra as correlators
of the $a({\bf k})$ modes:
\begin{equation}\label{cn2}
{\langle a({\bf k}_1)\cdots a({\bf k_n})\rangle}=
\delta({\bf k}_1+\cdots {\bf k_n})C^n(k_2, \cdots,{\bf k}_n)
\end{equation}
where the $\delta$ function and functional form of $C^n$ result
from the requirements of translational and rotational invariance
(see (\ref{rott}) below). Using (\ref{fourier}) one may easily
check that the two definitions (\ref{cn1}) and (\ref{cn2})
of $C^n(k_2, \cdots,{\bf k}_n)$ agree.
Non-Gaussian spectra are more
complicated than power spectra, since they are functions
of many variables. As $n$ increases one is left with the problem
of how to pack so much information. We will however show that most
of the information encoded in $C^n(k_2, \cdots,{\bf k}_n)$ is largely
redundant, even for the most general non-Gaussian fluctuation.
\subsection{$uv$-plane multilinear invariants as components of
the $n$-point correlation function}
Here we show an equivalent route to non-Gaussian spectra.
This route draws on work in \cite{mag1}, where the spherical harmonic
coefficients $a^\ell_m$ are used to define quantities other than
the $C_\ell$ spectrum which are invariant under the 3D rotation group.
$m$-spectra and inter-$\ell$ correlators appear as supplementary
information. These spectra are multilinear combinations of
the $a^\ell_m$ which can be generally written as sums of products
of Clebsch-Gordon coefficients. It can be shown that they act
as a decomposition of the $n$-point function on the sphere in
a suitable base made up of Legendre polynomials and spherical harmonics.
These spectra are trivial to implement on
a computer, but are formally quite complicated for large $\ell$.
An explicit expression for the quadrupole $m$-shape was given in
\cite{mag1} with a suggested application to texture scenarios.
Fortunately at very high $\ell$ one may simply reformulate the
problem in terms of the Fourier representation of small
patches. $m$-spectra and inter-$\ell$ correlators then become very simple.
They reappear as $uv$-plane invariants, that is quantities made
up of the $a({\bf k})$ modes, and which are invariant under
2D rotations and translations (the projected 3D rotation group).
The non-Gaussian spectra $C^n(k_2, \cdots,{\bf k}_n)$ are
invariant under rotations and translations. This requirement
may also be imposed on any set of qualifiers of a random field
which statistically satisfies these invariances.
Under a rotation $R_\theta$ and a translation along a vector
${\bf t}$ the Fourier components transform as in
\begin{eqnarray}\label{rott}
R_\theta(a({\bf k}))&=&a(R_\theta({\bf k}))\nonumber\\
T_{\bf t}a({\bf k})&=&e^{i{\bf k}\cdot {\bf t}}a({\bf k})
\end{eqnarray}
A systematic way to generate invariants out of the $a({\bf k})$
is to consider multilinear combinations, that is sums of products
of $n$ modes $a({\bf k)}$ (monomials). For these to be invariant
under translations it is necessary that the vectors ${\bf k}_i$
used in each monomial add up to zero. To achieve invariance under
rotations one must then, for each monomial, average over all possible
rotations of the ${\bf k}_i$ configuration used.
One may formally write the most general multilinear invariant
of order $n$ as
\begin{equation}\label{geninv}
I^{(n)}={1\over N_\theta}\sum_{\theta}\prod_{i=1}^n a({\bf k}_i)
\end{equation}
in which the vectors ${\bf k}_i$ considered in each product must
add up to zero and always take the same configuration, and $N_\theta$ is the
total number of possible rotations of the configuration, should
Fourier space be discretized.
For $n=2$ the only invariant for each $k$
is the angular power spectrum. Given
a vector ${\bf k}$, the requirement that the second vector
in the binomial adds to zero fully determines the second vector.
Averaging over all rotations makes the direction of the
first vector irrelevant. The invariant (\ref{geninv}) then
reduces to
\begin{equation}
I^{(2)}(k)={1\over N}\sum_{|{\bf k}|=k} |a({\bf k})|^2
\end{equation}
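A discrete version of $I^{(2)}(k)$ is simply an azimuthal average of $|a({\bf k})|^2$ over $\Delta k=1$ annuli of the FFT grid. A sketch (the overall normalization is a convention; NumPy assumed):

```python
import numpy as np

def ring_power_spectrum(field, dk=1.0):
    """Azimuthally averaged power spectrum I2(k): mean of |a(k)|^2 over
    Delta-k rings, a discrete version of the second-order invariant."""
    n = field.shape[0]
    a = np.fft.fft2(field) / n                 # normalization is a choice
    kx = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers
    kmod = np.hypot(*np.meshgrid(kx, kx, indexing='ij'))
    kbin = np.floor(kmod / dk).astype(int)
    power = np.abs(a) ** 2
    nbins = kbin.max() + 1
    counts = np.bincount(kbin.ravel(), minlength=nbins)
    sums = np.bincount(kbin.ravel(), weights=power.ravel(), minlength=nbins)
    return sums / np.maximum(counts, 1)

# White Gaussian noise has a flat spectrum, I2(k) close to 1 in this
# normalization.
rng = np.random.default_rng(2)
field = rng.standard_normal((64, 64))
I2 = ring_power_spectrum(field)
```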
For the third order invariants one now has an invariant which depends
on a vector and a scalar. Independent invariants are parameterized
by the third vector and the relative direction of the second vector.
The first vector is fully determined by the requirement that the 3 vectors
add to zero. The actual directions of the second and third vectors
are made redundant by taking the circular
average. A particularly interesting 3rd order invariant
may be obtained if one demands that the 3 vectors
used all have the same moduli. Then for each $k$ only one invariant
exists, the one obtained with the configuration plotted in Figure 1.
Diagrammatically one may then write down the most general invariant
for any order, rapidly bumping into unwanted proliferation.
The procedure however is very simple, and reduces to Eqn.~(\ref{geninv})
and the various independent diagrams it allows.
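As a concrete instance, the equilateral third-order ring invariant of Figure 1 can be estimated on a gridded map by rotating a closed equal-modulus triad around the ring and averaging the triple products. A sketch, in which nearest-grid-point sampling of the modes is an approximation assumed here (adequate when $k$ is well resolved):

```python
import numpy as np

def equilateral_ring_invariant(field, k, n_theta=64):
    """Third-order ring invariant: rotation average of a(k1) a(k2) a(k3)
    over equilateral triads with |k1| = |k2| = |k3| = k and
    k1 + k2 + k3 = 0 (the configuration of Figure 1)."""
    n = field.shape[0]
    a = np.fft.fft2(field) / n
    total = 0.0 + 0.0j
    for theta in 2.0 * np.pi * np.arange(n_theta) / n_theta:
        prod = 1.0 + 0.0j
        for phi in (theta, theta + 2.0 * np.pi / 3.0,
                    theta + 4.0 * np.pi / 3.0):
            # nearest-grid-point sampling of the mode a(k cos(phi), k sin(phi))
            i = int(np.rint(k * np.cos(phi))) % n
            j = int(np.rint(k * np.sin(phi))) % n
            prod *= a[i, j]
        total += prod
    # for a real field the imaginary part averages away over rotations
    return (total / n_theta).real

rng = np.random.default_rng(3)
field = rng.standard_normal((64, 64))
I3 = equilateral_ring_invariant(field, k=10.0)
```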
The most general multinomial invariant of degree $n$ is
a function of $(k_2,{\bf k}_3, \cdots, {\bf k}_n)$.
Hence the non-Gaussian spectra defined in the decomposition
of the $n$-point correlation function correspond to the most general
multinomial invariant one may construct out of the $a({\bf k})$.
\subsection{Exposing the redundancy of the $n$-point function}
The approach just devised has the advantage of allowing us to
expose the redundancy of the $n$-point correlation function.
Let us start by counting the number of degrees of freedom
present in the Fourier modes produced by a given measurement.
If we had full sky coverage then there would be
$2k+1$ modes per unit of $k$. Finite sky coverage has the effect
of correlating neighbouring modes among these,
thereby reducing the number of independent modes per unit
of $k$ to $2kf_{\rm sky}$, if $f_{\rm sky}$ is the fraction of
sky covered. An alternative Fourier space
discretization is then required, so that the modes in the new
mesh are quasi-uncorrelated, while encoding all the statistical
information in the original modes. This may be done with a
so-called uncorrelated mesh (see \cite{hobsmag}). There is
some arbitrariness in where the new mesh is laid. This arbitrariness allows
us to be sloppy with the invariances imposed in the previous section,
since any vector ${\bf k}$ may now be placed anywhere in the
uncorrelated mesh cell. Hence the angles required by configurations
such as the ones in Fig. 1 should be seen as flexible, as
far as the mesh resolution is concerned.
Let us now consider a generic $\Delta k=1$ ring containing
$N_{ring}\approx 2kf_{\rm sky}$ uncorrelated mesh points.
Since there are 3 degrees of freedom in rotations and translations
one may not build more than $N_{ring}-3$ independent invariants
per unit of $k$, plus 3 invariants relating adjacent
$\Delta k=1$ rings. The number of multilinear invariants making up
the $n$-point function transform is vastly larger. Even
if we restrict ourselves to invariants made up only of
modes in each ring, the number of invariants is 1 for $n=2,3$
(see Figure 1),
then, for $n>3$, of order ${\cal O}(N_{ring}^{n-3})$, if $N_{ring}\gg 1$.
The situation gets worse if we consider inter-ring multinomial
invariants. Let us now consider a square in Fourier space with
$N_p\times N_p$ uncorrelated mesh points.
Then for large $N_p$ the number of multilinear invariants
of order $n$ in all rings is of order ${\cal O}(N_p^{n-1})$.
The number of independent mesh points, on the contrary, is
of order ${\cal O}(N_p^2)$.
Hence there must be an algebraic dependence between all the
multilinear invariants. The information encoded in the higher
order correlators must therefore repeat itself in any theory, Gaussian
or not. We therefore argue that the $n$-point function formalism,
while comprehensive, is not systematic. This is not to say that
some truncation of the correlator series might not be useful
as a non-Gaussianity test. In particular we feel that ring
multinomial invariants, such as the cubic one depicted in Figure 1,
may be useful non-Gaussianity tests.
\section{Ring and inter-ring spectra}\label{newstats}
We now propose an alternative packaging for the information in Fourier
space. Comparing it with the $n$-point transform, it is simpler,
does away with redundancy, and has an immediate physical interpretation.
We divide the $uv$-plane in $\Delta k=1$ rings where
$N_k=2kf_{\rm sky}$ independent modes lie. Out of these we may
build $N_k-3$ invariants. In whatever we do we shall always make
sure that the formalism proposed produces the power spectrum $C(k)$
as the first of these quantities.
The other $C(k,m)$, for $m=1,\cdots, N_k-4$, are the ring spectra.
We shall not consider multilinear invariants, but shall search for
alternative prescriptions. On top of these, for each two adjacent
rings there will be 3 invariants, the inter-ring correlators.
Given the arbitrariness in the exact position of the Fourier
mesh, we may also be justified in building
simply $N_k$ non-invariant quantities for each ring, as long
as we know how they transform. We found the latter attitude
more practical, but shall give in Appendix I the correct prescription
for building properly invariant quantities.
For a Gaussian theory the probability of a given map depends
only on the map power spectrum. Consider then a very non-Gaussian map
by which we mean something we visually recognize as very structured.
Consider also various other maps with the same power spectrum,
but which we visually recognize as very Gaussian.
All these maps, Gaussian looking or not, have the same probability
in Gaussian theories. In non-Gaussian theories, on the contrary,
the probability of a given map depends on more than its power spectrum.
Hence, within the set of maps considered above, it may happen
that the non-Gaussian looking map is now considerably more
probable than the other maps.
The point we wish to make is that non-Gaussianity arises not from structured
maps being less likely in Gaussian theories, but from structured maps being
more likely in non-Gaussian theories.
This seemingly innocent remark has two important implications.
First it implies that the natural variables for non-Gaussianity
spectra should be uniformly distributed in Gaussian theories.
In contrast, at least in some non-Gaussian theories, the same
variables should have peaked distributions.
Hence non-Gaussian spectra should carry no information
whatsoever in Gaussian theories, but they should be
highly predictive at least in some non-Gaussian theories.
A second implication is that disproving Gaussianity on its
own merits is a contradiction in terms. One can always disprove a
given non-Gaussian theory on its own merits
by measuring a non-Gaussian spectrum
and finding it to be away from the theoretically predicted ridge.
However, any non-Gaussian spectrum measurement is
equally probable in Gaussian theories, and so it can never be used as
evidence against Gaussianity. Disproving Gaussianity is
then a matter dependent on the available competing non-Gaussian theories.
If one measures a non-Gaussian spectrum spot on the prediction
of a well motivated non-Gaussian theory then this is strong evidence
in favour of that non-Gaussian theory. One may simply argue that
the non-Gaussian theory has predicted the observation with
much larger probability than the Gaussian theory. Pedantically,
the observation has not disproved Gaussianity. However it
has discredited Gaussianity massively in the face of the more predictive
competing non-Gaussian theory.
It is under the requirement that non-Gaussian spectra ought to be
uniformly distributed in Gaussian theories that we now proceed to define
non-Gaussian spectra.
Consider a ring of the $uv$-plane where $N_k$ complex
modes $a({\bf k}_i)=\Re [a({\bf k}_i)]
+i\Im [ a({\bf k}_i)]$ live, of which only half are independent
owing to the reality condition $a(-{\bf k})=a^{*}({\bf k})$.
In Gaussian theories these are distributed as
\begin{eqnarray}
F(\Re[a({\bf k}_i)],\Im[a({\bf k}_i)])=
{1\over (2\pi\sigma_k^2)^{N_k/2}}\times\nonumber \\
\exp{-\left(
{1\over 2\sigma_k^2}\sum_{i=1}^{m_k}(\Re^2[a({\bf k}_i)]+
\Im^2[a({\bf k}_i)])\right)}
\end{eqnarray}
where $m_k=N_k/2$ is the number of independent complex modes.
First separate the $m_k$ independent complex modes into $m_k$ moduli $\rho_i$
and $m_k$ phases $\phi_i$
\begin{eqnarray}
\Re[a({\bf k}_i)]&=&\rho_i\cos{\phi_i}\nonumber\\
\Im[a({\bf k}_i)]&=&\rho_i\sin{\phi_i}
\end{eqnarray}
The Jacobian of this transformation is
\begin{equation}\label{jac1}
\left| {\partial (\Re[a({\bf k}_i)],\Im[a({\bf k}_i)])
\over \partial(\rho_i,\phi_i)}\right|=\prod_{i=1}^{m_k}\rho_i
\end{equation}
The $\{\rho_i\}$ may be seen as
Cartesian coordinates which we transform into polar coordinates.
These consist of a radius $r$ plus $m_k-1$ angles $\tilde\theta_i$
given by
\begin{equation}\label{pol}
\rho_i=r\cos{\tilde\theta_i}\prod_{j=0}^{i-1}\sin{\tilde\theta_j}
\end{equation}
with $\sin{\tilde\theta_0}=\cos{\tilde\theta_{m_k}}=1$.
In terms of these variables the radius is related to the
angular power spectrum by $C(k)=r^2/(2m_k)$. In general the first
$m_k-2$ angles $\tilde\theta_i$ vary between $0$ and $\pi$ and the
last angle varies between 0 and $2\pi$.
However because all $\rho_i$ are positive all angles are in $(0,\pi/2)$.
The Jacobian of this transformation is
\begin{equation}\label{jac2}
\left| {\partial(\rho_1,\cdots,\rho_{m_k})
\over \partial(r,\tilde\theta_1,\cdots,\tilde\theta_{m_k-1})}\right|=
r^{m_k-1}\prod_{i=2}^{m_k-1}\sin^{m_k-i}{\tilde\theta_{i-1}}
\end{equation}
Polar coordinates in $m_k$ dimensions may be understood as the iteration
of the following rule:
\begin{eqnarray}
\rho_i&=&r_i\cos{\tilde\theta_i}\nonumber \\
r_{i+1}&=&r_i\sin {\tilde\theta_i}
\end{eqnarray}
in which $r_i$ is the radius of the $(m_k-i+1)$-dimensional sphere
obtained by keeping fixed all $\rho_j$ for $j=1,\cdots,i-1$:
\begin{equation}
r_i={\sqrt{\rho_i^2+\rho_{i+1}^2+\cdots+\rho_{m_k}^2}}
\end{equation}
One may easily see that this is how 3D polars work, and also that
the transform (\ref{pol}) follows this rule.
Hence one may invert the transform (\ref{pol}) with
\begin{equation}
\tilde\theta_i=\arccos{\rho_i\over {\sqrt{\rho_i^2+\rho_{i+1}^2+
\cdots+\rho_{m_k}^2}}}
\end{equation}
for $i=1,\cdots,m_k-1$.
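The transformation (\ref{pol}) and its inverse can be checked numerically. The sketch below is our own illustration (the function names and the use of NumPy are not from the text): it recovers the angles $\tilde\theta_i$ from a set of positive moduli and reconstructs the moduli from $(r,\tilde\theta_i)$.

```python
import numpy as np

rng = np.random.default_rng(3)

def to_polar(rho):
    """(rho_1..rho_m) -> (r, ttheta_1..ttheta_{m-1}) via the arccos inversion."""
    rho = np.asarray(rho, dtype=float)
    r = np.linalg.norm(rho)
    # tails[i] = sqrt(rho_{i+1}^2 + ... + rho_m^2) in 1-based notation, i.e. r_i
    tails = np.sqrt(np.cumsum(rho[::-1] ** 2)[::-1])
    ttheta = np.arccos(rho[:-1] / tails[:-1])
    return r, ttheta

def from_polar(r, ttheta):
    """Iterate rho_i = r_i cos(ttheta_i), r_{i+1} = r_i sin(ttheta_i)."""
    m = len(ttheta) + 1
    rho = np.empty(m)
    sin_prod = 1.0                       # sin(ttheta_0) := 1
    for i in range(m - 1):
        rho[i] = r * np.cos(ttheta[i]) * sin_prod
        sin_prod *= np.sin(ttheta[i])
    rho[m - 1] = r * sin_prod            # cos(ttheta_m) := 1
    return rho

rho = np.abs(rng.normal(size=8))         # positive moduli -> angles in (0, pi/2)
r, tth = to_polar(rho)
back = from_polar(r, tth)
```

Because all $\rho_i$ are positive, the recovered angles indeed fall in $(0,\pi/2)$, as stated above.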
The total Jacobian of the transformation
from $(\Re[a({\bf k}_i)],\Im[a({\bf k}_i)])$
to $\{r,\tilde\theta_i,\phi_i\}$ is
just the product of (\ref{jac1}) and (\ref{jac2}).
Hence for a Gaussian theory one has the distribution
\begin{equation}
F(r,\tilde\theta_i,\phi_i)={
{r^{N_k-1}\exp{-\left(r^2\over 2\sigma_k^2\right)}}
\over (2\pi\sigma_k^2)^{N_k/2}}
\prod_{i=1}^{m_k-1}\cos{\tilde\theta_i}
(\sin{\tilde\theta_i})^{N_k-2i-1}
\end{equation}
In order to define $\tilde\theta_i$ variables
which are uniformly distributed in
Gaussian theories one may finally perform the transformation on each
$\tilde\theta_i$:
\begin{equation}
{\theta_i}=\sin^{N_k-2i}(\tilde\theta_i)
\end{equation}
so that for Gaussian theories one has:
\begin{equation}
F(r,{\theta}_i,\phi_i)={r^{N_k-1}e^{-r^2/(2\sigma_k^2)}
\over 2^{m_k-1}(m_k-1)!\,\sigma_k^{N_k}}\times 1 \times\prod_{i=1}^{m_k}{1\over
2\pi}
\end{equation}
The factorization chosen shows that all new variables are independent
random variables for Gaussian theories. $r^2/\sigma_k^2$ has a $\chi^2_{N_k}$
distribution,
the ``shape'' variables $\theta_i$ are uniformly distributed
in $(0,1)$, and the phases $\phi_i$ are uniformly distributed in $(0,2\pi)$.
The variables $\theta_i$ define a non-Gaussian shape spectrum,
the {\it ring spectrum}.
They may be computed from ring moduli $\rho_i$ simply by
\begin{equation}
{\theta}_i={\left(\rho_{i+1}^2+\cdots +\rho_{m_k}^2
\over \rho_i^2+\cdots +\rho_{m_k}^2\right)}^{m_k-i}
\end{equation}
They describe how shapeful the perturbations are.
If the perturbations are stringy then
the maximal moduli will be much larger than the minimal moduli.
If the perturbations are circular, then all moduli will be roughly
the same. This favours some combinations of angles, which are
otherwise uniformly distributed. In general any shapeful picture
defines a line on the ring spectrum $\theta_i$.
A non-Gaussian theory ought to define a set of probable smooth
ring spectra peaking along a ridge of typical shapes.
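The claimed uniformity of the $\theta_i$ in a Gaussian theory is easy to verify numerically. The sketch below is illustrative code of our own (names and the use of NumPy are not from the text): it draws independent Gaussian complex modes, computes the ring spectrum from the moduli, and checks that the resulting $\theta_i$ look uniform on $(0,1)$.

```python
import numpy as np

rng = np.random.default_rng(0)

def ring_spectrum(rho):
    """theta_i = [(rho_{i+1}^2+...+rho_m^2)/(rho_i^2+...+rho_m^2)]^(m-i),
    for 1-based i = 1..m-1, computed from the m moduli of one ring."""
    rho2 = np.asarray(rho, dtype=float) ** 2
    tails = np.cumsum(rho2[::-1])[::-1]     # tails[j] = sum of rho^2 from j on
    m = len(rho2)
    i = np.arange(1, m)
    return (tails[1:] / tails[:-1]) ** (m - i)

# Moduli of m_k independent complex Gaussian modes: Re, Im ~ N(0, 1)
m_k, n_real = 32, 2000
a = rng.normal(size=(n_real, m_k)) + 1j * rng.normal(size=(n_real, m_k))
thetas = np.array([ring_spectrum(np.abs(row)) for row in a])

# Uniform on (0,1): mean close to 1/2, variance close to 1/12
print(thetas.mean(), thetas.var())
```

Each tail ratio is a Beta$(m_k-i,1)$ variable, so raising it to the power $m_k-i$ produces exactly a uniform variate, in agreement with the derivation above.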
We can now construct an invariant for each adjacent pair of
rings, solely out of the moduli. If we order the $\rho_i$ for each
ring, we can identify the maximum moduli. Each of these moduli
will have a specific direction in Fourier space; let
${\bf k}_{max}$
and ${\bf k}^{'}_{max}$ be the directions where the maximal moduli
are achieved.
The angle
\begin{equation}
\psi(k,k')={1\over \pi}{\rm ang}({\bf k}_{max},{\bf k}^{'}_{max})
\end{equation}
will then produce an inter-ring correlator for the moduli, the
{\it inter-ring spectra}. This
is uniformly distributed in Gaussian theories in $(-1,1)$. It gives
us information on how connected the distribution of power is between
the different scales.
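The inter-ring correlator can be sketched as follows. This is illustrative code of our own (the discretization of the ring into a fixed grid of angles, and all names, are assumptions, not from the text); we take the unsigned angle, so this $\psi$ lies in $[0,1]$ rather than $(-1,1)$.

```python
import numpy as np

def psi(k1, phi1_max, k2, phi2_max):
    """Unsigned inter-ring correlator: angle (in units of pi) between the
    Fourier-space directions of the two maximal moduli."""
    v1 = k1 * np.array([np.cos(phi1_max), np.sin(phi1_max)])
    v2 = k2 * np.array([np.cos(phi2_max), np.sin(phi2_max)])
    c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(c, -1.0, 1.0)) / np.pi

def max_direction_angle(phis, moduli):
    """Polar angle of the mode with maximal modulus on a ring."""
    return phis[int(np.argmax(moduli))]

# Half-ring of independent directions (the other half is fixed by reality)
phis = np.linspace(0.0, np.pi, 16, endpoint=False)

# 'Connected' rings: power concentrated along the same direction -> psi = 0
rho_a = np.ones(16); rho_a[3] = 10.0
rho_b = np.ones(16); rho_b[3] = 8.0
psi_connected = psi(10.0, max_direction_angle(phis, rho_a),
                    11.0, max_direction_angle(phis, rho_b))

# Maxima along perpendicular directions -> psi = 1/2
rho_c = np.ones(16); rho_c[11] = 9.0    # phis[11] - phis[3] = pi/2
psi_perp = psi(10.0, max_direction_angle(phis, rho_a),
               11.0, max_direction_angle(phis, rho_c))
```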
We have therefore defined a transformation from the original modes
into a set of variables $\{r,\theta,\phi,\psi\}$. The non-Gaussian
spectra thus defined have a particularly simple distribution
for Gaussian theories. They also comply
with the uniformity requirement we have placed on non-Gaussian spectra
in the discussion at the start of this Section.
We shall call perturbations for which the phases are not uniformly
distributed localized perturbations. This is because if perturbations
are made up of lumps statistically distributed but with well defined
positions then the phases will appear highly correlated. We shall
call perturbations for which the ring spectra are not
uniformly distributed shapeful perturbations. We will identify later
the combinations of angles which measure stringy or spherical shape of the
perturbations. This distinction is interesting as it is in principle
possible for fluctuations to be localized but shapeless, or more
surprisingly, to be shapeful but not localized. Finally we shall call
perturbations for which the inter-ring spectra are not uniformly
distributed, connected perturbations. This turns out to be one of
the key features of stringy perturbations. These three definitions
allow us to consider structure in various layers. White noise
is the most structureless type of perturbation. Gaussian fluctuations
allow for modulation, that is a non trivial power spectrum $C(k)$,
but their structure stops there.
Shape, localization, and connectedness constitute the three next
levels of structure one might add on. Standard visual structure
is contained within these definitions, but they allow for more
abstract levels of structure. We will show in
Appendix II what these concepts mean with reference to visual
structure.
In the formulation above there is a minor flaw which we found
inconsequential, given the practical advantages gained. This flaw
is spelled out and corrected in Appendix I, but we have
chosen not to do so in the main body of this paper. In Appendix I
we also mention what can be done with the phases $\phi$.
This is however outside the scope of this paper, where
we have decided to investigate the practical applications
of the less investigated ring and inter-ring spectra.
\section{Applications}
Historically much attention has been paid to the non-Gaussianity in
the phases $\phi$. As mentioned above, it has frequently been assumed
that the prescription of random phases in Fourier space leads to
Gaussian perturbations. Evidence of peculiar behaviour
of the phases was shown in numerical simulations of CMB anisotropies
from cosmic strings \cite{BBS,Pom}. Little attention has been given
to the $\rho$'s. In the following three applications we will focus on the
statistics only involving the $\rho$'s and show that, in these cases,
they are good non-Gaussian indicators.
In all these examples we will consider maps with $160^2$ pixels and
no noise; it has been customary to apply the various standard statistics
to the raw non-Gaussian signal superposed with small scale Gaussian noise,
but no attempt has been made at studying the effects of large scale
Gaussian fluctuations. As we will argue there are physically motivated
reasons for doing so. With the intent of keeping the different effects
separate we will analyse this latter case. The addition of noise
should be studied when considering specific observing strategies.
We will quote all values of the wavenumber, $k$, using uncorrelated
mesh units. That is, following the discussion of Section II, we will start
labelling the wavenumbers in unit intervals, from the smallest up to
the largest. The width of the rings is therefore $\Delta k=1$.
\subsection{Unresolved point source on a Gaussian signal}
As a first application of these statistics let us consider a Gaussian
signal when non-Gaussian foregrounds are present. We know that this
is the case in real CMB measurements and there exist a series of
techniques which allow one to separate the two signals using
a combination of spectral and spatial information. A more difficult
situation occurs when one considers unresolved point sources. In this
case, either one uses additional information about the patch of the
sky one is observing \cite{catpeople} or one has to make
assumptions and the best one can achieve is to subtract them on a
statistical basis.
Let us consider a simple case which illustrates the weakness of
current methods for checking non-Gaussianity but highlights the
strengths of our technique.
Suppose that the field is sufficiently small for only a small number
of point sources to be present. Also suppose that the signal
is Gaussian and that it has
reached the Silk damping tail. The probed spectrum will then
go to white-noise at the scale of the field size, but converge
to the raw spectrum otherwise \cite{hobsmag}. A
fitting formula for the power spectrum of the Gaussian
signal is
\begin{equation}\label{exppower}
P_g(k)=\alpha\exp{\left( -k^2\over 2k_g^2\right)}
\end{equation}
On top of this one must
either firmly believe that the signal is Gaussian, or that the
signal is non-Gaussian, but of a distinctively different shape.
Now let a single unresolved source be present in the field. Let
the source be perfectly circular, and have a Fourier space falloff
of the form
\begin{equation}
P_{ng}(k)={1\over 1+(k/k_c)^4}
\end{equation}
The phases are all correlated and arranged so as to center the configuration
and the angles $\theta$ correspond to a perfectly circular
configuration. All moduli are exactly equal to the square root
of the power spectrum. This is a shapeful, localized, and connected
perturbation, visually recognizable as highly non-Gaussian (see
Fig. 2).
Although we are using it as a toy model for an unresolved source,
this is inspired by a spot produced by a texture undergoing
perfect spherically symmetric collapse.
In Fig. 2. we show the point source, and the signal mixed
with the point source for the case
$\alpha=3$, $k_c=0.1$, and $k_g=5$. What has started as visually
very non-Gaussian disappears completely with the addition of
Gaussian signal. A real space subtraction of the source is bound
to fail.
From inspecting the histogram of temperatures at each realization
one finds that, comparing with a purely Gaussian map with the same
overall power spectrum, they look the same (see Fig. 3).
A more thorough analysis would lead us to calculate the skewness, $\alpha_3$,
and kurtosis, $\alpha_4$, of the maps,
\begin{eqnarray}
\alpha_3&=&C^3(0,0)/(C^2(0))^{3/2} \nonumber \\
\alpha_4&=&(C^4(0,0,0)/(C^2(0))^2)-3
\end{eqnarray}
or better yet, estimate the distribution of $\alpha_3$ and
$\alpha_4$. In Fig. 4 we superpose histograms
of skewness (left panel) and kurtosis (right panel) for the
non-Gaussian theory and for the purely Gaussian theory; clearly the
Gaussian behaviour on large scales is dominating the effect of the point
source.
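These moments are straightforward to estimate from a pixel map. The sketch below is our own (it uses real-space sample moments rather than the correlation-function notation above, and all names are illustrative); for a pure Gaussian map both statistics scatter around zero.

```python
import numpy as np

rng = np.random.default_rng(2)

def skew_kurt(m):
    """Sample skewness alpha_3 and excess kurtosis alpha_4 of a pixel map."""
    x = np.asarray(m, dtype=float).ravel()
    x = x - x.mean()
    s2 = np.mean(x ** 2)
    return np.mean(x ** 3) / s2 ** 1.5, np.mean(x ** 4) / s2 ** 2 - 3.0

gauss_map = rng.normal(size=(160, 160))   # 160^2-pixel Gaussian sky
a3, a4 = skew_kurt(gauss_map)
print(a3, a4)                              # both scatter around zero
```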
One useful statistic to apply is the accumulated density of peaks
above a given threshold. It was shown in \cite{BE} that, for a Gaussian
field, the density of peaks over a threshold $\mu\sigma$ where
$\sigma=\sqrt{\langle|{{\delta T}\over T}|^2\rangle}$ is approximately given
by:
\begin{eqnarray}
N_{peaks}(\mu)={{1}\over{4\pi\sqrt{3}\theta_*^2}}{\rm max}[1,({6 \over
\pi})^{1/2}\gamma^2\mu \exp(-\mu^2/2)+{\rm erfc}\{{{\mu}\over
{[2(1-2\gamma^2/3)]^{1/2}}}\}]
\end{eqnarray}
where $\gamma$ and $\theta_*$ are dimensionless ratios of the first
three moments of the random field. We can apply this statistic to
our maps, and in Fig. 5 we compare the peak density of
the non-Gaussian maps with that of the pure Gaussian theory (with
the same power spectrum). Although there is a slight difference
for low (negative) thresholds, the two peak densities are
essentially indistinguishable.
We can now apply the approach we have devised. The non-Gaussianity
will only become evident on small scales, i.e. for large $k$s in
the Fourier plane. In fact we can find an analytical expression for
the ring spectrum of a perfectly circular configuration: all
moduli are equal to the same value $\rho_i=I$. Then the ring spectrum is
\begin{equation}\label{thetacirc}
{\theta}_i^{\rm circ}={\left(m_k-i\over m_k-i+1\right)}^{m_k-i}
\end{equation}
For large values of $m_k$ this ring spectrum is approximately
$1/e$ for all $i$, until $i$ approaches $m_k-1$, where the spectrum rises
to $1/2$.
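Equation (\ref{thetacirc}) can be checked against the general definition of the ring spectrum. The short sketch below is illustrative code of our own (names and the use of NumPy are assumptions, not from the text):

```python
import numpy as np

def ring_spectrum(rho):
    """theta_i from the moduli of one ring (1-based i = 1..m_k-1)."""
    rho2 = np.asarray(rho, dtype=float) ** 2
    tails = np.cumsum(rho2[::-1])[::-1]    # tails[j] = sum of rho^2 from j on
    m = len(rho2)
    i = np.arange(1, m)
    return (tails[1:] / tails[:-1]) ** (m - i)

m_k = 200
theta_num = ring_spectrum(np.ones(m_k))    # all moduli equal: circular case
i = np.arange(1, m_k)
theta_circ = ((m_k - i) / (m_k - i + 1.0)) ** (m_k - i)   # Eq. (thetacirc)
```

For large $m_k$ the first entries sit near $1/e$ and the last equals $1/2$, as stated above.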
As shown in Fig. 6 (left panel) the ring spectrum
at a low $k$ is indeed consistent with a uniform distribution (the
$\theta_i$s are uniformly distributed between 0 and 1).
As $k$ increases the angles $\theta_i$ start accumulating
around the circular ridge. Soon the point-source dominates the signal,
a fact evidenced by a perfectly circular ring spectrum. Well
into the non-Gaussian region of Fourier space (where the Gaussian
signal is strongly suppressed) we find a clean signal as shown
in Fig. 6 (right panel).
This example illustrates the main idea and the main weakness behind our
technique. The main idea consists of trying to identify the
particular scale on which
non-Gaussianity is evident and clearly this is best done in Fourier
space. In this case (with no experimental small scale noise) one
simply needs to look at $k$s on sufficiently small scales; the
inclusion of Gaussian noise would introduce an outer limit in
Fourier space, reducing the region of non-Gaussianity to a finite ring.
As for the main weakness we point out that the
shape spectrum, $\theta_i$, is sensitive only to the global
shape of the map. While one point source leads to a very clean
distribution of power around rings in Fourier space, if one has
more than a few point sources then this will become less clear. Although for
a set of N sources one will have a very distinct signal (a smooth
line as opposed to a random distribution of $\theta_i$) it becomes
more difficult to distinguish the sources on a firm basis
from a purely Gaussian
signal. This leads us to establish the best operational strategy
for this method to work: choose small
fields and analyse them separately. In doing this one will be probing
the scales on which non-Gaussianity becomes dominant with less
objects to pick out. The fact that interferometric measurements
of the CMB are constrained to small fields leads us to believe
this to be a sensible prescription for $uv$-plane data analysis.
Recent experience with such measurements \cite{catpeople} seems to
indicate that indeed in each field there are only a few problematic
sources (maybe one or two).
\subsection{A cosmic string with a Gaussian background}
One of the best motivated theories of non-Gaussian structure
formation is that of cosmic strings. Following a primordial
phase transition, line-like concentrations of energy could form in
certain grand unified theories \cite{VS}. This network of
strings would then evolve into a self-similar scaling regime,
perturbing matter and radiation during its evolution. The non-linear
evolution of the strings should lead to a non-Gaussian distribution
of fluctuations; more specifically, the effect of
strings on radiation after recombination should lead to very distinct
line-like discontinuities in the CMB\cite{ks,BBS}, the Kaiser-Stebbins
effect. In \cite{BBS} the authors solved Einstein's
equations sourced by a high-resolution simulation of an evolving
string network. They argued that the non-Gaussianity was due
to non-random phases and illustrated this by generating maps with
the same amplitudes but randomized phases and comparing the two.
A battery of tests has been used to quantify these non-Gaussian
features, in some cases with the inclusion of instrument noise and
finite resolution: in \cite{gott} the authors looked at gradient
histograms and the statistics of the genus of excursion sets, in
\cite{periv} an analytical fit to the kurtosis of a string map was
proposed and in \cite{Pom} a multifractal analysis of one-dimensional
scans was proposed.
More recent studies of the evolution of string perturbations in
the CMB indicate that the Kaiser-Stebbins effect is obscured on
subdegree scales by fluctuations generated before recombination
\cite{ACFM},
and that these perturbations look very Gaussian \cite{turok}. None of
the previous statistical tests have taken this into account.
A careful analysis of the behaviour of these two contributions, however,
indicates that the non-Gaussian features may become
dominant again on very small scales: perturbations seeded
before recombination will be exponentially suppressed by Silk damping
\cite{silk} on small scales, while the Kaiser-Stebbins effect
will lead to a $k^{-2}$ behaviour. This is an ideal situation
for using our statistic. We can evaluate the non-Gaussian spectrum
on scales where the non-Gaussian signal is expected to dominate,
and see if it shows any evidence for
deviation from the background Gaussian distribution.
If we consider the case of a very small field, we expect to have at
most one segment of string crossing the patch. This would be the case
for a field of a fraction of a degree. It is instructive to consider
the case of a smooth, straight string. Here the signal is maximally
non-circular and all of the power in the ring is concentrated on one
of the modes $\rho_s=m_kI$, with $\rho_i=0$ for $i\neq s$.
For such a configuration the ring spectrum is
\begin{eqnarray}
{\theta}_i&=&1 \quad{\rm for}\quad i<s\nonumber\\
{\theta}_i&=&0 \quad{\rm for}\quad i=s\nonumber\\
{\theta}_i&=&{0\over 0} \quad{\rm for}\quad i>s\nonumber
\end{eqnarray}
The last angles are undefined in the same way that the angle $\phi$
in the normal 3D polar coordinates is undefined for points along the $z$
axis. The point remains that the configuration corresponds to a single
point on the $m_k-1$ dimensional sphere, and that therefore has probability
zero in a Gaussian theory. For display purposes one may then also
fix the remaining angles at some particular but arbitrary value.
We define $0/0=0$.
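With the convention $0/0=0$ the straight-string ring spectrum can be computed directly from the definition. The sketch below is our own illustration (names and implementation are assumptions, not from the text):

```python
import numpy as np

def ring_spectrum_0over0(rho):
    """theta_i from the moduli of one ring, with the convention 0/0 = 0."""
    rho2 = np.asarray(rho, dtype=float) ** 2
    tails = np.cumsum(rho2[::-1])[::-1]    # tails[j] = sum of rho^2 from j on
    m = len(rho2)
    theta = np.zeros(m - 1)
    for i in range(1, m):                  # 1-based ring index i
        den = tails[i - 1]
        if den > 0.0:
            theta[i - 1] = (tails[i] / den) ** (m - i)
    return theta

# Straight string: all power concentrated in the single mode s (1-based)
m_k, s = 16, 5
rho = np.zeros(m_k)
rho[s - 1] = float(m_k)                    # rho_s = m_k * I, all others zero
th = ring_spectrum_0over0(rho)
```

The output reproduces the pattern above: $\theta_i=1$ for $i<s$, and $0$ from $i=s$ on.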
For a perfectly straight string non-Gaussianity
is so extreme that it will be visually evident even with a very large
amount of background Gaussian noise.
The situation changes dramatically, however, for
the more realistic case when the string is rough, or structured.
This is the picture that emerges from high resolution numerical
simulations \cite{hires}. The intercommutation of strings will build up kinks
and cusps along a string which will only stabilize once gravitational
radiation becomes important. Again most of the power will be
concentrated along one or a few modes, leading to a well defined
spectrum up to some maximum $i$. For larger $i$'s the spectrum
will be close to $0$ or ill defined in the same way as the straight
string case.
Having played with a string code,
we have chosen to model the string as a directed
Brownian walk along the patch we are considering.
We then modelled the effect of the Gaussian background on these
scales in the same way as in the previous example. We superimpose
a background Gaussian signal with the power spectrum given in Eq.~(\ref{exppower}).
In Fig. 7a
we show an example of a $160^2$-pixel map ($20$ arcmin$^2$) of the non-Gaussian signal and
in Fig. 7b we superimpose a Gaussian background with $k_g=26$
and with 5 times the overall amplitude of the non-Gaussian signal.
Clearly the beautiful Kaiser-Stebbins effect is now beyond
what we can recognize visually. One must therefore resort to
more abstract tests.
We first applied to our maps some of the standard tests. It has been argued
that the skewness and kurtosis of the gradient of the temperature
anisotropy field should be a good indicator of string
non-Gaussianity. Skewness should be very sharply peaked at $0$,
(the patterns caused by the string are very symmetrical in terms of
amplitude), and kurtosis should be larger than the Gaussian
\cite{periv}. In Fig. 8 we show histograms of skewness and
kurtosis made from an ensemble of 400 realizations. Clearly the
string with a Gaussian background is indistinguishable from the
purely Gaussian sky.
A more elegant statistic involves working out the Euler characteristic
of the maps, given a threshold. The procedure is straightforward:
given a threshold $\mu\sigma$ one evaluates
the difference between the number of isolated hot regions and cold
regions as a function of the threshold. For a Gaussian field the mean genus
is
\begin{eqnarray}
\Gamma\propto \mu e^{-{{\mu^2}\over 2}}
\end{eqnarray}
It was argued in \cite{gott} that this would be a
good indicator of non-Gaussianity
for strings. In Fig. 9 we show the Euler characteristic
averaged over 100 runs for the string with a Gaussian background and
for a purely Gaussian map with the same power spectrum. Again we find
no significant difference between the two.
Finally we have applied to these maps our technique.
We first looked for the distribution of the $\theta_i$s in
rings where the non-Gaussianity is evident. Due to the random nature
of the structure on the string, the signal in the ring spectrum
won't be as cleanly defined as for the straight string case.
We therefore looked at a large number of maps in order to plot
$\theta_i$s with cosmic/sample variance errorbars.
For plotting purposes we shall give error bars as regions
of probability larger than $1/e$. This corresponds to a
1-$\sigma$ errorbar if the distribution is Gaussian, but generalizes
the concept of a 1-$\sigma$ errorbar to more general distributions.
In particular the concept may be applied to a uniform distribution,
which does not even have a peak.
In Fig. 10, the shaded region is where the $\theta_i$s have
more than $1\over e$ probability of being; the ring has $k=70-75$
(for a $160^2$-pixel map) and we clearly see a ridge towards
the left hand side. For rings at low $k$ this ridge blurs
into the standard Gaussian prediction.
A more striking statistic is the inter-ring spectrum.
In Fig. 11, we have shaded the region where $\psi$s have more
than $1\over e$ probability of being. It is clear that for
low values of $k$ the Gaussian background dominates, and the
various rings are essentially uncorrelated. However
above a certain threshold, subsequent modes are tightly
correlated. As argued above, most of the power is concentrated
along one direction of each ring. What we see here is that this
direction is strongly correlated between rings.
This quality we labelled as connectedness. We see that
strings' connectedness is a robust non-Gaussian feature,
even when all else seems to fail.
\subsection{Evasive non-Gaussian theories}
We finally present a strongly non-Gaussian theory on all scales
which evades detection by several traditional
non-Gaussianity tests.
Consider a theory with a power spectrum as in (\ref{exppower}),
say with $k_g=10$, in uncorrelated mesh units.
Let the phases $\phi$ and inter-ring
correlator angles $\psi$ be uniformly distributed. However
let the ring spectra ${\theta}(k)$ for all rings $k$
be the circular ring spectrum ${\theta}^{\rm cir}(k)$
(cf. Eqn.~\ref{thetacirc}) with infinite probability density.
Thus we have a theory
of delocalized, disconnected spheres. In Fig. 12 we
show a realization of this theory (call it theory $T_1$) and also
a Gaussian realization, that is, a realization of a theory (call it
$T_2$) which differs only in that the ${\theta}(k)$ are now
uniformly distributed.
Theory $T_1$ is strongly non-Gaussian. The set of all of its realizations
has measure zero in any Gaussian theory. In other words the cosmic
confusion between the two theories is zero, where cosmic confusion
is defined as the percentage of common skies generated by the two
theories \cite{mag1}.
If $Q$ is the set of all map variables, and if $F_1(Q)$
and $F_2(Q)$ are their distribution functions in the theories $T_1$
and $T_2$, then the cosmic confusion between the two theories is
\cite{mag1}
\begin{equation}
{\cal C}(T_1,T_2)=\int dQ\, \min{(F_1,F_2)}
\end{equation}
In terms of the variables $Q=\{C(k),{\theta}(k),\phi,\psi\}$
we have
\begin{eqnarray}
F_1&=&\prod_k\chi^2_{N_k}(C(k))\prod_{\phi}\frac{1}{2\pi}
\prod_{\psi}\frac{1}{2}\prod_{{\theta}}
\delta({\theta}-{\theta}^{\rm circ})\\
F_2&=&\prod_k\chi^2_{N_k}(C(k))\prod_{\phi}\frac{1}{2\pi}
\prod_{\psi}\frac{1}{2}
\end{eqnarray}
so that ${\cal C}(T_1,T_2)=0$.
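The overlap integral defining the cosmic confusion is easy to evaluate numerically for simple stand-in distributions. The sketch below is a one-dimensional toy of our own, not the full map measure: two unit-variance Gaussians give a finite confusion, while a delta-function ridge (as in theory $T_1$) overlaps a continuous density on a set of measure zero, reproducing ${\cal C}(T_1,T_2)=0$.

```python
import numpy as np

# Two 1-D unit-variance Gaussian "theories" whose means differ by d = 3.
# The analytic overlap of two such Gaussians is 2*Phi(-d/2).
x = np.linspace(-12.0, 12.0, 48001)
dx = x[1] - x[0]
F1 = np.exp(-x ** 2 / 2.0) / np.sqrt(2.0 * np.pi)
F2 = np.exp(-(x - 3.0) ** 2 / 2.0) / np.sqrt(2.0 * np.pi)
confusion = float(np.sum(np.minimum(F1, F2)) * dx)
print(confusion)   # close to 2*Phi(-1.5) ~ 0.134
```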
Although we have as yet no physical
motivation for such a theory, we believe it to be a good example where
the traditional beliefs about non-Gaussianity do not hold; in spite of
its strong non-Gaussianity this theory evades all
tests we have applied to it. Visually the maps
produced by the theory look very Gaussian. We can apply all
the tests we have introduced in the previous two sections,
with rather spectacular failure.
Plotting temperature histograms reveals
a very Gaussian distribution (see Fig. 13). One may
convert these histograms into moments, with the same result.
The sections of the $n$-point function which may be computed in
practice are also very Gaussian.
In Fig. 14 we have plotted the average and 1-sigma errorbars
for the collapsed 3-point correlation function for $T_1$ and $T_2$
as inferred from 100 realizations. In Fig. 15 we plot
histograms of kurtosis for the two theories. Clearly they are not
good discriminators between the two theories.
We can estimate the number of peaks over a given threshold for the
two theories. In Fig. 16 we plot the total number of peaks
above a given threshold for $T_1$ and $T_2$. In Fig. 17 we
find the Euler characteristic for the two theories. Once again they
are indistinguishable.
Nevertheless all rings of the $uv$-plane show a ring spectrum
which is perfectly circular, without any variance. Any sky, and any $k$,
produces a ring spectrum as the one in Fig. 18,
obtained from the same realization used above, for the ring $k=11$.
\section{Discussion}
In this paper we have proposed a transformation of variables in
Fourier space which produces non-Gaussian spectra with a particularly
simple probability distribution
function for a Gaussian random field. We have focused on a subset
of these, the ring spectra, $\theta_i$, and the inter-ring
spectrum, $\psi$, which contain information about the moduli of the
Fourier modes. We have presented a few examples where they are
good qualifiers of non-Gaussianity.
A number of comments are in order with regards to the limitations
of these statistics. To begin with these statistics
are tailored for data in Fourier space. To actually apply these
statistics to real space data will involve
non-local transformations which may complicate the procedure. However,
in the examples which we have worked out, the non-Gaussianity becomes
apparent on small scales. Therefore one is forced to consider experiments
with the best possible resolution. These are interferometric devices
where the data is measured directly in Fourier space. Another possible
shortcoming of these statistics is that they are sensitive to the
global shape of the data set or map. This means that if one has
many non-Gaussian features (such as many point sources or many
segments of string) then both the ring spectrum and the inter-ring
spectrum will look more Gaussian. This can only be avoided by looking at
small fields. But once again this is the situation
favoured by interferometers. One is limited to small fields (although
one can mosaic over reasonably large patches of sky, \cite{CWF}) and
experience in \cite{catpeople} indicates that very few unresolved
sources will be present.
In an interferometric search for string segments, one would restrict oneself
to fields of less than $(0.5^{\circ})^2$ and still have
a $90\%$ probability of actually seeing a string, but not more than one.
We have not included the effect of small scale noise in the examples
we considered. In those cases the signal was already sufficiently
corrupted for it to be difficult to identify the non-Gaussian
features. In fact, what one finds is that large scale Gaussianity seems
to be more devastating (in terms of erasing non-Gaussian features)
than small scale, noise-related,
Gaussianity. Clearly one has to include the two
effects if one wants to apply these techniques to data but the
details are dependent on each experiment. The statistics defined are
non-linear statistics in the data, which means care must be taken when
considering the effect of noise. A case by case analysis of the
different observing strategies will have to be made. Again, the
fact that the small scale noise in interferometers
increases as a power-law with scale,
as opposed to exponentially as in the case of a single dish
experiment, indicates that interferometric devices are the best
instruments for testing for non-Gaussian features.
One immediate goal will be to design the ideal experiment for
detecting the Kaiser-Stebbins effect. This should include a
careful analysis of theoretical uncertainties (such as the
amplitude of fluctuations at last scattering) as well as
the real life complications mentioned above.
We have focused on statistics with the moduli, $\rho$, and have
not developed in any detail, or applied to any example, statistics
with the phases, $\phi$. It is conceivable that much information
can be extracted from their behaviour. In fact, a generic
feature of physically motivated non-Gaussian models is localization,
which, as we have argued is governed by the phases. Although we have
organized the information that can be extracted from a finite data
set in a systematic way, it is important to define a useful set of
statistics in terms of the phases. We will do so in \cite{fermagagain}.
ACKNOWLEDGMENTS: We thank A. Albrecht, S. Hanany, J. Levin, J. Silk
and L. Tenorio
for interesting conversations.
P.F. was supported by the
Center for Particle Astrophysics, a NSF Science and
Technology Center at UC Berkeley, under Cooperative
Agreement No. AST 9120005.
J.M. thanks St.John's College, Cambridge, and the Royal
Society for support, and also Joe Silk and CfPA for
hospitality.
\section{Introduction}
The M\"obius strip \cite{pickover} has attracted the interest of
researchers and academics due to its fascinating geometric properties
\cite{macho}. In spite of its name \cite{mactutor}, it was not discovered
first by August Ferdinand M\"obius, but independently by Johann Benedict Listing \cite{mactutor1},
the father of modern topology.
The construction is fairly simple: starting with a rectangular piece
of paper, one can join two opposite edges in order to form a cylinder.
But if, before joining the opposite edges, we twist the rectangle
$180^{\mathrm{o}}$, we obtain this renowned one-sided surface.
The strip is a non-orientable surface and for this reason it does not
have an outer and an inner side as usual surfaces, such as the sphere,
the plane or the cylinder. It is a one-sided surface and this fact
has suggested many applications in engineering \cite{macho}. For
instance, audio and film tapes with this shape can record for longer,
since they are used on their only, but double-length, side.
For the same reason, it has been used in printing tapes for printers
and old typewriters. There are also M\"obius strips in luggage
conveyor belts in airports in order to double their useful life. And
a resistor with this shape was patented \cite{resistor, davies}, made
up of two conductive layers and filled with a dielectric material,
preventing residual self-inductance. There are even aromatic
molecules in organic chemistry with this shape \cite{flapan}. And we
cannot forget that it is part of the universal recycling symbol,
formed by three green arrows.
But besides academics and engineers, the M\"obius strip has attracted
the attention of many science students. Just search for ``flux across
a M\"obius strip'' and you will obtain thousands of results in your
favourite search engine. This interest, shared by students and their
teachers, is the target of this paper.
The reason for this is that integral theorems such as Stokes' can
only be applied to orientable surfaces \cite{marsden}, relating the
flux of the curl of a vector field across a surface with its
circulation along the boundary of the surface (see Fig.~\ref{stokes}).
\begin{figure}[h]
\begin{center}
\includegraphics[height=0.2\textheight]{stokes.eps}
\caption{Stokes' theorem\label{stokes}: We have a surface $S$
with unitary normal $\nu$, bounded by a closed curve $\Gamma$
with tangent field $\tau$. We may compute the flux of
the vector field $\rot\mathbf{v}$ either by summing up the
contributions of $\pe{\rot\mathbf{v}}{\nu}$ on the surface $S$ or
the contributions of $\pe{\mathbf{v}}{\tau}$ along the curve
$\Gamma$}
\end{center}
\end{figure}
One might think this is a tricky question, since the answer is
negative: it just cannot be calculated. But there are experiments in
Physics where one could think this question has a meaning.
Consider for instance a circuit attached to the boundary of a M\"obius
strip. According to Faraday's law, the flux of a variable magnetic
field across the surface induces an electric current on the circuit.
One can measure the electromotive force on the circuit, but in
principle Faraday's law cannot be applied to calculate it with the
flux across the surface. This is the issue we would like to clarify
in this paper.
But before providing a solution to this puzzle, we need to recall
some useful concepts. In Section~2 we review the
concepts of flux and circulation before stating Stokes' theorem. In
Section~3 we describe the M\"obius strip as a non-orientable surface.
As it was expected, the calculations performed on the M\"obius strip
and on its boundary do not coincide, as Stokes' theorem is not
applicable, as we show in Section~4. But a simple solution to this issue
is provided in Section~5. A final section of conclusions is included
at the end of the paper.
\section{Stokes' theorem}
Before recalling Stokes' theorem, there are a few definitions we
need to recall: the circulation of a vector field along a curve and
the flux of a vector field across a surface. This can be reviewed in
your favourite Vector Calculus book. We have chosen \cite{marsden} for
its nice examples relating Physics and Mathematics.
The line integral or circulation of a vector field along a curve is
the generalisation of the concept of the work done by a force along a
trajectory.
Let us consider a continuous vector field $\mathbf{v}$ and a curve
$\Gamma$ oriented by its tangent field of velocities $\tau$: that is,
we specify if the curve is followed onwards or backwards. If the
points on the curve $\Gamma$ are parametrised by
$\gamma(t)=\left(x(t),y(t),z(t)\right)$,
$t\in[a,b]$, the velocity of this parametrisation is given by
$\tau\big(\gamma(t)\big)=\gamma'(t)$, where the $'$ denotes derivation
with respect to time $t$.
We define the \textbf{line integral or circulation of $\mathbf{v}$ along $\Gamma$} as
the sum of the projections of $\mathbf{v}$ along $\tau$ at the points on
the curve,
\begin{equation}
\mathcal{C}_{\mathbf{v},\Gamma}:=
\int_{\Gamma}\pe{\mathbf{v}}{\frac{\tau}{\|\tau\|}}\,ds=
\int_a^b\langle \mathbf{v},\tau\rangle_{\gamma(t)}\,
dt,
\end{equation}
taking into account that the length element of a parametrised curve is
$ds=\|\gamma'(t)\|dt$. The $\pe{}{}$ stands for the scalar or inner
product, whereas $\|\ \|$ stands for the length of a vector.
\begin{figure}[h]
\begin{center}
\includegraphics[height=0.2\textheight]{circul.eps}
\caption{Circulation of the vector field $\mathbf{v}$ along the curve
$\Gamma$\label{circul}: The circulation of the field
$\mathbf{v}$ along the curve $\Gamma$ is calculated summing up the
contributions of $\pe{\mathbf{v}}{\tau}$ along the curve $\Gamma$} \end{center}
\end{figure}
We see that this definition does not change on
changing the parametrisation of the curve, but it does depend on the
orientation of the curve. That is, it is the same no matter how fast
we follow the curve, but if we follow the curve the other way round,
the circulation changes sign (see Fig.~\ref{circul}).
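As a quick illustration (our addition, not part of the original text), the circulation of the whirlpool field $\mathbf{v}=(-y,x,0)$ along the unit circle can be computed symbolically. The sketch below assumes the SymPy library is available:

```python
import sympy as sp

t = sp.symbols('t', real=True)

# Unit circle followed counterclockwise, gamma(t) = (cos t, sin t, 0)
gamma = sp.Matrix([sp.cos(t), sp.sin(t), 0])
tau = gamma.diff(t)                       # velocity of the parametrisation

# Whirlpool field v = (-y, x, 0) evaluated on the curve
v = sp.Matrix([-gamma[1], gamma[0], 0])

circ = sp.integrate(v.dot(tau), (t, 0, 2*sp.pi))
print(circ)            # 2*pi

# Following the curve the other way round flips the sign
gamma_rev = gamma.subs(t, -t)
v_rev = sp.Matrix([-gamma_rev[1], gamma_rev[0], 0])
circ_rev = sp.integrate(v_rev.dot(gamma_rev.diff(t)), (t, 0, 2*sp.pi))
print(circ_rev)        # -2*pi
```

The sign change under reversal of orientation is exactly the behaviour described above.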
On the other hand, the flux integral of a vector field across a
surface is also suggested by examples in Mechanics, Electromagnetism
and Fluid Mechanics \cite{aris}: the flux of a gravitational field across a
closed surface is related to the mass contained inside, the flux of an
electrostatic field is related to the total charge inside the surface,
and the variation of the flux of a magnetic field across a surface is
related to the electromotive force induced on the boundary of the
surface.
Let us consider a compact surface $S$ and a continuous vector
field $\mathbf{v}$. The orientation of the surface is given by a
continuous unitary vector field $\nu$ normal to $S$ at every point.
For a closed surface, we have just two choices: a vector field pointing
inwards or outwards.
If such a vector field exists, the surface is called
\textbf{orientable}. The \textbf{flux of $\mathbf{v}$ across $S$} is
defined as the sum of the projections of $\mathbf{v}$ along $\nu$ at
the points of the surface,
\begin{equation} \Phi_{\mathbf{v},S}:=\int_S\langle
\mathbf{v},\nu\rangle \,dS,\end{equation}
where $dS$ is the area element of the surface.
If the surface is closed, the orientation of the surface is taken as
positive when $\nu$ points out of the surface. For a closed surface
then, the flux is positive if more field lines go out of the surface
than enter the surface.
If the surface is open, we can choose either orientation for it. But
the chosen orientation for $S$ induces an orientation for its boundary
$\Gamma$, as we see in Figs.~\ref{stokes} and \ref{orient}: if our right thumb points as
the normal vector $\nu$, our fingers show the way the boundary
$\Gamma$ is to be followed. This convention is necessary to avoid
ambiguities on stating Stokes' theorem.
For explicit calculations, we usually need a parametrisation for the
points on the surface $S$. This is a function, with certain
restrictions \cite{carmo},
$g:D\subset\mathbb{ R}^{2}\to \mathbb{ R}^{3}$, such that
$g(u,v)=\left(x(u,v),y(u,v),z(u,v)\right)\in S$. That is, we describe
the points of $S$ using curvilinear coordinates $u,v$.
The lines of constant $u$, parametrised by $g(u_{0},v)$ and the lines
of constant $v$, parametrised by $g(u,v_{0})$, are called coordinate lines of the
parametrisation $g$ of $S$. Since these lines are contained on the
surface, their velocities,
\[\mathbf{X_{u}}(u,v)=\frac{\partial g(u,v)}{\partial u},\qquad
\mathbf{X_{v}}(u,v)=\frac{\partial g(u,v)}{\partial v},\]are tangent
vector fields to the surface $S$ and their vector product
$\mathbf{X_{u}}\times \mathbf{X_{v}}$ defines a normal vector field
to the surface $S$. Hence, a unitary normal vector field is
\[\nu(u,v)=\frac{\mathbf{X_{u}}\times
\mathbf{X_{v}}}{\|\mathbf{X_{u}}\times \mathbf{X_{v}}\|},\]
but we could have chosen the opposite one, just exchanging the order
of the coordinates.
If the unitary normal vector field is provided this way, since the
surface element in such parametrisation is
\[dS=\|\mathbf{\mathbf{X}_{u}}\times\mathbf{\mathbf{X}_{v}}\|\,du\,
dv,\]
the flux may be computed as
\[
\Phi_{\mathbf{v},S}=\int_{D}\langle \mathbf{v},
\mathbf{\mathbf{X}_{u}}\times
\mathbf{\mathbf{X}_{v}}\rangle\, du\,dv=
\int_{D}\left|
\begin{array}{ccc}
v^x & v^y & v^z \\
\frac{\partial x(u,v)}{\partial u} & \frac{\partial y(u,v)}{\partial u} &
\frac{\partial z(u,v)}{\partial u} \\
\\
\frac{\partial x(u,v)}{\partial v} & \frac{\partial y(u,v)}{\partial
v} & \frac{\partial z(u,v)}{\partial v}
\end{array}
\right|_{g(u,v)}du\,dv.\]
It can be seen that this expression is independent of the chosen
parametrisation, except for the sign due to the choice of orientation.
For instance, a sphere of radius $R$ can be parametrised using the
colatitude angle $\theta$ and the azimuthal angle $\phi$,
\[g(\theta,\phi)=(R\sin\theta\cos\phi, R\sin\theta\sin\phi,
R\cos\theta),\quad \theta\in(0,\pi),\ \phi\in(0,2\pi),\]
with some degeneracy, since $g(0,\phi)=(0,0,R)$ is the North pole of the sphere for all
values of $\phi$ and $g(\pi,\phi)=(0,0,-R)$ is the South pole of the sphere for all
values of $\phi$.
The lines of constant $\theta$, parametrised by
$g(\theta_{0},\phi)$, are the parallels of the sphere and
the lines of constant $\phi$, parametrised by $g(\theta,\phi_{0})$, are the meridians of the sphere.
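As a hedged check (ours, not the authors'), this parametrisation can be fed to SymPy to compute a flux explicitly. For the radial field $\mathbf{v}=(x,y,z)$ the flux across the sphere comes out as $4\pi R^{3}$, consistent with Gauss' theorem since the divergence of $\mathbf{v}$ is $3$ and the enclosed volume is $\frac{4}{3}\pi R^{3}$:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)
R = sp.symbols('R', positive=True)

# Sphere of radius R parametrised by colatitude theta and azimuth phi
g = sp.Matrix([R*sp.sin(theta)*sp.cos(phi),
               R*sp.sin(theta)*sp.sin(phi),
               R*sp.cos(theta)])
n = g.diff(theta).cross(g.diff(phi))   # outward normal for this ordering

v = g                                  # radial field v = (x, y, z) on the sphere

flux = sp.integrate(sp.integrate(sp.simplify(v.dot(n)),
                                 (phi, 0, 2*sp.pi)),
                    (theta, 0, sp.pi))
print(sp.simplify(flux))               # 4*pi*R**3
```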
Now we are ready to state Stokes' theorem.
Integral theorems such as Green's, Gauss' and Stokes' theorems are
fundamental in Physics, mainly in Fluid Mechanics and
Electromagnetism, since they relate integrals of a field in a region
with integrals on its boundary. In this sense, they may be viewed as
a way to reduce the dimensions of the integral, but the physical
consequences are far deeper. This is most relevant for conservative and
solenoidal fields, which can be written respectively as the gradient or the curl of
a potential.
In this paper we are interested in Stokes' theorem, which relates the
flux integral of the curl of a vector field across a surface with the
circulation of the field along the boundary of the surface. It may be
stated as follows:
\textbf{Stokes' theorem:} Let $S$ be a smooth,
compact, oriented surface, bounded by a curve $\Gamma$. Let
\textbf{v} be a smooth vector field. The flux of the curl of $\textbf{v}$ across
$S$, $\Phi_{\rot \mathbf{v},S}$, and the circulation of $\mathbf{v}$
along $\Gamma$, $\mathcal{C}_{\mathbf{v},\Gamma}$, are related by\begin{equation}
\Phi_{\rot \mathbf{v},S}=\mathcal{C}_{\mathbf{v},\Gamma},
\end{equation}
where the orientation for $\Gamma$ is the one induced by the
orientation of $S$.
The curl is a differential vector operator,
\[\rot \mathbf{v}=\left|
\begin{array}{ccc} \mathbf{e_{x}} & \mathbf{e_{y}} & \mathbf{e_{z}} \\
\partial_{x} & \partial_{y} & \partial_{z} \\
v^{x} & v^{y} & v^{z} \end{array} \right|,\]
for a vector field
$\mathbf{v}=v^{x}\mathbf{e_{x}}+v^{y}\mathbf{e_{y}}+
v^{z}\mathbf{e_{z}}$ with coordinates $(v^{x},v^{y},v^{z})$ in the
orthonormal trihedron
$\{\mathbf{e_{x}},\mathbf{e_{y}},\mathbf{e_{z}}\}$ of unitary vectors
along the respective axes $X$, $Y$, $Z$.
Stokes' theorem provides a nice interpretation for the curl of a vector field
$\mathbf{v}$ at a point $P$. Let us consider a small disk $D^{2}$,
bounded by a circumference $S^{1}$ of radius
$\varepsilon$ centered at $P$ with unitary normal $\nu$ parallel to
$\rot \mathbf{v}(P)$ (see Fig.~\ref{orient}).
\begin{figure}[h]
\begin{center}
\includegraphics[height=0.2\textheight]{orient.eps}
\caption{Orientation of the circumference $S^1$ induced by the one on the
disk $D^2$\label{orient}: If we set our right thumb along the normal
$\nu$ to the disk, our fingers show the orientation for $\tau$ along
the boundary curve $\Gamma$}
\end{center}
\end{figure}
At lowest order, if the radius $\varepsilon$ is small, we can take
$\rot\mathbf{v}$ as constant on the disk,
\[ \mathcal{C}_{\mathbf{v},S^1(\varepsilon)}=
\Phi_{\rot\mathbf{v},D^2(\varepsilon)}\approx\pi\varepsilon^2\,\|\rot\mathbf{v}(P)\|
,\]
and so we may view the curl of $\mathbf{v}$ at a point $P$ as the
density of circulation of this field on the orthogonal plane, since
\[ \|\rot\mathbf{v}(P)\|=\lim_{\varepsilon\to 0}\frac{\mathcal{C}_{\mathbf{v},S^1(\varepsilon)}}
{\pi\varepsilon^2}.\]
Hence, the curl of a field shows the existence of closed field lines
or whirlpools (finite circulation) around a point. Besides, its
direction provides the orientation of these whirlpools. This is
related to the fact that solenoidal fields are generated by currents
instead of charges.
One typical example of application of Stokes' theorem is Faraday's
law, one of Maxwell's laws for Electromagnetism \cite{marsden}, which relates the
electrical field $\mathbf{E}$ with the magnetic field
$\mathbf{B}$ through
\begin{equation}\rot \mathbf{E}=-\parcial{\mathbf{B}}{t}.\end{equation}
If we calculate the circulation of the electric field along a closed
curve $\Gamma$, after applying Stokes' theorem to a surface $S$
bounded by $\Gamma$, we get
\[\mathcal{C}_{\mathbf{E},\Gamma}=\Phi_{\rot\mathbf{E},S}=
-\Phi_{\parcial{\mathbf{B}}{t},S}=-\parcial{\Phi_{\mathbf{B},S}}{t},\]
using Faraday's law and taking out the derivative with respect to time.
If we think of the curve $\Gamma$ as a closed circuit, the
circulation of $\mathbf{E}$ is the electromotive force induced by the
varying magnetic field. This is the simple principle which explains how
electric generators work.
Another useful application of the theorem is the calculation of the
flux of a solenoidal field $\mathbf{v}=\rot \mathbf{A}$, that is, of
a vector field $\mathbf{v}$ endowed with a vector potential
$\mathbf{A}$,
\begin{equation}
\Phi_{\mathbf{v},S}=\mathcal{C}_{\mathbf{A},\Gamma},\end{equation}
so that it equals the circulation of its vector potential along the
boundary of the surface.
According to this result, the flux of the solenoidal field
$\mathbf{v}$ does not depend on the surface $S$, but just on its
boundary $\Gamma$. If the surface is closed, there is no boundary and
the flux of a solenoidal field across closed surfaces is always zero.
For open surfaces, the flux is the same across \emph{any} other surface bounded by
$\Gamma$. This fact shall be useful for our purposes later on.
\section{M\"obius strip}
As we mentioned in Section~1, building a M\"obius strip is
fairly simple (see, for instance, page 106 in \cite{carmo}). Let us
consider a vertical segment
$I=\{(R,0,z): z\in [-a,a]\}$ of length $2a$ and the circumference $C$
of radius $R>a$ and center $(0,0,0)$, lying on the plane $z=0$. If we
rotate the segment $I$, keeping it vertical, along the
circumference $C$, we would obtain a circular cylinder. But we allow
the segment also to rotate upside down on travelling along $C$ in
such a way that the segment is always contained in the plane
described by the $Z$ axis and the radius of the circumference through
the center of the segment (see Figs.~\ref{mob1} and \ref{mob2}).
\begin{figure}[h]
\begin{center}
\includegraphics[height=0.2\textheight]{mobius12.eps}
\caption{Initial location of the segment $I$ and after its center
rotates $\phi=\pi/2$\label{mob1}} \end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[height=0.2\textheight]{mobius345.eps}
\caption{Location of the segment $I$ after its center rotates $\phi=\pi,
3\pi/2,2\pi$\label{mob2}}
\end{center}
\end{figure}
The resulting surface $S$ is a M\"obius strip (see Fig.~\ref{moebius}), which may be
parameterised in a simple fashion with such a geometric construction.
\begin{figure}[h]
\begin{center}
\includegraphics[height=0.2\textheight]{moebius.eps}
\caption{M\"obius strip\label{moebius}}
\end{center}
\end{figure}
It seems reasonable to use as parameters the position of a point on
the segment $I$, $r\in(-a,a)$ and the angle $\phi$ rotated by the
center of the segment along the circumference.
When the center of the segment has rotated an angle $\phi$ along the
circumference, the segment rotates an angle $\phi/2$ around its center. If the center
of the segment had not rotated along the circumference, it would have
been parametrised as $(R+r\sin(\phi/2),0,r\cos(\phi/2))$. But since
it has rotated an angle $\phi$ along the circumference, we have
\[
g(r,\phi)=\left
(\left(R+r\sin\frac{\phi}{2}\right)\cos\phi,
\left(R+r\sin\frac{\phi}{2}\right)\sin\phi,
r\cos\frac{\phi}{2}\right),
\]
for $r\in(-a,a)$, $\phi\in(0,2\pi)$ as a parametrisation for the M\"obius strip.
That is, $g(r,\phi)$ describes the position of the original point
corresponding to $r\in(-a,a)$ after rotation of the segment by an
angle $\phi/2$ and rotation of its center along the circumference by
an angle $\phi$.
Using the velocities of the coordinate lines,
\begin{eqnarray*}
\mathbf{X_{r}}(r,\phi)&=&\left(\sin\frac{\phi}{2}\cos\phi,
\sin\frac{\phi}{2}\sin\phi, \cos\frac{\phi}{2}\right),\\
\mathbf{X_{\phi}}(r,\phi)&=&\left
(-\left(R+r\sin\frac{\phi}{2}\right)\sin\phi,
\left(R+r\sin\frac{\phi}{2}\right)\cos\phi,
0\right)\\&+&\frac{1}{2}\left(
r\cos\frac{\phi}{2}\cos\phi,
r\cos\frac{\phi}{2}\sin\phi,-
r\sin\frac{\phi}{2}\right),\end{eqnarray*}
we may obtain a normal vector field, $\mathbf{X_{r}}\times
\mathbf{X_{\phi}}$ to the strip at every point.
We notice that this normal vector field is not continuous: if we
compare the expressions at the center of the segment, $r=0$, after
completing a turn from $\phi=0$ to $\phi=2\pi$,
\[ \mathbf{N}(0,0)=(0,0,1)\times (0,R,0)=(-R, 0,0),\]
\[\mathbf{N}(0,2\pi)=(0,0,-1)\times (0,R,0)=(R, 0,0),\]
the normal vector changes from pointing out of the center of the
circumference to pointing towards the center, though the point on the
strip is the same. Hence, the M\"obius strip is not orientable.
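The sign flip of the normal can be verified symbolically. The following SymPy sketch (our addition, assuming SymPy is available) reproduces the two vectors above:

```python
import sympy as sp

r, phi = sp.symbols('r phi', real=True)
R = sp.symbols('R', positive=True)

# Parametrisation of the Mobius strip
g = sp.Matrix([(R + r*sp.sin(phi/2))*sp.cos(phi),
               (R + r*sp.sin(phi/2))*sp.sin(phi),
               r*sp.cos(phi/2)])
N = g.diff(r).cross(g.diff(phi))       # normal field X_r x X_phi

# Normal at the center of the segment, before and after one full turn
N_start = N.subs({r: 0, phi: 0})
N_end = N.subs({r: 0, phi: 2*sp.pi})
print(list(N_start), list(N_end))      # [-R, 0, 0] and [R, 0, 0]
```

The normal at the same point of the strip comes back reversed after one turn, which is the analytic statement of non-orientability.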
The boundary $\Gamma$ of the M\"obius strip $S$ is the curve described
by both endpoints $\{-a,a\}$ of the segment on rotating. Or
equivalently, since the endpoint $a$ arrives at the original position
of $-a$ after a whole turn, we may describe $\Gamma$ by the motion of just
the endpoint $a$ after the segment travels twice along the
circumference to end up at the original position,
\[\gamma(\phi)=
\left
(\left(R+a\sin\frac{\phi}{2}\right)\cos\phi,
\left(R+a\sin\frac{\phi}{2}\right)\sin\phi,
a\cos\frac{\phi}{2}\right),
\] for $\phi\in[0,4\pi]$.
\section{Flux \emph{across} a M\"obius strip}
We are ready to perform some calculations on the strip and its
boundary. For simplicity, we consider a constant vector field
along the $Z$ axis, $\mathbf{v}=(0,0,1)$. This field
is solenoidal and a simple vector potential for it is
$\mathbf{A}(x,y,z)=(-y/2,x/2,0)$. That is,
$\mathbf{v}=\rot\,\mathbf{A}$.
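One can check by hand, or symbolically, that this potential indeed works. The following SymPy snippet (our sketch) computes $\rot\mathbf{A}$ from its definition:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)

# Vector potential A = (-y/2, x/2, 0)
A = sp.Matrix([-y/2, x/2, 0])

# Curl from its definition as a formal determinant
curl_A = sp.Matrix([sp.diff(A[2], y) - sp.diff(A[1], z),
                    sp.diff(A[0], z) - sp.diff(A[2], x),
                    sp.diff(A[1], x) - sp.diff(A[0], y)])
print(list(curl_A))    # [0, 0, 1], the constant field v
```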
The circulation of $\mathbf{A}$ along $\Gamma$, the boundary of the
strip $S$ is well defined, since it is an oriented curve, and may be
readily computed.
We need the velocity of the parametrisation of $\Gamma$,
\begin{eqnarray*}\gamma'(\phi)&=&
\left
(-\left(R+a\sin\frac{\phi}{2}\right)\sin\phi,
\left(R+a\sin\frac{\phi}{2}\right)\cos\phi,
0\right)\\&+&\frac{1}{2}
\left
(a\cos\frac{\phi}{2}\cos\phi,
a\cos\frac{\phi}{2}\sin\phi,-
a\sin\frac{\phi}{2}\right),
\end{eqnarray*}
and the vector potential on the points of $\Gamma$ in this
parametrisation,
\[\mathbf{A}(\gamma(\phi))=\frac{1}{2}
\left(-\left(R+a\sin\frac{\phi}{2}\right)\sin\phi,
\left(R+a\sin\frac{\phi}{2}\right)\cos\phi,
0\right).
\]
Their inner product is just
\[\pe{\mathbf{A}(\gamma(\phi))}{\gamma'(\phi)}=\frac{1}{2}
\left(R+a\sin\frac{\phi}{2}\right)^{2},
\]
which makes the calculation of the circulation simple,
\begin{equation}\label{circA}\mathcal{C}_{\mathbf{A},\Gamma}=\int_0^{4\pi}\pe{\mathbf{A}(\gamma(\phi))}{\gamma'(\phi)}d\phi=
\frac{1}{2}\int_0^{4\pi}\left(R+a\sin\frac{\phi}{2}\right)^{2}d\phi=2\pi
R^{2}+\pi a^{2}.\end{equation}
But if we naively calculate the flux of $\mathbf{v}$ across the strip,
\begin{eqnarray*}\Phi_{\mathbf{v},S}&=&\int_{-a}^{a}dr\int_{0}^{2\pi}d\phi
\pe{\mathbf{v}}{\mathbf{X_{r}}\times \mathbf{X_{\phi}}}=
\int_{-a}^{a}dr\int_{0}^{2\pi}d\phi
\left(R + r\sin\frac{\phi}{2}\right)\sin\frac{\phi}{2}
\\&=&8Ra,
\end{eqnarray*}
which of course does not provide the same result as the circulation
of $\mathbf{A}$ along the boundary $\Gamma$, since the strip is not
orientable and Stokes' theorem is not applicable.
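The naive flux can be reproduced with the same tools. In this SymPy sketch (ours), the integrand is the inner product of $\mathbf{v}$ with $\mathbf{X_{r}}\times\mathbf{X_{\phi}}$, and the result is $8Ra$, which indeed differs from the circulation $2\pi R^{2}+\pi a^{2}$:

```python
import sympy as sp

r, phi = sp.symbols('r phi', real=True)
R, a = sp.symbols('R a', positive=True)

# Parametrisation of the Mobius strip and its (discontinuous) normal field
g = sp.Matrix([(R + r*sp.sin(phi/2))*sp.cos(phi),
               (R + r*sp.sin(phi/2))*sp.sin(phi),
               r*sp.cos(phi/2)])
N = g.diff(r).cross(g.diff(phi))

v = sp.Matrix([0, 0, 1])               # constant field along the Z axis

flux = sp.integrate(sp.integrate(sp.simplify(v.dot(N)),
                                 (r, -a, a)),
                    (phi, 0, 2*sp.pi))
print(sp.simplify(flux))               # 8*R*a
```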
\begin{figure}[h]
\begin{center}
\includegraphics[height=0.2\textheight]{moebius-roto.eps}
\caption{Open M\"obius strip\label{broken}}
\end{center}
\end{figure}
However, there is a way to provide a meaning and an interpretation to the
previous integral. If we cut the strip along the original segment at
$\phi=0$, we obtain an oriented open strip, but its boundary is not
$\Gamma$ as one could expect, but the union $\tilde \Gamma$ of four
pieces: the piece of $\Gamma$ corresponding to $\phi\in(0,2\pi)$, the
piece of $\Gamma$ corresponding to $\phi\in(2\pi,4\pi)$ with reversed
orientation and the original segment $I$ counted twice to link both
segments of $\Gamma$ (see Fig.~\ref{broken}). Since $I$ is orthogonal to $\mathbf{A}$, it
does not contribute to the circulation,
\begin{eqnarray*}\mathcal{C}_{\mathbf{A},\tilde\Gamma}&=&
\int_0^{2\pi}\pe{\mathbf{A}(\gamma(\phi))}{\gamma'(\phi)}d\phi
-\int_{2\pi}^{4\pi}\pe{\mathbf{A}(\gamma(\phi))}{\gamma'(\phi)}d\phi
\\&=&
\frac{1}{2}\int_0^{2\pi}\left(R+a\sin\frac{\phi}{2}\right)^{2}d\phi-
\frac{1}{2}\int^{4\pi}_{2\pi}\left(R+a\sin\frac{\phi}{2}\right)^{2}d\phi
=8Ra,\end{eqnarray*}
which of course provides the same result as the
flux across the open strip, since Stokes' theorem is applicable to
this oriented surface.
This, however, is not the result we are
after, since we wish to recover the circulation of $\mathbf{A}$
along $\Gamma$, not the flux of $\rot\mathbf{A}$ across a broken
M\"obius strip.
\section{Circulation along the boundary of the strip}
We have checked explicitly that the flux of a solenoidal field
across a M\"obius strip and the circulation of its vector potential
along the boundary of the strip are not the same, since Stokes'
theorem cannot be applied to a non-orientable surface.
However, the circulation of the field along the boundary of the strip
does have a physical meaning. As we have already mentioned, it could
be the electromotive force induced on a circuit located along $\Gamma$
by a varying magnetic field. Is it possible to calculate it using
Faraday's law?
When posed in this way, we notice that the answer is simpler than
expected when we formulated the question in terms of the flux across
a M\"obius strip, which sounded more appealing. Our goal is not the
flux, which is an auxiliary quantity, but the circulation or
electromotive force, which is the one we can measure.
And again Stokes' theorem is of much help, since it can suggest the
right answer to the right question. If we are interested in
calculating the circulation $\mathcal{C}_{\mathbf{v},\Gamma}$, we
notice that Stokes' theorem simply states that it can be done with
the flux across \emph{any} oriented surface bounded by $\Gamma$. That
is, M\"obius strip has $\Gamma$ as boundary and has been useful for
defining it, but that is all: the strip is a bad choice,
since it is not an oriented surface. But we can use any other
oriented surface with the same boundary, as suggested in Exercise
7.30 in \cite{berkeley}.
Cones are the simplest choice, since any closed curve without
self-intersections can be the boundary of a cone. We take
any point $P$ in space as the vertex of the cone and draw the segments
that link $P$ with the points of $\Gamma$. The resulting surface is a
cone bounded by $\Gamma$ and is an oriented surface. The only issue
is that we have to choose $P$ so that the cone does not have
self-intersections.
A simple choice for the vertex is $(-R,0,0)$ (see Fig.~\ref{cone}), the
middle point of the horizontal segment on the strip at $\phi=\pi$.
\begin{figure}[h]
\begin{center}
\includegraphics[height=0.1\textheight]{moebius-comp.eps}
\includegraphics[height=0.1\textheight]{cone-comp.eps}
\caption{M\"obius strip and orientable cone bounded by $\Gamma$\label{cone}: Both
surfaces are bounded by the curve $\Gamma$. The cone is
constructed by linking the point $P$ with every point on $\Gamma$}
\end{center}
\end{figure}
A parametrisation for this cone $\tilde S$ is obtained by linear interpolation
between the vertex, $\tilde g(0,\phi)$, and $\Gamma$,
$\tilde g(1,\phi)$,
\[\tilde g(r,\phi)=(1-r)(-R,0,0)+ r\gamma(\phi), \qquad r\in(0,1), \
\phi\in(0,4\pi).\]
The corresponding velocities of the coordinate lines,
\[\mathbf{X_{r}}(r,\phi)=\gamma(\phi)+(R,0,0),\qquad
\mathbf{X_{\phi}}(r,\phi)=r\gamma'(\phi),\]
allow us to calculate the flux of $\mathbf{v}$ across the cone,
\begin{eqnarray*}\Phi_{\mathbf{v},\tilde
S}&=&\int_{0}^{1}dr\int_{0}^{4\pi}d\phi
\pe{\mathbf{v}}{\mathbf{X_{r}}\times \mathbf{X_{\phi}}}\\&=&
\int_{0}^{1}dr\int_{0}^{4\pi}d\phi
\left(2R^2\cos^{2}\frac{\phi}{2}
+ Ra\left(1+3\cos^{2}\frac{\phi}{2}\right)\sin\frac{\phi}{2} + a^2\sin^{2}\frac{\phi}{2}
\right)r
\\&=&2\pi R^{2}+\pi a^{2},
\end{eqnarray*}
and we obtain the same result as with the circulation (\ref{circA}),
in accordance with Stokes' theorem, since the cone is orientable.
Calculations provide the same result for any other choice of the
vertex $P$ of the cone.
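Again, the computation can be double-checked symbolically. The SymPy sketch below (our addition) builds the cone from the vertex $(-R,0,0)$ and recovers the circulation value $2\pi R^{2}+\pi a^{2}$:

```python
import sympy as sp

r, phi = sp.symbols('r phi', real=True)
R, a = sp.symbols('R a', positive=True)

# Boundary curve of the Mobius strip, phi in [0, 4*pi]
gamma = sp.Matrix([(R + a*sp.sin(phi/2))*sp.cos(phi),
                   (R + a*sp.sin(phi/2))*sp.sin(phi),
                   a*sp.cos(phi/2)])

# Cone with vertex P = (-R, 0, 0), linearly interpolated towards gamma
P = sp.Matrix([-R, 0, 0])
g = (1 - r)*P + r*gamma
N = g.diff(r).cross(g.diff(phi))

v = sp.Matrix([0, 0, 1])

flux = sp.integrate(sp.integrate(sp.expand(v.dot(N)), (r, 0, 1)),
                    (phi, 0, 4*sp.pi))
print(sp.simplify(flux))    # 2*pi*R**2 + pi*a**2
```

The flux across the orientable cone agrees with the circulation along $\Gamma$, as Stokes' theorem demands.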
\section{Conclusions}
In this paper we have provided a simple answer to the calculation of
the flux of a vector field across a one-sided surface, where Stokes'
theorem is not applicable.
We have shown that, though the question is ill posed, there is a way
of restating the problem in order to provide a right answer, that is
related to experiments we may perform in a laboratory.
It has been pointed out that the physically meaningful quantity is
not the flux across the one-sided surface, but the circulation along
the boundary of the surface. This quantity is not only meaningful,
but can also be measured, for instance, as the electromotive force along a
circuit induced by a varying magnetic field.
In fact, once we focus on computing the circulation along the
boundary, we notice that the one-sided surface is auxiliary and may be
replaced by \emph{any} other surface with the same boundary. If the
chosen surface is orientable, this allows us to calculate the flux
and the circulation and obtain the same result, according to Stokes'
theorem. In fact, cones are always available for designing orientable
surfaces with a given closed curve as boundary.
Summarising, the circulation of a vector field along the boundary of a
M\"obius strip, or any other one-sided surface, can be calculated
using Stokes' theorem, though not using the M\"obius strip, but any
other surface with the same boundary.
\section*{Bibliography}
\section{Introduction}\label{sec:intro}
Water masers are known to be abundant in low- and high-mass star-forming regions, where they trace collimated
outflows \citep[e.g.,][]{Furuya1999,Furuya2000,Hollenbach2013,Moscadelli2013}, and protoplanetary disks \citep{Fiebig1996,Torrelles1998}, both of which are key features during the earliest phases of protostellar evolution.
In particular, the water maser line from the $J=6_{1,6} - 5_{2,3}$ rotational transition at 22~GHz has been detected, since its discovery by \cite{Cheung1969}, in hundreds of sources within both high- and low-mass star forming regions \citep[e.g.,][]{Furuya2003,Moscadelli2020}. These masers are extremely bright and compact, and have become primary targets for Very Long Baseline Interferometry (VLBI), which can probe angular resolutions better than 1~mas \citep[e.g.,][]{Wu2014,Sanna2017}. Observations of 22~GHz water masers have been crucial primarily for the study of the dense gas and their dynamics around young stellar objects (YSOs; \citealt{Moscadelli2019}).
The earliest phase of low-mass protostellar evolution (the Class 0 phase in the evolutionary classes defined by \citealt{Lada1987} and \citealt{Andre1993}) is characterized by the presence of powerful outflows, which are believed to be intimately linked to the accretion process. These outflows can create shocked regions where the protostellar jets impact the ambient molecular cloud, which could collisionally pump H$_2$O maser emission. Searches for water masers frequently target the youngest low-mass protostars, since they exhibit the most powerful collimated mass outflows. Several systematic surveys to search for water maser emission toward low-mass stars have been conducted in the past \citep{Wilking1994,Claussen1996,Furuya2001,Furuya2003}. These surveys have found that the detection rate of water masers drops drastically as protostars evolve through the Class I and II phases \citep{Furuya2001,Furuya2003}.
This is explained by the dissipation of the dense gas around the central object as it evolves. The detection rate of water masers also seems to drop significantly for very low-luminosity objects ($L\lesssim0.4~L_\odot$; \citealt{Gomez2017}).
Only a few VLBI studies of the kinematics of water masers in low-mass stars have been conducted in the past \citep[e.g.,][]{Claussen1998,Moscadelli2006,Imai2007,Dzib2018}, in part because they are weaker than their counterparts associated with high-mass stars. These studies have shown that the masers emerge at the base of the protostellar jet, in shocks likely driven by the interaction with the disk, or in shocked gas clumps along the axis of the jet \citep{Moscadelli2006}.
\begin{figure*}
\begin{center}
{\includegraphics[width=0.85\textwidth,angle=0]{vla-serpens-obs.pdf}}
\end{center}
\caption{The 48 VLA pointings used for our observations are indicated by the large circles. Fields where H$_2$O masers are detected are in white. The circle diameters of $2\rlap.{'}7$ correspond to the field of view at 22.2~GHz. Cyan dots mark the positions of 90 young stars reported in \cite{Winston2018AJ}, which were identified as Class 0+I objects. Red dots correspond to 60 YSO candidates by \cite{Dunham_2015} classified as Class 0+I objects. Blue dots are 67 protostars identified by \cite{Plunkett2018} from ALMA 1-mm continuum observations and IR data. The distribution of the VLA pointings was chosen to cover the most of these objects. Green dots correspond to 31 Flat-spectrum, 59 Class II and 12 Class III objects from the catalog of \cite{Dunham_2015} that fall within the observed VLA fields. The background is a {\it Herschel} H$_{2}$ column density map of the Serpens South star-forming region \citep{Andre2010}.
}
\label{fig:vla}
\end{figure*}
In this paper, we focus on the Serpens South region, a well-known region harboring one of the nearest very young protostellar clusters. Serpens South was discovered by \cite{Gutermuth_2008} from {\it Spitzer} images as an infrared dark cloud and since then it has become an interesting target to observe low-mass young stars in the earliest phases of their development.
It is located $\sim$3$^{\rm o}$ south of the Serpens Main cloud, a region also rich in star formation activity \citep{Eiroa_2008}.
The W40 region, located $\sim$10~arcmin to the east of Serpens South, is a more evolved star-forming region hosting a cluster of massive stars and an HII region. Serpens South and W40 are both projected within the broader Aquila Rift complex of molecular clouds, and are often referred to as the Aquila region \citep[e.g.][]{Andre2010}.
The distance to the Aquila region has been a matter of debate in the literature. However, recent measurements do seem to converge to $\approx$440--480~pc \citep[e.g.][]{Ortiz2017,Zucker2019,Herczeg2019}. \cite{Ortiz2017} obtained VLBI trigonometric parallaxes of radio continuum sources in Serpens Main and W40 and reported an average distance of $436.0\pm9.2$~pc. Later, \cite{Ortiz2018} analyzed {\it Gaia} parallaxes of stars in the Aquila region (two stars are projected in the outskirts of Serpens South) and in Serpens Main, delivered as part of the 2nd data release (DR2). They found that the {\it Gaia} parallaxes from Aquila agree on average with those from Serpens Main, and are also consistent with the previous VLBI estimate, although their associated uncertainties are larger.
Thus, in the present study we adopt the VLBI distance of $436.0\pm9.2$~pc.
\begin{table*}
\caption{VLA observed epochs}
\label{tab:obs}
\centering
\begin{tabular}{c c c c c c c}
\hline\hline
Epoch & Observation & VLA & \multicolumn{3}{c}{Continuum} & Channel \\
& date &configuration & Beam size & P.A. & rms & rms \\
& & & ($''\times''$) & ($^{\rm o}$) & ($\mu$Jy~beam$^{-1}$) & (mJy~beam$^{-1}$) \\
\hline
1 & 2019 Jan 19 & C & 1.08$\times$0.76 & $-12.5$ & 25 & 16 \\
2 & 2019 Jan 26 & C & 1.27$\times$0.80 & $-28.7$ & 24 & 16 \\
3 & 2019 Feb 02 & C & 1.16$\times$0.78 & $-22.8$ & 27 & 21 \\
4 & 2019 Feb 08 & C$\rightarrow$B & 0.75$\times$0.44 & $-1.46$ & 21 & 18 \\
\hline
\end{tabular}
\end{table*}
Here we use the Karl G.\ Jansky Very Large Array (VLA) to conduct a survey of water masers toward Serpens South covering the region with the highest density of protostellar objects.
Follow-up observations of the VLA-detected water masers were obtained with the Very Long Baseline Array (VLBA). The paper is organized as follows. Section \ref{sec:obs} describes the target selection, acquired observations, and data reduction. In Sect. \ref{sec:detections} we present our results and discuss the properties of the detected H$_2$O emission, the spatial and velocity distribution of maser spots, and the association of the masers with outflow activity. This section also reports the sources detected with radio continuum emission. In Sect. \ref{sec:discussion} we discuss the relationship between H$_2$O maser emission and bolometric source luminosity. Finally, Sect. \ref{sec:conclusions} presents our conclusions.
\section{Observations and data reduction}\label{sec:obs}
\subsection{VLA observations}\label{sec:vlaobs}
We observed the $6_{1,6} - 5_{2,3}$ H$_2$O maser line (at rest frequency 22,235.080 MHz) with the K-band receiver at a velocity resolution of 0.1~km~s$^{-1}$ (corresponding to 7.8~kHz) and a velocity coverage of $\sim$100~km~s$^{-1}$. The observations were taken in 4 epochs in C and C$\rightarrow$B\footnote{C$\rightarrow$B denotes the reconfiguration from C- to B-array.} configurations (Table \ref{tab:obs}) under program 18B-230. The epoch observed on February 2, 2019 missed 25\% of the scans; therefore it was re-observed on February 8, 2019 with the C$\rightarrow$B configuration (Table \ref{tab:obs}). The water maser line was covered by a 16-MHz wide spectral window with 2048 channels. Eight additional 128-MHz wide spectral windows (with 64 channels each) were observed in each baseband for the continuum, resulting in an aggregate bandwidth of 2~GHz.
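As a consistency check on the spectral setups quoted above and in Sect. \ref{sec:obs-vlba}, the channel spacing maps to velocity resolution through the radio convention $\Delta v = c\,\Delta\nu/\nu_0$. A minimal illustrative sketch (not part of the reduction pipeline; the function name is ours):

```python
# Velocity width of a spectral channel, radio convention: dv = c * dnu / nu0.
C_KM_S = 299792.458      # speed of light [km/s]
NU0_HZ = 22.235080e9     # H2O 6(1,6)-5(2,3) rest frequency [Hz]

def channel_velocity_width(dnu_hz, nu0_hz=NU0_HZ):
    """Velocity width [km/s] of a channel of width dnu_hz [Hz]."""
    return C_KM_S * dnu_hz / nu0_hz

# VLA setup: 16 MHz window / 2048 channels = 7.8125 kHz per channel
print(round(channel_velocity_width(16e6 / 2048), 3))  # ~0.105 km/s
# VLBA setup: 15.625 kHz channel spacing
print(round(channel_velocity_width(15.625e3), 3))     # ~0.211 km/s
```

Both values reproduce the $\sim$0.1 and $\sim$0.2~km~s$^{-1}$ resolutions quoted for the VLA and VLBA correlator configurations.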
A total of 48 VLA fields (Figure \ref{fig:vla}) were selected to cover essentially all known low-mass protostars across the region. Our target sample includes all Class 0+I candidates\footnote{Class 0+I refers to objects in the Class 0 or Class I phase, that cannot be separated based on IR measurements alone. \cite{Dunham_2015} uses the IR extinction corrected spectral index, $\alpha$, with $\alpha\geq0.3$ for Class 0+I. \cite{Winston2018AJ} uses IR colors to identify (deeply) embedded protostars as Class 0+I objects. }
reported in \citet[][90 sources]{Winston2018AJ} and \citet[][60 sources]{Dunham_2015},
as well as the 67 protostars identified by \cite{Plunkett2018} from ALMA 1-mm continuum observations and infrared (IR) data. The observed area also includes 31 Flat-spectrum, 59 Class II and 12 Class III objects from the catalog of \cite{Dunham_2015}.
Observing sessions consisted of a series of three scans on target fields ($\sim$1.8~min per target) bracketed by phase calibrator scans of $\sim$1.4~min. The quasar 3C286 ($\alpha$(J2000) = 13:31:08.287984, $\delta$(J2000) = +30:30:32.95886), observed at the beginning of the observations, was used as the standard flux and bandpass calibrator, while J1851+0035 ($\alpha$(J2000) = 18:51:46.7217, $\delta$(J2000) = +00:35:32.414) was used as the phase calibrator. The total observing time in each epoch was about 2.4~hr.
Data calibration was performed with the NRAO's Common Astronomy Software Applications (CASA, version 5.4.1) package using the VLA pipeline\footnote{\url{https://science.nrao.edu/facilities/vla/data-processing/pipeline/CIPL_54}} provided along with the data, which was modified to work with spectral line observations. The calibrated visibilities were imaged using the CASA task {\tt tclean}. We produced maps of continuum emission for each observed field by integrating the full 2-GHz bandwidth. The pixel size was $0\rlap.{''}16$ in the maps from the first three epochs and $0\rlap.{''}073$ in the last epoch. The number of clean iterations was set to 10,000 with a threshold for cleaning of 0.066~mJy. We used ``Briggs'' weighting and applied the primary beam correction. For the image sizes, we used 1040$\times$1040 and 2250$\times$2250 pixels in C and C$\rightarrow$B configuration, respectively, corresponding to a field size of $2\rlap.{'}7$.
Maps were made for individual epochs and for the combination of the first, second and fourth epochs. The central frequency (wavelength) in these continuum images is 22.9~GHz (1.31~cm). The beam sizes and root mean square (rms) noise achieved in the continuum images are given in columns 4--6 of Table \ref{tab:obs}.
For the images of the line data, we first fit and subtract the continuum from the {\it uv} data using the task {\tt uvcontsub}, excluding the inner 900 channels for the fitting. Then, the task {\tt tclean} was used to generate the data cubes of $2\rlap.{'}7$ in size, with 1,000 clean iterations, threshold for cleaning of 25~mJy, and the same pixel size and weighting scheme as the continuum images. The average rms noise in the maps not corrected by the primary beam was 16, 16, 21, and 18~mJy~beam$^{-1}$ in epochs 1, 2, 3, and 4, respectively (Table \ref{tab:obs}). In order to obtain the positions and fluxes of the detected spots in individual channels (cf.\ Sect. \ref{sec:vla-results}), we performed a 2D Gaussian fit to the brightness distribution with the CASA task {\tt imfit}.
The error in the spot position is given by the astrometric uncertainty, ${\theta_{\rm res}}/ {(2\times{\rm S/N})}$, where $\theta_{\rm res}$ is the FWHM size of the restoring beam, and S/N the signal-to-noise ratio of the source \citep{Thompson2017}. The C-configuration maps of the H$_2$O line have an average beam size of $1\rlap.{''}2$. Therefore, for emission detected at S/N=5 the formal (statistical) error in position is $\approx 0\rlap.{''}12$. For the C$\rightarrow$B configuration, the statistical error is $\approx 0\rlap.{''}08$.
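The positional uncertainties quoted above follow directly from that expression; a short illustrative sketch (the $0\rlap.{''}8$ beam used below for the C$\rightarrow$B line maps is inferred from the quoted error, not stated explicitly in the text):

```python
def position_error(theta_res_arcsec, snr):
    """Statistical position error [arcsec] for emission detected with a
    restoring beam of FWHM theta_res_arcsec at signal-to-noise ratio snr,
    using the standard expression theta_res / (2 * S/N)."""
    return theta_res_arcsec / (2.0 * snr)

# C configuration: average 1.2" line beam, S/N = 5
print(round(position_error(1.2, 5), 3))  # 0.12 arcsec
# C->B configuration: ~0.8" line beam assumed (inferred from the quoted 0.08")
print(round(position_error(0.8, 5), 3))  # 0.08 arcsec
```

The same expression, with a sub-mas VLBA beam, yields the $\sim$70~$\mu$as errors quoted in Sect. \ref{sec:obs-vlba}.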
\input{vlba-obs}
\input{line-imfit-aas}
\subsection{VLBA observations}\label{sec:obs-vlba}
We conducted multi-epoch Very Long Baseline Array (VLBA) observations toward 4 targets, including the 3 sources that were undoubtedly detected in H$_2$O emission with the VLA (Sect. \ref{sec:vla-results}), and one more star with tentative detection (2MASS J18295902--0201575). These observations were conducted between March and November, 2020 as part of program BO061 (Table \ref{tab:obs-vlba}). The data were taken at 22.2 GHz with 4 intermediate frequency (IF) bands, each of 16~MHz bandwidth. One IF was centered at the $6_{1,6} - 5_{2,3}$ H$_2$O transition and correlated with a channel spacing of $\sim$0.2~km~s$^{-1}$ (15.625~kHz). We observed the quasar J1824+0119 ($\alpha$(J2000)=18:24:48.143436, $\delta$(J2000)=+01:19:34.20183) as the phase reference calibrator, which we alternated with target observations in fast switching mode, switching sources every $\approx$30 seconds. Additional 30-min blocks of calibrators distributed over a wide range of elevations were observed at 23.7~GHz every $\approx$2 hours during each $\approx$9-hr observing run. The observations were organized in two blocks, ``A'' and ``B'', with each block observing up to 2 targets. The total on-source time of the water masers was 1.5 and 3~hr for blocks observing two and one targets, respectively (see Table \ref{tab:obs-vlba}).
The blocks have been observed in a total of 4 epochs as of the year 2020 (labeled as 1 to 4 in Table \ref{tab:obs-vlba}). Here, we only report the detections achieved so far (Sect. \ref{sec:masers-vlba}). The full analysis of the multi-epoch VLBA data will be presented in a forthcoming paper.
Data calibration was performed with the Astronomical Image Processing System (AIPS; \citealt{Greisen2003}), using the ParselTongue scripting interface \citep{Kettenis2006} and following standard procedures for phase-referencing observations \citep[e.g.,][]{Reid2009}. Since the VLA-detected masers are relatively weak ($<$1~Jy; Section \ref{sec:detections}), the fringe-fitting solutions were derived from the scans on the phase-reference calibrator and then applied to the target. Once the calibration was completed, we imaged individual channel maps of $4096\times4096$ pixels using a pixel size of $50~\mu$as. Spot positions and fluxes were determined by fitting a Gaussian to the brightness distribution at individual channels using the AIPS task {\tt jmfit}. The expected statistical positional errors are of the order of 70~$\mu$as for emission detected at S/N=5.
\section{Results}\label{sec:detections}
\begin{figure*}[!bht]
\begin{center}
{\includegraphics[width=0.32\textwidth,angle=0]{CARMA-7-full-spectra.pdf}}
{\includegraphics[width=0.32\textwidth,angle=0]{SSTgbsJ1829053-014156-full-spectra.pdf}}
{\includegraphics[width=0.32\textwidth,angle=0]{SSTgbsJ1830177-021211-full-spectra.pdf}}
\end{center}
\caption{Spectra of H$_2$O emission detected with the VLA extracted at the peak pixel from the data cubes. The epoch of observation is indicated in the legends.
The vertical dashed line at $V_{\rm LSR} = 8$~km~s$^{-1}$ marks the systemic velocity of the cloud.
Due to failures during the observations, several fields, including CARMA-7, were skipped in the third epoch.
}
\label{fig:vla-spectra}
\end{figure*}
\subsection{VLA detected sources with H$_2$O emission}\label{sec:vla-results}
The cubes of the H$_2$O line were visually inspected to search for emission at the location of the targets.
Only 3 sources have detected H$_2$O emission, whose properties are listed in Table \ref{tab:line-imfit}. Column 1 of this table indicates
the name of the source. Column 2 gives the epoch of detection. Columns 3 and 4 give the mean position obtained by taking the weighted mean of the contributing maser spots, where ``spot'' refers to emission detected in a single channel map. Column 5 gives the line-of-sight LSR velocity of the channel with the highest intensity. Column 6 gives the velocity range of the H$_2$O emission. Columns 7 and 8 indicate the peak and integrated flux densities of the highest-intensity channel, respectively. Column 9 gives the water maser luminosity.
In the three detected sources, the H$_2$O emission is weak, with fluxes below $\sim$230~mJy and velocity spread $\lesssim$1~km~s$^{-1}$.
Figure~\ref{fig:vla-spectra} presents spectra extracted at the position of highest intensity.
From this figure, it is clear that the emission is variable in both flux and velocity (the velocity range of the H$_2$O emission changes between epochs). This, together with the narrow widths of the lines, which are in the range from $0.7$ to $2.5$~km~s$^{-1}$, suggests that the detected H$_2$O emission is due to masers.
We also notice that spatially distinct groups of spots contribute to the observed H$_2$O spectra toward SSTgbs J1829053--014156 and SSTgbs J1830177--021211. These groups of spots may correspond to spatially separated features, where ``feature'' refers to emission observed in contiguous velocity channels at nearly the same position. Since the poor angular resolution of the VLA does not allow us to unambiguously separate the features, we average all maser spots for the positions reported in Table \ref{tab:line-imfit}, ignoring the possibility that they may be part of distinct features. \\
In the following, we discuss each detected source separately. \\
{\bf CARMA-7}. Also known as SerpS-MM18a \citep{Maury2019}, it is a Class 0 protostar \citep{Maury2011} with strong millimeter continuum emission \citep{Plunkett2015ApJ} and a highly collimated bipolar outflow extending $\sim$0.16~pc \citep{Plunkett_2015}. Several knots are seen along the outflow, suggesting episodic events that are attributed to variations in the accretion rate of mass onto the protostar. There is a nearby protostar, CARMA-6 (also known as SerpS-MM18b; \citealt{Maury2019}), located to the southwest of CARMA-7, which also has millimeter continuum emission and is classified as a Class 0+I object \citep{Kern2016AJ}. The molecular outflow associated with CARMA-6 has a much wider opening angle (see Fig. \ref{fig:outflows}). The dust masses of CARMA-7 and CARMA-6 are estimated to be 1.21 and 0.43~$M_\odot$, respectively \citep{Plunkett2015ApJ}. They have {\it internal} luminosities of 13 and 16~$L_\odot$ that were derived from the 70-$\mu$m band of {\it Herschel} assuming a distance of 350~pc \citep{Podio2020}. These luminosities are rescaled to 20 and 25~$L_\odot$ for a distance of 436~pc.
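The rescaling of luminosities to the adopted distance, used here and for the other sources below, follows from the inverse-square dependence of flux on distance, $L_{\rm new} = L_{\rm old}\,(d_{\rm new}/d_{\rm old})^2$. A minimal sketch with the CARMA-7 and CARMA-6 values:

```python
def rescale_luminosity(l_old, d_old_pc, d_new_pc):
    """Rescale a luminosity derived at distance d_old_pc to d_new_pc,
    using L proportional to d**2 at fixed observed flux."""
    return l_old * (d_new_pc / d_old_pc) ** 2

# Internal luminosities of CARMA-7 and CARMA-6, from 350 pc to 436 pc
print(round(rescale_luminosity(13.0, 350.0, 436.0)))  # -> 20 Lsun
print(round(rescale_luminosity(16.0, 350.0, 436.0)))  # -> 25 Lsun
```

The same relation reproduces the rescaled bolometric luminosities quoted for the other maser hosts (e.g., $2.9~L_\odot$ at 260~pc becomes $8.2~L_\odot$ at 436~pc).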
The water maser detected with the VLA toward CARMA-7 is found at the very base of the CO ($J$=2--1) molecular outflow (see Fig. \ref{fig:outflows}) traced by ALMA \citep{Plunkett_2015}. This position also coincides with the peak of the millimeter continuum emission (see right panel of Fig. \ref{fig:outflows}). The velocity-integrated intensity map of the CO ($J$=3--2) line is also shown in this figure (right panel). In CARMA-6, the red-shifted CO ($J$=3--2) outflow seems to correspond to the cavity walls of the CO ($J$=2--1) outflow.
Radio continuum sources associated with both CARMA-7 and CARMA-6 were found by \cite{Kern2016AJ} from observations at 4.75--7.25~GHz (their sources VLA~12 and VLA~13). The radio continuum emission is also detected in our observations (see Sect. \ref{sec:cont} and Fig. \ref{fig:vla-continuum-all} in Appendix \ref{sec:appendix}; sources \#10 and \#9). \cite{Kern2016AJ} derived radio spectral indices of $2.31\pm0.12$ and $0.51\pm0.08$ for CARMA-7 and CARMA-6, respectively, which are indicative of thermal radio emission from ionized gas, and proposed that the radio emission is tracing the base of collimated outflows. \\
\begin{figure*}[!bht]
\begin{center}
{\includegraphics[width=0.45\textwidth,angle=0]{carma7-alma-co.pdf}}
{\includegraphics[width=0.45\textwidth,angle=0]{carma7-alma-co32.pdf}}
\end{center}
\caption{Large scale molecular outflows traced by CO ($J = 2-1$) at 230.538 GHz from ALMA observations toward CARMA-7 and CARMA-6 \citep{Plunkett_2015}. The integration ranges are $-$20 to 4~km~s$^{-1}$ for the blue-shifted component and 12 to 40~km~s$^{-1}$ for red-shifted component. The $n$th contour is at $\left(\sqrt{2}\right)^{n}\times S_{\rm max} \times p$, where $S_{\rm max}=3.5$ and 6.3~Jy~beam$^{-1}$~km~s$^{-1}$ (for blue-shifted and red-shifted emission, respectively), $n$=0, 1, 2, 3, 4 ..., and $p$=10\%. The background is an ALMA map of 1-mm continuum emission \citep{Plunkett2018}.
The right panel shows a zoom-in of the central part of the mapped region. The contours correspond to CO ($J = 3-2$) emission at 345.796 GHz from ALMA observations, integrated in the same velocity range as the CO ($J = 2-1$) data, with $S_{\rm max}=10.8$~Jy~beam$^{-1}$~km~s$^{-1}$.
In both panels, the ``X''s indicate the average position of the water masers detected with the VLA (green)
and VLBA (yellow; see Sect. \ref{sec:masers-vlba}).
The beamsizes are shown in the bottom left corner of the images as white (molecular data) and cyan (continuum emission) ellipses.
}
\label{fig:outflows}
\end{figure}
{\bf SSTgbs J1829053--014156/IRAS~18264-0143}. This object is also a known YSO \citep{Dunham_2015}. The extinction-corrected slope of the infrared spectral energy distribution (SED) is 0.96,
which places the source in the Class 0+I phase \citep{Dunham_2015}. The extinction-corrected bolometric luminosity is $L_{\rm Bol} = 2.9~L_\odot$, obtained assuming a distance of 260~pc \citep{Dunham_2015}. This value is rescaled to $8.2~L_\odot$ for a distance of 436~pc. There is a 1.2~mm continuum peak close (at $\approx6''$) to the water maser, called Aqu-MM3, which was identified as a Class 0+I object \citep{Maury2011}. A dust mass of 1.7~$M_\odot$ and a bolometric luminosity of 14.3~$L_\odot$ (corrected for the adopted distance) were measured for the millimeter continuum source.
We detected radio continuum emission associated with this source (see Fig. \ref{fig:vla-continuum-all} in Appendix \ref{sec:appendix}, source \#2), which may be tracing the base of the jet. The maser position coincides, within the position errors, with the peak of the radio continuum emission (Fig. \ref{fig:vla-continuum-all}).
\begin{figure*}[!bht]
\begin{center}
{\includegraphics[width=0.7\textwidth,angle=0]{Serpens18h29m05.3s_outflows_mho.pdf}}
\end{center}
\caption{H$_2$ 2.12 $\mu$m image of MHO2213, the outflow associated with SSTgbs J1829053--014156/IRAS 18264--0143 \citep{Zhang2015}. The MHO features are marked with magenta ellipses and denoted with letters.
The green ``X'' denotes the position of the maser, which coincides with the position of the outflow driving source. }
\label{fig:outflow-J1829053}
\end{figure*}
We searched the literature for molecular outflows that can be associated with this maser source. Observations of the CO ($J=3-2$) transition at 345.796~GHz were conducted by \cite{Nakamura2011} with the ASTE 10~m telescope to study the outflow activity in Serpens South. In their images, there is a clear bipolar CO outflow in the vicinity of SSTgbs J1829053--014156 (the red-shifted and blue-shifted outflow components are called R6 and B11, in the nomenclature of \citealt{Nakamura2011}). The maser is very close to the base of the blue-shifted component (see Fig.\ 9 of \citealt{Nakamura2011}). This outflow is also traced by H$_2$ emission at 2.12~$\mu$m \citep{Davis2010,Zhang2015}. The associated molecular hydrogen emission-line object is MHO~2213, which is thought to be driven by IRAS~18264--0143. The position angle of $118^{\rm o}$ of the MHO is similar to the orientation of the CO outflow (see Fig. \ref{fig:outflow-J1829053}). The maser is located at the base of the MHO feature that is associated with the blue-shifted CO lobe. \\
{\bf SSTgbs J1830177--021211/IRAS~18276--0214}. This object is a known YSO \citep{Dunham_2015,Winston2018AJ}. The infrared SED has an extinction-corrected slope of $-2.22$ \citep{Dunham_2015},
which places it in the Class III phase. Later, based on its infrared colors, \cite{Winston2018AJ} classified it as a
disk-bearing pre-main-sequence object (equivalent to the Class II/transition disk class of \citealt{Dunham_2015}).
The extinction-corrected bolometric luminosity, rescaled to a distance of 436~pc, is $L_{\rm Bol} = 158~L_\odot$. Its mass has not been estimated.
\begin{figure}[!bht]
\begin{minipage}{\textwidth}
\begin{center}
{\includegraphics[width=0.48\textwidth,angle=0]{Serpens18h30m17.7s_outflows.pdf}}
\end{center}
\caption{Molecular outflow lobes traced by CO ($J=1-0$) at 115.27~GHz \citep{Nakamura2019} toward SSTgbs J1830177--021211. The integration ranges are $-$15 to 4~km~s$^{-1}$ for the blue-shifted component and 11 to 30~km~s$^{-1}$ for red-shifted component. The labels denote lobes identified by \cite{Nakamura2011} from CO ($J=3-2$) observations at 345.796~GHz. The $n$th contour is at $\left(\sqrt{2}\right)^{n}\times S_{\rm max} \times p$, where $S_{\rm max}=0.2$~K~km~s$^{-1}$, $n$=0, 1, 2, 3, 4 ..., and $p$=11\%. In the background we show a {\it Herschel} 70 $\mu$m map retrieved from the Science Archive ({\url{http://archives.esac.esa.int/hsa/whsa/}}).
Orange circles mark the location of the {\it Herschel} protostellar cores \citep{Konyves2015} that have been identified as the outflow driving sources \citep{Nakamura2019}. The source with detected H$_2$O masers is indicated by the green circle.
The beamsize is shown in white in the bottom left corner.
}
\label{fig:outflows2}
\end{minipage}
\end{figure}
\input{vlba-astro}
The detection of water maser emission in this source is unexpected given that earlier surveys have suggested that maser activity disappears after the main accretion and outflow (Class 0--Class I) phase \citep{Furuya2001}. The maser has also been detected in our follow-up VLBA observations (see Sect. \ref{sec:masers-vlba}). We did not detect radio continuum emission associated with the maser, and no radio continuum has been reported in the literature either. There are three molecular outflow lobes (B14, B15 and R8 in the nomenclature of \citealt{Nakamura2011}) in the surroundings of the water maser as seen in Fig. \ref{fig:outflows2}, where we show CO ($J=1-0$) data at 115.27~GHz taken with the Nobeyama telescope \citep{Nakamura2019}. The outflow lobes identified by \cite{Nakamura2011} are indicated in this figure, as well as the positions of the putative driving sources, which are taken from the {\it Herschel} catalog of protostellar cores \citep{Konyves2015}. The CO ($J=1-0$) emission at the position of the maser is relatively weak. The H$_2$ 2.12~$\mu$m image is dominated by very strong emission from IRAS~18276--0214 (Fig. 12.9 in \citealt{Zhang2015}), so it is difficult to find an association with an H$_2$ outflow feature. \\
\subsection{VLBA detected sources with H$_2$O maser emission}\label{sec:masers-vlba}
\begin{figure*}[!bht]
\begin{center}
{\includegraphics[width=0.458\textwidth,angle=0]{G028.55+3.69-BO061B1-sp.pdf}}
{\includegraphics[width=0.45\textwidth,angle=0]{G028.66+3.82-BO061A4-self-sp.pdf}}
\end{center}
\caption{VLBA spectra of the 22 GHz H$_2$O maser transition toward SSTgbs~J1830177--021211/IRAS 18276--0214 (left) and CARMA--6 (right) obtained by integrating over an area that covers all detected spots. The legends indicate the epoch of observation. The vertical dashed line at $V_{\rm LSR} = 8$~km~s$^{-1}$ marks the systemic velocity of the cloud. }
\label{fig:vlba-spectra}
\end{figure*}
{\bf SSTgbs J1830177--021211/IRAS~18276--0214}. The maser emission is seen at $V_{\rm LSR}$ from 4.2 to $6.6$~km~s$^{-1}$ (left panel in Fig. \ref{fig:vlba-spectra}). These velocities are blue-shifted with respect to the velocity of the cloud of $8$~km~s$^{-1}$ \citep{Kirk2013} by a few km~s$^{-1}$.
The brightest spot has a peak flux of 0.2~Jy~beam$^{-1}$, which is higher than the highest flux detected with the VLA. Figure \ref{fig:vlba-spots} shows the spatial distribution of the VLBA detected maser spots in four epochs. We identify four main features that occupy an extent of about 2~mas ($\approx$0.9~au) and are aligned roughly along the northeast-southwest direction. The strongest feature, labeled \#1, has persisted over the four observed epochs, which cover a time baseline of $\approx$7 months. Features \#2 and \#3 were detected on the first and second epochs, and feature \#4 only on the second epoch.
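The angular-to-physical conversions used throughout (e.g., 2~mas $\approx$ 0.9~au here, and 4~mas $\approx$ 1.7~au for CARMA-6 below) follow the small-angle relation size[au] $= \theta$[arcsec] $\times$ $d$[pc]; a minimal sketch at the adopted distance:

```python
def angular_to_au(theta_mas, distance_pc):
    """Projected linear size [au] subtended by an angle theta_mas [mas]
    at distance_pc [pc], via the small-angle approximation."""
    return (theta_mas / 1000.0) * distance_pc

D_PC = 436.0  # adopted distance to Serpens South [pc]
print(round(angular_to_au(2.0, D_PC), 2))   # ~0.87 au (maser extent, this source)
print(round(angular_to_au(4.0, D_PC), 2))   # ~1.74 au (CARMA-6 maser extent)
print(round(angular_to_au(50.0, D_PC), 1))  # ~21.8 au (CARMA-6 offset from continuum peak)
```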
Table \ref{tab:astro-vlba} gives the error-weighted mean position offsets and intensity-weighted $V_{\rm LSR}$ for each feature obtained from all contributing spots to that feature. These positional offsets are with respect to the position of feature \#1, which we fixed at the origin in all epochs.
Fig. \ref{fig:vlba-spots} shows that feature \#2 (panel d) moved toward the southeast, while feature \#3 (panel c) moved toward the east between two consecutive epochs separated by only 13 days. Since feature positions are relative to feature \#1, we can investigate the internal proper motions of the two features, \#2 and \#3. In doing this, we remove the effect of the parallax, which is not well constrained by the current data. We obtain proper motions of $(\mu_\alpha\cos\delta,\mu_\delta)=(1.9\pm0.8,-2.6\pm1.2)$~mas~yr$^{-1}$ for feature \#2 and $(\mu_\alpha\cos\delta,\mu_\delta)=(-4.7\pm3.0,3.7\pm1.0)$~mas~yr$^{-1}$ for feature \#3. Although small, these motions correspond to positional offsets larger than the astrometric uncertainties of about 70~$\mu$as (Sect. \ref{sec:obs-vlba}), and they suggest that the two features are moving toward each other.
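The quoted proper motions correspond to positional displacements of order 0.1~mas over the 13-day interval; a minimal illustrative sketch of the conversion (the offsets below are back-computed from the quoted proper motions, not independently measured values):

```python
def proper_motion(offset_mas, dt_days):
    """Proper motion [mas/yr] implied by a positional offset [mas]
    accumulated over dt_days."""
    return offset_mas / (dt_days / 365.25)

def offset_from_pm(pm_mas_yr, dt_days):
    """Inverse: positional offset [mas] accumulated by pm_mas_yr over dt_days."""
    return pm_mas_yr * dt_days / 365.25

# Feature #2: (1.9, -2.6) mas/yr over 13 days
print(round(offset_from_pm(1.9, 13), 3))   # ~0.068 mas
print(round(offset_from_pm(-2.6, 13), 3))  # ~-0.093 mas
# i.e., comparable to or above the ~0.07 mas astrometric uncertainty
```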
We attempt to estimate the {\it absolute} proper motions of feature \#1 by fitting the positions of the spot detected at $V_{\rm LSR}=6.1$~km~s$^{-1}$, where the proper motions are free parameters and the parallax is fixed to a constant value. We found that the resulting proper motions largely depend on the assumed value for the parallax. In addition, the fits yield lower residuals for parallaxes that are in the range from 0.5 to 1.0~mas.
Further observations spanning a larger time baseline will allow us to determine if the relative motions we measured continue over time, and disentangle {\it absolute} proper motions from the parallax.
\begin{figure*}[!bht]
\begin{center}
{\includegraphics[width=0.45\textwidth,angle=0]{vlba-spots-G028.55+3.69-f0.pdf}}
{\includegraphics[width=0.43\textwidth,angle=0]{vlba-spots-G028.55+3.69-f2.pdf}}
{\includegraphics[width=0.38\textwidth,angle=0]{vlba-spots-G028.55+3.69-f3.pdf}}
{\includegraphics[width=0.46\textwidth,angle=0]{vlba-spots-G028.55+3.69-f1.pdf}}
{\includegraphics[width=0.6\textwidth,angle=0]{vlba-spots-G028.55+3.69.pdf}}
\end{center}
\caption{Spatial distribution of the maser spots detected with the VLBA toward SSTgbs~J1830177--021211. The position offsets are with respect to the error-weighted mean position of feature \#1.
The spots are color-coded by the LSR velocity (color bar). We use different symbols to distinguish between 4 epochs observed during 2020 as follows: circles -- Mar 27, triangles -- Apr 9, squares -- Sep 29, pentagons -- Nov 1. For each epoch and feature, the symbol with black edge indicates the error-weighted mean position of all contributing spots. Panels (a) to (d) show close-up views of the features plotted in panel (e).
}
\label{fig:vlba-spots}
\end{figure*}
In Fig. \ref{fig:outflows2}, we see weak blue-shifted CO emission around the location of the masers, which may hint at a molecular outflow too weak to be clearly detected. This could happen if the star is not in Serpens South, but behind the molecular cloud, which could absorb the emission from the outflow. We searched the {\it Gaia} Early Data Release 3 (EDR3) catalog and found an astrometric solution for the optical counterpart of SSTgbs J1830177--021211. The parallax reported in this catalog is $1.52\pm0.84$~mas \citep{Gaia2016,GaiaEDR3}, which is still consistent (within the errors) with a distance of 436~pc, although it may suggest a larger distance. Additional observations of the maser spots will allow us to also fit the parallax and provide an independent measurement of the distance to the star.
It is important to note that the classification of SSTgbs J1830177--021211 as a YSO is based on the infrared spectral index \citep{Dunham_2015}. However, Asymptotic Giant Branch (AGB) stars with infrared excesses can be misidentified as YSOs, and the contamination fraction is non-negligible among Class~II--Class~III sources \citep{Oliveira2009}. Thus, SSTgbs J1830177--021211 could be a background AGB star, with the water masers probably tracing an expanding or contracting circumstellar envelope. Given the small relative proper motions we measured for two maser features, and the fact that smaller parallaxes are favored by the astrometric fits and are within the $1\sigma$ uncertainty of the {\it Gaia} based parallax measurement, we lean toward the AGB star scenario as the most plausible interpretation. \\
{\bf CARMA-6}. Although we did not detect the maser associated with CARMA-7 using the VLBA, we did find a very bright maser ($\sim$12~Jy~beam$^{-1}$) associated with CARMA-6. This maser was seen serendipitously in our VLBA data in September 2020, although it had not been detected with the VLA in any of the three observed epochs. Considering the rms noise level of the VLA observations (cf. Table \ref{tab:obs}), the VLBA detection of CARMA-6 implies an increase of maser flux density by more than two orders of magnitude in the highest intensity channel. This may correspond to a flare event, although less prominent than water maser flares seen toward massive stars \citep[e.g.][]{Hirota2014,Volvach2019}.
Additional data correlation at the position of CARMA-6 was obtained in a subsequent epoch. The spectrum observed in October 2020 is shown in Fig.~\ref{fig:vlba-spectra}, after integrating over the area containing all maser spots.
Figure \ref{fig:vlba-spots-2} shows the spatial and velocity distribution of the spots detected in the images.
Because the maser is very bright, in this case we phase-referenced the visibility data to the maser spot at $V_{\rm LSR}=8.5$~km~s$^{-1}$.
We detect four groups of spots or {\it features} that are oriented in the southeast-northwest direction, covering an angular extent of about 4~mas (1.7~au).
The groups located to the northwest (NW), hereafter the NW cluster, delineate a nearly straight filament. The emission is red-shifted with respect to the systemic velocity of the cloud (8~km~s$^{-1}$), covering LSR velocities smaller than those of the red-shifted lobe of the CO ($J$=2--1) outflow traced by ALMA at larger angular scales (Fig. \ref{fig:outflows}). We see a velocity gradient through the filament with LSR velocities increasing to the north. The groups seen to the southeast (SE), hereafter the SE cluster, show LSR velocities close to the systemic velocity. Here, the maser spots are distributed along two opposite arc-like structures, displaying velocity gradients through the arcs, with LSR velocities increasing to the south. Similar gradients have been seen, for instance, in Serpens SMM1 \citep[][their Figure 3]{Moscadelli2006}. In Fig. \ref{fig:vlba-spots-2}, the diamonds indicate the error-weighted mean position of all contributing emission spots (indicated by the stars) to each particular feature. The line-of-sight velocity of each feature is obtained as the intensity-weighted mean $V_{\rm LSR}$ of the contributing spots. Fig. \ref{fig:vlba-spots-2} shows that the line-of-sight velocities of the features increase to the north.
We argue that the water masers originate in shocks between the red lobe of the molecular outflow and the surrounding material. As mentioned above, the NW and SE clusters draw a linear structure with the velocity gradient through this structure. The velocity gradient may arise from a rotating protostellar jet. Observationally, rotation signatures in jets have been seen as velocity gradients perpendicular to the jet axis \citep[e.g.][]{Chen2016,Lee2017}. In CARMA-6, the orientation of the protostellar jet axis is not yet very well constrained. In the left panel of Fig. \ref{fig:outflows} we see that the molecular outflow is oriented close to the north-south direction, thus the jet may be oriented in the same direction. This seems to be supported by the orientation of the dust disk detected in the ALMA continuum map at 347~GHz shown in Fig. \ref{fig:alma-cont-347} of Appendix \ref{sec:appendix}. The deconvolved size of this disk is $0\rlap.{''}2\times0\rlap.{''}14$ with a position angle of 82$^{\rm o}$. If the jet is perpendicular to the disk, the jet position angle would be 172$^{\rm o}$, while the water maser filament has a position angle of $\approx$130$^{\rm o}$. This seems to work against a rotating protostellar jet as the explanation for the observed maser velocity gradient.
In Fig.~\ref{fig:alma-cont-347} we compare the positions of the maser spots (phase-referenced to the extragalactic calibrator) against the distribution of the ALMA continuum emission at 347~GHz. We see that the spots are located within the disk, but have a significant offset of 50~mas ($\approx22$~au) with respect to the continuum peak; the astrometric accuracy of the ALMA observations is about 9~mas\footnote{\url{https://help.almascience.org/kb/articles/what-is-the-astrometric-accuracy-of-alma}}.
Because the water masers appear to be located at the base of the outflow (and within the protostellar disk), and the linear scale of the masers of 1.7~au is smaller than the typical size of protostellar disks ($\lesssim60$~au; \citealt{Maury2019}), the velocity gradient may trace the velocity structure of the disk. Therefore, the observed water maser flare and the velocity gradient may be directly linked to an episodic disk accretion burst in CARMA-6.
The two epochs where the masers were detected are separated by only two months, covering a time baseline too short to investigate the internal kinematics of the masers. Additional VLBA observations will allow us to establish the kinematic structure of the water masers and further investigate the above alternative scenarios.
\begin{figure*}[bht]
\begin{center}
{\includegraphics[width=0.6\textwidth,angle=0]{vlba-spots-G028.66+3.82.pdf}}
\end{center}
\caption{Spatial distribution of the maser spots detected with the VLBA toward CARMA--6. The spots are color-coded by the LSR velocity (color bar). The stars indicate offsets measured on Oct 25, 2020, which are relative to $\alpha$(J2000)=18:30:03.538, $\delta$(J2000)=--02:03:08.377. For each feature, the diamonds indicate the error-weighted mean position of all contributing spots to that feature.}
\label{fig:vlba-spots-2}
\end{figure*}
\subsection{Continuum sources detected with the VLA}\label{sec:cont}
We performed a visual inspection of the maps that were constructed for the 48 VLA fields, first looking at the individual epochs, and then at the maps of the combined data from the three epochs (see Sec. \ref{sec:vlaobs}). The visual inspection was done on the images uncorrected for the primary beam response, as this correction increases the noise toward the field edges, which can cause weak sources to be confused with noise. However, once identified, the properties of the sources were measured in the primary-beam-corrected images.
Maps of $9''\times9''$ in size around the location of the detected sources are presented in Figures \ref{fig:vla-continuum-all}--\ref{fig:vla-continuum-bck} in Appendix \ref{sec:appendix}. The maps are shown for all available epochs, although we note that not every source is detected in every epoch. Table \ref{tab:imfit} lists the 17 detected radio continuum sources,
as well as their positions and fluxes as obtained by fitting the brightness distribution with a Gaussian model using the task {\tt imfit} in CASA.
The fluxes are listed for each epoch and for the combined image.
Not all of the detected radio continuum sources are associated with known young stars or other types of objects; there are 5 sources that have no counterparts (within a radius of 2$''$) in SIMBAD\footnote{\url{http://simbad.u-strasbg.fr/simbad/}}. On the other hand, we found that 10 sources are associated with known or candidate YSOs \citep{Povich2013}, and 2 more are associated with known radio sources \citep{Condon1998,OrtizLeon2015,Kern2016AJ}, also within a radius of 2$''$. Table \ref{tab:imfit} gives the names of the known sources. Out of the 12 objects that have an association with a known source, 6 had not been detected before in the radio according to SIMBAD. Therefore, we are reporting 6+5=11 new radio continuum detections.
The newly detected radio continuum sources with no counterparts at any other wavelength are \#1, \#5, \#14, \#15 and \#17. Source \#1 is detected in the four observed epochs with fluxes of 1.7--1.9~mJy. The other sources (\#5, \#14, \#15 and \#17) are detected in only one epoch, with fluxes above 0.22~mJy. In addition, sources \#3 and \#12, which have been reported before in the literature, also lack counterparts at any other wavelength.
Following \cite{Anglada1998}, we can estimate the number of expected background sources inside a field of diameter $\theta_F$ as,
\begin{equation}
N = 1.4 \left\lbrace 1 -\exp \left[ -0.0066 \left( \frac{\theta_F}{\rm arcmin} \right)^2 \left( \frac{\nu}{\rm 5~GHz} \right)^2 \right] \right\rbrace \times \left( \frac{S_0}{\rm mJy} \right)^{-0.75} \left( \frac{\nu}{\rm 5~GHz} \right)^{-2.52}
\end{equation}
\noindent where $S_0$ is the detectable flux density threshold and $\nu$ the observing frequency. In our observations, $\nu$=22.2~GHz, and $S_0=3\times{\rm rms}\approx0.09$~mJy (c.f.\ Sect. \ref{sec:vlaobs}). Using a field size of $\theta_F=2\rlap.{'}7$, we obtain $\approx7$ expected background objects in the 48 observed fields. Thus, all of the unclassified sources with detected radio continuum emission are probably extragalactic objects.
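As a quick numerical cross-check of this estimate (not part of the original analysis; the function and variable names below are ours), the expected count can be evaluated directly from the expression above:

```python
import math

def n_background(theta_f_arcmin, s0_mjy, nu_ghz):
    """Expected number of background sources per field, following the
    Anglada et al. (1998) expression quoted in the text."""
    x = nu_ghz / 5.0
    return (1.4 * (1.0 - math.exp(-0.0066 * theta_f_arcmin**2 * x**2))
            * s0_mjy**-0.75 * x**-2.52)

# Values quoted in the text: nu = 22.2 GHz, S0 = 3*rms ~ 0.09 mJy, theta_F = 2.7'
per_field = n_background(2.7, 0.09, 22.2)
total = 48 * per_field  # 48 observed fields
print(per_field, total)  # ~0.12 per field, ~6 in total
```

This gives roughly 6 expected background sources over the 48 fields, of the same order as the $\approx7$ quoted above; the exact value is sensitive to the rounding of $S_0$ and $\theta_F$.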
Since our targets were observed in multiple epochs, covering a timescale of about 3 weeks, we can investigate the variability of the continuum emission between the epochs. We estimated the variability as the difference between the maximum and minimum peak flux density, normalized by the maximum flux. For the estimation of the variability uncertainties, we adopted a flux density calibration error of 15\%\footnote{\url{https://science.nrao.edu/facilities/vla/docs/manuals/oss/performance/fdscale}}, which was added in quadrature to the statistical errors obtained from the Gaussian fits. We found that 9 sources show high levels of variability, with variations $\gtrsim50\%$ at $3\sigma$. These sources are \#4, \#5, \#11, \#12, \#13, \#14, \#15, \#16 and \#17. Four of these objects are YSOs; the other 5 are background candidates. Thus, in terms of variability, we do not see a distinction between the two groups. Previous works have found a similar result at shorter radio wavelengths. For instance, \cite{Kounkel2017} showed that both YSOs and extragalactic objects show strong radio continuum variability at 7.5~GHz.
\section{Discussion}\label{sec:discussion}
The four sources with H$_2$O maser emission detected here are known to be associated with phenomena related to YSOs. However, while CARMA-7, CARMA-6 and SSTgbs J1829053--014156 are in the early Class 0--Class I phase, SSTgbs J1830177--021211 is probably in the more evolved Class II phase. Three of the sources with associated maser emission drive large-scale outflows. From the spatial distribution of the maser spots, we argue that in all these sources the masers originate very close to the star, and are excited by the interaction between the molecular outflows and the surrounding dense material, likely that of the circumstellar disk.
Extinction-corrected bolometric luminosities are available for the 162 stars of the catalog by \cite{Dunham_2015} that were observed with the VLA. The distribution of these bolometric luminosities, which have been rescaled assuming a distance of 436~pc, is shown in the left panel of Fig. \ref{fig:lbol}. Also shown in this figure are the bolometric luminosities of SSTgbs J1829053--014156 and SSTgbs J1830177--021211, and the internal luminosities of CARMA-7 and CARMA-6. As expected, maser emission was detected toward the objects with the highest luminosities. Figure \ref{fig:lbol} suggests that there is a bolometric luminosity threshold of $L_{\rm Bol}\approx10~L_\odot$ to excite water maser emission. However, water masers have been detected in objects with lower luminosities before \citep{Furuya2003}, for example, in VLA~1623 ($L_{\rm Bol}\approx1~L_\odot$, $d$ = 138~pc; \citealt{Andre1993,Ortiz2018}) and GF 9-2 ($L_{\rm Bol}\approx0.3-1.7~L_\odot$, $d$ = 200 -- 474~pc; \citealt{Furuya2008,Podio2020}).
It is still possible that water masers associated with lower-luminosity objects in Serpens South were missed due to variability. For instance, in CARMA-6, the masers were not detected in any of the three epochs observed with the VLA, but were serendipitously detected with the VLBA about 1.5 years later.
We estimate water maser luminosities according to
\begin{equation}\label{eq:lmaser}
L_{\rm H_2O} = 4\pi d^2 S_{\rm int} \Delta V \nu_0/c,
\end{equation}
\noindent where $S_{\rm int}$ is the maser integrated flux density, $\Delta V$ is the velocity range of the maser emission, $\nu_0=22,235.080$~MHz is the rest frequency of the $J=6_{1,6}-5_{2,3}$ water line, $c$ the speed of light, and $d$ the distance to the source. The water maser luminosities are listed in Column 9 of Tables \ref{tab:line-imfit} and \ref{tab:astro-vlba} for the sources detected with the VLA and VLBA, respectively.
Using a 3$\sigma$ channel sensitivity of $\approx$48~mJy from our VLA observations (c.f.\ Table \ref{tab:obs}), a velocity spread of the masers of 3 channels, and $d=436$~pc, Eq. \ref{eq:lmaser} gives
an H$_2$O luminosity of $6\times10^{-11}~L_\odot$. Assuming the correlation between $L_{\rm H_2O}$ and $L_{\rm Bol}$ found by \cite{Shirley2007} for high-luminosity YSOs, according to which
\begin{equation}\label{eq:Shirley}
L_{\rm H_2O} = 3\times10^{-9}L_{\rm Bol}^{0.94},
\end{equation}
\noindent this upper limit in H$_2$O luminosity corresponds to $L_{\rm Bol}\approx0.02~L_\odot$. Thus, our VLA observations were in principle sensitive enough to detect all H$_2$O masers associated with low-luminosity protostars in Serpens South with $L_{\rm Bol}\gtrsim0.02~L_\odot$. As noted by \cite{Gomez2017}, the correlation between bolometric and maser luminosities may not hold for the lowest-luminosity YSOs. We plot in the right panel of Fig. \ref{fig:lbol} the bolometric luminosities of the 4 objects that have water masers and the maser luminosities measured at each individual epoch observed with the VLA and the VLBA. Due to the strong variability in both the flux and the velocity spread of the maser emission, $L_{\rm H_2O}$ varies in all sources by more than one order of magnitude. CARMA-6 shows the highest variability, since the non-detection with the VLA implies a change in $L_{\rm H_2O}$ by about 4 orders of magnitude. In Fig. \ref{fig:lbol}, the two stars detected with the VLBA (SSTgbs~J1830177--021211 and CARMA-6) fall within one order of magnitude of the position predicted by the $L_{\rm H_2O}$ versus $L_{\rm Bol}$ empirical relationship. A scatter of one order of magnitude was also observed for this relationship \citep[][their Fig.\ 3]{Shirley2007}.
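The quoted threshold can be recovered by inverting Eq. \eqref{eq:Shirley} at the VLA sensitivity limit (a quick sketch with our own variable names, not code from the original analysis):

```python
# Invert the Shirley et al. (2007) relation L_H2O = 3e-9 * L_Bol**0.94
# at the VLA sensitivity limit quoted above.
l_h2o_limit = 6e-11                                   # L_sun
l_bol_limit = (l_h2o_limit / 3e-9) ** (1.0 / 0.94)
print(l_bol_limit)  # ~0.016, i.e. ~0.02 L_sun at one significant figure
```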
\section{Conclusions}\label{sec:conclusions}
We have conducted an interferometric survey of 22~GHz H$_2$O masers toward the low-mass star-forming region Serpens South. Our observations were first carried out with the VLA, covering all known protostars (Class 0--Class I objects) across the region. The VLA observations revealed, for the first time, three water masers in the region, which are found to be associated with CARMA-7, SSTgbs J1830177--021211 and SSTgbs J1829053--014156. Follow-up VLBA observations were carried out toward the VLA-detected sources to investigate the spatial distribution and kinematics of the masers. The VLBA observations found water maser emission associated with CARMA-6, which had not been detected with the VLA.
Three water maser sources (CARMA-7, SSTgbs J1829053--014156 and CARMA-6) are associated with Class 0--Class I objects that drive large-scale molecular outflows and also display radio continuum emission from ionized gas. The water masers are found at the base of the molecular outflows, and we propose that in all three objects the masers are excited in shocks driven by the interaction between a protostellar jet and the circumstellar material. On the other hand, the source responsible for the excitation of the water maser associated with SSTgbs J1830177--021211 is unknown. This source has been classified in the literature as a Class II object and has no associated molecular outflows or radio jets. The small relative proper motions of the two maser features that persisted over two epochs, and the small parallax hinted at by the astrometric fits to the brightest feature, suggest that SSTgbs J1830177--021211 is most likely a background AGB star with the water masers tracing an expanding or contracting circumstellar envelope. Further VLBI observations will allow us to obtain the parallax and proper motions of the maser spots, to test the proposed mechanism for the water maser excitation in these objects, and to confirm the AGB scenario proposed for SSTgbs J1830177--021211.
We also investigated the distributions of the bolometric luminosity of the sources hosting 22~GHz H$_2$O masers and of the 162 YSOs covered by our observations. The comparison of the two distributions suggests a luminosity threshold for the water maser emission of $L_{\rm Bol}\approx10~L_\odot$. However, the water masers show strong variability, thus lower-luminosity sources may have been missed by the observations.
Lastly, we detected 11 new sources with radio continuum emission at~22 GHz, of which 6 are known or candidate YSOs, and 5 are unknown sources without counterparts at any other wavelength. Based on the estimation of the number of expected background sources in the observed area, we suggest that all of these unclassified sources are probably extragalactic objects.
\acknowledgments
The authors are grateful to the anonymous referee, whose comments helped to improve this paper.
G.N.O.-L.\ acknowledges support from the von Humboldt Stiftung.
L.L.\ acknowledges the support of DGAPA/PAPIIT grants IN112417 and IN112820, CONACyT-AEM grant 275201, and CONACyT-CF grant 263356. The authors acknowledge MiaoMiao Zhang for sharing his Canada--France--Hawaii
Telescope near-infrared data. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under a cooperative agreement by Associated Universities, Inc. This paper makes use of the following ALMA data: ADS/JAO.ALMA \#2012.1.00769.S and \#2015.1.00283.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
\vspace{5mm}
\begin{figure}[!bht]
\begin{center}
{\includegraphics[width=0.4\textwidth,angle=0]{lum-dist.pdf}}
{\includegraphics[width=0.5\textwidth,angle=0]{shirley.pdf}}
\end{center}
\caption{{\it Left}: Bolometric luminosity distribution of 162 YSOs covered by our VLA observations in grey and the 4 objects with detected water maser emission. {\it Right}: Water maser luminosity versus bolometric luminosity for the objects detected with the VLA and the VLBA as described in the legend. The arrows indicate upper limits. The dashed line represents the empirical relation expressed by Eq. \ref{eq:Shirley} \citep{Shirley2007}.
}
\label{fig:lbol}
\end{figure}
\include{imfit-aas}
\section{Introduction}
For a set $V$ and a positive integer $r$, let $V^{(r)}$ be the family of all $r$-subsets of $V$. An {\it $r$-uniform hypergraph} $G$ (or {\it $r$-graph} for short) consists of a set $V$ of vertices and a set $E\subseteq V^{(r)}$ of edges. For
an integer $n\in \mathbb{N}$, we denote the set $\{1, 2, 3,\dots, n\}$ by $[n]$. Let
$K_t^{(r)}$ (or $[t]^{(r)}$) denote the {\it complete $r$-graph of order $t$}, that is, the $r$-graph of order $t$ containing all possible edges. Given an $r$-graph $G$, we use $e(G)$ to denote the number of edges of $G$.
\medskip
\begin{defn}
For an $r$-graph
$G$ of order $n$ and a vector $\overset{\rightarrow}{x}=(x_1,\dots,x_n)\in \mathcal{R}^n$, the {\it weight polynomial} of $G$ is
$$w(G,\overset{\rightarrow}{x})=\sum_{e\in E}\prod_{i\in e}x_i.$$
\end{defn}
\begin{defn}
We call $\overset{\rightarrow}{x}=(x_1,\dots,x_n)\in \mathcal{R}^n$ a {\it legal weighting} for $G$ if $x_i\geq0$ for any $i\in[n]$ and $\sum_{i=1}^n x_i=1$.
\end{defn}
\begin{defn}
The {\it Lagrangian} of $G$ is defined to be $\lambda(G) = \max w(G, \overset{\rightarrow}{x})$, where the maximum is
over all legal weightings for $G$. We call a legal weighting $\overset{\rightarrow}{x}$ {\it optimal} if $ w(G, \overset{\rightarrow}{x})=\lambda(G)$.
\end{defn}
Lagrangians for graphs (i.e., $2$-graphs) were introduced by Motzkin and Straus in 1965 \cite{MS1965}. They determined the following simple expression for the Lagrangian of a graph.
\begin{thm}[\cite{MS1965}]\label{graph}
If $G$ is a graph in which a largest clique has order $t$, then
$$\lambda(G)=\lambda(K_t^{(2)})=\frac{1}{2}(1-\frac{1}{t}).$$
\end{thm}
This theorem implies Tur\'an's theorem, and Lagrangians are closely related to Tur\'an densities.
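Theorem \ref{graph} is easy to verify numerically on small examples. The following brute-force sketch (helper code of ours, using a coarse grid on the simplex of legal weightings) recovers $\lambda(G)$ for a $4$-cycle, whose largest clique has order $t=2$, and for the complete graph $K_4^{(2)}$:

```python
from itertools import combinations

def weight_poly(edges, x):
    """w(G, x): sum over edges of the product of the vertex weights."""
    total = 0.0
    for e in edges:
        p = 1.0
        for v in e:
            p *= x[v]
        total += p
    return total

def lagrangian_grid(edges, n_vertices, steps=20):
    """Coarse brute-force estimate of lambda(G): maximize w(G, x) over
    all legal weightings whose entries are multiples of 1/steps."""
    best = 0.0
    def rec(i, remaining, x):
        nonlocal best
        if i == n_vertices - 1:
            x[i] = remaining / steps
            best = max(best, weight_poly(edges, x))
            return
        for k in range(remaining + 1):
            x[i] = k / steps
            rec(i + 1, remaining - k, x)
    rec(0, steps, [0.0] * n_vertices)
    return best

# A 4-cycle has largest clique of order t = 2, so the Motzkin--Straus
# theorem predicts lambda = (1/2)(1 - 1/2) = 1/4; K_4 gives (1/2)(1 - 1/4).
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
k4 = list(combinations(range(4), 2))
print(lagrangian_grid(c4, 4), lagrangian_grid(k4, 4))  # 0.25 0.375
```

Both optima happen to lie on the grid, so the coarse search returns the exact values here; in general it only gives a lower bound on $\lambda(G)$.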
Let
\begin{equation}
\label{eq:mum}
\lambda_r(m)=\max\{ \lambda(H)\colon H \mbox{ is an $r$-graph with $m$ edges}\}.
\end{equation}
There is a rich literature on determining or estimating the values of $\lambda_r(m)$.
For distinct $A,B\in \mathbb{N}^{(r)}$, we say that $A$ is less than $B$ in the colexicographic ordering if $\max(A\bigtriangleup B)\in B$.
Let $C_{r,m}$ be the subgraph of $\mathbb{N}^{(r)}$ consisting of the first $m$ sets in the colexicographic ordering. If $r=3$, we simply write $C_m$ instead of $C_{3,m}$.
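For concreteness, comparing two $r$-sets in colexicographic order amounts to comparing their elements largest-first, so $C_{r,m}$ can be generated mechanically (a short sketch; helper names are ours):

```python
import math
from itertools import combinations

def colex_first(r, m):
    """Edge set of C_{r,m}: the first m r-subsets of {1, 2, ...} in
    colexicographic order.  A < B in colex iff max(A symmetric-difference B)
    lies in B, i.e. compare the elements of each set largest-first."""
    n = r
    while math.comb(n, r) < m:  # the first m colex sets all lie in [n]
        n += 1
    all_sets = combinations(range(1, n + 1), r)
    return sorted(all_sets, key=lambda a: a[::-1])[:m]

print(colex_first(3, 5))
# [(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4), (1, 2, 5)]
```

Note that the first ${t\choose r}$ sets in this order are exactly $[t]^{(r)}$, which is why $\lambda(C_{r,m})$ is constant on the intervals discussed below.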
In 1989, Frankl and F\"{u}redi \cite{FF1989} made the following conjecture.
\begin{conj}[\cite{FF1989}]\label{conjecture}
For any $r\geq 3$ and $m\geq 1$, we have $\lambda_r(m)=\lambda(C_{r,m})$.
\end{conj}
For $r=2$, the validity of Conjecture \ref{conjecture} follows from Theorem \ref{graph}. However, this conjecture is still open even for the first case $r=3$.
Talbot \cite{T2002} has shown that $\lambda(C_{r,m})$ is a constant ($={t-1\choose r}/(t-1)^r$) for
$m \in [{t-1\choose r}, {t\choose r}-{t-2\choose r-2}]$ and jumps for every $m\in [{t\choose r}-{t-2 \choose r-2}, {t\choose r}]$. Most known results are in the interval $[{t-1\choose r}, {t \choose r}-{t-2\choose r-2}]$. For $r=3$, Talbot \cite{T2002} first proved that Conjecture \ref{conjecture} holds whenever ${{t-1}\choose 3}\leq m\leq {t \choose 3}-{{t-2}\choose 1}-(t-1)={t\choose3}-(2t-3)$ for some $t\in \mathbb{N}$. Tang, Peng, Zhang and Zhao \cite{TPZZ2016}
extended the above range to ${{t-1}\choose 3}\leq m\leq {t\choose 3}-{{t-2}\choose 1}-\frac 1 2(t-1)$. Recently, Tyomkyn \cite{T2017} proved the following.
\begin{thm}[\cite{T2017}]\label{3/4}
\begin{enumerate}
\item For $r=3$, there exists a constant $\delta_3>0$ such that
for any $m$ satisfying ${{t-1}\choose3}\leq m\leq {t\choose 3}- {t-2\choose 1}-\delta_3t^{3/4}$ we have $$\lambda_3(m)=\frac{{t-1\choose 3}}{(t-1)^3}.$$
\item For $r\geq 4$, there exists a constant $\delta_r>0$ such that
for any $m$ satisfying ${{t-1}\choose r}\leq m\leq {t\choose r}-\delta_rt^{r-2}$ we have
$$\lambda_r(m)=\frac{{t-1\choose r}}{(t-1)^r}.$$
\end{enumerate}
\end{thm}
A few good upper bounds on $\lambda(G)$ are known for general $m$. The following result,
which was conjectured (and partially solved for $r=3,4,5$ and any $m$; and for the case $m\geq {4(r-1)(r-2)\choose r}$) by Nikiforov \cite{Nikiforov2018},
was completely proved by the second author.
\begin{thm}[{Lu\cite{maxPspec}}] \label{smooth}
Let $r \geq 2$ and $H$ be an $r$-uniform hypergraph with $m$ edges. Write $m = {s\choose r}$
for some real $s\geq r-1$. We have $$\lambda(H)\leq m s^{-r}.$$
The equality holds if and only if $s$ is an integer and $H$ is the complete $r$-uniform hypergraph
$K^r_s$ possibly with some isolated vertices added.
\end{thm}
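The bound of Theorem \ref{smooth} is easy to evaluate for any $m$: solve ${s\choose r}=m$ for real $s$ and return $ms^{-r}$. A sketch for $r=3$ (the bisection routine and function names are ours):

```python
def s_of_m(m):
    """The real s >= 2 with binom(s, 3) = s(s-1)(s-2)/6 = m, by bisection."""
    f = lambda s: s * (s - 1) * (s - 2) / 6.0 - m
    lo, hi = 2.0, 3.0
    while f(hi) < 0:           # grow the bracket until it contains the root
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def smooth_bound(m):
    """The upper bound m * s**(-3) of the theorem above, for r = 3."""
    return m / s_of_m(m) ** 3

# For m = binom(6, 3) = 20 the bound is attained by K_6: 20 / 6**3.
print(smooth_bound(20), 20 / 6**3)
```

For $m={6\choose 3}=20$ the bound equals ${6\choose 3}/6^3$, the Lagrangian of $K^3_6$, illustrating the equality case of the theorem.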
\begin{figure}[hbt]
\centering
\includegraphics[width=300pt, height=200pt]{lag3.png}
\caption{The conjectured values of $\lambda_3(m)$ and its smooth upper bound in Theorem \ref{smooth}.}
\label{fig:1}
\end{figure}
The Lagrangians of $3$-graphs have been extensively studied.
In this paper, we focus on $3$-graphs. We would like to prove a better upper bound for $\lambda_3(m)$.
We have the following theorem.
\begin{thm}\label{mainthm}
There exists a constant $c>0$ such that for any $m>0$ we have
\begin{equation}
\label{eq:1}
\lambda_3(m)\leq \lambda(C_{m+cm^{2/9}}).
\end{equation}
\end{thm}
Compared with the result of Theorem \ref{smooth} at $r=3$, the upper bound in Theorem \ref{mainthm} is better for most values of $m$.
Note that $\lambda(C_m)={t-1\choose 3}/(t-1)^3$ for ${{t-1}\choose3}\leq m\leq {t\choose 3}-{t-2\choose 1}$.
We have the following corollary, which improves Tyomkyn's result for $r=3$ (Theorem \ref{3/4} item 1).
\begin{cor}\label{maincor}
There exists a constant $c>0$ such that for any ${{t-1}\choose3}\leq m\leq {t\choose3}-(t-2)-ct^{2/3}$,
we have $$\lambda_3(m)=\frac{{t-1\choose 3}}{(t-1)^3}.$$
\end{cor}
The paper is organized as follows. In section 2, we review notation and facts. Theorem \ref{mainthm} will be proved in section 3.
\section{Notation and Preliminaries}
Although this paper focuses on $r=3$, we first give preliminaries for general $r$.
\subsection{General $r$}
Let $r\geq2$ be an integer. Given an $r$-graph $G=(V,E)$ and a set $S\subseteq \mathbb{N}$ with $|S|<r$, the $(r-|S|)$-uniform {\it link hypergraph} of $S$ is defined as $G_S=(V,E_S)$ with $E_S:=\{A\in \mathbb{N}^{(r-|S|)}:A\cup S\in E\}$. We will denote the complement graph of $G_S$ by $G^c_S=(V,E^c_S)$ with $E^c_S:=\{A\in \mathbb{N}^{(r-|S|)}:A\cup S\in V^{(r)}\backslash E\}$. Define $G_{i\backslash j}=(V, E_{i\backslash j})$, where $E_{i\backslash j}:=\{A\in E_i\backslash E_j:j\notin A\}$. Let $G-i$ be the $r$-graph obtained from $G$ by deleting vertex $i$ and the edges containing $i$. A hypergraph $G=(V,E)$ is said to {\it cover} a vertex pair $\{i,j\}$ if there exists an edge $e\in E$ with $\{i,j\}\subseteq e$. $G$ is said to {\it cover pairs} if it covers every pair $\{i,j\}\subseteq V^{(2)}$.
\begin{lem}[\cite{FF1989,T2017}]\label{Hmr}
Suppose $G\subseteq [n]^{(r)}$ and $\overset{\rightarrow}{x}=(x_1,\dots,x_n)$ is a legal weighting. For all $1\leq i<j\leq n$, we have
\begin{description}
\item[(i)] Suppose that $G$ does not cover the pair $\{i,j\}$. Then $\lambda(G)\leq\max\{\lambda(G-i),\lambda(G-j)\}$. In particular, $\lambda(G)\leq\lambda([n-1]^{(r)})$.
\item[(ii)] Suppose that $m,t\in \mathbb{N}$ satisfy ${{t-1}\choose r}\leq m\leq {t\choose r}-{{t-2}\choose{r-2}}$. Then
$$\lambda(C_{r,m})=\lambda([t-1]^{(r)})=\frac{1}{(t-1)^r}{{t-1}\choose r}.$$
\item[(iii)] $w(G_i,\overset{\rightarrow}{x})\leq (1-x_i)^{r-1} \lambda(G_i)$ for any $i\in [n]$.
\end{description}
\end{lem}
\begin{defn}[\cite{B1986}]
Let $E\subset \mathbb{N}^{(r)}$, $e\in E$ and $i,j\in \mathbb{N}$ with $i<j$. Then define
\begin{equation*}
L_{ij}(e)=\left\{\begin{array}{cc}
(e\backslash\{j\})\cup \{i\}, & \text{if}~i\notin e ~\text{and}~ j\in e;\\
e, & \text{otherwise},
\end{array}\right.
\end{equation*}
and $$\mathscr{C}_{ij}(E)=\{L_{ij}(e):e\in E\}\cup \{e:e,L_{ij}(e)\in E\}.$$
We say that $E$ is {\it left-compressed} if $\mathscr{C}_{ij}(E)=E$ for every $1\leq i<j$.
\end{defn}
From now on, suppose that ${{t-1}\choose r}\leq m< {t\choose r}$ for some integer $t$. Let $G$ be an $r$-graph with $e(G)=m$ which satisfies $\lambda(G)=\lambda_r(m)$, and let $\overset{\rightarrow}{x}$ be a legal weighting attaining the Lagrangian of $G$. Without loss of generality, we can assume $x_i\geq x_j$ for all $i<j$ and that $\overset{\rightarrow}{x}$ has the minimum possible number of non-zero entries; let $T$ be this number.
Suppose that $G$ achieves a strictly larger Lagrangian than $C_{r,m}$. Then we have
$$\lambda(G)>\frac{1}{(t-1)^r}{{t-1}\choose r},$$
which in turn implies $T\geq t$, since otherwise $\lambda(G)\leq \lambda([t-1]^{(r)})$.
\begin{lem}[\cite{FF1989,FR1984,T2002}]\label{four}
Let $G,T$ and $\overset{\rightarrow}{x}$ be as defined above. Then
\begin{description}
\item[(i)] $G$ can be assumed to be left-compressed and to cover pairs.
\item[(ii)] For all $1\leq i\leq T$ we have $$w(G_i,\overset{\rightarrow}{x})=r\lambda(G_i).$$
\item[(iii)] For all $1\leq i<j\leq T$ we have $$(x_i-x_j)w(G_{i,j},\overset{\rightarrow}{x})=w(G_{i\backslash j},\overset{\rightarrow}{x}).$$
\end{description}
\end{lem}
\begin{lem}[\cite{T2017}] \label{x1}
If $T=t$, then $x_1<\frac{1}{t-r+1}$ and $x_1<\frac{k+1}{k}x_{t-(k+1)r}$ for $1\leq k\leq \frac t r-1$.
\end{lem}
\subsection{The case $r=3$}
Let $r=3$, ${t-1\choose 3}\leq m <{t\choose 3}$ for some integer $t$, and let $G$ be a $3$-graph with $m$ edges such that $\lambda(G)=\lambda_3(m)>\lambda([t-1]^{(3)})$. Let $\vec x=(x_1,\ldots, x_n)$ be an optimal legal weighting for $G$ that uses exactly $T$ nonzero weights (i.e., $x_1\geq \cdots \geq x_T> x_{T+1}=\cdots =x_n=0$).
Talbot (\cite{T2002}, Inequality (2.2)) proved that the number of edges in $G$ must satisfy
$$m\geq {T-1\choose 3}+{T-2\choose 2} -(T-2).$$
Since $m<{t\choose 3}$ and $T\geq t$, it implies $T=t$. Thus,
\begin{lem}[{\cite{T2002}}] \label{T=t}
$G$ must have support on exactly $t$ vertices, i.e., $T=t$.
\end{lem}
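The deduction $T=t$ can be checked mechanically: Talbot's lower bound on the number of edges is increasing in $T$, and already for $T=t+1$ (with $t\geq4$) it reaches ${t\choose 3}$, contradicting $m<{t\choose 3}$. A quick verification (function name ours):

```python
from math import comb

def talbot_lower_bound(T):
    """Talbot's lower bound on e(G) when the optimal weighting has support T."""
    return comb(T - 1, 3) + comb(T - 2, 2) - (T - 2)

# The bound is increasing in T, so T = t + 1 is the worst case:
for t in range(4, 500):
    assert talbot_lower_bound(t + 1) >= comb(t, 3)
print("checked t = 4, ..., 499")
```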
Lemmas \ref{x1} and \ref{T=t} imply the following inequality:
\begin{align}
x_1 &< \frac{1}{t-2}. \label{eq:x1}
\end{align}
We have the following lemmas:
\begin{lem}
For any $k\in[t-1]$, we have
\begin{equation}\label{eq:xk}
x_{t-k}>\frac{k-1}{k+1}x_1.
\end{equation}
\end{lem}
\bigskip\noindent {\bf{Proof.}}~~ Observe that \begin{align*}
1&=x_1+\dots+x_{t-k-1}+x_{t-k}+\dots+x_t\\
&< (t-k-1)x_1+(k+1)x_{t-k}\\
&< \frac{t-k-1}{t-2}+(k+1)x_{t-k}.~~~~~~(by~\eqref{eq:x1})
\end{align*}
Solving for $x_{t-k}$, we get
$$x_{t-k}>\frac{k-1}{k+1}\cdot\frac{1}{t-2}>\frac{k-1}{k+1}x_1.$$
\bs\hfill $\blacksquare$
\begin{lem}
For any subset $S\subseteq [t]$, we have
\begin{equation}
\label{eq:sumxk}
\sum_{i\in S}(x_1-x_i)<2x_1.
\end{equation}
\end{lem}
\bigskip\noindent {\bf{Proof.}}~~ The claim is trivial when $|S|\leq 2$, so we may assume $|S|>2$. We prove it by contradiction.
Suppose that there is a set $S=\{i_1,i_2,\ldots, i_k\}$ of $k>2$ distinct elements
such that $x_{i_1}+x_{i_2}+\dots+x_{i_k}\leq(k-2)x_1$.
Then
\begin{align*}
1&=x_1+x_2+\dots+x_t\\
&\leq x_{i_1}+x_{i_2}+\dots+x_{i_k}+(t-k)x_1\\
&\leq (k-2)x_1+(t-k)x_1\\
&=(t-2)x_1\\
&<1.
\end{align*}
This is a contradiction.
\bs\hfill $\blacksquare$
\section{Proof of Theorem \ref{mainthm}}
{\bf Proof of Theorem \ref{mainthm}:} Write $m={t\choose 3}-l$ where $0< l\leq {t-1\choose 2}$.
Let $m'=m+\eta$, where $\eta:=\lceil 4t^{2/3}\rceil$.
We claim for all $t\geq 8$
\begin{equation}
\label{eq:m}
\lambda_3(m)\leq \lambda(C_{m'}).
\end{equation}
Without loss of generality,
we can assume $\lambda_3(m)>\lambda([t-1]^{(3)})$. Otherwise, we have
$$\lambda_3(m)\leq \lambda([t-1]^{(3)})\leq \lambda(C_{m'}).$$
If $l\leq \eta$, then $m'\geq {t\choose 3}$, and we have
$$\lambda_3(m)\leq \lambda_3\left({t\choose 3}\right)=\frac{{t\choose 3}}{t^3}\leq \lambda(C_{m'}).$$
We can assume $l>\eta$. Let $l'=l-\eta\geq 1$.
Let $G=(V,E)$ be a $3$-graph with $m$ edges satisfying $\lambda(G)=\lambda_3(m)$ and
let $\vec x=(x_1,\ldots, x_n)$ be an optimal legal weighting for $G$ that uses exactly $T$ nonzero weights (i.e., $x_1\geq \cdots \geq x_T> x_{T+1}=\cdots =x_n=0$). By Lemma \ref{T=t}, we have $T=t$. By considering the induced sub-hypergraph on the first $t$ vertices, we can assume $G$ has exactly $t$ vertices, at most $m$ edges,
and $\lambda(G)=\lambda_3(m)$. In addition, we may assume $G$ is left-compressed
by Lemma \ref{four}(i).
Define $b=\max\{i: \{i,t-1,t\} \in E\}$. Since $G$ is left-compressed, we have $E_i=([t]\backslash \{i\})^{(2)}$ for every $i\in [b]$, and so $x_1=x_2=\dots=x_b$ by Lemma \ref{four} (iii).
By the definition of $b$, we have
$$E^c_{t-1,t}=\{i \colon b+1\leq i\leq t-2\}.$$
In particular,
$G$ is a subgraph of $C_{{t\choose 3}- (t-2-b)}$. If $t-2-b\geq l'$, then $G$ is also a subgraph of $C_{m'}$. This implies
$$\lambda_3(m)=\lambda(G)\leq \lambda(C_{m'}),$$
and we are done. So we may assume $t-2-b<l'$. Let $l''= b+ \min\{l'-(t-2), 0\}$.
Let $B\subseteq E^c\setminus \{\{i,t-1,t\} \colon i\in E^c_{t-1,t}\}$ be any set of $l''+\eta$ non-edges.
This is possible since $G$ has at least $l$ non-edges and
$$l=l'+\eta\geq l''+\eta+(t-2-b).$$
Let $G'$ be a $3$-graph obtained from $G$ by deleting all edges in $\{\{b+1-i, t-1,t\}\colon 1\leq i \leq l''\}$
and adding all triples in $B$ as edges. Then $G'$ has at most $m+\eta= m'$ edges.
The main proof is to show the following inequality:
\begin{equation}
\label{eq:main}
w(G,\vec x)\leq w(G',\vec x).
\end{equation}
Let $s=\max\{i: \{t-i-1,t-i \}\in E^c_t\}$ and $S=\{t-s,t-s+1,\dots,t-1,t\}$. By the choice of $s$, we know $\{t-s-1,t-s,t\}\in E^c$
but $\{t-s-2,t-s-1,t\}\in E$.
\noindent {\bf Claim\refstepcounter{counter}\label{eS} \arabic{counter}.} For any $e\in E^c$, we have $|e\cap S|\geq 2$.
\bigskip\noindent {\bf{Proof.}}~~ Suppose $e=\{i,j,k\}\in E^c$ with $i<j<k$ and $|e\cap S|\leq 1$. We must have $i,j\notin S$. Since $E$ is left-compressed, $\{i,j,t\}\in E^c$. Then $\{j-1,j,t\}\in E^c$, contrary to the choice of $s$.
\bs\hfill $\blacksquare$
We may assume
\begin{align}\label{eq:x1xt-1xt}
x_1x_{t-1}x_t-x_{t-s-1}x_{t-s}x_t\geq 0.
\end{align}
Otherwise by replacing the edge $\{1,t-1,t\}$ with the non-edge $\{t-s-1, t-s, t\}$,
we get another 3-graph with the same number of edges whose Lagrangian is strictly greater than the Lagrangian of $G$.
Combining Inequalities \eqref{eq:x1xt-1xt} and \eqref{eq:xk}, we get
\begin{align}\label{eq:x1xt-1}
x_{t-1}\geq\frac{x_{t-s-1}x_{t-s}}{x_1}> \frac{s(s-1)}{(s+2)(s+1)}x_{1}.
\end{align}
For any $\{j,k\}\subseteq S^{(2)}$ with $j<k$, let $F_{jk}=\{\{i,j,k\}|i\in E^c_{jk}~\text{and}~ i<j\}$.
By Claim \ref{eS}, we have $$E^c= \bigcup\limits_{\{i,j\}\subseteq S^{(2)}}F_{ij}.$$
Now we will prove Inequality \eqref{eq:main}. We divide it into two cases.
\noindent
{\bf Case 1:} $s^2+s< \eta$. We have
\begin{align*}
w(G',\vec x) &= w(G, \vec x) - l''x_1x_{t-1}x_{t}+ \sum_{\{i,j,k\}\in B} x_ix_jx_k\\
&=w(G, \vec x)+\eta x_1x_{t-1}x_t -\sum_{\{i,j,k\}\in B}(x_1x_{t-1}x_t-x_ix_jx_k)\\
&>w(G, \vec x)+\eta x_1x_{t-1}x_t - \sum_{\{i,j,k\}\in B} (x_1-x_i)x_{t-1}x_t\\
&\geq w(G, \vec x)+\eta x_1x_{t-1}x_t - x_{t-1}x_t\sum_{\{j,k\}\in S^{(2)}} \sum_{i\colon \{i,j,k\}\in F_{jk}} (x_1-x_i)\\
&\geq w(G, \vec x)+\eta x_1x_{t-1}x_t - x_{t-1}x_t\sum_{\{j,k\}\in S^{(2)}} 2x_1 ~~~~~~~~~~~~~~~~~~~(by~ \eqref{eq:sumxk})\\
&=w(G, \vec x)+ (\eta-s^2-s)x_1x_{t-1}x_t\\
&>w(G, \vec x).
\end{align*}
\noindent
{\bf Case 2:} $s^2+s\geq \eta$. We have
\begin{equation}
\label{eq:s}
s\geq \frac{\sqrt{1+4\eta}-1}{2}>\sqrt{\eta}-\frac{1}{2}.
\end{equation}
Since $\eta=\lceil 4t^{2/3}\rceil$ and $t\geq 8$, by Inequality \eqref{eq:s}, we have
\begin{equation}
\label{eq:seta}
s(s-1)\eta>(4s+2)(t-2)\geq (4s+2)l''.
\end{equation}
\begin{align*}
w(G',\vec x) &= w(G, \vec x) - l''x_1x_{t-1}x_{t}+ \sum_{\{i,j,k\}\in B} x_ix_jx_k\\
&>w(G, \vec x)-l'' x_1x_{t-1}x_t + (l''+\eta) x_{t-1}^2x_t\\
&=w(G, \vec x)+ x_{t-1}x_t\left((l''+\eta) x_{t-1}-l'' x_1\right)\\
&\geq w(G, \vec x)+ x_{t-1}x_t\left((l''+\eta) \frac{s(s-1)}{(s+2)(s+1)} x_1- l'' x_1\right)~~~~~~(by~\eqref{eq:x1xt-1})\\
&=w(G, \vec x)+ \frac{1}{(s+2)(s+1)}x_1x_{t-1}x_t\left(s(s-1)\eta -(4s+2)l''\right)\\
&> w(G, \vec x). ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~(by~\eqref{eq:seta})
\end{align*}
Therefore, Inequality \eqref{eq:main} holds in any circumstances. If $l' \leq t-2$, then $G'$ is a subgraph of $C_{m'}$, else $G'$ is a subgraph of
$C_{{t\choose3}-(t-2)}$.
Inequality \eqref{eq:m} follows from Inequality \eqref{eq:main} by a sequence of inequalities:\\
if $l'\leq t-2$, then
$$\lambda_3(m)=w(G,\vec x)\leq w(G',\vec x)\leq \lambda(G')\leq \lambda(C_{m'}),$$
else
$$\lambda_3(m)=w(G,\vec x)\leq w(G',\vec x)\leq \lambda(G')\leq\lambda\left(C_{{t\choose3}-(t-2)}\right)=\lambda([t-1]^{(3)}) \leq \lambda(C_{m'}).$$
Finally we can choose a constant $c$ large enough so that the following two conditions hold:
\begin{itemize}
\item $cm^{2/9}>4\lceil t^{2/3} \rceil$ for all $t\geq8$,
\item and $cm^{2/9}>{t-1 \choose 2}$ for $1\leq t\leq 8.$
\end{itemize}
When $t\geq 8$, we have
$$\lambda_3(m)\leq \lambda(C_{m'}) \leq \lambda(C_{m+cm^{2/9}}).$$
When $1\leq t\leq 8$, we have
$$m+cm^{2/9}>{t-1\choose 3}+{t-1\choose 2}={t\choose 3}.$$
We have
$$\lambda_3(m)\leq \lambda_3\left({t\choose 3}\right)=\frac{{t\choose 3}}{t^3}=
\lambda\left(C_{{t\choose 3}}\right)\leq \lambda(C_{m+cm^{2/9}}).$$
This completes the proof of Theorem~\ref{mainthm}.
\bs\hfill $\blacksquare$
{\bf Remark:} In fact, Inequality \eqref{eq:seta} only requires $\eta=c(l'')^{2/3}$.
When $m$ is close to ${t\choose 3}$, we can obtain a better bound.
Let $m={t\choose 3}-l$ where $0<l<(t-2)+ct^{2/3}$.
Then we have
$$\lambda_3(m)\leq \lambda(C_{m+cl^{2/3}}).$$
\section{Introduction}
The mechanisms of failure of materials under stress are of paramount importance because of their use in a wide variety of applications \cite{Freund,Kanninen,Cadwell,Rivilin,Griffith}. Brittle materials usually break at very small deformations, typically on the order of a percent or less. In addition, their fracture behavior is difficult to reproduce, since their fracture properties are largely determined by the defects present in the material \cite{Marder}. In practical situations, large deformations are not uncommon; it is exactly for this reason that polymer-based composite materials are abundantly used. Rubbers are the prototypical polymeric materials that typically fracture at very large deformations, often exceeding $100\%$. Usually, such rubber materials are reinforced by adding nano-sized filler particles to increase their modulus and toughness. These composite materials are widely employed; however, the main challenge remains to predict the fracture behavior of these rubbers \cite{Mars,Mullins,Gent59,Gent87,Gent90,Diaz,Cam,Chen,Zhang,Donnet} as a function of their material properties.
\par In this Letter, we study the fracture behavior of rubbers filled with silica nanoparticles, which is a common way to improve the mechanical properties of the rubbers \cite{Chen,Botti,Tiwari}. We determine the stress and deformation at which the material fails for different concentrations and types (sizes) of filler particles. Our main finding is that the stress and the deformation at failure are non-monotonic: they pass through a maximum at intermediate filler concentrations. To rationalize these findings, we first examine the standard Griffith theory for brittle fracture, which uses an energy balance between the elastic energy gained upon propagation of a fracture and the surface energy lost by creating additional interfacial area \cite{Griffith,Peresson}. From this, we conclude that the energy barrier for the spontaneous nucleation of an initial fracture is so large that thermally-driven fluctuations are much too weak to cause spontaneous breaking at a given stress. We subsequently extend the Griffith theory using an Eyring-type model that incorporates a stress-induced crossing of the energy barrier for crack formation. This allows us to relate the stress at break of a filled rubber to the volume fraction of filler material based only on the fracture energy and modulus of the material, both of which can be measured separately.
\section{Experiments}
The materials used for this work are composites of Acrylonitrile Butadiene Rubber (NBR) filled with silica nanoparticles that were prepared at SKF Elgin, USA. In brief, commercial NBR of molar mass $M_w=2.5 \times 10^5 g/mol$, glass transition temperature $T_g= -36^{\circ} {\rm C}$ and mass density $\rho=0.96 g/cm^3$ is used; the fillers are precipitated silica with three different primary particle sizes: $28$, $20$ and $15 nm$, which we name Silica1, Silica2 and Silica3 respectively. The amount of silica loaded in the NBR matrix is between $5$ and $90 phr$ (parts per hundred parts of rubber by weight) which covers a range of filler volume fractions from $1.59\%$ to $22.46\%$.
\begin{figure*}
\centering
\includegraphics[width=1.80\columnwidth]{Figure1.jpg}
\caption{A) True stress vs. strain for compounds of Silica1 type with different filler concentrations. The sharp decrease of the stress at the end marks the break point. Inset shows the dumbbell-shaped specimens with the gauge marks used in the first series of experiments. B) Critical strain at break vs. volume fraction for all types of silica. Black is for Silica1, red for Silica2, and blue for Silica3.}
\label{Fig1}
\end{figure*}
\par The mechanical testing of compounds is performed on a Zwick extensometer. Two series of tests are performed. In the first series, tensile tests on dumbbell-shaped specimens are carried out according to the standard ASTM D412-98a (see Fig. \ref{Fig1}A inset). The grip separation speed is fixed at $500 \pm 50 mm/min$, with a preload of $1 N$. We test three to five samples for each compound at room temperature. With this experiment we measure force-displacement curves; assuming that the material is incompressible, these can be transformed into true stress and strain (Fig. \ref{Fig1}A). The true stress is defined as $\sigma = FL/A_0L_0$, in which $F$ indicates the force, $L_0$ and $L$ are respectively the initial and actual distance between the gauge marks (see Fig. \ref{Fig1}A inset), and $A_0$ indicates the cross-sectional area of the undeformed specimen. The strain is measured as $\gamma(\%)=\frac{L-L_0}{L_0} \times 100$.
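The incompressibility assumption behind this conversion (constant volume, so the cross section shrinks by a factor $L_0/L$) can be made concrete in a few lines; the numerical inputs below are illustrative only, not measured values:

```python
def true_stress_strain(force, length, length0, area0):
    """Convert one force-displacement sample to true stress and strain,
    assuming an incompressible specimen (A * L = A0 * L0)."""
    sigma = force * length / (area0 * length0)   # true stress = F/A with A = A0*L0/L
    gamma = (length - length0) / length0 * 100.0  # strain in percent
    return sigma, gamma

# Illustrative numbers: 50 N on a 12 mm^2 specimen stretched from 25 mm to 50 mm.
sigma, gamma = true_stress_strain(force=50.0, length=50.0, length0=25.0, area0=12e-6)
print(sigma, gamma)  # ~8.3e6 Pa, 100.0 %
```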
\section{Results and Discussion}
\begin{figure}
\includegraphics[width=0.90\columnwidth]{Figure2.jpg}
\caption{Elastic modulus G versus filler volume fraction for the three types of silica. Black color is Silica1, red for Silica2 and blue for Silica3. The Silica particle size decreases from Silica1 to Silica3.}
\label{Fig2}
\end{figure}
The curves of Figure \ref{Fig1}A show a sharp drop of the true stress at the end, which indicates the breaking of the sample. Both the true stress and the deformation at break go through a maximum at an intermediate filler concentration. For the other two silica types tested, the results are very similar to those shown in Figure \ref{Fig1}A. In all samples, the strain at break shows a peak around a filler volume fraction of $8\%$ (Fig. \ref{Fig1}B). At this filler concentration, the rubber shows the largest resistance against breaking, and since the overall deformation is larger, the stress at break is also larger. When the concentration of filler exceeds $8\%$, the material becomes more brittle and breaks at smaller deformations again; we also observe that the stress at break does not increase significantly beyond this concentration.
\begin{figure*}
\centering
\includegraphics[width=1.80\columnwidth]{Figure3.jpg}
\caption{A) Force vs. displacement for different filler concentrations in the second series of experiments, in which the composites had an initial notch. Inset shows the fracture propagation process for NBR loaded with Silica1; the initial gauge length and the grip separation speed were fixed at $2.5 cm$ and $10 mm/min$, respectively. B) Fracture energy scaled with the modulus versus the volume fraction of fillers for different types of silica.}
\label{Fig3}
\end{figure*}
\par To explain this behavior, we first characterize our materials. Measuring the Young's modulus at the relatively fast deformation rates of the tensile test is not accurate; we therefore determine the linear (visco-)elastic properties using standard rheology experiments. For our incompressible system, the Poisson ratio is $\nu=0.5$, and the shear modulus is related to Young's modulus as $E=2G(1+\nu)=3G$. We find that while the measured shear modulus increases with increasing filler concentration for all filler types, it increases more significantly for fillers with smaller particle size (Fig. \ref{Fig2}).
\par We subsequently determine the fracture energy $\Gamma$ required to create a fracture plane. The fracture energy $\Gamma$ includes not only the energy necessary to break the bonds at the crack tip, but also the energy dissipated in the vicinity of the crack tip during crack propagation \cite{Mullins,Peresson}. To determine $\Gamma$, we perform a second series of tensile tests on notched rectangular specimens with a width of $1 cm$ and a thickness of $2 mm$. The samples are notched in the center, the depth of the notches being $2 mm$ (see Fig. \ref{Fig3}A inset).
\par From these experiments, we determine the fracture energy by calculating the work required to break the sample into two pieces and dividing that work by the created surface area. Figure \ref{Fig3}A shows the applied force $F$ on the system as a function of the displacement $\lambda$. The area under each curve gives the total work done on the samples up to their breakage. Assuming that all the work is used for creation of new surfaces, the fracture energy is obtained as:
\begin{equation}
\Gamma = \frac{\int_0^{\lambda_{max}}Fd\lambda}{2A_0}.
\label{FractureEnergy}
\end{equation}
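In practice, Eq.~\ref{FractureEnergy} amounts to a numerical integration of the measured force-displacement curve up to the break point; a minimal sketch (the curve below is made up, not experimental data):

```python
def fracture_energy(displacement, force, area0):
    """Estimate Gamma = (integral of F dlambda up to break) / (2 * A0)
    by trapezoidal integration of a sampled force-displacement curve."""
    work = sum(0.5 * (force[i] + force[i + 1]) * (displacement[i + 1] - displacement[i])
               for i in range(len(force) - 1))
    return work / (2.0 * area0)

# Made-up curve: force ramping linearly to 80 N over 30 mm, then breaking.
n = 100
lam = [0.030 * i / (n - 1) for i in range(n)]   # displacement in m
F = [80.0 * x / 0.030 for x in lam]             # force in N
A0 = 1e-2 * 2e-3                                # 1 cm x 2 mm cross section, in m^2
gamma_f = fracture_energy(lam, F, A0)
print(gamma_f)  # ~3e4 J/m^2 for this made-up ramp
```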
\par What sets the force scale in these experiments is of course the elastic modulus; to scale out the trivial dependence of $\Gamma$ on the modulus, we scale the fracture energy with respect to the measured shear modulus for each sample. Figure \ref{Fig3}B shows the scaled fracture energy for the three filler types and different volume fractions. We find that, similar to the stress and the deformation at break, the scaled fracture energy shows a maximum at a volume fraction of about $8\%$ (see Fig. \ref{Fig1} and Fig. \ref{Fig3}B), meaning that here the samples are hardest to break, i.e., fail at the largest deformation. The non-monotonic behavior of the fracture energy has been previously observed for nanosilica-epoxy resins \cite{Chen}; however, to the best of our knowledge, we are the first to establish a theoretical framework that quantitatively explains this non-monotonic behavior.
\par The question is now whether characterizing the bulk elastic properties and the fracture energy is sufficient to account for the non-monotonic fracture behavior (Fig. \ref{Fig1}). Classically, the energy barrier for the spontaneous formation (nucleation) of a crack is due to Griffith \cite{Griffith}: the energy barrier results from a competition between the cost in fracture (surface) energy and the gain in elastic (volume) energy for the formation of the initial crack \cite{Kanninen,Cadwell,Daniel,Noushine,Lawn}. In two dimensions, the surface energy cost $E_s$ of creating the crack depends linearly on the crack length $l$ and is given by $E_s\sim 2\Gamma l$ where $\Gamma$ is the fracture energy, and the elastic energy gain $E_v$ is quadratic in $l$ according to $E_v\sim 2\sigma^2 l^2/3G$, where $\sigma$ is the applied stress. The activation energy then follows from extremalization, i.e., finding the maximum of the total energy: $E_{barr-2D}\sim 3\Gamma^2 G /\sigma^2$. This expression shows a power-law dependence of $E_{barr}$ on $\sigma$, and hence on the applied force, and has been confirmed for the fracture of two-dimensional crystals \cite{Pauchard}. The extension to the three-dimensional case gives \cite{Pomeau}:
\begin{equation}
U = -\frac{\sigma^2}{6G}(\frac{4}{3}\pi l^3)+2\Gamma \pi l^2.
\label{TotalEnenrgy}
\end{equation}
Its maximum, attained at $l_{crit}=\frac{6\Gamma G}{\sigma^2}$, gives the energy barrier
\begin{equation}
E_{barr-3D} = \frac{24\pi \Gamma^3 G^2}{\sigma^4}.
\label{BarrierEnenrgy}
\end{equation}
For the spontaneous nucleation of a crack in the system, thermal fluctuations should overcome this energy barrier, leading to a probability of fracture $P_{fracture}\sim \exp (\frac{-E_{barr}}{k_BT})$, where $T$ is the absolute temperature and $k_B$ is the Boltzmann constant \cite{Daniel}. Once overcome, a crack starts to propagate.
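To see why thermal nucleation is ruled out, it helps to put rough numbers into the expressions for $l_{crit}$ and $E_{barr-3D}$. The values of $G$, $\Gamma$, and $\sigma$ below are order-of-magnitude guesses for a filled rubber, not the measured values of this work:

```python
import math

kB, T = 1.38e-23, 300.0  # Boltzmann constant (J/K) and room temperature (K)

def griffith(sigma, G, Gamma):
    """Critical crack size l_crit = 6*Gamma*G/sigma^2 and the 3D Griffith
    barrier E_barr = 24*pi*Gamma^3*G^2/sigma^4 (Eq. for E_barr-3D)."""
    l_crit = 6.0 * Gamma * G / sigma ** 2
    e_barr = 24.0 * math.pi * Gamma ** 3 * G ** 2 / sigma ** 4
    return l_crit, e_barr

# Order-of-magnitude guesses: G ~ 1 MPa, Gamma ~ 1 kJ/m^2, sigma ~ 10 MPa.
l_crit, e_barr = griffith(sigma=1e7, G=1e6, Gamma=1e3)
print(l_crit, e_barr / (kB * T))  # ~6e-5 m and ~2e15 kB*T: thermal crossing is hopeless
```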
\begin{figure}
\includegraphics[width=0.90\columnwidth]{Figure4.jpg}
\caption{Linear dependence between the activation volume $V^\ast$ and $\gamma_{break}l_{crit}^3$. Black color is for Silica1, red for Silica2 and blue for Silica3.}
\label{Fig4}
\end{figure}
We note that the above relations are obtained independently of the nature of the filler particles; hence, they are valid not only for silica particles but for any nano-sized hard particles.
\par Putting in typical numbers from the experiments, one immediately sees that $E_{barr}\gg k_BT$, and so a spontaneous, thermally activated fracture \cite{Vanel} is not feasible in our system. This is because the elastomers dissipate enormous amounts of energy without breaking, which increases the fracture energy considerably and makes the energy barrier high compared to $k_BT$. The observation that the thermal energy alone is not sufficient to overcome the energy barrier is common for polymeric materials \cite{Atkins}, and has led to the consensus that for these systems the stress-induced crossing of the energy barrier may become important. This has motivated a number of Eyring-type models that take into account the lowering of the energy barrier due to the applied stress \cite{Lee,Eyring,Richeton}. In its simplest form, the probability becomes:
\begin{equation}
P_{fracture}\sim \exp [\frac{-E_{act}+\sigma V^\ast}{k_BT}].
\label{FractureProbability}
\end{equation}
Zhurkov \cite{Zhurkov} provided a detailed comparison between the predictions of this model and the rate- and temperature dependent fracture and found that a wide range of polymeric materials follows this prediction.
\par In Eq.~\ref{FractureProbability}, $V^\ast$ is the activation volume, which is often used as an adjustable parameter; if this is allowed, most experiments can be fit by the model. In our case, we define the activation volume as $V^\ast \sim \gamma_{break}l_{crit}^3$, in which $\gamma_{break}$ is the experimentally measured strain at break (see Fig. \ref{Fig1}B). This definition is in fact necessary for consistency: since both $E_{act}$ (or $E_{barr-3D}$) and $\sigma V^\ast$ are much larger than the thermal energy, fracture will happen when $E_{act}\simeq\sigma V^\ast$. Putting in the expressions above for the energy barrier and the activation volume then leads to the familiar expression $\sigma_{break} \simeq G_{break}\gamma_{break}$, where $G_{break}$ is the slope of true stress versus strain very close to the breaking point (see Fig. \ref{Fig1}A).
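The consistency condition $E_{act}=\sigma V^\ast$ can be solved explicitly with $E_{act}=24\pi\Gamma^3G^2/\sigma^4$ and $V^\ast=\gamma_{break}\,(6\Gamma G/\sigma^2)^3$: the $\Gamma$-dependence cancels and one finds $\sigma=(9/\pi)G\gamma_{break}$, i.e. $\sigma_{break}\simeq G\gamma_{break}$ up to an order-one constant (in the text, the local slope $G_{break}$ plays the role of the modulus). A short numerical check of this cancellation, with arbitrary illustrative inputs:

```python
import math

def sigma_break(G, Gamma, gamma_break):
    """Solve E_act(sigma) = sigma * gamma_break * l_crit(sigma)^3 for sigma,
    with E_act = 24*pi*Gamma^3*G^2/sigma^4 and l_crit = 6*Gamma*G/sigma^2."""
    f = lambda s: (24 * math.pi * Gamma ** 3 * G ** 2 / s ** 4
                   - s * gamma_break * (6 * Gamma * G / s ** 2) ** 3)
    lo, hi = 1e-3, 1e12  # bracket the root: f < 0 below it, f > 0 above it
    for _ in range(200):  # geometric bisection
        mid = math.sqrt(lo * hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return math.sqrt(lo * hi)

# Arbitrary inputs: G = 1 MPa, Gamma = 1 kJ/m^2, strain at break 2.0 (i.e. 200%).
s = sigma_break(G=1e6, Gamma=1e3, gamma_break=2.0)
print(s)  # Gamma cancels: s == (9/pi)*G*gamma_break ~ 5.73e6 Pa
```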
\par To verify that the activation volume is indeed proportional to $\gamma_{break}l_{crit}^3$, we divided $E_{act}$ (calculated using Eqs. \ref{FractureEnergy} and \ref{BarrierEnenrgy}) by the experimental values of the stress at break (from Fig. \ref{Fig1}A) and compared these values to $\gamma_{break}l_{crit}^3$ (where $l_{crit}$ was calculated from Griffith's theory). Figure \ref{Fig4} shows the linear dependence of $V^\ast$ on $\gamma_{break}l_{crit}^3$.
\par Having determined the activation volume in this way, we can obtain the breaking stress in our experiments. Figure \ref{Fig5} compares the calculated stress at break obtained from $\sigma = \frac{E_{act}}{\gamma_{break}l_{crit}^3}$ with the experimental results shown in Figure \ref{Fig1}A, using the experimentally determined values for $G$ and $\Gamma$. The experimental (symbols) and theoretical (lines) results are in very reasonable agreement, and reproduce the non-monotonic behavior of the stress at break. The theoretical values of the stress at break above $14\%$ volume fraction are somewhat lower than the experimental results; this is likely related to the large plastic deformation observed on those samples.
\par Note that we performed the same measurements at two different strain rates. We find that while the failure stress does not change with the strain rate, the failure strain increases with decreasing strain rate.
\begin{figure}
\includegraphics[width=0.90\columnwidth]{Figure5.jpg}
\caption{Stress at break as a function of filler volume fraction: symbols are the experimental results and lines are the theoretical prediction using Eyring model and Griffith theory. Black color is for Silica1, red for Silica2 and blue for Silica3.}
\label{Fig5}
\end{figure}
\section{Summary}
In summary, we have experimentally established a direct relation between the material properties of our composite materials and the very non-linear problem of crack initiation that determines the resistance to breaking. For the composite materials considered here, we find that there exists an optimum amount of filler particles for which the filled rubbers show a maximum resistance against applied stress and deformation and are thus hardest to break. By adapting the Eyring model to the standard theory of fracture, we can explain how the non-monotonic fracture behavior arises from a subtle interplay between the bulk elastic energy gain and the surface fracture energy cost as a function of the filler concentration. These results should be relevant to filled polymeric systems in general, which all show a transition between the visco-elastic behavior of the polymer matrix without fillers and the more brittle behavior of the much harder composite material.
\section{Acknowledgments}
The authors thank the Stichting voor Fundamenteel Onderzoek der Materie (FOM), SKF, and Michelin for the financial support of the present research work.
\section{Introduction}
\ifFull
In an intriguing confluence of computational geometry and networking,
\emph{geometric routing} has shown how simple
geometric rules can replace cumbersome routing tables
to facilitate effective message passing in a network
(e.g., see~\cite{bmsu-rgdah-01, eg-sggdh-08,kk-gpsr-00,%
k-gruhs-07,kwzz-gacr-03,kwz-aogma-02,kwz-wcoac-03}).
\fi
Geometric routing algorithms
perform message passing using geometric information
stored at the nodes and edges of a network.
\ifFull
For example,
geometric information could come from the latitude and longitude GPS
coordinates of the nodes in a wireless sensor network or
this information could come from an embedded doubly-connected
edges list representation of
a planar subgraph of such a network.
Indeed, in one of the early works on the subject,
Bose \textit{et al.}~\cite{bmsu-rgdah-01}
show how to do geometric routing in an embedded planar subgraph of a wireless
sensor network by using a geometric subdivision traversal
algorithm of Kranakis {\it et al.}~\cite{kranakiscompass}, which was first
introduced in the computational geometry literature.
\fi
\subsection{Greedy Geometric Routing}
Perhaps the simplest routing rule is the
\emph{greedy} one:
\begin{itemize}
\item
If a node $v$ receives a message $M$
intended for a destination $w\not= v$,
then $v$ should forward $M$ to a neighbor that is
closer to $w$ than $v$ is.
\end{itemize}
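The rule is a one-line forwarding decision once a distance function is fixed; here is a minimal Python sketch over an abstract metric (the tiny graph and its Euclidean embedding below are placeholders for illustration):

```python
import math

def greedy_next_hop(v, dest, neighbors, dist):
    """Return a neighbor of v strictly closer to dest, or None if v is a
    local minimum (greedy routing is then stuck)."""
    best = min(neighbors[v], key=lambda u: dist(u, dest))
    return best if dist(best, dest) < dist(v, dest) else None

def greedy_route(src, dest, neighbors, dist):
    """Follow the greedy rule hop by hop; distance to dest strictly
    decreases, so on a finite graph this always terminates."""
    path = [src]
    while path[-1] != dest:
        nxt = greedy_next_hop(path[-1], dest, neighbors, dist)
        if nxt is None:
            return None  # delivery fails: no neighbor is closer
        path.append(nxt)
    return path

# Placeholder example: a 4-cycle embedded in R^2 with the Euclidean metric.
coords = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
d = lambda a, b: math.dist(coords[a], coords[b])
print(greedy_route(0, 2, nbrs, d))  # [0, 1, 2]
```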
This rule can be applied in any metric space, of course, but simple
and natural metric spaces are preferred over cumbersome or artificial
ones.
\ifFull
The greedy routing rule traces its roots back to the original
``degrees-of-separation'' small-world experiment of
Milgram~\cite{m-swp-67}, where he asked randomly chosen individuals
to forward 296 letters, initiating in Omaha, Nebraska and Wichita,
Kansas, all intended for a lawyer in Boston, using the rule that
requires each letter to be forwarded to an acquaintance that is
closer to the destination.
In the modern context, researchers are interested in solutions that
use a paradigm introduced by Rao {\it et al.}~\cite{rrpss-grli-03} of
doing greedy geometric routing in geometric graphs that
assigns virtual coordinates in a metric space
to each node in the network, rather
than relying on physical coordinates.
For example, GPS coordinates
may be unavailable for some sensors or the physical coordinates of
network nodes may be known only to a limited degree of certainty.
\fi
Thus, we are interested in greedy routing schemes that assign
network nodes to virtual coordinates in a natural metric space.
\ifFull
Interestingly, the feasibility of the greedy routing
rule depends heavily on the geometry of the underlying metric space
used to define the notion of ``closer to the destination.''
For example, it is easy to see that star graphs (consisting
of a central vertex adjacent to every node in an arbitrarily large
independent set)
cannot support greedy geometric routing in any
fixed-dimensional Euclidean space.
By a simple packing argument, there has to be two members of the
large independent set, in such a graph,
that will be closer to each other than the central
vertex.
Likewise, even for bi-connected or tri-connected planar graphs
embedded in ${\bf R}^2$, a
network may have ``holes'' where greedy routing
algorithms could get ``stuck'' in a local metric minimum
(e.g., see Funke~\cite{f-thdws-05} for
related work on hole detection in sensor networks).
Alternatively, several researchers (e.g.,
see~\cite{eg-sggdh-08,k-gruhs-07,m-adgra-07}) have shown that greedy
geometric routing is possible, for any connected graph,
in fixed-dimensional hyperbolic spaces.
Our interest in this paper, however, is on
greedy geometric routing in ${\bf R}^2$ under the Euclidean metric,
since this space more closely matches the geometry of
wireless sensor networks.
\fi
Interest in greedy geometric routing in fixed-dimensional Euclidean
spaces has expanded greatly since the work by Papadimitriou and
Ratajczak~\cite{pr-ocrgr-05}, who showed that any 3-connected planar graph
can be embedded in ${\bf R}^3$ so as to support greedy geometric routing.
Indeed, their conjecture that such embeddings are possible in ${\bf R}^2$
spawned a host of additional papers (e.g., \ifFull see~\cite{afg-acgdt-08,%
cgw-dcvc-07,d-gdt-08,eg-sggdh-08,lp-oelia-08,m-adgra-07}).
\else
see~\cite{afg-acgdt-08,d-gdt-08,eg-sggdh-08,lp-oelia-08,m-adgra-07}).
\fi
Leighton and Moitra~\cite{lm-srgem-08} settled this conjecture by
giving an algorithm to produce a greedy embedding of any
3-connected planar graph in ${\bf R}^2$, and a similar result
was independently found by Angelini {\it et al.}~\cite{afg-acgdt-08}.
Greedy embeddings in ${\bf R}^2$ were previously known
\ifFull only \fi for \ifFull graphs containing power
diagrams~\cite{cgw-dcvc-07}, \fi graphs containing Delaunay
triangulations~\cite{lp-oelia-08}, and existentially
(but not algorithmically) for triangulations~\cite{d-gdt-08}.
\subsection{Succinct Geometric Routing}
In spite of their theoretical elegance, these results settling the
Papadimitriou-Ratajczak conjecture have an unfortunate drawback, in
that the virtual coordinates of nodes in these solutions require
$\Omega(n\log n)$ bits each in the worst case.
These space inefficiencies reduce the applicability of
these results for greedy geometric routing, since one could
alternatively keep routing tables of size $O(n\log n)$ bits
at each network node to support message passing.
Indeed, such routing tables would allow for network nodes to be identified
using labels of only $O(\log n)$ bits each, which would significantly
cut down on the space, bandwidth, and packet header size
needed to communicate the
destination for each packet being routed.
Thus, for a solution to be effectively solving the routing problem
using a greedy geometric routing scheme, we desire
that it be
\emph{succinct}, that is, it should use
$O(\log{n})$ bits per virtual coordinate.
Succinct greedy geometric routing schemes are known for
fixed-dimensional hyperbolic spaces~\cite{eg-sggdh-08,m-adgra-07},
but we are unaware of any prior work on succinct greedy
geometric routing in fixed-dimensional Euclidean spaces.
\ifFull
We are therefore interested in this paper in a method for
succinct greedy geometric routing in ${\bf R}^2$, with distance comparisons being
consistent with the standard Euclidean $L_2$ metric.
\subsection{Additional Related Prior Work}
In addition to the greedy geometric routing schemes referenced above,
there is a hybrid scheme, for example, as outlined by Karp and
Kung~\cite{kk-gpsr-00}, which
combines a greedy routing strategy with face
routing~\cite{bmsu-rgdah-01}.
Similar hybrid schemes were subsequently studied by several other
researchers
(e.g., see~\cite{fs-odgfc-06,kwzz-gacr-03,kwz-aogma-02,kwz-wcoac-03}).
An alternative hybrid augmented greedy
scheme is introduced by Carlsson and Eager~\cite{ce-negrw-07}.
In addition, Gao {\it et al.}~\cite{gghzz-gsrmn-05} show how to
maintain a geometric spanner in a mobile network so as to support
hybrid routing schemes.
Although such schemes are \emph{local}, in that routing decisions can
be made at a node $v$ simply using information about $v$'s neighbors,
we are interested in this paper in routing methods that are purely
greedy.
As mentioned above,
Rao {\it et al.}~\cite{rrpss-grli-03} introduce the idea of doing
greedy geometric routing
using virtual coordinates, although they make no theoretical guarantees,
and Papadimitriou and Ratajczak~\cite{pr-ocrgr-05}
are the first to prove such a method exists in ${\bf R}^3$,
albeit with a non-standard metric.
In addition,
we also mentioned above how
Leighton and Moitra~\cite{lm-srgem-08}
and Angelini {\it et al.}~\cite{afg-acgdt-08}
have settled the Papadimitriou-Ratajczak conjecture, albeit with
solutions that are not succinct.
Moreover,
the only known succinct greedy geometric routing schemes
are for fixed-dimensional hyperbolic spaces~\cite{eg-sggdh-08,m-adgra-07}.
Thus, there does not appear to be any prior work on
succinct greedy geometric routing in ${\bf R}^2$ using the standard
Euclidean $L_2$ metric.
The problem of constructing succinct greedy
geometric routing schemes in ${\bf R}^2$ is related to the general
area of compressing geometric and topological data for networking
purposes. Examples of such work includes the compression
schemes of Suri {\it et al.}~\cite{ssw-ctdrt-03}
for two-dimensional routing tables,
and the coordinate and mesh compression work of
Isenburg {\it et al.}~\cite{ils-lcpfp-05}.
We should stress, therefore, that we are not primarily interested
in this paper in compression schemes for greedy geometric routing; we
are interested primarily in coordinate systems for greedy routing, since they
have a better applicability in distributed settings.
In particular, we are not interested in a compression scheme where
the computation of the coordinates in ${\bf R}^2$ of a network node $v$
depends on anything other than a succinct label for $v$.
That is,
we want a succinct \emph{coordinate system}, not simply an efficient
compression scheme that supports greedy routing.
Indeed, we show that succinct compression schemes are trivial, given
known Euclidean greedy geometric routing
methods~\cite{afg-acgdt-08,lm-srgem-08}.
Another area of related work is on methods for routing in geometric
graphs, such as road networks
(e.g., see~\cite{bfss-frrn-07,gh-cspasm-05,hsww-csuts-05,%
kp-sppls-06,ss-hhhes-05,sv-speg-86,zn-spaeu-98}).
For example,
Sedgewick and Vitter~\cite{sv-speg-86}
and Goldberg and Harrelson~\cite{gh-cspasm-05}
study methods
based on applying AI search algorithms, and
Bast {\it et al.}~\cite{bfss-frrn-07} explore routing methods based on the
use of transit nodes.
In this related work, the coordinates of the network nodes are fixed
geometric points, whereas, in the greedy geometric routing problems we
study in this paper, vertices are assigned virtual coordinates so
as to support greedy routing.
\fi
\subsection{Our Results}
We provide a succinct greedy geometric routing scheme for 3-connected
planar graphs in ${\bf R}^2$. At the heart of our scheme is a new greedy
embedding for 3-connected planar graphs in ${\bf R}^2$ which exploits the
tree-like topology of a spanning (Christmas cactus) subgraph. Our
embedding allows us to form a coordinate system that uses $O(\log{n})$ bits
per vertex and allows distance comparisons to be performed on the
coordinate representations alone, consistently with the Euclidean metric.
\ifFull
Although we are primarily interested in such a coordinate system for greedy
geometric routing, we also give a simple global
compression scheme for greedy geometric routing, based on the
approach of Leighton and Moitra~\cite{lm-srgem-08}
and Angelini {\it et al.}~\cite{afg-acgdt-08}, which achieves $O(\log n)$
bits per vertex, which is asymptotically optimal.
\fi
\ifFull
Our coordinate scheme for greedy geometric routing in a graph $G$
is based on a three-phase approach. In the first phase, we find a
spanning subgraph, $C$, of $G$, called a Christmas cactus
graph~\cite{lm-srgem-08}. In the second phase, we find a
graph-theoretic dual to $C$, which is a tree, $T$, and we form a
heavy path decomposition on $T$.
Finally, in the third phase, we show how to use $T$ and $C$ to embed
$G$ in ${\bf R}^2$ to support greedy routing with coordinates that can be
represented using $O(\log^2{n})$ bits, and then we show how this can
be further reduced to $O(\log n)$ bits per node.
\fi
\section{Finite-Length Coordinate Systems}
Let us begin by formally defining what we mean by a coordinate
system, and how that differs, for instance, from a simple compression scheme.
Let $\Sigma$ be an alphabet, and let $\Sigma^*$ be the set of finite-length strings
over $\Sigma$.
We define a \emph{coordinate system} $f$ for a space $S$:
\begin{enumerate}
\item $f$ is a map, $f:\Sigma^* \rightarrow S$, which assigns
character strings to points of $S$.
\item $f$ may be \emph{parameterized}:
the assignment of strings to
points may depend on a fixed set of parameters.
\item $f$ is \emph{oblivious}:
the value of $f$ on any given $x\in \Sigma^*$ must depend
only on $f$'s parameters and $x$ itself. It cannot
rely on any other character
strings in $\Sigma^*$, points in $S$, or other values of $f$.
\end{enumerate}
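The obliviousness requirement is the operational difference between a coordinate system and a compression scheme: decoding a string may consult only the string itself and the fixed parameters. A toy illustration (both encodings below are invented for illustration):

```python
# Coordinate system: decoding uses only the string x and a fixed parameter.
def decode_coordinate(x, scale=1.0):
    """Map a string "a,b" to a point of R^2, obliviously."""
    a, b = x.split(",")
    return (scale * float(a), scale * float(b))

# Compression scheme: decoding consults a shared table of prior assignments,
# so it is NOT oblivious and does not qualify as a coordinate system.
table = {}
def decode_compressed(x):
    return table[x]  # depends on state outside the string itself

table["p1"] = (3.0, 4.0)
print(decode_coordinate("3,4"), decode_compressed("p1"))
```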
\ifFull
Clearly, this is a computationally-motivated definition of
a coordinate system, since real-world computations performed on
actual points must use finite representations of those points.
This is an issue and theme
present, for instance,
in computational geometry (e.g.,
see~\cite{%
bcddprty-pcet-99,by-ag-98,%
bepp-cegpu-97,by-eeesd-00,bkmnsu-egcl-95,%
em-sstcd-90,fv-eeacg-93,gght-srlse-97,gm-rad-98,ils-lcpfp-05,ph-srr-01,%
ps-cgi-90,s-apfpa-97,ssw-ctdrt-03}).
Note also that
our definition can be used to define finite versions of
all the usual coordinate systems, since it allows for the use of symbols
like ``$\pi$'', ``$/$,'' and $k$-th root symbols.
Thus, it supports finite coordinates using rational and algebraic
numbers, for example.
In addition, note that it supports points in non-Cartesian coordinate
systems, such as a finite-length polar coordinate system,
in that we can allow
strings of the form ``$(x,y)$'' where
$x$ is a string representing a value $r\in {\bf R}^{+}$ and $y$
is a string representing a value $\theta\in [0,2\pi)$, which may even use
``$\pi$''. It also allows for non-unique representations, like the
homogeneous coordinate system for ${\bf R}^2$, which uses triples
of strings, with each triple representing a point in the Euclidean
plane, albeit in a non-unique way.
\fi
If $f$ is lacking property 3, we prefer to think of $f$
as a \emph{compression scheme}.
\ifFull
Examples of compression schemes are
mappings that use look-up tables, which are built incrementally based on
sequences of previous point assignments~\cite{ils-lcpfp-05}.
Given a compression scheme $f:\Sigma_f^*\rightarrow S$,
note that it is possible to
construct a coordinate system $f':\Sigma_{f'}^*\rightarrow S$ by
augmenting strings in $\Sigma_f^*$ with the data required
to evaluate $f$ (such as the assignments of other points in a set of
interest).
\fi
\section{Greedy Routing in Christmas Cactus Graphs}
\label{section:greedy-routing}
Our method is a non-trivial adaptation of
the Leighton and Moitra scheme~\cite{lm-srgem-08}, so we begin
by reviewing some of the ideas from their work.
A graph $G$ is said to be a \emph{Christmas cactus graph} if: (1) each
edge of $G$ is in at most one cycle, (2) $G$ is connected, and (3) removing
any vertex disconnects $G$ into at most two components.
For ease of discussion, we consider any edge in a Christmas cactus graph that
is not in a simple cycle to be a simple cycle itself (a 2-cycle); hence,
every edge is in exactly one simple cycle.
The \emph{dual tree} of a Christmas cactus graph $G$ is a tree
containing a vertex for each simple cycle in $G$ with an edge
between two vertices if their corresponding cycles in $G$ share a vertex.
Rooting the dual tree at an arbitrary vertex creates what we call a \emph{depth tree}.\ifFull
(See Fig.~\ref{fig:cactus}.)
\begin{figure}[!hbt]
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=0.7]{xmas_cactus.pdf} &
\includegraphics[scale=0.7]{dual_tree2.pdf} \\
(a) & (b)
\end{tabular}
\end{center}
\caption{(a) A Christmas cactus graph and (b) its dual tree.}
\label{fig:cactus}
\end{figure}
\fi
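For concreteness, the dual tree can be read off the cycle list directly: two vertices are adjacent exactly when their cycles share a vertex (in a Christmas cactus graph, two cycles share at most one vertex). A minimal sketch, assuming the cycles are given as vertex lists; the function name and representation are ours:

```python
def dual_tree_edges(cycles):
    """Edges of the dual tree: cycles i and j are adjacent iff they share a vertex.

    cycles: list of vertex lists, one per simple cycle of the Christmas cactus
    (2-cycles included, per the convention above).
    """
    edges = []
    for i in range(len(cycles)):
        for j in range(i + 1, len(cycles)):
            if set(cycles[i]) & set(cycles[j]):
                edges.append((i, j))
    return edges
```

Rooting the resulting tree at any node then yields a depth tree.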
Having a depth tree allows us to apply the rooted tree terminology to
cycles in $G$. In particular: \emph{root}, \emph{depth}, \emph{parent}, \emph{child},
\emph{ancestor}, and \emph{descendant} all retain their familiar definitions.
We define the \emph{depth} of a node $v$ to be the minimum depth of any
cycle containing $v$. The unique node that a cycle $C$ shares with its parent
is called the \emph{primary node} of $C$. Node $v$ is a \emph{descendant} of
a cycle $C$ if $v$ is in a cycle that is a descendant of $C$ and $v$ is not
the primary node of $C$. Node $v$ is a \emph{descendant} of node $u$ if removing
neighbors of $u$ with depth less than or equal to that of $u$ leaves $u$ and $v$ in
the same component.
\subsection{Greedy Routing with a Christmas Cactus Graph Embedding}
\ifFull
Leighton and Moitra~\cite{lm-srgem-08} show that every 3-connected planar
graph contains a spanning Christmas cactus subgraph and that every Christmas
cactus graph has a greedy embedding in ${\bf R}^2$, which together imply
that 3-connected planar graphs have greedy embeddings in ${\bf R}^2$.
\fi
Working level by level in a depth tree, Leighton and Moitra~\cite{lm-srgem-08}
embed the cycles of a Christmas cactus graph on semi-circles of increasing
radii, centered at the origin. Within the embedding we say that vertex
$u$ is above vertex $v$ if $u$ is embedded farther from the origin than
$v$, and we say that $u$ is to the left of $v$ if $u$ is embedded in the positive
angular direction relative to $v$. We can define below and right similarly.
These comparisons naturally give rise to
directions of movement between adjacent vertices in the embedding:
up, down, left, and right.
\begin{figure}[!b]
\begin{center}
\begin{tabular}{c@{\hspace*{5em}}c}
\includegraphics[scale=0.7,viewport=0cm 3.5cm 7cm 8.5cm, clip=true]{route1.pdf}
&
\includegraphics[scale=0.7,viewport=0cm 3.5cm 7cm 8.5cm, clip=true]{route2.pdf} \\%[-8pt]
(a) & (b)
\end{tabular}
\end{center}
\caption{ Arrows indicate valid greedy hops.
(a) Descendants of $s$ can be reached by a simple path of
up and right hops, up and left hops, or
a combination of the two.
(b) If $t$ is not a descendant of $s$, then we route down and (left or right)
in the direction of $t$ until we reach an ancestor of $t$.
}
\label{fig:subcactus_at_s}
\label{fig:cactus_lca}
\end{figure}
Routing from start vertex $s$ to a terminal vertex $t$ in a Christmas
cactus graph embedding can be broken down into two cases: (1) $t$ is
a descendant of $s$, and (2) $t$ is not a descendant of $s$.
\begin{enumerate}
\item As shown in Fig.~\ref{fig:subcactus_at_s}(a), if $t$ is a descendant
of $s$, then we can route to $t$ by a simple path of up and
right hops, up and left hops, or a
combination of the two.
\item As shown in Fig.~\ref{fig:cactus_lca}(b), if $t$ is not a descendant
of $s$, then we route to the least common (cycle) ancestor of
$s$ and $t$. Suppose, without loss of generality, that $t$ is to
the left of $s$, then we can reach this cycle by a sequence of
down and left hops. Once on the cycle, we can
move left until we reach an ancestor of $t$. Now we are back in
case 1.
\end{enumerate}
\ifFull
\subsection{A Succinct Compression Scheme}
Using the Christmas cactus graph embedding discussed above, we can assign
succinct integer values to each vertex, allowing us to perform greedy
routing according to the Euclidean $L_2$ metric. Our embedding
$f:V(G)\rightarrow {\bf Z}_{n}^3$ produces a triple of the following integers:
$\mathrm{radialOrder}(v)$: the number of vertices to the right of $v$;
$\mathrm{level}(v)$: the number of semi-circles between the vertex
and the origin, excluding the semi-circle that $v$ is embedded on; and
$\mathrm{boundary}(v)$: the smallest $\mathrm{radialOrder}$ value of all vertices that are
descendants of $v$. The Leighton-Moitra embedding has the
property that all descendants of $v$ fall between $v$ and the
vertex embedded immediately to the right of $v$ on the same level as $v$.
Since each element of the triple can take on values in the range $[0,n]$,
the triple can be stored using $O(\log n)$ bits.
We can implement each step of the routing scheme using only the
triples of $s$, the neighbors of $s$, and $t$. Queries of the form
\emph{$u$ is left/right of $v$}, involve a straightforward
comparison of the $\mathrm{radialOrder}$ element of the triple. Likewise for \emph{$u$ is
above/below $v$}, using $\mathrm{level}$. The same comparisons can be used to determine
which neighbors of $s$ are a left, right, down, or
up move away. Finally, queries of the form \emph{$u$ is a descendant
of $v$} are true if and only if $\mathrm{boundary}(v) \leq \mathrm{radialOrder}(u) \leq
\mathrm{radialOrder}(v)$ and $\mathrm{level}(v) \leq \mathrm{level}(u)$.
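These queries can be phrased directly on the triples. A sketch in Python; the tuple layout $(\mathrm{radialOrder}, \mathrm{level}, \mathrm{boundary})$ and the function names are our own convention:

```python
def is_above(u, v):
    """u is embedded on a semi-circle farther from the origin than v."""
    return u[1] > v[1]          # compare level

def is_left_of(u, v):
    """u is left of v: more vertices lie to the right of u than of v."""
    return u[0] > v[0]          # compare radialOrder

def is_descendant(u, v):
    """u is a descendant of v, per the containment test in the text."""
    rad_u, lvl_u, _ = u
    rad_v, lvl_v, bnd_v = v
    return bnd_v <= rad_u <= rad_v and lvl_v <= lvl_u
```

Each query is a constant number of integer comparisons on $O(\log n)$-bit values.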
To extend this routing scheme to graphs that have a spanning Christmas cactus
subgraph, we need to ensure that the routing scheme
does not fail by following edges that are not in the Christmas cactus subgraph.
Since the Christmas cactus graph has bounded degree $4$, for a node $v$, we can
store the triples of neighbors of $v$ in the Christmas cactus graph, in addition
to storing the triple for $v$, and only allow our greedy routing scheme to
choose vertices that are neighbors in the Christmas cactus subgraph. Storing
these extra triples in the coordinate does not increase its asymptotic
bit-complexity.
This routing scheme is greedy according to the Euclidean coordinates of the
vertices, using the Euclidean $L_2$ metric. Unfortunately, if we only
have access to the integer triples then it is not obvious that there
is any metric that we can define that will satisfy the definition
for greedy routing using just these integer values. Therefore, we must
concede that, while this routing scheme fulfills the spirit of greedy
routing, it is not greedy routing in the strictest sense. This is an
example of a compression scheme and not a coordinate system.
\else
This routing scheme immediately gives rise to a simple
succinct compression scheme for 3-connected planar graphs, which we discuss in
the full version of this paper.
\fi
\section{Toward a Succinct Greedy Embedding}
Given a 3-connected planar graph, we can find a spanning Christmas cactus
subgraph in polynomial time~\cite{lm-srgem-08}. Therefore, we restrict our
attention to Christmas cactus graphs. Our results apply to
3-connected planar graphs with little or no modification. In this section, we construct a novel
greedy embedding scheme for any Christmas cactus graph in ${\bf R}^2$. We
then build a coordinate system from
our embedding and show that the coordinates can be represented using
$O(\log^2{n})$ bits. In the next section, we show how to achieve an
optimal $O(\log{n})$-bit representation.
\subsection{Heavy Path Decompositions}
We begin by applying the Sleator and Tarjan~\cite{SleTar-JCSS-83}
\emph{heavy path decomposition} to the depth tree $T$ for $G$.
\begin{definition} Let $T$ be a rooted tree. For each node $v$ in $T$,
let $n_{T}(v)$ denote the number of descendants of $v$ in $T$, including $v$.
For each edge $e = (v,\mathrm{parent}(v))$ in $T$, label $e$ as a heavy edge
if $n_{T}(v) > n_{T}(\mathrm{parent}(v))/2$. Otherwise, label $e$ as a light edge.
Connected components of heavy edges form paths, called heavy paths.
Vertices that are incident only to light edges are considered to be
zero-length heavy paths. We call this the
{\bfseries heavy path decomposition} of $T$.
\end{definition}
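The decomposition is computable with one pass to count descendants and one pass to follow heavy edges. A minimal sketch, assuming the rooted tree is given as a child-list dictionary; the names are ours:

```python
def heavy_path_decomposition(children, root):
    """Return the heavy paths of the rooted tree, each as a list of nodes."""
    size = {}

    def count(v):                      # n_T(v): descendants of v, including v
        size[v] = 1 + sum(count(c) for c in children.get(v, []))
        return size[v]

    count(root)
    paths = []

    def walk(v, path):
        path.append(v)
        # edge (c, v) is heavy iff n_T(c) > n_T(v)/2; at most one child qualifies
        heavy = [c for c in children.get(v, []) if 2 * size[c] > size[v]]
        light = [c for c in children.get(v, []) if 2 * size[c] <= size[v]]
        if heavy:
            walk(heavy[0], path)
        else:
            paths.append(path)         # heavy path ends at v
        for c in light:                # each light child starts a new path
            walk(c, [])

    walk(root, [])
    return paths
```

Every root-to-leaf path crosses $O(\log n)$ light edges, which is what later bounds the number of super levels.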
For ease of discussion, we again apply the terminology from nodes in $T$ to
cycles in $G$. A cycle in $G$ is on a heavy path $H$ if its dual node in $T$
is on $H$. Let $H$ be a heavy path in $T$. We say that $\mathrm{head}(H)$ is the
cycle in $H$ that has minimum depth; $\mathrm{tail}(H)$ is defined similarly.
Let $C_1$ and $C_2$ be two cycles such that $C_1 = \mathrm{parent}(C_2)$ and let
$\{p\} = V(C_1) \cap V(C_2)$.
If $C_1$ and $C_2$ are on the same heavy path then we call $p$ a \emph{turnpike}.
If $C_1$ and $C_2$ are on different heavy paths (where $C_1=\mathrm{tail}(H_1)$ and
$C_2 = \mathrm{head}(H_2)$) then we call $p$ an \emph{off-ramp} for $H_1$ and the vertices
$v\in V(C_2) \setminus \{p\}$ \emph{on-ramps} for $H_{2}$.
\subsection{An Overview of Our Embedding Strategy}
Like Leighton and Moitra~\cite{lm-srgem-08}, we lay the cycles
from our Christmas cactus graph on concentric semi-circles of radius
$1=R_0 < R_1 < R_2 \ldots$; however, our embedding has the
following distinct differences: we have $\Theta(n\log n)$ semi-circles
instead of $O(n)$ semi-circles, on-ramps to heavy paths are embedded on special
semi-circles which we call \emph{super levels},
turnpikes are placed in a predefined position when cycles are embedded,
and the radii of semi-circles can be computed without knowing the topology
of the particular Christmas cactus graph being embedded.
\ifFull
Since the path from the root to any leaf in the depth tree
contains $O(\log{n})$ heavy paths, our embedding has $O(\log{n})$
super levels. Between super levels we lay out the non-trivial
heavy paths on \emph{baby levels}. \fi
To make our embedding scheme amenable to a proof by induction, we
modify the input Christmas cactus graph. After constructing a greedy embedding of
this modified graph, we use it to prove that we have a greedy
embedding for the original graph.
\subsection{Modifying the Input Christmas Cactus Graph}
Given a Christmas cactus graph $G$ on $n$ vertices, we choose a depth
tree $T$ of $G$, and compute the heavy path decomposition of $T$.
For a cycle $C$ on a heavy path $H$, we define $\mathrm{relativeDepth}(C)$ to be
$\mathrm{depth}(C) - \mathrm{depth}(\mathrm{head}(H))$. For each $C_1$, $C_2 = \mathrm{child}(C_1)$ forming
a light edge in $T$, let $\{p\} = V(C_1) \cap V(C_2)$. Split $p$ into two
vertices $p_1$ and $p_2$ each on their own cycle, and connect $p_1$ to $p_2$
with a path of $n-1-\mathrm{relativeDepth}(C_1)$ edges. The new graph $G'$
is also a Christmas cactus graph, and our new depth tree $T'$ looks
like $T$ stretched out so that heads of heavy paths (from $T$) are
at depths that are multiples of $n$. (See Fig.~\ref{fig:heavy}.)
\leaveout{
\begin{figure}[!t]
\begin{center}
\begin{tabular}[b]{cc}
\includegraphics[scale=0.7]{depth_tree.pdf} &
\includegraphics[scale=0.7,clip=true,viewport=6cm 1.5cm 12.5cm 11cm]{mod_depth_tree.pdf} \\
(a) & (b)
\end{tabular}
\caption{(a) A depth tree $T$ with positive-length heavy paths highlighted, and
(b) the new depth tree $T'$ after the modification procedure.}
\label{fig:heavy}
\end{center}
\end{figure}
}
We continue to call the paths copied from $T$ heavy paths (though they do
not form a heavy path decomposition of $T'$), and the newly inserted edges
are \emph{dummy} edges.
\subsection{Embedding the Modified Christmas Cactus Graph in ${\bf R}^2$}
Given a Christmas cactus graph $G$ on $n$ vertices, run the modification
procedure described above and get $G'$ and $T'$. We embed $G'$ in
phases, and prove by induction that at the end of each phase
we have a greedy embedding of an induced subgraph of $G'$.
\begin{lemma}[Leighton and Moitra~\cite{lm-srgem-08}]
\label{lemma-geom}
\ifFull
If the coordinates
\begin{align*}
c &= (0,1 + z) \\
b &= (-\sin \beta, \cos \beta) \\
a &= (-(1 + \epsilon)\sin(\beta - \alpha),(1+\epsilon)\cos(\beta - \alpha))
\end{align*}
are subject to the constraints
\begin{align*}
0 < &\alpha,\beta \leq \pi/2 \\
0 < &\epsilon \leq \frac{1 - \cos \beta}{6} \\
0 \leq &z \leq \epsilon \\
\sin \alpha \leq &\frac{\epsilon(1 - \cos \beta)}{2(1+\epsilon)}
\end{align*}
then $d(a,c) - d(b,c) \geq \epsilon^2 > 0$.
\else %
If points
$c = (0,1 + z)$, $b = (-\sin \beta, \cos \beta)$, and
$a = (-(1 + \epsilon)\sin(\beta - \alpha),(1+\epsilon)\cos(\beta - \alpha))$
are subject to the constraints
$0 < \alpha \leq \pi/2$, $0 < \beta \leq \pi/2$,
$0 < \epsilon \leq (1 - \cos \beta)/6$,
$0 \leq z \leq \epsilon$, and
$\sin \alpha \leq \frac{\epsilon(1 - \cos \beta)}{2(1+\epsilon)}$
then $d(a,c) - d(b,c) \geq \epsilon^2 > 0$.
\fi
\end{lemma}
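A quick numeric sanity check of the lemma (the harness is ours; the sample parameters are chosen at the boundary of the constraints):

```python
import math

def gap(alpha, beta, eps, z):
    """d(a,c) - d(b,c) for the three points of the lemma."""
    c = (0.0, 1.0 + z)
    b = (-math.sin(beta), math.cos(beta))
    a = (-(1 + eps) * math.sin(beta - alpha),
         (1 + eps) * math.cos(beta - alpha))
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    return dist(a, c) - dist(b, c)

beta = math.pi / 3
eps = (1 - math.cos(beta)) / 6                       # largest allowed epsilon
z = eps / 2
alpha = math.asin(eps * (1 - math.cos(beta)) / (2 * (1 + eps)))
assert gap(alpha, beta, eps, z) >= eps ** 2 > 0      # the lemma's conclusion
```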
We begin by embedding the root cycle, $C = (v_0,\ldots,v_{k-1})$, of $T'$.
We trace out a semi-circle of radius $R_0=1$ centered at the origin and
divide the perimeter of this semi-circle into $2n+1$ equal arcs. We allow
vertices to be placed at the leftmost point of each arc, numbering these
positions $0$ to $2n$. We place vertices $v_0,\ldots,v_{k-1}$ clockwise into
any $k$ distinct positions, reserving position $n$ for $C$'s turnpike. If $C$
does not have a turnpike, as is the case if $C$ is a dummy edge or the tail
of a heavy path, then position $n$ remains empty. The embedding of $C$ is
greedy\Leaveout{}{\ (proof omitted here)}.
\leaveout{
\begin{proof} If $C$ is a 2-cycle, then
the embedding of $C$ is greedy regardless of where the vertices are
embedded. Otherwise, consider each segment $su\neq v_0v_{k-1}$. The
perpendicular bisector to $su$ does not intersect any of our
embedded vertices. $u$ is the neighbor of $s$ that is closer to every vertex
on the $u$ side of the perpendicular bisector. Since all such segments have this
property, the embedding of $C$ is greedy.
\end{proof}
}
\emph{Inductive Step:}
Suppose we have a greedy embedding of all cycles in $T'$ up to depth $i$,
call this induced subgraph $G_i'$. We show that the embedding can
be extended to a greedy embedding of $G_{i+1}'$. Our proof relies on
two values derived from the embedding of $G_i'$.
\begin{definition}
Let $s$, $t$ be any two distinct vertices in $G_i'$ and
fix $n_{s,t}$ to be a neighbor of $s$ such that $d(s,t) > d(n_{s,t},t)$.
We define $\delta(G_i')=\min_{s,t}\{d(s,t) - d(n_{s,t},t)\}$.
\end{definition}
We refer to the difference $d(s,t) - d(n_{s,t},t)$ as the \emph{delta value}
for distance-decreasing paths from $s$ to $t$ through $n_{s,t}$.
\begin{definition}
Let $\beta(G_i')$ be the minimum (non-zero) angle
that any two vertices in the embedding of $G_i'$ form with the origin.
\end{definition}
\begin{figure}[!t]
\vspace*{-28pt}
\begin{center}
\includegraphics[scale=0.7]{base.pdf}
\end{center}
\vspace*{-16pt}
\caption{$s$, $u$ and $t$ form a lower bound for $\delta(G_0')$.}
\vspace*{-16pt}
\label{fig:delta_0}
\end{figure}
Since we do not specify exact placement of all vertices, we cannot compute
$\delta(G_0')$ and $\beta(G_0')$ exactly. We instead compute positive
underestimates, $\delta_0$ and $\beta_0$, by considering hypothetical
vertex placements, and by invoking the following lemma.
\begin{lemma}
\label{lemma-delta}
Let $s$ and $u$ be two neighboring vertices embedded in the
plane. If there exists a vertex $t$ that is simultaneously
closest to the perpendicular bisector of $su$ (on the $u$ side),
and farthest from the line $su$, then the delta value for $s$ to $t$
through $u$ is the smallest for any choice of $t$.
\end{lemma}
Applying the above lemma to all hypothetical $s$, $u$, and $t$ placements for the
embedding of $G_0'$ leads to the underestimate
$\delta_0 = 2 - \sqrt{2 + 2\cos{\frac{\pi}{2n+1}}}
< d(s,t)-d(u,t) \leq \delta(G_0')$ where $s$, $u$, and $t$ are shown in Fig.~\ref{fig:delta_0}. Trivially, $\beta_0 = \frac{\pi}{2n+1} \leq \beta(G_0')$.
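Since $\sqrt{2+2\cos\theta} = 2\cos(\theta/2)$ for $\theta \in [0,\pi)$, the underestimate simplifies to $\delta_0 = 2\bigl(1-\cos\frac{\pi}{2(2n+1)}\bigr)$, which is positive for every $n \geq 1$. A quick check (our own):

```python
import math

def delta_0(n):
    return 2 - math.sqrt(2 + 2 * math.cos(math.pi / (2 * n + 1)))

def beta_0(n):
    return math.pi / (2 * n + 1)

for n in (1, 10, 100):
    # half-angle identity: delta_0 = 2 (1 - cos(pi / (2(2n+1))))
    assert math.isclose(delta_0(n),
                        2 * (1 - math.cos(math.pi / (2 * (2 * n + 1)))))
    assert delta_0(n) > 0 and beta_0(n) > 0
```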
We now show how to obtain a greedy embedding of $G_{i+1}'$, given a greedy embedding of $G_i'$ and values $\delta_i$ and $\beta_i$.
Let $\epsilon_i = \min\{\delta_i/3, R_i\frac{1-\cos{\frac{2}{3}\beta_i}}{6}\}$.
Trace out a semi-circle of radius $R_{i+1} = R_i + \epsilon_i$ centered at
the origin. Each cycle at depth $i+1$ of $T'$ has the form $C=(v, x_1,\ldots, x_m)$
where $v$, the primary node of $C$, has already been embedded on the $i$th
semi-circle. We embed vertices $x_1$ to $x_m$ in two subphases: \\
\noindent\emph{Subphase 1} We first embed vertex $x_1$ from each $C$.
Choose an orientation for $C$ so that $x_1$ is not a turnpike.\footnote{For
the case where $C$ is a 2-cycle and $x_1$ is a turnpike we insert a
temporary placeholder vertex $p$ into $C$ with edges to $v$ and $x_1$, and treat
$p$ as the new $x_1$. We can later remove this placeholder by transitivity.} We place $x_1$ where the ray beginning at the origin
and passing through $v$ meets semi-circle $i+1$. We now show that distance
decreasing paths exist between all pairs of vertices embedded thus far.
Distance decreasing paths between vertices in $G_i'$ are preserved by the
induction hypothesis. For $t$ placed during this subphase: $t$ has a neighbor $v$
embedded on semi-circle $i$. If $s=v$ then $s$'s neighbor $t$ is strictly closer to $t$.
Otherwise, if $s\in G_i'$, then since $t$ is within distance $\delta_i/3$ of $v$,
$s$'s neighbor $u$ that is closer to $v$ is also closer to $t$.
\ifFull
By definition of $\delta_i$, $d(s,v) \geq d(u,v) + \delta_i$.
Since $t$ is in the $\delta_i/3$-ball around $v$,
$d(s,t) \geq d(s,v) - \delta_i/3$, and $d(u,t) \leq d(u,v) + \delta_i/3$.
Then,
\begin{align*}
d(s,t) &\geq d(s,v) - \delta_i/3\\
&\geq d(u,v) + \delta_i - \delta_i/3\\
&\geq d(u,v) - \delta_i/3 + \delta_i - \delta_i/3\\
&= d(u,t) + \delta_i/3\\
&> d(u,t)
\end{align*}
Therefore, $s$'s neighbor $u$ that is closer to the primary node $v$ is also closer to $t$.
\fi
If $s$ was placed during this subphase then $s$ is within distance
$R_i\frac{1-\cos{\frac{2}{3}\beta_i}}{6}$ from its neighbor $v$, and
the perpendicular bisector of $sv$ contains $s$ on one side and
every other vertex placed on the other side. Therefore $s$'s neighbor
$v$ is closer to $t$.
The next subphase requires new underestimates, which we call $\delta^1_{i}$
and $\beta^1_i$. By construction, $\beta^1_i = \beta_i$.
No $s$--$t$ paths within $G_i'$ decrease the delta value.
Paths from $s\in G_i'$ to $t$ placed in this subphase have delta value at
least $\delta_i/3$ by design.
\ifFull This follows directly from the proof of greediness of this
subphase.\fi
For paths from $s$ placed in this subphase, $s$'s neighbor $v$ is
the closest vertex to the perpendicular bisector
of $sv$ on the $v$ side. If we translate $v$ along the perpendicular bisector of $sv$
to a distance of $R_{i+1}$ from $sv$, this hypothetical point allows us to invoke
Lemma~\ref{lemma-delta} to get an underestimate for the delta value of all
paths beginning with $s$. Therefore, our new underestimate is:
$\delta^1_{i} = \min\{\delta_i/3, \sqrt{R_{i+1}^2 + \epsilon_i^2} - R_{i+1}\}$.\\
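The parameter updates of this subphase can be collected into one routine. A sketch (our own packaging of the formulas above):

```python
import math

def subphase1_update(R_i, delta_i, beta_i):
    """Radius of the next semi-circle and the updated underestimates."""
    eps_i = min(delta_i / 3, R_i * (1 - math.cos(2 * beta_i / 3)) / 6)
    R_next = R_i + eps_i
    delta_1 = min(delta_i / 3, math.sqrt(R_next ** 2 + eps_i ** 2) - R_next)
    beta_1 = beta_i                     # unchanged by construction
    return R_next, eps_i, delta_1, beta_1
```

Note how quickly the delta underestimate shrinks from one level to the next; this is why a naive encoding of the Euclidean coordinates would not be succinct.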
\noindent\emph{Subphase 2} We now finish embedding each cycle
$C = (v, x_1,\ldots, x_m)$. Let $\alpha = \min\{\beta^1_{i}/3,$ $\delta^1_{i}/(3R_{i+1})\}$;
this choice ensures that $\sin{\alpha} \leq
\frac{\epsilon_i(1-\cos{\frac{2}{3}\beta^1_i})}{2(1+\epsilon_i)}$.
Trace out an arc of length $R_{i+1}\alpha$ from the embedding of $x_1$, clockwise
along semi-circle $i+1$. We evenly divide this arc into $2n+1$
positions, numbered $0$ to $2n$. Position $0$ is already filled by $x_1$.
We embed vertices in clockwise order around the arc in $m-1$
distinct positions; reserving position $n$ for $C$'s turnpike. If
there is no such node, position $n$ remains empty.
This completes the embedding of $G_{i+1}'$. We show that the embedding of
$G_{i+1}'$ is greedy. We only need to consider distance decreasing paths
that involve a vertex placed during this subphase. For $t$ placed during
this subphase, $t$ is within distance $\delta^1_{i}/3$ from an $x_1$,
therefore, all previously placed $s \neq x_1$ have a neighbor $u$ that is closer
to $t$. If $s=x_1$ then $s$'s neighbor closer to $t$ is $x_2$.
Finally, for $s$ placed during this subphase, let the cycle that
$s$ is on be $C=(v, x_1,\ldots,x_m)$. For $s=x_i\neq x_m$,
since $\alpha \leq \beta^1_i/3$, the interior of the sector formed by
$x_1$, $x_m$ and the origin is empty, therefore
$t$ is either on the $x_{i-1}$ side of the
perpendicular bisector to $x_{i-1}x_i$ or on the $x_{i+1}$
side of the perpendicular bisector to $x_ix_{i+1}$.
If $s=x_m$ and $t$ is embedded to the left of $s$, the closer neighbor
is $x_{m-1}$. Otherwise, applying Lemma~\ref{lemma-geom},
our choice of $\sin{\alpha} \leq
\frac{\epsilon_i(1-\cos{\frac{2}{3}\beta^1_i})}{2(1+\epsilon_i)}$
forces the perpendicular bisector to $sv$ to have $s$ on one side, and
all nodes to the right of $s$ on the other side. All cases are considered,
so the embedding of $G_{i+1}'$ is greedy.
To complete the inductive proof, we must compute $\delta_{i+1}$ and $\beta_{i+1}$.
Trivially, $\beta_{i+1} = \frac{\alpha}{2n} \leq \beta(G_{i+1}')$.
Distance decreasing paths between vertices placed before this subphase
will not update the delta value. Therefore, we only evaluate paths
with $s$ or $t$ embedded during this subphase.
By design, paths from $s$ previously placed to $t$ placed during
this subphase have a delta value $\geq \delta_{i}^1/3$. Distance-decreasing paths
from $s$ placed in this subphase to $t\in G_{i+1}'$ take two
different directions. If $s$'s neighbor $u$ which is closer to $t$
is on semi-circle $i+1$ then points that are closest to the
perpendicular bisector to $su$ are along the perimeter of the sector formed
by $s$, $u$, and the origin. The point closest to the perpendicular bisector
is where the first semi-circle intersects the sector. We translate this
point down $R_{i+1} + 2$ units along the perpendicular
bisector, and we have an underestimate for the delta value for
any path beginning with a left/right edge. If $s$'s
neighbor that is closer to $t$ is on the $i$th semi-circle, then
a down edge is followed. To finish, we evaluate down edges $su$
added during the second subphase. The closest vertex to the
perpendicular bisector to $su$ on the $u$ side is either $u$, or the
vertex placed in the next clockwise position on the $(i+1)$st semi-circle.
Translating this point $2R_{i+1}$ units away from $su$ along
the perpendicular bisector gives us an underestimate for
paths beginning with $su$.
This completes the proof for the greedy embedding of $G'$.
We call the levels where the on-ramps to heavy paths are embedded
\emph{super levels},
and all other levels are \emph{baby levels}. There are $n-1$
baby levels between consecutive super levels and, since any path from root to leaf
in a depth tree travels through $O(\log n)$ different heavy paths, there are $O(\log{n})$ super
levels.
\ifFull %
\begin{figure}[!t]
\begin{center}
\begin{tabular}{ccc}
\includegraphics[scale=0.8,viewport=2.0cm 2.0cm 9.5cm 7.5cm,clip=true]{removal1.pdf}
&
\hspace{10pt}
&
\includegraphics[scale=0.8,viewport=2.0cm 2.0cm 9.5cm 7.5cm,clip=true]{removal2.pdf} \\
(a) && (b)
\end{tabular}
\caption{(a) Before removal of dummy nodes and (b) after removal.}
\label{fig:removal}
\end{center}
\end{figure}
\subsection{Obtaining a Greedy Embedding of $G$}
Let $G'$ be a modified Christmas cactus graph greedily embedded using the
procedure discussed above. We now show that collapsing the dummy edges
leaves us with a graph $G$ and a greedy embedding of $G$.
Let $C_1$, $C_2$ be any two cycles with a path of dummy edges between
them. We show that collapsing this path down to a single vertex gives
us new graph that is also greedily embedded.
\begin{proof}
Assume, without loss of generality, that $C_2$ is a descendant of $C_1$.
Let $P$ be the path of dummy edges between $C_1$ and $C_2$. Let $p_1$ be the vertex
that cycle $C_1$ shares with $P$, let $p_2$ be the vertex that cycle $C_2$
shares with $P$.
Collapse the path $P$ down to the vertex $p_1$, call this new graph $G''$.
We assign vertices in $G''$ the same coordinates in ${\bf R}^2$ that they are
assigned in the embedding of $G'$. (See Fig.~\ref{fig:removal}.)
We show that distance-decreasing paths exist between all pairs of vertices
in the embedding of $G''$, using the greediness of the embedding of $G'$.
Consider any two vertices $s$ and $t$ in $G''$. There are four cases:
\begin{enumerate}
\item If a distance-decreasing path from $s$ to $t$ in $G'$ involves both
$p_1$ and $p_2$, then there is a distance-decreasing path in $G''$
by transitivity.
\item If a distance-decreasing path from $s$ to $t$ in $G'$ involves $p_1$
and not $p_2$, then the same distance-decreasing path exists in $G''$
since no vertices or edges on this path were modified.
\item If a distance-decreasing path from $s$ to $t$ in $G'$ involves $p_2$ and
not $p_1$, then either $s$ or $t$ is not in $G''$. Therefore, this case
is irrelevant.
\item If a distance-decreasing path from $s$ to $t$ in $G'$ involves neither
$p_1$ nor $p_2$, then the same distance-decreasing path exists in $G''$
since no vertices or edges on this path were modified.
\end{enumerate}
Therefore, since there are distance decreasing paths between all $s$ and $t$ in
our embedding of $G'$, there are distance-decreasing paths between all $s$ and $t$
in the embedding of our new graph as well.
\end{proof}
Furthermore, every distance-decreasing path in $G''$ looks like the same
path from $G'$, but with vertices in $P \setminus \{p_1\}$ removed.
We apply the above modification algorithm to $G'$ repeatedly, until all
dummy edges are removed. After removing all of the dummy edges in this
way, we have our original graph $G$ and a greedy embedding of $G$.
\else
To obtain a greedy embedding for $G$, we repeatedly collapse dummy edges
in $G'$ until we get $G$. When we collapse an edge $(p,x_1)$,
where $p$ is the primary node for the 2-cycle, we collapse
the edge to vertex $p$ in the embedding. Collapsing in the other
direction may break distance decreasing paths through $p$ and neighbors of $p$
embedded on the same semi-circle as $p$. After collapsing all such
dummy edges, we have a greedy embedding of $G$.
\fi
\subsection{Our Coordinate System}
Let $v$ be a vertex in $G$. We define $\mathrm{level}(v)$ to be the number
of baby levels between $v$ and the previous super level
(zero if $v$ is on a super level) and $\mathrm{cycle}(v)$ to be the
position, $0$ to $2n$, where $v$ is placed when its
cycle is embedded. These values can be assigned to vertices
without performing the embedding procedure.
Let $s$ be $v$'s ancestor on the first super level.
The path from $s$ to $v$ passes through $O(\log{n})$ heavy paths, entering
each heavy path at an on-ramp, and leaving at an off-ramp.
We define $v$'s coordinate to be an $O(\log{n})$-tuple
consisting of the collection of $(\mathrm{level}(\cdot), \mathrm{cycle}(\cdot))$ pairs for each
off-ramp where a change in heavy paths occurs on the path from $s$ to $v$, and the pair
$(\mathrm{level}(v),\mathrm{cycle}(v))$, which is either an off-ramp or a turnpike.
Using the coordinate for $v$ and the parameter $n$, we can compute the
Euclidean coordinates for all the turnpikes and off-ramps where a change
in heavy path occurs on the path from $s$ to $v$, including the
coordinate for $v$. Thus, we have defined a coordinate system for
the Euclidean plane.
Using a straightforward encoding scheme, each level-cycle pair is encoded
using $O(\log{n})$ bits. Since a coordinate contains $O(\log{n})$ of
these pairs, we encode each coordinate using $O(\log^2{n})$ bits.
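A fixed-width packing suffices for this bound: each of the $O(\log n)$ level-cycle pairs becomes two fields of $\lceil\log_2(2n+1)\rceil$ bits. A sketch (our own encoding; the next section replaces it with a variable-length one):

```python
def encode(pairs, n):
    """Pack (level, cycle) pairs, each value in [0, 2n], into a bit string."""
    width = (2 * n).bit_length()        # bits per field
    return ''.join(format(v, f'0{width}b') for p in pairs for v in p)

def decode(bits, n):
    """Inverse of encode: recover the list of (level, cycle) pairs."""
    width = (2 * n).bit_length()
    vals = [int(bits[i:i + width], 2) for i in range(0, len(bits), width)]
    return [(vals[i], vals[i + 1]) for i in range(0, len(vals), 2)]
```

With $O(\log n)$ pairs of $O(\log n)$ bits each, the total is $O(\log^2 n)$ bits per coordinate.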
\subsection{Greedy Routing with Coordinate Representations}
\ifFull
Although contrived, it is possible to perform greedy geometric
routing by converting our coordinates to Euclidean points and using
the Euclidean $L_2$ metric whenever we need to make a comparison
along the greedy route. Alternatively, we can define a comparison rule,
which can be used for greedy routing
in our coordinate system, and which evaluates consistently with the
$L_2$ metric for all vertices on the path from start to goal.
\fi
By design, the routing scheme discussed in Sect.~\ref{section:greedy-routing}
is greedy for our embedding. We develop a comparison rule
using the potential number of edges that may be traversed on
a specific path from $s$ to $t$.
Let $s_i$ be the vertex between super levels $i$ and $i+1$, whose
$\mathrm{level}$-$\mathrm{cycle}$ pair is in position $i$ of $s$'s coordinate.
We define $t_i$ similarly. Let $\mathrm{superlevel}(s)$ be the position that
contains the level-cycle pair for $s$ itself.
Let $h$ be the smallest integer such that $s_h$ and $t_h$ differ.
Using the level-cycle pairs for $s_h$ and $t_h$, we
can compute the level-cycle pair for the off-ramps on the least
common ancestor $C$ that diverge toward $s$ and $t$, which we
call $s_C$ and $t_C$. That is, if
$\mathrm{level}(s_h) = \mathrm{level}(t_h)$ then $s_C = s_h$ and
$t_C = t_h$. Otherwise, assume without loss of generality that
$\mathrm{level}(s_h) < \mathrm{level}(t_h)$, then $s_C$'s pair is $(\mathrm{level}(s_h),\mathrm{cycle}(s_h))$
and $t_C$ is a turnpike with the pair $(\mathrm{level}(s_h), n)$.
We define $l$, $r$, $d$, and $u$ to be the potential numbers of left,
right, down, and up edges that may be traversed from $s$ to $t$. Values
$d$ and $u$ are simply the number of semi-circles passed through by
down and up hops, respectively. That is,
{\small
\[d = (\mathrm{superlevel}(s)\cdot n + \mathrm{level}(s)) - (hn + \mathrm{level}(s_C))\]
\[u = (\mathrm{superlevel}(t)\cdot n + \mathrm{level}(t)) - (hn + \mathrm{level}(t_C)).\]
}
If $\mathrm{cycle}(t_C) < \mathrm{cycle}(s_C)$, then we count the maximum number of left
edges on the path from $s$ to $t_C$, and the maximum number of right
edges from $t_C$ to $t$. That is,
{\small
\[l = \begin{cases}
\mathrm{cycle}(s) + 2n(d-1) + \mathrm{cycle}(s_C) - \mathrm{cycle}(t_C)&\text{if $s \neq s_C$}, \\
\mathrm{cycle}(s_C) - \mathrm{cycle}(t_C) &\text{if $s = s_C$}. \end{cases}\]
\[r = \begin{cases}
2n(u-1) + \mathrm{cycle}(t) &\text{if $t \neq t_C$}, \\
0 &\text{if $t = t_C$}. \end{cases}\]
}
If $\mathrm{cycle}(t_C) \geq \mathrm{cycle}(s_C)$, then we count the maximum number of right
edges on the path from $s$ to $t_C$, and the maximum number of right edges
from $t_C$ to $t$. That is,
{\small
\[l = 0\]
\[r = r_1 + r_2,\text{ where}\]
\[r_1 = \begin{cases}
2n-\mathrm{cycle}(s) + 2n(d-1) + \mathrm{cycle}(t_C) - \mathrm{cycle}(s_C) &\text{if $s \neq s_C$}, \\
\mathrm{cycle}(t_C) - \mathrm{cycle}(s_C) &\text{if $s = s_C$}. \end{cases}\]
\[r_2 = \begin{cases}
2n(u-1) + \mathrm{cycle}(t) &\text{if $t \neq t_C$}, \\
0&\text{if $t = t_C$}. \end{cases}\]
}
Our comparison rule is:
\[D(s,t) = l + r + (2n+1)u + d.\]
Following the routing scheme from Sect.~\ref{section:greedy-routing}, any
move we make toward the goal will decrease $D(\cdot,\cdot)$, and all other
moves will increase $D(\cdot,\cdot)$ or leave it unchanged.
Therefore, we can use this comparison rule to perform greedy routing
in our embedding efficiently, and comparisons made along the greedy route
will evaluate consistently with the corresponding Euclidean coordinates
under the $L_2$ metric.
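Putting the case analysis together, the comparison rule is computable from the level-cycle pairs alone. A sketch (our own packaging of the formulas above): vertices are dicts with keys `level`, `cycle`, and, for $s$ and $t$, `superlevel`; as a simplifying assumption of this sketch, identity with an off-ramp is tested by comparing level-cycle pairs.

```python
def D(n, h, s, t, sC, tC):
    """Comparison rule D(s,t) = l + r + (2n+1)u + d from the text."""
    same = lambda a, b: (a['level'], a['cycle']) == (b['level'], b['cycle'])
    d = (s['superlevel'] * n + s['level']) - (h * n + sC['level'])
    u = (t['superlevel'] * n + t['level']) - (h * n + tC['level'])
    if tC['cycle'] < sC['cycle']:       # t lies to the left of s
        l = (sC['cycle'] - tC['cycle'] if same(s, sC)
             else s['cycle'] + 2 * n * (d - 1) + sC['cycle'] - tC['cycle'])
        r = 0 if same(t, tC) else 2 * n * (u - 1) + t['cycle']
    else:                               # only right moves are needed
        l = 0
        r1 = (tC['cycle'] - sC['cycle'] if same(s, sC)
              else 2 * n - s['cycle'] + 2 * n * (d - 1)
                   + tC['cycle'] - sC['cycle'])
        r2 = 0 if same(t, tC) else 2 * n * (u - 1) + t['cycle']
        r = r1 + r2
    return l + r + (2 * n + 1) * u + d
```

The weighting $(2n+1)u$ makes every down hop toward the common ancestor, and every sideways hop in the right direction, strictly decrease $D$.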
\section{An Optimal Succinct Greedy Embedding}
Conceptually, the $\mathrm{level}(\cdot)$ and $\mathrm{cycle}(\cdot)$ values used
in the previous section are
encoded as integers whose binary representation corresponds to
a path from root to a leaf in a full binary tree with $n$ leaves. Instead
of encoding with a static $O(\log{n})$ bits per integer, we will
modify our embedding procedure so we can further exploit the heavy path
decomposition of the dual tree $T$, using
\emph{weight-balanced binary trees}~\ifFull\cite{GilMoo-BSTJ-59,Knu-AI-71}.\else%
\cite{Knu-AI-71}.\fi
\begin{definition} A {\bfseries weight-balanced binary tree} is a binary tree
which stores weighted items from a total order in its leaves. If item $i$
has weight $w_i$, and all items have a combined weight of $W$ then item
$i$ is stored at depth $O(\log{W/w_i})$. An inorder listing of the leaves outputs
the items in order.
\end{definition}
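One simple way to realize the depth bound is to split the ordered items as evenly as possible by weight at every node. A sketch of this standard construction (not necessarily the variant cited; names are ours):

```python
def build_wbbt(items):
    """items: ordered list of (key, weight); returns nested pairs, keys at leaves."""
    if len(items) == 1:
        return items[0][0]
    total = sum(w for _, w in items)
    acc, i = 0, 0
    # advance the split point while the left part holds at most half the weight
    while i < len(items) - 1 and 2 * (acc + items[i][1]) <= total:
        acc += items[i][1]
        i += 1
    i = max(i, 1)                       # each side must be non-empty
    return (build_wbbt(items[:i]), build_wbbt(items[i:]))

def leaves(t):
    """Inorder listing of leaf keys."""
    return [t] if not isinstance(t, tuple) else leaves(t[0]) + leaves(t[1])
```

The root-to-leaf path of an item then serves as its variable-length code: heavy items get short codes.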
\ifFull By using appropriate weight functions with our weight-balanced
binary trees, we will be able to get telescoping sums
for the lengths of the codes for the $\mathrm{level}(\cdot)$
and $\mathrm{cycle}(\cdot)$ values,
giving us $O(\log{n})$ bits per coordinate, which is optimal. \fi
\subsection{Encoding the Level Values}
\ifFull
As in the $O(\log^2{n})$ embedding, we will lay the heavy paths between
super levels. However, we no longer require the on-ramps of heavy paths to be
embedded on super levels, nor do we require adjacent cycles on the same
heavy path to be embedded on consecutive levels; instead, cycles will be
assigned to baby levels by an encoding derived from a
weight-balanced binary tree. \fi
\ifFull
We will have a different weight-balanced binary tree for each heavy path
in our depth tree.
The items that we store in the tree are the cycles on the heavy path.
The path in the weight-balanced binary tree from the root to
the leaf containing a cycle gives us an encoding for the $\mathrm{level}$ that the
cycle should be embedded on between super levels.
\fi
Suppose we have a depth tree $T$ for $G$, and a heavy path
decomposition of $T$. Let $C$ be a simple cycle in $G$ on some heavy path
$H$ and let $C_{\mathrm{next}}$ be the next cycle on the heavy path $H$,
if it exists. Let $n(C)$ be the number of vertex descendants of
$C$ in $G$. We define a weight function $\gamma(\cdot)$ on the cycles
in $G$ as follows:
{\small
\[
\gamma(C) = \begin{cases}
n(C)&\text{if $C = \mathrm{tail}(H)$}, \\
n(C) - n(C_{\mathrm{next}})&\text{if $C \neq \mathrm{tail}(H)$}.
\end{cases}
\]
}
\ifFull \noindent That is, $\gamma(C)$ is the number of descendants of cycle $C$ in $G$
excluding the descendants of the next cycle on the heavy path with $C$. \fi
For each heavy path $H$, create a weight-balanced binary tree $B_H$ containing
each cycle $C$ in $H$ as an item with weight $\gamma(C)$, and impose a total order
so that cycles are in their path order from $\mathrm{head}(H)$ to $\mathrm{tail}(H)$.
Let $v$ be a vertex whose coordinate we wish to encode, and suppose $v$ is located
between super levels $l$ and $l+1$. Let $v_i$ be the vertex whose $\mathrm{level}$-$\mathrm{cycle}$
pair is in position $i$ of $v$'s coordinate. Let $v_i$ be contained in cycle $C_i$
(such that $v_i$ is not $C_i$'s primary node) on heavy path $H_i$.
\ifFull
Then the coordinate for $v$ will contain the collection of
$\mathrm{level}(\cdot)$ values for each
off-ramp $v_i$ on the path to $v$, and the $\mathrm{level}(\cdot)$ value for $v$
itself. Let $C_i$ be the cycle containing vertex $v_i$, such
that $v_i$ is not the primary node for $C_i\in H_i$. \fi
The code for $\mathrm{level}(v_i)$ is a bit-string representing the path
from root to the leaf for $C_i$ in the weight-balanced binary tree
$B_{H_i}$. Let $W_i$ be the combined weight of the items in $B_{H_i}$.
Since $C_i$ is at a depth of $O(\log{W_i/\gamma(C_i)})$, this is the
length of the code.
Thus, the level values in $v$'s coordinate are encoded with
$O(\sum_{0\leq i\leq l}\log{W_i/\gamma(C_i)})$ bits total. \ifFull
We now show that this is a telescoping sum, giving us $O(\log{n})$ bits total.
All descendants counted in $W_i$ are counted in $\gamma(C_{i-1})$,
therefore, we have that $\gamma(C_{i-1}) \geq W_i$. By subtracting off
descendants that are further along the heavy path, we ensure
that $W_0 = n$. Thus, $\sum_{0\leq i\leq l} \log{W_i/\gamma(C_{i})}
\leq \log{W_0/\gamma(C_l)} \leq \log{n}$. \else
By design, this sum telescopes to $O(\log{n})$ bits.
\fi
\subsection{Encoding the Cycle Values}
For a node $v$ in $G$ we define a weight function $\mu(v)$ to be the number
of descendants of $v$ in $G$.
Let $C=(p,x_1,x_2,\ldots,x_m)$ be a cycle in $G$, where $p$ is
the primary node of $C$. Let $x_h$ be the turnpike that connects
$C$ to the next cycle on the heavy path, if it exists.
Let $x_i$ have weight $\mu(x_i)$ and impose a total order so
$x_j<x_k$ if $j<k$. For each cycle $C$, we create a weight-balanced binary tree $B_C$
containing nodes $x_1$ to $x_m$ as follows. We first create
two weight-balanced binary trees $B^1_C$ and $B^2_C$
where $B^1_C$ contains $x_j$ for $j < h$ and $B^2_C$ contains
$x_k$ for $k > h$. If no such $x_h$ exists, then choose an
integer $1\leq k\leq m$ and insert items $x_j$ for $j<k$ into
$B^1_C$ and insert the remaining items into $B^2_C$. We form our single
weight-balanced binary tree $B_C$ in two steps: (1) create a tree $B^3_C$ with
$B^1_C$ as a left subtree and a node for $x_h$ as a right subtree, and (2)
form $B_C$ with $B^3_C$ as a left subtree and $B^2_C$ as a right subtree.
We build $B_C$ in this way to ensure that every turnpike is given the same
path within its tree, and hence the same cycle code and value.
The code for $\mathrm{cycle}(v_i)$ is a bit-string representing the path from root to the
leaf for $v_i$ in the weight-balanced binary tree $B_{C_i}$. Let $W_i$ be the
combined weight of the items in $B_{C_i}$. Since $v_i$
is at a depth of $O(\log{W_i/\mu(v_i)})$, this is the length of the code.
Thus, the $\mathrm{cycle}$ values in $v$'s coordinate are encoded
with $O(\sum_{0\leq i \leq l}\log{W_i/\mu(v_i)})$ bits total.
\ifFull We now show that this is a telescoping sum, giving us $O(\log{n})$
bits total.
Every descendant counted in $W_i$ is also counted in $\mu(v_{i-1})$,
thus $\mu(v_{i-1}) \geq W_i$. By design, $W_0 = n$. Hence
$\sum_{0\leq i\leq l} \log{W_i/\mu(v_{i})} \leq \log{W_0/\mu(v_l)} \leq \log{n}$.
\else
By design, this sum telescopes to $O(\log{n})$ bits.
\fi
\subsection{Interpreting the Codes}
Let $c$ be the smallest integer constant such that item $i$ stored
in the weight-balanced binary
tree is at depth $\leq c\log{W/w_i}$. We can treat the position of $i$
in the weight-balanced binary tree as a position in a full binary tree of
height $c\log{n}$. We interpret this code to be the number of tree nodes
preceding $i$ in an in-order traversal of the full binary tree. Using our
codes as described, we require $2n^c - 2$ baby levels between each super
level and $8n^c - 1$ cycle positions.
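The interpretation of a code as an in-order position admits a short computational sketch (illustrative only; the function name is ours): walking the bit string from the root, every right turn skips the current node together with its entire left subtree.

```python
def inorder_code(path, height):
    """Number of nodes preceding, in an in-order traversal of a full
    binary tree of the given height, the node reached from the root by
    `path` (a string of '0' = go left, '1' = go right)."""
    preceding = 0
    depth = 0
    for bit in path:
        if bit == "1":
            # skip the current node and its whole left subtree
            preceding += 2 ** (height - depth)
        depth += 1
    # finally skip the left subtree hanging below the node itself
    preceding += 2 ** (height - depth) - 1
    return preceding
```

For the seven nodes of a full binary tree of height $2$, the computed values reproduce the in-order positions $0,1,\ldots,6$.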
\subsection{An Overview of the Optimal Embedding}
Let $T$ be the depth tree for our Christmas cactus graph $G$.
We create weight-balanced binary trees on the heavy paths in $T$ and
on each of the cycles in $G$, giving us the $\mathrm{level}$ and $\mathrm{cycle}$ codes for
every vertex. We adjust the graph modification procedure
so that adjacent cycles on heavy paths are spaced out according to the
level codes. That is, adjacent cycles on the same heavy path have
heavy dummy edges (dummy edges that are considered to be on
the heavy path) inserted between them so that they are placed on
the appropriate baby levels. For cycles on different heavy paths,
we insert dummy edges to pad out to the next superlevel,
and heavy dummy edges to pad out to the appropriate baby level.
We embed the modified graph analogously to our $O(\log^2{n})$ embedding, except
that the cycle codes dictate vertex placements.
We augment our coordinate system to store the $\mathrm{level}$ value
for elements on the root cycle, otherwise it is not possible to compute the
corresponding Euclidean point from our succinct representation.
The same comparison rule applies to our new coordinate system, with minor
changes to account for the new range of $\mathrm{level}$ and $\mathrm{cycle}$ values. Using this
embedding scheme and coordinate system we achieve optimal
$O(\log n)$ bits per coordinate.
\ifFull
\section{Conclusion}
We have provided a succinct coordinate-based representation for the
vertices in 3-connected planar graphs so as to support greedy routing in
${\bf R}^2$.
Our method uses $O(\log n)$ bits per vertex and allows greedy routing
to proceed using only our representation, in a way that is consistent
with the Euclidean metric.
For future work, it would be interesting to design an efficient
distributed algorithm to perform such
embeddings.
\fi
\bibliographystyle{abbrv}
\section{Introduction}\label{AR_s:introduction}
\subsection{Appetizer}\label{AR_s:appetizer}
Consider throwing balls labeled $1, 2, \ldots, n$ into
a V-shaped bin with perpendicular sides.
\begin{center}
\[
\begin{tikzpicture}[scale=0.14]
\draw (-97,9) -- (-88,0) -- (-79,9); \draw (-88,3) circle (2);
\draw(-88,1.25) node[above]{$1$};
\draw (-75,9) -- (-66,0) -- (-57,9); \draw (-66,3) circle (2);
\draw(-66,1.25) node[above]{$1$}; \draw (-69,6) circle (2);
\draw(-69,4.25) node[above]{$2$};
\draw (-53,9) -- (-44,0) -- (-35,9); \draw (-44,3) circle (2);
\draw(-44,1.25) node[above]{$1$}; \draw (-47,6) circle (2);
\draw(-47,4.25) node[above]{$2$}; \draw (-41,6) circle (2);
\draw(-41,4.25) node[above]{$3$};
\draw (-31,9) -- (-22,0) -- (-13,9); \draw (-22,3) circle (2);
\draw(-22,1.25) node[above]{$1$}; \draw (-25,6) circle (2);
\draw(-25,4.25) node[above]{$2$}; \draw (-19,6) circle (2);
\draw(-19,4.25) node[above]{$3$}; \draw (-16,9) circle (2);
\draw(-16,7.25) node[above]{$4$};
\draw (-9,9) -- (0,0) -- (9,9); \draw (0,3) circle (2);
\draw(0,1.25) node[above]{$1$}; \draw (-3,6) circle (2);
\draw(-3,4.25) node[above]{$2$}; \draw (3,6) circle (2);
\draw(3,4.25) node[above]{$3$}; \draw (6,9) circle (2);
\draw(6,7.25) node[above]{$4$}; \draw (0,9) circle (2);
\draw(0,7.25) node[above]{$5$};
\end{tikzpicture}
\]
\end{center}
\begin{question}\label{AR_q:appet}
What is the total number of resulting configurations?
How many configurations are there of any particular shape?
\end{question}
In order to answer these questions, at least partially, recall the
symmetric group $\Sc_n$ of all permutations of the numbers $1,
\ldots, n$. An {\dem involution} is a permutation $\pi \in \Sc_n$
such that $\pi^2$ is the identity permutation.
\begin{theorem}\label{AR_t:appet1}
The total number of configurations of $n$ balls is equal to the
number of involutions in the symmetric group $\Sc_n$.
\end{theorem}
Theorem~\ref{AR_t:appet1} may be traced back to Frobenius and Schur.
A combinatorial proof will be outlined in Section~\ref{AR_s:JdT}
(see Corollary~\ref{AR_t:RSK_cor1}).
\begin{example}
There are four configurations on three balls.
Indeed,
\[
\{\pi\in \Sc_3 \,:\, \pi^2=1\}=\{123,132,213,321\}.
\]
\end{example}
\begin{center}
\[
\begin{tikzpicture}[scale=0.14]
\draw (-75,9) -- (-66,0) -- (-57,9); \draw (-66,3) circle (2);
\draw(-66,1.25) node[above]{$1$}; \draw (-69,6) circle (2);
\draw(-69,4.25) node[above]{$2$}; \draw (-72,9) circle (2);
\draw(-72,7.25) node[above]{$3$};
\draw (-53,9) -- (-44,0) -- (-35,9); \draw (-44,3) circle (2);
\draw(-44,1.25) node[above]{$1$}; \draw (-47,6) circle (2);
\draw(-47,4.25) node[above]{$2$}; \draw (-41,6) circle (2);
\draw(-41,4.25) node[above]{$3$};
\draw (-31,9) -- (-22,0) -- (-13,9); \draw (-22,3) circle (2);
\draw(-22,1.25) node[above]{$1$}; \draw (-25,6) circle (2);
\draw(-25,4.25) node[above]{$3$}; \draw (-19,6) circle (2);
\draw(-19,4.25) node[above]{$2$};
\draw (-9,9) -- (0,0) -- (9,9); \draw (0,3) circle (2);
\draw(0,1.25) node[above]{$1$}; \draw (3,6) circle (2);
\draw(3,4.25) node[above]{$2$}; \draw (6,9) circle (2);
\draw(6,7.25) node[above]{$3$};
\end{tikzpicture}
\]
\end{center}
\bigskip
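Theorem~\ref{AR_t:appet1} is easy to verify computationally for small $n$. The following Python sketch (illustrative only; the function names are ours) counts involutions via the classical recurrence $I(n) = I(n-1) + (n-1)\,I(n-2)$, and counts all SYT of size $n$ by the corner-removal recursion on the Young lattice.

```python
from functools import lru_cache

def involutions(n):
    # I(n) = I(n-1) + (n-1) * I(n-2), with I(0) = I(1) = 1
    a, b = 1, 1
    for k in range(2, n + 1):
        a, b = b, b + (k - 1) * a
    return b

@lru_cache(maxsize=None)
def f(shape):
    # number of SYT of ordinary shape `shape`, by deleting corner cells
    if not shape:
        return 1
    return sum(
        f(shape[:i] + ((r - 1,) if r > 1 else ()) + shape[i + 1:])
        for i, r in enumerate(shape)
        if i + 1 == len(shape) or shape[i + 1] < r)

def partitions(n, largest=None):
    # all partitions of n, as weakly decreasing tuples
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest
```

Summing $f^\la$ over all $\la \vdash n$ reproduces the involution numbers $1, 1, 2, 4, 10, 26, 76, \ldots$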
The {\dem inversion number} of a permutation $\pi$ is defined by
\[
\inv(\pi):=\#\{i<j \,:\, \pi(i)>\pi(j)\}.
\]
The {\dem left weak order} on $\Sc_n$ is defined by
\[
\pi\le \sigma \Longleftrightarrow
\inv(\pi)+\inv(\sigma\pi^{-1})=\inv(\sigma).
\]
The following surprising result was first proved by
Stanley~\cite{Stanley-words}.
\begin{theorem}\label{AR_t:staircase_balls_thm}
The number of configurations of ${n\choose 2}$ balls
which completely fill $n-1$ levels in the bin is equal to the
number of maximal chains in the weak order on $\Sc_n$.
\end{theorem}
\begin{center}
\[
\begin{tikzpicture}[scale=0.14]
\draw (-13,9) -- (-4,0) -- (5,9); \draw (-4,3) circle (2);
\draw(-4,1.25) node[above]{$1$}; \draw (-7,6) circle (2);
\draw(-7,4.25) node[above]{$2$}; \draw (-1,6) circle (2);
\draw(-1,4.25) node[above]{$3$}; \draw (2,9) circle (2);
\draw(2,7.25) node[above]{$4$}; \draw (-10,9) circle (2);
\draw(-10,7.25) node[above]{$5$}; \draw (-4,9) circle (2);
\draw(-4,7.25) node[above]{$6$};
\end{tikzpicture}
\]
\end{center}
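Theorem~\ref{AR_t:staircase_balls_thm} can likewise be checked by brute force for small $n$ (an illustrative Python sketch with our own naming): $f^\la$ is computed by corner removal, and maximal chains are counted by walking down the left weak order one inversion at a time.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(shape):
    # number of SYT of ordinary shape, by deleting corner cells
    if not shape:
        return 1
    return sum(
        f(shape[:i] + ((r - 1,) if r > 1 else ()) + shape[i + 1:])
        for i, r in enumerate(shape)
        if i + 1 == len(shape) or shape[i + 1] < r)

def inv(p):
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def maximal_chains(n):
    # saturated chains from the identity to the longest element w_0
    # in the left weak order on S_n
    @lru_cache(maxsize=None)
    def chains(p):
        if inv(p) == 0:
            return 1
        total = 0
        for i in range(1, n):
            # left multiplication by s_i swaps the values i and i+1
            q = tuple(i + 1 if x == i else i if x == i + 1 else x for x in p)
            if inv(q) == inv(p) - 1:
                total += chains(q)
        return total
    return chains(tuple(range(n, 0, -1)))
```

For $n = 3$ both counts equal $2$, and for $n = 4$ both equal $16$, the number of SYT of the staircase shape $(3,2,1)$.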
The configurations of balls in a bin are called {\dem standard
Young tableaux}. We shall survey in this chapter results related
to Question~\ref{AR_q:appet} and its refinements. Variants and
extensions of Theorem~\ref{AR_t:appet1} will be described in
Section~\ref{AR_s:JdT}. Variants and extensions of
Theorem~\ref{AR_t:staircase_balls_thm} will be described in
Section~\ref{AR_s:words}.
\subsection{General}
This chapter is devoted to the enumeration of standard Young
tableaux of various shapes, both classical and modern, and to
closely related topics. Of course, there is a limit as to how far
afield one can go. We chose to include here, for instance,
$r$-tableaux and $q$-enumeration, but many interesting related
topics were left out. Here are some of them, with a minimal list
of relevant references for the interested reader: Semi-standard
Young tableaux~\cite{Md}\cite{Stanley_EC2}, (reverse) plane
partitions~\cite{Stanley_EC2}, solid ($3$-dimensional) standard
Young
tableaux~\cite{Zeilberger_solid_SYT},
symplectic and orthogonal tableaux~\cite{King}\cite{DeConcini}%
\cite{Sundaram}\cite{Berele}\cite{Sundaram2},
oscillating tableaux~\cite{MacLarnan}\cite{Sagan90}%
\cite{Roby}\cite{DuluckSagan}\cite{PakPostnikov},
cylindric (and toric) tableaux~\cite{Postnikov}.
\subsection{Acknowledgments}
Many people contributed comments and valuable information to this
chapter. We especially thank Christos Athanasiadis, Tomer Bauer,
Sergi Elizalde, Dominique Foata, Avital Frumkin, Curtis Greene,
Ira Gessel, Christian Krattenthaler, Igor Pak, Arun Ram, Amitai
Regev, Vic Reiner, Dan Romik, Bruce Sagan, Richard Stanley and
Doron Zeilberger.
We used Ryan Reich's
package {\tt ytableau}
for drawing diagrams and tableaux, and
the package {\tt algorithmicx} by J\'{a}nos Sz\'{a}sz for
typesetting algorithms in pseudocode.
\tableofcontents
\section{Preliminaries}\label{AR_s:prelim}
\subsection{Diagrams and tableaux}
\ytableausetup{centertableaux, boxsize = \standardboxsize}
\ytableausetup{nobaseline}
\begin{definition}
A {\dem diagram}
is a finite subset $D$ of the two-dimensional integer lattice
$\bbz^2$.
A point $c = (i,j) \in D$ is also called the {\dem cell} in row
$i$ and column $j$ of $D$; write $\row(c) = i$ and $\col(c) = j$.
Cells are usually drawn as squares with axis-parallel sides of
length $1$, centered at the corresponding lattice points.
\end{definition}
Diagrams will be drawn here according to the ``English notation'',
by which $i$ enumerates rows and increases downwards, while $j$
enumerates columns and increases from left to right:
\[
\ytableausetup{boxsize = 2.5em}
\ytableaushort{{(1,1)}{(1,2)}{(1,3)},{(2,1)}{(2,2)}}
\ytableausetup{boxsize = \standardboxsize}
\]
For alternative conventions see
Subsection~\ref{AR_s:def_classical_shapes}.
\begin{definition}\label{AR_d:order_on_D}
Each diagram $D$ has a natural component-wise partial order,
inherited from $\bbz^2$:
\[
(i, j) \le_D (i', j') \iff i \le i' \hbox{\rm\ and } j \le j'.
\]
As usual, $c <_D c'$ means $c \le_D c'$ but $c \ne c'$.
\end{definition}
\begin{definition}\label{AR_d:SYT}
Let $n := |D|$, and consider the set $[n] := \{1, \ldots, n\}$
with its usual linear order.
A {\dem standard Young tableau (SYT) of shape $D$} is a map $T: D
\to [n]$ which is an order-preserving bijection, namely satisfies
\[
c \ne c' \then T(c) \ne T(c')
\]
as well as
\[
c \le_D c' \then T(c) \le T(c').
\]
\end{definition}
Geometrically, a standard Young tableau $T$ is a filling of the
$n$ cells of $D$ by the numbers $1, \ldots, n$ such that each
number appears once, and numbers increase in each row (as the
column index increases) and in each column (as the row index
increases). Write $\sh(T) = D$. Examples will be given below.
Let $\SYT(D)$ be the set of all standard Young tableaux of shape
$D$, and denote its size by
\[
f^D := |\SYT(D)|.
\]
The evaluation of $f^D$ (and some of its refinements) for various
diagrams $D$ is the main concern of the current chapter.
\subsection{Connectedness and convexity}
We now introduce two distinct notions of connectedness for
diagrams, and one notion of convexity; for another notion of
convexity see Observation~\ref{AR_t:order_convex}.
\begin{definition}
Two distinct cells in $\bbz^2$ are {\dem adjacent} if they share a
horizontal or vertical side; the cells adjacent to $c = (i,j)$ are
$(i \pm 1, j)$ and $(i, j \pm 1)$.
A diagram $D$ is {\dem path-connected} if any two cells in it can
be connected by a {\dem path}, which is a finite sequence of cells
in $D$ such that any two consecutive cells are adjacent. The
maximal path-connected subsets of a nonempty diagram $D$ are its
{\dem path-connected components}.
\end{definition}
For example, the following diagram has two path-connected
components:
\[
\ydiagram{3 + 2, 3 + 3, 3, 1}
\]
\begin{definition}
The {\dem graph} of a diagram $D$ has all the cells of $D$ as
vertices, with two distinct cells $c, c' \in D$ connected by an
(undirected) edge if either $c <_D c'$ or $c' <_D c$. The diagram
$D$ is {\dem order-connected} if its graph is connected. In any
case, the {\dem order-connected components} of $D$ are the subsets
of $D$ forming connected components of its graph.
\end{definition}
For example, the following diagram (in English notation) is
order-connected:
\[
\ydiagram{1, 1 + 1}
\]
while the following diagram has two order-connected components,
with cells marked $1$ and $2$, respectively:
\[
\ytableaushort{\none\none11, \none2, 2, \none2}
\]
Of course, every path-connected diagram is also order-connected,
so that every order-connected component is a disjoint union of
path-connected components.
\begin{observation}\label{AR_t:obs1}
If $D_1, \ldots, D_k$ are the order-connected components of a
diagram $D$, then
\[
f^D = {|D| \choose |D_1|, \ldots, |D_k|} \prod_{i=1}^{k} f^{D_i} =
|D|! \cdot \prod_{i=1}^{k} \frac{f^{D_i}}{|D_i|!}.
\]
\end{observation}
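Observation~\ref{AR_t:obs1} can be confirmed by exhaustive search on small diagrams. The Python sketch below (illustrative only; the function name is ours) counts linear extensions of the component-wise order on an arbitrary set of cells, and is applied to a diagram with two order-connected components.

```python
from functools import lru_cache

def count_syt(cells):
    """Number of SYT of an arbitrary diagram, i.e. linear extensions of
    the component-wise partial order on the given set of cells."""
    @lru_cache(maxsize=None)
    def count(remaining):
        if not remaining:
            return 1
        total = 0
        for c in remaining:
            # the largest remaining label may go in any maximal cell
            if not any(d != c and c[0] <= d[0] and c[1] <= d[1]
                       for d in remaining):
                total += count(remaining - {c})
        return total
    return count(frozenset(cells))

# two order-connected components: a (2,1) shape and a distant row of two
D1 = {(0, 0), (0, 1), (1, 0)}
D2 = {(5, -10), (5, -9)}
```

Here $f^{D_1} = 2$, $f^{D_2} = 1$, and the disjoint union has $\binom{5}{3} \cdot 2 \cdot 1 = 20$ tableaux, as the observation predicts.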
\begin{definition}
A diagram $D$ is {\dem line-convex} if its intersection with every
axis-parallel line is either empty or convex, namely if each of
its rows $\{j \in \bbz \,|\, (i, j) \in D\}$ (for $i \in \bbz$)
and columns $\{i \in \bbz \,|\, (i, j) \in D\}$ (for $j \in \bbz$)
is either empty or an interval $[p,q] = \{p, p+1, \ldots, q\}
\subseteq \bbz$.
\end{definition}
For example, the following diagram is path-connected but not
line-convex:
\[
\ydiagram{2, 1 + 1, 4}
\]
\subsection{Invariance under symmetry}
The number of SYT of shape $D$ is invariant under some of the
geometric operations (isometries of $\bbz^2$) which transform $D$.
It is clearly invariant under arbitrary translations $(i, j)
\mapsto (i + a, j + b)$. The group of isometries of $\bbz^2$ that
fix a point, say $(0,0)$, is the dihedral group of order $8$.
$f^D$ is invariant under a subgroup of order $4$.
\begin{observation}\label{AR_t:invariance}
$f^D$ is invariant under arbitrary translations of $\bbz^2$, as
well as under
\begin{itemize}
\item reflection in a diagonal line: $(i, j) \mapsto (j, i)$ or
$(i, j) \mapsto (-j, -i)$; and \item reflection in the origin
(rotation by $180^\circ$): $(i, j) \mapsto (-i, -j)$.
\end{itemize}
\end{observation}
Note that $f^D$ is not invariant, in general, under reflections in
a vertical or horizontal line ($(i, j) \mapsto (i, -j)$ or $(i,
j) \mapsto (-i, j)$) or rotations by $90^\circ$ ($(i, j) \mapsto
(-j, i)$ or $(i, j) \mapsto (j, -i)$). Thus, for example, each of
the following diagrams, interpreted according to the English
convention (see Subsection~\ref{AR_s:def_classical_shapes}),
\[
\ydiagram{3, 2} \qquad \ydiagram{2, 2, 1} \qquad \ydiagram{1 + 2,
3} \qquad \ydiagram{1 + 1, 2, 2}
\]
has $f^D = 5$, whereas each of the following diagrams
\[
\ydiagram{2, 3} \qquad \ydiagram{1, 2, 2} \qquad \ydiagram{3, 1 +
2} \qquad \ydiagram{2, 2, 1 + 1}
\]
has $f^D = 2$.
\subsection{Ordinary, skew and shifted shapes}\label{AR_s:def_classical_shapes}
The best known and most useful diagrams are, by far, the ordinary
ones. They correspond to partitions.
\begin{definition}
A {\dem partition} is a weakly decreasing sequence of positive
integers: $\la = (\la_1, \ldots, \la_t)$, where $t \ge 0$ and
$\la_1 \ge \ldots \ge \la_t >0$. We say that $\la$ is a partition
of {\dem size} $n = |\la| := \sum_{i=1}^t \la_i$ and {\dem length}
$\ell(\la) := t$, and write $\la \vdash n$. The empty partition
$\la = ()$ has size and length both equal to zero.
\end{definition}
\begin{definition}\label{AR_d:ordinary_diagram}
Let $\la = (\la_1, \ldots, \la_t)$ be a partition. The {\dem
ordinary} (or {\dem straight}, or {\dem left-justified}, or {\dem
Young}, or {\dem Ferrers}) {\dem diagram of shape $\la$} is the
set
\[
D = [\la] := \{(i,j) \,|\, 1\le i \le t,\, 1 \le j \le \la_i \}.
\]
We say that $[\la]$ is a diagram of {\dem height} $\ell(\la) = t$.
\end{definition}
We shall adopt here the ``English'' convention for drawing
diagrams, by which row indices increase from top to bottom and
column indices increase from left to right. For example, in this
notation the diagram of shape $\la = (4,3,1)$ is
\[
[\la] =
\ydiagram{4,3,1} \qquad \hbox{(English notation).}
\]
An alternative convention is the ``French'' one, by which row
indices increase from bottom to top (and column indices increase
from left to right):
\[
[\la] =
\ydiagram{1,3,4} \qquad \hbox{(French notation).}
\]
Note that the term ``Young tableau'' itself mixes English and
French influences. There is also a ``Russian'' convention, rotated
$45^\circ$:
\[
[\la] =
\begin{tikzpicture}[scale=0.283]
\draw (0,-2) -- (-1,-1) -- (-2,0) -- (-3,1) -- (-4,2); \draw
(1,-1) -- (0,0) -- (-1,1) -- (-2,2) -- (-3,3); \draw (2,0) --
(1,1) -- (0,2) -- (-1,3); \draw (3,1) -- (2,2);
\draw (0,-2) -- (1,-1) -- (2,0) -- (3,1); \draw (-1,-1) -- (0,0)
-- (1,1) -- (2,2); \draw (-2,0) -- (-1,1) -- (0,2); \draw (-3,1)
-- (-2,2) -- (-1,3); \draw (-4,2) -- (-3,3);
\end{tikzpicture}
\qquad \hbox{(Russian notation).}
\]
This notation leads naturally to the ``gravitational'' setting
used to introduce SYT at the beginning of
Section~\ref{AR_s:introduction}.
A partition $\la$ may also be described as an infinite sequence,
by adding trailing zeros: $\la_i := 0$ for $i > t$. The partition
$\la'$ {\dem conjugate} to $\la$ is then defined by
\[
\la'_j := |\{i \,|\, \la_i \ge j\}| \qquad (\forall j \ge 1).
\]
The diagram $[\la']$ is obtained from the diagram $[\la]$ by
interchanging rows and columns. For the example above, $\la' =
(3,2,2,1)$ and
\[
[\la'] =
\ydiagram{3,2,2,1}
\]
An ordinary diagram is clearly path-connected and line-convex. If
$D = [\la]$ is an ordinary diagram of shape $\la$ we shall
sometimes write $\SYT(\la)$ instead of $\SYT(D)$ and $f^\la$
instead of $f^D$.
\begin{example}\label{AR_ex:ordinary_diagram}
\[
T =
\ytableaushort{1258, 346, 7} \,\in \SYT(4,3,1).
\]
\end{example}
Note that, by Observation~\ref{AR_t:invariance}, $f^\la =
f^{\la'}$.
\begin{definition}\label{AR_d:shifted_diagram}
If $\la$ and $\mu$ are partitions such that $[\mu] \subseteq
[\la]$, namely $\mu_i \le \la_i$ ($\forall i$), then the {\dem
skew diagram of shape $\la/\mu$} is the set difference
\[
D = [\la/\mu] := [\la] \setminus [\mu] = \{(i,j)\in [\la] \ :\,
\mu_i + 1 \le j \le \la_i\}
\]
of two ordinary shapes.
\end{definition}
For example,
\[
[(6,4,3,1) / (4,2,1)] =
\ydiagram{4 + 2, 2 + 2, 1 + 2, 1}
\]
A skew diagram is line-convex, but not necessarily path-connected.
In fact, its path-connected components coincide with its
order-connected components. If $D = [\la/\mu]$ is a skew diagram
of shape $\la/\mu$ we shall sometimes write $\SYT(\la/\mu)$
instead of $\SYT(D)$ and $f^{\la/\mu}$ instead of $f^D$. For
example,
\[
T =
\ytableaushort{\none\none\none\none14, \none\none37, \none56, 2}
\,\in \SYT((6,4,3,1) / (4,2,1)).
\]
Skew diagrams have an intrinsic characterization.
\begin{observation}\label{AR_t:order_convex}
A diagram $D$ is skew if and only if it is {\dem order-convex},
namely:
\[
c,c'' \in D,\, c' \in \bbz^2,\, c \le c' \le c'' \then c' \in D,
\]
where $\le$ is the natural partial order in $\bbz^2$, as in
Definition~\ref{AR_d:order_on_D}.
\end{observation}
Another important class is that of shifted shapes, corresponding
to strict partitions.
\begin{definition}
A partition $\la = (\la_1, \ldots, \la_t)$ $(t \ge 0)$ is {\dem
strict} if the part sizes $\la_i$ are strictly decreasing: $\la_1
> \ldots > \la_t >0$.
The {\dem shifted diagram of shape $\la$} is the set
\[
D = [\la^*] := \{(i,j) \,|\, 1\le i \le t,\, i \le j \le \la_i + i
-1\}.
\]
Note that $(\la_i + i - 1)_{i = 1}^{t}$ is a weakly decreasing
sequence of positive integers.
\end{definition}
For example, the shifted diagram of shape $\la = (4,3,1)$ is
\[
[\la^*] =
\ydiagram{4, 1 + 3, 2 + 1}
\]
A shifted diagram is always path-connected and line-convex. If $D
= [\la^*]$ is a shifted diagram of shape $\la$ we shall sometimes
write $\SYT(\la^*)$ instead of $\SYT(D)$ and $g^\la$ instead of
$f^D$. For example,
\[
T =
\ytableaushort{1246, \none358, \none\none7} \,\in \SYT((4,3,1)^*).
\]
\subsection{Interpretations}\label{AR_s:interpretations}
There are various interpretations of a standard Young tableau,
in addition to the interpretation (in Definition~\ref{AR_d:SYT})
as a linear extension of a partial order. Some of these
interpretations play a key role in enumeration.
\subsubsection{The Young lattice}
A standard Young tableau of ordinary shape describes a {\dem
growth process} of diagrams of ordinary shapes, starting from the
empty shape. For example, the tableau $T$ in
Example~\ref{AR_ex:ordinary_diagram} corresponds to the process
\[
\, \to \, \ydiagram{1} \, \to \, \ydiagram{2} \, \to \,
\ydiagram{2,1} \, \to \, \ydiagram{2,2} \, \to \, \ydiagram{3,2}
\, \to \, \ydiagram{3,3} \, \to \, \ydiagram{3,3,1} \, \to \,
\ydiagram{4,3,1}
\]
Consider the {\dem Young lattice} whose elements are all
partitions, ordered by inclusion (of the corresponding diagrams).
By the above, a SYT of ordinary shape $\la$ is a maximal chain, in
the Young lattice, from the empty partition to $\la$. The number
of such maximal chains is therefore $f^\la$. More generally, a SYT
of skew shape $\la/\mu$ is a maximal chain from $\mu$ to $\la$ in
the Young lattice.
A SYT of shifted shape can be similarly interpreted as a maximal
chain in the {\dem shifted Young lattice}, whose elements are
strict partitions ordered by inclusion.
\subsubsection{Ballot sequences and lattice paths}\label{AR_s:lattice_paths}
\begin{definition}\label{AR_d:ballot_seq}
A sequence $(a_1, \ldots, a_n)$ of positive integers is a {\dem
ballot sequence}, or {\dem lattice permutation}, if for any
integers $1 \le k \le n$ and $r \ge 1$,
\[
\# \{1 \le i \le k \,|\, a_i = r \} \ge \# \{1 \le i \le k \,|\,
a_i = r+1 \},
\]
namely: in any initial subsequence $(a_1, \ldots, a_k)$, the
number of entries equal to $r$ is not less than the number of
entries equal to $r+1$.
\end{definition}
A ballot sequence describes the sequence of votes in an election
process with several candidates (and one ballot), assuming that at
any given time candidate $1$ has at least as many votes as
candidate $2$, who has at least as many votes as candidate $3$,
etc. For example, $(1,1,2,3,2,1,4,2,3)$ is a ballot sequence for
an election process with $9$ voters and $4$ candidates.
For a partition $\la$ of $n$, denote by $\BS(\la)$ the set of
ballot sequences $(a_1, \ldots, a_n)$ with $\# \{i \,|\, a_i = r\}
= \la_r$ $(\forall r)$.
\begin{observation}
The map $\phi : \SYT(\la) \to \BS(\la)$ defined by
\[
\phi(T)_i := \row(T^{-1}(i)) \qquad (1 \le i \le n)
\]
is a bijection.
\end{observation}
For example, if $T$ is the SYT in Example~\ref{AR_ex:ordinary_diagram}
then $\phi(T) = (1,1,2,2,1,2,3,1)$.
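In code (an illustrative Python sketch; the function names are ours), the bijection $\phi$ and its inverse are immediate:

```python
def to_ballot(rows):
    # phi: send a SYT (given as a list of rows) to the sequence whose
    # i-th entry is the row containing the letter i
    seq = [0] * sum(len(r) for r in rows)
    for r, row in enumerate(rows, start=1):
        for letter in row:
            seq[letter - 1] = r
    return seq

def from_ballot(seq):
    # the inverse map: append the letter i to the end of row seq[i-1]
    rows = []
    for letter, r in enumerate(seq, start=1):
        while len(rows) < r:
            rows.append([])
        rows[r - 1].append(letter)
    return rows

# the SYT T from the example above, given row by row
T = [[1, 2, 5, 8], [3, 4, 6], [7]]
```

Applying `to_ballot` to $T$ indeed yields $(1,1,2,2,1,2,3,1)$, and `from_ballot` recovers $T$.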
\medskip
Clearly, a ballot sequence $a = (a_1, \ldots, a_n) \in \BS(\la)$
with maximal entry $t$ corresponds to a {\dem lattice path} in
$\bbr^t$, from the origin $0$ to the point $\la$, where in step
$i$ of the path coordinate $a_i$ of the point increases by $1$. In
fact, $\BS(\la)$ is in bijection with the set of all lattice paths
from $0$ to $\la$ which lie entirely in the cone
\[
\{(x_1, \ldots, x_t) \in \bbr^t \,|\, x_1 \ge \ldots \ge x_t \ge 0
\}.
\]
A SYT of skew shape $\la/\mu$ corresponds to a lattice path in
this cone from $\mu$ to $\la$.
A SYT of shifted shape corresponds to
a {\dem strict} ballot sequence, describing a lattice path within
the cone
\[
\{(x_1, \ldots, x_t) \in \bbr^t \,|\, x_1 > \ldots > x_s > x_{s+1}
= \ldots = x_t = 0 \hbox{ for some } 0 \le s \le t \}.
\]
\subsubsection{The order polytope}\label{AR_s:order_polytope}
Using the partial order on a diagram $D$, as in
Definition~\ref{AR_d:order_on_D}, one can define the corresponding
{\dem order polytope}
\[
P(D) := \{ f: D \to [0,1] \,|\, c \le_D c' \then f(c) \le f(c')\,
(\forall c,\, c' \in D)\}.
\]
The order polytope is a closed convex subset of the unit cube
$[0,1]^D$. Denoting $n := |D|$, each linear extension $c_1 <
\ldots < c_n$ of the partial order $\le_D$ corresponds to a
simplex
\[
\{ f: D \to [0,1] \,|\, f(c_1) \le \ldots \le f(c_n) \}
\]
of volume $1/n!$ inside $P(D)$. The union of these simplices is
$P(D)$, and this union is almost disjoint: Any intersection of two
or more simplices is contained in a hyperplane, and thus has
volume $0$. Each simplex, or linear extension, corresponds to a
SYT of shape $D$. Therefore
\begin{observation}\label{AR_t:vol_order_polytope}
If $P(D)$ is the order polytope of a diagram $D$ then
\[
\vol P(D) = \frac{f^D}{|D|!}.
\]
\end{observation}
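Observation~\ref{AR_t:vol_order_polytope} can be illustrated numerically by a Monte Carlo sketch (illustrative only; names and sample size are ours): for the shape $(2,2)$ we have $f^D = 2$ and $|D|! = 24$, so the order polytope should have volume $1/12 \approx 0.083$.

```python
import random

def volume_estimate(cells, samples=200_000, seed=0):
    # Monte Carlo estimate of vol P(D): the fraction of uniform random
    # points of [0,1]^D that respect the component-wise order
    rng = random.Random(seed)
    cells = list(cells)
    pairs = [(a, b) for a in cells for b in cells
             if a != b and a[0] <= b[0] and a[1] <= b[1]]
    hits = 0
    for _ in range(samples):
        f = {c: rng.random() for c in cells}
        if all(f[a] <= f[b] for a, b in pairs):
            hits += 1
    return hits / samples

# the ordinary shape (2,2), as a set of cells
est = volume_estimate([(1, 1), (1, 2), (2, 1), (2, 2)])
```

With a fixed seed and $2 \cdot 10^5$ samples the estimate lands well within sampling error of $1/12$.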
\subsubsection{Other interpretations}
The number of SYT of ordinary shape can be interpreted as a
coefficient in a power series, or as the constant term in a
Laurent series; see Remark~\ref{AR_r:DZ_coeff}.
SYT of certain ordinary, skew and shifted shapes may be
interpreted as {\dem reduced words} for suitable elements in the
symmetric group. This interpretation will be developed and
explained in Section~\ref{AR_s:words}.
SYT may be interpreted as {\dem permutations}; $f^\la$ then
measures the size of certain subsets of the symmetric group, such
as descent classes, sets of involutions, Knuth classes and pattern
avoidance
classes. Examples will be given in Sections~\ref{AR_s:JdT}
and~\ref{AR_s:q}.
The Young lattice has a vast generalization to the concept of {\dem differential
poset}~\cite{Stanley_differential}\cite{Gessel}.
There are other deep algebraic and geometric interpretations. The
interested reader is encouraged to start with the beautiful
survey~\cite{Barcelo-Ram} and the excellent textbooks~\cite{JK,
Stanley_EC2, Sagan_book, Fulton, Md}. In the current survey we
will focus on combinatorial aspects and mostly ignore the
algebraic and geometric approaches.
\subsection{Miscellanea}
The concepts to be defined here are not directly related to
standard Young tableaux, but will be used later in this survey.
A {\dem composition} of a nonnegative integer $n$ is a sequence
$(\mu_1, \ldots, \mu_t)$ of positive integers such that $\mu_1 +
\ldots + \mu_t = n$. The components $\mu_i$ are {\em not} required
to be weakly decreasing; in fact, every composition may be
re-ordered to form a (unique) partition. The integer $n = 0$ has a
unique (empty) composition.
A permutation $\sigma \in \Sc_n$ {\dem avoids a pattern} $\pi \in
\Sc_k$ if the sequence $(\sigma(1), \ldots, \sigma(n))$ does not
contain a subsequence $(\sigma(t_1), \ldots, \sigma(t_k))$ (with
$t_1 < \ldots < t_k$) which is order-isomorphic to $\pi$, namely:
$\sigma(t_i) < \sigma(t_j) \iff \pi(i) < \pi(j)$. For example,
$21354 \in \Sc_5$ is $312$-avoiding, but $52134$ is not (since
$523$ is order-isomorphic to $312$).
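Pattern containment is easy to test directly by checking all $k$-element subsequences (an illustrative Python sketch; the function name is ours):

```python
from itertools import combinations

def avoids(sigma, pi):
    # True iff sigma contains no subsequence order-isomorphic to pi
    k = len(pi)
    for idx in combinations(range(len(sigma)), k):
        sub = [sigma[i] for i in idx]
        if all((sub[a] < sub[b]) == (pi[a] < pi[b])
               for a in range(k) for b in range(a + 1, k)):
            return False
    return True
```

This confirms the example: $21354$ avoids $312$, while $52134$ does not.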
\section{Formulas for thin shapes}\label{AR_s:thin}
\subsection{Hook shapes}
A {\dem hook shape} is an ordinary shape which is the union of one
row and one column. For example,
\[
[(6,1^3)] =
\ydiagram{6, 1, 1, 1}
\]
One of the simplest enumerative formulas is the following.
\begin{observation}\label{hooks}
For every $n \ge 1$ and $0 \le k \le n-1$,
\[
f^{(n-k, 1^k)} = {n-1 \choose k}.
\]
\end{observation}
\begin{proof}
The letter $1$ must be in the corner cell. The SYT is uniquely
determined by the choice of the other $k$ letters in the first
column.
\end{proof}
Note that, in a hook shape $(n+1-k,1^k)$, the letter $n+1$ must be
in the last cell of either the first row or the first column. Thus
\[
f^{(n+1-k,1^k)}=f^{(n-k,1^k)}+f^{(n+1-k,1^{k-1})}.
\]
By Observation~\ref{hooks}, this is equivalent to Pascal's
identity
\[
{n \choose k}={n-1 \choose k}+{n-1 \choose k-1} \qquad (1 \le k
\le n-1).
\]
\begin{observation}
The total number of hook shaped SYT of size $n$ is $2^{n-1}$.
\end{observation}
\begin{proof}
There is a bijection between hook shaped SYT of size $n$ and
subsets of $\{1, \ldots, n\}$ containing $1$: Assign to each SYT
the set of entries in its first row.
Alternatively, a hook shaped SYT of size $n \ge 2$ is uniquely
constructed by adding a cell containing $n$ at the end of either
the first row or the first column of a hook shaped SYT of size
$n-1$, thus recursively multiplying the number of SYT by $2$.
Of course, the claim also follows from Observation~\ref{hooks}.
\end{proof}
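Both observations are easy to confirm by machine. The following Python sketch (our own code) counts SYT of an arbitrary ordinary shape by the recursion used above: the largest entry must occupy a corner cell.

```python
from math import comb

def num_syt(shape):
    """Count SYT of an ordinary shape (a weakly decreasing tuple) by
    deleting the corner cell that must contain the largest entry."""
    shape = tuple(r for r in shape if r > 0)
    if not shape:
        return 1
    return sum(num_syt(shape[:i] + (r - 1,) + shape[i + 1:])
               for i, r in enumerate(shape)
               if i + 1 == len(shape) or shape[i + 1] < r)

n = 7
hooks = [num_syt((n - k,) + (1,) * k) for k in range(n)]
assert hooks == [comb(n - 1, k) for k in range(n)]   # f^{(n-k,1^k)} = C(n-1,k)
assert sum(hooks) == 2 ** (n - 1)                    # 2^{n-1} hook SYT in total
```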
\subsection{Two-rowed shapes}
Consider now ordinary shapes with at most two rows.
\begin{proposition}\label{two-rows}
For every $n \ge 0$ and $0\le k\le n/2$
\[
f^{(n-k,k)}={n \choose k}-{n \choose k-1},
\]
where ${n \choose -1} = 0$ by convention. In particular,
\[
f^{(m,m)} = f^{(m,m-1)} = C_m = \frac{1}{m+1}{2m \choose m},
\]
the $m$-th Catalan number.
\end{proposition}
\begin{proof}
We shall outline two proofs, one by induction and one
combinatorial.
For a proof by induction on $n$ note first that $f^{(0,0)} =
f^{(1,0)} =1$.
If $0 < k< n/2$ then there are two options for the location of the
letter $n$ -- at the end of the first row or at the end of the
second. Hence
\[
f^{(n-k,k)} = f^{(n-k-1,k)} + f^{(n-k,k-1)} \qquad (0 < k < n/2).
\]
Thus, by the induction hypothesis and Pascal's identity,
\[
f^{(n-k,k)} = {n-1 \choose k} - {n-1 \choose k-1} + {n-1 \choose
k-1} - {n-1 \choose k-2} = {n-1 \choose k} - {n-1 \choose k-2} =
{n \choose k} - {n \choose k-1}.
\]
The cases $k = 0$ and $k = n/2$ are left to the reader.
For a combinatorial proof, recall (from
Subsection~\ref{AR_s:lattice_paths}) the lattice path
interpretation of a SYT and use Andr\'{e}'s reflection trick:
A SYT of shape $(n-k,k)$ corresponds to a lattice path from
$(0,0)$ to $(n-k,k)$ which stays within the cone $\{(x,y) \in
\bbr^2 \,|\, x \ge y \ge 0\}$, namely does not touch the line $y =
x+1$. The number of {\em all} lattice paths from $(0,0)$ to
$(n-k,k)$ is ${n \choose k}$. If such a path touches the line $y =
x + 1$, reflect its ``tail'' (starting from the first touch point)
in this line to get a path from $(0,0)$ to the reflected endpoint
$(k-1, n-k+1)$. The reflection defines a bijection between all the
``touching'' paths to $(n-k,k)$ and all the (necessarily
``touching'') paths to $(k-1,n-k+1)$, whose number is clearly ${n
\choose k-1}$.
\end{proof}
\begin{corollary}\label{total-two-rows}
The total number of SYT of size $n$ and at most $2$ rows
is ${n \choose \lfloor n/2 \rfloor}$.
\end{corollary}
\begin{proof} By Proposition~\ref{two-rows},
\[
\sum_{k=0}^{\lfloor n/2 \rfloor} f^{(n-k,k)} = \sum_{k=0}^{\lfloor
n/2 \rfloor} \left( {n \choose k} - {n \choose k-1} \right) = {n
\choose {\lfloor n/2 \rfloor}}.
\]
\end{proof}
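Both statements can be verified directly. In the Python sketch below (our own code), the brute-force count enumerates the lattice paths of the combinatorial proof, i.e.\ paths staying in the cone $x \ge y \ge 0$:

```python
from math import comb

def f_two_rows(n, k):
    """The proposition: f^{(n-k,k)} = C(n,k) - C(n,k-1), with C(n,-1) := 0."""
    return comb(n, k) - (comb(n, k - 1) if k >= 1 else 0)

def paths(a, b, n, k):
    """Lattice paths from (a, b) to (n-k, k) staying in x >= y >= 0."""
    if (a, b) == (n - k, k):
        return 1
    total = 0
    if a < n - k:
        total += paths(a + 1, b, n, k)
    if b < k and b + 1 <= a:
        total += paths(a, b + 1, n, k)
    return total

catalan = lambda m: comb(2 * m, m) // (m + 1)
for n in range(9):
    for k in range(n // 2 + 1):
        assert paths(0, 0, n, k) == f_two_rows(n, k)
    # the corollary: summing over k gives the central binomial coefficient
    assert sum(f_two_rows(n, k) for k in range(n // 2 + 1)) == comb(n, n // 2)
for m in range(1, 6):
    assert f_two_rows(2 * m, m) == f_two_rows(2 * m - 1, m - 1) == catalan(m)
```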
\subsection{Zigzag shapes}\label{AR_s:zigzag_sum}
A {\dem zigzag shape}
is a path-connected skew shape which does not contain a $2\times
2$ square. For example, every hook shape is zigzag. Here is an
example of a zigzag shape of size $11$:
\[
\ydiagram{3 + 3, 3 + 1, 1 + 3, 1 + 1, 1 + 1, 2}
\]
The number of SYT of a specific zigzag shape has an interesting
formula, to be presented in Subsection~\ref{AR_s:zigzag}. The
total number of SYT of given size and various zigzag shapes is
given by the following folklore statement, to be refined later
(Proposition~\ref{t.zigzag-descent class}).
\begin{proposition}\label{total-zigzag}
The total number of zigzag shaped SYT of size $n$ is $n!$.
\end{proposition}
\begin{proof}
Define a map from zigzag shaped SYT of size $n$ to permutations in
$\Sc_n$ by simply listing the entries of the SYT, starting from
the SW corner and moving along the shape. This map is a bijection,
since an obvious inverse map builds a SYT from a permutation
$\sigma = (\sigma_1, \ldots, \sigma_n) \in \Sc_n$ by attaching a
cell containing $\sigma_{i+1}$ to the right of the cell containing
$\sigma_i$ if $\sigma_{i+1} > \sigma_i$, and above this cell
otherwise.
\end{proof}
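The bijection invites experimentation. In the following Python sketch (our own encoding), the shape of the zigzag SYT associated to a permutation is recorded by its ascent pattern (a cell attached to the right on an ascent, above on a descent), and the entries are the permutation itself; as a byproduct one sees that there are $2^{n-1}$ zigzag shapes of size $n$:

```python
from itertools import permutations
from collections import Counter

def ascent_pattern(sigma):
    """Which steps attach the next cell to the right (True) or above (False)."""
    return tuple(sigma[i] < sigma[i + 1] for i in range(len(sigma) - 1))

n = 5
shapes = Counter(ascent_pattern(s) for s in permutations(range(1, n + 1)))
assert sum(shapes.values()) == 120      # n! zigzag SYT of size n in total
assert len(shapes) == 2 ** (n - 1)      # every zigzag shape of size n occurs
```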
\section{Jeu de taquin and the RS correspondence}\label{AR_s:JdT}
\subsection{Jeu de taquin}
Jeu de taquin is a very powerful combinatorial algorithm,
introduced by Sch\"utzenberger~\cite{Schutzenberger}. It provides
a unified approach to many enumerative results. In general, it
transforms a SYT of skew shape into some other SYT of skew shape,
using a sequence of {\dem slides}. We shall describe here a
version of it, using only {\dem forward slides}, which transforms
a SYT of skew shape into a (unique) SYT of ordinary shape. Our
description follows \cite{Sagan_book}.
\begin{definition}
Let $D$ be a nonempty diagram of skew shape. An {\bf inner corner}
for $D$ is a cell $c \not\in D $ such that
\begin{enumerate}
\item $D \cup \{c\}$ is a skew shape, and \item there exists a
cell $c' \in D$ such that $c \le c'$ (in the natural partial order
of $\bbz^2$, as in Definition~\ref{AR_d:order_on_D}).
\end{enumerate}
\end{definition}
\begin{example}
Here is a skew shape with marked inner corners:
\[
\ytableaushort{\none\none\none\none\none\none\none{\none[\bullet]},
\none, \none\none\none\none{\none[\bullet]},
\none\none\none{\none[\bullet]}, {\none[\bullet]}}
* {0, 5+3, 5+2, 4+2, 1+2}
\]
\end{example}
Here is the main jeu de taquin procedure:
\medskip
\begin{algorithmic}[1]
\Require{$T$, a SYT of arbitrary skew shape.} \Ensure{$T'$, a SYT
of ordinary shape.} \Statex \Procedure{JdT}{$T$} \State $D \gets
\sh(T)$ \State Choose a cell $c_0 = (i_0, j_0)$ such that $D
\subseteq (c_0)_+ := \{(i, j) \in \bbz^2 : i \ge i_0,\, j \ge
j_0\}$ \While{$c_0 \not\in D$}
\State Choose $c = (i, j) \in (c_0)_+ \setminus D$ which is an inner corner for $D$
\State $T \gets$ \Call{ForwardSlide}{$T, c$}
\State $D \gets \sh(T)$
\EndWhile \State \textbf{return} $T$ \Comment Now $c_0 \in D
\subseteq (c_0)_+$, so $D$ has ordinary shape \EndProcedure
\end{algorithmic}
\medskip
and here is the procedure $\textsc{ForwardSlide}$:
\medskip
\begin{algorithmic}[1]
\Require{$(T_{in}, c_{in})$, where $T_{in}$ is a SYT of skew shape
$D_{in}$ and $c_{in}$ is an inner corner for $D_{in}$.}
\Ensure{$T_{out}$, a SYT of skew shape $D_{out} = D_{in} \cup
\{c_{in}\} \setminus \{c'\}$ for some $c' \in D_{in}$.} \Statex
\Procedure{ForwardSlide}{$T_{in}, c_{in}$} \State $T \gets
T_{in}$, $c \gets c_{in}$ \State $D \gets \sh(T)$ \If{$c = (i,
j)$}
\State $c_1 \gets (i+1, j)$
\State $c_2 \gets (i, j+1)$
\EndIf \While{at least one of $c_1$ and $c_2$ is in $D$}
\State $c' \gets \begin{cases}
c_1, & \text{if $c_1 \in D$ but $c_2 \not\in D$, or $c_1, c_2 \in D$ and $T(c_1) < T(c_2)$} \\
c_2, & \text{if $c_2 \in D$ but $c_1 \not\in D$, or $c_1, c_2 \in D$ and $T(c_2) < T(c_1)$}
\end{cases}$
\State $D' \gets D \cup \{c\} \setminus \{c'\}$ \Comment{$c \not\in D$, $c' \in D$}
\State Define $T' \in \SYT(D')$ by: $T' = T$ on $D \setminus \{c'\}$ and $T'(c) := T(c')$
\State $D \gets D'$, $T \gets T'$, $c \gets c'$
\If{$c = (i, j)$}
\State $c_1 \gets (i+1, j)$
\State $c_2 \gets (i, j+1)$
\EndIf
\EndWhile \State \textbf{return} $T$ \EndProcedure
\end{algorithmic}
\medskip
The $\JdT$ algorithm employs certain arbitrary choices, but actually
\begin{proposition}{\rm \cite{Schutzenberger, Thomas1974, Thomas1977}}
For any SYT $T$ of skew shape, the resulting SYT $\JdT(T)$ of
ordinary shape is independent of the choices made during the
computation.
\end{proposition}
\begin{example}
Here is an example of a forward slide, with the initial $c_{in}$
and the intermediate cells $c$ marked:
\[
T_{in} = \, \ytableaushort{\none {\none[\bullet]} 3 6, \none 1 4
7, 2 5 8} \,\to\, \ytableaushort{\none 1 3 6, \none
{\none[\bullet]} 4 7, 2 5 8} \,\to\, \ytableaushort{\none 1 3 6,
\none 4 {\none[\bullet]} 7, 2 5 8} \,\to\, \ytableaushort{\none 1
3 6, \none 4 7 {\none[\bullet]}, 2 5 8} \, = T_{out} \, ,
\]
and here is an example of a full jeu de taquin (where each step is
a forward slide):
\[
T = \, \ytableaushort{\none {\none[\bullet]} 3 6, \none 1 4 7, 2 5
8} \,\to\, \ytableaushort{\none 1 3 6, {\none[\bullet]} 4 7, 2 5
8} \,\to\, \ytableaushort{{\none[\bullet]} 1 3 6, 2 4 7, 5 8}
\,\to\, \ytableaushort{1 3 6, 2 4 7, 5 8} \,= \JdT(T).
\]
\end{example}
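The two procedures translate almost verbatim into code. In the Python sketch below (our own representation: a tableau is a dict from 0-indexed (row, column) pairs to entries, with no empty rows in the input), the inner corner is always taken at the deepest removable corner of the inner shape; by the proposition above, this choice does not affect the result.

```python
def forward_slide(T, c):
    """One forward slide: the hole at inner corner c travels until it
    exits; returns the new tableau and the removed (outer) cell."""
    T = dict(T)
    while True:
        below, right = (c[0] + 1, c[1]), (c[0], c[1] + 1)
        inside = [d for d in (below, right) if d in T]
        if not inside:
            return T, c
        d = min(inside, key=T.get)      # the smaller neighbor slides in
        T[c] = T.pop(d)
        c = d

def jdt(T):
    """Rectify a skew SYT to ordinary shape, justified at (0, 0)."""
    T = dict(T)
    t = 1 + max(i for i, _ in T)
    mu = [min(j for (i, j) in T if i == r) for r in range(t)]
    lam = [1 + max(j for (i, j) in T if i == r) for r in range(t)]
    while any(mu):
        # inner corner: the deepest removable corner of the inner shape mu
        r = max(i for i in range(t)
                if mu[i] and (i + 1 == t or mu[i] > mu[i + 1]))
        T, (re, ce) = forward_slide(T, (r, mu[r] - 1))
        mu[r] -= 1
        lam[re] -= 1
        while lam and lam[-1] == mu[-1]:    # drop emptied bottom rows
            lam.pop(); mu.pop(); t -= 1
    return T

# the running example: JdT of the skew SYT with rows . . 3 6 / . 1 4 7 / 2 5 8
T = {(0, 2): 3, (0, 3): 6, (1, 1): 1, (1, 2): 4, (1, 3): 7,
     (2, 0): 2, (2, 1): 5, (2, 2): 8}
assert jdt(T) == {(0, 0): 1, (0, 1): 3, (0, 2): 6,
                  (1, 0): 2, (1, 1): 4, (1, 2): 7,
                  (2, 0): 5, (2, 1): 8}
```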
\subsection{The Robinson-Schensted correspondence}\label{AR_s:RSK}
The Robinson-Schensted (RS) correspondence is a bijection from
permutations in $\Sc_n$ to pairs of SYT of size $n$ and same
ordinary shape. Its original motivation was the study of the
distribution of longest increasing subsequences in a permutation.
For a detailed description see, e.g., the
textbooks~\cite{Stanley_EC2}, \cite{Sagan_book} and \cite{Fulton}. We shall
use the jeu de taquin algorithm to give an alternative
description.
\begin{definition}
Denote $\delta_n := [(n,n-1,n-2,\dots,1)]$. For a permutation
$\pi\in \Sc_n$ let $T_\pi$ be the skew SYT of antidiagonal shape
$\delta_n/\delta_{n-1}$ in which the entry in the $i$-th column
from the left is $\pi(i)$.
\end{definition}
\begin{example}
\[
\pi = 53412 \quad \then \quad T_\pi =
\ytableaushort{\none\none\none\none 2, \none\none\none 1,
\none\none 4, \none 3, 5}.
\]
\end{example}
\begin{definition} {\rm (The Robinson-Schensted (RS) correspondence)}
For a permutation $\pi\in \Sc_n$ let
\[
P_\pi := \JdT(T_\pi) \quad \text{and} \quad Q_\pi :=
\JdT(T_{\pi^{-1}}).
\]
\end{definition}
\begin{example}
\[
\pi = 2413
\quad \then \quad T_\pi =
\ytableaushort{\none\none\none 3, \none\none 1, \none 4, 2} \quad
, \quad T_{\pi^{-1}} =
\ytableaushort{\none\none\none 2, \none\none 4, \none 1, 3}.
\]
Then
\[
T_\pi = \,
\ytableaushort{\none\none\none 3, \none{\none[\bullet]} 1, \none
4, 2} \,\to\, \ytableaushort{\none\none{\none[\bullet]} 3, \none
1, \none 4, 2} \,\to\, \ytableaushort{\none\none 3,\none 1,
{\none[\bullet]} 4, 2} \,\to\, \ytableaushort{\none\none 3,
{\none[\bullet]} 1, 24} \,\to\,
\ytableaushort{\none{\none[\bullet]} 3, 14, 2} \,\to\,
\ytableaushort{{\none[\bullet]} 3, 14, 2} \,\to\,
\ytableaushort{13, 24} \, = P_\pi
\]
and
\[
T_{\pi^{-1}} = \,
\ytableaushort{\none\none{\none[\bullet]} 2, \none\none 4, \none
1, 3} \,\to\, \ytableaushort{\none\none 2, \none\none 4,
{\none[\bullet]} 1, 3} \,\to\, \ytableaushort{\none\none 2,
\none{\none[\bullet]} 4, 1, 3} \,\to\, \ytableaushort{\none\none
2, {\none[\bullet]} 4, 1, 3} \,\to\,
\ytableaushort{\none{\none[\bullet]} 2, 14, 3} \,\to\,
\ytableaushort{{\none[\bullet]} 2, 14, 3} \,\to\,
\ytableaushort{12, 34} \, = Q_\pi.
\]
\end{example}
\begin{theorem}\label{AR_t:RSK_bijection}
The RS correspondence is a bijection from all permutations in
$\Sc_n$ to all pairs of SYT of size $n$ and the same shape.
\end{theorem}
Thus
\begin{claim}\label{AR_t:RSK_properties}
For every permutation $\pi\in \Sc_n$,
\begin{itemize}
\item[(i)] $\sh(P_\pi)=\sh(Q_\pi)$.
\item[(ii)] $\pi \leftrightarrow (P, Q) \,\then\, \pi^{-1}
\leftrightarrow (Q, P)$.
\end{itemize}
\end{claim}
A very fundamental property of the RS correspondence is the
following.
\begin{proposition}\label{AR_t:Schensted}{\rm \cite{Schensted}}
The height of the common shape $\sh(\pi) := \sh(P_\pi) = \sh(Q_\pi)$
is equal to the length of the longest decreasing subsequence in
$\pi$. The width of $\sh(\pi)$ is equal to the length of the longest
increasing subsequence in $\pi$.
\end{proposition}
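For computations, the RS correspondence is usually implemented by Schensted's row-insertion algorithm, which is known to produce the same pair $(P_\pi, Q_\pi)$ as the jeu de taquin description above. The Python sketch below (our own code) checks the worked example, the symmetry $\pi^{-1} \leftrightarrow (Q, P)$, and the proposition above:

```python
from itertools import permutations

def rs(pi):
    """Robinson-Schensted via row insertion; returns (P, Q) as row tuples."""
    P, Q = [], []
    for step, x in enumerate(pi, 1):
        r = 0
        while True:
            if r == len(P):
                P.append([x]); Q.append([step]); break
            k = next((t for t, y in enumerate(P[r]) if y > x), None)
            if k is None:                      # x fits at the end of row r
                P[r].append(x); Q[r].append(step); break
            P[r][k], x = x, P[r][k]            # x bumps the smallest larger entry
            r += 1
    freeze = lambda T: tuple(tuple(row) for row in T)
    return freeze(P), freeze(Q)

def longest_increasing(pi):
    best = [0]
    def rec(i, last, length):
        best[0] = max(best[0], length)
        for j in range(i, len(pi)):
            if pi[j] > last:
                rec(j + 1, pi[j], length + 1)
    rec(0, 0, 0)
    return best[0]

assert rs((2, 4, 1, 3)) == (((1, 3), (2, 4)), ((1, 2), (3, 4)))  # the example

for pi in permutations(range(1, 6)):
    P, Q = rs(pi)
    inv = tuple(pi.index(v) + 1 for v in range(1, 6))            # pi^{-1}
    assert rs(inv) == (Q, P)                                     # claim (ii)
    assert len(P[0]) == longest_increasing(pi)                   # width
    assert len(P) == longest_increasing(tuple(reversed(pi)))     # height
```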
A version of the RS correspondence for shifted shapes was given,
initially, by Sagan~\cite{Sagan79}. An improved algorithm was
found, independently, by Worley~\cite{Worley} and
Sagan~\cite{Sagan87}. See also~\cite{Haiman1989}.
\subsection{Enumerative applications}\label{AR_s:RSK-involutions}
In this section we list just a few applications of the above
combinatorial algorithms.
\begin{corollary}\label{AR_t:RSK_cor1}\
\begin{itemize}
\item[(1)] The total number of pairs of SYT of the same shape is
$n!$. Thus
\[
\sum_\la (f^\la)^2 = n!
\]
\item[(2)] The total number of SYT of size $n$ is equal to the
number of involutions in $\Sc_n$~{\rm \cite[A000085]{oeis}}. Thus
\[
\sum\limits_{\la\vdash n}f^\la=\sum\limits_{k=0}^{\lfloor
n/2\rfloor} {n\choose 2k} (2k-1)!!,
\]
where $(2k-1)!! := 1 \cdot 3 \cdot \ldots \cdot (2k-1)$.
\item[(3)] Furthermore, for every positive integer $k$, the total
number of SYT of height $< k$ is equal to the number of
$(k,k-1,\ldots,1)$-avoiding involutions in $\Sc_n$.
\end{itemize}
\end{corollary}
\begin{proof}
$(1)$ follows from Claim~\ref{AR_t:RSK_properties}(i), $(2)$ from
Claim~\ref{AR_t:RSK_properties}(ii), and $(3)$ from
Proposition~\ref{AR_t:Schensted}.
\end{proof}
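Both counts are small enough to check directly; the following Python sketch (our own brute-force SYT counter and partition generator) verifies items (1) and (2) for $n \le 7$:

```python
from math import comb, factorial

def partitions(n, maxpart=None):
    if n == 0:
        yield ()
        return
    for p in range(min(n, maxpart or n), 0, -1):
        for rest in partitions(n - p, p):
            yield (p,) + rest

def num_syt(shape):
    """Brute force: the largest entry occupies a corner cell."""
    shape = tuple(r for r in shape if r > 0)
    if not shape:
        return 1
    return sum(num_syt(shape[:i] + (r - 1,) + shape[i + 1:])
               for i, r in enumerate(shape)
               if i + 1 == len(shape) or shape[i + 1] < r)

def double_factorial(m):          # (2k-1)!!, with (-1)!! := 1
    out = 1
    while m > 1:
        out *= m
        m -= 2
    return out

for n in range(1, 8):
    fs = [num_syt(la) for la in partitions(n)]
    assert sum(f * f for f in fs) == factorial(n)                # item (1)
    assert sum(fs) == sum(comb(n, 2 * k) * double_factorial(2 * k - 1)
                          for k in range(n // 2 + 1))            # item (2)
```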
\medskip
A careful examination of the RS correspondence implies the
following refinement of Corollary~\ref{AR_t:RSK_cor1}(2).
\begin{theorem}\label{AR_t:involutions_fixed}
The total number of SYT of size $n$ with exactly $n - 2k$ rows of odd
length is equal to ${n\choose 2k} (2k-1)!!$, the number of
involutions in $\Sc_n$ with $n - 2k$ fixed points.
\end{theorem}
\begin{corollary}\label{even_parts}
The total number of SYT of size $2n$ and all rows even is equal to
$(2n-1)!!$, the number of fixed point free involutions in
$\Sc_{2n}$.
\end{corollary}
For further refinements see, e.g.,~\cite[Ex.\ 45--46, 85]{Stanley_EC2_Supp}.
\medskip
Recalling the simple formula for the number of two-rowed SYT
(Corollary~\ref{total-two-rows}), it is tempting to look for the
total number of SYT of shapes with more rows.
\begin{theorem}\label{height3}{\rm \cite{Regev-height}}
The total number of SYT of size $n$ and at most $3$ rows is
\[
M_n = \sum_{k=0}^{\lfloor n/2 \rfloor} {n\choose 2k} C_k \, ,
\]
the $n$-th Motzkin number~{\rm \cite[A001006]{oeis}}.
\end{theorem}
\begin{proof}
By Observation~\ref{AR_t:obs1} together with
Proposition~\ref{two-rows}, the number of SYT of skew shape
$(n-k,k,k) / (k)$ is equal to
\[
{n\choose 2k} C_k \, ,
\]
where $C_k$ is the $k$-th Catalan number. On the other hand, by
careful examination of the jeu de taquin algorithm, one can verify
that it induces a bijection from the set of all SYT of skew shape
$(n-k,k,k) / (k)$ to the set of all SYT of shapes $(n-k-j,k,j)$
for $0 \le j \le \min(k, n-2k)$. Thus
\[
\sum\limits_{\la \vdash n \atop \ell(\la)\le 3} f^\la =
\sum\limits_{k=0}^{\lfloor n/2 \rfloor}\sum\limits_{j}
f^{(n-k-j,k,j)} = \sum_{k=0}^{\lfloor n/2 \rfloor} {n\choose 2k}
C_k \, ,
\]
completing the proof.
\end{proof}
See~\cite{Eu} for a bijective proof of Theorem~\ref{height3} via a
map from SYT of height at most $3$ to Motzkin paths.
The $n$-th Motzkin number also counts non-crossing involutions in
$\Sc_n$. It follows that
\begin{corollary}
The total number of SYT of height at most $3$ is equal to the
number of non-crossing involutions in $\Sc_n$.
\end{corollary}
\medskip
Somewhat more complicated formulas have been found for shapes with
more rows.
\begin{theorem}\label{height4}{\rm \cite{GB}}
\begin{itemize}
\item[1.]
The total number of SYT of size $n$ and at most $4$ rows
is equal to $C_{\lfloor (n+1)/2\rfloor} C_{\lceil (n+1)/2\rceil}$.
\item[2.]
The total number of SYT of size $n$ and at most $5$ rows
is equal to $6\sum_{k=0}^{\lfloor n/2 \rfloor}{n\choose 2k} C_k
\frac{(2k+2)!}{(k+2)!(k+3)!}$.
\end{itemize}
\end{theorem}
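Theorem~\ref{height3} and both parts of Theorem~\ref{height4} can be confirmed numerically. The Python sketch below (our own brute-force counter) checks all three formulas for $n \le 8$; the five-row sum is evaluated with exact rational arithmetic, since its individual summands need not be integers:

```python
from fractions import Fraction
from math import comb, factorial

def partitions(n, maxpart=None):
    if n == 0:
        yield ()
        return
    for p in range(min(n, maxpart or n), 0, -1):
        for rest in partitions(n - p, p):
            yield (p,) + rest

def num_syt(shape):
    shape = tuple(r for r in shape if r > 0)
    if not shape:
        return 1
    return sum(num_syt(shape[:i] + (r - 1,) + shape[i + 1:])
               for i, r in enumerate(shape)
               if i + 1 == len(shape) or shape[i + 1] < r)

catalan = lambda m: comb(2 * m, m) // (m + 1)

def total_height_le(n, h):
    return sum(num_syt(la) for la in partitions(n) if len(la) <= h)

for n in range(1, 9):
    # Regev: Motzkin numbers count SYT of height at most 3
    assert total_height_le(n, 3) == sum(comb(n, 2 * k) * catalan(k)
                                        for k in range(n // 2 + 1))
    # Gouyou-Beauchamps: height at most 4 and at most 5
    assert total_height_le(n, 4) == catalan((n + 1) // 2) * catalan(n // 2 + 1)
    assert total_height_le(n, 5) == 6 * sum(
        comb(n, 2 * k) * catalan(k)
        * Fraction(factorial(2 * k + 2), factorial(k + 2) * factorial(k + 3))
        for k in range(n // 2 + 1))
```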
\bigskip
The following shifted analogue of Corollary~\ref{AR_t:RSK_cor1}(1)
was proved by Schur~\cite{Schur}, more than a hundred years ago,
in a representation theoretical setting. A combinatorial proof,
using the shifted RS correspondence, was given by
Sagan~\cite{Sagan79}. An improved shifted RS algorithm was found,
independently, by Worley~\cite{Worley} and Sagan~\cite{Sagan87}.
See the end of Subsection~\ref{AR_s:def_classical_shapes}
for the notation $g^\la$.
\begin{theorem}
\[
\sum_{\text{strict } \la \vdash n} 2^{n - \ell(\la)} (g^\la)^2 = n!
\]
\end{theorem}
\section{Formulas for classical shapes}\label{AR_s:classical_shapes}
There is an explicit formula for the number of SYT of each
classical shape -- ordinary, skew or shifted. In fact, there are
several equivalent formulas, all unusually elegant. These formulas,
with proofs, will be given in this section. Additional proof
approaches (mostly for ordinary shapes) will be described in
Section~\ref{AR_s:proof_approaches}.
\subsection{Ordinary shapes}\label{AR_s:ordinary_shapes}
In this subsection we consider ordinary shapes $D = [\la]$,
corresponding to partitions $\la$. Recall the notation $f^{\la} :=
|\SYT(\la)|$ for the number of standard Young tableaux of shape
$\la$. Several explicit formulas are known for this number -- a
product formula, a hook length formula and a determinantal
formula.
Historically, ordinary tableaux were introduced by Young in 1900~\cite{Young1900}.
The first explicit formula for the number of SYT of ordinary shape was
the product formula.
It was obtained in 1900 by Frobenius~\cite[eqn.\ 6]{Frobenius1900} in an algebraic context,
as the degree of an irreducible character $\chi^{\la}$ of $\Sc_n$.
Independently, MacMahon~\cite[p.\ 175]{MacMahon1909} in 1909
(see also~\cite[\S 103]{MacMahon_book}) obtained the same formula
for the number of ballot sequences (see Definition~\ref{AR_d:ballot_seq} above),
which are equinumerous with SYT. In 1927 Young~\cite[pp.\ 260--261]{Young1927}
showed that $\deg(\chi^{\la})$ is actually equal to the number of SYT of shape $\la$,
and also provided his own proof~\cite[Theorem II]{Young1927} of MacMahon's result.
\begin{theorem}\label{t.AR_num_ordinary_prod}
{\rm\ (Ordinary product formula)}
For a partition $\la = (\la_1, \ldots, \la_t)$, let $\ell_i :=
\la_i + t -i$ $(1 \le i \le t)$. Then
\[
f^{\la} = \frac{|\la|!}{\prod_{i=1}^{t} \ell_i!}
\cdot \prod_{(i,j):\, i < j} (\ell_i - \ell_j).
\]
\end{theorem}
The best known and most influential of the explicit formulas is
undoubtedly the Frame-Robinson-Thrall hook length formula,
published in 1954~\cite{FRT}. The story of its discovery is quite
amazing~\cite{Sagan_book}: Frame was led to conjecture the formula
while discussing the work of Staal, one of Robinson's students,
during Robinson's visit to him in May 1953. Robinson could not
believe, at first, that such a simple formula exists, but became
convinced after trying some examples, and together they proved it.
A few days later, Robinson gave a lecture followed by a
presentation of the new result by Frame. Thrall, who was in the
audience, was very surprised because he had just proved the same
result on the same day!
\begin{definition}\label{AR_d:hook}
For a cell $c = (i,j) \in [\la]$ let
\[
H_{c} := [\la] \cap \left( \{(i,j)\} \cup \{(i,j') \,|\, j' > j\}
\cup \{(i',j) \,|\, i' > i\} \right)
\]
be the corresponding {\dem hook}, and let
\[
h_{c} := |H_{c}| = \la_i + \la'_j - i - j + 1
\]
be the corresponding {\dem hook length}.
\end{definition}
For example, in the following diagram the cells of the hook
$H_{(1,2)}$ are marked:
\[
\ydiagram{4, 3, 1} * [\bullet]{1 + 3, 1 + 1}
\]
and in the following diagram each cell is labeled by the
corresponding hook length:
\[
\ytableaushort{6431, 421, 1}
\]
\begin{theorem}\label{t.AR_num_ordinary_hook}
{\rm\ (Ordinary hook length formula)}
For any partition $\la = (\la_1, \ldots, \la_t)$,
\[
f^{\la} = \frac{|\la|!}{\prod_{c \in [\la]} h_{c}}.
\]
\end{theorem}
Last, but not least, is the determinantal formula. Remarkably, it
also has a generalization to the skew case; see the next
subsection.
\begin{theorem}\label{t.AR_num_ordinary_det}
{\rm\ (Ordinary determinantal formula)}
For any partition $\la = (\la_1, \ldots, \la_t)$,
\[
f^{\la} = |\la|! \cdot \det \left[\frac{1}{(\la_i - i + j)!}
\right]_{i, j = 1}^{t},
\]
using the convention $1/k! := 0$ for negative integers $k$.
\end{theorem}
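All three formulas are easily put on a computer. In the Python sketch below (our own code), each is transcribed literally and compared with a brute-force count; the determinant is evaluated exactly over the rationals:

```python
from fractions import Fraction
from math import factorial

def num_syt(shape):
    """Brute force: the largest entry occupies a corner cell."""
    shape = tuple(r for r in shape if r > 0)
    if not shape:
        return 1
    return sum(num_syt(shape[:i] + (r - 1,) + shape[i + 1:])
               for i, r in enumerate(shape)
               if i + 1 == len(shape) or shape[i + 1] < r)

def product_formula(la):
    t = len(la)
    ell = [la[i] + t - (i + 1) for i in range(t)]       # l_i = la_i + t - i
    val = Fraction(factorial(sum(la)))
    for e in ell:
        val /= factorial(e)
    for i in range(t):
        for j in range(i + 1, t):
            val *= ell[i] - ell[j]
    return val

def hook_formula(la):
    conj = [sum(1 for r in la if r > j) for j in range(la[0])]
    prod = 1
    for i in range(len(la)):
        for j in range(la[i]):
            prod *= la[i] + conj[j] - i - j - 1         # hook length, 0-indexed
    return factorial(sum(la)) // prod

def det(M):                                             # exact Gaussian elimination
    M = [row[:] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        p = next((r for r in range(c, n) if M[r][c]), None)
        if p is None:
            return Fraction(0)
        if p != c:
            M[c], M[p] = M[p], M[c]; d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return d

def det_formula(la):
    t = len(la)
    bang = lambda m: Fraction(1, factorial(m)) if m >= 0 else Fraction(0)
    M = [[bang(la[i] - (i + 1) + (j + 1)) for j in range(t)] for i in range(t)]
    return factorial(sum(la)) * det(M)

for la in [(1,), (3, 2), (4, 3, 1), (4, 4, 2, 1)]:
    f = num_syt(la)
    assert product_formula(la) == hook_formula(la) == det_formula(la) == f
```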
We shall now show that all these formulas are equivalent. Their
validity will then follow from a forthcoming proof of
Theorem~\ref{t.AR_num_skew_det}, which is a generalization of
Theorem~\ref{t.AR_num_ordinary_det}. Other proof approaches will
be described in Section~\ref{AR_s:proof_approaches}.
\begin{claim}
The formulas in Theorems~\ref{t.AR_num_ordinary_prod},
\ref{t.AR_num_ordinary_hook} and \ref{t.AR_num_ordinary_det} are
equivalent.
\end{claim}
\begin{proof}
To prove the equivalence of the product formula
(Theorem~\ref{t.AR_num_ordinary_prod}) and the hook length formula
(Theorem~\ref{t.AR_num_ordinary_hook}), it suffices to show that
\[
\prod_{c \in [\la]} h_{c} = \frac{\prod_{i=1}^{t} (\la_i + t -
i)!}{\prod_{(i,j):\, i < j} (\la_i - \la_j - i + j)}.
\]
This follows by induction on the number of columns, once we show
that the product of hook lengths for all the cells in the first
column of $[\la]$ satisfies
\[
\prod_{i=1}^{t} h_{(i,1)} =
\prod_{i=1}^{t} (\la_i + t - i) \,;
\]
and this readily follows from the obvious
\[
h_{(i,1)} = \la_i + t - i \qquad (\forall i).
\]
Actually, one also needs to show that the ordinary product formula
is valid even when the partition $\la$ has trailing zeros (so that
$t$ in the formula may be larger than the number of nonzero parts
in $\la$). This is not difficult, since adding one zero part
$\la_{t+1} = 0$ (and replacing $t$ by $t+1$) amounts, in the
product formula, to replacing each $\ell_i = \la_i + t - i$ by
$\ell_i + 1$ $(1 \le i \le t)$ and adding $\ell_{t+1} = 0$, which
multiplies the RHS of the formula by
\[
\frac{1}{\prod_{i=1}^{t} (\ell_i + 1) \cdot \ell_{t+1}!} \cdot
\prod_{i=1}^{t} (\ell_i + 1 - \ell_{t+1}) = 1.
\]
To prove equivalence of the product formula
(Theorem~\ref{t.AR_num_ordinary_prod}) and the determinantal
formula (Theorem~\ref{t.AR_num_ordinary_det}), it suffices to show
that
\[
\det \left[\frac{1}{(\ell_i - t + j)!} \right]_{i, j = 1}^{t} =
\frac{1}{\prod_{i=1}^{t} \ell_i!} \cdot {\prod_{(i,j):\, i < j}
(\ell_i - \ell_j)} \quad ,
\]
where
\[
\ell_i := \la_i + t - i \qquad (1 \le i \le t)
\]
as in Theorem~\ref{t.AR_num_ordinary_prod}. Using the {\dem
falling factorial} notation
\[
(a)_n := \prod_{i = 1}^{n} (a + 1 - i) \qquad (n \ge 0),
\]
this claim is equivalent to
\[
\det \left[ (\ell_i)_{t - j} \right]_{i, j = 1}^{t} =
\prod_{(i,j):\, i < j} (\ell_i - \ell_j)
\]
which, in turn, is equivalent (under suitable column operations)
to the well known evaluation of the Vandermonde determinant
\[
\det \left[ \ell_i^{t - j} \right]_{i, j = 1}^{t} =
\prod_{(i,j):\, i < j} (\ell_i - \ell_j).
\]
See~\cite[pp.\ 132--133]{Sagan_book} for an inductive proof
avoiding explicit use of the Vandermonde.
\end{proof}
\subsection{Skew shapes}
The determinantal formula for the number of SYT of an ordinary
shape can be extended to apply to a general skew shape. The
formula is due to Aitken~\cite[p.\ 310]{Aitken}, and was
rediscovered by Feit~\cite{Feit}. No product or hook length
formula is known in this generality (but a product formula for a
staircase minus a rectangle has been found by
DeWitt~\cite{DeWitt}; see also~\cite{Kratt_Schlosser}). Specific
classes of skew shapes, such as zigzags and strips of constant
width, have interesting special formulas; see
Section~\ref{AR_s:formulas_skew_strips}.
\begin{theorem}\label{t.AR_num_skew_det}
{\rm\ (Skew determinantal
formula)~\cite{Aitken}\cite{Feit}\cite[Corollary
7.16.3]{Stanley_EC2}} The number of SYT of skew shape $\la/\mu$,
for partitions $\la = (\la_1, \ldots, \la_t)$ and $\mu = (\mu_1,
\ldots, \mu_s)$ with $\mu_i \le \la_i$ $(\forall i)$, is
\[
f^{\la/\mu} = |\la/\mu|! \cdot \det \left[\frac{1}{(\la_i - \mu_j
- i + j)!} \right]_{i, j = 1}^{t},
\]
with the conventions $\mu_j := 0$ for $j > s$ and $1/k! := 0$ for
negative integers $k$.
\end{theorem}
The following proof is inductive. There is another approach that
uses the Jacobi-Trudi identity.
\begin{proof} (Adapted from~\cite{Feit})\\
By induction on the size $n := |\la/\mu|$. Denote
\[
a_{ij} := \frac{1}{(\la_i - \mu_j - i + j)!}.
\]
For $n = 0$, $\la_i = \mu_i$ $(\forall i)$. Thus
\[
i = j \,\then\, \la_i - \mu_i - i + i = 0 \,\then\, a_{ii} = 1
\]
and
\[
i > j \,\then\, \la_i - \mu_j - i + j < \la_i - \mu_j = \la_i -
\la_j \le 0 \,\then\, a_{ij} = 0.
\]
Hence the matrix $(a_{ij})$ is upper triangular with diagonal
entries $1$, and $f^{\la/\mu} = 1 = 0! \det (a_{ij})$.
For the induction step assume that the claim holds for all skew
shapes of size $n-1$, and consider a shape $\la/\mu$ of size $n$
with $t$ rows. The cell containing $n$ must be the last cell in
its row and column. Therefore
\[
f^{\la/\mu} = \sum_{i'} f^{(\la/\mu)_{i'}}
\]
where $(\la/\mu)_{i'}$ is the shape $\la/\mu$ minus the last cell
in row $i'$, and summation is over all the rows $i'$ which are
nonempty and whose last cell is also last in its column.
Explicitly, summation is over all $i'$ such that $\la_{i'} >
\mu_{i'}$ as well as $\la_{i'} > \la_{i'+1}$. By the induction
hypothesis,
\[
f^{\la/\mu} = (n-1)!\, \sum_{i'} \det\, ( a_{ij}^{(i')} )
\]
where $a_{ij}^{(i')}$ is the analogue of $a_{ij}$ for the shape
$(\la/\mu)_{i'}$ and summation is over the above values of $i'$.
In fact,
\[
a_{ij}^{(i')} = \begin{cases}
a_{ij}, & \hbox{if } i \ne i'; \\
(\la_i - \mu_j - i + j) \cdot a_{ij}, & \hbox{if } i = i'.
\end{cases}
\]
This holds for all values (positive, zero or negative) of $\la_i -
\mu_j - i + j$. The rest of the proof consists of two steps.
\medskip
{\bf Step 1:}
The above formula for $f^{\la/\mu}$ holds with summation extending over all $1 \le i' \le t$.\\
Indeed, it suffices to show that
\[
\la_{i'} = \mu_{i'} \hbox{ or } \la_{i'} = \la_{i'+1} \,\then\,
\det\, ( a_{ij}^{(i')} ) = 0.
\]
If $ \la_{i'} = \la_{i'+1}$ then
\[
\la_{i'+1} - \mu_j - (i'+1) + j = (\la_{i'} - 1) - \mu_j - i' + j
\qquad (\forall j),
\]
so that the matrix $( a_{ij}^{(i')} )$ has two equal rows and
hence its determinant is $0$. If $\la_{i'} = \mu_{i'}$ then
\[
j \le i' < i \,\then\, \la_i - \mu_j - i + j < \la_i - \mu_j \le
\la_i - \mu_{i'} \le \la_{i'} - \mu_{i'} = 0
\]
and
\[
j \le i' = i \,\then\, (\la_{i'} - 1) - \mu_j - i' + j < \la_{i'}
- \mu_j \le \la_{i'} - \mu_{i'} = 0.
\]
Thus the matrix $( a_{ij}^{(i')} )$ has a zero submatrix
corresponding to $j \le i' \le i$, which again implies that its
determinant is zero -- e.g., by considering the determinant as a
sum over permutations $\sigma \in \Sc_t$ and noting that, by the
pigeon hole principle, there is no permutation satisfying $j =
\sigma(i) > i'$ for all $i \ge i'$.
\medskip
{\bf Step 2:} Let $A_{ij}$ be the $(i,j)$-cofactor of the matrix
$A = (a_{ij})$, so that
\[
\det A = \sum_j a_{ij} A_{ij} \qquad (\forall i)
\]
and also
\[
\det A = \sum_i a_{ij} A_{ij} \qquad (\forall j).
\]
Then, expanding along row $i'$,
\[
\det\, ( a_{ij}^{(i')} ) = \sum_j a_{i'j}^{(i')} A_{i'j} = \sum_j
(c_{i'} - d_j) a_{i'j} A_{i'j}
\]
where $c_i := \la_i - i$ and $d_j := \mu_j - j$. Thus
\begin{eqnarray*}
\frac{f^{\la/\mu}}{(n-1)!} &=& \sum_{i' = 1}^{t} \det\,
(a_{ij}^{(i')})
= \sum_{i'} \sum_j (c_{i'} - d_j) a_{i'j} A_{i'j} \\
&=& \sum_{i'} \sum_j c_{i'} a_{i'j} A_{i'j} - \sum_{i'} \sum_j d_j a_{i'j} A_{i'j} \\
&=& \sum_{i'} c_{i'} \det A - \sum_j d_j \det A
= \left( \sum_{i'} c_{i'} - \sum_j d_j \right) \det A \\
&=& \left( \sum_{i'} \la_{i'} - \sum_j \mu_j \right) \det A =
|\la/\mu| \det A = n \det A
\end{eqnarray*}
which completes the proof.
\end{proof}
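Both the determinantal formula and the corner recursion that drives the induction are easy to implement. A Python sketch (our own code), with the determinant evaluated exactly over the rationals:

```python
from fractions import Fraction
from math import factorial

def num_skew_syt(la, mu):
    """Brute force, via the recursion used in the proof: the largest
    entry sits in a cell that is last in its row and in its column."""
    mu = tuple(mu) + (0,) * (len(la) - len(mu))
    if sum(la) == sum(mu):
        return 1
    return sum(num_skew_syt(la[:i] + (la[i] - 1,) + la[i + 1:], mu)
               for i in range(len(la))
               if la[i] > mu[i] and (i + 1 == len(la) or la[i] > la[i + 1]))

def det(M):                                    # exact Gaussian elimination
    M = [row[:] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        p = next((r for r in range(c, n) if M[r][c]), None)
        if p is None:
            return Fraction(0)
        if p != c:
            M[c], M[p] = M[p], M[c]; d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return d

def aitken(la, mu=()):
    t = len(la)
    mu = tuple(mu) + (0,) * (t - len(mu))
    bang = lambda m: Fraction(1, factorial(m)) if m >= 0 else Fraction(0)
    M = [[bang(la[i] - mu[j] - (i + 1) + (j + 1)) for j in range(t)]
         for i in range(t)]
    return factorial(sum(la) - sum(mu)) * det(M)

for la, mu in [((3, 2, 2), (1,)), ((4, 3, 1), ()), ((3, 3, 2), (2, 1)),
               ((2, 2, 1), (1, 1))]:
    assert aitken(la, mu) == num_skew_syt(la, mu)
```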
\subsection{Shifted shapes}\label{AR_s:shifted_shapes}
For a strict partition $\la$, let $g^{\la} := |\SYT(\la^*)|$ be
the number of standard Young tableaux of shifted shape $\la$. Like
ordinary shapes, shifted shapes have three types of formulas --
product, hook length and determinantal. The product formula was
proved by Schur~\cite{Schur}, using representation theory, and
then by Thrall~\cite{Thrall}, using recursion and combinatorial
arguments.
\begin{theorem}\label{t.AR_num_shifted_prod}
{\rm\ (Schur's shifted product
formula)~\cite{Schur}\cite{Thrall}\cite[p.\ 267, eq.\ (2)]{Md}}
For any strict partition $\la = (\la_1, \ldots, \la_t)$,
\[
g^{\la} = \frac{|\la|!}{\prod_{i=1}^{t} \la_i!} \cdot
\prod_{(i,j):\, i < j} \frac{\la_i - \la_j}{\la_i + \la_j}.
\]
\end{theorem}
\begin{definition}
For a cell $c = (i,j) \in [\la^*]$ let
\[
H_{c}^* := [\la^*] \cap \left( \{(i,j)\} \cup \{(i,j') \,|\, j' >
j\} \cup \{(i',j) \,|\, i' > i\} \cup \{(j+1,j') \,|\, j' \ge
j+1\} \right)
\]
be the corresponding {\dem shifted hook}; note that the last set is
relevant only for $j < t$. Let
\[
h_{c}^* := |H_{c}^*| =
\begin{cases}
\la_i + \la_{j+1}, &\hbox{\rm if } j < t; \\
\la_i + i - j + |\{i' \,|\, i' > i, \, \la_{i'} + i' \ge j + 1\}|,
&\hbox{\rm if } j \ge t.
\end{cases}
\]
be the corresponding {\dem shifted hook length}.
\end{definition}
For example, in the following diagram the cells in the shifted
hook $H_{(1,2)}^*$ are marked
\[
\ydiagram{5, 1 + 4, 2 + 2} * [\bullet]{1 + 4, 1 + 1, 2 + 2}
\]
and in the following diagram each cell is labeled by the
corresponding shifted hook length.
\[
\ytableaushort{97542, \none6431, \none\none21}
\]
\begin{theorem}\label{t.AR_num_shifted_hook}
{\rm\ (Shifted hook length formula)~\cite[p.\ 267, eq.\ (1)]{Md}}
For any strict partition $\la = (\la_1, \ldots, \la_t)$,
\[
g^{\la} = \frac{|\la|!}{\prod_{c \in [\la^*]} h_{c}^*}.
\]
\end{theorem}
\begin{theorem}\label{t.AR_num_shifted_det}
{\rm\ (Shifted determinantal formula)} For any strict partition
$\la = (\la_1, \ldots, \la_t)$,
\[
g^{\la} = \frac{|\la|!}{\prod_{(i,j):\, i < j} (\la_i + \la_j)}
\cdot \det \left[\frac{1}{(\la_i - t + j)!} \right]_{i, j =
1}^{t},
\]
using the convention $1/k! := 0$ for negative integers $k$.
\end{theorem}
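The shifted formulas can be tested in the same spirit. The Python sketch below (our own code, with 0-indexed cells, so row $i$ occupies columns $i, \ldots, i + \la_i - 1$) computes shifted hook lengths directly from the definition of $H_c^*$ as a set of cells, checks them against the labeled diagram above, and compares the product and hook length formulas with a brute-force count:

```python
from fractions import Fraction
from math import factorial

def shifted_cells(la):
    return [(i, i + j) for i, p in enumerate(la) for j in range(p)]

def shifted_hook_length(la, i, j):
    """|H*_c| counted cell by cell: arm, leg, and the whole row j+1."""
    cells = set(shifted_cells(la))
    arm = sum(1 for (r, c) in cells if r == i and c > j)
    leg = sum(1 for (r, c) in cells if c == j and r > i)
    extra = sum(1 for (r, c) in cells if r == j + 1)
    return 1 + arm + leg + extra

def g_hook(la):
    prod = 1
    for (i, j) in shifted_cells(la):
        prod *= shifted_hook_length(la, i, j)
    return factorial(sum(la)) // prod

def g_product(la):                       # Schur's product formula
    val = Fraction(factorial(sum(la)))
    for p in la:
        val /= factorial(p)
    for i in range(len(la)):
        for j in range(i + 1, len(la)):
            val *= Fraction(la[i] - la[j], la[i] + la[j])
    return val

def g_brute(la):
    """Corner recursion on the shifted diagram of a strict partition."""
    la = tuple(p for p in la if p > 0)
    if not la:
        return 1
    return sum(g_brute(la[:i] + (la[i] - 1,) + la[i + 1:])
               for i in range(len(la))
               if i + 1 == len(la) or la[i] > la[i + 1] + 1)

# the worked example la = (5, 4, 2): hook lengths 97542 / 6431 / 21
table = {0: [9, 7, 5, 4, 2], 1: [6, 4, 3, 1], 2: [2, 1]}
for (i, j) in shifted_cells((5, 4, 2)):
    assert shifted_hook_length((5, 4, 2), i, j) == table[i][j - i]
for la in [(5, 4, 2), (3, 2, 1), (4, 1), (6, 3, 2)]:
    assert g_hook(la) == g_product(la) == g_brute(la)
```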
The formulas in Theorems~\ref{t.AR_num_shifted_prod},
\ref{t.AR_num_shifted_hook} and \ref{t.AR_num_shifted_det} can be
shown to be equivalent in much the same way as was done for
ordinary shapes in Subsection~\ref{AR_s:ordinary_shapes}. Note
that the factors of the first denominator in the determinantal
formula (Theorem~\ref{t.AR_num_shifted_det}) are precisely the
shifted hook lengths $h_c^*$ for cells $c = (i,j)$ in the region $j < t$.
\section{More proofs of the hook length formula}\label{AR_s:proof_approaches}
\subsection{A probabilistic proof}
Probabilistic proofs rely on procedures for a random choice of an
object from a set. The
key observation is that a uniform distribution implies an exact
evaluation and ``almost uniform'' distributions yield good bounds.
A seminal example is the Greene-Nijenhuis-Wilf probabilistic proof
of the ordinary hook length formula, to be described here.
Our outline follows Sagan's description, in the first edition
of~\cite{Sagan_book}, of the original proof of Greene, Nijenhuis
and Wilf~\cite{Greene-etal}.
We start with a procedure that generates a random SYT of a given
ordinary shape $D$. Recall from Definition~\ref{AR_d:hook} the
notions of {\dem hook} $H_c$ and {\dem hook length} $h_c$
corresponding to a cell $c \in D$. A {\dem corner} of $D$ is a
cell which is last in its row and in its column (equivalently, has
hook length $1$).
\medskip
\begin{algorithmic}[1]
\Require{$D$, a diagram of ordinary shape.} \Ensure{A random $T
\in \SYT(D)$.} \Statex \Procedure{RandomSYT}{$D$} \While{$D$ is
not empty}
\State $n \gets |D|$
\State Choose randomly a cell $c \in D$ \Comment with uniform probability $1/n$
\While{$c$ is not a corner of $D$}
\State Choose randomly a cell $c' \in H_c \setminus \{c\}$
\Comment with uniform probability $1/(h_c - 1)$
\State $c \gets c'$
\EndWhile
\State $T(c) \gets n$
\State $D \gets D \setminus \{c\}$
\EndWhile \State \textbf{return} $T$ \EndProcedure
\end{algorithmic}
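The procedure transcribes directly into code. The Python sketch below (our own helper names) also checks that every output is indeed a standard Young tableau:

```python
import random

def hook_cells(shape, i, j):
    """The hook of (i, j) minus the cell itself: arm plus leg."""
    arm = [(i, c) for c in range(j + 1, shape[i])]
    leg = [(r, j) for r in range(i + 1, len(shape)) if shape[r] > j]
    return arm + leg

def random_syt(shape, rng):
    """Greene-Nijenhuis-Wilf: a uniform starting cell, then a hook walk
    to a corner, which receives the largest remaining entry."""
    shape, T = list(shape), {}
    n = sum(shape)
    while n:
        cells = [(i, j) for i in range(len(shape)) for j in range(shape[i])]
        i, j = rng.choice(cells)
        while True:
            rest = hook_cells(shape, i, j)
            if not rest:                 # (i, j) is a corner
                break
            i, j = rng.choice(rest)      # uniform over H_c minus {c}
        T[(i, j)] = n
        shape[i] -= 1
        if shape[i] == 0:                # only the bottom row can empty out
            shape.pop()
        n -= 1
    return T

def is_syt(T):
    increasing = all(T.get((i - 1, j), 0) < v and T.get((i, j - 1), 0) < v
                     for (i, j), v in T.items())
    return increasing and sorted(T.values()) == list(range(1, len(T) + 1))

rng = random.Random(0)
samples = [random_syt((3, 2), rng) for _ in range(500)]
assert all(is_syt(T) for T in samples)
assert len({tuple(sorted(T.items())) for T in samples}) == 5   # f^{(3,2)} = 5
```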
\medskip
We claim that this procedure produces each SYT of shape $D$ with
the same probability. More precisely,
\begin{lemma}\label{AR_t:GNW-lemma1}
The procedure $\textsc{RandomSYT}$ produces each SYT of shape $D$
with probability
\[
p = \frac{1}{|D|!} \prod_{c \in D} h_c \, .
\]
\end{lemma}
\begin{proof}
By induction on $n := |D|$. The claim clearly holds for $n = 0,
1$.
Suppose that the claim holds for all shapes of size $n-1$, and let
$D$ be an ordinary shape of size $n$. Let $T \in \SYT(D)$, and
assume that $T(v) = n$ for some corner $v = (\alpha, \beta)$.
Denote $D' := D \setminus \{v\}$, and let $T' \in \SYT(D')$ be the
restriction of $T$ to $D'$.
In order to produce $T$, the algorithm needs to first produce $v$
(in lines 4--8, given $D$), and then move on to produce $T'$ from
$D'$. By the induction hypothesis, it suffices to show that the
probability that lines 4--8 produce the corner $v = (\alpha,
\beta)$ is
\[
\frac{\prod_{c \in D} h_c / n!}{\prod_{c \in D'} h'_c / (n-1)!} =
\frac{1}{n} \prod_{c \in D'} \frac{h_c}{h'_c} = \frac{1}{n}
\prod_{i = 1}^{\alpha - 1} \frac{h_{i,\beta}}{h_{i,\beta} -1}
\prod_{j = 1}^{\beta - 1} \frac{h_{\alpha,j}}{h_{\alpha,j} -1} \,
,
\]
where $h'_c$ denotes hook length in $D'$.
This is equal to
\[
\frac{1}{n} \prod_{i = 1}^{\alpha - 1} \left( 1 +
\frac{1}{h_{i,\beta}-1} \right)
\prod_{j = 1}^{\beta - 1} \left( 1 + \frac{1}{h_{\alpha,j}-1} \right)
= \frac{1}{n} \sum\limits_{A \subseteq [\alpha-1] \atop B\subseteq
[\beta-1]} \prod_{i \in A} \frac{1}{h_{i,\beta} -1} \prod_{j \in
B} \frac{1}{h_{\alpha,j} -1} \, .
\]
Following Sagan~\cite{Sagan_book} we call any possible sequence of
cells of $D$ obtained by lines 4--8 of the procedure (starting at
a random $c$ and ending at the given corner $v$) a {\dem trial}.
For each trial $\tau$, let
\[
A(\tau) := \{ i < \alpha \,:\, \exists j \text{ s.t. } (i,j)
\text{ is a cell in the trial } \tau \} \subseteq [1, \alpha - 1]
\]
be its {\dem horizontal projection} and let
\[
B(\tau) := \{ j < \beta \,:\, \exists i \text{ s.t. } (i,j) \text{
is a cell in the trial } \tau \} \subseteq [1, \beta - 1]
\]
be its {\dem vertical projection}.
\smallskip
It then suffices to show that for any given $A \subseteq [1,
\alpha - 1]$ and $B \subseteq [1, \beta - 1]$, the sum of
probabilities of all trials $\tau$ ending at $v = (\alpha, \beta)$
such that $A(\tau) = A$ and $B(\tau) = B$ is
\[
\frac{1}{n} \prod_{i \in A} \frac{1}{h_{i,\beta} -1} \prod_{j \in
B} \frac{1}{h_{\alpha,j} -1} \, .
\]
This may be proved by induction on $|A \cup B|$.
\end{proof}
Lemma~\ref{AR_t:GNW-lemma1} says that the algorithm produces each
$T \in \SYT(D)$ with the same probability $p$. The number of SYT
of shape $D$ is therefore $1/p$, proving the hook length formula
(Theorem~\ref{t.AR_num_ordinary_hook}).
For a fully detailed proof see~\cite{Greene-etal} or the first
edition of~\cite{Sagan_book}.
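The hook length formula itself is easy to check numerically: the
number of SYT satisfies the corner recursion $f^D = \sum_v f^{D
\setminus \{v\}}$ (sum over corners $v$, since the entry $|D|$
must sit at a corner), which the following Python sketch compares
with $|D|!/\prod_c h_c$ for a few small shapes (the encoding of
shapes as partition tuples is ours).

```python
from functools import lru_cache
from math import factorial, prod

def hook_product(shape):
    """Product of all hook lengths of an ordinary shape."""
    cols = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    return prod((shape[i] - j) + (cols[j] - i) - 1
                for i in range(len(shape)) for j in range(shape[i]))

@lru_cache(None)
def num_syt(shape):
    """f^D via the corner recursion: remove the cell containing |D|."""
    if not shape:
        return 1
    total = 0
    for i, r in enumerate(shape):
        if i + 1 == len(shape) or shape[i + 1] < r:   # row i ends at a corner
            smaller = shape[:i] + ((r - 1,) if r > 1 else ()) + shape[i + 1:]
            total += num_syt(smaller)
    return total
```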
\medskip
A similar method was applied in~\cite{Sagan80} to prove the hook
length formula for shifted shapes
(Theorem~\ref{t.AR_num_shifted_hook} above).
\subsection{Bijective proofs}\label{bijective}
There are several bijective proofs of the (ordinary) hook length
formula. Franzblau and Zeilberger~\cite{FZ} gave a bijection which
is rather simple to describe, but breaks the row-column symmetry
of hooks. Remmel~\cite{Remmel82} used the Garsia-Milne involution
principle~\cite{GarsiaMilne81} to produce a composition of maps,
``bijectivizing'' recurrence relations.
Zeilberger~\cite{Zeilberger_DM1984} then gave a bijective version
of the probabilistic proof of Greene, Nijenhuis and
Wilf~\cite{Greene-etal} (described in the previous subsection).
Krattenthaler~\cite{Kratt95} combined the Hillman-Grassl
algorithm~\cite{HillmanGrassl} and Stanley's
$(P,\omega)$-partition theorem with the involution principle.
Novelli, Pak and Stoyanovskii~\cite{NovelliPakSto97} gave a
complete proof of a bijective algorithm previously outlined by Pak
and Stoyanovskii~\cite{PakSto92}. A generalization of their method
was given by Krattenthaler~\cite{Kratt98}.
Bijective proofs for the shifted hook length formula were given by
Krattenthaler~\cite{Kratt95} and Fischer~\cite{Fischer}.
A bijective proof of the ordinary determinantal formula was
given by Zeilberger~\cite{Zeilberger_DM1983}; see
also~\cite{Linial} and~\cite{Kratt89}.
We shall briefly describe here the bijections of Franzblau-Zeilberger
and of Novelli-Pak-Stoyanovskii.
Only the algorithms (for the map in one direction) will be
specified; the interested reader is referred to the original
papers (or to~\cite{Sagan_book}) for more complete descriptions
and proofs.
The basic setting for both bijections is the following.
\begin{definition}
Let $\la$ be a partition of $n$ and $A$ a set of positive integers
such that $|A| = |\la|$. A {\dem Young tableau} of (ordinary)
shape $\la$ and image $A$ is a bijection $R: [\la] \to A$, not
required to be order-preserving. A {\dem pointer tableau} (or
{\dem hook function}) of shape $\la$ is a function $P: [\la] \to
\bbz$ which assigns to each cell $c \in [\la]$ a pointer $p(c')$
which encodes some cell $c'$ in the hook $H_c$ of $c$ (see
Definition~\ref{AR_d:hook}). The pointer corresponding to $c' \in
H_c$ is defined as follows:
\[
p(c') :=
\begin{cases}
j, & \text{if $c'$ is $j$ columns to the right of $c$, in the same row;} \\
0, & \text{if $c' = c$;} \\
-i, & \text{if $c'$ is $i$ rows below $c$, in the same column.}
\end{cases}
\]
Let $\YT(\la, A)$ denote the set of all Young tableaux of shape
$\la$ and image $A$, $\PT(\la)$ the set of all pointer tableaux of
shape $\la$, and $\SPT(\la, A)$ the set of all pairs $(T, P)$
(``standard and pointer tableaux'') where $T \in \SYT(\la, A)$ and
$P \in \PT(\la)$. $\YT(\la)$ is a shorthand for $\YT(\la, [n])$
where $n = |\la|$, and $\SPT(\la)$ a shorthand for $\SPT(\la,
[n])$.
\end{definition}
\begin{example}
A typical hook, with each cell marked by its pointer:
\[
\ytableaushort{0 1 2 3 4, \mone, \mtwo}
\]
\end{example}
The hook length formula that we want to prove may be written as
\[
n! = f^{\la} \cdot \prod_{c \in [\la]} h_c \, .
\]
The LHS of this formula is the size of $\YT(\la)$, while the RHS
is the size of $\SPT(\la)$. Any explicit bijection $f: \YT(\la)
\to \SPT(\la)$ will prove the hook length formula. As promised, we
shall present algorithms for two such bijections.
\bigskip
\noindent {\dem The Franzblau-Zeilberger algorithm~\cite{FZ}:} The
main procedure, $\textsc{FZ-SortTableau}$, ``sorts'' a YT $R$ of
ordinary shape, column by column from right to left, to produce a
SPT $(T, P)$ of the same shape. The pointer tableau $P$ records
each step of the sorting, keeping just enough information to
enable reversal of the procedure. $\varnothing$ denotes the empty
tableau.
\medskip
\begin{algorithmic}[1]
\Require{$R \in \YT(\la)$.} \Ensure{$(T, P) \in \SPT(\la)$.}
\Statex \Procedure{FZ-SortTableau}{$R$} \State $(T, P) \gets
(\varnothing, \varnothing)$ \Comment{Initialize} \State $m \gets$
number of columns of $R$ \For{$j \gets m$ \textbf{downto} $1$}
\Comment{Add columns from right to left}
\State $c \gets$ column $j$ of $R$
\State $(T, P) \gets$ \Call{InsertColumn}{$T, P, c$}
\EndFor \State \textbf{return} $(T, P)$ \EndProcedure
\end{algorithmic}
\medskip
The algorithm makes repeated use of the following procedure
$\textsc{InsertColumn}$:
\medskip
\begin{algorithmic}[1]
\Require{$(T, P, c)$, where $(T, P) \in \SPT(\mu, A)$ for some
ordinary shape $\mu$ and some set $A$ of positive integers of size
$|A| = |\mu|$ such that all the rows of $T$ are increasing, and $c
= (c_1, \dots, c_m)$ is a vector of distinct positive integers
$c_i \not\in A$ of length $m \ge \ell(\mu)$.} \Ensure{$(T', P')
\in \SPT(\mu', A')$, where $A' = A \cup \{c_1, \ldots, c_m\}$ and
$\mu'$ is obtained from $\mu$ by attaching a new first column of
length $m$.} \Statex \Procedure{InsertColumn}{$T, P, c$} \For {$i
\gets 1$ \textbf{to} $m$}
\State $T \gets$ \Call{Insert}{$T, i, c_i$}
\Comment{Insert $c_i$ into row $i$ of $T$, keeping the row entries increasing}
\State $d_i \gets$ (new column index of $c_i$) $- 1$
\Comment{Initialize the pointer $d_i$}
\EndFor \While{$T$ is not a Standard Young Tableau}
\State $(k,x) \gets T^{-1}(\min \{T(i,j) \,|\, T(i-1, j) > T(i, j) \})$
\Comment{The smallest entry out of order}
\State $y \gets d_{k-1} + 1$
\Comment{Claim: $y > 0$}
\State $T \gets$ \Call{Exchange}{$T, (k, x), (k-1, y)$}
\State $y' \gets$ new column index of the old $T(k-1, y)$
\Comment{The new row index is $k$}
\State \Comment{Update the pointers $d_{k-1}$ and $d_k$}
\State $d_{k-1} \gets \begin{cases}
v, & \text{if } d_k = v \ge 0,\ v \ne x-1; \\
-1, & \text{if } d_k = x-1; \\
-(u+1), & \text{if } d_k = -u < 0.
\end{cases}$
\State $d_k \gets y'-1$
\EndWhile \State $P \gets$ \Call{Attach}{$d, P$} \Comment{Attach
$d$ to $P$ as a first column} \State \textbf{return} $(T, P)$
\Comment{$T$ is now a SYT} \EndProcedure
\end{algorithmic}
\medskip
This procedure makes use of some elementary operations, which may
be described as follows:
\begin{itemize}
\item $\textsc{Insert}(T, i, c_i)$ inserts the entry $c_i$ into
row $i$ of $T$, reordering this row to keep it increasing. \item
$\textsc{Exchange}(T, (k, x), (\ell, y))$ exchanges the entries in
cells $(k, x)$ and $(\ell, y)$ of $T$ and then reorders rows $k$
and $\ell$ to keep them increasing. \item $\textsc{Attach}(d, P)$
attaches the vector $d$ to the pointer tableau $P$ as a new first
column.
\end{itemize}
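A Python sketch of $\textsc{InsertColumn}$ and
$\textsc{FZ-SortTableau}$, as we read the pseudocode (rows are
Python lists, columns are $0$-based, so the stored value is already
the pointer $d_i = $ ``new column index minus one''); it can be
checked against the two worked instances in the example below.

```python
def insert_column(T, P, c):
    """One InsertColumn step of the Franzblau--Zeilberger bijection.
    T: partial SYT (list of increasing rows), P: pointer rows, c: the
    new column entries, one per row.  Returns the updated pair (T, P)."""
    T = [list(row) for row in T]
    while len(T) < len(c):
        T.append([])
    d = []
    for i, ci in enumerate(c):          # Insert(T, i, c_i), rows stay sorted
        T[i].append(ci)
        T[i].sort()
        d.append(T[i].index(ci))        # (new column index of c_i) - 1
    while True:
        bad = [(T[k][x], k, x) for k in range(1, len(T))
               for x in range(len(T[k])) if T[k - 1][x] > T[k][x]]
        if not bad:                     # T is a standard Young tableau
            break
        _, k, x = min(bad)              # smallest entry out of order
        y = d[k - 1]                    # 0-based column of the exchange
        moved = T[k - 1][y]             # this entry will drop to row k
        T[k][x], T[k - 1][y] = moved, T[k][x]
        T[k].sort(); T[k - 1].sort()    # Exchange, rows kept increasing
        v = d[k]                        # update the pointers d_{k-1}, d_k
        if v < 0:
            d[k - 1] = v - 1            # case d_k = -u:       -(u+1)
        elif v == x:
            d[k - 1] = -1               # case d_k = x-1 (0-based: x)
        else:
            d[k - 1] = v                # case d_k = v >= 0, v != x-1
        d[k] = T[k].index(moved)        # d_k <- y' - 1
    P = [[d[i]] + (list(P[i]) if i < len(P) else []) for i in range(len(T))]
    return T, P

def fz_sort(R):
    """FZ-SortTableau: absorb the columns of R from right to left."""
    T, P = [], []
    for j in range(max(len(row) for row in R) - 1, -1, -1):
        T, P = insert_column(T, P, [row[j] for row in R if len(row) > j])
    return T, P
```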
\begin{example}
An instance of $\textsc{InsertColumn}(T, P, c)$ with
\[
T =\, \ytableaushort{1 8, 4, 7} \quad , \quad P =\,
\ytableaushort{1 0, 0, 0} \quad , \quad c =\, \ytableaushort{{12},
5, 3, 6}
\]
proceeds as follows (with the smallest entry out of order set in boldface):
\[
(T, d) =\, \ytableaushort{1 8 {12}, 4 5, {\bnum{3}} 7, 6} \quad
\ytableaushort{2, 1, 0, 0} \quad \to \quad \ytableaushort{1 8
{12}, 3 {\bnum{4}}, 5 7, 6} \quad \ytableaushort{2, \mone, 0, 0}
\quad \to \quad \ytableaushort{1 4 8, 3 {12}, 5 {\bnum{7}}, 6}
\quad \ytableaushort{\mtwo, 1, 0, 0} \quad \to \quad
\ytableaushort{1 4 8, 3 7, 5 {12}, 6} \quad \ytableaushort{\mtwo,
0, 1, 0}
\]
and yields
\[
T =\, \ytableaushort{1 4 8, 3 7, 5 {12}, 6} \quad , \quad P =\,
\ytableaushort{\mtwo 1 0, 0 0, 1 0, 0} \quad .
\]
An instance of $\textsc{FZ-SortTableau}(R)$ with
\[
R =
\ytableaushort{9 {12} 8 1, 2 5 4, {11} 3 7, {10} 6}
\]
proceeds as follows (the second step being the instance above):
\[
\ytableaushort{1} \quad \ytableaushort{0} \quad \to \quad
\ytableaushort{1 8, 4, 7} \quad \ytableaushort{1 0, 0, 0} \quad
\to \quad \ytableaushort{1 4 8, 3 7, 5 {12}, 6} \quad
\ytableaushort{\mtwo 1 0, 0 0, 1 0, 0} \quad \to \quad
\ytableaushort{1 3 4 8, 2 7 9, 5 {10} {12}, 6 {11}} \quad
\ytableaushort{0 \mtwo 1 0, 2 0 0, \mone 1 0, 1 0} \quad = (T, P)
\quad .
\]
\end{example}
\bigskip
\noindent {\dem The Novelli-Pak-Stoyanovskii
algorithm~\cite{NovelliPakSto97}:} Again, we prove the hook length
formula
\[
n! = f^{\la} \cdot \prod_{c \in [\la]} h_c
\]
by building an explicit bijection $f: \YT(\la) \to \SPT(\la)$.
However, instead of building the tableaux column by column, we
shall use a modified jeu de taquin to unscramble the entries of $R
\in \YT(\la)$ so that rows and columns increase. Again, a pointer
tableau will keep track of the process so as to make it
invertible. Our description will essentially
follow~\cite{Sagan_book}.
First, define a linear (total) order on the cells of a diagram $D$
of ordinary shape by defining
\[
(i, j) \unlhd (i', j') \iff \text{either } j > j' \text{ or } j =
j' \text{ and } i \ge i'.
\]
For example, the cells of the following diagram are labelled $1$
to $7$ according to this linear order:
\[
\ytableaushort{7 4 2, 6 3 1, 5}
\]
If $R \in \YT(\la)$ and $c \in D := [\la]$, let $R^{\unlhd c}$
(respectively $R^{\lhd c}$) be the tableau consisting of all cells
$b \in D$ with $b \unlhd c$ (respectively, $b \lhd c$).
Define a procedure $\textsc{MForwardSlide}$ which is the procedure
$\textsc{ForwardSlide}$ from the description of jeu de taquin,
with the following two modifications:
\begin{enumerate}
\item Its input is $(T, c)$ with $T \in \YT$ rather than $T \in
\SYT$. \item Its output is the pair $(T, c)$ (in the notation used
there), rather than just $T$.
\end{enumerate}
\medskip
\begin{algorithmic}[1]
\Require{$R \in \YT(\la)$.} \Ensure{$(T, P) \in \SPT(\la)$.}
\Statex \Procedure{NPS}{$R$} \State $T \gets R$ \State $D \gets
\sh(T)$ \State $P \gets 0 \in \PT(D)$ \Comment A pointer tableau
of shape $D$ filled with zeros \While{$T$ is not standard}
\State $c \gets$ the $\lhd$-maximal cell such that $T^{\lhd c}$ is standard
\State $(T', c') \gets$ \Call{MForwardSlide}{$T^{\lhd c}, c$}
\For {$b \in D$}
\Comment Replace $T^{\unlhd c}$ by $T'$, except that $T(c') \gets$ the old $T(c)$
\State $T(b) \gets
\begin{cases}
T(b), & \text{if } b \rhd c; \\
T'(b), & \text{if } b \unlhd c \text{ and } b \ne c'; \\
T(c), & \text{if } b = c'.
\end{cases}$
\EndFor
\State Let $c = (i_0, j_0)$ and $c' = (i_1, j_1)$
\Comment Necessarily $i_0 \le i_1$ and $j_0 \le j_1$
\For{$i \gets i_0$ \textbf{to} $i_1 - 1$}
\State $P(i, j_0) \gets P(i+1, j_0) - 1$
\EndFor
\State $P(i_1, j_0) \gets j_1 - j_0$
\EndWhile \State \textbf{return} $(T, P)$ \Comment $T$ is now a
SYT \EndProcedure
\end{algorithmic}
\medskip
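A Python sketch of the procedure as we read it ($0$-based cells;
the modified forward slide lets the chosen entry sink by
repeatedly swapping with the smaller of its south and east
neighbours, which stays inside the region $\unlhd c$). It can be
checked against the example that follows.

```python
def nps(R):
    """Novelli--Pak--Stoyanovskii: sort the Young tableau R (a list of
    rows of an ordinary shape) into a SYT T, recording each slide in
    the pointer tableau P.  Returns (T, P)."""
    T = [list(row) for row in R]
    P = [[0] * len(row) for row in R]
    # cells in increasing linear order: rightmost column first, bottom up
    order = [(i, j) for j in range(len(T[0]) - 1, -1, -1)
             for i in range(len(T) - 1, -1, -1) if j < len(T[i])]
    for (i0, j0) in order:
        i, j = i0, j0
        while True:                     # modified forward slide from (i0, j0)
            nbrs = []
            if i + 1 < len(T) and j < len(T[i + 1]):
                nbrs.append((T[i + 1][j], i + 1, j))   # south neighbour
            if j + 1 < len(T[i]):
                nbrs.append((T[i][j + 1], i, j + 1))   # east neighbour
            nbrs = [nb for nb in nbrs if nb[0] < T[i][j]]
            if not nbrs:
                break
            _, ni, nj = min(nbrs)       # swap with the smaller neighbour
            T[i][j], T[ni][nj] = T[ni][nj], T[i][j]
            i, j = ni, nj
        i1, j1 = i, j                   # the slide ended at c' = (i1, j1)
        for r in range(i0, i1):         # update the pointers in column j0
            P[r][j0] = P[r + 1][j0] - 1
        P[i1][j0] = j1 - j0
    return T, P
```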
\begin{example}
For
\[
R = \, \ytableaushort{6 2, 4 3, 5 1} \quad ,
\]
here is the sequence of pairs $(T, P)$ produced during the
computation of $\textsc{NPS}(R)$ (with $c$ in boldface):
\[
\ytableaushort{6 2, 4 {\bnum{3}}, 5 1} \quad \ytableaushort{0 0, 0
0, 0 0} \quad \to \quad \ytableaushort{6 {\bnum{2}}, 4 1, 5 3}
\quad \ytableaushort{0 0, 0 \mone, 0 0} \quad \to \quad
\ytableaushort{6 1, 4 2, {\bnum{5}} 3} \quad \ytableaushort{0
\mtwo, 0 0, 0 0} \quad \to \quad
\]
\[
\quad \to \quad \ytableaushort{6 1, {\bnum{4}} 2, 3 5} \quad
\ytableaushort{0 \mtwo, 0 0, 1 0} \quad \to \quad
\ytableaushort{{\bnum{6}} 1, 2 4, 3 5} \quad \ytableaushort{0
\mtwo, 1 0, 1 0} \quad \to \quad \ytableaushort{1 4, 2 5, 3 6}
\quad \ytableaushort{0 \mtwo, 0 0, 1 0} \quad .
\]
\end{example}
\subsection{Partial difference operators}\label{AR_s:DZ}
MacMahon~\cite{MacMahon_book} originally used partial
difference equations, also known as recurrence relations, to solve
various enumeration problems -- among them the enumeration of
ballot sequences, or equivalently SYT of an ordinary shape (see
Subsection~\ref{AR_s:lattice_paths}).
Zeilberger~\cite{Zeilberger_DM1980} improved on MacMahon's proof
by extending the domain of definition of the enumerating
functions, thus simplifying the boundary conditions: In PDE
terminology, a Neumann boundary condition (zero normal
derivatives) was replaced by a Dirichlet boundary condition (zero
function values). He also made explicit use of the algebra of
partial difference operators; we shall present here a variant of
his approach.
Consider, for example, the two dimensional ballot problem --
finding the number $F(m_1, m_2)$ of lattice paths from $(0, 0)$ to
$(m_1, m_2)$ which stay in the region $m_1 \ge m_2 \ge 0$.
MacMahon~\cite[p.\ 127]{MacMahon_book} set up the partial
difference equation
\[
F(m_1, m_2) = F(m_1 - 1, m_2) + F(m_1, m_2 - 1) \qquad (m_1 > m_2
> 0)
\]
with the boundary conditions
\[
F(m_1, m_2) = F(m_1, m_2 - 1) \qquad (m_1 = m_2 > 0)
\]
and
\[
F(m_1, 0) = 1 \qquad (m_1 \ge 0).
\]
By extending $F$ to the region $m_1 \ge m_2 - 1$, the recursion
can be required to hold for almost all $m_1 \ge m_2 \ge 0$:
\[
F(m_1, m_2) = F(m_1 - 1, m_2) + F(m_1, m_2 - 1) \qquad (m_1 \ge
m_2 \ge 0,\, (m_1, m_2) \ne (0,0))
\]
with
\[
F(m_1, m_2) = 0 \qquad(m_1 = m_2 - 1 \hbox{\ \ or\ \ } m_2 = -1)
\]
and
\[
F(0,0) = 1.
\]
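The extended recursion with Dirichlet boundary conditions
transcribes directly into a few lines of Python; the closed form
we check it against, the classical ballot number
$\frac{m_1 - m_2 + 1}{m_1 + 1}\binom{m_1 + m_2}{m_2}$, is a known
fact rather than something proved at this point of the text.

```python
from functools import lru_cache
from math import comb

@lru_cache(None)
def F(m1, m2):
    """MacMahon's two-dimensional ballot function, computed from the
    extended recursion with zero (Dirichlet) boundary values."""
    if m2 == -1 or m1 == m2 - 1:        # boundary: zero function values
        return 0
    if (m1, m2) == (0, 0):
        return 1
    return F(m1 - 1, m2) + F(m1, m2 - 1)
```

For instance, $F(n, n)$ is the $n$-th Catalan number.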
In general,
consider functions $f : \bbz^n \to \bbc$
and define the fundamental {\dem shift operators} $X_1, \ldots,
X_n$ by
\[
(X_i f)(m_1, \ldots, m_n) := f(m_1, \ldots, m_i + 1, \ldots, m_n)
\qquad (1 \le i \le n).
\]
For $\alpha \in \bbz^n$ write $X^{\alpha} = X_1^{\alpha_1} \cdots
X_n^{\alpha_n}$, so that $(X^{\alpha} f)(m) = f(m + \alpha)$. A
typical linear partial difference operator with constant
coefficients has the form
\[
P = p(X_1^{-1}, \ldots, X_n^{-1}) = \sum_{\alpha \ge 0} a_{\alpha}
X^{-\alpha},
\]
for some polynomial $p(z)$ with complex coefficients, so that
$a_\alpha \in \bbc$ for each $\alpha = (\alpha_1, \ldots,
\alpha_n) \in \bbn^n$ and $a_\alpha \ne 0$ for only finitely many
values of $\alpha$. We also assume that $p(0) = a_0 = 1$.
\begin{definition}
Define the {\dem discrete delta function} $\delta: \bbz^n \to
\bbc$ by
\[
\delta(m) = \begin{cases}
1, & \hbox{if } m = 0; \\
0, & \hbox{otherwise}.
\end{cases}
\]
A function $f :\bbz^n \to \bbc$ satisfying $Pf = \delta$ is called
a {\dem fundamental solution} corresponding to the operator $P$.
If $f$ is supported in $\bbn^n$, it is called a {\dem canonical}
fundamental solution.
\end{definition}
It is clear that each operator $P$ as above has a unique canonical
fundamental solution.
In the following theorem we consider a slightly more general type
of operators, which can be written as $X^\alpha p(X_1^{-1},
\ldots, X_n^{-1})$ for a polynomial $p$ and some $\alpha \ge 0$.
\begin{theorem}\label{AR_t:DZ_thm2}
{\rm (A variation on~\cite[Theorem 2]{Zeilberger_DM1980})} Let
$F_n = F_n(m_1, \ldots, m_n)$ be the canonical fundamental
solution corresponding to an operator $P = p(X_1^{-1}, \ldots,
X_n^{-1})$, where $p(z)$ is a symmetric polynomial with $p(0) =
1$.
Denote
\[
\Delta_n := \prod_{(i,j):\, i<j} (I - X_i X_j^{-1}).
\]
Then $G_n = \Delta_n F_n$ is the unique solution of the equation
$Pg =0$ in the region
\[
\{(m_1, \ldots, m_n) \in \bbz^n \,|\, m_1 \ge \ldots \ge m_n \ge
0\} \setminus \{(0, \ldots, 0)\}
\]
subject to the boundary conditions
\[
(\exists i)\, m_i = m_{i+1} - 1 \,\then\, g(m_1, \ldots, m_n) = 0,
\]
\[
m_n = -1 \,\then\, g(m_1, \ldots, m_n) = 0
\]
and
\[
g(0, \ldots, 0) = 1.
\]
\end{theorem}
\begin{proof}
Since each $X_i$ commutes with the operator $P$, so does
$\Delta_n$.
Since $m_1 + \ldots + m_n$ is invariant under $X_i X_j^{-1}$ and
$F_n$ is a solution of $Pf = 0$ in the
complement of the hyperplane $m_1 + \ldots + m_n = 0$, so is
$G_n$. It remains to verify that $G_n$ satisfies the prescribed
boundary conditions. Now, by definition,
\[
G_n(m) = (I - X_1 X_2^{-1}) A_{1,2} F_n(m)
\]
where the operator $A_{1,2}$ is symmetric with respect to $X_1$,
$X_2$. Since $F_n(m)$ is a symmetric function, we can write
\[
G_n(m) = (I - X_1 X_2^{-1}) H(m)
\]
where $H$ is symmetric with respect to $m_1$ and $m_2$.
Suppressing the dependence on $m_3, \ldots, m_n$,
\[
G_n(m_1, m_2) = (I - X_1 X_2^{-1}) H(m_1, m_2) = H(m_1, m_2) -
H(m_1 + 1, m_2 - 1) = 0
\]
whenever $m_1 = m_2 - 1$, by the symmetry of $H$. Similarly, for
$i = 1, \ldots, n-1$, $G_n(m) = 0$ on $m_i = m_{i+1} - 1$. On $m_n
= -1$,
\[
G_n(m_1, \ldots, m_{n-1}, -1) = \prod_{(i,j): 1 \le i < j \le
n-1} (I - X_i X_j^{-1}) \prod_{1 \le i \le n-1} (I - X_i X_n^{-1})
F_n(m_1, \ldots, m_{n-1}, -1).
\]
Since $F_n(m) = 0$ for $m_n < 0$,
\[
G_n(m_1, \ldots, m_{n-1}, -1) = 0.
\]
Finally, $F_n(m_1, \ldots, m_n) = 0$ on all of the hyperplane $m_1
+ \ldots + m_n = 0$ except the origin $0 = (0, \ldots, 0)$.
Therefore
\[
G_n(0) = \Delta_n F_n(0) = F_n(0) = 1.
\]
\end{proof}
For every function $f: \bbz^n \to \bbc$ whose support is contained
in a translate of $\bbn^n$ (i.e., for which there exists $N \in
\bbz$ with $f(m_1, \ldots, m_n) = 0$ whenever $m_i < N$ for some
$i$) there is a corresponding generating function (formal
Laurent series)
\[
\gf(f) := \sum_m f(m) z^m \in \bbc((z_1, \ldots, z_n)).
\]
Let $p(z_1, \ldots, z_n)$ be a polynomial with complex
coefficients and $p(0) = 1$, and let $P = p(X_1^{-1}, \ldots,
X_n^{-1})$ be the corresponding operator. Since $\gf(X^{-\alpha}
f) = z^{\alpha} \gf(f)$ we have
\[
\gf(P f) = p(z_1, \ldots, z_n) \cdot \gf(f),
\]
and therefore $P f = \delta$ implies $\gf(f) = 1/p(z_1, \ldots,
z_n)$.
\begin{definition}{\rm (MacMahon~\cite{MacMahon_book})}
Let $A \subseteq \bbz^n$ and $f : A \to \bbc$. A formal Laurent
series $\sum_m a(m) z^m$ is a {\dem redundant generating function
for $f$ on $A$} if $f(m) = a(m)$ for all $m \in A$.
\end{definition}
\begin{theorem}\label{AR_t:DZ_thm5}
{\rm (MacMahon~\cite[p.\ 133]{MacMahon_book})} Let $g(m)$ be the
number of lattice paths from $0$ to $m$, where travel is
restricted to the region
\[
A = \{(m_1, \ldots, m_n) \in \bbz^n \,|\, m_1 \ge m_2 \ge \ldots
\ge m_n \ge 0\}.
\]
Then
\[
\prod_{(i,j):\, i < j} \left( 1 - \frac{z_j}{z_i} \right) \cdot
\frac{1}{1 - z_1 - \ldots - z_n}
\]
is a redundant generating function for $g$ on $A$ and therefore
\[
g(m) = \frac{(m_1 + \ldots + m_n)!}{(m_1 + n - 1)! \ldots m_n!}
\prod_{(i,j):\, i <j} (m_i - m_j + j - i).
\]
\end{theorem}
This gives, of course, the ordinary product formula
(Theorem~\ref{t.AR_num_ordinary_prod}).
\begin{proof}
Apply Theorem~\ref{AR_t:DZ_thm2} with $P = I - X_1^{-1} - \ldots -
X_n^{-1}$. The canonical fundamental solution of $P f = \delta$ is
easily seen to be the multinomial coefficient
\[
F_n(m_1, \ldots, m_n) = \begin{cases}
\frac{(m_1 + \ldots + m_n)!}{m_1! \cdots m_n!}, & \hbox{if } m_i \ge 0\, (\forall i); \\
0, & \hbox{otherwise,}
\end{cases}
\]
with generating function $(1 - z_1 - \ldots - z_n)^{-1}$.
The number of lattice paths in the statement of
Theorem~\ref{AR_t:DZ_thm5} clearly satisfies the conditions on $g$
in Theorem~\ref{AR_t:DZ_thm2}, and therefore
\[
g = G_n = \Delta_n F_n \qquad (\hbox{on } A).
\]
This implies the claimed redundant generating function for $g$ on
$A$.
To get an explicit expression for $g(m)$ note that $(m_1 + \ldots
+ m_n)!$ is invariant under $X_i X_j^{-1}$, so that
\begin{eqnarray*}
g(m_1, \ldots, m_n) &=& \prod_{(i,j):\, i<j} (I - X_i X_j^{-1})
\left[ \frac{(m_1 + \ldots + m_n)!}{m_1! \cdots m_n!} \right] \\
&=& (m_1 + \ldots + m_n)! \cdot \prod_{(i,j):\, i<j} (I - X_i
X_j^{-1}) \left[ \frac{1}{m_1! \cdots m_n!} \right].
\end{eqnarray*}
Consider
\begin{eqnarray*}
H(m_1, \ldots, m_n)
&:=& \prod_{(i,j):\, i<j} (I - X_i X_j^{-1}) \left[ \frac{1}{m_1! \cdots m_n!} \right] \\
&=& \prod_{(i,j):\, i<j} (X_i^{-1} - X_j^{-1}) \cdot \prod_{i}
X_i^{n-i}
\left[ \frac{1}{m_1! \cdots m_n!} \right] \\
&=& \prod_{(i,j):\, i<j} (X_i^{-1} - X_j^{-1}) \left[
\frac{1}{\ell_1! \ell_2! \cdots \ell_n!} \right],
\end{eqnarray*}
where $\ell_i := m_i + n - i$ ($1 \le i \le n$). Clearly $H$ is an
alternating (anti-symmetric) function of $\ell_1, \ldots, \ell_n$,
which means that
\[
H(m_1, \ldots, m_n) = \frac{Q(\ell_1, \ell_2, \ldots,
\ell_n)}{\ell_1! \ell_2! \cdots \ell_n!},
\]
where $Q$ is an alternating polynomial of degree $n - 1$ in each
of its variables. $g$, and therefore also $H$ and $Q$, vanish on
each of the hyperplanes $m_i = m_{i+1} - 1$, namely $\ell_i =
\ell_{i+1}$ ($1 \le i \le n-1$). Hence $\ell_i - \ell_{i+1}$, and
by symmetry also $\ell_i - \ell_j$ for each $i \ne j$, divide $Q$.
Hence
\[
Q(\ell) = c \prod_{(i,j):\, i<j} (\ell_i - \ell_j)
\]
and
\[
g(m) = c \frac{(m_1 + \ldots + m_n)!}{(m_1 + n -1)! \cdots m_n!}
\prod_{(i,j):\, i<j} (m_i - m_j - i + j)
\]
for a suitable constant $c$, which is easily found to be $1$ by
evaluating $g(0)$.
\end{proof}
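Both sides of Theorem~\ref{AR_t:DZ_thm5} can be compared
numerically: a direct recursive count of the restricted lattice
paths against the product formula. A Python sketch (the encoding
is ours):

```python
from functools import lru_cache
from math import factorial

def paths(m):
    """Lattice paths from 0 to m, with travel restricted to the region
    m_1 >= m_2 >= ... >= m_n >= 0 (computed by direct recursion)."""
    n = len(m)
    @lru_cache(None)
    def g(t):
        if t[-1] < 0 or any(t[i] < t[i + 1] for i in range(n - 1)):
            return 0
        if all(x == 0 for x in t):
            return 1
        return sum(g(t[:i] + (t[i] - 1,) + t[i + 1:]) for i in range(n))
    return g(tuple(m))

def macmahon(m):
    """The explicit product formula for g(m)."""
    n = len(m)
    den = 1
    for i in range(n):                  # (m_i + n - i)!, here with 0-based i
        den *= factorial(m[i] + n - 1 - i)
    num = factorial(sum(m))
    for i in range(n):
        for j in range(i + 1, n):
            num *= m[i] - m[j] + j - i
    return num // den
```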
\begin{remark}\label{AR_r:DZ_coeff}
Theorem~\ref{AR_t:DZ_thm5} gives an expression of the number of
SYT of ordinary shape $\la = (\la_1, \ldots, \la_t)$ as the
coefficient of $z^\ell$ (where $\ell_i = \la_i + t - i$) in the
power series
\[
\prod_{(i,j):\, i < j} \left( z_i - z_j \right) \cdot \frac{1}{1 -
z_1 - \ldots - z_t},
\]
or as the constant term in the Laurent series
\[
\prod_i z_i^{-\ell_i} \prod_{(i,j):\, i < j} \left( z_i - z_j
\right) \cdot \frac{1}{1 - z_1 - \ldots - z_t}.
\]
\end{remark}
\section{Formulas for skew strips}\label{AR_s:formulas_skew_strips}
We focus our attention now on two important families of skew
shapes, which are of special interest: Zigzag shapes and skew
strips of constant width.
\subsection{Zigzag shapes}\label{AR_s:zigzag}
Recall (from Subsection~\ref{AR_s:zigzag_sum}) that a {\dem zigzag
shape} is a path-connected skew shape which does not contain a
$2\times 2$ square.
\begin{definition}\label{AR_d:zigzag_S}
For any subset $S \subseteq [n-1] := \{1, \ldots, n-1 \}$ define a
zigzag shape $D = \zigzag_n(S)$, with cells labeled $1, \ldots,
n$, as follows: Start with an initial cell labeled $1$. For each
$1 \le i \le n-1$, given the cell labeled $i$, add an adjacent
cell labeled $i+1$ above cell $i$ if $i \in S$, and to the right
of cell $i$ otherwise.
\end{definition}
\begin{example}\label{AR_ex:zigzag1}
\[
n = 9,\, S = \{1,3,5,6\} \quad \longrightarrow \quad
\ytableaushort{\none\none789, \none\none6, \none45, 23, 1} \quad
\longrightarrow \quad \zigzag_9(S) =
\ydiagram{2 + 3, 2 + 1, 1+ 2, 2, 1}
\]
\end{example}
This defines a bijection between the set of all subsets of $[n-1]$
and the set of all zigzag shapes of size $n$ (up to translation).
The set $S$ consists of the labels of all the cells in the shape
$\zigzag_n(S)$ such that the cell directly above is also in the
shape. These are exactly the last (rightmost) cells in all the
rows except the top row.
Recording the lengths of all the rows in the zigzag shape, from
bottom up, it follows that zigzag shapes of size $n$ are also in
bijection with all the {\dem compositions} of $n$. In fact, given
a subset $S = \{ s_1, \ldots, s_k \} \subseteq [n-1]$ (with $s_1 <
\ldots < s_k$), the composition corresponding to $\zigzag_n(S)$ is
simply $(s_1, s_2 - s_1, \ldots, s_k - s_{k-1}, n - s_k)$.
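In code, the two directions of this bijection are one-liners (a
small Python sketch, with our own function names):

```python
def composition_of(n, S):
    """Row lengths of zigzag_n(S), read from the bottom row up."""
    s = [0] + sorted(S) + [n]
    return tuple(b - a for a, b in zip(s, s[1:]))

def subset_of(comp):
    """Inverse direction: the proper partial sums of the composition."""
    S, total = set(), 0
    for part in comp[:-1]:
        total += part
        S.add(total)
    return S
```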
\begin{theorem}\label{AR.t.zigzag}
{\rm \cite[Vol.\ 1, p.\ 190]{MacMahon_book}\cite[Example
2.2.4]{Stanley_EC1}} Let $S=\{s_1,\dots,s_k\}\subseteq [n-1]$
$(s_1 < \ldots < s_k)$ and set $s_0:=0$ and $s_{k+1}:=n$. Then
\[
f^{\zigzag_n(S)} = n! \cdot \det \left[\frac{1}{(s_{j+1} - s_i)!}
\right]_{i, j = 0}^{k} = \det \left[{n - s_i \choose s_{j+1} -
s_i} \right]_{i, j = 0}^{k} = \det \left[{s_{j+1} \choose s_{j+1}
- s_i} \right]_{i, j = 0}^{k}.
\]
(with the convention $1/r! := 0$ for $r < 0$).
\end{theorem}
\medskip
For example, the zigzag shape
\[
[\la/\mu] =
\ydiagram{4 + 3, 1 + 4, 2}
\]
corresponds to $n = 9$ and $S = \{2,6\}$, and therefore
{\setstretch{1.5}
\[
f^{\la/\mu} =
9! \cdot \det \begin{bmatrix}
\frac{1}{2!} & \frac{1}{6!} & \frac{1}{9!} \\
1 & \frac{1}{4!} & \frac{1}{7!} \\
0 & 1 & \frac{1}{3!} \\
\end{bmatrix}
=
\det \begin{bmatrix}
{9 \choose 2} & {9 \choose 6} & {9 \choose 9} \\
{7 \choose 0} & {7 \choose 4} & {7 \choose 7} \\
0 & {3 \choose 0} & {3 \choose 3} \\
\end{bmatrix}
=
\det \begin{bmatrix}
{2 \choose 2} & {6 \choose 6} & {9 \choose 9} \\
{2 \choose 0} & {6 \choose 4} & {9 \choose 7} \\
0 & {6 \choose 0} & {9 \choose 3} \\
\end{bmatrix}
\]
}
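The first determinant is easy to evaluate exactly with rational
arithmetic. As a cross-check we count permutations of $[n]$ with
descent set exactly $S$ by inclusion--exclusion, using the
correspondence between SYT of $\zigzag_n(S)$ and such permutations
(read the entries along the labelled cells; cf.\
Observation~\ref{AR_t:up_down} below). A Python sketch, with the
convention $1/r! = 0$ for $r < 0$:

```python
from fractions import Fraction
from itertools import combinations
from math import factorial

def det(M):
    """Exact determinant by Laplace expansion (small matrices only)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def f_zigzag(n, S):
    """n! * det[ 1/(s_{j+1} - s_i)! ], with 1/r! = 0 for r < 0."""
    s = [0] + sorted(S) + [n]
    k = len(s) - 2
    M = [[Fraction(1, factorial(s[j + 1] - s[i])) if s[j + 1] >= s[i]
          else Fraction(0) for j in range(k + 1)] for i in range(k + 1)]
    return factorial(n) * det(M)

def descent_count(n, S):
    """Permutations of [n] with descent set exactly S, by
    inclusion--exclusion over multinomial coefficients."""
    S = sorted(S)
    total = 0
    for r in range(len(S) + 1):
        for Tset in combinations(S, r):
            pts = [0] + list(Tset) + [n]
            mult = factorial(n)
            for a, b in zip(pts, pts[1:]):
                mult //= factorial(b - a)
            total += (-1) ** (len(S) - r) * mult
    return total
```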
Theorem~\ref{AR.t.zigzag} is a special case of the determinantal
formula for skew shapes (Theorem~\ref{t.AR_num_skew_det}). We
shall now consider a specific family of examples.
\begin{example}\label{AR_ex:Andre}
Consider, for each nonnegative integer $n$, one special zigzag
shape $D_n$. For $n$ even it has all rows of length $2$:
\[
\ydiagram{3 + 2, 2 + 2, 1 + 2, 2}
\]
and for $n$ odd it has a top row of length $1$ and all others of
length $2$:
\[
\ydiagram{4 + 1, 3 + 2, 2 + 2, 1 + 2, 2}
\]
Clearly, by Definition~\ref{AR_d:zigzag_S}, $D_n = \zigzag_n(S)$
for $S = \{ 2, 4, 6, \ldots \} \subseteq [n-1]$.
\end{example}
\begin{definition}\label{AR_d:up-down_permutations}
A permutation $\sigma \in S_n$ is {\dem up-down} (or {\dem
alternating},
or {\dem zigzag}) if
\[
\sigma(1) < \sigma(2) > \sigma(3) < \sigma(4) > \ldots \quad .
\]
\end{definition}
\begin{observation}\label{AR_t:up_down}
If we label the cells of $D_n$ as in
Definition~\ref{AR_d:zigzag_S} (see Example~\ref{AR_ex:zigzag1}),
then clearly each standard Young tableau $T: D_n \to [n]$ becomes
an up-down permutation, and vice versa. (For an extension of this
phenomenon see Proposition~\ref{t.zigzag-descent class}.)
\end{observation}
Up-down permutations were already studied by
Andr{\'e}~\cite{Andre1, Andre2} in the nineteenth century. He
showed that their number $A_n$~\cite[A000111]{oeis} satisfies
\begin{proposition}\label{AR_t:andre}
\[
\sum_{n = 0}^{\infty} \frac{A_n x^n}{n!} = \sec x + \tan x.
\]
\end{proposition}
They are therefore directly related to the {\dem secant} (or {\dem
zig}, or {\dem Euler}) {\dem numbers}
$E_{n}$~\cite[A000364]{oeis}, the {\dem tangent} (or {\dem zag})
{\dem numbers} $T_n$~\cite[A000182]{oeis} and the {\dem Bernoulli
numbers} $B_n$~\cite[A000367 and A002445]{oeis} by
\[
A_{2n} = (-1)^n E_{2n} \qquad (n \ge 0)
\]
and
\[
A_{2n-1} = T_n = \frac{(-1)^{n-1} 2^{2n} (2^{2n} - 1)}{2n} B_{2n}
\qquad (n \ge 1).
\]
Note that there is an alternative convention for denoting Euler
numbers, by which $E_n = A_n$ for all $n$.
\begin{proposition}\label{AR_t:zigzag_recur}
\[
2A_{n+1}=\sum_{k=0}^{n} {n \choose k} A_k A_{n-k} \qquad (n \ge 1)
\]
with $A_0 = A_1 = 1$.
\end{proposition}
\begin{proof}
In a SYT of the required shape and size $n+1$, the cell containing
$n+1$ must be the last in its row and column. Removing this cell
leaves at most two path-connected components, with the
western/southern one necessarily of odd size (for $n \ge 1$). It
follows that
\[
A_{n+1}=\sum_{k=0 \atop k \hbox{\scriptsize\ odd}}^{n} {n \choose
k} A_k A_{n-k} \qquad (n \ge 1).
\]
Applying a similar argument to the cell containing $1$ gives
\[
A_{n+1}=\sum_{k=0 \atop k \hbox{\scriptsize\ even}}^{n} {n \choose
k} A_k A_{n-k} \qquad (n \ge 0),
\]
and adding the two formulas gives the required recursion.
\end{proof}
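The recursion gives a fast way to tabulate the $A_n$; a few lines
of Python:

```python
from math import comb

def andre_numbers(N):
    """A_0, ..., A_N from the recursion 2 A_{n+1} = sum_k C(n,k) A_k A_{n-k}."""
    A = [1, 1]                          # A_0 = A_1 = 1
    for n in range(1, N):
        A.append(sum(comb(n, k) * A[k] * A[n - k] for k in range(n + 1)) // 2)
    return A[:N + 1]
```

The values $1, 1, 1, 2, 5, 16, 61, 272, 1385, 7936, \ldots$ agree
with~\cite[A000111]{oeis}.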
Indeed, the recursion for $A_n$
(Proposition~\ref{AR_t:zigzag_recur}) can be seen to be equivalent
to the generating function (Proposition~\ref{AR_t:andre}) since
$f(x) = \sec x + \tan x$ satisfies the differential equation
\[
2 f'(x) = 1 + f(x)^2
\]
with $f(0) = 1$.
Theorem~\ref{AR.t.zigzag} thus gives the determinantal
formulas
\[
(-1)^n E_{2n} = (2n)! \cdot \det \left( \begin{array}{cccccccc}
\frac{1}{2!} & \frac{1}{4!} & \frac{1}{6!} & . & . & \frac{1}{(2n-4)!} & \frac{1}{(2n-2)!} & \frac{1}{(2n)!} \\
1 & \frac{1}{2!} & \frac{1}{4!} & . & . & . & \frac{1}{(2n-4)!} & \frac{1}{(2n-2)!} \\
0& 1 & \frac{1}{2!}& . & . & . & . & \frac{1}{(2n-4)!} \\
\vdots & \vdots &\vdots & . & . & . & \vdots & \vdots \\
0 & 0& 0& . & . & . & 1 & \frac{1}{2!}
\end{array}\right)
\]
and
\[
T_{n} = (2n-1)! \cdot \det \left( \begin{array}{cccccccc}
\frac{1}{2!} & \frac{1}{4!} & \frac{1}{6!} & . & . & \frac{1}{(2n-4)!} & \frac{1}{(2n-2)!} & \frac{1}{(2n-1)!} \\
1 & \frac{1}{2!} & \frac{1}{4!} & . & . & . & \frac{1}{(2n-4)!} & \frac{1}{(2n-3)!} \\
0& 1 & \frac{1}{2!}& . & . & . & . & \frac{1}{(2n-5)!} \\
\vdots & \vdots &\vdots & . & . & . & \vdots & \vdots \\
0 & 0& 0& . & . & . & 1 & \frac{1}{1!}
\end{array}\right)
\]
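These determinants are the $S = \{2, 4, \ldots\}$ instances of
Theorem~\ref{AR.t.zigzag} and can be evaluated exactly with
rational arithmetic; a Python sketch (again with the convention
$1/r! = 0$ for $r < 0$) recovers $A_n$ for both parities:

```python
from fractions import Fraction
from math import factorial

def det(M):
    """Exact determinant by Laplace expansion (small matrices only)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def alt_count(n):
    """A_n as the zigzag determinant n! * det[1/(s_{j+1}-s_i)!] for
    S = {2, 4, ...}, i.e. the number of up-down permutations of [n]."""
    s = [0] + list(range(2, n, 2)) + [n]
    size = len(s) - 1
    M = [[Fraction(1, factorial(s[j + 1] - s[i])) if s[j + 1] >= s[i]
          else Fraction(0) for j in range(size)] for i in range(size)]
    return factorial(n) * det(M)
```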
\subsection{Skew strips of constant width}
The {\dem basic skew strip of width $m$ and height $n$} is the
skew diagram
\[
D_{m,n} = [(n+m-1, n+m-2, \ldots, m+1, m)/(n-1, n-2, \ldots, 1,
0)].
\]
It has $n$ rows, of length $m$ each, with each row shifted one
cell to the left with respect to the row just above it. For
example, $D_{2,n}$ is $D_{2n}$ from Example~\ref{AR_ex:Andre}
above while
\[
D_{3,5} =
\ydiagram{4 + 3, 3 + 3, 2 + 3, 1 + 3, 3}
\]
The {\dem general skew strip of width $m$ and height $n$} ({\dem
$m$-strip}, for short), $D_{m,n,\la,\mu}$, has arbitrary
partitions $\la$ and $\mu$, each of height at most $k := \lfloor
m/2 \rfloor$, as ``head'' (northeastern tip) and ``tail''
(southwestern tip), respectively, instead of the basic partitions
$\la = \mu = (k, k-1, \ldots, 1)$. For example,
\[
D_{6,7,(4,2,1),(3,3,1)} =
\ytableausetup{baseline} \ydiagram{6 + 7, 5 + 6, 4 + 6, 3 + 6, 2 +
6, 7, 6}
* [\bullet]{9 + 4, 9 + 2, 9 + 1, 0, 2 + 1, 3, 3}
\ytableausetup{nobaseline}
\]
where $m = 6$, $n = 7$, $k = 3$ and the head and tail have marked
cells.
The determinantal formula for skew shapes
(Theorem~\ref{t.AR_num_skew_det}) expresses $f^D$ as an explicit
determinant of order $n$, the number of rows. Baryshnikov and
Romik~\cite{B-Romik}, developing an idea of Elkies~\cite{Elkies},
gave an alternative determinant of order $k$, half the length of a
typical row. This is a considerable improvement if $m$, $k$, $\la$
and $\mu$ are fixed while $n$ is allowed to grow.
The general statement needs a bit of notation. Denote, for a
non-negative integer $n$,
\[
A'_n := \frac{A_n}{n!}, \qquad A''_n := \frac{A'_n}{2^{n+1} - 1},
\qquad A'''_n := \frac{(2^n - 1) A''_n}{2^n},
\]
where $A_n$ are Andr{\' e}'s alternating numbers (as in the
previous subsection); and let
\[
\epsilon(n) := \begin{cases}
(-1)^{n/2}, & \hbox{if } n \hbox{ is even,} \\
0, & \hbox{if } n \hbox{ is odd.}
\end{cases}
\]
Define, for nonnegative integers $N$, $p$ and $q$,
\begin{eqnarray*}
X_N^{(0)}(p, q) &:=& \sum_{i = 0}^{\lfloor p/2 \rfloor} \sum_{j =
0}^{\lfloor q/2 \rfloor} \frac{(-1)^{i + j} A'_{N + 2i + 2j +
1}}{(p - 2i)!(q - 2j)!} + \epsilon(p+1) \sum_{j = 0}^{\lfloor q/2
\rfloor}
\frac{(-1)^{j} A'_{N + p + 2j + 1}}{(q - 2j)!} \\
& & + \epsilon(q+1) \sum_{i = 0}^{\lfloor p/2 \rfloor}
\frac{(-1)^{i} A'_{N + 2i + q + 1}}{(p - 2i)!} + \epsilon(p+1)
\epsilon(q+1) A'_{N + p + q + 1}
\end{eqnarray*}
and
\begin{eqnarray*}
X_N^{(1)}(p, q) &:=& \sum_{i = 0}^{\lfloor p/2 \rfloor} \sum_{j =
0}^{\lfloor q/2 \rfloor} \frac{(-1)^{i + j} A'''_{N + 2i + 2j +
1}}{(p - 2i)!(q - 2j)!} + \epsilon(p) \sum_{j = 0}^{\lfloor q/2
\rfloor}
\frac{(-1)^{j} A''_{N + p + 2j + 1}}{(q - 2j)!} \\
& & + \epsilon(q) \sum_{i = 0}^{\lfloor p/2 \rfloor}
\frac{(-1)^{i} A''_{N + 2i + q + 1}}{(p - 2i)!} + \epsilon(p)
\epsilon(q) A'''_{N + p + q + 1} \quad .
\end{eqnarray*}
\begin{theorem}\label{AR_t:Romik_4}{\rm \cite[Theorem 4]{B-Romik}}
Let $D = D_{m,n,\la,\mu}$ be an $m$-strip with head and tail
partitions $\la = (\la_1, \ldots, \la_k)$ and $\mu = (\mu_1,
\ldots, \mu_k)$, where $k := \lfloor m/2 \rfloor$. For $1 \le i
\le k$ define $L_i := \la_i + k - i$ and $M_i := \mu_i + k - i$,
and denote
\[
m\%2 := \begin{cases}
0, & \hbox{if } m \hbox{ is even;} \\
1, & \hbox{if } m \hbox{ is odd.}
\end{cases}
\]
Then
\[
f^D = (-1)^{k \choose 2} |D|! \cdot \det \left[
X_{2n-m+1}^{(m\%2)}(L_i,M_j) \right]_{i,j=1}^{k}.
\]
\end{theorem}
Note that $X_N^{(\epsilon)}(p,q)$ are linear combinations of
$A_{N+1}, \ldots, A_{N+p+q+1}$, so that $f^D$ is expressed as a
polynomial in the $A_i$ whose complexity depends on the row length
$m$ but not on the number of rows $n$.
The impressive formal definitions of $X_N^{(0)}$ have simple
geometric interpretations:
\[
X_{2n-1}^{(0)}(p,q) = \frac{f^D}{|D|!}
\]
where
\[
D = \zigzag_{2n+p+q}(\{p+2, p+4, \ldots, p+2n-2\}) = \quad
\ydiagram{5 + 5, 4 + 2, 3 + 2, 4} * [\bullet]{7 + 3, 0, 0, 2}
\]
($p$ marked southwestern cells in a row, $2n$ unmarked cells, and
$q$ marked northeastern cells in a row), and
\[
X_{2n}^{(0)}(p,q) = \frac{f^D}{|D|!}
\]
where
\[
D = \zigzag_{2n+p+q+1}(\{p+2, p+4, \ldots, p+2n, p+2n+1, \ldots,
p+2n+q\}) = \quad
\ydiagram{6 + 1, 6 + 1, 6 + 1, 6 + 1, 5 + 2, 4 + 2, 3 + 2, 4}
* [\bullet]{6 + 1, 6 + 1, 6 + 1, 0, 0, 0, 0, 2}
\]
($p$ marked southwestern cells in a row, $2n+1$ unmarked cells,
and $q$ marked northeastern cells in a column). These are
$2$-strips, i.e. zigzag shapes.
It is possible to define $X_{N}^{(1)}(p,q)$ similarly in terms of
$3$-strips, a task left as an exercise to the
reader~\cite{B-Romik}.
Here are some interesting special cases.
\begin{corollary}\label{AR_t:Romik_3}{\rm \cite[Theorem 1]{B-Romik}}
$3$-strips:
\[
f^{D_{3,n,(),()}} = \frac{(3n-2)!\, T_n}{(2n-1)!\, 2^{2n-2}},
\]
\[
f^{D_{3,n,(1),()}} = \frac{(3n-1)!\, T_n}{(2n-1)!\, 2^{2n-1}},
\]
\[
f^{D_{3,n}} = f^{D_{3,n,(1),(1)}} = \frac{(3n)!\, (2^{2n-1} - 1)
T_n}{(2n-1)!\, 2^{2n-1} (2^{2n}-1)}.
\]
\end{corollary}
\begin{corollary}{\rm \cite[Theorem 2]{B-Romik}}
$4$-strips:
\[
f^{D_{4,n,(),()}} = (4n-2)! \left( \frac{T_{n}^2}{(2n-1)!^2} +
\frac{E_{2n-2} E_{2n}}{(2n-2)!(2n)!} \right),
\]
\[
f^{D_{4,n}} = f^{D_{4,n,(1),(1)}} = (4n)! \left(
\frac{E_{2n}^2}{(2n)!^2} - \frac{E_{2n-2}
E_{2n+2}}{(2n-2)!(2n+2)!} \right).
\]
\end{corollary}
\begin{corollary}{\rm \cite[Theorem 3]{B-Romik}}
$5$-strip:
\[
f^{D_{5,n,(),()}} = \frac{(5n-6)!\, T_{n-1}^2}{((2n-3)!)^2\, 2^{4n-6}
(2^{2n-2}-1)}.
\]
\end{corollary}
\begin{proof}[Proof of Theorem~\ref{AR_t:Romik_4} (sketch)]
The proof uses transfer operators, following Elkies~\cite{Elkies}.
Elkies considered, essentially, the zigzag shapes ($2$-strips)
$D_n$ from Example~\ref{AR_ex:Andre}, whose SYT correspond to
alternating (up-down) permutations. Recall from
Subsection~\ref{AR_s:order_polytope} the definition of the order
polytope $P(D_n)$, whose volume is, by
Observation~\ref{AR_t:vol_order_polytope},
\[
\vol P(D_n) = \frac{f^{D_n}}{n!}.
\]
This polytope can be written as
\[
P(D_n) = \{ (x_1, \ldots, x_n) \in [0,1]^n \,:\, x_1 \le x_2 \ge
x_3 \le x_4 \ge \ldots \},
\]
and therefore its volume can also be computed by an iterated
integral:
\[
\vol P(D_n) = \int_0^1 dx_1 \int_{x_1}^1 dx_2 \int_0^{x_2} dx_3
\int_{x_3}^1 dx_4 \cdots .
\]
Some manipulations now lead to the expression
\[
\vol P(D_n) = \langle T^{n-1}({\bnum{1}}), {\bnum{1}} \rangle
\]
where ${\bnum{1}} \in L^2[0,1]$ is the function with constant
value $1$, $\langle \cdot, \cdot \rangle$ is the standard inner
product on $L^2[0,1]$, and $T : L^2[0,1] \to L^2[0,1]$ is the
compact self-adjoint operator defined by
\[
(Tf)(x) := \int_0^{1-x} f(y) dy \qquad (\forall f \in L^2[0,1]).
\]
The eigenvalues $\la_k$ and corresponding orthonormal
eigenfunctions $\phi_k$ of $T$ can be computed explicitly, leading
to the explicit formula
\[
\vol P(D_n) = \sum_{k} \la_k^{n-1} \langle {\bnum{1}}, \phi_k
\rangle^2 = \frac{2^{n+2}}{\pi^{n+1}} \sum_{k = - \infty}^{\infty}
\frac{1}{(4k+1)^{n+1}} \qquad (n \ge 1)
\]
which gives a corresponding expression for
\[
A_n = f^{D_n} = n! \vol P(D_n).
\]
Baryshnikov and Romik extended this treatment of a $2$-strip to
general $m$-strips, using additional ingredients. For instance,
the iterated integral for a $3$-strip
\[
\ydiagram{5 + 4, 4 + 3, 3 + 3, 2 + 3, 1 + 3, 3}
* [\bullet]{7 + 2, 0, 0, 0, 0, 1}
\]
gives
\[
\vol P(D_{3,n,\la,\mu}) = \langle (BA)^{n-1} T_{\mu}({\bnum{1}}),
T_{\la}({\bnum{1}}) \rangle
\]
where $\Omega := \{ (u,v) \in [0,1]^2 \,:\, u \le v\}$, the
transfer operators $A : L^2[0,1] \to L^2(\Omega)$ and $B :
L^2(\Omega) \to L^2[0,1]$ are defined by
\[
(Af)(u,v) := \int_u^v f(x) \,dx
\]
and
\[
(Bg)(x) := \int_0^x \int_x^1 g(u,v) \,dv \,du,
\]
and $T_\la$, $T_\mu$ are operators corresponding to the ``head''
and ``tail'' partitions $\la$ and $\mu$.
\end{proof}
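Elkies' spectral series converges quickly and can be checked numerically against a brute-force count of alternating permutations. A Python sketch (function names ours), truncating the doubly-infinite sum at $|k| \le K$:

```python
from itertools import permutations
from math import factorial, pi

def alternating_count(n):
    """A_n: brute-force count of alternating (up-down) permutations of 1..n."""
    return sum(1 for p in permutations(range(1, n + 1))
               if all((p[i] < p[i + 1]) == (i % 2 == 0)
                      for i in range(n - 1)))

def vol_series(n, K=20000):
    """Elkies' series for vol P(D_n), n >= 1, truncated at |k| <= K."""
    s = sum(1.0 / (4 * k + 1) ** (n + 1) for k in range(-K, K + 1))
    return 2 ** (n + 2) / pi ** (n + 1) * s
```

For instance, $A_3 = 2$ and $A_4 = 5$, and the series reproduces $A_n/n!$ to high accuracy already for moderate $K$.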
In a slightly different direction, Stanley~\cite{Stanley-skew}
defines $\hD_{a,b,c,n}$ to be the skew shape whose diagram has $n$
rows, with $a$ cells in the top row and $b$ cells in each other
row, and such that each row is shifted $c-1$ columns to the left
with respect to the preceding (higher) row. For example,
\[
\hD_{5,4,3,4} =
\ydiagram{6 + 5, 4 + 4, 2 + 4, 4}
\]
\begin{theorem}{\rm \cite[Corollary 2.5]{Stanley-skew}}
For $a$, $b$ and $c$ with $c \le b < 2c$,
\[
\sum_{n \ge 0} f^{\hD_{a,b,c,n+1}} \frac{x^{n+1}}{(a + nb)!} =
\frac{x \sum_{n \ge 0} \frac{(-x)^n}{(b+nc)!}}
{(b-c)! - x \sum_{n \ge 0} \frac{(-x)^n}{(a+nc)!}}.
\]
\end{theorem}
Two special cases deserve special attention: $a = b$ and $b = c$.
For $a = b$ all the rows are of the same length.
\begin{corollary}
For $a$ and $c$ with $c \le a < 2c$,
\[
1 + \sum_{n \ge 1} f^{\hD_{a,a,c,n}} \frac{x^n}{(na)!} = \left({1
- \frac{x}{(a-c)!} \sum_{n \ge 0} \frac{(-x)^n}{(a+nc)!}}
\right)^{-1}.
\]
\end{corollary}
In particular, for $a = b = 3$ and $c = 2$, $\hD_{3,3,2,n} =
D_{3,n}$ as in Corollary~\ref{AR_t:Romik_3}:
\[
\sum_{n \ge 0} f^{D_{3,n}} \frac{x^{2n}}{(3n)!} = \left( {\sum_{n
\ge 0} \frac{(-x^2)^n}{(2n+1)!}} \right)^{-1} = \frac{x}{\sin x}.
\]
This result was already known to Gessel and
Viennot~\cite{GesselViennot1989}.
For $b = c$ one obtains a zigzag shape: $\hD_{a,c,c,n+1} =
\zigzag_{a+nc}(S)$ for $S = \{c, 2c, \ldots, nc\}$.
\begin{corollary}
For any positive $a$ and $c$,
\[
\sum_{n \ge 0} f^{\zigzag_{a+nc}( \{c, 2c, \ldots, nc\})}
\frac{x^{n+1}}{(a + nc)!} =
\frac{x \sum_{n \ge 0} \frac{(-x)^n}{(c+nc)!}}
{1 - x \sum_{n \ge 0} \frac{(-x)^n}{(a+nc)!}}.
\]
\end{corollary}
\section{Truncated and other non-classical shapes}\label{AR_s:formulas_non_classical}
\begin{definition}
A diagram of {\dem truncated shape} is a line-convex diagram
obtained from a diagram of ordinary or shifted shape by deleting
cells from the NE corner (in the English notation, where row
lengths decrease from top to bottom).
\end{definition}
For example, here are diagrams of a truncated ordinary shape
\[
[(4,4,2,1)\setminus (1)] =
\ydiagram{3, 4, 2, 1}
\]
and a truncated shifted shape:
\[
[(4,3,2,1)^*\setminus (1,1)] =
\ydiagram{3, 1 + 2, 2 + 2, 3 + 1}
\]
Modules associated to truncated shapes were introduced and studied
in~\cite{James-Peel, Reiner-Shimozono}. Interest in the
enumeration of SYT of truncated shapes was
recently enhanced by a new interpretation~\cite{AR_tft2}:
The number of geodesics between distinguished pairs of antipodes
in the flip graph of inner-triangle-free triangulations is twice
the number of SYT of a corresponding truncated shifted staircase
shape. Motivated by this result, extensive computations were
carried out for the number of SYT of these and other truncated
shapes. It was found that, in some cases, these numbers are
unusually ``smooth'', i.e., all their prime factors are
relatively small. This makes it reasonable to expect a product formula.
Subsequently, such formulas were conjectured and proved for
rectangular and shifted staircase shapes truncated by a square, or
nearly a square, and for rectangular shapes truncated by a
staircase; see~\cite{AKR, Panova, Sun1, Sun2}.
\subsection{Truncated shifted staircase shape}\label{sec:staircase}
In this subsection, $\la = (\la_1, \ldots, \la_t)$ (with $\la_1 >
\ldots > \la_t > 0$) will be a strict partition,
with $g^{\la}$ denoting the number of SYT of shifted shape $\la$.
For any nonnegative integer $n$, let $\delta_n := (n, n-1, \ldots,
1)$ be the corresponding shifted staircase shape. By Schur's
formula for shifted shapes (Theorem~\ref{t.AR_num_shifted_prod}),
\begin{corollary}\label{t.shifted_staircase}
The number of SYT of shifted staircase shape $\delta_n$ is
\[
g^{\delta_n} = N! \cdot \prod_{i=0}^{n-1} \frac{i!}{(2i+1)!},
\]
where $N := |\delta_n| = {n+1 \choose 2}$.
\end{corollary}
\bigskip
The following enumeration problem was actually the original
motivation for the study of truncated shapes, because of its
combinatorial interpretation, as explained in~\cite{AR_tft2}.
\begin{theorem}\label{tr_thm1}{\rm \cite[Corollary 4.8]{AKR}\cite[Theorem 1]{Panova}}
The number of SYT of truncated shifted staircase shape
$\delta_{n}\setminus (1)$ is equal to
\[
g^{\delta_{n}}\frac{C_{n} C_{n - 2}}{2\, C_{2n - 3}},
\]
where $C_n=\frac{1}{n+1}{2n\choose n}$ is the $n$-th Catalan
number.
\end{theorem}
\begin{example}\label{AR_ex:SYT_truncated}
There are $g^{\delta_4} = 12$ SYT of shape $\delta_4$, but only
$4$ SYT of truncated shape $\delta_4\setminus (1)$:
\[
\ytableaushort{123, \none456, \none\none78, \none\none\none9}
\quad , \quad
\ytableaushort{124, \none356, \none\none78, \none\none\none9}
\quad , \quad
\ytableaushort{123, \none457, \none\none68, \none\none\none9}
\quad , \quad
\ytableaushort{124, \none357, \none\none68, \none\none\none9}
\quad .
\]
\end{example}
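Both counts in the example above, as well as the formulas of Corollary~\ref{t.shifted_staircase} and Theorem~\ref{tr_thm1}, can be reproduced mechanically by counting SYT as linear extensions of the cell poset. A Python sketch (helper names ours):

```python
from fractions import Fraction
from functools import lru_cache
from math import comb, factorial

def count_syt(cells):
    """Entries 1..N increasing along rows and down columns =
    linear extensions of the cell poset."""
    cells = frozenset(cells)

    @lru_cache(maxsize=None)
    def count(placed):
        if len(placed) == len(cells):
            return 1
        return sum(count(placed | {(i, j)}) for (i, j) in cells - placed
                   if ((i, j - 1) not in cells or (i, j - 1) in placed)
                   and ((i - 1, j) not in cells or (i - 1, j) in placed))

    return count(frozenset())

def g_staircase(n):
    """Schur's product formula for g^{delta_n}."""
    N = n * (n + 1) // 2
    val = Fraction(factorial(N))
    for i in range(n):
        val *= Fraction(factorial(i), factorial(2 * i + 1))
    return int(val)

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def truncated_staircase_formula(n):
    """g^{delta_n minus NE corner} = g^{delta_n} C_n C_{n-2} / (2 C_{2n-3})."""
    return g_staircase(n) * catalan(n) * catalan(n - 2) \
        // (2 * catalan(2 * n - 3))

def shifted_staircase_cells(n):
    # shifted coordinates: row i occupies columns i, ..., n (1-based)
    return {(i, j) for i in range(1, n + 1) for j in range(i, n + 1)}
```

For $n = 4$ this gives $g^{\delta_4} = 12$ and $4$ SYT of the truncated shape, matching the example.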
Theorem~\ref{tr_thm1} may be generalized to a truncation of a
$(k-1) \times (k-1)$ square from the NE corner of a shifted
staircase shape $\delta_{m+2k}$.
\begin{example} For $m=1$ and $k=3$, the truncated shape is
\[
[\delta_5 \setminus (2^2)] =
\ydiagram{3, 1 + 2, 2 + 3, 3 + 2, 4 + 1}
\]
\end{example}
\medskip
\begin{theorem}\label{t.stair_minus_sq}{\rm \cite[Corollary 4.8]{AKR}}
The number of SYT of truncated shifted staircase shape
$\delta_{m+2k} \setminus ((k-1)^{k-1})$ is
$$
g^{(m+k+1,\ldots,m+3,m+1,\ldots,1)} g^{(m+k+1,\ldots,m+3,m+1)}
\cdot \frac{N! M!}{(N - M - 1)!(2M + 1)!},
$$
where $N = {m+2k+1 \choose 2} - (k-1)^2$ is the size of the shape
and $M = k(2m+k+3)/2 - 1$.
\end{theorem}
\medskip
Similar results were obtained in~\cite{AKR} for truncation by
``almost squares'', namely by $\kappa=(k^{k-1},k-1)$.
\subsection{Truncated rectangular shapes}\label{sec:rectangular}
In this section, $\la = (\la_1, \ldots, \la_m)$ (with $\la_1 \ge
\ldots \ge \la_m \ge 0$) will be a partition with (at most) $m$
parts.
Two partitions which differ only in trailing zeros will be
considered equal.
For any nonnegative integers $m$ and $n$, let $(n^m) :=
(n,\ldots,n)$ ($m$ times) be the corresponding rectangular shape.
The Frobenius-Young formula (Theorem~\ref{t.AR_num_ordinary_prod})
implies the following.
\begin{observation}\label{t.rectangle}
The number of SYT of rectangular shape $(n^m)$ is
\[
f^{(n^m)} = (mn)! \cdot \frac{F_m F_n}{F_{m+n}},
\]
where
\[
F_m := \prod_{i=0}^{m-1} i!.
\]
\end{observation}
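A short computational check of Observation~\ref{t.rectangle}, also comparing the product formula with the hook length formula for rectangles (helper names ours):

```python
from math import factorial

def superfactorial(m):
    """F_m = 0! 1! ... (m-1)!"""
    out = 1
    for i in range(m):
        out *= factorial(i)
    return out

def f_rectangle(m, n):
    """f^{(n^m)} = (mn)! F_m F_n / F_{m+n}."""
    return factorial(m * n) * superfactorial(m) * superfactorial(n) \
        // superfactorial(m + n)

def f_rectangle_hooks(m, n):
    """Same number via the hook length formula: the cell in row i,
    column j (0-based) of an m x n rectangle has hook length
    (n - j) + (m - i) - 1."""
    prod = 1
    for i in range(m):
        for j in range(n):
            prod *= (n - j) + (m - i) - 1
    return factorial(m * n) // prod
```

For example, $f^{(2^2)} = 2$, $f^{(3^2)} = 5$ and $f^{(3^3)} = 42$.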
Consider truncating a $(k-1) \times (k-1)$ square from the NE
corner of a rectangular shape $((n+k-1)^{m+k-1})$.
\begin{example}
Let $n=3$, $m=2$ and $k=3$. Then
\[
[(5^4) \setminus (2^2)] =
\ydiagram{3, 3, 5, 5}
\]
\end{example}
\begin{theorem}\label{t.rect_minus_sq}{\rm \cite[Corollary 5.7]{AKR}}
The number of SYT of truncated rectangular shape
$((n+k-1)^{m+k-1}) \setminus ((k-1)^{k-1})$ is
\[
\frac{N!(mk-1)!(nk-1)!(m+n-1)!k}{(mk+nk-1)!} \cdot \frac{F_{m-1}
F_{n-1} F_{k-1}}{F_{m+n+k-1}},
\]
where $N$
is the size of the shape and $F_n$ is as in
Observation~\ref{t.rectangle}.
\end{theorem}
In particular,
\begin{corollary}\label{cor7}
The number of SYT of truncated rectangular shape $((n+1)^{m+1})
\setminus (1)$ is
\[
\frac{N!(2m-1)!(2n-1)! \cdot 2}{(2m+2n-1)!(m+n+2)} \cdot
\frac{F_{m-1} F_{n-1}}{F_{m+n+1}},
\]
where $N = (m+1)(n+1) - 1$ is the size of the shape and $F_n$ is
as in Observation~\ref{t.rectangle}.
\end{corollary}
Similar results were obtained in~\cite{AKR, Panova} for truncation
by almost squares $\kappa=(k^{k-1},k-1)$.
\medskip
Not much is known for truncation by rectangles. The following
formula was conjectured in~\cite{AKR} and proved by
Sun~\cite{Sun_conjecture} using complex integrals.
\begin{proposition}{\rm \cite{Sun_conjecture}} For $n\ge 2$
\[
f^{(n^n)\setminus (2)} =
\frac{ (n^2-2)! (3n-4)!^2 \cdot 6 }{ (6n-8)! (2n-2)! (n-2)!^2 }
\cdot \frac{F_{n-2}^2}{F_{2n-4}},
\]
where $F_n$ is as in Observation~\ref{t.rectangle}.
\end{proposition}
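For small $n$ the proposition can be confirmed by direct enumeration. A Python sketch (helper names ours):

```python
from functools import lru_cache
from math import factorial

def count_syt(cells):
    """Linear extensions of the cell poset (rows and columns increasing)."""
    cells = frozenset(cells)

    @lru_cache(maxsize=None)
    def count(placed):
        if len(placed) == len(cells):
            return 1
        return sum(count(placed | {(i, j)}) for (i, j) in cells - placed
                   if ((i, j - 1) not in cells or (i, j - 1) in placed)
                   and ((i - 1, j) not in cells or (i - 1, j) in placed))

    return count(frozenset())

def superfactorial(m):
    """F_m = 0! 1! ... (m-1)!"""
    out = 1
    for i in range(m):
        out *= factorial(i)
    return out

def sun_formula(n):
    """Sun's product formula for f^{(n^n) minus (2)}, n >= 2."""
    num = factorial(n * n - 2) * factorial(3 * n - 4) ** 2 * 6
    den = factorial(6 * n - 8) * factorial(2 * n - 2) * factorial(n - 2) ** 2
    return num * superfactorial(n - 2) ** 2 // (den * superfactorial(2 * n - 4))

def truncated_square_cells(n):
    # (n^n) with two cells removed from the NE corner of the top row
    return {(i, j) for i in range(1, n + 1)
            for j in range(1, (n - 2 if i == 1 else n) + 1)}
```

For $n = 3$ both sides give $5$: removing two cells from the top row of $(3^3)$ leaves a single cell there, and the count reduces to $f^{(3,3)} = 5$.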
The following result was proved by Snow~\cite{Snow}.
\begin{proposition}{\rm \cite{Snow}} For $n\ge 2$ and $k \ge 0$
\[
f^{(n^{k+1}) \setminus (n-2)} =
\frac{ (kn-k)! (kn+n)! }{ (kn+n-k)! }
\cdot \frac{F_k F_n}{F_{n+k}},
\]
where $F_n$ is as in Observation~\ref{t.rectangle}.
\end{proposition}
\medskip
A different method to derive product formulas, for other families
of truncated shapes, has been developed by Panova~\cite{Panova}.
Consider a rectangular shape truncated by a staircase shape.
\begin{example}
\[
[(4^5) \setminus \delta_2] =
\ydiagram{2, 3, 4, 4, 4}
\]
\end{example}
\begin{theorem}{\rm \cite[Theorem 2]{Panova}}\
Let $m\ge n\ge k$ be positive integers. The number of SYT of
truncated shape $(n^m)\setminus \delta_k$ is
\[
{N\choose m(n-k-1)}f^{(n-k-1)^m}g^{(m, m-1, \ldots, m-k)}
\frac{E(k+1,m,n-k-1)}{E(k+1,m,0)},
\]
where $N=mn - {k+1\choose 2}$ is the size of the shape and
\[
E(r,p,s)=\begin{cases} \prod\limits_{r<l<2p-r+2}
\frac{1}{(l+2s)^{r/2}} \prod\limits_{2\le l\le r}
\frac{1}{((l+2s)(2p-l+2s+2))^{\lfloor l/2\rfloor}}
, & \text{if $r$ is even}; \\
\frac{((r-1)/2+s)!}{(p-(r-1)/2 +s)!}E(r-1,p,s) , & \text{if $r$ is
odd}.
\end{cases}
\]
\end{theorem}
\subsection{Other truncated shapes}
The following elegant
result regarding {\dem shifted strips} was recently proved by Sun.
\begin{theorem}~{\rm \cite[\S 4.2]{Sun3}}\label{Sun-lozenge}
The number of SYT of truncated shifted shape with $n$ rows and $4$
cells in each row
\[
\ydiagram{4, 1 + 4, 2 + 4, 3 + 4, 4 + 4}
\]
is the $(2n-1)$-st Pell number~{\rm \cite[A000129]{oeis}}
\[
\frac{1}{2\sqrt 2} \left( (1+\sqrt 2)^{2n-1}-(1-\sqrt 2)^{2n-1}
\right).
\]
\end{theorem}
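For small $n$, Theorem~\ref{Sun-lozenge} can be confirmed by direct enumeration, reading the shape as rows $i = 1, \ldots, n$ with row $i$ occupying columns $i, \ldots, i+3$. A Python sketch (helper names ours):

```python
from functools import lru_cache

def count_syt(cells):
    """Linear extensions of the cell poset (rows and columns increasing)."""
    cells = frozenset(cells)

    @lru_cache(maxsize=None)
    def count(placed):
        if len(placed) == len(cells):
            return 1
        return sum(count(placed | {(i, j)}) for (i, j) in cells - placed
                   if ((i, j - 1) not in cells or (i, j - 1) in placed)
                   and ((i - 1, j) not in cells or (i - 1, j) in placed))

    return count(frozenset())

def pell(k):
    a, b = 0, 1          # P_0 = 0, P_1 = 1, P_k = 2 P_{k-1} + P_{k-2}
    for _ in range(k):
        a, b = b, 2 * b + a
    return a

def strip_cells(n):
    # row i (1-based) of the truncated shifted strip occupies columns i..i+3
    return {(i, j) for i in range(1, n + 1) for j in range(i, i + 4)}
```

The counts for $n = 1, 2, 3$ are $1, 5, 29$, i.e. the Pell numbers $P_1, P_3, P_5$.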
Sun applied a probabilistic approach to the computation of volumes
of order polytopes to enumerate SYT of truncated and other exotic
shapes. In~\cite{Sun2} he obtained product formulas for the number
of SYT of certain truncated skew shapes. This includes the shape
$((n+k)^{r+1}, n^{m-1})/(n-1)^r$ truncated by a rectangle or an
``almost rectangle'', the truncated shape $((n+1)^3, n^{m-2}) /
(n-2) \setminus (2^2)$, and the truncated shape $(n + 1)^2 /
n^{m-2} \setminus (2)$.
\medskip
Modules associated with non-line-convex shapes were considered
in~\cite{James-Peel}. The enumeration of SYT of such shapes is a
very recent subject of study. Special non-line-convex shapes with
one box removed at the end or middle of a row were considered
in~\cite{Sun1}. For example,
\begin{proposition}{\rm \cite[Theorem 5.2]{Sun1}}
For $m\ge 0$, the number of SYT of shape $(m + 3, 3, 3)$ with
middle box in the second row removed, is
\[
\frac{m+5}{10}{m+2\choose 2}{m+9\choose 2}.
\]
\end{proposition}
There are very few known results in this direction; problems in
this area are wide open.
\subsection{Proof approaches for truncated shapes}
Different approaches were applied to prove the above product
formulas. We will sketch one method and remark on another.
\bigskip
The {\dem pivoting approach} of~\cite{AKR} is based on a
combination of two different bijections from SYT to pairs of
smaller SYT:
\begin{itemize}
\item[(i)] Choose a pivot cell $P$ in the NE boundary of a
truncated shape $\zeta$ and subdivide the entries of a given SYT
$T$ into those that are less than the entry of $P$ and those that
are greater. \item[(ii)] Choose a letter $t$ and subdivide the
entries in a SYT $T$ into those that are less than or equal to $t$
and those that are greater than $t$.
\end{itemize}
Proofs are obtained by combining applications of the first
bijection to truncated shapes and the second to corresponding
non-truncated ones. Here is a typical example.
\medskip
\begin{proof}[Proof of Theorem~\ref{tr_thm1} (sketch)]
First, apply the second bijection to a SYT of a shifted staircase
shape.
Let $n$ and $t$ be nonnegative integers, with $t \le {n+1 \choose
2}$. Let $T$ be a SYT of shifted staircase shape $\delta_n$, let
$T_1$ be the set of all cells in $T$ with values at most $t$, and
let $T_2$ be obtained from $T \setminus T_1$ by transposing the
shape (reflecting in an anti-diagonal) and replacing each entry
$i$ by $N-i+1$, where $N = |\delta_n| = {n+1 \choose 2}$. Clearly
$T_1$ and $T_2$ are shifted SYT.
Here is an example with $n=4$ and $t=5$.
\[
\ytableaushort{1246, \none358, \none\none79, \none\none\none{10}}
\, \to \, \left(\,
\ytableaushort{124, \none35} \;,\;
\ytableaushort{\none6, \none8, 79, \none{10}} \,\right)\, \to \,
\left(\,
\ytableaushort{124, \none35} \;,\;
\ytableaushort{{10}986, \none7} \,\right)\, \to \, \left(\,
\ytableaushort{124, \none35} \;,\;
\ytableaushort{1235, \none4} \,\right).
\]
Notice that, treating strict partitions as sets,
$\delta_4=(4,3,2,1)$ is the disjoint union of $\sh(T_1)=(3,2)$ and
$\sh(T_2)=(4,1)$. This is not a coincidence.
\medskip
\noindent{\bf Claim.} {\em Treating strict partitions as sets,
$\delta_n$ is the disjoint union of the shape of $T_1$ and the
shape of $T_2$. }
\smallskip
In order to prove the claim notice that the borderline between
$T_1$ and $T \setminus T_1$ is a lattice path of length exactly
$n$, starting at the NE corner of the staircase shape $\delta_n$,
using only S and W steps, and ending at the SW boundary.
If the first step is S then the first part of $\sh(T_1)$ is $n$,
and the rest corresponds to a lattice path in $\delta_{n-1}$.
Similarly, if the first step is W then the first part of
$\sh(T_2)$ is $n$, and the rest corresponds to a lattice path in
$\delta_{n-1}$. Thus exactly one of the shapes of $T_1$ and $T_2$
has a part equal to $n$. The claim follows by induction on $n$.
\medskip
We deduce that, for any nonnegative integers $n$ and $t$ with $t
\le {n+1 \choose 2}$,
\begin{equation}\label{tr_eq2}
\sum_{\la \subseteq \delta_n \atop |\la|=t} g^{\la}g^{\la^c} =
g^{\delta_n}.
\end{equation}
Here summation is over all strict partitions $\la$ with the
prescribed restrictions, and $\la^c$ is the complement of $\la$ in
$\delta_n = \{1, \ldots, n\}$ (where strict partitions are treated
as sets). In particular, the LHS is independent of $t$.
\bigskip
Next apply the first bijection to SYT of truncated staircase shape
$\delta_n \setminus (1)$. Choose as pivot the cell $P = (2,
n-1)$, just SW of the missing NE corner. Let $T$ be a
SYT of shape $\delta_n \setminus (1)$; its pivot entry $t := T(P)$
satisfies $2n - 3 \le t \le {n \choose 2} - 2n + 2$. One
subdivides the other entries of $T$ into those that are (strictly)
less than $t$ and those that are greater than $t$. The entries
less than $t$ constitute $T_1$. To obtain $T_2$, replace each
entry $i > t$ of $T$ by $N - i + 1$, where $N$ is the total number
of entries in $T$, and suitably transpose the resulting array. It
is easy to see that both $T_1$ and $T_2$ are shifted SYT.
\begin{example}
\[
\ytableaushort{124, \none3{\bnum{5}}7, \none\none68,
\none\none\none9} \, \to \, \left(\,
\ytableaushort{124, \none3} \;,\;
\ytableaushort{\none7, 68, \none9} \,\right)\, \to \, \left(\,
\ytableaushort{124, \none3} \;,\;
\ytableaushort{987, \none6} \,\right)\, \to \, \left(\,
\ytableaushort{124, \none3} \;,\;
\ytableaushort{123, \none4} \,\right).
\]
\end{example}
Next notice that the shape of $T_1$ is $(n-1,n-3)\cup \lambda$
while the shape of $T_2$ is $(n-1,n-3)\cup \lambda^c$, where
$\lambda$ is a strict partition contained in $\delta_{n-2}$ and
$\lambda^c$ is its complement in $\delta_{n-2}$.
We deduce that
\begin{equation}\label{tr_eq1}
g^{\delta_n \setminus (1)}= \sum_t \sum_{\la \subseteq
\delta_{n-2} \atop |\la|=t} g^{(n-1, n-3) \cup \la} g^{(n-1, n-3)
\cup \la^c}.
\end{equation}
Here summation is over all strict partitions $\la$ with the
prescribed restrictions.
\bigskip
Finally, by Schur's formula (Theorem~\ref{t.AR_num_shifted_prod}),
for any strict partitions $\la$ and $\mu = (\mu_1, \ldots, \mu_k)$
with $\mu_1 > \ldots > \mu_k > m$ and $\la\cup \la^c=\delta_m$,
the following holds.
\begin{equation}\label{tr_eq3}
g^{\mu \cup \la} g^{\mu \cup \la^c} = c(\mu,|\la|,|\la^c|) \cdot
g^{\la} g^{\la^c},
\end{equation}
where
\[
c(\mu,|\la|,|\la^c|) =
\frac{g^{\mu \cup \delta_m} g^{\mu}}{g^{\delta_m}} \cdot
\frac{|\delta_m|!\,(|\mu|+|\la|)!\,(|\mu|+|\la^c|)!}
{(|\mu|+|\delta_m|)!\,|\mu|!\,|\la|!\,|\la^c|!}
\]
depends only on the sizes $|\la|$ and $|\la^c|$ and not on the
actual partitions $\la$ and $\la^c$.
\medskip
Combining Equations (\ref{tr_eq1}), (\ref{tr_eq2}) and
(\ref{tr_eq3}) together with some binomial identities completes
the proof.
\end{proof}
For a detailed proof and applications of the method to other
truncated shapes see~\cite{AKR}.
\bigskip
A different proof was presented by Panova~\cite{Panova}. Panova's
approach is sophisticated and involved
and will just be outlined. The proof relies on a bijection from
SYT of the truncated shape to semi-standard Young tableaux of skew
shapes. This bijection translates the enumeration problem to
evaluations of sums of Schur functions at certain specializations.
These evaluations are then reduced to computations of complex
integrals, which are carried out by a comparison to another
translation of the original enumerative problem to a volume of the
associated order polytope.
\section{Rim hook and domino tableaux}
\subsection{Definitions}
The following concept generalizes the notion of SYT. Recall from
Subsection~\ref{AR_s:zigzag_sum} the definition of a zigzag shape.
\begin{definition}
Let $r$ and $n$ be positive integers and let $\la \vdash rn$. An
{\dem $r$-rim hook tableau} of shape $\la$ is a filling of the
cells of the diagram $[\la]$ by the letters $1, \ldots, n$ such
that
\begin{enumerate}
\item each letter $i$ fills exactly $r$ cells, which form a zigzag
shape
called the $i$-th {\dem rim hook} (or {\dem border strip});
and \item for each $1 \le k \le n$, the union of the $i$-th rim
hooks for $1 \le i \le k$ is a diagram of ordinary shape.
\end{enumerate}
Denote by $f^\la_r$ the number of $r$-rim hook tableaux of shape
$\la \vdash rn$.
\end{definition}
The $n$-th rim hook forms a path-connected subset of the {\dem
rim} (SE boundary) of the diagram $[\la]$, and removing it leads
inductively to a similar description for the other rim hooks.
$1$-rim hook tableaux are ordinary SYT; $2$-rim hook tableaux are
also called {\dem domino tableaux}.
\begin{example}
Here is a domino tableau of shape $(5,5,4)$:
\[
\ytableaushort{11336,24556,2477}
\]
and here is a $3$-rim hook tableau of shape $(5,4,3)$:
\[
\ytableaushort{11333,1244,224}
\]
\end{example}
\medskip
\begin{definition}
An {\dem $r$-partition} of $n$ is a sequence $\la = (\la^0,
\ldots, \la^{r-1})$ of partitions of total size $|\la^0| + \ldots
+ |\la^{r-1}| = n$. The corresponding {\dem $r$-diagram}
$[\la^0,\ldots, \la^{r-1}]$ is the sequence $([\la^0], \ldots,
[\la^{r-1}])$ of ordinary diagrams. It is sometimes drawn as a
skew diagram, with $[\la^{i}]$ lying directly southwest of
$[\la^{i-1}]$ for every $1 \le i \le r-1$.
\end{definition}
\begin{example}
The $2$-diagram of shape $(\la^0, \la^1) = ((3,1), (2,2))$ is
\[
[\la^0, \la^1] = \left(\,
\ydiagram{3,1} \;,\;
\ydiagram{2,2} \,\right) = \,
\ydiagram{2+3,2+1,2,2}
\]
\end{example}
\begin{definition}
A {\dem standard Young $r$-tableau} ({\dem $r$-SYT}) $T =
(T^0,\dots, T^{r-1})$ of shape $\la = (\la^0,\dots, \la^{r-1})$
and total size $n$ is obtained by inserting the integers $1, 2,
\ldots, n$ as entries into the cells of the diagram $[\la]$ such
that the entries increase along rows and columns.
\end{definition}
\subsection{The $r$-quotient and $r$-core}
\begin{definition}
Let $\la$ be a partition, and $D = [\la]$ the corresponding
ordinary diagram. The {\dem boundary sequence} of $\la$ is the
$0/1$ sequence $\partial(\la)$ constructed as follows: Start at
the SW corner of the diagram and proceed along the edges of the SE
boundary up to the NE corner. Each horizontal (east-bound) step is
encoded by $1$, and each vertical (north-bound) step by $0$.
\end{definition}
\begin{example}\label{AR_ex:boundary_map}
\[
\la = (3, 1) \quad\to\quad
D = [\la] = \ydiagram{3, 1} \quad\to\quad
\partial(\la) = (1,0,1,1,0)
\]
\end{example}
The boundary sequence starts with $1$ and ends with $0$ -- unless
$\la$ is the empty partition, for which $\partial(\la)$ is the
empty sequence.
\begin{definition}
The {\dem extended boundary sequence} $\partial_*(\la)$ of $\la$
is the doubly-infinite sequence obtained from $\partial(\la)$ by
prepending to it the sequence $(\ldots, 0, 0)$ and appending to it
the sequence $(1, 1, \ldots)$.
\end{definition}
Geometrically, these additions represent a vertical ray and a
horizontal ray, respectively, so that the tour of the boundary of
$[\la]$ actually ``starts'' at the far south and ``ends'' at the
far east.
\begin{example}
If $\la = (3, 1)$ then $\partial_*(\la) =
(\ldots,0,0,1,0,1,1,0,1,1,\ldots)$, and if $\la$ is empty then
$\partial_*(\la) = (\ldots,0,0,1,1,\ldots)$.
\end{example}
$\partial_*$ is clearly a bijection from the set of all partitions
to the set of all doubly-infinite $0/1$ sequences with initially
only $0$-s and eventually only $1$-s.
\begin{definition}
There is a {\dem natural indexing} of any (extended) boundary
sequence, as follows: The index $k$ of an element is equal to the
number of $1$-s weakly to its left minus the number of $0$-s
strictly to its right.
\end{definition}
\begin{example}
\[
\begin{array}{l}
\text{\rm $0/1$ sequence:} \\
\\
\text{\rm Indexing:}
\end{array}
\begin{array}{rrrrrrrrrrr}
\ldots &0 &0 &1 &0 &1 &1 &0 &1 &1 &\ldots \\
&\uparrow &\uparrow &\uparrow &\uparrow &\uparrow
&\uparrow &\uparrow &\uparrow &\uparrow & \\
\ldots &-3 &-2 &-1 &\,0 &\,1 &\,2 &\,3 &\,4 &\,5 &\ldots
\end{array}
\]
\end{example}
\begin{definition} Let $\la$ be a partition, $r$ a positive integer, and $s := \partial_*(\la)$.
\begin{enumerate}
\item The {\dem $r$-quotient} $q_r(\la)$ is a sequence of $r$
partitions obtained as follows: For each $0 \le i \le r-1$ let
$s^i$ be the subsequence of $s$ corresponding to the indices which
are congruent to $i \pmod r$, and let $\la^i :=
\partial_*^{-1}(s^i)$. Then $q_r(\la) := (\la^0, \ldots,
\la^{r-1})$. \item The {\dem $r$-core} (or {\dem $r$-residue})
$c_r(\la)$ is the partition $\la' = \partial_*^{-1}(s')$, where
$s'$ is obtained from $s$ by a sequence of moves which interchange
a $1$ in position $i$ with a $0$ in position $i + r$ (for some
$i$), as long as such a move is still possible.
\end{enumerate}
\end{definition}
Denote $|q_r(\la)| := |\la^0| + \ldots + |\la^{r-1}|$.
\begin{theorem}
$|\la| = r \cdot |q_r(\la)| + |c_r(\la)|.$
\end{theorem}
\begin{example}
For $\la = (6,4,2,2,2,1)$ and $r = 2$,
\[
s = \partial_*(\la) = (\ldots, 0, 0, 0, 1, 0, 1, 0, 0, \hat{0}, 1,
1, 0, 1, 1, 0, 1, 1, 1, \ldots)
\]
with a hat over the entry indexed $0$. It follows that
\[
s^0 = (\ldots, 0, 0, 0, 0, 0, 1, 1, 0, 1, \ldots) \quad \text{and}
\quad s^1 = (\ldots, 0, 1, 1, 0, 1, 0, 1, 1, 1, \ldots).
\]
The $2$-quotient is therefore $q_2(\la) = ((2), (3, 2))$.
The $2$-core is
\[
c_2(\la) = \partial_*^{-1}(s') = \partial_*^{-1}(\ldots, 0, 0, 0,
0, 0, 0, 0, 1, \hat{0}, 1, 0, 1, 1, 1, 1, 1, 1, 1, \ldots) = (2,
1).
\]
Indeed, $|\la| = 17 = 2 \cdot 7 + 3 = 2 \cdot |q_2(\la)| +
|c_2(\la)|$.
\end{example}
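The definitions above translate directly into code. The sketch below (helper names ours) builds a finite window of the extended boundary sequence with its natural indexing, using the easily verified fact that the entry at index $k$ is $0$ exactly when $k = \la_j - j + 1$ for some $j \ge 1$ (with $\la$ padded by zeros); it reproduces both examples above.

```python
def decode(bits):
    """Partition encoded by a 0/1 window (all 0s to its left, all 1s to
    its right): each 0 contributes a part equal to the number of 1s
    preceding it; parts come out bottom row first, so reverse."""
    parts, ones = [], 0
    for b in bits:
        if b:
            ones += 1
        else:
            parts.append(ones)
    return tuple(p for p in reversed(parts) if p > 0)

def quotient_and_core(lam, r):
    lam = list(lam)
    ell, top = len(lam), (lam[0] if lam else 0)
    lo, hi = 1 - ell - 2 * r, top + 2 * r      # window of indices
    # entry at index k is 0 exactly when k = lam_j - j + 1 for some j >= 1;
    # these indices are strictly decreasing in j, so stop below the window
    zeros, j = set(), 1
    while True:
        k = (lam[j - 1] if j <= ell else 0) - j + 1
        if k < lo:
            break
        zeros.add(k)
        j += 1
    bit = lambda k: 0 if k in zeros else 1
    window = list(range(lo, hi + 1))
    quotient = tuple(decode([bit(k) for k in window if k % r == i])
                     for i in range(r))
    # r-core: the swap moves amount to pushing, within each residue
    # class mod r, all 0s to the left and all 1s to the right
    core_bits = {}
    for i in range(r):
        idx = [k for k in window if k % r == i]
        for k, b in zip(idx, sorted(bit(k) for k in idx)):
            core_bits[k] = b
    return quotient, decode([core_bits[k] for k in window])
```

For $\la = (6,4,2,2,2,1)$ and $r = 2$ this returns the $2$-quotient $((2),(3,2))$ and $2$-core $(2,1)$, and for $\la = (4,2)$ the quotient $((2),(1))$ with empty core.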
It is easy to see that, in this example, there are no $2$-rim hook
tableaux of shape $\la$.
\begin{theorem}
$f_r^{\la} \ne 0 \iff \text{the $r$-core } c_r(\la) \text{ is
empty}.$
\end{theorem}
\begin{example}
Let $\la=(4,2)$, $n=3$ and $r=2$. Then
\[
s = \partial_*(\la) = (\ldots, 0, 0, 0, 1, \hat{1}, 0, 1, 1, 0, 1,
1, 1, \ldots),
\]
so that the $2$-core
\[
s' = \partial_*^{-1}(\ldots, 0, 0, 0, 0, \hat{0}, 1, 1, 1, 1, 1,
1, 1, \ldots)
\]
is empty and the $2$-quotient is
\[
q_2(\la) =
(\partial_*^{-1}(\ldots,0,0,1,1,0,1,\ldots),\partial_*^{-1}(\ldots,0,1,0,1,1,1,\ldots))
= ((2),(1)).
\]
Of course, here $|\la| = 6 = 2 \cdot 3 = r \cdot |q_2(\la)|$.
\end{example}
In this example there are three domino tableaux of shape $(4,2)$,
and also three $2$-SYT of shape $((2),(1))$:
\[
\ytableaushort{1122,33} \quad , \quad
\ytableaushort{1133,22} \quad ,\quad
\ytableaushort{1233,12} \quad \longleftrightarrow \quad
\ytableaushort{\none12,3} \quad , \quad
\ytableaushort{\none13,2} \quad , \quad
\ytableaushort{\none23,1}
\]
This is not a coincidence, as the following theorem shows.
\begin{theorem}\label{AR_t:r_quotient}
Let $\la$ be a partition with empty $r$-core,
and let $q_r(\la)$
be its $r$-quotient. Then
\[
f_r^{\la} = f^{q_r(\la)}.
\]
\end{theorem}
Theorem~\ref{AR_t:r_quotient} may be combined with the hook length
formula for ordinary shapes (Theorem~\ref{t.AR_num_ordinary_hook})
to obtain the following.
\begin{theorem}\label{r_hook}{\rm \cite[p. 84]{JK}}
If $f^\la_r\ne 0$ then
\[
f^\la_r=\frac{(|\la|/r)!}{\prod_{c \in [\la]:\, r | h_{c}}
{h_{c}/{r}}}.
\]
\end{theorem}
\begin{proof}
Since $f^\la_r \ne 0$, the partition $\la$ has an empty $r$-core; let $q_r(\la) = (\la^0, \ldots,
\la^{r-1})$ be its $r$-quotient. By the hook length formula for
ordinary shapes (Theorem~\ref{t.AR_num_ordinary_hook}) together
with Observation~\ref{AR_t:obs1},
\[
f^{(\la^0, \ldots, \la^{r-1})} = {|\la|/r \choose
{|\la^0|,\ldots,|\la^{r-1}|}} \prod_{i=0}^{r-1} f^{\la^i}
=\frac{(|\la|/r)!}{\prod_{c \in [\la^0, \ldots, \la^{r-1}]}
h_{c}}.
\]
A careful examination of the $r$-quotient implies that it induces
a bijection from the cells in $\la$ with hook length divisible by
$r$ to all the cells in $(\la^0, \ldots, \la^{r-1})$, such that
every cell $c \in [\la^0, \ldots, \la^{r-1}]$ with hook length
$h_c$ corresponds to a cell $c' \in [\la]$ with hook length
$h_{c'}= r h_{c}$. This completes the proof.
\end{proof}
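Theorem~\ref{r_hook} is straightforward to implement; for $r = 1$ it reduces to the ordinary hook length formula, and for $\la = (4,2)$, $r = 2$ it recovers the three domino tableaux displayed above. A Python sketch (helper names ours):

```python
from math import factorial

def hook_lengths(lam):
    """Multiset of hook lengths of the ordinary shape lam."""
    lam = list(lam)
    conj = [sum(1 for p in lam if p > j) for j in range(lam[0])]
    return [lam[i] - j + conj[j] - i - 1            # arm + leg + 1
            for i in range(len(lam)) for j in range(lam[i])]

def f_r(lam, r):
    """f^lam_r = (|lam|/r)! / prod over r-divisible hooks of (h/r);
    valid when the r-core of lam is empty (so that f^lam_r != 0)."""
    denom = 1
    for h in hook_lengths(lam):
        if h % r == 0:
            denom *= h // r
    return factorial(sum(lam) // r) // denom
```

For instance, `f_r((4, 2), 2)` gives $3$ and `f_r((2, 2), 2)` gives $2$.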
Stanton and White~\cite{SW} generalized the RS correspondence to a
bijection from $r$-colored permutations (i.e., elements of the
wreath product $\bbz_r\wr \Sc_n$) to pairs of $r$-rim hook
tableaux of the same shape.
The Stanton-White bijection, together with
Theorem~\ref{AR_t:r_quotient}, implies the following
generalization of Corollary~\ref{AR_t:RSK_cor1}.
\begin{theorem}\label{sum_r}
\[
\sum\limits_{\la\vdash rn} (f_r^\la)^2= r^n n! \leqno(1)
\]
and
\[
\sum\limits_{\la\vdash rn} f_r^\la = \sum\limits_{k=0}^{\lfloor
n/2\rfloor} {n\choose 2k} (2k-1)!! r^{n-k}. \leqno(2)
\]
\end{theorem}
In particular, the total number of domino tableaux of size $2n$ is
equal to the number of involutions in the hyperoctahedral group $B_n$.
By similar arguments, the number of SYT of size $n$ and unordered
$2$-partition shape is equal to the number of involutions in the
Weyl group $D_n$.
An important inequality for the number of rim hook tableaux has
been found by Fomin and Lulov.
\begin{theorem}{\rm \cite{Fomin_Lulov}}
For any $\la \vdash rn$,
\[
f_r^\la \le r^n n! \left( \frac{f^\la}{(rn)!} \right)^{1/r}.
\]
\end{theorem}
See also~\cite{Lulov_Pak}\cite{Roichman}\cite{Larsen_Shalev}.
\section{$q$-Enumeration}\label{AR_s:q}
This section deals primarily with three classical combinatorial
parameters -- inversion number, descent number and major index.
These parameters were originally studied in the context of
permutations (and, more generally, words).
The major index, for example, was introduced by
MacMahon~\cite{MacMahon_book}. These permutation statistics were
studied extensively by Foata and Sch\"utzenberger~\cite{Foata,
FS}, Garsia and Gessel~\cite{GarsiaGessel}, and others. Only later
were these concepts defined and studied for standard Young
tableaux.
\subsection{Permutation statistics}
We start with definitions of the main permutation statistics.
\begin{definition}\
The {\dem descent set} of a permutation $\pi\in \Sc_n$ is
\[
\Des(\pi):=\{i:\ \pi(i)>\pi(i+1)\},
\]
the {\dem descent number} of $\pi$ is
\[
\des(\pi) := |\Des(\pi)|,
\]
and the {\dem major index} of $\pi$ is
\[
\maj(\pi) := \sum\limits_{i\in \Des(\pi)} i.
\]
The {\dem inversion set} of $\pi$ is
\[
\Inv(\pi):=\{(i,j):\ 1\le i<j \le n,\, \pi(i)>\pi(j)\},
\]
and the {\dem inversion number} of $\pi$ is
\[
\inv(\pi) := |\Inv(\pi)|.
\]
\end{definition}
We also use standard $q$-notation:
For a nonnegative integer $n$ and nonnegative integers $k_1,
\ldots, k_t$ summing up to $n$ denote
\[
[n]_q := \frac{q^n-1}{q-1}, \quad [n]_q! := \prod_{i=1}^n [i]_q
\quad \text{(including } [0]_q! := 1), \quad \left[n \atop {k_1,
\ldots, k_t} \right]_q := \frac{[n]_q!}{\prod_{i=1}^t [k_i]_q!}.
\]
\begin{theorem}\label{AR_t:MM}{\rm \cite{MacMahon_book}
(MacMahon's fundamental equidistribution theorem)} For every
positive integer $n$
\[
\sum\limits_{\pi\in \Sc_n} q^{\maj(\pi)}= \sum\limits_{\pi\in
\Sc_n} q^{\inv(\pi)}=[n]_q!
\]
\end{theorem}
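MacMahon's theorem is easy to confirm numerically for small $n$. The following Python sketch (illustrative only, not part of the survey; polynomials are represented as coefficient lists) computes both generating functions over $\Sc_4$ by brute force and compares them with $[4]_q!$.

```python
from itertools import permutations

def maj(p):
    return sum(i + 1 for i in range(len(p) - 1) if p[i] > p[i + 1])

def inv(p):
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])

def gen_poly(n, stat):
    """Coefficient list of sum_{pi in S_n} q^{stat(pi)}."""
    coeffs = [0] * (n * (n - 1) // 2 + 1)
    for p in permutations(range(1, n + 1)):
        coeffs[stat(p)] += 1
    return coeffs

def q_factorial(n):
    """Coefficient list of [n]_q! = prod_{i=1}^{n} (1 + q + ... + q^{i-1})."""
    poly = [1]
    for i in range(1, n + 1):
        new = [0] * (len(poly) + i - 1)
        for d, c in enumerate(poly):
            for e in range(i):
                new[d + e] += c
        poly = new
    return poly

n = 4
assert gen_poly(n, maj) == gen_poly(n, inv) == q_factorial(n)
print(q_factorial(4))  # [1, 3, 5, 6, 5, 3, 1]
```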
A bijective proof was given in the classical paper of
Foata~\cite{Foata}.
Refinements and generalizations were given by many. In particular,
Foata's bijection was applied to show that the major index and
inversion number are equidistributed over inverse descent
classes~\cite{FS}. A different approach was suggested by Garsia
and Gessel, who proved the following.
\begin{theorem}\label{GG}{\rm \cite{GarsiaGessel}}
For every subset $S=\{k_1,\dots,k_t\}\subseteq [n-1]$
\[
\sum\limits_{\pi\in \Sc_n \atop \Des(\pi^{-1})\subseteq S}
q^{\maj(\pi)} = \sum\limits_{\pi\in \Sc_n \atop
\Des(\pi^{-1})\subseteq S} q^{\inv(\pi)} =\left[n \atop
k_1,k_2-k_1,k_3-k_2,\dots,n-k_t \right]_q.
\]
\end{theorem}
The following determinantal formula~\cite[Example
2.2.5]{Stanley_EC1} follows by the inclusion-exclusion principle.
For $S=\{s_1 < \cdots < s_k\} \subseteq [n-1]$, with the
conventions $s_0:=0$ and $s_{k+1}:=n$,
\begin{eqnarray*}\label{des_class_equi}
\sum_{\pi \in \Sc_n \atop \Des(\pi^{-1}) = S} q^{\maj(\pi)} =
\sum_{\pi \in \Sc_n \atop \Des(\pi^{-1}) = S} q^{\inv(\pi)}
&=& [n]_q! \det \left( \frac{1}{[s_{j+1} - s_i]_q!} \right)_{i, j = 0}^{k} \\
&=& \det \left( \left[ n - s_i \atop s_{j+1} - s_i \right]_q
\right)_{i, j = 0}^{k} .
\end{eqnarray*}
\subsection{Statistics on tableaux}
We start with definitions of descent statistics for SYT. Let $T$
be a standard Young tableau of shape $D$ and size $n = |D|$. For
each entry $1 \le t \le n$ let $\row(T^{-1}(t))$ denote the index
of the row containing the cell $T^{-1}(t)$.
\begin{definition}\
The {\dem descent set} of $T$ is
\[
\Des(T) := \{1 \le i \le n-1\,|\, \row(T^{-1}(i)) <
\row(T^{-1}(i+1))
\},
\]
the {\dem descent number} of $T$ is
\[
\des(T) := |\Des(T)|,
\]
and the {\dem major index} of $T$ is
\[
\maj(T) := \sum\limits_{i\in \Des(T)} i.
\]
\end{definition}
\medskip
\begin{example}\label{AR_ex:des_maj_T}
Let
\[
T = \,
\ytableaushort{1258, 346, 7} \quad .
\]
Then $\Des(T)= \{2,5,6\}$, $\des(T)= 3$ and $\maj(T)=2+5+6=13$.
\end{example}
For a permutation $\pi\in \Sc_n$, recall from
Section~\ref{AR_s:JdT} the notation $T_\pi$ for the skew
(anti-diagonal) SYT which corresponds to $\pi$ and the notation
$(P_\pi,Q_\pi)$ for the pair of ordinary SYT which corresponds to
$\pi$ under the Robinson-Schensted correspondence. By definition,
\[
\Des(T_\pi)=\Des(\pi^{-1}).
\]
The jeu de taquin algorithm preserves the descent set of a SYT,
and therefore
\begin{proposition}\label{AR_t:RSK_Des}
For every permutation $\pi\in \Sc_n$,
\[
\Des(P_\pi)=\Des(\pi^{-1}) \qquad \hbox{and} \qquad
\Des(Q_\pi)=\Des(\pi).
\]
\end{proposition}
When it comes to inversion number, there is more than one possible
definition for SYT.
\begin{definition}
An {\dem inversion} in $T$ is a pair $(i, j)$ such that $1 \le i <
j \le n$ and the entry for $j$ appears strictly south and strictly
west of the entry for $i$:
\[
\row(T^{-1}(i)) < \row(T^{-1}(j)) \qquad \hbox{and} \qquad
\col(T^{-1}(i)) > \col(T^{-1}(j)).
\]
The {\dem inversion set} of $T$, $\Inv(T)$, consists of all the
inversions in $T$, and the {\dem inversion number} of $T$ is
\[
\inv(T) := |\Inv(T)|.
\]
The {\dem sign} of $T$ is
\[
\sgn(T) := (-1)^{\inv(T)}.
\]
\end{definition}
\begin{definition}
A {\dem weak inversion} in $T$ is a pair $(i, j)$ such that $1 \le
i < j \le n$ and the entry for $j$ appears strictly south and
weakly west of the entry for $i$:
\[
\row(T^{-1}(i)) < \row(T^{-1}(j)) \qquad \hbox{and} \qquad
\col(T^{-1}(i)) \ge \col(T^{-1}(j)).
\]
The {\dem weak inversion set} of $T$, $\Winv(T)$, consists of all
the weak inversions in $T$, and the {\dem weak inversion number}
of $T$ is
\[
\winv(T) := |\Winv(T)|.
\]
\end{definition}
\begin{observation}
For every standard Young tableau $T$ of ordinary shape $\lambda$,
\[
\winv(T)=\inv(T)+\sum\limits_j \binom{\lambda_j'}{2}.
\]
Here $\lambda_j'$ is the length of the $j$-th column of the
diagram $[\la]$.
\end{observation}
\begin{example}
For $T$ as in Example~\ref{AR_ex:des_maj_T},
$\Inv(T) = \{(2,3),(2,7),(4,7),(5,7),(6,7)\}$, $\inv(T) = 5$
and $\sgn(T) = -1$. The weak inversion set consists of the
inversion set together with all pairs of entries in the same
column. Thus $\winv(T) = \inv(T) + {3 \choose 2} + {2 \choose 2} +
{2 \choose 2} + {1 \choose 2} = 5+3+1+1+0 = 10$.
\end{example}
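All four statistics are immediate to compute from the definitions. The following Python sketch (illustrative only, not part of the survey) evaluates them on the tableau of Example~\ref{AR_ex:des_maj_T}.

```python
T = [[1, 2, 5, 8], [3, 4, 6], [7]]  # the tableau of the running example
# position of each entry: entry -> (row, column), 0-indexed
pos = {v: (r, c) for r, row in enumerate(T) for c, v in enumerate(row)}
n = len(pos)

Des = {i for i in range(1, n) if pos[i][0] < pos[i + 1][0]}
maj = sum(Des)
# strict inversions: j strictly south and strictly west of i
inv = sum(1 for i in pos for j in pos
          if i < j and pos[i][0] < pos[j][0] and pos[i][1] > pos[j][1])
# weak inversions: j strictly south and weakly west of i
winv = sum(1 for i in pos for j in pos
           if i < j and pos[i][0] < pos[j][0] and pos[i][1] >= pos[j][1])

print(sorted(Des), maj, inv, winv)  # [2, 5, 6] 13 5 10
```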
\bigskip
For another (more complicated) inversion number on SYT
see~\cite{Haglund}.
\subsection{Thin shapes}
We begin with refinements and $q$-analogues of results from
Section~\ref{AR_s:thin}.
\subsubsection{Hook shapes}
It is easy to verify that
\begin{observation}
For any $1 \le k \le n-1$
\[
\sum\limits_{\sh(T)= (n-k,1^k)}{\bf x}^{\Des(T)}=e_k,
\]
where
\[
e_k:=\sum\limits_{1\le i_1<i_2<\ldots<i_k< n} x_{i_1}\cdots
x_{i_k}
\]
are the elementary symmetric functions.
\end{observation}
\begin{proof}
Let $T$ be a SYT of hook shape. Then for every $1\le i< n$, $i$ is
a descent in $T$ if and only if the letter $i+1$ lies in the first
column.
\end{proof}
Thus
\begin{equation}\label{Des_hook}
\sum\limits_{\sh(T)= \text{hook of size $n$}}{\bf
x}^{\Des(T)}=\prod\limits_{i=1}^{n-1}(1+x_i).
\end{equation}
It follows that
\begin{equation}\label{des_maj_hook}
\sum\limits_{\sh(T)= \text{hook of size
$n$}}t^{\des(T)}q^{\maj(T)} =\prod\limits_{i=1}^{n-1}(1+tq^i).
\end{equation}
Notice that for a $T$ of hook shape and size $n$,
$\sh(T)=(n-k,1^k)$ if and only if $\des(T)=k$. Combining this
observation with Equation (\ref{des_maj_hook}), the $q$-binomial
theorem implies that
\[
\sum\limits_{\sh(T)= (n-k,1^k)}q^{\maj(T)}= q^{{k+1\choose
2}}\left[n-1\atop k\right]_q.
\]
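This identity can be checked numerically. The sketch below (plain Python, illustrative only) uses the fact that a hook SYT is determined by its set of first-column entries below the corner; the exponent $\binom{k+1}{2}$ is the major index of the minimal possible descent set $\{1,\dots,k\}$.

```python
from itertools import combinations

def hook_maj_coeffs(n, k):
    """Coefficients of sum q^{maj(T)} over SYT of hook shape (n-k, 1^k).
    Such a tableau is determined by its set of first-column entries below
    the corner, and then Des(T) = {a - 1 : a in that set}."""
    coeffs = {}
    for col in combinations(range(2, n + 1), k):
        m = sum(a - 1 for a in col)
        coeffs[m] = coeffs.get(m, 0) + 1
    return coeffs

def q_binomial(m, k):
    """Coefficient dict of the Gaussian binomial [m choose k]_q,
    computed as the sum of q^{|mu|} over partitions mu in a k x (m-k) box."""
    def count(parts_left, max_part):
        if parts_left == 0:
            return {0: 1}
        out = {}
        for p in range(max_part + 1):
            for s, c in count(parts_left - 1, p).items():
                out[s + p] = out.get(s + p, 0) + c
        return out
    return count(k, m - k)

for n in range(2, 7):
    for k in range(1, n):
        shift = k * (k + 1) // 2  # maj of the minimal descent set {1, ..., k}
        expected = {d + shift: c for d, c in q_binomial(n - 1, k).items()}
        assert hook_maj_coeffs(n, k) == expected
print(sorted(hook_maj_coeffs(5, 2).items()))  # [(3, 1), (4, 1), (5, 2), (6, 1), (7, 1)]
```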
Finally,
the statistics $\winv$ and $\maj$ are equal on all SYT of hook
shape.
For most Young tableaux of non-hook zigzag shapes these statistics
are not equal. However, the equidistribution phenomenon may be
generalized to all zigzags. This will be shown below.
\subsubsection{Zigzag shapes}
Recall from Subsection~\ref{AR_s:zigzag} that
each subset $S \subseteq [n-1]$ defines a zigzag shape
$\zigzag_n(S)$ of size $n$. The following statement refines
Proposition~\ref{total-zigzag}.
\begin{proposition}\label{t.zigzag-descent class}
For any $S \subseteq [n-1]$,
\[
f^{\zigzag_n(S)}=\#\{\pi\in \Sc_n:\ \Des(\pi)=S\}.
\]
\end{proposition}
\begin{proof} Standard Young tableaux of the zigzag shape encoded by $S$
are in bijection with permutations in $\Sc_n$ whose descent set is
exactly $S$. The bijection converts such a tableau into a
permutation by reading the cell entries starting from the
southwestern corner. For example,
\[
T = \,
\ytableaushort{\none\none268, \none\none5, \none37, 14, 9}
\,\mapsto\, \pi=[914375268].
\]
\end{proof}
\medskip
Notice that in this example, $\pi^{-1}=[274368591]$ and
$\Des(T)=\Des(\pi^{-1})=\{2,3,6,8\}$. Also, $\Winv(T)=\Inv(\pi)$.
This is a general phenomenon. Indeed,
\begin{observation}\label{zigzag_Des_Winv}
Let $T$ be a SYT of zigzag shape and let $\pi$ be its image under
the bijection described in the proof of
Proposition~\ref{t.zigzag-descent class}. Then
\[
\Des(T)=\Des(\pi^{-1}),\qquad \Winv(T)=\Inv(\pi).
\]
\end{observation}
\medskip
By Observation~\ref{zigzag_Des_Winv}, there is a maj-winv
preserving bijection from SYT of a given zigzag shape to
permutations in the corresponding descent class.
Combining
this with Theorem~\ref{GG} one obtains
\begin{proposition}\label{AR.t.q_zigzag}
For every zigzag $z$ encoded by $S=\{s_1, \ldots, s_k\}\subseteq
[n-1]$ set $s_0:=0$ and $s_{k+1}:=n$. Then
\[
\sum\limits_{\sh(T)=z}q^{\maj(T)}=\sum\limits_{\sh(T)=z}q^{\winv(T)}=[n]_q!
\cdot \det \left(\frac{1}{[s_{j+1} - s_i]_q!} \right)_{i, j =
0}^{k}.
\]
\end{proposition}
\subsubsection{Two-rowed shapes}
The major index and (weak) inversion number are not
equidistributed over SYT of two-rowed shapes. However,
$q$-enumeration of both is nice.
Two different $q$-Catalan numbers appear on the scene. First,
consider enumeration by major index.
\begin{proposition}\label{q-two-rows}
For every $n\in\bbn$ and $0\le k\le n/2$
\[
\sum\limits_{\sh(T) = (n-k,k)} q^{\maj(T)} = \left[ n \atop k
\right]_q - \left[ n \atop k-1 \right]_q.
\]
In particular,
\[
\sum\limits_{\sh(T)=(m,m)} q^{\maj(T)} = q^m C_m(q)
\]
where
\[
C_m(q) = \frac{1}{[m+1]_q}\left[2m \atop m\right]_q
\]
is the $m$-th F\"urlinger-Hofbauer $q$-Catalan
number~\cite{Furlinger_Hofbauer}.
\end{proposition}
Hence
\begin{corollary}\label{q-total-two-rows}
\[
\sum\limits_{\height(T)\le 2}q^{\maj(T)}=\left[n\atop \lfloor
n/2\rfloor\right]_q.
\]
\end{corollary}
For a bijective proof and refinements see~\cite{BBES}.
\bigskip
The descent set is invariant under jeu de taquin. Hence the proof
of Theorem~\ref{height3} may be lifted to the following
$q$-analogue.
\begin{theorem}
The major index generating function over SYT of size $n$ and
height $\le 3$ is equal to
\[
m_n(q) = \sum_{k=0}^{\lfloor n/2 \rfloor} q^k \left[ n \atop 2k
\right]_q C_k(q).
\]
\end{theorem}
\bigskip
Furthermore, the following strengthened version of
Corollary~\ref{AR_t:RSK_cor1}(3) holds.
\begin{corollary} For every positive integer $k$
$$
\sum\limits_{\{T \in \SYT_n:\ \height(T)< k\}} {\bf
x}^{\Des(T)}=\sum\limits_{\{\pi \in \Avoid_n(\sigma_k):\
\pi^2=id\}} {\bf x}^{\Des(\pi)},
$$
where $\Avoid_n(\sigma_k)$ is the subset of all permutations in
$\Sc_n$ which avoid $\sigma_k:=[k,k-1,\dots,1]$.
\end{corollary}
\begin{proof}
Combine Theorem~\ref{AR_t:Schensted} with
Proposition~\ref{AR_t:RSK_Des}.
\end{proof}
\bigskip
Counting by inversions is associated with another $q$-Catalan
number.
\begin{definition}{\rm \cite{CarlitzRiordan}}
Define the {\dem Carlitz-Riordan $q$-Catalan number} $\tC_n(q)$ by
the recursion
\[
\tC_{n+1}(q):=\sum\limits_{k=0}^{n} q^k \tC_k(q) \tC_{n-k}(q)
\]
with $\tC_0(q):=1$.
\end{definition}
These polynomials are, essentially, generating functions for the
area under Dyck paths of order $n$.
\begin{proposition}{\rm \cite{Shynar}}
\[
\sum\limits_{\sh(T)=(n,n)}q^{\inv(T)} = \tC_n(q).
\]
\end{proposition}
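Shynar's result is easy to test for small $n$. The sketch below (plain Python, illustrative only; polynomials are coefficient dictionaries) enumerates SYT of shape $(n,n)$, collects the inversion numbers, and compares with the Carlitz-Riordan recursion.

```python
def syt(shape):
    """All SYT of the given ordinary shape, grown entry by entry."""
    n = sum(shape)
    out = []
    def grow(rows, k):
        if k > n:
            out.append([row[:] for row in rows])
            return
        for i in range(len(shape)):
            if len(rows[i]) < shape[i] and (i == 0 or len(rows[i - 1]) > len(rows[i])):
                rows[i].append(k)
                grow(rows, k + 1)
                rows[i].pop()
    grow([[] for _ in shape], 1)
    return out

def inv(T):
    pos = {v: (r, c) for r, row in enumerate(T) for c, v in enumerate(row)}
    return sum(1 for i in pos for j in pos
               if i < j and pos[i][0] < pos[j][0] and pos[i][1] > pos[j][1])

def carlitz_riordan(n):
    """Coefficient dict of the n-th Carlitz-Riordan q-Catalan polynomial,
    via the defining recursion C_{m+1} = sum_k q^k C_k C_{m-k}."""
    C = [{0: 1}]
    for m in range(len(C) - 1, n):
        new = {}
        for k in range(m + 1):
            for a, ca in C[k].items():
                for b, cb in C[m - k].items():
                    new[k + a + b] = new.get(k + a + b, 0) + ca * cb
        C.append(new)
    return C[n]

for n in range(1, 5):
    gf = {}
    for T in syt((n, n)):
        gf[inv(T)] = gf.get(inv(T), 0) + 1
    assert gf == carlitz_riordan(n)
print(carlitz_riordan(3))  # {0: 1, 1: 2, 2: 1, 3: 1}
```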
\begin{proposition}{\rm \cite{Shynar}}
For $0 \le k\le n/2$ denote
$G_k(q):=\sum\limits_{\sh(T)=(n-k,k)}q^{\inv(T)}$. Then
\[
\sum\limits_{k=0}^{\lfloor n/2 \rfloor}q^{\binom{n-2k}{2}}
G_k(q)^2 = \tC_n(q).
\]
\end{proposition}
\medskip
Enumeration of two-rowed SYT by descent number was studied by
Barahovski.
\begin{proposition}{\rm \cite{Barahovski}}
For $m \ge k \ge 1$,
\[
\sum_{\sh(T)=(m,k)} t^{\des(T)} = \sum_{d =1}^{k}
\frac{m-k+1}{k}{k \choose d}{m \choose d-1} t^d .
\end{proposition}
\subsection{The general case}
\subsubsection{Counting by descents}
There is a nice formula, due to Gessel, for the
number of SYT of a given shape $\la$ with a given descent set.
Since it involves a scalar product of symmetric functions, which
is beyond the scope of the current survey, we refer the reader to
the original paper~\cite[Theorem 7]{Gessel1984}.
There is also a rather complicated formula of
Kreweras~\cite{Kreweras66, Kreweras67} for the generating function
of descent number on SYT of a given shape. However, the
first moments of the distribution of this statistic may be
calculated quite easily.
\begin{proposition}\label{one-des} For every partition $\lambda\vdash n$ and
$1\le k \le n-1$
\[
\#\{T\in \SYT(\lambda) :\, k \in \Des(T)\} =
\left( \frac{1}{2} - \frac{\sum_{i} \binom{\lambda_i}{2} - \sum_j
\binom{\lambda_j'}{2}}{n(n-1)} \right) f^\lambda.
\]
Here $\lambda_i$ is the length of the $i$-th row in the diagram $[\la]$,
and $\lambda'_j$ is the length of the $j$-th column.
\end{proposition}
For proofs see~\cite{Hasto, AR-Random}.
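For example, for $\la = (3,2)$ the formula predicts that each $1 \le k \le 4$ is a descent in exactly two of the $f^{(3,2)}=5$ tableaux. The following Python sketch (illustrative only, not part of the survey) verifies this by brute force, using exact rational arithmetic for the right-hand side.

```python
from fractions import Fraction
from math import comb

def syt(shape):
    """All SYT of the given ordinary shape, grown entry by entry."""
    n = sum(shape)
    out = []
    def grow(rows, k):
        if k > n:
            out.append([row[:] for row in rows])
            return
        for i in range(len(shape)):
            if len(rows[i]) < shape[i] and (i == 0 or len(rows[i - 1]) > len(rows[i])):
                rows[i].append(k)
                grow(rows, k + 1)
                rows[i].pop()
    grow([[] for _ in shape], 1)
    return out

def descents(T):
    row = {v: r for r, rw in enumerate(T) for v in rw}
    n = len(row)
    return {i for i in range(1, n) if row[i] < row[i + 1]}

shape = (3, 2)
n = sum(shape)
conj = [sum(1 for r in shape if r > c) for c in range(shape[0])]
tableaux = syt(shape)

# The right-hand side of the proposition; note that it does not depend on k.
rhs = (Fraction(1, 2)
       - Fraction(sum(comb(r, 2) for r in shape) - sum(comb(c, 2) for c in conj),
                  n * (n - 1))) * len(tableaux)

for k in range(1, n):
    assert sum(1 for T in tableaux if k in descents(T)) == rhs
print(rhs)  # 2
```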
One deduces that $\#\{T\in \SYT(\la) :\, k \in \Des(T)\}$ is independent of $k$.
This phenomenon may be generalized as follows:
For any composition $\mu=(\mu_1, \ldots, \mu_t)$ of $n$ let
\[
S_\mu := \{\mu_1, \mu_1+\mu_2, \ldots, \mu_1+\ldots+\mu_{t-1}\}
\subseteq [1, n-1].
\]
The underlying partition of $\mu$ is obtained by
reordering the parts in a weakly decreasing order.
\begin{theorem}
For every partition $\la \vdash n$ and any two compositions
$\mu$ and $\nu$ of $n$ with same underlying partition,
\[
\sum_{\sh(T)=\la} {\bf x}^{\Des(T)\setminus S_\mu}=
\sum_{\sh(T)=\la} {\bf x}^{\Des(T)\setminus S_\nu}.
\]
\end{theorem}
\medskip
Proposition~\ref{one-des} implies that
\begin{corollary}
The expected descent number of a random SYT of shape
$\lambda\vdash n$ is equal to
\[
\frac{n-1}{2} - \frac{1}{n}\left(\sum\limits_{i}
\binom{\lambda_i}{2}-\sum\limits_j \binom{\lambda_j'}{2}\right).
\]
\end{corollary}
The variance of descent number was computed in~\cite{AR-Random, Hasto}, implying
a concentration around the mean phenomenon. The proofs
in~\cite{AR-Random} involve character theory, while those in~\cite{Hasto}
follow from a careful examination of the hook length bijection of Novelli, Pak and
Stoyanovskii~\cite{NovelliPakSto97}, described in Subsection~\ref{bijective} above.
\subsubsection{Counting by major index}
Counting SYT of general ordinary shape by descents is difficult.
Surprisingly, it was discovered by Stanley that counting by major index
leads to a natural and beautiful $q$-analogue of the
ordinary hook length formula (Proposition~\ref{t.AR_num_ordinary_hook}).
\begin{theorem}\label{t.AR_q_ordinary_hook}
{\rm\ ($q$-Hook Length Formula)~\cite[Corollary
7.21.5]{Stanley_EC2}} For every partition $\la \vdash n$
\[
\sum_{T \in \SYT(\la)} q^{\maj(T)} = q^{\sum_i {{\lambda_i'\choose
2}}}\frac{[n]_q!}{\prod_{c \in [\la]} [h_{c}]_q}.
\]
\end{theorem}
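The $q$-hook length formula can be confirmed by brute force for small shapes. The sketch below (plain Python, illustrative only) checks it for $\la=(3,2)$; to avoid polynomial division it compares $\bigl(\sum_T q^{\maj(T)}\bigr)\prod_c [h_c]_q$ with $q^{\sum_j \binom{\la'_j}{2}}\,[n]_q!$.

```python
def syt(shape):
    """All SYT of the given ordinary shape, grown entry by entry."""
    n = sum(shape)
    out = []
    def grow(rows, k):
        if k > n:
            out.append([row[:] for row in rows])
            return
        for i in range(len(shape)):
            if len(rows[i]) < shape[i] and (i == 0 or len(rows[i - 1]) > len(rows[i])):
                rows[i].append(k)
                grow(rows, k + 1)
                rows[i].pop()
    grow([[] for _ in shape], 1)
    return out

def maj(T):
    row = {v: r for r, rw in enumerate(T) for v in rw}
    n = len(row)
    return sum(i for i in range(1, n) if row[i] < row[i + 1])

def poly_mul(a, b):
    out = {}
    for d1, c1 in a.items():
        for d2, c2 in b.items():
            out[d1 + d2] = out.get(d1 + d2, 0) + c1 * c2
    return out

def q_int(m):
    return {d: 1 for d in range(m)}  # [m]_q = 1 + q + ... + q^{m-1}

shape = (3, 2)
n = sum(shape)
conj = [sum(1 for r in shape if r > c) for c in range(shape[0])]
hooks = [(r - j) + (conj[j] - i) - 1 for i, r in enumerate(shape) for j in range(r)]

# left-hand side: (sum_T q^{maj(T)}) * prod_c [h_c]_q
lhs = {}
for T in syt(shape):
    lhs[maj(T)] = lhs.get(maj(T), 0) + 1
for h in hooks:
    lhs = poly_mul(lhs, q_int(h))

# right-hand side: q^{sum_j binom(lambda'_j, 2)} * [n]_q!
rhs = {sum(c * (c - 1) // 2 for c in conj): 1}
for m in range(1, n + 1):
    rhs = poly_mul(rhs, q_int(m))

assert lhs == rhs
```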
This result follows from a more general identity, showing that the
major index generating function for SYT of a skew shape is essentially
the corresponding skew Schur function~\cite[Proposition 7.19.11]{Stanley_EC2}.
If $|\la/\mu| = n$ then
\[
s_{\la/\mu}(1, q, q^2, \ldots) = \frac{\sum_{T \in \SYT(\la/\mu)}
q^{\maj(T)}}{(1-q)(1-q^2) \cdots (1-q^n)}.
\]
\medskip
An elegant $q$-analogue of Schur's shifted product formula
(Proposition~\ref{t.AR_num_shifted_prod}) was found by Stembridge.
\begin{theorem}{\rm \cite[Corollary 5.2]{Stembridge}}
For every strict partition $\la=(\la_1,\dots,\la_t) \vdash n$
\[
\sum_{T \in \SYT(\la^*)} q^{n\cdot\des(T)-\maj(T)} =
\frac{[n]_q!}{\prod_{i=1}^t [\la_i]_q!}\cdot
\prod_{(i,j):i<j}\frac{q^{\la_j}-q^{\la_i}}{1-q^{\la_i+\la_j}}.
\]
\end{theorem}
Theorem~\ref{t.AR_q_ordinary_hook} may be easily generalized to
$r$-tableaux.
\begin{corollary}
For every $r$-partition $\la = (\la^1, \ldots, \la^r)$ of total size $n$
\[
\sum_{T \in \SYT(\la)} q^{\maj(T)} =
q^{\sum_i {{\la_i'\choose 2}}} \cdot \frac{[n]_{q}!}{\prod_{c \in [\la]} [h_{c}]_{q}}.
\]
\end{corollary}
The proof relies on a combination of Theorem~\ref{GG} with
the Stanton-White bijection for colored permutations.
\subsubsection{Counting by inversions}
Unlike descent statistics, not much is known about enumeration by
inversion statistics in the general case.
The following result was conjectured by Stanley~\cite{Stanley-sign} and proved,
independently, by Lam~\cite{Lam}, Reifegerste~\cite{Reifegerste}
and Sj\"ostrand~\cite{Sjostrand}.
\begin{theorem}
\[
\sum_{\la \vdash n} \sum_{T \in \SYT(\la)} \sgn(T) = 2^{\lfloor
n/2 \rfloor}.
\]
\end{theorem}
A generalization of Foata's bijective proof of MacMahon's
fundamental equidistribution theorem (Theorem~\ref{AR_t:MM}) to
SYT of any given shape was described in~\cite{Haglund}, using a
more involved concept of inversion number for SYT.
\section{Counting reduced words}\label{AR_s:words}
An interpretation of SYT as reduced words is presented in this
section. This interpretation is based on Stanley's seminal
paper~\cite{Stanley-words} and follow ups. For further reading
see~\cite[\S 7.4-7.5]{Bjorner-Brenti} and~\cite[\S
7]{Bjorner-Stanley}.
\subsection{Coxeter generators and reduced words}
Recall that the symmetric group $\Sc_n$ is generated by the set
$S:=\{s_i:\ 1\le i <n\}$ subject to the defining Coxeter
relations:
\[
s_i^2=1\ \ (1\le i<n);\ \ \ \ \ \ s_is_j=s_js_i\ \ (|j-i|>1); \ \ \ \ \ \ s_is_{i+1}s_i=s_{i+1}s_is_{i+1}\ \ (1\le i<n-1).
\]
The elements of $S$ are called simple reflections and may be
identified with the adjacent transpositions in $\Sc_n$, where
$s_i=(i,i+1)$.
\bigskip
The Coxeter length of a permutation $\pi\in \Sc_n$ is
\[
\ell(\pi):=\min \{t:\ s_{i_1}\cdots s_{i_t}=\pi\},
\]
the minimal length of an expression of $\pi$ as a product of
simple reflections.
\begin{claim}
For every $\pi\in \Sc_n$
\[
\ell(\pi)=\inv(\pi).
\]
\end{claim}
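The claim can be confirmed for small $n$ by breadth-first search in the Cayley graph of $\Sc_n$ with respect to the adjacent transpositions; the sketch below (plain Python, illustrative only) does this for $n=4$.

```python
from collections import deque

n = 4
ident = tuple(range(1, n + 1))

# BFS in the Cayley graph of S_n generated by s_1, ..., s_{n-1}:
# dist[p] is the Coxeter length of the permutation p.
dist = {ident: 0}
queue = deque([ident])
while queue:
    p = queue.popleft()
    for i in range(n - 1):
        q = list(p)
        q[i], q[i + 1] = q[i + 1], q[i]
        q = tuple(q)
        if q not in dist:
            dist[q] = dist[p] + 1
            queue.append(q)

def inv(p):
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

assert len(dist) == 24                       # all of S_4 is reached
assert all(dist[p] == inv(p) for p in dist)  # Coxeter length = inversion number
print(dist[(4, 3, 2, 1)])  # 6, the length of the longest element
```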
A sequence of Coxeter generators $(s_{i_1},\dots, s_{i_t})$ is a
{\dem reduced word} of $\pi\in \Sc_n$ if the corresponding product
is a factorization of $\pi$ of minimal length, that is,
$s_{i_1}\cdots s_{i_t}=\pi$ and $t=\ell(\pi)$. In this section, the
enumeration of reduced words will be reduced to the enumeration of SYT.
\subsection{Ordinary and skew shapes}
A {\dem shuffle} of the two sequences $(1, 2, \ldots, k)$ and $(k
+ 1, k + 2, \ldots, \ell)$ is a permutation $\pi\in \Sc_{\ell}$ in
which both sequences appear as subsequences.
For example ($k = 3$, $\ell = 7$): $4516237$.
\begin{proposition}\label{t.AR_num_shuffles}{\rm ~\cite{Elizalde-R}}
There exists a bijection $\la \mapsto \pi_\la$ from the set of all
partitions to the set of all fixed point free shuffles such that:
\begin{enumerate}
\item If $\la$ has height $k$, width (length of first row) $\ell -
k$ and size $n$ then $\pi_\la$ is a fixed point free shuffle of
$(1, 2, \ldots, k)$ and $(k+1, k+2, \ldots, \ell)$ with
$\inv(\pi_\la) = n$. \item The number of SYT of shape $\lambda$ is
equal to the number of reduced words of $\pi_\la$.
\end{enumerate}
\end{proposition}
\begin{proof}[Proof sketch]
For the first claim, read the permutation from the shape as
follows: Encode the rows by $1, 2, \ldots, k$ from bottom to top,
and the columns by $k+1, k+2, \ldots, \ell$ from left to right.
Then walk along the SE boundary from bottom to top. If the $i$-th
step is horizontal set $\pi_\la(i)$ to be its column encoding;
otherwise set $\pi_\la(i)$ to be its row encoding.
\begin{example} The shape
\[
\ytableausetup{baseline}
\ytableaushort{\none\none{\none[{\ts4}]}{\none[\ts5]}{\none[\ts6]}{\none[\ts7]}{\none[\ts8]},
\none, {\none[\ts3]}\none{}{}{}{}{}, {\none[\ts2]}\none{}{}{}{},
{\none[\ts1]}\none{}} \ytableausetup{nobaseline}
\]
corresponds to the shuffle permutation
$$\pi =41567283.$$
\end{example}
For the second claim, read the reduced word from the SYT as
follows:
If the letter $1 \le j \le n$ lies on the $i$-th diagonal (from
the left), set the $j$-th letter in the word (from right to left)
to be $s_i$.
\begin{example} The SYT
\[
\ytableausetup{baseline}
\ytableaushort{\none\none{\none[{\ts4}]}{\none[\ts5]}{\none[\ts6]}{\none[\ts7]}{\none[\ts8]},
\none, {\none[\ts3]}\none12368, {\none[\ts2]}\none459{10},
{\none[\ts1]}\none7} \ytableausetup{nobaseline}
\]
corresponds to the reduced word (in adjacent transpositions)
\[
s_5 s_4 s_7 s_1 s_6 s_3 s_2 s_5 s_4 s_3 = 41567283.
\]
\end{example}
The proof that this map is a bijection from all SYT of shape
$\lambda$ to all reduced words of $\pi_\lambda$ is obtained by
induction on the size of $\lambda$.
\end{proof}
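The example above can be verified mechanically. The sketch below (plain Python, illustrative only; it adopts the convention that the letters of a word act on positions and are applied from right to left) multiplies out the word, checks that its length equals $\inv(\pi)$, and counts all reduced words of $\pi$ against the hook length formula for the shape $(5,4,1)$ read off the example.

```python
from functools import lru_cache
from math import factorial

word = [5, 4, 7, 1, 6, 3, 2, 5, 4, 3]   # s_5 s_4 s_7 s_1 s_6 s_3 s_2 s_5 s_4 s_3
n = 8

# Multiply out the word: each letter s_i swaps positions i, i+1,
# and the word is applied from right to left.
perm = list(range(1, n + 1))
for i in reversed(word):
    perm[i - 1], perm[i] = perm[i], perm[i - 1]
assert perm == [4, 1, 5, 6, 7, 2, 8, 3]  # the shuffle permutation 41567283

inv = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
assert inv == len(word)  # the word is indeed reduced

@lru_cache(maxsize=None)
def num_reduced_words(p):
    """Count reduced words by peeling off a final letter at each descent of p."""
    if list(p) == sorted(p):
        return 1
    total = 0
    for i in range(len(p) - 1):
        if p[i] > p[i + 1]:
            q = list(p)
            q[i], q[i + 1] = q[i + 1], q[i]
            total += num_reduced_words(tuple(q))
    return total

def hook_length_formula(shape):
    conj = [sum(1 for r in shape if r > c) for c in range(shape[0])]
    prod = 1
    for i, r in enumerate(shape):
        for j in range(r):
            prod *= (r - j) + (conj[j] - i) - 1
    return factorial(sum(shape)) // prod

# The proposition: the number of reduced words equals f^{(5,4,1)}.
assert num_reduced_words(tuple(perm)) == hook_length_formula((5, 4, 1))
print(hook_length_formula((5, 4, 1)))  # 288
```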
\begin{corollary}\label{t.AR_num_shuffles_rectangle}
For every pair of positive integers $1 \le k \le \ell$, the number
of reduced words of the permutation $[k+1, k+2, \ldots, \ell, 1,
2, \ldots, k]$ is equal to the number of SYT of rectangular shape
$(k^{\ell - k})$.
\end{corollary}
\begin{proposition}\label{t.AR_num_skew_321}
There exists an injection from the set of all $321$-avoiding
permutations to the set of all skew shapes, which satisfies the
following property: For every $321$-avoiding permutation $\pi$
there exists a skew shape $\lambda/\mu$ such that the number of
reduced words of $\pi$ is equal to the number of SYT of shape
$\lambda/\mu$.
\end{proposition}
\medskip
The following theorem was conjectured and first proved by Stanley
using symmetric functions~\cite{Stanley-words}. A bijective proof
was given later by Edelman and Greene~\cite{Edelman-Greene}.
\begin{theorem}\label{AR_reduced_words}{\rm \cite[Corollary 4.3]{Stanley-words}}
The number of reduced words (in adjacent transpositions) of the
longest permutation $w_0:=[n,n-1,...,1]$ is equal to the number of
SYT of staircase shape $\delta_{n-1} = (n-1,n-2,...,1)$.
\end{theorem}
Corollary~\ref{t.AR_num_shuffles_rectangle} and
Theorem~\ref{AR_reduced_words} are special instances of the
following remarkable result.
\begin{theorem}\label{2143}{\rm \cite{Stanley-words, Edelman-Greene}}
\begin{enumerate}
\item For every permutation $\pi\in \Sc_n$, the number of reduced
words of $\pi$ can be expressed as
\[
\sum_{\la \vdash \inv(\pi)} m_\la f^\la
\]
where $m_\la$ are nonnegative integers canonically determined by
$\pi$. \item The above sum reduces to a single term $f^{\la_0}$
(i.e., $m_\la = \delta_{\la, \la_0}$) if and only if $\pi$ is
$2143$-avoiding.
\end{enumerate}
\end{theorem}
\medskip
Reiner~\cite{Reiner-note} applied Theorem~\ref{AR_reduced_words}
to show that the expected number of subwords of type $s_i
s_{i+1}s_i$ and $s_{i+1}s_i s_{i+1}$ in a random reduced word of
the longest permutation is exactly one. He conjectured that the
distribution of their number is Poisson. For some recent progress
see~\cite{Tenner}.
\bigskip
A generalization of Theorem~\ref{AR_reduced_words} to type $B$
involves square shapes. The following theorem was conjectured by
Stanley and proved by Haiman.
\begin{theorem}\label{AR_reduced_words_B}{\rm \cite{Haiman1992}}
The number of reduced words (in the alphabet of Coxeter
generators) of the longest signed permutation
$w_0:=[-1,-2,...,-n]$ in $B_n$ is equal to the number of SYT of
square shape.
\end{theorem}
For a recent application see~\cite{Petersen-Serrano}.
\subsection{Shifted shapes}
An interpretation of the number of SYT of a shifted shape was
given by Edelman. Recall the left weak order from
Section~\ref{AR_s:appetizer}, and recall that $\sigma$ covers
$\pi$ in this order if $\sigma=s_i \pi$ and
$\ell(\sigma)=\ell(\pi)+1$. Edelman considered a modification of
this order in which we further require that the letter moved to
the left be larger than all letters that precede it.
\begin{theorem}{\rm \cite[Theorem 3.2]{Edelman}}
The number of maximal chains in the modified weak order
is equal to the number of SYT of shifted staircase shape.
\end{theorem}
\medskip
A related interpretation of SYT of shifted shapes was given
in~\cite{Elizalde-R}. A permutation $\pi\in \Sc_n$ is {\dem
unimodal} if $\Des(\pi^{-1})=\{1,\dots,j\}$ for some $0\le j \le
n-1$. Consider $U_n$, the set of all unimodal permutations in
$\Sc_n$, as a poset under the left weak order induced from
$\Sc_n$.
\begin{proposition}\label{t.AR_num_shifted_unimodal}{\rm \cite{Elizalde-R}}\
There exists a bijection $\la \mapsto \pi_\la$ from the set of all
shifted shapes contained in the shifted staircase $\delta_{n-1} =
(n-1,n-2,\ldots,1)$ to the set $U_n$ of all unimodal permutations
in $\Sc_n$ such that:
\begin{enumerate}
\item $|\la| = \inv(\pi_\la)$. \item The number of SYT of shifted
shape $\la$ is equal to the number of maximal chains in the
interval $[id,\pi_\la]$ in $U_n$.
\end{enumerate}
\end{proposition}
\begin{proof}[Proof sketch]
Construct the permutation $\pi_\la$ from the shape $\la$ as
follows: Encode the rows of $\la$ by $1, 2, \ldots$ from top to
bottom, and the columns by $2, 3, \ldots$ from left to right. Then
walk along the SE boundary from bottom to top. If the $i$-th step
is horizontal, set $\pi_\la(i)$ to be its column encoding;
otherwise set $\pi_\la(i)$ to be its row encoding.
\begin{example} The shifted shape
\[
\ytableausetup{baseline}
\ytableaushort{\none\none{\none[{\ts2}]}{\none[\ts3]}{\none[\ts4]}{\none[\ts5]}{\none[\ts6]},
\none, {\none[\ts1]}\none{}{}{}{}{},
{\none[\ts2]}\none\none{}{}{}{}, {\none[\ts3]}\none\none\none{}}
\ytableausetup{nobaseline}
\]
corresponds to the permutation
\[
\pi =4356217.
\]
\end{example}
Now construct the reduced word from the SYT $T$ as follows:
If the letter $j$ lies in the $i$-th diagonal (from left to right)
of $T$ then set the $j$-th letter in the word (from right to left)
to be $s_i$.
\begin{example} The SYT
\[
\ytableausetup{baseline}
\ytableaushort{\none\none{\none[{\ts2}]}{\none[\ts3]}{\none[\ts4]}{\none[\ts5]}{\none[\ts6]},
\none, {\none[\ts1]}\none12368, {\none[\ts2]}\none\none459{10},
{\none[\ts3]}\none\none\none7} \ytableausetup{nobaseline}
\]
corresponds to the reduced word (in adjacent transpositions)
\[
s_4 s_3 s_5 s_1 s_4 s_2 s_1 s_3 s_2 s_1 = 4356217.
\]
\end{example}
\end{proof}
\section{Appendix 1: Representation theoretic aspects}
Representation theory may be considered as the birthplace of SYT;
in fact, one cannot imagine group representations without the
presence of SYT. Representation theory has been intimately related
to combinatorics since its early days. The pioneering work of
Frobenius, Schur and Young made essential use of integer
partitions and tableaux. In particular, formulas for restriction,
induction and decomposition of representations, as well as many
character formulas, involve SYT. On the other hand, it is well
known that many enumerative problems may be solved using
representations. In this survey we restricted the discussion to
combinatorial approaches. It should be noted that most
results have representation theoretic proofs, and in many cases
the discovery of the enumerative results was motivated by
representation theoretic problems.
In this section we briefly point out several connections, assuming
basic knowledge of non-commutative algebra, and give a very short
sample of applications.
\subsection{Degrees and enumeration}
A SYT $T$ of shape $\la$ has an associated group algebra element
$y_T\in \bbc[\Sc_n]$, called the {\dem Young symmetrizer}.
$y_T$ has a key role: It is an idempotent, and its principal right
ideal
\[
y_T\bbc[\Sc_n]
\]
is an irreducible module of $\Sc_n$. Every irreducible module is
isomorphic to one generated by a Young symmetrizer, and two
modules generated by the Young symmetrizers of two SYT are
isomorphic if and only if these SYT have the same shape. The
irreducible characters of the symmetric group $\Sc_n$ over $\bbc$
are thus parameterized by the integer partitions of $n$.
\begin{proposition}\label{RT1}
The degree of the character indexed by $\la\vdash n$ is equal to
$f^\la$, the number of SYT of the ordinary shape $\la$.
\end{proposition}
This phenomenon extends to skew and shifted shapes. The number of
SYT of skew shape $\la/\mu$, $f^{\la/\mu}$, is equal to the degree
of the decomposable module generated by a Young symmetrizer of a
SYT of shape $\la/\mu$. Projective representations are indexed by
shifted shapes; the number of SYT of shifted shape $\la$, $g^\la$,
is equal to the degree of the associated projective
representation.
\bigskip
Most of the results in this survey have representation theoretic
proofs or interpretations. A few examples will be given here.
\begin{proof}[Proof sketch of Proposition~\ref{two-rows}]
The symmetric group $\Sc_n$ acts naturally on subsets of size $k$.
The associated character, $\mu^{(n-k,k)}$, is multiplicity free;
its decomposition into irreducibles is
\begin{equation}\label{RT_eq1}
\mu^{(n-k,k)}=\sum\limits_{i=0}^k \chi^{(n-i,i)}.
\end{equation}
Hence
\[
\chi^{(n-k,k)}=\mu^{(n-k,k)}-\mu^{(n-k+1,k-1)}.
\]
The degrees thus satisfy
\[
f^{(n-k,k)}=\chi^{(n-k,k)}(1)=\mu^{(n-k,k)}(1)-\mu^{(n-k+1,k-1)}(1)={n\choose
k}-{n\choose k-1}.
\]
\end{proof}
This argument may be generalized to prove
Theorem~\ref{t.AR_num_ordinary_det}. First, notice that
(\ref{RT_eq1}) is a special case of the Young rule for decomposing
permutation modules. The Young rule implies the determinantal
Jacobi-Trudi formula for expressing an irreducible module as an
alternating sum of permutation modules, see e.g.~\cite{JK}.
Evaluation of the characters at the identity permutation implies
Theorem~\ref{t.AR_num_ordinary_det}.
\medskip
We proceed next to identities which involve sums of the $f^\la$'s.
\medskip
\begin{proof}[Proof of Corollary~\ref{AR_t:RSK_cor1}(1)]
Recall that for every finite group, the sum of squares of the
degrees of the irreducibles is equal to the size of the group.
This fact, together with the interpretation of the $f^\la$'s as
degrees of the irreducibles of the symmetric group $\Sc_n$
(Proposition~\ref{RT1}), completes the proof.
\end{proof}
The same proof yields Theorem~\ref{sum_r}(1).
\medskip
The Frobenius-Schur indicator theorem implies that for every
finite group all of whose irreducible representations may be
realized over $\bbr$, the sum of the degrees of the irreducibles
is equal to the number of involutions in the group, implying
Corollary~\ref{AR_t:RSK_cor1}(2). The proof of
Theorem~\ref{sum_r}(2) is similar, see e.g.~\cite{BG,
APR_Gelfand_wreath}.
\medskip
\begin{proof}[Proof sketch of Corollary~\ref{even_parts}]
The permutation module defined by the action of $\Sc_{2n}$ on the
cosets of $B_n=\bbz_2\wr \Sc_n$ is isomorphic to a
multiplicity-free sum of all $\Sc_{2n}$-irreducible modules
indexed by partitions with all parts even~\cite[\S VII (2.4)]{Md}.
Comparison
of the dimensions completes the proof.
\end{proof}
\subsection{Characters and $q$-enumeration}
The Murnaghan-Nakayama rule is a formula for computing values of
irreducible $\Sc_n$-characters as signed enumerations of rim hook
tableaux. Here is an example of special interest.
\begin{proposition}\label{AR_t:RT2}
For every $\la\vdash rn$, the value of the irreducible character
$\chi^\la$ at a conjugacy class of cycle type $r^n$ is equal to
the number of $r$-rim hook tableaux; namely,
\[
\chi^\la_{(r,\ldots,r)}=f^\la_r.
\]
\end{proposition}
Another interpretation of $f^\la_r$ is as the degree of an
irreducible module of the wreath product $\bbz_r\wr \Sc_n$.
\medskip
An equivalent formula for the irreducible character values is by
weighted counts of all SYT of a given shape by their descents;
$q$-enumeration then amounts to a computation of the
corresponding Hecke algebra characters.
\bigskip
These character formulas may be applied to counting SYT by
descents. Here is a simple example.
\smallskip
\begin{proof}[Proof sketch of Proposition~\ref{one-des}]
By the Murnaghan-Nakayama rule, the value of the character
$\chi^\la$ at a transposition $s_i=(i,i+1)$ is equal to
\[
|\{T\in \SYT(\la):\ i\not\in\Des(T)\}|- |\{T\in \SYT(\la):\
i\in\Des(T)\}|=f^\la-2|\{T\in \SYT(\la):\ i\in\Des(T)\}|.
\]
Combining this with the explicit formula for this
character~\cite{I},
\[
\chi^\la_{(2,1^{n-2})} = \frac{\sum_i {\la_i \choose 2} - \sum_j
{\la'_j \choose 2}}{{n \choose 2}} f^\la,
\]
completes the proof.
\end{proof}
\bigskip
Finally, we quote two classical results, which apply enumeration
by major index.
\begin{theorem}{\rm (Kra{\'s}kiewicz-Weyman, in a widely circulated manuscript
finally published as~\cite{Kraskiewicz_Weyman})} Let $\omega$ be a
primitive $1$-dimensional character on the cyclic group $C_n$ of
order $n$.
Then, for any partition $\la$ of $n$, the multiplicity of
$\chi^{\la}$ in the induced character $\mathrm{Ind}_{C_n}^{\Sc_n} \omega$,
which is also the character of the $\Sc_n$-action on the multilinear
part of the free Lie algebra on $n$ generators (and of many other
actions on combinatorial objects), is equal to the number of $T \in
\SYT(\la)$ with $\maj(T) \equiv 1 \pmod n$.
\end{theorem}
This result may actually be deduced from the following one.
\begin{theorem}{\rm (Lusztig-Stanley)}
For any partition $\la$ of $n$ and any $0 \le k \le {n \choose
2}$, the multiplicity of
$\chi^{\la}$ in the character of the $\Sc_n$-action on the $k$-th
homogeneous component of the coinvariant algebra is equal to the
number of $T \in \SYT(\la)$ with $\maj(T) = k$.
\end{theorem}
A parallel powerful language is that of symmetric functions. The
interested reader is referred to the excellent textbooks~\cite[Ch.
7]{Stanley_EC2} and~\cite{Md}.
\section{Appendix 2: Asymptotics and probabilistic aspects}
An asymptotic formula is sometimes available when a simple
explicit formula is not known. Sometimes, such formulas do lead to
the discovery of surprising explicit formulas. A number of
important asymptotic results will be given in this appendix.
\medskip
Recall the exact formulas
(Corollary~\ref{total-two-rows}, Theorem~\ref{height3} and
Theorem~\ref{height4}) for the total number of SYT of ordinary
shapes with small height. An asymptotic formula for the total
number of SYT of bounded height was given by
Regev~\cite{Regev-height}; see
also~\cite{Berele_Regev, Stanley_ICM_2007, Regev-height-new}.
\begin{theorem}\label{AR_t:regev_sum}{\rm \cite{Regev-height}}
Fix a positive integer $k$ and a positive real number $\alpha$.
Then, asymptotically as $n \to \infty$,
\[
F_{k,\alpha}(n) := \sum_{\la \vdash n\atop \ell(\la)\le k}
(f^\la)^{2 \alpha} \,\sim\, k^{2 \alpha n} \cdot n^{-\frac{1}{2}
(k - 1)(\alpha k + 2 \alpha - 1)} \cdot c(k, \alpha),
\]
where
\[
c(k, \alpha) := k^{\frac{1}{2} k (\alpha k + \alpha - 1)} (2
\alpha)^{-\frac{1}{2} (k - 1)(\alpha k + 1)} (2 \pi)^{-\frac{1}{2}
(k - 1)(2 \alpha - 1)} \prod_{i = 1}^{k} \frac{\Gamma(i
\alpha)}{\Gamma(\alpha)}.
\]
\end{theorem}
In particular, $\lim_{n \to \infty} {F_{k,\alpha}(n)}^{1/n} = k^{2
\alpha}$. Important special cases are: $\alpha = 1$, which gives
(by Theorem~\ref{AR_t:RSK_bijection} and
Proposition~\ref{AR_t:Schensted}) the asymptotics for the number
of permutations in $\Sc_n$ which do not contain a decreasing
subsequence of length $k+1$; and $\alpha = 1/2$, which gives (by
Corollary~\ref{AR_t:RSK_cor1}(3)) the asymptotics for the number
of involutions in $\Sc_n$ with the same property.
See~\cite{Regev-height} for many other applications.
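For small $n$, the $k=2$, $\alpha=1$ case can be checked exactly: by RSK, $F_{2,1}(n)$ equals the number of permutations of $\Sc_n$ with no decreasing subsequence of length $3$, which is the $n$-th Catalan number. The following Python sketch (illustrative only) verifies this identity via the hook length formula:

```python
from math import comb, factorial

def f_shape(shape):
    """Number of standard Young tableaux of a partition shape,
    computed by the hook length formula."""
    n = sum(shape)
    hooks = 1
    for r, row_len in enumerate(shape):
        for c in range(row_len):
            arm = row_len - c - 1
            leg = sum(1 for r2 in range(r + 1, len(shape)) if shape[r2] > c)
            hooks *= arm + leg + 1
    return factorial(n) // hooks

# k = 2, alpha = 1: F_{2,1}(n) = sum of (f^lambda)^2 over partitions
# of n with at most two rows, which equals the n-th Catalan number.
for n in range(1, 12):
    F = sum(f_shape((n - j, j)) ** 2 for j in range(n // 2 + 1))
    assert F == comb(2 * n, n) // (n + 1)
```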
The proof of Theorem~\ref{AR_t:regev_sum} uses the hook length
formula for $f^\la$, factoring out the dominant terms from the sum
and interpreting what remains (in the limit $n \to \infty$) as a
$k$-dimensional integral. An explicit evaluation of this integral,
conjectured by Mehta and Dyson~\cite{Mehta-Dyson, Mehta},
has been proved
using Selberg's integral formula~\cite{Selberg}.
\medskip
Okounkov and Olshanski~\cite{OO1} introduced and studied a
non-homogeneous analogue of Schur functions, the {\dem shifted
Schur function}. As a combinatorial application, they gave an
explicit formula for the number of SYT of skew shape $\la/\mu$.
Stanley~\cite{Stanley_FPSAC} proved a formula for $f^{\la/\mu}$ in
terms of values of symmetric group characters. By applying the
Vershik-Kerov $\Sc_\infty$-theory together with the
Okounkov-Olshanski theory of shifted Schur functions, he used that
formula to deduce the asymptotics of $f^{\la/\mu}$. See
also~\cite{Stanley-skew}.
\medskip
Asymptotic methods were applied to show that certain distinct
ordinary shapes have the same multiset of hook
lengths~\cite{Regev-Vershik}. Bijective and other purely
combinatorial proofs were given later~\cite{RZ, Bes, Kratt_hooks,
GY}.
\medskip
In two seminal papers, Logan and Shepp~\cite{Logan}, and
independently Vershik and Kerov~\cite{Kerov}, studied the problem
of the {\dem limit shape} of the pair of SYT which correspond,
under the RS correspondence, to a permutation chosen uniformly at
random from $\Sc_n$. In other words, choose each partition $\la$
of $n$ with probability $\mu_n(\la) = (f^{\la})^2 / n!$. This
probability measure on the set of all partitions of $n$ is called
{\dem Plancherel measure}.
It was shown in~\cite{Logan, Kerov} that, under Plancherel
measure, probability concentrates near one asymptotic shape. See
also~\cite{Borodin}.
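Plancherel measure can be made concrete for small $n$: a minimal Python sketch of row-insertion RSK (illustrative only) confirms that among all $4!$ permutations the insertion shape $\la$ occurs exactly $(f^\la)^2$ times, e.g., four times for $\la=(2,2)$ since $f^{(2,2)}=2$.

```python
from itertools import permutations
from math import factorial

def rsk_shape(perm):
    """Shape of the insertion tableau under row-insertion RSK."""
    rows = []
    for x in perm:
        for row in rows:
            # bump the first entry of the row that exceeds x
            for i, y in enumerate(row):
                if y > x:
                    row[i], x = x, y
                    break
            else:
                row.append(x)  # x fits at the end of this row
                break
        else:
            rows.append([x])   # x starts a new row
    return tuple(len(r) for r in rows)

n = 4
counts = {}
for p in permutations(range(1, n + 1)):
    sh = rsk_shape(p)
    counts[sh] = counts.get(sh, 0) + 1

# Plancherel weight of lambda is (f^lambda)^2 / n!, so each shape
# should occur (f^lambda)^2 times among the n! permutations.
assert sum(counts.values()) == factorial(n)
assert counts[(2, 2)] == 4   # f^{(2,2)} = 2
```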
\begin{theorem}{\rm \cite{Logan, Kerov}}
Draw a random ordinary diagram of size $n$ in Russian notation
(see Subsection~\ref{AR_s:def_classical_shapes}) and scale it down
by a factor of $n^{1/2}$. Then, as $n$ tends to infinity, the
shape converges in probability, under Plancherel measure, to the
following limit shape:
\[
f(x)=
\begin{cases}
\frac{2}{\pi}(x \arcsin\frac{x}{2}+\sqrt{4-x^2}), & \hbox{if } |x|\le 2; \\
|x|, & \hbox{if } |x|> 2.
\end{cases}
\]
\end{theorem}
This deep result had significant impact on mathematics in recent
decades~\cite{Romik_book}.
\medskip
A closely related problem is to find the shape which maximizes
$f^\la$.
First, notice that
Corollary~\ref{AR_t:RSK_cor1} implies that
\[
\sqrt{\frac{n!}{p(n)}}\le \max \{f^\la :\, \la \vdash n\} \le
\sqrt {n!},
\]
where $p(n)$ is the number of partitions of $n$: indeed,
$\sum_{\la \vdash n} (f^\la)^2 = n!$, so the maximal summand is at
least the average $n!/p(n)$ and at most the total $n!$.
\begin{theorem}{\rm \cite{VK2}}
\begin{itemize}
\item[(1)] There exist constants $c_1 > c_0 > 0$ such that
\[
e^{-c_1 \sqrt n} \sqrt {n!}\le \max \{f^\la :\, \la \vdash n\} \le
e^{-c_0 \sqrt n} \sqrt {n!}.
\]
\item[(2)] There exist constants $c'_1 > c'_0 > 0$ such that
\[
\lim_{n \to \infty} \mu_n \left\{ \la \vdash n \,:\, c'_0 <
-\frac{1}{\sqrt n} \ln \frac{f^\lambda}{\sqrt{n !}} < c'_1 \right\}
= 1.
\]
\end{itemize}
\end{theorem}
Similar phenomena occur when Plancherel measure is replaced by
other measures. For the uniform measure see, e.g., \cite{Pittel, Pittel_02}.
\medskip
Motivated by the limit shape result, Pittel and Romik proved that
there exists a limit shape to the two-dimensional surface defined
by a uniform random SYT of rectangular shape~\cite{Pittel-Romik}.
\bigskip
Consider a fixed $(i,j)\in \bbz_+^2$ and a SYT $T$
chosen according to some probability distribution on the SYT of size $n$.
A natural task is to estimate the probability that $T(i, j)$ has a prescribed value.
Regev was the first to give an asymptotic answer to this problem,
for some probability measures,
using $\Sc_\infty$-theory~\cite{Regev-probability};
see also~\cite{Olshanski-Regev}.
A combinatorial approach was suggested later by McKay, Morse and
Wilf~\cite{Wilf}.
Here is an interesting special case.
\begin{proposition}\label{Regev-Wilf}{\rm \cite{Regev-probability, Wilf}}
For a random SYT $T$ of order $n$ and a fixed integer $k>1$,
\[
\Prob(T(2,1) = k) = \frac{k-1}{k!}+O(n^{-3/2}).
\]
\end{proposition}
In~\cite{Wilf}, Proposition~\ref{Regev-Wilf} was deduced from the
following theorem.
\begin{theorem}\label{MW}{\rm \cite{Wilf}}
Let $\mu\vdash k$ be a fixed partition and let $T$ be a fixed SYT
of shape $\mu$. Let $n\ge k$ and let $N(n;T)$ denote the number of
SYT with $n$ cells that contain $T$. Then
\[
N(n;T)\sim \frac{t_n f^\mu}{k!},
\]
where $t_n$ denotes the number of involutions in the symmetric
group $\Sc_n$.
\end{theorem}
It follows that
\[
\sum\limits_{\la\vdash n}f^{\la/\mu}\sim \frac{t_n f^\mu}{k!}.
\]
Stanley~\cite{Stanley_FPSAC}, applying techniques of symmetric
functions, deduced precise formulas for $N(n;T)$ as finite linear
combinations of the numbers $t_n$.
\bibliographystyle{amsplain}
\section{INTRODUCTION}
In hyperparameter tuning of machine learning models, we seek to find hyperparameters $x$ in some set $\mathbb{A} \subseteq \mathbb{R}^d$ to minimize the validation error $f(x)$, i.e., to solve
\begin{equation}
\min_{x\in\mathbb{A}} f(x)
\label{eqn:min_f}
\end{equation}
Evaluating $f(x)$ can take substantial time and computational power \citep{bergstra2012random}
and may not provide gradient evaluations. Bayesian optimization, which requires relatively few
function evaluations, provides a compelling approach to such optimization problems \citep{jones1998efficient, snoek2012practical}.
As the computational expense of training and testing a modern deep neural network for a single set of hyperparameters has grown, researchers have sought to supplant some evaluations of $f(x)$ with computationally inexpensive low-fidelity approximations.
Conceptually, an algorithm can use low-fidelity evaluations to quickly identify a smaller set of promising hyperparameter settings, and then later focus more expensive high-fidelity evaluations within this set to refine its estimates.
Pioneering multi-fidelity approaches focused on hyperparameter tuning for deep neural networks
include the Bayesian optimization methods FaBOLAS \citep{klein2016fast,klein2015towards}, Freeze-Thaw Bayesian Optimization \citep{swersky2014freeze}, BOCA \citep{kandasamy2017multi}, predictive entropy search for a single continuous fidelity \citep{mcleod2017practical}, early-stopping SMAC \citep{domhan2015speeding}, and
Hyperband \citep{li2016hyperband}.
This work builds on
earlier multi-fidelity optimization approaches \citep{huang2006sequential,lam2015multifidelity,poloczek2016multi} focused on low-fidelity approximation of physics-based computational models.
These validation error approximations
perform the same training and testing steps as in standard Bayesian optimization, but control fidelity with fewer training iterations than required for convergence, fewer training data points, or fewer validation data points.
These approximations present unique opportunities not typically considered in the multifidelity literature, even within the portion focused on hyperparameter tuning.
First, we observe a full \emph{trace} of performance with respect to training iterations, rather than just a single performance value at the chosen fidelity.
Indeed, training with $s$ iterations produces evaluations of the low-fidelity approximation for {\it all} training iterations less than or equal to $s$.
Second, by caching state after completing $s$ iterations, we can significantly reduce computation time when later evaluating for $s'>s$ evaluations. This allows quickly evaluating low-fidelity approximations to the validation error for many hyperparameter settings, then later returning to those most promising hyperparameter settings to cheaply obtain more accurate observations.
Third, we may simultaneously alter fidelity along several continuous dimensions (iterations, training data, validation data), rather than modifying one continuous fidelity control or choosing from among a discrete set of ambiguously related fidelities.
In this paper, we propose the trace-aware knowledge gradient (taKG) for Bayesian optimization with multiple fidelities. taKG is distinctive in that it leverages both trace information and multiple fidelity controls at once, efficiently selecting training size, validation size, number of training iterations, and hyperparameters to optimize. Moreover, we provide a provably-convergent method for maximizing this acquisition function.
taKG addresses the challenges presented by trace observations by considering the reduced cost of adding iterations at a previously evaluated point, and using an intelligent selection scheme to choose a subset of the observed training iterations to include in inference.
Additionally, taKG can be used in either a batch or sequential setting, and can also efficiently leverage gradient information if it is available.
We present two variants of our trace-aware knowledge-gradient acquisition function, one for when the cost of sampling is substantial over the whole fidelity space (even when using few training points or iterations), and the other for when the cost and value of information vanish as fidelity decreases to 0. The first form we refer to simply as taKG, and the second as 0-avoiding taKG ($\text{taKG}^\emptyset$) because it avoids the tendency of other multi-fidelity methods to measure repeatedly at near-0 fidelities even when these low fidelities provide almost no useful information. Alternative approaches \citep{mcleod2017practical,klein2016fast} add and tune a fixed cost per sample to avoid this issue, while $\text{taKG}^\emptyset$ does not require tuning.
Furthermore, we present a novel efficient method to optimize these acquisition functions, even though they cannot be evaluated in closed form.
This method first constructs a stochastic gradient estimator which it then uses within multistart stochastic gradient ascent. We show that our stochastic gradient estimator is unbiased and thus asymptotically consistent, and the resulting stochastic gradient ascent procedure converges to a local stationary point of the acquisition function.
Our numerical experiments demonstrate a significant improvement over state-of-the-art alternatives, including FaBOLAS \citep{klein2016fast,klein2015towards}, Hyperband \citep{li2016hyperband}, and
BOCA \citep{kandasamy2017multi}. Our approach is also applicable to problems that do not have trace observations, but use continuous fidelity controls, and we additionally show strong performance in this setting.
In general, efficient and flexible multifidelity optimization is of crucial practical importance, as evidenced by growing momentum in this research area. Although Bayesian optimization has shown great promise for tuning hyperparameters of machine learning algorithms, computational bottlenecks have remained a major deterrent to mainstream adoption. With taKG, we leverage crucial trace information, while simultaneously providing support for several fidelity controls, providing remarkably efficient optimization of expensive objectives. This work is intended as a step towards the renewed practical adoption of Bayesian optimization for machine learning.
\section{THE taKG AND $\text{taKG}^\emptyset$ ACQUISITION FUNCTIONS}
\label{sect:taKG}
In this section we define the trace-aware knowledge-gradient acquisition function.
\sectn{sec:obj} defines our formulation of multi-fidelity optimization with traces and continuous fidelities, along with our inference procedure.
\sectn{sect:valuing} describes a measure of expected solution quality possible after observing a collection of fidelities within a trace.
\sectn{sect:voi} uses this measure to define the $\text{taKG}$ acquisition function,
and \sectn{sect:mvoi} defines an improved version, $\text{taKG}^\emptyset$, appropriate for settings in which
the cost and value of information vanish together (for example, as the number of training iterations declines to 0).
\sectn{sect:optimize} then presents a computational approach for maximizing the $\text{taKG}$ and $\text{taKG}^\emptyset$ acquisition functions and theoretical results justifying its use. \sectn{sect:warm-start} discusses warm-starting previously stopped traces, and \sectn{sect:extensions} briefly discusses generalizations to batch and derivative observations.
\subsection{Problem Setting}
\label{sec:obj}
We model our objective function and its inexpensive approximations by a real-valued function $g(x, s)$ where our objective is $f(x) := g(x, 1)$ and $s \in [0,1]^m$ denotes the $m$ fidelity-control parameters. (Here, $1$ in $g(x,1)$ is a vector of $1$s.)
We assume that our fidelity controls have been re-scaled so that $1$ is the highest fidelity and $0$ the lowest.
$g(x,s)$ can be evaluated, optionally with noise, at a cost depending on $x$ and $s$.
We let $B(s)$ be the set of fidelities observed for free when observing fidelity $s$ (including $s$ itself). Although our framework can be easily generalized, we assume that $B(s)$ is a cross product of sets of the form either $[0,s_i]$ (\emph{trace fidelities}) or $\{s_i\}$ (\emph{non-trace fidelities}).
We let $m_1$ denote the number of trace fidelities and $m_2$ the number of non-trace fidelities.
We also assume that the cost of evaluation is non-decreasing in each component of the fidelity.
For example, consider hyperparameter tuning with $m=2$ fidelities: first is the number of training iterations;
second is the amount of training data.
Each is bounded between $0$ and some maximum value, and $s_i \in [0,1]$ specifies training iterations or number of training data points as a fraction of this maximum value.
Then, $B(s) = [0,s_1] \times \{s_2\}$, because we observe results for the number of training iterations ranging from $0$ up to the number evaluated.
If the amount of validation data is another trace fidelity, we would have: $B(s) = [0,s_1] \times \{s_2\} \times [0,s_3]$.
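In code, membership in $B(s)$ amounts to a coordinatewise check. The following Python sketch is schematic: the tuple representation of fidelity vectors and the name \texttt{trace\_dims} are illustrative assumptions, not part of our implementation.

```python
def in_trace_set(s_prime, s, trace_dims):
    """Check whether fidelity vector s_prime lies in B(s):
    trace dimensions may take any value in [0, s_i];
    non-trace dimensions must equal s_i exactly."""
    for i, (a, b) in enumerate(zip(s_prime, s)):
        if i in trace_dims:
            if not (0.0 <= a <= b):
                return False
        elif a != b:
            return False
    return True

# two fidelities: training iterations (trace) and training-data
# fraction (non-trace), so B(s) = [0, s_1] x {s_2}
s = (0.8, 0.5)
assert in_trace_set((0.3, 0.5), s, trace_dims={0})
assert not in_trace_set((0.3, 0.4), s, trace_dims={0})
```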
We model $g$ using Gaussian process regression jointly over $x$ and $s$, assuming that observations are perturbed by independent normally distributed noise with mean $0$ and variance $\sigma^2$. Each evaluation consists of $x$, a vector of fidelities $s$, and a noisy observation of $g(x,s')$ for each fidelity $s'$ in $B(s)$. For computational tractability, in our inference, we will choose to retain and incorporate observations only from a subset $\mathcal{S} \subseteq B(s)$ of these fidelities with each observation. After $n$ such evaluations, we will have a posterior distribution on $g$ that will also be a Gaussian process, and whose mean and kernel we refer to by $\mu_n$ and $K_n$. We describe this inference framework in more detail in the supplement.
We model the logarithm of the cost of evaluating $g$ using a separate Gaussian process, updated after each evaluation,
and let $\text{cost}_n(x,s)$ be the predicted cost after $n$ evaluations.
We assume for now that the cost of evaluation does not depend on previous evaluations, and then discuss later in \sectn{sect:warm-start} an extension to warm-starting evaluation at higher fidelities using past lower-fidelity evaluations.
\subsection{Valuing Trace Observations}
\label{sect:valuing}
Before defining the $\text{taKG}$ and $\text{taKG}^\emptyset$ acquisition functions, we define a function $L_n$ that quantifies the extent to which observing trace information improves the quality of our solution to \eqref{eqn:min_f}.
Let $\mathbb{E}_n$ indicate the expectation with respect to the posterior $\mathbb{P}_n$ after $n$ evaluations. Given any $x$ and set of fidelities $\S$,
we will define a function $L_n(x,\S)$ to be the expected loss (with respect to the time-$n$ posterior) of our final solution to \eqref{eqn:min_f} if we are allowed to first observe $x$ at all fidelities in $\S$.
To define this more formally, let $\mathcal{Y}(x,\S)$ be a random vector comprised of observations of $g(x,s)$ for all $s \in \S$. Then, the conditional expected loss from choosing a solution $x'$ to \eqref{eqn:min_f} after this observation is $\mathbb{E}_n\left[g(x',1) \mid \mathcal{Y}(x,\S)\right]$.
This quantity is a function of $x$, $\S$, $\mathcal{Y}(x,\S)$, and the first $n$ evaluations, and
can be computed explicitly using formulas from GP regression given in the supplement.
We would choose the solution for which this is minimized, giving a conditional expected loss of $\min_{x'} \mathbb{E}_n\left[g(x',1) \mid \mathcal{Y}(x,\S) \right]$. This is a random variable under the time-$n$ posterior whose value depends on $\mathcal{Y}(x,\S)$.
We finally take the expected value under the time-$n$ posterior to obtain $L_n(x,\S)$:
\begin{equation}
\begin{split}
&L_n(x,\S) := \mathbb{E}_n\left[ \min_{x'} \mathbb{E}_n\left[g(x',1) \mid \mathcal{Y}(x,\S) \right]\right] \\
&\!=\!\int\!\mathbb{P}_n\!\left(\mathcal{Y}(x,\S)\!=\!y\right) \min_{x'} \mathbb{E}_n\left[g(\!x'\!,\!1\!)\!\mid\!\mathcal{Y}(\!x\!,\!\S)\!=\!y \right]\, dy,
\end{split}
\end{equation}
where the integral is over all $y \in \mathbb{R}^{|\S|}$.
We compute this quantity using simulation. To create one replication of this simulation we first simulate $(g(x,s) : s \in S)$ from the time-$n$ posterior. We then add simulated noise to this quantity to obtain a simulated value of $\mathcal{Y}(x,\S)$. We then update our posterior distribution on $g$ using this simulated data, allowing us to compute $\mathbb{E}_n\left[g(x',1) \mid \mathcal{Y}(x,\S)\right]$ for any given $x'$ as a predicted value from GP regression.
We then use a continuous optimization method designed for inexpensive evaluations with gradients (e.g., multi-start L-BFGS) to optimize this value, giving one replication of $\min_{x'} \mathbb{E}_n\left[g(x',1) \mid \mathcal{Y}(x,\S)\right]$. Averaging many replications gives an unbiased and asymptotically consistent estimate of $L_n(x,\S)$.
In a slight abuse of notation, we also define $L_n(\emptyset) = \min_{x'} \mathbb{E}_n\left[g(x',1) \right]$. This is the minimum expected loss we could achieve by selecting a solution without observing any additional information.
This is equal to $L_n(x,\emptyset)$ for any $x$.
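This simulation can be sketched in a few lines for a toy one-dimensional problem with a single full-fidelity observation, so that $\S=\{1\}$ and observing $x$ yields one noisy evaluation. The RBF kernel, the grid of candidate solutions, and all numerical values below are illustrative assumptions, not our method's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, ls=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

X_obs = np.array([0.1, 0.5, 0.9])   # previously evaluated points
y_obs = np.array([0.8, 0.2, 0.6])   # their noisy objective values
noise = 1e-2
grid = np.linspace(0.0, 1.0, 50)    # candidate solutions x'

def gp_post(X, y, X_star):
    """Zero-mean GP posterior mean and covariance on X_star."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X_star, X)
    mu = Ks @ np.linalg.solve(K, y)
    cov = rbf(X_star, X_star) - Ks @ np.linalg.solve(K, Ks.T)
    return mu, cov

def L_n(x_cand, n_mc=200):
    """Monte-Carlo estimate of L_n(x, {1}): the expected posterior
    minimum after one further noisy observation at x_cand."""
    mu_x, cov_x = gp_post(X_obs, y_obs, np.array([x_cand]))
    sd = np.sqrt(max(cov_x[0, 0], 0.0) + noise)
    vals = []
    for _ in range(n_mc):
        y_sim = mu_x[0] + sd * rng.standard_normal()  # simulated Y(x, S)
        mu_new, _ = gp_post(np.append(X_obs, x_cand),
                            np.append(y_obs, y_sim), grid)
        vals.append(mu_new.min())          # inner min over x'
    return float(np.mean(vals))            # outer expectation

L_empty = gp_post(X_obs, y_obs, grid)[0].min()  # L_n(emptyset)
voi = L_empty - L_n(0.3)                        # value of information at x = 0.3
assert voi > -0.05  # nonnegative up to Monte-Carlo error
```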
The need to compute $L_n(x,\S)$ via simulation will present a challenge when optimizing acquisition functions defined in terms of it. Below, in \sectn{sect:optimize} we will overcome this challenge via a novel method for simulating unbiased estimators of the gradient of $L_n(x,\S)$ with respect to $x$ and the components of $\S$. First, however, we define the $\text{taKG}$ and $\text{taKG}^\emptyset$ acquisition functions.
\subsection{Trace-aware Knowledge Gradient (taKG)}
\label{sect:voi}
The $\text{taKG}$ acquisition function will value observations of a point and a collection of fidelities according to the ratio of the reduction in expected loss (as measured using $L_n$) that it induces, to its computational cost.
While evaluating $x$ at a fidelity $s$ in principle provides observations of $g(x,s')$ at all $s' \in B(s)$, we choose to retain and include in our inference only a subset of the observed fidelities $\S \subseteq B(s)$. This reduces computational overhead in GP regression. In our numerical experiments, we take the cardinality of $\S$ to be either 2 or 3, though the approach also allows increased cardinality.
With this in mind, the $\text{taKG}$ acquisition function at a point $x$ and set of fidelities $\S$ at time $n$ is
\begin{equation*}
\text{taKG}_n(x, \mathcal{S}) := \frac{L_n(\emptyset) - L_n(x,\S)}{\text{cost}_n(x, \max \S)},
\end{equation*}
where we also refer to the numerator as the {\it value of information} \citep{Ho66},
$\text{VOI}_n(x,\S) := L_n(\emptyset) - L_n(x,\S)$.
Thus, $\text{taKG}$ quantifies the value of information per unit cost of sampling.
The cost of observing at all fidelities in $\S$ is taken here to be the cost of evaluating $g$ at a vector of fidelities equal to the elementwise maximum, $\max \mathcal{S} := (\max_{s \in \mathcal{S}} s_i: 1 \le i \le m)$.
This is the least expensive fidelity at which we could observe $\S$.
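A minimal sketch of this elementwise maximum, with fidelity vectors represented as tuples (illustrative only):

```python
def max_fidelity(S):
    """Elementwise maximum over a set of fidelity vectors: the cheapest
    single fidelity whose trace set covers every vector in S."""
    m = len(next(iter(S)))
    return tuple(max(s[i] for s in S) for i in range(m))

# two fidelities; the first (e.g. training iterations) is a trace
# fidelity, so both points are observed by one evaluation at (0.7, 0.5)
S = {(0.2, 0.5), (0.7, 0.5)}
assert max_fidelity(S) == (0.7, 0.5)
```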
The taKG algorithm chooses to sample at the point $x$, fidelity $s$, and additional lower-fidelity point(s) $\S \setminus \{s\}$ to retain that jointly maximize the $\text{taKG}$ acquisition function, among all fidelity sets $\S$ with limited cardinality $\ell$:
\begin{equation}
\max_{x,s,\mathcal{S}:\mathcal{S}\subseteq B\left(s\right),\left|\mathcal{S}\right|=\ell,s\in\mathcal{S}} \text{taKG}_n \left(x, \mathcal{S} \right).
\label{eqn:max-taKG}
\end{equation}
This is a continuous optimization problem whose decision variable is described by $d + \ell m_1 + m_2$ real numbers: $d$ describe $x$, $m=m_1+m_2$ describe $s$, and the remaining $(\ell-1)m_1$ describe $\S \setminus \{s\}$.
\subsection{0-avoiding Trace-aware Knowledge Gradient ($\text{taKG}^\emptyset$)}
\label{sect:mvoi}
The $\text{taKG}$ acquisition function uses the value of information per unit cost of sampling.
When the value of information and cost become small simultaneously, as when we shrink training iterations or training data to 0 in hyperparameter tuning, this ratio becomes sensitive to misspecification of the GP model on $g$. We first discuss this issue, and then develop a version of $\text{taKG}$ for these settings.
To understand this issue, we first observe that the value of information for sampling $g(x,s)$, for any $s$, is strictly positive when the kernel has strictly positive entries.
\begin{proposition}
If the kernel function $K_n((x, s), (x', 1)) > 0$ for any $x, x' \in \mathbb{A}$, then for any $x \in \mathbb{A}$ and any $s\in[0,1]^m$, $\text{VOI}_n(x, \left\{ s\right\}) > 0$.
\label{prop:positive}
\end{proposition}
Proposition~\ref{prop:positive} holds even if $s=0$, or has some components set to 0.
Thus, if the estimated cost at such extremely low fidelities is small relative to the (strictly positive) value of information there, $\text{taKG}$ may be drawn to sample them, even though the value of information is small. We may even spend a substantial portion of our budget evaluating $g(x,0)$ at different $x$.
This is usually undesirable.
For example, in hyperparameter tuning with training iterations as our fidelity,
fidelity $0$ corresponds to training a machine learning model with no training iterations.
This would return the validation error on initial model parameter estimates.
While this likely provides {\it some} information about the validation error of a fully trained model,
specifying a kernel over $g$ that productively uses this information from a large number of hyperparameter sets $x$
would be challenging.
This issue becomes even more substantial when considering training iterations
and training data together, as we do here, because cost nearly vanishes
as either fidelity vanishes. Thus, there are many
fidelities at each $x$ that we may be drawn to oversample.
This issue is not specific to \text{taKG}.
It also occurs in previous literature
\citep{klein2016fast,mcleod2017practical,klein-bayesopt17}
when using the ratio of information gain to cost in
an entropy search or predictive entropy search method based on the same predictive model.
To deal with this issue, \citet{klein2016fast,mcleod2017practical} artificially inflate the cost of evaluating at fidelity $0$ to penalize low-fidelity evaluations. Similarly, \citet{klein-bayesopt17} recommends adding a fixed cost to all evaluations, motivated by the overhead of optimizing the acquisition function, but then recommends setting this cost to the same order of magnitude as a full-fidelity evaluation, even though the overhead of optimizing a BO acquisition function with well-written code and efficient methodology is usually substantially smaller. As a result, any fixed cost must be tuned to the application setting to avoid oversampling at excessively small fidelities while still allowing sampling at moderate fidelities.
Here, we propose an alternate solution that we find works well without tuning,
focusing on the setting where the cost of evaluation becomes small as the smallest component in $s$ approaches 0.
We first define $C(s) = \cup_{i=1}^m \{ s' : s'_i = 0, s'_j = s_j\ \forall j\ne i \}$
to be the set of fidelities obtained by replacing one component of $s$ by $0$.
We then let $C(\S) = \cup_{s \in \S} C(s)$ be the union of these fidelities over $s \in \S$.
For example, suppose $s_1$ is a trace fidelity (say, training iterations), $s_2$ is a non-trace fidelity (say, training data size), and $\S = \{ (1/2, 1), (1,1) \}$, corresponding to an evaluation of $g$ at $s=(1,1)$ and retention of the point $(1/2,1)$ from the trace $B((1,1))$.
Then $C(\mathcal{S}) = \{ (0,1), (1/2, 0), (1,0) \}$.
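To make the definition of $C(\cdot)$ concrete, here is a small plain-Python sketch (our notation, not library code) that reproduces the worked example above:

```python
def C_of_s(s):
    """Fidelities obtained from s by zeroing exactly one component."""
    return {tuple(0 if j == i else s[j] for j in range(len(s)))
            for i in range(len(s))}

def C_of_S(S):
    """Union of C(s) over all s in S."""
    out = set()
    for s in S:
        out |= C_of_s(s)
    return out

# The worked example from the text: S = {(1/2, 1), (1, 1)}.
S = {(0.5, 1), (1, 1)}
assert C_of_S(S) == {(0, 1), (0.5, 0), (1, 0)}
```

Note that the two elements of $\S$ share the zeroed point $(0,1)$, so the union has three elements rather than four.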
Fidelities in $C(\S)$ (for any $\S$) are extremely inexpensive to evaluate and provide extremely small but strictly positive value of information.
These, and ones close to them, are ones we wish to avoid sampling, even when $\text{taKG}$ is large.
To accomplish this, we modify our value of information
$\text{VOI}_n(x,\S) = L_n(\emptyset) - L_n(x,\S)$
to suppose that free observations $\mathcal{Y}(x,s')$ will be provided at the problematic low fidelities $s' \in C(\S)$.
Our modified value of information will suppose these free observations will be provided to both the benchmark, previously set to $L_n(\emptyset)$, and to the reduced expected loss, previously set to $L_n(x,\S)$, achieved through observing $x$ at fidelities $\S$.
The resulting modified value of information is
\begin{equation*}
\text{VOI}^\emptyset_n(x,\S) = L_n(x,C(\S)) - L_n(x,\S \cup C(\S)).
\end{equation*}
We emphasize our algorithm will not evaluate $g$ at fidelities in $C(\S)$. Instead, it will simulate these evaluations according to the algorithm in \sectn{sect:valuing}.
We define the $\text{taKG}^\emptyset$ acquisition function using this modified value of information as
\begin{equation}
\text{taKG}^\emptyset_n(x, \mathcal{S}) = \frac{\text{VOI}^\emptyset_n(x,\S)}{\text{cost}_n(x , \max \S)}
\label{eqn:takg0}
\end{equation}
To find the point $x$ and fidelity $s$ to sample, we optimize $\text{taKG}^\emptyset$ over
$x$, fidelity $s$, and additional lower-fidelity point(s) $\S \setminus \{s\}$ as we did in \eqref{eqn:max-taKG}.
We refer to this VOI and acquisition function as ``0-avoiding,'' because they place 0 value on fidelities with any component equal to 0.
This prevents sampling at these points as long as the cost of sampling is strictly positive.
Indeed, suppose $s=\max(\S)$ has a component equal to $0$. Then
each element in $\S$ will have one component equal to $0$, and $\S \subseteq C(\S)$.
Then $\text{VOI}^\emptyset_n(x,\S) = L_n(x,C(\S)) - L_n(x,C(\S)\cup \S) = 0$.
Moreover, the following proposition shows that if $s=\max(\S)$ has all components strictly positive and additional regularity conditions hold, then $\text{VOI}^\emptyset_n(x,\S)$ is also strictly positive.
\begin{proposition}
If $s=\max(\S)$ has all components strictly positive, our kernel $K_n$ is positive definite, and the hypothesis of Proposition 1 is satisfied for $K_n$ given $g(x,C(\S))$, then $\text{VOI}^\emptyset_n(x,\S)$ is strictly positive.
\end{proposition}
Thus, maximizing $\text{taKG}^\emptyset$ will never choose to sample at a fidelity $s$ with a $0$ component.
Additionally, under other regularity conditions (see Corollary~1 in the supplement), $\text{VOI}^\emptyset_n(x,\S)$ is continuous in $\S$, and so the property that $\text{VOI}^\emptyset_n(x,\S)=0$ when a component of $s=\max(\S)$ is 0 also discourages sampling at $s$ whose smallest component is {\it close} to 0.
\subsection{Efficiently maximizing $\text{taKG}$ and $\text{taKG}^\emptyset$}
\label{sect:optimize}
The $\text{taKG}$ and $\text{taKG}^\emptyset$ acquisition functions are defined in terms of a hard-to-calculate function $L_n(x,\S)$.
Here, we describe how to efficiently maximize these acquisition functions using stochastic gradient ascent with multiple restarts. The heart of this method is a simulation-based procedure for simulating a stochastic gradient of $L_n(x,\S)$, i.e., a random variable whose expectation is the gradient of $L_n(x,\S)$ with respect to $x$ and the elements of $\S$.
To construct this procedure, we first provide a more explicit expression for $L_n(x,\S)$.
Because $L_n(x,\S)$ is the expectation of the minimum over $x'$ of $\mathbb{E}_n\left[g(x',1) \mid \mathcal{Y}(x,\S)\right]$,
we begin with the distribution of this conditional expectation for a fixed $x'$ under the time-$n$ posterior distribution.
This conditional expectation can be calculated with GP regression from previous observations, the new point $x$ and fidelities $\S$, and the observations $\mathcal{Y}(x,\S)$. This conditional expectation is linear in $\mathcal{Y}(x,\S)$.
Moreover, $\mathcal{Y}(x,\S)$ is the sum of $g(x,\S)$ (which is multivariate normal under the posterior) and optional observational noise (which is independent and normally distributed), and so is itself multivariate normal.
As a multivariate normal random variable, it can be written as the sum of its mean vector and the product of the Cholesky decomposition of its covariance matrix with an independent standard normal random vector, call it $W$. (The coefficients of this mean vector and covariance matrix may depend on $x$, $\S$, and previously observed data.) The dimension of $W$ is the number of components in the observation, $|\S|$.
Thus, the conditional expected value of the objective $\mathbb{E}_n\left[g(x',1) \mid \mathcal{Y}(x,\S)\right]$ is a linear function (through GP regression) of another linear function (through the distribution of the observation) of $W$.
We also have that the mean of this conditional expectation is
$\mathbb{E}_n[\mathbb{E}_n[g(x',1) \mid \mathcal{Y}(x,\S) ] ] =
\mathbb{E}_n[g(x',1) ] = \mu_n(x',1)$ by iterated conditional expectation.
These arguments imply the existence of a function $\tilde{\sigma}_n(x',x,\S)$ such that
$\mathbb{E}_n[g(x',1) \mid \mathcal{Y}(x,\S)]
= \mu_n(x',1) + \tilde{\sigma}_n(x',x,\S) W$ simultaneously for all $x'$.
In the supplement, we show
$\tilde{\sigma}_n(x',x,\S)= K_n\left((x', 1), x_{\S}\right) (D_{n}^T)^{-1}$
where $x_\mathcal{S} = \{(x, s): s \in \mathcal{S}\}$, and $D_{n}$ is the Cholesky factor of the covariance matrix $K_n\left(x_\mathcal{S}, x_\mathcal{S}\right)+\sigma^2 I$.
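As a concrete illustration, here is a minimal plain-Python sketch of this computation, using a toy RBF kernel as a stand-in for the posterior kernel $K_n$ (the kernel choice, the noise level $\sigma^2 = 10^{-2}$, and the tuple encoding of $(x,s)$ pairs are illustrative assumptions, not our actual model):

```python
import math

def rbf(u, v, ls=1.0):
    """Squared-exponential kernel on equal-length tuples; stand-in for K_n."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(u, v)) / (2 * ls ** 2))

def cholesky(A):
    """Lower-triangular L with L L^T = A, for a small SPD matrix."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(A[i][i] - s) if i == j else (A[i][j] - s) / L[j][j]
    return L

def sigma_tilde(xp, x_S, noise=1e-2):
    """sigma_tilde(x', x, S) = K((x',1), x_S) (D^T)^{-1}, computed by
    forward substitution: D y = k gives y^T = k (D^T)^{-1}."""
    n = len(x_S)
    A = [[rbf(x_S[i], x_S[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    D = cholesky(A)                     # Cholesky factor of K(x_S, x_S) + noise*I
    k = [rbf(xp, z) for z in x_S]       # K((x',1), x_S), encoded as tuples here
    y = []
    for i in range(n):
        y.append((k[i] - sum(D[i][j] * y[j] for j in range(i))) / D[i][i])
    return y
```

For $|\S|=1$ this reduces to the familiar closed form $\tilde{\sigma}_n = K_n((x',1),(x,s)) / \sqrt{K_n((x,s),(x,s)) + \sigma^2}$, which can be used to check the sketch.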
Thus,
\begin{equation*}
L_{n}\left(x,\S\right)=\mathbb{E}_{n}\left[\min_{x'} \mu_{n}\left(x',1\right)+\tilde{\sigma}_{n}\left(x', x, \S\right)W\right].
\end{equation*}
When certain regularity conditions hold,
\begin{align*}
\nabla L_n(x,\S)
&= \nabla \mathbb{E}_{n}\left[\min_{x'}\left(\mu_n\left(x',1\right)+\tilde{\sigma}_{n}\left(x', x, \S\right)W\right)\right]\\
&= \mathbb{E}_{n}\left[\nabla \min_{x'}\left(\mu_n\left(x',1\right)+\tilde{\sigma}_{n}\left(x', x, \S\right)W\right)\right]\\
&= \mathbb{E}_{n}\left[\nabla \left(\mu_n\left(x^*,1\right)+\tilde{\sigma}_{n}\left(x^*, x, \S\right)W\right)\right]\\
&= \mathbb{E}_{n}\left[\nabla \tilde{\sigma}_{n}\left(x^*, x, \S\right)W\right],
\end{align*}
where $x^*$ is a global minimum (over $x'\in \mathbb{A}$) of $\mu_n(x',1) + \tilde{\sigma}_n(x',x,\S)W$, and the gradient in the last line is taken holding $x^*$ fixed even though in reality its value depends on $x$.
Here, the interchange of expectation and the gradient is justified using results from infinitesimal perturbation analysis \citep{l1990unified}, and ignoring the dependence of $x^*$ on $x$ is justified using the envelope theorem \citep{milgrom2002envelope}.
We formalize this below in Theorem~\ref{stochastic_gradient}.
\begin{theorem}
\label{stochastic_gradient}
Suppose $\mathbb{A}$ is compact, $\mu_0$ is constant, the kernel $K_0$ is continuously differentiable, and $\mathrm{argmin}_{x'\in\mathbb{A}}\ \mu_{n}(x', 1)+\tilde{\sigma}_n\left(x', x, \S\right)W$ contains a single element almost surely. Then, $\nabla L_n(x,\S) = \mathbb{E}_n\left[ \nabla \tilde{\sigma}_n(x^*,x,\S) W \right]$.
\end{theorem}
With this result in place, we can obtain an unbiased estimator of $\nabla L_n(x,\S)$ by simulating $W$, calculating $x^*$, and then returning $\nabla \tilde{\sigma}_n(x^*,x,\S) W$.
Using this, together with the chain rule and an exact gradient calculation for $\text{cost}_n(x,\max \mathcal{S})$,
we can then compute stochastic gradients for $\text{taKG}$ and $\text{taKG}^\emptyset$.
We then use this stochastic gradient estimator within stochastic gradient ascent \citep{kushner2003stochastic}
to solve the optimization problem \eqn{max-taKG} (or the equivalent problem for maximizing $\text{taKG}^\emptyset$). The following theorem shows that, under the right conditions, a stochastic gradient ascent algorithm converges almost surely to a critical point of $\text{taKG}^\emptyset$. Its proof is in the supplement.
\begin{theorem}
Assume the conditions of Theorem \ref{stochastic_gradient}, $\mathbb{A}$ is a compact hyperrectangle, and $\text{cost}_n(x, \max \mathcal{S})$
is continuously differentiable and bounded below by a strictly positive constant. In
addition, assume that we optimize $\text{taKG}^\emptyset$ using a stochastic gradient
ascent method with the stochastic gradient from Theorem \ref{stochastic_gradient} whose stepsize
sequence $\left\{ \epsilon_{t}:t=0,1,\ldots\right\} $
satisfies $\epsilon_{t}\rightarrow0$, $\epsilon_{t}\geq0$,
$\sum_{t}\epsilon_{t}=\infty$ and $\sum_{t}\epsilon_{t}^{2}<\infty$.
Then the sequence of points $\left\{ x_{t}, \S_{t}\right\} _{t\geq0}$ from stochastic gradient ascent
converges almost surely to a connected set of stationary points of $\text{taKG}^\emptyset$.
\label{t:convergence}
\end{theorem}
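The stepsize conditions above are the classical Robbins--Monro conditions. The following toy sketch shows such a stochastic gradient ascent loop; the one-dimensional concave objective here is a hypothetical stand-in for $\text{taKG}^\emptyset$, not our actual acquisition function:

```python
import random

def noisy_grad(theta, rng):
    """Unbiased stochastic gradient of the toy objective f(t) = -(t - 2)^2."""
    return -2.0 * (theta - 2.0) + rng.uniform(-0.1, 0.1)

def sga(theta0=0.0, iters=5000, seed=0):
    """Stochastic gradient ascent with Robbins-Monro stepsizes:
    eps_t -> 0, sum eps_t = inf, sum eps_t^2 < inf."""
    rng = random.Random(seed)
    theta = theta0
    for t in range(iters):
        eps = 0.5 / (t + 1)          # satisfies the stepsize conditions
        theta += eps * noisy_grad(theta, rng)
    return theta

theta_hat = sga()
assert abs(theta_hat - 2.0) < 0.05   # converges near the maximizer theta = 2
```

In our setting the stochastic gradient of Theorem~\ref{stochastic_gradient} plays the role of \texttt{noisy\_grad}, and multiple restarts guard against convergence to poor stationary points.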
\subsection{Warm-starting from partial runs}
\label{sect:warm-start}
When tuning hyperparameters using training iterations as a fidelity,
we can cache the state of training after $s$ iterations, warm-start from it, and later continue training up to a larger number of iterations $s'$ for less than what $s'$ training iterations would cost at a new $x$.
We assume trace fidelities can be ``warm-started'' while non-trace fidelities cannot. We also assume the incremental cost of evaluating fidelity vector $s'$ when warm-starting from $s$ is the difference between the costs of evaluating each from a ``cold start''. We model the cost of a cold-start evaluation as in \sectn{sec:obj}, with a Gaussian process on the log of the cost. To obtain training data for this model, the cost observed for a warm-started evaluation is summed with the costs of the previous evaluations it continues, to approximate what the cold-start cost would have been. We set $\text{cost}_n(x,s)$ to the difference in estimated cold-start costs if a previous evaluation allows warm-starting, and to the estimated cold-start cost if not.
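A minimal sketch of this cost bookkeeping follows, with a known closed-form cost standing in for the GP estimate of cold-start cost. The cost function, the single trace fidelity, and the assumption that the non-trace component matches between the warm start and the new evaluation are all illustrative simplifications:

```python
def cold_cost(s):
    """Stand-in for the estimated cold-start cost; s = (iterations, data)."""
    return 0.01 + s[0] * s[1]

class WarmStartCosts:
    def __init__(self):
        self.best = {}  # x -> largest trace fidelity already evaluated at x

    def cost(self, x, s):
        """Estimated cost of evaluating (x, s), warm-starting if possible."""
        prev = self.best.get(x)
        if prev is not None and prev <= s[0]:
            # Incremental cost: difference of the two cold-start costs.
            return cold_cost(s) - cold_cost((prev, s[1]))
        return cold_cost(s)

    def record(self, x, s):
        """Remember the largest trace fidelity evaluated at x."""
        self.best[x] = max(s[0], self.best.get(x, 0.0))

wc = WarmStartCosts()
assert abs(wc.cost("a", (0.5, 1.0)) - 0.51) < 1e-9   # cold start
wc.record("a", (0.5, 1.0))
assert abs(wc.cost("a", (1.0, 1.0)) - 0.50) < 1e-9   # warm start from 0.5
assert abs(wc.cost("b", (1.0, 1.0)) - 1.01) < 1e-9   # new x: cold start
```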
While our approach to choosing $x$ and $s$ is still to optimize $\text{taKG}^\emptyset$, our computational approach from \sectn{sect:optimize} requires that $\text{cost}_n(x,s)$ be continuously differentiable (in Theorem~\ref{t:convergence}). This requirement is not met.
To address this, we modify the way we optimize the $\text{taKG}^\emptyset$ acquisition function.
(The approach we describe also works for $\text{taKG}$.)
First, we maintain a basket of size at most $b$ of previously evaluated point, fidelity pairs, $(x(j),s(j))$.
For each $j\le b$, we optimize $\text{taKG}^\emptyset_n(x(j),\S)$ letting $\S$ vary over those sets satisfying two conditions: (1) $|\S| = \ell$;
(2) $s' \ge s(j)$ componentwise for each $s' \in \S$,
with equality for non-trace fidelity components.
Over this set, $\text{cost}_n(x(j),\S)$ is continuously differentiable in $\S$ and the method from \sectn{sect:optimize} can be applied.
We also optimize $\text{taKG}^\emptyset_n(x,\S)$ over all $x$ and $\S$ with $|\S|=\ell$, but using the estimated cold-start cost function and the method from \sectn{sect:optimize}.
Among the solutions to these at most $b+1$ optimization problems, we select the $x$ and $\S$ that provide the largest $\text{taKG}^\emptyset_n(x,\S)$ at optimality, and evaluate $g$ at $x$ and $\max(\S)$.
We then update our basket.
We first add the $x$ and $\max(\S)$ produced by the optimization not constraining $x$.
If the basket size exceeds $b$, we then remove the $x$ and $s$ whose optimization over $\text{taKG}^\emptyset_n$ produced the smallest value.
In practice, we set $b=10$.
\subsection{Batch and Derivative Evaluations}
\label{sect:extensions}
$\text{taKG}$ and $\text{taKG}^\emptyset$ generalize naturally
following \citet{wu2016parallel} and \citet{wu2017bayesian}
to batch settings where we can evaluate multiple point, fidelity pairs simultaneously and derivative-enabled settings where we observe gradients.
The batch version uses the same acquisition functions $\text{taKG}$ and $\text{taKG}^\emptyset$ defined above, but optimizes over a {\it set} of values for $s$, each of which has an associated $\S \in B(s)$ of limited cardinality.
In the derivative-enabled setting, we incorporate (optionally noisy) gradient observations into our posterior distribution directly through GP regression. We also generalize the $\text{taKG}$ and $\text{taKG}^\emptyset$ acquisition functions to allow inclusion of gradients of the objective in the set of quantities observed $\mathcal{Y}(x,\S)$ in the definition of $L_n(x,\S)$.
\section{NUMERICAL EXPERIMENTS}
\label{sect:numerical}
We compare sequential, batch, and derivative-enabled $\text{taKG}^\emptyset$ with benchmark algorithms on synthetic optimization problems (\sectn{sect:synthetic}), hyperparameter optimization of neural networks (\sectn{sect:hyperexp}), and hyperparameter optimization for large-scale kernel learning (\sectn{sect:kernel-learning}). The synthetic and neural network benchmarks use fidelities with trace observations, while the large-scale kernel learning benchmark does not. We integrate over GP hyperparameters by sampling $10$ sets of values using the \texttt{emcee} package \citep{foreman2013emcee}.
\subsection{Optimizing synthetic functions} \label{sect:synthetic}
Here, we compare $\text{taKG}^\emptyset$ against both the sequential and batch versions of the single-fidelity algorithms KG \citep{wu2016parallel} and EI \citep{jones1998efficient,wang2015parallel}, a derivative-enabled single-fidelity version of KG \citep{wu2017bayesian}, Hyperband \citep{li2016hyperband}, and the multi-fidelity method BOCA \citep{kandasamy2017multi}. BOCA is the only previous method of which we are aware that treats multiple continuous fidelities. We do not compare against FaBOLAS \citep{klein2016fast,klein2015towards} because the kernel it uses is specialized to neural network hyperparameter tuning.
Following experiments in \citet{kandasamy2017multi}, we augment four synthetic test functions, 2-d Branin, 3-d Rosenbrock, 3-d Hartmann, and 6-d Hartmann, by adding one or two fidelity controls, as described in the supplement.
We set the cost of an individual evaluation of $x$ at fidelity $s$ to $0.01 + \prod_i s_i$.
Thinking of $s_1$ as mimicking training data and $s_2$ training iterations,
the term $\prod_i s_i$ mimics a cost of training that is proportional to the number of times a datapoint is visited in training. The term $0.01$ mimics a fixed cost associated with validating a trained model.
We set the cost of a batch evaluation to the maximum of the costs of the individual evaluations,
to mimic wall-clock time for running evaluations synchronously in parallel.
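This cost model can be sketched as:

```python
import math

def eval_cost(s):
    """Individual evaluation cost: fixed validation cost plus prod_i s_i."""
    return 0.01 + math.prod(s)

def batch_cost(batch):
    """Synchronous batch cost: wall-clock time is the slowest evaluation."""
    return max(eval_cost(s) for s in batch)

assert abs(eval_cost((1.0, 1.0)) - 1.01) < 1e-12          # full fidelity
assert abs(batch_cost([(0.5, 0.5), (1.0, 0.2)]) - 0.26) < 1e-12
```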
\newcommand{230pt}{230pt}
\newcommand{190pt}{190pt}
\begin{figure*}[tb]
\centering
\subfigure
\centering
\includegraphics[width=230pt, height = 190pt]{Branin.pdf}
\subfigure
\centering
\includegraphics[width=230pt, height = 190pt]{Rosenbrock.pdf}\\
\subfigure
\centering
\includegraphics[width=230pt, height = 190pt]{Hartmann3.pdf}
\subfigure
\centering
\includegraphics[width=230pt, height = 190pt]{Hartmann6.pdf}
\vspace{-8pt}
\caption{\small Optimizing synthetic functions: Plots show simple regret over $40$ independent runs for synthetic functions with trace observations and one or two continuous fidelity controls for 2-d Branin, 3-d Rosenbrock, 3-d Hartmann, and 6-d Hartmann problems. $\text{taKG}^\emptyset$ performs well compared with a variety of competitors in both sequential and batch settings.}
\vspace{-12pt}
\label{Fig_syn_taKG}
\end{figure*}
Fig.~\ref{Fig_syn_taKG} shows results. For methods that have both sequential and batch versions, the batch size is indicated with a number before the method name. For example, 8-EI indicates expected improvement performed with batches of size 8. We run versions of $\text{taKG}^\emptyset$ using $|\S|$ set to $2$ (taKG\_0 2-points) and $3$ (taKG\_0 3-points).
We first see that using the larger $|\S|$ improves the performance of $\text{taKG}^\emptyset$.
We then see that, for both values of $|\S|$, sequential $\text{taKG}^\emptyset$ performs well relative to sequential competitors (1-EI, 1-KG, 1-BOCA), and batch $\text{taKG}^\emptyset$ with batch size 8 (8-$\text{taKG}^\emptyset$) performs well relative to its batch competitors (8-EI, 8-KG, Hyperband).
Here, we consider Hyperband to be a batch method although the amount of parallelism it can leverage varies through the course of its operation.
\subsection{Optimizing hyperparameters of neural networks } \label{sect:hyperexp}
Here, we evaluate on hyperparameter optimization of neural networks. Benchmarks include the single-fidelity Bayesian optimization algorithms KG \citep{wu2016parallel} and EI \citep{jones1998efficient}, their batch versions, and the state-of-art hyperparameter tuning algorithms HyperBand and FaBOLAS. $\text{taKG}^\emptyset$ uses two fidelity controls: the size of the training set and the number of training iterations.
Following \citet{li2016hyperband}, we set the cost to the number of training examples passed during training divided by the number passed in full fidelity. For example, if we have $5\times 10^4$ training points and the maximum number of epochs is $100$, then the cost of evaluating a set of hyperparameters using $10^4$ sub-sampled training points per epoch over $10$ epochs is $10^4 \times 10 / (5\times10^4 \times 100) = 0.02$. Complete training has cost $1$.
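The arithmetic in this example can be checked with a short sketch:

```python
def tuning_cost(subsample, epochs, n_train, max_epochs):
    """Training examples passed, divided by those passed at full fidelity."""
    return (subsample * epochs) / (n_train * max_epochs)

# The example from the text: 10^4 sub-sampled points per epoch for 10 of 100
# epochs, with 5 * 10^4 training points available.
assert abs(tuning_cost(10**4, 10, 5 * 10**4, 100) - 0.02) < 1e-12
assert tuning_cost(5 * 10**4, 100, 5 * 10**4, 100) == 1.0   # complete training
```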
\begin{figure*}[tb]
\centering
\subfigure
\centering
\includegraphics[width=230pt, height = 190pt]{MNIST.pdf}
\subfigure
\centering
\includegraphics[width=230pt, height = 190pt]{CIFAR10.pdf}\\
\subfigure
\centering
\includegraphics[width=230pt, height = 190pt]{SVHN.pdf}
\subfigure
\centering
\includegraphics[width=230pt, height = 190pt]{KISSGP.pdf}
\vspace{-8pt}
\caption{\small We show the negative log marginal likelihood divided by the number of datapoints for tuning feedforward neural networks on MNIST (each with 20 runs); tuning convolutional neural networks on CIFAR-10 and SVHN (each with 10 runs); and for KISS-GP kernel learning.
$\text{taKG}^\emptyset$ outperforms competitors in both sequential and batch settings. }
\vspace{-12pt}
\label{Fig_deep_taKG}
\end{figure*}
\paragraph{Feedforward neural networks on MNIST}
\label{sect:mlps}
We tune a fully connected two-layer neural network on MNIST. The maximum number of epochs allowed is $20$. We optimize $5$ hyperparameters: learning rate, dropout rate, batch size and the number of units at each layer.
Fig.~\ref{Fig_deep_taKG} shows that sequential $\text{taKG}^\emptyset$ performs much better than the sequential methods KG, EI and the multi-fidelity hyperparameter optimization algorithm FaBOLAS. $\text{taKG}^\emptyset$ with a batch size $4$ substantially improves over batch versions of KG and EI, and also over the batch method Hyperband.
\paragraph{Convolutional neural networks on CIFAR-10 and SVHN}
\label{sect:cnns}
We tune convolutional neural networks (CNNs) on CIFAR-10 and SVHN. Our CNN consists of 3 convolutional blocks and a softmax classification layer. Each convolutional block consists of two convolutional layers with the same number of filters followed by a max-pooling layer. There is no dropout or batch-normalization layer. We split the CIFAR-10 dataset into 40000 training samples, 10000 validation samples and 10000 test samples. We split the SVHN training dataset into 67235 training samples and 6000 validation samples, and use the standard 26032 test samples. We apply standard data augmentation: horizontal and vertical shifts, and horizontal flips. We optimize $5$ hyperparameters to minimize the classification error on the validation set: the learning rate, batch size, and number of filters in each convolutional block. Hyperband uses the size of the training set as its resource (it can use only one resource or fidelity), with a bracket size of $s_{\max} = 4$ as in \citet{li2016hyperband} and the maximum resource allowed for a single configuration set to 40000.
We set the maximum number of training epochs for all algorithms to $50$ for CIFAR-10 and $40$ for SVHN.
Because of the computational expense of training CNNs, we leave out some benchmarks, dropping the single-fidelity method EI in favor of the structurally similar single-fidelity method KG, and performing batch evaluations for only some methods.
Fig.~\ref{Fig_deep_taKG} shows that sequential $\text{taKG}^\emptyset$ outperforms its competitors (including the batch method Hyperband) on both problems. Using batch evaluations with $\text{taKG}^\emptyset$ on CIFAR-10 improves performance even further.
When we train using optimized hyperparameters on the full training dataset for $200$ epochs, test data classification error is $\sim12\%$ for CIFAR-10 and $\sim5\%$ for SVHN.
\subsection{Optimizing hyperparameters for large-scale kernel learning}
\label{sect:kernel-learning}
We test derivative-enabled $\text{taKG}^\emptyset$ (ta-dKG$^\emptyset$) in a large-scale kernel learning example: the 1-d demo example for KISS-GP \citep{wilson2015kernel} on the GPML website \citep{gpml}. In this example, we optimize 3 hyperparameters (marginal variance, length scale, and variance of the noise) of a GP with an RBF kernel on $1$ million training points to maximize the log marginal likelihood. We evaluate both the log marginal likelihood and its gradient using the KISS-GP framework. We use two continuous fidelity controls: the number of training points and the number of inducing points. We set the maximum number of inducing points to $m=1000$.
We compare ta-dKG$^\emptyset$ to the derivative-enabled knowledge gradient ($\text{\it d-KG}$) \citep{wu2017bayesian}, using both algorithms in the sequential setting (1-dKG and 1-cf-dKG) and with a batch size of 4 (4-dKG and 4-cf-dKG). We leave out methods that are unable to utilize derivatives, as these are likely to substantially underperform.
Fig.~\ref{Fig_deep_taKG} shows that ta-dKG$^\emptyset$ successfully utilizes inexpensive function and gradient evaluations to find a good solution more quickly than d-KG, in both the sequential and batch setting.
\section{CONCLUSION}
\label{sect:conclusion}
We propose a novel multi-fidelity acquisition function, the trace aware knowledge-gradient, which
leverages special structure provided by trace observations,
is able to handle multiple simultaneous continuous fidelities, and
generalizes naturally to batch and derivative settings.
This acquisition function uses traces to find
good solutions to global optimization problems more quickly than state-of-the-art algorithms
in application settings including deep learning and kernel learning.
\bibliographystyle{apalike}
\section{Introduction}
\noindent
Due to
the increasing concerns in understanding the reasoning behind AI decisions for critical applications,
interpretable Machine Learning (ML) models gained a lot of attention.
Examples of such ML
applications include job recruitment, bank credit applications, and justice~\cite{EUlaw}.
Most traditional
approaches for building interpretable models are greedy,
for example, decision trees~\cite{DBLP:books/wa/BreimanFOS84, 10.1023/A:1022643204877/Quinlan1986, quinlan1993c45},
rule lists~\cite{DBLP:conf/icml/Cohen95, 10.1007/BFb0017011}, and
decision sets~\cite{DBLP:conf/kdd/LakkarajuBL16}.
Compared to traditional approaches,
exact methods offer guarantees of optimality on criteria such as model size and accuracy.
In this context,
combinatorial optimisation methods, such as Constraint Programming~\cite{Bonfietti2015, DBLP:journals/constraints/VerhaegheNPQS20}, Mixed Integer Programming~\cite{Angelino2018learning, aaai/Zhang19, DBLP:conf/aaai/AglinNS20}, or Boolean Satisfiablility ({SAT}{})~\cite{bessiere-cp09, DBLP:conf/ijcai/NarodytskaIPM18, DBLP:conf/aaai/Avellaneda20, DBLP:conf/ijcai/Hu0HH20, DBLP:conf/sat/JanotaM20, DBLP:conf/cp/YuISB20} have been successfully used to learn interpretable models.
These declarative approaches are particularly interesting since they offer certain flexibility to handle additional requirements when learning a model.
Decision Trees are widely used as a standard interpretable model.
However,
they suffer from two major flaws: \textit{replication} and \textit{fragmentation}~\cite{89-ijcai-duplication,90-ml, Rokach-14-dm-dt-book}. The \textit{replication} problem appears when two identical subtrees occur in the decision tree.
The \textit{fragmentation} problem appears when only a few samples are associated with leaf nodes.
By providing compact representations for Boolean functions,
Binary Decision Diagrams (\texttt{BDD}{}s)~\cite{78-bdd, 82-bdd, 86-bdd, KnuthTAOCP4} are widely studied for hardware design, model checking, and knowledge representation.
In the context of ML,
a \texttt{BDD}{} can be viewed as an interpretable model for binary classification. In addition, \texttt{BDD}{}s were extended to multi-class classification, known as \textit{decision graphs}, and
heuristic methods for learning them were proposed in~\cite{93-decisiongraph, DBLP:conf/ecml/Kohavi94, DBLP:conf/ijcai/KohaviL95, DBLP:conf/diagrams/MuesBFV04}.
Moreover,~\cite{DBLP:conf/ictai/IgnatovI17} proposed the \textit{decision stream}, a topology similar to \texttt{BDD}{}s, based on merging \textit{similar} subtrees at each split made in decision trees to improve generalization.
\cite{93-decisiongraph, DBLP:conf/ecml/Kohavi94} showed that \textit{decision graphs} can effectively avoid the \textit{replication} and \textit{fragmentation} problems of decision trees.
For binary classification, \texttt{BDD}{}s can also avoid these problems.
This suggests that, in ML practice, a \texttt{BDD}{} generally has a smaller size than the corresponding decision tree.
In this paper, we introduce a {SAT}{}-based model
for learning optimal \texttt{BDD}{}s with the smallest number of features classifying all examples correctly,
and a lifted {MaxSAT}{}-based model to learn optimal \texttt{BDD}{}s minimizing the classification error.
We assume that all \texttt{BDD}{}s are \textit{ordered} and \textit{reduced}\footnote{The two notions are defined in the background}; the limit on the depth of a \texttt{BDD}{} corresponds to the number of features to be selected by our model.
To the best of our knowledge,~\cite{DBLP:conf/date/CabodiCI0PP21} is the only exact method of learning optimal \texttt{BDD}{}s in the context of ML.
The authors proposed a {SAT}{} model to learn optimal
\texttt{BDD}{}s with the smallest sizes that correctly classify all examples.
In their approach, the depth of the \texttt{BDD}{} is not constrained.
In fact, the constructed \texttt{BDD}{} may be small in size (number of nodes) but large in depth.
As the \texttt{BDD}{} is \textit{ordered}, their approach cannot limit the number of features used,
making it not directly comparable with ours.
Another related work is in~\cite{DBLP:conf/ijcai/Hu0HH20} where the authors consider a {MaxSAT}{} model to learn optimal decision trees minimizing the classification error within a limited depth.
The use of the same solving methodology, the same objective function, and
the same depth limit
makes these two {MaxSAT}\ models comparable.
Finally, in order to increase the scalability of our approach, we propose a heuristic extension based on a simple pre-processing step.
The rest of the paper is organized as follows.
First, we give the related technical background in Section~\ref{sec:background}.
Then, in Section~\ref{sec:Sat-models}, we show the proposed {SAT}{} and {MaxSAT}{}
models for learning \texttt{BDD} s in binary classification.
Finally in Section \ref{sec:exp}, we present our large experimental studies to show the competitive prediction quality of the proposed approach.
\section{Appendix 1: Comparing the Original {SAT}{} Encoding with the Improved Version}
We evaluate the first {SAT}{} model (\texttt{BDD}{1} as described in Section~\ref{sec:Sat-models}) and the improved version (\texttt{BDD}{2} described in the same section~\ref{sec:Sat-models}) in terms of encoding size.
Considering the scalability problem, we use the hold-out method to split each dataset into training and testing sets.
We choose $5$ different small splitting ratios $r=\{0.05, 0.1, 0.15, 0.2, 0.25\}$ to generate the training set.
The remaining examples for each ratio are used as the testing set.
This process is repeated $10$ times with different random seeds.
The optimisation problem that we consider is to find a \texttt{BDD}{} that classifies all training examples with the minimum depth.
The approach we use is a simple linear search by solving the decision problem that asks to find a $\texttt{BDD}$ with a given depth $H$ (Problem $P_{{bdd}}(\mathcal{E}, H)$).
The initial depth used in the linear search $H_0$ is set to $7$.
The {SAT}{} solver we use is Kissat~\cite{BiereFazekasFleuryHeisinger-SAT-Competition-2020-solvers}, the winner of {SAT}{} competition 2020.
For each experiment, we set $20$ hours as the global timeout for {SAT}{} solver.
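The linear search can be sketched as follows, with a stub standing in for the {SAT}{} call on $P_{{bdd}}(\mathcal{E}, H)$; the downward search direction from a feasible $H_0$ is our reading of the procedure:

```python
def solvable(H):
    """Placeholder oracle: monotone in H; in this toy, feasible iff H >= 3.
    In the experiments this is a SAT encoding of P_bdd(E, H) solved with Kissat."""
    return H >= 3

def min_depth(H0=7):
    """Linear search for the smallest feasible depth, starting from H0."""
    assert solvable(H0), "H0 must be feasible to search downward"
    H = H0
    while H > 0 and solvable(H - 1):
        H -= 1
    return H

assert min_depth() == 3
```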
In Table \ref{tab:annex1_satbdd_diff}, we report the average results of instances where all the runs finished within the limited time.
The column ``Acc'' stands for average testing accuracy in percent,
``dopt'' stands for the optimal depth,
``E\_Size'' stands for the encoding size (number of literals in 10 thousands),
and ``Time'' stands for the runtime in seconds of the successful runs. The value ``N/A'' indicates the lack of result. Finally, the blue color is used to show the best values.
\input{tables/Annex1_satbdd_diff}
Table \ref{tab:annex1_satbdd_diff} shows that, compared to \texttt{BDD}{1}, \texttt{BDD}{2} clearly improves the encoding size and the run time.
This empirical observation {confirms}
the complexity {evaluation} for \texttt{BDD}{1} and \texttt{BDD}{2} {given} in Section~\ref{sec:Sat-models}.
\section{Appendix 2: The Experiments Related to the Heuristic {MaxSAT}-\texttt{BDD}{} Method}
This appendix presents the details of the comparative evaluation of
\texttt{CART}{}, the heuristic {MaxSAT}-\texttt{BDD}{}, and the original {MaxSAT}-\texttt{BDD}{} in terms of prediction quality, model size, encoding size, and run time.
The protocol of the experiments is described in Section~\ref{sec:exp}.
Table~\ref{tab:annex2_cart_hmaxsatbdd_maxsatbdd} shows the detailed results.
The column ``Opt'' stands for the percentage of the instances reporting optimality,
``Time'' stands for the run time of the {MaxSAT}\ solver.
The other columns are described in Section~\ref{sec:exp}.
From Table~\ref{tab:annex2_cart_hmaxsatbdd_maxsatbdd}, we can first observe
that the prediction quality of the heuristic {MaxSAT}-\texttt{BDD}{} is competitive with \texttt{CART}{}.
The two scatter plots in Figure~\ref{fig:annex2_combined} show this fact more clearly.
Although \texttt{CART}{} almost always obtains better training accuracy than the heuristic {MaxSAT}-\texttt{BDD}{}, it appears to slightly over-fit as the depth grows.
Compared to the original {MaxSAT}-\texttt{BDD}{}, our heuristic approach obtains clear benefits in terms of encoding size.
The reduction in encoding size makes it more likely that the heuristic approach reports optimality within the time limit, although it uses only a subset of the features (and thus solves a relaxed version of the problem).
The results in columns ``Opt'' and ``Time'' illustrate this fact.
\begin{figure}[h!]
\centering
\begin{minipage}{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/Annex2_scatter_training_cart_hmaxsatbdd_size.png}
\end{minipage}\hfill
\begin{minipage}{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/Annex2_scatter_testing_cart_hmaxsatbdd_size_065.png}
\end{minipage}
\caption{\footnotesize{Comparison between \texttt{CART}{} and Heuristic {MaxSAT}-\texttt{BDD}{} in training (the left one) and testing accuracy (the right one).}}
\label{fig:annex2_combined}
\end{figure}
\input{tables/Annex2_cart_hmaxsatbdd_maxsatbdd}
\section{Technical Background}
\label{sec:background}
\subsection{Classification}
Consider a dataset $\mathcal{E} =\{\atraindataind{1}, \dots, \atraindataind{M}\}$
with $M$ examples.
Each example $\atraindataind{q} \in \mathcal{E}$ is characterized
by a list of binary features
$\atrainfeatliteralind{q} = [\afeatind{1}, \dots, \afeatind{K} ]$
and a binary target $\aclassind{q}$, representing the class of the example ($\aclassind{q} \in \{0,1 \}$).
The data set is partitioned into $\mathcal{E}^+$ and $\mathcal{E}^-$, where $\mathcal{E}^+$ (respectively $\mathcal{E}^-$) is the set of positive (respectively negative) examples. That is, $\aclassind{q} = 1$ iff $\atraindataind{q} \in \mathcal{E}^+$ and $\aclassind{q}=0$ iff $\atraindataind{q} \in \mathcal{E}^-$.
We assume that, $\forall 1 \leq q, q' \leq M$,
$\atrainfeatliteralind{q}=\atrainfeatliteralind{q'}$ implies $\aclassind{q} = \aclassind{q'}$.
Let $\phi$ be the function defined by
$\phi(\atrainfeatliteralind{q})= \aclassind{q}$, $\forall q \in [1,M]$.
The classification problem is to compute
a function $\learnfunc$ (called a \emph{classifier})
that matches as accurately as possible the function $\phi$ on examples $\atraindataind{q}$
of the training data and generalizes well on unseen test data.
\subsection{Binary Decision Diagrams}
Binary Decision Diagrams (\texttt{BDD}{}s) provide a compact representation of Boolean functions.
Let $[\avarind{1}, \dots, \avarind{n}]$
be a sequence of $n$ Boolean variables.
A \texttt{BDD}{} is a rooted, directed, acyclic graph $\mathcal{G}{}$.
The vertex set $\mathcal{V}{}$ of $\mathcal{G}{}$ contains two types of vertices.
A \emph{terminal} vertex $v{}$ is associated to a binary value: \Value{v{}} $\in \{0, 1\}$.
A \emph{nonterminal} vertex $v{}$ is associated to a Boolean variable $\avarind{i}$ and has two children \Left{v{}}, \Right{v{}} $\in \mathcal{V}{}$.
In this case, $\index{v{}} =i \in \{1, \dots, n\}$ is the index of the Boolean variable associated to $v{}$.
We assume that all \texttt{BDD}{}s are \emph{ordered} and \emph{reduced}.
These two restrictions are widely considered in the literature as they guarantee
a \textit{unique} \texttt{BDD}{} for a given Boolean function.
The restriction \emph{ordered} indicates that for any \emph{nonterminal} vertex $v{}$,
\index{v{}} $<$ \index{\Left{v{}}} and \index{v{}} $<$ \index{\Right{v{}}}.
The restriction \emph{reduced} indicates that the graph contains no \emph{nonterminal} vertex $v{}$ with \Left{v{}} $=$ \Right{v{}},
nor does it contain distinct \emph{nonterminal} vertices $v{}$ and $v{}'$
{having isomorphic rooted sub-graphs.}
Therefore, given an \emph{ordered} \emph{reduced} {\texttt{BDD}{}} $\mathcal{G}{}$ with \textit{root}{} $v{}$, the {associated} Boolean function
can be recursively obtained with the Shannon expansion process~\cite{ShannonSymbolic}.
Let $g{}$ be a Boolean function defined over a sequence
$\mathcal{X} = [\avarind{1}, \dots, \avarind{n}]$ of $n$ Boolean variables.
The function $g{}$
can be represented by a
\emph{truth table}
that lists
the $2^n$ values of all assignments of the $n$ variables.
The truth table is therefore associated to a string of $2^n$ binary values.
{A truth table $\beta{}$ of length $2^n$ is said to be of order $n$.}
A truth table $\beta{}$ of order $n > 0$
has the form $\beta{}_0\beta{}_1$, where $\beta{}_0$ and $\beta{}_1$ are truth tables of order $n -1$, and
$\beta{}_0$ and $\beta{}_1$ are called \textit{subtables}{} of $\beta{}$.
The \textit{subtables}{} of \textit{subtables}{} are also considered to be \textit{subtables}{}, and a table is considered as a \textit{subtable}{} of itself.
A \textit{bead}{} of order $n$ is a truth table $T$ of order $n$ that {does not} have the form $\alpha{}\alpha{}$
{where $\alpha{}$}
is a subtable of $T$.
The \textit{beads}{} {of} $g{}$ are the \textit{subtables}{} of its truth table that happen to be \textit{beads}{}.
Proposition \ref{prop:bead_bdd_node} from \cite{KnuthTAOCP4} relates truth table and binary decision diagram
for the same Boolean function.
\begin{proposition}
\label{prop:bead_bdd_node}
The vertices in $\mathcal{V}{}$ of a binary decision diagram $\mathcal{G}{}$
are in one-to-one correspondence with the \textit{beads}{} of the Boolean function $g{}$ it represents.
\end{proposition}
Based on Proposition \ref{prop:bead_bdd_node},
we can produce the ordered and reduced binary decision diagram of a Boolean function by finding its \textit{beads}{} and combining them with its sequence of variables.
An algorithm for producing the corresponding \texttt{BDD}{} is provided in the Appendix.
\begin{example}
\label{exp:bdd_example}
Consider the Boolean function from~\cite{KnuthTAOCP4}: $g{}_1(\avarind{1}, \avarind{2}, \avarind{3}) = (\avarind{1} \lor \avarind{2}) \land (\avarind{2} \lor \avarind{3}) \land (\avarind{1} \lor \avarind{3})$.
The binary string associated to its
truth table $\beta{}$ is $00010111$.
The \textit{beads}{} of $\beta{}$
are $\{00010111, 0001, 0111, 01, 0, 1\}$.
From Proposition \ref{prop:bead_bdd_node}, we can draw the \texttt{BDD}{} with the beads found, shown as the left part of Figure~\ref{fig:Ex1_bdd}.
The dashed (solid) line of each vertex indicates the left (right) child.
Then, we can replace the beads by vertices associated with the sequence of Boolean variables.
The final binary decision diagram for $g{}_1$ is shown as the right part of Figure~\ref{fig:Ex1_bdd}.
\end{example}
\input{figures/Ex1_combined}
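The bead computation in Example~\ref{exp:bdd_example} can be reproduced in a few lines of Python (a direct sketch of the definitions above; the function names are ours):

```python
def subtables(beta):
    """All subtables of a truth table, including the table itself."""
    subs, frontier = set(), [beta]
    while frontier:
        s = frontier.pop()
        if s not in subs:
            subs.add(s)
            if len(s) > 1:
                half = len(s) // 2
                frontier += [s[:half], s[half:]]
    return subs

def is_bead(s):
    """A truth table is a bead iff it does not have the form aa,
    i.e. its two halves differ (order-0 tables are always beads)."""
    half = len(s) // 2
    return len(s) == 1 or s[:half] != s[half:]

def beads(beta):
    """The beads of the Boolean function with truth table `beta`."""
    return {s for s in subtables(beta) if is_bead(s)}
```

For $g{}_1$ with truth table $00010111$, `beads("00010111")` returns `{'00010111', '0001', '0111', '01', '0', '1'}`, matching the example.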
\subsection{Oblivious Read-Once Decision Graphs}
Oblivious Read-Once Decision Graphs (\texttt{OODG}{}s)
are proposed in~\cite{DBLP:conf/ecml/Kohavi94} to overcome some limitations of
decision trees in multi-class classification, such as the \textit{replication} and \textit{fragmentation} problems.
We refer the readers to \cite{DBLP:conf/ijcai/KohaviL95, DBLP:conf/ecml/Kohavi94} for details on \texttt{OODG}{}s.
An \texttt{OODG}{} is a rooted, directed, acyclic graph, which contains terminal \textit{category nodes} labelled with classes to make decisions, and non-terminal \textit{branching nodes} labelled with features to make splits.
The property ``\textit{read-once}'' indicates that each feature occurs at most once along any path from the root to a category node.
The property ``\textit{levelled}'' indicates that the nodes are partitioned into a sequence of pairwise disjoint sets, representing the levels, such that outgoing edges from each level terminate
at the next level.
The property
``\textit{oblivious}'' extends the idea of ``\textit{levelled}'' by guaranteeing that all nodes at a given level are labelled by the same feature.
For the classification process, top-down and bottom-up heuristic methods for building \texttt{OODG}{}s are proposed in~\cite{DBLP:conf/ijcai/KohaviL95, DBLP:conf/ecml/Kohavi94}.
Here, we introduce briefly the top-down heuristic method,
which is similar to the heuristic methods \texttt{C4.5}{} and \texttt{CART}{} for computing decision trees.
The top-down heuristic induction for \texttt{OODG}{} with given depth
contains three critical phases:
(1) selecting a sequence of features with the help of \textit{mutual information} (the difference between entropy and \textit{conditional entropy} \cite{elementInformationtheory});
(2) growing an oblivious decision tree (\texttt{ODT}{}) by splitting the dataset with features in the sequence selected;
and (3) merging \textit{isomorphic} and \textit{compatible} subtrees from top to bottom
to build the \texttt{OODG}{}.
When building the \texttt{ODT}{}, the algorithm marks nodes that capture no example of the dataset as ``\textit{unknown}''.
For the merging phase, two subtrees are \textit{compatible} if at least one root is labelled ``\textit{unknown}'', or if the two roots are labelled with the same feature and their corresponding children are the roots of compatible subtrees.
The grown \texttt{ODT}{} can classify directly by assigning ``\textit{unknown}'' nodes the majority class of their parents.
\input{figures/Ex4_combined}
Figure \ref{fig:Ex4_compatible_oodg} shows an example of two compatible subtrees and the merged tree, where ``\textit{unknown}'' nodes are labelled as ``\textit{u}''.
Merging compatible subtrees changes the bias by assuming that an ``\textit{unknown}'' node is likely to behave the same as the node it is merged with in a compatible subtree.
In binary classification on binary datasets, \texttt{OODG}{}s can be considered \textit{equivalent} to \texttt{BDD}{}s, as the properties ``\textit{oblivious}'' and ``\textit{read-once}'' for \texttt{OODG}{}s are the same as the property ``\textit{ordered}'' for \texttt{BDD}{}s.
In addition, merging compatible subtrees can also be applied to \texttt{BDD}{}s.
\subsection{SAT \& MaxSAT}
We use standard terminology for Boolean Satisfiability~\cite{DBLP:series/faia/2009-185}. A \emph{literal} is a Boolean variable or its negation, and a \emph{clause} is a disjunction of literals.
An assignment of variables satisfies a clause if one of its literals is true.
Given a set of Boolean variables
and a set of clauses defined over these variables,
the {SAT}{} problem can be defined as finding an assignment of the variables such that all the clauses are satisfied.
Maximum Satisfiability ({MaxSAT}{}) is an optimization version of the {SAT}{} problem, where
the clauses are partitioned into \emph{hard} and \emph{soft} clauses.
Here we consider the Partial {MaxSAT}{} problem, that is to find an assignment of the Boolean variables that satisfies all the hard clauses and maximizes the number of satisfied soft clauses.
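For intuition, the Partial {MaxSAT}{} semantics can be checked on toy instances by brute force (an illustrative sketch only, not the solving technology used in the paper; clauses are lists of signed DIMACS-style integer literals):

```python
from itertools import product

def partial_maxsat(n_vars, hard, soft):
    """Return (best score, assignment): among assignments satisfying all
    hard clauses, maximise the number of satisfied soft clauses."""
    best = None
    for bits in product([False, True], repeat=n_vars):
        val = {i + 1: b for i, b in enumerate(bits)}
        holds = lambda cl: any(val[abs(l)] == (l > 0) for l in cl)
        if all(holds(c) for c in hard):  # hard clauses must all hold
            score = sum(holds(c) for c in soft)
            if best is None or score > best[0]:
                best = (score, val)
    return best
```

For instance, with hard clause $(x_1 \lor x_2)$ and soft unit clauses $\neg x_1$, $\neg x_2$, $x_1$, the optimum satisfies two soft clauses, with $x_1$ true and $x_2$ false.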
\section*{General response}
We would like to thank the reviewers for their constructive feedback.
All minor comments and typos will be fixed in the new version.
An additional typo is to replace the ``\textbf{set} $\{ x_1, \ldots x_n \}$'' by the ``\textbf{sequence} $(x_1, \ldots x_n )$'' in Section 2, \textbf{Binary Decision Diagrams}, second line.
\section*{Response to Reviewer 1}
\noindent
We would first like to comment on the weak points:
\begin{itemize}
\item \texttt{BDD}{}s are compact representations of Boolean functions. A motivating example (in particular regarding interpretability) can be added.
\item The current model does not support such a hyper-parameter; however, it could be extended by constraining the number of nodes via new variables and constraints.
\item The different running times are mentioned in the appendix except those for ODT and OODG.
\end{itemize}
\noindent
{Answers to the questions:}
\begin{enumerate}
\item %
Yes, thanks to the (partial) MaxSAT framework, sample-weights can easily be handled by changing the weights of related soft clauses.
This is also related to the answer of question 4.
\item
A bead, by definition, does not have the form $\alpha \alpha$ therefore $\{00\}$ and $\{11\}$ are not {beads}. \\
The typo that we mentioned in the general response (\textit{sequence} instead of \textit{set}), makes [$x_1, x_2, x_3$] an \textit{ordered sequence}.
Therefore, $x_2$ cannot be a child of $x_3$.
The construction of the \texttt{BDD}{} is done using Proposition 1 and the sequence.
\item
We did not consider \texttt{BDD}{}s without the properties \textit{reduced} and \textit{ordered}, as these properties are standard and widely used in the literature [Knuth 2009, Section 7.1.4].
For instance, a non \textit{reduced} \texttt{BDD}{} potentially has redundant nodes/branches.
\item
Yes, such preferences can easily be translated into weights then associated to soft clauses.
\item The ``\textit{ordered}'' property is preserved by the definition of the variable set $a_r^i$ to find an ordered sequence of features with constraints 1 and 2.
The (Max)SAT model does not enforce the ``reduced'' property.
[Knuth 2009, Section 7.1.4] showed that Proposition 1 guarantees the ``reduced'' property.
We therefore apply Proposition 1 on the feature ordering and the truth table decided by the (Max)SAT solver. We will clarify this in the paper.
\item The complexity of merging compatible subtrees is $O(k^2)$, where $k$ is the number of subtrees, from~\cite{DBLP:conf/ijcai/KohaviL95}.
In practice, this process takes only a few seconds.
\item
\begin{itemize}
\item Yes, we have tried the experiments with higher depths, up to $9$.
The testing accuracy of ODT and OODG gets closer to that of the {MaxSAT}\ model.
\item
The average test accuracy of the three MaxSAT-BDD variants does improve as the maximum depth grows, as shown in Figure 5.
\end{itemize}
\item Yes, scalability is an important research direction to investigate.
Our heuristic approach is a first step towards this concern and should be further explored. Incremental learning can be helpful in this context.
\item We choose all (and only) the features in the tree constructed by \texttt{CART}{}.
The numbers are given in column $\mathbf{F\_d}$ of Table 6 in the appendix.
\end{enumerate}
\section*{Response to Reviewer 2}
Answers to the specific questions:
\begin{enumerate}
\item Q1:
The number of features is one metric for minimizing the \texttt{BDD}{};
other metrics, such as the number of nodes or the classification error, could also be included.
\item Q2: No, we did not try other encodings.
Sinz's encoding is standard, easy to use, and did not show any issue in practice.
It would be indeed interesting to try others.
\item Q3:
It is hard to find an explanation.
The only insight we can make is that
the solutions are not sensitive to these biases.
\end{enumerate}
\noindent
The following are responses to the different comments.
\begin{itemize}
\item
As \texttt{BDD}{}s are assumed to be \textit{ordered} and \textit{reduced}, the number of different features is equal to the maximum depth (which is an important metric for the topology).
Contrarily to ~\cite{DBLP:conf/date/CabodiCI0PP21}, the objective of our MaxSAT model of \texttt{BDD}{} is to find the \texttt{BDD}{} in limited depths that maximizes the number of examples correctly classified.
\item {On the comparison with ~\cite{DBLP:conf/date/CabodiCI0PP21}}:
The \texttt{BDD}{} found by~\cite{DBLP:conf/date/CabodiCI0PP21} could be small in size, however deep.
Given a fixed size, one can construct \texttt{BDD}{}s with a big difference in depth, making a fair comparison between the two approaches difficult.
\item We can add a short discussion regarding the scalability of BDD1 vs BDD2.
\item Loandra is still a state-of-the-art solver. In addition, we used it for a fair comparison with [Hu et al. 2020]. Using alternative MaxSAT solvers is possible.
\item
In Figure 5, we use $15$ different datasets
with $5$ different depths. This is mentioned in the caption of Figure 5 as well as in the section ``{Comparison with Existing Heuristic Approaches}''.
\end{itemize}
\section*{Response to Reviewer 3}
\begin{enumerate}
\item Table 6 in the appendix reports the running time.
\item
When the depth grows, the encoding size as well as the search space gets bigger. Hence the solver ``struggles'' to explore the search space.
\end{enumerate}
\section*{Response to Reviewer 4}
Extensions to non-integer features is an interesting future work. One can for instance binarize the features based on some thresholds.
\section*{Appendix: Algorithm for Producing A \texttt{BDD}{} Based on the Beads of a Truth table}
Recall that Proposition \ref{prop:bead_bdd_node} states that there is a one-to-one correspondence between the vertices of a BDD and the beads of the corresponding Boolean function.
Consider a string $\beta{}$ of length $2^{H{}}$ associated to a sequence of variables $[\avarind{1}, \avarind{2}, \dots, \avarind{H{}}]$.
This appendix describes the algorithm we used to construct a \texttt{BDD}{} of a maximum depth $H{}$ using the beads of $\beta{}$.
As the \texttt{BDD}{} built is \textit{ordered} and \textit{reduced}, it contains no isomorphic subtrees.
This algorithm creates nodes level by level in a \textit{breadth-first} way.
The \texttt{BDD}{} constructed is defined by \textit{a list of nodes} and \textit{a list of edges}.
Each node is a pair {(\textit{node\_id}, \textit{variable})}.
The value of \textit{node\_id} is a unique integer called the id of the node, which is positive for nonterminal nodes. There are two sink nodes:
{($-1, 1$)}, associated to the value $1$ (i.e., the positive class in the context of binary classification); and {($-2, 0$)}, associated to the value $0$ (i.e., the negative class).
Each edge is a tuple {$(p, c, direction)$}, where $p$ is the id of the parent node, $c$ is the id of the child, and $direction \in \{left, right\}$ indicates whether $c$ is the left or right child of $p$.
Before we present the algorithm, we show some predefined functions used in the algorithm in Table \ref{tab:lambda_functions_algo}.
\begin{table}[htb]
\centering
\small
\setlength{\tabcolsep}{1pt}
\begin{tabular}{|l||l|}
\hline
\footnotesize{\textbf{Function(Input): Output}} & \footnotesize{\textbf{Description}} \\\hline
$FirstHalf(string~s)$: string & Returns the first half of $s$ \\\hline
$SecondHalf(string~s)$: string & Returns the second half of $s$ \\\hline
$IsBead(string~s)$: Boolean & Returns \textit{True} iff $s$ is a bead \\\hline
$LeadToZero(string~s)$: Boolean & Returns \textit{True} iff $s$ contains only $0$ \\\hline
$LeadToOne(string~s)$: Boolean & Returns \textit{True} iff $s$ contains only $1$
\\\hline
\end{tabular}
\caption{Predefined functions in the algorithm}
\label{tab:lambda_functions_algo}
\end{table}
The detailed algorithm is described in Algorithm \ref{alg:beads_to_bdd}.
We use a FIFO queue $q$ in the algorithm.
Each item in the queue follows the format \textit{(str, parent\_id, current\_level, direction)}.
The first item pushed into the queue is a special case associated to the root, denoted by $(\beta{}, 0, 1, \emptyset)$, since the root has no parent.
At each iteration of the main loop, the algorithm pops an element $(s, parent\_id, level, direction)$ from the queue at
Line~\ref{line:pop}.
If $s$ is a bead, the algorithm creates a new node at Line~\ref{line:newnode} associated with the level $level$ if $s$ is seen for the first time.
The set of edges is updated in Line~\ref{line:newedge} accordingly.
The left and right children of $s$ are added in the queue in Lines~\ref{line:left} and ~\ref{line:right}.
When the current string $s$ of size $>1$ is not a bead, there are two cases where $s$ leads immediately to a sink node: either $s$ contains only $0$s, or $s$ contains only $1$s. These cases are handled (according to the size of $s$) in two parts of the algorithm: from Line~\ref{line:check} to Line~\ref{line:endsink}
and from Line~\ref{line:1} to Line~\ref{line:2}.
In the remaining case, where $s$ is not a bead and does not lead to a sink node, only one child of $s$ is added to the queue, without creating a node or an edge (since the two halves of $s$ are identical).
The algorithm ends when all the elements of the queue are treated.
\begin{algorithm}[htb]
\caption{GenBDD$(\beta{}, \mathcal{X})$, an algorithm to construct a \texttt{BDD}{} from a given string $\beta{}$ and variable sequence $\mathcal{X}$}\label{alg:beads_to_bdd}
\begin{algorithmic}[1]
\REQUIRE{String $\beta{}$, variable sequence $\mathcal{X} = [\avarind{1}, \dots, \avarind{H{}}]$.}
\ENSURE{BDD(\textit{nodes}, \textit{edges})}
\STATE $nodes \gets \{\}; edges \gets \{\}; T \gets \{\}$
\STATE $nodes.append((-1, 1)); nodes.append((-2, 0))$
\STATE $q \gets Queue()$
\STATE $q.put((\beta{}, 0, 1, \emptyset))$
\WHILE{$not \ q.empty()$}
\STATE \label{line:pop} $(s, parent\_id, level, direction) \gets q.pop()$
\IF{$Length(s) > 1 \ and \ IsBead(s)$}
\STATE // \textit{When the current string $s$ is a Bead.}
\IF{${ s \not\in T} $}
\STATE // \textit{$s$ is a new string, i.e., not seen before}
\STATE $T.append(s)$
\STATE $index \gets T.index(s) + 1$
\STATE \label{line:newnode} $nodes.append((index, \avarind{level}))$
\ENDIF
\STATE $index \gets T.index(s) + 1$
\IF{ $parent\_id \geq 1$}
\STATE \label{line:newedge} $edges.append(( parent\_id, index, direction))$
\ENDIF
\STATE // \textit{Put the left and right child into the queue.}
\STATE \label{line:left} $q.put((FirstHalf(s), index, level + 1, left))$
\STATE \label{line:right} $q.put((SecondHalf(s), index, level + 1, right))$
\ELSIF{ \label{line:check} $Length(s)>1 \textbf{ and not } \ IsBead(s)$}
\STATE // \textit{When the current string $s$ is \textbf{not} a Bead.}
\IF{$LeadToOne(s) \textbf{ or } \ LeadToZero(s)$}
\STATE // \textit{$s$ leads to sink nodes}
\IF{$LeadToOne(s)$}
\STATE $sink \gets -1$
\ELSE
\STATE $sink \gets -2$
\ENDIF
\IF{$parent\_id = 0$}
\STATE $edges.append((1, sink, left))$
\STATE $edges.append((1, sink, right))$
\ELSE
\STATE $edges.append((parent\_id, sink, direction))$
\ENDIF \label{line:endsink}
\ELSE
\STATE // \textit{Otherwise put the left child into the queue.}
\STATE \label{line:normalcase} $q.put((FirstHalf(s), parent\_id, level+1 ~, direction))$
\ENDIF
\ELSE
\STATE // \label{line:1} \textit{The current string is a sink node.}
\IF{$s=1$}
\STATE $sink \gets -1$
\ELSE
\STATE $sink \gets -2$
\ENDIF
\STATE \label{line:2} $edges.append(( parent\_id, sink, direction))$
\ENDIF
\ENDWHILE
\end{algorithmic}
\end{algorithm}
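Algorithm~\ref{alg:beads_to_bdd} can be sketched in Python as follows (our transcription, for illustration; unlike the pseudo-code, the children of an already-seen bead are enqueued only once, which avoids producing duplicate edges):

```python
from collections import deque

def gen_bdd(beta, variables):
    """Build the (nodes, edges) representation of the reduced ordered BDD
    for truth table `beta` over the variable sequence `variables`."""
    is_bead = lambda s: s[:len(s) // 2] != s[len(s) // 2:]
    nodes, edges, T = [(-1, 1), (-2, 0)], [], []
    q = deque([(beta, 0, 1, None)])  # (string, parent_id, level, direction)
    while q:
        s, parent, level, direction = q.popleft()
        if len(s) > 1 and is_bead(s):
            new = s not in T
            if new:
                T.append(s)
                nodes.append((len(T), variables[level - 1]))
            index = T.index(s) + 1
            if parent >= 1:
                edges.append((parent, index, direction))
            if new:  # enqueue the two halves as left and right children
                h = len(s) // 2
                q.append((s[:h], index, level + 1, "left"))
                q.append((s[h:], index, level + 1, "right"))
        elif len(s) > 1:  # not a bead: constant subtable or equal halves
            if set(s) <= {"0"} or set(s) <= {"1"}:
                edges.append((parent, -1 if s[0] == "1" else -2, direction))
            else:  # the two halves are identical: follow one of them
                q.append((s[:len(s) // 2], parent, level + 1, direction))
        else:  # order-0 subtable, i.e. a sink value
            edges.append((parent, -1 if s == "1" else -2, direction))
    return nodes, edges
```

On the truth table $00010111$ of Example~\ref{exp:bdd_example} with variables $[\avarind{1}, \avarind{2}, \avarind{3}]$, this produces four nonterminal nodes (one for $\avarind{1}$, two for $\avarind{2}$, one for $\avarind{3}$) and eight edges, matching Figure~\ref{fig:Ex1_bdd}.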
\section{(Max)SAT-based model for Binary Decision Diagrams}
\label{sec:Sat-models}
In this section, we present our approach for learning \texttt{BDD}{}s for binary classification using {SAT}{} and {MaxSAT}{}.
\subsection{Problem Definition}
\label{sec:problemdefinition}
We first consider the following decision problem for classification with a \texttt{BDD}{} of a given depth.
\begin{itemize}
\item $P_{{bdd}}(\mathcal{E}, H):$ \emph{Given a set of examples $\mathcal{E}$, is there a \texttt{BDD}\ of depth $H$ that classifies correctly all examples in $\mathcal{E}$?}
\end{itemize}
Notice that an algorithm for $P_{bdd}(\mathcal{E}, H)$ can be used to solve the alternative problem of finding a \texttt{BDD}{} of minimum depth that classifies all examples in the dataset correctly.
For that purpose, one can use a linear search that takes
an initial depth $H_0$ as input
and progressively increases or decreases this value depending on the result of solving $P_{bdd}(\mathcal{E}, H)$.
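Assuming $P_{bdd}(\mathcal{E}, H)$ is monotone in $H$ (a satisfiable instance stays satisfiable at larger depths), this linear search can be sketched as follows, where `oracle(H)` abstracts a call to the SAT solver on $P_{bdd}(\mathcal{E}, H)$:

```python
def min_depth(oracle, H0):
    """Linear search for the minimum satisfiable depth, starting from H0.
    `oracle(H)` answers the decision problem P_bdd(E, H)."""
    H = H0
    if oracle(H):
        # Satisfiable: decrease until the depth below is unsatisfiable.
        while H > 1 and oracle(H - 1):
            H -= 1
        return H
    # Unsatisfiable: increase until the problem first becomes satisfiable.
    while not oracle(H):
        H += 1
    return H
```
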
Next, we consider an optimization version of the classification problem with a \texttt{BDD}{}
of limited depth.
\begin{itemize}
\item $P_{{bdd}}^*(\mathcal{E}, H):$ \emph{Given a set of examples $\mathcal{E}$, find a \texttt{BDD}\ of depth $H$ that maximises the number of examples in $\mathcal{E}$ that are correctly classified.}
\end{itemize}
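As a reference point on small instances, $P_{bdd}(\mathcal{E}, H)$ can essentially be decided by brute force: an ordering of $H$ features under which no positive and negative example fall into the same truth-table cell yields a \texttt{BDD}{} of depth at most $H$ classifying $\mathcal{E}$ correctly. The sketch below illustrates this semantics only (it is not the SAT approach, and it ignores the bead constraint on the root):

```python
from itertools import permutations

def p_bdd(examples, H):
    """Brute-force check of P_bdd(E, H): try every ordering of H features
    and verify that no positive and negative example share a cell.
    `examples` is a list of (feature_tuple, label) pairs."""
    K = len(examples[0][0])
    for order in permutations(range(K), H):
        cells = {}  # truth-table cell -> forced label
        if all(cells.setdefault(tuple(f[r] for r in order), y) == y
               for f, y in examples):
            return order  # a witness feature ordering
    return None
```

For a dataset whose label is the XOR of the first two features, depth $2$ suffices (the first two features separate the classes) while depth $1$ does not.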
We propose an initial {SAT}\ model for the decision problem $P_{bdd}(\mathcal{E}, H)$.
Then, we propose an improved version with a tighter formula size.
Finally, we show how the improved {SAT}{} model for $P_{bdd}(\mathcal{E}, H)$ can be used effectively to solve
the optimization problem $P_{{bdd}}^*(\mathcal{E}, H)$ with {MaxSAT}{}.
\subsection{SAT Model for $P_{bdd}(\mathcal{E}, H)$}
As shown in Section~\ref{sec:background}, a \texttt{BDD}{} of depth $H$ can be generated from the combination of
a sequence of Boolean variables of size $H$: $[\avarind{1}, \dots, \avarind{H}]$, and a truth table of order $H$ associated to a Boolean function.
To solve the classification problem $P_{bdd}(\mathcal{E}, H)$,
we then have to find a sequence of binary features of size $H$
that maps one-to-one to the
sequence of Boolean variables, and a truth table
associated to a Boolean function that correctly classifies all examples.
We call the selected sequence of binary features the \textit{feature ordering}.
Therefore, the {SAT}\ encoding consists of two parts:
\begin{itemize}
\item \textbf{Part 1:} Constraints for selecting features of the dataset into the feature ordering of size $H$.
\item \textbf{Part 2:} Constraints for generating a truth table that classifies all examples of $\mathcal{E}$ correctly with the selected feature ordering.
\end{itemize}
To realize the {SAT}{} encoding, we introduce two sets of Boolean variables as follows:
\begin{itemize}
\item
$\featplacevar{r}{i}$:
the variable $\featplacevar{r}{i}$ is $1$ iff feature $\afeatind{r}$ is selected as the $i$-th feature in the feature ordering,
where $i=1,\dots, H$, $r=1,\dots,K$.
\item
$\adecisionind{j}$:
the variable $\adecisionind{j}$ is 1 iff the $j$-th
value of the truth table is 1, where $j=1,\dots,2^{H}$.
\end{itemize}
The set of variables $\featplacevar{r}{i}$ guarantees the \textit{ordered} restriction.
Then, we introduce two constraints (\ref{con:feature_used_atmost_once}) and~(\ref{con:exact_one_feature})
for the feature ordering.
Constraint~\ref{con:feature_used_atmost_once} ensures that any feature $\afeatind{r}$ can be selected at most once.
\begin{equation}
\label{con:feature_used_atmost_once}
\sum_{i=1}^{H}\featplacevar{r}{i} \leq 1, \quad r=1,\dots,K
\end{equation}
Then, there is exactly one feature selected for each index of the feature ordering.
\begin{equation}
\label{con:exact_one_feature}
\sum_{r=1}^{K}\featplacevar{r}{i} = 1, \quad i = 1,
\dots, H
\end{equation}
We use the classical sequential counter encoding proposed in~\cite{05-sequential} to model constraints~(\ref{con:feature_used_atmost_once}) and~(\ref{con:exact_one_feature}) as a Boolean formula.
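For illustration, the sequential-counter encoding of the at-most-one case of constraint~(\ref{con:feature_used_atmost_once}) can be written as follows (a sketch following the scheme of \cite{05-sequential}; literals are DIMACS-style integers and the register variables are fresh auxiliaries):

```python
def seq_amo(xs, next_var):
    """Sequential-counter CNF for sum(xs) <= 1 over positive literals xs.
    Returns (clauses, next free auxiliary variable id)."""
    n = len(xs)
    if n <= 1:
        return [], next_var
    s = list(range(next_var, next_var + n - 1))  # registers s_1 .. s_{n-1}
    clauses = [[-xs[0], s[0]]]                   # x_1 -> s_1
    for i in range(1, n - 1):
        clauses += [[-xs[i], s[i]],              # x_{i+1} -> s_{i+1}
                    [-s[i - 1], s[i]],           # s_i -> s_{i+1}
                    [-xs[i], -s[i - 1]]]         # x_{i+1} -> not s_i
    clauses.append([-xs[n - 1], -s[n - 2]])      # x_n -> not s_{n-1}
    return clauses, next_var + n - 1
```

The clauses are satisfiable (for some value of the registers) exactly when at most one of the input literals is true, using $O(n)$ clauses instead of the $O(n^2)$ pairwise encoding.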
The truth table we are looking for is the binary string of the values of variables
$\adecisionind{1}\adecisionind{2}\dots\adecisionind{2^H}$.
To avoid that the first selected feature makes a useless split, we ensure that the truth table is a \textit{bead}{}.
\begin{equation}
\label{con:root_is_bead}
\bigvee_{j=1}^{2^{H-1}}(\adecisionind{j} \oplus \adecisionind{j + 2^{H-1}})
\end{equation}
There is a relationship between the values of a truth table and the assignments of the given sequence of Boolean variables.
For example, the first value of a truth table corresponds to the assignment
$\avarind{1} = \dots = \avarind{H} = 0$.
Therefore, we define the following function to obtain the value of the $i$-th feature in the feature ordering of size $H$ for the $j$-th value in the truth table.
\begin{equation}
\label{func:direction_func}
\textit{rel}{}(i, j) = \lfloor \frac{j-1}{2^{H-i}} \rfloor \bmod 2, \quad i \in [1, H], j \in [1, 2^H]
\end{equation}
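A direct transcription of function~(\ref{func:direction_func}) makes this table-to-assignment mapping explicit; for $H=2$ it reproduces the correspondence used in the worked example below:

```python
def rel(i, j, H):
    """Value of the i-th ordered feature in the assignment that reaches
    the j-th entry of a truth table of order H."""
    return ((j - 1) // 2 ** (H - i)) % 2

# For H = 2, entry j corresponds to the assignment (rel(1, j), rel(2, j)).
assignments = {j: tuple(rel(i, j, 2) for i in (1, 2)) for j in (1, 2, 3, 4)}
```

Here `assignments` maps $1 \mapsto (0,0)$, $2 \mapsto (0,1)$, $3 \mapsto (1,0)$, and $4 \mapsto (1,1)$.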
For an example $\atraindataind{q} \in \mathcal{E}$, we denote the value of the feature $\afeatind{r}$ as $\sigma(r, q)$.
If $\textit{rel}{}(i, j) = \sigma(r,q)$, it indicates that for example $\atraindataind{q}$, the feature $\afeatind{r}$ can be at the $i$-th position in the feature ordering to produce the $j$-th value in the truth table.
To classify all examples correctly, we ensure
that
no example follows an assignment
in the truth table leading to its opposite class.
Thus, we propose the following constraints for classification.
Let $\atraindataind{q} \in \mathcal{E}^{+}$, for all $j = 1, \dots, 2^H$:
\begin{equation}
\label{con:original_classify_pos}
\neg \adecisionind{j} \rightarrow \bigvee_{i=1}^{H}\bigvee_{r=1}^{K}(\featplacevar{r}{i} \land \textit{rel}{}(i, j) \oplus \sigma(r, q))
\end{equation}
That is, for every positive example $\atraindataind{q}$, any variable $\adecisionind{j}$ assigned to $0$ must be associated to an assignment of features that contains at least one feature-value that is not coherent with $\atraindataind{q}$.
For negative examples, we use a similar idea.
Let $\atraindataind{q} \in \mathcal{E}^{-}$, for all $j = 1, \dots, 2^H$:
\begin{equation}
\label{con:original_classify_neg}
\adecisionind{j} \rightarrow \bigvee_{i=1}^{H}\bigvee_{r=1}^{K}(\featplacevar{r}{i} \land \textit{rel}{}(i, j) \oplus \sigma(r, q))
\end{equation}
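The semantics enforced by constraints~(\ref{con:original_classify_pos}) and~(\ref{con:original_classify_neg}) can also be stated operationally: each example reaches exactly one truth-table entry, determined by its values on the ordered features, and that entry must carry its class. A sketch (the XOR table used here is purely illustrative, not the solution of the running example):

```python
def classify(example, order, table, H):
    """Return the truth-table entry reached by `example`: bit i of the
    1-based index j is the example's value on the i-th ordered feature."""
    j = 1 + sum(example[r] << (H - 1 - i) for i, r in enumerate(order))
    return int(table[j - 1])

# Illustrative check: ordering [feature 0, feature 1] with table "0110"
# computes the XOR of the first two features of an example.
```
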
\input{tables/Ex2_table_problem}
\begin{example}
\label{exp:bdd_classify_all_examples_orig}
Let $\mathcal{E}_0$ be the given set of examples shown in Table \ref{tab:dataset_example}.
Figure \ref{fig:dt_constructed} shows the corresponding decision tree classifying all examples correctly.
We consider encoding a \texttt{BDD}{} of depth $H=2$ that classifies all examples of $\mathcal{E}_0$ correctly.
The two sets of variables are: $\{\featplacevar{1}{1}, \featplacevar{1}{2}, \featplacevar{2}{1}, \featplacevar{2}{2}, \featplacevar{3}{1}, \featplacevar{3}{2}, \featplacevar{4}{1}, \featplacevar{4}{2}\}$, and $\{\adecisionind{1}, \adecisionind{2}, \adecisionind{3}, \adecisionind{4}\}$.
The constraints \ref{con:feature_used_atmost_once}, \ref{con:exact_one_feature}, and \ref{con:root_is_bead} are:
\begin{align*}
\centering
\featplacevar{1}{1} + \featplacevar{1}{2} \leq 1, \quad
\featplacevar{2}{1} + \featplacevar{2}{2} &\leq 1, \quad
\featplacevar{3}{1} + \featplacevar{3}{2} \leq 1, \quad
\featplacevar{4}{1} + \featplacevar{4}{2} \leq 1 \\
\featplacevar{1}{1} + \featplacevar{2}{1} + \featplacevar{3}{1} + \featplacevar{4}{1} &= 1, \quad
\featplacevar{1}{2} + \featplacevar{2}{2} + \featplacevar{3}{2} + \featplacevar{4}{2} = 1 \\
(\adecisionind{1} \oplus &\adecisionind{3}) \lor (\adecisionind{2} \oplus \adecisionind{4})
\end{align*}
For classification constraints (i.e.,~\ref{con:original_classify_pos} and~\ref{con:original_classify_neg}),
we show the encoding of $\atraindataind{1} \in \mathcal{E}^-$ for the value $\adecisionind{1}$. The encoding for the other examples and values is similar.
\begin{equation*}
\begin{split}
\adecisionind{1} \rightarrow (\featplacevar{1}{1} \land 0 \oplus 1) &\lor (\featplacevar{2}{1} \land 0 \oplus 0) \lor
(\featplacevar{3}{1} \land 0 \oplus 1) \\ \lor (\featplacevar{4}{1} \land 0 \oplus 0) &\lor
(\featplacevar{1}{2} \land 0 \oplus 1) \lor (\featplacevar{2}{2} \land 0 \oplus 0) \\ \lor
(\featplacevar{3}{2} \land 0 \oplus 1) &\lor (\featplacevar{4}{2} \land 0 \oplus 0)
\end{split}
\end{equation*}
This can be simplified as follows:
\begin{equation*}
\neg \adecisionind{1} \lor \featplacevar{1}{1} \lor
\featplacevar{3}{1} \lor \featplacevar{1}{2} \lor \featplacevar{3}{2}
\end{equation*}
\input{tables/Ex2_table_solution}
The values of the truth table found by the {SAT}{} model are shown in Table~\ref{tab:Bdd-solution}; the feature ordering is $[\afeatind{1}, \afeatind{2}]$.
Moreover, Table \ref{tab:Bdd-solution} illustrates the relationship between the values of the truth table and the assignments of the given sequence of Boolean variables of size $2$.
Figure \ref{fig:bdd_constructed} shows
the corresponding \texttt{BDD}{}.
This \texttt{BDD}{} classifies all examples of the dataset $\mathcal{E}_0$ correctly,
and also provides a more compact representation than the decision tree shown in Figure \ref{fig:dt_constructed}.
\end{example}
We refer to this first {SAT}\ encoding for $P_{bdd}(\mathcal{E}, H)$ as \Bdd1. The size of \Bdd1 is given in Proposition~\ref{prop:model_size_bdd_orig}.
\begin{proposition}
\label{prop:model_size_bdd_orig}
For a $P_{bdd}(\mathcal{E}, H)$ problem with $K$ binary features and $M{}$ examples, the encoding size (in terms of the number of literals used in the different clauses) of \Bdd1\ is $O(M{} \times H \times K \times 2^H)$.
\end{proposition}
\begin{proof}
Notice first that $j$ ranges from $1$ to $2^H$, $i$ ranges from $1$ to $H$, and $r$ ranges from $1$ to $K$.
The term $M{} \times 2^H$ results from constraints (\ref{con:original_classify_pos}) and (\ref{con:original_classify_neg}), each of which contains $O(H \times K)$ literals.
For the remaining constraints, it is $O(H \times K)$ for constraints~(\ref{con:feature_used_atmost_once})
and~(\ref{con:exact_one_feature}), $O(2^H)$ for constraint~(\ref{con:root_is_bead}).
\end{proof}
The size of \Bdd1 is quite large due to the size of the clauses
generated by constraints (\ref{con:original_classify_pos}) and (\ref{con:original_classify_neg}) for classification.
This makes \Bdd1 impractical.
\subsection{An improved SAT Model for $P_{{bdd}}(\mathcal{E}, H)$ }
In order to reduce the size of \texttt{BDD}{1}, we propose new classification constraints to replace constraints (\ref{con:original_classify_pos}) and (\ref{con:original_classify_neg}).
The idea is that every positive (respectively negative) example follows an assignment leading to a positive (respectively negative) value of the truth table.
We introduce a new set of Boolean
variables:
\begin{itemize}
\item
$\signfeatexample{i}{q}$: The variable $\signfeatexample{i}{q}$ is 1 iff for example $\atraindataind{q}$ the value of the
$i$-th feature selected in feature ordering is 1, where $i=1,\dots,H$, $q=1, \dots, M{}$.
\end{itemize}
Then, we describe constraints that relate the values
of features for each example $\atraindataind{q} \in \mathcal{E}$, for $i=1,\dots, H$, $r=1,\dots, K$:
\begin{equation}
\label{con:sign_features}
\begin{split}
\featplacevar{r}{i} &\rightarrow \signfeatexample{i}{q} \quad\;\;\, \text{if } \sigma(q, r) = 1 \\
\featplacevar{r}{i} &\rightarrow \neg \signfeatexample{i}{q} \quad \text{if } \sigma(q, r) = 0
\end{split}
\end{equation}
Let $\atraindataind{q} \in \mathcal{E}^+$; we have $2^H$ constraints for classifying examples correctly:
\begin{equation}
\label{con:improved_classify_pos}
\begin{split}
\neg \signfeatexample{1}{q} \land \neg \signfeatexample{2}{q} \land &\dots \land \neg \signfeatexample{H-1}{q} \land \neg \signfeatexample{H}{q} \rightarrow \adecisionind{1} \\
\neg \signfeatexample{1}{q} \land \neg \signfeatexample{2}{q} \land &\dots \land \neg \signfeatexample{H-1}{q} \land \signfeatexample{H}{q} \rightarrow \adecisionind{2} \\
&\dots \\
\signfeatexample{1}{q} \land \signfeatexample{2}{q} \land &\dots \land \signfeatexample{H-1}{q} \land \signfeatexample{H}{q} \rightarrow \adecisionind{2^H}
\end{split}
\end{equation}
That is, any positive example follows an assignment of the feature ordering that leads to a positive value in the truth table.
Similarly, for any $\atraindataind{q} \in \mathcal{E}^-$, we also have $2^H$ constraints:
\begin{equation}
\label{con:improved_classify_neg}
\begin{split}
\neg \signfeatexample{1}{q} \land \neg \signfeatexample{2}{q} \land &\dots \land \neg \signfeatexample{H-1}{q} \land \neg \signfeatexample{H}{q} \rightarrow \neg \adecisionind{1} \\
\neg \signfeatexample{1}{q} \land \neg \signfeatexample{2}{q} \land &\dots \land \neg \signfeatexample{H-1}{q} \land \signfeatexample{H}{q} \rightarrow \neg \adecisionind{2} \\
&\dots \\
\signfeatexample{1}{q} \land \signfeatexample{2}{q} \land &\dots \land \signfeatexample{H-1}{q} \land \signfeatexample{H}{q} \rightarrow \neg \adecisionind{2^H}
\end{split}
\end{equation}
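In clausal form, each implication in constraints (\ref{con:improved_classify_pos}) and (\ref{con:improved_classify_neg}) becomes a single clause with $H+1$ literals. The following Python sketch (illustrative only, not the implementation evaluated below; the literal-numbering scheme is ours) enumerates these clauses for one example:

```python
from itertools import product

def classification_clauses(H, q, positive, lit_s, lit_c):
    """Enumerate the 2^H clauses of the improved classification
    constraints for example q.

    Each implication  (l_1 ^ ... ^ l_H) -> c_j  becomes the clause
    (~l_1 v ... v ~l_H v c_j), so every clause has H + 1 literals.
    lit_s(i, q) returns the DIMACS literal for s_{i,q}; lit_c(j)
    the literal for the truth-table value c_j.
    """
    clauses = []
    for j, bits in enumerate(product([0, 1], repeat=H), start=1):
        # negate the antecedent: bit 0 means ~s_{i,q}, which flips to +s
        body = [-lit_s(i, q) if b else lit_s(i, q)
                for i, b in enumerate(bits, start=1)]
        head = lit_c(j) if positive else -lit_c(j)
        clauses.append(body + [head])
    return clauses

# toy numbering: s_{i,q} -> 10*q + i, c_j -> 100 + j
H = 2
cls = classification_clauses(H, q=1, positive=True,
                             lit_s=lambda i, q: 10 * q + i,
                             lit_c=lambda j: 100 + j)
```

Each example thus contributes exactly $2^H$ clauses, one per assignment of the selected features.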
We refer to this new {SAT}\ encoding for $P_{bdd}(\mathcal{E}, H)$ as \Bdd2. The encoding size of \Bdd2 is given in Proposition~\ref{prop:model_size_bdd_improved}.
\begin{proposition}
\label{prop:model_size_bdd_improved}
For a $P_{bdd}(\mathcal{E}, H)$ problem with $K$ binary features and $M{}$ examples, the encoding size of the SAT encoding (\Bdd2) is $O(M{} \times H \times (2^H +K))$.
\end{proposition}
\begin{proof}
The term $M{} \times H \times K$ results from constraint (\ref{con:sign_features}).
For constraints~(\ref{con:improved_classify_pos})
and~(\ref{con:improved_classify_neg}), for each example, there are $2^H$ clauses containing $H + 1$ literals.
The term $M{} \times H \times 2^H$ results from that.
\end{proof}
Propositions \ref{prop:model_size_bdd_orig} and \ref{prop:model_size_bdd_improved} show a clear theoretical advantage of \texttt{BDD}{2} compared to \texttt{BDD}{1} in terms of the encoding size, thus scalability.
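To get a feel for the gap between the two bounds, one can plug representative values into the leading terms of Propositions \ref{prop:model_size_bdd_orig} and \ref{prop:model_size_bdd_improved} (the formulas below drop the hidden constants and are only indicative of the asymptotics):

```python
def size_bdd1(M, H, K):
    # leading term of the first encoding: M * 2^H classification
    # clauses, each with O(H * K) literals
    return M * (2 ** H) * H * K

def size_bdd2(M, H, K):
    # leading terms of the improved encoding: M*H*K literals from the
    # sign-feature constraints, plus M * 2^H clauses of H+1 literals
    return M * H * K + M * (2 ** H) * (H + 1)

# representative values: 1000 examples, depth 6, 100 binary features
b1 = size_bdd1(1000, 6, 100)
b2 = size_bdd2(1000, 6, 100)
```

With these values the first encoding is more than an order of magnitude larger, which matches the empirical observations reported in the experimental section.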
\subsection{MaxSAT Model for $P_{{bdd}}^*(\mathcal{E}, H):$}
We now present a {MaxSAT}\ encoding for the optimization problem $P_{bdd}^*(\mathcal{E}, H)$.
That is, given a set of examples $\mathcal{E}$, find a binary decision diagram of depth $H$ that maximises the number of examples correctly classified.
We transform the {SAT}\ encoding of \texttt{BDD}{}s into a {MaxSAT}\ encoding following a simple technique.
The idea is to keep structural constraints as hard clauses and classification constraints as soft clauses.
We consider
\Bdd2 as it has a reduced size.
Constraints (\ref{con:feature_used_atmost_once}), (\ref{con:exact_one_feature}), (\ref{con:root_is_bead}) and~(\ref{con:sign_features}) are kept as hard clauses.
To classify the examples, we declare all clauses of constraints~(\ref{con:improved_classify_pos})
and~(\ref{con:improved_classify_neg}) as soft clauses.
For any example $\atraindataind{q}$, the number of satisfied soft clauses associated to $\atraindataind{q}$ is either $2^H$ (indicating $\atraindataind{q}$ is classified correctly), or $2^H-1$ (indicating $\atraindataind{q}$ is classified wrongly).
Therefore, the objective of maximising the number of satisfied soft clauses is equivalent to maximising the number of examples correctly classified.
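The counting argument can be checked mechanically: for a fixed example, exactly one of the $2^H$ antecedents holds, so every soft clause except possibly one is vacuously satisfied. A minimal sketch (our own illustration, not part of the encoding):

```python
def satisfied_soft_clauses(H, path_index, truth_table, positive):
    """Count the satisfied soft clauses among the 2^H classification
    clauses of one example. Only the clause whose antecedent matches
    the example's path through the selected features can be falsified;
    all other clauses hold vacuously."""
    total = 2 ** H
    value = truth_table[path_index]
    correct = (value == 1) if positive else (value == 0)
    return total if correct else total - 1
```

For instance, with $H=3$ and truth table $00010111$, a positive example reaching position $3$ satisfies all $8$ soft clauses, while a negative example reaching the same position satisfies only $7$.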
\subsection{Merging Compatible Subtrees}
Consider a \texttt{BDD}\ $\mathcal{G}{}$ found by a {MaxSAT}\ solver and its associated truth table $\beta{}$.
Based on the feature ordering of $\mathcal{G}{}$, it is possible that some values in $\beta{}$ capture no (training) example
(equivalent to ``\textit{unknown}'' nodes for \texttt{OODG}{}).
Such values are decided by the {MaxSAT}\ solver in an arbitrary way, which gives a certain bias in generalisation.
We propose to merge compatible subtrees in $\mathcal{G}{}$ in order to handle this bias.
This will result in changing some values in the truth table $\beta{}$ (i.e. the arbitrary ones decided by {MaxSAT}).
We propose a post-processing procedure to merge compatible subtrees using the following three phases:
(1) update the truth table $\beta{}$ by replacing the values of $\beta{}$ that capture no examples
with a special value ``u'';
(2) for each level, check the \textit{beads}{}, where ``u'' can be used to match $1$ or $0$, and create a node for each \textit{bead}{};
(3) for each level, after creating the nodes, check the matches between all subtables of the next level.
For matched subtables, update the corresponding \textit{beads}{} of current level to eliminate the ``u'' values.
This process is illustrated in Example~\ref{exp:compatible_merge}.
\begin{example}
\label{exp:compatible_merge}
Assume that a {MaxSAT}\ model finds the truth table $\beta{}$ $00010111$ with the feature ordering $[f_1, f_2, f_3]$,
and assume that the updated truth table $\beta{'}$ is $u0u1011u$ after phase (1).
For level $1$, we create a root node for $\beta{'}$ as it is a bead.
Then, we check the subtables of $\beta{'}$ ($u0u1$ and $011u$). Since they do not match, we move to the next level.
For level $2$, we create a node for $u0u1$ and a node for $011u$ as they are beads.
Then, we check all subtables of the next level, which are $u0$, $u1$, $01$ and $1u$.
We observe that $u0$ matches $1u$, and $u1$ matches $01$, therefore, the beads $u0u1$ and $011u$ are updated as $1001$ and $0110$.
Finally, the updated beads of $\beta{'}$ are $\{u0u1011u, 1001(u0u1), 0110(011u), 10, 01, 0, 1\}$.
Figure \ref{fig:Ex3_compatible_bdd} shows the beads updated (in the left), the \texttt{BDD}{} before the merging process (in the right) and the final \texttt{BDD}{} found (in the middle).
\end{example}
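The wildcard matching of phases (2) and (3) can be sketched as follows (an illustrative re-implementation of the example above, not the authors' code):

```python
def compatible(a, b):
    """Two subtables match if, position by position, the entries agree
    or at least one of them is the wildcard 'u'."""
    return all(x == y or 'u' in (x, y) for x, y in zip(a, b))

def merge(a, b):
    """Resolve the wildcards of one subtable using the other."""
    return ''.join(y if x == 'u' else x for x, y in zip(a, b))

def update_bead(bead, resolved):
    """Phase (3): rewrite a bead by replacing each of its two halves
    with the merged form found at the next level."""
    half = len(bead) // 2
    return resolved[bead[:half]] + resolved[bead[half:]]

beta = 'u0u1011u'                  # updated truth table after phase (1)
left, right = beta[:4], beta[4:]   # level-2 beads: 'u0u1' and '011u'
# level-3 subtables: 'u0' matches '1u' and 'u1' matches '01'
resolved = {'u0': merge('u0', '1u'), '1u': merge('1u', 'u0'),
            'u1': merge('u1', '01'), '01': merge('01', 'u1')}
```

Running this on the truth table of the example reproduces the updated beads $1001$ and $0110$.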
\input{figures/Ex3_combined}
\section{Experimental Results}
\label{sec:exp}
We present our large experimental studies to evaluate empirically our propositions on different levels\footnote{The source code and datasets are available online at https://gitlab.laas.fr/hhu/bddencoding}.
At first, we make some preliminary experiments
on the proposed {SAT}{}
models to confirm the great improvement in the encoding size of \Bdd2 compared to \Bdd1, as shown theoretically in Propositions \ref{prop:model_size_bdd_orig} and \ref{prop:model_size_bdd_improved}.
Then, we compare the prediction performance of the proposed {MaxSAT}-\texttt{BDD}{} model with that of the heuristic methods, \texttt{ODT}{} and \texttt{OODG}{}~\cite{DBLP:conf/ijcai/KohaviL95}.
Next, we compare our {MaxSAT}-\texttt{BDD}{} model
with an exact method for building decision trees using {MaxSAT}{}~\cite{DBLP:conf/ijcai/Hu0HH20} in {terms} of prediction quality, model size, and encoding size.
Finally, we propose and evaluate
a simple, yet efficient and scalable, heuristic version of our {MaxSAT}-\texttt{BDD}{} model.
We consider
datasets from CP4IM\footnote{https://dtai.cs.kuleuven.be/CP4IM/datasets/}.
These datasets are binarized with the one-hot encoding.
Table \ref{tab:dataset_exp_cp4im} describes
the characteristics of these datasets:
$M$ indicates the number of examples,
$K_{orig}$ indicates the original number of features, $K$ indicates the number of binary features after binarization,
and $pos$ indicates the percentage of positive examples.
All experiments were run on a cluster using Xeon E5-2695 v3@2.30GHz CPU running xUbuntu 16.04.6 LTS.
The {SAT}{} solver we used is Kissat~\cite{BiereFazekasFleuryHeisinger-SAT-Competition-2020-solvers}, the winner of {SAT}{} competition 2020.
For each experiment of {SAT}{} encoding, we set $20$ hours as the global timeout for {SAT}{} solver.
The {MaxSAT}\ solver we used is Loandra~\cite{DBLP:conf/cpaior/BergDS19},
an efficient incomplete {MaxSAT}\ solver.
For each experiment of {MaxSAT}-\texttt{BDD}{}, the time limit for generating formulas and the time limit for solver are set to $15$ minutes.
\input{tables/Exp0_dataset_info}
\subsection{Comparison Of The SAT Encodings}
We consider the optimisation problem of finding a \texttt{BDD}{} that classifies all training examples correctly with the minimum depth.
We use a simple linear search by solving multiple times the decision problem asking to find a \texttt{BDD}{} with a given depth $H$ (Problem $P_{bdd}(\mathcal{E}, H)$ in Section \ref{sec:Sat-models}).
We set the initial depth $H_0=7$.
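The linear search can be sketched as follows, assuming a hypothetical oracle \texttt{solve(H)} that returns whether a depth-$H$ \texttt{BDD}{} classifying all training examples exists:

```python
def minimum_depth(solve, H0=7):
    """Linear search for the smallest H such that solve(H) holds.
    Starting from the initial guess H0, walk down while smaller
    depths remain feasible, or up until feasibility is reached."""
    H = H0
    if solve(H):
        while H > 1 and solve(H - 1):
            H -= 1
    else:
        while not solve(H):
            H += 1
    return H

# toy oracle standing in for the SAT solver: feasible iff H >= 4
opt = minimum_depth(lambda H: H >= 4)
```

In the experiments each call to \texttt{solve} corresponds to one run of the {SAT}{} solver on the decision problem $P_{bdd}(\mathcal{E}, H)$.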
Considering the scalability problem, for each dataset, we use the hold-out method to split the training and testing set.
We choose $5$ different small splitting ratios $r=\{0.05, 0.1, 0.15, 0.2, 0.25\}$ to generate the training set. The remaining instances are used for testing.
This process is repeated $10$ times with different random seeds.
Table \ref{tab:annex1_satbdd_diff} reports the average results of instances that are solved to optimality by all methods within the given time.
The columns ``Acc'' and ``dopt'' indicate the average testing accuracy in percent and the average optimal depth, respectively.
The encoding size is given in column ``E\_Size'' (i.e., the number of literals in the cnf file divided by $10^3$). The column ``Time'' indicates the runtime in seconds of successful runs.
The value ``N/A'' indicates the lack of results because of the timeout.
The best values are indicated in blue.
Table \ref{tab:annex1_satbdd_diff} shows the great improvements in terms of the encoding size and the runtime of \texttt{BDD}{2} compared to \texttt{BDD}{1}.
This empirical observation is coherent with the complexity analysis made in Proposition \ref{prop:model_size_bdd_orig} and \ref{prop:model_size_bdd_improved}.
We also observe no substantial difference in terms of testing accuracy between the two approaches.
\input{tables/Annex1_satbdd_diff}
\begin{figure*}[htb!]
\centering
\includegraphics[width=.8\linewidth, height=.28\linewidth]{figures/Exp2_boxplot_all_oodg_odt_maxsatbdd.png}
\captionof{figure}{
\footnotesize{
The average testing accuracy with different biases: \textbf{MaxSAT BDD-P}, \textbf{MaxSAT BDD-C}, \textbf{MaxSAT BDD-S}, \texttt{ODT}{}, \texttt{OODG}{} (respectively from left to right).
}}
\label{fig:exp2_oodg_maxsatbdd_all}
\end{figure*}
\subsection{Comparison with Existing Heuristic Approaches}
We consider the proposed {MaxSAT}-\texttt{BDD}{} model for solving the $P_{{bdd}}^*(\mathcal{E}, H)$ problem (defined in Section~\ref{sec:problemdefinition}) with $5$ different depths $H \in \{2, 3, 4, 5, 6\}$.
For each dataset, we use random $5$-fold cross-validation with $5$ different seeds.
We compare our {MaxSAT}-\texttt{BDD}{} model with the heuristic approaches proposed in \cite{DBLP:conf/ijcai/KohaviL95} to learn \texttt{ODT}{} and \texttt{OODG}{}.
For the heuristic methods, as described in the background section, after merging the \textit{isomorphic} and \textit{compatible} subtrees of \texttt{ODT}{},
the
\texttt{OODG}{}
changes the bias for those ``\textit{unknown}'' nodes.
In fact, a different bias affects the prediction for unseen examples, but \textit{not} for the training examples.
Therefore, the training accuracies of \texttt{ODT}{} and \texttt{OODG}{} are equal
whereas the testing accuracies could be different.
This fact is also true in the {MaxSAT}-\texttt{BDD}{} model.
In this experiment, we consider the following three biases:
\begin{itemize}
\item By assigning to each unknown node the majority class of its branch
(denoted as \textbf{MaxSAT BDD-P})
\item By merging compatible subtrees (\textbf{MaxSAT BDD-C})
\item By using the class decided by the {MaxSAT}\ solver (\textbf{MaxSAT BDD-S}).
\end{itemize}
\begin{figure}[h!]
\centering
\includegraphics[scale=.18]
{figures/Exp2_scatter_training_maxsatbdd_oodg_size_045.png}
\captionof{figure}{\footnotesize{Comparison between the average training accuracy of \texttt{OODG}{} and the {MaxSAT}-\texttt{BDD}{}}}
\label{fig:OODGvsMaxSAT}
\end{figure}
Figure~\ref{fig:OODGvsMaxSAT} presents the comparison of the average training accuracy between \texttt{OODG}{} and {MaxSAT}-\texttt{BDD}{} model.
In this figure,
different datasets are marked with different colors, and different depths are labelled with points of different sizes.
From the scatter {plot}, we observe that the average training accuracy of both approaches increases with the depth.
{Moreover, and more importantly,} the {MaxSAT}-\texttt{BDD}{} model performs better than the {heuristic} \texttt{OODG}{} in training accuracy.
Figure~\ref{fig:exp2_oodg_maxsatbdd_all} shows the average testing accuracy of {MaxSAT}-\texttt{BDD}{} with different biases, \texttt{ODT}{}, and \texttt{OODG}{} using different depths averaged over all datasets.
The white line and green triangle of each box indicate the median and the average value, {respectively}.
Clearly,
the {MaxSAT}-\texttt{BDD}{} models have better prediction performance than \texttt{ODT}{} and \texttt{OODG}{}. This is particularly true with small depths.
Increasing the depth improves the prediction performance of all methods, as expected.
However, the improvement is smaller for the different {MaxSAT}-\texttt{BDD}{} models.
We observe also that there is little difference between the different biases for {MaxSAT}-\texttt{BDD}{}. This suggests that the optimal solutions are somewhat robust to the bias.
We noticed also that for all datasets (except one), all {MaxSAT}-\texttt{BDD}{} models report optimality when the depth is equal to $2$.
\subsection{Comparison with an Exact Decision Tree Approach}
The purpose of this experiment is to
{compare} our proposition
with the exact method for learning decision trees using the same solving approach ({MaxSAT}).
For {MaxSAT}-\texttt{BDD}{},
we consider only the bias of merging compatible subtrees \textbf{MaxSAT BDD-C} since no substantial difference was observed between the different biases.
We consider different values for the depth: $H \in \{2, 3, 4, 5, 6\}$.
For each dataset, we use random $5$-fold cross-validation with $5$ different seeds.
For {MaxSAT}-\texttt{BDD}{}, the depth also corresponds to the number of selected features,
whereas for {MaxSAT}-\texttt{DT}{}
the depth indicates the \textit{maximum depth} of the decision tree.
Table \ref{tab:exp3_maxsatbdd_maxsatdt_all}
presents the results of the evaluation.
The columns ``Size'' and ``E\_Size'' indicate the number of nodes of the model and the encoding size (number of literals divided by $10^3$), respectively.
The column ``F\_d'' indicates the average number of features used in the decision tree.
The best values are indicated in blue.
The results in Table \ref{tab:exp3_maxsatbdd_maxsatdt_all}
show that the {MaxSAT}-\texttt{BDD}{} approach is competitive to {MaxSAT}-\texttt{DT}{} in terms of prediction quality.
In most cases, the training and testing accuracy of these two approaches are close.
However, the size of the models is always smaller with {MaxSAT}-\texttt{BDD}{}.
The difference grows bigger when the depth increases.
The reduction in model size provides better interpretability.
Moreover, sometimes, compared to the optimal \texttt{BDD}{}s found via {MaxSAT}-\texttt{BDD}{}, the optimal decision trees found via {MaxSAT}-\texttt{DT}{} use useless splits.
This is, for instance, the case for the datasets ``\textit{car}'' and ``\textit{hypothyroid}'' with depth $2$.
We observe also that {MaxSAT}-\texttt{BDD}{} always has a {much} lighter encoding size than {MaxSAT}-\texttt{DT}{}. This gives a clear advantage to {MaxSAT}-\texttt{BDD}{} to handle the problem and to report optimality.
\input{tables/Exp3_maxsatbdd_maxsatdt}
\subsection{Evaluation of a Heuristic {MaxSAT}-\texttt{BDD}{} method}
To increase the scalability of our model, we propose a simple heuristic version of {MaxSAT}-\texttt{BDD}{}.
The idea is to perform a pre-processing step that chooses a subset of (important) features to be used exclusively in the {MaxSAT}-\texttt{BDD}{} model.
By doing this, the search space is greatly reduced by focusing only on the selected features.
We chose to run $\texttt{CART}{}$~\cite{DBLP:books/wa/BreimanFOS84} (an efficient and scalable heuristic for learning decision trees) to quickly build a decision tree.
The features selected in our heuristic method are the ones used in the decision tree found by $\texttt{CART}{}$.
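The feature-collection step can be sketched as follows; here a small nested-dict tree stands in for the output of \texttt{CART}{} (the representation is hypothetical, chosen only to keep the sketch self-contained):

```python
def used_features(node, acc=None):
    """Collect the indices of the features tested by the internal
    nodes of a decision tree given as nested dicts; leaves are
    plain class labels."""
    if acc is None:
        acc = set()
    if isinstance(node, dict):
        acc.add(node['feature'])
        used_features(node['left'], acc)
        used_features(node['right'], acc)
    return acc

# toy tree: splits on features 2 and 0, leaves are class labels
tree = {'feature': 2,
        'left': {'feature': 0, 'left': 1, 'right': 0},
        'right': 1}
```

Only the features appearing in this set are kept as candidates for the {MaxSAT}-\texttt{BDD}{} encoding, which shrinks the search space accordingly.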
{This experimental study follows the same protocol as the previous one.}
{The results are detailed in Figure~\ref{fig:exp4_combined} and in Table~\ref{tab:annex2_cart_hmaxsatbdd_maxsatbdd}}.
As expected, the size of the encoding of the model is in favor of the heuristic approach. This is shown in the two columns \textbf{E\_size} of Table~\ref{tab:annex2_cart_hmaxsatbdd_maxsatbdd}.
This advantage gives our approach the key to handle larger problems.
Column \textbf{Opt} of Table~\ref{tab:annex2_cart_hmaxsatbdd_maxsatbdd} indicates the percentage of instances solved to optimality.
It should be noted that proving optimality is much easier with the heuristic approach since the problem is naturally easier to solve with fewer features.
The results of the average testing accuracy are shown in the left scatter of Figure~\ref{fig:exp4_combined}.
Our heuristic approach is clearly very competitive with the exact {MaxSAT}-\texttt{BDD}{} in terms of learning generalization.
This is particularly true for datasets with a large number of features.
Indeed, the proposed heuristic approach obtains better prediction performance than the exact one within the same limited resources (time and memory).
The middle and right scatters of Figure \ref{fig:exp4_combined} show the comparison of the average training and testing accuracy between the heuristic approach and \texttt{CART}{}.
It is clear that \texttt{CART}{} almost always achieves better training accuracy.
However, the heuristic {MaxSAT}-\texttt{BDD}{} is still competitive in terms of generalisation.
\begin{figure*}[ht!]
\centering
\begin{minipage}{0.31\linewidth}
\centering
\includegraphics[width=.8\linewidth, height=.7\linewidth]
{figures/Exp4_scatter_testing_maxsatbdd_hmaxsatbdd_size_0575.png}
\end{minipage}\hfill
\begin{minipage}{0.31\linewidth}
\centering
\includegraphics[width=.8\linewidth, height=.7\linewidth]
{figures/Annex2_scatter_training_cart_hmaxsatbdd_size.png}
\end{minipage}\hfill
\begin{minipage}{0.31\linewidth}
\centering
\includegraphics[width=.8\linewidth, height=.7\linewidth]
{figures/Annex2_scatter_testing_cart_hmaxsatbdd_size_065.png}
\end{minipage}
\caption{{Left: comparison of the testing accuracy of Heuristic {MaxSAT}-\texttt{BDD}{} and Exact {MaxSAT}-\texttt{BDD}{}. Middle: comparison of the training accuracy of Heuristic {MaxSAT}-\texttt{BDD}{} and \texttt{CART}{}. Right: comparison of the testing accuracy of Heuristic {MaxSAT}-\texttt{BDD}{} and \texttt{CART}{}}}
\label{fig:exp4_combined}
\end{figure*}
\input{tables/Annex2_cart_hmaxsatbdd_maxsatbdd}
\section{Conclusion}
We propose exact and heuristic methods for optimizing binary decision diagrams (\texttt{BDD}{}s) based on the (Maximum) Boolean Satisfiability framework.
Our large experimental studies show clear benefits of the proposed approach in terms of prediction quality and interpretability (compact topology) compared to the existing (heuristic) approaches.
In the future, it would be interesting to extend the proposed approach for multi-valued classification.
Moreover, a deeper investigation of \texttt{BDD}{}s with other interpretable models (such as decision rules and decision sets) is needed for the sake of explainable AI.
\section{Introduction}
The study of group actions on metric spaces is a broad topic in which one studies the interplay between the group structure and the structure of the metric space on which the group acts. When considering group actions on Hilbert spaces, property (FH) is an important notion which is defined as follows: a group $G$ has property (FH) if every isometric action of $G$ on a Hilbert space admits a fixed point. Property (FH) can be rephrased in a cohomological language as follows: a group has property (FH) if and only if $H^1 (G,\pi)=0$ for any unitary representation of $G$ on a Hilbert space (the proof of this fact can be found for instance in \cite{PropTBook}[Lemma 2.2.6]).
Recently there has been much interest in studying the generalization of property (FH) to group actions on Banach spaces (see \cite{Nowak2} for a survey of recent developments regarding this question). In order to state a generalization of property (FH) in the Banach setting, we recall the following facts taken from \cite{Nowak2}[Section 2.2]:
\begin{itemize}
\item Any affine action $A$ on a Banach space $X$ is of the form $Ax = Tx+b$ where $T$ is a linear map and $b \in X$.
\item As a result of the previous fact, if $\rho$ defines an affine action of $G$ on $X$, then
$$\forall g \in G, \rho (g).x = \pi (g).x + b(g),$$
where $\pi : G \rightarrow B(X)$ is a linear representation of $G$ on $X$ called the linear part of $\rho$ and $b:G \rightarrow X$ is a map which satisfies the cocycle condition:
$$\forall g,h \in G, b(gh) = \pi (g) b(h) + b(g).$$
\item For a group $G$ and a linear representation $\pi$ on a Banach space $X$, $H^1 (G,\pi)=0$ if and only if any affine action with a linear part $\pi$ admits a fixed point.
\end{itemize}
Thus the vanishing of the first cohomology reflects a rigidity phenomenon. In this article, we will explore a generalization of this phenomenon and we will prove vanishing of higher cohomologies for groups acting on simplicial complexes. The idea is that given a ``nice enough'' group action on a simplicial complex $\Sigma$, one can show vanishing of cohomology with coefficients in representations on Banach spaces under suitable assumption on the norm growth of the action and on the geometry of the simplicial complex. This is done by using the interplay between the geometry of the Banach space and the geometry of the simplicial complex as it is reflected in the angles between couples of subgroups of $G$ stabilizing top-dimensional simplices in the simplicial complex (see definition below).
The definition of angle between subgroups in the Hilbert setting is as follows: let $G$ be a group, let $\pi$ be a unitary representation of $G$ on a Hilbert space $H$ and let $K_1, K_2 < G$ be subgroups of $G$. The angle between $K_1$ and $K_2$ with respect to $\pi$ is defined as the (Friedrichs) angle between $H^{\pi (K_1)}$ and $H^{\pi (K_2)}$. The angle between $K_1$ and $K_2$ is then defined as the supremum with respect to all unitary representations of $G$.
This idea of angle was used in the work of Dymara and Januszkiewicz \cite{DymaraJ} to prove property (T) (which is equivalent to property (FH) in this setting) and vanishing of higher cohomologies with coefficients in representations on Hilbert spaces for groups acting on simplicial complexes. Dymara and Januszkiewicz further showed how to bound the angle between the two subgroups using the spectral gap of the Laplacian on a graph generated by these subgroups.
At first glance, this idea seems very much related to the so-called ``geometrization of property (T)'' (this term was coined by Shalom \cite{Shalom}), since it uses the spectral gap of a Laplacian to deduce property (T) in a way similar to Zuk's famous criterion for property (T) (see \cite{Zuk}, \cite{BallmannS}). However, at its core, the idea of angle between subgroups is much stronger than Zuk's criterion, because it better captures the behaviour of the group $G$. In \cite{ORobust} the author generalized this idea of angle to the setting of Banach spaces, considering the angle between projections instead of the angle between subspaces. This new notion of angle was used by the author to show a strengthened version of Banach property (T) for a large class of Banach spaces. This in turn implies the vanishing of the first group cohomology with coefficients in the isometric representations on this class of Banach spaces.
The aim of this paper is to generalize the vanishing of cohomologies theorem of Dymara and Januszkiewicz in \cite{DymaraJ} to coefficients in representations on Banach spaces. A major problem with transferring the results of Dymara and Januszkiewicz to the Banach space setting was that the angle computations of \cite{DymaraJ}[section 4] heavily relied on the idea that in Hilbert spaces the angle between two subspaces is equal to the angle between their orthogonal complements. However, this idea of computing the angle by passing to the orthogonal complement does not seem to work with our definition of angle between projections.
The technical heart of this paper is devoted to attaining results regarding angles between projections in Banach spaces that are similar to the results of Dymara and Januszkiewicz (but without passing to the orthogonal complement). In order to attain these results, we first explore the idea of angle between more than two projections (this was inspired by the ideas of Kassabov in \cite{Kassabov}).
After obtaining these technical results, the vanishing theorem can be reproved for coefficients in representations on Banach spaces by the same arguments given in \cite{DymaraJ}.
In order to apply these results in concrete examples (such as groups coming from a BN-pair), we need to bound angles between pairs of subgroups $K_1,K_2 <G$ with respect to representations on Banach spaces. Given a pair of subgroups $K_1,K_2 <G$, this is done by bounding the angle between these subgroups in the Hilbert setting and then (if this angle is large enough to begin with) using this bound in order to get a bound on the angles between these subgroups with respect to representations on Banach spaces that are ``close enough'' to a Hilbert space. Being ``close enough'' to a Hilbert space involves a several-step process of deforming a Hilbert space that will be explained in detail below.
\subsection{Deformations of Hilbert spaces}
We will consider Banach spaces which are deformations of Hilbert spaces. In order to explain which deformations we consider, we need to introduce several ideas from the theory of Banach spaces.
\subsubsection{The Banach-Mazur distance and Banach spaces with ``round'' subspaces}
The Banach-Mazur distance measures a distance between isomorphic Banach spaces:
\begin{definition}
Let $Y_1, Y_2$ be two isomorphic Banach spaces. The (multiplicative) Banach-Mazur distance between $Y_1$ and $Y_2$ is defined as
$$d_{BM} (Y_1, Y_2) = \inf \lbrace \Vert T \Vert \Vert T^{-1} \Vert : T : Y_1 \rightarrow Y_2 \text{ is a linear isomorphism} \rbrace.$$
\end{definition}
This distance has a multiplicative triangle inequality (the proof is left as an exercise to the reader):
\begin{proposition}
Let $Y_1,Y_2,Y_3$ be isomorphic Banach spaces. Then
$$d_{BM}(Y_1,Y_3) \leq d_{BM}(Y_1,Y_2) d_{BM}(Y_2,Y_3).$$
\end{proposition}
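Though the proof is left as an exercise, the underlying computation can be sketched in one line (with $T_1 : Y_1 \rightarrow Y_2$ and $T_2 : Y_2 \rightarrow Y_3$ arbitrary isomorphisms):

```latex
\[
d_{BM}(Y_1, Y_3) \leq \Vert T_2 T_1 \Vert \, \Vert (T_2 T_1)^{-1} \Vert
\leq \Vert T_1 \Vert \, \Vert T_1^{-1} \Vert \, \Vert T_2 \Vert \, \Vert T_2^{-1} \Vert,
\]
```

and taking the infimum over all such $T_1$ and $T_2$ yields the claimed inequality.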
We will be especially interested in the Banach-Mazur distance between $n$-dimensional Banach spaces and $\ell_2^n$. A classical theorem by F. John \cite{John} states that for every $n$-dimensional Banach space $Y$, $d_{BM} (Y, \ell_2^n) \leq \sqrt{n}$, and the classical cases in which this inequality is an equality are $\ell_1^n$ and $\ell_\infty^n$. Later, Milman and Wolfson \cite{MilmanWolfson} proved that these classical cases are in some sense generic: \cite{MilmanWolfson}[Theorem 1] states that if $d_{BM} (Y, \ell_2^n) = \sqrt{n}$, then there is $k\geq \frac{\ln (n)}{2 \ln (12)}$ such that $Y$ contains a $k$-dimensional subspace isometric to $\ell_1^k$.
In this paper we will concern ourselves with Banach spaces whose finite dimensional subspaces are sufficiently ``round'', i.e., sufficiently close to $\ell_2$-spaces. Given a Banach space $X$ and a constant $k \in \mathbb{N}$, we use the following notation $d_k (X)$ taken from the work of de Laat and de la Salle \cite{LaatSalle2}:
$$d_k (X) = \sup \lbrace d_{BM} (Y, \ell_2^{\dim (Y)}) : Y \subseteq X, \dim (Y) \leq k \rbrace.$$
We further introduce the following notation: given a constant $r \geq 2$ and a constant $C_1 \geq 1$, we denote $\mathcal{E}_1 (r,C_1)$ to be the class of Banach spaces defined as follows:
\begin{align*}
\mathcal{E}_1 (r, C_1 ) =
\lbrace X : \forall k \in \mathbb{N}, d_k (X) \leq C_1 k^{\frac{1}{r}} \rbrace.
\end{align*}
The reader should note that for every choice of $r \geq 2, C_1 \geq 1$, the class $\mathcal{E}_1 (r, C_1 )$ always contains the class of all Hilbert spaces, since for every Hilbert space $H$ we have that $d_k (H) =1$ for every $k$.
An example of Banach spaces contained in $\mathcal{E}_1 (r,C_1)$ are spaces of bounded type and cotype. The definitions of type and cotype are given in the background section below, but for our uses, it is sufficient to state the following theorem due to Tomczak-Jaegermann \cite{Tomczak-Jaegermann}[Theorem 2 and the corollary after it]: if $X$ is a Banach space of type $p_1$, cotype $p_2$ and corresponding constants $T_{p_1} (X)$, $C_{p_2} (X)$ (see definitions below), then $d_k (X) \leq 4 T_{p_1} (X) C_{p_2} (X) k^{\frac{1}{p_1} - \frac{1}{p_2}}$.
This theorem yields that for every $r>2$ and every $C_1 \geq 1$, the class $\mathcal{E}_1 (r,C_1 )$ contains all Banach spaces $X$ with type $p_1$, cotype $p_2$ and corresponding constants $T_{p_1} (X)$, $C_{p_2} (X)$ such that $\frac{1}{p_1} - \frac{1}{p_2} \leq \frac{1}{r}$ and $4 T_{p_1} (X) C_{p_2} (X) \leq C_1$.
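As a concrete illustration (a worked example of our own, using the standard fact that $L^p$ for $2 \leq p < \infty$ has type $2$ and cotype $p$), the Tomczak-Jaegermann bound specializes as follows:

```latex
% A worked instance of the bound above (a sketch, not taken from the cited
% sources): for 2 <= p < \infty, L^p has type 2 and cotype p, so
\[
d_k (L^p) \leq 4\, T_2 (L^p)\, C_p (L^p)\, k^{\frac{1}{2} - \frac{1}{p}},
\]
% and therefore L^p \in \mathcal{E}_1 (r, C_1) whenever
\[
\frac{1}{2} - \frac{1}{p} \leq \frac{1}{r}
\quad \Longleftrightarrow \quad
p \leq \frac{2r}{r-2},
\qquad \text{and} \qquad
4\, T_2 (L^p)\, C_p (L^p) \leq C_1 .
\]
```

For instance, taking $r = 20$ in the first condition allows every $2 \leq p \leq \frac{40}{18} = \frac{20}{9}$.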
\subsubsection{Interpolation}
Two Banach spaces $X_0, X_1$ form a compatible pair $(X_0,X_1)$ if there is a continuous linear embedding of both $X_0$ and $X_1$ in the same topological vector space. The idea of complex interpolation is that given a compatible pair $(X_0,X_1)$ and a constant $0 < \theta < 1$, there is a method to produce a new Banach space $[X_0, X_1]_\theta$ as a ``combination'' of $X_0$ and $X_1$. We will not review this method here, and the interested reader can find more information on interpolation in \cite{InterpolationSpaces}.
We will introduce the following notation: let $\mathcal{E}$ be a class of Banach spaces and let $0 < \theta_2 \leq 1$ be a constant. Denote by $\mathcal{E}_2 (\mathcal{E}, \theta_2)$ the class of Banach spaces defined as follows:
\begin{align*}
\mathcal{E}_2 (\mathcal{E},\theta_2)= \lbrace X : \exists X_1 \in \mathcal{E}, \exists X_0 \text{ Banach}, \exists \theta_2 \leq \theta \leq 1 \text{ such that } X=[X_0,X_1]_\theta \rbrace.
\end{align*}
We will be interested in composing this definition with $\mathcal{E}_1 (r,C_1)$ defined above and considering $\mathcal{E}_2 (\mathcal{E}_1 (r,C_1),\theta_2)$.
As noted above, $\mathcal{E}_1 (r,C_1)$ contains the class of all Hilbert spaces. This brings us to consider the following definition due to Pisier \cite{Pisier}: a Banach space $X$ is called strictly $\theta$-Hilbertian for $0 < \theta \leq 1$, if there is a compatible pair $(X_0,X_1)$ such that $X_1$ is a Hilbert space and $X = [X_0, X_1]_\theta$. Examples of strictly $\theta$-Hilbertian spaces are $L^p$ spaces and non-commutative $L^p$ spaces, where in these cases $\theta = \frac{2}{p}$ if $2 \leq p < \infty $ and $\theta = 2 - \frac{2}{p}$ if $1 < p \leq 2$ (a reader who is not familiar with non-commutative $L^p$ spaces can find a detailed account in \cite{PisierXu2}).
Another source of examples for strictly $\theta$-Hilbertian spaces are superreflexive Banach lattices. Recall that a Banach space $X$ is called uniformly convex if
$$\sup \left\lbrace \left\Vert \frac{x+y}{2} \right\Vert : x,y \in X, \Vert x \Vert = \Vert y \Vert =1, \Vert x - y \Vert \geq \varepsilon \right\rbrace <1 \text{ for every } \varepsilon >0.$$
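To make the definition concrete, here is a small numerical sketch (our own illustration, not part of the argument) contrasting $\ell_1^2$, which fails uniform convexity, with $\ell_2^2$:

```python
# Sketch: probe the supremum in the definition at the concrete unit vectors
# x = (1, 0) and y = (0, 1) of R^2.
def norm_l1(v):
    return sum(abs(c) for c in v)

def norm_l2(v):
    return sum(c * c for c in v) ** 0.5

def midpoint_norm(norm, x, y):
    return norm([(a + b) / 2 for a, b in zip(x, y)])

x, y = (1.0, 0.0), (0.0, 1.0)

# In l_1^2: x, y are unit vectors with ||x - y||_1 = 2, yet the midpoint still
# has norm 1, so the supremum equals 1 and l_1^2 is NOT uniformly convex.
print(midpoint_norm(norm_l1, x, y))   # 1.0

# In l_2^2 the parallelogram law forces ||(x+y)/2||^2 = 1 - ||x-y||^2/4 < 1
# whenever ||x - y|| >= eps > 0, so l_2^2 IS uniformly convex.
print(midpoint_norm(norm_l2, x, y))   # ~0.7071
```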
Further recall that a Banach space $X$ is called superreflexive if all its ultrapowers are reflexive, which is equivalent by \cite{BenLin}[Theorem A.6] to $X$ being isomorphic to a uniformly convex space. A Banach lattice is a Banach space with a ``well-behaved'' partial order on it; the definition is rather technical and we will not recall it here (for the exact definition of a Banach lattice and further properties, the reader is referred to \cite{BenLin}[Appendix G]).
Pisier \cite{Pisier} proved that any superreflexive Banach lattice is strictly $\theta$-Hilbertian and suggested that this result might be true even for superreflexive Banach spaces which are not Banach lattices.
\subsubsection{Passing to an isomorphic space}
The last deformation we want to consider is passing to an isomorphic space. We introduce the following notation: let $\mathcal{E}$ be a class of Banach spaces and let $C_3 \geq 1$ be a constant. Denote by $\mathcal{E}_3 (\mathcal{E},C_3)$ the class of Banach spaces defined as
\begin{align*}
\mathcal{E}_3 (\mathcal{E}, C_3) = \lbrace X : \exists X' \in \mathcal{E} \text{ such that } d_{BM} (X,X') \leq C_3 \rbrace.
\end{align*}
\subsubsection{Passing to the closure}
Our criterion for vanishing of cohomology relies on geometric properties of a Banach space that are stable under certain operations. Therefore, we can enlarge our Banach class by passing to the closure under these operations: for a class of Banach spaces $\mathcal{E}$, denote by $\overline{\mathcal{E}}$ the smallest class of Banach spaces containing $\mathcal{E}$ that is closed under passing to quotients, subspaces, $l_2$-sums and ultraproducts.
\subsubsection{Composing the deformations}
The class of Banach spaces we will want to consider is the composition of all the deformations described above, i.e., we start with Hilbert spaces, use $\mathcal{E}_1 (r, C_1)$ to consider deformations of them, apply interpolation to that class, then pass to isomorphic spaces with bounded Banach-Mazur distance and finish by passing to the closure.
To put it all together, we start with constants $r \geq 2$, $C_1 \geq 1$, $1 \geq \theta_2 >0$ and $C_3 \geq 1$ and consider the class $\overline{\mathcal{E}_3 (\mathcal{E}_2 (\mathcal{E}_1 (r,C_1),\theta_2),C_3)}$.
\subsection{The main theorem for BN-pair groups}
Following Dymara and Januszkiewicz, our vanishing of cohomology results hold for groups acting on simplicial complexes given that certain conditions are fulfilled (conditions $(\mathcal{B}1)-(\mathcal{B} 4)$ and $(\mathcal{B}_{\delta, r})$ stated below). However, currently, our only examples of groups acting on complexes satisfying these conditions are groups with a BN-pair (e.g., classical BN-pair groups acting on Euclidean buildings or 2-spherical Kac-Moody groups). Therefore, in this introduction we will state our main result only for BN-pair groups (the more general Theorem \ref{vanishing of cohomology by conditions on the links and cont dual assumption} is given below).
In order to state the main theorem, we recall some generalities regarding BN-pair groups (a reader not familiar with BN-pair groups can find an extensive treatment of this subject in \cite{BuildingsBook}[Chapter 6]) and introduce a few notations regarding representations.
Let $G$ be a BN-pair group and let $\Sigma$ be the $n$-dimensional building on which it acts. Then $G$ acts on $\Sigma$ cocompactly and $\triangle = \Sigma / G$ is a single chamber of $\Sigma$. We assume that $n >1$, i.e., that $\Sigma$ is not a tree, and denote by $\triangle (k)$ the $k$-dimensional faces of $\Sigma / G$. We assume further that there is some $l \in \mathbb{N}$ such that all the $l$-dimensional links of $\Sigma$ are compact. By this assumption, for every $\tau \in \triangle (n-2)$, the isotropy group $G_\tau = \operatorname{Stab} (\tau)$ is compact and $G$ is generated by $\bigcup_{\tau \in \triangle (n-2)} G_\tau$. Let $\mathcal{E}$ be a class of Banach spaces and let $s_0 >0$ be a constant. Denote $\mathcal{F} (\mathcal{E}, G, s_0)$ to be the class of all continuous representations $(\pi,X)$ of $G$ such that $X \in \mathcal{E}$ and
$$\sup_{g \in \bigcup_{\tau \in \triangle (n-2)} G_\tau} \Vert \pi (g) \Vert \leq e^{s_0}.$$
Note that $\mathcal{F} (\mathcal{E}, G, s_0)$ contains all the isometric representations of $G$ on some $X \in \mathcal{E}$, but it also contains representations which are not uniformly bounded. Indeed, if $G$ is taken with the word norm $\vert . \vert$ with respect to $\bigcup_{\tau \in \triangle (n-2)} G_\tau$, then $\mathcal{F} (\mathcal{E}, G, s_0)$ contains all the representations $\pi$ such that $\Vert \pi (g) \Vert \leq e^{s_0 \vert g \vert}$ for every $g \in G$. Denote further $\mathcal{F}_0 (\mathcal{E}, G, s_0)$ to be
$$\mathcal{F}_0 (\mathcal{E}, G, s_0) = \lbrace \pi \in \mathcal{F} (\mathcal{E}, G, s_0) : \pi^* \text{ is a continuous representation} \rbrace,$$
where $\pi^*$ is the dual representation of $\pi$.
After all these notations and definitions, we are ready to state our main theorem:
\begin{theorem*}
Let $G$ be a group coming from a BN-pair and let $\Sigma$ be the $n$-dimensional building on which it acts. Assume that $n >1$ and there is some $l \in \mathbb{N}$ such that all the $l$-dimensional links of $\Sigma$ are compact. Denote by $q+1$ the thickness of the building $\Sigma$.
Let $r>20$, $C_1 \geq 1$, $1 \geq \theta_2 >0$, $C_3 \geq 1$ be constants. Then there are constants $s_0 = s_0 (n)$ and $Q = Q(n, C_1 , \theta_2, C_3)$ such that if $q \geq Q$, then for every
$$\pi \in \mathcal{F}_0 (\overline{\mathcal{E}_3 (\mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2),C_3)}, G, s_0),$$
we have that
$$H^i (G,\pi) = 0, i=1,...,l.$$
\end{theorem*}
\begin{remark}
In the above theorem, when considering $\mathcal{E}_1 (r, C_1)$ we took $r>20$. In many cases this choice can be improved, i.e., $r$ can be taken to be smaller, if the codimension $1$ links of the building $\Sigma$ are known. For instance, if $\Sigma$ is known to be an $\widetilde{A}_n$ building, then the same theorem holds with $r>4$. The precise statement of this fact is given in Corollary \ref{vanishing of cohomology for BN pair groups}.
\end{remark}
\subsection{Examples of Banach spaces for which the theorem holds}
In the main theorem above, we considered only representations whose dual is continuous. This might seem to be a major restriction, but we will show below that the class of representations that we are considering is still very rich. We will do so by showing that there are interesting examples (families) of Banach spaces in $\overline{\mathcal{E}_3 (\mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2),C_3)}$ with $r>20$ for which every continuous representation has a continuous dual.
Indeed, \cite{Megre}[Corollary 6.9] states that if $X$ is an Asplund Banach space, then for every continuous representation $\pi$, the dual representation $\pi^*$ is also continuous. The exact definition of Asplund spaces is given in the next section (along with a good reference regarding these spaces), but for our needs, it is enough to recall that any reflexive Banach space is an Asplund space. Using this fact, we will show that $\overline{\mathcal{E}_3 (\mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2),C_3)}$ contains many interesting reflexive spaces.
First, for a Banach space $X$, we recall that $X$ is called uniformly non-square if there is some $\varepsilon >0$ such that for every $x, y$ in the unit ball of $X$, $\min \lbrace \Vert \frac{x+y}{2} \Vert, \Vert \frac{x-y}{2} \Vert \rbrace \leq 1- \varepsilon$. James \cite{James}[Theorem 1.1] showed that every uniformly non-square space is reflexive. An easy exercise shows that if $d_2 (X) < \sqrt{2}$, then $X$ is uniformly non-square. Therefore, for every $X \in \mathcal{E}_1 (r, C_1)$, if $d_2 (X) < \sqrt{2}$, then $X$ is reflexive, i.e., every $X \in \mathcal{E}_1 (r, C_1)$ whose $2$-dimensional subspaces are not too distorted is a reflexive space.
Second, since $\mathcal{E}_1 (r, C_1)$ contains all Hilbert spaces, we have that $\mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2)$ contains all strictly $\theta$-Hilbertian spaces with $\theta \geq \theta_2$. As noted above, this includes $L^p$ spaces and non-commutative $L^p$ spaces with $\frac{2}{2-\theta_2} \leq p \leq \frac{2}{\theta_2}$. By \cite{PisierXu2}[Theorem 5.1] these spaces are uniformly convex and therefore superreflexive (hence reflexive). Moreover, $\mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2)$ includes a subclass of the class of superreflexive Banach lattices (as noted above, for any superreflexive Banach lattice $X$, there is $\theta >0$ such that $X$ is strictly $\theta$-Hilbertian).
Third, reflexivity of Banach spaces is preserved under isomorphism and therefore $\mathcal{E}_3 (\mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2),C_3)$ contains the spaces isomorphic (with Banach-Mazur distance at most $C_3$) to the reflexive Banach spaces contained in $\mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2)$.
Last, reflexivity is preserved under passing to a closed subspace, taking a quotient by a closed subspace and taking countable $l_2$-sums. Taking ultrapowers does not preserve reflexivity, but, by definition, if $X$ is superreflexive, then all its ultrapowers are reflexive. Therefore passing to the closure $\overline{\mathcal{E}_3 (\mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2),C_3)}$ provides more examples of reflexive Banach spaces constructed from $\mathcal{E}_3 (\mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2),C_3)$ by these operations.
\begin{remark}
Above we have a list of families of Banach spaces (e.g., $L^p$ spaces or uniformly non-square spaces in $\mathcal{E}_1 (r,C_1)$ when $r>20$) for which our main theorem holds for every representation in which the norm does not grow too fast with respect to the word norm (in particular, for every isometric representation). As far as we know, for each one of these families our results on vanishing of higher cohomology are new even in the classical case of BN-pair groups acting on Euclidean buildings.
\end{remark}
\textbf{Structure of this paper.} Section 2 includes all the needed background material. Section 3 is devoted to proving the main technical result regarding angles between projections in Banach spaces. In Section 4, we formulate and prove our main results regarding vanishing of cohomology for groups acting on simplicial complexes. The appendix contains technical results regarding angles between projections under weaker assumptions than the ones used in Section 3, which may be of independent interest.
\section{Background}
\subsection{Groups acting on simplicial complexes}
\label{Groups acting on simplicial complexes subscetion}
Here we present the set-up needed for our results on groups acting on simplicial complexes. We start by recalling some definitions given by Dymara and Januszkiewicz in \cite{DymaraJ}[section 1].
Let $\Sigma$ be a countable pure $n$-dimensional simplicial complex with $n \geq 2$. The top dimensional simplices of $\Sigma$ will be called chambers and $\Sigma$ will be called gallery connected if for any two chambers $\sigma, \sigma'$ there is a sequence of chambers
$$\sigma = \sigma_1, \sigma_2,...,\sigma_k = \sigma',$$
such that for every $i$, $\sigma_i \cap \sigma_{i+1}$ is a simplex of co-dimension $1$ in $\Sigma$.
Denote by $Aut (\Sigma)$ the group of simplicial automorphisms of $\Sigma$. On $Aut (\Sigma)$ define the compact-open topology, whose basis consists of the sets $U(K,g_0)$, where $g_0 \in Aut (\Sigma)$, $K \subseteq \Sigma$ is compact and $U(K,g_0)$ is defined as
$$U(K,g_0) = \lbrace g \in Aut (\Sigma) : g \vert_{K} = g_0 \vert_K \rbrace.$$
Let $G < Aut (\Sigma)$ be a closed subgroup of $Aut (\Sigma)$.
Given a continuous representation $\pi$ of $G$ on a Banach space, one can define $H^* (G, \pi)$ and $H^* (\Sigma, \pi)$. We will not review these definitions here and a reader unfamiliar with them can find them in \cite{DymaraJ}[Section 3] and references therein. The main fact that we will use is that one can compute $H^* (G, \pi)$ based on $H^* (\Sigma, \pi)$:
\begin{lemma}\cite{BorelW}[X.1.12]
\label{locally finite simplicial complex cohomology lemma}
Let $\Sigma$ be a simplicial complex, $G < Aut (\Sigma)$ be a closed subgroup and $\pi$ be a representation of $G$ on a Banach space. Assume that $\Sigma$ is contractible and locally finite and that the action of $G$ on $\Sigma$ is cocompact, then $H^* (G, \pi) = H^* (\Sigma, \pi)$.
\end{lemma}
The above lemma assumes that $\Sigma$ is locally finite (i.e., that the link of every vertex is compact). In order to compute the cohomology of $G$ in cases where $\Sigma$ is not locally finite, Dymara and Januszkiewicz introduced the following definition of the core of $\Sigma$:
\begin{definition}\cite{DymaraJ}[Definition 1.3]
Let $\Sigma$ be a simplicial complex such that every link of $\Sigma$ is either compact or contractible (including $\Sigma$ itself, which is the link of the empty set) and such that the $0$-dimensional links of $\Sigma$ are finite. Denote $\Sigma '$ to be the first barycentric subdivision of $\Sigma$. The core of $\Sigma$, denoted $\Sigma_D$, is the subcomplex of $\Sigma'$ spanned by the barycenters of simplices of $\Sigma$ with compact links.
\end{definition}
\begin{lemma}\cite{DymaraJ}[Lemma 1.4]
Let $\Sigma$ be an infinite simplicial complex such that every link of $\Sigma$ is either compact or contractible (in particular $\Sigma$ is contractible, because it is the link of $\emptyset$) and such that the $0$-dimensional links of $\Sigma$ are finite. Then $\Sigma_D$ is contractible.
\end{lemma}
Note that the assumption that the $0$-dimensional links of $\Sigma$ are finite implies that $\Sigma_D$ is locally finite. Also note that any closed subgroup $G < Aut (\Sigma)$ is also a closed subgroup of $Aut (\Sigma_D)$. Therefore combining the above lemma with Lemma \ref{locally finite simplicial complex cohomology lemma} above yields the following corollary:
\begin{corollary}
Let $\Sigma$ be an infinite pure $n$-dimensional simplicial complex, $G < Aut (\Sigma)$ be a closed subgroup and $\pi$ be a representation of $G$ on a Banach space. Assume that every link of $\Sigma$ is either compact or contractible and that the $0$-dimensional links of $\Sigma$ are finite. If the action of $G$ on $\Sigma$ is cocompact, then $H^* (G, \pi) = H^* (\Sigma_D, \pi)$.
\end{corollary}
Following Dymara and Januszkiewicz, we will use the above corollary to show vanishing of the group cohomology under additional assumptions on $\Sigma$ and on the action of $G$. In order to state our additional assumptions we recall the following conditions on the couple $(\Sigma, G)$ taken from \cite{DymaraJ}:
\begin{enumerate}[label=($\mathcal{B}${{\arabic*}})]
\item All the $0$-dimensional links are finite.
\item All the links of dimension $\geq 1$ are gallery connected.
\item All the links are either compact or contractible (including $\Sigma$ itself).
\item $G$ acts transitively on chambers and $\Sigma \rightarrow \Sigma / G$ restricts to an isomorphism on every chamber.
\end{enumerate}
Let $\Sigma$ be an infinite simplicial complex and $G < Aut (\Sigma)$ be a closed subgroup satisfying $(\mathcal{B}1)-(\mathcal{B} 4)$ and let $\pi$ be a continuous representation of $G$ on a Banach space $X$. Fix a chamber $\triangle \in \Sigma (n)$ and for every $\eta \subseteq \triangle$, denote $G_\eta$ to be the subgroup of $G$ fixing $\eta$ and also denote $X^{\pi (G_\eta)} = X_\eta$ to be the subspace of $X$ fixed by $G_\eta$ (under the action of $\pi$). One of the key ideas in \cite{DymaraJ} is that one can deduce vanishing of cohomologies of $G$ with coefficients in $\pi$ given that there are projections on all the $X_\eta$'s and nice decompositions of these $X_\eta$'s with respect to these projections. To make this precise:
\begin{theorem}\cite{DymaraJ}[Theorems 5.2,7.1]
\label{general vanishing of cohomology based on decomposition}
Let $\Sigma$ be an infinite simplicial complex, $G < Aut (\Sigma)$ be a closed subgroup satisfying $(\mathcal{B}1)-(\mathcal{B} 4)$ and $\pi$ a continuous representation of $G$ on a Banach space $X$. Under the notations above, for every $\eta \subseteq \triangle$ denote $D_\eta$ to be the subcomplex of $\Sigma_D$ spanned by the barycenters of simplices of $\triangle$ that have compact links and do not contain $\eta$.
Assume that for every $\eta \subseteq \triangle$ there is a projection $P_\eta :X \rightarrow X$ on $X_\eta$. For every $\eta \subseteq \triangle$, denote
$$X^\eta = \operatorname{Im} (P_\eta) \cap \bigcap_{\tau \subsetneqq \eta} \operatorname{Ker} (P_\tau).$$
If for every $\eta \subseteq \triangle$, the following holds
$$X_\eta = \bigoplus_{\tau \subseteq \eta} X^\tau,$$
then
$$H^* (G, \pi) = \bigoplus_{\eta \subseteq \triangle} \widetilde{H}^{*-1} (D_\eta; X^{\eta}).$$
Moreover, if there is $l \geq 1$ such that all the $l$-dimensional links of $\Sigma$ are compact, then for every $i=1,...,l$, $H^i (G, \pi)=0$.
\end{theorem}
\begin{remark}
In \cite{DymaraJ}[Theorem 7.1] the assumptions of the theorem do not include the decomposition $X_\eta = \bigoplus_{\tau \subseteq \eta} X^\tau$, but assumptions regarding the spectral gap in the $1$-dimensional links from which this decomposition is deduced. However, the proof of the theorem only relies on the above decomposition, therefore the theorem can be stated as above. Also, \cite{DymaraJ}[Theorems 5.2, 7.1] are stated for continuous unitary representations on Hilbert spaces, but the proof of \cite{DymaraJ}[Theorem 7.1] and the proof of \cite{DymaraJ}[Theorem 5.2] based on \cite{DymaraJ}[Theorem 7.1] pass verbatim to continuous representations on Banach spaces.
\end{remark}
We would like to add an additional condition on $\Sigma$ that will be denoted ($\mathcal{B}_{\delta, r}$) (replacing the condition ($\mathcal{B}_\delta$) appearing in \cite{DymaraJ}):
\begin{enumerate}[label=($\mathcal{B}_{\delta, r}$)]
\item For every $\eta \in \Sigma(n-2)$, the link of $\eta$, denoted $\Sigma_\eta$, is a finite bipartite graph with sides $V_{\eta,1}, V_{\eta,2}$. For any $\eta \in \Sigma(n-2)$ denote $$V_{min} (\eta) = \min \lbrace \vert V_{\eta,1} \vert,\vert V_{\eta,2} \vert\rbrace,$$
and denote $\kappa (\eta)$ to be the smallest positive eigenvalue of the normalized Laplacian of $\Sigma_\eta$, then
$$(1-\kappa (\eta) ) \left( V_{min} (\eta) \right)^{\frac{1}{r}} \leq \delta.$$
\end{enumerate}
\begin{remark}
We note that if condition $(\mathcal{B} 4)$ is fulfilled and if the $1$-dimensional links of $\Sigma$ are finite, then every $1$-dimensional link has to be a bipartite graph.
\end{remark}
The main source of examples of $(\Sigma, G)$ fulfilling $(\mathcal{B}1)-(\mathcal{B} 4)$ and $(\mathcal{B}_{\delta, r})$ are groups coming from BN-pairs (a reader unfamiliar with the definition of a BN-pair can find it in \cite{BuildingsBook}[Chapter 6]), when $G$ is the group and $\Sigma$ is the building on which it acts. In \cite{DymaraJ} the following is proved:
\begin{proposition}\cite{DymaraJ}[Propositions 1.6,1.7]
Let $G$ be a group coming from a BN-pair and let $\Sigma$ be the building on which it acts. Assume further that $\Sigma$ is non-compact and has finite thickness. Then conditions $(\mathcal{B}1)-(\mathcal{B} 4)$ are fulfilled for $(\Sigma, G)$ and $\Sigma_D$ is contractible.
\end{proposition}
In order to check the condition $(\mathcal{B}_{\delta, r})$ in buildings, we recall that if a building $\Sigma$ has finite $1$-dimensional links, then these links are spherical buildings, i.e., they are thick generalized $m$-gons with $m=2,3,4,6,8$ (a reader unfamiliar with generalized $m$-gons can find a good introduction in \cite{VanM}[Chapter 1]).
\begin{proposition}
\label{condition B-delta,r for buildings}
Let $\Sigma$ be a building such that the $1$-dimensional links of $\Sigma$ are compact. Let $m'$ be the smallest integer such that all the $1$-dimensional links of $\Sigma$ are generalized $m$-gons with $m \leq m'$. Then for every
$$r >
\begin{cases}
4 & m' =3 \\
8 & m' =4 \\
18 & m' = 6 \\
20 & m' =8
\end{cases},$$
and every $\delta >0$, if the thickness of the building is large enough, then $( \mathcal{B}_{\delta, r} )$ holds for $\Sigma$.
\end{proposition}
\begin{proof}
Let $(V,E)$ be a generalized $m$-gon of order $(s,t)$ and assume without loss of generality that $s \geq t$. Denote $\kappa$ to be the smallest positive eigenvalue of the normalized Laplacian on $(V,E)$. If $m=2$, then $1-\kappa=0$ and therefore this case is of no interest to us.
For $m >2$ the spectral gap $\kappa$ was explicitly computed by Feit and Higman \cite{FeitHig} for all generalized $m$-gons (the reader can find a summary of these results in \cite{BallmannS}[Section 3]).
We will not recall the exact values of $\kappa$ depending on $(s,t)$, but only the asymptotic behaviour of $1-\kappa$ as $s$ and $t$ tend to $\infty$:
$$1- \kappa = \begin{cases}
O(\frac{1}{\sqrt{t}}) = O(\frac{1}{\sqrt{s}}) & m =3 \\
O(\frac{1}{\sqrt{t}} + \frac{1}{\sqrt{s}}) & m=4,6,8
\end{cases}.$$
We recall that generalized $m$-gons are always bipartite graphs. Denote $V_1, V_2$ to be the vertices in the two sides of $(V,E)$ and denote
$$V_\min = \min \lbrace \vert V_1 \vert, \vert V_2 \vert \rbrace.$$
The exact value of $V_\min$ as a function of $(s,t)$ is computed in \cite{VanM}[Corollary 1.5.5] (recall we assumed that $s \geq t$):
$$V_\min =
\begin{cases}
t^2 + t +1 & m=3 \\
(st+1)(t+1) & m =4\\
((st)^2 + st +1)(t+1) & m =6\\
((st)^2+1)(st+1)(t+1) & m =8
\end{cases}.$$
In order to complete the proof, we will also need the following connections between $s$ and $t$ (see \cite{VanM}[Theorem 1.7.2]):
$$\begin{cases}
s=t & m=3 \\
t^{\frac{1}{2}} \leq s \leq t^2 & m=4,8 \\
t^{\frac{1}{3}} \leq s \leq t^3 & m=6
\end{cases}.$$
To conclude the proof, we combine all of the above in order to show that for $r$ as above, $(1-\kappa) V_\min^{\frac{1}{r}}$ tends to $0$ as $t$ tends to infinity.
\begin{dmath*}
(1-\kappa) V_\min^{\frac{1}{r}} \sim \begin{cases}
(t^2+t+1)^{\frac{1}{r}} \frac{1}{\sqrt{t}} & m =3 \\
(st+1)^{\frac{1}{r}} (t+1)^{\frac{1}{r}} (\frac{1}{\sqrt{t}} + \frac{1}{\sqrt{s}}) & m =4 \\
((st)^2 + st +1)^{\frac{1}{r}} (t+1)^{\frac{1}{r}} (\frac{1}{\sqrt{t}} + \frac{1}{\sqrt{s}}) & m=6 \\
((st)^2+1)^{\frac{1}{r}} (st+1)^{\frac{1}{r}} (t+1)^{\frac{1}{r}} (\frac{1}{\sqrt{t}} + \frac{1}{\sqrt{s}}) & m =8
\end{cases} \leq \\
\begin{cases}
(t^2+t+1)^{\frac{1}{r}} \frac{1}{\sqrt{t}} & m =3 \\
(t^3+1)^{\frac{1}{r}} (t+1)^{\frac{1}{r}} \frac{2}{\sqrt{t}} & m =4 \\
((t^4)^2 + t^4 +1)^{\frac{1}{r}} (t+1)^{\frac{1}{r}} \frac{2}{\sqrt{t}} & m=6 \\
((t^3)^2+1)^{\frac{1}{r}} (t^3+1)^{\frac{1}{r}} (t+1)^{\frac{1}{r}} \frac{2}{\sqrt{t}} & m =8
\end{cases} \leq
\begin{cases}
3^{\frac{1}{r}} t^{\frac{2}{r}} \frac{1}{\sqrt{t}} & m =3 \\
4^{\frac{1}{r}} t^{\frac{4}{r}} \frac{2}{\sqrt{t}} & m =4 \\
6^{\frac{1}{r}} t^{\frac{9}{r}} \frac{2}{\sqrt{t}} & m=6 \\
8^{\frac{1}{r}} t^{\frac{10}{r}} \frac{2}{\sqrt{t}} & m=8
\end{cases},
\end{dmath*}
and the conclusion follows.
\end{proof}
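The decay above can also be observed numerically. The following sketch (our own illustration; the implied constants in the Feit-Higman asymptotics are suppressed, so only the power of $t$ is meaningful) evaluates the $m = 3$ case:

```python
# For m = 3 (where s = t) we have 1 - kappa = O(1/sqrt(t)) and
# V_min = t^2 + t + 1, so up to a multiplicative constant the quantity in
# condition (B_{delta,r}) behaves like f(t) below.  For r > 4 the exponent
# 2/r - 1/2 is negative and f(t) tends to 0 as t grows.
def f(t, r):
    return (t * t + t + 1) ** (1.0 / r) / t ** 0.5

r = 4.5  # any r > 4 works for m = 3
values = [f(10 ** k, r) for k in range(1, 7)]
print(values)  # decreasing towards 0
```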
\subsection{Averaged projections in a Banach space}
Let $X$ be a Banach space. Recall that a projection $P$ is a bounded operator $P \in \mathcal{B} (X)$ such that $P^2 =P$. Note that $\Vert P \Vert \geq 1$ if $P \neq 0$. For subspaces $M, N$ of $X$, we will say that $P$ is a projection on $M$ along $N$ if $P$ is a projection such that $\operatorname{Im} (P) = M$, $\operatorname{Ker}(P)=N$.
Given a family of projections $P_1,...,P_N$ on $M_1,...,M_N$ in $X$, there is a well-known algorithm for finding a projection on $\cap_{j=1}^N M_j$, known as the method of averaged projections. The idea is to define the operator $T= \frac{P_1 +...+P_N}{N}$ and to take the limit of $T^i$ as $i$ goes to infinity. The reader should note that in general $T^i$ need not converge in the operator norm. In \cite{ORobust}, the author established a criterion for the convergence of $T^i$ using the idea of an angle between projections.
\begin{definition}[Angle between projections]
\label{angle between projections definition}
Let $X$ be a Banach space and let $P_1, P_2$ be projections on $M_1,M_2$ respectively. Assume that there is a projection $P_{1,2}$ on $M_1 \cap M_2$ such that $P_{1,2} P_1 = P_{1,2}$ and $P_{1,2} P_2 = P_{1,2}$ and define
$$\cos (\angle (P_1,P_2)) = \max \left\lbrace \Vert P_1 (P_2 - P_{1,2} ) \Vert, \Vert P_2 (P_1 - P_{1,2} ) \Vert \right\rbrace.$$
\end{definition}
\begin{remark}
In the above definition, we are actually defining the ``cosine'' of the angle. This is a little misleading, because we do not know if $\cos (\angle (P_1,P_2)) \leq 1$ holds in general (although this inequality holds in all the examples we can compute or bound).
\end{remark}
\begin{remark}
We note that in the case where $X$ is a Hilbert space and $P_1,P_2$ are orthogonal projections on $M_1,M_2$, the orthogonal projection $P_{1,2}$ on $M_1 \cap M_2$ will always fulfill $P_{1,2} P_1 = P_{1,2}$ and $P_{1,2} P_2 = P_{1,2}$. Also, in this case, $\cos (\angle (P_1,P_2))$ will be equal to the Friedrichs angle between $M_1$ and $M_2$ defined as
$$\cos (\angle (M_1,M_2))= \sup \lbrace \vert \langle u,v \rangle \vert : \Vert u \Vert \leq 1, \Vert v \Vert \leq 1, u \in M_1 \cap (M_1 \cap M_2)^\perp, v \in M_2 \rbrace.$$
\end{remark}
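As a toy illustration of Definition \ref{angle between projections definition} (a hypothetical finite-dimensional example of our own, not used in the sequel), take in $\mathbb{R}^3$ the orthogonal projections on $M_1 = \operatorname{span}\{e_1,e_2\}$ and $M_2 = \operatorname{span}\{e_1,(0,1,1)\}$; their intersection is $\operatorname{span}\{e_1\}$ and the two planes meet at a dihedral angle of $45^\circ$:

```python
# Hypothetical example in R^3: orthogonal projections on M1 = span{e1,e2}
# and M2 = span{e1,(0,1,1)}, whose intersection is span{e1}.  The projection
# P12 on the intersection satisfies P12 P1 = P12 = P12 P2, as the definition
# requires.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matsub(A, B):
    return [[A[i][j] - B[i][j] for j in range(3)] for i in range(3)]

def opnorm(A, iters=100):
    # largest singular value of A, via power iteration on A^T A
    At = [[A[j][i] for j in range(3)] for i in range(3)]
    M = matmul(At, A)
    v = [1.0, 1.0, 1.0]
    n = 0.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
        n = sum(c * c for c in w) ** 0.5
        if n == 0.0:
            return 0.0
        v = [c / n for c in w]
    return n ** 0.5

P1 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]]   # projection on M1
P2 = [[1.0, 0.0, 0.0], [0.0, 0.5, 0.5], [0.0, 0.5, 0.5]]   # projection on M2
P12 = [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]  # on M1, M2 intersection

cos_angle = max(opnorm(matmul(P1, matsub(P2, P12))),
                opnorm(matmul(P2, matsub(P1, P12))))
print(cos_angle)  # ~0.7071 = cos(45 degrees), the dihedral angle of the planes
```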
Next, we recall the following theorems from \cite{ORobust}:
\begin{theorem} \cite{ORobust}[Theorem 3.12]
\label{Quick uniform convergence criterion}
Let $X$ be a Banach space and let $P_1,...,P_N$ be projections in $X$ ($N \geq 2$). Assume that for every $1 \leq j_1 < j_2 \leq N$, there is a projection $P_{j_1,j_2}$ on $\operatorname{Im} (P_{j_1}) \cap \operatorname{Im} (P_{j_2})$, such that $P_{j_1,j_2} P_{j_1} = P_{j_1,j_2}, P_{j_1,j_2} P_{j_2} = P_{j_1,j_2}$.
Denote $T=\frac{P_1+...+P_N}{N}$ and assume there are constants
$$\gamma < \frac{1}{8N-11} \text{ and } \beta < 1+ \frac{1-(8N-11)\gamma}{N-2 + (3N-4)\gamma},$$
such that
$$ \max \lbrace \Vert P_1 \Vert,..., \Vert P_N \Vert \rbrace \leq \beta \text{ and } \max \lbrace \cos(\angle (P_{j_1},P_{j_2})) : 1 \leq j_1 < j_2 \leq N \rbrace \leq \gamma.$$
Then for
$$r = \dfrac{1+(N-2)\beta}{N}+(4-\dfrac{6}{N})\dfrac{1+\beta}{1-\gamma} \gamma,$$
$$C = \dfrac{(2N-2)\beta^2}{N (1-r)},$$
we have that $r <1$ and there is an operator $T^\infty$, such that $\Vert T^\infty - T^i \Vert \leq C r^{i-1}$. Moreover, $T^\infty$ is a projection on $\bigcap_{j=1}^N \operatorname{Im} (P_j)$.
\end{theorem}
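For intuition, the following is a small numerical sketch (a hypothetical example of our own, with two orthogonal projections in $\mathbb{R}^3$, so well inside the scope of the theorem) of the convergence of the iterates $T^i$:

```python
# Hypothetical example in R^3: P1, P2 are the orthogonal projections on the
# xy-plane and on span{e1,(0,1,1)}.  Their images intersect in span{e1}, and
# the iterates T^i of T = (P1 + P2)/2 converge to the projection diag(1,0,0)
# on that intersection.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

P1 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]]
P2 = [[1.0, 0.0, 0.0], [0.0, 0.5, 0.5], [0.0, 0.5, 0.5]]
T = [[(P1[i][j] + P2[i][j]) / 2 for j in range(3)] for i in range(3)]

Ti = T
for _ in range(200):
    Ti = matmul(Ti, T)

P_limit = [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
error = max(abs(Ti[i][j] - P_limit[i][j]) for i in range(3) for j in range(3))
print(error)  # a tiny error: T^i has numerically converged to the projection
```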
To avoid carrying messy constants, we note the following:
\begin{corollary}
\label{Quick uniform convergence corollary}
In the notations of the above theorem, there are $\gamma_0 >0$ and $\beta_0 >1$ such that if
$$ \max \lbrace \Vert P_1 \Vert,..., \Vert P_N \Vert \rbrace \leq \beta_0 \text{ and } \max \lbrace \cos(\angle (P_{j_1},P_{j_2})) : 1 \leq j_1 < j_2 \leq N \rbrace \leq \gamma_0,$$
then $\Vert T^\infty - T^i \Vert \leq (4N) \left( \frac{2N-1}{2N} \right)^{i-1}$.
\end{corollary}
\begin{proof}
Note that in Theorem \ref{Quick uniform convergence criterion} above, $r$ tends to $\frac{1+(N-2)\beta}{N}$ as $\gamma$ tends to $0$, and $\frac{1+(N-2)\beta}{N} < \frac{2N-1}{2N}$ for $\beta$ close enough to $1$. Therefore, we can choose $\beta_0 > 1$ and $\gamma_0 >0$ small enough such that $r \leq \frac{2N-1}{2N}$. For such $r$, we have that $C \leq \frac{(2N-2)\beta_0^2}{N \frac{1}{2N}} = (4N-4) \beta_0^2$, and after further shrinking $\beta_0 >1$ if needed, $C \leq 4N$.
\end{proof}
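The choice of constants can also be made explicit numerically. The following sketch (our own; the values $\beta = 1.01$, $\gamma = 0.01$ are an arbitrary admissible choice, not optimized) plugs $N = 3$ into the formulas of Theorem \ref{Quick uniform convergence criterion}:

```python
# Evaluating the rate r and constant C from the theorem, and checking the two
# bounds used in the corollary: r <= (2N-1)/(2N) and C <= 4N.
def rate_and_constant(N, beta, gamma):
    r = (1 + (N - 2) * beta) / N + (4 - 6 / N) * (1 + beta) / (1 - gamma) * gamma
    C = (2 * N - 2) * beta ** 2 / (N * (1 - r))
    return r, C

N = 3
r, C = rate_and_constant(N, beta=1.01, gamma=0.01)
print(r, C)  # r ~ 0.71 < 5/6 and C ~ 4.7 < 12 = 4N
```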
Last, we note that $T^i$ converges to a ``canonical'' projection with respect to $P_1,...,P_N$ if such a projection exists.
\begin{proposition}
\label{canonical proposition}
Let $X$ be a Banach space and let $P_1,...,P_N$ be projections in $X$ ($N \geq 2$). Denote $T=\frac{P_1+...+P_N}{N}$ and assume that $T^i$ converges in the operator norm to $T^\infty$ which is a projection on $\bigcap_{j=1}^N \operatorname{Im} (P_j)$. If there is a projection $P_{1,2,...,N}$ on $\bigcap_{j=1}^N \operatorname{Im} (P_j)$ such that for every $j$, $P_{1,2,...,N} P_j = P_{1,2,...,N}$, then $T^\infty = P_{1,2,...,N}$.
\end{proposition}
\begin{proof}
Note that for every $i$, we have that $P_{1,...,N} T^i = P_{1,...,N}$ and therefore $T^\infty = P_{1,...,N} T^\infty = P_{1,...,N}$.
\end{proof}
\subsection{Type and cotype}
Let $X$ be a Banach space. For $1< p_1 \leq 2$, $X$ is said to have (Gaussian) type $p_1$, if there is a constant $T_{p_1}$, such that for $g_1,...,g_n$ independent standard Gaussian random variables on a probability space $(\Omega, P)$, we have that for every $x_1,...,x_n \in X$ the following holds:
$$\left( \int_\Omega \left\Vert \sum_{i=1}^n g_i (\omega) x_i \right\Vert^2 dP (\omega) \right)^{\frac{1}{2}} \leq T_{p_1} \left( \sum_{i=1}^n \Vert x_i \Vert^{p_1} \right)^{\frac{1}{p_1}}.$$
The minimal constant $T_{p_1}$ such that this inequality is fulfilled is denoted $T_{p_1} (X)$.
For $2 \leq p_2 < \infty$, $X$ is said to have (Gaussian) cotype $p_2$, if there is a constant $C_{p_2}$, such that for $g_1,...,g_n$ independent standard Gaussian random variables on a probability space $(\Omega, P)$, we have that for every $x_1,...,x_n \in X$ the following holds:
$$\left( \sum_{i=1}^n \Vert x_i \Vert^{p_2} \right)^{\frac{1}{p_2}} \leq C_{p_2} \left( \int_\Omega \left\Vert \sum_{i=1}^n g_i (\omega) x_i \right\Vert^2 dP (\omega) \right)^{\frac{1}{2}} .$$
The minimal constant $C_{p_2}$ such that this inequality is fulfilled is denoted $C_{p_2} (X)$.
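As a sanity check (our own Monte-Carlo sketch, with arbitrarily chosen vectors), in a Hilbert space the independence of the $g_i$ gives $\int_\Omega \Vert \sum_i g_i (\omega) x_i \Vert^2 dP (\omega) = \sum_i \Vert x_i \Vert^2$ exactly, so Hilbert spaces have type $2$ and cotype $2$ with constants $1$; here this is tested in $\mathbb{R}^3$:

```python
import random

# A Monte-Carlo sketch in the Hilbert space R^3 (arbitrary vectors x_i):
# by independence, E || sum_i g_i x_i ||^2 = sum_i ||x_i||^2, i.e. the type-2
# inequality holds with constant T_2 = 1 (and the cotype-2 one with C_2 = 1).
random.seed(0)
xs = [(1.0, 0.0, 0.0), (0.5, 0.5, 0.0), (0.0, 1.0, 2.0)]

def sample():
    gs = [random.gauss(0.0, 1.0) for _ in xs]
    v = [sum(g * x[k] for g, x in zip(gs, xs)) for k in range(3)]
    return sum(c * c for c in v)

n_samples = 50000
lhs = (sum(sample() for _ in range(n_samples)) / n_samples) ** 0.5
rhs = sum(sum(c * c for c in x) for x in xs) ** 0.5
print(lhs, rhs)  # the two sides agree up to Monte-Carlo error
```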
We recall the following fact mentioned in the introduction regarding Banach spaces with given type and cotype which is due to Tomczak-Jaegermann \cite{Tomczak-Jaegermann}[Theorem 2 and the corollary after it]: if $X$ is a Banach space of type $p_1$, cotype $p_2$ and corresponding constants $T_{p_1} (X)$, $C_{p_2} (X)$ as above, then $d_k (X) \leq 4 T_{p_1} (X) C_{p_2} (X) k^{\frac{1}{p_1} - \frac{1}{p_2}}$.
\begin{remark}
We remark that the Gaussian type and cotype defined above are equivalent to the usual (Rademacher) type and cotype (see \cite{HistoryBanach}[pages 311-312] and the references therein).
\end{remark}
\begin{remark}
In \cite{PisierXu}, Pisier and Xu showed that for any $p_2 >2$ one can construct a non superreflexive Banach space $X$ with type $2$ and cotype $p_2$.
\end{remark}
\subsection{Vector valued $L^2$ spaces}
\label{Vector valued spaces section}
Given a measure space $(\Omega, \mu)$ and a Banach space $X$, a function $s : \Omega \rightarrow X$ is called simple if it is of the form:
$$s(\omega) = \sum_{i=1}^n \chi_{E_i} (\omega) v_i,$$
where $\lbrace E_1,...,E_n \rbrace$ is a partition of $\Omega$ into measurable sets, $\chi_{E_i}$ is the indicator function of $E_i$ and $v_i \in X$.
A function $f : \Omega \rightarrow X$ is called Bochner measurable if it is almost everywhere the limit of simple functions. Denote by $L^2 (\Omega ; X)$ the space of Bochner measurable functions $f$ for which
$$\Vert f \Vert_{L^2 (\Omega ; X)} = \left( \int_\Omega \Vert f (\omega) \Vert^2_X d \mu (\omega) \right)^{\frac{1}{2}} < \infty.$$
Given an operator $T \in B(L^2 (\Omega, \mu))$, we can define $T \otimes id_X \in B(L^2 (\Omega ; X))$ by defining it first on simple functions.
For our uses, it will be important to bound the norm of an operator of the form $T \otimes id_X$ when $X$ is obtained by one of the deformation procedures described in the introduction.
We will start by bounding the norm of $T \otimes id_X$ given that $X$ has ``round'' enough finite dimensional subspaces. For this, following \cite{LaatSalle2}, we introduce the following notation: for a Banach space $X$ and $k \in \mathbb{N}$, denote
$$e_k (X) = \sup \lbrace \Vert T \otimes id_X \Vert_{L^2 (\Omega ; X)} : T \text{ is of rank } k \text{ with } \Vert T \Vert_{\ell_2} \leq 1 \rbrace.$$
By a theorem of Pisier (see \cite{LaatSalle2}[Theorem 5.2]), this constant is connected to the constant $d_k (X)$ defined in the introduction by the inequality $e_k (X) \leq 2 d_k (X)$
(there is also a reverse inequality $d_k (X) \leq e_k (X)$ which we will not use). Next, we recall the following definition:
\begin{definition}
For a Hilbert space $H$, a bounded operator $T \in B(H)$ and $r \in [1,\infty]$, the $r$-th Schatten norm is defined as
$$\Vert T \Vert_{S^r} = \left( \sum_{i=1}^\infty (s_i (T))^r \right)^{\frac{1}{r}} \text{ for } r < \infty, \qquad \Vert T \Vert_{S^\infty} = s_1 (T),$$
where $s_1 (T) \geq s_2 (T) \geq ...$ are the singular values of $T$, i.e., the eigenvalues of $\sqrt{T^* T}$. An operator $T$ is said to be of Schatten class $r$ if $\Vert T \Vert_{S^r} < \infty$.
\end{definition}
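As a quick illustration of these norms (not used later): if $P \in B(H)$ is a rank-$k$ orthogonal projection, then $s_1 (P) = ... = s_k (P) = 1$ and $s_i (P) = 0$ for $i > k$, so

```latex
\Vert P \Vert_{S^r} = \left( \sum_{i=1}^{k} 1 \right)^{\frac{1}{r}} = k^{\frac{1}{r}} \quad (r < \infty),
\qquad \Vert P \Vert_{S^\infty} = 1 .
```

In particular, for a fixed finite rank operator the Schatten norms do not increase as $r$ increases.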
In \cite{Salle} the following connection was established between $e_k (X)$ and the norm of $T \otimes id_X$:
\begin{lemma} \cite{Salle}[Proposition 3.3]
Let $r \in [2, \infty)$, $r >r' \geq 2$ be constants and assume there is a constant $C'$ such that $e_k (X) \leq C' k^{\frac{1}{r}}$ for every $k$. Denote
$$M = \sum_{i=1}^\infty 2^{\frac{r'}{r'-1} (\frac{1}{r} - \frac{1}{r'} )i} .$$
If $(\Omega, \mu)$ is a measure space and $T \in B(L^2 (\Omega,\mu))$ is of Schatten class $r'$, then
$$\Vert T \otimes id_X \Vert_{B(L^2(\Omega ; X))} \leq M C' \Vert T \Vert_{S^{r'}}.$$
\end{lemma}
\begin{remark}
The statement of \cite{Salle}[Proposition 3.3] refers to Banach spaces with specified type and cotype, but it only uses the fact that for these spaces $e_k (X)$ can be bounded by some $C' k^{\frac{1}{r}}$. Therefore the proof of \cite{Salle}[Proposition 3.3] actually proves the more general statement above (this was already observed and used in \cite{LaatSalle2}).
\end{remark}
Combining the above lemma with the theorem of Pisier stated above gives the following corollary:
\begin{corollary}
\label{norm of T otimes id using Schatten}
Let $r \in [2, \infty)$, $r > r' \geq 2$ be constants and assume there is a constant $C_1$ such that $d_k (X) \leq C_1 k^{\frac{1}{r}}$ for every $k$. Then there is a constant $C=C(C_1,r,r')$ such that for every measure space $(\Omega, \mu)$ and every $T \in B(L^2 (\Omega,\mu))$ of Schatten class $r'$, we have that
$$\Vert T \otimes id_X \Vert_{B(L^2(\Omega ; X))} \leq C \Vert T \Vert_{S^{r'}}.$$
\end{corollary}
Second, we will see that if $X$ is given as an interpolation of two spaces $X_0$, $X_1$, the norm of $T \otimes id_X$ can be bounded using bounds on the norms of $T \otimes id_{X_0}, T \otimes id_{X_1}$:
\begin{lemma} \cite{Salle}[Lemma 3.1]
\label{interpolation fact}
Given a compatible pair $(X_0,X_1)$, a measure space $(\Omega,\mu)$ and an operator $T \in B(L^2 (\Omega,\mu))$, we have for every $0 \leq \theta \leq 1$ that
$$\Vert T \otimes id_{[X_0, X_1]_\theta} \Vert_{B(L^2 (\Omega ; [X_0, X_1]_\theta))} \leq \Vert T \otimes id_{X_0} \Vert_{B(L^2 (\Omega ; X_0))}^{1-\theta} \Vert T \otimes id_{X_1} \Vert_{B(L^2 (\Omega ; X_1))}^{\theta},$$
where $[X_0, X_1]_\theta$ is the interpolation of $X_0$ and $X_1$ (see definition above).
\end{lemma}
Third, if $X$ and $X'$ are isomorphic, then the norm of $T \otimes id_X$ can be bounded using the norm of $T \otimes id_{X'}$ and the Banach-Mazur distance between $X$ and $X'$.
\begin{lemma} \cite{ORobust}[Lemma 2.7]
\label{norm of T otimes id using BM}
Let $(\Omega, \mu)$ be a measure space and $T$ a bounded operator on $L^2 (\Omega, \mu)$. Given two isomorphic Banach spaces $X$, $X'$, we have that
$$\Vert T \otimes id_X \Vert_{B(L^2(\Omega ; X))} \leq d_{BM} (X,X') \Vert T \otimes id_{X'} \Vert_{B(L^2(\Omega ; X'))}.$$
\end{lemma}
Last, we need the following fact regarding stability under quotients, subspaces, $l_2$-sums and ultraproducts:
\begin{lemma}\cite{Salle}[Lemma 3.1]
\label{L2 norm stability}
Let $(\Omega, \mu)$ be a measure space, $C\geq 0$ and $T$ a bounded operator on $L^2 (\Omega, \mu)$. The class of Banach spaces $X$ for which $\Vert T \otimes id_X \Vert_{B(L^2(\Omega ; X))} \leq C$ is stable under quotients, subspaces, $l_2$-sums and ultraproducts.
\end{lemma}
\begin{remark}
The fact that the above class is closed under $l_2$-sums did not appear in \cite{Salle}[Lemma 3.1]; it is left as an exercise to the reader.
\end{remark}
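For the reader's convenience, here is a sketch of that exercise (under the natural identification, which we do not verify in detail): for $X = \left( \bigoplus_n X_n \right)_{l_2}$ and $f \in L^2 (\Omega ; X)$ with coordinates $f_n \in L^2 (\Omega ; X_n)$,

```latex
\Vert f \Vert_{L^2 (\Omega ; X)}^2
  = \int_\Omega \sum_n \Vert f_n (\omega) \Vert_{X_n}^2 \, d \mu (\omega)
  = \sum_n \Vert f_n \Vert_{L^2 (\Omega ; X_n)}^2 ,
```

so $L^2 (\Omega ; X)$ is isometric to $\left( \bigoplus_n L^2 (\Omega ; X_n) \right)_{l_2}$, the operator $T \otimes id_X$ acts coordinate-wise as $\bigoplus_n (T \otimes id_{X_n})$, and therefore $\Vert T \otimes id_X \Vert = \sup_n \Vert T \otimes id_{X_n} \Vert \leq C$.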
Combining all the results above yields the following:
\begin{corollary}
\label{bounding norm of composition of deformations}
Let $T \in B(L^2 (\Omega,\mu))$ be an operator and let $L \geq 1, r' \geq 2$ be constants such that $\Vert T \Vert_{S^{r'}} \leq 1$ and such that for every Banach space $X$ we have that $\Vert T \otimes id_X \Vert_{B(L^2(\Omega ; X))} \leq L$. Then for all constants $r>r'$, $C_1 \geq 1$, $0 < \theta_2 \leq 1$, $C_3 \geq 1$, there is a constant $C=C(C_1,r,r')$ such that for every $X \in \overline{\mathcal{E}_3 (\mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2),C_3)}$ the following holds
$$\Vert T \otimes id_X \Vert_{B(L^2(\Omega ; X))} \leq C_3 L (C \Vert T \Vert_{S^{r'}})^{\theta_2} .$$
\end{corollary}
\begin{proof}
By Corollary \ref{norm of T otimes id using Schatten} there is a constant $C=C(C_1,r,r')$ such that for every $X \in \mathcal{E}_1 (r, C_1)$ the following holds:
$$\Vert T \otimes id_X \Vert_{B(L^2(\Omega ; X))} \leq C \Vert T \Vert_{S^{r'}}.$$
Combining this with Lemma \ref{interpolation fact} and our assumptions on $T$ gives that for every $X \in \mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2)$, we have that
$$\Vert T \otimes id_X \Vert_{B(L^2(\Omega ; X))} \leq L (C \Vert T \Vert_{S^{r'}})^{\theta_2}.$$
Applying Lemma \ref{norm of T otimes id using BM} yields that for every $X \in \mathcal{E}_3 (\mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2),C_3)$, we have that
$$\Vert T \otimes id_X \Vert_{B(L^2(\Omega ; X))} \leq C_3 L^{1-\theta_2} (C \Vert T \Vert_{S^{r'}})^{\theta_2} .$$
Last, by Lemma \ref{L2 norm stability}, this inequality is preserved when passing to the closure, and since $L \geq 1$ we have $C_3 L^{1-\theta_2} (C \Vert T \Vert_{S^{r'}})^{\theta_2} \leq C_3 L (C \Vert T \Vert_{S^{r'}})^{\theta_2}$, as needed.
\end{proof}
\subsection{Group representations in a Banach space}
Let $G$ be a locally compact group and $X$ a Banach space. Let $\pi$ be a representation $\pi : G \rightarrow \mathcal{B} (X)$. Throughout this paper we shall always assume $\pi$ is continuous with respect to the strong operator topology without explicitly mentioning it.
Denote by $C_c (G)$ the group algebra of compactly supported continuous functions on $G$ with convolution. For any $f \in C_c (G)$ we can define $\pi (f) \in \mathcal{B} (X)$ as
$$\forall v \in X, \pi (f). v = \int_G f(g) \pi(g).v d\mu (g),$$
where the above integral is the Bochner integral with respect to the (left) Haar measure $\mu$ of $G$.
Recall that given $\pi$ one can define the following representations:
\begin{enumerate}
\item The complex conjugation of $\pi$, denoted $\overline{\pi} : G \rightarrow \mathcal{B} (\overline{X})$ is defined as
$$\overline{\pi} (g). \overline{v} = \overline{\pi (g). v}, \forall g \in G, \overline{v} \in \overline{X}.$$
\item The dual representation $\pi^* : G \rightarrow \mathcal{B} (X^*)$ is defined as
$$\langle v, \pi^* (g) u \rangle = \langle \pi (g^{-1}) .v, u \rangle, \forall g \in G, v \in X, u \in X^*.$$
\end{enumerate}
Next, we restrict ourselves to the case of compact groups. Let $K$ be a compact group with a Haar measure $\mu$ and let $C_c (K) = C(K)$ be defined as above. Let $X$ be a Banach space and let $\pi$ be a representation of $K$ on $X$ that is continuous with respect to the strong operator topology. We shall show that for every $f \in C_c (K)$, we can bound the norm of $\pi (f)$ using the norm of $\lambda \otimes id_X \in B(L^2 (K ;X))$ (the definition of $L^2 (K; X)$ is given in subsection \ref{Vector valued spaces section} above).
\begin{proposition}\cite{ORobust}[Corollary 2.11]
\label{bounding the norm of pi(f) - proposition}
Let $\pi$ be a representation of a compact group $K$ on a Banach space $X$. Then for any real function $f \in C_c (K)$ we have that
$$\Vert \pi (f) \Vert_{B(X)} \leq \left( \sup_{g \in K} \Vert \pi (g) \Vert \right)^2 \Vert (\lambda \otimes id_X) (f) \Vert_{B(L^2 (K ; X))},$$
where $\lambda$ is the left regular representation of $K$.
\end{proposition}
\subsection{Asplund spaces}
\begin{definition}
A Banach space $X$ is said to be an Asplund space if every separable subspace of $X$ has a separable dual.
\end{definition}
There are many examples of Asplund spaces: for instance, every reflexive space is Asplund. A very nice exposition of Asplund spaces was given by Yost in \cite{Yost}. The reason we are interested in Asplund spaces is the following theorem of Megrelishvili:
\begin{theorem}\cite{Megre}[Corollary 6.9]
\label{Asplund implies continuous dual rep}
Let $G$ be a topological group and let $\pi$ be a continuous representation of $G$ on a Banach space $X$. If $X$ is an Asplund space, then the dual representation $\pi^*$ is also continuous.
\end{theorem}
\section{Angle between more than 2 projections and space decomposition}
\label{Angle between more than 2 projections and space decomposition}
The aim of this section is to show that given several projections on a Banach space, this space can be decomposed with respect to these projections, provided that the angle between every two projections is large enough. The main motivation for establishing such a decomposition is applying it to deduce vanishing of cohomology relying on Theorem \ref{general vanishing of cohomology based on decomposition}. In order to prove this decomposition, we define and study the notion of an angle between several projections.
Following our main motivation, we will think about our projections as defined by faces of a simplex:
\begin{definition}
\label{simplex projections definition}
Let $X$ be a Banach space and let $\triangle = \lbrace 0,...,n \rbrace$ be a simplex with $n+1$ vertices. For $k = -1,0,...,n$, denote by $\triangle (k)$ the $k$-dimensional faces of $\triangle$, i.e., the subsets of $\triangle$ with cardinality $k+1$.
Let $P_\sigma$ be projections defined for every $\sigma \in \triangle (n) \cup \triangle (n-1)$ such that
$$\forall \sigma \in \triangle (n-1), P_\sigma P_\triangle = P_\sigma.$$
For every $\tau \subseteq \triangle$ define an operator $T_\tau$ as follows:
$$T_\tau = \begin{cases}
P_\triangle & \tau = \triangle \\
\dfrac{\sum_{\sigma \in \triangle (n-1), \tau \subseteq \sigma } P_\sigma}{\vert \triangle \setminus \tau \vert} & \tau \neq \triangle
\end{cases}
.$$
Fix $\tau \subsetneqq \triangle$. If $T_\tau^i$ converges to a projection on the space $\cap_{\sigma \in \triangle (n-1), \tau \subseteq \sigma} \operatorname{Im} (P_\sigma)$ as $i \rightarrow \infty$, then we define $P_\tau = \lim T_\tau^i$. In this case we say that $P_\tau$ exists.
\end{definition}
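To illustrate the definition in the smallest interesting case $n = 2$, i.e., $\triangle = \lbrace 0,1,2 \rbrace$: for $\tau = \lbrace 0 \rbrace$ the $1$-faces containing $\tau$ are $\lbrace 0,1 \rbrace$ and $\lbrace 0,2 \rbrace$ and $\vert \triangle \setminus \tau \vert = 2$, while for $\tau = \emptyset$ all three $1$-faces appear:

```latex
T_{\lbrace 0 \rbrace} = \frac{P_{\lbrace 0,1 \rbrace} + P_{\lbrace 0,2 \rbrace}}{2} ,
\qquad
T_{\emptyset} = \frac{P_{\lbrace 0,1 \rbrace} + P_{\lbrace 0,2 \rbrace} + P_{\lbrace 1,2 \rbrace}}{3} .
```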
\begin{remark}
We note that the above setting is general for any $n+1$ projections $P_0,...,P_{n}$. Indeed, given any such projections, we can always set $P_{\triangle \setminus \lbrace i \rbrace} = P_{i}$ and take $P_\triangle = I$ (the reason we define the operator $P_\triangle$ above is that in the setting we will consider, such an operator appears naturally).
\end{remark}
\begin{remark}
\label{P-triangle always commute remark}
By the definition of $P_\triangle$, we have for every $\tau \subseteq \triangle$ and every $i$ that
$$T_\tau^i P_\triangle = T_\tau^i.$$
Therefore for every $\tau \subseteq \triangle$, if $P_\tau$ exists, then $P_\tau P_\triangle = P_\tau$.
\end{remark}
Using these notations, we will define the $\cos$ of an angle between more than $2$ projections:
\begin{definition}
Let $X$ and $P_\sigma$ for $\sigma \in \triangle (n-1)$ be defined as in Definition \ref{simplex projections definition} above.
Fix $1 \leq k \leq n$. Denote by $Sym ( 0,1,...,k )$ the group of all permutations of $\lbrace 0,1,...,k \rbrace$.
For pairwise distinct $\sigma_0,...,\sigma_k \in \triangle (n-1)$, denote $\tau = \bigcap_{i=0}^k \sigma_i$. If $P_\tau$ exists, define $\cos (\angle (P_{\sigma_0},...,P_{\sigma_k}))$ as
$$\cos (\angle (P_{\sigma_0},...,P_{\sigma_k})) = \max_{\pi \in Sym (0,...,k)} \Vert P_{\sigma_{\pi (0)}} P_{\sigma_{\pi (1)}} ... P_{\sigma_{\pi (k)}} (I- P_\tau ) \Vert.$$
\end{definition}
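We note (as a consistency check for $k=1$) that this recovers the notion recalled in the background section: for distinct $\sigma_0, \sigma_1 \in \triangle (n-1)$ with $\tau = \sigma_0 \cap \sigma_1$, the projection $P_\tau$ (when it exists) has image $\operatorname{Im} (P_{\sigma_0}) \cap \operatorname{Im} (P_{\sigma_1})$, on which both $P_{\sigma_0}$ and $P_{\sigma_1}$ act as the identity, hence

```latex
P_{\sigma_0} P_{\sigma_1} (I - P_\tau)
  = P_{\sigma_0} P_{\sigma_1} - P_{\sigma_0} P_{\sigma_1} P_\tau
  = P_{\sigma_0} P_{\sigma_1} - P_\tau ,
```

so $\cos (\angle (P_{\sigma_0}, P_{\sigma_1})) = \max \lbrace \Vert P_{\sigma_0} P_{\sigma_1} - P_\tau \Vert, \Vert P_{\sigma_1} P_{\sigma_0} - P_\tau \Vert \rbrace$.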
\begin{theorem}
\label{angle between several projections theorem}
Let $X$, $\triangle$, $P_\sigma$ for $\sigma \in \triangle (n-1)$ be defined as above and assume $n>1$. Assume that for every $\eta \in \triangle (n-2)$, the projection $P_{\eta}$ exists and that
$$\forall \sigma \in \triangle (n-1), \eta \subseteq \sigma \Rightarrow P_\eta P_\sigma = P_\eta.$$
Also assume that $ \max_{\sigma \in \triangle (n-1)} \Vert P_\sigma \Vert \leq \beta_0,$
where $\beta_0 >1$ is the constant of Corollary \ref{Quick uniform convergence corollary}.
Then for every $\varepsilon >0$ there is $\gamma>0$ such that if
$$\max \lbrace \cos(\angle (P_\sigma,P_{\sigma '})) : \sigma, \sigma' \in \triangle (n-1) \rbrace \leq \gamma,$$
then for every $\tau \subseteq \triangle$, $P_\tau$ is well defined and for every pairwise distinct $\sigma_0,...,\sigma_k \in \triangle (n-1)$ the following holds:
$$\cos (\angle (P_{\sigma_0},...,P_{\sigma_k})) \leq \varepsilon.$$
\end{theorem}
\begin{proof}
Let $\gamma_0>0$ and $\beta_0$ be the constants of Corollary \ref{Quick uniform convergence corollary} and fix $\varepsilon >0$. Note that $\beta_0 \leq 2$.
Fix $1 \leq k \leq n$ and pairwise distinct $\sigma_0,...,\sigma_k \in \triangle (n-1)$. Denote $\tau = \cap_{j=0}^k \sigma_j$.
Assume first that $\gamma \leq \gamma_0$. Then by Corollary \ref{Quick uniform convergence corollary}, we have that $T_\tau^i$ converges to $P_\tau$ and
\begin{equation}
\label{rate of conv ineq}
\Vert P_\tau - T_\tau^i \Vert \leq 4(k+1) \left( \frac{2(k+1)-1}{2(k+1)} \right)^{i-1} \leq 4(n+1) \left( \frac{2(n+1)-1}{2(n+1)} \right)^{i-1}.
\end{equation}
Without loss of generality, it is enough to show that there is $\gamma$ such that
$$\Vert P_{\sigma_0} ... P_{\sigma_k} (I- P_\tau ) \Vert \leq \varepsilon.$$
By \eqref{rate of conv ineq}, we can choose $i_0$ large enough such that
$$\Vert P_\tau - T_\tau^{i_0} \Vert \leq \frac{\varepsilon}{2^{n+2}} ,$$
and this $i_0$ can be chosen independently of $k$.
Therefore
\begin{align*}
\Vert P_{\sigma_0} ... P_{\sigma_k} (I- P_\tau ) \Vert \leq \Vert P_{\sigma_0} ... P_{\sigma_k} (I- T_\tau^{i_0} ) \Vert + \Vert P_{\sigma_0} ... P_{\sigma_k} (T_\tau^{i_0}- P_\tau ) \Vert \leq \\
\Vert P_{\sigma_0} ... P_{\sigma_k} (I- T_\tau^{i_0} ) \Vert + \Vert P_{\sigma_0} \Vert ... \Vert P_{\sigma_k} \Vert \frac{\varepsilon}{2^{n+2}} \leq \\
\Vert P_{\sigma_0} ... P_{\sigma_k} (I- T_\tau^{i_0} ) \Vert + \beta_0^{k+1} \frac{\varepsilon}{2^{n+2}} \leq \\ \Vert P_{\sigma_0} ... P_{\sigma_k} (I- T_\tau^{i_0} ) \Vert+ \frac{\varepsilon}{2}.
\end{align*}
We are left to show that by choosing $\gamma$ small enough, we can ensure that
$$\Vert P_{\sigma_0} ... P_{\sigma_k} (I- T_\tau^{i_0} ) \Vert \leq \dfrac{\varepsilon}{2}.$$
Denote $T_\tau' = I- T_\tau= \frac{(I-P_{\sigma_0}) +...+(I-P_{\sigma_k})}{k+1}$. Note that
$$I-T_\tau^{i_0} =T_\tau ' \left({i_0 \choose 1} I - {i_0 \choose 2} T_\tau ' + ...+ (-1)^{i_0-1} {i_0 \choose i_0} \left( T_\tau ' \right)^{i_0-1} \right).$$
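This identity is simply the binomial expansion of $T_\tau^{i_0} = (I - T_\tau ')^{i_0}$:

```latex
I - T_\tau^{i_0}
  = I - \sum_{j=0}^{i_0} (-1)^j {i_0 \choose j} \left( T_\tau ' \right)^j
  = \sum_{j=1}^{i_0} (-1)^{j-1} {i_0 \choose j} \left( T_\tau ' \right)^j
  = T_\tau ' \sum_{j=1}^{i_0} (-1)^{j-1} {i_0 \choose j} \left( T_\tau ' \right)^{j-1} .
```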
Recall that by our assumptions $\Vert T_\tau \Vert \leq \beta_0$ and therefore that $\Vert T_\tau ' \Vert \leq 1+\beta_0 \leq 3$. This yields that
\begin{dmath*}
\Vert P_{\sigma_0} ... P_{\sigma_k} (I- T_\tau^{i_0} ) \Vert \leq \\
\Vert P_{\sigma_0} ... P_{\sigma_k} T_\tau ' \Vert \left\Vert {i_0 \choose 1} I - {i_0 \choose 2} T_\tau ' + ...+ (-1)^{i_0-1} {i_0 \choose i_0} \left( T_\tau ' \right)^{i_0-1} \right\Vert \leq \\
\Vert P_{\sigma_0} ... P_{\sigma_k} T_\tau ' \Vert \left({i_0 \choose 1} \Vert I \Vert +{i_0 \choose 2} \Vert T_\tau ' \Vert + ... + {i_0 \choose i_0} \Vert T_\tau ' \Vert^{i_0-1} \right) \leq \\
\Vert P_{\sigma_0} ... P_{\sigma_k} T_\tau ' \Vert \dfrac{1}{3}\left({i_0 \choose 1} 3 +{i_0 \choose 2} 3^2 + ... + {i_0 \choose i_0} 3^{i_0} \right) = \Vert P_{\sigma_0} ... P_{\sigma_k} T_\tau ' \Vert \dfrac{4^{i_0}-1}{3} \leq \Vert P_{\sigma_0} ... P_{\sigma_k} T_\tau ' \Vert \dfrac{4^{i_0}}{3}.
\end{dmath*}
Therefore it is enough to show we can choose $\gamma$ small enough such that
$$\Vert P_{\sigma_0} ... P_{\sigma_k} T_\tau ' \Vert \leq \dfrac{3}{4^{i_0}} \dfrac{\varepsilon}{2},$$
(note that $i_0$ is independent of $\gamma$ as long as $\gamma \leq \gamma_0$). We will finish the proof by showing that
\begin{equation}
\label{claimed ineq}
\Vert P_{\sigma_0} ... P_{\sigma_k} T_\tau ' \Vert \leq n 2^{n+1} \gamma.
\end{equation}
By the definition of $ T_\tau '$, we have that
\begin{dmath*}
\Vert P_{\sigma_0} ... P_{\sigma_k} T_\tau ' \Vert \leq
\left\Vert \dfrac{ P_{\sigma_0} ... P_{\sigma_k} (I-P_{\sigma_0})}{k+1} \right\Vert +...+ \left\Vert \dfrac{ P_{\sigma_0} ... P_{\sigma_k} (I-P_{\sigma_k})}{k+1} \right\Vert.
\end{dmath*}
Therefore, in order to prove inequality \eqref{claimed ineq}, it is enough to show that for every $j,k$ such that $k\geq j \geq 0$, we have that
$$\Vert P_{\sigma_0} ... P_{\sigma_k} (I-P_{\sigma_j}) \Vert \leq (k-j) 2^{k+1} \gamma.$$
We will show this by induction on $k-j$. If $k-j=0$, i.e., if $k=j$ then
$$P_{\sigma_0} ... P_{\sigma_k} (I-P_{\sigma_k}) =0,$$
and we are done. Assume that $k>j$ and that the inequality holds for $k-1,j$, i.e., assume that
$$\Vert P_{\sigma_0} ... P_{\sigma_{k-1}} (I-P_{\sigma_j}) \Vert \leq (k-1-j) 2^{k} \gamma.$$
Then for $k$ and $j$ we have that
\begin{dmath*}
P_{\sigma_0} ... P_{\sigma_{k}} (I-P_{\sigma_j}) = P_{\sigma_0} ... P_{\sigma_{k-1}} (P_{\sigma_{k}}-P_{\sigma_{k}} P_{\sigma_j}) = P_{\sigma_0} ... P_{\sigma_{k-1}} (P_{\sigma_{k}}- P_{\sigma_j} P_{\sigma_{k}}) +P_{\sigma_0} ... P_{\sigma_{k-1}} (P_{\sigma_j} P_{\sigma_{k}}-P_{\sigma_{k}} P_{\sigma_j}) = P_{\sigma_0} ... P_{\sigma_{k-1}} (I- P_{\sigma_j}) P_{\sigma_{k}} +P_{\sigma_0} ... P_{\sigma_{k-1}} (P_{\sigma_j} P_{\sigma_{k}}-P_{\sigma_{k}} P_{\sigma_j}).
\end{dmath*}
Therefore
\begin{dmath*}
{\Vert P_{\sigma_0} ... P_{\sigma_{k}} (I-P_{\sigma_j}) \Vert} \leq \\ \Vert P_{\sigma_0} ... P_{\sigma_{k-1}} (I- P_{\sigma_j}) P_{\sigma_{k}} \Vert + \Vert P_{\sigma_0} ... P_{\sigma_{k-1}} (P_{\sigma_j} P_{\sigma_{k}}-P_{\sigma_{k}} P_{\sigma_j}) \Vert.
\end{dmath*}
Note that
$$\Vert P_{\sigma_j} P_{\sigma_{k}}-P_{\sigma_{k}} P_{\sigma_j} \Vert \leq \Vert P_{\sigma_j} P_{\sigma_{k}}-P_{\sigma_{k} \cap \sigma_j} \Vert + \Vert P_{\sigma_k} P_{\sigma_{j}}-P_{\sigma_{k} \cap \sigma_j} \Vert \leq 2 \gamma,$$
and therefore
$$\Vert P_{\sigma_0} ... P_{\sigma_{k-1}} (P_{\sigma_j} P_{\sigma_{k}}-P_{\sigma_{k}} P_{\sigma_j}) \Vert \leq \Vert P_{\sigma_0} ... P_{\sigma_{k-1}} \Vert 2 \gamma \leq 2^{k+1} \gamma.$$
Also, note that by the induction assumption
$$\Vert P_{\sigma_0} ... P_{\sigma_{k-1}} (I- P_{\sigma_j}) P_{\sigma_{k}} \Vert \leq (k-1-j) 2^{k} \gamma \Vert P_{\sigma_{k}} \Vert \leq (k-1-j) 2^{k+1} \gamma.$$
Combining the two inequalities above yields
\begin{dmath*}
{\Vert P_{\sigma_0} ... P_{\sigma_{k}} (I-P_{\sigma_j}) \Vert} \leq (k-j) 2^{k+1} \gamma,
\end{dmath*}
as needed.
\end{proof}
\begin{definition} [Consistency]
\label{consistency definition}
Let $X$, $\triangle$, $P_\sigma$ for $\sigma \in \triangle (n-1)$ be defined as above. We shall say that the projections $P_\sigma$ for $\sigma \in \triangle (n-1)$ are consistent if for every $\tau \subseteq \eta \subsetneqq \triangle$, whenever $P_\tau$ and $P_\eta$ exist we have $P_\tau P_\eta = P_\tau$.
\end{definition}
\begin{remark}
If the projections $P_\sigma$ for $\sigma \in \triangle (n-1)$ are consistent and $P_\tau$ exists for every $\tau \subseteq \triangle$, then for every $\tau , \tau' \subseteq \triangle$, we can define $\cos ( \angle (P_\tau, P_{\tau '}))$ as in the background section, i.e.,
$$\cos ( \angle (P_\tau, P_{\tau '})) = \max \lbrace \Vert P_\tau P_{\tau '} - P_{\tau \cap \tau '} \Vert, \Vert P_{\tau '} P_\tau - P_{\tau \cap \tau '} \Vert \rbrace.$$
\end{remark}
\begin{proposition}
\label{consistency is checked on P_i's proposition}
Let $X$, $\triangle$, $P_\sigma$ for $\sigma \in \triangle (n-1)$ be defined as above. Assume that for every $\tau \subseteq \triangle$, $P_\tau$ exists. Then the projections $P_\sigma$ for $\sigma \in \triangle (n-1)$ are consistent if and only if
$$\forall \tau \subsetneqq \triangle, \forall \sigma \in \triangle (n-1), \tau \subseteq \sigma \Rightarrow P_\tau P_\sigma = P_\tau.$$
\end{proposition}
\begin{proof}
One direction is trivial: assume that the projections $P_\sigma$ for $\sigma \in \triangle (n-1)$ are consistent. Then for every $\tau \subseteq \eta \subsetneqq \triangle$, we have that $P_\tau P_\eta = P_\tau$ and in particular this holds for every $\eta \in \triangle (n-1)$.
In the other direction, fix some $\tau \subseteq \eta \subsetneqq \triangle$. By our assumptions, we have for every $\sigma \in \triangle (n-1)$, $\tau \subseteq \sigma$ that $P_\tau P_\sigma = P_\tau$. Therefore, by the definition of $T_\eta$,
$$\forall i, P_\tau (T_\eta)^i = P_\tau,$$
which in turn implies that $P_\tau P_\eta = P_\tau$ as needed.
\end{proof}
\begin{proposition}
Let $X$, $\triangle$, $P_\sigma$ for $\sigma \in \triangle (n-1)$ be defined as above. Assume that for every $\tau \subseteq \triangle$, $P_\tau$ exists. If for every $\tau \subsetneqq \triangle$ there is a projection $P_\tau '$ on $\cap_{\sigma \in \triangle (n-1), \tau \subseteq \sigma} \operatorname{Im} (P_\sigma)$ such that
$$\forall \sigma \in \triangle (n-1), \tau \subseteq \sigma \Rightarrow P_\tau' P_\sigma = P_\tau ',$$
then the projections $P_\sigma$ for $\sigma \in \triangle (n-1)$ are consistent and for every $\tau \subsetneqq \triangle$, $P_\tau = P_\tau '$.
\end{proposition}
\begin{proof}
By Proposition \ref{canonical proposition}, we have that $T^i_\tau$ converges to $P_\tau '$ for every $\tau \subsetneqq \triangle$ and the consistency follows from Proposition \ref{consistency is checked on P_i's proposition}.
\end{proof}
The main tool that we will use to decompose the space $X$ is the following theorem stating that bounding the angle between each $P_{\sigma},P_{\sigma'}$ where $\sigma, \sigma' \in \triangle (n-1)$ gives a bound on the angle between $P_\tau,P_{\tau '}$ where $\tau, \tau '$ are any faces of $\triangle$.
\begin{theorem}
\label{small angle theorem}
Let $X$, $\triangle$, $P_\triangle$ and $P_\sigma$ for $\sigma \in \triangle (n-1)$ be defined as above. Assume the following:
\begin{enumerate}
\item The projections $P_\sigma$ for $\sigma \in \triangle (n-1)$ are consistent.
\item For any $\eta \in \triangle (n-2)$, the projections $P_\eta$ exist.
\item $ \max_{\sigma \in \triangle (n-1) \cup \triangle (n)} \Vert P_\sigma \Vert \leq \beta_0,$ where $\beta_0 >1$ is the constant of Corollary \ref{Quick uniform convergence corollary}.
\end{enumerate}
Then for every $\varepsilon >0$ there is $\gamma>0$ such that if
$$ \max \lbrace \cos(\angle (P_\sigma,P_{\sigma '})) : \sigma, \sigma ' \in \triangle (n-1) \rbrace \leq \gamma,$$
then the following holds:
\begin{enumerate}
\item For every $\tau \subseteq \triangle$, $P_\tau$ exists and $\Vert P_\tau \Vert \leq 4(n+1)+2$.
\item For every $\tau, \tau' \subseteq \triangle$ and every $\eta \subseteq \triangle$ such that $\tau \cap \tau' \subseteq \eta$ we have that
$$\Vert P_\tau P_{\tau'} (I-P_\eta) \Vert \leq \varepsilon.$$
In particular, $\cos (\angle (P_\tau,P_{\tau '})) \leq \varepsilon$.
\end{enumerate}
\end{theorem}
\begin{remark}
Variations of the above theorem were proven in the setting of Hilbert spaces in \cite{DymaraJ}, \cite{ErshovJZ} and \cite{Kassabov}. However, all these proofs use the fact that in a Hilbert space the following equality holds for any two subspaces $U,V$: $\angle (V,U) = \angle (V^\perp, U^\perp)$, where the angle here is the Friedrichs angle. In our setting, we do not know if such an equality holds, namely if $\cos (\angle (P_\tau,P_{\tau '})) = \cos (\angle (I-P_{\tau},I-P_{\tau'}))$ (we do not even know if $\cos (\angle (I-P_{\tau},I-P_{\tau'}))$ is well defined). This limitation required us to give a more direct proof using the idea of an angle between several projections.
\end{remark}
\begin{proof}
Let $\gamma_0>0$ and $\beta_0$ be the constants of Corollary \ref{Quick uniform convergence corollary} and let $\varepsilon ' >0$ be a constant to be determined later. By Theorem \ref{angle between several projections theorem}, there is a constant $\gamma_1 >0$ such that if
$$ \max \lbrace \cos(\angle (P_\sigma,P_{\sigma '})) : \sigma, \sigma ' \in \triangle (n-1) \rbrace \leq \gamma_1,$$
then for any $k = 1,...,n$ and for any $\eta \in \triangle (n-1-k)$, we have that
$$\cos (\angle (P_{\sigma_0},...,P_{\sigma_k} )) \leq \varepsilon',$$
where $\sigma_0,...,\sigma_k \in \triangle (n-1)$ are all the $(n-1)$-faces of $\triangle$ that contain $\eta$. Choose $\gamma = \min \lbrace \gamma_0, \gamma_1 \rbrace$.
If $\tau \in \triangle (n-1) \cup \triangle (n)$, then $P_\tau$ exists and $\Vert P_\tau \Vert \leq \beta_0 \leq 2 \leq 4(n+1)+2$. Assume next that $\vert \tau \vert < n$. Then by Corollary \ref{Quick uniform convergence corollary} we have that $P_\tau$ exists and
$$\Vert P_\tau \Vert \leq 4(n+1) + \Vert T_\tau \Vert \leq 4(n+1)+\beta_0 \leq 4(n+1)+2.$$
This concludes the proof of the first assertion of the theorem.
Let $\tau, \tau ' \subseteq \triangle$ and $\eta \subseteq \triangle$ such that $\tau \cap \tau ' \subseteq \eta$. First, we note that by the consistency assumption $P_{\tau \cap \tau' } (I-P_\eta)=0$ and therefore
\begin{dmath*}
{\Vert P_\tau P_{\tau '} (I-P_\eta) \Vert = \Vert P_\tau P_{\tau '} (I-P_{\tau \cap \tau' })(I-P_\eta) \Vert \leq} \\ \cos (\angle (P_\tau, P_{\tau '})) \Vert I-P_\eta \Vert \leq (4(n+1)+3)\cos (\angle (P_\tau, P_{\tau '})).
\end{dmath*}
Therefore, it is enough to show that for $\gamma$ small enough
$$\Vert P_\tau P_{\tau '}(I - P_{\tau \cap \tau '}) \Vert \leq \dfrac{\varepsilon}{(4(n+1)+3)},$$
for any $\tau, \tau' \subseteq \triangle$.
Note that if $\tau = \tau'$ or $\tau = \triangle$ or $\tau' = \triangle$, then $\cos (\angle (P_\tau, P_{\tau '})) =0$ and there is nothing to prove. Therefore, we can assume that $\tau \cap \tau' \in \triangle (n-1-k)$ for some $1 \leq k \leq n$. Let $\sigma_0,...,\sigma_k \in \triangle (n-1)$ be all the (pairwise distinct) $(n-1)$-faces of $\triangle$ that contain $\tau \cap \tau'$. Without loss of generality we can assume that
$$\tau \subseteq \sigma_0,...,\tau \subseteq \sigma_j \text{ and } \tau' \subseteq \sigma_{j+1},...,\tau' \subseteq \sigma_k.$$
We note that by the consistency assumption
$$P_\tau = P_\tau P_{\sigma_0} ... P_{\sigma_j},$$
and
$$P_{\tau'} = P_{\sigma_{j+1}} ... P_{\sigma_k} P_{\tau'}.$$
Therefore
\begin{dmath*}
\Vert P_\tau P_{\tau '}(I - P_{\tau \cap \tau '}) \Vert = \\
\Vert P_\tau P_{\sigma_0} ... P_{\sigma_k} P_{\tau'} (I- P_{\tau \cap \tau '}) \Vert = \\
\Vert P_\tau P_{\sigma_0} ... P_{\sigma_k} (I- P_{\tau \cap \tau '}) P_{\tau'} \Vert \leq \\
\Vert P_\tau \Vert \Vert P_{\tau'} \Vert \cos (\angle (P_{\sigma_0},...,P_{\sigma_k})) \leq (4(n+1)+2)^2 \varepsilon '.
\end{dmath*}
We conclude by choosing $\varepsilon ' = \frac{\varepsilon}{(4(n+1)+2)^2 (4(n+1)+3)}$.
\end{proof}
\begin{remark}
Theorem \ref{small angle theorem} can be proven without the assumption that the projections $P_\sigma$ with $\sigma \in \triangle (n-1)$ are consistent. However, we could not prove Theorem \ref{space decomposition} below without this assumption (see remark after the proof of Theorem \ref{space decomposition}). Our motivation for proving Theorem \ref{small angle theorem} was deducing Theorem \ref{space decomposition} and therefore we assumed consistency in the proof (this assumption simplifies the proof considerably). For completeness, we added a proof of Theorem \ref{small angle theorem} in the appendix that does not rely on the consistency assumption.
\end{remark}
Assuming that $P_\eta$ exists for each $\eta \subseteq \triangle$, we denote $X_\eta = \operatorname{Im} (P_\eta)$ and
$$X^\eta =
\begin{cases}
X_\emptyset & \eta = \emptyset \\
X_\eta \cap \bigcap_{\tau \subsetneqq \eta} \operatorname{Ker} (P_\tau) & \eta \neq \emptyset
\end{cases}.$$
The next theorem states that under suitable bounds on the angles between the $P_\sigma$'s for $\sigma \in \triangle (n-1)$ and on the norms of the $P_\sigma$'s for $\sigma \in \triangle (n-1) \cup \triangle(n)$, we have that
$$X_\eta = \bigoplus_{\tau \subseteq \eta} X^\tau.$$
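For orientation, in the smallest case $n=1$ of two projections $P_{\lbrace 0 \rbrace}, P_{\lbrace 1 \rbrace}$ with $P_\triangle = I$ (so that $X_\triangle = X$), the claimed decomposition reads:

```latex
X = X^{\emptyset} \oplus X^{\lbrace 0 \rbrace} \oplus X^{\lbrace 1 \rbrace} \oplus X^{\triangle} ,
% where, unwinding the definitions,
% X^{\emptyset} = \operatorname{Im} (P_\emptyset),
% X^{\lbrace i \rbrace} = \operatorname{Im} (P_{\lbrace i \rbrace}) \cap \operatorname{Ker} (P_\emptyset)
%   for i = 0,1, and
% X^{\triangle} = X \cap \operatorname{Ker} (P_{\lbrace 0 \rbrace})
%   \cap \operatorname{Ker} (P_{\lbrace 1 \rbrace}) \cap \operatorname{Ker} (P_\emptyset).
```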
\begin{theorem}
\label{space decomposition}
Let $X$, $\triangle$, $P_\triangle$ and $P_\sigma$ for $\sigma \in \triangle (n-1)$ be defined as above. Assume the following:
\begin{enumerate}
\item The projections $P_\sigma$ for $\sigma \in \triangle (n-1)$ are consistent.
\item For every $\tau \in \triangle (n-2)$, the projection $P_\tau$ exists.
\item $ \max_{\sigma \in \triangle (n-1) \cup \triangle (n)} \Vert P_\sigma \Vert \leq \beta_0,$ where $\beta_0 >1$ is the constant of Corollary \ref{Quick uniform convergence corollary}.
\end{enumerate}
Then there is $\gamma >0$ such that if
$$ \max \lbrace \cos(\angle (P_\sigma,P_{\sigma '})) : \sigma, \sigma ' \in \triangle (n-1) \rbrace \leq \gamma,$$
then for every $\eta \subseteq \triangle$, $P_\eta$ exists and
$$X_\eta = \bigoplus_{\tau \subseteq \eta} X^\tau.$$
\end{theorem}
The proof of this theorem is based on a theorem similar to our Theorem \ref{small angle theorem} that appears in \cite{DymaraJ}[section 11], and the proof given there applies almost verbatim in our setting. We will repeat the proof below for completeness, but we claim no originality here.
\begin{lemma}
\label{In X^eta lemma}
Let $X$, $\triangle$, $P_\triangle$ and $P_\sigma$ for $\sigma \in \triangle (n-1)$ be defined as above. Assume that the projections $P_\sigma$ for $\sigma \in \triangle (n-1)$ are consistent and that for every $\tau \subseteq \triangle$, $P_\tau$ exists.
Fix $0 \leq i \leq n+1$ and assume that for every $\tau \subseteq \triangle$ with $\vert \tau \vert <i$ there is a projection $R_\tau: X \rightarrow X$ on $X^\tau$ such that $R_\tau = R_\tau P_\tau$. Then for every $\eta \subseteq \triangle$ with $\vert \eta \vert =i$ the following holds for every $v \in X_\eta$:
$$v \in X^\eta \Leftrightarrow \forall \tau \subsetneqq \eta, R_\tau v = 0.$$
\end{lemma}
\begin{proof}
Assume first that $v \in X^\eta$. Then by definition, for every $\tau \subsetneqq \eta$, $v \in \operatorname{Ker} (P_\tau)$. By the assumptions of the lemma $R_\tau P_\tau = R_\tau$ and therefore $R_\tau v = R_\tau P_\tau v = 0$.
In the other direction we will use induction on $\vert \eta \vert$. For $\vert \eta \vert =0$, $X^\emptyset=X_\emptyset$ and therefore the assertion of the lemma holds. Fix $0 <i < n+1$ and assume the lemma is true for every $\tau \subseteq \triangle$ with $\vert \tau \vert <i$. Fix $\eta \subseteq \triangle$ with $\vert \eta \vert =i$ and fix $v \in X_\eta$ such that for every $\tau \subsetneqq \eta$, $R_\tau v = 0$. Let $\tau \subsetneqq \eta$ arbitrary. By the assumptions of the lemma for every $\tau ' \subsetneqq \tau$ the following holds:
$$R_{\tau '} P_\tau v = R_{\tau '} P_{\tau '} P_\tau v.$$
By the consistency assumption (and remark \ref{P-triangle always commute remark}), $ P_{\tau '} P_{\tau} = P_{\tau'}$ and therefore
$$R_{\tau'} P_{\tau} v = R_{\tau '} P_{\tau '} v = R_{\tau '} v =0.$$
By the induction assumption, we conclude that $P_\tau v \in X^\tau$. We also assumed that $R_\tau P_\tau v = R_\tau v=0$, therefore this yields that $P_\tau v =0$. We showed that for every $\tau \subsetneqq \eta$, $v \in \operatorname{Ker} (P_\tau)$ which implies that $v \in X^\eta$.
\end{proof}
We will use the above lemma to prove Theorem \ref{space decomposition}:
\begin{proof}
Let $\varepsilon >0$ to be determined later and let $\gamma>0$ be the constant corresponding to $\varepsilon >0$ given by Theorem \ref{small angle theorem}.
We shall prove that if $\varepsilon>0$ is small enough, then for each $0 \leq i \leq (n+1)$, there is a constant $C_i$ such that the following holds:
\begin{enumerate}
\item For each $\eta \subseteq \triangle$ with $\vert \eta \vert \leq i$, there is a projection $R_\eta : X \rightarrow X$ on $X^\eta$ such that $R_\eta P_\eta = R_\eta$ and $\Vert R_\eta \Vert \leq C_i$.
\item For every $0 \leq j \leq i$, $C_i \geq C_j$.
\item For every $\eta, \eta ' \subseteq \triangle$ such that $\eta \neq \eta'$ and $\vert \eta \vert, \vert \eta' \vert \leq i$, we have that $\Vert R_\eta R_{\eta'} \Vert \leq (C_{i})^2 \varepsilon$.
\item For each $\eta \subseteq \triangle$ with $\vert \eta \vert =i$, $X_\eta = \bigoplus_{\tau \subseteq \eta} X^\tau.$
\end{enumerate}
The cases $i=0, i=1$ are straightforward:
For $i = 0$, we have that if $\vert \eta \vert = 0$, then $\eta = \emptyset$. Take $R_\emptyset = P_\emptyset$ and $C_0 = 4(n+1)+2$. We will check that for this choice conditions 1.-4. hold:
\begin{enumerate}
\item Note that
$$R_\emptyset P_\emptyset =P_\emptyset P_\emptyset = P_\emptyset = R_\emptyset.$$
Also by Theorem \ref{small angle theorem}, $\Vert R_\emptyset \Vert \leq C_0$.
\item Holds vacuously.
\item Holds vacuously.
\item $X_\emptyset = X^\emptyset$.
\end{enumerate}
For $i=1$, for $\eta \subseteq \triangle$ with $\vert \eta \vert =1$, take $R_\eta = P_\eta - P_\emptyset$ and $C_1 = 2 (4(n+1)+2)$. We will check that for this choice conditions 1.-4. hold:
\begin{enumerate}
\item Note that
$$R_\eta = P_\eta - P_\emptyset = (I-P_\emptyset) P_\eta = (I-P_\emptyset) P_\eta P_\eta = R_\eta P_\eta.$$
Also, by Theorem \ref{small angle theorem},
$$\Vert R_\eta \Vert \leq \Vert P_\eta \Vert + \Vert P_\emptyset \Vert \leq C_1.$$
\item $C_1 = 2 C_0 \geq C_0$.
\item Let $\eta, \eta' \subseteq \triangle$ such that $\vert \eta \vert, \vert \eta ' \vert \leq 1$ and $\eta \neq \eta '$. If $\eta = \emptyset$ or $\eta ' = \emptyset$, then $R_\eta R_{\eta'} =0$. If $\vert \eta \vert = \vert \eta ' \vert =1$, then $\eta \cap \eta' =\emptyset$ and
$$\Vert R_\eta R_{\eta'} \Vert = \Vert (P_\eta - P_\emptyset) (P_{\eta '} - P_\emptyset ) \Vert = \Vert P_\eta P_{\eta '} -P_{\eta \cap \eta ' } \Vert \leq \varepsilon \leq C_1^2 \varepsilon,$$
as needed.
\item For every $\eta \subseteq \triangle$, such that $\vert \eta\vert =1$, $P_\eta - P_\emptyset$ is a projection on $X^\eta$ and therefore $X_\eta = X^\eta \oplus X^\emptyset$.
\end{enumerate}
We proceed by induction. Let $i >1$ and assume that $(1),(2),(3),(4)$ above hold for every $j<i$.
\textbf{Step 1 (proof of conditions 1.,2.):} Let $\eta \subseteq \triangle$ with $\vert \eta \vert = i$. We will show that $X_\eta$ is a sum of $X^\tau$ with $\tau \subseteq \eta$ and in doing so, we will find a projection operator $R_\eta : X \rightarrow X$ such that $\operatorname{Im} (R_\eta) = X^\eta$ and $R_\eta P_\eta = R_\eta$.
Let $d = 2^i-2$ and consider the $(d+1)$-valent tree such that each edge is labelled by some $\tau \subsetneqq \eta$ and no two edges with the same label meet at a vertex. Fix a vertex $x_0$ to be the root of this tree. Then for every vertex $x_j$ with distance $j >0$ from $x_0$ there is a path labelled $\tau_1,...,\tau_j$ from $x_0$ to $x_j$. For such $x_j$, define an operator $R(x_j) = (-1)^j R_{\tau_j} ... R_{\tau_1}$ and define $R(x_0) = I$.
Denote the vertices of the tree by $V$ and define
$$L_\eta = \sum_{x \in V} R(x).$$
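In other words, unwinding the definition, the first terms of $L_\eta$ read
$$L_\eta = I - \sum_{\tau \subsetneqq \eta} R_\tau + \sum_{\tau_1 \neq \tau_2} R_{\tau_2} R_{\tau_1} - ...,$$
that is, $L_\eta$ is the alternating sum over all finite words in the projections $R_\tau$, $\tau \subsetneqq \eta$, in which no letter appears twice in a row (the labelled tree exactly parametrizes such words).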
Let $x_j$ be a vertex with distance $j>0$ from $x_0$. By the induction assumption $(3)$ we have that $\Vert R(x_j) \Vert \leq (C_{i-1}^2 \varepsilon )^{j-1} C_{i-1}$. Therefore if we choose $\varepsilon \leq \frac{1}{2 d C_{i-1}^{2}}$, then for every $v \in X_\eta$, $ \sum_{x} R(x) v$ is absolutely convergent:
$$\sum_{x} \Vert R(x) v \Vert \leq (1+ (d+1) C_{i-1} \sum_{j=1}^ \infty (d C_{i-1}^2 \varepsilon )^{j-1} ) \Vert v \Vert \leq (1 + 2(d+1) C_{i-1}) \Vert v \Vert.$$
Therefore $L_\eta$ is well defined if $\varepsilon$ is sufficiently small. For every $\tau \subsetneqq \eta$, denote
$$B_\tau = \lbrace x \in V \setminus \lbrace x_0 \rbrace \text{ such that the path from } x_0 \text{ to } x \text{ begins with } \tau \rbrace, $$
$$E_\tau = \lbrace x \in V \setminus \lbrace x_0 \rbrace \text{ such that the path from } x_0 \text{ to } x \text{ ends with } \tau \rbrace. $$
Then for every $\tau \subsetneqq \eta$, we have that
\begin{dmath*}
L_\eta = \sum_{x \in E_\tau} R(x) +\sum_{x \in V \setminus E_\tau} R(x) = -R_\tau (\sum_{x \in V \setminus E_\tau} R(x)) + \sum_{x \in V \setminus E_\tau} R(x).
\end{dmath*}
Therefore, for every $\tau \subsetneqq \eta$, $R_\tau L_\eta = 0$ and therefore by Lemma \ref{In X^eta lemma} above, for every $v \in X_\eta$, $L_\eta v \in X^\eta$. This shows that $\operatorname{Im} (L_\eta) \subseteq X^\eta$. To see that $\operatorname{Im} (L_\eta) = X^\eta$, notice that for every $v \in X^\eta$ and for every $\tau \subsetneqq \eta$, $R_\tau v=0$ and therefore by the definition of $L_\eta$, $L_\eta v =v$.
We will take $L_\eta P_\eta$ as our candidate for $R_\eta$ and take $C_i = (4(n+1)+2) (1 + 2(d+1) C_{i-1})$ as a bound on $\Vert R_\eta \Vert$ (we showed above that $\Vert L_\eta \Vert \leq 1 + 2(d+1) C_{i-1}$). Notice that $C_i$ was chosen such that $C_i \geq C_{i-1}$ as needed. It is clear that taking $R_\eta = L_\eta P_\eta$ implies that $R_\eta P_\eta = R_\eta$.
To show that $R_\eta$ is indeed a projection, notice first that for every $\tau \subsetneqq \eta$, we have that
\begin{dmath*}
L_\eta = \sum_{x \in B_\tau} R(x) +\sum_{x \in V \setminus B_\tau} R(x) = - (\sum_{x \in V \setminus B_\tau} R(x)) R_\tau + \sum_{x \in V \setminus B_\tau} R(x).
\end{dmath*}
Therefore, for every $\tau \subsetneqq \eta$, $L_\eta R_\tau = 0$. Second, notice that
\begin{dmath*}
L_\eta = I+ \sum_{\tau \subsetneqq \eta} \sum_{x \in E_\tau} R(x) = I- \sum_{\tau \subsetneqq \eta} R_\tau \sum_{x \in V \setminus E_\tau} R(x).
\end{dmath*}
Therefore
$$L_\eta^2 = L_\eta (I- \sum_{\tau \subsetneqq \eta} R_\tau \sum_{x \in V \setminus E_\tau} R(x)) = L_\eta -\sum_{\tau \subsetneqq \eta} L_\eta R_\tau \sum_{x \in V \setminus E_\tau} R(x) = L_\eta.$$
This yields that $R_\eta^2 = R_\eta$.
The same computation also shows that $X_\eta$ is a linear sum of $X^\tau$ with $\tau \subseteq \eta$. First, for every $v \in X_\eta$ we showed that $L_\eta v \in X^\eta$. Second, if we denote for every $\tau \subsetneqq \eta$,
$$v^\tau = R_\tau \sum_{x \in V \setminus E_\tau} R(x) v,$$
then $v^\tau \in X^\tau$. Last, we showed above that
$$L_\eta = I- \sum_{\tau \subsetneqq \eta} R_\tau \sum_{x \in V \setminus E_\tau} R(x),$$
and this yields that for every $v \in X_\eta$,
$$v = \sum_{\tau \subsetneqq \eta} v^\tau + L_\eta v,$$
as needed.
\textbf{Step 2 (proof of condition 3.):} We will show that for every $\eta, \eta' \subseteq \triangle$, with $\vert \eta \vert, \vert \eta ' \vert \leq i$ and $\eta \neq \eta'$, we have that $\Vert R_\eta R_{\eta '} \Vert \leq (C_{i})^2 \varepsilon$. We'll split the proof of this fact into several cases.
In the case that $\eta \cap \eta ' \subsetneqq \eta '$, notice that $\operatorname{Im} (R_{\eta '}) \subseteq \operatorname{Im} (P_{\eta '}) \cap \operatorname{Ker} (P_{\eta \cap \eta '})$ and therefore
$$R_\eta R_{\eta '} = R_\eta P_\eta P_{\eta'} (I- P_{\eta \cap \eta'}) R_{\eta '}.$$
This yields that
$$\Vert R_\eta R_{\eta '} \Vert \leq \Vert R_\eta \Vert \Vert R_{\eta '} \Vert \cos (\angle (P_\eta, P_{\eta ' } )) \leq C_{\vert \eta \vert} C_{\vert \eta ' \vert} \varepsilon \leq (C_i)^2 \varepsilon,$$
as needed.
In the case that $\eta \subseteq \eta'$, we have that $\operatorname{Im} (R_{\eta '}) \subseteq \operatorname{Ker} (P_\eta)$ and therefore
$$R_\eta R_{\eta '} = R_\eta P_\eta R_{\eta '} = 0.$$
In the case that $\eta ' \subseteq \eta$ and $\vert \eta \vert \leq i-1, \vert \eta ' \vert \leq i-1$, the inequality follows from the induction assumption.
We are left with the case in which $\vert \eta \vert =i$ and $\eta ' \subseteq \eta$. In this case, by step 1 above, $L_\eta R_{\eta '} =0$ and therefore $R_\eta R_{\eta '} =0$ and we are done.
\textbf{Step 3 (proof of condition 4.):} We will finish by showing that given that $\varepsilon >0$ is small enough,
$$X_\eta = \bigoplus_{\tau \subseteq \eta} X^\tau.$$
We already showed in step 1, that $X_\eta$ is a linear sum of $X^\tau$ such that $\tau \subseteq \eta$. Assume there are $v^\tau \in X^\tau$ such that
$$\sum_{\tau \subseteq \eta} v^\tau =0.$$
Let $\tau'$ be such that for every $\tau \subseteq \eta$, $\Vert v^{\tau'} \Vert \geq \Vert v^{\tau} \Vert$. Then $R_{\tau'} (\sum_{\tau \subseteq \eta} v^\tau) =0$. Using the bound on the norm of $\Vert R_{\tau'} R_{\tau} \Vert$ established in step 2, this yields
\begin{dmath*}
0= \Vert R_{\tau'} (\sum_{\tau \subseteq \eta} v^\tau) \Vert \geq \Vert v^{\tau'} \Vert - \Vert \sum_{\tau \subseteq \eta, \tau \neq \tau '} R_{\tau'} v^\tau \Vert \geq \\
\Vert v^{\tau'} \Vert - \sum_{\tau \subseteq \eta, \tau \neq \tau'} \Vert R_{\tau'} R_{\tau} v^\tau \Vert \geq \\
\Vert v^{\tau'} \Vert - (2^{i}-1) (C_i)^2 \varepsilon \Vert v^{\tau'} \Vert = \Vert v^{\tau'} \Vert (1-(2^{i}-1) (C_i)^2 \varepsilon).
\end{dmath*}
Therefore, if $\varepsilon$ is chosen such that $\varepsilon < \frac{1}{(2^{i}-1) (C_i)^2}$, we get that $\Vert v^{\tau'} \Vert =0$ and therefore $v^\tau =0$ for every $\tau \subseteq \eta$. This yields
$$X_\eta = \bigoplus_{\tau \subseteq \eta} X^\tau,$$
as needed.
\end{proof}
\begin{remark}
Note that in the above proof, the consistency assumption is crucial in the proof of Lemma \ref{In X^eta lemma} which in turn was crucial for step 1 of the above proof.
\end{remark}
\section{Vanishing of cohomology}
Let $\Sigma$ be a pure $n$-dimensional infinite simplicial complex and let $G < Aut (\Sigma)$ be a closed subgroup. Assume that $(\Sigma, G)$ satisfies conditions $(\mathcal{B} 1)-(\mathcal{B} 4)$ defined in subsection \ref{Groups acting on simplicial complexes subscetion} above. Assume further that all the $1$-dimensional links of $\Sigma$ are compact. Fix a chamber $\triangle \in \Sigma (n)$. Let $\mu$ be the Haar measure on $G$. For $-1 \leq i \leq n$, denote $\triangle (i)$ to be the $i$-dimensional simplices of $\triangle$.
For $\sigma \in (\triangle (n) \cup \triangle (n-1) \cup \triangle (n-2))$ define $k_\sigma \in C_c (G)$ as
$$k_\sigma = \frac{\chi_{G_\sigma}}{\mu (G_\sigma)},$$
where $\chi_{G_\sigma}$ is the indicator function on $G_\sigma$ (note that by our assumptions $G_\sigma$ is a compact group). Observe that
\begin{itemize}
\item For $\sigma, \tau \in (\triangle (n) \cup \triangle (n-1) \cup \triangle (n-2))$, if $\tau \subset \sigma$, then $k_\tau k_\sigma = k_\tau$.
\item For any continuous representation $\pi$ of $G$ on a Banach space $X$ and any $\sigma \in (\triangle (n) \cup \triangle (n-1) \cup \triangle (n-2))$, $\pi (k_\sigma)$ is a projection onto $X^{\pi (G_\sigma)}$ (recall that $X^{\pi (G_\sigma)}$ is the subspace of vectors fixed by $G_\sigma$).
\end{itemize}
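For completeness, the first observation is a short convolution computation: for $g \in G$,
$$(k_\tau k_\sigma) (g) = \frac{1}{\mu (G_\tau) \mu (G_\sigma)} \int_G \chi_{G_\tau} (h) \chi_{G_\sigma} (h^{-1} g) \mathrm{d} \mu (h).$$
Since $\tau \subset \sigma$ implies $G_\sigma \subseteq G_\tau$, the integrand vanishes unless $g \in G_\tau$, and for $g \in G_\tau$ the admissible $h$ form the coset $g G_\sigma \subseteq G_\tau$ of measure $\mu (G_\sigma)$; hence $k_\tau k_\sigma = k_\tau$.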
These observations yield that for any two $\sigma, \sigma ' \in \triangle (n-1)$ and any representation $\pi$ of $G$, we can define the cosine of the angle between $\pi (k_\sigma)$ and $\pi (k_{\sigma'})$ as in definition \ref{angle between projections definition} above:
\begin{dmath*}
{\cos (\angle (\pi (k_\sigma), \pi (k_{\sigma'}))) =} \\
{ \max \lbrace \Vert \pi (k_\sigma) \pi ( k_{\sigma '}) - \pi (k_{\sigma \cap \sigma '} ) \Vert, \Vert \pi (k_{\sigma'}) \pi ( k_{\sigma }) - \pi (k_{\sigma \cap \sigma '} ) \Vert \rbrace =} \\
\max \lbrace \Vert \pi (k_\sigma k_{\sigma '} - k_{\sigma \cap \sigma '} ) \Vert, \Vert \pi (k_{\sigma'} k_{\sigma } - k_{\sigma \cap \sigma '} ) \Vert \rbrace.
\end{dmath*}
Therefore we are in the setting of Theorem \ref{space decomposition}. Applying Theorem \ref{space decomposition} combined with Theorem \ref{general vanishing of cohomology based on decomposition} yields the following:
\begin{theorem}
\label{vanishing of cohomology by conditions on the representations}
Let $\Sigma$ be a pure $n$-dimensional infinite simplicial complex and let $G < Aut (\Sigma)$ be a closed subgroup. Assume that $(\Sigma, G)$ satisfy conditions $(\mathcal{B} 1)-(\mathcal{B} 4)$ and that there is $l \in \mathbb{N}$ such that all the $l$-dimensional links of $\Sigma$ are compact. Then there are constants $\gamma = \gamma (n) >0$, $\beta = \beta (n) >1$ such that for every representation $\pi$ of $G$ on a Banach space, if
$$\sup_{\sigma \in \triangle (n-1)} \Vert \pi (k_\sigma) \Vert \leq \beta , \sup_{\sigma, \sigma ' \in \triangle (n-1)} \cos (\angle (\pi (k_\sigma), \pi (k_{\sigma '}))) \leq \gamma,$$
and the projections $\pi (k_\sigma)$ with $\sigma \in \triangle (n-1)$ are consistent, then
$$H^* (G, \pi) = \bigoplus_{\eta \subseteq \triangle} \widetilde{H}^{*-1} (D_\eta; X^{\eta}),$$
and
$$H^i (G,\pi) = 0 \text{ for } i=1,...,l.$$
\end{theorem}
\begin{proof}
Denote $P_\sigma = \pi (k_\sigma)$ for $\sigma \in \triangle (n-2) \cup \triangle (n-1) \cup \triangle (n)$. Let $\beta = \beta_0>1$ and $\gamma$ be as in Theorem \ref{space decomposition}. The assumptions on $\pi$ guarantee that the $P_\sigma$'s fulfil the conditions of Theorem \ref{space decomposition}.
Therefore for every $\eta \subseteq \triangle$, $X_\eta = \bigoplus_{\tau \subseteq \eta} X^\tau$. The vanishing of cohomology follows from Theorem \ref{general vanishing of cohomology based on decomposition}.
Note that the constants $\gamma$, $\beta$ depend only on the dimension $n$ (and not on any other characteristics of $\Sigma$).
\end{proof}
We will show that there are sufficient conditions that guarantee the fulfilment of the conditions of the theorem above in a class of representations $\mathcal{F}_0 (\overline{\mathcal{E}_3 (\mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2),C_3)}, G, s_0)$ defined in the introduction for suitable choices of $s_0>0$ and $r$. We start by recalling the following result from \cite{ORobust} that connects the Schatten norm of the projection operators to condition $(\mathcal{B}_{\delta, r})$ defined above:
\begin{lemma}\cite{ORobust}[Corollary 4.20]
\label{condition r, delta to schatten norm lemma}
Let $\Sigma$ be a pure $n$-dimensional infinite simplicial complex and let $G < Aut (\Sigma)$ be a closed subgroup. Assume that $(\Sigma, G)$ satisfy conditions $(\mathcal{B} 1)-(\mathcal{B} 4)$ and condition $(\mathcal{B}_{\delta, r})$, then for every $\sigma, \sigma' \in \triangle (n-1)$,
$$\Vert \lambda (k_\sigma k_{\sigma' } - k_{\sigma \cap \sigma'}) \Vert_{S^r} \leq \delta,$$
where $\lambda \in B(L^2 (G_{\sigma \cap \sigma'}, \mu))$ is the left regular representation.
\end{lemma}
Using the above lemma, we are able to deduce arbitrarily small angles between all the projections $\pi (k_\sigma)$ and $\pi (k_{\sigma '})$, given that the condition $(\mathcal{B}_{\delta,r'})$ is fulfilled:
\begin{lemma}
\label{small angle by B-delta,r condition}
Let $\Sigma$ be a pure $n$-dimensional infinite simplicial complex and let $G < Aut (\Sigma)$ be a closed subgroup. Assume that $(\Sigma, G)$ satisfy conditions $(\mathcal{B} 1)-(\mathcal{B} 4)$ and that the $1$-dimensional links of $\Sigma$ are finite.
Let $r > 2, C_1 \geq 1, 1 \geq \theta_2 > 0, C_{3} \geq 1$ be constants. For every $\gamma >0$, $s_0 \geq 0$, $2 \leq r' < r$, there is a $\delta >0$ such that if $(\Sigma, G)$ satisfies condition $(\mathcal{B}_{\delta,r'})$, then for every $\pi \in \mathcal{F} (\overline{\mathcal{E}_3 (\mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2),C_3)}, G, s_0)$,
$$\sup_{\sigma, \sigma ' \in \triangle (n-1)} \cos (\angle (\pi (k_\sigma), \pi (k_{\sigma '}))) \leq \gamma.$$
\end{lemma}
\begin{proof}
Fix $\pi \in \mathcal{F} (\overline{\mathcal{E}_3 (\mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2),C_3)}, G, s_0)$. Let $\sigma, \sigma' \in \triangle (n-1)$ be any two different $(n-1)$-dimensional faces of $\triangle$ and assume without loss of generality that
$$\cos (\angle (\pi (k_\sigma), \pi (k_{\sigma '}))) = \Vert \pi (k_\sigma k_{\sigma '} - k_{\sigma \cap \sigma '} ) \Vert.$$
By Proposition \ref{bounding the norm of pi(f) - proposition}, we have that
$$\Vert \pi (k_\sigma k_{\sigma '} - k_{\sigma \cap \sigma '} ) \Vert \leq e^{2s_0} \Vert \lambda (k_\sigma k_{\sigma '} - k_{\sigma \cap \sigma '} ) \otimes id_X \Vert_{B(L^2 (G_{\sigma \cap \sigma'} ;X))}.$$
Note that for any Banach space $X$, we have that
$$\Vert \lambda (k_\sigma k_{\sigma '} - k_{\sigma \cap \sigma '} ) \otimes id_X \Vert_{B(L^2 (G_{\sigma \cap \sigma'} ;X))} \leq \Vert \lambda (k_\sigma k_{\sigma '} - k_{\sigma \cap \sigma '} ) \Vert_{B(L^1 (G_{\sigma \cap \sigma'}))} \leq 2.$$
Assuming that $\delta \leq 1$ and applying Lemma \ref{condition r, delta to schatten norm lemma} and Corollary \ref{bounding norm of composition of deformations} (with $L=2$) yields that
$$ \Vert \lambda (k_\sigma k_{\sigma '} - k_{\sigma \cap \sigma '} ) \otimes id_X \Vert_{B(L^2 (G_{\sigma \cap \sigma'} ;X))} \leq C_3 2 (C \delta)^{\theta_2},$$
where $C=C(C_1,r,r')$ is the constant given in Corollary \ref{bounding norm of composition of deformations}. Therefore, we have that for every $\pi \in \mathcal{F} (\overline{\mathcal{E}_3 (\mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2),C_3)}, G, s_0)$,
$$\cos (\angle (\pi (k_\sigma), \pi (k_{\sigma '}))) \leq e^{2s_0} C_3 2 (C \delta)^{\theta_2},$$
and choosing $\delta = \frac{1}{C} (\frac{\gamma}{2e^{2s_0} C_3} )^{\frac{1}{\theta_2}}$ yields the needed inequality.
\end{proof}
The implication of the above lemma is that when applying Theorem \ref{vanishing of cohomology by conditions on the representations} on a class of representations of the form $\mathcal{F} (\overline{\mathcal{E}_3 (\mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2),C_3)}, G, s_0)$, one can replace the condition
$$\sup_{\sigma, \sigma ' \in \triangle (n-1)} \cos (\angle (\pi (k_\sigma), \pi (k_{\sigma '}))) \leq \gamma,$$
by the condition $(\mathcal{B}_{\delta,r'})$ for suitable values of $\delta$ and $r'$:
\begin{theorem}
\label{vanishing of cohomology by conditions on the links and consistency assumption}
Let $\Sigma$ be a pure $n$-dimensional infinite simplicial complex and let $G < Aut (\Sigma)$ be a closed subgroup. Assume that $(\Sigma, G)$ satisfy conditions $(\mathcal{B} 1)-(\mathcal{B} 4)$ and that there is a $l \geq 1$ such that all the $l$-dimensional links of $\Sigma$ are compact.
Let $r > r' \geq 2, C_1 \geq 1, 1 \geq \theta_2 > 0, C_{3} \geq 1$ be constants. Then there are $s_0 = s_0 (n)>0$ and
$$\delta = \delta (n,r,r',C_{1},\theta_2, C_{3}) >0$$
such that if $(\Sigma, G)$ fulfil condition $(\mathcal{B}_{\delta,r'})$ and if the projections $\pi (k_\sigma)$ with $\sigma \in \triangle (n-1)$ are consistent, then
$$H^* (G, \pi) = \bigoplus_{\eta \subseteq \triangle} \widetilde{H}^{*-1} (D_\eta; X^{\eta}),$$
and
$$H^i (G,\pi) = 0 \text{ for } i=1,...,l,$$
for every $\pi \in \mathcal{F} (\overline{\mathcal{E}_3 (\mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2),C_3)}, G, s_0)$.
\end{theorem}
\begin{proof}
Let $\beta >1$, $\gamma >0$ be the constants given by Theorem \ref{vanishing of cohomology by conditions on the representations}.
Choose $s_0 = \ln (\beta)$; by this choice the inequality
$$\max_{\sigma \in \triangle (n-1)} \Vert \pi (k_\sigma) \Vert \leq \sup_{g \in \bigcup_{\sigma \in \triangle (n-1)} G_\sigma} \Vert \pi (g) \Vert \leq \sup_{g \in \bigcup_{\tau \in \triangle (n-2)} G_\tau} \Vert \pi (g) \Vert \leq e^{s_0} = \beta$$
is satisfied for each $\pi \in \mathcal{F} (\overline{\mathcal{E}_3 (\mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2),C_3)}, G, s_0)$.
By Lemma \ref{small angle by B-delta,r condition}, we can choose $\delta>0$ small enough such that the condition $(\mathcal{B}_{\delta,r'})$ implies that
$$\sup_{\sigma, \sigma ' \in \triangle (n-1)} \cos (\angle (\pi (k_\sigma), \pi (k_{\sigma '}))) \leq \gamma,$$
for every $\pi \in \mathcal{F} (\overline{\mathcal{E}_3 (\mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2),C_3)}, G, s_0)$.
Therefore for this choice of $s_0>0$ and $\delta >0$, the conditions of Theorem \ref{vanishing of cohomology by conditions on the representations} are fulfilled and the conclusion follows.
\end{proof}
The unsatisfactory part of the above theorem is the assumption of consistency of the projections $\pi (k_\sigma)$. We will show that when passing to the class $\mathcal{F}_0 (\overline{\mathcal{E}_3 (\mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2),C_3)}, G, s_0)$ (in which the dual representations are continuous) this assumption always holds.
\begin{lemma}
\label{continuous dual implies consistency}
Let $\Sigma$ be a pure $n$-dimensional infinite simplicial complex and let $G < Aut (\Sigma)$ be a closed subgroup. Assume that $(\Sigma, G)$ satisfy conditions $(\mathcal{B} 1)-(\mathcal{B} 4)$.
Let $\pi$ be a continuous representation on a Banach space $X$ such that
$$\sup_{\sigma \in \triangle (n-1)} \Vert \pi (k_\sigma) \Vert \leq \beta_0 , \sup_{\sigma, \sigma ' \in \triangle (n-1)} \cos (\angle (\pi (k_\sigma), \pi (k_{\sigma '}))) \leq \gamma_0,$$
where $\beta_0 >1, \gamma_0 >0$ are the constants given by Corollary \ref{Quick uniform convergence corollary}.
If the dual representation $\pi^*$ is continuous, then the projections $\pi (k_\sigma)$ for $\sigma \in \triangle (n-1)$ are consistent.
\end{lemma}
\begin{proof}
Let $\mathcal{F} = \lbrace \pi, \pi^* \rbrace$. Note that for $\pi^*$ the following holds:
$$\sup_{\sigma \in \triangle (n-1)} \Vert \pi^* (k_\sigma) \Vert \leq \beta_0 , \sup_{\sigma, \sigma ' \in \triangle (n-1)} \cos (\angle (\pi^* (k_\sigma), \pi^* (k_{\sigma '}))) \leq \gamma_0.$$
Therefore by Corollary \ref{Quick uniform convergence corollary}, for every $\tau \subsetneqq \triangle$,
$$\left(\frac{1}{n+1-\vert \tau \vert} \sum_{\sigma \in \triangle (n-1), \tau \subseteq \sigma} k_\sigma \right)^i \xrightarrow{i \rightarrow \infty} k_\tau ,$$
where the convergence is in $C_{\mathcal{F}}$ and $\pi (k_\tau)$ and $\pi^* (k_\tau)$ are projections on $X^{\pi(G_\tau)}$ and $(X^*)^{\pi^* (G_\tau)}$ respectively.
By Proposition \ref{consistency is checked on P_i's proposition}, in order to prove consistency, it is enough to show that
$$\forall \tau \subsetneqq \triangle, \forall \sigma \in \triangle (n-1), \tau \subseteq \sigma \Rightarrow \pi (k_\tau) \pi (k_\sigma) = \pi (k_\tau).$$
We will prove the following condition which is actually stronger:
$$\forall \tau \subsetneqq \triangle, \forall g \in G_\tau, \pi (k_\tau) \pi (g) = \pi (k_{\tau}).$$
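This condition is indeed stronger: averaging it over $g \in G_\sigma$ yields, for every $\sigma \in \triangle (n-1)$ with $\tau \subseteq \sigma$,
$$\pi (k_\tau) \pi (k_\sigma) = \frac{1}{\mu (G_\sigma)} \int_{G_\sigma} \pi (k_\tau) \pi (g) \mathrm{d} \mu (g) = \pi (k_\tau).$$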
Fix some $\tau \subsetneqq \triangle$ and $g \in G_\tau$. For every $v \in X, w \in X^*$ we have that
\begin{dmath*}
\langle \pi (k_\tau) \pi (g).v, w \rangle = \langle v, \pi^* (g) \pi^* (k_\tau).w \rangle = \langle v, \pi^* (k_\tau).w \rangle = \langle \pi (k_\tau).v, w \rangle.
\end{dmath*}
Therefore, $\pi (k_\tau) \pi (g) = \pi (k_{\tau})$ as needed.
\end{proof}
\begin{remark}
We note that if $G$ is a discrete group, then the condition of $\pi^*$ being continuous always holds (since it is vacuous).
\end{remark}
As a corollary of Lemma \ref{continuous dual implies consistency} we deduce the following theorem:
\begin{theorem}
\label{vanishing of cohomology by conditions on the links and cont dual assumption}
Let $\Sigma$ be a pure $n$-dimensional infinite simplicial complex and let $G < Aut (\Sigma)$ be a closed subgroup. Assume that $(\Sigma, G)$ satisfy conditions $(\mathcal{B} 1)-(\mathcal{B} 4)$ and that there is $l \in \mathbb{N}$ such that all the $l$-dimensional links of $\Sigma$ are compact.
Let $r > r' \geq 2, C_1 \geq 1, 1 \geq \theta_2 > 0, C_{3} \geq 1$ be constants. Then there are $s_0 = s_0 (n)>0$ and
$$\delta = \delta (n,r,r',C_{1},\theta_2, C_{3}) >0$$
such that if $(\Sigma, G)$ fulfil condition $(\mathcal{B}_{\delta,r'})$, then
$$H^* (G, \pi) = \bigoplus_{\eta \subseteq \triangle} \widetilde{H}^{*-1} (D_\eta; X^{\eta}),$$
and
$$H^i (G,\pi) = 0 \text{ for } i=1,...,l,$$
for every $\pi \in \mathcal{F}_0 (\overline{\mathcal{E}_3 (\mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2),C_3)}, G, s_0)$.
\end{theorem}
\begin{proof}
By Lemma \ref{continuous dual implies consistency}, the projections $\pi (k_\sigma)$ with $\sigma \in \triangle (n-1)$ are consistent for every $\pi \in \mathcal{F}_0 (\overline{\mathcal{E}_3 (\mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2),C_3)}, G, s_0)$ and therefore we can apply Theorem \ref{vanishing of cohomology by conditions on the links and consistency assumption}.
\end{proof}
We recall that by Theorem \ref{Asplund implies continuous dual rep} stated above, for a continuous representation $\pi$ on a Banach space $X$, if $X$ is an Asplund space then $\pi^*$ is continuous. We also recall that all reflexive Banach spaces are Asplund spaces and it was shown in the introduction that the subclass of reflexive Banach spaces of $\overline{\mathcal{E}_3 (\mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2),C_3)}$ contains several interesting families of Banach spaces.
As stated in subsection \ref{Groups acting on simplicial complexes subscetion} above, the main example of couples $(\Sigma, G)$ satisfying the conditions $(\mathcal{B} 1)-(\mathcal{B} 4)$ are groups $G$ with a BN-pair acting on a building $\Sigma$. In Proposition \ref{condition B-delta,r for buildings}, we showed that the condition $(\mathcal{B}_{\delta,r})$ can also be deduced for these examples for suitable values of $r$. Therefore we can deduce the following corollary:
\begin{corollary}
\label{vanishing of cohomology for BN pair groups}
Let $G$ be a group coming from a BN-pair and let $\Sigma$ be the $n$-dimensional building on which it acts. Assume that $n >1$ and there is some $l \geq 1$ such that all the $l$-dimensional links of $\Sigma$ are compact. Denote by $q$ the thickness of the building $\Sigma$ and let $m'$ be the smallest integer such that all the $1$-dimensional links of $\Sigma$ are generalized $m$-gons with $m \leq m'$.
Let
$$r > \begin{cases}
4 & m' =3 \\
8 & m' =4 \\
18 & m' = 6 \\
20 & m' =8
\end{cases}$$
and $C_{1} \geq 1, 1 \geq \theta_2 > 0, C_{3} \geq 1$ be constants, then there are $s_0 = s_0 (n)>0$ and
$$Q = Q (n,r,m',C_{1},\theta_2, C_{3},s_0) \in \mathbb{N}$$
such that if $q \geq Q$, then
$$H^* (G, \pi) = \bigoplus_{\eta \subseteq \triangle} \widetilde{H}^{*-1} (D_\eta; X^{\eta}),$$
and
$$H^i (G,\pi) = 0 \text{ for } i=1,...,l,$$
for every $\pi \in \mathcal{F}_0 (\overline{\mathcal{E}_3 (\mathcal{E}_2 (\mathcal{E}_1 (r, C_1),\theta_2),C_3)}, G, s_0)$.
\end{corollary}
\begin{proof}
Fix
$$r' = \begin{cases}
\frac{4+r}{2} & m' =3 \\
\frac{8+r}{2} & m' =4 \\
\frac{18+r}{2} & m' = 6 \\
\frac{20+r}{2} & m' =8
\end{cases}$$
and let $s_0 (n) >0$, $\delta = \delta (n,r,r',C_{1},\theta_2, C_{3},s_0) >0$ be as in Theorem \ref{vanishing of cohomology by conditions on the links and cont dual assumption}. By Proposition \ref{condition B-delta,r for buildings}, there is a large enough $Q$ such that for every $q \geq Q$, $\Sigma$ fulfils the condition $(\mathcal{B}_{\delta,r'})$ and we are done by Theorem \ref{vanishing of cohomology by conditions on the links and cont dual assumption}.
\end{proof}
\section{Definitions}
\label{sec_def}
We introduce these scaled variables:
\begin{itemize}
\item the scaled mutation rate $\theta=2 N_0 \mu$
\item the scaled sweep rate $\nu=2N_0 V$
\item the scaled selection coefficient $\sigma=2Ns$
\item the scaled time to the common ancestor of two species $\tau=t/2N_0$
\item the scaled effective population size $\lambda=N/N_0$. For most of the derivation, we consider $\lambda=1$ and will relax this further below.
\end{itemize}
\section{General allele frequency distribution}
\subsection{Neutral equilibrium allele frequency distribution}
We consider a single biallelic site with two alleles 0 and 1 and denote the frequency of allele 1 in the population with $x$. In the absence of any selective advantage, a symmetric mutational process with scaled rate $\theta$, together with genetic drift in a population of effective size $N$, leads to an equilibrium distribution \cite{Sawyer:1992vb} which for small mutation rates can be decomposed into two terms \cite{Mustonen:2007im}:
\[
Q(x;\theta)=\frac{1}{2} Q_0(x;\theta)+\frac{1}{2} Q_1(x;\theta)
\]
with the partial distribution
\[
Q_a(x;\theta) = \frac{2}{Z_a (\theta)} ((1-a)(1-x)+a x)\left(x(1-x)\right)^{-1+\theta}
\]
and the normalization factor
\[
Z_a(\theta)=\frac{\Gamma(\theta)^2}{\Gamma(2\theta)}.
\]
To derive the probability to observe a discrete allele count rather than a continuous allele frequency, we model the binomial sampling process explicitly. The probability to observe $k$ out of $m$ individuals carrying allele 1 is then given as a binomial moment of the equilibrium distribution:
\begin{equation}
\label{eq_binom_sampling}
M(k;m,\theta) = \binom{m}{k} \int_0^1 x^k (1-x)^{m-k} Q(x;\theta)\mathrm{d}x
\end{equation}
which leads again to the independent partial distributions
\[
M_a (k;m,\theta) = \binom{m}{k} \frac{2}{Z_a(\theta)} ((1-a)(m-k)+a k+\theta)
\frac{\Gamma(k+\theta)\Gamma(-k+m+\theta)}{\Gamma(1+m+2\theta)}
\]
for $a \in \{0,1\}$.
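As an illustrative sanity check (a sketch of ours, not part of the derivation; all helper names are ad hoc), the closed form of these binomial moments can be compared against direct quadrature of equation \ref{eq_binom_sampling}. We write the moment via the Beta integral and choose integer $\theta$ so that the integrand is polynomial; small $\theta$ would require an endpoint-aware quadrature because of the singularity of $Q_a$ at $x=0,1$.

```python
import math

def Z(theta):
    # normalization factor Z_a(theta) = Gamma(theta)^2 / Gamma(2*theta)
    return math.gamma(theta) ** 2 / math.gamma(2 * theta)

def Q(x, a, theta):
    # partial equilibrium density Q_a(x; theta)
    return 2.0 / Z(theta) * ((1 - a) * (1 - x) + a * x) * (x * (1 - x)) ** (theta - 1)

def simpson(f, lo, hi, n=2000):
    # composite Simpson rule with an even number n of subintervals
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3.0

def M_quad(k, m, a, theta):
    # binomial sampling moment of Q_a, computed by direct quadrature
    return simpson(lambda x: math.comb(m, k) * x ** k * (1 - x) ** (m - k)
                   * Q(x, a, theta), 0.0, 1.0)

def M_beta(k, m, a, theta):
    # the same moment written via the Beta integral: the linear factor
    # (1-a)(1-x) + a*x shifts one argument of the Beta function by one
    beta = lambda p, q: math.gamma(p) * math.gamma(q) / math.gamma(p + q)
    return math.comb(m, k) * 2.0 / Z(theta) * beta(k + theta + a, m - k + theta + 1 - a)

m = 5
for theta in (1.0, 2.0):  # integer theta keeps the integrand polynomial
    for a in (0, 1):
        assert all(abs(M_quad(k, m, a, theta) - M_beta(k, m, a, theta)) < 1e-8
                   for k in range(m + 1))
        # the sampling distribution is normalized, confirming Z_a(theta)
        assert abs(sum(M_beta(k, m, a, theta) for k in range(m + 1)) - 1.0) < 1e-12
print("sampling distribution checks passed")
```

Both the quadrature comparison and the normalization check pass to within the quadrature tolerance.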
\subsection{Hitchhiking with recurrent selective sweeps}
We model recurrent selective sweeps as a Poisson process with scaled rate $\nu$. The probability that no sweep occurs for a time $t$ (in scaled units of $2N$ generations) is then exponential:
\[
\mathrm{Prob}(\text{no sweep})(t)=e^{-\nu t}
\]
We approximate the expected scaled time it takes for a neutral polymorphism to reach allele frequency $x$ by $t=x$ (again in units of $2N$ generations) and express the partial equilibrium probability under recurrent sweeps approximately as
\begin{equation}
\label{eq_qdist_hh}
Q_a(x;\theta,\nu) = Q_a(x;\theta) e^{-\nu((1-a)x+a(1-x))} + \sum_{b=\{0,1\}}\delta(x-b)h_a(b,\theta,\nu)
\end{equation}
where the sum over the two delta distributions accounts for complete hitchhiking to the two fixed states $b=\{0,1\}$; the weights $h_a(b,\theta,\nu)$ will be given further below.
We can again derive binomial moments to express the discrete probability distribution for sampling $k$ out of $m$ individuals with allele 1:
\[
\begin{split}
M_a(k;m, \theta, \nu) = & \binom{m}{k}\frac{2}{Z_a(\theta)}\left(a e^{-\nu}+(1-a)\right)
\frac{\Gamma(k+\theta)\Gamma(-k+m+\theta)}{\Gamma(m+2\theta)}\times\\
& \left(
a {}_1F_1(k+\theta, m+2\theta, (2a-1)\nu) - (2a-1)\frac{m-k+\theta}{m+2\theta}
{}_1F_1(k+\theta, 1+m+2\theta, (2a-1)\nu)
\right)\\
& + \sum_{b=\{0,1\}}\delta_{k,bm}h_a(b,\theta,\nu)
\end{split}
\]
where the last term again accounts for the fraction of hitchhiking alleles and affects the two boundary states $k=0$ and $k=m$. Here, ${}_1F_1(a,b,x)$ denotes the confluent hypergeometric function.
The hitchhiking fraction is derived by integrating the probability to hitchhike from frequency $x$ to frequency $b=0$ or $b=1$:
\[
h_a(b,\theta,\nu) = \int_0^1 Q_a(x;\theta) (b x + (1-b)(1-x))\left(1-e^{-\nu((1-a)x+a(1-x))}\right)\mathrm{d}x
\]
where the term $bx+(1-b)(1-x)$ just accounts for the fact that the allele hitchhikes to $b=1$ with probability $x$ and to $b=0$ with probability $1-x$.
Using the moments $M_a(k;m,\theta)$ defined above, we can evaluate this integral as:
\[
\begin{split}
h_a(b,\theta,\nu) = & M_a(b;1,\theta)-\frac{-2a e^{-\nu}+(1-a)}{Z_a(\theta)}
\frac{\Gamma(b+\theta)\Gamma(1-b+\theta)}{\Gamma(1+2\theta)}\times\\
& \left(-a {}_1F_1(b+\theta, 1+2\theta, (2a-1)\nu) + \frac{1-b+\theta}{1+2\theta}
{}_1F_1(b+\theta, 2+2\theta, (2a-1)\nu) \right)
\end{split}
\]
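As an illustration (not from the analysis itself), the integral form of $h_a$ can be evaluated by simple quadrature for any polymorphism density; here a symmetric Beta-like density is used purely as a hypothetical stand-in for $Q_a(x;\theta)$:

```python
import math

def hitchhike_fraction(Q, a, b, nu, n=4000):
    """Midpoint-rule evaluation of
    h_a(b) = int_0^1 Q(x) (b x + (1-b)(1-x)) (1 - exp(-nu ((1-a)x + a(1-x)))) dx."""
    h = 0.0
    for i in range(n):
        x = (i + 0.5) / n
        w = b * x + (1 - b) * (1 - x)        # hitchhike to b=1 w.p. x, to b=0 w.p. 1-x
        p = 1.0 - math.exp(-nu * ((1 - a) * x + a * (1 - x)))  # at least one sweep
        h += Q(x) * w * p / n
    return h

# stand-in density on (0, 1), normalized under the same quadrature rule
theta = 0.5
xs = [(i + 0.5) / 4000 for i in range(4000)]
Z = sum((x * (1 - x)) ** (theta - 1) / 4000 for x in xs)
Q = lambda x: (x * (1 - x)) ** (theta - 1) / Z
```

Because $1-e^{-\nu(\cdot)}$ increases with $\nu$, the hitchhiking fraction grows monotonically with the sweep rate, as one would expect.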
In the following figure we plot both the continuous and the discrete neutral model with and without hitchhiking (parameters: $\mu=0.025$, $\nu = 2$):
\includegraphics[width=\textwidth]{theory_neutral.pdf}
As expected, the hitchhiking model predicts fewer common variants in comparison to the standard model.
\subsection{Selection}
We can add selection to the standard model without hitchhiking, following \cite{Mustonen:2007im}:
\[
Q_a(x;\theta,\sigma)=\frac{1}{Z_a(\theta,\sigma)}(x(1-x))^{-1+\theta}\left(1-e^{\sigma((1-a)(x-1)+ax)}\right)
\]
with the normalization factor
\[
Z_a(\theta,\sigma)=\frac{\Gamma(\theta)^2}{\Gamma(2\theta)}
\left(1-e^{-(1-a)\sigma}{}_1F_1(\theta,2\theta,\sigma)\right).
\]
The binomial moments are derived again by integration, similar to equation \ref{eq_binom_sampling}:
\[
\begin{split}
M_a(k;m, \theta, \sigma) = & \binom{m}{k}\frac{1}{Z_a(\theta, \sigma)}
\frac{\Gamma(k+\theta)\Gamma(-k+m+\theta)}{\Gamma(m+2\theta)}\times\\
& \left(
1 - e^{-(1-a)\sigma}{}_1F_1(k+\theta, m+2\theta, \sigma)
\right)
\end{split}
\]
We can now apply the same modification using an exponential factor for hitchhiking as we did in equation \ref{eq_qdist_hh}, which for the equilibrium distributions under selection leads to:
\[
Q_a(x; \theta, \sigma, \nu) = Q_a(x; \theta, \sigma)e^{-\nu((1-a)x+a(1-x))} +
\sum_{b=\{0,1\}}\delta(x-b)h_a(b,\theta,\sigma, \nu)
\]
with the hitchhiking fraction
\[
\begin{split}
h_a(b, \theta, \sigma, \nu) = & M_a(b;1,\theta,\sigma) - \frac{1}{Z_a(\theta,\sigma)}
\frac{\Gamma(b+\theta) \Gamma(-b + 1 + \theta)}{\Gamma(1+2\theta)} e^{-(1-a)\sigma-a\nu}\times\\
& \left(
e^{(1-a)\sigma} {}_1F_1(b+\theta, 1+2\theta, -(1-a)\nu + a\nu)
-{}_1F_1(b+\theta, 1+2\theta, \sigma-(1-a)\nu + a\nu)
\right)
\end{split}
\]
and the binomial moments
\[
\begin{split}
M_a(k; m, \theta, \sigma, \nu) = & \binom{m}{k}\frac{1}{Z_a(\theta,\sigma)}
\frac{\Gamma(k+\theta)\Gamma(-k+m+\theta)}{\Gamma(m + 2\theta)}e^{-(1-a)\sigma-a\nu}\times\\
& \left(
e^{(1-a)\sigma}{}_1F_1(k+\theta, m+2\theta, -(1-a)\nu + a\nu) -
{}_1F_1(k+\theta, m+2\theta, \sigma - (1-a)\nu + a\nu)
\right)\\
& + \sum_{b=\{0,1\}}\delta_{k,bm}h_a(b,\theta,\sigma, \nu)
\end{split}
\]
We plot again both the continuous and the discrete neutral model with and without hitchhiking, but including selection (same parameters as above but $\sigma=2$):
\includegraphics[width=\textwidth]{theory_sel.pdf}
As expected, due to directional selection, allele frequencies are skewed towards the $B$-allele (i.e. higher $x$).
\subsection{Substitutions and divergence from outgroup}
\label{sec_subst}
To model the divergence from a related species, we exploit the fact that we generally consider relatively small mutation rates, which leads to a separation of the substitution (fixation) time scale from the polymorphism time scale \cite{Lassig:2007iq}. In this regime, which corresponds to $\theta\ll1$, we can model fixations independently from polymorphisms as a Poisson process. Without hitchhiking and in the weak mutation regime, the rate per scaled unit time of this process is approximately \cite{Mustonen:2007im}
\[
u(\theta,\sigma) = \frac{\theta\sigma}{1-e^{-\sigma}}.
\]
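In code, this substitution rate and its neutral limit read as follows (illustrative Python, not part of the paper; the $\sigma\to 0$ limit recovers $u=\theta$):

```python
import math

def subst_rate(theta, sigma):
    """Fixation rate u(theta, sigma) = theta * sigma / (1 - exp(-sigma)),
    with the neutral limit u -> theta as sigma -> 0."""
    if abs(sigma) < 1e-8:
        return theta
    return theta * sigma / (1.0 - math.exp(-sigma))

# beneficial mutations (sigma > 0) fix faster than neutral ones,
# deleterious mutations (sigma < 0) fix more slowly
```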
Under hitchhiking with an effective rate $\nu$ (see above), we have previously shown \cite{Schiffels:2011fu} that the substitution rate changes to approximately
\[
u(\theta,\sigma,\nu)=\begin{cases}
u(\theta,\sigma){}_2F_1\left(1, \frac{\nu}{\sigma}, 1+\frac{\nu}{\sigma}, 1-\frac{u(\theta,\sigma)}{\theta}\right) & \text{for } \sigma>0\\
\frac{\theta}{|\sigma|+\nu}\left(\frac{u(\theta,\sigma)}{\theta} |\sigma| + \nu\right) & \text{for }\sigma<0,
\end{cases}
\]
which effectively reduces the rate of beneficial mutations and increases the rate of deleterious mutations, making them ``more neutral''.
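The Gauss hypergeometric factor in the $\sigma>0$ branch has a particularly simple series, ${}_2F_1(1,b,1+b,z)=b\sum_k z^k/(b+k)$, which converges for $|z|<1$, i.e. for moderately small $\sigma$. A sketch (illustrative Python, not from the paper) showing that recurrent sweeps pull the beneficial substitution rate back towards the neutral rate:

```python
import math

def subst_rate(theta, sigma):
    return theta if abs(sigma) < 1e-8 else theta * sigma / (1.0 - math.exp(-sigma))

def hyp2f1_1b(b, z, terms=500):
    """2F1(1, b, 1 + b, z) = b * sum_k z^k / (b + k), valid for |z| < 1."""
    return b * sum(z ** k / (b + k) for k in range(terms))

def subst_rate_hh(theta, sigma, nu):
    """Beneficial branch (sigma > 0) of u(theta, sigma, nu); the series
    requires |1 - u/theta| < 1, i.e. moderately small sigma."""
    u = subst_rate(theta, sigma)
    return u * hyp2f1_1b(nu / sigma, 1.0 - u / theta)
```

In the limits, ${}_2F_1\to 1$ as $\nu\to 0$ (recovering $u$) and $u_{\rm hh}\to\theta$ as $\nu\to\infty$, so the rate always lies between the neutral and the sweep-free value.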
We abbreviate $u_+=u(\theta,+\sigma,\nu)$ and $u_-=u(\theta,-\sigma,\nu)$ and will omit the dependencies on $\theta$, $\sigma$ and $\nu$, which are always implicit in the following expressions. The rates $u_+$ and $u_-$ form a two state Markov process with states $a=\{0,1\}$ and a rate matrix
\begin{equation}
\label{eq_rateM}
\mathbf{R}=\bordermatrix{
& 0 & 1 \cr
0 & -u_+ & u_- \cr
1 & u_+ & -u_-
}
\end{equation}
and transition probability
\[
\mathbf{T}(\tau) = \exp\left(\mathbf{R}\,\tau\right) =
\frac{1}{u_-+u_+}\left(\begin{matrix}
u_- + u_+e^{-\tau(u_++u_-)} & u_- - u_-e^{-\tau(u_++u_-)} \cr
u_+ - u_+e^{-\tau(u_++u_-)} & u_+ + u_-e^{-\tau(u_++u_-)}
\end{matrix}\right).
\]
The equilibrium state $\boldsymbol{\Lambda}$ for this transition probability is
\[
\boldsymbol{\Lambda} = \frac{1}{u_+ + u_-} \left(\begin{matrix} u_-\\u_+\end{matrix}\right)
\]
where we again left out the dependency on $\theta$, $\sigma$ and $\nu$ for brevity.
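The closed-form matrix exponential can be verified directly: columns sum to one and the semigroup property $\mathbf{T}(\tau_1)\mathbf{T}(\tau_2)=\mathbf{T}(\tau_1+\tau_2)$ holds (illustrative Python with hypothetical rates, not part of the paper):

```python
import math

def transition(tau, u_plus, u_minus):
    """Closed-form T(tau) = exp(R tau) for the two-state substitution process;
    entry [a1][a] is the probability of state a1 after time tau, starting in a."""
    s = u_plus + u_minus
    e = math.exp(-tau * s)
    return [[(u_minus + u_plus * e) / s, (u_minus - u_minus * e) / s],
            [(u_plus - u_plus * e) / s, (u_plus + u_minus * e) / s]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```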
This transition probability lets us write the probability to observe a frequency $k_1$ out of $m_1$ samples with allele $B$ in species 1, and $k_2$ out of $m_2$ samples in species 2, where both species share a common ancestral species at time $\tau$ in the past:
\[
M(k_1, k_2; m_1, m_2, \tau) = \sum_{a=0}^1\sum_{a_1=0}^1\sum_{a_2=0}^1
(\boldsymbol{\Lambda})_a (\mathbf{T}(\tau))_{a_1,a} (\mathbf{T}(\tau))_{a_2,a} M_{a_1}(k_1;m_1) M_{a_2}(k_2;m_2).
\]
The indices in the matrix $\mathbf{T}$ and the vector $\boldsymbol{\Lambda}$ denote row and column, respectively. This expression makes use of the fact that the polymorphism dynamics (reflected by $M_a(k;m)$) are independent of the substitution dynamics (reflected by $\mathbf{T}(\tau)$) in the weak mutation regime set by $\theta\ll1$.
The outgroup-directed allele frequency as used in the data analysis is now simply a sum over the two cases in which the single outgroup-sample carries either of the two alleles:
\[
P(k; m, \tau, \theta, \sigma, \nu) = \sum_{k'=0}^1 M(k, k'; m, 1, \tau, \theta, \sigma, \nu).
\]
This is the most general allele frequency probability distribution, on which all models for parameter estimation described in Material and Methods are based. The following figure shows this probability for $\theta=0.025$ and $\tau=5$ and different values of $\sigma$ and $\nu$ as indicated:
\begin{center}
\includegraphics[width=0.6\textwidth]{theory_full.pdf}
\end{center}
Note that this probability is independent of the sign of $\sigma$. The reason is that we compute the difference between the two species, without distinguishing ancestral from derived alleles. If $\sigma$ is negative, the $A$ allele is the fitter one, but the probability to observe $k$ out of $m$ samples with a different allele than the outgroup is the same as if $B$ were the fitter allele. We therefore treat $\sigma$ as a parameter on the positive real domain.
In all of the above we have considered $N=N_0$, i.e. $\lambda=1$ (see section \ref{sec_def}). We can generalize by scaling all scaled parameters by $\lambda$:
\[
P(k; m, \tau, \theta, \sigma, \nu, \lambda) = \sum_{k'=0}^1 M(k, k'; m, 1, \tau/\lambda, \lambda\theta, \lambda\sigma, \lambda\nu).
\]
\section{Models}
\subsection{Basic Models for synonymous sites}
To work with real data and to infer parameters reliably, we define simplified subsets of the full model, by setting some parameters to default values. In particular, we define these basic models:
\textbf{Unlinked adaptation model: }The unlinked model has as free parameters only the scaled mutation rate $\theta$ and the divergence time $\tau$. Other parameters are fixed, so that the resulting probability can be written as:
\[
P_\mathrm{unlinked}(k;m,\tau,\theta)=P(k;m,\tau,\theta,0,0,1).
\]
\textbf{Background selection model: }The background selection model (BGS) has as additional free parameter the effective population size $\lambda$:
\[
P_\mathrm{BGS}(k;m,\tau,\theta,\lambda)=P(k;m,\tau,\theta,0,0,\lambda)
\]
\textbf{Linked Adaptation model: }With hitchhiking, we use one further parameter $\nu$:
\[
P_\mathrm{linked}(k;m,\tau,\theta,\nu,\lambda)=P(k;m,\tau,\theta,0,\nu,\lambda)
\]
\textbf{Directional selection model: }We also define a model with background selection and direct selection (see Supplementary Figures S2 and S3):
\[
P_\mathrm{sel}(k;m,\tau,\theta,\sigma,\lambda)=P(k;m,\tau,\theta,\sigma,0,\lambda)
\]
\subsection{Mixed model for heterogeneous data sets}
For our mixed model we add together these components with different weights:
\begin{itemize}
\item Neutral component: A fraction $c_n$ of sites evolves neutrally, but generally under hitchhiking.
\item Weakly selected component: A fraction $c_w$ of sites evolves under weak directional selection.
\item Adaptive component: At a fraction $c_a$ of sites we assume that adaptive evolution has generated fixed differences between the two species. This fraction accounts for an observed surplus of substitutions with respect to the neutral expectation.
\item Conserved component: The remainder of the above, with fraction $c_c=1-c_n-c_w-c_a$ is assumed to be under strong directional selection which accounts for an observed surplus of conserved sites with respect to the neutral expectation.
\end{itemize}
Each component has its own specific outgroup-directed allele frequency distribution. First, the neutral component is simply one of the above derived basic models without selection:
\begin{equation}
\label{eq_Pn}
P_n(k;m,\tau,\theta,\nu, \lambda) =
\begin{cases}
P_\mathrm{unlinked}(k; m,\tau,\theta)\\
P_\mathrm{BGS}(k; m,\tau,\theta, \lambda)\\
P_\mathrm{linked}(k; m,\tau,\theta, \nu, \lambda),
\end{cases}
\end{equation}
defining three separate mixed models. The weakly selected component uses the full probability derived above, including a selection coefficient $\sigma$, which we typically constrain to $1<\sigma<150$:
\begin{equation}
\label{eq_Pw}
P_w = P(k;m,\tau,\theta,\sigma,\nu,\lambda).
\end{equation}
The adaptive component is only a surplus of fixed differences between the two species, so it is simply
\begin{equation}
\label{eq_Pa}
P_a(k;m) = \delta_{k,m}
\end{equation}
with the Kronecker delta, which sets this probability to zero everywhere except at $k=m$, where it is one. Analogously, the conserved component is:
\[
P_c(k;m) = \delta_{k,0}
\]
Note that each of these components is normalized across $0\le k\le m$.
The full probability of the mixed model is then:
\[
P(k;m,\Theta) = c_n P_n(k;m,\tau,\theta,\nu, \lambda) + c_w P_w(k;m,\tau,\theta,\sigma,\nu, \lambda) + c_a P_a(k;m) + (1-c_n-c_w-c_a) P_c(k;m).
\]
This full model has 7 parameters $\Theta=\{\tau, \theta, \sigma, \nu, c_n, c_w, c_a\}$.
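Mechanically, the mixed model is just a convex combination of four normalized distributions over $k=0,\dots,m$; a sketch with hypothetical stand-in components (illustrative Python, not from the paper):

```python
def mixed_model(P_n, P_w, P_a, P_c, c_n, c_w, c_a):
    """Convex combination of four normalized component distributions
    over k = 0..m; the conserved weight is c_c = 1 - c_n - c_w - c_a."""
    c_c = 1.0 - c_n - c_w - c_a
    return [c_n * pn + c_w * pw + c_a * pa + c_c * pc
            for pn, pw, pa, pc in zip(P_n, P_w, P_a, P_c)]

m = 3
P_n = P_w = [0.25] * (m + 1)    # stand-ins for the neutral/weak components
P_a = [0.0] * m + [1.0]         # Kronecker delta at k = m (adaptive)
P_c = [1.0] + [0.0] * m         # Kronecker delta at k = 0 (conserved)
P = mixed_model(P_n, P_w, P_a, P_c, c_n=0.5, c_w=0.2, c_a=0.1)
```

Since each component is normalized and the weights sum to one, the mixture is automatically a probability distribution; the delta components enrich the two boundary states.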
\subsection{Maximum Likelihood estimation}
We consider a data set of outgroup-directed allele frequencies with a fixed sample size $m$. We denote the number of sites with allele frequency $k$ by $n_k$. The total likelihood of the data given parameters $\Theta$ is then:
\[
\mathcal{L}(\{n_k\};m,\Theta) = \prod_{k=0}^m P(k;m,\Theta)^{n_k}.
\]
In practice we use the log-Likelihood
\[
\log\mathcal{L}(\{n_k\};m,\Theta) = \sum_{k=0}^m n_k \log P(k;m,\Theta).
\]
The parameters are then learned by maximization of the log-Likelihood:
\[
\hat{\Theta} = \mathrm{argmax}_{\Theta'} \log\mathcal{L}(\{n_k\};m,\Theta').
\]
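The estimation principle can be demonstrated on a one-parameter toy model (illustrative Python, not the paper's implementation, which uses Powell's method): given site counts $n_k$, a simple grid search recovers the maximizing parameter.

```python
import math

def log_likelihood(counts, probs):
    """log L = sum_k n_k log P(k); bins with n_k = 0 contribute nothing."""
    return sum(n * math.log(p) for n, p in zip(counts, probs) if n > 0)

def binom_pmf(m, q):
    """Toy one-parameter family P(k; m, q): binomial sampling probabilities."""
    return [math.comb(m, k) * q ** k * (1 - q) ** (m - k) for k in range(m + 1)]

def fit_q(counts, m, grid=1000):
    """Grid-search maximizer of the log-likelihood over q in (0, 1)."""
    return max((i / grid for i in range(1, grid)),
               key=lambda q: log_likelihood(counts, binom_pmf(m, q)))
```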
As pointed out in the text, we typically follow a hierarchical protocol to estimate parameters from data. Assuming all sites have been binned according to the local recombination rate (see Methods in the main article), we first use the unlinked model to infer $\tau$ (under free variation of $\theta$) from synonymous sites in the highest recombination bin. We then use the background selection and linked adaptation models to infer $\theta$, $\lambda$ and $\nu$ for each bin, keeping $\tau$ fixed at the value inferred from the highest recombination bin using the unlinked model. Finally, we learn the rest of the parameters from non-neutral annotation categories using the mixed models, keeping the neutral parameters fixed to their values obtained from the background selection or the linked adaptation model. Numerical maximization is implemented using Powell's method \cite{Press}.
\subsection{Types of substitutions in the mixed model}
In the mixed model, we implemented different components which contribute differently to the amount of fixed differences between species. We make use of the three components $P_n$, $P_w$ and $P_a$ as defined in equations \ref{eq_Pn}, \ref{eq_Pw}, \ref{eq_Pa}. First, we can estimate the fraction of sites with neutral substitutions that have been fixed by drift alone:
\[
f_\mathrm{drift} =c_n P_n(k=m;m,\tau,\theta,\nu=0,\lambda)e^{-\nu},
\]
where $e^{-\nu}$ is the probability that no linked sweep occurs during the typical time of fixation of a neutral variant ($2N_0$ generations).
We also define the fraction of sites with neutral substitutions fixed by drift and by hitchhiking:
\[
f_\mathrm{drift+HH} =c_n P_n(k=m;m,\tau,\theta,\nu,\lambda).
\]
From these two, we obtain the hitchhiking fraction via:
\[
f_\mathrm{HH}=f_\mathrm{drift+HH} -f_\mathrm{drift}.
\]
We obtain the fraction of sites with weakly selected substitutions that have been fixed by drift via:
\[
f_\mathrm{sel,drift} =c_w P_w(k=m;m,\tau,\theta,\sigma,\nu=0,\lambda)e^{-\nu},
\]
and the fraction of sites with weakly selected substitutions fixed by drift and hitchhiking
\[
f_\mathrm{sel,drift+HH} = c_w P_w(k=m;m,\tau,\theta,\sigma,\nu,\lambda),
\]
which allows us to derive the fraction of sites with weakly selected substitutions fixed by deleterious hitchhiking:
\[
f_\mathrm{sel,HH} =f_\mathrm{sel,drift+HH} -f_\mathrm{sel,drift}.
\]
We can also write down the fraction of sites with adaptive substitutions:
\[
f_\mathrm{adaptive} = c_a.
\]
\section{Fitness Flux}
Fitness flux was introduced in \cite{Mustonen:2007im} as the product of the rate of substitutions and their average selection coefficient. To estimate fitness flux, we first compute the rate of adaptive substitutions per scaled unit time, $k_a$, from the fraction $f_\mathrm{adaptive}=c_a$ above:
\[
k_a = \frac{c_a}{2\tau},
\]
because $2\tau$ is the total branch length between the two species, and $c_a$ is the expected number of substitutions per site. If we knew the average selection coefficient of adaptive substitutions, $s_a$, the fitness flux $\Phi$ would be
\begin{equation}
\label{eq_Phiplus}
\Phi = k_a s_a,
\end{equation}
per scaled unit time. We cannot measure $s_a$ directly from our framework, but we have an indirect measure using the rate of linked sweeps, $\nu$. As has been shown in \cite{Smith:1974vn} and \cite{Kaplan:1989tm}, the typical ``effect width'' of a single selective sweep with selection coefficient $s_a$ is
\[
w = \alpha \frac{s_a}{r},
\]
where $r$ is the recombination rate, and $\alpha$ is a constant that depends on model assumptions and parameters such as the absolute effective population size. For relevant parameters, $\alpha$ lies between 0.1 and 0.5, as computed in \cite{Kaplan:1989tm}. Here we fix $\alpha=0.1$ to be conservative in estimating the fitness flux and $s_a$. We can now relate the effective rate of linked sweeps, $\nu$, which is simply the total rate of adaptive mutations within a window of width $w$, to the fitness flux (see also \cite{Mustonen:2007im}):
\[
\nu = w k_a \gtrsim 0.1 k_a \frac{s_a}{r} = 0.1 \frac{\Phi}{r},
\]
so the rate of linked sweeps $\nu$ is directly linked to the fitness flux per recombination map length. In case we do not observe any positive rate of linked drivers, we estimate a conservative upper bound on the fitness flux using the inequality $\nu<1$ (with $\nu=1$ being a typical value that we can measure with confidence). In summary, we compute fitness flux as:
\[
\Phi \begin{cases}
\lesssim 10 \nu r & \text{if $\nu>1$}\\
\lesssim 10 r & \text{if $\nu\leq 1$}
\end{cases},
\]
as an upper bound on fitness flux in regions with $r>0$. Having estimated $\Phi$, we can simply solve for $s_a$ using equation \ref{eq_Phiplus}.
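The bound and the back-solved selection coefficient can be written down directly (illustrative Python with hypothetical parameter values; the factor 10 corresponds to the conservative choice $\alpha=0.1$):

```python
def fitness_flux_bound(nu, r, alpha=0.1):
    """Conservative upper bound Phi <~ nu*r/alpha if nu > 1, else r/alpha;
    with alpha = 0.1 this is the 10*nu*r vs. 10*r rule."""
    return max(nu, 1.0) * r / alpha

def s_adaptive(phi, c_a, tau):
    """Solve Phi = k_a * s_a with k_a = c_a / (2 tau) for s_a."""
    return phi * 2.0 * tau / c_a
```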
Similarly to the rate of beneficial mutations, we can estimate the total rate of weakly deleterious substitutions via
\[
k_d = \frac{f_\mathrm{sel,drift+HH}}{2\tau},
\]
which then lets us estimate a negative component to the fitness flux as
\[
\Phi_- = \frac{k_d}{2}\frac{\sigma}{2N_0},
\]
where $\sigma$ is the scaled selection coefficient as used in the mixed model, and the factor $1/2$ accounts for the fact that in equilibrium only half of the substitutions are deleterious; the other half is compensatory and hence beneficial. We used a diploid population size of $1.78\times 10^6$, as used in \cite{Sella:2009hs} and \cite{Andolfatto:2007cq}. We show this estimate in Supplementary Figure 6c). In the definitions above, fitness flux is defined in units of $1/2N_0$. However, it is intuitive to use as units $\mu/2N_0$, with $\mu$ estimated from our estimates of $\theta$ and the above population size. In these units, a fitness flux of $1$ can, for example, be generated by mutations with selection coefficients of $1/2N_0$, which fix with the neutral substitution rate ($\mu$). We use these units in Figure 5.
To report a fitness flux per million years, we assume a generation time of 0.1 years in Drosophila, which yields a neutral substitution rate of one substitution per $0.1/\mu\approx 30\times10^6$ years.
We also translate these rates to fitness flux per gene, for which we use an average number of 1,000 nonsynonymous sites per gene in the autosomes in Drosophila, obtained from the annotations described in Methods.
\section{Genetic Load}
Genetic load as used here quantifies the amount of fitness loss generated by fixed deleterious mutations. The probability for a weakly selected site to be in the less fit state follows from the equilibrium state of the Markov process defined in equation \ref{eq_rateM}:
\[
\lambda_- = \frac{u_-}{u_+ + u_-},
\]
where the fixation rates $u_+$ and $u_-$ are described in section \ref{sec_subst}. The genetic load is then simply:
\[
l = \sigma c_w \lambda_-
\]
where $c_w$ is the fraction of weakly selected sites, and $\sigma$ is their selection coefficient, as defined in the mixed model.
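Since the stated fixation rates imply $u_-/u_+ = e^{-\sigma}$, the occupancy of the less fit state reduces to $\lambda_- = 1/(1+e^{\sigma})$; a sketch (illustrative Python with hypothetical parameter values):

```python
import math

def subst_rate(theta, sigma):
    return theta if abs(sigma) < 1e-8 else theta * sigma / (1.0 - math.exp(-sigma))

def genetic_load(theta, sigma, c_w):
    """Load l = sigma * c_w * lambda_- with lambda_- = u_-/(u_+ + u_-);
    since u_-/u_+ = exp(-sigma), lambda_- = 1/(1 + exp(sigma))."""
    u_plus = subst_rate(theta, sigma)
    u_minus = subst_rate(theta, -sigma)
    return sigma * c_w * u_minus / (u_plus + u_minus)
```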
\section{Introduction}
In this paper, we study the $C^0$ (non-Lagrange) finite element
approximations of the linear elliptic equations in non-divergence form
\begin{equation} \label{eq:nondiv}
Lu := A: D^2 u = f \quad \text{in }\Omega, \qquad
u = 0 \quad \text{on }\partial \Omega,
\end{equation}
and the Hamilton-Jacobi-Bellman (HJB) equations
\begin{equation} \label{eq:HJB}
\sup_{\alpha \in \Lambda} (L^\alpha u - f^\alpha) = 0
\quad \text{in }\Omega, \qquad
u = 0 \quad \text{on }\partial\Omega,
\end{equation}
where $\Lambda$ is a compact metric space, and
$$
L^\alpha v := A^\alpha : D^2 v + \bm{b}^\alpha \cdot \nabla
v - c^\alpha v.
$$
Here, $D^2u$ and $\nabla u$ denote the Hessian and gradient of a
real-valued function $u$, respectively. $\Omega$ is a bounded, open,
convex polytope in $\mathbb{R}^n~(n=2,3)$. Problems \eqref{eq:nondiv}
and \eqref{eq:HJB} arise in many applications from areas such as
probability and stochastic processes. In particular, the HJB equations
\eqref{eq:HJB}, which are of the class of fully nonlinear second order
partial differential equations (PDEs), play a fundamental role in the
field of stochastic optimal control \cite{fleming2006controlled}. In
the study of fully nonlinear second order PDEs, linearization
techniques naturally lead to problems such as \eqref{eq:nondiv}
\cite{caffarelli1997properties}.
In contrast to the PDEs in divergence form, the theory of linear
elliptic equations in non-divergence form and, more generally, fully
nonlinear PDEs, hinges on different solution concepts such as strong
solutions, viscosity solutions, or $H^2$ solutions. For these
different solution concepts, numerical methods for \eqref{eq:nondiv}
and \eqref{eq:HJB} have experienced some rapid developments in recent
years. In \cite{feng2017finite, feng2018interior}, the authors proposed
non-standard primal finite element methods for \eqref{eq:nondiv} with
the coefficient matrix $A \in C^0(\bar{\Omega}; \mathbb{R}^{n\times
n})$. They showed the convergence in the sense of $W^{2,p}$ strong
solutions by establishing the discrete Calderon-Zygmund estimates. In
\cite{nochetto2018discrete}, the authors studied a two-scale method
based on the integro-differential formulation of \eqref{eq:nondiv},
where the monotonicity under the weakly acute mesh condition and
discrete Alexandroff-Bakelman-Pucci estimates were established to
obtain the pointwise error estimates. In the Barles-Souganidis
framework \cite{barles1991convergence}, the monotonicity is a key
concept of numerical schemes for the convergence to the viscosity
solutions of fully nonlinear PDEs, which is also applicable to the
monotone finite difference methods (FDM) for the HJB equations
\cite{bonnans2003consistency, fleming2006controlled}. A monotone
finite element like scheme for the HJB equations was proposed in
\cite{camilli2009finite}. The convergent monotone finite element
methods (FEM) for the viscosity solutions of parabolic HJB equations
were proposed and analyzed in \cite{jensen2013convergence,
jensen2017finite}. For a general overview, we refer the reader to the
survey articles \cite{feng2013recent, neilan2017numerical}. Recently
a narrow-stencil FDM for the HJB equations based on the generalized
monotonicity was proposed in \cite{feng2019narrow}, where the
numerical solution was proved to be convergent to the viscosity
solution.
For the PDE theory of $H^2$ solutions, the coefficient matrix $A$ in
\eqref{eq:nondiv} is allowed to be discontinuous. As a compensation,
the coefficients are required to satisfy the {\it Cordes
condition} stated below in Definition \ref{df:nondiv-Cordes} for
\eqref{eq:nondiv} and Definition \ref{df:HJB-Cordes} for
\eqref{eq:HJB}, respectively. We refer the reader to
\cite{maugeri2000elliptic} for the analysis of PDEs with discontinuous
coefficients under the Cordes condition. In addition, the analysis of
the problems \eqref{eq:nondiv} and \eqref{eq:HJB} hinges on the
following Miranda-Talenti estimate.
\begin{lemma}[Miranda-Talenti estimate] \label{lm:MT}
Suppose that $\Omega \subset
\mathbb{R}^n$ is a bounded convex domain. Then, for any $v \in
H^2(\Omega) \cap H_0^1(\Omega)$,
\begin{equation} \label{eq:MT}
|v|_{H^2(\Omega)} \leq \|\Delta v\|_{L^2(\Omega)}.
\end{equation}
\end{lemma}
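As a quick numerical illustration (not part of the paper), the estimate can be checked on the unit square with $v=\sin(\pi x)\sin(\pi y)$, the first Dirichlet eigenfunction, for which both sides equal $\pi^4$, so the estimate is attained with equality:

```python
import math

# v(x, y) = sin(pi x) sin(pi y) on (0, 1)^2 lies in H^2 ∩ H_0^1;
# compare |v|_{H^2}^2 = ∫(v_xx^2 + 2 v_xy^2 + v_yy^2) with ||Δv||^2
n = 200
h = 1.0 / n
H2_sq = lap_sq = 0.0
for i in range(n):
    for j in range(n):
        x, y = (i + 0.5) * h, (j + 0.5) * h
        sx, sy = math.sin(math.pi * x), math.sin(math.pi * y)
        cx, cy = math.cos(math.pi * x), math.cos(math.pi * y)
        vxx = vyy = -math.pi ** 2 * sx * sy
        vxy = math.pi ** 2 * cx * cy
        H2_sq += (vxx ** 2 + 2 * vxy ** 2 + vyy ** 2) * h * h
        lap_sq += (vxx + vyy) ** 2 * h * h
# both quadrature sums equal pi^4 up to rounding: the estimate is sharp
```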
For the finite element approximations of $H^2$ solutions,
the most straightforward way is to apply the $C^1$-conforming finite
elements \cite{ciarlet1978finite}, which are sometimes considered
impractical on unstructured meshes (at least $\mathcal{P}_5$ in 2D and
$\mathcal{P}_9$ in 3D). It is worth mentioning that the $H^2$
nonconforming elements would fail to mimic the Miranda-Talenti
estimate at the discrete level. For example, a direct calculation
shows that three basis functions of the Morley element
\cite{ciarlet1978finite} are harmonic.
Instead of the $C^1$-conforming finite element methods, the first
discontinuous Galerkin (DG) method for \eqref{eq:nondiv} in the case
of discontinuous coefficients with Cordes condition was proposed
in \cite{smears2013discontinuous}, which has been extended to the
elliptic and parabolic HJB equations in \cite{smears2014discontinuous,
smears2016discontinuous}. In \cite{neilan2019discrete}, the authors
developed the $C^0$-interior penalty DG methods to both
\eqref{eq:nondiv} and \eqref{eq:HJB}. The above DG methods are
applicable when choosing the penalization parameters suitably
large. A mixed method for \eqref{eq:nondiv} based on the stable finite
element Stokes spaces was proposed in \cite{gallistl2017variational},
which has been extended to the HJB equations in
\cite{gallistl2019mixed}. Other numerical methods for
\eqref{eq:nondiv} include the discrete Hessian method
\cite{lakkis2011finite}, weak Galerkin method \cite{wang2018primal},
and least square methods \cite{li2019sequential, qiu2019adaptive}.
The primary goal of this paper is to develop and analyze the $C^0$
primal finite element approximations of \eqref{eq:nondiv} and
\eqref{eq:HJB} without introducing any penalization parameter. In
view of the proof of the Miranda-Talenti estimate
\cite{maugeri2000elliptic}, the difference between $\|\Delta
u\|_{L^2(\Omega)}^2$ and $|u|_{H^2(\Omega)}^2$ is a nonnegative term
that involves the mean curvature of $\partial \Omega$. The key idea
in this paper is to adopt the $C^0$ finite element with the enhanced
regularity on some subsimplex so as to have the ability to detect the
information of mean curvature. More precisely, we show in Section
\ref{sec:FEM} that, a feasible condition pertaining to the finite
element is the $C^1$-continuity on the $(n-2)$-dimensional subsimplex.
In 2D case, a typical family of finite elements that meets this
requirement is the family of $\mathcal{P}_k$-Hermite finite elements
$(k\geq 3)$ depicted in Figure \ref{fig:Hermite-2Dk3}. In 3D case, the
family of Argyris finite elements with local space $\mathcal{P}_k (k
\geq 5)$ \cite{neilan2015discrete, christiansen2018nodal} satisfies
the condition.
Having the families of 2D Hermite finite elements and 3D Argyris
finite elements, we prove a discrete analog of the Miranda-Talenti
estimate in Lemma \ref{lm:Hermite-MT}, on any conforming triangulation
of convex polytope $\Omega$. The jump terms in the discrete
Miranda-Talenti-type estimate \eqref{eq:Hermite-MT} naturally induce
the interior penalty in the variational form of the linear elliptic
equations in non-divergence form \eqref{eq:nondiv-bilinear} or the HJB
equations \eqref{eq:HJB-form}. However, in the same spirit as
\cite{feng2017finite}, the proposed methods are not the DG methods per
se since no penalization parameter is used. The convergence analysis
mimics the analysis of $H^2$ solutions to a great extent. As a
striking feature of the proposed methods, the coercivity constant
(resp. monotonicity constant) for the linear elliptic equations in
non-divergence form (resp. the HJB equations) at discrete level is
exactly the same as that from PDE theory. Since the methods are
consistent, the coercivity or monotonicity naturally leads to the
energy norm error estimates.
The rest of the paper is organized as follows. In Section
\ref{sec:pre}, we establish the notation and state some preliminary
results. In Section \ref{sec:FEM}, we state the finite element spaces
on which the discrete Miranda-Talenti-type estimate holds. We then
give the applications to the discretizations of the linear elliptic
equations in non-divergence form \eqref{eq:nondiv} and the HJB equations
\eqref{eq:HJB}, respectively, in Section \ref{sec:nondiv} and Section
\ref{sec:HJB}. Numerical experiments are presented in Section
\ref{sec:numerical}.
For convenience, we use $C$ to denote a generic positive
constant that may stand for different values at its different
occurrences but is independent of the mesh size $h$. The notation $X
\lesssim Y$ means $X \leq CY$.
\section{Preliminaries} \label{sec:pre}
In this section, we first review the $H^2$ solutions to the linear
elliptic equations in non-divergence form and the HJB equations under the
Cordes conditions. Let $\Omega$ be a bounded, open, convex domain in
$\mathbb{R}^n~(n=2,3)$, in which the Miranda-Talenti estimate
\eqref{eq:MT} holds. We shall use $U$ to denote a generic subdomain of
$\Omega$ and $\partial U$ denotes its boundary. Given an integer $k
\geq 0$, let $H^k(U)$ and $H_0^k(U)$ denote the usual Sobolev spaces.
We also denote $V := H^2(\Omega) \cap H_0^1(\Omega)$.
\subsection{Review of strong solutions to the linear elliptic equations in
non-divergence form}
For the problem \eqref{eq:nondiv}, it is assumed that $A\in
L^\infty(\Omega;\mathbb{R}^{n\times n})$, and that $L$ is uniformly
elliptic, i.e., there exist $\underline{\nu}, \bar{\nu} > 0$ such
that
\begin{equation} \label{eq:nondiv-elliptic}
\underline{\nu}|\bm{\xi}|^2 \leq \bm{\xi}^tA(x)\bm{\xi} \leq \bar{\nu}
|\bm{\xi}|^2 \qquad \forall \bm{\xi} \in \mathbb{R}^n, \text{ a.e. in
}\Omega.
\end{equation}
It is well-known that the above assumptions cannot guarantee the
well-posedness of \eqref{eq:nondiv}; See, for instance, the example in
\cite[p. 185]{gilbarg2015elliptic}. In addition, we assume the
coefficients satisfy the Cordes condition below.
\begin{definition}[Cordes condition for \eqref{eq:nondiv}]
\label{df:nondiv-Cordes}
The coefficient matrix satisfies: there exists $\varepsilon \in (0, 1]$
such that
\begin{equation} \label{eq:nondiv-Cordes}
\frac{|A|^2}{({\rm tr} A)^2} \leq \frac{1}{n-1 + \varepsilon}
\qquad \text{a.e. in }\Omega.
\end{equation}
\end{definition}
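For a given matrix the condition can be checked directly; a small helper (illustrative Python, not from the paper) computes the largest admissible $\varepsilon$ from $|A|^2/({\rm tr}A)^2 \leq 1/(n-1+\varepsilon)$:

```python
def cordes_epsilon(A):
    """Largest epsilon with |A|^2/(tr A)^2 <= 1/(n - 1 + epsilon), i.e.
    epsilon = (tr A)^2/|A|^2 - (n - 1), capped at 1; None if nonpositive."""
    n = len(A)
    frob_sq = sum(A[i][j] ** 2 for i in range(n) for j in range(n))
    trace = sum(A[i][i] for i in range(n))
    eps = trace ** 2 / frob_sq - (n - 1)
    return min(eps, 1.0) if eps > 0 else None
```

For example, $A=\mathrm{diag}(1,2)$ admits $\varepsilon=4/5$, while a strongly anisotropic 3D matrix such as $\mathrm{diag}(1,1,10)$ violates the condition, even though it is uniformly elliptic.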
In two dimensions, it is not hard to show that uniform ellipticity
implies the Cordes condition \eqref{eq:nondiv-Cordes} with
$\varepsilon = 2\underline{\nu} / (\underline{\nu} + \bar{\nu})$, see
\cite{smears2014discontinuous}. Define the strictly positive function
$\gamma \in L^\infty(\Omega)$ by
\begin{equation} \label{eq:nondiv-gamma}
\gamma := \frac{{\rm tr} A}{|A|^2}.
\end{equation}
Hence, the variational formulation of \eqref{eq:nondiv} reads
\begin{equation} \label{eq:nondiv-B}
B_0(u, v) = \int_\Omega \gamma f \Delta v\,\mathrm{d}x \quad \forall v \in
V,
\end{equation}
where the bilinear form $B_0: V \times V \to \mathbb{R}$ is defined by
\begin{equation} \label{eq:nondiv-B-def}
B_0(w,v):= \int_\Omega \gamma Lw \Delta v\,\mathrm{d}x \quad \forall w, v \in
V.
\end{equation}
The well-posedness hinges on the following lemma; See \cite[Lemma
1]{smears2013discontinuous} for a proof.
\begin{lemma} \label{lm:nondiv-Cordes-prop}
Under the Cordes condition \eqref{eq:nondiv-Cordes}, for any $v\in
H^2(\Omega)$ and open set $U \subset \Omega$, the following inequality
holds a.e. in $U$:
\begin{equation} \label{eq:nondiv-Cordes-prop}
|\gamma L v - \Delta v| \leq \sqrt{1-\varepsilon} |D^2v|.
\end{equation}
\end{lemma}
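The pointwise bound \eqref{eq:nondiv-Cordes-prop} can be spot-checked numerically for a fixed coefficient matrix and Hessian value (illustrative Python, not from the paper; it assumes the computed $\varepsilon$ lies in $(0,1]$):

```python
import math

def cordes_gap(A, H):
    """Return (|gamma A:H - tr H|, sqrt(1 - eps) |H|) with gamma = tr A/|A|^2
    and eps from the Cordes condition; the lemma asserts lhs <= rhs."""
    n = len(A)
    frob_sq = lambda M: sum(M[i][j] ** 2 for i in range(n) for j in range(n))
    trace = lambda M: sum(M[i][i] for i in range(n))
    eps = trace(A) ** 2 / frob_sq(A) - (n - 1)       # assumed in (0, 1]
    gamma = trace(A) / frob_sq(A)
    colon = sum(A[i][j] * H[i][j] for i in range(n) for j in range(n))
    return abs(gamma * colon - trace(H)), math.sqrt(1 - eps) * math.sqrt(frob_sq(H))
```

For $A=\mathrm{diag}(1,2)$ and $H=\mathrm{diag}(1,-1)$ the two sides are $0.6$ and $\sqrt{0.4}\approx 0.632$, so the bound is nearly tight in this direction.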
Using the Miranda-Talenti estimate in Lemma \ref{lm:MT}, Lemma
\ref{lm:nondiv-Cordes-prop} and the Lax-Milgram Lemma, it is readily
seen that there exists a unique strong solution
to \eqref{eq:nondiv-B} under the Cordes condition
\eqref{eq:nondiv-Cordes}. The proof is given in \cite[p.
256]{maugeri2000elliptic}, see also \cite{smears2013discontinuous,
neilan2019discrete}.
\subsection{Review of strong solutions to the HJB equation}
For the HJB equations \eqref{eq:HJB}, the coefficient $A^\alpha \in
C(\bar{\Omega} \times \Lambda;\mathbb{R}^{n\times n})$ is assumed
uniformly elliptic, i.e., there exists $\underline{\nu}, \bar{\nu} >
0$ such that
\begin{equation} \label{eq:HJB-elliptic}
\underline{\nu}|\bm{\xi}|^2 \leq \bm{\xi}^tA^\alpha(x)\bm{\xi} \leq \bar{\nu}
|\bm{\xi}|^2 \qquad \forall \bm{\xi} \in \mathbb{R}^n, \text{ a.e. in
}\Omega, ~\forall \alpha \in \Lambda.
\end{equation}
The corresponding Cordes condition for $A^\alpha$, $\bm{b}^\alpha \in
C(\bar{\Omega} \times \Lambda;\mathbb{R}^n)$ and $c^\alpha \in
C(\bar{\Omega} \times \Lambda)$ can be stated as follows.
\begin{definition}[Cordes condition for \eqref{eq:HJB}]
\label{df:HJB-Cordes}
The coefficients satisfy
\begin{subequations} \label{eq:HJB-Cordes}
\begin{enumerate}
\item whenever $\bm{b}^\alpha \not\equiv \bm{0}$ or $c^\alpha
\not\equiv 0$ for some $\alpha \in \Lambda$, there exist $\lambda > 0$
and $\varepsilon\in (0,1]$ such that
\begin{equation} \label{eq:HJB-Cordes1}
\frac{|A^\alpha|^2 + |\bm{b}^\alpha|^2/(2\lambda) +
(c^\alpha/\lambda)^2}{({\rm tr} A^\alpha + c^\alpha/\lambda)^2} \leq \frac{1}{n +
\varepsilon} \qquad \text{a.e. in
}\Omega, ~\forall \alpha \in \Lambda.
\end{equation}
\item whenever $\bm{b}^\alpha\equiv \bm{0}$ and $c^\alpha \equiv
0$ for all $\alpha \in \Lambda$, there is an $\varepsilon \in (0, 1]$
such that
\begin{equation} \label{eq:HJB-Cordes2}
\frac{|A^\alpha|^2}{({\rm tr} A^\alpha)^2} \leq \frac{1}{n-1 + \varepsilon}
\qquad \text{a.e. in }\Omega.
\end{equation}
\end{enumerate}
\end{subequations}
\end{definition}
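For a finite control set, checking \eqref{eq:HJB-Cordes2} amounts to taking the minimum over $\alpha$ of $({\rm tr} A^\alpha)^2/|A^\alpha|^2 - (n-1)$; the sketch below does this for an illustrative two-matrix family (for the Laplacian, $A = I$, one obtains $\varepsilon = 1$).

```python
import numpy as np

n = 2
# An illustrative finite control family Lambda = {0, 1} of diffusion matrices
# (pure second-order case, b^alpha = 0, c^alpha = 0).
A = {
    0: np.eye(n),                          # Laplacian: the best case
    1: np.array([[2.0, 0.5], [0.5, 1.0]]),
}

def cordes_eps(Aa):
    """Largest eps with |A|^2/(tr A)^2 <= 1/(n-1+eps), cf. the condition above."""
    return np.trace(Aa) ** 2 / np.sum(Aa * Aa) - (n - 1)

eps = min(cordes_eps(Aa) for Aa in A.values())
assert eps > 0, "Cordes condition fails for this family"
print("admissible eps:", eps)
```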
Similar to \eqref{eq:nondiv-gamma} for the linear elliptic equations
in non-divergence form, for each $\alpha \in \Lambda$, we define
\begin{equation} \label{eq:HJB-gamma}
\gamma^\alpha := \left\{
\begin{array}{cl}
\displaystyle\frac{{\rm tr} A^\alpha + c^\alpha/\lambda}{|A^\alpha|^2 +
|\bm{b}^\alpha|^2/(2\lambda) +
(c^\alpha/\lambda)^2} & ~~\bm{b}^\alpha \not\equiv \bm{0} \text{
or } c^\alpha \not\equiv 0 ~ \text{ for some }\alpha \in \Lambda,
\\
\displaystyle\frac{{\rm tr} A^\alpha}{|A^\alpha|^2} & ~~\bm{b}^\alpha
\equiv \bm{0} \text{ and } c^\alpha \equiv 0 ~ \text{ for all }
\alpha \in \Lambda.
\end{array}
\right.
\end{equation}
Note here that the continuity of data implies $\gamma^\alpha \in
C(\bar{\Omega}\times \Lambda)$. Define the operator $F_\gamma:
H^2(\Omega) \to L^2(\Omega)$ by
\begin{equation} \label{eq:HJB-F}
F_\gamma[v] := \sup_{\alpha \in \Lambda} \gamma^\alpha(L^\alpha v -
f^\alpha).
\end{equation}
It is readily seen that the HJB equation \eqref{eq:HJB} is in fact
equivalent to the problem $F_\gamma[u] = 0$ in $\Omega$, $u = 0$ on
$\partial \Omega$. The Cordes condition \eqref{eq:HJB-Cordes} leads to
the following lemma; see \cite[Lemma 1]{smears2014discontinuous} for a
proof.
For $\lambda$ as in \eqref{eq:HJB-Cordes1}, we define the linear operator
$L_\lambda$ by
\begin{equation} \label{eq:L-lambda}
L_\lambda v := \Delta v - \lambda v \qquad \forall v \in V.
\end{equation}
\begin{lemma} \label{lm:HJB-Cordes-prop}
Under the Cordes condition \eqref{eq:HJB-Cordes}, for any open set $U
\subset \Omega$ and $w, v \in H^2(U)$, $z:= w-v$, the following
inequality holds a.e. in $U$:
\begin{equation} \label{eq:HJB-Cordes-prop}
|F_\gamma[w] - F_\gamma[v] - L_\lambda z| \leq \sqrt{1-\varepsilon}
\sqrt{|D^2 z|^2 + 2\lambda|\nabla z|^2 + \lambda^2|z|^2}.
\end{equation}
\end{lemma}
Let the operator $M: V \to V^*$ be
\begin{equation} \label{eq:HJB-M}
\langle M[w], v \rangle := \int_\Omega F_\gamma[w] L_\lambda v
\,\mathrm{d}x \qquad \forall w, v \in V,
\end{equation}
where $\langle \cdot, \cdot \rangle$ is the dual pairing between $V^*$
and $V$. Under the conditions of the Miranda-Talenti estimate in Lemma \ref{lm:MT}, it is
straightforward to show that
\begin{equation} \label{eq:L-lambda-bound}
\|L_\lambda v\|_{L^2(\Omega)}^2 \geq \int_{\Omega} |D^2v|^2 +
2\lambda|\nabla v|^2 + \lambda^2|v|^2\, \mathrm{d}x \qquad \forall v \in
V.
\end{equation}
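In one space dimension the Miranda-Talenti estimate is an identity, so \eqref{eq:L-lambda-bound} holds with equality after integrating by parts twice; the sketch below checks this by quadrature for the illustrative choice $v = \sin(\pi x)$ on $(0,1)$ and $\lambda = 3$.

```python
import numpy as np

lam = 3.0
x = np.linspace(0.0, 1.0, 20001)
v = np.sin(np.pi * x)                 # v(0) = v(1) = 0
dv = np.pi * np.cos(np.pi * x)
d2v = -np.pi**2 * np.sin(np.pi * x)

def integrate(f):
    # composite trapezoidal rule on the uniform grid x
    return float(np.sum((f[:-1] + f[1:]) / 2) * (x[1] - x[0]))

lhs = integrate((d2v - lam * v) ** 2)                      # ||L_lambda v||^2
rhs = integrate(d2v**2 + 2 * lam * dv**2 + lam**2 * v**2)  # right-hand side
assert abs(lhs - rhs) < 1e-8 * rhs    # equality in 1D
print(lhs, rhs)
```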
By using the Cordes condition \eqref{eq:HJB-Cordes}, one can show the
strong monotonicity of $M$. Together with the Lipschitz continuity of
$M$ (which follows from the compactness of $\Lambda$) and the Browder-Minty Theorem
\cite[Theorem 10.49]{renardy2006introduction}, this yields the
existence and uniqueness of the solution to the following problem: Find $u \in V$ such
that
\begin{equation} \label{eq:HJB-var}
\langle M[u], v \rangle = 0 \qquad \forall v \in V.
\end{equation}
We refer to \cite[Theorem 3]{smears2014discontinuous} for a detailed
proof.
\section{Finite element spaces and discrete Miranda-Talenti-type estimate}
\label{sec:FEM}
Let $\mathcal{T}_h$ be a conforming and shape-regular simplicial
triangulation of $\Omega$ and $\mathcal{F}_h$ be the set of all
faces of $\mathcal{T}_h$. Let $\mathcal{F}_h^i := \{F\in \mathcal{F}_h:
F \not\subset \partial \Omega\}$ and $\mathcal{F}_h^\partial := \{F \in
\mathcal{F}_h: F\subset \partial\Omega\}$. Here, $h := \max_{K\in \mathcal{T}_h} h_K$,
and $h_K$ is the diameter of $K$ (cf. \cite{ciarlet1978finite,
brenner2007mathematical}). Since the elements are simplices, each face
of the triangulation is flat.
We define the jump of a vector function $\bm{w}$ on an interior
face $F = \partial K^+ \cap \partial K^-$ as follows:
$$
\llbracket \bm{w} \rrbracket|_F := \bm{w}^+ \cdot \bm{n}^+|_F +
\bm{w}^- \cdot \bm{n}^-|_F,
$$
where $\bm{w}^\pm = \bm{w}|_{K^\pm}$ and $\bm{n}^\pm$ is the outward
unit normal vector of $K^\pm$, respectively. We also denote $\omega_F:= K^+ \cup K^-$
for any $F\in \mathcal{F}_h^i$. For an element $K \in \mathcal{T}_h$,
$(\cdot, \cdot)_K$ denotes the standard inner product on $L^2(K)$. The
standard inner products $\langle \cdot, \cdot \rangle_{\partial K}$
and $\langle \cdot, \cdot \rangle_F$, are defined in a similar
way.
For $F \in \mathcal{F}_h$, following \cite{smears2013discontinuous,
smears2014discontinuous}, we define the tangential gradient
$\nabla_T: H^s(F) \to \bm{H}_T^{s-1}(F)$ and the tangential
divergence $\mathrm{div}_T: \bm{H}_T^s(F) \to H^{s-1}(F)$, where $s
\geq 1$. Here, $\bm{H}_T^s(F):= \{\bm{v} \in H^s(F)^n:~\bm{v}\cdot
\bm{n}_F = 0~\text{on }F\}$. Let $\{\bm{t}_i\}_{i=1}^{n-1}$ be an
orthogonal coordinate system on $F$. Then, for $w \in H^s(F)$ and
$\bm{v} = \sum_{i=1}^{n-1} v_i \bm{t}_i$ with $v_i \in H^s(F)$ for
$i=1,\cdots, n-1$, define
\begin{equation}
\nabla_T w := \sum_{i=1}^{n-1} \bm{t}_i \frac{\partial w}{\partial
\bm{t}_i}, \qquad \mathrm{div}_T \bm{v} := \sum_{i=1}^{n-1}
\frac{\partial v_i}{\partial \bm{t}_i}.
\end{equation}
We also define $\Delta_T w := \mathrm{div}_T\nabla_T w$ for $w \in
H^s(F)$, where $s \geq 2$.
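When the face $F$ lies in a coordinate plane, say $\{z = 0\}$ with $\bm{t}_1 = \bm{e}_1$, $\bm{t}_2 = \bm{e}_2$, the tangential operators reduce to ordinary partial derivatives in $x$ and $y$. A small finite-difference sketch (the scalar field is an arbitrary illustrative choice) checks $\Delta_T w = \mathrm{div}_T\nabla_T w$ in this setting.

```python
import numpy as np

# Face F = {z = 0}; tangential frame t1 = e1, t2 = e2, normal n_F = e3.
def w(x, y):
    return x**3 + 2.0 * x * y**2      # illustrative scalar field on F

h = 1e-4
def grad_T(x, y):
    """nabla_T w = sum_i t_i (dw/dt_i); here simply (w_x, w_y)."""
    return np.array([
        (w(x + h, y) - w(x - h, y)) / (2 * h),
        (w(x, y + h) - w(x, y - h)) / (2 * h),
    ])

def laplace_T(x, y):
    """Delta_T w = div_T(nabla_T w) = d/dx (w_x) + d/dy (w_y)."""
    return ((grad_T(x + h, y)[0] - grad_T(x - h, y)[0])
            + (grad_T(x, y + h)[1] - grad_T(x, y - h)[1])) / (2 * h)

x0, y0 = 0.3, -0.7
exact = 6 * x0 + 4 * x0               # w_xx + w_yy = 10 x
assert abs(laplace_T(x0, y0) - exact) < 1e-5
print(laplace_T(x0, y0), exact)
```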
\subsection{The families of 2D Hermite finite elements and 3D Argyris
finite elements}
In this subsection, we shall describe the finite elements that will be
used to solve the linear elliptic equations in non-divergence form
\eqref{eq:nondiv} and the HJB equations \eqref{eq:HJB}. More
precisely, we adopt the Hermite elements in 2D and the Argyris
elements in 3D \cite{neilan2015discrete, christiansen2018nodal}, both
of which have $C^0$-continuity on faces (namely,
$(n-1)$-dimensional subsimplices) and $C^1$-continuity on
$(n-2)$-dimensional subsimplices.
\paragraph{The family of 2D Hermite finite elements} Following the
description of \cite{ciarlet1978finite, brenner2007mathematical}, the
geometric shape of Hermite elements is a triangle $K$. The shape
function space is given as $\mathcal{P}_k(K)~(k\geq 3)$, where
$\mathcal{P}_k(K)$ denotes the set of polynomials with total degree
not exceeding $k$ on $K$. In $K$ the degrees of freedom are defined as
follows (cf. Figure \ref{fig:Hermite-2D}):
\begin{itemize}
\item Function value $v(\bm{a})$ and first order derivatives
$\partial_i v(\bm{a})$, $i=1,2$ at each vertex;
\item Moments $\int_e v q\,\mathrm{d}s, \forall q \in
\mathcal{P}_{k-4}(e)$ on each edge $e$;
\item Moments $\int_K v q\,\mathrm{d}x, \forall q \in
\mathcal{P}_{k-3}(K)$ on element $K$.
\end{itemize}
It is simple to check that the degrees of freedom given above form a
unisolvent set of $\mathcal{P}_k(K)$ for $k \geq 3$
\cite{ciarlet1978finite}.
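The unisolvence count can also be checked mechanically; the following sketch (purely illustrative) verifies that the listed degrees of freedom sum to $\dim \mathcal{P}_k(K) = (k+1)(k+2)/2$ for $3 \leq k \leq 19$.

```python
from math import comb

def dim_P(k, d):
    """Dimension of polynomials of total degree <= k in d variables (0 if k < 0)."""
    return comb(k + d, d) if k >= 0 else 0

for k in range(3, 20):
    dofs = (3 * 3                    # value + two first derivatives per vertex
            + 3 * dim_P(k - 4, 1)    # edge moments against P_{k-4}(e)
            + dim_P(k - 3, 2))       # interior moments against P_{k-3}(K)
    assert dofs == dim_P(k, 2), k
print("2D Hermite DOF count matches dim P_k for k = 3..19")
```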
\begin{figure}[!htbp]
\centering
\captionsetup{justification=centering}
\subfloat[2D Hermite element, $k=3$]{
\includegraphics[width=0.24\textwidth]{Hermite2Dk3.pdf}
\label{fig:Hermite-2Dk3}
}\qquad %
\subfloat[2D Hermite element, $k=4$]{
\includegraphics[width=0.24\textwidth]{Hermite2Dk4.pdf}
\label{fig:Hermite-2Dk4}
}
\caption{Degrees of freedom of 2D $\mathcal{P}_k$ Hermite elements, in
the case of $k=3$ and $k=4$}
\label{fig:Hermite-2D}
\end{figure}
\paragraph{The family of 3D Argyris finite elements} In the 3D case, the
finite elements are required to be $C^0$ on faces and $C^1$ on edges.
The typical elements that meet the requirement in 3D are the Argyris
elements \cite{christiansen2018nodal}, which coincide with each
component of the velocity finite elements in the 3D smooth de Rham
complex \cite{neilan2015discrete}. Given a tetrahedron $K$, the shape
function space is given as $\mathcal{P}_k(K)$, for $k \geq 5$. In $K$
the degrees of freedom are defined as follows (cf. Figure
\ref{fig:Argyris-3D}):
\begin{itemize}
\item Function value and the nine first and second order derivatives
at each vertex;
\item Moments $\int_e v q \,\mathrm{d}s, \forall q \in
\mathcal{P}_{k-6}(e)$ on each edge $e$;
\item Moments $\int_e \frac{\partial v}{\partial \bm{n}_{e,i}} q
\,\mathrm{d}s, \forall q \in \mathcal{P}_{k-5}(e)$, $i=1,2$ on each edge $e$;
\item Moments $\int_F v q \,\mathrm{d}s, \forall q \in
\mathcal{P}_{k-6}(F)$ on each face $F$;
\item Moments $\int_K v q \,\mathrm{d}x, \forall q \in
\mathcal{P}_{k-4}(K)$ on element $K$.
\end{itemize}
Here, $\bm{n}_{e,i}~(i=1,2)$ are two mutually orthogonal unit vectors normal to the edge $e$.
\begin{figure}[!htbp]
\centering
\captionsetup{justification=centering}
\subfloat[3D Argyris elements, $k=5$]{
\includegraphics[width=0.28\textwidth]{Argyris3Dk5.pdf}
\label{fig:Argyris-3Dk5}
}\qquad %
\subfloat[3D Argyris elements, $k=6$]{
\includegraphics[width=0.28\textwidth]{Argyris3Dk6.pdf}
\label{fig:Argyris-3Dk6}
}
\caption{Degrees of freedom of 3D $\mathcal{P}_k$ Argyris elements, in
the case of $k=5$ and $k=6$}
\label{fig:Argyris-3D}
\vspace{-4mm}
\end{figure}
We sketch the main argument for unisolvence. We first
note that the number of degrees of freedom above is
$$
4\times 10 + 6\times(k-5) + 6\times 2(k-4) + 4\times\frac{(k-5)(k-4)}{2}
+ \frac{(k-1)(k-2)(k-3)}{6},
$$
which is exactly the dimension of $\mathcal{P}_k(K)$. Therefore,
it suffices to show that $v\in \mathcal{P}_k(K)$ vanishes if it
vanishes at all the degrees of freedom. It is readily seen that the
trace $v|_F \in \mathcal{P}_k(F)$ has to vanish, since the
degrees of freedom on the face are those of the 2D Argyris element \cite{ciarlet1978finite} (this
also shows the $C^0$-continuity on faces). Therefore $v = b_K p$ for
some $p \in \mathcal{P}_{k-4}(K)$, where $b_K$ is the quartic volume
bubble function. By the set of degrees of freedom on element $K$, we
deduce $v \equiv 0$. The $C^1$-continuity on the edge follows from the
$C^1$-continuity of the Argyris elements in 2D.
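As in the 2D case, the dimension count can be verified mechanically; note that the face moments are taken against $\mathcal{P}_{k-6}(F)$, whose dimension is $(k-5)(k-4)/2$. The sketch below checks the count for $5 \leq k \leq 19$.

```python
from math import comb

def dim_P(k, d):
    """Dimension of polynomials of total degree <= k in d variables (0 if k < 0)."""
    return comb(k + d, d) if k >= 0 else 0

for k in range(5, 20):
    dofs = (4 * 10                       # value + 9 derivatives per vertex
            + 6 * dim_P(k - 6, 1)        # edge moments against P_{k-6}(e)
            + 6 * 2 * dim_P(k - 5, 1)    # two normal-derivative moments, P_{k-5}(e)
            + 4 * dim_P(k - 6, 2)        # face moments against P_{k-6}(F)
            + dim_P(k - 4, 3))           # interior moments against P_{k-4}(K)
    assert dofs == dim_P(k, 3), k
print("3D Argyris DOF count matches dim P_k for k = 5..19")
```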
\subsection{Finite element spaces}
For every triangulation $\mathcal{T}_h$ of the polytope $\Omega$, we
are now ready to define the finite element spaces $V_h$ as
\begin{subequations} \label{eq:FEM-space}
\begin{enumerate}
\item For $n = 2$, with $k \geq 3$,
\begin{equation} \label{eq:FEM-Hermite}
V_h := \{v \in H_0^1(\Omega): v|_K \in \mathcal{P}_k(K), \forall K \in
\mathcal{T}_h, ~v \text{ is }C^1 \text{ at all vertices}\}.
\end{equation}
\item For $n=3$, with $k \geq 5$,
\begin{equation} \label{eq:FEM-Argyris}
\begin{aligned}
V_h := \{v \in H_0^1(\Omega): v|_K \in \mathcal{P}_k(K), \forall K \in
\mathcal{T}_h, ~& v \text{ is
}C^1 \text{ on all edges}, \\
& v \text{ is }C^2 \text{ at all vertices}\}.
\end{aligned}
\end{equation}
\end{enumerate}
\end{subequations}
A unisolvent set of degrees of freedom of $V_h$ is given locally by
that of 2D Hermite elements or 3D Argyris elements. Since the finite
elements may have extra continuity on subsimplices, we briefly explain
the implementation of the boundary conditions.
\paragraph{Boundary condition of 2D Hermite FEM space} Since the first
order derivatives are imposed at vertices, for any $v_h \in V_h$,
the tangential derivatives along the boundary must also vanish
for consistency with the homogeneous Dirichlet condition. In this case, we follow the terminology in
\cite{falk2013stokes, christiansen2018nodal} to introduce the
definition of {\it corner vertices}.
\begin{definition} \label{df:corner-vertex-2D}
A boundary vertex is called a {\it corner vertex} if the two adjacent
boundary edges sharing this vertex do not lie on a straight line.
\end{definition}
At each corner boundary vertex, since the two tangential directions
along the boundary form a basis of $\mathbb{R}^2$, we impose both
first order derivatives to be zero. At a non-corner boundary
vertex, the two tangential derivatives along its two adjacent edges
coincide up to a sign. In this case, we set only this tangential
derivative degree of freedom to zero, while the normal derivative remains unconstrained.
\paragraph{Boundary condition of 3D Argyris FEM space} The enhanced
continuities are imposed on the edges and vertices for the 3D Argyris
finite element. Therefore, we follow the terminology in
\cite{christiansen2018nodal} to introduce the {\it corner vertices and
corner edges}.
\begin{definition}\label{df:corner-3D}
A boundary vertex is called a corner vertex in 3D if the adjacent
boundary edges sharing this vertex are not coplanar. A boundary edge
is called a corner edge in 3D if the two adjacent faces on the
boundary sharing this edge are not coplanar.
\end{definition}
\begin{figure}[!htbp]
\centering
\captionsetup{justification=centering}
\subfloat[corner vertex $a'$ and corner edge $e'$]{
\includegraphics[width=0.25\textwidth]{Corner3D.pdf}
\label{fig:3D-corner}
}\qquad %
\subfloat[non-corner vertex $a$ and non-corner edge $e$]{
\includegraphics[width=0.25\textwidth]{NonCorner3D.pdf}
\label{fig:3D-non-corner}
}
\caption{Corner/non-corner vertex and edge in 3D.}
\label{fig:3D-bc}
\vspace{-3mm}
\end{figure}
For a corner boundary vertex in 3D, there are three linearly
independent boundary edges connected to it. Hence, all the degrees of
freedom at the corner boundary vertex should be set to zero.
Similarly, on a corner edge, the derivatives of a function along the two
normal directions are determined by the function values on the
boundary. Hence, all the degrees of freedom on a corner edge
should also be set to zero.
On the other hand, there are only two independent directions, namely
$\bm{t}_1$ and $\bm{t}_2$ along the boundary at a non-corner boundary
vertex or on a non-corner edge (cf. Figure \ref{fig:3D-non-corner}).
Therefore at a non-corner vertex, function value, two tangential
first order derivatives (i.e. $\partial_{\bm{t}_1}$, $\partial_{\bm{t}_2}$)
and three tangential second order derivatives (i.e.
$\partial^2_{\bm{t}_1\bm{t}_1}$, $\partial^2_{\bm{t}_2\bm{t}_2}$,
$\partial^2_{\bm{t}_1\bm{t}_2}$) are set to be zero. The degrees of
freedom corresponding to the normal first order derivative (i.e.
$\partial_{\bm{n}}$) and three second order derivatives (i.e.
$\partial^2_{\bm{n}\bm{n}}$, $\partial^2_{\bm{t}_1\bm{n}}$,
$\partial^2_{\bm{t}_2\bm{n}}$) are treated as unknowns. The treatment
of the degrees of freedom on a
non-corner edge is similar.
\subsection{Discrete Miranda-Talenti-type estimate}
The following lemma is crucial in the design and analysis of $C^0$ (non-Lagrange)
finite element approximations of the linear elliptic equations in non-divergence
form \eqref{eq:nondiv} and the HJB equations \eqref{eq:HJB}.
\begin{lemma}[Discrete Miranda-Talenti-type estimate] \label{lm:Hermite-MT}
Let $\Omega \subset \mathbb{R}^n~(n=2,3)$ be a bounded Lipschitz polytopal
domain and $\mathcal{T}_h$ be a conforming triangulation. For $v_h \in
V_h$, it holds that
\begin{equation} \label{eq:Hermite-MT}
\sum_{K\in \mathcal{T}_h} \|\Delta v_h\|_{L^2(K)}^2 =
\sum_{K\in \mathcal{T}_h} \|D^2 v_h\|_{L^2(K)}^2 + 2\sum_{F \in \mathcal{F}_h^i}
\langle \llbracket \nabla v_h \rrbracket, \Delta_T v_h \rangle_{F}.
\end{equation}
\end{lemma}
\begin{proof}
For any simplicial element $K \in \mathcal{T}_h$, the outward unit
normal vector of $\partial K$ is piecewise constant. Using integration by
parts, we obtain (see e.g. \cite[Eq. (3.7)]{smears2013discontinuous})
\begin{equation} \label{eq:Hermite-MT1}
\begin{aligned}
\|\Delta v_h\|_{L^2(K)}^2 &=
\|D^2 v_h\|_{L^2(K)}^2 + \langle \Delta v_h, \frac{\partial
v_h}{\partial
\bm{n}} \rangle_{\partial K} - \langle \nabla \frac{\partial
v_h}{\partial \bm{n}}, \nabla v_h \rangle_{\partial K} \\
& = \|D^2 v_h\|_{L^2(K)}^2 + \langle \Delta_T v_h, \frac{\partial
v_h}{\partial
\bm{n}} \rangle_{\partial K} - \langle \nabla_T \frac{\partial
v_h}{\partial \bm{n}}, \nabla_T v_h \rangle_{\partial K}.
\end{aligned}
\end{equation}
Here, the common term $\langle \frac{\partial^2 v_h}{\partial \bm{n}^2},
\frac{\partial v_h}{\partial \bm{n}} \rangle_{\partial K}$ cancels in the last step. We use $\bm{n}_{\bm{t}_F}$ to denote the
unit outward co-normal of $\partial F$, i.e., the unit normal vector of $\partial F$ that is coplanar with $F$. Using integration by parts on each $F \subset \partial K$, we have
$$
\begin{aligned}
\langle \nabla_T \frac{\partial v_h}{\partial \bm{n}}, \nabla_T v_h \rangle_{\partial K} &= \sum_{F\subset \partial K}
\int_F \nabla_T \frac{\partial v_h}{\partial \bm{n}} \cdot \nabla_T v_h \,\mathrm{d}s \\
&= \sum_{F\subset \partial K} \left(
-\int_F \frac{\partial v_h}{\partial \bm{n}} \Delta_T v_h \,\mathrm{d}s
+ \int_{\partial F} \frac{\partial v_h}{\partial \bm{n}} \frac{\partial v_h}{\partial \bm{n}_{\bm{t}_F}} \,\mathrm{d}s
\right).
\end{aligned}
$$
Hence, \eqref{eq:Hermite-MT1} can be reformulated as
$$
\|\Delta v_h\|_{L^2(K)}^2 =
\|D^2 v_h\|_{L^2(K)}^2 + 2 \langle \Delta_T v_h, \frac{\partial
v_h}{\partial \bm{n}} \rangle_{\partial K} - \sum_{F \subset
\partial K}
\langle \frac{\partial v_h}{\partial \bm{n}},
\frac{\partial v_h}{\partial \bm{n}_{\bm{t}_F}} \rangle_{\partial F}.
$$
Thanks to the $C^0$-continuity, it is readily seen that $\Delta_T v_h$ is single-valued on each interior face
and vanishes on boundary faces. Summing over all elements yields
\begin{equation} \label{eq:Hermite-MT2}
\begin{aligned}
\sum_{K\in \mathcal{T}_h}\|\Delta v_h\|_{L^2(K)}^2 &= \sum_{K \in \mathcal{T}_h}\|D^2 v_h\|_{L^2(K)}^2 +
2\sum_{F \in \mathcal{F}_h^i} \langle \llbracket \nabla v_h \rrbracket, \Delta_T v_h \rangle_F \\
& \qquad\qquad - \sum_{K \in \mathcal{T}_h} \sum_{F\subset \partial K} \langle \frac{\partial v_h}{\partial \bm{n}},
\frac{\partial v_h}{\partial \bm{n}_{\bm{t}_F}} \rangle_{\partial F}.
\end{aligned}
\end{equation}
For any $F\in \mathcal{F}_h^\partial$, the boundary condition implies
that $\frac{\partial v_h}{\partial \bm{n}_{\bm{t}_F}} = 0$. For any
interior face $F = \partial K^+ \cap \partial K^-$, the
$C^1$-continuity on the $(n-2)$-dimensional subsimplex implies that
$$
\left.\frac{\partial v_h^+}{\partial \bm{n}^+}\right|_{\partial F} = -\left.\frac{\partial v_h^-}{\partial \bm{n}^-}\right|_{\partial F}, \quad
\left.\frac{\partial v_h^+}{\partial \bm{n}_{\bm{t}_F}}\right|_{\partial F} = \left.\frac{\partial v_h^-}{\partial \bm{n}_{\bm{t}_F}}\right|_{\partial F}.
$$
Then, we deduce that the last term in \eqref{eq:Hermite-MT2} vanishes,
which gives the desired result \eqref{eq:Hermite-MT}.
\end{proof}
\section{Applications to the linear elliptic equations in
non-divergence form}
\label{sec:nondiv}
In this section, we apply the $C^0$ (non-Lagrange) finite element method to solve the
linear elliptic equations in non-divergence form \eqref{eq:nondiv}.
\subsection{Numerical scheme}
Define the broken bilinear form for $w \in V+V_h$ and $v_h \in V_h$:
\begin{equation} \label{eq:nondiv-bilinear}
B_{0,h}(w, v_h) := \sum_{K\in \mathcal{T}_h}(\gamma Lw, \Delta v_h)_K -
(2-\sqrt{1-\varepsilon}) \sum_{F\in \mathcal{F}_h^i} \langle
\llbracket \nabla w\rrbracket, \Delta_T v_h\rangle_F.
\end{equation}
We propose the following finite element scheme to approximate the
solution to linear elliptic equations in non-divergence form
\eqref{eq:nondiv}: Find $u_h \in V_h$ such that
\begin{equation} \label{eq:nondiv-h}
B_{0,h}(u_h, v_h) = \sum_{K \in \mathcal{T}_h}(\gamma f, \Delta v_h)_K
\qquad \forall v_h \in V_h.
\end{equation}
We emphasize that no penalty or stabilization parameter is involved in
the scheme above. The broken norm is introduced on $V+V_h$:
\begin{equation} \label{eq:nondiv-norm}
\|v\|_{0,h}^2 := \sum_{K \in \mathcal{T}_h} \|D^2 v\|_{L^2(K)}^2 \quad
\forall v \in V + V_h.
\end{equation}
Note that $\|v\|_{0,h} = 0$ implies that $D^2v|_K = 0$ on each element
$K$. Together with the $C^1$-continuity on $(n-2)$-dimensional
subsimplices, we immediately have that $v$ is a linear polynomial on
$\Omega$, which means $v\equiv 0$ since $v$ vanishes on
$\partial\Omega$. The following coercivity result follows directly
from the discrete Miranda-Talenti-type estimate in Lemma \ref{lm:Hermite-MT}.
\begin{lemma} \label{lm:nondiv-MT}
There holds that
\begin{equation} \label{eq:nondiv-coercivity}
B_{0,h}(v_h, v_h) \geq (1-\sqrt{1-\varepsilon})\|v_h\|_{0,h}^2 \qquad
\forall v_h\in V_h.
\end{equation}
\end{lemma}
\begin{proof}
By using Lemma \ref{lm:nondiv-Cordes-prop} and Cauchy-Schwarz
inequality, we have
\begin{equation} \label{eq:nondiv-coer2}
\begin{aligned}
B_{0,h}(v_h, v_h)
&= \sum_{K \in \mathcal{T}_h} (\gamma Lv_h - \Delta v_h,
\Delta v_h)_K + \sum_{K \in \mathcal{T}_h} \|\Delta v_h\|_{L^2(K)}^2 \\
&\qquad\qquad\qquad\qquad
- (2-\sqrt{1-\varepsilon})\sum_{F\in \mathcal{F}_h^i} \langle
\llbracket\nabla v_h \rrbracket, \Delta_T v_h \rangle_F \\
(\mbox{by } \eqref{eq:nondiv-Cordes-prop})~~
& \geq \sum_{K\in \mathcal{T}_h}\|\Delta v_h\|_{L^2(K)}^2 -
\sqrt{1-\varepsilon}\sum_{K\in \mathcal{T}_h} \|D^2v_h\|_{L^2(K)}\|\Delta v_h\|_{L^2(K)}
\\
&\qquad\qquad\qquad\qquad
-(2-\sqrt{1-\varepsilon}) \sum_{F\in \mathcal{F}_h^i} \langle
\llbracket \nabla v_h \rrbracket, \Delta_Tv_h \rangle_F \\
& \geq \sum_{K \in \mathcal{T}_h} \|\Delta v_h\|_{L^2(K)}^2 -
\frac{\sqrt{1-\varepsilon}}{2} \sum_{K\in\mathcal{T}_h} \big(\|D^2v_h\|_{L^2(K)}^2
+ \|\Delta v_h\|_{L^2(K)}^2 \big) \\
&\qquad\qquad\qquad\qquad
- (2-\sqrt{1-\varepsilon}) \sum_{F\in \mathcal{F}_h^i} \langle
\llbracket \nabla v_h \rrbracket, \Delta_T v_h \rangle_F \\
& = (1-\frac{\sqrt{1-\varepsilon}}{2}) \Big( \sum_{K \in \mathcal{T}_h}\|\Delta
v_h\|_{L^2(K)}^2 - 2\sum_{F\in \mathcal{F}_h^i}\langle
\llbracket \nabla v_h \rrbracket, \Delta_T v_h \rangle_F \Big)
\\
&\qquad\qquad\qquad\qquad
- \frac{\sqrt{1-\varepsilon}}{2} \sum_{K\in \mathcal{T}_h}\|D^2 v_h\|_{L^2(K)}^2 \\
(\mbox{recall } \eqref{eq:nondiv-norm})~~ & = (1-\sqrt{1-\varepsilon})\|v_h\|_{0,h}^2,
\end{aligned}
\end{equation}
where the discrete Miranda-Talenti-type estimate \eqref{eq:Hermite-MT} is
used in the last step.
\end{proof}
We note that the coercivity constant (namely, $1 - \sqrt{1-\varepsilon}$) under the
broken norm $\|\cdot\|_{0,h}$ is exactly the same as that for the PDE theory. As a corollary, since equation \eqref{eq:nondiv} is linear, the uniqueness of the solution to the finite element scheme
\eqref{eq:nondiv-h} implies its existence.
Further, assuming that the solution satisfies $u \in
H^2(\Omega)\cap H_0^1(\Omega)$, a straightforward argument shows the
consistency, namely
\begin{equation} \label{eq:nondiv-consistency}
B_{0,h}(u,v_h) = \sum_{K\in \mathcal{T}_h} (\gamma L u, \Delta v_h)_K =
\sum_{K \in \mathcal{T}_h} (\gamma f, \Delta v_h)_K
\qquad \forall v_h \in V_h.
\end{equation}
\begin{remark} \label{rk:tilde-varepsilon}
We note that the bilinear form \eqref{eq:nondiv-bilinear} explicitly uses the constant $\varepsilon$ in Cordes condition \eqref{eq:nondiv-Cordes}.
In case the optimal value of $\varepsilon$ is not easy to compute, a simple modification of \eqref{eq:nondiv-bilinear} reads
$$
\tilde{B}_{0,h}(w, v_h) := \sum_{K\in \mathcal{T}_h}(\gamma Lw, \Delta v_h)_K -
(2-\sqrt{1-\tilde{\varepsilon}}) \sum_{F\in \mathcal{F}_h^i} \langle
\llbracket \nabla w\rrbracket, \Delta_T v_h\rangle_F,
$$
where $\tilde{\varepsilon}$ is an approximation of $\varepsilon$ that satisfies $\sqrt{1 - \tilde{\varepsilon}} + \frac{1-\varepsilon}{\sqrt{1 - \tilde{\varepsilon}}} < 2$.
Using the inequality
$$
2\|D^2v_h\|_{L^2(K)}\|\Delta v_h\|_{L^2(K)} \leq \frac{\sqrt{1- \varepsilon}}{\sqrt{1- \tilde\varepsilon}}
\|D^2v_h\|_{L^2(K)}^2 + \frac{\sqrt{1- \tilde\varepsilon}}{\sqrt{1- \varepsilon}} \|\Delta v_h\|_{L^2(K)}^2 \quad \forall K \in \mathcal{T}_h,
$$
we have the coercivity result
$$
\tilde{B}_{0,h}(v_h, v_h) \geq \big( 1 - \frac{\sqrt{1 - \tilde\varepsilon}}{2} - \frac{1-\varepsilon}{2\sqrt{1 - \tilde\varepsilon}} \big) \|v_h\|_{0,h}^2 \qquad \forall v_h \in V_h,
$$
following a similar argument as in Lemma \ref{lm:nondiv-MT}. Clearly, the optimal coercivity constant is attained at $\tilde{\varepsilon} = \varepsilon$. Even if no a priori estimate of $\varepsilon$ is available, one may simply take $\tilde{\varepsilon} = 0$, which leads to the coercivity constant $\frac{\varepsilon}{2}$.
\end{remark}
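The dependence of the coercivity constant in Remark \ref{rk:tilde-varepsilon} on $\tilde\varepsilon$ is easy to inspect numerically; the sketch below (with the illustrative value $\varepsilon = 0.3$) confirms that $c(\tilde\varepsilon) := 1 - \sqrt{1-\tilde\varepsilon}/2 - (1-\varepsilon)/(2\sqrt{1-\tilde\varepsilon})$ is maximized at $\tilde\varepsilon = \varepsilon$ with maximal value $1 - \sqrt{1-\varepsilon}$, and that $c(0) = \varepsilon/2$.

```python
import numpy as np

eps = 0.3   # illustrative Cordes constant

def c(eps_t):
    """Coercivity constant of the modified bilinear form as a function of eps_t."""
    s = np.sqrt(1.0 - eps_t)
    return 1.0 - s / 2.0 - (1.0 - eps) / (2.0 * s)

# Locate the maximizer on a fine grid of admissible eps_t values.
grid = np.linspace(1e-6, 0.999, 200001)
i = int(np.argmax(c(grid)))
assert abs(grid[i] - eps) < 1e-4                        # maximizer is eps itself
assert abs(c(eps) - (1 - np.sqrt(1 - eps))) < 1e-14     # optimal constant
assert abs(c(0.0) - eps / 2) < 1e-14                    # fallback eps_t = 0
print("c(eps) =", c(eps), " c(0) =", c(0.0))
```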
\subsection{Error estimate}
Thanks to the coercivity result \eqref{eq:nondiv-coercivity} and the
consistency \eqref{eq:nondiv-consistency}, we then arrive at the
quasi-optimal error estimate.
\begin{theorem} \label{tm:nondiv-estimate}
Let $\Omega$ be a bounded, convex polytope in $\mathbb{R}^n$, and let
$\mathcal{T}_h$ be a simplicial, conforming, shape-regular mesh.
Suppose that the coefficients satisfy the Cordes condition
\eqref{eq:nondiv-Cordes} and the solution $u$ to \eqref{eq:nondiv} satisfies $u \in
H^s(\Omega) \cap
H_0^1(\Omega)$ for some $s \geq 2$. Then, there holds
\begin{equation} \label{eq:nondiv-estimate}
\|u - u_h\|_{0,h}^2:= \sum_{K \in \mathcal{T}_h} \|D^2(u - u_h)\|_{L^2(K)}^2 \leq C
\sum_{K\in \mathcal{T}_h} h_K^{2t-4} \|u\|_{H^s(K)}^2,
\end{equation}
where $t = \min\{s, k+1\}$.
\end{theorem}
\begin{proof}
Since the sequence of meshes is shape regular, it follows from the
standard polynomial approximation theory \cite{brenner2007mathematical} that, there exists a $z_h \in
V_h$, such that
\begin{subequations}
\begin{align}
\|u - z_h\|_{H^q(K)} &\leq C h_K^{t-q}\|u\|_{H^s(\omega_K)}, \quad 0 \leq q \leq 2,
\label{eq:nondiv-app1}\\
\|D^\beta(u - z_h)\|_{L^2(\partial K)} &\leq C h_K^{t-q-1/2}
\|u\|_{H^s(\omega_K)} \quad \forall |\beta| = q, \quad 0 \leq q \leq 1, \label{eq:nondiv-app2}
\end{align}
\end{subequations}
where $\omega_K$ represents the union of the local neighborhood of element $K$.
Let $\psi_h = z_h - u_h$. Then, by the coercivity result
\eqref{eq:nondiv-coercivity}, we obtain
\begin{equation} \label{eq:nondiv-error}
\begin{aligned}
\|z_h - u_h\|_{0,h}^2 &\lesssim B_{0,h}(z_h - u_h, \psi_h) =
B_{0,h}(z_h, \psi_h) - \sum_{K\in \mathcal{T}_h}(\gamma f, \Delta \psi_h)_K \\
&= \underbrace{\sum_{K \in \mathcal{T}_h}(\gamma L (z_h - u), \Delta
\psi_h)_K}_{E_1}
-
(2 - \sqrt{1-\varepsilon}) \underbrace{\sum_{F\in \mathcal{F}_h^i} \langle
\llbracket \nabla z_h \rrbracket, \Delta_T \psi_h \rangle_F}_{E_2}.
\end{aligned}
\end{equation}
By the boundedness of the data, the fact that
$\|\Delta\psi_h\|_{L^2(K)} \leq \sqrt{n} \|D^2
\psi_h\|_{L^2(K)}$ and the approximation result
\eqref{eq:nondiv-app1}, we have
$$
\begin{aligned}
|E_1| &\leq
\sum_{K \in \mathcal{T}_h}
\|\gamma L (u-z_h)\|_{L^2(K)} \|\Delta \psi_h\|_{L^2(K)} \\
&\leq \sum_{K \in \mathcal{T}_h}
\sqrt{n} \|\gamma\|_{L^\infty(K)} \|A\|_{L^\infty(K)}\|D^2(u -
z_h)\|_{L^2(K)}\|D^2\psi_h\|_{L^2(K)} \\
& \lesssim \Big(
\sum_{K \in \mathcal{T}_h} h_K^{2t-4}\|u\|_{H^s(K)}^2
\Big)^{1/2} \|\psi_h\|_{0,h}.
\end{aligned}
$$
Further, the local trace inequality implies that $\|\Delta_T \psi_h
\|_{L^2(\partial K)} \lesssim h_K^{-1/2}\|D^2 \psi_h\|_{L^2(K)}$;
together with the approximation result \eqref{eq:nondiv-app2}, this yields
$$
\begin{aligned}
|E_2| & =
\left| \sum_{F \in \mathcal{F}_h^i}
\langle\llbracket \nabla u - \nabla z_h \rrbracket, \Delta_T \psi_h
\rangle_F \right| \leq \sum_{F \in \mathcal{F}_h^i} \|\llbracket \nabla
(u-z_h)\rrbracket \|_{L^2(F)} \|\Delta_T \psi_h\|_{L^2(F)} \\
& \lesssim \Big(
\sum_{K \in \mathcal{T}_h} h_K^{2t-4}\|u\|_{H^s(K)}^2
\Big)^{1/2} \|\psi_h\|_{0,h}.
\end{aligned}
$$
The above inequalities and \eqref{eq:nondiv-error} give rise to
$$
\|z_h - u_h\|_{0,h} \leq C \Big( \sum_{K\in \mathcal{T}_h} h_K^{2t-4}
\|u\|_{H^s(K)}^2 \Big)^{1/2},
$$
which implies the desired result by the triangle inequality.
\end{proof}
\section{Applications to the Hamilton-Jacobi-Bellman equations}
\label{sec:HJB}
In this section, we apply the $C^0$ (non-Lagrange) finite element method to solve the
HJB equations \eqref{eq:HJB}, which can be viewed as a natural extension of the numerical scheme for the linear elliptic equations in non-divergence form.
Since an additional parameter $\lambda >
0$ is introduced in the Cordes condition \eqref{eq:HJB-Cordes1}, the
broken norm is defined as
\begin{equation} \label{eq:HJB-norm}
\|v\|_{\lambda, h}^2 := \sum_{K\in \mathcal{T}_h} \|v\|_{\lambda, h,
K}^2 := \sum_{K \in \mathcal{T}_h} \big(\|D^2 v\|_{L^2(K)}^2 +
2\lambda \|\nabla v\|_{L^2(K)}^2 + \lambda^2 \|v\|_{L^2(K)}^2\big).
\end{equation}
Thanks to the discussion of $\|\cdot\|_{0,h}$ in
\eqref{eq:nondiv-norm}, it is readily seen that $\|\cdot\|_{\lambda, h}$
is indeed a norm on $V + V_h$ for all $\lambda \geq 0$.
\subsection{Numerical scheme}
We describe the finite element method. In light of
\eqref{eq:HJB-M}, we define the operator $M_h: V+V_h \to V_h^*$
by
\begin{equation} \label{eq:HJB-form}
\langle M_h[w], v_h \rangle := \sum_{K \in \mathcal{T}_h} (F_\gamma[w],
L_\lambda v_h)_K - (2-\sqrt{1-\varepsilon})\sum_{F\in \mathcal{F}_h^i}
\langle \llbracket \nabla w \rrbracket, \Delta_T v_h - \lambda
v_h\rangle_F,
\end{equation}
where we recall that $L_\lambda v = \Delta v - \lambda v$ in
\eqref{eq:L-lambda}. The following finite element method is proposed
to approximate the solution to the HJB equations \eqref{eq:HJB}: Find
$u_h \in V_h$ such that
\begin{equation} \label{eq:HJB-h}
\langle M_h[u_h], v_h \rangle = 0 \qquad \forall v_h \in V_h.
\end{equation}
\begin{lemma} \label{lm:HJB-monotone}
For every $w_h, v_h \in V_h$, we have
\begin{equation} \label{eq:HJB-monotone}
\langle M_h[w_h] - M_h[v_h], w_h-v_h \rangle
\geq (1-\sqrt{1-\varepsilon})\|w_h - v_h\|_{\lambda, h}^2.
\end{equation}
\end{lemma}
\begin{proof}
Set $z_h = w_h - v_h$. Using the discrete Miranda-Talenti-type
estimate \eqref{eq:Hermite-MT} and integration by parts, we obtain
\begin{equation} \label{eq:Hermite-MT-lambda}
\begin{aligned}
& \quad~ \sum_{K \in \mathcal{T}_h} \|L_\lambda z_h\|_{L^2(K)}^2 \\
&=
\sum_{K\in \mathcal{T}_h}\|\Delta z_h\|_{L^2(K)}^2 - 2\lambda\sum_{K\in
\mathcal{T}_h} (z_h, \Delta z_h)_K + \lambda^2 \|z_h\|_{L^2(\Omega)}^2
\\
& =
\sum_{K\in \mathcal{T}_h}\|\Delta z_h\|_{L^2(K)}^2 + 2\lambda \|\nabla
z_h\|_{L^2(\Omega)}^2 + \lambda^2 \|z_h\|_{L^2(\Omega)}^2 - 2\lambda\sum_{K\in
\mathcal{T}_h} \int_{\partial K} z_h \frac{\partial z_h}{\partial
\bm{n}} \,\mathrm{d}s \\
& = \|z_h\|_{\lambda, h}^2 + 2 \sum_{F\in \mathcal{F}_h^i} \langle
\llbracket \nabla z_h \rrbracket, \Delta_Tz_h - \lambda z_h \rangle_F,
\end{aligned}
\end{equation}
where we use the definition of $\|\cdot\|_{\lambda, h}$
\eqref{eq:HJB-norm} in the last step. Further, by Lemma
\ref{lm:HJB-Cordes-prop}, we have
$$
\begin{aligned}
& \quad~ \langle M_h[w_h] - M_h[v_h], z_h \rangle \\
&= \sum_{K\in \mathcal{T}_h} (F_\gamma[w_h] - F_\gamma[v_h] -
L_\lambda z_h, L_\lambda z_h)_K + \sum_{K \in \mathcal{T}_h} \|L_\lambda
z_h\|_{L^2(K)}^2 \\
& \qquad\qquad\qquad
- (2-\sqrt{1-\varepsilon})\sum_{F\in \mathcal{F}_h^i} \langle
\llbracket \nabla z_h \rrbracket, \Delta_T z_h - \lambda z_h \rangle_F \\
&\geq \sum_{K\in \mathcal{T}_h}\|L_\lambda z_h\|_{L^2(K)}^2 -
\sqrt{1-\varepsilon} \sum_{K\in \mathcal{T}_h} \|z_h\|_{\lambda, h,K} \|L_\lambda
z_h\|_{L^2(K)}
\\
& \qquad\qquad\qquad
- (2-\sqrt{1-\varepsilon})\sum_{F\in \mathcal{F}_h^i} \langle
\llbracket \nabla z_h \rrbracket, \Delta_T z_h -\lambda z_h\rangle_F
\\
&\geq \frac{2-\sqrt{1-\varepsilon}}{2}\big(
\sum_{K\in \mathcal{T}_h} \|L_\lambda z_h\|_{L^2(K)}^2 - 2\sum_{F\in \mathcal{F}_h^i}
\langle \llbracket \nabla z_h \rrbracket, \Delta_T z_h -\lambda
z_h\rangle_F \big) \\
&\qquad\qquad\qquad
- \frac{\sqrt{1-\varepsilon}}{2} \|z_h\|_{\lambda, h}^2.
\end{aligned}
$$
Applying \eqref{eq:Hermite-MT-lambda}, we conclude the strong
monotonicity of $M_h$ in \eqref{eq:HJB-monotone}.
\end{proof}
Again, the monotonicity constant under the broken norm is exactly the
same as that for the PDE theory. Similar to Remark \ref{rk:tilde-varepsilon}, the monotonicity constant becomes $1 - \frac{\sqrt{1 - \tilde\varepsilon}}{2} - \frac{1-\varepsilon}{2\sqrt{1 - \tilde\varepsilon}}$ if $\varepsilon$ is replaced by its approximation $\tilde\varepsilon$.
Next, we show that $M_h$ is Lipschitz
continuous on $V_h$ with respect
to $\|\cdot\|_{\lambda, h}$.
\begin{lemma} \label{lm:HJB-Lipschitz}
For any $v_h, w_h, z_h \in V_h$,
\begin{equation} \label{eq:HJB-Lipschitz}
|\langle M_h[w_h] - M_h[v_h], z_h\rangle| \leq C
\|w_h-v_h\|_{\lambda,h} \|z_h\|_{\lambda,h}.
\end{equation}
\end{lemma}
\begin{proof}
In light of the definition of $M_h$ in \eqref{eq:HJB-form}, using
Cauchy-Schwarz inequality, we have
$$
\begin{aligned}
& \quad |\langle M_h[w_h] - M_h[v_h], z_h \rangle| \\
& \leq \underbrace{\sum_{K \in \mathcal{T}_h}\|F_\gamma[w_h] -
F_\gamma[v_h] - L_\lambda(w_h - v_h)\|_{L^2(K)} \|L_\lambda
z_h\|_{L^2(K)}}_{I_1}
\\
& \quad
+ \underbrace{\sum_{K\in \mathcal{T}_h} \|L_\lambda(w_h - v_h)\|_{L^2(K)}
\|L_\lambda z_h\|_{L^2(K)}}_{I_2} \\
& \quad +
(2-\sqrt{1-\varepsilon}) \underbrace{\sum_{F\in
\mathcal{F}_h^i}\|\llbracket \nabla(w_h-v_h) \rrbracket \|_{L^2(F)}
\Big(
\|\Delta_T z_h\|_{L^2(F)} + \|\lambda z_h\|_{L^2(F)}
\Big)}_{I_3}.
\end{aligned}
$$
Invoking Lemma \ref{lm:HJB-Cordes-prop} and the fact that
$\sum_{K\in \mathcal{T}_h} \|L_\lambda v\|_{L^2(K)}^2 \leq 2n \|v\|_{\lambda,
h}^2$ for any $v\in V+V_h$, we have
$$
I_1 \leq \sqrt{2n(1-\varepsilon)}\|w_h -
v_h\|_{\lambda,h}\|z_h\|_{\lambda,h}, \qquad
I_2 \leq 2n\|w_h - v_h\|_{\lambda, h} \|z_h\|_{\lambda, h}.
$$
For any interior face $F = \partial K^+ \cap \partial K^-$, the
standard scaling argument \cite{ciarlet1978finite, brenner2007mathematical} gives
$$
\|\llbracket \nabla(w_h - v_h) \rrbracket \|_{L^2(F)}^2 \lesssim
h_F \sum_{K \in \{K^+, K^-\}}\|D^2 (w_h - v_h)\|_{L^2(K)}^2,
$$
where the $C^0$-continuity across faces and the $C^1$-continuity at the
$(n-2)$-dimensional subsimplices guarantee that a piecewise linear
function on $\omega_F = K^+ \cup K^-$ must be linear on all of $\omega_F$.
Further, by the local trace inequality, we have for $F
\subset \partial K$
$$
\|\Delta_T z_h\|_{L^2(F)}^2 \lesssim h_F^{-1} \|D^2 z_h\|_{L^2(K)}^2,
\qquad \|\lambda z_h\|_{L^2(F)}^2 \lesssim h_F^{-1}
\lambda^2 \|z_h\|_{L^2(K)}^2.
$$
Hence, we have $I_3 \lesssim \|w_h - v_h\|_{\lambda,h}
\|z_h\|_{\lambda,h}$. The bound \eqref{eq:HJB-Lipschitz} is obtained
from the above estimates of $I_i~(i=1,2,3)$.
\end{proof}
Having the strong monotonicity and the Lipschitz continuity, by the
Browder-Minty Theorem, there exists a unique solution $u_h \in V_h$ to
\eqref{eq:HJB-h}.
\subsection{Error estimate}
The consistency of \eqref{eq:HJB-h} follows naturally since the term
$\sum_{F\in \mathcal{F}_h^i} \langle \llbracket \nabla u \rrbracket,
\Delta_T v_h - \lambda v_h\rangle_F$ vanishes for $u \in H^2(\Omega)
\cap H_0^1(\Omega)$. Finally, we arrive at the quasi-optimal error
estimate.
\begin{theorem} \label{tm:HJB-estimate}
Let $\Omega$ be a bounded, convex polytope in $\mathbb{R}^n$, and let
$\mathcal{T}_h$ be a simplicial, conforming, shape-regular mesh. Let
$\Lambda$ be a compact metric space. Suppose that the coefficients
satisfy the Cordes condition \eqref{eq:HJB-Cordes}. Then, there exists
a unique solution $u_h \in V_h$ satisfying \eqref{eq:HJB-h}.
Moreover, there holds that
\begin{equation} \label{eq:HJB-estimate}
\|u - u_h\|_{\lambda,h}^2 \leq C
\sum_{K\in \mathcal{T}_h} h_K^{2t-4} \|u\|_{H^s(K)}^2,
\end{equation}
where $t = \min\{s, k+1\}$ provided that $u \in H^s(\Omega) \cap
H_0^1(\Omega)$ for some $s \geq 2$.
\end{theorem}
\begin{proof}
Since the sequence of meshes is shape regular, it follows from the
standard polynomial approximation theory \cite{brenner2007mathematical} that, there exists a $z_h \in
V_h$, such that
\begin{subequations}
\begin{align}
\|u - z_h\|_{H^q(K)} &\leq C h_K^{t-q}\|u\|_{H^s(\omega_K)}, \quad 0 \leq q \leq 2,
\label{eq:HJB-app1}\\
\|D^\beta(u - z_h)\|_{L^2(\partial K)} &\leq C h_K^{t-q-1/2}
\|u\|_{H^s(\omega_K)} \quad \forall |\beta| = q, \quad 0 \leq q \leq 1. \label{eq:HJB-app2}
\end{align}
\end{subequations}
Let $\psi_h = z_h - u_h$. In light of the consistency, the strong
monotonicity of $M_h$ on $V_h$, as shown in Lemma
\ref{lm:HJB-monotone}, yields
\begin{equation} \label{eq:HJB-error}
\begin{aligned}
\|\psi_h\|_{\lambda, h}^2 &\lesssim \langle M_h[z_h] - M_h[u_h], \psi_h
\rangle = \langle M_h[z_h] - M[u], \psi_h \rangle \\
&= \underbrace{\sum_{K \in \mathcal{T}_h} (F_\gamma[z_h] - F_\gamma[u] -
L_\lambda(z_h - u), L_\lambda \psi_h)_K}_{E_1} \\
&\quad + \underbrace{\sum_{K \in \mathcal{T}_h}(L_\lambda(z_h - u),
L_\lambda \psi_h)_K}_{E_2} \\
&\quad - (2-\sqrt{1-\varepsilon}) \underbrace{\sum_{F\in
\mathcal{F}_h^i} \langle \llbracket \nabla z_h \rrbracket, \Delta_T
\psi_h - \lambda \psi_h \rangle_F}_{E_3}.
\end{aligned}
\end{equation}
Similar to the proof of Lemma \ref{lm:HJB-Lipschitz}, we obtain
$$
\begin{aligned}
|E_1| &\leq \sqrt{2n(1-\varepsilon)} \|u - z_h\|_{\lambda, h}
\|\psi_h\|_{\lambda, h}
\lesssim
\Big( \sum_{K \in \mathcal{T}_h} h_K^{2t-4}\|u\|_{H^s(K)}^2
\Big)^{1/2} \|\psi_h\|_{\lambda,h}, \\
|E_2| &\leq 2n \|u - z_h\|_{\lambda, h} \|\psi_h\|_{\lambda,h}
\lesssim
\Big( \sum_{K \in \mathcal{T}_h} h_K^{2t-4}\|u\|_{H^s(K)}^2
\Big)^{1/2} \|\psi_h\|_{\lambda,h}.
\end{aligned}
$$
By \eqref{eq:HJB-app2} and the local trace inequality, we have
$$
\begin{aligned}
|E_3|
& = \Big|\sum_{F\in \mathcal{F}_h^i} \langle \llbracket \nabla
(u-z_h) \rrbracket, \Delta_T \psi_h - \lambda \psi_h\rangle_F \Big| \\
& \lesssim
\Big|\sum_{F\in \mathcal{F}_h^i} \|\llbracket \nabla
(u-z_h) \rrbracket\|_{L^2(F)}
\big(
\|\Delta_T \psi_h\|_{L^2(F)} + \lambda\|\psi_h\|_{L^2(F)} \big) \Big|
\\
& \lesssim
\Big( \sum_{K \in \mathcal{T}_h} h_K^{2t-4}\|u\|_{H^s(K)}^2
\Big)^{1/2} \|\psi_h\|_{\lambda,h}.
\end{aligned}
$$
The above estimates of $E_i~(i=1,2,3) $ and \eqref{eq:HJB-error} yield
$$
\|z_h - u_h\|_{\lambda,h} \leq C \Big( \sum_{K\in \mathcal{T}_h} h_K^{2t-4}
\|u\|_{H^s(K)}^2 \Big)^{1/2},
$$
which implies the desired result by triangle inequality.
\end{proof}
\subsection{Semismooth Newton method}
We use the semismooth Newton method \cite{ulbrich2002semismooth} to
solve the discrete problem \eqref{eq:HJB-h}. We follow a similar
argument as \cite{smears2014discontinuous} in this subsection. Since
transferring the proofs in \cite{smears2014discontinuous} to our
setting is straightforward, we only describe the algorithm and the
convergence result.
Following the discussion in \cite{smears2014discontinuous}, we define
the set of admissible maximizers for any $v \in V + V_h$,
\begin{equation} \label{eq:maximizer}
\Lambda[v] :=
\left\{
\parbox{5.2em}{
$g: \Omega \to \Lambda$ \\
measurable}
\Bigg|~
\parbox{20em}{
$
\displaystyle g(x) \in
\mathop{\arg\max}_{\alpha\in\Lambda}(A^\alpha:D_h^2v + \bm{b}^\alpha
\cdot \nabla v - c^\alpha v - f^\alpha)$ \\
for almost every $x\in \Omega$
}
\right\},
\end{equation}
where $D_h^2 v$ denotes the broken Hessian of $v$. As shown in
\cite[Lemma 9, Theorem 10]{smears2014discontinuous}, the set
$\Lambda[v]$ is nonempty for any $v \in V + V_h$, where a selection
theorem in \cite{kuratowski1965general} is applied. For any measurable
$g(x): \Omega \to \Lambda$, thanks to the uniform continuity of
$\gamma^\alpha$ defined in \eqref{eq:HJB-gamma} on $\Omega \times
\Lambda$, $\gamma^g := \gamma^\alpha|_{\alpha = g(x)}$ satisfies
$\gamma^{g}\in L^\infty(\Omega)$ and $\|\gamma^g\|_{L^\infty(\Omega)}
\leq \|\gamma^\alpha\|_{C(\bar{\Omega} \times \Lambda)}$. The
functions $A^g$, $\bm{b}^g$, $c^g$ and $f^g$ and the operator $L^g$
are defined in a similar way and are likewise bounded.
The semismooth Newton algorithm for solving \eqref{eq:HJB-h} is
described as follows.
\vspace{2mm}
\noindent {\bf Input:} Given initial guess $u_h^0 \in V_h$ and a
stopping criterion.
\noindent {\bf for} $j = 0,1,2,\cdots$ {\bf until} termination {\bf
do}
Choose any $\alpha_j \in \Lambda[u_h^j]$ and compute
$u_h^{j+1} \in V_h$ as the solution to the linear problem
\begin{equation} \label{eq:semismooth-algo}
B^{j}_{\lambda, h}(u_h^{j+1}, v_h) = \sum_{K \in \mathcal{T}_h} (
\gamma^{\alpha_j} f^{\alpha_j}, L_\lambda v_h)_K \qquad \forall v_h
\in V_h,
\end{equation}
where the bilinear form $B^j_{\lambda, h}: V_h \times V_h \to
\mathbb{R}$ is defined by
\begin{equation} \label{eq:semismooth-linear}
\begin{aligned}
B^j_{\lambda, h}(w_h, v_h) &:= \sum_{K\in \mathcal{T}_h}
(\gamma^{\alpha_j} L^{\alpha_j}w_h, L_\lambda v_h)_K \\
&\qquad
- (2-\sqrt{1-\varepsilon})\sum_{F \in \mathcal{F}_h^i} \langle
\llbracket \nabla w_h \rrbracket, \Delta_T v_h - \lambda v_h
\rangle_{F}.
\end{aligned}
\vspace{-4mm}
\end{equation}
\noindent {\bf end do}
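To make the structure of the iteration concrete (row-wise maximizer selection followed by a linear solve), the following toy sketch runs the same loop on a small discrete Bellman system. The matrices and data are invented for illustration only; they are not the finite element matrices of \eqref{eq:semismooth-algo}.

```python
import numpy as np

# Toy discrete Bellman problem: find u with max_a (f_a - B_a u)_i = 0 for each i.
# B_1, B_2 are invertible M-matrices (invented data, NOT the FEM matrices).
B = [np.array([[2.0, -1.0], [-1.0, 2.0]]), np.array([[3.0, 0.0], [0.0, 3.0]])]
f = [np.array([1.0, 1.0]), np.array([2.0, 0.5])]

def semismooth_newton(u0, max_iter=20, tol=1e-12):
    u = u0.copy()
    for it in range(max_iter):
        # Step 1: choose a row-wise maximizer alpha_j (the "policy").
        residuals = np.array([f[a] - B[a] @ u for a in range(2)])
        alpha = residuals.argmax(axis=0)
        # Step 2: solve the linearized problem with row i taken from B_{alpha_i}.
        M = np.array([B[alpha[i]][i] for i in range(len(u))])
        rhs = np.array([f[alpha[i]][i] for i in range(len(u))])
        u_new = np.linalg.solve(M, rhs)
        if np.linalg.norm(u_new - u) < tol:
            return u_new, it + 1
        u = u_new
    return u, max_iter

u, iters = semismooth_newton(np.zeros(2))
residual = max(max(f[a][i] - (B[a] @ u)[i] for a in range(2)) for i in range(2))
print(u, iters, residual)
```

For finitely many controls this loop is Howard's policy iteration, which is one realization of the semismooth Newton method; it terminates after finitely many policy changes.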
\begin{remark} \label{rk:lower-order}
We note here that \eqref{eq:semismooth-algo} is indeed a finite
element scheme for solving the linear elliptic equations in
non-divergence form with lower-order terms:
$$
L^{\alpha_j} u^{j+1} := A^{\alpha_j}:D^2u^{j+1} +
\bm{b}^{\alpha_j}\cdot \nabla u^{j+1}
- c^{\alpha_j} u^{j+1} = f^{\alpha_j}
\quad \text{in }\Omega, \quad
u^{j+1} = 0 \quad\text{on }\partial\Omega,
$$
where the coefficients, which are allowed to be discontinuous, satisfy
a Cordes condition similar to \eqref{eq:HJB-Cordes} with $\alpha =
\alpha_j$. Using arguments very similar to those in Lemma \ref{lm:HJB-monotone}
and Lemma \ref{lm:HJB-Lipschitz}, the coercivity and boundedness of
$B_{\lambda, h}^j$ with respect to $\|\cdot\|_{\lambda, h}$ can be
proved. A quasi-optimal error estimate then follows directly, which is
also confirmed numerically in subsection \ref{subsec:experiment2}.
\end{remark}
We state the main result as follows.
\begin{theorem} \label{tm:semismooth-convergence}
Under the hypotheses of Theorem \ref{tm:HJB-estimate}, there exists a
constant $R > 0$ that may depend on $h$ as well as on the polynomial
degree, such that if $\|u_h - u_h^0\|_{\lambda, h} < R$, then the
sequence $\{u_h^j\}_{j=1}^\infty$ generated by the semismooth Newton
algorithm converges to $u_h$ with a superlinear convergence rate.
\end{theorem}
\begin{proof}
The proof is similar to \cite[Theorem 11]{smears2014discontinuous} and
is therefore omitted here.
\end{proof}
\section{Numerical experiments} \label{sec:numerical}
In this section we present some numerical experiments of the $C^0$
(non-Lagrange) finite element methods for the linear elliptic
equations in non-divergence form \eqref{eq:nondiv} and the HJB
equations \eqref{eq:HJB}. For all the convergence order experiments,
the convergence history plots are logarithmically scaled.
\subsection{First experiment}
In the first experiment, we consider the problem
\eqref{eq:nondiv} in two dimensions on the domain $\Omega = (-1,1)^2$.
The coefficient matrix is set to be
\begin{equation} \label{eq:nondiv-test1}
A = \begin{pmatrix}
2 & \frac{x_1 x_2}{|x_1 x_2|} \\
\frac{x_1 x_2}{|x_1 x_2|} & 2
\end{pmatrix}.
\end{equation}
A straightforward calculation shows that, for the coefficient matrix
in \eqref{eq:nondiv-test1}, the Cordes condition
\eqref{eq:nondiv-Cordes} is satisfied with $\varepsilon = 3/5$. We
note here that the coefficient matrix is discontinuous across the set
$\{(x_1,x_2)\in \Omega: x_1=0 \text{ or } x_2=0\}$. In order to test
the convergence order, the smooth solution
\begin{equation} \label{eq:nondiv-test1-u}
u(x) = (x_1 \mathrm{e}^{1-|x_1|} - x_1)(x_2 \mathrm{e}^{1-|x_2|} - x_2)
\end{equation}
is considered, which appears in many works (e.g., \cite{smears2013discontinuous,
gallistl2017variational}). The right-hand side $f:= A:D^2u$ is
computed directly from the coefficient matrix and the solution.
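As a sanity check, the claimed Cordes constant can be verified numerically. The sketch below assumes the Cordes condition for the pure second-order problem takes the usual form $|A|^2/(\mathrm{tr}\,A)^2 \le 1/(n-1+\varepsilon)$, and also confirms that the exact solution \eqref{eq:nondiv-test1-u} vanishes on $\partial\Omega$.

```python
import numpy as np

n = 2
# Coefficient matrix of Experiment 1: the off-diagonal entry
# x1*x2/|x1*x2| equals +1 or -1 almost everywhere.
for s in (1.0, -1.0):
    A = np.array([[2.0, s], [s, 2.0]])
    frob2 = np.sum(A * A)                 # |A|^2 = 10
    ratio = frob2 / np.trace(A) ** 2      # 10/16 = 5/8
    eps_max = 1.0 / ratio - (n - 1)       # largest admissible epsilon
    assert abs(eps_max - 3.0 / 5.0) < 1e-14

# The exact solution vanishes on the boundary of (-1,1)^2.
u = lambda x1, x2: (x1 * np.exp(1 - abs(x1)) - x1) * (x2 * np.exp(1 - abs(x2)) - x2)
t = np.linspace(-1.0, 1.0, 101)
boundary_vals = [u(1.0, s) for s in t] + [u(-1.0, s) for s in t] \
              + [u(s, 1.0) for s in t] + [u(s, -1.0) for s in t]
print(max(abs(v) for v in boundary_vals))
```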
On a sequence of uniform triangulations $\{\mathcal{T}_h\}_{0<h<1}$,
we apply the numerical scheme \eqref{eq:nondiv-h} to the problem with
2D Hermite finite element spaces for polynomial degrees $k=3$ and
$k=4$. After computing \eqref{eq:nondiv-h} for various $h$, we report
the errors in Figure \ref{fig:test1}. The expected optimal convergence
rate $\|D^2 u -D^2 u_h\|_{L^2(\mathcal{T}_h)} = \mathcal{O}(h^{k-1})$
is observed, which is in agreement with Theorem
\ref{tm:nondiv-estimate}. Further, the experiments indicate that the
scheme converges with (sub-optimal) second order convergence in both
$H^1$ and $L^2$ when $k=3$. As for $k=4$, the $H^1$ error converges
with (optimal) fourth order, and the $L^2$ error converges with
(sub-optimal) fourth order.
\begin{figure}[!htbp]
\centering
\captionsetup{justification=centering}
\subfloat[Convergence rate, $k=3$]{
\includegraphics[width=0.48\textwidth]{Test1_k3-eps-converted-to.pdf}
\label{fig:test1-k3}
}%
\subfloat[Convergence rate, $k=4$]{
\includegraphics[width=0.48\textwidth]{Test1_k4-eps-converted-to.pdf}
\label{fig:test1-k4}
}
\caption{Convergence rate for the numerical scheme \eqref{eq:nondiv-h}
applied to the linear elliptic equations in non-divergence form \eqref{eq:nondiv} for
Experiment 1.}
\label{fig:test1}
\end{figure}
\subsection{Second experiment} \label{subsec:experiment2}
In this experiment, we consider the linear elliptic equations in non-divergence form with
lower-order terms on $\Omega = (-1,1)^2$:
$$
A:D^2u + \bm{b}\cdot \nabla u - cu = f \quad \text{in }\Omega, \qquad
u = 0 \quad\text{on }\partial\Omega.
$$
Here, $A$ is taken to be the same as in \eqref{eq:nondiv-test1}, $\bm{b} =
(x_1, x_2)^T$, and $c = 3$. By choosing $\lambda = 1$, we have
$$
\frac{|A|^2 + |\bm{b}|^2/(2\lambda) + (c/\lambda)^2}{({\rm tr} A +
c/\lambda)^2} = \frac{19 + (x_1^2+x_2^2)/2}{49}\leq \frac{20}{49},
$$
which means that the Cordes condition holds for $\varepsilon = 9/20$
(see Remark \ref{rk:lower-order}). The right hand side $f$ is chosen
so that the exact solution is \eqref{eq:nondiv-test1-u}. The scheme
converges with the optimal order $h^{k-1}$ in the broken $H^2$ norm,
as shown in Table \ref{tab:test2}. The convergence orders in the $H^1$
and $L^2$ norms are similar to those in Experiment 1.
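The bound $20/49$ and the resulting Cordes constant can be checked numerically. The sketch below assumes the Cordes condition with lower-order terms takes the form $\big(|A|^2 + |\bm{b}|^2/(2\lambda) + (c/\lambda)^2\big)/(\mathrm{tr}\,A + c/\lambda)^2 \le 1/(n+\varepsilon)$.

```python
import numpy as np

n, lam, c = 2, 1.0, 3.0
frob2_A = 10.0                      # |A|^2 for the matrix of Experiment 1
xs = np.linspace(-1.0, 1.0, 201)
X1, X2 = np.meshgrid(xs, xs)
num = frob2_A + (X1**2 + X2**2) / (2.0 * lam) + (c / lam) ** 2
den = (4.0 + c / lam) ** 2          # (tr A + c/lambda)^2 = 49
ratio = num / den
sup_ratio = ratio.max()             # attained at the corners, = 20/49
eps_max = 1.0 / sup_ratio - n       # largest admissible epsilon
print(sup_ratio, eps_max)
```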
\begin{table}[!htbp]
\centering
\captionsetup{justification=centering}
\scriptsize
\begin{tabular}{cc|cc|cc|cc}
\hline
&$h$ & $\|u-u_h\|_{L^2(\Omega)}$ & Order & $|u -
u_h|_{H^1(\Omega)}$ & Order & $\|D^2u -
D^2u_h\|_{L^2(\mathcal{T}_h)}$
& Order \\ \hline
&$2^{-2}$ &
1.72705E-03 & --- & 1.17301E-02 & --- & 1.41330E-01 & --- \\
&$2^{-3}$ &
4.10225E-04 & 2.07 & 2.33362E-03 & 2.33 & 3.59360E-02 & 1.98 \\
$k=3$ & $2^{-4}$ &
1.00457E-04 & 2.03 & 5.42524E-04 & 2.10 & 9.03321E-03 & 1.99 \\
&$2^{-5}$ &
2.49068E-05 & 2.01 & 1.33476E-04 & 2.02 & 2.26200E-03 & 2.00 \\
&$2^{-6}$ &
6.20697E-06 & 2.00 & 3.32792E-05 & 2.00 & 5.65735E-04 & 2.00 \\
\hline
&$2^{-1}$ &
1.78055E-03 & --- & 6.63776E-03 & --- & 6.80847E-02 & --- \\
&$2^{-2}$ &
1.21503E-04 & 3.87 & 4.62102E-04 & 3.84 & 8.63084E-03 & 2.98 \\
$k=4$ & $2^{-3}$ &
7.79999E-06 & 3.96 & 2.96137E-05 & 3.96 & 1.06983E-03 & 3.01 \\
&$2^{-4}$ &
4.88884E-07 & 4.00 & 1.85296E-06 & 4.00 & 1.32677E-04 & 3.01 \\
&$2^{-5}$ &
2.88593E-08 & 4.08 & 1.13437E-07 & 4.03 & 1.65056E-05 & 3.01 \\
\hline
\end{tabular}
\caption{Errors and observed convergence orders for Experiment 2.}
\label{tab:test2}
\end{table}
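The observed orders in Table \ref{tab:test2} are the base-2 logarithms of successive error ratios on the uniformly refined meshes ($h$ halved in each row); a quick recomputation from the tabulated broken $H^2$ errors:

```python
import math

# Broken H^2 errors from Table 2 (uniform refinement, h halved each row).
h2_err = {3: [1.41330e-01, 3.59360e-02, 9.03321e-03, 2.26200e-03, 5.65735e-04],
          4: [6.80847e-02, 8.63084e-03, 1.06983e-03, 1.32677e-04, 1.65056e-05]}

# Observed order between consecutive rows: log2(e_h / e_{h/2}).
orders = {k: [math.log2(e[i] / e[i + 1]) for i in range(len(e) - 1)]
          for k, e in h2_err.items()}
print(orders)
```

The recomputed orders approach $k-1$, i.e. the optimal rate $h^{k-1}$ in the broken $H^2$ norm.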
\subsection{Third experiment}
In this experiment, we solve the nonlinear HJB equations
\eqref{eq:HJB} in two dimensions on the domain $\Omega = (0,1)^2$.
Following \cite{smears2014discontinuous}, we take $\Lambda = [0,
\pi/3] \times \mathrm{SO}(2)$, where $\mathrm{SO}(2)$ is the set of
$2\times 2$ rotation matrices. The coefficients are given by
$\bm{b}^\alpha = 0$, $c^\alpha = \pi^2$, and
$$
A^\alpha = \frac{1}{2}\sigma^\alpha (\sigma^\alpha)^T, \qquad
\sigma^\alpha = R^T
\begin{pmatrix}
1 & \sin\theta \\
0 & \cos\theta
\end{pmatrix}, \qquad \alpha = (\theta, R) \in \Lambda.
$$
Since ${\rm tr} A^\alpha = 1$ and $|A^\alpha|^2 = (1+\sin^2\theta)/2 \leq
7/8$, the Cordes condition \eqref{eq:HJB-Cordes1} holds with
$\varepsilon = 1/7$ by taking $\lambda = 8\pi^2/7$. We choose
$f^\alpha = \sqrt{3}\sin^2\theta / \pi^2 + g$, with $g$ independent of
$\alpha$ chosen so that the exact solution of the HJB equations
\eqref{eq:HJB} is $ u(x_1, x_2) = \exp(x_1x_2) \sin(\pi x_1) \sin(\pi
x_2)$.
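The claimed values $\mathrm{tr}\,A^\alpha = 1$, $|A^\alpha|^2 = (1+\sin^2\theta)/2$ and the constant $\varepsilon = 1/7$ can be verified numerically. The sketch below assumes the Cordes condition with lower-order terms takes the form $\big(|A|^2 + |\bm{b}|^2/(2\lambda) + (c/\lambda)^2\big)/(\mathrm{tr}\,A + c/\lambda)^2 \le 1/(n+\varepsilon)$; here $\bm{b}^\alpha = 0$ and $c/\lambda = 7/8$.

```python
import numpy as np

n = 2
lam = 8.0 * np.pi**2 / 7.0
c = np.pi**2                       # so c/lam = 7/8; b^alpha = 0
worst = 0.0
rng = np.random.default_rng(0)
for theta in np.linspace(0.0, np.pi / 3.0, 200):
    phi = rng.uniform(0.0, 2.0 * np.pi)      # random rotation R in SO(2)
    R = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
    sigma = R.T @ np.array([[1.0, np.sin(theta)], [0.0, np.cos(theta)]])
    A = 0.5 * sigma @ sigma.T
    assert abs(np.trace(A) - 1.0) < 1e-12                      # tr A = 1
    frob2 = np.sum(A * A)
    assert abs(frob2 - (1 + np.sin(theta)**2) / 2) < 1e-12     # |A|^2
    ratio = (frob2 + (c / lam) ** 2) / (1.0 + c / lam) ** 2
    worst = max(worst, ratio)
eps_max = 1.0 / worst - n
print(worst, eps_max)    # worst = 7/15 at theta = pi/3, eps_max = 1/7
```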
On a sequence of uniform triangulations, we apply the numerical scheme
\eqref{eq:HJB-h} to the HJB equations \eqref{eq:HJB}. The finite element
spaces are defined by employing the 2D Hermite finite elements for
polynomial degrees $k=3$ and $k=4$. The plots of the errors given
in Figure \ref{fig:test3-k3-err} and \ref{fig:test3-k4-err} show that
the scheme converges with $\|u - u_h\|_{\lambda, h} =
\mathcal{O}(h^{k-1})$, which is in agreement with Theorem
\ref{tm:HJB-estimate}. The convergence orders in the $H^1$ and $L^2$ norms
are similar to those in Experiments 1 and 2.
In the semismooth Newton algorithm, the initial guess is $u_h^0 = 0$,
and the stopping criterion is set to be $\|u_h^{j} - u_h^{j-1}\|_{\lambda, h} <
10^{-8}$. The convergence histories shown in Figures
\ref{fig:test3-k3-newton} and \ref{fig:test3-k4-newton} demonstrate
the fast convergence of the algorithm.
\begin{figure}[!htbp]
\centering
\captionsetup{justification=centering}
\subfloat[Convergence rate, $k=3$]{
\includegraphics[width=0.48\textwidth]{Test3_k3_err-eps-converted-to.pdf}
\label{fig:test3-k3-err}
}%
\subfloat[Semismooth Newton, $k=3$]{
\includegraphics[width=0.48\textwidth]{Test3_k3_newton-eps-converted-to.pdf}
\label{fig:test3-k3-newton}
} \\
\subfloat[Convergence rate, $k=4$]{
\includegraphics[width=0.48\textwidth]{Test3_k4_err-eps-converted-to.pdf}
\label{fig:test3-k4-err}
} %
\subfloat[Semismooth Newton, $k=4$]{
\includegraphics[width=0.48\textwidth]{Test3_k4_newton-eps-converted-to.pdf}
\label{fig:test3-k4-newton}
}
\caption{Numerical scheme \eqref{eq:HJB-h} applied to the HJB equations
\eqref{eq:HJB} for Experiment 3: Convergence rate and convergence
histories of the semismooth Newton method.}
\label{fig:test3}
\end{figure}
\subsection{Fourth experiment}
In the last example, we consider a test case of the HJB equations
\eqref{eq:HJB} from \cite{smears2014discontinuous, gallistl2019mixed}
with near degenerate diffusion and a boundary layer in the solution.
Let $\Omega = (0,1)^2$, $\bm{b}^\alpha = (0,1)^T$, $c^\alpha = 10$,
and define
$$
A^\alpha = \alpha^T
\begin{pmatrix}
20 & 1 \\
1 & 1/10
\end{pmatrix}
\alpha, \qquad \forall \alpha \in \Lambda:= \mathrm{SO}(2).
$$
For this choice of parameters and $\lambda = 1/2$, the Cordes
condition \eqref{eq:HJB-Cordes1} is satisfied for $\varepsilon =
0.0024$ (cf. \cite{smears2014discontinuous}). Let $\delta = 0.01$ and
$f^\alpha=A^\alpha:D^2u + \bm{b}^\alpha \cdot \nabla u - c^\alpha u$,
where the exact solution is
$$
u(x) = (2x_1 - 1)\big(\exp(1-|2x_1 - 1|) - 1 \big) \Big(x_2 +
\frac{1-\exp(x_2/\delta)}{\exp(1/\delta) - 1} \Big).
$$
This solution exhibits a sharp boundary layer near the line
$\bar{\Omega} \cap \{x_2 = 1\}$. Following
\cite{smears2014discontinuous}, we use a sequence of graded bisection
meshes $\{\mathcal{T}_\ell\}_{\ell\in \mathbb{N}_0}$ with grading
factor $1/2$, see Figure \ref{fig:test4-mesh} for example.
More precisely, we mark those elements $T$ for which $|T| >
C(x_{2,T}-1)^2/\#\mathcal{T}_\ell$, where $x_{2,T}$ is the second
component of the barycenter of the element $T$ and $\#\mathcal{T}_\ell$ is
the number of elements in $\mathcal{T}_\ell$. In the experiment, we
set $C = 120$ and use the 2D Hermite finite element space of
polynomial degree $k=3$. In Figure \ref{fig:test4-k3}, we plot the
errors in the broken $H^2$-seminorm, the $H^1$-seminorm and the $L^2$
norm on the graded bisection meshes against the number of degrees of
freedom (ndof). We observe an error reduction of order
$\mathcal{O}(\text{ndof}^{-1})$, which confirms the second-order
convergence shown in Theorem \ref{tm:HJB-estimate}.
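The claimed Cordes constant for this near-degenerate problem can also be checked numerically. The sketch below assumes the same Cordes form as before, $\big(|A|^2 + |\bm{b}|^2/(2\lambda) + (c/\lambda)^2\big)/(\mathrm{tr}\,A + c/\lambda)^2 \le 1/(n+\varepsilon)$; since trace and Frobenius norm are rotation-invariant, the ratio is the same for every $\alpha \in \mathrm{SO}(2)$.

```python
import numpy as np

n, lam, c = 2, 0.5, 10.0
M = np.array([[20.0, 1.0], [1.0, 0.1]])
b = np.array([0.0, 1.0])
worst = 0.0
for phi in np.linspace(0.0, 2.0 * np.pi, 400):
    alpha = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
    A = alpha.T @ M @ alpha
    num = np.sum(A * A) + b @ b / (2.0 * lam) + (c / lam) ** 2
    den = (np.trace(A) + c / lam) ** 2
    worst = max(worst, num / den)
eps_max = 1.0 / worst - n
print(worst, eps_max)   # eps_max ~ 0.00248, so epsilon = 0.0024 is admissible
```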
\begin{figure}[!htbp]
\centering
\captionsetup{justification=centering}
\subfloat[Graded mesh at level 13: 3,587 vertices, 17,629 degrees of
freedom]{
\includegraphics[width=0.43\textwidth]{Test4_mesh-eps-converted-to.pdf}
\label{fig:test4-mesh}
} %
\subfloat[Convergence rate, $k=3$]{
\includegraphics[width=0.53\textwidth]{Test4_err-eps-converted-to.pdf}
\label{fig:test4-k3}
}
\caption{Convergence rate for the numerical scheme \eqref{eq:HJB-h}
applied to the HJB equations \eqref{eq:HJB} for
Experiment 4.}
\label{fig:test4}
\end{figure}
\section*{Acknowledgments}
The author would like to express his gratitude to Guangwei Gao and
Prof. Jun Hu at Peking University for their helpful discussions.
\bibliographystyle{siamplain}
\section{Introduction}
In recent years helical curves have been studied widely in Euclidean and non-Euclidean spaces. Generalized helices, slant helices, isophote curves and relatively normal slant helices are such curves. When we study curves, we need a frame of reference along the curve; as a result, many moving frames along a curve have been established, among which the Frenet frame and the Darboux frame are prominent. We obtain a type of helical curve when one of the vector fields in the chosen frame makes a constant angle with a fixed vector. Helical curves are fascinating curves that have attracted attention in a wide range of disciplines such as Mathematics, Physics, Biology, Architecture and Computer Science.
\noindent In 3-dimensional Euclidean space $E^3$, a curve $\gamma$ with moving Frenet frame $\{ T, n, b\}$ is a helix if its tangent vector field $T$ makes a constant angle with a fixed direction $d$, called the axis of the helix. Helices are characterized by the fact that the ratio $\tau/\kappa$ is constant along the curve, where $\tau$ and $\kappa$ stands for the torsion and curvature of $\gamma$, respectively [3]. The notion of helix in Minkowski 3-space $E_1^3$ is developed similarly. In $[7]$, Izumiya and Takeuchi defined a slant helix in $E^3$, by the property that the principal normal $n$ to the curve makes a constant angle with a fixed direction, called its axis. In $[1]$, Ali and Lopez looked into slant helices in Minkowski $3$-space $E_1^3$ and they obtained some characterizations of slant helix and its axis.
\noindent Using the Darboux frame $ \{ T, B = N \times T, N\}$ of a curve lying on a surface with unit normal $N$, Dogan and Yayli investigated isophote curves in Euclidean space in 2015 $[4]$. They found the axis of an isophote curve via its Darboux frame and then gave some characterizations of the isophote curve and its axis in Euclidean 3-space. An isophote curve is defined as a locus of points of a surface at which the normal to the surface makes a constant angle with a fixed direction, called its axis. Isophote curves are among the characteristic curves on a surface, such as geodesics, asymptotic curves or lines of curvature. In [5], Dogan defined isophote curves on timelike surfaces in Minkowski 3-space and found the axes of spacelike and timelike isophote curves via their Darboux frames. In [6], Dogan and Yayli defined isophote curves on spacelike surfaces in Lorentz-Minkowski space and found their axes as timelike and spacelike vectors via the Darboux frame. They also gave some relations between isophote curves and special curves on surfaces such as geodesic curves, asymptotic curves or lines of curvature.
\noindent Recently, Macit and Duldul $[9]$ defined the relatively normal-slant helix on a surface in the Euclidean space $E^3$ via Darboux frame $\{T, B = N \times T, N\}$, along the curve whose vector field $B$ makes a constant angle with a fixed direction, called its axis. They gave some characterizations for such curves and obtained relations between some special curves (general helices, integral curves, etc.) and relatively
normal-slant helices. In this paper we study relatively normal-slant helices in Minkowski 3-space $E_1^3$, where we obtain the axis of a relatively normal-slant helix and derive some characterizations of the curve. Dogan $[5]$, while investigating isophote curves, noticed that a curve which is both a geodesic and a slant helix is an isophote curve; interestingly, we find that a curve which is both an asymptotic curve and a slant helix is a relatively normal-slant helix. The paper is arranged as follows: In section 2, we discuss some basic theory of unit speed parametrized curves on a smooth surface in Minkowski $3$-space $E_1^3$. In sections 3, 4 and 5, we find the axis of a relatively normal-slant helix on spacelike and timelike surfaces. In section 6, we establish characterization theorems for spacelike and timelike relatively normal-slant helices in Minkowski $3$-space $\mathbb{E}_1^3$. Finally, we find relationships between relatively normal-slant helices and slant helices on timelike as well as spacelike surfaces.
\section{Preliminaries}
First of all, we give a brief introduction to Minkowski $3$-space $E_1^3$. The space $\mathbb{E}_1^3$ is a three-dimensional real vector space endowed with the dot product
\begin{equation}
\langle x, y \rangle = - x_1y_1 + x_2y_2 + x_3y_3,
\end{equation}
\noindent where $x=(x_1, x_2, x_3)$, $y=(y_1, y_2, y_3)$ $\in E_1^3$. This space is also known as Lorentz-Minkowski space. A vector $x \in E_1^3$ is said to be spacelike when $\langle x, x \rangle >$ 0 or x = 0, timelike when $\langle x, x \rangle <$ 0 and lightlike(null) when $\langle x, x \rangle$ = 0. A curve $\gamma: I \rightarrow E_1^3$ is called spacelike, timelike or lightlike when the velocity vector of the curve is spacelike, timelike or lightlike, respectively. While a surface $M$ is called spacelike, timelike or lightlike when the unit normal of the surface is timelike, spacelike or lightlike, respectively.
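A minimal computational sketch of this dot product and the induced causal characters (the function names are our own):

```python
def mdot(x, y):
    """Lorentzian dot product on E_1^3: <x,y> = -x1*y1 + x2*y2 + x3*y3."""
    return -x[0] * y[0] + x[1] * y[1] + x[2] * y[2]

def causal_character(x):
    """Classify a vector of E_1^3 as 'spacelike', 'timelike' or 'lightlike'."""
    q = mdot(x, x)
    if q > 0 or x == (0.0, 0.0, 0.0):   # x = 0 counts as spacelike
        return "spacelike"
    return "timelike" if q < 0 else "lightlike"

print(causal_character((1.0, 0.0, 0.0)),   # timelike
      causal_character((0.0, 1.0, 0.0)),   # spacelike
      causal_character((1.0, 1.0, 0.0)))   # lightlike
```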
\noindent The Lorentzian cross product of $x$ = ($x_1, x_2, x_3$) and $y$ = ($y_1, y_2, y_3$) $\in E_1^3$ is defined as follows
$x \times y =$ $
\begin{vmatrix}
e_1 & -e_2 & -e_3\\
x_1 & x_2 & x_3\\
y_1 & y_2 & y_3
\end{vmatrix}
$ = $(x_2y_3 - x_3y_2, x_1y_3 - x_3y_1, x_2y_1 - x_1y_2)$,
\noindent where $e_i = (\delta_{i1}, \delta_{i2}, \delta_{i3})$, $ \delta_{ij} $ is Kronecker delta and $e_1 \times e_2$ = $-e_3$, $e_2 \times e_3 = e_1$, $e_3 \times e_1 = -e_2$.
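The determinant formula can be checked directly against the stated basis relations, together with the Lorentzian orthogonality $\langle x \times y, x\rangle = \langle x \times y, y\rangle = 0$; a small self-contained sketch (the dot product is redefined locally):

```python
def mdot(x, y):
    # Lorentzian dot product (2.1)
    return -x[0] * y[0] + x[1] * y[1] + x[2] * y[2]

def mcross(x, y):
    # Lorentzian cross product from the determinant formula above
    return (x[1] * y[2] - x[2] * y[1],
            x[0] * y[2] - x[2] * y[0],
            x[1] * y[0] - x[0] * y[1])

e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(mcross(e1, e2), mcross(e2, e3), mcross(e3, e1))
# e1 x e2 = -e3, e2 x e3 = e1, e3 x e1 = -e2
```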
\noindent Let $\{ T, n, b\}$ be the moving Frenet frame along the curve $\gamma$ with arc-length parameter $s$ with curvature $\kappa$ and torsion $\tau$.
\noindent For a spacelike curve $\gamma$, the Frenet-Serret equations are given by
\begin{center}
$ \begin{bmatrix}
T' \\
n' \\
b'
\end{bmatrix} = \begin{bmatrix}
0 & \kappa & 0\\
-\epsilon\kappa & 0 & \tau\\
0 & \tau & 0
\end{bmatrix} \begin{bmatrix}
T \\
n\\
b
\end{bmatrix},$
\end{center}
\noindent where $\langle T, T \rangle$ = 1, $\langle n, n \rangle$ = $\epsilon$, $\langle b, b \rangle$ = $-\epsilon$, $\langle T, b \rangle$ = $\langle T, n \rangle$ = $\langle n, b \rangle$ = 0. When $\epsilon = 1$, $\gamma$ is a spacelike curve with spacelike principal normal $n$ and timelike binormal $b$, while if $\epsilon = -1$ then $\gamma$ is a spacelike curve with timelike principal normal $n$ and spacelike binormal $b$.
\noindent For a timelike curve $\gamma$, the Frenet-Serret equations are given by
\begin{center}
$ \begin{bmatrix}
T' \\
n' \\
b'
\end{bmatrix} = \begin{bmatrix}
0 & \kappa & 0\\
\kappa & 0 & \tau\\
0 & -\tau & 0
\end{bmatrix} \begin{bmatrix}
T \\
n\\
b
\end{bmatrix},$
\end{center}
\noindent where $\langle T, T\rangle$ = -1, $\langle n, n\rangle$ = 1, $\langle b, b\rangle$ = 1, $\langle T, n\rangle$ = $\langle T, b\rangle$ = $\langle n, b\rangle$ = 0.
\noindent \begin{definition}$[12]$ Let $v$ and $w$ be two spacelike vectors of $E_1^3$. Then we have the following:
\noindent (a) If $v$ and $w$ span a spacelike vector subspace of $E_1^3$, then there is a unique non-negative real number $\theta \geq 0$ such that $\langle v, w \rangle = \|v\| \|w\| \cos\theta$.
\noindent (b) If $v$ and $w$ span a timelike vector subspace of $E_1^3$, then there is a unique non-negative real number $\theta \geq 0$ such that $\langle v, w \rangle = \|v\| \|w\| \cosh\theta$.
\end{definition}
\begin{definition} $[12]$ Let $v$ be a spacelike vector and $w$ be a timelike vector in $\mathbb{E}_1^3$. Then there is a unique non-negative real number $\theta \geq$ 0, such that
\begin{center}
\noindent $\langle v, w \rangle = \|v\| \|w\| \sinh\theta$.
\end{center}
\end{definition}
\begin{definition} $[10]$ Let $v$ and $w$ be timelike vectors in the same time cone of $\mathbb{E}_1^3$, i.e. $\langle v, w\rangle < 0$. Then there is a unique non-negative real number $\theta \geq$ 0, such that
\begin{center}
\noindent $\langle v, w \rangle = -\|v\| \|w\| \cosh\theta$.
\end{center}
\end{definition}
\noindent Let $M$ be a smooth spacelike surface in $\mathbb{E}_1^3$ and $\gamma: I \rightarrow E_1^3$ be a unit speed spacelike curve on the surface. Then the Darboux frame $\{T, B = N \times T, N \}$ is well-defined and positively oriented along the curve, where $T$ is the tangent vector field of $\gamma$, $N$ is the unit normal of $M$ and $B$ is the intrinsic normal of $\gamma$. The Darboux equations are given by
\begin{align}\label{2.2}
T^{'} = \kappa_g B + \kappa_n N,\;
B^{'} = - \kappa_g T + \tau_g N,\;
N^{'} = \kappa_n T + \tau_g B,
\end{align}
\noindent where $\kappa_{g}$, $\kappa_{n}$ and $\tau_{g}$ are the geodesic curvature, normal curvature and geodesic torsion, respectively, and $\langle T, T\rangle = \langle B, B\rangle =1$, $\langle N, N\rangle = -1$, $\langle n, n\rangle = 1$.
\noindent By using \eqref{2.2} we get,
\begin{equation}\label{2.3}
\kappa^{2} = \kappa_g^{2} - \kappa_n^{2},\;
\kappa_n = \kappa \sinh \phi, \;
\kappa_g = \kappa \cosh \phi, \;
\tau_g = \tau + \phi^{'},
\end{equation}
\noindent where $\phi$ is the angle between the surface normal $N$ and the principal normal $n$ to the curve $\gamma$.
\noindent Now, let $M$ be a smooth timelike surface in $E_1^3$. Then the tangent space of the surface is timelike; therefore a curve lying on the surface can be either timelike or spacelike. Let $\gamma : I \rightarrow E_1^3$ be a unit speed timelike curve lying on the surface $M$. Then the Darboux equations are given by
\begin{equation}\label{2.4}
T^{'} = \kappa_g B + \kappa_n N, \;
B^{'} = \kappa_g T - \tau_g N, \;
N^{'} = \kappa_n T + \tau_g B,
\end{equation}
\noindent
where $\kappa_{g}$, $\kappa_{n}$ and $\tau_{g}$ are the geodesic curvature, normal curvature and geodesic torsion, respectively and $\langle T, T\rangle = -1, \langle B, B\rangle = \langle N, N\rangle = \langle n, n\rangle = 1$.
\noindent By using \eqref{2.4} we get,
\begin{equation}\label{2.5}
\kappa^{2} = \kappa_g^{2} + \kappa_n^{2}, \;
\kappa_n = \kappa \cos \phi, \;
\kappa_g = \kappa \sin \phi, \;
\tau_g = \tau + \phi^{'},
\end{equation}
\noindent where $\phi$ is the angle between the surface normal $N$ and the principal normal $n$ to the curve $\gamma$.
\noindent
If $\gamma : I \rightarrow E_1^3$ is a unit speed spacelike curve on the timelike surface $M$, then Darboux equations are given by
\begin{equation}\label{2.6}
T^{'} = \kappa_g B - \kappa_n N, \;
B^{'} = \kappa_g T + \tau_g N, \;
N^{'} = \kappa_n T + \tau_g B,
\end{equation}
\noindent
where $\kappa_{g}$, $\kappa_{n}$ and $\tau_{g}$ are the geodesic curvature, normal curvature and geodesic torsion, respectively, and $\langle T, T\rangle = \langle N, N\rangle = \langle n, n\rangle = 1, \langle B, B\rangle = -1$.
\noindent By using \eqref{2.6} we get,
\begin{equation}\label{2.7}
\kappa^{2} = \kappa_n^{2} - \kappa_g^{2}, \;
\kappa_n = \kappa \cosh \phi, \;
\kappa_g = \kappa \sinh \phi, \;
\tau_g = \tau + \phi^{'} ,
\end{equation}
\noindent where $\phi$ is the angle between the surface normal $N$ and the principal normal $n$ to the curve $\gamma$.
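The angle relations in \eqref{2.3}, \eqref{2.5} and \eqref{2.7} rest on the identities $\cosh^2\phi - \sinh^2\phi = 1$ and $\cos^2\phi + \sin^2\phi = 1$; in particular, the hyperbolic decompositions force $\kappa_g^2 - \kappa_n^2 = \kappa^2$ in \eqref{2.3} and $\kappa_n^2 - \kappa_g^2 = \kappa^2$ in \eqref{2.7}. A quick numerical check:

```python
import math

# Consistency of the curvature decompositions:
# (2.3): kappa_n = k sinh(phi), kappa_g = k cosh(phi) => kappa_g^2 - kappa_n^2 = k^2
# (2.5): kappa_n = k cos(phi),  kappa_g = k sin(phi)  => kappa_g^2 + kappa_n^2 = k^2
# (2.7): kappa_n = k cosh(phi), kappa_g = k sinh(phi) => kappa_n^2 - kappa_g^2 = k^2
for kappa in (0.5, 1.0, 2.0):
    for phi in (-1.0, 0.0, 0.3, 1.7):
        assert abs((kappa * math.cosh(phi))**2
                   - (kappa * math.sinh(phi))**2 - kappa**2) < 1e-9
        assert abs((kappa * math.sin(phi))**2
                   + (kappa * math.cos(phi))**2 - kappa**2) < 1e-9
print("curvature decompositions consistent")
```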
\begin{theorem} $[1]$ \label{2.4}
Let $\gamma$ be a unit speed spacelike curve in $E_1^3$. If the normal vector of $\gamma$ is spacelike, then $\gamma$ is a slant helix if and only if one of the two functions
$\frac{\kappa^2}{(\kappa^2 - \tau^2)^{3/2}} \left (\frac{\tau}{\kappa} \right )' $ \; and $ \frac{\kappa^2}{(\tau^2 - \kappa^2)^{3/2}} \left (\frac{\tau}{\kappa} \right )' $
is constant wherever $\tau^2 - \kappa^2$ does not vanish.
\end{theorem}
\begin{theorem} $[1]$ \label{2.5}
Let $\gamma$ be a unit speed timelike curve in $E_1^3$. Then $\gamma$ is a slant helix if and only if one of the two functions
$\frac{\kappa^2}{(\kappa^2 - \tau^2)^{3/2}} \left (\frac{\tau}{\kappa} \right )' $ \; and $ \frac{\kappa^2}{(\tau^2 - \kappa^2)^{3/2}} \left (\frac{\tau}{\kappa} \right )' $
is constant wherever $\tau^2 - \kappa^2$ does not vanish.
\end{theorem}
\section{The axis of a spacelike relatively normal-slant helix on a spacelike surface}
\noindent Let $\gamma$ be a unit speed spacelike curve on a spacelike surface $M$ and $\{T, B, N\}$ be the Darboux frame along $\gamma$. The curve $\gamma$ is called a relatively normal-slant helix if the vector field $B$ of $\gamma$ makes a constant angle with a fixed direction $d$. The unit vector $d$ is called the axis of the relatively normal-slant helix. In this section, we find the fixed vector (axis) of a spacelike relatively normal-slant helix via the Darboux frame on a spacelike surface immersed in Minkowski $3$-space. We examine two different cases for the axis $d$.
\noindent $\textbf{Case(1).}$
If the vector $d$ is a spacelike vector, then since the intrinsic normal $B$ to the curve $\gamma$ is spacelike, from Definition 2.1(a) we have $\langle B, d\rangle = \cos \theta $, and from Definition 2.1(b) we have $\langle B, d\rangle = \cosh \beta $, where $\theta$ and $\beta$ are the corresponding fixed angles between the vectors $B$ and $d$.
\noindent $\textbf{(a)}$ Let $\langle B, d\rangle = \cosh \beta.$ If we differentiate this equation with respect to $s$ along the curve $\gamma$ and use \eqref{2.2}, we get $\langle B', d\rangle = 0\;\implies \langle -\kappa_g T + \tau_g N, d\rangle = 0 \;\implies \langle T, d\rangle = \frac{\tau_g}{\kappa_g} \langle N, d\rangle.$ Let us say $\langle N, d\rangle = a$; then $\langle T, d\rangle = a\frac{\tau_g}{\kappa_g}$, and hence the axis can be written as $d = a\frac{\tau_g}{\kappa_g} T + \cosh \beta B - a N$.
\noindent Then
$\langle d, d\rangle = a^2(\frac{\tau_g}{\kappa_g})^2 + \cosh^2 \beta - a^2 = 1$, which gives $a = \pm \frac{\kappa_g}{\sqrt{\kappa_g^2 - \tau_g^2}} \sinh \beta$.
\noindent Thus the spacelike axis $d$ is given as,
\begin{equation}\label{3.1}
d = \pm \frac{\tau_g}{\sqrt{\kappa_g^2 - \tau_g^2}} \sinh \beta T+ \cosh \beta B \mp \frac{\kappa_g}{\sqrt{\kappa_g^2 - \tau_g^2}} \sinh \beta N.
\end{equation}
If we differentiate $B'$ in \eqref{2.2} and $\langle B', d\rangle = 0$ with respect to $s$, we get
\begin{equation} \label{3.2}
B''= (- \kappa_g' + \tau_g \kappa_n) T + (\tau_g^2 - \kappa_g^2) B + (\tau_g' - \kappa_g \kappa_n) N \quad \text{and} \quad \langle B'', d\rangle = 0.
\end{equation}
From \eqref{3.1} and \eqref{3.2}, we get
\begin{equation}
\langle B'', d\rangle = \pm \frac{(\kappa_g \tau_g' - \kappa_g' \tau_g) - \kappa_n (\kappa_g^2 -\tau_g^2) }{\sqrt{\kappa_g^2 - \tau_g^2}} \sinh \beta - (\kappa_g^2 -\tau_g^2) \cosh \beta = 0, \nonumber
\end{equation}
which implies
\begin{equation} \label{3.3}
\coth \beta = \pm \left [ \frac{\kappa_g^2}{(\kappa_g^2 - \tau_g^2)^{3/2}} \left (\frac{\tau_g}{\kappa_g} \right )' - \frac{\kappa_n}{(\kappa_g^2 - \tau_g^2)^{1/2}} \right ].
\end{equation}
\noindent Now we prove that $d$ is a constant vector, i.e., $d' = 0$. So we differentiate $d$ in (3.1) with respect to $s$ and obtain
\begin{align}
d' =& \pm \sinh \beta \left [ (\frac{\tau_g}{\sqrt{\kappa_g^2 - \tau_g^2}} )' T + \frac{\tau_g}{\sqrt{\kappa_g^2 - \tau_g^2}}(\kappa_g B + \kappa_n N) \right ] \nonumber \\
& \pm \sinh \beta \left [ -(\frac{\kappa_g}{\sqrt{\kappa_g^2 - \tau_g^2}} )' N - \frac{\kappa_g}{\sqrt{\kappa_g^2 - \tau_g^2}}(\kappa_n T + \tau_g B) \right ] + \cosh \beta ( -\kappa_g T + \tau_g N ), \nonumber
\end{align}
which implies
\begin{align}
d' =& \pm \left (\sinh \beta \left [ (\frac{\tau_g}{\sqrt{\kappa_g^2 - \tau_g^2}} )' - \frac{\kappa_n \kappa_g}{\sqrt{\kappa_g^2 - \tau_g^2}} \right ] - \kappa_g \cosh \beta \right ) T \\
& \pm \left ( \sinh \beta \left [ \frac{\tau_g \kappa_n}{\sqrt{\kappa_g^2 - \tau_g^2}} - (\frac{\kappa_g}{\sqrt{\kappa_g^2 - \tau_g^2}} )' \right ] + \tau_g \cosh \beta \right ) N. \nonumber
\end{align}
Also from (3.3), we get
\begin{equation}
\cosh \beta = \pm \sinh \beta \; \frac{\tau_g' \kappa_g - \kappa_g' \tau_g - \kappa_n(\kappa_g^2 - \tau_g^2)}{(\kappa_g^2 - \tau_g^2)^{3/2}}.
\end{equation}
Now from (3.4) and (3.5), we get
\begin{center}
$d' = \pm \sinh \beta$ $\begin{pmatrix}
\frac{\tau_g'(\kappa_g^2 - \tau_g^2) - \tau_g \kappa_g \kappa_g' + \tau_g^2 \tau_g' - \kappa_n \kappa_g (\kappa_g^2 - \tau_g^2)}{(\kappa_g^2 - \tau_g^2)^{3/2}} \\
- \frac{\tau_g' \kappa_g^2 - \kappa_g' \kappa_g \tau_g - \kappa_n \kappa_g(\kappa_g^2 - \tau_g^2)}{(\kappa_g^2 - \tau_g^2)^{3/2}}
\end{pmatrix}$ T
$\pm \sinh \beta$ $\begin{pmatrix}
\frac{\tau_g \kappa_n(\kappa_g^2 - \tau_g^2) - \kappa_g' (\kappa_g^2 - \tau_g^2) + \kappa_g^2 \kappa_g' - \kappa_g \tau_g \tau_g'}{(\kappa_g^2 - \tau_g^2)^{3/2}} \\
+ \frac{\tau_g \tau_g' \kappa_g - \kappa_g' \tau_g^2 - \kappa_n \tau_g(\kappa_g^2 - \tau_g^2)}{(\kappa_g^2 - \tau_g^2)^{3/2}}
\end{pmatrix}$ N.
\end{center}
By a straightforward calculation we see that the above expression vanishes, i.e., $d' = 0$. Hence $d$ is a constant vector.
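\noindent Indeed, for the $T$-component the numerator cancels term by term (the $\kappa_n$-terms cancel directly between the two fractions):
\begin{align}
&\left [ \tau_g'(\kappa_g^2 - \tau_g^2) - \tau_g \kappa_g \kappa_g' + \tau_g^2 \tau_g' \right ] - \left [ \tau_g' \kappa_g^2 - \kappa_g' \kappa_g \tau_g \right ] \nonumber \\
&= \tau_g' \kappa_g^2 - \tau_g' \tau_g^2 + \tau_g^2 \tau_g' - \tau_g' \kappa_g^2 - \tau_g \kappa_g \kappa_g' + \kappa_g' \kappa_g \tau_g = 0, \nonumber
\end{align}
and the $N$-component vanishes by the same kind of cancellation.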
\noindent $\textbf{(b)}$ Let $\langle B, d\rangle = \cos \theta$. Proceeding in a similar way as in Case 1(a), we conclude that $\langle T, d \rangle = \frac{\tau_g}{\kappa_g} \langle N, d \rangle$. If we consider $\langle N, d\rangle = a$, then $d = a \frac{\tau_g}{\kappa_g} T + \cos \theta B - a N$. Now since $d$ is spacelike, $1 = \langle d, d \rangle = a^2 (\frac{\tau_g}{\kappa_g})^2 + \cos^2\theta - a^2 $, so $ a = \pm \frac{\kappa_g}{\sqrt{\tau_g^2 - \kappa_g^2}} \sin \theta.$
\noindent
Thus the spacelike axis $d$ can be written as,
\begin{equation}\label{3.6}
d = \pm \frac{\tau_g}{\sqrt{\tau_g^2 - \kappa_g^2}} \sin \theta T + \cos \theta B \mp \frac{\kappa_g}{\sqrt{\tau_g^2 - \kappa_g^2}} \sin \theta N.
\end{equation}
Now, since $B'' = (- \kappa_g' + \tau_g \kappa_n) T + (\tau_g^2 - \kappa_g^2) B + (\tau_g' - \kappa_g \kappa_n) N$ and $\langle B'', d \rangle = 0$, proceeding as in Case 1(a), we obtain
\begin{equation} \label{3.7}
\cot \theta = \mp \left [ \frac{\kappa_g^2}{(\tau_g^2 - \kappa_g^2)^{3/2}} \left (\frac{\tau_g}{\kappa_g} \right )' - \frac{\kappa_n}{(\tau_g^2 - \kappa_g^2)^{1/2}} \right ].
\end{equation}
\noindent
On the other hand, similar calculations show that $d' = 0$; consequently, $d$ is a constant vector.
\noindent
$\textbf{Case 2.}$ Let the axis $d$ be timelike. Then by Definition 2.2, $\langle B, d\rangle = \sinh \alpha$, where $\alpha$ is the constant angle between $B$ and $d$. In this case, we get
\begin{equation}\label{3.8}
d = \pm \frac{\tau_g}{\sqrt{\kappa_g^2 - \tau_g^2}} \cosh \alpha \; T + \sinh \alpha\; B \mp \frac{\kappa_g}{\sqrt{\kappa_g^2 - \tau_g^2}} \cosh \alpha \;N,
\end{equation}
\begin{equation} \label{3.9}
\tanh \alpha = \pm \left [ \frac{\kappa_g^2}{(\kappa_g^2 - \tau_g^2)^{3/2}} \left (\frac{\tau_g}{\kappa_g} \right )' - \frac{\kappa_n}{(\kappa_g^2 - \tau_g^2)^{1/2}} \right ].
\end{equation}
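\noindent For completeness, (3.8) is obtained exactly as in Case 1(a): differentiating $\langle B, d\rangle = \sinh \alpha$ gives $\langle T, d\rangle = \frac{\tau_g}{\kappa_g} \langle N, d\rangle$, so writing $\langle N, d\rangle = a$ we have $d = a \frac{\tau_g}{\kappa_g} T + \sinh \alpha \, B - a N$, and the condition $\langle d, d\rangle = -1$ yields
\begin{equation}
a^2 \frac{\tau_g^2}{\kappa_g^2} + \sinh^2 \alpha - a^2 = -1, \quad \text{i.e.} \quad a = \pm \frac{\kappa_g}{\sqrt{\kappa_g^2 - \tau_g^2}} \cosh \alpha. \nonumber
\end{equation}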
\section{The axis of a spacelike relatively normal-slant helix on a timelike surface}
Let $\gamma$ be a unit speed spacelike curve on a timelike surface $M$ and $\{T, B, N\}$ be the Darboux frame along $\gamma(s)$. In this section, we find the fixed vector (axis) of a spacelike relatively normal-slant helix via the Darboux frame on a timelike surface immersed in Minkowski $3$-space. We examine the two different cases of the axis $d$. Since $T$ and $N$ are spacelike, $B$ has to be a timelike vector.
\noindent $\textbf{Case 1.}$
Let $d$ be a timelike vector such that $B$ and $d$ are in the same time-cone, i.e., $\langle B, d\rangle < 0$. Then by Definition 2.3, we have $\langle B, d\rangle = - \cosh \delta$, where $\delta$ is the constant angle between $B$ and $d$. Differentiating $\langle B, d\rangle = - \cosh \delta$ with respect to $s$, we get $\langle B', d\rangle = 0$; then by using (2.6) for $B'$, we obtain $\langle T, d\rangle = - \frac{\tau_g}{\kappa_g} \langle N, d\rangle$. Let us say $\langle N, d\rangle = a$; then the timelike axis $d$ can be written as
\begin{equation} \label{4.1}
d = - a \frac{\tau_g}{\kappa_g} T + \cosh \delta B + a N.
\end{equation}
Since $\langle d, d\rangle = -1$, from (4.1) we get $a = \pm \frac{\kappa_g}{\sqrt{\kappa_g^2 + \tau_g^2}} \sinh \delta$ and hence
\begin{equation}\label{4.2}
d = \mp \frac{\tau_g}{\sqrt{\kappa_g^2 + \tau_g^2}} \sinh \delta\; T+ \cosh \delta\; B \pm \frac{\kappa_g}{\sqrt{\kappa_g^2 + \tau_g^2}} \sinh \delta \;N.
\end{equation}
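\noindent Explicitly, since $T$ and $N$ are spacelike while $B$ is timelike, the condition $\langle d, d\rangle = -1$ applied to (4.1) reads
\begin{equation}
a^2 \frac{\tau_g^2}{\kappa_g^2} - \cosh^2 \delta + a^2 = -1, \quad \text{i.e.} \quad a^2 \, \frac{\kappa_g^2 + \tau_g^2}{\kappa_g^2} = \sinh^2 \delta, \nonumber
\end{equation}
which gives $a = \pm \frac{\kappa_g}{\sqrt{\kappa_g^2 + \tau_g^2}} \sinh \delta$ as claimed.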
\noindent Also $\langle B'', d \rangle = 0$; proceeding as in Case 1(a) of Section 3, we obtain
\begin{equation} \label{4.3}
\coth \delta = \pm \left [ \frac{\kappa_g^2}{(\kappa_g^2 + \tau_g^2)^{3/2}} \left (\frac{\tau_g}{\kappa_g} \right )' - \frac{\kappa_n}{(\kappa_g^2 + \tau_g^2)^{1/2}} \right ] .
\end{equation}
\noindent $\textbf{Case 2.}$
If $d$ is a spacelike vector, then by Definition 2.2, $\langle B, d\rangle = \sinh \zeta$. In this case, we obtain
\begin{equation}\label{4.4}
d = \mp \frac{\tau_g}{\sqrt{\kappa_g^2 + \tau_g^2}} \cosh \zeta\; T - \sinh \zeta\; B \pm \frac{\kappa_g}{\sqrt{\kappa_g^2 + \tau_g^2}} \cosh \zeta \;N,
\end{equation}
\begin{equation} \label{4.5}
\tanh \zeta = \mp \left [ \frac{\kappa_g^2}{(\kappa_g^2 + \tau_g^2)^{3/2}} \left (\frac{\tau_g}{\kappa_g} \right )' - \frac{\kappa_n}{(\kappa_g^2 + \tau_g^2)^{1/2}} \right ].
\end{equation}
\section{The axis of a timelike relatively normal-slant helix on a timelike surface}
\noindent Let $\gamma$ be a unit speed timelike relatively normal slant helix on a timelike surface $M$. Then $T$ is a timelike vector and $N$ is a spacelike vector, which implies that $B$ is a spacelike vector. In this situation, we have two different cases for the axis $d$ of the curve $\gamma$.
\noindent
$\textbf{Case 1.}$ If the vector $d$ is a spacelike vector, then from Definition 2.1(a) we have $\langle B, d\rangle = \cos \nu $, and from Definition 2.1(b) we have $\langle B, d\rangle = \cosh \xi $, where $\nu$ and $\xi$ are the constant angles between the vectors $B$ and $d$, respectively.
\noindent \textbf{(a)} Let $\langle B, d\rangle = \cosh \xi$. Then differentiating this equation with respect to $s$ and using the Darboux equations (2.4), we obtain
\begin{equation} \label{5.1}
d = - a \frac{\tau_g}{\kappa_g} T + \cosh \xi B + a N,
\end{equation}
where $a = \langle N, d\rangle$. Also $\langle d, d\rangle = 1$, which gives $a = \pm \frac{\kappa_g}{\sqrt{\tau_g^2 - \kappa_g^2}} \sinh \xi$, and hence we obtain from (5.1)
\begin{equation}\label{5.2}
d = \mp \frac{\tau_g}{\sqrt{\tau_g^2 - \kappa_g^2}} \sinh \xi \; T + \cosh \xi \;B \pm \frac{\kappa_g}{\sqrt{\tau_g^2 - \kappa_g^2}} \sinh \xi \;N.
\end{equation}
Now by using $\langle B'', d\rangle = 0$, we obtain
\begin{equation} \label{5.3}
\coth \xi = \mp \left [ \frac{\kappa_g^2}{(\tau_g^2 - \kappa_g^2)^{3/2}} \left (\frac{\tau_g}{\kappa_g} \right )' + \frac{\kappa_n}{(\tau_g^2 - \kappa_g^2)^{1/2}} \right ].
\end{equation}
\textbf{(b)} Let $\langle B, d\rangle = \cos \nu$. Then, in a similar way as in Case 1(a), we obtain
\begin{equation}\label{5.4}
d = \mp \frac{\tau_g}{\sqrt{\kappa_g^2 - \tau_g^2}} \sin \nu \; T + \cos \nu \; B \pm \frac{\kappa_g}{\sqrt{\kappa_g^2 - \tau_g^2}} \sin \nu \;N,
\end{equation}
\begin{equation} \label{5.5}
\cot \nu = \mp \left [ \frac{\kappa_g^2}{(\kappa_g^2 - \tau_g^2)^{3/2}} \left (\frac{\tau_g}{\kappa_g} \right )' + \frac{\kappa_n}{(\kappa_g^2 - \tau_g^2)^{1/2}} \right ].
\end{equation}
\noindent $\textbf{Case 2.}$ When $d$ is timelike, by Definition 2.2 we have $\langle B, d\rangle = \sinh \psi$. In this case we obtain
\begin{equation}\label{5.6}
d = \mp \frac{\tau_g}{\sqrt{\tau_g^2 - \kappa_g^2}} \cosh \psi \; T + \sinh \psi \;B \pm \frac{\kappa_g}{\sqrt{\tau_g^2 - \kappa_g^2}} \cosh \psi \;N,
\end{equation}
\begin{equation} \label{5.7}
\tanh \psi = \mp \left [ \frac{\kappa_g^2}{(\tau_g^2 - \kappa_g^2)^{3/2}} \left (\frac{\tau_g}{\kappa_g} \right )' + \frac{\kappa_n}{(\tau_g^2 - \kappa_g^2)^{1/2}} \right ].
\end{equation}
\section{Main Theorems}
In this section, we give the main theorems that characterize relatively normal slant helices on a smooth surface immersed in Minkowski $3$-space.
\begin{theorem} \label{6.1}
A unit speed spacelike curve on a spacelike surface is a relatively normal slant helix if and only if any one of the following three functions,
\noindent
(i) $\tanh \alpha = \mu(s) = \pm \left [ \frac{\kappa_g^2}{(\kappa_g^2 - \tau_g^2)^{3/2}} \left (\frac{\tau_g}{\kappa_g} \right )' - \frac{\kappa_n}{(\kappa_g^2 - \tau_g^2)^{1/2}} \right ], $
\noindent
(ii) $ \coth \beta = \eta(s) = \pm \left [ \frac{\kappa_g^2}{(\kappa_g^2 - \tau_g^2)^{3/2}} \left (\frac{\tau_g}{\kappa_g} \right )' - \frac{\kappa_n}{(\kappa_g^2 - \tau_g^2)^{1/2}} \right ], $
\noindent
(iii) $ \cot \theta = \psi(s) = \mp \left [ \frac{\kappa_g^2}{(\tau_g^2 - \kappa_g^2)^{3/2}} \left (\frac{\tau_g}{\kappa_g} \right )' - \frac{\kappa_n}{(\tau_g^2 - \kappa_g^2)^{1/2}} \right ] $
\noindent
is a constant function.
\end{theorem}
\begin{proof}
Let the unit speed spacelike curve $\gamma$ on a surface $M$ with spacelike principal normal be a relatively normal slant helix; hence the intrinsic normal $B$ to the curve makes a constant angle with a fixed direction. The relatively normal indicatrix, i.e., the Gaussian mapping
$B|_{\gamma} : I \rightarrow S_1^2$ along the curve $\gamma$, is a part of a circle on the Lorentzian unit sphere $S_1^2$. Therefore the relatively normal indicatrix has constant geodesic curvature, and its normal curvature is one.
\noindent
For the curve $B|_{\gamma} : I \rightarrow S_1^2$, $B'$ is the tangent vector, and by using (2.2), we get
\begin{center}
$B' = -\kappa_g\; T + \tau_g\; N$,
\end{center}
\begin{center}
$B'' = (- \kappa_g' + \tau_g \kappa_n) T + (\tau_g^2 - \kappa_g^2) B + (\tau_g' - \kappa_g \kappa_n) N.$
\end{center}
\noindent
Now, we have $N \times T = B, B\times N = T$ and $T\times B = N$. Thus
\begin{center}
$B' \times B'' = \tau_g(\kappa_g^2 - \tau_g^2) T + (\kappa_g \tau_g' -\tau_g \kappa_g' + \kappa_n(\tau_g^2 - \kappa_g^2)) B - \kappa_g (\tau_g^2 - \kappa_g^2) N$,
\end{center}
which implies
\noindent
$\| B' \times B'' \|^2 = - (\kappa_g^2 - \tau_g^2)^3 + (\kappa_n (\tau_g^2 - \kappa_g^2) + \kappa_g^2 (\frac{\tau_g}{\kappa_g})')^2$ and $\| B' \|^2 = \kappa_g^2 - \tau_g^2$.
\noindent Let $\bar{\kappa}$ be the curvature of the relatively normal indicatrix. Then
\begin{center}
$\bar{\kappa} = \frac{\| B' \times B'' \|}{\| B' \|^3} = \sqrt{\sigma_B^2 - 1}$,
\end{center}
where $\sigma_B = \pm \frac{1}{(\kappa_g^2 - \tau_g^2)^{\frac{3}{2}}} \left ( \kappa_g^2(\frac{\tau_g}{\kappa_g})' - \kappa_n(\kappa_g^2 - \tau_g^2) \right )$ and from (3.3), we have
\begin{equation}
\coth \beta = \pm \left [ \frac{\kappa_g^2}{(\kappa_g^2 - \tau_g^2)^{3/2}} \left (\frac{\tau_g}{\kappa_g} \right )' - \frac{\kappa_n}{(\kappa_g^2 - \tau_g^2)^{1/2}} \right ]. \nonumber
\end{equation}
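\noindent To see the identity $\bar{\kappa} = \sqrt{\sigma_B^2 - 1}$ explicitly, note that
\begin{equation}
\bar{\kappa}^2 = \frac{\| B' \times B'' \|^2}{\| B' \|^6} = \frac{\left ( \kappa_g^2 (\frac{\tau_g}{\kappa_g})' - \kappa_n (\kappa_g^2 - \tau_g^2) \right )^2}{(\kappa_g^2 - \tau_g^2)^3} - 1 = \sigma_B^2 - 1. \nonumber
\end{equation}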
Now let $\bar{\kappa_g}$ and $\bar{\kappa_n}$ be the geodesic and normal curvatures of the relatively normal indicatrix, respectively. We have $\bar{\kappa}^2 = \bar{\kappa_g}^2 - \bar{\kappa_n}^2$, and since the normal curvature of any spherical curve is $1$, we get $\bar{\kappa_n} = 1$. This implies $\bar{\kappa_g}^2 - 1 = \sigma_B^2 - 1$, which gives
\begin{center}
$\bar{\kappa_g} = \pm \sigma_B = \pm \coth \beta$.
\end{center}
Thus $\bar{\kappa_g}(s)$ is a constant function if and only if $\coth \beta(s)$ is a constant function. That means the relatively normal indicatrix of $\gamma$ is a part of a circle on $S_1^2$ if and only if
\begin{equation}
\coth \beta (s)= \pm \left [ \frac{\kappa_g^2}{(\kappa_g^2 - \tau_g^2)^{3/2}} \left (\frac{\tau_g}{\kappa_g} \right )' - \frac{\kappa_n}{(\kappa_g^2 - \tau_g^2)^{1/2}} \right ] \nonumber
\end{equation}
is a constant function.
\noindent
The proofs of (i) and (iii) can be done similarly.
\end{proof}
\begin{theorem} \label{6.2}
A unit speed spacelike curve on a timelike surface is a relatively normal slant helix if and only if any one of the following two functions,
\noindent
(i) $\coth \delta = \lambda (s) = \pm \left [ \frac{\kappa_g^2}{(\kappa_g^2 + \tau_g^2)^{3/2}} \left (\frac{\tau_g}{\kappa_g} \right )' - \frac{\kappa_n}{(\kappa_g^2 + \tau_g^2)^{1/2}} \right ]$,
\noindent
(ii) $\tanh \zeta = \rho (s) = \mp \left [ \frac{\kappa_g^2}{(\kappa_g^2 + \tau_g^2)^{3/2}} \left (\frac{\tau_g}{\kappa_g} \right )' - \frac{\kappa_n}{(\kappa_g^2 + \tau_g^2)^{1/2}} \right ]$
\noindent is a constant function.
\noindent
The proof is similar to the proof of Theorem 6.1.
\end{theorem}
\begin{theorem} \label{6.3}
A unit speed timelike curve on a timelike surface is a relatively normal slant helix if and only if any one of the following three functions,
\noindent (i) $\coth \xi = \chi (s) = \mp \left [ \frac{\kappa_g^2}{(\tau_g^2 - \kappa_g^2)^{3/2}} \left (\frac{\tau_g}{\kappa_g} \right )' + \frac{\kappa_n}{(\tau_g^2 - \kappa_g^2)^{1/2}} \right ],$
\noindent (ii) $\cot \nu = \omega (s) = \mp \left [ \frac{\kappa_g^2}{(\kappa_g^2 - \tau_g^2)^{3/2}} \left (\frac{\tau_g}{\kappa_g} \right )' + \frac{\kappa_n}{(\kappa_g^2 - \tau_g^2)^{1/2}} \right ],$
\noindent (iii) $\tanh \psi = \mu (s) = \mp \left [ \frac{\kappa_g^2}{(\tau_g^2 - \kappa_g^2)^{3/2}} \left (\frac{\tau_g}{\kappa_g} \right )' + \frac{\kappa_n}{(\tau_g^2 - \kappa_g^2)^{1/2}} \right ]$
\noindent is a constant function.
\noindent
The proof is similar to the proof of Theorem 6.1.
\end{theorem}
\begin{proposition}
Let $\gamma$ be a unit speed spacelike relatively normal slant helix on a spacelike surface. Then
\noindent
(i) $\gamma$ is an asymptotic curve on the surface if and only if $\gamma$ is a slant helix with the spacelike axis
\begin{equation}
d = \pm \frac{\tau_g}{\sqrt{\kappa_g^2 - \tau_g^2}} \sinh \beta T+ \cosh \beta B \mp \frac{\kappa_g}{\sqrt{\kappa_g^2 - \tau_g^2}} \sinh \beta N, \nonumber
\end{equation}
(ii) $\gamma$ is an asymptotic curve on the surface if and only if $\gamma$ is a slant helix with the spacelike axis
\begin{equation}
d = \pm \frac{\tau_g}{\sqrt{\tau_g^2 - \kappa_g^2}} \sin \theta T + \cos \theta B \mp \frac{\kappa_g}{\sqrt{\tau_g^2 - \kappa_g^2}} \sin \theta N, \nonumber
\end{equation}
(iii) $\gamma$ is an asymptotic curve on the surface if and only if $\gamma$ is a slant helix with the timelike axis
\begin{equation}
d = \pm \frac{\tau_g}{\sqrt{\kappa_g^2 - \tau_g^2}} \cosh \alpha \; T + \sinh \alpha\; B \mp \frac{\kappa_g}{\sqrt{\kappa_g^2 - \tau_g^2}} \cosh \alpha \;N,\nonumber
\end{equation}
(for Cases 1(a), 1(b), and 2, respectively).
\end{proposition}
\begin{proof}
(i) Since $\gamma$ is asymptotic, $\kappa_n = 0 $. From (2.3) it follows that $\kappa_n = \kappa \sinh \phi = 0$, which implies $\phi = 0$ and hence $\phi' = 0$ and $\kappa_g = \kappa$; so $\tau_g = \tau + \phi'$ gives $\tau_g = \tau$.
\noindent
Now by substituting $\kappa_g = \kappa, \kappa_n = 0$ and $\tau_g = \tau$ in (3.3), we obtain
\begin{equation}
\eta (s) = \pm \left [ \frac{\kappa^2}{(\kappa^2 - \tau^2)^{3/2}} \left (\frac{\tau}{\kappa} \right )' \right ] \nonumber
\end{equation}
is a constant function. Then by Theorem 2.4, $\gamma$ is a slant helix. From (3.1) we get the spacelike axis of the slant helix as
\begin{equation}
d = \pm \frac{\tau}{\sqrt{\kappa^2 - \tau^2}} \sinh \beta T+ \cosh \beta B \mp \frac{\kappa}{\sqrt{\kappa^2 - \tau^2}} \sinh \beta N.
\end{equation}
Conversely, let $\gamma$ be a slant helix with the spacelike axis $d$ as given in the above equation. Then since $\gamma$ is a relatively normal slant helix also, the fixed axis is
\begin{equation}
d = \pm \frac{\tau_g}{\sqrt{\kappa_g^2 - \tau_g^2}} \sinh \beta T+ \cosh \beta B \mp \frac{\kappa_g}{\sqrt{\kappa_g^2 - \tau_g^2}} \sinh \beta N. \nonumber
\end{equation}
Since both $B$ and $n$ lie in the same plane and both make a constant angle with the same fixed vector $d$, the angle between $B$ and $n$ is also constant, i.e., $\phi$ is constant and hence $\phi' = 0$. From (2.3), we get $\tau_g = \tau$ and $\kappa_g = \kappa \implies \kappa_n = 0$, which means $\gamma$ is an asymptotic curve.
\noindent
Also, (ii) and (iii) can be proved similarly.
\end{proof}
\begin{proposition}
Let $\gamma$ be a unit speed timelike relatively normal slant helix on a timelike surface. Then
\noindent
(i) $\gamma$ is an asymptotic curve on the surface if and only if $\gamma$ is a slant helix with the spacelike axis
\begin{equation}
d = \mp \frac{\tau_g}{\sqrt{\tau_g^2 - \kappa_g^2}} \sinh \xi \; T + \cosh \xi \;B \pm \frac{\kappa_g}{\sqrt{\tau_g^2 - \kappa_g^2}} \sinh \xi \;N, \nonumber
\end{equation}
(ii) $\gamma$ is an asymptotic curve on the surface if and only if $\gamma$ is a slant helix with the spacelike axis
\begin{equation}
d = \mp \frac{\tau_g}{\sqrt{\kappa_g^2 - \tau_g^2}} \sin \nu \; T + \cos \nu \; B \pm \frac{\kappa_g}{\sqrt{\kappa_g^2 - \tau_g^2}} \sin \nu \;N, \nonumber
\end{equation}
(iii) $\gamma$ is an asymptotic curve on the surface if and only if $\gamma$ is a slant helix with the timelike axis
\begin{equation}
d = \mp \frac{\tau_g}{\sqrt{\tau_g^2 - \kappa_g^2}} \cosh \psi \; T + \sinh \psi \;B \pm \frac{\kappa_g}{\sqrt{\tau_g^2 - \kappa_g^2}} \cosh \psi \;N,
\nonumber
\end{equation}
(for Cases 1(a), 1(b), and 2, respectively).
\end{proposition}
\begin{proof}
The proof is the same as that of Proposition 6.1.
\end{proof}
\begin{proposition}
If $\gamma$ is a unit speed (spacelike or timelike) relatively normal slant helix with a (spacelike or timelike, respectively) axis $d$ on a (spacelike/timelike or timelike, respectively) surface, then $\gamma$ cannot be a line of curvature.
\end{proposition}
\begin{proof}
Suppose $\gamma$ is a line of curvature; then $\tau_g = 0$, and by Theorem 6.1 we have
\noindent
$ \coth \beta = \pm \left [ \frac{\kappa_g^2}{(\kappa_g^2 - \tau_g^2)^{3/2}} \left (\frac{\tau_g}{\kappa_g} \right )' - \frac{\kappa_n}{(\kappa_g^2 - \tau_g^2)^{1/2}} \right ]. $
\noindent
If we put $\tau_g = 0$, then $\coth \beta = \mp \frac{\kappa_n}{\kappa_g} = \mp \frac{\kappa \sinh \phi}{\kappa \cosh \phi} = \mp \tanh \phi$, which has no solution since $|\coth \beta| > 1$ while $|\tanh \phi| < 1$.
\noindent
So $\gamma$ cannot be a line of curvature.
The other cases can be proved similarly.
\end{proof}
\begin{proposition}
Let $\gamma$ be a unit speed spacelike relatively normal slant helix with timelike axis $d$. Then $\gamma$ is a plane curve provided $\gamma$ is a line of curvature on $M$.
\end{proposition}
\begin{proof} By Theorem 6.1, we have
\begin{equation}
\tanh \alpha = \pm \left [ \frac{\kappa_g^2}{(\kappa_g^2 - \tau_g^2)^{3/2}} \left (\frac{\tau_g}{\kappa_g} \right )' - \frac{\kappa_n}{(\kappa_g^2 - \tau_g^2)^{1/2}} \right ] \nonumber
\end{equation}
is a constant function. Assume $\gamma$ is a line of curvature; then $\tau_g = 0$, and putting this into the above equation we get $\tanh \alpha = \mp \frac{\kappa_n}{\kappa_g} = \mp \tanh \phi$. This implies that $\alpha = \mp \phi$, and since $\alpha$ is constant, $\phi' = 0$. Thus we get $\tau_g = \tau + \phi' = \tau$, and since $\tau_g = 0$, we have $\tau = 0$; as a result, $\gamma$ is a plane curve.
\end{proof}
\begin{corollary}
Let $\gamma$ be a spacelike relatively normal slant helix with spacelike axis on a timelike surface $M$. Then $d$ cannot be orthogonal to the tangent line of $\gamma$.
\end{corollary}
\begin{proof} Let $\gamma$ be a unit speed spacelike relatively normal slant helix with spacelike axis on the timelike surface $M$. Then by using (4.4), we have
\begin{equation}
\langle T, d\rangle = \mp \frac{\tau_g}{\sqrt{\kappa_g^2 + \tau_g^2}} \cosh \zeta. \nonumber
\end{equation}
From Proposition 6.3, we have $\tau_g \neq 0$, and $\cosh \zeta$ is never zero. This implies that $\langle T, d\rangle \neq 0$. Hence $T$ is not orthogonal to $d$.
\end{proof}
\begin{corollary}
Let $\gamma$ be a spacelike relatively normal slant helix with timelike axis on a spacelike surface $M$. Then $d$ cannot be orthogonal to $N$.
\end{corollary}
\begin{proof}
Let $\gamma$ be a unit speed spacelike relatively normal slant helix with timelike axis on the spacelike surface $M$. Then from (3.8), we have
\begin{equation}
\langle N, d\rangle = \mp \frac{\kappa_g}{\sqrt{\kappa_g^2 - \tau_g^2}} \cosh \alpha. \nonumber
\end{equation}
Since $\cosh \alpha$ is never zero and $\kappa_g = \kappa \cosh \phi \neq 0$ (unless $\kappa = 0$), we have $\langle N, d\rangle \neq 0$.
\end{proof}
\begin{corollary}
Let $\gamma$ be a timelike unit speed relatively normal slant helix with timelike axis $d$ on a timelike surface $M$. Then
\noindent
(i) $d$ cannot be orthogonal to the tangent line of $\gamma$,
\noindent
(ii) $d$ is orthogonal to the surface normal $N$ if and only if $\gamma$ is a geodesic.
\end{corollary}
\begin{proof}
Let $\gamma$ be a timelike unit speed relatively normal slant helix with timelike axis $d$ on a timelike surface $M$. Then by (5.6), we have
\noindent
\begin{equation}
\langle T, d\rangle = \mp \frac{\tau_g}{\sqrt{\tau_g^2 - \kappa_g^2}} \cosh \psi.\nonumber
\end{equation}
From Proposition 6.3, we have $\tau_g \neq 0$, and $\cosh \psi$ is never zero; this implies that $\langle T, d\rangle \neq 0$, so $d$ cannot be orthogonal to $T$.
\noindent
Also from (5.6), we have
\begin{equation}
\langle N, d\rangle = \pm \frac{\kappa_g}{\sqrt{\tau_g^2 - \kappa_g^2}} \cosh \psi. \nonumber
\end{equation}
Thus $\langle N, d\rangle = 0$ if and only if $\kappa_g = 0$. Hence $d$ is orthogonal to $N$ if and only if $\gamma$ is a geodesic curve.
\end{proof}
\section{Introduction}
Point cloud instance segmentation is a classic task in 3D computer vision, and it can be applied in many fields, including indoor navigation systems, augmented reality, and robotics. The fully supervised instance segmentation methods~\cite{liang2021instance,chen2021hierarchical,jiang2020pointgroup,hui2022graphcut} have achieved impressive results, but they rely on large amounts of manually labeled data, and annotating a large number of point clouds is extremely time-consuming and expensive. Thus, it is meaningful to segment point clouds in a semi-/weakly supervised manner that requires only a small number of annotations. However, how to fully exploit the limited labels to improve the performance of instance segmentation is still a challenging problem.
Few efforts have been dedicated to semi-/weakly supervised point cloud instance segmentation. As a pioneer, Liao~\emph{et al.}~\cite{liao2021point} proposed a semi-supervised point cloud instance segmentation method using bounding boxes as supervision, where a network is used to generate bounding box proposals. And instance segmentation is achieved by refining the point cloud within the bounding box proposals. Besides, Tao~\emph{et al.}~\cite{tao2020seggroup} proposed a two-stage 3D instance and semantic segmentation method with seg-level supervision, which first leverages a segment grouping network to generate pseudo labels for the whole scenes, and then the generated pseudo point-level labels are used as the ground truth to train the network. However, these simple pseudo label generation strategies cannot effectively generate high-quality pseudo labels, resulting in poor 3D instance segmentation results.
In this paper, we propose a simple yet effective weakly supervised 3D instance segmentation framework, which can achieve impressive results with one point annotation per instance. For weakly supervised point cloud instance segmentation with few annotated labels, our intuition is twofold: (1) Under rare annotations, effective label propagation is essential to produce high-quality pseudo labels, especially in 3D instance segmentation. (2) Weakly supervised 3D instance segmentation is more challenging than weakly supervised 3D semantic segmentation, so we consider introducing the object volume constraint to improve the instance segmentation results. Specifically, we first use an unsupervised method~\cite{landrieu2018large} to oversegment the point cloud into superpoints and build the superpoint graph. In this way, point-level labels can be extended to superpoint-level labels. Then, we propose an inter-superpoint affinity mining module to generate high-quality pseudo labels based on a few annotated superpoint-level labels. Based on the superpoint graph, we leverage the semantic and spatial information of adjacent superpoints to adaptively learn inter-superpoint affinity, which can be used to propagate superpoint labels along the superpoint graph via semantic-aware random walk. Finally, we propose a volume-aware instance refinement module to improve instance segmentation performance. Based on the trained model using superpoint-level propagation, we can obtain coarse instance segmentation results through superpoint clustering and further infer the object volume information from the instance segmentation results. The object volume information contains the number of voxels and the radius of the object. The inferred object volume information is regarded as the ground truth of the corresponding instance to retrain the network. 
In the test phase, we utilize the predicted object volume information in a volume-aware instance clustering algorithm to segment high-quality instances. Extensive experiments on the ScanNet-v2~\cite{dai2017scannet} and S3DIS~\cite{armeni20163d} datasets demonstrate the effectiveness of our method.
The main contributions of our paper are as follows:
\begin{itemize}
\item We present an inter-superpoint affinity mining module that considers the semantic and spatial relation to adaptively learn inter-superpoint affinity for random-walk based label propagation.
\item We present a volume-aware instance refinement module, which guides the superpoint clustering on the superpoint graph to segment instances by using the object volume information.
\item Our simple yet effective framework achieves state-of-the-art weakly supervised 3D instance segmentation performance on popular datasets ScanNet-v2 and S3DIS.
\end{itemize}
\section{Related Work}
\subsection{3D Semantic Segmentation}
{\bf Fully supervised 3D semantic segmentation}. Many methods have been proposed to achieve point cloud semantic segmentation. Some methods~\cite{lawin2017deep,tatarchenko2018tangent,jaritz2019multi} project point clouds into a series of regular 2D images from different views, and then fuse features extracted through 2D convolutional neural networks (CNNs). To apply 3D CNNs on the irregular point cloud and alleviate large memory costs, many efforts~\cite{graham20183d,choy20194d} first voxelize the point cloud into voxels and then utilize the sparse convolutional neural network to extract features of the point cloud. PointNet~\cite{qi2017pointnet} directly extracts features from points with shared multi-layer perceptrons and max-pooling layer. Inspired by PointNet, different local feature aggregation operators~\cite{qi2017pointnet++,thomas2019kpconv,wu2019pointconv,cheng2020cascaded} are proposed to work on point cloud, which directly consume point cloud. Besides, various methods~\cite{wang2019dynamic,hui2021superpoint} capture intrinsic spatial and geometric features by constructing the graph on the point cloud. Various approaches exploit different local feature aggregation networks to extract discriminative point features and use multi-layer perceptrons to achieve 3D semantic segmentation.
{\bf Semi-/Weakly supervised 3D semantic segmentation}. Inspired by class activation map in 2D images, Wei~\emph{et al.}~\cite{wei2020multi} introduce a multi-path region mining module to generate pseudo labels, which only requires cloud-level weak labels. Xu~\emph{et al.}~\cite{xu2020weakly} use three additional losses to constrain on unlabeled points, achieving impressive performance with 10\% labels. Cheng~\emph{et al.}~\cite{cheng2021sspc} use a dynamic label propagation strategy to generate pseudo labels, and learn discriminative features with a coupled attention module. Zhang~\emph{et al.}~\cite{zhang2021perturbed} exploit the consistency generated by perturbation to obtain additional supervision and propagate implicit labels by constructing the graph topology of the point cloud. Liu~\emph{et al.}~\cite{liu2021one} first build a supervoxel graph on the point cloud and then conduct label propagation by learning the similarity among graph nodes. Li~\emph{et al.}~\cite{li2022hybridcr} utilize a hybrid contrastive regularization strategy with point cloud augmentation to provide additional constraints for network training. To generate pseudo labels for outdoor point cloud scenes, Shi~\emph{et al.}~\cite{shi2022weakly} design a matching module to propagate pseudo labels in both temporal and spatial spaces.
\subsection{3D Instance Segmentation}
{\bf Fully supervised 3D instance segmentation}. Compared with point cloud semantic segmentation, instance segmentation is more challenging because it not only requires predicting semantic scores but also distinguishing instances of the same class. According to the different manners of generating instances, instance segmentation methods can be mainly divided into clustering-based methods and proposal-based methods. Given point clouds as input, clustering-based methods regard instance segmentation as the post-processing task after network inference, and the result is obtained by clustering on point clouds with the predicted features. As a pioneer, Wang~\emph{et al.}~\cite{wang2018sgpn} introduce a similarity matrix to measure the distances between the features of all point pairs, which guides clustering points as proposals. Wang~\emph{et al.}~\cite{wang2019associatively} integrate semantic and instance segmentation into a parallel framework, which benefits from each other task. Lahoud~\emph{et al.}~\cite{lahoud20193d} design a multi-task neural network architecture, where instances are simultaneously separated in the feature vector space and direction vector space by a discriminative loss~\cite{de2017semantic} and a directional loss. Jiang~\emph{et al.}~\cite{jiang2020pointgroup} generate proposals by clustering points on the original and offset-shifted coordinate spaces, which benefits from both advantages. Hou~\emph{et al.}~\cite{hou20193d} jointly learn color and geometry features for instance segmentation from different modalities. Lately, Chen~\emph{et al.}~\cite{chen2021hierarchical} introduce a hierarchical aggregation method that iteratively clusters point clouds into instance proposals. Liang~\emph{et al.}~\cite{liang2021instance} propose a semantic superpoint tree structure and achieved instance segmentation by tree traversal and splitting. 
Vu~\emph{et al.}~\cite{vu2022softgroup} design a soft grouping algorithm that mitigates semantic prediction errors and significantly boosts segmentation performance.
For proposal-based methods, instance segmentation consists of two steps: first generating rough proposals and then predicting precise instance masks. Yang~\emph{et al.}~\cite{yang2019learning} propose an end-to-end trainable network that directly generates 3D bounding boxes as proposals and infers point-wise instance masks for the points inside them. Instead of obtaining proposals via 3D bounding box regression, Yi~\emph{et al.}~\cite{yi2019gspn} obtain proposals by object generation and then predict instance masks within the proposals.
{\bf Semi-/Weakly supervised 3D instance segmentation}. Few efforts have been made on semi-/weakly supervised point cloud instance segmentation. Tao~\emph{et al.}~\cite{tao2020seggroup} click one point per instance as the weak label and generate pseudo point-level labels for the whole training scene, which are then used to train existing fully supervised point cloud instance segmentation methods. Nonetheless, the quality of the pseudo labels is limited because no discriminative instance features are learned. With bounding boxes as weak labels, Liao~\emph{et al.}~\cite{liao2021point} propose a semi-supervised point cloud instance segmentation method, where a network generates bounding box proposals and instance segmentation is achieved by refining the points within these proposals.
\section{Method}
The overall architecture of our method is depicted in Fig.~\ref{fig:pipeline}. The backbone network (Sec.~\ref{sec:backbone}) first takes the point cloud and superpoint graph as input and predicts superpoint-wise semantic labels and offset vectors. Then, the inter-superpoint affinity mining module (Sec.~\ref{sec:affinity}) propagates labels on the superpoint graph via semantic-aware random walk. Finally, the volume-aware instance refinement module (Sec.~\ref{sec:ins_size}) learns object volume information to improve instance segmentation performance.
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{./figures/pipeline_20220707_v2_crop.pdf}
\caption{Overview of our framework for weakly supervised 3D instance segmentation. We first oversegment the point cloud (a) to build the superpoint graph (b) and extend the weak labels (c) to their corresponding superpoints. Then, based on the superpoint graph, a random walk with the predicted affinity and semantics is used for label propagation (d). Finally, by combining the predicted semantics and offsets, the pseudo object volume (e) is learned. $c$ is the number of categories, $d$ is the feature dimension, $N$ is the number of points, $|V|$ is the number of superpoints, and $|E|$ is the number of edges.}
\label{fig:pipeline}
\end{figure*}
\subsection{Backbone Network}\label{sec:backbone}
{\bf Superpoint graph construction.} Following~\cite{landrieu2018large,liang2021instance}, we adopt an unsupervised point cloud oversegmentation method to generate superpoints and construct the superpoint graph. The superpoint graph is a geometric representation of the point cloud, defined as $G=\left(V, E\right)$. The vertices $V$ are superpoints, generated by aggregating points with similar geometric characteristics, and the edges $E$ encode the prior connectivity between adjacent superpoints, constructed by linking the $k$-nearest superpoints. In weakly supervised 3D instance segmentation, the benefits of using superpoints are two-fold. On one hand, a superpoint is a geometrically homogeneous unit, so we can extend each annotated point label to its corresponding superpoint, thereby alleviating the sparsity of point-level annotations. On the other hand, the superpoint graph captures the spatial relationship between different instances, so we can utilize it to perform label propagation efficiently.
{\bf Superpoint feature extraction.} Specifically, we first extract point features using a 3D U-Net~\cite{graham20183d} on the point cloud and then aggregate the point features into superpoint features by average pooling. After that, based on the superpoint graph, we use edge-conditioned convolutions (ECC)~\cite{simonovsky2017dynamic} to extract superpoint features. Finally, we use the superpoint features to predict the semantic label of each superpoint.
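The average pooling step can be sketched in a few lines of numpy; this is a minimal illustration under our own naming, not the released implementation:

```python
import numpy as np

def pool_point_features(point_feats, sp_index, num_superpoints):
    """Average-pool per-point features into superpoint features.

    point_feats:     (N, d) point features, e.g. from the 3D U-Net.
    sp_index:        (N,) superpoint id of each point.
    num_superpoints: |V|, the number of superpoints.
    """
    d = point_feats.shape[1]
    sums = np.zeros((num_superpoints, d))
    counts = np.zeros(num_superpoints)
    np.add.at(sums, sp_index, point_feats)   # scatter-add features per superpoint
    np.add.at(counts, sp_index, 1)           # count points per superpoint
    return sums / np.maximum(counts, 1)[:, None]
```

In practice this scatter-style pooling is done on the GPU; the numpy version only illustrates the mapping from $N$ points to $|V|$ superpoints.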
\subsection{Inter-Superpoint Affinity Mining}\label{sec:affinity}
To perform label propagation, we develop an inter-superpoint affinity mining module to learn the superpoint relationship in the semantic and coordinate spaces. By using the learned inter-superpoint affinity, we design a simple semantic-aware random walk algorithm for label propagation on the superpoint graph.
{\bf Superpoint affinity learning.} Based on the superpoint graph, we learn the relationship between adjacent superpoints to characterize their affinity. It is desired that the learned affinity between two adjacent superpoints can guide label propagation along the edges of the superpoint graph. Assume the superpoint embeddings learned by the backbone network are $\bm{X}\in\mathbb{R}^{|V|\times d}$, where $|V|$ is the number of superpoints and $d$ is the feature dimension. Given the $i$-th superpoint embedding $\bm{X}_{i}\in\mathbb{R}^{d}$ and its first-order neighbors $\mathcal{N}_i$, we leverage the semantic and spatial information of the superpoints to adaptively learn the inter-superpoint affinity. The affinity $A_{ij}$ between the $i$-th superpoint and its $j$-th neighbor is formulated as:
\begin{equation}
\setlength{\abovedisplayskip}{1pt}
\setlength{\belowdisplayskip}{1pt}
A_{ij} = \frac{\exp(\sigma(\phi(\bm{X}_{i}), \psi( \bm{X}_{j})) * \gamma(\bm{p}_i-\bm{p}_j))}{ \sum_{k \in \mathcal{N}_i} \exp(\sigma(\phi(\bm{X}_{i}), \psi(\bm{X}_{k})) * \gamma(\bm{p}_i-\bm{p}_k))}
\label{eqn:affinity}
\end{equation}
where $\bm{X}_{i}\in\mathbb{R}^{d}$ and $\bm{X}_{j}\in\mathbb{R}^{d}$ are superpoint embeddings. $\bm{p}_i\in\mathbb{R}^3$ and $\bm{p}_j\in\mathbb{R}^3$ are the centroid coordinates of the superpoints. $\phi(\cdot)$ and $\psi(\cdot)$ are linear projections, and $\gamma(\cdot)$ is a multi-layer perceptron. $\sigma(\cdot,\cdot)$ is the dot product, which measures the similarity of the $i$-th and $j$-th superpoints. In~Eq.~(\ref{eqn:affinity}), semantic similarity is measured by the dot product while spatial similarity is measured by subtraction. As a result, the affinity $A_{ij}$ considers both the semantic and the spatial information of the superpoints. After that, we use the learned inter-superpoint affinity to update the superpoint embeddings. For the $i$-th superpoint, the new superpoint embedding $\widetilde{\bm{X}}_{i}\in\mathbb{R}^{d}$ is written as:
\begin{equation}
\setlength{\abovedisplayskip}{1pt}
\setlength{\belowdisplayskip}{1pt}
\widetilde{\bm{X}}_{i} = \sum\nolimits_{j \in \mathcal{N}_i} A_{ij} \cdot \rho (\bm{X}_{j}) + \bm{X}_{i}
\label{eqn:new_sp_emb}
\end{equation}
where $\rho(\cdot)$ is a linear projection. During training, we employ the discriminative loss of~\cite{de2017semantic} (dubbed $\mathcal{L}_{\text{aff}}$) to pull the embeddings $\widetilde{\bm{X}}$ belonging to the same object towards each other and push those of different objects apart. In this way, the affinity $A_{ij}$ between superpoints of the same instance is expected to be enhanced.
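The affinity computation and embedding update above can be sketched as follows. This is a minimal numpy illustration: $\gamma$ is passed in as a callable, $\rho$ is taken as the identity for brevity, and all names are our own rather than the released code:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def affinity_update(X, P, neighbors, W_phi, W_psi, gamma):
    """Neighbor-softmax affinity and embedding update (a sketch).

    X: (|V|, d) superpoint embeddings; P: (|V|, 3) superpoint centroids.
    neighbors: dict mapping i -> list of adjacent superpoint indices.
    W_phi, W_psi: (d, d) linear projections; gamma: callable R^3 -> R
    (a multi-layer perceptron in the paper).
    """
    A = {}
    X_new = X.astype(float).copy()
    for i in range(X.shape[0]):
        nbrs = neighbors[i]
        if not nbrs:
            continue
        # logit per neighbor: dot product of projected embeddings,
        # scaled by the network applied to the centroid difference
        logits = np.array([(X[i] @ W_phi) @ (X[j] @ W_psi) * gamma(P[i] - P[j])
                           for j in nbrs])
        a = softmax(logits)                  # normalize over the neighborhood
        A[i] = dict(zip(nbrs, a))
        # embedding update: affinity-weighted neighbor aggregation plus residual
        X_new[i] = X[i] + sum(w * X[j] for j, w in zip(nbrs, a))
    return A, X_new
```

With a single neighbor, the softmax weight is 1 and the update reduces to adding that neighbor's (projected) embedding to the residual.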
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth]{./figures/label_prop_pipeline_20220707_crop.pdf}
\caption{The process of label propagation with predicted instance affinity and semantics.}
\label{fig:label_prop_declare}
\end{figure*}
{\bf Label propagation via semantic-aware random walk.} After obtaining the inter-superpoint affinity $\bm{A}\in\mathbb{R}^{|V|\times|V|}$, we design a simple semantic-aware random walk to propagate labels over the superpoint graph, as shown in Fig.~\ref{fig:label_prop_declare}. Specifically, the semantic-aware random walk propagates labels only among superpoints with the same predicted semantic label. For the $c$-th class, let its semantic matrix be $\bm{S}^c\in\mathbb{R}^{|V|\times|V|}$, where $S^{c}_{ij}=1$ if the $i$-th and $j$-th superpoints have the same semantic class and $S^{c}_{ij}=0$ otherwise. For label propagation, we first use the semantic matrix $\bm{S}^c$, the superpoint affinity $\bm{A}\in\mathbb{R}^{|V|\times|V|}$, and the adjacency matrix $\bm{M}\in\mathbb{R}^{|V|\times|V|}$ of the graph $G=(V,E)$ to compute the weight $\bm{P}^c\in\mathbb{R}^{|V|\times|V|}$ for the $c$-th class, which is formulated as:
\begin{equation}
\setlength{\abovedisplayskip}{1pt}
\setlength{\belowdisplayskip}{1pt}
\bm{P}^c = \bm{M} \odot \bm{S}^{c} \odot \bm{A}
\label{eqn:mat_p}
\end{equation}
where $\odot$ is the Hadamard product. Note that the weight $\bm{P}^c$ considers the semantic information and the superpoint affinity simultaneously. Then, we derive the transition probability matrix $\bm{T}^c\in\mathbb{R}^{|V|\times|V|}$ for the $c$-th class, which is defined as:
\begin{equation}
\setlength{\abovedisplayskip}{1pt}
\setlength{\belowdisplayskip}{1pt}
\bm{T}^c = \bm{D}^{-1}\bm{P}^c, \ \text{where} \ D_{ii} = \sum\nolimits_{j} P_{ij}^c
\end{equation}
The diagonal matrix $\bm{D}$ is used for the row normalization of the matrix $\bm{P}^c$. Finally, the pseudo instance label $I_j$ of the $j$-th superpoint is propagated by:
\begin{equation}
\setlength{\abovedisplayskip}{1pt}
\setlength{\belowdisplayskip}{1pt}
I_j =I_k, \text{where}~ k=\mathop{\operatorname{argmax}}\limits_{i=1,\ldots,|V|}( \bm{\hat{T}}^c_{ij} ) , \ \bm{\hat{T}}^c = (\bm{T}^c)^{t}
\label{eqn:pl}
\end{equation}
where $\bm{\hat{T}}^c_{ij}$ indicates the probability of propagating the instance label of the $i$-th superpoint to the $j$-th superpoint, and $t$ is the iteration number. In Eq.~(\ref{eqn:pl}), the $j$-th superpoint selects the instance label of the $k$-th superpoint, which has the highest probability of propagating to the $j$-th superpoint. In this way, we can propagate the instance label of each annotated superpoint to unlabeled superpoints on the superpoint graph.
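The propagation rule above can be sketched as follows. This is a minimal numpy illustration; the guard that the argmax source superpoint must itself carry a label is our assumption, as the paper states only the argmax rule:

```python
import numpy as np

def propagate_labels(A, M, sem, inst, c, t=3):
    """Semantic-aware random-walk label propagation for class c (a sketch).

    A: (|V|,|V|) inter-superpoint affinity; M: 0/1 adjacency matrix;
    sem: (|V|,) predicted semantic classes; inst: (|V|,) instance labels,
    with -1 marking unlabeled superpoints; t: number of random-walk steps.
    """
    S = (sem[:, None] == c) & (sem[None, :] == c)        # semantic matrix S^c
    P = M * S * A                                        # weight: M . S^c . A
    row = P.sum(axis=1, keepdims=True)
    T = np.divide(P, row, out=np.zeros_like(P), where=row > 0)  # row-normalize
    T_hat = np.linalg.matrix_power(T, t)                 # t-step transition matrix
    out = inst.copy()
    for j in range(len(inst)):
        if out[j] != -1 or sem[j] != c:
            continue
        k = int(np.argmax(T_hat[:, j]))                  # most likely source superpoint
        if T_hat[k, j] > 0 and inst[k] != -1:            # guard: source must be labeled
            out[j] = inst[k]
    return out
```

On a labeled--unlabeled--unlabeled chain, one call propagates the label one hop at a time in the direction of the strongest transition probability.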
\subsection{Volume-Aware Instance Refinement}\label{sec:ins_size}
We propose the volume-aware instance refinement module to segment instances by using object volume information. We first introduce how to predict object volume via pseudo instances. Then, we present the volume-aware instance clustering algorithm to generate instances on the superpoint graph.
{\bf Object volume prediction via pseudo instance.} To predict object volume information, we first train the network with the pseudo labels generated by label propagation in the first stage. As shown in Fig.~\ref{fig:pipeline}~(e), we use the model pre-trained in the first stage to generate pseudo instances by voting each superpoint to the closest annotated point. Specifically, by adding the predicted offset vector (refer to the first stage in Sec.~\ref{sec:network_training}) to the corresponding superpoint center, we shift each superpoint center closer to the center of its object. To generate instances, each shifted superpoint is assigned the instance label of the closest annotated point with the same semantic label. We regard the generated instances as pseudo instances. From the pseudo instances, we then compute their volume information. We use the number of voxels inside an instance and the instance radius to measure its volume, where the instance radius is defined as the distance between the instance center and the farthest point. Thus, for each pseudo instance, we obtain its volume information after the first stage.
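The voting step can be sketched as follows; this is a minimal numpy illustration with our own names, in which distances are measured between shifted positions (an assumption on our part):

```python
import numpy as np

def vote_pseudo_instances(centers, offsets, sem, anchor_idx, anchor_inst):
    """Assign each superpoint the instance label of the closest annotated
    point with the same semantic class, after shifting by predicted offsets.

    centers: (|V|, 3) superpoint centers; offsets: (|V|, 3) predicted offsets;
    sem: (|V|,) semantic labels; anchor_idx: indices of annotated superpoints;
    anchor_inst: their instance ids.
    """
    shifted = centers + offsets
    labels = np.full(len(centers), -1)
    for i in range(len(centers)):
        best, best_d = -1, np.inf
        for a, inst in zip(anchor_idx, anchor_inst):
            if sem[a] != sem[i]:
                continue                      # only vote within the same class
            d = np.linalg.norm(shifted[i] - shifted[a])
            if d < best_d:
                best, best_d = inst, d
        labels[i] = best
    return labels
```

From the resulting pseudo instances, the voxel count and radius can then be counted directly.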
\begin{figure}[!t]
\centering
\begin{minipage}[t]{.85\linewidth}
\begin{algorithm}[H]
\caption{Volume-Aware Instance Clustering Algorithm.}
\textbf{Input:}~superpoint shifted coordinate $\{ \widetilde{\bm{p}}_1, \ldots, \widetilde{\bm{p}}_{|V|} \} \in \mathbb{R}^{|V| \times 3} $; ~superpoint semantic label $\{ s_1, \ldots, s_{|V|} \} \in \mathbb{R}^{|V| \times C}$;~the predicted voxel number $\{ u_1, \ldots, u_{|V|} \} \in \mathbb{R}^{|V|}$;~the predicted radius $\{ r_1, \ldots, r_{|V|} \} \in \mathbb{R}^{|V|}$
\textbf{Output:}~generated instances $\textbf{I} = \{ I_1, \ldots, I_m \}$, where $m$ is the number of instances.
\begin{algorithmic}[1] \label{algo:bfs_cluster}
\STATE Initialize an array $f$ (visited flag) of length $|V|$ with all zeros
\STATE Initialize an empty proposal set $\textbf{H}$, an empty instance set \textbf{I}
\STATE // Using the predicted radius $r$ to filter superpoints
\FOR{ $v=1$ to $|V|$ }
\IF{ $f_v == 0$ }
\STATE Initialize an empty queue $Q$
\STATE Initialize an empty set $H$
\STATE $f_v = 1$ ; $Q$.enqueue($v$) ; add $v$ to $H$
\WHILE{ $Q$ is not empty }
\STATE $j=Q$.dequeue()
\FOR{ each $k \in \{ k \mid k \in \mathcal{N}_j, s_k == s_j, \Vert\widetilde{\bm{p}}_k - \widetilde{\bm{p}}_j \Vert_{2} < \lambda r_j \}$ }
\IF{ $f_k == 0$ }
\STATE $f_k = 1$ ; $Q$.enqueue($k$) ; add $k$ to $H$
\ENDIF
\ENDFOR
\ENDWHILE
\STATE add $H$ to \textbf{H}
\ENDIF
\ENDFOR
\STATE // Using the predicted voxel numbers $u$ to filter proposals
\FOR{ each $ H \in \textbf{H} $}
\STATE compute $\bar{w}$ = avg($\{u_i \mid i \in H \}$) and the actual voxel count $w$ of $H$
\IF{ $w > \beta \bar{w}$}
\STATE add $H$ to \textbf{I}
\ENDIF
\ENDFOR
\FOR{ each $ H \in \textbf{H} $}
\STATE compute $\bar{w}$ = avg($\{u_i \mid i \in H \}$) and the actual voxel count $w$ of $H$
\IF{ $w \leq \beta \bar{w}$}
\STATE $ I_{closest} $ = findClosestInstance($\{ I \mid I \in \textbf{I}, s_I == s_H \}$)
\STATE $ I_{closest} = I_{closest} \cup H $
\ENDIF
\ENDFOR
\RETURN \textbf{I}
\end{algorithmic}
\end{algorithm}
\end{minipage}
\end{figure}
{\bf Volume-aware instance clustering.} After predicting the object volumes in the first stage, we additionally use the predicted volumes as supervision to retrain the network in the second stage (refer to Sec.~\ref{sec:network_training}). Thus, in the second stage, the network additionally predicts the instance volume information (the number of voxels and the radius) for each superpoint. For the $i$-th superpoint, let the predicted semantic label be $s_i\in\mathbb{R}^{1\times C}$, the offset vector $\bm{o}_i\in\mathbb{R}^3$, the number of voxels $u_i$, and the radius $r_i$. Note that the predicted semantic label $s_i$ is one-hot. We first obtain the shifted coordinate of the superpoint by $\widetilde{\bm{p}}_i = \bm{p}_i + \bm{o}_i$, which moves the superpoint closer to its instance center. Based on the shifted coordinates and the graph structure, the $i$-th superpoint merges its neighbors $\left\{ j \mid j \in \mathcal{N}_i, s_i = s_j, \Vert\widetilde{\bm{p}}_i - \widetilde{\bm{p}}_j \Vert_{2} < \lambda r_i \right\}$ into the same cluster, where the hyperparameter $\lambda$ is empirically set to 0.25. Note that the radius $r_i$ is used to filter out superpoints far from the object center. We use breadth-first search on the superpoint graph to group nodes of the same cluster into compact instance proposals. After that, we count the number of voxels $w$ in each proposal to filter out fragmented proposals: the predicted number of voxels $\bar{w}$ of a proposal is computed by averaging the predicted voxel numbers of the superpoints within it, and the proposal is regarded as an instance if $w > \beta \bar{w}$, where the hyperparameter $\beta$ is empirically set to 0.3. Finally, each remaining proposal is aggregated into the closest instance with the same semantic label. The volume-aware instance clustering procedure is summarized in Algorithm~\ref{algo:bfs_cluster}.
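The clustering procedure can be sketched as follows. This is a minimal numpy version under our own argument names; in particular, representing per-superpoint "actual" voxel counts and fragment merging by centroid distance are our assumptions about implementation details:

```python
import numpy as np
from collections import deque

def volume_aware_clustering(shifted, sem, sp_voxels, pred_voxels, radius,
                            neighbors, lam=0.25, beta=0.3):
    """BFS grouping with the predicted radius, then voxel-count filtering.

    shifted: (|V|, 3) offset-shifted superpoint centers; sem: (|V|,) classes;
    sp_voxels: voxels actually contained in each superpoint;
    pred_voxels: per-superpoint predicted instance voxel count;
    radius: per-superpoint predicted instance radius;
    neighbors: adjacency lists of the superpoint graph.
    """
    V = len(sem)
    visited = [False] * V
    proposals = []
    for v in range(V):                      # BFS grouping with the radius test
        if visited[v]:
            continue
        visited[v] = True
        q, H = deque([v]), [v]
        while q:
            j = q.popleft()
            for k in neighbors[j]:
                if (not visited[k] and sem[k] == sem[j] and
                        np.linalg.norm(shifted[k] - shifted[j]) < lam * radius[j]):
                    visited[k] = True
                    q.append(k)
                    H.append(k)
        proposals.append(H)
    instances, leftovers = [], []
    for H in proposals:                     # voxel-count filtering
        w = sum(sp_voxels[i] for i in H)                # actual voxels in proposal
        w_bar = np.mean([pred_voxels[i] for i in H])    # predicted voxel number
        (instances if w > beta * w_bar else leftovers).append(H)
    for H in leftovers:                     # merge fragments into closest instance
        cH = np.mean([shifted[i] for i in H], axis=0)
        cands = [I for I in instances if sem[I[0]] == sem[H[0]]]
        if cands:
            closest = min(cands, key=lambda I: np.linalg.norm(
                np.mean([shifted[i] for i in I], axis=0) - cH))
            closest.extend(H)
    return instances
```

The radius test prevents the BFS from leaking across well-separated objects even when they are graph-adjacent, while the voxel filter absorbs fragments into their nearest surviving instance.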
\subsection{Network Training}\label{sec:network_training}
Our method is a two-stage framework. As shown in Fig.~\ref{fig:pipeline}, the first stage learns the inter-superpoint affinity to propagate labels via random walk, while the second stage leverages the object volume information to refine the instances.
\textbf{First stage.} As shown in Fig.~\ref{fig:pipeline}, the first stage is supervised by the semantic loss $\mathcal{L}_{\text{sem}}$, offset loss $\mathcal{L}_{\text{offset}}$, and affinity loss $\mathcal{L}_{\text{aff}}$. The semantic loss $\mathcal{L}_{\text{sem}}$ is defined as the cross-entropy loss:
\begin{equation}
\setlength{\abovedisplayskip}{1pt}
\setlength{\belowdisplayskip}{1pt}
\mathcal{L}_{\text{sem}} = \frac{1}{ \sum\nolimits_{i=1}^{|V|} \mathbbm{I}(v_i) } \sum\nolimits_{i=1}^{|V|} \text{CE}(s_i, s_i^*) \cdot \mathbbm{I}(v_i)
\end{equation}
where $s_i$ is the predicted label and $s_i^*$ is the ground truth label. Note that both the original annotated labels and the generated pseudo labels are regarded as ground truth labels. The indicator function $\mathbbm{I}(v_i)$ equals 1 if the superpoint $v_i$ has an annotated label or is assigned a pseudo label, and 0 otherwise. In addition, we use an MLP to predict the offset vector $\bm{o}_i \in \mathbb{R}^{3}$. The offset loss $\mathcal{L}_{\text{offset}}$ encourages the predicted offset to shift each superpoint to its instance center, and is defined as:
\begin{equation}
\setlength{\abovedisplayskip}{1pt}
\setlength{\belowdisplayskip}{1pt}
\mathcal{L}_{\text{offset}} = \frac{1}{ \sum\nolimits_{i=1}^{|V|} \mathbbm{I}(v_i) } \sum\nolimits_{i=1}^{|V|} {\Vert \bm{o}_i - \bm{o}_i^* \Vert}_1 \cdot \mathbbm{I}(v_i)
\end{equation}
where $\bm{o}_i$ is the predicted superpoint offset and $\bm{o}_i^*$ is the ground truth offset. Note that $\bm{o}_i^*$ is computed from the coarse pseudo instance labels. Following~\cite{de2017semantic}, the affinity loss $\mathcal{L}_{\text{aff}}$ (refer to Sec.~\ref{sec:affinity}) is formulated as:
\begin{equation}
\setlength{\abovedisplayskip}{0pt}
\setlength{\belowdisplayskip}{0pt}
\begin{aligned}
\mathcal{L}_{\text{var}} = \frac{1}{I} \sum_{i=1}^{I} \frac{1}{ \sum_{j=1}^{|V|} \mathbbm{I}(v_j, i) } \sum\nolimits_{j=1}^{|V|} {\left[ {\Vert {\bm{\mu}}_i - \widetilde{\bm{X}}_{j} \Vert}_2 - {\delta}_v \right]}_+^2 \cdot \mathbbm{I}(v_j, i)\\
\mathcal{L}_{\text{dist}} = \frac{1}{I(I-1)} \mathop{ \sum\nolimits_{i_A=1}^{I} \sum\nolimits_{i_B=1}^{I} }\limits_{i_A \neq i_B} {\left[ 2{\delta}_d - {\Vert {\bm{\mu}}_{i_A} - {\bm{\mu}}_{i_B} \Vert}_2 \right]}_+^2 \\
\mathcal{L}_{\text{aff}} = \mathcal{L}_{\text{var}} + \mathcal{L}_{\text{dist}} + \alpha\cdot\mathcal{L}_{\text{reg}}, \text{where}~\mathcal{L}_{\text{reg}} = \frac{1}{I} \sum\nolimits_{i=1}^{I} {\Vert {\bm{\mu}}_i \Vert}_2
\end{aligned}
\end{equation}
where $I$ is the number of instances (equal to the number of annotated points, \emph{i.e.}, one point per instance). ${\bm{\mu}}_i$ is the mean embedding of the $i$-th instance and $\widetilde{\bm{X}}_{j}$ is the embedding of the $j$-th superpoint in Eq.~(\ref{eqn:new_sp_emb}). Following~\cite{de2017semantic}, the margins ${\delta}_v$ and ${\delta}_d$ are set to 0.1 and 1.5, respectively. The parameter $\alpha$ is set to 0.001, and ${\left[x\right]}_+ = \max(0, x)$ denotes the hinge function. $\mathbbm{I}(v_j, i)$ is the indicator function, which equals 1 if superpoint $v_j$ is labeled as the $i$-th instance and 0 otherwise. Note that we only apply $\mathcal{L}_{\text{aff}}$ to superpoints with annotated labels or pseudo labels. The final loss function of the first stage is defined as:
\begin{equation}
\setlength{\abovedisplayskip}{1pt}
\setlength{\belowdisplayskip}{1pt}
\mathcal{L}_{\text{stage1}} = \mathcal{L}_{\text{sem}} + \mathcal{L}_{\text{offset}} + \mathcal{L}_{\text{aff}}
\end{equation}
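The affinity loss term can be sketched as follows, as a minimal numpy version of the variance, distance, and regularization terms restricted to labeled superpoints (the function name and the $-1$-for-unlabeled convention are our own):

```python
import numpy as np

def affinity_loss(X, inst, delta_v=0.1, delta_d=1.5, alpha=0.001):
    """Discriminative affinity loss over labeled superpoints (a sketch).

    X: (|V|, d) updated superpoint embeddings; inst: (|V|,) instance ids,
    with -1 marking unlabeled superpoints.
    """
    ids = [i for i in np.unique(inst) if i != -1]
    mus = {i: X[inst == i].mean(axis=0) for i in ids}   # per-instance mean embedding
    # pull term: embeddings toward their instance mean, hinged at delta_v
    l_var = np.mean([
        np.mean([max(0.0, np.linalg.norm(mus[i] - x) - delta_v) ** 2
                 for x in X[inst == i]]) for i in ids])
    # push term: instance means apart, hinged at 2 * delta_d
    l_dist = 0.0
    if len(ids) > 1:
        pairs = [(a, b) for a in ids for b in ids if a != b]
        l_dist = np.mean([max(0.0, 2 * delta_d -
                              np.linalg.norm(mus[a] - mus[b])) ** 2
                          for a, b in pairs])
    # regularizer: keep mean embeddings near the origin
    l_reg = np.mean([np.linalg.norm(mus[i]) for i in ids])
    return l_var + l_dist + alpha * l_reg
```

With two tight, well-separated instances, the pull and push terms vanish and only the small regularizer remains.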
\textbf{Second stage.} As shown in Fig.~\ref{fig:pipeline}, the second stage is supervised by the semantic loss $\mathcal{L}_{\text{sem}}$, the offset loss $\mathcal{L}_{\text{offset}}$, and the volume loss $\mathcal{L}_{\text{volume}}$. As the affinity loss is only used for label propagation, we remove it in the second stage. The volume loss $\mathcal{L}_{\text{volume}}$ uses the object volume information predicted in the first stage as the ground truth to train the network (refer to Sec.~\ref{sec:ins_size}). It is formulated as:
\begin{equation}
\setlength{\abovedisplayskip}{1pt}
\setlength{\belowdisplayskip}{1pt}
\mathcal{L}_{\text{volume}} = \frac{1}{K} \sum\nolimits_{i=1}^{K}\sum\nolimits_{j=1}^{I}( {\Vert u_i - \hat{u}_j \Vert}_1 + {\Vert r_i - \hat{r}_j \Vert}_1 )\cdot \mathbbm{I}(i,j)
\label{eq:size}
\end{equation}
where $K$ is the number of labeled superpoints, covering both the original annotated labels and the generated pseudo labels. The indicator function $\mathbbm{I}(i,j)$ equals 1 if the $i$-th superpoint belongs to the $j$-th instance, and 0 otherwise. $\hat{u}_j$ and $\hat{r}_j$ denote the ground truth voxel number and radius counted from the pseudo instances, respectively. The generation of the pseudo instances is described in Sec.~\ref{sec:ins_size}. The final loss function of the second stage is defined as:
\begin{equation}
\setlength{\abovedisplayskip}{1pt}
\setlength{\belowdisplayskip}{1pt}
\mathcal{L}_{\text{stage2}} = \mathcal{L}_{\text{sem}} + \mathcal{L}_{\text{offset}} + \mathcal{L}_{\text{volume}}
\end{equation}
\section{Experiments}
\subsection{Experimental Settings}
{\bf Datasets.} ScanNet-v2~\cite{dai2017scannet} and S3DIS~\cite{armeni20163d} are used in our 3D instance segmentation experiments. ScanNet-v2 contains 1,613 indoor RGB-D scans with dense semantic and instance annotations, split into 1,201 training scenes, 312 validation scenes, and 100 hidden test scenes. Instance segmentation is evaluated on 18 object categories. S3DIS contains 6 large-scale indoor areas with 272 rooms and 13 categories. For ScanNet-v2, we report both validation and online test results. For S3DIS, we report both Area 5 and 6-fold cross validation results.
{\bf Evaluation metrics.} For the ScanNet-v2 dataset, we report the mean average precision at IoU thresholds of 0.25 ($\text{AP}_{25}$) and 0.5 ($\text{AP}_{50}$), as well as averaged over IoU thresholds from 0.5 to 0.95 (AP). For the S3DIS dataset, we additionally use mean coverage (mCov), mean weighted coverage (mWCov), mean precision (mPrec), and mean recall (mRec) with an IoU threshold of 0.5 as evaluation metrics.
{\bf Annotation of weak labels.} To generate weak labels for the point clouds, we randomly click one point of each instance as its ground truth label. Note that our annotation strategy is the same as that of SegGroup~\cite{tao2020seggroup}. Unlike our method and SegGroup, SPIB~\cite{liao2021point} adopts box-level annotation, which annotates each instance with a bounding box. Compared with time-consuming box-level annotation, clicking one point per instance is faster and more convenient.
\begin{table}[!t]
\centering
\caption{3D instance segmentation results on the ScanNet-v2 validation set and online test set. ``Baseline'' means the model trained with the initial annotated labels only.}
\resizebox{0.95\textwidth}{!}{
\begin{tabular}{@{}ccc|ccc|ccc@{}}
\toprule
\multicolumn{2}{c|}{Method} & \;Annotation\; & \;\;AP\;\; & \;$\text{AP}_{50}$\; & \;$\text{AP}_{25}$\;\; & \;\;AP\;\; & \;$\text{AP}_{50}$\; & \;$\text{AP}_{25}$\;\; \\
\midrule
\multicolumn{2}{c}{} & & \multicolumn{3}{c|}{Validation Set} & \multicolumn{3}{c}{Online Test Set} \\ \midrule
\multicolumn{1}{c|}{\multirow{7}{*}{Fully Sup.}} & \multicolumn{1}{l|}{SGPN~\cite{wang2018sgpn}} & 100\% & - & 11.3 & 22.2 & 4.9 & 14.3 & 39.0 \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{3D-SIS~\cite{hou20193d}} & 100\% & - & 18.7 & 35.7 & 16.1 & 38.2 & 55.8 \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{MTML~\cite{lahoud20193d}} & 100\% & 20.3 & 40.2 & 55.4 & 28.2 & 54.9 & 73.1 \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{PointGroup~\cite{jiang2020pointgroup}} & 100\% & 34.8 & 56.9 & 71.3 & 40.7 & 63.6 & 77.8 \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{HAIS~\cite{chen2021hierarchical}} & 100\% & 43.5 & 64.1 & 75.6 & 45.7 & 69.9 & 80.3 \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{SSTNet~\cite{liang2021instance}} & 100\% & 49.4 & 64.3 & 74.0 & 50.6 & 69.8 & 78.9 \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{SoftGroup~\cite{vu2022softgroup}} & 100\% & 46.0 & 67.6 & 78.9 & 50.4 & \textbf{76.1} & \textbf{86.5} \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{GraphCut~\cite{hui2022graphcut}} & 100\% & \textbf{52.2} & \textbf{69.1} & \textbf{79.3} & \textbf{55.2} & 73.2 & 83.2 \\ \midrule
\multicolumn{1}{c|}{\multirow{4}{*}{Weakly Sup.}} & \multicolumn{1}{l|}{SPIB~\cite{liao2021point}} & 0.16\% & - & 38.6 & 61.4 & - & - & 63.4 \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{SegGroup~\cite{tao2020seggroup}} & 0.02\% & 23.4 & 43.4 & 62.9 & 24.6 & 44.5 & 63.7 \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{Baseline} & 0.02\% & 21.2 & 39.0 & 61.3 & - & - & - \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{3D-WSIS~(\textbf{ours})} & 0.02\% & \textbf{28.1} & \textbf{47.2} & \textbf{67.5} & \textbf{25.1} & \textbf{47.0} & \textbf{67.8} \\ \bottomrule
\end{tabular}
}
\label{tab:ScanNet_val}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[width=0.95\textwidth]{./figures/ins_seg_scannet_s3dis_crop.pdf}
\caption{Visualization of the 3D instance segmentation results on the validation of ScanNet-v2 (left) and S3DIS (right). We randomly select colors for different instances.}
\label{fig:scannet_s3dis_ins}
\end{figure}
\begin{table}[!t]
\centering
\caption{3D instance segmentation results on the S3DIS dataset. ``Baseline'' means the model trained with the initial annotated labels only.}
\resizebox{0.95\textwidth}{!}{
\begin{tabular}{@{}ccc|ccc|cccc@{}}
\toprule
\multicolumn{2}{c|}{Method} &\;Annotation\; &\;\;AP\;\; &\;$\text{AP}_{50}$\; &\;$\text{AP}_{25}$\; & \;mCov & mWCov & mPrec & \;mRec\; \\
\midrule
\multicolumn{10}{c}{6-fold Cross Validation} \\ \midrule
\multicolumn{1}{c|}{\multirow{6}{*}{Fully Sup.}} & \multicolumn{1}{l|}{SGPN~\cite{wang2018sgpn}} &100\% & - & - & - & 37.9 & 40.8 & 38.2 & 31.2 \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{ASIS~\cite{wang2019associatively}} &100\% & - & - & - & 51.2 & 55.1 & 63.6 & 47.5 \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{PointGroup~\cite{jiang2020pointgroup}} &100\% & - & 64.0 & - & - & - & 69.6 & 69.2 \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{HAIS~\cite{chen2021hierarchical}} &100\% & - & - & - & 67.0 & 70.4 & 73.2 & 69.4 \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{SSTNet~\cite{liang2021instance}} &100\% & 54.1 & 67.8 & - & - & - & 73.5 & 73.4 \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{SoftGroup~\cite{vu2022softgroup}} &100\% & 54.4 & \textbf{68.9} & - & 69.3 & 71.7 & \textbf{75.3} & 69.8 \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{GraphCut~\cite{hui2022graphcut}} &100\% & \textbf{56.3} & 68.2 & - & \textbf{72.8} & \textbf{75.0} & 74.4 & \textbf{73.7} \\ \midrule
\multicolumn{1}{c|}{\multirow{2}{*}{Weakly Sup.}} & \multicolumn{1}{l|}{Baseline} &0.02\% & 19.5 & 30.5 & 42.0 & 41.1 & 42.3 & 13.3 & 37.1 \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{SegGroup} &0.02\% & 23.1 & 37.6 & 48.5 & 45.5 & 47.6 & 56.7 & 43.3 \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{3D-WSIS~(\textbf{ours})} &0.02\% & \textbf{26.7} & \textbf{40.4} & \textbf{52.6} & \textbf{48.0} & \textbf{50.5} & \textbf{59.3} & \textbf{46.7} \\ \midrule
\multicolumn{10}{c}{Area 5} \\ \midrule
\multicolumn{1}{c|}{\multirow{6}{*}{Fully Sup.}} & \multicolumn{1}{l|}{SGPN~\cite{wang2018sgpn}} &100\% & - & - & - & 32.7 & 35.5 & 36.0 & 28.7 \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{ASIS~\cite{wang2019associatively}} &100\% & - & - & - & 44.6 & 47.8 & 55.3 & 42.4 \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{PointGroup~\cite{jiang2020pointgroup}} &100\% & - & 57.8 & - & - & - & 61.9 & 62.1 \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{HAIS~\cite{chen2021hierarchical}} &100\% & - & - & - & 64.3 & 66.0 & 71.1 & 65.0 \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{SSTNet~\cite{liang2021instance}} &100\% & 42.7 & 59.3 & - & - & - & 65.5 & 64.2 \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{SoftGroup~\cite{vu2022softgroup}} &100\% & 51.6 & 66.1 & - & 66.1 & 68.0 & 73.6 & 66.6 \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l|}{GraphCut~\cite{hui2022graphcut}} &100\% & \textbf{54.1} & \textbf{66.4} & - & \textbf{67.5} & \textbf{68.7} & \textbf{74.7} & \textbf{67.8} \\ \midrule
\multicolumn{1}{c|}{\multirow{2}{*}{Weakly Sup.}} & \multicolumn{1}{l|}{Baseline} &0.02\% & 18.9 & 26.8 & 37.7 & 36.3 & 37.1 & 11.1 & 28.5 \\
\multicolumn{1}{c|}{}
& \multicolumn{1}{l|}{SegGroup} &0.02\% & 21.0 & 29.8 & 41.9 & 39.1 & 40.8 & 47.2 & 34.9 \\
\multicolumn{1}{c|}{}
& \multicolumn{1}{l|}{3D-WSIS~(\textbf{ours})} &0.02\% & \textbf{23.3} & \textbf{33.0} & \textbf{48.3} & \textbf{42.2} & \textbf{44.2} & \textbf{50.8} & \textbf{38.9} \\
\bottomrule
\end{tabular}
}
\label{tab:S3DIS}
\end{table}
\subsection{Results}
{\bf ScanNet-v2.} Tab.~\ref{tab:ScanNet_val} reports the quantitative results on the ScanNet-v2 validation set and hidden test set. Compared with existing semi-/weakly supervised point cloud instance segmentation methods, our approach achieves state-of-the-art performance and improves $\text{AP}_{25}$ from 62.9\% to 67.5\%, a gain of about 5\%, on the ScanNet-v2 validation set. Note that SPIB~\cite{liao2021point} uses the bounding box of each instance as the weak label, which differs from our setting. Although the number of annotated instances is the same, clicking one point per instance provides less information than the eight corners of a box. Nonetheless, our method still achieves higher performance than SPIB.
{\bf S3DIS.} Tab.~\ref{tab:S3DIS} reports the results on Area 5 and the 6-fold cross validation of the S3DIS dataset. Compared with fully supervised 3D instance segmentation methods, our model still achieves good results, even outperforming fully supervised methods such as SGPN~\cite{wang2018sgpn}. These quantitative results demonstrate the effectiveness of our method for weakly supervised 3D instance segmentation.
{\bf Visualization results.} Fig.~\ref{fig:scannet_s3dis_ins} shows the visualization results of instance segmentation on the ScanNet-v2 validation set and S3DIS. Although some objects of the same class are close to each other, such as chairs, the instances are still segmented properly. Since our network additionally predicts the volume of objects, it can effectively use this volume information to guide the clustering, thereby segmenting different instances that are close to each other.
\subsection{Ablation Study}
{\bf Effect of pseudo labels.} We conduct experiments on the ScanNet-v2 validation set to verify the effectiveness of our label propagation. The quantitative results are reported in Tab.~\ref{tab:abl_performance}. ``Baseline'' indicates our method trained with the initial annotated labels only, without pseudo labels. For the first stage, we report the mean average precision at different iterations. It can be observed that the performance gradually increases with the number of iterations. Furthermore, building on the first stage (dubbed ``Stage 1''), the performance is greatly improved after training in the second stage (dubbed ``Stage 2''). Since the number of pseudo labels generated in the first stage is still smaller than that of fully annotated labels, it is difficult to effectively segment instances with the first stage alone. In the second stage, we use the model trained in the first stage to cluster pseudo instances, so the obtained object volumes can serve as additional supervision to train the network. The quantitative results of the second stage further demonstrate that using the predicted object volume can indeed improve the performance of weakly supervised 3D instance segmentation.
\begin{table}[t]
\centering
\caption{The ablation study of different components on the ScanNet-v2 validation set and S3DIS Area 5. ``Baseline'' means the model trained with the initial annotated labels only. Note that ``Stage 2'' is performed based on ``Stage 1''.
}
\resizebox{0.95\textwidth}{!}{
\begin{tabular}{@{}cc|ccc|ccccccc@{}}
\toprule
\multicolumn{2}{c|}{} & \multicolumn{3}{c|}{ScanNet-v2 Val} & \multicolumn{7}{c}{S3DIS Area 5} \\ \midrule
\multicolumn{2}{c|}{Settings} & \;\;AP\;\; & $\text{AP}_{50}$ & \;$\text{AP}_{25}$\; & \;\;AP\;\; & $\text{AP}_{50}$ & \;$\text{AP}_{25}$\; & mCov & mWCov & mPrec & mRec \\ \midrule
\multicolumn{2}{c|}{Baseline} & 21.2 & 39.0 & 61.3 & 18.9 & 26.8 & 37.7 & 36.3 & 37.1 & 11.1 & 28.5 \\ \midrule
\multicolumn{1}{c|}{\multirow{3}{*}{\;Stage 1\;}} & \;Iter. 1\; & 23.4 & 42.2 & 62.8 & 19.9 & 27.7 & 40.2 & 38.5 & 39.4 & 21.7 & 33.1 \\
\multicolumn{1}{c|}{} & \;Iter. 2\; & 24.5 & 43.6 & 64.4 & 20.1 & 28.0 & 40.7 & 39.7 & 40.4 & 22.2 & 33.8 \\
\multicolumn{1}{c|}{} & \;Iter. 3\; & 25.4 & 45.3 & 65.8 & 20.9 & 28.3 & 41.9 & 40.0 & 40.8 & 23.1 & 34.1 \\ \midrule
\multicolumn{2}{c|}{Stage 2 } & \textbf{28.1} & \textbf{47.2} & \textbf{67.5} & \textbf{23.3} & \textbf{33.0} & \textbf{48.3} & \textbf{42.2} & \textbf{44.2} & \textbf{50.8} & \textbf{38.9} \\ \bottomrule
\end{tabular}
}
\label{tab:abl_performance}
\end{table}
\begin{table}[t]
\centering
\caption{The ablation study results (proportion/accuracy) of pseudo labels at different iterations on the ScanNet-v2 training set. The proportion and accuracy of pseudo labels are computed at the point level.}
\resizebox{0.95\textwidth}{!}{
\begin{tabular}{@{}c|ccc@{}}
\toprule
\;\;\;\;Stage 1\;\;\;\; & \;\;\;\;Only Random\;\;\;\; & Random+Affinity\;\; & Random+Affinity+Semantic \\ \midrule
Iter. 1 & 33.7~/~39.9 & 29.1~/~52.7 & 18.2~/~81.9 \\
Iter. 2 & 47.7~/~38.4 & 35.1~/~71.6 & 30.9~/~82.1 \\
Iter. 3 & 48.2~/~38.3 & 35.2~/~73.1 & 31.4~/~82.5 \\
\bottomrule
\end{tabular}
}
\label{tab:abl_label}
\end{table}
{\bf Quality of pseudo labels.} The pseudo labels are generated on the training set to increase supervision during training, so their quality directly affects network training. To study the quality of the generated pseudo labels, we measure the proportion and accuracy of pseudo labels on the ScanNet-v2 training set during training. The results are listed in Tab.~\ref{tab:abl_label}. When only using random walk (dubbed ``Only Random''), the proportion of pseudo labels is high, but the accuracy is low (39.9\% at ``Iter. 1''), and such low-accuracy pseudo labels harm the training of the network. If we add the affinity constraint (dubbed ``Random+Affinity''), the proportion of pseudo labels is lower, but the accuracy improves greatly (52.7\% at ``Iter. 1''): the affinity constraint reduces the proportion of wrong label propagation, so the quality of the pseudo labels improves and higher-quality supervision is provided for network training. Furthermore, when we add the semantic constraint (dubbed ``Random+Affinity+Semantic''), the accuracy of pseudo labels improves from 52.7\% (``Random+Affinity'') to 81.9\% (``Random+Affinity+Semantic''), which shows that the semantic constraint is useful for the weakly supervised 3D instance segmentation task. As constraints are added, the proportion of generated pseudo labels decreases while their accuracy increases.
\textbf{Label propagation times.} The number of label-propagation iterations influences the quality of the pseudo labels. As shown in Tab.~\ref{tab:abl_label}, the proportion and accuracy of pseudo labels after three iterations (dubbed ``Iter. 3'') are only marginally better than after two (dubbed ``Iter. 2''), and further iterations would consume more resources for little additional gain; we therefore use three iterations for label propagation.
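To make the constrained propagation concrete, the following is a minimal sketch of affinity- and semantics-constrained random-walk label propagation on a toy superpoint graph; the graph, affinity values, number of walk steps, and the acceptance threshold are illustrative placeholders rather than our actual implementation.

```python
import numpy as np

# Toy superpoint graph: 6 superpoints with a symmetric affinity matrix in [0, 1].
# Superpoints 0-2 belong to one object, 3-5 to another (ground truth).
affinity = np.array([
    [0.0, 0.9, 0.8, 0.1, 0.0, 0.0],
    [0.9, 0.0, 0.9, 0.0, 0.1, 0.0],
    [0.8, 0.9, 0.0, 0.2, 0.0, 0.0],
    [0.1, 0.0, 0.2, 0.0, 0.9, 0.8],
    [0.0, 0.1, 0.0, 0.9, 0.0, 0.9],
    [0.0, 0.0, 0.0, 0.8, 0.9, 0.0],
])
semantic = np.array([0, 0, 0, 1, 1, 1])   # predicted class per superpoint
labels = {0: 10, 3: 20}                   # sparse instance annotations (seeds)

def propagate(affinity, semantic, labels, tau=0.5, thresh=0.2, steps=2):
    """One round of affinity- and semantics-constrained random-walk propagation."""
    w = affinity.copy()
    w[w < tau] = 0.0                                   # affinity constraint
    same = semantic[:, None] == semantic[None, :]
    w[~same] = 0.0                                     # semantic constraint
    p = w / np.maximum(w.sum(axis=1, keepdims=True), 1e-12)
    out = dict(labels)
    for seed, inst in labels.items():
        visit = np.zeros(len(semantic)); visit[seed] = 1.0
        for _ in range(steps):
            visit = visit @ p                          # random-walk step
        for node, prob in enumerate(visit):
            if prob > thresh and node not in out:
                out[node] = inst                       # accept as pseudo label
    return out

pseudo = propagate(affinity, semantic, labels)
```

In this toy example the affinity and semantic constraints disconnect the two objects, so labels from one seed cannot leak into the other instance.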
\begin{figure*}[t]
\centering
\includegraphics[width=0.98\textwidth]{./figures/label_prop_20220707_crop.pdf}
\caption{Visualization results of pseudo label generation. Note that we remove the superpoints on the walls and floor for a better view.}
\label{fig:label_prop}
\end{figure*}
{\bf Visualization of pseudo labels.} Fig.~\ref{fig:label_prop} shows the pseudo labels at different iterations of the first stage on the superpoint graph. The initial annotated labels are extremely sparse: only a few superpoints in the graph are annotated. In the first stage, as the number of label-propagation iterations increases, the labels gradually spread to the surrounding superpoints in the graph. Under the constraints of the predicted affinity and semantics, the propagation of labels is restricted to within the same object. In the second stage, we use the model trained in the first stage to predict pseudo instances by clustering on the superpoint graph. The last column shows the predicted pseudo instances; different instances are effectively separated.
\section{Conclusion}
In this paper, we proposed a simple yet effective method for weakly supervised 3D instance segmentation with extremely few labels. To exploit the few point-level annotations, we applied an unsupervised oversegmentation method to the point cloud to generate superpoints and construct the superpoint graph. Based on the constructed superpoint graph, we developed an inter-superpoint affinity mining module that adaptively learns inter-superpoint affinity for label propagation via random walk. We further developed a volume-aware instance refinement module that guides superpoint clustering on the graph by learning object volume information. Experiments on the ScanNet-v2 and S3DIS datasets demonstrate that our method achieves state-of-the-art performance in weakly supervised 3D instance segmentation.
\bibliographystyle{splncs04}
\section{Introduction}
Experimental data for the pion transition form-factor $F_{\pi\gamma^*\gamma}$ obtained by the BaBar collaboration~\cite{Aubert:2009mc} indicate a growth of $Q^2 F_{\pi\gamma^*\gamma}$ at large $Q^2$ that is inconsistent with the rigid prediction of QCD factorization theorems~\cite{Lepage:1980fj}
\begin{equation}
\label{form-factor_factorization_asym}
F_{\pi\gamma^*\gamma}\sim \frac{\sqrt{2} f_\pi}{Q^2},
\end{equation}
where $f_\pi=131\ \mathrm{MeV}$. Results for the pion transition form-factor $F_{\pi\gamma^*\gamma}$ published later by the Belle collaboration~\cite{Uehara:2012ag} demonstrate qualitatively different behavior at large momenta, though they still allow a violation of the bound~\eqref{form-factor_factorization_asym}.
A particular source of contributions that could explain the BaBar trend is the QCD vacuum described by an ensemble of almost everywhere homogeneous Abelian (anti-)self-dual fields with nonzero scalar condensate $\langle F^2 \rangle$ --- the key feature of the domain model~\cite{EN1,EN,NK1,NK4,Nedelko:2016gdk} --- which can possibly mix short- and long-range dynamics. Short-range fluctuations in the model are separated from the long-range modes carrying the scalar condensate and integrated out, while the latter are treated nonperturbatively during the bosonization procedure. Chiral symmetry is spontaneously broken by the vacuum field, and there exists a nonzero scalar quark condensate $\langle \bar{q} q\rangle$ (see~\cite{Nedelko:2016gdk} for details). The $U_A(1)$ problem is resolved without introducing strong charge-parity ($CP$) violation. This approach has demonstrated its power in the description of a wide range of meson phenomenology (masses of light, heavy-light and doubly heavy mesons, leptonic decay constants, transition constants, all of the above including excited mesons). The overall precision of the approach in the lowest approximation is about $10-15\%$, with a few exceptions. The model also allows a consistent treatment of processes involving a higher number of hadrons. Results of calculations of the decay constants $g_{VPP}$ are given below.
Propagators and meson-quark vertices in the vacuum field are not translation-invariant; therefore, momentum is conserved not at each vertex but only for a diagram as a whole. The purpose of the present paper is to investigate how this feature of the domain model affects the transition form-factors of pseudoscalar mesons. All calculations are performed with the same set of parameters that was used for the calculation of meson spectra~\cite{Nedelko:2016gdk}; see Table~\ref{values_of_parameters}.
\begin{table}
\begin{tabular}{@{}ccccccc@{}}
\hline\hline
$m_{u/d}$(MeV)&$m_s$(MeV)&$m_c$(MeV)&$m_b$(MeV)&$\Lambda$(MeV)&$\alpha_s$&$R$(fm)\\
\hline
$145$&$376$&$1566$&$4879$&$416$&$3.45$&$1.12$\\
\hline\hline
\end{tabular}
\caption{Values of parameters used for calculations.
\label{values_of_parameters}}
\end{table}
\section{Decay constants and transition form-factor of pion\label{section_formfactors}}
Interaction of a meson with two photons is described by the diagrams shown in Fig.~\ref{F_P_gamma_picture}. Currently, the inhomogeneity of the vacuum gluon ensemble is not fully taken into account. As a consequence, only diagram A gives a nonzero contribution.
\setlength{\fboxsep}{0pt}
\begin{figure}
{\centering
\parbox{0.26\textwidth}{\includegraphics[scale=0.3]{F_P_gamma3}}
\parbox{0.22\textwidth}{\includegraphics[scale=0.3]{F_P_gamma3_disconnected}}
\parbox[c][\totalheight][t]{0.42\textwidth}{
\raisebox{-\height}{\includegraphics[scale=0.3]{F_P_gamma1}}
\hspace*{0.3em}
\raisebox{-\height}{\includegraphics[scale=0.3]{F_P_gamma2}}
}}\\[1em]
\parbox{0.26\textwidth}{\centering A}
\parbox{0.22\textwidth}{\centering B}
\parbox{0.13\textwidth}{\centering C}
\hspace*{0.3em}
\parbox{0.24\textwidth}{\centering D}
\caption{Possible contributions to transition form factor. \label{F_P_gamma_picture}}
\end{figure}
Result of computation of pion transition form-factor in comparison with experimental data is shown in Fig. \ref{pi_transition_figure}.
\begin{figure}
\sidecaption
\includegraphics[width=0.49\textwidth]{Qsqr_F_pi_gamma}
\caption{Pion transition form-factor in asymmetric (solid line) and symmetric (dashed line) kinematics. $g_{\pi\gamma\gamma}=F_{\pi\gamma^*\gamma}(0)=0.272\ \mathrm{GeV}^{-1}$. The data are taken from \cite{Behrend:1990sr,Gronberg:1997fj,Aubert:2009mc,Uehara:2012ag}.
\label{pi_transition_figure}}
\end{figure}
It turns out that nonconservation of momentum at individual vertices does not cause a growth of $Q^2 F_{\pi\gamma^*\gamma}$ at large $Q^2$; calculations show that at asymptotically large $Q^2$
\begin{equation*}
F_{\pi\gamma^*\gamma}\sim \varkappa\frac{\sqrt{2}f_\pi}{Q^2},\quad \varkappa\approx 1.24.
\end{equation*}
Qualitatively, the asymptotic behavior of $Q^2 F_{\pi\gamma^*\gamma}$ at large $Q^2$ coincides with the factorization prediction, but the value of the constant $\varkappa$ differs substantially from unity. This difference is a result of the translational noninvariance of the vertices. At the same time, this noninvariance does not affect the asymptotic behavior of the form-factor in symmetric kinematics (two photons with equal virtuality $Q^2$). Thus, the asymptotics calculated within the model
\begin{equation*}
F_{\pi\gamma^*\gamma^*}\sim \varkappa^*\frac{\sqrt{2}f_\pi}{3Q^2},\quad \varkappa^*=1
\end{equation*}
matches the factorization prediction
\begin{equation}
\label{form-factor_factorization_sym}
F_{\pi\gamma^*\gamma^*}\sim \frac{\sqrt{2} f_\pi}{3Q^2}
\end{equation}
The fact that prediction~\eqref{form-factor_factorization_sym} is reproduced by the model, while prediction~\eqref{form-factor_factorization_asym} is not, comes as no surprise because straightforward QCD factorization works well in the former case only, i.e.~when both photons are highly virtual~\cite{Radyushkin:1996tb,Mikhailov:2009kf}.
\section{Strong decays of vector mesons\label{section_vpp_decays}}
The diagram describing the decay of a vector meson into a pair of pseudoscalar mesons is shown in Fig.~\ref{VPP_decay_diagram}. This process is treated consistently with the meson observables previously obtained within the model. Calculated values of $g_{VPP}$ are given in Table~\ref{table_vpp_decays}.
\begin{figure}
\sidecaption
\includegraphics[scale=0.35]{triang2}
\caption{Diagram of the decay of a vector meson into a pair of pseudoscalar mesons.} \label{VPP_decay_diagram}
\end{figure}
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
Decay & $g_{VPP}$ \cite{PDG} & $g_{VPP}$&$g^*_{VPP}$\\
\hline
$\rho^0\rightarrow \pi^+ \pi^-$ & $5.95$ & $7.61$&$1.14$\\
\hline
$\omega\rightarrow \pi^+ \pi^-$ & $0.17$ & $0$&$0$\\
\hline
$K^{*\pm} \rightarrow K^\pm \pi^0$ & $3.23$ &$3.56$&$0.65$\\
\hline
$K^{*\pm} \rightarrow K^0 \pi^\pm$ & $4.57$ &$5.03$&$0.91$\\
\hline
$\varphi\rightarrow K^+ K^-$ & $4.47$ &$5.69$&$1.11$\\
\hline
$D^{*\pm}\rightarrow D^0 \pi^\pm$ & $8.41$ &$7.94$&$16.31$\\
\hline
$D^{*\pm}\rightarrow D^\pm \pi^0$ & $5.66$ &$5.62$&$11.53$\\
\hline
\end{tabular}
\caption{Results of the calculation of $g_{VPP}$ for various decays. $g^*_{VPP}$ are the results one would obtain if local gauge invariance were neglected. Parameters of the model are given in Table~\ref{values_of_parameters}. \label{table_vpp_decays}}
\end{table}
Invariance of the meson-meson amplitude under local background-field gauge transformations turns out to be of crucial importance for a correct description of the decays. If one reduces local gauge invariance to just the global one, the decay constants change dramatically, especially $g_{\rho\pi\pi}$ (compare the third and fourth columns of Table~\ref{table_vpp_decays}). This is to be compared with the usually underestimated $g_{\rho\pi\pi}$ decay constant~\cite{Bernard:1993wf,Deng:2013uca}. The value of $g_{\omega\pi\pi}$ is exactly zero because of the ideal mixing of the $\omega$ and $\varphi$ mesons and the employed approximation of $SU(2)$ isospin symmetry ($m_u=m_d$).
\section{Outlook\label{section_outlook}}
It is important to describe the transition form-factors of $\eta,\eta',\eta_c$ simultaneously with that of the pion. It was shown that in the instanton liquid model diagrams of type B in Fig.~\ref{F_P_gamma_picture} contribute to the transition form-factors~\cite{Kochelev:2009nz}. While this contribution is suppressed by the difference of the up- and down-quark masses and is negligible in the present consideration, it should be taken into account in order to describe the transition form-factors of the other mesons. However, incorporating this contribution into the domain model requires explicitly accounting for the inhomogeneity of the vacuum ensemble.
\section{Introduction}
A key insight of statistical mechanics is that {\em
equilibrium} states can be accurately described in terms of only a
small number of thermodynamic variables, such as temperature and
pressure. For non-equilibrium systems such as glasses no similar
simplification exists {\em a
priori}; the whole past history of a sample is in principle required
to specify its state at a given time. This complexity makes
theoretical analysis awkward, and one is led
instead to look for a description of non-equilibrium states in
terms of a few effective thermodynamic parameters. Much work in recent
years has focussed on one such parameter, the {\em effective
temperature}. This can be defined on the basis of fluctuation-dissipation
(FD) relations between correlation and response functions and has
proved to be very fruitful in mean field
systems~\cite{CugKurPel97,CriRit03}.
The use of FD relations to quantify the out-of-equilibrium dynamics in
glassy systems is motivated by the occurrence of {\em
aging}~\cite{BouCugKurMez98}: the time scale of response to an
external perturbation increases with the age (time since preparation)
$t_{\rm w}$ of the system. As a consequence, time translational invariance
and the equilibrium fluctuation-dissipation
theorem~\cite{Reichl80} (FDT) relating correlation and response
functions break down. To quantify this, one considers the correlation
function of two generic observables $A$ and $B$ of a system, defined as
\begin{equation}
C(t,t_{\rm w})=\left\langle A(t)B(t_{\rm w})\right\rangle -\left\langle A(t) \right\rangle \left\langle B(t_{\rm w}) \right\rangle
\label{corr}
\end{equation}
The associated (impulse) response function can be defined as
\[
R(t,t_{\rm w})=\left.\frac{\delta \left\langle A(t)\right\rangle}{\delta h_B(t_{\rm w})}\right|_{h_B=0}
\]
and gives the linear response of $A$ at time $t$ to a small impulse in
the field $h_B$ conjugate to $B$ at the earlier time $t_{\rm w}$. (The
latter is normally thought of as a ``waiting time'' since preparation
of the system at time 0.) Equivalently one can work with the
susceptibility
\begin{equation}
\chi(t,t_{\rm w})=\int_{t_{\rm w}}^t\! dt'\, R(t,t')
\label{switch_on}
\end{equation}
which encodes the response of $A(t)$ to a small step
$h_B(t)=h_B\Theta(t-t_{\rm w})$ in the field starting at $t_{\rm w}$. In
equilibrium, FDT implies that
$-\partial_{t_{\rm w}} \chi(t,t_{\rm w}) = R(t,t_{\rm w}) =
{T}^{-1}\partial_{t_{\rm w}}C(t,t_{\rm w})$. Out of equilibrium, the
violation of FDT can be measured by an FD ratio (FDR), $X$, defined
through~\cite{CugKur93,CugKur94}
\begin{equation}
\label{eqn:non_eq_fdt}
-\partial_{t_{\rm w}} \chi(t,t_{\rm w}) = R(t,t_{\rm w}) =
\frac{X(t,t_{\rm w})}{T}\partial_{t_{\rm w}}C(t,t_{\rm w})
\end{equation}
This implies that $X$ can be read off from the slope $-X/T$ of a
parametric FD plot showing $\chi$ vs $C$, at fixed $t$ and with $t_{\rm w}$
as the curve parameter. This remains the case if both axes are
normalized by the equal-time variance of $A$, $C(t,t)$, a procedure
which is helpful
in fixing the scale of the plot in situations where $C(t,t)$ varies
significantly with time~\cite{FieSol02,SolFieMay02}. In equilibrium, the FD
plot is a straight line with slope $-1/T$.
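Indeed, integrating\eq{eqn:non_eq_fdt} with $X=1$ over the waiting
time and using $\chi(t,t)=0$ gives
\[
\chi(t,t_{\rm w}) = \frac{1}{T}\int_{t_{\rm w}}^{t}\! dt'\,
\partial_{t'}C(t,t') = \frac{C(t,t)-C(t,t_{\rm w})}{T},
\]
which makes the slope $-1/T$ of the equilibrium FD plot explicit.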
In mean-field spin glasses~\cite{CugKurPel97,CugKur93,CugKur94} one
finds that FD plots of autocorrelation and response of local spins and
similar observables approach a limiting shape for large $t$. This is
typically composed of
two straight line segments. In the first of these one finds $X=1$,
corresponding to
quasi-equilibrium dynamics for time differences $t-t_{\rm w}$ that do not
grow with the age of the system. The second line segment has $X<1$ and
reflects the dynamics on aging timescales, i.e.\ time differences
growing (in the simplest case linearly) with $t_{\rm w}$. One can use this
to define a non-equilibrium {\em effective temperature} $T_{\rm
eff}=T/X$, which has been shown to have many of the properties of a
thermodynamic temperature~\cite{CugKurPel97,CugKur93,CugKur94}.
How well this physically very attractive mean-field scenario transfers
to more realistic non-equilibrium systems with short-range
interactions has been a matter of intense research
recently~\cite{CriRit03}. A useful class of systems for studying this
question in detail is provided by ferromagnets quenched from high
temperature to the critical temperature (see e.g.\
Refs.~\cite{GodLuc00b,MayBerGarSol03} and the recent
review~\cite{CalGam05}) or below. The system then coarsens -- by the
growth of domains with the equilibrium magnetization, for $T<T_c$ --
and exhibits aging; in an infinite system equilibrium is never
reached. The aging is clearly related to the growth of a
lengthscale~\cite{Bray94} (domain size for $T<T_{\rm c}$, or correlation
length for $T=T_{\rm c}$), and this makes ferromagnets attractive
``laboratory'' systems for understanding general properties of
non-equilibrium dynamics. They are of course not completely generic;
compared to e.g.\ glasses they lack features such as thermal
activation over energetic or entropic barriers.
We focus in this paper mostly on ferromagnets quenched to $T_{\rm c}$, i.e.\
on critical coarsening dynamics. Some care is needed in this case with
the interpretation of limiting FD plots: while in the mean-field
situation $X$ becomes at long times a function of $C$ only, as implied
by the existence of a nontrivial limit plot, in critical coarsening
$X$ approaches a function of $t/t_{\rm w}$~\cite{GodLuc00b}. In the
interesting regime where $t/t_{\rm w}$ is finite but $>1$, time differences
$t-t_{\rm w}={{\mathcal O}}(t_{\rm w})$ are then large, and e.g.\ spin autocorrelation
functions have decayed to a small part of their initial value. In the
limit $t,t_{\rm w}\to\infty$ the FD plot then assumes a pseudo-equilibrium
shape, with all nontrivial detail compressed into a vanishing region
around $C=0$.
The fact that the FDR is a smooth function of $t/t_{\rm w}$ makes the
interpretation of $T/X$ as an effective temperature less obvious than
in mean-field spin glasses, where $T/X$ is constant within each time
sector ($t-t_{\rm w}={{\mathcal O}}(1)$ vs $t-t_{\rm w}$ growing with $t_{\rm w}$). To eliminate
the time-dependence one can consider the limit of times that are both
large and well-separated. This defines an {\em asymptotic FDR}
\begin{equation}
X^\infty=\lim_{t_{\rm w}\to\infty}\lim_{t\to\infty}X(t,t_{\rm w})
\end{equation}
An important property of this quantity is that it should be {\em
universal}~\cite{GodLuc00b,CalGam05} in the sense that its value is
the same for different systems falling into the same universality
class of critical non-equilibrium dynamics. This makes a study of
$X^\infty$ interesting in its own right, even without an
interpretation in terms of effective temperatures.
If one nevertheless wants to pursue such an interpretation, the
resulting value of
the effective temperature $T/X^\infty$ should be the same for all (or
at least a large class of) observables $A$. The observable-dependence
of $X^\infty$ therefore becomes a key
question~\cite{FieSol02,SolFieMay02,MayBerGarSol03,CalGam04}.
Conventionally, much work on non-equilibrium ferromagnets has focussed
on the local spin autocorrelation function and associated response. An
obvious alternative is the long-wavelength analogue, i.e.\ the
correlation function of the fluctuating magnetization. Exact
calculations for the Ising chain~\cite{MayBerGarSol03,MaySol04} as
well as numerical simulations~\cite{MayBerGarSol03,MayBerGarSol04} in
dimension $d=2$ show that the resulting global $X^\infty$ is always
identical to the local version. This local--global correspondence,
which can also be obtained by field-theoretic
arguments~\cite{CalGam05,CalGam02c,CalGam02}, arises physically
because the long wavelength Fourier components of the spins are
slowest to relax and dominate the long-time behaviour of both local
and global quantities.
The local--global correspondence does of course not address the full
range of observable-dependence of the asymptotic FDR; one might ask
about other observables which are linear combinations not of spins
but for example products of interacting spins. In the critical Ising
model in $d=2$, numerical
simulations~\cite{MayBerGarSol03,MayBerGarSol04} suggest that even
these give the same $X^\infty$, so that an interpretation of
$T/X^\infty$ in terms of an effective temperature appears
plausible. One of the motivations for the current study was to verify
whether this {\em observable-independence} of $X^\infty$ across
different types of observables holds in an exactly solvable model, the
spherical ferromagnet~\cite{BerKac52,Joyce72}. In addition, we will study what
effect different {\em initial conditions} have on $X^\infty$. This is
motivated by our recent study of Ising models in the classical regime
of large $d$,
where critical fluctuations are irrelevant~\cite{GarSolPagRit05}. It
turned out that magnetized initial do states produce a different value of
$X^\infty$, so that critical coarsening in the presence of a nonzero
magnetization is in a new dynamical universality class even though the
magnetization does decay to zero at long times.
We begin in Sec.~\ref{sec:Gaussian} with a brief review of the
standard setup for the dynamics of the spherical model, as used in
e.g.~\cite{GodLuc00b}. Fluctuations in an effective Lagrange
multiplier enforcing the spherical constraint are neglected, leading
to a theory where all spins are Gaussian random variables. In
Sec.~\ref{sec:finite-range} this is applied to various observables of
finite range, by which we mean correlations and responses probing
lengthscales that can be large but remain small compared to the
overall system size. For spin observables we show that the expected
equality of $X^\infty$ between local and long-range quantities holds
(Sec.~\ref{sec:spin_observables}). We check observable-independence of
$X^\infty$ further by considering bond and spin product observables,
in Sec.~\ref{sec:bond_observables} and~\ref{sec:product_observables}
respectively.
The major part of the paper is then devoted to a study of FDRs for
{\em global} observables, with a focus on the energy, i.e.\ the global
bond observable. Because of the weak infinite-range interaction
generated by the spherical constraint, such observables behave
differently from their long-range analogues in the spherical
model. Calculations of correlation and response functions are
technically substantially more difficult because Lagrange multiplier
fluctuations can no longer be neglected. To account for them we construct
in Sec.~\ref{sec:setup} a systematic expansion of the dynamically
evolving spins in $N^{-1/2}$. This allows us to calculate the leading
non-Gaussian corrections that we need for global correlations, as
shown for the case of the energy in
Sec.~\ref{sec:energy_general}. After a brief digression to equilibrium
dynamics, we evaluate the resulting expressions in
Sec.~\ref{sec:noneq_large_d} for $d$ above the critical dimension
$d_{\rm c}=4$, and in Sec.~\ref{sec:noneq_small_d} for
$d<4$. Importantly, we will find that in the latter case the
asymptotic FDR is different from those for finite-range
observables. This means that an effective temperature interpretation of
$X^\infty$ is possible at best in a very restricted sense. However, we
will find that our results are in agreement with recent
renormalization group (RG) calculations near $d=4$~\cite{CalGam04} in
the $O(n\to\infty)$-model. This suggests that the non-Gaussian
effects captured in global observables are important for linking the
spherical model to more realistic systems with only short-range
interactions. Finally, we turn in Sec.~\ref{sec:magnetized} to
critical coarsening starting from magnetized initial conditions. Here
already the global {\em spin} observable is affected by non-Gaussian
corrections. Once these are accounted for, we find $X^\infty=4/5$ for
$d>4$ as in the Ising case~\cite{GarSolPagRit05}. For $d<4$ we provide
the first exact values of the asymptotic FDR in the presence of a
nonzero magnetization; these turn out to be highly nontrivial even to
first order in $4-d$ and $d-2$. Our results are summarized and
discussed in Sec.~\ref{sec:conclusion}. Technical details are relegated to two
appendices.
\section{Langevin dynamics and Gaussian theory}
\label{sec:Gaussian}
We consider the standard spherical model Hamiltonian
\begin{equation}
H = \frac{1}{2} \sum_{(ij)} (S_i-S_j)^2
\label{H_spherical}
\end{equation}
The sum runs over all nearest neighbour (n.n.) pairs on a
$d$-dimensional (hyper-)cubic lattice; the lattice constant is taken
as the unit length. At each of the $N$ lattice sites $\mathbf{r}_i$ there is a
real-valued spin $S_i$. The spherical constraint $\sum_i S_i^2=N$ is
imposed, which can be motivated by analogy with Ising spins $S_i=\pm
1$~\cite{BerKac52}.
The Langevin dynamics for this model can be written as
\begin{equation}
\partial_t S_i = - \frac{\partial H}{\partial S_i} + \xi_i - \frac{1}{N}
\sum_k S_k\left(-\frac{\partial H}{\partial S_k} + \xi_k\right)S_i
\label{eqn_motion}
\end{equation}
with $\xi_i$ Gaussian white noise with zero mean and covariance $\left\langle
\xi_i(t)\xi_j(t')\right\rangle = 2T\delta_{ij} \delta(t-t')$. The last term
in\eq{eqn_motion}, i.e.\ the sum over $k$, enforces the
spherical constraint at all times by removing the component of the
velocity vector
$(\partial_t S_1,\ldots,\partial_t S_N)$ along $(S_1,\ldots,S_N)$. We
use here the Stratonovich
convention for products like $S_k\xi_k$. This allows the ordinary
rules of calculus to be used when evaluating derivatives such as
$\partial_t S_i^2$. Physically it corresponds to the intuitively reasonable
scenario where the noise $\xi_i$ is regarded as a smooth random
process but with a correlation time much shorter than any other
dynamical timescale.
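As an illustrative numerical check of\eq{eqn_motion} (a sketch only, not
part of the analysis below), one can integrate the constrained dynamics on
a small one-dimensional chain; the chain length, time step and the
per-step renormalization of the spin vector are ad hoc choices. At $T=0$
the dynamics is a projected gradient flow on the sphere, so the energy
cannot increase.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, steps, T = 64, 1e-3, 2000, 0.0   # T = 0: noise-free gradient flow

# 1d chain with periodic boundaries; H = (1/2) * sum_(ij) (S_i - S_j)^2
def grad_H(S):
    # (Omega S)_i = 2 S_i - S_{i-1} - S_{i+1}
    return 2.0 * S - np.roll(S, 1) - np.roll(S, -1)

def energy(S):
    return 0.5 * np.sum((S - np.roll(S, 1)) ** 2)

S = rng.standard_normal(N)
S *= np.sqrt(N / (S @ S))               # impose sum_i S_i^2 = N at t = 0
E0 = energy(S)

for _ in range(steps):
    noise = np.sqrt(2.0 * T / dt) * rng.standard_normal(N)
    force = -grad_H(S) + noise
    z = (S @ force) / N                 # instantaneous Lagrange multiplier
    S = S + dt * (force - z * S)        # projected Euler step
    S *= np.sqrt(N / (S @ S))           # remove O(dt^2) constraint drift
```

The explicit renormalization corrects the ${{\mathcal O}}(dt^2)$ drift of
the constraint left by the Euler discretization; in continuous time the
projection term alone preserves $\sum_i S_i^2=N$.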
The prefactor of $S_i$ in the last term of\eq{eqn_motion}, being an
average of $N$ contributions, will have fluctuations of ${{\mathcal O}}(N^{-1/2})$.
Conventionally one ignores these and approximates the equation of
motion as
\begin{equation}
\partial_t S_i = - \frac{\partial H}{\partial S_i} + \xi_i - z(t) S_i
\label{eqn_motion_Gaussian}
\end{equation}
where $z(t)$ can be viewed as an effective time-dependent Lagrange multiplier
implementing the spherical constraint. This approximation works for
local quantities, but as we will see can give incorrect results when
one considers {\it e.g.}\ fluctuations of the magnetization or the energy,
which involve correlations across the entire system. One can see
directly that\eq{eqn_motion_Gaussian} is an approximation from the
fact that it corresponds to Langevin dynamics with the effective
Hamiltonian $H+\frac{1}{2} z(t)\sum_i S_i^2$. Since the latter is
time-dependent, this dynamics does not satisfy detailed balance. It is
simple to check, on the other hand, that the original equation of
motion\eq{eqn_motion} does satisfy detailed balance and leads to the
correct equilibrium distribution $P\eql(\{S_i\})\propto \exp(-\beta
H)\delta(\sum S_i^2-N)$ where $\beta=1/T$ is the inverse temperature
as usual.
The key advantage of the approximation\eq{eqn_motion_Gaussian} is, of
course, that the spins are Gaussian random variables at all times as
long as the initial condition is of this form. Explicitly, if we
define a matrix $\Omega$ with $\Omega_{ij}=-1$ for n.n.\ sites $i,j$ and
$\Omega_{ii}=2d$, the Gaussian equation of motion is
\begin{equation}
\partial_t S_i = -\sum_j \Omega_{ij}S_j - z(t) S_i + \xi_i
\label{real_space_dotSi}
\end{equation}
We review briefly how this is solved (see e.g.~\cite{GodLuc00b} and
references therein), since these results form the
basis for all later developments. In terms of the Fourier components
$S_\mathbf{q} = \sum_i S_i \exp(-i\mathbf{q}\cdot\mathbf{r}_i)$ of the spins,
equation\eq{real_space_dotSi} reads
\begin{equation}
\partial_t S_\mathbf{q} = -(\omega_\mathbf{q}+z(t))S_\mathbf{q} + \xi_\mathbf{q}
\label{dotSq}
\end{equation}
where $\omega_\mathbf{q} = 2\sum_{a=1}^d (1-\cos q_a)$; we mostly write just
$\omega$. The Fourier mode response function can be read off as
\begin{equation}
R_\mathbf{q}(t,t_{\rm w}) = \exp\left(-\omega(t-t_{\rm w})-\int_{t_{\rm w}}^t dt'\,
z(t')\right) \equiv \sqrt{\frac{g(t_{\rm w})}{g(t)}}e^{-\omega(t-t_{\rm w})}
\label{Rq}
\end{equation}
where
\begin{equation}
g(t) = \exp\left(2\int_{0}^t dt'\, z(t')\right)
\label{g_def}
\end{equation}
In terms of this, the time-dependence of the $S_\mathbf{q}$ becomes
\begin{equation}
S_\mathbf{q}(t) = R_\mathbf{q}(t,0) S_\mathbf{q}(0) + \int_0^t dt'\,R_\mathbf{q}(t,t')\xi_\mathbf{q}(t')
\label{Sqt}
\end{equation}
The equal-time correlator $C_\mathbf{q}(t,t) = (1/N) \left\langle
S_\mathbf{q}(t)S_\mathbf{q}^*(t)\right\rangle$ follows as
\begin{eqnarray}
C_\mathbf{q}(t,t) &=& C_\mathbf{q}(0,0) R_\mathbf{q}^2(t,0) + 2T \int_0^t dt'\,R_\mathbf{q}^2(t,t') \\
&=& \frac{C_\mathbf{q}(0,0)}{g(t)}e^{-2\omega t} + 2T \int_0^t dt'\,
\frac{g(t')}{g(t)} e^{-2\omega(t-t')}
\label{Cqtt}
\end{eqnarray}
and we note for later the identity
\begin{equation}
\partial_t C_\mathbf{q}(t,t) = 2T -
\left(2\omega+\frac{g'(t)}{g(t)}\right) C_\mathbf{q}(t,t)
\label{Cqtt_deriv}
\end{equation}
The two-time correlator $C_\mathbf{q}(t,t_{\rm w}) = (1/N) \left\langle
S_\mathbf{q}(t)S_\mathbf{q}^*(t_{\rm w})\right\rangle$ can be deduced from the analogue of\eq{Sqt}
for initial time $t_{\rm w}$
\begin{equation}
S_\mathbf{q}(t) = R_\mathbf{q}(t,t_{\rm w}) S_\mathbf{q}(t_{\rm w}) + \int_{t_{\rm w}}^t dt'\,R_\mathbf{q}(t,t')\xi_\mathbf{q}(t')
\end{equation}
as
\begin{equation}
C_\mathbf{q}(t,t_{\rm w}) = R_\mathbf{q}(t,t_{\rm w}) C_\mathbf{q}(t_{\rm w},t_{\rm w})
\label{C_twotime}
\end{equation}
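Here the noise term has dropped out: multiplying the previous equation by
$S^*_\mathbf{q}(t_{\rm w})/N$ and averaging gives
\[
C_\mathbf{q}(t,t_{\rm w}) = R_\mathbf{q}(t,t_{\rm w})\,C_\mathbf{q}(t_{\rm w},t_{\rm w})
+ \frac{1}{N}\int_{t_{\rm w}}^t dt'\,R_\mathbf{q}(t,t')
\left\langle \xi_\mathbf{q}(t')S^*_\mathbf{q}(t_{\rm w})\right\rangle,
\]
and the average vanishes by causality, since $S_\mathbf{q}(t_{\rm w})$
depends only on the noise history up to time $t_{\rm w}$.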
The position-dependent correlation and response functions
$C_{ij}(t,t_{\rm w})$ and $R_{ij}(t,t_{\rm w})$ are then just the inverse Fourier
transforms of $C_\mathbf{q}(t,t_{\rm w})$ and $R_\mathbf{q}(t,t_{\rm w})$, respectively, with
$\mathbf{q}$ conjugate to $\mathbf{r}_j-\mathbf{r}_i$.
\subsection{The function $g(t)$}
The calculations outlined above show that the Gaussian dynamics is
fully specified once the function $g(t)$ is known. The latter can be
found from the spherical constraint, which imposes $\int(dq)\,
C_\mathbf{q}(t,t)=1$. Here and below we abbreviate $(dq) \equiv
d\mathbf{q}/(2\pi)^d$, where the integral runs over the first Brillouin zone
of the hypercubic lattice, i.e.\ $\mathbf{q}\in[-\pi,\pi]^d$. Using\eq{Cqtt}
this constraint gives an integral equation for $g(t)$:
\begin{equation}
g(t) =
\int(dq)\, C_\mathbf{q}(0,0) e^{-2\omega t} + 2T \int_0^t dt'\, f(t-t')g(t')
\label{g_eq}
\end{equation}
where
\begin{equation}
f(t) = \int(dq)\, e^{-2\omega t} = [e^{-4t}I_0(4t)]^d \approx
(8\pi t)^{-d/2}
\label{f_def}
\end{equation}
Here $I_0$ denotes a modified Bessel function and the final expression
gives the asymptotic behaviour for large $t$. In terms of Laplace transforms
$\hat{g}(s)=\int_0^\infty dt\, \exp(-st)g(t)$, eq.\eq{g_eq} then has the
solution
\begin{equation}
\hat{g}(s) = \frac{1}{1-2T\hat{f}(s)} \int(dq)\, \frac{C_\mathbf{q}(0,0)}{s+2\omega}
\label{hat_g_general}
\end{equation}
With the exception of Sec.~\ref{sec:magnetized}, we focus in this
paper on random initial conditions, $C_\mathbf{q}(0,0)=1$, corresponding to a
quench at time $t=0$ from equilibrium at infinite temperature. In this
case the $\mathbf{q}$-integral in the last equation is just $\hat{f}(s)$, so
that
\begin{equation}
\hat{g}(s) = \frac{\hat{f}(s)}{1-2T\hat{f}(s)}
\end{equation}
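As a purely numerical sanity check (not part of the analysis), the closed form\eq{f_def} can be verified by direct quadrature over the Brillouin zone. The following Python sketch, assuming NumPy and SciPy are available, does this for $d=1$ and also tests the large-$t$ asymptote; the grid size is illustrative.

```python
import numpy as np
from scipy.special import i0e  # i0e(x) = exp(-x) * I_0(x)

def f_closed(t, d=1):
    # f(t) = [exp(-4t) I_0(4t)]^d, the closed form of eq. (f_def)
    return i0e(4.0 * t) ** d

def f_quadrature(t, n=4096):
    # direct Brillouin-zone average of exp(-2 omega t) for d = 1,
    # with omega(q) = 2(1 - cos q); midpoint rule on q in [-pi, pi]
    q = -np.pi + (np.arange(n) + 0.5) * (2 * np.pi / n)
    return np.mean(np.exp(-4.0 * t * (1.0 - np.cos(q))))

t = 2.0
print(f_closed(t), f_quadrature(t))                  # should agree closely
print(f_closed(100.0), (8 * np.pi * 100.0) ** -0.5)  # asymptote (8 pi t)^{-d/2}
```

The midpoint rule converges spectrally fast for this smooth periodic integrand, so the two evaluations agree essentially to machine precision.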
The asymptotics of the corresponding $g(t)$ are well known; see
{\it e.g.}~\cite{GodLuc00b,CorLipZan03}. For $T$ above the critical
temperature $T_{\rm c}$, which is given by
\begin{equation}
T_{\rm c}^{-1}=2 \hat{f}(0) = \int(dq)\, \frac{1}{\omega}
\label{Tc}
\end{equation}
there is a pole in $\hat{g}(s)$ at $s=2z\eql$. Here $z\eql$ is found from the
condition $2T\hat{f}(2z\eql)=1$ or
\begin{equation}
\int(dq)\, \frac{T}{z\eql+\omega} = 1
\label{zeql_cond}
\end{equation}
The presence of this pole tells us that $g(t)\sim \exp(2z\eql t)$ for
long times,
implying that the Lagrange multiplier $z(t)$ approaches $z\eql$ for
$t\to\infty$. Correspondingly, the condition\eq{zeql_cond} is just the
spherical constraint at equilibrium, bearing in mind that
$C^{\rm eq}_\mathbf{q}(t,t)=T/(z\eql+\omega)$ from\eq{dotSq}. Because
$\omega\approx q^2$
for small $q=|\mathbf{q}|$, the phase space factor in the $\mathbf{q}$-integrals is
$(dq) \sim d\omega\, \omega^{d/2-1}$ for small $q$ or $\omega$. This
shows that $T_{\rm c}$ as given by\eq{Tc} vanishes as $d\to 2$ from above;
consequently we will always restrict ourselves to dimensions $d$ above this
lower critical dimension.
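For orientation, $T_{\rm c}$ can be evaluated numerically from\eq{Tc} by writing $\int(dq)\,\omega^{-1} = 2\int_0^\infty dt\, f(t)$. The sketch below (Python with SciPy assumed; the cutoff at $t=200$ with an analytic tail is a convenience) reproduces the known $d=3$ value $T_{\rm c}\approx 3.96$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0e

d = 3
f = lambda t: i0e(4.0 * t) ** d   # f(t) from eq. (f_def); decays like (8 pi t)^{-d/2}

# T_c^{-1} = int (dq) / omega = 2 * int_0^inf f(t) dt, using 1/omega = int_0^inf ds e^{-omega s}
integral, err = quad(f, 0, 200.0)
tail = 2.0 * (8 * np.pi) ** -1.5 / np.sqrt(200.0)   # int_200^inf (8 pi t)^{-3/2} dt
Tc = 1.0 / (2.0 * (integral + tail))
print(Tc)   # approx 3.9568 for d = 3
```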
At criticality ($T=T_{\rm c}$), $z\eql$ vanishes, and $g(t)$ therefore no
longer grows exponentially; instead one
finds~\cite{GodLuc00b,CorLipZan03}
\begin{equation}
g(t) \sim t^{-\kappa}, \quad \kappa =
\left\{\begin{array}{lll}
(4-d)/2 & \mbox{for} & 2<d<4\\
0 & \mbox{for} & d>4
\end{array}\right.
\label{g_asympt}
\end{equation}
It is this case, of a quench to the critical temperature, that we will
concentrate on throughout most of this paper. This is because here the
FDR has the most interesting behaviour.
We note briefly that, in principle, $\int(dq)\,$ should be written as
$(1/N)\sum_\mathbf{q}$, with the sum running over all $\mathbf{q}$ whose components
are integers in the range $-L/2\ldots -1,0,1,\ldots L/2-1$ (assuming
$L$ is even) multiplied by an overall factor $2\pi/L$; there are $N$
such $\mathbf{q}$. When considering continuous functions of $\mathbf{q}$ this sum
can be replaced by the integral $\int(dq)\,$, and this will almost always be
the case in our analysis. Exceptions are situations with a nonzero
magnetization, where the wavevector $\mathbf{q}=\mathbf{0}$ is special and has to be
treated separately. This is relevant in equilibrium below $T_{\rm c}$, which
we discuss briefly in Sec.~\ref{sec:energy_FDT_eq}, and for
non-equilibrium dynamics starting from magnetized initial states
(Sec.~\ref{sec:magnetized}).
\subsection{Long-time scaling of $C_\mathbf{q}$}
It will be useful later to have a simplified long-time expression for
$C_\mathbf{q}(t_{\rm w},t_{\rm w})$ for the case of a critical quench. At zero wavevector
one has
\begin{equation}
C_\mathbf{0}(t_{\rm w},t_{\rm w}) = \frac{1}{g(t_{\rm w})}\left(1 + 2T \int_0^{t_{\rm w}} dt'\,
g(t')\right) \approx \frac{2Tt_{\rm w}}{1-\kappa}
\label{C0}
\end{equation}
where the last approximation is based on\eq{g_asympt} and is valid for large
$t_{\rm w}$. For nonzero $\mathbf{q}$, on the other hand,
\begin{equation}
C_\mathbf{q}(t_{\rm w},t_{\rm w}) = \frac{1}{g(t_{\rm w})}\left(e^{-2\omega t_{\rm w}}
+ 2T \int_0^{t_{\rm w}} dt'\, g(t') e^{-2\omega (t_{\rm w}-t')}\right)
\approx \frac{T}{\omega}
\label{Cq}
\end{equation}
which is as expected since all nonzero Fourier modes eventually
equilibrate. The crossover between the two limits takes place when
$\omega t_{\rm w} \sim 1$, or $q\sim t_{\rm w}^{-1/2}$; physically this represents
the growth of the time-dependent correlation length as $\sim t_{\rm w}^{1/2}$.
We therefore introduce the scaling variable $w=\omega t_{\rm w}$:
\begin{equation}
\frac{C_\mathbf{q}(t_{\rm w},t_{\rm w})}{C_\mathbf{0}(t_{\rm w},t_{\rm w})}
= \frac{e^{-2w} + 2T\omega^{-1} \int_0^{w} dy\,
g(y/\omega) e^{-2(w-y)}}
{1+ 2T\omega^{-1} \int_0^{w} dy\,
g(y/\omega)}
\end{equation}
Now keep $w$ constant and let $t_{\rm w}\to\infty$, {\it i.e.}\ $\omega\to 0$.
Then $g(y/\omega) \sim (y/\omega)^{-\kappa}$ and the second terms dominate
in numerator and denominator to give
\begin{equation}
\frac{C_\mathbf{q}(t_{\rm w},t_{\rm w})}{C_\mathbf{0}(t_{\rm w},t_{\rm w})} = (1-\kappa)\int_0^1 dy\,y^{-\kappa}
e^{-2w(1-y)}
\label{Cratio_scaling}
\end{equation}
Combining\eq{Cratio_scaling} with\eq{C0} then gives the desired
long-time scaling form
\begin{equation}
C_\mathbf{q}(t_{\rm w},t_{\rm w}) = \frac{T}{\omega}\sc{C}(\omega t_{\rm w}), \quad
\sc{C}(w) = 2w\int_0^1 dy\,y^{-\kappa} e^{-2w(1-y)}
\label{C_scaling}
\end{equation}
For $d>4$ ($\kappa=0$) this simplifies to $\sc{C}(w)=1-e^{-2w}$. As
the derivation shows, eqs.\eq{Cratio_scaling} and\eq{C_scaling} are
valid whenever $t_{\rm w}\gg 1$, even for $\omega={{\mathcal O}}(1)$. The latter
case corresponds to $w\to\infty$ and gives $\sc{C}(w)=1$, which
is indeed consistent with\eq{Cq}.
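As a check on\eq{C_scaling}, $\sc{C}(w)$ can be evaluated numerically. The sketch below (Python, SciPy assumed) removes the integrable endpoint singularity by substituting $y=u^{1/(1-\kappa)}$, and verifies the closed form $1-e^{-2w}$ for $\kappa=0$ as well as the limits $\sc{C}(w)\approx 2w/(1-\kappa)$ for small $w$ and $\sc{C}(w)\to 1$ for large $w$.

```python
import numpy as np
from scipy.integrate import quad

def C_scaling(w, kappa):
    # scC(w) = 2w int_0^1 y^{-kappa} e^{-2w(1-y)} dy; substituting y = u^p with
    # p = 1/(1-kappa) gives y^{-kappa} dy = p du, removing the singularity at y = 0
    p = 1.0 / (1.0 - kappa)
    val, _ = quad(lambda u: p * np.exp(-2.0 * w * (1.0 - u ** p)), 0.0, 1.0)
    return 2.0 * w * val

print(C_scaling(0.7, 0.0), 1.0 - np.exp(-1.4))   # closed form for d > 4 (kappa = 0)
print(C_scaling(1e-3, 0.5))                      # approx 2w/(1-kappa) = 4e-3 for d = 3
print(C_scaling(50.0, 0.5))                      # approx 1: mode equilibrated
```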
For quantities such as $C_\mathbf{q}(t_{\rm w},t_{\rm w})$ that depend only on a single
time variable, what is meant by the long-time limit is
unambiguous. For two-time quantities like $C_\mathbf{q}(t,t_{\rm w})$ we use the
following terminology: the {\em long-time} limit refers to the regime
$t\gg 1$ and $t_{\rm w}\gg 1$ but without any restriction on $t-t_{\rm w}$, which
in particular is allowed to be short, i.e.\ of ${{\mathcal O}}(1)$. The {\em aging}
regime indicates more specifically the limit $t_{\rm w}\to\infty$ at fixed
ratio $x=t/t_{\rm w}>1$, which implies that also $t-t_{\rm w}$ is large, of
${{\mathcal O}}(t_{\rm w})$. Occasionally we specialize further to the regime of
{\em well-separated} times, which corresponds to $t\gg t_{\rm w}\gg 1$, i.e.\
the asymptotic behaviour of the aging limit for $x\gg 1$.
To illustrate the difference, consider which wavevectors dominate the
integral $\int(dq)\, C_\mathbf{q}(t,t_{\rm w})$. In the long-time limit at equal times
$t=t_{\rm w}$, the scaling $C_\mathbf{q}(t_{\rm w},t_{\rm w}) \sim 1/\omega$ for $\omega\gg
1/t_{\rm w}$ combined with $(dq) \sim d\omega\, \omega^{d/2-1}$ for small
$\omega$ shows that the integral is divergent at the upper end of the
frequency regime $\omega={{\mathcal O}}(t_{\rm w}^{-1})$ for all $d>2$; in other
words, it is always dominated by values of $\omega$ (and therefore
$q$) of ${{\mathcal O}}(1)$. This remains true for two-time correlations, as
long as $t-t_{\rm w}={{\mathcal O}}(1)$. In the aging limit, however, we have
$t-t_{\rm w}={{\mathcal O}}(t_{\rm w})\gg 1$ and the exponential factor from $R_\mathbf{q}$
in\eq{C_twotime} then ensures that only values of
$\omega<(t-t_{\rm w})^{-1}={{\mathcal O}}(t_{\rm w}^{-1})$ have to be considered in the
integral.
\section{Fluctuation-dissipation relations for finite-range
observables}
\label{sec:finite-range}
In this section we consider FD relations for observables that probe
correlations over a lengthscale that can be much larger than the
lattice spacing, but remains much smaller than the system size. The
latter can then be taken to infinity independently, so that the
${{\mathcal O}}(N^{-1/2})$-fluctuations of the Lagrange multiplier $z$ become irrelevant.
We begin by considering briefly spin observables, and then discuss
bond observables in some more detail.
\subsection{Spin observables}
\label{sec:spin_observables}
Since all observables that are linear in the spins can be written as
superpositions of the Fourier modes $S_\mathbf{q}$, the basic ingredient for
understanding the FD behaviour is the FDR for the
latter. Using\eq{Cqtt_deriv}, this follows after a couple of lines as
($C'\equiv \partial_{t_{\rm w}} C$)
\begin{equation}
X_\mathbf{q}(t,t_{\rm w}) = \frac{TR_\mathbf{q}(t,t_{\rm w})}{C'_\mathbf{q}(t,t_{\rm w})} =
T\left[2T - \left(\omega + \frac{g'(t_{\rm w})}{2g(t_{\rm w})}\right)
C_\mathbf{q}(t_{\rm w},t_{\rm w}) \right]^{-1}
\label{Xq}
\end{equation}
This is {\em independent} of the later time $t$, a feature that is
commonly observed in simple non-equilibrium models~\cite{CriRit03}.
The fluctuating magnetization is simply $S_\mathbf{0}/N$, so setting
$\mathbf{q}=\mathbf{0}$ in\eq{Xq} gives directly the FDR for the magnetization
\begin{equation}
X_\mathbf{0}(t,t_{\rm w}) = T\left[2T - \frac{g'(t_{\rm w})}{2g^2(t_{\rm w})}
\left(1+2T \int_0^{t_{\rm w}} dt'\, g(t')\right) \right]^{-1}
\end{equation}
As $t_{\rm w}$ increases this converges on an ${{\mathcal O}}(1)$ timescale to the
limit-FDR
\begin{equation}
X^\infty = \frac{T}{2T+(\kappa/2)[2T/(1-\kappa)]} = \frac{1-\kappa}{2-\kappa} =
\left\{\begin{array}{lll}
1/2 & (d>4)\\
1-2/d & (d<4)
\end{array}\right.
\label{X_baseline}
\end{equation}
which is identical to the value obtained from the local
magnetization~\cite{GodLuc00b} as one would expect on general
grounds. Without working out the susceptibility explicitly, it is
clear from the $t$-independence of $X_\mathbf{0}(t,t_{\rm w})$ and its fast
convergence to $X^\infty$ that the limiting FD plot is a straight
line. Both of these observations are exactly as in the Ising model in
$d=1$~\cite{MayBerGarSol03}. Simulations have shown that also in the
$d=2$ Ising case the local--global correspondence holds for spin
observables; the limiting FD plot is numerically indistinguishable
from a straight line, though renormalization group arguments suggest
that it should deviate
slightly~\cite{MayBerGarSol04,CalGam02,CalGam05}.
We should clarify that the Gaussian theory above applies directly not
to the FDR for $S_\mathbf{0}$ but to the one for $S_\mathbf{q}$ with $q\ll
t_{\rm w}^{-1/2}$ but $q\gg L^{-1}$, where $L=N^{1/d}$ is the linear system
size. The corresponding physical observable is a ``block''
magnetization, i.e.\ the average of the spins within a block of size
$\ell\sim 1/q$ much larger than the time-dependent correlation length
$\sim t_{\rm w}^{1/2}$ but still small compared to the overall system
size. For $\mathbf{q}=\mathbf{0}$, i.e.\ $\ell=L$, one would in principle need to
account for the non-Gaussian fluctuations. However, it turns out that
these are negligible as long as the system is not magnetized on
average (see Sec.~\ref{sec:magnetized}), so that the above results
remain correct even for the global magnetization itself.
More generally, the FDR for any finite-range spin observable can be
expressed as a superposition of those for the Fourier modes; this can
be seen by arguments parallelling those in the $d=1$ Ising
case~\cite{MayBerGarSol03}. As there, one can then show that the
asymptotic FDR that is approached for well-separated times $t\gg t_{\rm w}\gg
1$ is dominated by the
contribution from $\mathbf{q}=\mathbf{0}$, and hence identical to $X^\infty$
calculated above~\cite{CalGam05}. At equal times, on the other hand,
equilibrated
modes with $q={{\mathcal O}}(1)$ dominate and give $X=1$. The crossover
between these two regimes takes place when $t-t_{\rm w}={{\mathcal O}}(t_{\rm w})$ and
follows (by superposition) from the corresponding crossover at
$q={{\mathcal O}}(t_{\rm w}^{-1})$ in the Fourier mode FDRs. From\eq{C_scaling}
and\eq{Xq} the latter can be expressed as
\begin{equation}
\fl X_\mathbf{q}(t,t_{\rm w}) = \sc{X}(\omega t_{\rm w}), \quad \sc{X}^{-1}(w) = 2-(2w-\kappa)
\int_0^1 dy\,y^{-\kappa} e^{-2w(1-y)}
\label{Xq_scaling}
\end{equation}
in the long-time limit, providing the expected interpolation between
$X=X^\infty=1/[2+\kappa/(1-\kappa)]$ for $w\to 0$ and $X=1$ for
$w\to\infty$.
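The interpolation\eq{Xq_scaling} is again easy to check numerically. The following sketch (Python, SciPy assumed; same endpoint substitution as for $\sc{C}$) confirms $X\to X^\infty = 1/3$ for $d=3$ ($\kappa=1/2$) as $w\to 0$ and $X\to 1$ as $w\to\infty$.

```python
import numpy as np
from scipy.integrate import quad

def X_scaling(w, kappa):
    # eq. (Xq_scaling): 1/scX(w) = 2 - (2w - kappa) int_0^1 y^{-kappa} e^{-2w(1-y)} dy,
    # evaluated with the substitution y = u^{1/(1-kappa)}
    p = 1.0 / (1.0 - kappa)
    I, _ = quad(lambda u: p * np.exp(-2.0 * w * (1.0 - u ** p)), 0.0, 1.0)
    return 1.0 / (2.0 - (2.0 * w - kappa) * I)

kappa = 0.5                       # d = 3
print(X_scaling(1e-8, kappa))     # -> X^infty = (1 - kappa)/(2 - kappa) = 1/3
print(X_scaling(50.0, kappa))     # -> 1 (equilibrated modes)
```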
\subsection{Bond observables}
\label{sec:bond_observables}
Next we consider bond energy observables, $\frac{1}{2}(S_i-S_j)^2$, where
$i$ and $j$ are n.n.\ sites. Since all variables are Gaussian, the
connected correlations follow by Wick's theorem. For the correlation
of bond energies one gets
\begin{eqnarray}
\fl C_{ij,kl}(t,t_{\rm w}) &=& 2\frac{1}{4}\left\langle
[S_i(t)-S_j(t)][S_k(t_{\rm w})-S_l(t_{\rm w})]\right\rangle^2
= \frac{1}{2}\left[C_{ik}-C_{il}-C_{jk}+C_{jl}\right]^2
\label{C_bond}
\end{eqnarray}
where time arguments have been
left implicit. For the local case $(i,j)=(k,l)$, this simplifies to
\begin{equation}
C_{ij,ij} = 2[1-C_{ij}]^2
\label{C_bond_local}
\end{equation}
which tends to a nonzero constant for $t=t_{\rm w}\to\infty$ since $C_{ij}$
then approaches its equilibrium value, which is $<1$.
Next we turn to the response function. In general, if one perturbs the
Hamiltonian $H$ by $-hB\delta(t-t_{\rm w})$, then the equation of motion for
$S_i$ acquires an extra term $h(\partial B/\partial S_i)\delta(t-t_{\rm w})$. So
the perturbation in $S_i$ is
\begin{equation}
\delta S_i(t) = h\sum_j R_{ij}(t,t_{\rm w})\frac{\partial B}{\partial S_j}(t_{\rm w})
\end{equation}
Thus the perturbation of an observable $A$ is
\begin{equation} \delta A(t) = h\sum_{ij} \frac{\partial A}{\partial S_i}(t)
R_{ij}(t,t_{\rm w})\frac{\partial B}{\partial S_j}(t_{\rm w}) \end{equation}
giving the response function~\cite{CalGam04}
\begin{equation}
R_{AB}(t,t_{\rm w}) = \sum_{ij} R_{ij}(t,t_{\rm w}) \left\langle \frac{\partial
A}{\partial S_i}(t)\frac{\partial B}{\partial S_j}(t_{\rm w})\right\rangle
\end{equation}
For $A=\frac{1}{2}(S_i-S_j)^2$, $B=\frac{1}{2}(S_k-S_l)^2$ this yields
\begin{equation}
R_{ij,kl}(t,t_{\rm w}) =
[R_{ik}-R_{il}-R_{jk}+R_{jl}][C_{ik}-C_{il}-C_{jk}+C_{jl}]
\end{equation}
We now analyse the scaling of correlation, response and the resulting
FDR. In terms of $C_\mathbf{q}(t,t_{\rm w})$, the bond correlation\eq{C_bond} is
\begin{equation}
\fl C_{ij,kl}(t,t_{\rm w}) = \frac{1}{2}\left[\int(dq)\, C_\mathbf{q}
\left(
e^{i\mathbf{q}\cdot(\mathbf{r}_i-\mathbf{r}_k)}
-e^{i\mathbf{q}\cdot(\mathbf{r}_i-\mathbf{r}_l)}
-e^{i\mathbf{q}\cdot(\mathbf{r}_j-\mathbf{r}_k)}
+e^{i\mathbf{q}\cdot(\mathbf{r}_j-\mathbf{r}_l)}
\right)
\right]^2
\label{C_bond_ijkl}
\end{equation}
We can take out a factor $\exp(i\mathbf{q}\cdot\Delta\mathbf{r})$ from all the
exponentials, where $\Delta\mathbf{r} = \frac{1}{2}(\mathbf{r}_i+\mathbf{r}_j)-\frac{1}{2}(\mathbf{r}_k+\mathbf{r}_l)$ is
the distance vector between the bond midpoints. In the remaining
exponentials, $\mathbf{q}$ is multiplied by vectors with lengths of order
unity.
Now assume $t-t_{\rm w}\gg 1$. As explained above, integrals of two-time
quantities over $\mathbf{q}$ are then dominated by the small-$q$ regime,
$q^2\approx \omega<(t-t_{\rm w})^{-1}$. We can therefore Taylor expand in
$\mathbf{q}$ and get, using the equivalence of the $d$ lattice directions,
\begin{equation}
C_{ij,kl}
= \frac{1}{2}\left[d^{-1}(\mathbf{r}_i-\mathbf{r}_j)\cdot
(\mathbf{r}_k-\mathbf{r}_l) \int(dq)\, C_\mathbf{q} q^2 e^{i\mathbf{q}\cdot \Delta\mathbf{r}}\right]^2
\label{C_bond_Taylor}
\end{equation}
Similarly one finds for the response
\begin{equation}
\fl R_{ij,kl}
= \left[d^{-1}(\mathbf{r}_i-\mathbf{r}_j)\cdot
(\mathbf{r}_k-\mathbf{r}_l)\right]^2 \int(dq)\, R_\mathbf{q} q^2 e^{i\mathbf{q}\cdot \Delta\mathbf{r}}
\int(dq)\, C_\mathbf{q} q^2 e^{i\mathbf{q}\cdot \Delta\mathbf{r}}
\end{equation}
For the {\em local} bond-bond correlation and response one sets
$\Delta\mathbf{r}=\mathbf{0}$ and has $(\mathbf{r}_i-\mathbf{r}_j)\cdot (\mathbf{r}_i-\mathbf{r}_j)=1$, which gives
for the FDR
\begin{equation}
X_{\rm bond}^{\rm loc}(t,t_{\rm w}) = \frac{\int(dq)\, TR_\mathbf{q} q^2} {\int(dq)\, C'_\mathbf{q} q^2}
= \frac{\int(dq)\, R_\mathbf{q} q^2} {\int(dq)\, X^{-1}_\mathbf{q} R_\mathbf{q} q^2 }
\label{X_local_bond}
\end{equation}
So $1/X_{\rm bond}^{\rm loc}(t,t_{\rm w})$ can be thought of as an average of
$X_\mathbf{q}^{-1}(t,t_{\rm w})$ over $\mathbf{q}$, with the weight $R_\mathbf{q}(t,t_{\rm w})
q^2$. The factor $R_\mathbf{q}$ ensures that significant contributions
come only from wavevectors $\mathbf{q}$ up to length $q\sim
(t-t_{\rm w})^{-1/2}$, i.e.\ up to $\omega(t-t_{\rm w})\approx 1$. Thus, when
$t-t_{\rm w}\ll t_{\rm w}$, the result is dominated by the regime $\omega t_{\rm w}\gg
1$, where $X_\mathbf{q} = 1$. For $t-t_{\rm w}\gg t_{\rm w}$, meanwhile, one only gets
contributions from $\omega t_{\rm w}\ll 1$, where $X_\mathbf{q} =X^\infty$. So the
FDR\eq{X_local_bond} for local bond observables is a scaling function
interpolating between $1$ and $X^\infty$, with the same $X^\infty$ as
for the magnetization. Explicitly one has, using\eq{Rq} and changing
integration variable from $\mathbf{q}$ to $w=\omega t_{\rm w}$,
\begin{equation}
X_{\rm bond}^{\rm loc}(t,t_{\rm w}) = \frac{\int_0^{\infty} dw\, w^{d/2} e^{-(x-1)w}}
{\int_0^{\infty} dw\, w^{d/2} e^{-(x-1)w} \sc{X}^{-1}(w)}
\label{X_bond_local}
\end{equation}
with $x=t/t_{\rm w}$ and $\sc{X}$ the scaling form\eq{Xq_scaling} of
$X_\mathbf{q}$. To find the shape of the FD plot, recall that the equal-time
value of the local-bond correlation\eq{C_bond_local} is a constant in
the long-time limit. For $t-t_{\rm w}\gg 1$, on the other hand,
eq.\eq{C_bond_Taylor} shows that $C_{\rm bond}^{\rm loc}$ scales as
\begin{eqnarray}
C_{\rm bond}^{\rm loc}(t,t_{\rm w})
&\sim& \frac{g(t_{\rm w})}{g(t)}\left[\int d\omega\,
\omega^{d/2-1}\frac{T}{\omega}\sc{C}(\omega t_{\rm w}) e^{-\omega(t-t_{\rm w})}
\omega\right]^2 \\
&\sim& \frac{g(t_{\rm w})}{g(t)} t_{\rm w}^{-d}\left[\int dw\,
w^{d/2-1}\sc{C}(w) e^{-w(t-t_{\rm w})/t_{\rm w}}\right]^2
\end{eqnarray}
Since $\sc{C}(w)\to 1$ for $w\to\infty$, the $w$-integral would be
divergent without the exponential cutoff and scales as
$[(t-t_{\rm w})/t_{\rm w}]^{-d/2}$ for $t-t_{\rm w}\ll t_{\rm w}$, so that $C_{\rm bond}^{\rm loc}(t,t_{\rm w}) \sim
(t-t_{\rm w})^{-d}$ in this regime. The regime $t-t_{\rm w}>t_{\rm w}$ where
$X_{\rm bond}^{\rm loc}(t,t_{\rm w})\neq 1$ is therefore compressed into the region where
$C_{\rm bond}^{\rm loc}$ is of order $t_{\rm w}^{-d}$, so that the long-time limit of the
FD plot is a straight line with equilibrium slope. Qualitatively one
thus has the same behaviour as for local bond observables in the Ising
model~\cite{MayBerGarSol03}.
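To illustrate\eq{X_bond_local} quantitatively, one can evaluate the two $w$-integrals numerically after substituting $v=(x-1)w$, which turns the numerator into a Gamma function. The sketch below (Python, SciPy assumed) verifies the interpolation between $1$ and $X^\infty=1/3$ for $d=3$; the cutoff $w>20$, beyond which $\sc{X}(w)=1$ up to ${{\mathcal O}}(w^{-2})$ corrections, is a numerical convenience.

```python
import numpy as np
from scipy.integrate import quad
from math import gamma

d, kappa = 3, 0.5

def scX_inv(w):
    # inverse of the Fourier-mode scaling FDR, eq. (Xq_scaling)
    if w > 20.0:
        return 1.0            # equilibrated modes; corrections are O(1/w^2)
    p = 1.0 / (1.0 - kappa)
    I, _ = quad(lambda u: p * np.exp(-2.0 * w * (1.0 - u ** p)), 0.0, 1.0)
    return 2.0 - (2.0 * w - kappa) * I

def X_bond_loc(x):
    # eq. (X_bond_local) with v = (x - 1) w; the numerator is Gamma(d/2 + 1)
    den, _ = quad(lambda v: v ** (d / 2) * np.exp(-v) * scX_inv(v / (x - 1.0)),
                  0.0, np.inf)
    return gamma(d / 2 + 1) / den

print(X_bond_loc(1.001))   # quasi-equilibrium regime: close to 1
print(X_bond_loc(1000.0))  # well-separated times: close to X^infty = 1/3
```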
Next consider {\em long-range} bond observables, where we sum $(ij)$ and $(kl)$
over all bonds. The same proviso as above for the magnetization
applies here, i.e.\ by applying the Gaussian theory we are effectively
considering the bond energies averaged over a block that is large but has to
remain nonetheless small compared to the system size. One can show that the
resulting equal-time correlation again approaches a constant value for
$t_{\rm w}\to\infty$. (This follows because for large $\Delta\mathbf{r}$, one can use the
small $q$-expansion\eq{C_bond_Taylor} even for equal times. From
$C_\mathbf{q}(t_{\rm w},t_{\rm w}) \approx T/q^2$ one gets $C(\Delta\mathbf{r};t_{\rm w},t_{\rm w})\sim
|\Delta\mathbf{r}|^{2-d}$ for large $\Delta\mathbf{r}$ and so $\int(dq)\, C_\mathbf{q} q^2 e^{i\mathbf{q}\cdot\Delta\mathbf{r}}
\sim \nabla^2 |\Delta\mathbf{r}|^{2-d} \sim |\Delta\mathbf{r}|^{-d}$. The square $|\Delta\mathbf{r}|^{-2d}$
then yields a convergent sum over $\Delta\mathbf{r}$.) So we focus directly on the
regime $t-t_{\rm w}\gg 1$, where the expansion\eq{C_bond_Taylor} is again
valid. Keeping the bond $(ij)$ fixed, the scalar product
$(\mathbf{r}_i-\mathbf{r}_j)\cdot (\mathbf{r}_k-\mathbf{r}_l)$ means that only bonds $(kl)$
parallel to $(ij)$ contribute, so that the sum over $(kl)$ becomes a
sum over $\Delta\mathbf{r}$, running over all lattice vectors. (For non-parallel
bonds, $\Delta\mathbf{r}$ could also assume other values not corresponding to
lattice vectors.) The sum over $(ij)$ then just gives an overall
factor of $Nd$. Normalizing by $N$, the block bond correlation function is
\begin{equation}
C_{\rm bond}^{\rm block}(t,t_{\rm w}) = \frac{1}{2d}\sum_{\Delta\mathbf{r}} \left[
\int(dq)\, C_\mathbf{q} q^2 e^{i\mathbf{q}\cdot \Delta\mathbf{r}}\right]^2
= \frac{1}{2d} \int(dq)\, C^2_{\mathbf{q}} q^4
\label{C_block_bond}
\end{equation}
and similar arguments give for the (normalized) response
\begin{equation}
R_{\rm bond}^{\rm block}(t,t_{\rm w}) = \frac{1}{d} \int(dq)\, R_\mathbf{q} C_\mathbf{q} q^4
\end{equation}
so that the FDR becomes
\begin{equation}
X_{\rm bond}^{\rm block}(t,t_{\rm w}) = \frac{\int(dq)\, TR_\mathbf{q} C_\mathbf{q} q^4} {\int(dq)\,
C'_\mathbf{q} C_\mathbf{q} q^4}
= \frac{\int(dq)\, R_\mathbf{q} C_\mathbf{q} q^4} {\int(dq)\, X^{-1}_\mathbf{q} R_\mathbf{q} C_\mathbf{q} q^4 }
\label{X_bond_long-range}
\end{equation}
Again, this is the inverse of a weighted average of $X^{-1}_\mathbf{q}$, now
with weight $R_\mathbf{q} C_\mathbf{q} q^4$. The same arguments as
for\eq{X_local_bond} then show that $X_{\rm bond}^{\rm block}(t,t_{\rm w})$ scales with
$x=t/t_{\rm w}$ and interpolates between $X=1$ for $x=1$ and
$X=X^\infty$ for $x\to\infty$. The value of the
correlator\eq{C_block_bond} decays from ${{\mathcal O}}(1)$ at $t=t_{\rm w}$ to
${{\mathcal O}}(t_{\rm w}^{-d/2})$ at the point $x-1\approx 1$ where aging effects
appear. While this is larger than for the local bond observables, it
still decreases to zero for $t_{\rm w}\to\infty$, so that the limiting FD
plot is again of pseudo-equilibrium form. This is different to the
case of the Ising model, where the global bond observables give
nontrivial limiting FD plots~\cite{MayBerGarSol03}.
In more detail, the scaling of the block bond correlator\eq{C_block_bond}
in the aging regime $t-t_{\rm w}\gg 1$ is
\begin{eqnarray}
C_{\rm bond}^{\rm block}(t,t_{\rm w})
&\sim& \frac{g(t_{\rm w})}{g(t)}
\int\!d\omega\, \omega^{d/2-1} \frac{T_{\rm c}^2}{\omega^2}
\sc{C}^2(\omega t_{\rm w}) e^{-2\omega(t-t_{\rm w})}\omega^2
\\
&\sim& \left(\frac{t}{t_{\rm w}}\right)^{\kappa} t_{\rm w}^{-d/2}
\int\!dw \,w^{d/2-1} \sc{C}^2(w) e^{-2(x-1)w}
\end{eqnarray}
The integral scales as $(x-1)^{-d/2}$ for $x\approx 1$, so
$C_{\rm bond}^{\rm block}(t,t_{\rm w}) \sim (t-t_{\rm w})^{-d/2}$ there. For $x\gg 1$, on the other
hand, the integral becomes $\sim x^{-(d+4)/2}$, so $C_{\rm bond}^{\rm block}(t,t_{\rm w})\sim
t_{\rm w}^{-\kappa+2} t^{\kappa-(d+4)/2}$. Explicitly, in this $t\gg t_{\rm w}$ regime,
$C_{\rm bond}^{\rm block} \sim t_{\rm w}^2 t^{-(d+4)/2}$ for $d>4$, and $C_{\rm bond}^{\rm block} \sim
t_{\rm w}^{d/2} t^{-d}$ for $d<4$. The response function scales in the same
way as $\partial_{t_{\rm w}}C_{\rm bond}^{\rm block}$, because $X$ is everywhere of order unity.
\subsection{Product observables}
\label{sec:product_observables}
Instead of the bond observables $\frac{1}{2}(S_i-S_j)^2$ we could consider
the spin products $A=S_i S_j$, $B=S_k S_l$. The correlations are then
\begin{eqnarray}
\fl C_{ij,kl}(t,t_{\rm w})
&=& \left\langle S_i(t)S_j(t)S_k(t_{\rm w})S_l(t_{\rm w})\right\rangle - \left\langle
S_i(t)S_j(t)\right\rangle \left\langle S_k(t_{\rm w})S_l(t_{\rm w})\right\rangle\\
\fl &=& C_{ik}(t,t_{\rm w})C_{jl}(t,t_{\rm w}) +
C_{il}(t,t_{\rm w})C_{jk}(t,t_{\rm w})
\label{C_prod}
\end{eqnarray}
The local equal-time correlation function $C_{ij,ij}(t_{\rm w},t_{\rm w})$ thus
approaches $2$ for $t_{\rm w}\to\infty$. The corresponding response function
is
\begin{equation}
R_{ij,kl}(t,t_{\rm w}) = R_{ik}C_{jl} + R_{il}C_{jk} + R_{jk}C_{il} + R_{jl}C_{ik}
\end{equation}
In the {\em local} case, one can replace all
functions by local ones in the aging regime -- there are no
cancellations leading to extra
factors of $q^2$ as was the case for bond observables, compare
e.g.\eq{C_bond_ijkl} and\eq{C_bond_Taylor} -- so that the FDR
\begin{equation}
X_{\rm prod}^{\rm loc}(t,t_{\rm w}) = \frac{4T_{\rm c} R_{ii}C_{ii}}{4C'_{ii}C_{ii}}
= \frac{T_{\rm c} R_{ii}}{C'_{ii}}
\label{X_prod_local}
\end{equation}
becomes identical to the one for the spin autocorrelation and
response. In particular, one again gets the same $X^\infty$.
For the global (block) case, we can write
\begin{eqnarray}
\fl C_{\rm prod}^{\rm block}(t,t_{\rm w}) &=& \frac{1}{N}\sum_{(ij),(kl)}C_{ij,kl}(t,t_{\rm w})
\\
\fl &=& \frac{1}{4N}\sum_{ijkl} n_{ij} n_{kl} \left[
C_{ik}(t,t_{\rm w})C_{jl}(t,t_{\rm w}) +
C_{il}(t,t_{\rm w})C_{jk}(t,t_{\rm w})\right]
\\
\fl &=& \frac{1}{2N}\sum_{ijkl} n_{ij} n_{kl}
C_{ik}(t,t_{\rm w})C_{jl}(t,t_{\rm w})
= \frac{1}{2} \int(dq)\, n^2_\mathbf{q} C_\mathbf{q}^2(t,t_{\rm w})
\label{C_global_product}
\end{eqnarray}
where $n_{ij}=1$ if $i$ and $j$ are nearest neighbours and 0
otherwise, and $n_\mathbf{q}$ is its Fourier transform. For the response
one has similarly
\begin{equation}
R_{\rm prod}^{\rm block}(t,t_{\rm w}) = \int(dq)\, n^2_\mathbf{q} R_\mathbf{q}(t,t_{\rm w}) C_\mathbf{q}(t,t_{\rm w})
\end{equation}
In the aging regime, where $t-t_{\rm w}\gg 1$, the integrals are dominated
by small $q$, where $n_\mathbf{q}$ can be approximated by the constant
$n_\mathbf{0}=2d$. This cancels from the FDR, which becomes
\begin{equation}
X_{\rm prod}^{\rm block}(t,t_{\rm w}) = \frac{\int(dq)\, T_{\rm c} R_\mathbf{q} C_\mathbf{q}}{\int(dq)\, C'_\mathbf{q} C_\mathbf{q}}
\label{Xbl_large_d}
\end{equation}
This is the inverse of the average of $X_\mathbf{q}^{-1}$ with weight $R_\mathbf{q}
C_\mathbf{q}$. Again, this is a scaling function of $t/t_{\rm w}$ interpolating
between $1$ and the same $X^\infty$ as for spin observables.
The scaling of the block product correlation function\eq{C_global_product}
itself is a little more complicated than for the bond observables and
depends on dimensionality. Focussing again on $t-t_{\rm w}\gg 1$ one has
$C_{\rm prod}^{\rm block}(t,t_{\rm w}) \approx 2d^2 \int(dq)\, C_\mathbf{q}^2(t,t_{\rm w})$. The integral defines
the function $CC(t,t_{\rm w})$ discussed in Sec.~\ref{sec:noneq_large_d}
for $d>4$ and Sec.~\ref{sec:noneq_small_d} for $d<4$. In the former
case, one has from (\ref{CC_decomp}--\ref{CC_scaling}) that
$CC(t,t_{\rm w}) = CC\eql(t-t_{\rm w})\sc{CC}(t/t_{\rm w})$ where $CC\eql(t-t_{\rm w})\sim
(t-t_{\rm w})^{(4-d)/2}$ asymptotically; this equilibrium contribution
governs the behaviour of $C_{\rm prod}^{\rm block}(t,t_{\rm w})$ for $t-t_{\rm w}\ll t_{\rm w}$. Where
aging effects appear ($t-t_{\rm w} \sim t_{\rm w}$), $C_{\rm prod}^{\rm block}\sim t_{\rm w}^{(4-d)/2}$
and so one gets a limiting pseudo-equilibrium FD plot. In the regime
of well-separated times $x\gg 1$, the scaling function $\sc{CC}(x)$ decays as
$x^{-2}$ so that $C_{\rm prod}^{\rm block}(t,t_{\rm w}) \sim t_{\rm w}^{2} t^{-d/2}$. These
scalings, though not the overall magnitude of $C_{\rm prod}^{\rm block}$, are the same as
for the energy correlation function $C_E$ in\eq{C4_final} below: both
functions are proportional to $CC(t,t_{\rm w})$ in the aging regime.
In the opposite case $d<4$, the equal-time value $CC(t,t)$ (and
therefore $C_{\rm prod}^{\rm block}(t,t)$) diverges as $t^{(4-d)/2}$,
see\eq{CC_equal_time}. The normalized correlator $CC(t,t_{\rm w})/CC(t,t)$
is a scaling function ${\mathcal{G}}(x)$ of $x=t/t_{\rm w}$, implying that the
normalized FD plot will approach a nontrivial limit form, with
asymptotic slope $X^\infty$ as shown above. Quantitatively, because
${\mathcal{G}}(x)\sim x^{-d/2}$ for $x\gg 1$, one has $C_{\rm prod}^{\rm block}(t,t_{\rm w}) \sim
t^{(4-d)/2}(t_{\rm w}/t)^{d/2} \sim t_{\rm w}^{d/2} t^{2-d}$ for $t\gg t_{\rm w}$.
\section{Correlation and response for global observables}
\label{sec:setup}
We now ask what happens if we go from block observables to
truly global ones, which reflect properties averaged over the entire
system; the total energy is an important example. We anticipate that
here non-Gaussian fluctuations are important. Indeed, the results
above show that this must be the case. Otherwise we could directly extend
the Gaussian theory results from block to global observables, with no
change to correlation and response functions. The global {\em bond}
observable is just the energy. Using the spherical constraint, this
can be written as
\begin{equation}
E=\sum_{(ij)}\frac{1}{2}(S_i-S_j)^2=dN-\sum_{(ij)}S_i S_j
\end{equation}
and so is identical to the global {\em spin product} observable, up to
a trivial additive constant and sign. So the global bond and product
observables must have identical correlation and response functions;
but we saw above that this requirement is not satisfied by the
Gaussian theory. Thus, non-Gaussian fluctuations are essential to get
correct results for global observables.
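The identity is easily checked numerically. The following Python sketch (NumPy assumed) draws a random configuration obeying the spherical constraint on a periodic $d=2$ lattice and confirms that the bond energy equals $dN-\sum_{(ij)}S_iS_j$, each site belonging to $2d$ nearest-neighbour bonds.

```python
import numpy as np

rng = np.random.default_rng(1)
L, d = 16, 2
N = L ** d

S = rng.standard_normal((L, L))
S *= np.sqrt(N / np.sum(S ** 2))        # spherical constraint: sum_i S_i^2 = N

E_bond, SS_sum = 0.0, 0.0
for axis in range(d):                   # one bond per site per lattice direction
    Sn = np.roll(S, -1, axis=axis)      # periodic nearest neighbour along +axis
    E_bond += 0.5 * np.sum((S - Sn) ** 2)
    SS_sum += np.sum(S * Sn)

print(E_bond, d * N - SS_sum)           # equal, by the spherical constraint
```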
Physically, the origin of the distinction between block observables
and global ones is the effective infinite-range interaction induced by the
spherical constraint. In a model with short-range interactions, block
observables will show identical behaviour to global ones whenever the
block size $\ell$ is larger than any correlation length in the system,
whether or not $\ell\ll L$: the behaviour of any large subsystem is
equivalent to that of the system as a whole. In the spherical model,
the infinite-range interaction breaks this connection, and global
correlation and response functions cannot be deduced from those for
block observables.
\subsection{Non-Gaussian fluctuations}
To make progress, we need to return to the original equation of
motion\eq{eqn_motion}. This can be written as
\begin{equation}
\partial_t S_i = -\sum_{j} \Omega_{ij}S_j + \xi_i - (z(t)+N^{-1/2}\Delta z)S_i
\label{eqn_motion2}
\end{equation}
where the notation emphasizes that the fluctuating contribution to the
Lagrange multiplier is of ${{\mathcal O}}(N^{-1/2})$. The latter induces
non-Gaussian fluctuations in the $S_i$ of the same order. This shows
quantitatively why the Gaussian theory works for block observables: as
long as one considers correlations of a number of spins that is $\ll
N$, fluctuations of ${{\mathcal O}}(N^{-1/2})$ can be neglected. For global
observables, on the other hand, we require the correlations of all $N$
spins and the Gaussian approximation then becomes invalid.
To account systematically for non-Gaussian effects we represent the
spins as $S_i = s_i + N^{-1/2} r_i$, where $s_i$ gives the limiting
result for $N\to\infty$, which has purely Gaussian statistics, and
$N^{-1/2}r_i$ is a leading-order fluctuation correction which will be
non-Gaussian. Inserting this decomposition into\eq{eqn_motion2} and
collecting terms of ${{\mathcal O}}(1)$ and ${{\mathcal O}}(N^{-1/2})$ gives $\partial_t s_i =
-\Omega_{ij} s_j -z(t) s_i + \xi_i$ as expected; to lighten the
notation we use the summation convention for repeated indices from now
on. For the non-Gaussian corrections one gets the equation of motion
\begin{equation}
\partial_t r_i = -\Omega_{ij} r_j -z(t)r_i - \Delta z\, s_i
\end{equation}
with solution
\begin{equation}
r_i(t) = R_{ij}(t,0)r_j(0) - \int_0^t dt'\, R_{ij}(t,t')s_j(t')\Delta z(t')
\label{yt}
\end{equation}
The properties of $\Delta z(t')$ can now be determined from the requirement
that, due to the spherical constraint, $N^{-1}\sum_i S^2_i(t)=1$ at
all times. Inserting $S_i = s_i + N^{-1/2} r_i$ and expanding to the
leading order in $N^{-1/2}$ gives the condition
\begin{equation}
\frac{1}{N} \sum_i s_i(t)r_i(t) = -\frac{1}{2} N^{-1/2} \sum_i (s_i^2(t)-1)
\equiv -\frac{1}{2} \Delta(t)
\label{norm_condition}
\end{equation}
where the last equality defines $\Delta(t)$, a fluctuating quantity of
${{\mathcal O}}(1)$ that describes the (normalized) fluctuations of the
squared length of the Gaussian spin vector $s_i$. At $t=0$, the
condition\eq{norm_condition} is solved
to leading order by setting $r_i(0) = -\frac{1}{2} \Delta(0)s_i(0)$, since
$(1/N) \sum_i s_i^2(0)=1+{{\mathcal O}}(N^{-1/2})$. With this assignment, and
setting $a(t)=2\Delta z(t)+\Delta(0)\delta(t)$, eq.\eq{yt} reads
\begin{equation}
r_i(t) = - \frac{1}{2} \int dt'\, R_{ij}(t,t')s_j(t')a(t')
\label{yt2}
\end{equation}
We have left the integral limits unspecified here: the factor $R_{ij}$
automatically enforces $t'<t$, and we use the convention $a(t')=0$ for
$t'<0$. The spherical constraint condition\eq{norm_condition} then becomes
\begin{equation}
\int dt'\, \frac{1}{N} s_i(t)R_{ij}(t,t')s_j(t')a(t')
= \Delta(t)
\label{norm_condition2}
\end{equation}
Now, up to fluctuations of ${{\mathcal O}}(N^{-1/2})$ which are negligible to
leading order (even if they are correlated with $a(t')$), we can
replace $(1/N) s_i(t)R_{ij}(t,t')s_j(t')$ by its average
\begin{equation}
\fl K(t,t') \equiv \frac{1}{N} \left\langle s_i(t)R_{ij}(t,t')s_j(t')\right\rangle =
\frac{1}{N} R_{ij}(t,t')C_{ij}(t,t')
= \int(dq)\, R_\mathbf{q}(t,t')C_\mathbf{q}(t,t')
\label{p_def}
\end{equation}
If we then define the inverse operator, $L$, of $K$ via
\begin{equation}
\int dt'\, K(t,t')L(t',t_{\rm w}) = \delta(t-t_{\rm w})
\label{pinv_def}
\end{equation}
for $t_{\rm w}\geq 0$, then the solution to\eq{norm_condition2} is
\begin{equation}
a(t) = \int dt'\, L(t,t')\Delta(t')
\end{equation}
where for consistency we adopt the convention $\Delta(t')=0$ for
$t'<0$. With\eq{yt2} we then get an explicit expression for the
non-Gaussian ${{\mathcal O}}(N^{-1/2})$-corrections to the spins,
\begin{equation}
r_i(t) = - \frac{1}{2} \int dt' dt''\, R_{ij}(t,t')s_j(t')L(t',t'')\Delta(t'')
\label{yt3}
\end{equation}
in terms of the properties of the uncorrected Gaussian spins $s_i$.
\subsection{The functions $K$ and $L$}
\label{sec:KandL}
Before proceeding, we analyse the properties of $K$ and
$L$. From\eq{p_def}, $K(t,t')$ vanishes for $t<t'$ while its
limit value for $t\to t'^{+}$ is
$(1/N)\delta_{ij}C_{ij}(t,t)=(1/N)C_{ii}(t,t)=1$. Inserting\eq{Rq},
\eq{Cqtt_deriv} and\eq{C_twotime} into\eq{p_def}, one also finds that
the equal-time slope has the simple value $\left.\partial_{t'}
K(t,t')\right|_{t=t'^{+}}=2T$. From these properties and the
definition\eq{pinv_def} it follows that
\begin{equation}
\fl L(t,t') = \delta'(t-t') + L^{(1)}(t,t'), \qquad
L^{(1)}(t,t') = 2T\delta(t-t') - L^{(2)}(t,t')
\label{pinv_structure}
\end{equation}
where $L^{(2)}(t,t')$ vanishes for $t<t'$ and jumps to a finite value at
$t=t'^{+}$; otherwise it is smooth and, as we will later see,
positive. The structure of\eq{pinv_structure} can be easily verified
e.g.\ for the limit of equilibrium at high temperature $T$, where
$z\eql=T$ and all $\omega$ can be neglected compared to $z\eql$. One then
has $K(t,t')=\exp(-2T(t-t'))$ and the inverse\eq{pinv_def} can be
calculated by Laplace transform. Since
the Laplace transform of $K(t,t')$ is
$\hatK(s)=1/(s+2T)$ this gives $\hat{L}(s)=s+2T$, which
corresponds to\eq{pinv_structure} with $L^{(2)}\equiv 0$.
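This high-temperature example can be cross-checked numerically: with $K(t,t')=\exp(-2T(t-t'))$, the inverse kernel $L=\delta'+2T\delta$ of\eq{pinv_structure} should act on $f(t)=\int_0^t K(t,t')g(t')\,dt'$ so as to recover $g$. The following Python sketch (our own illustration, not part of the derivation; the test function $g$ is arbitrary) verifies this on a grid.

```python
import numpy as np

# High-temperature check of eq. (pinv_structure): for K(t,t') = exp(-2T(t-t'))
# the inverse kernel is L = delta' + 2T*delta, so applying (d/dt + 2T) to
# f(t) = int_0^t K(t,t') g(t') dt' must recover the test function g.
T = 1.3
t = np.linspace(0.0, 5.0, 4001)
dt = t[1] - t[0]
g = np.sin(t)                            # arbitrary smooth test function

# f(t) = exp(-2T t) * int_0^t exp(2T t') g(t') dt'  (cumulative trapezoid)
u = np.exp(2*T*t)*g
F = np.concatenate(([0.0], np.cumsum(0.5*(u[1:] + u[:-1])*dt)))
f = np.exp(-2*T*t)*F

# action of L = delta' + 2T*delta on f
recovered = np.gradient(f, dt) + 2*T*f
err = np.max(np.abs(recovered[2:-2] - g[2:-2]))
```

The residual `err` is pure discretization error and shrinks as the grid is refined, consistent with $L^{(2)}\equiv 0$ in this limit.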
We next determine the long-time forms of $K$ and $L^{(2)}$ for
quenches to criticality. In both cases it is useful to factor out the
equilibrium contribution. For $K$ this is, from\eq{p_def} and
using\eq{Rq} and\eq{C_twotime},
\begin{equation}
K\eql(t-t_{\rm w}) = \int(dq)\, e^{-2\omega(t-t_{\rm w})} \frac{T_{\rm c}}{\omega}
\label{p_eql}
\end{equation}
Apart from the factor of 2 in the time argument, this is just the
(critical) equilibrium spin-spin autocorrelation function. One can
also write $K\eql(t)=2T_{\rm c}\int_t^\infty dt'\, f(t')$ from\eq{f_def} and this
shows that $K\eql(t)\sim t^{(2-d)/2}$ for large time differences. The
ratio $K(t,t_{\rm w})/K\eql(t-t_{\rm w})$ will show deviations from 1 when aging
effects appear, i.e.\ when $t-t_{\rm w}\sim t_{\rm w}$. The form of these
deviations can be worked out by using the scaling form\eq{C_scaling} of
$C_\mathbf{q}(t_{\rm w},t_{\rm w})$ and recalling that only the small $q$-regime
contributes, where $(dq)\sim d\omega\, \omega^{d/2-1}$. Changing
integration variable to $w=\omega t_{\rm w}$ gives
\begin{eqnarray}
K(t,t_{\rm w})&=&K\eql(t-t_{\rm w})\sc{K}(t/t_{\rm w})
\label{p_scaling_general}
\\
\sc{K}(x) &=& x^\kappa \frac{\int dw \, w ^{(d-4)/2} e^{-2(x-1)w
} \sc{C}(w)}
{\int dw \, w ^{(d-4)/2} e^{-2(x-1)w }}
\label{p_scaling}
\end{eqnarray}
where the first factor in\eq{p_scaling} arises from the two factors of
$[g(t_{\rm w})/g(t)]^{1/2}$ contributed by $R_\mathbf{q}(t,t_{\rm w})$ and
$C_\mathbf{q}(t,t_{\rm w})$, respectively. By construction, $\sc{K}(x)$ should
approach $1$ for $x\to 1$; indeed, in this
limit the $w$-integrals are dominated by large values of $w\sim 1/(x-1)$,
for which $\sc{C}=1$. The decay for large $x$ follows from
$\sc{C}(w)\sim w$ for small $w$ as $\sc{K}(x)\sim
x^{\kappa-1}$. Explicitly, one finds by using\eq{C_scaling} and carrying
out the $w$-integrals that
\begin{equation}
\sc{K}(x) = \frac{d-2}{2}(x-1)^{(d-2)/2}x^{\kappa} \int_0^1 dy\,
y^{-\kappa}(x-y)^{-d/2}
\label{p_scaling_fn_y_integral}
\end{equation}
For $d>4$, where $\kappa=0$, this gives
\begin{equation}
\sc{K}(x)=1-\left(\frac{x-1}{x}\right)^{(d-2)/2} \qquad (d>4)
\label{FKd_gt_4}
\end{equation}
while for $d<4$ the required indefinite integral is $[(d-2)/2]\int dy\,
y^{(d-4)/2}(x-y)^{-d/2} = x^{-1}(x/y-1)^{(2-d)/2}$ and one gets simply
\begin{equation}
\sc{K}(x)
=x^{(2-d)/2} \qquad (d<4)
\label{FKd_lt_4}
\end{equation}
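Both closed forms can be cross-checked against the integral representation\eq{p_scaling_fn_y_integral}. A short Python sketch (ours; it assumes the convention $\kappa=(4-d)/2$ for $d<4$, as read off from the integrand above):

```python
import numpy as np
from scipy.integrate import quad

def FK_integral(x, d):
    """Scaling function of K from the y-integral representation.
    Assumes kappa = 0 for d > 4 and kappa = (4-d)/2 for d < 4."""
    kappa = 0.0 if d > 4 else (4.0 - d)/2.0
    # substitute y = u^2 to tame the integrable y^{-kappa} endpoint singularity
    val, _ = quad(lambda u: 2*u * u**(-2*kappa) * (x - u**2)**(-d/2), 0.0, 1.0)
    return (d - 2)/2 * (x - 1)**((d - 2)/2) * x**kappa * val

x = 2.0
fk5 = FK_integral(x, 5.0)            # a d > 4 case
fk3 = FK_integral(x, 3.0)            # a d < 4 case
closed5 = 1 - ((x - 1)/x)**1.5       # eq. (FKd_gt_4) at d = 5
closed3 = x**-0.5                    # eq. (FKd_lt_4) at d = 3
```

The quadrature reproduces both closed forms to high accuracy.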
Next we determine $L^{(2)}$. Combining\eq{pinv_def}
and\eq{pinv_structure}, the defining equation is
\begin{equation}
\int dt'\, K(t,t')L^{(2)}(t',t_{\rm w}) = 2T K(t,t_{\rm w}) -
\partial_{t_{\rm w}}K(t,t_{\rm w})
\label{ptwo_def}
\end{equation}
Again it makes sense to extract the equilibrium part of $L^{(2)}$. This
is defined by
\begin{equation}
\int dt'\, K\eql(t-t')L^{(2)}\eql(t'-t_{\rm w}) = 2T K\eql(\Delta t) + K\eql'(\Delta t)
\label{ptwo_eql_condition}
\end{equation}
where $\Delta t=t-t_{\rm w}$. Solving by Laplace transform gives
\begin{equation}
\hatL^{(2)}\eql(s) =
\frac{2T\hatK\eql(s)+(s\hatK\eql(s)-1)}{\hatK\eql(s)} =
s + 2T - \frac{1}{\hatK\eql(s)}
\label{ptwo_eql}
\end{equation}
where from\eq{p_eql}, at criticality,
\begin{equation}
\hatK\eql(s) = T_{\rm c} \int(dq)\, \frac{1}{\omega(s+2\omega)}
\label{p_eql_LT}
\end{equation}
The leading small-$s$ behaviour of this is
$\hatK\eql(0)-\hatK\eql(s) \sim s^{(d-4)/2}$ for $d>4$ (plus, for
$d>6$, additional analytic terms of integer order in $s$ which are
irrelevant for us). For $d<4$, on the other hand, $\hatK\eql(s) \sim
s^{(d-4)/2}$ is divergent for $s\to 0$. Inserting these scalings
into\eq{ptwo_eql} and inverting the Laplace transform gives for the
asymptotic behaviour of $L^{(2)}\eql$
\begin{equation}
L^{(2)}\eql(t) \sim \left\{ \begin{array}{ll}
t^{(2-d)/2} & \mbox{($d>4$)} \\
t^{(d-6)/2} & \mbox{($d<4$)}
\end{array}\right.
\label{ptwo_eql_asymptotics}
\end{equation}
It will be important below that, for $d>4$, $K\eql(t)$ and
$L^{(2)}\eql(t)$ both decay asymptotically as $t^{(2-d)/2}$. The ratio
between them can be worked out from\eq{ptwo_eql}, by expanding
for small $s$ as $1/\hatK\eql(s) \approx 1/[\hatK\eql(0)-cs^{(d-4)/2}] =
1/\hatK\eql(0) + cs^{(d-4)/2}/\hatK^2\eql(0)$ where $c$ is some
constant; comparing with $\hatK\eql(s) \approx
\hatK\eql(0)-cs^{(d-4)/2}$ gives
\begin{equation}
L^{(2)}\eql(t)=K\eql(t)/\hatK\eql^2(0)
\label{ptwo_p_link}
\end{equation}
for large time differences $t$.
The integral of $L^{(2)}\eql(t)$ over all times follows
from\eq{ptwo_eql} as
\begin{equation}
\fl \hatL^{(2)}\eql(0) = \int_0^\infty dt\,L^{(2)}\eql(t) = \left\{
\begin{array}{ll}
2T_{\rm c} - \hatK\eql^{-1}(0) = 2T_{\rm c}[1-1/\int(dq)\, (T_{\rm c}/\omega)^{2}]
& \mbox{($d>4$)}\\
2T_{\rm c} & \mbox{($d<4$)}
\end{array}
\right.
\label{L2hat0}
\end{equation}
Using the fact that $\int(dq)\, (T_{\rm c}/\omega)=1$, one has $\int(dq)\,
(T_{\rm c}/\omega)^2>1$ so that $\hatL^{(2)}\eql(0)$ is positive independently
of $d$. This is consistent with the intuition that, with the sign as
chosen in\eq{pinv_structure}, the function $L^{(2)}$ is positive.
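This positivity is easy to confirm numerically for $d>4$. The sketch below (ours) assumes the standard nearest-neighbour lattice dispersion $\omega(\mathbf{q})=2\sum_a(1-\cos q_a)$, which is not specified in this section, and estimates the $(dq)$-integrals by Monte Carlo in $d=5$:

```python
import numpy as np

# Monte-Carlo check that L^(2)_eql-hat(0) of eq. (L2hat0) is positive for
# d > 4, via int(dq)(Tc/om)^2 > [int(dq)(Tc/om)]^2 = 1 (Cauchy-Schwarz).
# Assumed dispersion: om(q) = 2 sum_a (1 - cos q_a) on [-pi, pi]^5.
rng = np.random.default_rng(1)
d = 5
q = rng.uniform(-np.pi, np.pi, size=(200_000, d))
om = 2*np.sum(1 - np.cos(q), axis=1)

Tc = 1.0/np.mean(1.0/om)             # fixes int(dq) Tc/om = 1 on the sample
ratio = np.mean((Tc/om)**2)          # estimate of int(dq) (Tc/om)^2
L2hat0 = 2*Tc*(1 - 1.0/ratio)        # eq. (L2hat0) for d > 4
```

Since the sample mean of squares always exceeds the squared sample mean, the estimate `ratio` is strictly above $1$ and `L2hat0` comes out positive, as claimed.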
With the equilibrium part of $L^{(2)}$ determined we make a long-time
scaling ansatz for $L^{(2)}$,
\begin{equation}
L^{(2)}(t,t_{\rm w}) = L^{(2)}\eql(t-t_{\rm w})\sc{L}(t/t_{\rm w})
\label{L_scaling}
\end{equation}
so that\eq{ptwo_def} becomes
\begin{eqnarray}
\fl\lefteqn{\int dt'\,
K\eql(t-t')L^{(2)}\eql(t'-t_{\rm w}) \sc{K}(t/t')
\sc{L}(t'/t_{\rm w}) = }
\nonumber\\
&=&2T_{\rm c} K\eql(\Delta t)\sc{K}(x)+
K\eql'(\Delta t)\sc{K}(x)+
\frac{t}{t_{\rm w}^2}K\eql(\Delta t)\sc{K}'(x)
\label{ptwo_scaling_cond}
\end{eqnarray}
where $\Delta t=t-t_{\rm w}$ and $x=t/t_{\rm w}$ as before. We now take the aging limit
of large
$t_{\rm w}$ with $\Delta t={{\mathcal O}}(t_{\rm w})$ to determine $\sc{L}$. The second and
third terms on the r.h.s.\ are then smaller by factors of order
$1/t_{\rm w}$ than the first, and can be neglected to leading order. The
second term on the r.h.s.\ of\eq{ptwo_eql_condition} is likewise
subdominant, and this can be used to rewrite the dominant first term
on the r.h.s.\ of\eq{ptwo_scaling_cond}, giving
\begin{eqnarray}
\fl \sc{K}\left(x\right)
&=& \frac{\int dt'\,K\eql(t-t')L^{(2)}\eql(t'-t_{\rm w})
\sc{K}(t/t')\sc{L}(t'/t_{\rm w})}
{\int dt'\, K\eql(t-t')L^{(2)}\eql(t'-t_{\rm w})}
\label{unscaled_ptwo}
\end{eqnarray}
We consider first $d>4$. Then both the functions $K\eql(\Delta t)$ and
$L^{(2)}\eql(\Delta t)$ have finite integrals $\hatK\eql(0)$ and
$\hatL^{(2)}\eql(0)$, respectively, over $\Delta t=0\ldots\infty$. In the
aging limit, the factors
$K\eql(t-t')$ and $L^{(2)}\eql(t'-t_{\rm w})$ therefore act to concentrate
the mass of the integrals appearing in\eq{unscaled_ptwo} around
$t'=t_{\rm w}$ and $t'=t$. This can be seen more formally by changing to
$y=t'/t_{\rm w}$ as the integration variable and taking $t_{\rm w}$
large. Then the factors $K\eql(t_{\rm w}(x-y))$ and $L^{(2)}\eql(t_{\rm w}(y-1))$
produce singularities $\sim(x-y)^{(2-d)/2}$ for $y\to x$ and $\sim
(y-1)^{(2-d)/2}$ for $y\to 1$, respectively, and because these are
non-integrable they dominate the integral for $t_{\rm w}\to\infty$. All
other factors in the integrals are slowly varying near the relevant
endpoints and can be replaced by their value there. In the aging limit we
can therefore write\eq{unscaled_ptwo} as
\begin{equation}
\fl \sc{K}(x)
= \frac{\hatK\eql(0)L^{(2)}\eql(\Delta t) \sc{K}(1)\sc{L}(x)+
K\eql(\Delta t)\hatL^{(2)}\eql(0) \sc{K}(x)\sc{L}(1)}
{\hatK\eql(0)L^{(2)}\eql(\Delta t)+K\eql(\Delta t)\hatL^{(2)}\eql(0)}
\end{equation}
Eq.\eq{ptwo_p_link} tells us that the $\Delta t$-dependent factors cancel,
giving together with\eq{L2hat0} and $\sc{K}(1)=\sc{L}(1)=1$
\begin{equation}
\fl \sc{K}(x)
= \frac{\hatK\eql^{-1}(0) \sc{L}(x)+\hatL^{(2)}\eql(0) \sc{K}(x)}
{\hatK\eql^{-1}(0)+\hatL^{(2)}\eql(0)}
= \frac{\sc{L}(x)+[2T_{\rm c}\hatK\eql(0)-1]\sc{K}(x)}
{2T_{\rm c}\hatK\eql(0)}
\label{ptwo_simple_scaling}
\end{equation}
In $d>4$, where $\hatK\eql(0)$ is finite, we therefore have the
simple result that the scaling functions for $K$ and $L^{(2)}$ are
identical,
\begin{equation}
\sc{L}(x) = \sc{K}(x)
\label{same_sc_fn}
\end{equation}
But in the limit $d\to 4$ from above, $\hatK\eql(0)$ diverges
and\eq{ptwo_simple_scaling} gives no information about
$\sc{L}$. For $d<4$ a different approach is therefore needed
to determine $\sc{L}$. One subtracts from\eq{ptwo_scaling_cond}
the first and second terms on its r.h.s., using\eq{ptwo_eql_condition} to
rewrite them as an integral and changing integration variable from
$t'$ to $y=t'/t_{\rm w}$. This gives
\begin{eqnarray}
\fl\lefteqn{
t_{\rm w}\int_1^x dy\,
K\eql(t_{\rm w}(x-y))L^{(2)}\eql(t_{\rm w}(y-1))\left[\sc{K}\left({x}/{y}\right)
\sc{L}\left(y\right)-\sc{K}(x)\right]
= }\nonumber\\
&=&\frac{t}{t_{\rm w}^2}K\eql(\Delta t)\sc{K}'(x)
\label{L_condition_almost}
\end{eqnarray}
For $y\to x$, $K\eql(t_{\rm w}(x-y))$ contributes a singularity $\sim
(x-y)^{(2-d)/2}$ which is integrable in $d<4$. For $y\to 1$, the terms
in square brackets vanish as $\sim y-1$ since $\sc{L}(y)$ is
smooth at $y=1$ as we will see below, in the sense that
$\sc{L}'(1)$ is finite. These terms combine with the $\sim
(y-1)^{(d-6)/2}$ from $L^{(2)}\eql$ to give an integrable $\sim
(y-1)^{(d-4)/2}$. The contributions from the short time behaviour of
$K\eql$ and $L^{(2)}\eql$ are therefore unimportant in the aging
limit and we can replace these functions by their power-law
asymptotes. Up to overall $d$-dependent numerical factors the
condition\eq{L_condition_almost} then becomes
\begin{eqnarray}
\fl\lefteqn{
t_{\rm w}^{-1}\int_1^x dy\,
(x-y)^{(2-d)/2}(y-1)^{(d-6)/2}\left[\sc{K}\left({x}/{y}\right)
\sc{L}\left(y\right)-\sc{K}(x)\right]
= }\nonumber\\
&=&\frac{t}{t_{\rm w}^2}\Delta t^{(2-d)/2}\sc{K}'(x)
\end{eqnarray}
In the aging limit $\Delta t$ scales as $t_{\rm w}$, and so the l.h.s.\ of this
equation ($\sim t_{\rm w}^{-1}$) becomes large compared to the r.h.s.\ ($\sim
t_{\rm w}^{-d/2})$ unless the $y$-integral vanishes. The required
condition for $\sc{L}$ is therefore
\begin{equation}
\fl
\int_1^x dy\,
(x-y)^{(2-d)/2}(y-1)^{(d-6)/2}\left[\sc{K}\left({x}/{y}\right)
\sc{L}\left(y\right)-\sc{K}(x)\right] = 0
\label{pinv_cond}
\end{equation}
This is in principle an integral equation for
$\sc{L}$. Fortunately, however, the solution is the naive
extension of\eq{same_sc_fn} to $d<4$: with $\sc{K}(x)=x^{(2-d)/2}$
from\eq{FKd_lt_4} one sees that for
$\sc{L}(x)=\sc{K}(x)=x^{(2-d)/2}$ the square bracket
in\eq{pinv_cond} vanishes identically. The identity\eq{same_sc_fn}
therefore holds both for $d<4$ and for $d>4$.
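The cancellation can be verified directly: with $\sc{K}(x)=\sc{L}(x)=x^{(2-d)/2}$ one has $\sc{K}(x/y)\sc{L}(y)=(x/y)^{(2-d)/2}y^{(2-d)/2}=x^{(2-d)/2}=\sc{K}(x)$ pointwise, so the integrand of\eq{pinv_cond} vanishes before integration. A minimal numerical confirmation (ours):

```python
import numpy as np

# Pointwise check that FL(x) = FK(x) = x^{(2-d)/2} makes the square bracket
# in eq. (pinv_cond) vanish identically for d < 4:
#   FK(x/y)*FL(y) - FK(x) = (x/y)^a * y^a - x^a = 0,  a = (2-d)/2.
d = 3.0
FK = lambda x: x**((2 - d)/2)
x = 3.0
y = np.linspace(1.0, x, 1001)
bracket = FK(x/y)*FK(y) - FK(x)
resid = np.max(np.abs(bracket))
```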
In summary, we have determined long-time scaling forms for $K$ and $L$ for
quenches to criticality. For $K$, the result is\eq{p_scaling_general}
with\eq{p_eql} and (\ref{FKd_gt_4},\ref{FKd_lt_4}); for $L$, we
have\eq{L_scaling} with\eqq{ptwo_eql}{ptwo_eql_asymptotics}
and\eq{same_sc_fn}. Combined with\eq{yt3}, this fully determines the
leading non-Gaussian corrections to the spherical model dynamics (at
long times, and after a quench to criticality from a random initial state).
\section{General expressions for energy correlation and response}
\label{sec:energy_general}
In this section we derive general expressions for the two-time correlation and
response functions of the energy, taking into account non-Gaussian
fluctuations. The results will be valid for arbitrary quenches since
we will leave $K$ and $L$ unspecified.
\subsection{Energy correlation function}
\label{sec:energy_corr}
We can write the energy\eq{H_spherical} as $H=\frac{1}{2} S_i\Omega_{ij}
S_j$. Inserting $S_i = s_i + N^{-1/2} r_i$, the energy correlation
function (normalized by $N$) is to leading order
\begin{equation}
\fl C_E(t,t_{\rm w})=\frac{1}{4N}\left\langle \left.\left(
s_i \Omega_{ij} s_j + 2N^{-1/2} s_i \Omega_{ij} r_j
\right)\right|_{t}
\left.\left(
s_k \Omega_{kl} s_l + 2N^{-1/2} s_k \Omega_{kl} r_l
\right)\right|_{t_{\rm w}} \right\rangle'
\label{CE_basic}
\end{equation}
Using\eq{yt3}, all quantities involved can be expressed in terms of
the Gaussian variables $s_i$ so that the average can be performed using
Wick's theorem, i.e.\ by taking products of all possible pairings. We use
the prime on the average to indicate the connected correlation
function. This just means that in the Wick expansion all terms not
containing any pairings of a variable at $t$ with one at $t_{\rm w}$ have to
be discarded, since these terms give the disconnected contribution $\left\langle
\left. (\ldots)\right|_{t}\right\rangle \left\langle\left. (\ldots)\right|_{t_{\rm w}}
\right\rangle$. Multiplying out\eq{CE_basic} one obtains four
contributions. The first one is
\begin{equation}
\fl 4C_E^{(1)} = \frac{1}{N}\Omega_{ij}\Omega_{kl}\left\langle
s_i(t)s_j(t)s_k(t_{\rm w})s_l(t_{\rm w})\right\rangle' =
\frac{2}{N}\Omega_{ij}\Omega_{kl}C_{jk}(t,t_{\rm w})C_{il}(t,t_{\rm w})
\label{C1_first}
\end{equation}
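The Wick contraction behind\eq{C1_first}, namely ${\rm Cov}(x^{T}\Omega x,\,y^{T}\Omega y)=2\,\Omega_{ij}\Omega_{kl}C_{ik}C_{jl}$ for jointly Gaussian zero-mean $x,y$ with cross-covariance $C_{ik}=\left\langle x_i y_k\right\rangle$, can be checked by Monte Carlo on a toy covariance; all matrices in the sketch below are our own arbitrary choices.

```python
import numpy as np

# Monte-Carlo check of the connected Wick result
#   <x^T Om x * y^T Om y>' = 2 Om_ij Om_kl C_ik C_jl,  C_ik = <x_i y_k>,
# for a small (N = 2) toy covariance built from s = B z, z ~ N(0, I).
rng = np.random.default_rng(0)
B = np.array([[1.0,  0.0, 0.0, 0.0],
              [0.0,  1.0, 0.0, 0.0],
              [0.5,  0.5, 1.0, 0.0],
              [0.5, -0.5, 0.0, 1.0]])
Om = np.array([[2.0, 1.0],
               [1.0, 2.0]])
Cxy = B[:2] @ B[2:].T                  # cross-covariance <x_i y_k>

z = rng.standard_normal((400_000, 4))
s = z @ B.T
x, y = s[:, :2], s[:, 2:]
q1 = np.einsum('ni,ij,nj->n', x, Om, x)
q2 = np.einsum('ni,ij,nj->n', y, Om, y)
mc = np.mean(q1*q2) - np.mean(q1)*np.mean(q2)
exact = 2*np.einsum('ij,kl,ik,jl->', Om, Om, Cxy, Cxy)
```

The sampled connected correlator `mc` agrees with the pairing formula `exact` to within Monte-Carlo error.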
To eliminate one of the factors of $\Omega$, note from\eq{Rq} that
$(\partial_t+z(t)) R_\mathbf{q}(t,t_{\rm w}) = -\omega R_\mathbf{q}(t,t_{\rm w})$ for
$t>t_{\rm w}$. In real space, this reads
\begin{equation}
\left(\partial_t+z(t)\right)R_{ik}(t,t_{\rm w}) =
-\Omega_{ij}R_{jk}(t,t_{\rm w}) =
-R_{ij}(t,t_{\rm w})\Omega_{jk}
\label{Om_elimination}
\end{equation}
and from\eq{C_twotime} an exactly analogous relation holds for
$C_{ij}(t,t_{\rm w})$. Thus
\begin{eqnarray}
\fl 4C_E^{(1)}
&=&
- \frac{2}{N}[(\partial_t+z(t))C_{ik}(t,t_{\rm w})]\Omega_{kl}C_{il}(t,t_{\rm w})
\\
\fl &=&
- \frac{1}{N}(\partial_t+2z(t))C_{ik}(t,t_{\rm w})\Omega_{kl}C_{il}(t,t_{\rm w}) \equiv
- (\partial_t+2z(t))C\Om C(t,t_{\rm w})
\label{C1}
\end{eqnarray}
The last equality defines $C\Om C$, which is just the normalized trace
of the product of the matrices $C_{ik}(t,t_{\rm w})$, $\Omega_{kl}$ and
$C_{il}=C_{li}$; in Fourier space, $C\Om C(t,t_{\rm w}) = \int(dq)\, \omega
C_\mathbf{q}^2(t,t_{\rm w})$.
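The real-space/Fourier correspondence used here, $(1/N)C_{ik}\Omega_{kl}C_{li}=\int(dq)\,\omega C_\mathbf{q}^2$ with $\int(dq)\to N^{-1}\sum_q$ on a finite lattice, can be confirmed with circulant matrices on a small ring; the dispersion and $C_\mathbf{q}$ below are illustrative choices of ours.

```python
import numpy as np

# Check (1/N) tr(C Om C) = (1/N) sum_q om_q C_q^2 on a 1-d ring, where
# Om and C are circulant matrices diagonalized by the same Fourier modes.
N = 8
q = 2*np.pi*np.arange(N)/N
om_q = 2*(1 - np.cos(q))             # illustrative nearest-neighbour dispersion
C_q = 1.0/(1.0 + om_q)               # some positive, even function of q

phase = np.exp(1j*np.outer(np.arange(N), q))
circulant = lambda lam: ((phase*lam) @ phase.conj().T / N).real
Om, C = circulant(om_q), circulant(C_q)

lhs = np.trace(C @ Om @ C)/N         # (1/N) C_ik Om_kl C_li
rhs = np.mean(om_q * C_q**2)         # (1/N) sum_q om_q C_q^2
```

Because all translationally invariant matrices share the Fourier eigenvectors, the two expressions agree to machine precision.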
\begin{figure}
\centerline{\includegraphics[width=7.0cm,clip=true]{C2.eps}}
\caption{Illustration of Wick pairings for $C_E^{(2)}$. (a) The six
Gaussian spins that need to be paired in\eq{C2_raw} are indicated by
circles with site labels. Spins arising from the expansion of $H(t)$
and $H(t_{\rm w})$ are to the left and right of the vertical dashed line,
respectively; any pairing contributing to the connected correlator
must have links across this line. Dotted lines connect spins that
are already coupled in space: $s_i$ and $s_m$ via the factor
$\Omega_{ij}R_{jm}$, and $s_k$ and $s_l$ via $\Omega_{kl}$. The vertical
lines between $s_n$ and $s_n$ indicate that pairings which couple these
spins are not allowed. (b) The solid lines show the only Wick pairing
that contributes to
leading order in $1/N$: it gives two independent groups of spins. (c)
only gives one group and so is subleading. (d) has two groups but is
excluded from the connected correlator because there are no pairs
across the dashed line.
\label{fig:C2}
}
\end{figure}
The second contribution to $C_E$ is
\begin{eqnarray}
4C_E^{(2)} &=& \frac{2}{N^{3/2}}\Omega_{ij}\Omega_{kl}\left\langle
s_i(t)r_j(t)s_k(t_{\rm w})s_l(t_{\rm w})\right\rangle' \\
&=&
-\frac{1}{N} \int dt' dt''\, \Omega_{ij}\Omega_{kl} R_{jm}(t,t')L(t',t'')
\nonumber\\
& &\times \left\langle s_i(t)s_m(t')N^{-1/2}\Delta(t'') s_k(t_{\rm w})s_l(t_{\rm w}) \right\rangle'
\end{eqnarray}
where we have inserted\eq{yt3}. Writing $N^{-1/2}\Delta(t'')$ explicitly this
becomes
\begin{eqnarray}
4C_E^{(2)} &=&
-\frac{1}{N^2} \int dt' dt''\,
\Omega_{ij}\Omega_{kl}R_{jm}(t,t')L(t',t'') \nonumber\\
& &\times
\left\langle s_i(t)s_m(t')\left[s_n^2(t'')-\left\langle
s_n^2(t'')\right\rangle\right] s_k(t_{\rm w})s_l(t_{\rm w}) \right\rangle'
\label{C2_raw}
\end{eqnarray}
We now need to perform the Wick expansion of the average. The
subtraction $s_n^2-\left\langle s_n^2\right\rangle$ means that all terms which would
pair $s_n$ with $s_n$ are excluded; $s_k$ and $s_l$ also cannot be
paired because we are considering the connected correlation. We can
reduce the number of pairings further by considering that we need to
get an overall result of ${{\mathcal O}}(1)$. The index $j$ does not need to
be considered further: after summing over $j$, $\Omega_{ij}R_{jm}$ is
some translationally invariant function of the distance vector
between spins $s_i$ and $s_m$. If the remaining indices $i,k,l,m,n$
were unrestricted, then together with the $1/N^2$ prefactor we would
maximally get an ${{\mathcal O}}(N^3)$ result. Each of the factors $\Omega_{ij}R_{jm}$
and $\Omega_{kl}$ couples two indices and so reduces the order of
the result by $1/N$. Having already got two such couplings outside
the average, we can only ``afford'' one extra coupling from the Wick
pairings to get a contribution of ${{\mathcal O}}(1)$. After some reflection
one sees that this only leaves the pairing $[im][kn][ln]$: $[im]$
introduces no further coupling beyond $\Omega_{ij}R_{jm}$, and
$[kn][ln]$ gives only one further coupling beyond
$\Omega_{kl}$. Alternatively, we can think of this pairing as having the
indices $i,m$ and $k,l,n,n$ in two independent groups; each group
gives a factor of $N$ and this just cancels the $1/N^2$ prefactor. All
other allowed Wick pairings give smaller terms; the constraints and the
dominant pairing are illustrated graphically in Fig.~\ref{fig:C2}a,b. For example,
$[ik][mn][ln]$ together with $\Omega_{ij}R_{jm}$ and $\Omega_{kl}$
couples {\em all} indices into a single group and thus gives a term of
only ${{\mathcal O}}(1/N)$ (Fig.~\ref{fig:C2}c). The pairing
$[in][mn][kl]$ would give two
independent groups and thus an ${{\mathcal O}}(1)$ term, but is excluded
because $k$ and $l$ cannot be paired in the connected correlator
(Fig.~\ref{fig:C2}d). Bearing in mind that our
dominant pairing has a symmetry factor of 2 because the $s_n$'s in
$[kn][ln]$ can be swapped we have thus finally
\begin{eqnarray}
\fl 4C_E^{(2)} &=&
-2 \int dt' dt''\,
\frac{1}{N} \Omega_{ij} R_{jm}(t,t') C_{im}(t,t') L(t',t'')
\nonumber\\
\fl & &\times
\frac{1}{N}\Omega_{kl}C_{nk}(t'',t_{\rm w})C_{nl}(t'',t_{\rm w}) \\
\fl &=&
\int dt' dt''\,
[(\partial_t+2z(t))K(t,t')-\delta(t-t')]
L(t',t'') C\Om C(t'',t_{\rm w})
\end{eqnarray}
In going to the last line we have exploited\eq{Om_elimination} to
eliminate $\Omega_{ij}$. Since $t'$ is an integration variable, we have
also been careful here to subtract off with the $\delta(t-t')$ the
spurious contribution which the $\partial_t$ applied to the step
discontinuity in $K(t,t')$ would otherwise give. The $t'$-integration
can now be carried out using\eq{pinv_def} and we get
\begin{eqnarray}
4C_E^{(2)}
&=&
(\partial_t+2z(t)) C\Om C(t,t_{\rm w})-\int dt' L(t,t') C\Om C(t',t_{\rm w})
\\
&=&
2z(t) C\Om C(t,t_{\rm w})-\int dt' L^{(1)}(t,t') C\Om C(t',t_{\rm w})
\label{C2}
\end{eqnarray}
Note that we could use the same trick here as
in\eqq{Om_elimination}{C1} to write
$C\Om C(t',t_{\rm w})=-(\frac{1}{2}\partial_{t'}+z(t'))CC(t',t_{\rm w})$, with
$CC(t,t_{\rm w})$ defined in the obvious manner as $CC = N^{-1}
C_{ik}C_{ki} = \int(dq)\, C_\mathbf{q}^2$. However, this reduction from
$C\Om C$ to $CC$ applies only for $t'>t_{\rm w}$, while for $t'<t_{\rm w}$ one
would need to take a time derivative w.r.t.\ $t_{\rm w}$ instead of $t'$.
This case distinction would make evaluation of\eq{C2} awkward, so we
retain $C\Om C$ here and below.
The third contribution to $C_E$ is obtained by simply swapping the
roles of $t$ and $t_{\rm w}$ in\eq{C2} and remembering that $C\Om C$ is
symmetric in its time arguments,
\begin{eqnarray}
4C_E^{(3)} &=&
\frac{2}{N^{3/2}}\Omega_{ij}\Omega_{kl}\left\langle s_i(t)s_j(t)s_k(t_{\rm w})r_l(t_{\rm w})\right\rangle' \\
&=&
2z(t_{\rm w}) C\Om C(t,t_{\rm w})-\int dt' L^{(1)}(t_{\rm w},t') C\Om C(t,t')
\label{C3}
\end{eqnarray}
\begin{figure}
\centerline{\includegraphics[width=9.0cm,clip=true]{C4.eps}}
\caption{Wick pairings for $C_E^{(4)}$. (a) represents the constraints
for the possible pairings in\eq{C4_raw}. (b) is the only pairing that
contributes to leading order, forming three independent groups of
spins.
\label{fig:C4}
}
\end{figure}
The fourth and last contribution to $C_E$ is, using again\eq{yt3} and
writing out $\Delta(t'')$ and $\Delta(t_{\rm w}'')$,
\begin{eqnarray}
\fl 4C_E^{(4)} &=& \frac{4}{N^2}\Omega_{ij}\Omega_{kl}\left\langle
s_i(t)r_j(t)s_k(t_{\rm w})r_l(t_{\rm w})\right\rangle' \\
\fl &=&
\frac{1}{N^3} \int dt' dt'' dt_{\rm w}' dt_{\rm w}''\,
\Omega_{ij}\Omega_{kl}
R_{jm}(t,t')L(t',t'')R_{ln}(t_{\rm w},t_{\rm w}')L(t_{\rm w}',t_{\rm w}'')
\nonumber\\
\fl & & \times \left\langle
s_i(t) s_m(t') s_o^2(t'')
s_k(t_{\rm w})s_n(t_{\rm w}')s_p^2(t_{\rm w}'')
\right\rangle'
\label{C4_raw}
\end{eqnarray}
where it is understood that, because of subtractions which we have not
written explicitly, pairings of $s_o$ with itself and of $s_p$ with
itself are to be excluded. The only pairing that gives an overall
${{\mathcal O}}(1)$ contribution turns out to be $[im][kn][op][op]$ and gives
the required 3 independent index groups ($i,m$; $k,n$; $o,o,p,p$) to
cancel the $1/N^3$ prefactor; see Fig.~\ref{fig:C4}. With the symmetry
factor 2 for the possible swap of the $[op][op]$ pairings one gets
\begin{eqnarray}
\fl 4C_E^{(4)}
&=&
2 \int dt' dt'' dt_{\rm w}' dt_{\rm w}''\,
\frac{1}{N}\Omega_{ij}R_{jm}(t,t')C_{im}(t,t')
\nonumber\\
\fl & & \times
\frac{1}{N}\Omega_{kl}R_{ln}(t_{\rm w},t_{\rm w}')C_{kn}(t_{\rm w},t_{\rm w}')
L(t',t'')L(t_{\rm w}',t_{\rm w}'')\frac{1}{N} C_{op}^2(t'',t_{\rm w}'')\\
\fl &=&
\frac{1}{2} \int dt' dt'' dt_{\rm w}' dt_{\rm w}''\,
[(\partial_t+2z(t))K(t,t')-\delta(t-t')]L(t',t'')
\nonumber\\
\fl & & \times
[(\partial_{t_{\rm w}}+2z(t_{\rm w}))K(t_{\rm w},t_{\rm w}')-\delta(t_{\rm w}-t_{\rm w}')]
L(t_{\rm w}',t_{\rm w}'') CC(t'',t_{\rm w}'')
\end{eqnarray}
Using\eq{pinv_def} one can carry out
two of the time integrations to get
\begin{eqnarray}
4C_E^{(4)}
&=&
\frac{1}{2} \int dt' dt_{\rm w}' \,
[2z(t)\delta(t-t')-L^{(1)}(t,t')]
\nonumber\\
& & \times
[2z(t_{\rm w})\delta(t_{\rm w}-t_{\rm w}')-L^{(1)}(t_{\rm w},t_{\rm w}')]
CC(t',t_{\rm w}')
\label{C4}
\end{eqnarray}
\subsection{Energy response function}
To find the energy response, consider perturbing the Hamiltonian by a
term $-h\delta(t-t_{\rm w})H=-(h/2)\delta(t-t_{\rm w})S_i\Omega_{ij}S_j$, where $h$
is the field conjugate to the energy. The equation of motion in the
presence of the perturbation is therefore
\begin{equation}
\partial_t S_i = - \Omega_{ij}S_j - (z(t)+h\Delta z(t))S_i +h\delta(t-t_{\rm w})\Omega_{ij}S_j
\end{equation}
where $h \Delta z$ is now the change in the Lagrange multiplier induced by
the perturbation. The fluctuating component of $z$ of ${{\mathcal O}}(N^{-1/2})$
is in principle still present, but negligible compared to $h\Delta z$ for
field strengths that are ${{\mathcal O}}(N^0)$. Inserting a corresponding
expansion of the spins, $S_i=s_i+hr_i$, gives for $s_i$ the unperturbed
equation of motion and for the perturbed component $r_i$
\begin{equation}
\partial_tr_i = -\Omega_{ij}r_j - z(t)r_i -\Delta z(t)s_i +\delta(t-t_{\rm w})\Omega_{ij}s_j
\label{dot_ri}
\end{equation}
The solution of this is
\begin{equation}
r_i(t) = R_{ik}(t,t_{\rm w})\Omega_{kl}s_l(t_{\rm w}) - \int_{t_{\rm w}}^t dt'\,
R_{ik}(t,t') \Delta z(t') s_k(t')
\label{yt_resp}
\end{equation}
One now needs to determine $\Delta z$. This can be done by noting that the
normalized length of $S_i$ is
\begin{equation}
\frac{1}{N}\sum_i S_i^2 = \frac{1}{N}\sum_i s_i^2 + 2h \frac{1}{N}
\sum_i s_i r_i + {{\mathcal O}}(h^2)
\end{equation}
The change to first order in $h$ must vanish to preserve the spherical
constraint, giving the condition $(1/N)\sum_i \left\langle s_i r_i\right\rangle=0$
or, using\eq{yt_resp},
\begin{equation}
\frac{1}{N} R_{ik}(t,t_{\rm w})\Omega_{kl}C_{il}(t,t_{\rm w}) =
\int_{t_{\rm w}}^t dt'\, \frac{1}{N} R_{ik}(t,t')C_{ik}(t,t')\Delta z(t')
\end{equation}
In the integrand one recognizes the definition\eq{p_def} of $K$, so
that one can write the solution of this as
\begin{equation}
\Delta z(t) = \int dt'\, L(t,t') R\Om C(t',t_{\rm w})
\end{equation}
with obvious notation for $R\Om C$.
Now we can find the change in the energy, $1/(2N) \left\langle S_i\Omega_{ij}
S_j\right\rangle$, which is given by $(h/N) \left\langle r_i \Omega_{ij}s_j\right\rangle$ to linear order
in $h$. Dividing by $h$ and using\eq{yt_resp} then gives the energy
response function
\begin{eqnarray}
R_E(t,t_{\rm w}) &=& \frac{1}{N}\left\langle r_i \Omega_{ij}s_j\right\rangle\\
&=&
\frac{1}{N} R_{ik}(t,t_{\rm w})\Omega_{kl}\Omega_{ij}C_{jl}(t,t_{\rm w})
\nonumber\\
& &
{} - {}\int dt' dt''\,
R\Om C(t,t')L(t',t'') R\Om C(t'',t_{\rm w})
\label{RE_general}
\end{eqnarray}
One can eliminate one of the time integrals by
using\eq{Om_elimination}, being careful to remove the unwanted
contribution from differentiating the step discontinuity in
$R\Om C(t,t_{\rm w})$. Using
also\eq{pinv_def} and\eq{pinv_structure} then gives
\begin{eqnarray}
\fl 2R_E
&=&
(-\partial_t-2z(t))R\Om C(t,t_{\rm w}) + \delta(t-t_{\rm w})R\Om C(t_{\rm w}^{+},t_{\rm w})
\nonumber\\
\fl & &{} - {}\int dt' dt''\,
[(-\partial_t-2z(t))K(t,t')+\delta(t-t')]L(t',t'') R\Om C(t'',t_{\rm w})
\\
\fl &=&
- \int dt' L(t,t') R\Om C(t',t_{\rm w})
+ \delta(t-t_{\rm w})R\Om C(t_{\rm w}^{+},t_{\rm w})
\label{RE}
\end{eqnarray}
\subsection{Equilibrium}
\label{sec:energy_FDT_eq}
Above, we derived general expressions for the energy two-time
correlation and response, in terms of the known correlation and
response functions for the Gaussian spins which in turn determine $K$ and
$L$. Before looking at non-equilibrium, we consider briefly the
equilibrium situation; even here the results for the dynamics are new
as far as we are aware.
For the response function one uses that at equilibrium $R\Omega^aC(t) =
\int(dq)\, e^{-2(z\eql+\omega)t}\omega^a [T/(z\eql+\omega)]$ for
$a=1,2$. Here we have retained a possible nonzero equilibrium value $z\eql$ of
the Lagrange multiplier, to include the
case of equilibrium at $T\neq T_{\rm c}$. Inserting into\eq{RE_general} and
taking LTs gives
\begin{eqnarray}
\hat R_{E}^{\rm eq}(s) &=& \int(dq)\,
\frac{T\omega^2}{(z\eql+\omega)[s+2(z\eql+\omega)]} \nonumber\\
& & {} - {} \frac{1}{\hatK\eql(s)} \left(\int(dq)\,
\frac{T\omega}{(z\eql+\omega)[s+2(z\eql+\omega)]}\right)^2
\end{eqnarray}
where $\hatK\eql$ is generalized from\eq{p_eql_LT} to
\begin{equation}
\hatK\eql(s) = T \int(dq)\, \frac{1}{(z\eql+\omega)[s+2(z\eql+\omega)]}
\label{phat_eql}
\end{equation}
Using the spherical constraint condition
$T\int(dq)\, (z\eql+\omega)^{-1} = 1$ one then shows, after a few lines of
algebra, that
\begin{equation}
\hat R_{E}^{\rm eq}(s)=\frac{1}{4}\left(s + 2T -
\frac{1}{\hatK\eql(s)}\right) = \frac{1}{4}\hatL^{(2)}\eql(s)
\label{R_Eql}
\end{equation}
Remarkably, therefore, the equilibrium energy response function
$R_{E}^{\rm eq}(t)$ is directly proportional to the inverse kernel
$\hatL^{(2)}\eql(t)$. Its asymptotics are then given
by\eq{ptwo_eql_asymptotics}. The long-time equilibrium susceptibility
which encodes the response to a
step change in the field is, using $\hatK\eql(0)=\int(dq)\, T/[2(z\eql+\omega)^2]$
and the generalization of\eq{L2hat0} to $T\neq T_{\rm c}$,
\begin{equation}
\fl \chi_{E}^{\rm eq}=\int_0^\infty dt\, R_{E}^{\rm eq}(t) = \frac{1}{4}\hat
L^{(2)}\eql(0) = \frac{T}{2}\left(1-\frac{1}{\int(dq)\, [T/(z\eql+\omega)]^2}\right)
\label{chi_Eql}
\end{equation}
It is easily shown that this is consistent with the known result for
the temperature dependence of the equilibrium energy, $E=\langle
H\rangle = \int(dq)\, \frac{1}{2}\omega[T/(z\eql+\omega)]$: one confirms $\chi_{E}^{\rm eq}
= T\,dE/dT$ as it should be. The factor $T$ arises because our field
$h$ is introduced via $H\to H-hH=(1-h)H$ and so corresponds to a
temperature change of $T/(1-h)-T=hT$ to linear order in $h$. The
inclusion of subleading non-Gaussian fluctuations is crucial for achieving this
consistency; as discussed at the beginning of Sec.~\ref{sec:setup},
the Gaussian theory does not even give the same answers for the
fluctuations of $H$ in its two representations as a bond or spin
product observable. The same phenomenon occurs in a purely static
calculation of the energy fluctuations and response~\cite{Joyce72}.
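The consistency $\chi_{E}^{\rm eq}=T\,dE/dT$ can also be confirmed numerically, since the identity holds for finite mode sums as well as for the $(dq)$-integrals. A sketch (ours) on a one-dimensional ring with nearest-neighbour dispersion, solving the spherical constraint for $z\eql$ by root bracketing:

```python
import numpy as np
from scipy.optimize import brentq

# Check chi_E^eq = T dE/dT, eq. (chi_Eql), on a finite 1-d ring (our toy
# setting; (dq)-integrals are replaced by averages over the N modes).
N = 64
om = 2*(1 - np.cos(2*np.pi*np.arange(N)/N))

def zeq(T):
    # spherical constraint: T * <1/(z + om)> = 1
    return brentq(lambda z: T*np.mean(1.0/(z + om)) - 1.0, 1e-12, 1e3,
                  xtol=1e-14, rtol=1e-14)

def energy(T):
    z = zeq(T)
    return 0.5*np.mean(om*T/(z + om))

T, h = 1.0, 1e-5
z = zeq(T)
chi = 0.5*T*(1 - 1.0/np.mean((T/(z + om))**2))    # eq. (chi_Eql)
dEdT = (energy(T + h) - energy(T - h))/(2*h)      # central difference
```

Within finite-difference accuracy, `chi` and `T*dEdT` coincide, as the text asserts.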
The temperature-dependence of the susceptibility\eq{chi_Eql} deserves
some comment. As $T$ approaches $T_{\rm c}$ from above, one has $z\eql\to
0$. For $d<4$, the denominator of the second term in\eq{chi_Eql} then
diverges, and $\chi_{E}^{\rm eq}/T$ smoothly approaches the value $1/2$ and
remains constant for $T<T_{\rm c}$. For $d>4$, the denominator has a finite
limit for $z\eql\to 0$. This produces the well-known discontinuity in
$\chi_{E}^{\rm eq}/T$ at $T=T_{\rm c}$, since for $T<T_{\rm c}$ the second term
in\eq{chi_Eql} again does not contribute. To see this explicitly, one
notes that for $T<T_{\rm c}$ the $\omega=0$ term in the spherical constraint
condition, with its weight $1/N$, has to be treated separately:
\begin{equation}
T\left(\int(dq)\, \frac{1}{z\eql+\omega} + \frac{1}{Nz\eql} \right)=1
\end{equation}
For $z\eql\to 0$ (on the scale ${{\mathcal O}}(N^0)$) the first integral is
$\int(dq)\, 1/\omega = 1/T_{\rm c}$ and so $z\eql = (1/N)TT_{\rm c}/(T_{\rm c}-T)$. This then
gives for the denominator integral in\eq{chi_Eql}
\begin{equation}
\int(dq)\, \frac{T^2}{(z\eql+\omega)^2} + \frac{T^2}{Nz\eql^2} \approx
N\frac{(T_{\rm c}-T)^2}{T_{\rm c}^2}
\end{equation}
which diverges for $N\to\infty$ at $T<T_{\rm c}$ as claimed.
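From\eq{chi_Eql} the size of the resulting jump in $d>4$ is explicit: the second term survives only for $T\to T_{\rm c}^{+}$, where $z\eql\to 0$ leaves a finite denominator, so
\begin{equation}
\lim_{T\to T_{\rm c}^{-}}\frac{\chi_{E}^{\rm eq}}{T}
- \lim_{T\to T_{\rm c}^{+}}\frac{\chi_{E}^{\rm eq}}{T}
= \frac{1}{2T_{\rm c}^2\int(dq)\, \omega^{-2}}
\end{equation}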
An interesting question is how the discontinuity at $T=T_{\rm c}$ in $d>4$
of the susceptibility $\chi_{E}^{\rm eq}$, i.e.\ the {\em integrated} response,
is related to the temperature variation of the {\em time-dependent}
$R_{E}^{\rm eq}(t)$. In\eq{phat_eql} one notes that, for $T<T_{\rm c}$,
$\hatK\eql(s)$ acquires a distinct contribution from the $\omega=0$
mode
\begin{eqnarray}
\hatK\eql(s) &=&
\int(dq)\, \frac{T}{\omega(s+2\omega)} +
\frac{1}{N}\frac{T}{z\eql(s+2z\eql)}\\
& = & \int(dq)\, \frac{T}{\omega(s+2\omega)} + \frac{T_{\rm c}-T}{T_{\rm c} s}
\end{eqnarray}
where in the second line we have neglected $z\eql={{\mathcal O}}(1/N)$
against $s$. From\eq{R_Eql} one sees that the additional contribution
to $\hatK\eql(s)$ produces an extra pole in $\hat R_{E}^{\rm eq}(s)$, which
for small $(T_{\rm c}-T)/T_{\rm c}$ is located at $s=1/t_0$ with
$t_0=[TT_{\rm c}/4(T_{\rm c}-T)]\int(dq)\, \omega^{-2}$. Transforming to the time domain,
one finds that this gives an extra contribution of $\sim 1/t_0
\exp(-t/t_0)$ to $R_{E}^{\rm eq}(t)$. Crucially, this has a finite integral
even in the limit $T\to T_{\rm c}$, where $t_0$ diverges, and the appearance
of this term causes the discontinuity in $\chi_{E}^{\rm eq}$ at
$T=T_{\rm c}$. Translating these results to the time-dependent susceptibility
\begin{equation}
\chi_{E}^{\rm eq}(t) = \int_0^t dt' R_{E}^{\rm eq}(t')
\label{chi_Eql_t}
\end{equation}
one has that, for fixed finite $t$, $\chi_{E}^{\rm eq}(t)$ depends smoothly on
temperature around $T_{\rm c}$. For large $t$, $\chi_{E}^{\rm eq}(t)$ approaches a
plateau value equal to the equilibrium susceptibility at $T=T_{\rm c}^{+}$;
the approach to this plateau is as $\sim t^{(4-d)/2}$. For $T<T_{\rm c}$,
however, $\chi_{E}^{\rm eq}(t)$ eventually increases further on a diverging
timescale $t\sim t_0 \sim 1/(T_{\rm c}-T)$ to approach a higher plateau
value given by the susceptibility at $T=T_{\rm c}^{-}$. By FDT (see below)
the energy correlation function correspondingly shows a power-law
decay to a plateau for $t\ll t_0$, from which it falls to zero only
for $t>t_0$.
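Schematically, then, for $T$ slightly below $T_{\rm c}$ in $d>4$ the two-plateau structure can be summarized as
\begin{equation}
\chi_{E}^{\rm eq}(t) \simeq \chi_{E}^{\rm eq}\Big|_{T_{\rm c}^{+}}
+ \left(\chi_{E}^{\rm eq}\Big|_{T_{\rm c}^{-}}-\chi_{E}^{\rm eq}\Big|_{T_{\rm c}^{+}}\right)
\left(1-e^{-t/t_0}\right)
\end{equation}
up to the power-law approach $\sim t^{(4-d)/2}$ to the first plateau; the amplitude of the slow term is fixed by requiring that $\chi_{E}^{\rm eq}(t\to\infty)$ reproduce the susceptibility at $T=T_{\rm c}^{-}$.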
A final check on our results is that in equilibrium the energy
response and correlation function should be related by the usual
FDT. This is indeed the case. One combines the results\eq{C1},
\eq{C2}, \eq{C3}, \eq{C4} for the constituent parts of the correlation
function and decouples all the convolution integrals using temporal
Fourier transforms. It is easiest to do this starting from expressions
where all factors of $\Omega$ have been preserved, e.g.\ for $C^{(1)}_E$
one uses\eq{C1_first} rather than\eq{C1}. After some straightforward
but lengthy algebra one then indeed finds the Fourier domain version of
equilibrium FDT, $\int_{-\infty}^{\infty} dt\,C_{E}^{\rm eq}(t)e^{-i\nu t}=T[\hat
R_{E}^{\rm eq}(-i\nu)-\hat R_{E}^{\rm eq}(i\nu)]/(i\nu)$.
\section{Energy correlation and response: Non-equilibrium, $d>4$}
\label{sec:noneq_large_d}
We now evaluate the behaviour of the energy correlation and response
for the out-of-equilibrium dynamics after a quench to criticality. For
the correlation function, we need the long-time behaviour of
\begin{equation}
C\Om C(t,t_{\rm w}) = \int(dq)\, \frac{T_{\rm c}^2}{\omega^2}\sc{C}^2(\omega t_{\rm w})
\frac{g(t_{\rm w})}{g(t)} e^{-2\omega(t-t_{\rm w})}\omega
\end{equation}
At equilibrium, where $\sc{C}=1$ and $g(t)=$ const., this gives
$T_{\rm c}K\eql(t-t_{\rm w})$. In the non-equilibrium case one gets a correction
factor which becomes relevant in the aging regime, where $t-t_{\rm w}\sim
t_{\rm w}\gg 1$:
\begin{eqnarray}
C\Om C(t,t_{\rm w}) &=& T_{\rm c}K\eql(t-t_{\rm w})\sc{C\Om C}\left(t/t_{\rm w}\right)
\label{COC}
\\
\sc{C\Om C}(x) &=& x^{\kappa}
\frac{\int dw \, w ^{(d-4)/2} e^{-2(x-1)w } \sc{C}^2(w)}
{\int dw \, w ^{(d-4)/2} e^{-2(x-1)w }}
\end{eqnarray}
This is valid for all dimensions $d>2$, but in the rest of this
section we focus on the case $d>4$, where $\kappa=0$. The regime $2<d<4$
is more complicated and treated separately in the next section.
Before assembling the four parts of our expression for $C_E(t,t_{\rm w})$,
it is useful to have a guide on what to expect overall. As discussed
above, the equilibrium energy fluctuations remain finite at
criticality. One therefore expects that, in the short-time regime
$t-t_{\rm w}\sim{{\mathcal O}}(1)$, $C_E(t,t_{\rm w})$ will decay as in
equilibrium. From\eq{ptwo_eql_asymptotics}, \eq{R_Eql} and FDT it
follows that this decay becomes a power law, $\sim(t-t_{\rm w})^{(4-d)/2}$
as $t-t_{\rm w}$ becomes large. Aging effects should then appear when
$t-t_{\rm w}\sim t_{\rm w}$ and manifest themselves through a scaling function of
$t/t_{\rm w}$. Overall, one expects in the aging regime
\begin{equation}
C_E(t,t_{\rm w})=(t-t_{\rm w})^{(4-d)/2}\times[\mbox{scaling function of
}t/t_{\rm w}]
\label{CE_expect}
\end{equation}
and this is indeed what we will find. Consider now $C_E^{(1)}$, for
which we obtained in\eq{C1} that $4C_E^{(1)} = -(\partial_t+2z(t))C\Om C(t,t_{\rm w})$.
Now from\eq{g_def} and the fact that $g(t)\to$ const.\ for
$t\to\infty$ and $d>4$ it follows that $z(t)=g'(t)/[2g(t)]$
decreases more quickly than $1/t$. Using also\eq{COC}, we see that the
$z(t)$ term is negligible for large times and that $4C_E^{(1)}$
decays as $(t-t_{\rm w})^{-d/2}$ times an aging correction. This is
negligible compared to\eq{CE_expect}, so we do not need to analyse
this contribution further.
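To make the power counting explicit, write $C\Om C$ in the form\eq{COC} with $K\eql(t)\sim t^{(2-d)/2}$; the $\partial_t$ and $z(t)$ terms then each lower the power by at least one, giving
\begin{equation}
\fl 4C_E^{(1)} = -\left(\partial_t+2z(t)\right)T_{\rm c}K\eql(t-t_{\rm w})\sc{C\Om C}(t/t_{\rm w})
\sim (t-t_{\rm w})^{-d/2}\times[\mbox{scaling function of }t/t_{\rm w}]
\end{equation}
which is down by a factor $\sim (t-t_{\rm w})^{-2}$ on\eq{CE_expect}.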
For $C_E^{(2)}$, we have from\eq{C2}
\begin{equation}
\fl 4C_E^{(2)} = 2z(t) C\Om C(t,t_{\rm w})-2T_{\rm c} C\Om C(t,t_{\rm w}) + \int dt' L^{(2)}(t,t')
C\Om C(t',t_{\rm w})
\label{C2_large_d}
\end{equation}
Because of the factor $C\Om C(t',t_{\rm w})$ it is useful to split the
$t'$-integral into the regimes $t'>t_{\rm w}$ and $t'<t_{\rm w}$. The first regime
gives
\begin{equation}
T_{\rm c}\int_{t_{\rm w}} dt' L^{(2)}\eql(t-t')
K\eql(t'-t_{\rm w})
\sc{L}(t/t')
\sc{C\Om C}(t'/t_{\rm w})
\label{C2_part}
\end{equation}
As in the analysis of\eq{ptwo_scaling_cond} one can now argue that,
for large $t-t_{\rm w}$, the integral will be dominated by the regions
$t'\approx t$ and $t'\approx t_{\rm w}$. The weights contributed by these
regions are $K\eql(t-t_{\rm w})\int_0^{\infty} dt'L^{(2)}\eql(t')$ and
$L^{(2)}\eql(t-t_{\rm w})\int_0^{\infty} dt'K\eql(t')$, respectively; the
integrals are both finite, so that both terms scale as
$(t-t_{\rm w})^{(2-d)/2}$. These weights are then multiplied by the relevant
values of scaling functions in the integrand of\eq{C2_part}. The
overall result is smaller by $\sim 1/(t-t_{\rm w})$ than the leading
term\eq{CE_expect} in $C_E$, and can be neglected. The $t'<t_{\rm w}$ part
of the integral from\eq{C2_large_d} is
\begin{equation}
T_{\rm c}\int^{t_{\rm w}} dt' L^{(2)}\eql(t-t')
K\eql(t_{\rm w}-t') \sc{L}(t/t') \sc{C\Om C}(t_{\rm w}/t')
\end{equation}
The factor $K\eql(t_{\rm w}-t')$ concentrates the weight of the integrand
near $t'=t_{\rm w}$, so that for $t-t_{\rm w}$ and $t_{\rm w}\gg 1$ all other factors
can be replaced by their values at $t'=t_{\rm w}$, giving the result
$T_{\rm c}\hatK\eql(0) L^{(2)}\eql(t-t_{\rm w})\sc{L}(x)$. This scales as
$(t-t_{\rm w})^{(2-d)/2}$ times a scaling function of $x$ and so is also
negligible compared to the leading contribution\eq{CE_expect}.
From\eq{COC} the second term on the r.h.s.\ of\eq{C2_large_d} has the
same subleading scaling, and the first term is even smaller. Overall,
$C_E^{(2)}$ therefore gives a subleading contribution to $C_E$ in the
aging regime. By very similar arguments one shows that $C_E^{(3)}$
from\eq{C3} is also subleading.
In the aging regime we therefore only need to consider $C_E^{(4)}$
from\eq{C4}. The long-time behaviour of the function $CC(t,t_{\rm w})$ is
easily worked out as
\begin{eqnarray}
\fl CC(t,t_{\rm w}) &=& CC\eql(t-t_{\rm w})\sc{CC}(t/t_{\rm w})
\label{CC_decomp}
\\
\fl CC\eql(t-t_{\rm w}) &=& \int(dq)\, \frac{T_{\rm c}^2}{\omega^2}e^{-2\omega(t-t_{\rm w})} = 2T_{\rm c}
\int_{t-t_{\rm w}}^\infty dt'\, K\eql(t') \sim (t-t_{\rm w})^{(4-d)/2}
\label{CC_eql}
\\
\fl\sc{CC}(x) &=& \frac{\int dw \, w ^{(d-6)/2} e^{-2(x-1)w }\sc{C}^2(w )}
{\int dw \, w^{(d-6)/2} e^{-2(x-1)w}}
\label{CC_scaling}
\end{eqnarray}
where the scaling function $\sc{CC}(x)\sim x^{-2}$ for $x\gg 1$. Now
consider the $t'$-integral from\eq{C4},
\begin{equation}
I(t,t_{\rm w}') = \int dt' \,
[(2z(t)-2T_{\rm c})\delta(t-t')+L^{(2)}(t,t')]CC(t',t_{\rm w}')
\label{Idef}
\end{equation}
The equilibrium part $L^{(2)}\eql(t-t')$ of $L^{(2)}(t,t')$ ensures that
this has its mass concentrated near $t'=t$. Because we are looking at
the aging regime, we have $t-t_{\rm w}'(>t-t_{\rm w})\gg 1$, and\eq{CC_eql} then
shows that the function $CC(t',t_{\rm w}')$ is slowly varying near
$t'=t$. It can thus be replaced by its value there, $CC(t,t_{\rm w}')$, to
give to leading order
\begin{equation}
I(t,t_{\rm w}') = (2z(t)-2T_{\rm c}+\hatL^{(2)}\eql(0))CC(t,t_{\rm w}') =
\frac{1}{\hatK\eql(0)}CC(t,t_{\rm w}')
\label{I_approx}
\end{equation}
where in the last step we have used $z(t)\ll 1$. One might suspect
that in the $t'$-integral of\eq{Idef}, the region around $t'\approx
t_{\rm w}'$ also contributes, since this is where $CC(t',t_{\rm w}')$ is
largest. However, it can be shown that this region only gives a
subleading contribution compared to\eq{I_approx}.
In terms of $I(t,t_{\rm w}')$ we can write $C_E^{(4)}$ using\eq{C4} as
\begin{equation}
4C_E^{(4)} =
\frac{1}{2} \int dt_{\rm w}' [(2z(t_{\rm w})-2T_{\rm c})\delta(t_{\rm w}-t_{\rm w}')+L^{(2)}(t_{\rm w},t_{\rm w}')]I(t,t_{\rm w}')
\label{C4_I}
\end{equation}
Again, the integral is dominated by the region $t_{\rm w}'\approx t_{\rm w}$
because of the factor $L^{(2)}\eql(t_{\rm w}-t_{\rm w}')$ and we thus get our final
expression
\begin{equation}
4C_E^{(4)} =
\frac{1}{2\hatK\eql(0)}I(t,t_{\rm w}) =
\frac{1}{2\hatK\eql^2(0)}CC(t,t_{\rm w})
\label{C4_final}
\end{equation}
As argued above, in the aging regime the full energy correlator will
be identical to this.
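Combining\eq{C4_final} with the decomposition\eq{CC_decomp} gives the explicit aging form
\begin{equation}
\fl C_E(t,t_{\rm w}) = \frac{1}{8\hatK\eql^2(0)}\,CC\eql(t-t_{\rm w})\,\sc{CC}(t/t_{\rm w})
\sim (t-t_{\rm w})^{(4-d)/2}\times[\mbox{scaling function of }t/t_{\rm w}]
\end{equation}
which indeed has the structure anticipated in\eq{CE_expect}.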
We can now turn to the evaluation of the out-of-equilibrium response
function. We will not write explicitly the second term in\eq{RE},
which just removes the unwanted $\delta(t-t_{\rm w})$ contribution arising
from the derivative applied to the first term's step
discontinuity. Thus,
\begin{equation}
2R_E = (-\partial_t-2T_{\rm c})R\Om C(t,t_{\rm w}) + \int dt' L^{(2)}(t,t') R\Om C(t',t_{\rm w})
\label{RE_again}
\end{equation}
The long-time scaling of the function $R\Om C$ is
\begin{eqnarray}
\fl R\Om C(t,t_{\rm w}) &=& R\Om C\eql(t-t_{\rm w})\sc{R\Om C}(t/t_{\rm w})
\\
\fl R\Om C\eql(t-t_{\rm w}) &=& T_{\rm c} \int(dq)\, e^{-2\omega(t-t_{\rm w})} = T_{\rm c} f(t-t_{\rm w}) \sim
(t-t_{\rm w})^{-d/2}
\label{ROC_eql}
\\
\fl \sc{R\Om C}(x) &=& \frac{\int dw\, w^{(d-2)/2} e^{-2(x-1)w}\sc{C}(w)}
{\int dw\, w^{(d-2)/2} e^{-2(x-1)w}}
\label{ROC_scaling}
\end{eqnarray}
The integrand in\eq{RE_again} has its mass concentrated near $t'=t$,
from the factor $L^{(2)}\eql(t-t')$, and near $t'=t_{\rm w}$, from the factor
$R\Om C\eql(t'-t_{\rm w})$. This gives
\begin{equation}
\fl 2R_E = (-\partial_t-2T_{\rm c})R\Om C(t,t_{\rm w}) + \hatL^{(2)}\eql(0) R\Om C(t,t_{\rm w}) +
\frac{1}{2}L^{(2)}\eql(t-t_{\rm w})\sc{L}(t/t_{\rm w})
\label{RE_aux}
\end{equation}
using that $\int_0^\infty dt'\, R\Om C\eql(t') = \frac{1}{2} 2T_{\rm c} \int_0^\infty
dt'\, f(t') = \frac{1}{2} K\eql(0) = \frac{1}{2}$ (see after\eq{p_eql}). The last
term in\eq{RE_aux}
scales as $(t-t_{\rm w})^{(2-d)/2}$ times a function of $x=t/t_{\rm w}$, while the
other terms scale at most as $(t-t_{\rm w})^{-d/2}$ and so are
subdominant. Using also that $K\eql$ and $L^{(2)}\eql$ are
asymptotically proportional for $d>4$ from\eq{ptwo_p_link} and that
$\sc{L}=\sc{K}$ then gives the final aging regime expression
\begin{equation}
R_E =
\frac{1}{4}L^{(2)}\eql(t-t_{\rm w})\sc{L}(t/t_{\rm w}) =
\frac{1}{4}L^{(2)}(t,t_{\rm w})
\label{RE_final}
\end{equation}
Interestingly, the simple relation\eq{R_Eql} between the energy
response and $L^{(2)}$ therefore holds not only at equilibrium but also in
the aging regime of the non-equilibrium dynamics, and thus overall
across the entire long-time regime.
We can now, finally, find the FDR for the energy correlation and
response. In the aging regime of interest here, it is useful to
rewrite the response function\eq{RE_final} as
\begin{equation}
R_E(t,t_{\rm w}) = \frac{1}{4\hatK\eql^2(0)}K\eql(t-t_{\rm w})\sc{K}(x) =
\frac{1}{4\hatK\eql^2(0)}K(t,t_{\rm w})
\label{RE_final_K}
\end{equation}
using\eq{ptwo_p_link} and $\sc{L}=\sc{K}$. Recalling the
definition\eq{p_def}, and the fact that $C_E$ is given by\eq{C4_final}
to leading order in the aging limit, we then obtain
\begin{equation}
X_E(t,t_{\rm w}) = \frac{T_{\rm c} R_E(t,t_{\rm w})}{C'_E(t,t_{\rm w})} = \frac{\int(dq)\, T_{\rm c} R_\mathbf{q}
C_\mathbf{q}}{\int(dq)\, C'_\mathbf{q} C_\mathbf{q}}
\label{XE}
\end{equation}
Remarkably, this is in fact {\em identical} to the FDR\eq{Xbl_large_d}
for large blocks of spin product observables. In particular, $X_E$
tends to the limit value $X^\infty=1/2$ for $t\gg t_{\rm w}$, and this is
identical to the one we found for spin and bond observables: for
$d>4$, all observables we have considered lead to a unique value of
$X^\infty=1/2$.
The FD plot for the energy has a limiting pseudo-equilibrium
form; this follows from\eq{CC_eql} and\eq{C4_final}, which show that
$C_E(t,t_{\rm w})$ has decayed to a small value $\sim t_{\rm w}^{(4-d)/2}$ at the
point where aging effects begin to appear. More generally, the
correlation function scales as $C_E \sim (t-t_{\rm w})^{(4-d)/2}$ for
$x\approx 1$, and for $x\gg1$, where $\sc{CC}(x) \sim x^{-2}$
from\eq{CC_scaling}, as $C_E \sim t^{(4-d)/2}x^{-2} =
t_{\rm w}^{2}t^{-d/2}$.
It should be pointed out that while the FDR for the energy matches
that for blocks of product observables at long times, the correlation
and response functions themselves do not. This follows from the
nontrivial proportionality factors $1/{\hatK\eql^2(0)}$
in\eq{C4_final} and\eq{RE_final_K}. The latter are required to match
the limiting behaviour for $t-t_{\rm w}\ll t_{\rm w}$ of the aging regime results
to the asymptotics of the equilibrium results for $t-t_{\rm w}\gg
1$. Indeed, by combining the effective equilibrium behaviour for
$t-t_{\rm w}={{\mathcal O}}(1)$ with the above aging expressions we can write
\begin{eqnarray}
C_E(t,t_{\rm w}) &=& C_{E}^{\rm eq}(t-t_{\rm w})\sc{CC}(x)
\label{CE_d_gt_4_longtime}
\\
R_E(t,t_{\rm w}) &=& R_{E}^{\rm eq}(t-t_{\rm w})\sc{K}(x)
\label{RE_d_gt_4_longtime}
\end{eqnarray}
and these are now valid throughout the long-time regime, i.e.\ for
large $t_{\rm w}$ but independently of whether $t-t_{\rm w}$ is large or not. As
promised they match with the aging expressions for $1\ll \Delta t \ll
t_{\rm w}$. For the response this is obvious from\eq{R_Eql}. For $C_E$,
\eq{ptwo_p_link} and\eq{R_Eql} show that $-\partial_{\Delta t} C_{E}^{\rm eq}(\Delta t) =
(T_{\rm c}/4)L^{(2)}\eql(\Delta t)\approx [T_{\rm c}/4\hatK\eql^2(0)]K\eql(\Delta t)$ for
large $\Delta t$; from\eq{CC_eql} this agrees with the corresponding
derivative of the result in\eq{C4_final},
$-\partial_{\Delta t}CC\eql(\Delta t)/[8\hatK\eql^2(0)] =
[2T_{\rm c}/8\hatK\eql^2(0)]K\eql(\Delta t)$. Note from the discussion
after\eq{chi_Eql_t} that the function $C_{E}^{\rm eq}(\Delta t)$ discontinuously
acquires an additive constant (non-decaying) part as $T$ crosses $T_{\rm c}$ from
above in $d>4$. What we mean in\eq{CE_d_gt_4_longtime} is the limiting
form for $T\searrow T_{\rm c}$ {\em from above}, which does not contain this
plateau. That this is the correct choice can be seen as follows: the
non-decaying part of $C_{E}^{\rm eq}(\Delta t)$ at $T=T_{\rm c}^{-}$ arises from the
fluctuations of the $\mathbf{q}=\mathbf{0}$ Fourier mode, i.e.\ of the magnetization,
which are larger by ${{\mathcal O}}(N)$ than those of the other Fourier
modes. In the context of our non-equilibrium calculation, all Fourier
modes have fluctuations of comparable order (in system size), so that
this contribution is absent; it would appear only for times $t_{\rm w}$ that
scale with system size $N$.
Finally, the long-time
expressions\eqq{CE_d_gt_4_longtime}{RE_d_gt_4_longtime} make explicit
the pseudo-equilibrium form of the energy FD plot noted above: in
$C_E(t,t_{\rm w})$ the equilibrium factor has already decayed to $\sim
t_{\rm w}^{(4-d)/2}$ of its initial value by the time aging effects appear
around $\Delta t\sim t_{\rm w}$.
\section{Energy correlation and response: Non-equilibrium, $d<4$}
\label{sec:noneq_small_d}
In dimension $d<4$, the evaluation of the energy correlation function in the
aging regime is somewhat more complicated. One can nevertheless show
that, as before, the dominant contribution to $C_E$ is $C_E^{(4)}$,
with the other three terms being subleading. We thus again need to
consider the function $I(t',t_{\rm w})$ defined in\eq{Idef}, and then work
out $C_E$ using\eq{C4_I}. One can see clearly that the approach
leading above to\eq{I_approx} no longer works: in $d<4$,
$\hatL^{(2)}\eql(0)=2T_{\rm c}$, so the leading terms in\eq{I_approx} actually
cancel. To treat this cancellation accurately, we
use\eq{ptwo_eql_condition} to write the coefficient $2T_{\rm c}$ in the
following way
\begin{equation}
2T_{\rm c} = - \frac{K\eql'(t)}{K\eql(t)} + \int dt'\,
\frac{K\eql(t')}{K\eql(t)} L^{(2)}\eql(t-t')
\end{equation}
This allows us to express\eq{Idef} as
\begin{eqnarray}
\fl I(t,t_{\rm w}') &=& \left[2z(t)+\frac{K\eql'(t)}{K\eql(t)}\right]
CC(t,t_{\rm w}')
\label{I_subtract}
\\
\fl & & {}+{} \int_0^t
dt'\,L^{(2)}\eql(t-t') \left[\sc{L}(t/t')CC(t',t_{\rm w}')
-\frac{K\eql(t')}{K\eql(t)}CC(t,t_{\rm w}')\right]
\nonumber
\end{eqnarray}
To make progress, we need the behaviour of $CC$ for $d<4$. For long
times we have
\begin{equation}
\fl CC(t,t_{\rm w}) = \int(dq)\, C^2_\mathbf{q}(t,t_{\rm w}) = T_{\rm c}^2 \frac{g(t_{\rm w})}{g(t)}\int(dq)\,
\omega^{-2} \sc{C}^2(\omega t_{\rm w}) e^{-2\omega(t-t_{\rm w})}
\label{CC_gen_dlt4}
\end{equation}
The equal-time value thus scales as
\begin{equation}
CC(t,t) \sim \int d\omega\, \omega^{d/2-3} \sc{C}^2(\omega t) =
t^{(4-d)/2}\int dw \, w ^{(d-6)/2} \sc{C}^2(w )
\label{CC_equal_time}
\end{equation}
hence $CC(t,t)=\gamma_d t^{(4-d)/2}$ with some constant $\gamma_d$. A scaling
function is then obtained if we normalize $CC(t,t_{\rm w})$ by this
equal-time value:
\begin{equation}
\frac{CC(t,t_{\rm w})}{CC(t,t)} =
\frac{g(t_{\rm w})}{g(t)}
\frac{t_{\rm w}^{(4-d)/2}}{t^{(4-d)/2}}
\frac{\int dw \, w ^{(d-6)/2} \sc{C}^2(w ) e^{-2w (t-t_{\rm w})/t_{\rm w}}}
{\int dw \, w ^{(d-6)/2} \sc{C}^2(w )}
\label{sc_CC_ord}
\end{equation}
This is a function ${\mathcal{G}}(x)$ of $x=t/t_{\rm w}$; note that the first two
fractions cancel since $g(t)\sim t^{-\kappa}=t^{(d-4)/2}$. So far this
scaling expression holds for $t>t_{\rm w}$. To be able to use it also for
non-ordered times, note that for $t<t_{\rm w}$,
\begin{equation}
\frac{CC(t,t_{\rm w})}{CC(t,t)}=\left(\frac{t_{\rm w}}{t}\right)^{(4-d)/2}
\frac{CC(t_{\rm w},t)}{CC(t_{\rm w},t_{\rm w})}=x^{(d-4)/2}
{\mathcal{G}}(t_{\rm w}/t)
\label{sc_CC_dis}
\end{equation}
So we have overall
\begin{equation}
\fl\frac{CC(t,t_{\rm w})}{CC(t,t)} ={\mathcal{G}}(t/t_{\rm w}),
\quad
{\mathcal{G}}(x)=\left\{
\begin{array}{ll}
{\displaystyle\frac{\int dw\, w^{(d-6)/2} \sc{C}^2(w) e^{-2(x-1)w}}
{\int dw\, w^{(d-6)/2} \sc{C}^2(w)}} & \mbox{for $x\geq 1$} \\
x^{(d-4)/2}{\mathcal{G}}(1/x) & \mbox{for $x\leq 1$}
\end{array}
\right.
\label{CC_d_lt_4}
\end{equation}
One easily works out the asymptotics of ${\mathcal{G}}$: ${\mathcal{G}}(x)\sim x^{-d/2}$ for
$x\to\infty$, ${\mathcal{G}}(x)\sim x^{d-2}$ for $x\to 0$. For $|x-1|\ll1$, on
the other hand, $1-{\mathcal{G}}(x)\sim |x-1|^{(4-d)/2}$. This corresponds to
$CC(t,t)-CC(t,t_{\rm w})\sim |t-t_{\rm w}|^{(4-d)/2}$ for $1\ll |t-t_{\rm w}|\ll
t_{\rm w}$. (The behaviour of this difference for smaller time intervals
$t-t_{\rm w}={{\mathcal O}}(1)$ is not captured accurately by the scaling
form\eq{CC_d_lt_4}, but integrals over this regime contribute
only subleading corrections to the results below.)
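The small-$x$ asymptotics quoted here follows in one line from the $x\leq 1$ branch of\eq{CC_d_lt_4}, together with the large-argument decay ${\mathcal{G}}(1/x)\sim x^{d/2}$:
\begin{equation}
{\mathcal{G}}(x) = x^{(d-4)/2}\,{\mathcal{G}}(1/x) \sim x^{(d-4)/2}\,x^{d/2} = x^{d-2}
\qquad (x\to 0)
\end{equation}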
We can now insert the scaling form\eq{CC_d_lt_4} of $CC$
into\eq{I_subtract}. The non-integral terms turn out to be subleading
(see below), so
\begin{eqnarray}
\fl I(t,t_{\rm w}') &=& \int dt' \, L^{(2)}\eql(t-t')
\nonumber\\
\fl & &\times\left[\sc{L}(t/t')CC(t',t')
{\mathcal{G}}(t'/t_{\rm w}')
-\frac{K\eql(t')}{K\eql(t)}CC(t,t){\mathcal{G}}(t/t_{\rm w}')\right]
\\
\fl &=& CC(t,t) \int dt' \,
L^{(2)}\eql(t-t')\left[\frac{t'}{t}{\mathcal{G}}(t'/t_{\rm w}') - \frac{K\eql(t')}{K\eql(t)}
{\mathcal{G}}(t/t_{\rm w}')\right]
\label{I_sub2}
\end{eqnarray}
where we have used that, from\eq{FKd_lt_4} and\eq{same_sc_fn},
$\sc{L}(t/t')CC(t',t')/CC(t,t) = (t'/t)^{(d-2)/2}
(t'/t)^{(4-d)/2} = t'/t$. The remaining ``subtracted'' integral now no
longer has its mass concentrated near $t'=t$ because the terms in
square brackets give a factor $t-t'$ there. The whole integration
range contributes, so that we can replace $L^{(2)}\eql(t-t')$ by its
asymptotic form\eq{ptwo_eql_asymptotics}, $L^{(2)}\eql(t-t')\approx\lambda_d
(t-t')^{(d-6)/2}$ with a $d$-dependent constant $\lambda_d$. Similarly, the
ratio ${K\eql(t')}/{K\eql(t)}$ can be replaced by $(t'/t)^{(2-d)/2}$
to leading order. Scaling the integration variable as $y=t'/t_{\rm w}'$ then
gives
\begin{equation}
\fl I(t,t_{\rm w}') = I(x')
= \gamma_d\lambda_d x'^{-2}\int_0^{x'} dy\,(1-y/x')^{(d-6)/2}y\left[{\mathcal{G}}(y) -
y^{-d/2}{\mathcal{G}}(x')x'^{d/2}\right]
\label{I_sub3}
\end{equation}
This shows that in the aging regime $I$ depends on $x'=t/t_{\rm w}'$
only. One can now also see that the neglected terms
from\eq{I_subtract} are indeed subleading: they scale as
$t^{-1}CC(t,t_{\rm w}')\sim t^{(2-d)/2}{\mathcal{G}}(t/t_{\rm w}')$.
We can now proceed to simplifying\eq{C4_I} in the aging regime. Using
the same subtraction method as above, and remembering that
$C_E=C_E^{(4)}$ to leading order, one finds by analogy with\eq{I_subtract}
\begin{equation}
\fl 8C_E(t,t_{\rm w}) = \int dt_{\rm w}' \,
L^{(2)}\eql(t_{\rm w}-t_{\rm w}')\left[\sc{L}(t_{\rm w}/t_{\rm w}')
I(t,t_{\rm w}') - \frac{K\eql(t_{\rm w}')}{K\eql(t_{\rm w})}I(t,t_{\rm w})\right]
\label{C4_sub}
\end{equation}
Here subleading terms similar to those in the first line
of\eq{I_subtract} have already been neglected. With the scaled
integration variable $y=t_{\rm w}'/t_{\rm w}$, and using the asymptotic forms of
$L^{(2)}\eql(t_{\rm w}-t_{\rm w}')$ and $K\eql(t_{\rm w}')$ as in\eq{I_sub3} one gets
\begin{equation}
\fl 8C_E(t,t_{\rm w}) = \lambda_d t_{\rm w}^{(d-4)/2}
\int_0^1 dy \, (1-y)^{(d-6)/2}\left[y^{(d-2)/2} I(x/y) - y^{(2-d)/2}I(x)\right]
\label{CE_sub}
\end{equation}
This shows that $C_E$ scales as $t_{\rm w}^{(d-4)/2}$ times a function of
$x=t/t_{\rm w}$. It is difficult to work out the whole functional dependence
on $x$. We therefore focus below on the asymptotic behaviour for
$x\to\infty$, which gives the asymptotic FDR. First, though, we check that in
the limit $x\to 1$, where aging effects are unimportant, our
expression matches with the equilibrium result
$C_{E}^{\rm eq}(t-t_{\rm w})$. From\eq{R_Eql} the latter behaves as $C_{E}^{\rm eq}(\Delta t) =
(T_{\rm c}/4)\int_{\Delta t}^\infty dt'\,L^{(2)}\eql(t') \approx
[\lambda_d T_{\rm c}/2(4-d)]\Delta t^{(d-4)/2}$ for large $\Delta t$. To compare with the
prediction from\eq{CE_sub}, one uses that $I(x') \sim \ln(x'-1)$ for
$x'\approx1$; this follows from the behaviour of ${\mathcal{G}}(x)$ for $x\approx
1$. (Note that the proportionality constant in $I\sim \ln(x'-1)$ is
{\em positive}, so that $I$ itself is -- in contrast to the case $d>4$
-- {\em negative}.) Inserting into\eq{CE_sub} one then finds that for
$x\approx 1$ the integral scales as $(x-1)^{(d-4)/2}$. Overall, one
gets $C_E(t,t_{\rm w}) \sim t_{\rm w}^{(d-4)/2}(x-1)^{(d-4)/2} =
(t-t_{\rm w})^{(d-4)/2}$, consistent with the expectation from the
equilibrium result. One can work out the prefactor of the power law
and finds that this, too, agrees as it should.
Turning now to the behaviour of\eq{CE_sub} for large $x$, we need the
asymptotics of $I(x')$.
The leading tail $\sim y^{-d/2}$ of ${\mathcal{G}}(y)$ is subtracted off in the
expression in square brackets in\eq{I_sub3}, leaving as the next term
$y^{-(d+2)/2}$. Even together with the additional factor $y$ this
makes the integral convergent at the upper end for $x'\to\infty$. In
the limit we thus get $I(x') \approx -\alpha_d\gamma_d\lambda_d x'^{-2}$, with
\begin{equation}
\alpha_d = \int_0^\infty dy\,y\left[g_d y^{-d/2}-{\mathcal{G}}(y)\right]
\label{Ad}
\end{equation}
and $g_d=\lim_{x\to\infty} {\mathcal{G}}(x)x^{d/2}$.
Inserting this inverse-square asymptotic behaviour of $I(x')$ gives
for the integral in\eq{CE_sub} the scaling $\alpha_d\beta_d\gamma_d\lambda_d x^{-2}$ with
\begin{equation}
\fl \beta_d = \int_0^1 dy \,
(1-y)^{(d-6)/2}\left(y^{(2-d)/2}-y^{(d+2)/2}\right)
= -
\frac{\Gamma\left(\frac{d-4}{2}\right)\Gamma\left(\frac{d+4}{2}\right)}
{\Gamma(d)}
\label{CE_factor}
\end{equation}
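The closed form in\eq{CE_factor} follows by writing each piece of the integral as a Beta function; for $2<d<4$ each piece alone diverges at $y=1$ and is defined by analytic continuation:
\begin{eqnarray}
\int_0^1 dy\,(1-y)^{(d-6)/2}\,y^{(2-d)/2} &=&
\frac{\Gamma\left(\frac{4-d}{2}\right)\Gamma\left(\frac{d-4}{2}\right)}{\Gamma(0)}
= 0
\\
\int_0^1 dy\,(1-y)^{(d-6)/2}\,y^{(d+2)/2} &=&
\frac{\Gamma\left(\frac{d+4}{2}\right)\Gamma\left(\frac{d-4}{2}\right)}{\Gamma(d)}
\end{eqnarray}
Only the second piece survives, so $\beta_d$ reduces to minus this Beta function, as in\eq{CE_factor}.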
It then follows, finally, that $C_E(t,t_{\rm w})=(\alpha_d\beta_d\gamma_d\lambda_d^2/8)
t_{\rm w}^{d/2} t^{-2}$ for $t\gg t_{\rm w}$, and
\begin{equation}
C'_E(t,t_{\rm w}) = (\alpha_d\beta_d\gamma_d\lambda_d^2 d/16) t_{\rm w}^{(d-2)/2} t^{-2}
\label{CE_prime}
\end{equation}
To get the FDR, we now need the response function $R_E$. This can be
evaluated very similarly to the case $d>4$ and one finds that the last
term in\eq{RE_aux} is again the dominant one, giving
\begin{equation}
R_E(t,t_{\rm w}) = \frac{1}{4}L^{(2)}\eql(t-t_{\rm w})\sc{L}(t/t_{\rm w}) =
\frac{1}{4}L^{(2)}(t,t_{\rm w})
\label{RE_d_lt_4}
\end{equation}
(The $R\Om C(t,t_{\rm w})$ terms in\eq{RE_aux} look dangerous: they scale as
$(t-t_{\rm w})^{-d/2}$ and are thus {\em larger} than $L^{(2)}\eql(t-t_{\rm w})\sim
(t-t_{\rm w})^{(d-6)/2}$ in $d<3$. However, their prefactors
$-2T_{\rm c}+\hatL^{(2)}\eql(0)$ cancel. Treating this cancellation more
carefully then shows that these terms do remain subleading compared
to\eq{RE_d_lt_4}.) As before, $4R_E$ equals $L^{(2)}$ in the aging
regime, and this then holds across the whole long-time
regime since for $t-t_{\rm w}={{\mathcal O}}(1)$ it matches the equilibrium
behaviour $4R_E(t,t_{\rm w})=L^{(2)}\eql(t-t_{\rm w})$. For $t\gg t_{\rm w}$, on the other
hand, the response\eq{RE_d_lt_4} becomes
\begin{equation}
R_E(t,t_{\rm w}) = \frac{1}{4}\lambda_d t^{(d-6)/2}
\left(\frac{t}{t_{\rm w}}\right)^{(2-d)/2} = \frac{\lambda_d}{4} t_{\rm w}^{(d-2)/2} t^{-2}
\end{equation}
Comparing with\eq{CE_prime} then finally gives for the asymptotic FDR
for the energy in $d<4$,
\begin{equation}
X_E^\infty = \frac{4T_{\rm c}}{\alpha_d\beta_d\gamma_d\lambda_d d}
\label{Xinf_dlt4}
\end{equation}
After evaluating the various numerical factors (see
\ref{sec:X_E_infty}) this can be written as
\begin{equation}
X_E^\infty = \frac{2}{d\tilde{\alpha}_d}
\frac{\Gamma(d)\Gamma{\textstyle\left(\frac{4-d}{2}\right)}}
{\Gamma\left(\frac{d+4}{2}\right)}
\label{Xinf_dlt4_explicit}
\end{equation}
where
\begin{equation}
\fl\tilde{\alpha}_d = \frac{\pi}{\sin[\pi(4-d)/2]}
+\int_1^\infty dx
\int_0^1 dy\, y^{(d-4)/2} \frac{(x-y)^{(2-d)/2}}{1+x-y}
\left(1-y-x^{-(d+2)/2}\right)
\label{Atd}
\end{equation}
Near $d=4$, one can expand in $\epsilon=4-d$. It can be shown by
explicit calculation that the integral in\eq{Atd} is exactly zero for
$d=4$, giving $\tilde{\alpha}_d = 2/\epsilon+{{\mathcal O}}(\epsilon)$ and so
\begin{equation}
X_E^\infty = \frac{\epsilon
\Gamma(4-\epsilon)\Gamma(\epsilon/2)}{(4-\epsilon)\Gamma(4-\epsilon/2)}
+ {{\mathcal O}}(\epsilon^2) = \frac{1}{2}-\frac{\epsilon}{3} +
{{\mathcal O}}(\epsilon^2)
\end{equation}
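The linear coefficient can be verified explicitly: using $\epsilon\Gamma(\epsilon/2)=2\Gamma(1+\epsilon/2)=2[1-\gamma_{\rm E}\epsilon/2+{{\mathcal O}}(\epsilon^2)]$ and $\Gamma(4-\epsilon)/\Gamma(4-\epsilon/2)=1-(\epsilon/2)\psi(4)+{{\mathcal O}}(\epsilon^2)$ with $\psi(4)=11/6-\gamma_{\rm E}$,
\begin{equation}
\fl X_E^\infty = \frac{1}{2}\left(1-\gamma_{\rm E}\frac{\epsilon}{2}\right)
\left(1+\frac{\epsilon}{4}\right)
\left(1-\frac{11\epsilon}{12}+\gamma_{\rm E}\frac{\epsilon}{2}\right)
+ {{\mathcal O}}(\epsilon^2)
= \frac{1}{2}+\frac{\epsilon}{2}\left(\frac{1}{4}-\frac{11}{12}\right)
= \frac{1}{2}-\frac{\epsilon}{3}
\end{equation}
where Euler's constant $\gamma_{\rm E}$ cancels between the Gamma function factors.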
Notably, this is {\em different} from the FDR
$X^\infty=1-2/d=1/2-\epsilon/8+{{\mathcal O}}(\epsilon^2)$ for all the other,
finite-range, observables that we considered previously in $d<4$. It is,
however, consistent with RG calculations to
${{\mathcal O}}(\epsilon)$ for the $O(n)$ model in the limit $n\to\infty$, for
an analogous choice of observable~\cite{CalGam04}. Remarkably,
therefore, the
non-Gaussian fluctuations induced by the weak infinite-range interaction
in the spherical model seem to mimic precisely the effects that are
seen in more realistic models such as the $O(n)$, even though in the
latter all interactions are short-ranged and there is no difference
between the behaviour of block observables and global ones.
\begin{figure}
\centerline{\includegraphics[width=11.0cm,clip=true]{XE.eps}}
\caption{Asymptotic FDR $X^\infty$ vs dimension $d$ for finite-range
observables (equation~(\protect\ref{X_baseline}), dashed) and energy
(equation~(\protect\ref{Atd}), solid). Dotted lines indicate the
corresponding linear expansions $1/2-\epsilon/8$ and $1/2-\epsilon/3$ in
$\epsilon=4-d$.
\label{fig:X_normal_and_E}
}
\end{figure}
We show in Fig.~\ref{fig:X_normal_and_E} the dependence of the
asymptotic energy-FDR $X_E^\infty$ on dimension $d$ and compare with
the result for finite-range observables. They agree in $d\geq 4$, but the
difference between them grows as $d$ decreases towards $d=2$, with the
energy FDR always having the lower value. In the limit $d\to 2$, both
FDRs converge to zero, but while the finite-range FDR
$X^\infty=\epsilon'/2+{{\mathcal O}}(\epsilon'^2)$ does so linearly in
$\epsilon'=d-2$, the energy FDR vanishes quadratically as
$X_E^\infty=\epsilon'^2/8+{{\mathcal O}}(\epsilon'^3)$, due to the divergence
of $\tilde{\alpha}_d=4/\epsilon'^2+{{\mathcal O}}(1/\epsilon')$.
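The quadratic coefficient follows directly from\eq{Xinf_dlt4_explicit}: setting $d=2+\epsilon'$ and using the divergence $\tilde{\alpha}_d\approx 4/\epsilon'^2$,
\begin{equation}
X_E^\infty \approx \frac{2}{2(4/\epsilon'^2)}\,
\frac{\Gamma(2)\Gamma(1)}{\Gamma(3)}
= \frac{\epsilon'^2}{4}\cdot\frac{1}{2} = \frac{\epsilon'^2}{8}
\end{equation}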
As in the case $d>4$, an energy FD plot would have a
pseudo-equilibrium form which hides all non-equilibrium effects at
long times. Indeed, one could write\eq{CE_sub} in the form
$C_E(t,t_{\rm w})=C_{E}^{\rm eq}(t-t_{\rm w})\sc{{C_E}}(x)$, and the decay of the
equilibrium factor $C_{E}^{\rm eq}$ squeezes all details about the aging factor
$\sc{{C_E}}(x)$ into a vanishingly small region of the FD plot for
long times. While we have not calculated $\sc{{C_E}}$ explicitly, the
discussion after\eq{CE_sub} shows that $\sc{{C_E}}(1)=1$ as it should
be. For large $x$, on the other hand, we saw in \eq{CE_prime} that
$C_E(t,t_{\rm w})\sim t_{\rm w}^{d/2} t^{-2}$, implying $\sc{{C_E}}(x)\sim
t_{\rm w}^{d/2}t^{-2}(t-t_{\rm w})^{(4-d)/2} \sim x^{-d/2}$. This matches
continuously at $d=4$ with the prediction\eq{CE_d_gt_4_longtime} for
$d>4$, where the aging correction decays as $\sc{CC}(x) \sim x^{-2}$.
\section{Magnetized initial states}
\label{sec:magnetized}
In this final section we consider the dynamics for initial
configurations with nonzero
magnetization, focussing as before on the non-equilibrium
dynamics that results when the system is subsequently brought to the
critical temperature $T_{\rm c}$. Physically, such a situation could arise
in an ``up-quench'', where the system is equilibrated at $T<T_{\rm c}$ and
temperature is then increased to $T=T_{\rm c}$. As explained in the
introduction, our interest in this
scenario arises from recent results~\cite{GarSolPagRit05} which show
that such initial conditions produce FDRs that differ nontrivially
from those for zero magnetization. The analysis
of~\cite{GarSolPagRit05} was limited to high dimensions $d$ or
infinite-range interactions, however; the calculation below will allow us
to see explicitly how the results change in finite dimension. In
particular, we will obtain exact FDRs for magnetized coarsening
below the upper critical dimension, i.e.\ $d<4$ in the spherical
model.
We will continue to use the notation $C_\mathbf{q}(t,t_{\rm w})=(1/N) \left\langle
S_\mathbf{q}(t)S_\mathbf{q}^*(t)\right\rangle$ for Fourier mode correlations. For $\mathbf{q}=\mathbf{0}$
this is now a full, unsubtracted correlator, with
$C_\mathbf{0}(t,t)=Nm^2(t)+{{\mathcal O}}(1)$ and $m(t)=(1/N)\left\langle S_\mathbf{0}(t)\right\rangle$ the
time-dependent magnetization. The difference $\Cc_\zv(t,t_{\rm w}) =
C_\mathbf{0}(t,t_{\rm w})-Nm(t)m(t_{\rm w})$ is the connected correlator, which has
values of ${{\mathcal O}}(1)$ and is the relevant quantity for analysing the
FD behaviour. For $\mathbf{q}\neq\mathbf{0}$, connected and full
correlators coincide:
\begin{equation}
\tilde{C}_\mathbf{q}(t,t_{\rm w}) = C_\mathbf{q}(t,t_{\rm w})-(1/N)\left\langle S_\mathbf{q}(t)\right\rangle \langle
S^*_\mathbf{q}(t_{\rm w})\rangle = C_\mathbf{q}(t,t_{\rm w})
\end{equation}
We now need to check how the analysis in the previous sections is
modified by the
presence of a nonzero magnetization. The Fourier space equation of
motion\eq{dotSq} remains valid, and so do the resulting expressions
for the response function $R_\mathbf{q}(t,t_{\rm w})$\eq{Rq} and the full
correlator $C_\mathbf{q}(t,t_{\rm w})$\eqq{Cqtt}{C_twotime}.
The expression\eq{hat_g_general} for the Laplace transform of $g(t)$ that
results from the spherical constraint also still holds, but the
solution is now different. In the $\mathbf{q}$-integral,
the $\mathbf{q}=\mathbf{0}$ contribution $(1/N)C_\mathbf{0}(0,0)/s =
m^2(0)/s$ has to be treated separately. In fact, one sees that for
$s\to 0$ this term always dominates the rest of the integral, which
diverges less strongly. At criticality, where $T=T_{\rm c}=[\int(dq)\,
1/\omega]^{-1}=[2\hat{f}(0)]^{-1}$, one thus has for $s\to 0$
\begin{equation}
2T_{\rm c} \hat{g}(s) = \frac{m^2(0)}{s} \left[\int(dq)\,
\left(\frac{1}{2\omega}-\frac{1}{s+2\omega}\right)\right]^{-1}
\label{gs}
\end{equation}
which using\eq{p_eql_LT} can be rearranged into
\begin{equation}
\frac{\hat{g}(s)}{m^2(0)}=s^{-2}\left[\int (dq)
\frac{T_{\rm c}}{\omega(2\omega+s)}\right]^{-1} = [s^{2}\hat{K}\eql(s)]^{-1}
\label{gs2}
\end{equation}
For $d>4$, $\hat{K}\eql(0)$ is finite so this scales as $s^{-2}$; for
$d<4$, on the other hand, $\hat{K}\eql(s)$ diverges as $s^{(d-4)/2}$
so that $\hat{g}(s) \sim s^{-d/2}$. Translating back to the time
domain, $g(t)$ behaves for large $t$ as
\begin{equation}
g(t)\sim m^2(0)\,t^\alpha, \qquad \alpha = \left\{\begin{array}{lll}
1 & \mbox{ } & (d>4)\\ (d-2)/2 & \mbox{ } & (d<4) \end{array}\right.
\label{g_scaling_magn}
\end{equation}
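As a quick numerical sanity check (illustrative only, and not part of the analytical derivation), one can confirm the divergence $\hat{K}\eql(s)\sim s^{(d-4)/2}$ underlying the $d<4$ entry of this result, assuming a small-$q$ dispersion $\omega=q^2$ and a unit momentum cutoff; prefactors are irrelevant for the exponent:

```python
# Illustrative check: for d = 3 the kernel
#   K_eq(s) = \int (dq) T_c / [omega (2 omega + s)]
# should diverge as s^{(d-4)/2} = s^{-1/2} when s -> 0.
# Assumptions: omega = q^2 at small q, unit cutoff, prefactors dropped.
import numpy as np
from scipy.integrate import quad

d = 3

def K_eq(s):
    # radial measure q^{d-1} dq; integrand q^{d-1} / [q^2 (2 q^2 + s)]
    val, _ = quad(lambda q: q**(d - 1) / (q**2 * (2 * q**2 + s)), 0.0, 1.0)
    return val

s1, s2 = 1e-4, 1e-6
exponent = np.log(K_eq(s2) / K_eq(s1)) / np.log(s2 / s1)  # expect (d-4)/2 = -0.5
```

Via $\hat{g}(s)\sim s^{-2}/\hat{K}\eql(s)$ this reproduces $\hat{g}(s)\sim s^{-d/2}$ and hence $g(t)\sim t^{(d-2)/2}$.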
Note that this asymptotic behaviour is independent of any details of
the initial condition except for the presence of a nonzero $m(0)$; it
depends on the actual value of $m(0)$ only through the prefactor
$m^2(0)$. For the time-dependence of $m(t)$, one gets by taking an
average of\eq{Sqt}
\begin{equation}
m(t) = R_\mathbf{0}(t,0) m(0) = \frac{m(0)}{\sqrt{g(t)}}
\label{m_in_terms_of_g}
\end{equation}
Because of the proportionality of $g(t)$ to $m^2(0)$ for large $t$,
the asymptotic decay of $m(t)$ is independent of the initial
conditions, in terms of {\em both} the decay exponent {\em and} the prefactor.
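A two-line numerical illustration of this cancellation, using the assumed asymptotic form $g(t)\simeq m^2(0)\,c\,t^\alpha$ with an arbitrary constant $c$:

```python
# With g(t) ~ m^2(0) c t^alpha, the magnetization m(t) = m(0)/sqrt(g(t))
# loses all memory of m(0), prefactor included.
c, alpha, t = 0.37, 1.0, 1.0e4   # c and t are arbitrary here

def m_of_t(m0):
    g = m0**2 * c * t**alpha
    return m0 / g**0.5

ratio = m_of_t(0.9) / m_of_t(0.1)  # expect exactly 1
```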
\subsection{Finite-range spin observables}
We first analyse the correlation and response functions for
observables that relate to a number of spins that is much smaller
than $N$. As for the unmagnetized case, the fluctuations of the
Lagrange multiplier $z$ can then be neglected. To understand the
magnetized case, it is useful to shift the spin variables by $m(t)$.
We will see that the equations of motion then acquire the same form as
before, so that we can directly transfer the main results from the
unmagnetized
case. Explicitly, we consider the following decomposition of the spin
variables
\begin{equation}
S_i=m+U_i
\end{equation}
where $U_i$ is a zero-mean variable. The equation of
motion\eq{real_space_dotSi} for $S_i$ then gives
\begin{equation}
\partial_t m+\partial_t{U_i}=-\Omega_{ij}(m+U_j)+\xi_i-z(m+U_i)
\end{equation}
From\eq{m_in_terms_of_g} and the definition\eq{g_def}, $\partial_t\ln m(t) =
-(1/2)\partial_t\ln g(t) = -z(t)$, so $\partial_t m = -zm$. Also $\sum_j
\Omega_{ij}=0$, giving
\begin{equation}
\partial_t{U_i} = -\Omega_{ij}U_j + \xi_i - zU_i
\label{dot_Ui}
\end{equation}
This is the same as the equation for $S_i$ in the unmagnetized case,
and so one can deduce directly the solutions for the correlation
functions of the $U_i$; these are the connected correlations
$\tilde{C}$. The initial values again become unimportant for long times,
allowing us to work out the scaling of the $\tilde{C}$, and then together
with the response $R$ also the FDR $X$. It is clear from the
description in terms of the subtracted spins $U_i$ that there is
nothing special about the case $\mathbf{q}=\mathbf{0}$, and all results will have a
smooth limit as $\mathbf{q}\to \mathbf{0}$. Because we are neglecting the
fluctuations of the Lagrange multiplier $z$, this limit again has to be
understood as that of a block magnetization calculated over a
region much larger than the correlation length (in the time regime
being considered) but much smaller than the linear system size, so
that $q\gg 1/L$.
Applying\eq{Cqtt}, we can now write down directly the connected
correlation function at equal times as
\begin{equation}
\tilde{C}_\mathbf{q}(t_{\rm w},t_{\rm w})=\tilde{C}_\mathbf{q}(0,0)\frac{e^{-2\omega t_{\rm w}}}{g(t_{\rm w})}
+2T_{\rm c}\frac{e^{-2\omega t_{\rm w}}}{g(t_{\rm w})}\int_0^{t_{\rm w}}dt'e^{2\omega t'}g(t')
\label{Cqtt_magn}
\end{equation}
At long times, the first term is subleading due to the
scaling\eq{g_scaling_magn}, and one has the behaviour
\begin{equation}
\tilde{C}_\mathbf{q}(t_{\rm w},t_{\rm w}) = \frac{T_{\rm c}}{\omega}\sc{C}(\omega t_{\rm w}), \quad
\sc{C}(w) = 2w\int_0^1 dy\,y^{\alpha} e^{-2w(1-y)}
\label{C_scaling_magn}
\end{equation}
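The two limits of this scaling function can be verified numerically (an illustrative check; $\alpha=1$ corresponds to $d>4$ and $\alpha=1/2$ to $d=3$):

```python
# C(w) = 2 w \int_0^1 dy y^alpha exp(-2 w (1-y)) should approach 1 for
# w -> infinity (equilibrated modes) and 2 w/(alpha+1) for w -> 0.
import math
from scipy.integrate import quad

def C_scal(w, alpha):
    val, _ = quad(lambda y: y**alpha * math.exp(-2.0 * w * (1.0 - y)), 0.0, 1.0)
    return 2.0 * w * val

large_w = [C_scal(200.0, a) for a in (1.0, 0.5)]                      # expect ~ 1
small_w = [C_scal(1e-3, a) / (2e-3 / (a + 1.0)) for a in (1.0, 0.5)]  # expect ~ 1
```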
This result is of course the same as\eq{Xq_scaling}, except for the
replacement of $-\kappa$ by $\alpha$ which reflects the difference in the
asymptotic behaviour of $g(t)$.
The two-time connected correlations are
$\tilde{C}_\mathbf{q}(t,t_{\rm w})=R_\mathbf{q}(t,t_{\rm w})\tilde{C}_\mathbf{q}(t_{\rm w},t_{\rm w})$ with $R_\mathbf{q}(t,t_{\rm w})$
given by\eq{Rq} as before. As a consequence, the expression\eq{Xq} for
the FDR $X_\mathbf{q}$ also remains valid, and one finds the scaling form
$X_\mathbf{q}(t,t_{\rm w}) = \sc{X}(\omega t_{\rm w})$ with
\begin{equation}
\sc{X}^{-1}(w) = 2-(2w+\alpha)\int_0^1 dy\
y^{\alpha}e^{-2w(1-y)}
\end{equation}
which is directly analogous to\eq{Xq_scaling}. In the limit
$w=\omega t_{\rm w}\rightarrow\infty$, $X_\mathbf{q}=1$, which means that all
modes with $\mathbf{q}\neq \mathbf{0}$ equilibrate once $t_{\rm w}\gg 1/\omega$. In the
opposite limit $w\rightarrow 0$, corresponding to $t_{\rm w}\ll
1/\omega$,
\begin{equation}
X_\mathbf{q}=\frac{\alpha+1}{\alpha+2} = \left\{\begin{array}{lll}
2/3 & \mbox{ } & (d>4) \\
d/(d+2) & \mbox{ } & (d<4)
\end{array}
\right.
\label{Xqneq}
\end{equation}
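These limit values can be cross-checked by evaluating the scaling function numerically (an illustrative sketch, independent of $T_{\rm c}$):

```python
# 1/X(w) = 2 - (2 w + alpha) \int_0^1 dy y^alpha exp(-2 w (1-y)).
# Expect X -> (alpha+1)/(alpha+2) for w -> 0, i.e. 2/3 for alpha = 1 (d > 4)
# and 3/5 for alpha = 1/2 (d = 3), and X -> 1 for w -> infinity.
import math
from scipy.integrate import quad

def X_scal(w, alpha):
    val, _ = quad(lambda y: y**alpha * math.exp(-2.0 * w * (1.0 - y)), 0.0, 1.0)
    return 1.0 / (2.0 - (2.0 * w + alpha) * val)

X0_dgt4 = X_scal(1e-4, 1.0)   # ~ 2/3
X0_d3 = X_scal(1e-4, 0.5)     # ~ d/(d+2) = 3/5 for d = 3
Xinf = X_scal(50.0, 1.0)      # ~ 1
```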
For $\mathbf{q}\to\mathbf{0}$ this result applies independently of the value of
$t_{\rm w}$ as long as $t_{\rm w}\gg 1$, so that the FD plot for the block
magnetization will be a straight line with slope\eq{Xqneq}. This is as
for the unmagnetized case, but the actual value of the FDR is now {\em
different}. It is also different from the value $X^\infty=4/5$
predicted for Ising models in the limit of large
$d$~\cite{GarSolPagRit05}; we will see below that the latter value is
obtained for the {\em global} magnetization, which is affected by
local spin fluctuations of ${{\mathcal O}}(N^{-1/2})$.
For later reference we write down the long-time forms of the
correlation and response functions for $\mathbf{q}\to\mathbf{0}$. By setting
$\omega=0$ in\eq{Cqtt_magn} and taking the long-time limit where the
first term becomes negligible, we find for the connected equal-time
correlator
\begin{equation}
\tilde{C}_\mathbf{0}(t_{\rm w},t_{\rm w}) = \frac{2T_{\rm c} t_{\rm w}}{1+\alpha}
\end{equation}
The response function is, from\eq{Rq} and\eq{m_in_terms_of_g},
\begin{equation}
R_\mathbf{0}(t,t_{\rm w}) = \sqrt{\frac{g(t_{\rm w})}{g(t)}} = \frac{m(t)}{m(t_{\rm w})}
= \left(\frac{t_{\rm w}}{t}\right)^{\alpha/2}
\label{R_simplification_magn}
\end{equation}
where the last equality holds for long times. The two-time correlator
is therefore
\begin{equation}
\tilde{C}_\mathbf{0}(t,t_{\rm w})=\frac{2T_{\rm c} t_{\rm w}}{1+\alpha}\left(\frac{t_{\rm w}}{t}\right)^{\alpha/2}
\label{Cc_0}
\end{equation}
From these results one of course retrieves the long-time FDR
$X_\mathbf{0}(t,t_{\rm w})={T_{\rm c} R_\mathbf{0}(t,t_{\rm w})}/{\tilde{C}_\mathbf{0}'(t,t_{\rm w})}=({\alpha+1})/({\alpha+2})$,
obtained in\eq{Xqneq} via the limit $\omega t_{\rm w}\to 0$. As explained
above, these results apply in the regime $1/L\ll q\ll
1$. For $\mathbf{q}=\mathbf{0}$ itself, they capture only the Gaussian part of the
spin-fluctuations, and non-Gaussian corrections become relevant as
discussed in the next section.
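The long-time FDR quoted in the preceding paragraph can also be retrieved by brute force, differentiating\eq{Cc_0} numerically (a sketch with $T_{\rm c}=1$ and $\alpha$ a free parameter):

```python
# X_0 = T_c R_0(t,t_w) / [d C_0(t,t_w) / d t_w] should equal (alpha+1)/(alpha+2).
Tc = 1.0

def Cc0(t, tw, alpha):
    return 2.0 * Tc * tw / (1.0 + alpha) * (tw / t) ** (alpha / 2.0)

def R0(t, tw, alpha):
    return (tw / t) ** (alpha / 2.0)

def fdr(t, tw, alpha, h=1e-6):
    # central finite difference with respect to the earlier time t_w
    dC = (Cc0(t, tw + h, alpha) - Cc0(t, tw - h, alpha)) / (2.0 * h)
    return Tc * R0(t, tw, alpha) / dC

checks = [(fdr(100.0, 10.0, a), (a + 1.0) / (a + 2.0)) for a in (1.0, 0.5)]
```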
\subsection{General expressions for magnetization correlation and response}
We now turn to the FD behaviour of the global magnetization,
corresponding to $\mathbf{q}=\mathbf{0}$ rather than $q\gg {1}/{L}$. All $N$ spins
are now involved and one needs to account for the fluctuating
contribution of the Lagrange multiplier, which we write as
$z+N^{-1/2}\Delta z$ as before. To understand why this is necessary in
the magnetized case, but was not in the unmagnetized scenario,
consider the equation of motion\eq{dotSq} for the zero-wavevector
Fourier component of the spins,
\begin{equation}
\partial_t S_\mathbf{0} = -(z+N^{-1/2}\Delta z)S_\mathbf{0} + \xi_\mathbf{0}
\end{equation}
In the unmagnetized case, $S_\mathbf{0}$ is a zero-mean quantity of
${{\mathcal O}}(N^{1/2})$. The $\Delta z$-term then contributes only subleading
${{\mathcal O}}(1)$ fluctuations. For nonzero magnetization, on the other hand, the
mean of $S_\mathbf{0}$ is $Nm$, with fluctuations around this of
${{\mathcal O}}(N^{1/2})$. The coupling of $\Delta z$ with $m$ then gives an
${{\mathcal O}}(N^{1/2})$ contribution to $\partial_t S_\mathbf{0}$, which is no
longer negligible.
To find the resulting non-Gaussian fluctuations in $S_\mathbf{0}$ explicitly,
we make the decomposition $S_i=s_i+N^{-1/2}r_i$ as before. The
discussion in Sec.~\ref{sec:setup} then goes through, and we
retrieve\eq{yt3} for the ${{\mathcal O}}(N^{-1/2})$-corrections $r_i$ to the
spins. For the zero Fourier mode, in particular, we have
\begin{equation}
r_\mathbf{0}(t)=-\frac{1}{2} \int dt' dt''R_\mathbf{0}(t,t') s_\mathbf{0}(t')
L(t',t'')\Delta(t'')
\label{r0}
\end{equation}
To simplify the calculation of connected correlations, we now
decompose the Gaussian part of the spins into $s_i = m + u_i$, so
that the $u_i$ are zero-mean Gaussian variables. This corresponds to a
decomposition $U_i=u_i+N^{-1/2}r_i$ of the fluctuating parts of the
spins into leading Gaussian terms and small non-Gaussian corrections,
in analogy to the representation $S_i=s_i+N^{-1/2}r_i$ in the
unmagnetized case. The $u_i$ obey the equation of motion\eq{dot_Ui},
and their correlation and response functions are the $\tilde{C}_\mathbf{q}$ and
$R_\mathbf{q}$ calculated previously.
We will write the connected correlation function for the global
magnetization which {\em includes} non-Gaussian corrections as
$C_{\rm m}(t,t_{\rm w})=(1/N)\left\langle U_\mathbf{0}(t)U_\mathbf{0}(t_{\rm w})\right\rangle$. Making the
decomposition into Gaussian and non-Gaussian parts, this reads
\begin{equation}
C_{\rm m}(t,t_{\rm w})=\frac{1}{N}
\langle[u_\mathbf{0}(t)+N^{-1/2}r_\mathbf{0}(t)][u_\mathbf{0}(t_{\rm w})+N^{-1/2}r_\mathbf{0}(t_{\rm w})]\rangle
\label{concor}
\end{equation}
with, from\eq{r0},
\begin{eqnarray}
r_\mathbf{0}(t) &=& -\frac{1}{2}\int dt'\,dt''\,
R_\mathbf{0}(t,t')[Nm(t')+u_\mathbf{0}(t')]L(t',t'')\Delta(t'') \\
&=& -\frac{N}{2}\int dt'\,M(t,t')\Delta(t')
\label{r}
\end{eqnarray}
Here we have defined
\begin{equation}
M(t,t_{\rm w})=\int dt'\,R_\mathbf{0}(t,t') m(t') L(t',t_{\rm w}) =
m(t) \int_{t_{\rm w}}^t dt'\,L(t',t_{\rm w})
\label{Mdef}
\end{equation}
where the second form follows from\eq{R_simplification_magn}. $M$ is,
like $R$ and $L$, causal and so vanishes for $t<t_{\rm w}$. In\eq{r}
we have also discarded the Gaussian fluctuation term $u_\mathbf{0}$, which is
of ${{\mathcal O}}(N^{1/2})$ and so negligible against the term $Nm$. This is
in line with the intuition discussed earlier that non-Gaussian
fluctuations arise only from the coupling of $\Delta z$ to $m$. Note
also that $r_\mathbf{0}$ is ${{\mathcal O}}(N)$, so that in\eq{concor} the
non-Gaussian correction $N^{-1/2}r_\mathbf{0}$ is of the same order as the
Gaussian fluctuation $u_\mathbf{0}$, again as expected.
Substituting\eq{r} into\eq{concor}, we see that we need the two-time
correlations of $u_\mathbf{0}$ and $\Delta$. In the presence of a nonzero $m$
the latter becomes
\begin{equation}
\Delta=\frac{1}{\sqrt{N}}\sum_i(s_i^2-1) = \frac{1}{\sqrt{N}}\sum_i
(m^2-1+2u_i m+u_i^2)
\label{Delta_m_nonzero}
\end{equation}
The required correlations are therefore $\left\langle u_\mathbf{0}(t)u_\mathbf{0}(t_{\rm w})\right\rangle =
N\tilde{C}_\mathbf{0}(t,t_{\rm w})$ and
\begin{eqnarray}
\fl \langle u_\mathbf{0}(t)\Delta(t')\rangle &=& \frac{1}{\sqrt{N}}\sum_{ij}
\langle
u_i(t)[m^2(t')-1+2u_j(t')m(t')+u_j^2(t')]\rangle
\\
\fl
&=& \frac{2}{\sqrt{N}} m(t')
\sum_{ij}\langle u_i(t)u_j(t')\rangle= 2m(t')\sqrt{N}\tilde{C}_\mathbf{0}(t,t')
\end{eqnarray}
For the autocorrelation of $\Delta$, we can exploit the fact
that $\left\langle \Delta\right\rangle = 0$ to write\eq{Delta_m_nonzero} as $\Delta = N^{-1/2}
\sum_i (2u_i m+u_i^2 - \left\langle u_i^2\right\rangle)$. This gives
\begin{eqnarray}
\fl \langle \Delta(t')\Delta (t_{\rm w}')\rangle
&=&
\frac{1}{N}\sum_{ij}\langle[2u_i(t')m(t')+u_i^2(t')-\langle
u_i^2(t')\rangle][\cdots
t'\rightarrow t_{\rm w}' \cdots]\rangle
\\
\fl &=& \frac{4}{N}m(t')m(t_{\rm w}')\sum_{ij}\langle u_i(t')u_j(t_{\rm w}')\rangle
+\frac{2}{N}\sum_{ij}\langle u_i(t')u_j(t_{\rm w}')\rangle^2
\\
\fl &=& 4m(t')m(t_{\rm w}')\tilde{C}_\mathbf{0}(t',t_{\rm w}')+2\int(dq)\,\tilde{C}_\mathbf{q}^2(t',t_{\rm w}')
\end{eqnarray}
where we have used Wick's theorem to simplify the fourth-order average
$\langle (u_i^2-\langle u_i^2\rangle)(u_j^2-\langle
u_j^2\rangle)\rangle = \langle u_i^2 u_j^2\rangle - \langle u_i^2
\rangle \langle u_j^2\rangle = 2\langle u_i
u_j\rangle^2$. Abbreviating the $q$-integral as $\tilde{C}\Cc(t',t_{\rm w}')$, the
full connected correlation function\eq{concor} can thus be written as
\begin{eqnarray}
\fl C_{\rm m}(t,t_{\rm w})&=&
\tilde{C}_\mathbf{0}(t,t_{\rm w})
-\frac{1}{2\sqrt{N}} \int dt'\,M(t,t')\langle u_\mathbf{0}(t_{\rm w})\Delta(t')\rangle
\nonumber
\\
\fl &&{}-{}
\frac{1}{2\sqrt{N}} \int dt'\, M(t_{\rm w},t')
\langle u_\mathbf{0}(t)\Delta(t')\rangle
\nonumber
\\
\fl &&{}+{}\frac{1}{4}\int\,dt'\,dt_{\rm w}'\,
M(t,t') M(t_{\rm w},t_{\rm w}') \langle \Delta(t_{\rm w}')\Delta(t')\rangle
\\
\fl &=&
C_{\rm m}^{(1)}(t,t_{\rm w})+C_{\rm m}^{(2)}(t,t_{\rm w})
\label{DM}
\end{eqnarray}
where
\begin{eqnarray}
\fl C_{\rm m}^{(1)}(t,t_{\rm w})&=&\tilde{C}_\mathbf{0}(t,t_{\rm w})-\int dt'\,
[M(t,t')\tilde{C}_\mathbf{0}(t_{\rm w},t')+M(t_{\rm w},t')\tilde{C}_\mathbf{0}(t,t')]m(t')
\nonumber\\
\fl & &{}+{}\int\, dt'\,dt_{\rm w}'\, M(t,t')M(t_{\rm w},t_{\rm w}')m(t')m(t_{\rm w}')\tilde{C}_\mathbf{0}(t',t_{\rm w}')
\label{c_m1_unscaled}
\\
\fl &=& \int\, dt'\,dt_{\rm w}'\, [\delta(t-t')-M(t,t')m(t')]
\nonumber\\
\fl & & \times [\delta(t_{\rm w}-t_{\rm w}')-M(t_{\rm w},t_{\rm w}')m(t_{\rm w}')]\tilde{C}_\mathbf{0}(t',t_{\rm w}')
\label{DM_part}
\end{eqnarray}
and
\begin{equation}
\fl C_{\rm m}^{(2)}(t,t_{\rm w})=\frac{1}{2}\int\, dt'\,dt_{\rm w}'\, M(t,t')M(t_{\rm w},t_{\rm w}')\tilde{C}\Cc(t',t_{\rm w}')
\label{c_m2}
\end{equation}
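The Wick contraction used above for the $\Delta$ autocorrelation, $\langle(u_i^2-\langle u_i^2\rangle)(u_j^2-\langle u_j^2\rangle)\rangle = 2\langle u_iu_j\rangle^2$, can be checked by Monte Carlo for a pair of correlated Gaussian variables (illustrative; the covariance $c$ is arbitrary):

```python
# Monte Carlo check: for zero-mean unit-variance jointly Gaussian u_i, u_j
# with covariance c, <(u_i^2 - 1)(u_j^2 - 1)> = 2 c^2.
import numpy as np

rng = np.random.default_rng(0)
c = 0.8
cov = np.array([[1.0, c], [c, 1.0]])
u = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000)
lhs = np.mean((u[:, 0] ** 2 - 1.0) * (u[:, 1] ** 2 - 1.0))
rhs = 2.0 * c**2
```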
Next we derive an expression for the corresponding magnetization response
function. To this purpose we expand the spins for small fields as
\begin{equation}
S_i=s_i+hr_i
\end{equation}
where $s_i$ are the unperturbed spins and we neglect the
${{\mathcal O}}(N^{-1/2})$ non-Gaussian corrections as irrelevant, as in the
unmagnetized case. The Lagrange multiplier is similarly written as
$z+h\Delta z$. By collecting the ${{\mathcal O}}(h)$ terms from the equation
of motion for the $S_i$, we then find by analogy with\eq{dot_ri} that
the $r_i$ obey
\begin{equation}
\partial_t{r_i}=-\Omega_{ij}r_j-zr_i-\Delta z\, s_i+\delta(t-t_{\rm w})
\label{dot_ri_magn}
\end{equation}
Here the last term represents a field impulse at time $t_{\rm w}$, uniform
over all sites as is appropriate for the field conjugate to the
global magnetization. Since $r_i(t)=0$ before the field is applied,
i.e.\ for $t<t_{\rm w}$, this impulse perturbation gives
$r_i(t_{\rm w}^+)=1$. Starting from this value we can then
integrate\eq{dot_ri_magn} forward in time to get
\begin{equation}
r_i(t)=\sum_j \left[R_{ij}(t,t_{\rm w})-\int_{t_{\rm w}}^t dt'\, R_{ij}(t,t')\Delta z(t')s_j(t')\right]
\label{ri_sol_magn}
\end{equation}
The condition we need to impose in order to get $\Delta z$ is
that the spherical constraint $(1/N)\sum_i(s_i+hr_i)^2=1$ needs to be
satisfied to linear order in $h$, giving the condition
$({1}/{N})\sum_i\langle r_is_i\rangle=0$. Inserting\eq{ri_sol_magn} into
this yields
\begin{equation}
R_\mathbf{0}(t,t_{\rm w})m(t)=\int_{t_{\rm w}}^{t}dt'\,K(t,t')\Delta z(t')
\end{equation}
where we have used the definition\eq{p_def} of $K(t,t')$. Applying the
inverse kernel $L$ gives
\begin{equation}
\Delta z(t)=\int_{t_{\rm w}}^t dt'\,L(t,t')m(t')R_\mathbf{0}(t',t_{\rm w})
\end{equation}
Note that this result vanishes when $m=0$, consistent with the fact
that we did not need to consider perturbations of $z$ in our
calculation of the magnetization response in the unmagnetized case. We
can now write down the magnetization response function, which we
denote by $R_{\rm m}(t,t_{\rm w})$. It is given by $R_{\rm m}=({1}/{N})\sum_i\langle r_i
\rangle$; inserting the result for $\Delta z$ into\eq{ri_sol_magn}, we
get explicitly
\begin{eqnarray}
\fl R_{\rm m}(t,t_{\rm w}) &=& R_\mathbf{0}(t,t_{\rm w})-\int
dt''\,dt'\,R_\mathbf{0}(t,t'')m(t'')L(t'',t')m(t')R_\mathbf{0}(t',t_{\rm w})
\\
\fl &=& \int dt'\, [\delta(t-t')-M(t,t')m(t')]R_\mathbf{0}(t',t_{\rm w})
\label{response_magn}
\end{eqnarray}
This completes the derivation of the general expressions for the
magnetization correlation and response. To make progress, we need to
find the kernel $M(t,t')$. This requires $L(t,t')$, which is the
inverse of $K(t,t')=\int(dq)\, R_\mathbf{q}(t,t')C_\mathbf{q}(t,t')$. As is clear from the
discussion in Sec.~\ref{sec:setup}, the correlator occurring here is
the {\em unsubtracted} one. Because $C_\mathbf{0}(t,t')=Nm(t)m(t')$ is
${{\mathcal O}}(N)$, the $\mathbf{q}=\mathbf{0}$ term needs to be treated separately in
spite of its vanishing weight $1/N$. It makes a contribution
$(1/N)R_\mathbf{0}(t,t')Nm(t)m(t')= m^2(t)\theta(t-t')$, where we have
simplified using\eq{R_simplification_magn}. We can thus write
\begin{equation}
K(t,t') = \tilde{K}(t,t') + m^2(t)\theta(t-t')
\label{K_decomp}
\end{equation}
with the $\mathbf{q}\neq \mathbf{0}$ contribution
\begin{equation}
\tilde{K}(t,t') = \int(dq)\, R_\mathbf{q}(t,t')\tilde{C}_\mathbf{q}(t,t')
\end{equation}
We have switched to the connected correlator here; this makes no
difference for $\mathbf{q}\neq 0$, but allows us to include $\mathbf{q}=\mathbf{0}$ in the
integral because $\tilde{C}_\mathbf{0}={{\mathcal O}}(1)$. To say more, we will need to
distinguish between dimensions $d>4$ and $d<4$.
\subsection{Magnetization correlation and response: Non-equilibrium, $d>4$}
The scaling of the connected part $\tilde{K}$ of $K$ can be analysed exactly
as in the case of zero magnetization: it consists of the same equilibrium
time dependence modulated by an aging function,
$\tilde{K}(t,t')=K\eql(t-t')\sc{\tilde{K}}(t/t')$. The aging part can be worked out
exactly as in\eq{p_scaling} with the only difference arising from the
changed asymptotic behaviour of $g(t)\sim t^{\alpha}$ rather than
$t^{-\kappa}$. For $\sc{\tilde{K}}$ we can therefore use
directly\eq{p_scaling_fn_y_integral}, with $\kappa$ replaced by $-\alpha$:
\begin{eqnarray}
\sc{\tilde{K}}(x)&=&
\frac{d-2}{2}(x-1)^{(d-2)/2}x^{-\alpha} \int_0^1 dy\,
y^{\alpha}(x-y)^{-d/2}
\label{sc_Kc}
\end{eqnarray}
In $d>4$, where $\alpha=1$, the integral can be computed explicitly
to give
\begin{equation}
\sc{\tilde{K}}(x)=1-\frac{d-2}{d-4}\left(\frac{x-1}{x}\right)+
\frac{2}{d-4}\left(\frac{x-1}{x}\right)^{\frac{d-2}{2}}
\end{equation}
We will see below that the precise behaviour of this function does not
affect the results. Briefly though, for $x-1\ll 1$ the second term on the
r.h.s.\ is leading so that $\sc{\tilde{K}}$ decreases linearly with $x-1$,
while for large $x$ one finds by expanding in $1/x$ that $\sc{\tilde{K}} \approx
[(d-2)/4]x^{-2}$.
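Both the closed form and the quoted asymptote can be confirmed against the integral representation\eq{sc_Kc} with $\alpha=1$ (an illustrative numerical check):

```python
# Compare the d > 4 closed form of the aging factor with its integral
# representation, and check the large-x asymptote [(d-2)/4] x^{-2}.
from scipy.integrate import quad

def K_aging_int(x, d):
    val, _ = quad(lambda y: y * (x - y) ** (-d / 2.0), 0.0, 1.0)
    return 0.5 * (d - 2.0) * (x - 1.0) ** ((d - 2.0) / 2.0) / x * val

def K_aging_closed(x, d):
    u = (x - 1.0) / x
    return 1.0 - (d - 2.0) / (d - 4.0) * u + 2.0 / (d - 4.0) * u ** ((d - 2.0) / 2.0)

pairs = [(K_aging_int(x, d), K_aging_closed(x, d))
         for d in (5.0, 6.0) for x in (1.5, 2.0, 5.0)]
asym = K_aging_closed(1e3, 6.0) / (((6.0 - 2.0) / 4.0) * 1e-6)  # expect ~ 1
```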
To find the inverse kernel $L$, consider how $K(t,t')$ varies with
$t$. The first part in\eq{K_decomp} starts off close to unity and
decays on ${{\mathcal O}}(1)$ timescales $t-t'$ as $K\eql(t-t')$, with a
modulation by the aging factor $\sc{\tilde{K}}(t/t')$ once $t-t'$ becomes
comparable to $t'$. The second part, on the other hand, is small
initially but only decays on aging timescales. Comparing
$K\eql(t-t')\sim (t-t')^{(2-d)/2}$ to $m^2(t)\sim 1/t$, this second
term therefore eventually becomes dominant, for $t-t'\sim
t^{2/(d-2)}$.
This discussion suggests that the inverse kernel $L$ should likewise be
composed of two parts with distinct long-time behaviour. We therefore write
\begin{equation}
L = \tilde{L}+L_\zv
\end{equation}
where $\tilde{L}$ is the inverse of $\tilde{K}$ and $L_\zv$ arises from the
zero-wavevector part of $K$. The continuous part of $L$ is then
similarly decomposed as $L^{(2)} = \tilde{L}^{(2)} + \Ltwo_\zv$.
We proceed by writing the defining equations for $L^{(2)}$ and
$\tilde{L}^{(2)}$. The full inverse $L$ is defined by\eq{pinv_def} and as before has
singular parts which are related to the behaviour of $K(t,t')$ for
$t\to t'$. One can show directly from the definition of $K$, and
exactly as in the unmagnetized case, that
\begin{equation}
K(t'^+,t') = 1, \qquad
\left.\partial_{t'} K(t,t')\right|_{t\to t'^+} = 2T_{\rm c}
\label{K_initial}
\end{equation}
The decomposition\eq{pinv_structure} of the inverse kernel $L$
therefore also remains valid, and from\eq{pinv_def} and\eq{K_decomp}
we get the following equation for its continuous part $L^{(2)}$
\begin{equation}
\fl \int dt'\,[\tilde{K}(t,t')+m^2(t)]L^{(2)}(t',t_{\rm w})
=2T_{\rm c}\tilde{K}(t,t_{\rm w})+2T_{\rm c} m^2(t) -\partial_{t_{\rm w}} \tilde{K}(t,t_{\rm w})
\label{N.6}
\end{equation}
This is the analogue of the relation\eq{ptwo_def} for the case $m=0$. We can
argue similarly for $\tilde{L}$, which is defined by
\begin{equation}
\int dt'\,\tilde{K}(t,t')\tilde{L}(t',t_{\rm w})=\delta(t-t_{\rm w})
\label{defi}
\end{equation}
From\eq{K_decomp} and\eq{K_initial},
\begin{equation}
\tilde{K}(t'^+,t') = 1-m^2(t'), \qquad
\left.\partial_{t'}\tilde{K}(t,t')\right|_{t\to t'^+} = 2T_{\rm c}
\end{equation}
and this initial behaviour implies that $\tilde{L}$ can be decomposed as
\begin{equation}
\tilde{L}(t',t_{\rm w})=\frac{\delta'(t'-t_{\rm w})}{1-m^2(t_{\rm w})}
+\frac{2T_{\rm c}}{[1-m^2(t_{\rm w})]^2}\delta(t'-t_{\rm w})-\tilde{L}^{(2)}(t',t_{\rm w})
\label{lc}
\end{equation}
Inserting into the definition\eq{defi} gives for the continuous part
$\tilde{L}^{(2)}$
\begin{equation}
\fl \int dt'\,\tilde{K}(t,t')\tilde{L}^{(2)}(t',t_{\rm w})=
\frac{2T_{\rm c}}{[1-m^2(t_{\rm w})]^2} \tilde{K}(t,t_{\rm w})
-\frac{1}{1-m^2(t_{\rm w})}\partial_{t_{\rm w}}\tilde{K}(t,t_{\rm w})
\label{lc2}
\end{equation}
Now for long times, we can approximate $1-m^2(t_{\rm w})\approx
1$. Then\eq{lc2} becomes identical to the relation\eq{ptwo_def} which
determined $L^{(2)}$ in the unmagnetized case. Since $\tilde{K}$ has the same
scaling form as $K$ in\eq{ptwo_def}, except for the replacement of
$\sc{K}$ by $\sc{\tilde{K}}$, the solution for $\tilde{L}^{(2)}$ can be found in
exactly the same way. In particular, the scaling functions describing
the aging corrections in $\tilde{K}$ and $\tilde{L}^{(2)}$ are again identical, and
we can write directly
\begin{equation}
\tilde{L}^{(2)}(t,t')=L^{(2)}\eql(t-t')\sc{\tilde{K}}(t/t')
\end{equation}
as the long-time form of $\tilde{L}^{(2)}$. Here $L^{(2)}\eql$ is the same function
as in the unmagnetized case, with Laplace transform\eq{ptwo_eql}.
It now remains to find $\Ltwo_\zv$. Subtracting\eq{lc2} from\eq{N.6} gives
\begin{eqnarray}
\fl\lefteqn{
\int_{t_{\rm w}}^t dt'\,\tilde{K}(t,t')\Ltwo_\zv(t',t_{\rm w})+m^2(t)\int_{t_{\rm w}}^t
dt'\,\Ltwo_\zv(t',t_{\rm w}) =
2T_{\rm c} \frac{-2m^2(t_{\rm w})+m^4(t_{\rm w})}{[1-m^2(t_{\rm w})]^2} \tilde{K}(t,t_{\rm w}) }
\nonumber
\\
& &
{}+{}m^2(t)\left[2T_{\rm c}-\int_{t_{\rm w}}^t dt'\,\tilde{L}^{(2)}(t',t_{\rm w})\right]
+\frac{m^2(t_{\rm w})}{1-m^2(t_{\rm w})}\partial_{t_{\rm w}}\tilde{K}(t,t_{\rm w})
\label{deltal2}
\end{eqnarray}
To make progress we assume that, by analogy with the
structure\eq{K_decomp} of $K$, $\Ltwo_\zv$ varies only on aging
timescales; we will find this confirmed {\em a posteriori}. We can
then concentrate on the aging regime $t-t_{\rm w}\sim t_{\rm w}\gg 1$. In this
regime, the integral $\int_{t_{\rm w}}^t dt'\,\tilde{L}^{(2)}(t',t_{\rm w}) = \int_{t_{\rm w}}^t dt'\,
L^{(2)}\eql(t'-t_{\rm w}) \sc{\tilde{K}}(t'/t_{\rm w})$ on the r.h.s.\ of\eq{deltal2}
becomes to leading order $\int_{t_{\rm w}}^\infty
dt'\,L^{(2)}\eql(t'-t_{\rm w})=\hat{L}^{(2)}\eql(0)$; the aging correction
$\sc{\tilde{K}}$ is unimportant because the integral converges for
$t'-t_{\rm w}={{\mathcal O}}(1)$, and the upper integration limit can be set to
infinity for the same reason. From\eq{ptwo_eql},
$\hat{L}^{(2)}\eql(0)=2T_{\rm c}-1/\hat{K}\eql(0)$, and so the square bracket on
the r.h.s.\ of\eq{deltal2} becomes simply the constant
$1/\hat{K}\eql(0)$. In the first and third term on the
other hand, $\tilde{K}$ and $\partial_{t_{\rm w}}\tilde{K}$ scale as
$(t-t_{\rm w})^{-(d-2)/2}$ and $(t-t_{\rm w})^{-d/2}$, respectively, and are
negligible compared to the second term in the aging regime.
Disregarding these subleading terms, equation\eq{deltal2}
is transformed to
\begin{equation}
\int_{t_{\rm w}}^t
dt'\,\tilde{K}(t,t')\Ltwo_\zv(t',t_{\rm w})+m^2(t)\int_{t_{\rm w}}^t dt'\,\Ltwo_\zv(t',t_{\rm w})
=\frac{m^2(t)}{\hat{K}\eql(0)}
\end{equation}
In the first integral, if as assumed $\Ltwo_\zv$ varies only on aging
timescales, the integral is dominated by the region $t'\approx t$
because of the factor $K\eql(t-t')$ in $\tilde{K}(t,t')$. This factor again
makes the integral convergent within a region $t-t'={{\mathcal O}}(1)$, and so
we can approximate it by $\hat{K}\eql(0)\Ltwo_\zv(t,t_{\rm w})$. This gives
\begin{equation}
\int_{t_{\rm w}}^{t} dt'\,\Ltwo_\zv(t',t_{\rm w}) =
\frac{1}{\hat{K}\eql(0)}-
\frac{\hat{K}\eql(0)}{m^2(t)}\Ltwo_\zv(t,t_{\rm w})
\label{dl2}
\end{equation}
and specifically in the limit $t/t_{\rm w}\to 1$
\begin{equation}
\Ltwo_\zv(t_{\rm w},t_{\rm w})=\hat{K}\eql^{-2}(0)m^2(t_{\rm w})
\label{dLtwo_initial}
\end{equation}
To find $\Ltwo_\zv$ for $t/t_{\rm w}>1$, we use that $m^2(t)=\mu_d/t$ for large
$t$ and in $d>4$, see\eqq{g_scaling_magn}{m_in_terms_of_g},
with some dimension-dependent coefficient $\mu_d$.
Taking a derivative of\eq{dl2} with respect to $t$ then gives
\begin{equation}
\Ltwo_\zv(t,t_{\rm w})\left[1+\mu_d^{-1}\hat{K}\eql(0)\right]=
-t {\mu_d}^{-1}\hat{K}\eql(0)\partial_t\Ltwo_\zv(t,t_{\rm w})
\end{equation}
This implies $\partial(\ln\Ltwo_\zv)/\partial(\ln t) =
-[1+\mu_d/\hat{K}\eql(0)]$ and so together with\eq{dLtwo_initial}
\begin{equation}
\Ltwo_\zv(t,t_{\rm w}) = \hat{K}\eql^{-2}(0)
\frac{\mu_d}{t_{\rm w}}\left(\frac{t}{t_{\rm w}}\right)^{-[1+\mu_d/\hat{K}\eql(0)]}
\label{incremento}
\end{equation}
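One can verify numerically that this power law indeed solves\eq{dl2} with $m^2(t)=\mu_d/t$ (a sketch treating $\mu_d$ and $\hat{K}\eql(0)$ as free positive parameters):

```python
# Residual of eq. (dl2): \int_{tw}^{t} dt' L(t') - [1/K - (K t/mu) L(t)]
# should vanish for L(t) = K^{-2} (mu/tw) (t/tw)^{-(1 + mu/K)}.
from scipy.integrate import quad

mu, K, tw = 0.7, 1.3, 10.0   # arbitrary positive parameters

def L0(t):
    return K**-2 * (mu / tw) * (t / tw) ** (-(1.0 + mu / K))

def residual(t):
    integ, _ = quad(L0, tw, t)
    return integ - (1.0 / K - (K * t / mu) * L0(t))

res = max(abs(residual(t)) for t in (15.0, 30.0, 100.0))
```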
This can be simplified because in fact $\mu_d=\hat{K}\eql(0)$. To see
this, note from\eq{gs2} that $\hat{g}(s)/m^2(0)=1/[\hat{K}\eql(0)s^2]$
for small $s$; here we have used that $\hat{K}\eql(0)$ is finite for
$d>4$. Transforming back to the time domain gives
$g(t)/m^2(0)=m^{-2}(t)=t/\hat{K}\eql(0)$ for large $t$, i.e.\
$m^2(t)=\hat{K}\eql(0)/t$.
Thus\eq{incremento} simplifies to
\begin{equation}
\Ltwo_\zv(t,t_{\rm w}) = \frac{t_{\rm w}}{\mu_d t^2}
\label{dLtwo}
\end{equation}
This result is consistent with our assumption that $\Ltwo_\zv$
only varies on aging timescales. Overall, we have thus found for
$L^{(2)}(t,t_{\rm w})$ the following long-time form
\begin{equation}
L^{(2)}(t,t_{\rm w})=L^{(2)}\eql(t-t_{\rm w})\sc{\tilde{K}}\left({t}/{t_{\rm w}}\right)
+\frac{t_{\rm w}}{\mu_d t^2}
\label{Ltwo_high_d}
\end{equation}
Of course, for time differences $t-t_{\rm w}={{\mathcal O}}(1)$, $\Ltwo_\zv(t,t_{\rm w})$
will deviate from the form\eq{dLtwo} derived for the aging
regime. However, one can verify by expanding both sides of\eq{deltal2} to
${{\mathcal O}}(t-t_{\rm w})$ that, even for $t=t_{\rm w}$, $\Ltwo_\zv$ remains of order $1/t_{\rm w}$,
so that these small deviations are always negligible compared to the
first term in\eq{Ltwo_high_d}.
We can now proceed to find $M$ as defined in\eq{Mdef}, and from there
finally the explicit forms of the magnetization correlation and
response functions. Using the general structure\eq{pinv_structure} of
$L$ we have
\begin{equation}
m^{-1}(t)M(t,t_{\rm w})=\delta(t-t_{\rm w})+2T_{\rm c}-\int_{t_{\rm w}}^{t}dt'\, L^{(2)}(t',t_{\rm w})
\label{M_general}
\end{equation}
The integral can be separated into the contributions from the two
parts of $L^{(2)}$ as given in\eq{Ltwo_high_d}. The first part yields an
integral that converges for $t'-t_{\rm w}={{\mathcal O}}(1)$; for $t-t_{\rm w}\gg 1$, it
therefore gives $\hat{L}^{(2)}\eql(0)=2T_{\rm c}-1/\hat{K}\eql(0)=2T_{\rm c}-1/\mu_d$ to leading
order. The second part, on the other hand, yields explicitly
$\int_{t_{\rm w}}^{t}dt' (t_{\rm w}/\mu_d t'^2) = \mu_d^{-1}(1-t_{\rm w}/t)$, so that
\begin{equation}
\fl m^{-1}(t)M(t,t_{\rm w})=\delta(t-t_{\rm w})+\frac{1}{\mu_d} -
\frac{1}{\mu_d}\left(1-\frac{t_{\rm w}}{t}\right) = \delta(t-t_{\rm w}) +
\frac{1}{\mu_d}\frac{t_{\rm w}}{t}
\label{M}
\end{equation}
This result applies for $t-t_{\rm w}\gg 1$. For $t-t_{\rm w}={{\mathcal O}}(1)$ it is not
accurate; e.g.\ at $t=t_{\rm w}$ the continuous part of $m^{-1}(t)M(t,t_{\rm w})$
is, from\eq{M_general}, $2T_{\rm c}$ rather than $1/\mu_d$. However, this
deviation over an ${{\mathcal O}}(1)$ time-range only gives subleading corrections in
the integrals over $M$ that we need below, as indeed does the
$\delta(t-t_{\rm w})$-term.
This can be seen in\eq{DM_part} and\eq{response_magn}, where only the
combination $\delta(t-t_{\rm w})-M(t,t_{\rm w})m(t_{\rm w})$ occurs; the latter can be written as
\begin{equation}
\fl\delta(t-t_{\rm w})-M(t,t_{\rm w})m(t_{\rm w}) = \delta(t-t_{\rm w})[1-m(t)m(t_{\rm w})] -
\frac{t_{\rm w}^{1/2}}{t^{3/2}} = \delta(t-t_{\rm w}) - \frac{t_{\rm w}^{1/2}}{t^{3/2}}
\label{1_minus_Mm}
\end{equation}
The second form holds for long times, where the $m(t)m(t_{\rm w})$ term that
originated from the $\delta$-term in\eq{M} is negligible.
We can now work out the expression\eq{DM} for the full connected
correlation function. One can show that the contribution $C_{\rm m}^{(2)}$
involving $\tilde{C}\Cc$ is negligible in the long-time limit.
In the expression\eq{c_m1_unscaled} for the remainder $C_{\rm m}^{(1)}$,
let us denote the second and third terms by $I_2$ and $I_3$.
We need $\tilde{C}_\mathbf{0}$, which
from\eq{Cc_0} reads $\tilde{C}_\mathbf{0}(t',t_{\rm w}')=T_{\rm c} t_{\rm w}'^{3/2}t'^{-1/2}$ for $t'>t_{\rm w}'$;
because $\tilde{C}_\mathbf{0}$ is symmetric in time this then implies
$\tilde{C}_\mathbf{0}(t',t_{\rm w}')=T_{\rm c} t'^{3/2}t_{\rm w}'^{-1/2}$ for $t'<t_{\rm w}'$. Paying
due attention to this temporal ordering of the arguments of $\tilde{C}_\mathbf{0}$ and
using\eq{1_minus_Mm}, one finds
\begin{eqnarray}
\fl I_2(t,t_{\rm w}) &=& T_{\rm c}\left[\int_0^{t_{\rm w}}dt'\, \frac{t'^{1/2}}{t^{3/2}}
\frac{t'^{3/2}}{t_{\rm w}^{1/2}}+\int_{t_{\rm w}}^{t}dt'\,
\frac{t'^{1/2}}{t^{3/2}}\frac{t_{\rm w}^{3/2}}{t'^{1/2}}
\right] + T_{\rm c} \int_0^{t_{\rm w}}dt'\, \frac{t'^{1/2}}{t_{\rm w}^{3/2}}
\frac{t'^{3/2}}{t^{1/2}}
\\
\fl &=& T_{\rm c}\left(\frac{t_{\rm w}}{t}\right)^{3/2}\left(\frac{4}{3}t-\frac{2}{3}t_{\rm w}\right)
\end{eqnarray}
Similarly, the double integral in\eq{c_m1_unscaled} can be evaluated as
\begin{eqnarray}
\fl I_3(t,t_{\rm w})&=&\int_0^t dt'\int_0^{t_{\rm w}}\!\!dt_{\rm w}'\,
\frac{t'^{1/2}}{t^{3/2}}\frac{t_{\rm w}'^{1/2}}{t_{\rm w}^{3/2}}\,\tilde{C}_\mathbf{0}(t_{\rm w}',t')
\\
\fl &=&T_{\rm c}\int_{t_{\rm w}}^t dt'\int_0^{t_{\rm w}}\!\!dt_{\rm w}'\, \frac{t'^{1/2}}{t^{3/2}}
\frac{t_{\rm w}'^{1/2}}{t_{\rm w}^{3/2}}\frac{t_{\rm w}'^{3/2}}{t'^{1/2}}
+2T_{\rm c}\int_0^{t_{\rm w}}\!\! dt'\int_0^{t'}\!\!dt_{\rm w}'\, \frac{t'^{1/2}}{t^{3/2}}
\frac{t_{\rm w}'^{1/2}}{t_{\rm w}^{3/2}}\frac{t_{\rm w}'^{3/2}}{t'^{1/2}}
\\
\fl &=&\frac{T_{\rm c}}{3}\left(\frac{t_{\rm w}}{t}\right)^{3/2}(t-t_{\rm w})
+ \frac{2T_{\rm c}}{3}\int_0^{t_{\rm w}} dt'\frac{t'^3}{t^{3/2}t_{\rm w}^{3/2}}
\\
\fl &=&\frac{T_{\rm c}}{6}\left(\frac{t_{\rm w}}{t}\right)^{3/2}(2t-t_{\rm w})
\end{eqnarray}
Our final long-time result for the connected magnetization correlator
including non-Gaussian corrections is then
\begin{eqnarray}
\fl C_{\rm m}(t,t_{\rm w})&=&
T_{\rm c}\frac{t_{\rm w}^{3/2}}{t^{1/2}}-I_2(t,t_{\rm w})+I_3(t,t_{\rm w})\\
\fl &=&
T_{\rm c}\left(\frac{t_{\rm w}}{t}\right)^{3/2}\left(t-\frac{4}{3}t+\frac{2}{3}t_{\rm w}
+\frac{1}{3}t-\frac{1}{6}t_{\rm w}\right)
\ = \ \frac{T_{\rm c}t_{\rm w}}{2}
\left(\frac{t_{\rm w}}{t}\right)^{3/2}
\label{Cm_dgt4}
\end{eqnarray}
For the conjugate magnetization response function we get
from\eq{response_magn} and\eq{M}, together with
$R_\mathbf{0}(t,t_{\rm w})=(t_{\rm w}/t)^{1/2}$,
\begin{eqnarray}
\fl R_{\rm m}(t,t_{\rm w}) &=&
R_\mathbf{0}(t,t_{\rm w})-\int_{t_{\rm w}}^t dt'\,
\frac{t'^{1/2}}{t^{3/2}}R_\mathbf{0}(t',t_{\rm w})
\\
\fl &=&\left(\frac{t_{\rm w}}{t}\right)^{1/2}-\frac{t_{\rm w}^{1/2}}{t^{3/2}}(t-t_{\rm w})
=\left(\frac{t_{\rm w}}{t}\right)^{3/2}
\label{Rm_dgt4}
\end{eqnarray}
The FDR (with $C_{\rm m}'=\partial_{t_{\rm w}}C_{\rm m}$) follows finally as
\begin{equation}
X_{\rm m}(t,t_{\rm w})=\frac{T_{\rm c}R_{\rm m}(t,t_{\rm w})}{C_{\rm m}'(t,t_{\rm w})}=\frac{4}{5}
\label{X_m_d_gt_4}
\end{equation}
Interestingly, this agrees exactly with the result for the Ising
ferromagnet in the limit of large dimensionality
$d$~\cite{GarSolPagRit05}. As in the unmagnetized case, we see
therefore that it is the {\em global} observables in the spherical
model, which are strongly affected by non-Gaussian fluctuations, that
behave like their analogues in short-range models. In fact, even the
expressions for the correlation and response functions we find here
are identical to those in the large-$d$ Ising case, implying
``universality'' at a more detailed level than one might have expected.
It is worth noting
that the effect of the non-Gaussian corrections is very large:
compared to the Gaussian result
$\tilde{C}_\mathbf{0}(t,t_{\rm w})\sim t_{\rm w}(t_{\rm w}/t)^{1/2}$, the corrections increase the
decay exponent
to $C_{\rm m}(t,t_{\rm w})\sim t_{\rm w}(t_{\rm w}/t)^{3/2}$, so that $C_{\rm m}/\tilde{C}_\mathbf{0} \sim t_{\rm w}/t
\ll 1 $ for $t\gg t_{\rm w}$: there is an almost perfect cancellation of
Gaussian terms and non-Gaussian corrections for well-separated
times. Similar comments apply to the response. The
overall effect of the non-Gaussian corrections on the FD relation is
to leave this as a straight line (since $X$ is constant), but to
increase the slope from 2/3 to 4/5.
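As a quick cross-check of this value, one can differentiate the long-time result\eq{Cm_dgt4} numerically with respect to $t_{\rm w}$ and compare with\eq{Rm_dgt4}. The short Python sketch below is purely illustrative and not part of the derivation; the values of $T_{\rm c}$, $t$ and $t_{\rm w}$ are arbitrary:

```python
# Long-time results quoted in the text for d > 4 (T_c, t, t_w arbitrary here)
Tc, t, tw = 1.0, 100.0, 10.0

def C_m(tw):
    # C_m(t, t_w) = (T_c t_w / 2) (t_w / t)^(3/2)
    return 0.5 * Tc * tw * (tw / t) ** 1.5

def R_m(tw):
    # R_m(t, t_w) = (t_w / t)^(3/2)
    return (tw / t) ** 1.5

# FDR X_m = T_c R_m / (dC_m/dt_w); t_w-derivative by central difference
h = 1e-5
dCm = (C_m(tw + h) - C_m(tw - h)) / (2.0 * h)
X_m = Tc * R_m(tw) / dCm
print(X_m)  # ~ 0.8 = 4/5
```

The ratio is independent of the chosen $T_{\rm c}$, $t$ and $t_{\rm w}$, as it must be for a constant FDR.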
\subsection{Magnetization correlation and response: Non-equilibrium, $d<4$}
We now consider systems below the upper critical dimension, $d<4$;
here there are no predictions yet from other models for the
non-equilibrium FD behaviour following a quench of a magnetized
initial state to $T_{\rm c}$ (but see Sec.~\ref{sec:conclusion}). As in the
case of the energy correlations for
unmagnetized initial states, leading order cancellation effects have
to be taken care of for these low values of $d$.
We again need to know the scaling of $K$ to determine $L^{(2)}$; from
this we then get $M$ and finally the correlation and response
functions. The long-time scaling of the connected part of $K$ is
$\tilde{K}(t,t')=K\eql(t-t')\sc{\tilde{K}}({t}/{t'})$ as before, with $\sc{\tilde{K}}$
given by \eq{sc_Kc} but now $\alpha=(d-2)/2$. The contribution to $K$
from $\mathbf{q}=\mathbf{0}$, given by the second term in\eq{K_decomp}, is negligible
relative to $\tilde{K}(t,t')$ for $t-t'={{\mathcal O}}(1)$. However, for $t-t'\gg
1$, it becomes comparable and has the same overall time scaling as
$\tilde{K}(t,t')$ in the aging regime. To see this explicitly, recall
from\eqq{g_scaling_magn}{m_in_terms_of_g} that
the square of the magnetization decays asymptotically as $m^2(t)=\mu_d
t^{-\alpha}=\mu_d t^{(2-d)/2}$ with some constant $\mu_d$. Similarly,
the equilibrium part of $\tilde{K}$ behaves as $K\eql(t-t') =
k_d(t-t')^{(2-d)/2}$ for $t-t'\gg 1$ (see after\eq{p_eql}). This gives
\begin{equation}
\frac{m^2(t)}{K\eql(t-t')}
=
\frac{\mu_d(t-t')^{(d-2)/2}}{k_dt^{(d-2)/2}}=
\frac{\mu_d}{k_d}\left(\frac{t/t'-1}{t/t'}\right)^{(d-2)/2}
\label{scaling}
\end{equation}
in the aging regime where both $t'$ and $t-t'$ are large.
(For $t-t'={{\mathcal O}}(1)$ this
expression is not accurate but this is irrelevant because there the
term $m^2(t)$ is subleading compared to $\tilde{K}(t,t')$ anyway.)
We thus have the overall scaling of $K$
\begin{equation}
\fl K(t,t')=K\eql(t-t')\sc{K}(t/t'),
\quad
\sc{K}(x) = \sc{\tilde{K}}(x)+\frac{\mu_d}{k_d}
\left(\frac{x-1}{x}\right)^{(d-2)/2}
\label{Ktot}
\end{equation}
To simplify the scaling function, we integrate the
expression\eq{sc_Kc} by parts and rescale $y\to xy$, bearing in mind
that $\alpha=(d-2)/2$:
\begin{eqnarray}
\sc{\tilde{K}}(x)&=&\frac{d-2}{2} \left(\frac{x-1}{x}\right)^{(d-2)/2}
\left[\frac{2}{d-2}(x-1)^{(2-d)/2}\right.
\nonumber\\
& & {}-{}\left.\int_0^{1/x}dy\,y^{(d-4)/2}(1-y)^{(2-d)/2}\right]
\end{eqnarray}
For $x\to 1$ the integral becomes a Beta-function which evaluates to
$\Gamma((d-2)/2) \Gamma((4-d)/2)$, and extracting this term gives
\begin{eqnarray}
\fl \sc{\tilde{K}}(x)&=& x^{(2-d)/2}-\Gamma\left({\textstyle\frac{d}{2}}\right)
\Gamma\left({\textstyle\frac{4-d}{2}}\right) \left(\frac{x-1}{x}\right)^{(d-2)/2}
\nonumber\\
\fl & & {}+{} \frac{d-2}{2}
\left(\frac{x-1}{x}\right)^{(d-2)/2}\int_{1/x}^1 dy\,
y^{(d-4)/2}(1-y)^{(2-d)/2}
\end{eqnarray}
The second term has the same dependence on $x$ as the additional
contribution from zero wavevector in\eq{Ktot}. In fact, it turns out
that these two terms cancel exactly: from our
definitions of $k_d$ and $\mu_d$ we have $K\eql(t)=k_dt^{(2-d)/2}$ for
large $t$, while $m^{-2}(t)=g(t)/m^2(0)=\mu_d^{-1}t^{(d-2)/2}$. Laplace
transforming gives $\hat{K}\eql(s)=k_d \Gamma((4-d)/2) s^{(d-4)/2}$
and $\hat{g}(s)/m^2(0) = \mu_d^{-1}\Gamma(d/2)s^{-d/2}$ to leading
order for small $s$. But then\eq{gs2} shows that
$\mu_d^{-1}\Gamma(d/2)=[k_d\Gamma((4-d)/2)]^{-1}$, or $\mu_d/k_d =
\Gamma(d/2)\Gamma((4-d)/2)$, proving the cancellation anticipated
above. Overall, we thus have for the scaling function of $K$
\begin{equation}
\fl\sc{K}(x)=x^{(2-d)/2}+\frac{d-2}{2}\left(\frac{x-1}{x}\right)^{(d-2)/2}
\int_{1/x}^1 dy\,y^{(d-4)/2}(1-y)^{(2-d)/2}
\label{Ktot_final}
\end{equation}
Expanding for $x\approx 1$, one sees that the leading order variation
is linear in $x-1$:
\begin{eqnarray}
\fl \sc{K}(x)&\approx&1+\frac{2-d}{2}(x-1)+
\frac{d-2}{2}\left(\frac{x-1}{x}\right)^{(d-2)/2}
\int_{1/x}^1dy\, (1-y)^{(2-d)/2}
\nonumber\\
\fl &\approx&1+\frac{1}{2}\frac{(d-2)^2}{4-d}(x-1)
\end{eqnarray}
Note that the prefactor is positive, so that $\sc{K}(x)$ {\em
increases} with $x$ in the current scenario. This trend persists for
all $x$, not just $x\approx 1$, and the scaling function monotonically
approaches a limit value for $x\to\infty$. The latter
follows from\eq{Ktot} as $\mu_d/k_d=\Gamma(d/2)\Gamma((4-d)/2)$,
since the connected contribution $\sc{\tilde{K}}(x)$ decays to zero for
$x\to\infty$.
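These properties of $\sc{K}$ are easy to verify numerically. In the purely illustrative Python sketch below we take $d=3$, where the $y$-integral in\eq{Ktot_final} has the closed form $2\arcsin\sqrt{1-1/x}$, and check the monotonic approach to the limit value $\Gamma(3/2)\Gamma(1/2)=\pi/2$:

```python
import math

def K_scaling_d3(x):
    # Scaling function of K for d = 3, from the text; the y-integral
    # int_{1/x}^1 dy y^(-1/2) (1-y)^(-1/2) equals 2 arcsin(sqrt(1 - 1/x))
    integral = 2.0 * math.asin(math.sqrt(1.0 - 1.0 / x))
    return x ** -0.5 + 0.5 * math.sqrt((x - 1.0) / x) * integral

limit = math.gamma(1.5) * math.gamma(0.5)  # Gamma(d/2)Gamma((4-d)/2) = pi/2
vals = [K_scaling_d3(x) for x in (2.0, 10.0, 100.0, 1e8)]
print(vals, limit)  # increases monotonically towards pi/2 ~ 1.5708
```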
With the single overall scaling\eq{Ktot} of $K$ we no longer need to
decompose $L^{(2)}$ into $\tilde{L}^{(2)}$ and $\Ltwo_\zv$ as we did in $d>4$;
instead the long-time behaviour of $L^{(2)}$ will have the same
structure as in the unmagnetized case,
\begin{equation}
L^{(2)}(t,t_{\rm w})=L^{(2)}\eql(t-t_{\rm w})\sc{L}(t/t_{\rm w})
\label{Ltot}
\end{equation}
One can then follow exactly the discussion in Sec.~\ref{sec:KandL} to
arrive at the integral equation\eq{pinv_cond}
for $\sc{L}(x)$. Solving the latter
looks a rather formidable task, given that $\sc{K}$ itself has the
relatively complicated form\eq{Ktot_final}. Remarkably, however,
the solution can be found in closed form and is given simply by
\begin{equation}
\sc{L}(x)=\frac{2}{4-d}\,x^{(2-d)/2}+\frac{2-d}{4-d}\,x^{-d/2}
\label{solution_L}
\end{equation}
We were led to this result initially by a systematic series expansion
of both $\sc{K}(x)$ and $\sc{L}(x)$ in terms of $(x-1)/x$. We do not
detail this here, but verify in~\ref{sec:L_solution} by
direct calculation that\eq{solution_L} does indeed
solve\eq{pinv_cond}.
We next calculate the kernel $M(t,t_{\rm w})$. One inserts the
scaling\eq{Ltot} into\eq{M_general}, subtracting off and adding back
on the contribution from the equilibrium part of $L^{(2)}$:
\begin{eqnarray}
m^{-1}(t)M(t,t_{\rm w})&=&\delta(t-t_{\rm w})+2T_{\rm c}-\int_{t_{\rm w}}^t
dt'\,L^{(2)}\eql(t'-t_{\rm w})
\nonumber\\
& &{}+{}\int_{t_{\rm w}}^{t} dt'\,L^{(2)}\eql(t'-t_{\rm w})
\left[1-\sc{L}\left(\frac{t'}{t_{\rm w}}\right)\right]
\label{Mt}
\end{eqnarray}
This is done to account for the leading order cancellation of the
second and third terms:
\begin{equation}
\fl 2T_{\rm c}-\int_{t_{\rm w}}^t\!\!
dt'\,L^{(2)}\eql(t'-t_{\rm w}) =
2T_{\rm c}-\hat{L}^{(2)}\eql(0)+\int_t^\infty \!\!dt'\,L^{(2)}\eql(t'-t_{\rm w}) =
\frac{2\lambda_d}{4-d}
(t-t_{\rm w})^{({d-4})/{2}}
\end{equation}
where in the last step we have restricted ourselves to the aging
regime $t-t_{\rm w}\gg 1$ and used the asymptotic
behaviour\eq{ptwo_eql_asymptotics}, $L^{(2)}\eql(t'-t_{\rm w})=\lambda_d
(t'-t_{\rm w})^{(d-6)/2}$. In the remaining integral in\eq{Mt}, the factor
$1-\sc{L}(t'/t_{\rm w})\sim (t'-t_{\rm w})/t_{\rm w}$ ensures that the integration no
longer has its weight concentrated near $t'=t_{\rm w}$; put differently,
after scaling the integration variable by $t_{\rm w}$ to $y=t'/t_{\rm w}$ the
integral is convergent at the lower end. We thus obtain in the aging
regime
\begin{eqnarray}
\fl m^{-1}(t)M(t,t_{\rm w})&=&\delta(t-t_{\rm w}) + \lambda_dt_{\rm w}^{(d-4)/2}
\left\{\frac{2}{4-d}(x-1)^{(d-4)/2} \right.
\nonumber\\
& & {}+{} \left. \int_1^x dy\,(y-1)^{(d-6)/2}[1-\sc{L}(y)]\right\}
\label{initial_M}
\end{eqnarray}
Inserting\eq{solution_L} for $\sc{L}$, the integral in\eq{initial_M}
can be done explicitly to give
$[2/(4-d)]x^{(2-d)/2}(x-1)^{(d-4)/2}$ for the sum of the terms in
curly brackets. A little care is needed
here because the separate integrals over $(y-1)^{(d-6)/2}$ and
$(y-1)^{(d-6)/2}\sc{L}(y)$ are divergent at the lower end.
One can avoid this by analytical continuation from $d>4$, where these
divergences are absent, or by integrating from $1+\epsilon$ to $x$ and
taking $\epsilon\to 0$ at the
end.
The $\delta$-term is again subleading for long times in the relevant
combination\eq{1_minus_Mm}, which we can write as
\begin{eqnarray}
\fl \delta(t-t_{\rm w})-M(t,t_{\rm w})m(t_{\rm w}) &=& \delta(t-t_{\rm w}) -
\frac{2}{4-d}\frac{\lambda_d\mu_dt_{\rm w}^{(d-4)/2}}{t_{\rm w}^{(d-2)/4}t^{(d-2)/4}}x^{(2-d)/2}(x-1)^{(d-4)/2}
\nonumber\\
\fl &=& \delta(t-t_{\rm w})-\frac{1}{t}\sc{M}(x)
\label{combination}
\end{eqnarray}
with
\begin{equation}
\sc{M}(x) = \frac{d-2}{2}\, x^{(2-d)/4}\left(\frac{x-1}{x}\right)^{(d-4)/2}
\label{M_scaling_final}
\end{equation}
Here we have eliminated the constants $\lambda_d$ and
$\mu_d$ using the following argument. From\eq{gs2},
$\hat{g}(s)/m^2(0)=\hat{L}\eql(s)/s^2$ for small
$s$, i.e.\ $\hat{L}\eql(s)=s^2\hat{g}(s)/m^2(0)$. In the time domain
this gives at long times
$L^{(2)}\eql(t)=-L\eql(t)=-(\partial_t)^2m^{-2}(t)=-\mu_d^{-1}(\partial_t)^2
t^{(d-2)/2} = -\mu_d^{-1}[(d-2)/2][(d-4)/2]t^{(d-6)/2}$, so that
$\lambda_d\mu_d=(d-2)(4-d)/4$.
Reassuringly, for $d\to 4$ the result\eq{M_scaling_final} tends to
$\sc{M}(x)=x^{-1/2}$,
matching smoothly onto the result\eq{1_minus_Mm} we found earlier in $d>4$.
For $d<4$, the scaling function diverges as $x\to 1$. Since
from\eq{M_general} the continuous part of $M(t,t_{\rm w})$ is exactly given
by $2T_{\rm c} m(t)$ for equal times, this indicates that the above aging
regime expression must break down eventually when $t-t_{\rm w}$ becomes
small, as expected. In the integrals where $M$ appears below such
effects can be neglected, however, because they only give subleading
corrections.
With the scaling of $\sc{M}$ in hand we can now compute the connected
correlation function $C_{\rm m}(t,t_{\rm w})$ including non-Gaussian corrections,
given by\eq{DM}. After rescaling the integration variables
the first part\eq{DM_part} can be written as
\begin{eqnarray}
\fl C_{\rm m}^{(1)}(t,t_{\rm w})&=& \int_0^x dy\,
[\delta(y-x)-x^{-1}\sc{M}(x/y)]
\nonumber\\
\fl & & \times \int_0^1 dy_{\rm w}\,
[\delta(y_{\rm w}-1)-\sc{M}(1/y_{\rm w})] \tilde{C}_\mathbf{0}(t_{\rm w} y,t_{\rm w} y_{\rm w})
\label{Ione}
\end{eqnarray}
where the Gaussian magnetization correlator is
\begin{equation}
\fl \tilde{C}_\mathbf{0}(t_{\rm w} y,t_{\rm w} y_{\rm w})
= ({4T_{\rm c}t_{\rm w}}/{d})\min\left\{y(y/y_{\rm w})^{(d-2)/4},y_{\rm w}(y_{\rm w}/y)^{(d-2)/4}\right\}
\end{equation}
from\eq{Cc_0}.
At first sight\eq{Ione} suggests, e.g.\ from the $\delta(y-x)$ term,
an asymptotic decay of $C_{\rm m}^{(1)}\sim t_{\rm w} x^{(2-d)/4}$
for large $x$. But this would not match continuously with the
result\eq{Cm_dgt4} we found for $d>4$. A cancellation of such leading
order terms must therefore occur for $x\to\infty$. To show this
explicitly, we verify from\eq{M_scaling_final} the identity
\begin{eqnarray}
\fl\lefteqn{\int_0^x dy\,
\left[\delta(y-x)-x^{-1}\sc{M}\left({x}/{y}\right)\right]y^{-(d-2)/4}}
\nonumber\\
&=& x^{(2-d)/4}-\frac{d-2}{2x}\int_0^x
dy\,(x/y)^{(2-d)/4}(1-y/x)^{(d-4)/2}y^{(2-d)/4}
\\
&=& x^{(2-d)/4}-\frac{d-2}{2}x^{(2-d)/4}\int_0^1 dz\,(1-z)^{(d-4)/2} = 0
\label{identity}
\end{eqnarray}
Multiplying this by $({4T_{\rm c}t_{\rm w}}/d) \int_0^1 dy_{\rm w}\, [\delta(y_{\rm w}-1)-
\sc{M}(1/y_{\rm w})]y_{\rm w}^{(d+2)/4}$ and subtracting from\eq{Ione} exactly
cancels all contributions in the range $y>y_{\rm w}$, giving
\begin{eqnarray}
\fl C_{\rm m}^{(1)}(t,t_{\rm w})&=&\frac{4T_{\rm c} t_{\rm w}}{dx} \int_0^1 dy_{\rm w}\,
\int_0^{y_{\rm w}} dy\, \sc{M}\left({x}/{y}\right)
\left[\delta(y_{\rm w}-1)-\sc{M}\left({1}/{y_{\rm w}}\right)\right]
\nonumber\\
\fl & & \times \left[y_{\rm w}\left(\frac{y_{\rm w}}{y}\right)^{(d-2)/4}
- y\left(\frac{y}{y_{\rm w}}\right)^{(d-2)/4}\right]
\label{C_m1}
\end{eqnarray}
This shows that $C_{\rm m}^{(1)}/t_{\rm w}$ is a scaling function of
$x=t/t_{\rm w}$. Its full $x$-dependence has to be found numerically
from\eq{C_m1} or via series
expansion~\cite{Annibale_thesis},
but we can obtain the large-$x$ behaviour that is required for the
asymptotic FDR $X^\infty$ in closed form. For $x\rightarrow\infty$
one can replace the function
$\sc{M}(x/y)$ with its asymptotic form
$[(d-2)/2](x/y)^{(2-d)/4}$ from\eq{M_scaling_final} to get
\begin{eqnarray}
\fl \frac{C_{\rm m}^{(1)}(t,t_{\rm w})}{T_{\rm c}t_{\rm w}}
&=&\frac{2(d-2)}{d} x^{-(d+2)/4}
\int_0^1 dy_{\rm w}\, \int_0^{y_{\rm w}} dy\,y^{(d-2)/4}
\left[\delta(y_{\rm w}-1)-\sc{M}\left(\frac{1}{y_{\rm w}}\right)\right]
\nonumber\\
\fl& & \times \left[y_{\rm w}\left(\frac{y_{\rm w}}{y}\right)^{(d-2)/4}
-y\left(\frac{y}{y_{\rm w}}\right)^{(d-2)/4}\right]
\\
\fl&=&\frac{2(d-2)}{d+2} x^{-(d+2)/4}
\left[1-\frac{d-2}{2} \frac{\Gamma\left(\frac{d+4}{2}\right)
\Gamma\left(\frac{d-2}{2}\right)}{\Gamma(d+1)}\right]
\label{C_m1inf}
\end{eqnarray}
This exhibits the expected leading order cancellation for large
$x$, which gives an additional factor of $1/x$ compared to the naive
result $x^{(2-d)/4}$.
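The Gamma-function ratio in the square bracket arises from the Beta-function integral $\int_0^1 dy_{\rm w}\,y_{\rm w}^{(d+2)/2}(1-y_{\rm w})^{(d-4)/2}$. As a purely illustrative numerical cross-check (Python, at the arbitrary choice $d=3$), this integral can be evaluated by quadrature, after the substitution $y_{\rm w}=1-s^2$ that removes the integrable endpoint singularity, and compared with the closed form:

```python
import math

d = 3.0  # illustrative choice, 2 < d < 4

# Gamma-function term in the square bracket of the large-x result:
#   ((d-2)/2) Gamma((d+4)/2) Gamma((d-2)/2) / Gamma(d+1)
term_gamma = (0.5 * (d - 2.0) * math.gamma((d + 4.0) / 2.0)
              * math.gamma((d - 2.0) / 2.0) / math.gamma(d + 1.0))

# Same term as ((d-2)/2) int_0^1 dy y^{(d+2)/2} (1-y)^{(d-4)/2};
# substituting y = 1 - s^2 gives the regular integrand 2 (1-s^2)^{(d+2)/2} s^{d-3}
N = 20000
term_quad = 0.5 * (d - 2.0) * sum(
    2.0 * (1.0 - s * s) ** (0.5 * (d + 2.0)) * s ** (d - 3.0)
    for s in ((i + 0.5) / N for i in range(N))) / N
print(term_gamma, term_quad)  # both ~ 5*pi/32 ~ 0.4909 at d = 3
```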
To complete the calculation of the correlation function we need to
evaluate $C_{\rm m}^{(2)}$ from\eq{c_m2}, which cannot be neglected for
$d<4$. This requires the long-time behaviour of $\tilde{C}\Cc(t,t_{\rm w})$, which is
given by\eq{sc_CC_ord} and\eq{sc_CC_dis} for $t>t_{\rm w}$ and
$t<t_{\rm w}$, respectively, as for the unmagnetized case. The only
modification arises from the different behaviour of $g(t)$. One thus finds
\begin{equation}
\fl \frac{\tilde{C}\Cc(t,t_{\rm w})}{\tilde{C}\Cc(t,t)}={\mathcal{G}}(t/t_{\rm w}),
\quad
{\mathcal{G}}(x)=\left\{
\begin{array}{ll}
{\displaystyle \frac{\int dw\, w^{(d-6)/2} \sc{C}^2(w) e^{-2(x-1)w}}
{x\int dw\, w^{(d-6)/2} \sc{C}^2(w)}} & \mbox{for $x\geq 1$} \\
x^{(d-6)/2}{\mathcal{G}}(1/x) & \mbox{for $x\leq 1$}
\end{array}
\right.
\label{CCt_scaling}
\end{equation}
The scaling of the equal-time value of $\tilde{C}\Cc$
is, from\eq{sc_CC_ord}, $\tilde{C}\Cc(t,t)=\tilde\gamma_d t^{(4-d)/2}$ with
$\tilde\gamma_d = \sigma_d T_{\rm c}^2\int dw\, w^{(d-6)/2} \sc{C}^2(w)$;
compare\eq{CCd}. Inserting this into\eq{c_m2} gives
\begin{eqnarray}
\fl\lefteqn {C_{\rm m}^{(2)}(t,t_{\rm w})=
\frac{1}{2} \int\, dt'\,dt_{\rm w}'\, M(t,t')M(t_{\rm w},t_{\rm w}') \tilde{C}\Cc(t',t_{\rm w}')}
\\
\fl &=&\frac{1}{2} \int\, dt'\,dt_{\rm w}'\, \frac{1}{m(t')m(t_{\rm w}')}
\frac{1}{t}\sc{M}(t/t')\frac{1}{t_{\rm w}}\sc{M}(t_{\rm w}/t_{\rm w}')
\tilde{C}\Cc(t',t') \mathcal{G}(t'/t_{\rm w}')
\\
\fl &=&\frac{t_{\rm w} \tilde\gamma_d}{2\mu_d x}
\int_0^1 dy_{\rm w}\,y_{\rm w}^{(d-2)/4}\sc{M}(1/y_{\rm w})
\int_0^{x}dy\, \sc{M}(x/y) y^{(6-d)/4} {\mathcal{G}}(y/y_{\rm w})
\\
\fl &=&\frac{t_{\rm w}}{x} \int_0^1 dy_{\rm w}\, y_{\rm w}^2 \sc{M}(1/y_{\rm w})
\int_0^{x/y_{\rm w}}du\, \sc{M}(x/uy_{\rm w})u^{(6-d)/4}
\frac{\tilde\gamma_d}{2\mu_d}{\mathcal{G}}(u)
\label{general_Cm2}
\end{eqnarray}
In the second line we have used\eq{combination}
to write $M(t,t')=m(t)\delta(t-t')-t^{-1}m^{-1}(t')\sc{M}(t/t')$ up to
negligible corrections, and then
immediately discarded the $\delta$-function contributions, which give
subleading corrections.
Let us denote by $U$ the value of the $u$-integral
in\eq{general_Cm2}. Since $\mathcal{G}(u)$ is defined separately for
$u>1$ and $u<1$ in\eq{CCt_scaling}, one splits the integral accordingly:
\begin{eqnarray}
\fl \frac{2\mu_d U}{\tilde\gamma_d} &=&
\int_0^1\!\! du\, \sc{M}(x/uy_{\rm w})u^{(d-6)/4} {\mathcal{G}}(1/u)+
\int_1^{x/y_{\rm w}}\!\! du\, \sc{M}(x/uy_{\rm w})u^{(6-d)/4} {\mathcal{G}}(u)
\\
\fl &=& \int_1^\infty\!\! du\, \sc{M}(xu/y_{\rm w})u^{-(d+2)/4} {\mathcal{G}}(u)+
\int_1^{x/y_{\rm w}}\!\! du\, \sc{M}(x/uy_{\rm w})u^{(6-d)/4} {\mathcal{G}}(u)
\label{decomposition}
\end{eqnarray}
We now need ${\mathcal{G}}(u)$ for $u>1$. The denominator
in\eq{CCt_scaling} is $\tilde\gamma_d/(\sigma_d T_{\rm c}^2)$,
and bearing in mind the
definition\eq{C_scaling_magn} of $\sc{C}$ with $\alpha=(d-2)/2$ gives
\begin{eqnarray}
\fl \frac{\tilde\gamma_d}{\sigma_d T_{\rm c}^2} u{\mathcal{G}}(u) &=&
\int dw\, w^{(d-6)/2}\sc{C}^2(w)e^{-2(u-1)w}
\\
\fl &=& 4\int dw\, w^{(d-2)/2}\int_0^1 dy\,\int_0^1 dy'\,
(yy')^{(d-2)/2}e^{-2w(1-y-y'+u)}
\\
\fl &=&2^{(4-d)/2}\Gamma\left({\textstyle\frac{d}{2}}\right)
\int_0^1 dy\,\int_0^1 dy'\, (yy')^{(d-2)/2}(1-y-y'+u)^{-d/2}
\label{3_dim_int}
\end{eqnarray}
We now insert this into\eq{decomposition} and simplify the
numerical prefactors by using $\mu_d=\lambda_d^{-1}(d-2)(4-d)/4$ and
the explicit expression\eq{Ld} for $\lambda_d$ to get
\begin{eqnarray}
\fl
U&=& \frac{T_{\rm c}}{\Gamma(\frac{d-2}{2}) \Gamma(\frac{4-d}{2})}
\left[\int_1^{\infty}du\, \sc{M}(xu/y_{\rm w})u^{-(d+6)/4} \right. \nonumber\\
\fl & &\times \int_0^1 dy \int_0^1 dy'\,(yy')^{(d-2)/2}(1-y-y'+u)^{-d/2}
\nonumber\\
\fl &&{}+{}\left. \int_1^{x/y_{\rm w}}du\, \sc{M}(x/uy_{\rm w})u^{(2-d)/4}
\int_0^1 dy \int_0^1 dy'\, \ldots\right]
\label{exact_I}
\end{eqnarray}
This is the $u$-integral from\eq{general_Cm2} and so overall we have
a four-dimensional integral over $y_{\rm w}, u, y, y'$ for $C_{\rm m}^{(2)}$. In
general this cannot be evaluated in closed form; a series expansion is
given
in~\cite{Annibale_thesis}.
The large-$x$ behaviour, which will give
us the asymptotic FDR, is easier to extract. In the first $u$-integral
of\eq{exact_I} one can directly use the asymptotic form of
$\sc{M}(xu/y_{\rm w})$. One can show that for large $x$ the same replacement
can be made in the second integral, and the upper integration limit sent to
infinity thereafter.
This gives for the large-$x$ behaviour of $U$
\begin{equation}
U= \frac{d-2}{2}\frac{T_{\rm c} V_d}{\Gamma(\frac{d-2}{2}) \Gamma(\frac{4-d}{2})}
x^{(2-d)/4} y_{\rm w}^{(d-2)/4}
\label{U_large_x}
\end{equation}
where $V_d$ is a $d$-dependent numerical constant given by
\begin{equation}
\fl V_d=\int_1^{\infty}\!\!du\, (u^{-(d+2)/2}+1) \int_0^1\! dy \int_0^1\! dy'\,
(yy')^{(d-2)/2}(1-y-y'+u)^{-d/2}
\label{integral_sums}
\end{equation}
Inserting\eq{U_large_x} into\eq{general_Cm2}, the remaining
$y_{\rm w}$-integral can be done explicitly to give
\begin{eqnarray}
\fl C_{\rm m}^{(2)}(t,t_{\rm w})=\left(\frac{d-2}{2}\right)^2
\frac{\Gamma(\frac{d+4}{2}) V_d}{\Gamma(d+1)\Gamma(\frac{4-d}{2})}
T_{\rm c}t_{\rm w} x^{-(d+2)/4}
\label{C_m2inf}
\end{eqnarray}
As anticipated this has the same scaling as the first
contribution\eq{C_m1inf} to the correlation function, so that overall
for large $x$
\begin{eqnarray}
\fl C_{\rm m}(t,t_{\rm w}) = C_{\rm m}^{(1)}+C_{\rm m}^{(2)}&=&\frac{d-2}{2}T_{\rm c}t_{\rm w}
x^{-(d+2)/4}\left[\frac{4}{d+2}
\left(1-\frac{d-2}{2}\frac{\Gamma(\frac{d+4}{2})\Gamma(\frac{d-2}{2})}
{\Gamma(d+1)}\right)\right.
\nonumber\\
& &{}+{}\left.\frac{d-2}{2}\frac{\Gamma(\frac{d+4}{2})V_d}
{\Gamma(d+1)\Gamma(\frac{4-d}{2})}\right]
\label{Cm_tot_infty}
\end{eqnarray}
The magnetization {\em response} function is rather easier to
find, by using\eq{R_simplification_magn} and\eq{combination}
in\eq{response_magn}
and rescaling the integration variable to $y=t'/t_{\rm w}$ as usual:
\begin{equation}
R_{\rm m}(t,t_{\rm w}) = \int_1^x dy\,\left[\delta(y-x)-x^{-1}\sc{M}(x/y)\right]y^{-(d-2)/4}
\end{equation}
The structure of this is rather similar to $C_{\rm m}^{(1)}$, and
by subtracting the vanishing term\eq{identity} one again gets a
significant cancellation,
\begin{equation}
R_{\rm m}(t,t_{\rm w}) = \frac{1}{x}\int_0^1 dy\,\sc{M}(x/y)y^{(2-d)/4}
\end{equation}
Inserting the explicit form\eq{M_scaling_final} of $\sc{M}$ one then
gets simply
\begin{equation}
R_{\rm m}(t,t_{\rm w}) = x^{(2-d)/4}\left[1-\left(1-\frac{1}{x}\right)^{(d-2)/2}\right]
\label{R_finite}
\end{equation}
which for $x\rightarrow\infty$ behaves as
\begin{equation}
R_{\rm m}(t,t_{\rm w}) = \frac{d-2}{2}x^{-(2+d)/4}
\label{R_infty}
\end{equation}
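The equivalence of\eq{R_finite} with the integral representation above can be checked numerically. The Python sketch below is purely illustrative, at the arbitrary values $d=3$ and $x=5$; note that the integrand is regular on $0<y<1$ because $y<x$ there:

```python
d, x = 3.0, 5.0  # arbitrary illustrative values with 2 < d < 4 and x > 1

def M_scaling(z):
    # scaling function of the kernel M from the text:
    #   ((d-2)/2) z^{(2-d)/4} ((z-1)/z)^{(d-4)/2}
    return (0.5 * (d - 2.0) * z ** (0.25 * (2.0 - d))
            * ((z - 1.0) / z) ** (0.5 * (d - 4.0)))

# R_m = (1/x) int_0^1 dy M(x/y) y^{(2-d)/4}, by midpoint quadrature
N = 20000
R_quad = sum(M_scaling(x / y) * y ** (0.25 * (2.0 - d))
             for y in ((i + 0.5) / N for i in range(N))) / (N * x)

# closed form: R_m = x^{(2-d)/4} [1 - (1 - 1/x)^{(d-2)/2}]
R_closed = x ** (0.25 * (2.0 - d)) * (1.0 - (1.0 - 1.0 / x) ** (0.5 * (d - 2.0)))
print(R_quad, R_closed)  # agree
```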
With the expressions\eq{Cm_tot_infty} and\eq{R_infty} for
correlation and response in the limit of long, well-separated times we
can now finally compute the asymptotic FDR defined by
\begin{eqnarray}
\fl X_{\rm m}^\infty &=&
\lim_{t\gg t_{\rm w}\gg 1}\frac{T_{\rm c}R_{\rm m}(t,t_{\rm w})}{C_{\rm m}'(t,t_{\rm w})}
=\frac{4}{d+6}\left[\frac{4}{d+2}
\left(1-\frac{d-2}{2}\frac{\Gamma(\frac{d+4}{2})\Gamma(\frac{d-2}{2})}
{\Gamma(d+1)}\right)\right.
\nonumber\\
\fl & &{}+{}\left.\frac{d-2}{2}\frac{\Gamma(\frac{d+4}{2}) V_d}
{\Gamma(d+1)\Gamma(\frac{4-d}{2})}\right]^{-1}
\label{Xz}
\end{eqnarray}
where $V_d$ is given by\eq{integral_sums} and the prefactor $4/(d+6)$ accounts
for the fact that $C_{\rm m}\sim t_{\rm w} x^{-(2+d)/4} \sim t_{\rm w}^{(d+6)/4}$ and hence
$C_{\rm m}'(t,t_{\rm w})=[(d+6)/4t_{\rm w}]C_{\rm m}(t,t_{\rm w})$.
Before exploring this result, let us comment briefly on the limit as
$d\to 4$, which should make contact with our results in the
previous subsection. The contribution from $C_{\rm m}^{(2)}$, which appears
in the second line of\eq{Cm_tot_infty} and\eq{Xz}, vanishes linearly
in $4-d$ in this limit because of the factor $\Gamma^{-1}((4-d)/2)$;
$V_d$ stays finite as we show below. This is consistent with the fact
that in dimension $d>4$ this term does not contribute. In the limit
$d\to 4$ one has, from\eq{Cm_tot_infty}, $C_{\rm m}=T_{\rm c}t_{\rm w} x^{-3/2}/2$ for
large $x$ which matches precisely\eq{Cm_dgt4} for
$d>4$. Similarly, the large-$x$
magnetization response\eq{R_finite} for $d\to 4$ is $R_{\rm m}=x^{-3/2}$ in
agreement with\eq{Rm_dgt4}.
We now look in more detail at the $d$-dependence of the asymptotic FDR\eq{Xz}
for the magnetization. Expanding in $\epsilon=4-d$ one has
\begin{eqnarray}
\fl X_{\rm m}^\infty
&=&\left(\frac{2}{5}+\frac{\epsilon}{25}+{{\mathcal O}}(\epsilon^2)\right)
\left[\left(\frac{2}{3}+\frac{\epsilon}{9}\right)
\left(\frac{3}{4}-\frac{\epsilon}{6}\right)+
\frac{\epsilon}{8} V_4+{{\mathcal O}}(\epsilon^2)\right]^{-1}
\nonumber\\
&=&\frac{4}{5}+\left(\frac{28}{225}-\frac{V_4}{5}\right)\epsilon
+{{\mathcal O}}(\epsilon^2)
\end{eqnarray}
where $V_4$ is the limiting value of $V_d$ for $d\to 4$, which can be
worked out explicitly as
\begin{equation}
V_4=-\frac{2}{3}\ln 2+\frac{11}{12}+\frac{\pi^2}{24}
\end{equation}
so that
\begin{equation}
\fl X_{\rm m}^\infty=\frac{4}{5}+\left(\frac{2\ln 2}{15}
-\frac{53}{900}-\frac{\pi^2}{120}\right)\epsilon+{{\mathcal O}}(\epsilon^2)
\label{X_m_epsilon_expansion}
\end{equation}
It is remarkable that in a system as simple as the spherical model,
where standard critical exponents are rational functions of the
dimension $d$, the magnetization FDR for magnetized initial states is
very much more complicated, and irrational already {\em to first order} in
$\epsilon$.
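A simple arithmetic consistency check (illustrative Python, not part of the derivation) confirms that the coefficient in\eq{X_m_epsilon_expansion} equals $28/225-V_4/5$ with the value of $V_4$ given above, and that it is negative, so that $X_{\rm m}^\infty$ decreases as $d$ drops below 4:

```python
import math

ln2 = math.log(2.0)
# V_4 as quoted in the text
V4 = -2.0 * ln2 / 3.0 + 11.0 / 12.0 + math.pi ** 2 / 24.0

# first-order coefficient, two ways: 28/225 - V_4/5 and the closed form
coeff_from_V4 = 28.0 / 225.0 - V4 / 5.0
coeff_closed = 2.0 * ln2 / 15.0 - 53.0 / 900.0 - math.pi ** 2 / 120.0
print(V4, coeff_from_V4, coeff_closed)  # coefficient ~ -0.0487
```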
\begin{figure}
\begin{center}
\centerline{\includegraphics[width=10.0cm,clip=true]{Xinf_new.eps}}
\caption{Asymptotic FDR $X^\infty_{\rm m}$ for the magnetization, for
critical coarsening with nonzero initial magnetization. Solid line:
Full theory~(\protect\ref{Xz}) including non-Gaussian corrections;
dotted lines indicate the first-order expansions near $d=2$ and
4. Dashed line: Gaussian theory~(\protect\ref{Xqneq}).
\label{fig:X_magnetized}
}
\end{center}
\end{figure}
In the opposite limit $d\to 2$, the $u$-integral in the
definition\eq{integral_sums} of $V_d$ diverges at the upper end;
dropping all non-divergent corrections gives the leading divergence as
\begin{equation}
\fl V_d\approx\int_1^{\infty}du\,\int_0^1 dy \int_0^1 dy'\,(1-y-y'+u)^{-d/2}
\approx \int_1^{\infty}du\,u^{-d/2} = \frac{2}{d-2}
\label{limit_d2}
\end{equation}
This divergence balances the vanishing prefactor
$(d-2)/2$ in\eq{Cm_tot_infty} and\eq{Xz} whereas the contribution in
the round brackets coming from $C_{\rm m}^{(1)}$ vanishes linearly with
$d-2$. Consequently the asymptotic behaviour of the correlation
function is, for $d$ close to $2$, $C_{\rm m}=[(d-2)/2]T_{\rm c}t_{\rm w} x^{-1}$ and
the FDR becomes $X_{\rm m}^\infty=4/(d+6)=1/2$. One can also obtain
the leading order correction in $\epsilon'=d-2$, which is given by
\begin{equation}
X_{\rm m}^\infty=
\frac{1}{2}+\left(\frac{1}{16}+\frac{\pi^2}{48}\right)\epsilon'
+{{\mathcal O}}(\epsilon'^2)
\label{X_m_epsilon_prime_expansion}
\end{equation}
The only subtlety here is working out the subleading term of
$V_d$. Setting $V_d=2/\epsilon'+a_0+\ldots$, $a_0$ can be obtained as
the limit for $d\to 2$ of\eq{integral_sums} with $u^{-d/2}$ subtracted
from the integrand. The limit can be taken in the integrand itself for
finite $u$, giving a convergent integral with value
$3/2-\pi^2/12$. But one has to account separately for the large-$u$
tail $u^{-d/2}[-1+\int dy\,dy' (yy')^{(d-2)/2}]$ which integrates to
$(-1+4/d^2)[2/(d-2)]\to -2$ for $d\to 2$, giving $a_0=-1/2-\pi^2/12$
overall.
In summary, the asymptotic magnetization FDR $X_{\rm m}^\infty$ decays
from $4/5$ in $d=4$ to $1/2$ in $d=2$. Fig.~\ref{fig:X_magnetized} also
shows numerical values for intermediate $d$. $X_{\rm m}^\infty$ is
larger than the
FDR\eq{Xqneq} from the Gaussian theory except in the limit $d\to2$; it also
remains larger throughout than the FDR\eq{X_baseline} for
unmagnetized initial states, shown in Fig.~\ref{fig:X_normal_and_E}.
\begin{figure}
\begin{center}
\centerline{\includegraphics[width=9.0cm,clip=true]{d234_new.eps}}
\caption{Normalized magnetization FD plot for $d=2, 3, 4$, showing
normalized susceptibility $\tilde{\chi}_{\rm m}$ versus normalized
correlation $\tilde{C}_{\rm m}$ in the limit of long times. For $d=4$ the plot
is a straight line with (negative) slope $4/5$ as expected. Increasing
deviations from this appear as $d$ decreases towards $2$.
\label{fig:mFD_plot}
}
\end{center}
\end{figure}
We next turn to the shape of the FD plot for the magnetization. As
explained in the introduction, this
is obtained by plotting the normalized susceptibility
$\tilde{\chi}_{\rm m}(t,t_{\rm w}) = T_{\rm c}\chi_{\rm m}(t,t_{\rm w})/C_{\rm m}(t,t)$ versus
$\tilde{C}_{\rm m}(t,t_{\rm w})=C_{\rm m}(t,t_{\rm w})/C_{\rm m}(t,t)$.
In the limit $d\to 4$ the FD
plot must be a straight line with slope $X_{\rm m} = 4/5$, by
continuity with the results for $d>4$. Numerical evaluation
(see Fig.~\ref{fig:mFD_plot}%
) shows that, as the dimensionality decreases,
the FD plots deviate progressively from this straight line. The most
extreme case is the limit $d\to 2$, where analytical forms can be
found.
To find the correlation function for $d\to 2$, it is useful to note that
the scaling function $\sc{M}(x)$ becomes equal to $\delta(x-1)$ in the
limit. Formally, one sees easily from\eq{M_scaling_final} that for
any smooth bounded function $f(x)$ and fixed $c>1$,
$\int_1^c dx\,
\sc{M}(x) f(x) \to f(1)$ because the divergence of $\sc{M}(x)$ at
$x=1$ becomes non-integrable in $d=2$. We now exploit this to
simplify $U$ from\eq{exact_I}.
In the first term in square brackets, the argument $xu/y_{\rm w}$ of
$\sc{M}$ is always larger than $x/y_{\rm w}$ and hence than $1$ (except at
the irrelevant boundary $y_{\rm w}=x$ of the $y_{\rm w}$-integral); in the limit
$d\to 2$, this contribution therefore vanishes. In the second term,
replacing $\sc{M}(x/uy_{\rm w})$ by $\delta(x/uy_{\rm w} - 1)$ and expanding the
prefactor to leading order in $(d-2)/2$ gives
\begin{equation}
\frac{U}{T_{\rm c}} = \frac{d-2}{2} \frac{x}{y_{\rm w}}
\int_0^1 dy \int_0^1 dy'\,(1-y-y'+x/y_{\rm w})^{-1}
\label{d2_expansion}
\end{equation}
Performing the integrals over $y$ and $y'$, one has
\begin{equation}
\frac{U}{T_{\rm c}}=\frac{d-2}{2} \frac{x}{y_{\rm w}}
\left[\frac{x}{y_{\rm w}}\ln\left(1-\frac{y_{\rm w}^2}{x^2}\right)+
\ln\left(\frac{x+y_{\rm w}}{x-y_{\rm w}}\right)\right]
\end{equation}
We can now use this limit form of $U$ to get the contribution
$C_{\rm m}^{(2)}$ to the correlation function; recall that $U$ was defined
as the $u$-integral in\eq{general_Cm2}. In the remaining
$y_{\rm w}$-integral in this equation one can again replace $\sc{M}(1/y_{\rm w})$ by
$\delta(1/y_{\rm w} -1)$ so that only $y_{\rm w}=1$ contributes, giving
\begin{equation}
C_{\rm m} = C_{\rm m}^{(2)}=\frac{d-2}{2}T_{\rm c} t_{\rm w} \left[x\ln\left(1-\frac{1}{x^2}\right)+
\ln\left(\frac{x+1}{x-1}\right)\right]
\label{Cm_d2}
\end{equation}
One can show that the other contribution $C_{\rm m}^{(1)}$ to the
magnetization correlator, given by\eq{C_m1}, vanishes as $\sim(d-2)^2$ for
$d\to 2$ so that to leading order $C_{\rm m}=C_{\rm m}^{(2)}$ as anticipated
in writing\eq{Cm_d2}. The corresponding response function
is found by expanding\eq{R_finite} to leading order in $d-2$:
\begin{equation}
R_{\rm m} = \frac{d-2}{2}\ln\left(\frac{x}{x-1}\right)
\end{equation}
To get the FDR it only remains to work out the $t_{\rm w}$-derivative of
$C_{\rm m}$:
\begin{eqnarray}
\fl C_{\rm m}' \ \ \equiv \ \ \partial_{t_{\rm w}}C_{\rm m}&=&
\frac{d-2}{2}\,T_{\rm c}\ln\left(\frac{x+1}{x-1}\right)
\end{eqnarray}
We therefore get, for the full dependence of the limiting FDR for
$d\to 2$ on scaled time $x=t/t_{\rm w}$
\begin{equation}
X_{\rm m}(x)=\frac{T_{\rm c}R_{\rm m}}{C_{\rm m}'} = \ln\left(\frac{x}{x-1}\right)
\left[\ln\!\left(\frac{x+1}{x-1}\right)\right]^{-1}
\end{equation}
For $x\to\infty$ this gives $X_{\rm m}^\infty=1/2$ consistent with the
discussion above. In the limit $x\to 1$ of comparable times, on the
other hand, $X_{\rm m}(x)$ approaches 1, logarithmically slowly; the FD
plot for $d\to 2$ therefore starts off with a pseudo-equilibrium
slope. Interestingly, this implies that the trends of the slope with
$d$ are different at the two ends of the plot: for well-separated
times ($x\to\infty$), the slope {\em decreases} from $4/5$ to $1/2$ as
$d$ decreases from 4 to 2; for comparable times ($x\to 1$) it {\em
increases} from $4/5$ to $1$.
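Both limits are easy to confirm numerically from the expression for $X_{\rm m}(x)$ above; the following sketch (pure Python, purely illustrative) checks $X_{\rm m}\to 1/2$ for $x\to\infty$ and the logarithmically slow approach to 1 for $x\to 1$:

```python
import math

def X_m(x):
    # limiting d->2 FDR: X_m(x) = ln(x/(x-1)) / ln((x+1)/(x-1))
    return math.log(x / (x - 1.0)) / math.log((x + 1.0) / (x - 1.0))

# well-separated times: X_m -> 1/2
assert abs(X_m(1e6) - 0.5) < 1e-3
# comparable times: X_m -> 1, but only logarithmically slowly
assert X_m(1.0 + 1e-6) > 0.9
assert X_m(1.0 + 1e-3) < X_m(1.0 + 1e-6)
```

Even at $x-1=10^{-6}$ the ratio is still noticeably below 1, illustrating how slow the approach to the pseudo-equilibrium slope is.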
To get the FD plot itself we need the susceptibility, which is found
by integration of the response\eq{R_finite} as
\begin{eqnarray}
\chi_{\rm m}(t,t_{\rm w})&=&\int_{t_{\rm w}}^t dt'\, R_{\rm m}(t/t')=t\int_{1/x}^1dz\, R_{\rm m}(1/z)
\\
&=& t\int_{1/x}^1 dz\,z^{(d-2)/4}
\left[1-\left(1-z\right)^{(d-2)/2}\right]
\label{chi_fin}
\end{eqnarray}
Expanding to linear order in $d-2$ and integrating gives
\begin{equation}
\chi_{\rm m}(t,t_{\rm w})
=\frac{d-2}{2}\,t\,\frac{x-1}{x}\left[1-\ln\left(\frac{x-1}{x}\right)\right]
\end{equation}
The normalized susceptibility $\tilde\chi_{\rm m}$ is obtained by
dividing by $C_{\rm m}(t,t)/T_{\rm c}$, which from\eq{Cm_d2} for $x\to 1$ equals
$2\ln 2[(d-2)/2]\,t$.
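The linear-order expansion can be checked against a direct quadrature of\eq{chi_fin}. The sketch below (midpoint rule, pure Python; the value $d-2=0.01$ is chosen arbitrarily for illustration) verifies agreement to within the expected ${\mathcal O}(d-2)$ relative corrections:

```python
import math

def chi_exact_over_t(x, d, n=20000):
    # midpoint quadrature of eq. (chi_fin):
    # \int_{1/x}^1 dz z^{(d-2)/4} [1 - (1-z)^{(d-2)/2}]
    a, b = 1.0 / x, 1.0
    h = (b - a) / n
    s = 0.0
    for i in range(n):
        z = a + (i + 0.5) * h
        s += z ** ((d - 2.0) / 4.0) * (1.0 - (1.0 - z) ** ((d - 2.0) / 2.0))
    return s * h

def chi_linear_over_t(x, d):
    # closed form to linear order in d-2
    return 0.5 * (d - 2.0) * (x - 1.0) / x * (1.0 - math.log((x - 1.0) / x))

x, d = 2.0, 2.01  # d chosen close to the lower critical dimension
assert abs(chi_exact_over_t(x, d) / chi_linear_over_t(x, d) - 1.0) < 0.05
```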
The resulting FD plot for $d\to 2$ is shown in
Fig.~\ref{fig:mFD_plot}, together with the ones for $d=3$
(determined numerically) and $d=4$. For $d\to
2$ the approach of the slope to the equilibrium value $X=1$ for $x\to
1$ is difficult to see because it is logarithmically slow. As
expected from the trends with $d$ in the initial and final slopes of
the FD plot, the curve for $d\to 2$ is the most strongly curved, while
in $d=4$ we have the anticipated straight line with slope $4/5$
that is required by continuity with the results for $d>4$.
To quantify the shape of the FD plot further one can also consider the
axis ratio $Y$, defined as the limiting value of $\tilde\chi_{\rm
m}(t,t_{\rm w})=T_{\rm c}\chi_{\rm m}(t,t_{\rm w})/C_{\rm m}(t,t)$ for well-separated times
$t\gg t_{\rm w}\gg 1$. In equilibrium this would correspond to the FDT for
the static quantities, i.e. equal-time fluctuations and static
susceptibilities. Out of equilibrium, if the FD plot is straight then
$Y$ coincides with $X$; if it is not, then it has been argued that in
some circumstances $Y$ can be more relevant for characterizing
effective temperatures than $X$~\cite{OHeLiuNag04}. For the
magnetization FD plot in the current scenario of magnetized initial
states, we see from Fig.~\ref{fig:mFD_plot} that $Y$ decreases along
with $X_{\rm m}^\infty$ from $4/5$ as the dimension is lowered below
$d=4$. The two quantities begin to
differ more noticeably as $d$ decreases further, with $X_{\rm
m}^\infty=1/2$ and $Y=1/(2\ln 2)=0.7213\ldots$ in the limit $d\to 2$.
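Using the $d\to 2$ closed forms derived above, the value of $Y$ follows directly from the normalized susceptibility; the short check below (illustrative, not part of the original text) confirms $Y=1/(2\ln 2)$:

```python
import math

def chi_tilde(x):
    # normalized susceptibility for d->2: T_c chi_m / C_m(t,t)
    # = [(x-1)/x][1 - ln((x-1)/x)] / (2 ln 2)
    return (x - 1.0) / x * (1.0 - math.log((x - 1.0) / x)) / (2.0 * math.log(2.0))

Y = 1.0 / (2.0 * math.log(2.0))  # approximately 0.7213
assert abs(chi_tilde(1e8) - Y) < 1e-6
```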
\section{Summary and discussion}
\label{sec:conclusion}
In this paper we have considered the non-equilibrium dynamics of the
spherical ferromagnet after a quench to the critical temperature
$T_{\rm c}$. Our focus has been the calculation of correlation and response
functions and the associated fluctuation-dissipation ratios (FDRs)
$X(t,t_{\rm w})$. The key quantity that can be extracted from the latter is
the asymptotic FDR $X^\infty$ for large and well-separated times
$t\gg t_{\rm w}\gg 1$; it is independent of model-specific details within a
given dynamical universality
class. We were motivated by two questions: how does $X^\infty$
depend on the observable considered, both with regard to the
lengthscale and the type of observable (spin, bond, spin product)? And
what is the effect of initial conditions, in particular the presence
of a nonzero magnetization in the initial state? The first question
has implications for the interpretation of $T/X^\infty$ as an
effective temperature, which is plausible only if this quantity is
observable-independent. The second question allowed us to determine whether
different initial conditions can lead to different universality
classes of critical coarsening.
A peculiarity of the spherical model is the weak infinite-range interaction
produced by the spherical constraint. This requires that one
distinguishes between long-range or ``block'' observables, which probe
lengthscales large compared to the (time-dependent) correlation length
but small compared to the system size, and global observables whose
behaviour depends on correlations across the entire
system. Technically, the first case is much easier to treat because
the standard theory where the spins have Gaussian statistics can be
used. Global correlation and response functions, on the other hand,
require non-Gaussian corrections arising from the fluctuations of the
effective Lagrange multiplier.
We dealt with the case of finite-range (i.e.\ either local or
long-range) observables in Sec.~\ref{sec:finite-range}. For spin
observables, we found in the long-range case\eq{X_baseline} the same
$X^\infty$ as for local spin correlations and
response~\cite{GodLuc00b}. This was as expected from the general
correspondence between local and long-range observables discussed in
the introduction. The FD plot for the long-range spin observable,
i.e.\ the magnetization, is a straight line in the long-time
limit. This is as in the Ising case in $d=1$~\cite{MayBerGarSol03},
but it is interesting to note that here it holds for all dimensions
$d>2$. On the other hand, in the Ising model with $d\geq 2$, RG
arguments have been adduced~\cite{CalGam05,MayBerGarSol04,CalGam02} to
suggest that the magnetization FD plot should not be straight, though
with deviations that are likely too small to be detectable
numerically~\cite{MayBerGarSol04}. It is likely that the Gaussian
statistics of the spherical model are responsible for producing a
simpler, straight-line magnetization FD plot in all dimensions,
although it would be interesting to know whether any other models have
this property.
We then looked at the effect of the type of observable on $X^\infty$,
considering both bond and spin product observables in either the local
or long-range versions. The results in equations~(\ref{X_bond_local}),
(\ref{X_bond_long-range}), (\ref{X_prod_local}), (\ref{Xbl_large_d})
show that, although the precise time-dependence of $X(t,t_{\rm w})$ varies,
the asymptotic FDR $X^\infty$ is the same in all cases. This is
consistent with general arguments~\cite{CalGam04} suggesting that for
a Gaussian theory all observables should yield the same $X^\infty$. In
contrast to the Ising case~\cite{MayBerGarSol03}, not all long-range
observables give nontrivial FD plots; in fact, only the block product
observable does so, and only for $d<4$, while all others produce
pseudo-equilibrium FD plots for long times.
The bulk of the paper was concerned with the more challenging analysis
of global correlation and response functions, focussing mostly on the
energy as a key observable. In Sec.~\ref{sec:setup} we constructed a
framework for calculating non-Gaussian corrections to the
spins, which are ${{\mathcal O}}(N^{-1/2})$ to leading order. This led to the
general expression\eq{yt3} for these leading-order corrections. It
involves a two-time kernel $L(t,t_{\rm w})$ which from\eq{pinv_def} is the
functional inverse of $K(t,t_{\rm w})$ defined in\eq{p_def}. The basis of
all subsequent calculations is the determination of the long-time scaling of
these two functions, as summarized at the end of Sec.~\ref{sec:KandL}.
In Sec.~\ref{sec:energy_general} we obtained general expressions for
energy correlation and response functions, in terms of the kernel $L$
and other quantities known from the Gaussian theory; the results can
be found in eqs.~(\ref{C1},\ref{C2},\ref{C3},\ref{C4}) and\eq{RE}.
Evaluating these first for the equilibrium case, we found that the
energy correlation and susceptibility display a plateau for $T$ just
below $T_{\rm c}$ and $d>4$; this is caused by the $\mathbf{q}=\mathbf{0}$ wavevector,
i.e.\ by the slow relaxation of the global magnetization. In
Sec.~\ref{sec:noneq_large_d}, we proceeded to the long-time analysis
of energy FD behaviour in the non-equilibrium case for $d>4$; the key
results are\eq{CE_d_gt_4_longtime} and\eq{RE_d_gt_4_longtime}. The
associated FDR is given explicitly in\eq{XE} and has the {\em same}
asymptotic value $X^\infty=1/2$ as for all other (finite-range)
observables in $d>4$. The analysis of the case $d<4$ is more
difficult, and we were able to find closed-form
results\eqq{CE_prime}{RE_d_lt_4} only in the limit of well-separated
times $t/t_{\rm w}\gg 1$. This is, however, enough to determine $X^\infty$,
with the result\eq{Xinf_dlt4_explicit}. Evaluating this, both
numerically and by expansion in $4-d$ and $d-2$, the crucial
conclusion is that it does not coincide with the asymptotic FDR for
finite-range observables; see Fig.~\ref{fig:X_normal_and_E}. A naive
interpretation of $T/X^\infty$ as an
effective temperature for critical coarsening dynamics is therefore
ruled out, since such a temperature ought to be observable-independent.
On the other hand, to first order in $4-d$ the result agrees with an
RG calculation~\cite{CalGam04} for the $O(n)$-model. We conclude that
non-Gaussian corrections to the FD behaviour of global observables in
the spherical model capture genuine physical effects that have close
counterparts in more realistic systems with only short-range
interactions.
Finally, in Sec.~\ref{sec:magnetized} we turned our attention to
critical coarsening starting from magnetized initial states;
physically this situation could be produced by an up-quench from an
equilibrated state at a starting temperature $T<T_{\rm c}$. We concentrated
on the simpler spin observables and found that already for them, the
presence of a nonzero magnetization makes global properties sensitive
to non-Gaussian corrections. As with the energy fluctuations, it is the
{\em global} correlation and response functions that make contact with
the results for short-range models, as obtained recently for the Ising
case~\cite{GarSolPagRit05}: we find $X^\infty_{\rm m}=4/5$ for $d>4$,
eq.\eq{X_m_d_gt_4}. This is distinct from the value $X^\infty=1/2$ for
the unmagnetized case, indicating that magnetized critical coarsening
is in a separate dynamical universality class. Surprisingly, even the
expressions
for correlation and response functions themselves, which are not
expected to be universal, coincide with those for the Ising case.
It remains to be understood whether this is accidental or has more
profound origins. For the case $d<4$, we obtained new {\em exact} values for
the asymptotic FDR of magnetized critical coarsening. The
magnetization response\eq{R_finite} can be found
explicitly for long times, while for the magnetization
correlator only the asymptotics for well-separated
times\eq{Cm_tot_infty} can be written in closed form. The resulting
$X^\infty_{\rm m}$, eq.\eq{Xz}, is surprisingly nontrivial: while it
matches continuously with $X^\infty_{\rm m}=4/5$ in $d>4$ and
approaches the simple value $X^\infty_{\rm m}=1/2$ for $d\to 2$ as
shown in Fig.~\ref{fig:X_magnetized}, it is irrational already to
first order in an expansion in $4-d$, eq.\eq{X_m_epsilon_expansion}, or
$d-2$, eq.\eq{X_m_epsilon_prime_expansion}.
While the conclusion of our calculation as regards the existence of a
well-defined effective temperature for critical coarsening is
negative, the issue of dynamic universality classes and new asymptotic
FDRs due to magnetized (and possibly other, different) initial
conditions clearly deserves further study. Results for systems with
short-range interactions, such as the $O(n)$ and $n$-vector models,
would be particularly welcome. After the present work was completed we
became aware that a first step in this direction has recently
been taken by the authors of Ref.~\cite{FedTri06}, who calculated the
FDR for the $n$-vector model with a magnetized initial state within an
$\epsilon$-expansion around $d=2$. Intriguingly, their result
$X^\infty=1/2$ for $d=2$ itself agrees with ours, but the first-order
correction in $d-2$ remains {\em rational} even for $n\to\infty$. It
therefore disagrees with our spherical model
result\eq{X_m_epsilon_prime_expansion}. This appears to be the first
example of genuine differences between the spherical and $n$-vector
(with $n\to\infty$) models, which are known to have identical
properties in equilibrium~\cite{Stanley68} and within a Gaussian
theory of the dynamics.
As regards future work, we note first that a complete classification
of dynamical universality classes within critical coarsening remains
to be achieved. An earlier study of the spherical model considered
initial conditions with long-range correlations but no overall
magnetization; this yields no new (non-zero) values of the asymptotic
FDR $X^\infty$~\cite{PicHen02}. The presence of a non-zero
magnetization thus appears to be important for observing new
phenomena, and is reflected in our calculation by the fact that
non-Gaussian fluctuations become important. Whether there are yet
other initial conditions that could give rise to distinct values of
$X^\infty$ is an open problem.
Our general framework for treating non-Gaussian
corrections to the dynamics can also be applied in other contexts.
For example, it can be used to analyse the {\em
fluctuations} across thermal histories of correlation and response
functions that have been coarse-grained across a finite-sized
system. The properties of these fluctuations should be useful for
understanding dynamical heterogeneities in coarsening
dynamics~\cite{CasChaCugIguKen03}, and we will report on the
results of such a study shortly. We have also extended our approach to
non-Gaussian corrections for the dynamics of e.g.\ the $O(n)$-model
with large but finite $n$, opening up the attractive prospect of
obtaining exact results analogous to the ones in this paper for models
with exclusively short-range interactions.
{\bf Acknowledgements}: We thank P Calabrese and A Gambassi for
sharing their field theoretic results with us before publication, and
for helpful discussions. PS acknowledges the hospitality of the KITP,
Santa Barbara, where this research was supported in part by the NSF
under grant no.\ PHY99-07949.
\section{Introduction}
The study of ultracold quantum gases is interlinked with controlling
magnetic Fano-Feshbach resonances and thereby changing the effective
interparticle interaction by many orders of
magnitudes~\cite{inouyeNature98,courteillePRL98}, see also
Refs.~\onlinecite{timmermansPhysRep99,giorginiRMP08,chinRMP10}. This
makes ultracold Fermi gases a convenient tool to study the behavior of
a degenerate fermionic many-body system~\cite{oharaScience02} over a
wide range of interaction strengths, in particular fermionic
superfluidity~\cite{chinScience04}. Indeed, changing the magnetic
field across a resonance makes it possible to continuously tune the
gas from the Bardeen-Cooper-Schrieffer (BCS) state of Cooper pairs to
a Bose-Einstein condensate (BEC) of weakly bound molecules. After the
first observations of a molecular BEC of fermions
\cite{jochimScience03,greinerNature03,zwierleinPRL03a}, this so-called
BCS-BEC cross-over has been widely studied experimentally
\cite{bartensteinPRL04,regalPRL04,regalPRL05,kinastPRL04,bourdelPRL04,%
greinerPRL05,partridgePRL05} and theoretically
\cite{hollandPRL01,timmermansPhysLettA01,linghamPRL14,%
hoinkaPRL13}, see also reviews
Refs.~\onlinecite{duinePhysRep04,chenPhysRep05}. The observation of
quantized vortices on both sides of the BCS-BEC cross-over provided an
unambiguous proof of superfluidity by fermionic
pairing~\cite{zwierleinNature05}. Recent work investigated the effect
of partial polarization of a two-component Fermi gas on the Fermi
liquid parameters \cite{navonScience10},
the nature of the transition from a BCS state to a state of a
molecular BEC~\cite{huZwierleinScience12}, and the quantification of the
superfluid fraction in a Fermi gas by means of second
sound~\cite{sidorenkovNature13}.
In this work we concern ourselves with the low-density properties of
homogeneous Fermi gases at zero temperature. We will use a
square-well and a Lennard-Jones interaction potential for our study.
Changing the interaction strength (coupling constant) of the
respective potential changes the scattering length for two-body
scattering $a_0$, which we refer to as vacuum scattering length, as
opposed to the in-medium scattering length $a$ introduced later. When
$a_0\,<\,0$, {\em i.e.\/} the interaction is effectively attractive,
one expects BCS type pairing of particles with opposite spin. As
$a_0\rightarrow -\infty$, a low energy resonance of the two-body
problem generates bound dimers. This unitary limit, where the only
relevant length scale is the inverse Fermi wave number $k_{\rm F}^{-1}$,
marks the border between the BCS and BEC regime. As the interaction
strength increases further, the Cooper pairs become weakly bound
bosonic molecules. The singularity of the vacuum scattering length
signals the transition.
We are in particular interested in structural quantities such as the
energetics, distribution functions, the stability of the system
against spinodal decomposition and dimerization, and possibly BCS
pairing. We utilize a quantitative method of microscopic many-body
theory to determine correlation effects, {\em i.e.\/} effects beyond
the weak coupling or mean-field approximations \cite{Landau5} that are
routinely applied at low densities.
For example, the ground state energy depends in the low-density limit
only on the dimensionless parameter
$k_{\rm F}\,a_0$~\cite{HuangYang57,Landau5}. We are interested here in the
question in which parameter range this ``universal'' behavior
persists. Another example for correlation effects is the pair
distribution function, $g(r)$. In correlation-free mean--field
approaches, $g(r)$ is equal to the distribution function of the
non-interacting Fermi gas, $g_{\rm F}(r)$. As we will see below,
$g(r)$ deviates substantially from $g_{\rm F}(r)$, in particular for
spin-antiparallel particles, when the absolute value of the
scattering length becomes large.
In the limit of weak interactions, the system can be described by a
BCS type wave function. For systems where the weak-coupling
approximation does not apply (for example for Lennard-Jones type
interactions), the pairing gap can be obtained by an extension of the
Jastrow-Feenberg variational method, the correlated BCS (CBCS) method
\cite{CBFPairing,KroTrieste}, reviewed in section~\ref{sec:CBCS}. See
also Refs. \onlinecite{YangClarkBCS,HNCBCS} and
\onlinecite{Fabrocinipairing} for a similar implementation of the same
ideas. The CBCS theory takes into account {\em short-ranged\/}
correlations analogously to the theory for normal systems, and
supplements these by the typical BCS correlations. In its essence,
CBCS provides a recipe for calculating an effective interaction that
enters the standard BCS formalism. An alternative way to deal with the
problem that implements a full Fermi Hypernetted Chain (FHNC)
summation for large gap parameters has been suggested by Fantoni
\cite{Fantonipairing,Fabrocinipairing2}; unfortunately, the approach
uses a normalization of the correlated BCS wave function which leads
to divergences for optimized or otherwise long-ranged correlations. It
was therefore replaced in Ref. \onlinecite{CBFPairing} by the method
reviewed in Sec.~\ref{sec:CBCS}.
Our paper is organized as follows: In Sec.~\ref{sec:ManyBodyTheory} we
briefly review the basics of the correlated basis
functions (CBF) method. We call this approach ``generic'' many-body
theory because the same equations can be derived from Green's
functions approaches \cite{parquet1}, from Coupled Cluster theory
\cite{BishopValencia} and from an extension of density functional
theory which includes pair correlations. We evaluate in section
\ref{ssec:lowdens} the low-density limit and show that the exact
formula \cite{HuangYang57} is not reproduced by the Jastrow-Feenberg
and/or the ``fixed node'' approximation. To correct this problem, we
review in section \ref{ssec:CBF} perturbation theory in a correlated
basis and show that second-order CBF corrections must be added.
In Sec. \ref{sec:CBCS} we will review the CBCS theory. It is seen
that the theory can be formulated in exactly the same way as ordinary
BCS theory; CBCS simply provides a means for deriving weak, effective
interactions from a strong, bare interaction. Upon closer inspection,
the mapping of the bare interaction to an effective interaction is
closely related to the transition from the bare interaction to the
$T$-matrix used in the low-density expansion of BCS theory
\cite{gorkovJETP61,heiselbergPRL00}.
In Sec.~\ref{sec:results}, we present our results for the energy, the
pair distribution function, the in-medium scattering length, and the
gap energy as function of Fermi wave number $k_{\rm F}$ and the vacuum
scattering length $a_0$. We show that the dynamical correlations can
be characterized by three regimes: For short distances, $r\approx
\sigma$, these correlations are, of course, determined by the
interaction. An intermediate regime is dominated by two-body
scattering where correlations decay as $1/r$. A third, asymptotic
regime for $r$ larger than the average interparticle distance is dominated by
many-body effects where the correlations decay as $1/r^2$.
We find an instability of the FHNC-Euler-Lagrange (FHNC-EL) solutions
accompanied by a divergence of the in-medium scattering length $a$,
which we interpret as onset of dimerization. This happens well before
the divergence of the vacuum scattering length $a_0$ and is caused by
the induced interaction mediated by phonon exchange. Thus, dimers can
be formed at finite density even if the bare potential does not have a
bound state. Wave functions of the type normally used only in the
``BEC'' regime \cite{carlsonPRL03,astraPRL04,astraPRL05BCSBEC} would
be more appropriate in this regime.
In the regime $a<0$ we solve the CBCS gap equation and show that
deviations from the simple BCS approximation can be separated into
contributions from the induced interaction, {\em i. e.\/} the density
dependence of the in-medium scattering length, and from the
non-negligible momentum dependence of the pairing interaction in the
CBCS gap equation.
\section{Generic Many-Body Theory}
\label{sec:ManyBodyTheory}
\subsection{Variational wave functions}
\label{ssec:FHNC}
We start our discussion with the Jastrow-Feenberg theory for
a strongly interacting, translationally invariant {\em normal\/} system.
As usual, we assume a non-relativistic many-body Hamiltonian
\begin{equation}
H = -\sum_{i}\frac{\hbar^2}{2m}\nabla_i^2 + \sum_{i<j}
v( {\bf r}_i- {\bf r}_j)
\label{eq:Hamiltonian}
\end{equation}
in terms of a local, phenomenological interaction $v(r)$.
The method starts with a variational {\em
ansatz\/} for the wave function \cite{FeenbergBook}
\begin{align}
\Psi_0({\bf r}_1,\ldots,{\bf r}_N) &= \frac{1}{\cal N}
F({\bf r}_1,\ldots,{\bf r}_N)
\Phi_0({\bf r}_1,\ldots,{\bf r}_N)\label{eq:wavefunction}\\
F({\bf r}_1,\ldots,{\bf r}_N) &= \exp\frac{1}{2}
\left[\sum_{i<j} u_2({\bf r}_i,{\bf r}_j)
+ \ldots\right]\label{eq:Jastrow}\\
{\cal N} &= \left\langle \Phi_0\right|
F^\dagger F \left|\Phi_0\right\rangle^{\frac{1}{2}}\,,
\end{align}
where $\Phi_0({\bf r}_1,\ldots,{\bf r}_N)$ is a model state, normally
a Slater-determinant for fermions and $\Phi_0({\bf r}_1,\ldots,{\bf
r}_N)=1$ for bosons. There are basically two ways to deal with this
type of wave function. In quantum Monte Carlo studies, the wave
function (\ref{eq:wavefunction}) is referred to as ``fixed node
approximation'' and an optimal correlation function $F({\bf
r}_1,\ldots,{\bf r}_N)$ is obtained by stochastic means
\cite{carlsonPRL03,changPRA04,astraPRL04,astraPRL05BCSBEC,%
changPRL05,loboPRL06,burovskiPRL06,akkineniPRB07,morrisPRA10,%
bulgacPRL06,liPRA11}; a decomposition into $n$-body correlations
$u_n({\bf r}_1,\ldots,{\bf r}_n)$ is then, of course, not necessary.
Alternatively, one can use diagrammatic methods, specifically the
optimized Fermi-hypernetted chain (FHNC-EL) method for the calculation
of physically interesting quantities. These diagrammatic methods have
been successfully applied to such highly correlated quantum fluids as
$^4$He and $^3$He at $T=0$~\cite{polish}; they are naturally expected
to work much better in the low density systems of interest here. In
fact, we have shown in recent work \cite{ljium} that even the simplest
version of the FHNC-EL theory is accurate to better than one
percent at densities less than 25 percent of the ground state density
of liquid \he3.
Diffusion Monte Carlo calculations typically use a parametrized
Jastrow-Feenberg ansatz for importance sampling, where the parameters
are optimized by variational Monte Carlo calculations. JF theory makes
explicit use of the form (\ref{eq:Jastrow}). It has been shown
\cite{parquet1} that triplet correlations contribute to the ground
state energy only in fourth order of the interactions. Even in
strongly interacting quantum fluids like the helium liquids, triplet
correlations contribute no more than five to ten percent to the ground
state energy \cite{PPA2,polish} in both isotopes; they are completely
negligible below approximately 25 percent of the respective
equilibrium densities. It is therefore admissible, at the low
densities we are concerned with here, to identify the Jastrow-Feenberg
approximation with the fixed-node approximation in quantum Monte Carlo
calculations. Monte Carlo calculations based on lattice Hamiltonians
have been performed for the unitary Fermi gas, where this nodal
restriction was not required~\cite{burovskiPRL06,bulgacPRL06}.
The correlations $u_n({\bf r}_1,\ldots,{\bf r}_n)$ are obtained by minimizing
the energy, {\em i.e.\/} by solving the Euler-Lagrange (EL) equations
\begin{eqnarray}
&&E_0 = \left\langle\Psi_0\right|H\left|\Psi_0\right\rangle
\equiv H_{{\bf o},{\bf o}}\label{eq;energy}\\
&& \frac{\delta E_0}
{\delta u_n}({\bf r}_1,\ldots,{\bf r}_n) = 0\,.
\label{eq:euler}
\end{eqnarray}
The evaluation of the energy (\ref{eq;energy}) for the variational wave
function (\ref{eq:Jastrow}) and the analysis of the variational
problem is carried out by cluster expansion and resummation methods.
The procedure has been described at length in review articles
\cite{Johnreview} and pedagogical material \cite{KroTrieste}. In any
approximate evaluation of the energy expectation value, it is
important to make sure that the resulting equations are consistent
with the {\em exact\/} variational determination of the
correlations. It has turned out that the (Fermi-)hypernetted chain
hierarchy ((F)HNC) of approximations is the only systematic
approximation scheme that preserves the properties of the variational
problem~\cite{FeenbergBook}.
Here, we spell out the simplest version of the equations that is
consistent with the variational problem (``FHNC-EL//0''). These do not
provide the quantitatively best implementation \cite{polish}; instead,
they provide the {\em minimal\/} version of the FHNC-EL theory. In
particular, they contain the relevant physics, namely the correct
description of both short- and long--ranged correlations. They also
are the minimal implementation of the theory that gives the correct
expansion of the ground state energy in powers of $(k_{\rm F}\,a_0)$ for the
wave function Eq. (\ref{eq:wavefunction}).
In the FHNC-EL//0 approximation \cite{Mistig}, which contains both the
random phase approximation (RPA) and the Bethe-Goldstone equation in a
``collective'' approximation \cite{KroTrieste}, the Euler equation
(\ref{eq:euler}) can be written in the form
\begin{equation}
S(k) = \frac{S_{\rm F}(k)}{\sqrt{1 +
2\frac{\displaystyle S_{\rm F}^2(k)}{\displaystyle t(k)}
\tilde V_{\rm p-h}(k)}} \,,
\label{eq:FermiRPA0}
\end{equation}
where $S(k)$ is the static structure factor of the interacting system,
$t(k) = \hbar^2 k^2/2m$ is the kinetic energy of a free particle,
$S_{\rm F}(k)$ is the static structure of the non-interacting Fermi system,
and
\begin{eqnarray}
V_{\rm p-h}(r) &=&\>
\left[1+ \Gamma_{\!\rm dd}(r)\right]v(r)
+ \frac{\hbar^2}{m}\left|\nabla\sqrt{1+\Gamma_{\!\rm dd}(r)}\right|^2 \nonumber\\
&&+ \Gamma_{\!\rm dd}(r)w_{\rm I}(r)\nonumber\\
&\equiv& v_{\rm CW}(r) + \Gamma_{\!\rm dd}(r)w_{\rm I}(r)
\label{eq:VddFermi0}
\end{eqnarray}
is what we call the ``particle-hole interaction''. For further
reference, we have above defined the ``Clark-Westhaus effective
interaction'' $v_{\rm CW}(r)$ \cite{Johnreview}. As usual, we define
the Fourier transform with a density factor,
\begin{equation}
\tilde f( {\bf k}) \equiv \rho \int d^3 r e^{\I {\bf k}\cdot {\bf r}} f( {\bf r})\,.
\label{eq:Fouri}
\end{equation}
Auxiliary quantities are the ``induced interaction''
\begin{equation}
\tilde w_{\rm I}(k)=-t(k)
\left[\frac{1}{S_{\rm F}(k)}-\frac{1}{ S(k)}\right]^2
\left[\frac{S(k)}{S_{\rm F}(k)}+\frac{1}{2}\right].
\label{eq:inducedFermi0}
\end{equation}
and the ``direct-direct correlation function''
\begin{equation}
\tilde \Gamma_{\!\rm dd}(k) = \bigl(S(k)-S_{\rm F}(k)\bigr)/S_{\rm F}^2(k)\,.
\label{eq:GFHNC}
\end{equation}
Eqs.~(\ref{eq:FermiRPA0})--(\ref{eq:GFHNC}) form a closed set which
can be solved by iteration. Note that the Jastrow correlation
function (\ref{eq:Jastrow}) has been eliminated entirely.
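A minimal $k$-space sketch of one sweep of this iteration is given below. It is purely illustrative: the model Gaussian particle-hole interaction $\tilde V_{\rm p-h}$ is an assumption (not from the text), the real-space construction of $V_{\rm p-h}(r)$ via\eq{VddFermi0} and the full self-consistency loop are omitted, and the standard ideal-gas result $S_{\rm F}(k)=\frac{3}{4}(k/k_{\rm F})-\frac{1}{16}(k/k_{\rm F})^3$ for $k\le 2k_{\rm F}$ is assumed:

```python
import math

def S_F(q):
    # static structure factor of the ideal Fermi gas, q = k/k_F (nu = 2)
    return 1.0 if q >= 2.0 else 0.75 * q - q**3 / 16.0

def S_of_k(q, vph_tilde, ef=1.0):
    # Euler equation (FermiRPA0); t(k) = e_F q^2 with e_F = hbar^2 k_F^2 / 2m
    t = ef * q * q
    sf = S_F(q)
    return sf / math.sqrt(1.0 + 2.0 * sf**2 * vph_tilde(q) / t)

def gamma_dd_tilde(q, S):
    # direct-direct correlation function, eq. (GFHNC)
    return (S - S_F(q)) / S_F(q) ** 2

def w_I_tilde(q, S, ef=1.0):
    # induced interaction, eq. (inducedFermi0)
    t = ef * q * q
    sf = S_F(q)
    return -t * (1.0 / sf - 1.0 / S) ** 2 * (S / sf + 0.5)

# model repulsive particle-hole interaction (a hypothetical choice)
vph = lambda q: 0.5 * math.exp(-q * q)

qs = [0.1 * i for i in range(1, 60)]
S = [S_of_k(q, vph) for q in qs]

# a repulsive V_ph suppresses density fluctuations: S(k) <= S_F(k)
assert all(s <= S_F(q) + 1e-12 for q, s in zip(qs, S))
# correlations vanish at short wavelengths: S -> S_F -> 1
assert abs(S[-1] - 1.0) < 1e-3
# the induced interaction is attractive here
assert w_I_tilde(1.0, S_of_k(1.0, vph)) < 0.0
```

In the actual scheme, $\tilde\Gamma_{\!\rm dd}$ and $\tilde w_{\rm I}$ obtained from $S(k)$ would be transformed to real space, inserted into\eq{VddFermi0}, and the cycle repeated until $S(k)$ converges.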
\begin{figure}[h]
\centerline{\includegraphics[width=0.6\columnwidth]{Xee.eps}}
\caption{The two diagrams contributing to $(\Delta X_{\rm ee})_1(r)$.
The usual diagrammatic notations of FHNC-EL theory
\cite{KroTriesteBook} apply.}
\label{fig:Xee}
\end{figure}
The pair distribution function can generally be written as
\begin{equation}
g(r) = \left[1 + \Gamma_{\!\rm dd}(r)\right]\left[g_{\rm F}(r)+C(r)\right]
\end{equation}
where, roughly speaking, $\Gamma_{\!\rm dd}(r)$ describes dynamic,
short-range correlations, $g_{\rm F}(r) = 1-\frac{1}{\nu}\ell^2(rk_{\rm F})$
is the pair distribution function of the non-interacting Fermi gas,
and $\ell(x)=\frac{3}{x}j_1(x)$ is the Slater exchange
function. $C(r)$ describes the combination of statistical and dynamic
correlations. In leading order in the dynamic correlations we have
\begin{equation}
\tilde C(k) = (S_{\rm F}^2(k)-1)\tilde\Gamma_{\!\rm dd}(k)
+ (\Delta \tilde X_{\rm ee})_1(k)
\label{eq:C0ofk}
\end{equation}
where $(\Delta \tilde X_{\rm ee})_1(k)$ is represented by the two leading
order exchange diagrams shown in Fig. \ref{fig:Xee}.
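For reference, the non-interacting ingredients $\ell(x)$ and $g_{\rm F}(r)$ are simple to evaluate; the short sketch below is illustrative (the small-$x$ expansion used to stabilize $\ell$ near the origin is a standard result, not from the text):

```python
import math

def slater_l(x):
    # Slater exchange function l(x) = (3/x) j_1(x),
    # with j_1(x) = sin(x)/x^2 - cos(x)/x
    if x < 1e-4:
        return 1.0 - x * x / 10.0  # small-x expansion of 3 j_1(x)/x
    return 3.0 * (math.sin(x) / x**2 - math.cos(x) / x) / x

def g_F(r_kF, nu=2):
    # pair distribution of the non-interacting Fermi gas: 1 - l^2(r k_F)/nu
    return 1.0 - slater_l(r_kF) ** 2 / nu

assert abs(g_F(0.0) - 0.5) < 1e-12   # Pauli hole: g_F(0) = 1 - 1/nu
assert abs(g_F(50.0) - 1.0) < 1e-2   # uncorrelated at large separations
```

The contact value $g_{\rm F}(0)=1-1/\nu$ reflects the Pauli exclusion between same-spin particles; for $\nu=2$ this gives the familiar value $1/2$.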
The energy per particle is, in this approximation \cite{polish,ljium},
\begin{widetext}
\begin{eqnarray}
\frac{E}{N} &=& \frac{3}{5}e_{\rm F} + e_{\rm R} + e_{\rm Q} + t^{(3)}_{\rm JF}
\,,\nonumber \\
e_{\rm R} &=& \frac{\rho }{2}\int\! d^3r\>\bigl[g_{\rm F}(r) + C(r)\bigr]
v_{\rm CW}(r)\,,
\label{eq:EJF}\\
e_{\rm Q} &=& \frac{1}{4}\int\!\frac{d^3k}{(2\pi)^2\rho}\>
t(k)\tilde\Gamma_{\!\rm dd}^2(k)\left[S^2_{\rm F}(k)/S(k)-1\,\right]
-\frac{1}{4}\int\!\frac{d^3k}{(2\pi)^2\rho}\>
t(k)\tilde\Gamma_{\!\rm dd}(k)(\Delta \tilde X_{\rm ee})_1(k)
\equiv e_{\rm Q}^{(1)}+ e_{\rm Q}^{(2)}\label{eq:EQ}\\
t^{(3)}_{\rm JF} &=& \frac{\hbar^2\rho^2}{8m\nu^2}
\int d^3 r_{12}d^3r_{13}
\Gamma_{\rm dcc}( {\bf r}_1; {\bf r}_2, {\bf r}_3)
\nabla_1^2\ell(r_{12}k_{\rm F})\ell(r_{13}k_{\rm F})
\label{eq:TJF}
\end{eqnarray}
where $e_{\rm F} = \frac{\hbar^2k_{\rm F}^2}{2m}$ is the Fermi energy of
non-interacting particles, $\nu$ is the degree of degeneracy of the
single particle states; in our case we have generally $\nu=2$. The
term $t_{\rm JF}^{(3)}$ is the three-body term of the Jackson-Feenberg
kinetic energy. The function $\Gamma_{\rm
dcc}( {\bf r}_1; {\bf r}_2, {\bf r}_3)$ is the sum of all three-point
diagrams that have an exchange path connecting points $ {\bf r}_2$ and
$ {\bf r}_3$ and no exchange lines attached to point $ {\bf r}_1$ which is
dynamically connected in such a way that there exists a path between
$ {\bf r}_1$ and each of the other two external points that {\it does
not\/} go through the third external point. The term is normally
numerically very small; we keep it here for the purpose of deriving
the low-density expansion. To obtain the correct low-density limit,
we retain all contributions to $t_{\rm JF}^{(3)}$ with two factors
$\Gamma_{\!\rm dd}(r)$:
\begin{eqnarray}
t^{(3)}_{\rm JF}&\approx& t_{\rm JF}^{(3a)}+ t_{\rm JF}^{(3b)}\nonumber\\
t_{\rm JF}^{(3a)} &=& \frac{\hbar^2\rho^2}{8m\nu^2}
\int d^3 r_{12}d^3r_{13}\Gamma_{\!\rm dd}(r_{12})\Gamma_{\!\rm dd}(r_{13})
\nabla_{ {\bf r}_1}^2\ell(r_{12}k_{\rm F})\ell(r_{13}k_{\rm F})\ell(r_{23}k_{\rm F})\\
t_{\rm JF}^{(3b)} &=& -\frac{\hbar^2\rho^2}{8m\nu^3}
\int d^3 r_{12}d^3r_{13}d^3r_{14}\Gamma_{\!\rm dd}(r_{13})
\Gamma_{\!\rm dd}(r_{24})
\nabla_{ {\bf r}_1}^2\ell(r_{12}k_{\rm F})\ell(r_{13}k_{\rm F})\ell(r_{34}k_{\rm F})\ell(r_{24}k_{\rm F})\,.
\nonumber
\label{eq:TJF2}
\end{eqnarray}
The term $t_{\rm JF}^{(3b)}$ cancels exactly the contribution
to $e_{\rm Q}^{(2)}$ originating from the second diagram in
$(\Delta X_{\rm ee})_1(r)$; the terms $e_{\rm Q}^{(2)}$, $t_{\rm JF}^{(3a)}$
and $t_{\rm JF}^{(3b)}$ can then be combined to
\begin{equation}
t_{\rm CW}^{(3)}= e_{\rm Q}^{(2)} + t_{\rm JF}^{(3a)} + t_{\rm JF}^{(3b)}
= \frac{\hbar^2\rho^2}{4m\nu^2}
\int d^3 r_{12}d^3r_{13}\nabla\Gamma_{\!\rm dd}(r_{12})\cdot\nabla
\Gamma_{\!\rm dd}(r_{13})\ell(r_{12}k_{\rm F})\ell(r_{13}k_{\rm F})\ell(r_{23}k_{\rm F})\,.
\label{eq:TCW}
\end{equation}
\end{widetext}
The term $t_{\rm CW}^{(3)}$ is recognized as the three-body
term of the ``Clark-Westhaus'' form of the kinetic energy.
To summarize, the total energy has the form
\begin{equation}
\frac{E}{N} = \frac{3}{5}e_{\rm F} + e_{\rm R} + e_{\rm Q}^{(1)} + t_{\rm CW}^{(3)}\,.
\label{eq:Efull}
\end{equation}
For further reference, we also spell out the pair distribution functions
in the spin-parallel and the spin-antiparallel channel:
\begin{align}
g_{{\uparrow\uparrow}}(r) &=\left[1+\Gamma_{\!\rm dd}(r)\right]
\Big[1+\big[(S_{\rm F}^2(k)-1)\tilde\Gamma_{\!\rm dd}(k)\big]^{\cal F}(r)\nonumber\\
&\qquad\qquad\quad -\ell^2(rk_{\rm F} )+2(\Delta X_{\rm ee})_1(r)\Big]\,,
\label{eq:guu}\\
g_{{\uparrow\downarrow}}(r) &=\left[1+\Gamma_{\!\rm dd}(r)\right]
\left[1+\big[(S_{\rm F}^2(k)-1)\tilde\Gamma_{\!\rm dd}(k)\big]^{\cal F}(r)
\right]
\label{eq:gud}\\
g(r) &= \frac{1}{2}\left[g_{{\uparrow\uparrow}}(r) + g_{{\uparrow\downarrow}}(r)\right]
\label{eq:gfull}
\end{align}
where $\left[\ldots\right]^{\cal F}(r)$ indicates the
Fourier-transform (\ref{eq:Fouri}). To leading order in the density,
the term $\ell^2(rk_{\rm F} )$ is the only term that reflects Fermi
statistics, whereas the factor $\left[1+\Gamma_{\!\rm dd}(r)\right]$
describes dynamical correlations. $g_{{\uparrow\uparrow}}(r)$, $g_{{\uparrow\downarrow}}(r)$, and $g(r)$
are all normalized such that they go to unity for large $r$.
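As a numerical aside (a sketch, not part of the derivation; we set $k_{\rm F}=1$ and switch off the dynamical correlations, $\Gamma_{\!\rm dd}=0$, $(\Delta X_{\rm ee})_1=0$), the Slater function and the resulting free-gas spin-parallel pair distribution can be evaluated directly; $g_{{\uparrow\uparrow}}(0)=0$ expresses the Pauli principle:

```python
import numpy as np

def slater_l(x):
    """Slater exchange function l(x) = 3 j_1(x)/x, with j_1 the
    spherical Bessel function; l(0) = 1 from its Taylor expansion."""
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)
    m = np.abs(x) > 1e-6
    xm = x[m]
    out[m] = 3.0*(np.sin(xm)/xm**2 - np.cos(xm)/xm)/xm
    return out

def g_uu_free(r, kF):
    """Spin-parallel pair distribution of the non-interacting Fermi gas,
    g_uu(r) = 1 - l^2(r kF)  (Gamma_dd = 0, Delta X_ee = 0)."""
    return 1.0 - slater_l(r*kF)**2

kF = 1.0
r = np.linspace(0.0, 10.0, 1001)
g = g_uu_free(r, kF)                 # g[0] = 0: Pauli hole at the origin
```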
\subsection{Low-density limit}
\label{ssec:lowdens}
In the limit of low densities, the equation of state and related
quantities depend only on the vacuum $s$-wave scattering length $a_0$ and the
Fermi wave number $k_{\rm F}$. For example, the energy per particle has the
expansion \cite{HuangYang57,Landau5}
\begin{equation}
\frac{E_{\rm HY}}{N} =
\frac{\hbar^2 k_{\rm F}^2}{2m}\left[\frac{3}{5} + \frac{2}{3\pi}a_0 k_{\rm F}
+ \frac{4(11-2\ln 2)}{35\pi^2}\left(a_0 k_{\rm F} \right)^2+ \ldots\right]
\label{eq:lowdensFermi}
\end{equation}
Note that the expansion (\ref{eq:lowdensFermi}) is strictly valid only
for $a_0 > 0$; for attractive potentials the superfluid condensation
energy must be added.
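For orientation, the terms of the expansion (\ref{eq:lowdensFermi}) are easily evaluated numerically (a sketch; the gas parameter $a_0k_{\rm F}=0.1$ is an arbitrary illustrative choice):

```python
import numpy as np

def e_huang_yang(a0kF):
    """E/N of the dilute Fermi gas in units of hbar^2 kF^2/(2m),
    from the Huang-Yang expansion:
      3/5 + (2/(3 pi)) a0 kF + 4(11 - 2 ln 2)/(35 pi^2) (a0 kF)^2."""
    x = a0kF
    return (0.6 + 2.0/(3.0*np.pi)*x
            + 4.0*(11.0 - 2.0*np.log(2.0))/(35.0*np.pi**2)*x**2)

# at a0 kF = 0.1 the second-order term is already tiny
print(e_huang_yang(0.0), e_huang_yang(0.1))
```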
The locally correlated wave function (\ref{eq:wavefunction}) is not
exact, and the question arises whether it recovers the expansion
(\ref{eq:lowdensFermi}). It is plausible that this is {\em not\/} the
case: The calculation of the third term in the expansion
(\ref{eq:lowdensFermi}) makes explicit use of the form of the energy
denominator in second order perturbation theory \cite{Landau5}. The
local correlation operator corresponds to a ``collective
approximation'' in which, among others, the particle-hole propagator
is approximated by a collective mode.
Our task is to express the variational energy expression
(\ref{eq:Efull}) to second order in the vacuum scattering length
$a_0$. One can deal with this task in two ways: one is to permit
hard-core interactions; the other, somewhat simpler, approach is to
assume a weak interaction that has a Fourier transform. In this case,
one can parallel the derivation of Ref. \onlinecite{parquet1} for
fermions.
We will show the details of the calculation in Appendix
\ref{app:lowdens}; here we discuss only the essential steps:
The vacuum scattering length is determined from the zero-energy
scattering equation
\begin{equation}
\frac{\hbar^2}{m}
\nabla^2\psi(r) = v(r)\psi(r)
\label{eq:scatteq}
\end{equation}
The scattering equation has the asymptotic solution
\begin{equation}
\psi(r) = 1 - \frac{a_0}{r}\quad{\rm as}\quad r\rightarrow\infty .
\label{eq:a0def}
\end{equation}
Multiplying Eq. (\ref{eq:scatteq}) by $\psi(r)$ and using the identity
\begin{equation}
\psi(r)\nabla^2\psi(r) = \frac{1}{2}\nabla^2 \psi^2(r) -
\left|\nabla \psi(r)\right|^2
\label{eq:psi2}
\end{equation}
gives a relationship that will be useful later:
\begin{equation}
\frac{\hbar^2}{2m}
\nabla^2\psi^2(r) = \frac{\hbar^2}{m}
\left|\nabla \psi(r)\right|^2
+v(r)\psi^2(r)\equiv v_{\rm CW}^{(0)}(r)\,.
\label{eq:vcw}
\end{equation}
The quantity $v_{\rm CW}^{(0)}(r)$ is structurally identical to
$v_{\rm CW}(r)$ as introduced in Eq. (\ref{eq:VddFermi0}); it is here
calculated for the zero-energy vacuum scattering solution $\psi(r)$,
which we indicate with the superscript $(0)$.
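The identity (\ref{eq:psi2}) can be checked numerically by finite differences (a sketch; the radial test function $\psi(r)=1+e^{-r^2}$ is an arbitrary smooth choice):

```python
import numpy as np

# Finite-difference check of the radial identity
#   psi * lap(psi) = (1/2) lap(psi^2) - (psi')^2,
# where lap(f) = f'' + (2/r) f' for a spherically symmetric f.
# Test function psi(r) = 1 + exp(-r^2) (an arbitrary smooth choice).

r = np.linspace(0.5, 5.0, 4001)       # stay away from r = 0
h = r[1] - r[0]

def d1(f):                            # central first derivative (interior)
    return (f[2:] - f[:-2])/(2.0*h)

def lap(f):                           # radial Laplacian (interior points)
    return (f[2:] - 2.0*f[1:-1] + f[:-2])/h**2 + (2.0/r[1:-1])*d1(f)

psi = 1.0 + np.exp(-r**2)
lhs = psi[1:-1]*lap(psi)
rhs = 0.5*lap(psi**2) - d1(psi)**2
err = float(np.max(np.abs(lhs - rhs)))   # O(h^2) discretization error
```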
Integrating Eq. (\ref{eq:vcw}) leads to the relationship
\begin{eqnarray}
\frac{4\pi\rho\hbar^2}{m}a_0 &=& \rho\int d^3r\left[\frac{\hbar^2}{m}
\left|\nabla \psi(r)\right|^2
+v(r)\psi^2(r)\right]\nonumber\\
&=& \tilde v_{\rm CW}^{(0)}(0+)\,.
\label{eq:afull}
\end{eqnarray}
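The sum rule (\ref{eq:afull}) can be verified for a model where both sides are available in closed form: a soft-sphere potential, for which the zero-energy solution and $a_0$ are known analytically (a numerical sketch in units $\hbar=m=1$; the values of $V_0$ and $R$ are arbitrary illustrative choices):

```python
import numpy as np

# Zero-energy scattering off a soft sphere, v(r) = V0 for r < R (else 0),
# in units hbar = m = 1, so Eq. (scatteq) reads psi'' + (2/r) psi' = V0 psi
# inside, with psi -> 1 - a0/r outside.  The analytic solution is
#   psi(r) = sinh(kap r) / (kap cosh(kap R) r),   kap = sqrt(V0),
#   a0 = R - tanh(kap R)/kap.
# We check the sum rule  int d^3r [ |grad psi|^2 + v psi^2 ] = 4 pi a0,
# i.e. Eq. (afull) stripped of the density factor.  V0, R are arbitrary.

def trap(f, x):                      # simple trapezoidal rule
    return float(np.sum(0.5*(f[1:] + f[:-1])*np.diff(x)))

V0, R = 4.0, 1.0
kap = np.sqrt(V0)
a0 = R - np.tanh(kap*R)/kap

r = np.linspace(1e-6, R, 20001)
psi = np.sinh(kap*r)/(kap*np.cosh(kap*R)*r)
dpsi = (kap*r*np.cosh(kap*r) - np.sinh(kap*r))/(kap*np.cosh(kap*R)*r**2)

inside = trap(4.0*np.pi*r**2*(dpsi**2 + V0*psi**2), r)
outside = 4.0*np.pi*a0**2/R      # exact integral of |grad(1 - a0/r)|^2, r > R
total = inside + outside         # should equal 4 pi a0
```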
We notice that the induced interaction $\tilde w_{\rm I}(k)$ as
defined in Eq. (\ref{eq:inducedFermi0}) is of second order in the
interaction. To leading order in the density we can also expand
Eq. (\ref{eq:FermiRPA0})
\[S(k) = S_{\rm F}(k) - \frac{S_{\rm F}^3(k)}{t(k)}\tilde v_{\rm CW}(k)\]
and obtain from Eq. (\ref{eq:GFHNC}) the solution
\begin{equation}
\tilde \Gamma_{\!\rm dd}(k) = -\frac{\tilde v_{\rm CW}(k)S_{\rm F}(k)}{t(k)}\,.
\label{eq:EulerLow0}
\end{equation}
In addition to calculating the energy contributions (\ref{eq:Efull})
for the correlation function (\ref{eq:EulerLow0}), we must express
$\tilde v_{\rm CW}(0+)$ in terms of the scattering length because
$\tilde v_{\rm CW}(0+)$ is calculated with the optimal correlation
function (\ref{eq:GFHNC}) of the many-body problem at {\em finite
density\/}, and not with the solution of the zero-density scattering
equation (\ref{eq:scatteq}). In Appendix \ref{app:lowdens} we will
prove the relationship
\begin{eqnarray}
\tilde v_{\rm CW}(0)&=& \frac{4\pi\rho\hbar^2}{m}a_0\nonumber\\
&+& \frac{1}{2} \int\frac{d^3 k}{(2\pi)^3\rho}\frac{\tilde v_{\rm CW}^2(k)}{t(k)}
\left[S_{\rm F}(k) -1\right]^2 + {\cal O}(a_0^2)
\nonumber\\
&=&\frac{4\pi\rho\hbar^2}{m}a_0
\left[1+\frac{99}{280}\frac{\tilde v_{\rm CW}(0+)}{e_{\rm F}}\right]\nonumber\\
&=& \frac{4\pi\rho\hbar^2}{m}a_0\left[1+\frac{33}{35}\frac{a_0k_{\rm F}}{\pi}\right]\,.
\label{eq:vcw0}
\end{eqnarray}
Collecting all results, one finds
\begin{equation}
\frac{E}{N} =\frac{\hbar^2 k_{\rm F} ^2}{2m}\left[\frac{3}{5}
+ \frac{2}{3}\frac{ a_0 k_{\rm F} }{\pi} + 1.5415
\left(\frac{a_0 k_{\rm F} }{\pi}\right)^2 +\ldots\right]\,,
\label{eq:lowdensfhnc0}
\end{equation}
see Appendix \ref{app:lowdens} for details of the calculation. The
numerical factor $2430284/1576575=1.5415$ is to be compared with the
factor $4(11-2\ln 2)/35 = 1.098$ of Eq. (\ref{eq:lowdensFermi}). We
emphasize again that the result (\ref{eq:lowdensfhnc0}) also applies
to the ``fixed-node'' approximation in Monte Carlo calculations
because the terms where that approximation deviates from our expansion
are of at least fourth order in the potential strength. To get the
exact result, one must go beyond local correlation operators; this is
done by perturbation theory in a correlated basis generated by the
correlation operator $F( {\bf r}_1,\ldots, {\bf r}_N)$ described in the next
section \ref{ssec:CBF}. The situation is analogous to the case of the
high-density limit of the correlation energy of the electron gas. With
local correlations one obtains for the logarithmic term 0.05690$\,\ln
r_s\,$Ry \cite{Zab80} instead of the exact value 0.06218$\,\ln r_s\,$Ry
\cite{Mac50,GellMannBrueckner}. This deficiency is, for the electron
gas, removed by second-order CBF theory \cite{LanttoKroSmithOaxtepec},
and we will see that the same is true here.
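The two second-order coefficients quoted above are quickly verified (a trivial numerical check):

```python
import numpy as np

# Second-order coefficients of the (a0 kF / pi)^2 term in E/N:
c_fhnc = 2430284/1576575            # local (Jastrow-Feenberg) result
c_exact = 4*(11 - 2*np.log(2))/35   # Huang-Yang value
print(c_fhnc, c_exact)              # approx. 1.5415 vs 1.0987
```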
\subsection{Elements of Correlated Basis Functions}
\label{ssec:CBF}
We have seen above that a locally correlated wave function
(\ref{eq:wavefunction}) does not produce the exact low-density limit
(\ref{eq:lowdensFermi}) of the ground state energy. As mentioned
above, the problem can be cured by applying second-order perturbation
theory with correlated basis functions (CBF theory). We will also need
the basic ingredients of CBF theory for examining the superfluid
state. We review here this method only very briefly, details may be
found in pedagogical material \cite{KroTrieste} and review articles
\cite{Johnreview,polish}; the diagrammatic construction of the
relevant ingredients is given in Ref. \onlinecite{CBF2}.
CBF theory uses the correlation operator
$F$ to generate a complete set of correlated and normalized
$N$-particle basis states through
\begin{equation}
\vert \Psi_{\bf m}^{(N)} \rangle =
\frac{F_{\!N} \; \vert \Phi_{\bf m}^{(N)} \rangle }
{\langle \Phi_{\bf m}^{(N)} \vert F_{\!N}^{\dagger} F_{\!N}
\vert \Phi_{\bf m}^{(N)}
\rangle^{1/2} } \;,
\label{eq:States}
\end{equation}
where the $\{\vert \Phi_{\bf m}^{(N)} \rangle\}$ form a complete basis of
model states, normally consisting of Slater determinants of single
particle orbitals. Although the $\vert \Psi_{\bf m}^{(N)} \rangle$ are not
orthogonal, perturbation theory can be formulated in terms of these
states \cite{MF1,FeenbergBook}.
For economy of notation, we introduce a ``second--quantized''
formulation of the correlated states. The Jastrow--Feenberg
correlation operator in (\ref{eq:Jastrow}) depends on the particle
number, {\it i.e.\/} $F=F_{\!N}(1,\ldots,N)$
(whenever unambiguous, we omit the corresponding subscript). Starting
from the conventional $\qerz{k}, \qver{k}$ that create and annihilate
single particle states, new creation and annihilation operators
$\perz{k},\pver{k}$ of {\em correlated states\/} are defined by their
action on the correlated basis states:
\begin{eqnarray}
\perz{k}\,\bigl|\Psi_{\bf m}\bigr\rangle
&\equiv\>& \frac{ F_{\!\!_{N+1}} \qerz{k} \,\ket {\Phi_{\bf m}} }{
\bra {\Phi_{\bf m}} \qver{k} F_{\!\!_{N+1}}^\dagger
F_{\!\!_{N+1}}^{\phantom{\dagger}}
\qerz{k}\ket {\Phi_{\bf m}}^{1/2} }\, ,
\label{eq:creation}\\
\pver{k}\,\bigl|\Psi_{\bf m}\bigr\rangle
&\equiv\>& \frac{ F_{\!\!_{N-1}} \qver{k}\,\ket {\Phi_{\bf m}} }{
\bra {\Phi_{\bf m}} \qerz{k} F_{\!\!_{N-1}}^\dagger
F_{\!\!_{N-1}}^{\phantom{\dagger}}
\qver{k}\ket {\Phi_{\bf m}}^{1/2} }\,.
\label{eq:annihilation}\end{eqnarray}
According to these definitions, $\alpha_{k}^\dagger$ and
$\alpha^{\phantom{\dagger}}_{k}$ obey the same (anti--) commutation
rules as the creation and annihilation operators $\qerz{k}$ and
$\qver{k}$ of uncorrelated states, {\it but they are not Hermitian
conjugates.\/} If $\ket{\Psi_{\bf m}}$ is an $N$--particle state,
then the state in Eq.~(\ref{eq:creation}) must carry an
$(N\!+\!1)$-particle correlation operator $F_{\!\!_{N+1}}$, while that in
Eq.~(\ref{eq:annihilation}) must be formed with an
$(N\!-\!1)$--particle correlation operator $F_{\!\!_{N-1}}$.
In general, we label ``hole'' states, which are occupied in $\vert
\Phi_{\bf o} \rangle$, by $h$, $h'$, $h_i\;, \ldots\,$, and unoccupied
``particle'' states by $p$, $p'$, $p_i\;,$ \textit{etc.}. To display the
particle-hole pairs explicitly, we will alternatively to
$\bigl|\Psi_{\bf m}\bigr\rangle$ use the notation $\bigl|\Psi_{p_1 \ldots
p_d\, h_1\ldots h_d}\bigr\rangle $. A basis state with $d$
particle-hole pairs is then
\begin{equation}
\bigl|\Psi_{p_1 \ldots p_d\, h_1\ldots h_d}\bigr\rangle
=\perz{p_1}\cdots\perz{p_d}\pver{h_d}\cdots\pver{h_1}\ket{\Psi_{\bf o}}
\,.
\label{eq:psimph}
\end{equation}
For the off--diagonal elements $O_{\bf m,n}$ of an operator $O$
we sort the quantum numbers $m_i$ and $n_i$ such that $|\Psi_{\bf m} \rangle$
is mapped onto $\left|\Psi_{\bf n}\right\rangle$ by
\begin{equation}
\label{eq:defwave}
\left|\Psi_{\bf m}\right\rangle = \perz{m_1}\perz{m_2}
\cdots
\perz{m_d} \; \pver{n_d} \cdots \pver{n_2}\pver{n_1}
\left|\Psi_{\bf n} \right\rangle \;.
\end{equation}
From this we recognize that, to leading order in $N$, any $O_{\bf
m,n}$ depends only on the {\it difference\/} between the states
$\vert \Psi_{\bf m} \rangle$ and $\vert \Psi_{\bf n} \rangle$ and {\it
not\/} on the states as a whole. Consequently, $O_{\bf m,n}$ can be
written as matrix element of a $d$-body operator
\begin{equation}
\label{eq:defmatrix}
O_{\bf m,n} \equiv \langle m_1\, m_2 \, \ldots m_d \,|
{\cal O}(1,2,\ldots d) \,|n_1\,
n_2 \, \ldots n_d\rangle_a \;.
\end{equation}
(The index $a$ indicates antisymmetrization.)
Key quantities for the execution of the theory are diagonal and off-diagonal matrix elements of unity and
$H'\!\equiv H\!-\!H_{{\bf o},{\bf o}}$
\begin{eqnarray}
M_{\bf m,n} &=& \langle \Psi_{\bf m} \vert \Psi_{\bf n} \rangle
\equiv \delta_{\bf m,n} + N_{\bf m,n}\;,
\label{eq:defineNM}
\\
H'_{\bf m,n} &\equiv &
W_{\bf m,n} + \frac{1}{2}\left(H_{\bf m,m}+H_{\bf n,n}-2H_{\bf o,o}
\right)N_{\bf m,n} \,. \qquad
\label{eq:defineW}
\end{eqnarray}
Eq. (\ref{eq:defineW}) defines a natural decomposition
\cite{CBF2,KroTrieste} of the matrix elements of $H'_{\bf m,n}$ into
the off-diagonal quantities $W_{\bf m,n}$ and $N_{\bf m,n}$ and
diagonal quantities $H_{\bf m,m}$.
To leading order in the particle number, the {\it diagonal\/} matrix
elements of $H'\!\equiv H\!-\!H_{{\bf o},{\bf o}}$ become additive, so
that for the above $d$-pair state we can define the CBF single
particle energies
\begin{equation}
\bra{\Psi_{\bf m}} H' \ket{\Psi_{\bf m}} \>\equiv\>
\sum_{i=1}^d e_{p_ih_i} + {\cal O}(N^{-1}) \;,
\label{eq:CBFph}
\end{equation}
with $e_{ph} = e_p - e_h$ where
\begin{eqnarray}
e_p &=&\phantom{-}\bigl\langle\Psi_{\bf o}\bigr|\pver{p}\,H'\perz{p}\
\bigl|\Psi_{\bf o}\bigr\rangle = t(p) + u(p)\nonumber\\
e_h &=&-\bigl\langle\Psi_{\bf o}\bigr|\perz{h}\,H'\pver{h}\
\bigl|\Psi_{\bf o}\bigr\rangle = t(h) + u(h)\,
\label{eq:spectrum}
\end{eqnarray}
and $u(p)$ is an average field that can be expressed in terms of the
compound diagrammatic quantities of FHNC theory.
According to (\ref{eq:defmatrix}),
$W_{{\bf m},{\bf n}}$ and $N_{{\bf m},{\bf n}}$ define
$d$-particle operators ${\cal N}$ and ${\cal W}$, {\em e.g.\/}
\begin{eqnarray}
N_{{\bf m},{\bf o}} &\equiv& N_{p_1p_2\ldots p_d \,h_1h_2\ldots h_d,0} \nonumber\\
&\equiv& \langle p_1p_2\ldots p_d \,|\, {\cal N}(1,2,\ldots,d)\,
|\,h_1h_2\ldots h_d \rangle_a \;,\nonumber\\
W_{{\bf m},{\bf o}} &\equiv& W_{p_1p_2\ldots p_d \,h_1h_2\ldots h_d,0}\nonumber\\
&\equiv& \langle p_1p_2\ldots p_d \,|\, {\cal W}(1,2,\ldots,d)\,
|\,h_1h_2\ldots h_d \rangle_a \;.\qquad\;
\label{eq:NWop}
\end{eqnarray}
Diagrammatic representations of ${\cal N}(1,2,\ldots,d)$ and ${\cal
W}(1,2,\ldots,d)$ have the same topology \cite{CBF2}. In
homogeneous systems, the continuous parts of the $p_i,h_i$ are wave
numbers ${\bf p}_i,{\bf h}_i$; we abbreviate their difference as ${\bf
q}_i$.
In principle, the ${\cal N}(1,2,\ldots,d)$ and ${\cal
W}(1,2,\ldots,d)$ are non-local $d$-body operators. In the next
section, we will show that we need, for examining pairing phenomena,
only the two-body operators. Moreover, the low density of the systems
we are examining permits the same simplifications of the FHNC theory
that we have spelled out in Sec. \ref{ssec:FHNC}. In the same
approximation, the operators ${\cal N}(1,2)$ and ${\cal
W}(1,2)$ are local, and we have \cite{polish}
\begin{eqnarray}
{\cal N}(1,2) &=& {\cal N}(r_{12}) = \Gamma_{\rm dd}(r_{12})\nonumber\\
{\cal W}(1,2) &=& {\cal W}(r_{12})\,,\quad \tilde {\cal W}(k) =
- \frac{t(k)}{S_{\rm F}(k)}\tilde \Gamma_{\rm dd}(k)\,.
\label{eq:NWloc}
\end{eqnarray}
The most straightforward application of CBF theory is to calculate
corrections to the ground state energy. In second order we have,
for example
\begin{widetext}
\begin{equation}
\delta E_2 = -\frac{1}{4}\sum_{pp'hh'}\frac{
\left|\left\langle pp'\right|{\cal W}\left|hh'\right\rangle_a
+ \frac{1}{2}\left[t_p + t_{p'}-t_h -t_{h'}\right]
\left\langle pp'\right|{\cal N}\left|hh'\right\rangle_a\right|^2}
{t_p + t_{p'}-t_h -t_{h'}}\,.
\label{eq:E2CBF}
\end{equation}
\end{widetext}
The magnitude of the CBF correction is normally comparable to the correction
from three-body correlations \cite{polish}; it is also important to
note that there are significant cancellations between the
two terms in the numerator. We will show in Appendix \ref{app:lowdens}
that the CBF correction (\ref{eq:E2CBF}) corrects the coefficient
of the third term in the expansion (\ref{eq:lowdensfhnc0}) and leads
to the exact low-density limit (\ref{eq:lowdensFermi}).
\section{BCS Theory with correlated wave functions}
\subsection{General derivation}
\label{sec:CBCS}
We show in this section how the variational theory is adapted to the
description of the superfluid state. We restrict ourselves here to the
simplest case of $S$--wave pairing and show how the effective
interactions, which enter phenomenological theories as parameters, may
be calculated from first principles. This section reviews
the derivations of Refs. \onlinecite{CBFPairing,KroTrieste}.
The BCS theory of fermion superfluidity generalizes the
Hartree--Fock model $\left\{\Ket{\Phi_m}\right\} $ by introducing a
superposition of independent particle wave functions corresponding to
different particle numbers
\begin{equation}
\ket{{\rm BCS}} = \prod_{ {\bf k}}
(u_{ {\bf k}} + v_{ {\bf k}} a_{ {\bf k} ,\uparrow }^\dagger
a_{-{ {\bf k},\downarrow }}^\dagger)\ket{0} \, .
\label{8.6.1}
\end{equation}
The coefficient functions $u_{ {\bf k}}$ and $v_{ {\bf k}}$, known as
Bogoljubov amplitudes, describe the
distortion of the Fermi surface due to the pairing phenomenon.
To deal with strongly interacting systems, adequate provision must be
made for the singular or near--singular nature of the two--body
interaction $v(r)$ for small $r$. To build the required geometrical
correlations into the microscopic description of the system, we can
define a correlated BCS state, incorporating both short-ranged and BCS
correlations. We are faced with a formal mismatch, which prevents us
from simply applying the correlation factor $F_{\!N}$ to $\ket{\rm
BCS}$, since the former is defined in the $N$--particle Hilbert
space and the latter is a vector in Fock space with indefinite
particle number. The most natural way to deal with this is first
projecting the bare BCS state on an arbitrary member of a complete set
of independent--particle states with fixed particle numbers, then
applying the correlation operator to that state, normalizing the
result, and finally summing over all particle numbers. We must then
distinguish between correlation operators and normalization integrals
corresponding to different particle numbers $N$. Thus, the correlated
BCS state is written as
\begin{eqnarray}
\ket{\rm CBCS} &=& \prod_{ {\bf k}}
(u_{ {\bf k}} + v_{ {\bf k}} \alpha_{ {\bf k} ,\uparrow }^\dagger
\alpha_{-{ {\bf k},\downarrow }}^\dagger)\ket{\Phi_0}\nonumber\\
&=& \sum_{m,N} \ket {\Psi_m^{(N)}}
\langle\Phi_m^{(N)} \ket{\rm BCS} \,.
\label{8.6.25}
\end{eqnarray}
The trial state (\ref{8.6.25}) superposes the correlated basis states
$\ket {\Psi_m^{(N)}}$ with the same amplitudes with which the model
states $\ket{\Phi_m^{(N)}}$ enter the corresponding expansion of the
{\it original\/} BCS vector.
To derive the relevant equations we consider the expectation value of
an arbitrary operator $\hat O$ with respect to the
superfluid state:
\begin{equation}
\left\langle\hat O\right\rangle_s =
\frac{{\bra {\rm CBCS}} \hat O {\ket {\rm CBCS}}}
{\langle {\rm CBCS}\ket {\rm CBCS}} \, .
\label{8.6.26}
\end{equation}
One may pursue cluster--expansion and resummation methods for the
expectation values (\ref{8.6.26}) of the superfluid trial state
(\ref{8.6.25}); this has been done successfully for the one-- and
two--body density matrices corresponding to a slightly different
choice of the correlated BCS state~\cite{Fantonipairing}, which
exhibits, unfortunately, divergences for optimized correlation
functions. We do not follow this route, but instead consider the
interaction of only one Cooper pair at a time. The error introduced by
this is of order $\xi = (\Delta_F / e_{\rm F} )^2$, where $\Delta_F$
is the superfluid gap energy. We will demonstrate below that this
quantity is indeed small in the regime where the wave function
(\ref{8.6.25}) is appropriate.
To implement the decoupling approximation, it is sufficient to retain
the terms of {\it first order in the deviation\/} $v^2_{ {\bf k}} -
v_{0, {\bf k}}^2$ and those of {\it second order\/} in
$u_{ {\bf k}}v_{ {\bf k}}$. The calculation of $\bigl\langle \hat H - \mu
\hat N\bigr\rangle$ for correlated states \cite{CBFPairing} is
somewhat tedious; we give only the essential steps and the final
result. It is convenient to introduce the creators and annihilators
of correlated Cooper pairs,
\begin{eqnarray}
\beta_ {\bf k}^\dagger &=&
\alpha_{ {\bf k}\uparrow }^\dagger\alpha_{ - {\bf k}\downarrow }^\dagger
\nonumber\\
\beta_ {\bf k} &=&
\alpha_{ - {\bf k}\downarrow }\alpha_{ {\bf k} \uparrow } \, .
\label{5.7.8}
\end{eqnarray}
\begin{widetext}
In terms of these quantities, the expectation value of
an operator $\hat O$ is, to leading order in the amplitudes
$v^2_{ {\bf k}} - v_{0, {\bf k}}^2$ and $u_{ {\bf k}}v_{ {\bf k}}$
\begin{eqnarray}
&&\langle \hat O \rangle_s
= \Bigl\langle \Psi_0 \Bigr| \hat O^{(N)}\Bigl| \Psi_0 \Bigr\rangle
\nonumber\\
&+& \sum_{ k > k_{\rm F} } v_ {\bf k}^2
\Bigl\langle \Psi_0\, \beta_ {\bf k} \Bigr|
\left[ \hat O^{(N+2)} - O_{oo}^{(N)}\right]
\Bigl|\beta_ {\bf k}^\dagger \Psi_0 \Bigr\rangle
+ \sum_{ k < k_{\rm F} } u_ {\bf k}^2
\Bigl\langle \Psi_0 \, \beta_k^\dagger \Bigr|
\left[ \hat O^{(N-2)} - O_{oo}^{(N)}\right]
\Bigl|\beta_{ {\bf k}} \Psi_0 \Bigr\rangle
\nonumber\\
&+&\sum_{ k>k_{\rm F} , k' < k_{\rm F} } u_ {\bf k} v_ {\bf k} u_{ {\bf k}'} v_{ {\bf k}'}
\Bigl\langle \Psi_0 \Bigr|\left[\hat O^{(N)} - O_{oo}^{(N)}\right]
\Bigl|\beta_ {\bf k}^\dagger \beta_{ {\bf k}'}\Psi_0 \Bigr\rangle
\nonumber\\
&+& \sum_{ k > k_{\rm F} , k' > k_{\rm F} } u_ {\bf k} v_ {\bf k} u_{ {\bf k}'} v_{ {\bf k}'}
\Bigl\langle \Psi_0 \beta_ {\bf k} \Bigr|
\left[\hat O^{(N+2)} - O_{oo}^{(N)}\right]
\Bigl|\beta_{ {\bf k}'}^\dagger\Psi_0 \Bigr\rangle
\nonumber\\
&&+ \sum_{ k < k_{\rm F} , k' < k_{\rm F} } u_ {\bf k} v_ {\bf k} u_{ {\bf k}'} v_{ {\bf k}'}
\Bigl\langle \Psi_0 \beta_ {\bf k}^\dagger\Bigr|
\left[ \hat O^{(N-2)} - O_{oo}^{(N)}\right]
\Bigl|\beta_{ {\bf k}'} \Psi_0 \Bigr\rangle
\nonumber\\
&+&
\sum_{ k < k_{\rm F} , k' > k_{\rm F} } u_ {\bf k} v_ {\bf k} u_{ {\bf k}'} v_{ {\bf k}'}
\Bigl\langle \Psi_0 \beta_ {\bf k}^\dagger \beta_{ {\bf k}'}\Bigr|
\left[\hat O^{(N)} - O_{oo}^{(N)}\right]
\Bigl| \Psi_0 \Bigr\rangle\,.
\label{5.7.9}
\end{eqnarray}
In Eq. (\ref{5.7.9}), the operators $\hat O^{(N)}$, $\hat O^{(N-2)}$,
and $\hat O^{(N+2)}$ are the $N$-, $(N\!-\!2)$-, and
$(N\!+\!2)$-particle realizations of the operator $\hat O$, and
$O_{oo}^{(N)}$ is the expectation value of the operator $\hat O$ in
the $N$--particle correlated ground state. Inserting $\hat H - \mu \hat N$ for $\hat O$
into the expansion (\ref{5.7.9}), where $\mu $ is a Lagrange
multiplier (the chemical potential) introduced to adjust the average
particle number $\langle\hat N \rangle_s = N$, we recover the
effective interactions, overlap integrals (\ref{eq:NWop}), and
single--particle energies (\ref{eq:spectrum}) of section
\ref{ssec:CBF}, {\it e.g.}
\begin{eqnarray}
&&\bra {\Psi_0\, \beta_ {\bf k}}
\left[\hat H^{(N+2)} - \mu(N+2) - H_{oo}^{(N)} + \mu N\right]
\ket{ \beta_ {\bf k}^\dagger\,\Psi_0} = 2 [ e_k- \mu ]\,,
\label{5.7.10}\\
&&\bra {\Psi_0}\left[\hat H^{(N)}-\mu\hat N - (H_{oo}^{(N)}-\mu N)\right]
\ket{\beta_ {\bf k}^\dagger \beta_{ {\bf k}'}^{\phantom{\dagger}}\, \Psi_0}
\nonumber\\
&&=\bra { {\bf k}\uparrow,- {\bf k}\downarrow} W (1,2)
\ket { {\bf k}'\uparrow , - {\bf k}'\downarrow}_a
+ \left[ e (k) - e (k') \right]
\bra { {\bf k}\uparrow, - {\bf k}\downarrow}
N (1,2)\ket { {\bf k}'\uparrow, - {\bf k}'\downarrow}_a\,,
\label{5.7.11}\\
&&\bra {\Psi_0\beta_ {\bf k}^{\phantom{\dagger}}}\left[ \hat H^{(N+2)} -
\mu(N+2)- (H_{oo}^{(N)} - \mu N)\right]
\ket{\beta_{ {\bf k}'}^\dagger\Psi_0}
\nonumber\\
&& = \bra { {\bf k}\uparrow, - {\bf k}\downarrow} W(1,2)
\ket{ {\bf k}'\uparrow,- {\bf k}'\downarrow}_a
+ (e(k) + e(k') - 2\mu)
\bra{ {\bf k}\uparrow , - {\bf k}\downarrow} N(1,2)
\ket{ {\bf k}'\uparrow,- {\bf k}'\downarrow}_a\,.\nonumber\\
\label{5.7.12}
\end{eqnarray}
Accordingly, we may write the energy of the superfluid state
in the form
\begin{equation}
\langle \hat H - \mu \hat N \rangle_s = H_{oo}^{(N)} - \mu
N + 2 \sum_{ k>k_{\rm F} } v_k^2 (e_k- \mu ) + 2
\sum_{ k<k_{\rm F} } u_k^2 (e_k- \mu ) + \sum_{ {\bf k}, {\bf k}'}u_ {\bf k} v_ {\bf k}
u_{ {\bf k}'} v_{ {\bf k}'} {\cal P}_{ {\bf k}\kvec'}
\label{5.7.13}\end{equation}
with the ``pairing interaction''
\begin{eqnarray}
{\cal P}_{ {\bf k}\kvec'} &=& \bra{ {\bf k} \uparrow ,- {\bf k}\downarrow}
{\cal W}(1,2)\ket{ {\bf k}'\uparrow ,- {\bf k}'\downarrow}_a
+ (|e_k- \mu | + |e_{k'}- \mu |)
\bra{ {\bf k} \uparrow ,- {\bf k}\downarrow}
{\cal N}(1,2)\ket{ {\bf k}'\uparrow , - {\bf k}'\downarrow}_a\nonumber\\
&\equiv&{\cal W}_{ {\bf k}\kvec'}+(|e_k- \mu | + |e_{k'}- \mu |)
{\cal N}_{ {\bf k}\kvec'}\,.
\label{5.7.14}\end{eqnarray}
\end{widetext}
With the results (\ref{5.7.13}) and (\ref{5.7.14}), we have arrived at
a formulation of the theory which is formally identical to the BCS
theory for weakly interacting systems. Upon closer inspection (see the
next section) we will see that our formulation corresponds to a BCS
theory formulated in terms of the scattering matrix
\cite{PethickSmith}. The correlation operator serves here to tame the
short--range dynamical correlations; the effective interaction
${\cal W}(1,2)$ is just an energy-independent approximation
of the $T$-matrix.
We may now proceed in the conventional way to determine the
Bogoljubov amplitudes $u_{ {\bf k}}$, $v_{ {\bf k}}$ by variation of the
energy expectation value (\ref{5.7.13}), either to compute the
superfluid condensation energy or to investigate the local stability
of the normal ground state by second variation. Minimization
determines the BCS amplitudes $u_{ {\bf k}}$, $v_{ {\bf k}}$;
the CBCS gap equation becomes
\begin{equation}
\Delta_ {\bf k} = -\frac{1}{2}\sum_{ {\bf k}'} {\cal P}_{ {\bf k}\kvec'}
\frac{\Delta_{ {\bf k}'}}{\sqrt{(e_{ {\bf k}'}-\mu)^2 + \Delta_{ {\bf k}'}^2}}\,.
\label{eq:gap}
\end{equation}
The conventional, {\em i.e.\/} uncorrelated, BCS gap equation
\cite{FetterWalecka} is retrieved by replacing the effective
interaction ${\cal P}_{ {\bf k}\kvec'}$ by the matrix elements of the
bare interaction. Note that our ``decoupling approximation'' simply
means that we assume that the pairing interaction ${\cal
P}_{ {\bf k}\kvec'}$ does not depend on the Bogoljubov amplitudes.
This does {\em not\/} assume that the gap is small compared with the
Fermi energy.
\subsection{Analysis of the gap equation}
\label{ssec:Gapeq}
In the local approximations appropriate for low densities, the
effective interaction is given by Eqs. (\ref{eq:NWloc}).
The pairing matrix element is expressed in terms of the
Fourier-transforms $\tilde {\cal W}(k)$ and $\tilde {\cal N}(k)$:
\begin{equation}
{\cal W}_{ {\bf k}\kvec'}=\frac{1}{N}\tilde{\cal W}( {\bf k}- {\bf k}')\,,
\qquad{\cal N}_{ {\bf k}\kvec'}=\frac{1}{N}\tilde{\cal N}( {\bf k}- {\bf k}')\,.
\label{eq:Pdef}
\end{equation}
The remaining arguments are standard, {\em cf.\/}
Refs. \onlinecite{FetterWalecka,PethickSmith}: If the gap at the
Fermi surface is small, we can replace the
pairing interaction $\tilde{\cal W}(k)$ by its $s$-wave matrix
element at the Fermi surface,
\begin{equation}
\tilde {\cal W}_F \equiv \frac{1}{2 k_{\rm F}^2}\int_0^{2k_{\rm F}} dk k \tilde{\cal W}(k)
= N{\cal W}_{k_{\rm F},k_{\rm F}}\,.
\label{eq:V1S0}
\end{equation}
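The equivalence of the Fermi-surface angular average with the one-dimensional integral (\ref{eq:V1S0}) follows from the substitution $q\,dq = k_{\rm F}^2\,d\cos\theta$; a numerical check with an arbitrary test function $\tilde{\cal W}(q)=e^{-q^2/2}$:

```python
import numpy as np

# s-wave Fermi-surface average of a rotationally invariant W(|k - k'|)
# with |k| = |k'| = kF:
#   W_F = (1/2) \int_{-1}^{1} d(cos t) W(q),  q^2 = 2 kF^2 (1 - cos t),
# which equals (1/(2 kF^2)) \int_0^{2 kF} q W(q) dq, since
# q dq = kF^2 d(cos t).  Check with the test function W(q) = exp(-q^2/2).

def trap(f, x):                             # simple trapezoidal rule
    return float(np.sum(0.5*(f[1:] + f[:-1])*np.diff(x)))

kF = 1.0
W = lambda q: np.exp(-0.5*q**2)

c = np.linspace(-1.0, 1.0, 200001)          # cos(theta) grid
q_of_c = np.sqrt(2.0*kF**2*(1.0 - c))
avg_angular = 0.5*trap(W(q_of_c), c)

q = np.linspace(0.0, 2.0*kF, 200001)        # momentum-transfer grid
avg_radial = trap(q*W(q), q)/(2.0*kF**2)
```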
Then we can write the gap equation as
\begin{align}
1 = - \tilde {\cal W}_F\int\frac{ d^3k'}{(2\pi)^3\rho}
\Bigg[&\frac{1}{\sqrt{(e_{k'}-\mu)^2 + \Delta^2_{k_{\rm F}}}}\label{eq:gaplowdens}
\\
&- \frac{|e_{k'}-\mu|}{\sqrt{(e_{k'}-\mu)^2 + \Delta^2_{k_{\rm F}}}}
\frac{S_{\rm F}(k')}{t(k')}
\Bigg]\nonumber
\end{align}
which is almost identical to Eq. (16.91) in
Ref. \onlinecite{PethickSmith}. In particular, the only function of
the second term is to regularize the integral for large $k'$. We can,
therefore, immediately conclude that the zero-temperature gap is, in
this approximation, given by
\begin{equation}
\Delta_{F} = \frac{8}{e^2}e_{\rm F}\exp\left(\frac{\pi}{ 2a_Fk_{\rm F}}\right)\,,
\label{eq:GapApprox}
\end{equation}
with
\begin{equation}
\frac{4\pi\rho\hbar^2}{m}a_F \equiv \tilde {\cal W}_F\,.
\label{eq:aFdef}
\end{equation}
The low-density limit is then obtained by identifying $a_F$ with the
vacuum scattering length $a_0$:
\begin{equation}
\Delta_{F}^{(0)} = \frac{8}{e^2}e_{\rm F}\exp\left(\frac{\pi}{ 2a_0k_{\rm F}}\right)\,.
\label{eq:GapLowdens}
\end{equation}
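As an illustration of the structure of such gap equations (a numerical sketch, not the full CBCS equation (\ref{eq:gap})), one can solve the standard regularized contact-interaction gap equation \cite{PethickSmith} in units $\hbar=m=k_{\rm F}=1$ with $\mu$ fixed at $e_{\rm F}$; for small $|a_0|k_{\rm F}$ its solution approaches the weak-coupling form (\ref{eq:GapLowdens}):

```python
import numpy as np

# Numerical sketch (units hbar = m = kF = 1, so eF = 1/2; mu fixed at eF):
# the standard regularized contact gap equation for a < 0,
#   m/(4 pi hbar^2 a) = int d^3k/(2pi)^3 [ 1/(2 eps_k) - 1/(2 E_k) ],
#   eps_k = k^2/2,  E_k = sqrt((eps_k - mu)^2 + Delta^2),
# whose weak-coupling solution is Delta = (8/e^2) eF exp(pi/(2 a kF)),
# i.e. the low-density gap formula above with a -> a0.

mu = 0.5
k = np.linspace(1e-6, 120.0, 600001)
dk = k[1] - k[0]
eps = 0.5*k**2

def rhs(Delta):
    """Right-hand side integral; monotonically increasing in Delta."""
    E = np.sqrt((eps - mu)**2 + Delta**2)
    f = (k**2/(2.0*np.pi**2))*(1.0/(2.0*eps) - 1.0/(2.0*E))
    return float(np.sum(0.5*(f[1:] + f[:-1]))*dk)

def solve_gap(a):
    """Bisection on Delta for scattering length a < 0."""
    target = 1.0/(4.0*np.pi*a)          # negative for a < 0
    lo, hi = 1e-8, 2.0
    for _ in range(60):
        mid = 0.5*(lo + hi)
        if rhs(mid) > target:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)

a = -0.5                                 # a0 kF = -0.5, illustrative
gap = solve_gap(a)
gap_wc = (8.0/np.e**2)*0.5*np.exp(np.pi/(2.0*a))   # weak-coupling formula
```

For this coupling the bisection result agrees with the weak-coupling formula up to corrections of relative order $\Delta/e_{\rm F}$.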
Of course, our equations (\ref{5.7.14}) and (\ref{eq:gap}) are much more
general: At low densities, the subtraction term -- {\em i.e.\/} the
second term in the square bracket of Eqs. (\ref{eq:Pdef}) and
(\ref{eq:gaplowdens}) -- is important to regularize the integral for
large momentum transfers. At higher densities, where the finite range
of the interaction provides that momentum cutoff, the same term is
negligible since the energy numerator term $(|e_k- \mu | + |e_{k'}-
\mu |)$ is zero at the Fermi momentum.
By comparison with the low-density limit and Eq. (\ref{eq:afull}) we
interpret the constant
\begin{equation}
a \equiv \frac{m}{4\pi\rho\hbar^2}\tilde {\cal W}(0+)
\label{eq:amedium}
\end{equation}
as an ``in-medium'' scattering length. Hence, at finite densities,
one expects two types of corrections:
\begin{enumerate}
\item[(i)] Medium corrections: The effective pairing interaction
$\tilde {\cal W}(k)$ is related to $\tilde v_{\rm CW}(k)$ through
Eqs. (\ref{eq:FermiRPA0})--(\ref{eq:GFHNC}) which leads to
\begin{eqnarray}
{\cal W}(r) &=& v_{\rm CW}(r) + (1+\Gamma_{\!\rm dd}(r))w_{\rm I}(r)
\nonumber\\
&=&\left[1+ \Gamma_{\!\rm dd}(r)\right]\left[v(r)+w_{\rm I}(r)\right]\nonumber\\
&& + \frac{\hbar^2}{m}\Big|\nabla\sqrt{1+\Gamma_{\!\rm dd}(r)}\Big|^2\,.
\label{eq:Wofr}
\end{eqnarray}
Because of Eq. (\ref{eq:vcw0}) and the fact that the induced interaction
$w_{\rm I}(r)$ is of second order in the interaction, we conclude
that
\begin{equation}
a = a_0\left[1+{\cal O}(a_0k_{\rm F})\right]\,.
\end{equation}
At the same order, non-local contributions to the pairing interaction
(\ref{eq:NWloc}) \cite{CBF2} enter. These can be identified with
particle-hole ladder diagrams and vertex corrections. Topologically,
one of these diagrams corresponds to the polarization correction
identified by Gorkov {\em et al.\/} \cite{Gorkov}. Moreover, similar
to our analysis of the low-density limit, local correlation functions
will not get the right coefficient of the term proportional to
$(a_0k_{\rm F})^2$, hence CBF corrections to the pairing interaction
\cite{CBFPairing} will also lead to modifications of order $(a_0k_{\rm F})^2$.
\item[(ii)] The solution of the $s$-wave gap equation is dominated by
the matrix element (\ref{eq:V1S0}) of ${\cal W}(r)$ at the Fermi
surface which leads to the solution (\ref{eq:GapApprox}). Only if
$\tilde{\cal W}(k)$ is practically constant for $0\le k\le 2k_{\rm F}$ can we
identify $a_F$ with the in-medium scattering length $a$. The
dominant finite-range correction to the pairing interaction comes
from the kinetic energy term
$\frac{\hbar^2}{m}\left|\nabla\sqrt{1+\Gamma_{\!\rm
dd}(r)}\right|^2$. For distances much larger than $\sigma$ but
smaller than the average particle distance, this term is dominated
by the vacuum solution of the Euler equation, $\sqrt{1+\Gamma_{\!\rm
dd}(r)} = 1-\frac{a_0}{r}$. For distances larger than $1/k_{\rm F}$,
we obtain from Eqs.~(\ref{eq:FermiRPA0}) and (\ref{eq:GFHNC}) that
$\Gamma_{\!\rm dd}(r)$ falls off like
\begin{equation}
\Gamma_{\!\rm dd}(r) \sim -{9\over 8}{V_{\rm p-h}(0+)\over \hbar^2 k_{\rm F}^2/2m}
{1\over r^2k_{\rm F}^2}
\label{eq:GammaLong}\,.
\end{equation}
Consequently, the effective interaction $\tilde{\cal W}(k)$ is quadratic in
$k$ for $k\le k_{\rm F}$ and has the linear dependence
\begin{equation}
\tilde{\cal W}(k) = \frac{4\pi\rho a}{m}\left(1-\frac{\pi}{4}a
k\right)
\label{eq:Wofk}
\end{equation}
for $k>k_{\rm F}$. The variation of the pairing interaction between $k=0$
and $k=k_{\rm F}$ is interaction dependent and causes, as we shall see, a
correction that is larger than the ones due to the effects mentioned
above.
\end{enumerate}
All but one of the corrections discussed above are of
order $a_0k_{\rm F}$ and higher, {\em i.e.\/}
\begin{equation}
a = a_0\left[1 + \alpha\frac{a_0k_{\rm F}}{\pi}+\ldots\right]
\label{eq:aofa0}
\end{equation}
where $\alpha$ is a numerical constant. Among others, the polarization
correction discussed by Gorkov \cite{Gorkov} has this structure.
Inserting the above expansion in (\ref{eq:GapApprox}) changes the
pre-factor to
\begin{align}
\Delta_F &\approx
\frac{8}{e^2}\exp\Bigg(\frac{\pi}{2 a_0k_{\rm F}
\Big(1 + \alpha\displaystyle\frac{a_0k_{\rm F}}{\pi}\Big)}\Bigg)\nonumber \\
&=
\frac{8}{e^2}\exp\left(-\frac{\alpha}{2}\right)
\exp\left(\frac{\pi}{2a_0k_{\rm F}}\right)\,.
\label{eq:scaled}
\end{align}
In other words, to the extent that an expansion in powers of
$(a_0k_{\rm F})$ is legitimate, all of these corrections just lead to a
modified, but universal, pre-factor in Eq. (\ref{eq:GapLowdens}). This
does not apply to the finite-range correction of the pairing
interaction in the relevant regime $k\le k_{\rm F}$. One would, of course,
expect that this finite-range correction is of the same order of
magnitude but its value depends, as we shall see, on details of the
interaction. Hence, we conclude that the exponential behavior of
Eq. (\ref{eq:GapLowdens}) is universal whereas the pre-factor is not.
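To make the last step concrete, a short numerical sketch (illustrative only; the values of $\alpha$ and $a_0k_{\rm F}$ are arbitrary small numbers chosen by us, not fitted quantities) confirms that inserting the expansion (\ref{eq:aofa0}) into the exponent reproduces the rescaled pre-factor form of Eq. (\ref{eq:scaled}) up to terms of higher order in $a_0k_{\rm F}$:

```python
import numpy as np

# Check that a = a0*(1 + alpha*a0*kF/pi) turns exp(pi/(2*a*kF)) into
# exp(-alpha/2) * exp(pi/(2*a0*kF)) up to terms of order (a0*kF)^2.
alpha = 1.0    # model-dependent numerical constant (illustrative value)
a0kF = -0.01   # a0*kF, small and negative (attractive interaction)

akF = a0kF * (1.0 + alpha * a0kF / np.pi)         # in-medium a*kF
log_full = np.pi / (2.0 * akF)                    # exponent with in-medium a
log_scaled = -alpha / 2.0 + np.pi / (2.0 * a0kF)  # rescaled pre-factor form

# The two exponents agree up to O(a0*kF):
print(abs(log_full - log_scaled))  # ~ alpha**2 * |a0kF| / (2*pi), i.e. small
```

The residual difference is of order $\alpha^2 a_0k_{\rm F}/(2\pi)$, consistent with the statement that only the pre-factor, not the exponential, is modified.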
\section{Results}
\label{sec:results}
\subsection{Energetics}
We have examined in this paper two model potentials, namely a Lennard-Jones
(LJ) potential
\begin{equation}
V_{\rm LJ} = 4\epsilon \left[\left(\frac{\sigma}{r}\right)^{12}
-\left(\frac{\sigma}{r}\right)^{6}\right]
\label{eq:VLJ}
\end{equation}
and an attractive square well (SW) potential
\begin{equation}
V_{\rm SW}(r) =
\begin{cases}
-\epsilon & \text{if}\quad r < \sigma\,, \\
\phantom{-} 0& \text{if}\quad r > \sigma\,.\\
\end{cases}
\end{equation}
Both potentials are parametrized by a typical range $\sigma$ and the
depth of the attractive well $\epsilon$. In both cases, we measure
energies in units of $\hbar^2/2 m \sigma^2$, and lengths in units of
$\sigma$. Thus, the interaction strength $\epsilon$ and the density
are the only free parameters.
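For reference, the two model potentials can be transcribed directly into a short Python sketch (here with $\sigma=1$; this is a plain transcription of the formulas above, not part of our numerical machinery):

```python
import numpy as np

def v_lj(r, eps):
    """Lennard-Jones potential, Eq. (VLJ), with sigma = 1."""
    return 4.0 * eps * (r**-12 - r**-6)

def v_sw(r, eps):
    """Attractive square-well potential with sigma = 1."""
    return np.where(r < 1.0, -eps, 0.0)

# The LJ minimum sits at r = 2**(1/6) with depth -eps:
r_min = 2.0**(1.0 / 6.0)
print(v_lj(r_min, 1.0))  # approximately -1.0 (the well depth)
```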
Our choice of interactions provides effective potentials designed to
avoid the instability against clustering that exists for realistic
alkali interactions, but otherwise to be close to a realistic
situation. The simplest connection to real interactions is provided by
the vacuum $s$-wave scattering length $a_0$. This procedure is
legitimate because, in the low-density limit, many observable properties of
these gases, such as the energy (\ref{eq:lowdensFermi}), indeed depend
only on the $s$-wave scattering length \cite{HuangYang57,Landau5}.
For higher densities this ``universal'' behavior ceases. It is the
purpose of our calculation to explore that area, and also study the
model dependence, by comparing results for the LJ and SW model. To
make contact with low-density expansions, as well as to determine the
range of ``universal behavior'', we shall use the $s$-wave scattering
length $a_0$ instead of the well-depth $\epsilon$ to characterize the
potential.
\begin{figure}
{\includegraphics[width=1.0\columnwidth]{scatplot.eps}}
\caption{(color online) The plot shows the scattering length $a_0$ as
a function of the interaction strength for the LJ (red, solid) and
the SW (blue, dashed) potential. The vertical lines (at $\epsilon =
11.18$ for LJ and $\epsilon = 4.336$ for SW) indicate the
interaction strength where a two-body bound state appears. The dots
on the lines indicate the interaction strengths corresponding to
vacuum scattering lengths $a_0/\sigma = -0.5, -1.0,\ldots, -4.0$ for
which we highlight the Landau parameter and the in-medium scattering
lengths in Figs.~\ref{fig:x0hc} and \ref{fig:x0lj}.}
\label{fig:scatplot}
\end{figure}
For the SW potential, $a_0$ is given by
$a_0=\frac{1}{\kappa}(\kappa\sigma - \tan\kappa\sigma)$ where
$\kappa=\sqrt{m\epsilon/\hbar^2}$. For the LJ potential, $a_0$ must
be obtained numerically. We show $a_0$ in Fig.~\ref{fig:scatplot} as
a function of the potential well depth $\epsilon$. The attractive SW
potential has negative scattering length for an interaction strength
below the first resonance, whereas the LJ potential has a positive
$a_0$ below $\epsilon=4.336$, indicating an effectively repulsive
interaction. Thus, when the interaction strength $\epsilon$ of the LJ
potential is raised starting from zero, we find three regimes. (i)
For $\epsilon\,<\,4.336$ we have $a_0\,>\,0$ and there is no bound
state; the many-body ground state is a normal Fermi gas. (ii) For
$4.336<\epsilon<11.18$ we have $a_0\,<\,0$. There is still no two-body
bound state; due to the effectively attractive potential, the
many-body ground state in the low density limit is expected to be a
BCS state. At $\epsilon\,=\,11.18$ the LJ potential has a resonance
at zero scattering energy. (iii) For $11.18\,<\,\epsilon$, $a_0$
becomes positive again, and the potential supports one two-body bound
state; the many-body ground state in the low density limit is expected
to be a BEC state of a molecular Bose gas. All these states of Fermi
gases have been studied extensively in experiments with ultracold
alkali gases as discussed in the introduction. In a previous
paper~\cite{ljium} we have already studied both $a_0\,>\,0$ and
$a_0\,<\,0$ for fermions, and $a_0\,>\,0$ for bosons. In that work we
have also examined more sophisticated versions of the FHNC-EL method
and have concluded that these are necessary only at densities
comparable to that of liquid helium; the reader is referred to that
work to assess the range of densities for which the very simple
version of the theory spelled out in Sec. \ref{ssec:FHNC} is
reliable. In short, the accuracy of our energy calculations is
expected to be better than 1 percent below a density of $\rho =
10^{-2}\,\sigma^{-3}$, whereas the error of the simple FHNC-EL version is
about 10 percent as the density is increased to $\rho = 0.4\,\sigma^{-3}$,
which is close to the freezing density of \he3.
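The analytic SW formula quoted below Fig.~\ref{fig:scatplot} is easy to evaluate. The following sketch (with $\hbar=m=\sigma=1$, so $\kappa=\sqrt{\epsilon}$; a simplified unit convention chosen for illustration, not the one used in our figures) reproduces the qualitative behavior discussed above: a small negative $a_0$ at weak coupling and a divergence as the first two-body bound state appears at $\kappa\sigma=\pi/2$:

```python
import numpy as np

def a0_sw(eps, sigma=1.0):
    """Vacuum s-wave scattering length of the square well,
    a0 = (kappa*sigma - tan(kappa*sigma))/kappa, kappa = sqrt(m*eps)/hbar.
    Units here: hbar = m = 1, so kappa = sqrt(eps)."""
    kappa = np.sqrt(eps)
    return (kappa * sigma - np.tan(kappa * sigma)) / kappa

# Below the first resonance (kappa*sigma < pi/2) the scattering length
# is negative; it diverges as kappa*sigma -> pi/2, where the first
# two-body bound state appears.
print(a0_sw(1.0))                     # small and negative
print(a0_sw((0.999 * np.pi / 2)**2))  # large and negative, near resonance
```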
We focus in the present work on the effect of attractive interactions
($a_0\,< 0\,$). We have solved
Eqs. (\ref{eq:FermiRPA0})-(\ref{eq:GFHNC}) on a mesh of $2^{18}$
points, with a resolution of 30 points between $r=0$ and $r=\sigma$,
amounting to a box size of 8732$\,\sigma$. Such a huge box size is
necessary to obtain a reasonable momentum space resolution at the very
low densities we are considering here: Note that a Fermi wave number
of $10^{-3}\sigma^{-1}$ corresponds to a wavelength of $6000\sigma$,
hence this box size is the bare minimum of what one should take to
resolve features of the order of the Fermi wave number. All our
calculations are done for the range of interaction strength where
there is no two-body bound state, {\em i.e.\/} before the first
resonance of $a_0$ appears (indicated by vertical lines in
Fig. \ref{fig:scatplot}).
Our equation of state for the two potential models is shown in
Figs. \ref{fig:eoshc} and \ref{fig:eoslj}. To recover the exact
low-density limit (\ref{eq:lowdensFermi}), we have added the
second-order CBF correction (\ref{eq:E2CBF}). To emphasize the
interaction terms, we have subtracted the kinetic energy $E_{\rm kin}
= \frac{3}{5} e_{\rm F} N$. We have
normalized the equation of state to the expansion
(\ref{eq:lowdensFermi}), thus, Figs. \ref{fig:eoshc} and
\ref{fig:eoslj} show only the model-dependent correction to the
equation of state. Omitting the CBF correction (\ref{eq:E2CBF}) and
comparing to the low-density expansion (\ref{eq:lowdensfhnc0}) gives
practically the same results; they are therefore not shown.
Figs. \ref{fig:eoshc} and \ref{fig:eoslj} already show a visible model
dependence of the equation of state at approximately $a_0k_{\rm F} \ge
0.01$, where the third term in the expansion (\ref{eq:lowdensFermi}) is
approximately $10^{-4}$. In other words, for both interactions the
model-dependent corrections are of the same order of magnitude as
the third term in the expansion (\ref{eq:lowdensFermi}). For both interactions
we observe that, depending on the interaction strength, the equation
of state begins to deviate strongly from a simple, smooth power law
for $a_0k_{\rm F} > 0.02$.
\begin{figure}
\centerline{\includegraphics[width=0.7\columnwidth,angle=-90]{eoshc.eps}}
\caption{(color online) The plot shows the interaction contribution to the
equation of state, normalized to the low density expansion, {\em
i.e.} the second and third term in the expansion
(\ref{eq:lowdensFermi}) for the square-well potential. We show
results for scattering lengths $ a_0/\sigma = -1,-2,-3,-4$,
the symbols indicate the numerical values and the curves
a second-order polynomial fit of the form $E/E_0 = 1+\alpha (a_0k_{\rm F})
+ \beta (a_0k_{\rm F})^2$. $E_{\rm HY}$ refers to the expansion
(\ref{eq:lowdensFermi}).
\label{fig:eoshc}}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.7\columnwidth,angle=-90]{eoslj.eps}}
\caption{(color online) Same as Fig.~\ref{fig:eoshc} for the Lennard-Jones
model of the interaction.
\label{fig:eoslj}}
\end{figure}
The most interesting feature we observe is that the FHNC--EL equations
cease to have solutions at sufficiently large values of the density or
of $-a_0$. Such a limit is expected for sufficiently attractive
interactions: The Fermi gas is, in the low density limit, stabilized
by the Pauli pressure. As the density increases, the energy per
particle becomes negative and the static incompressibility
\begin{equation}
mc^2 = \frac{d}{d\rho}\rho^2\frac{d}{d\rho}\frac{E}{N}\,,
\label{eq:mc2}
\end{equation}
where $c$ is the hydrodynamic speed of sound, goes to zero.
Such an effect has already been reported by Owen \cite{OwenVar}.
$mc^2\to 0$ indicates a spinodal instability, where the system
separates into a low and a high density phase. On the other
hand, it is widely accepted that a low density two-component Fermi gas
is subject to dimerization close to the unitary limit, $-a_0\to\infty$.
In the following we will argue that dimerization indeed occurs,
but well before the limit $-a_0\to\infty$. Instead,
dimerization is accompanied by the divergence of the in-medium scattering
length.
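The incompressibility (\ref{eq:mc2}) is straightforward to evaluate by finite differences from any equation of state. As a minimal illustration (ours, not part of the published calculation) we use only the free Fermi gas part, for which $mc^2=\frac{2}{3}e_{\rm F}$ exactly:

```python
import numpy as np

def e_per_n_free(rho):
    """Energy per particle of the free Fermi gas (hbar = m = 1):
    E/N = (3/5)*e_F with e_F = kF**2/2 and kF = (3*pi**2*rho)**(1/3)."""
    kf = (3.0 * np.pi**2 * rho)**(1.0 / 3.0)
    return 0.3 * kf**2

def mc2(e_of_rho, rho, h=1e-5):
    """Static incompressibility mc^2 = d/drho [rho**2 d(E/N)/drho],
    evaluated by central finite differences with relative step h."""
    def flux(r):
        # rho**2 * d(E/N)/drho at r
        return r**2 * (e_of_rho(r * (1 + h)) - e_of_rho(r * (1 - h))) / (2 * r * h)
    return (flux(rho * (1 + h)) - flux(rho * (1 - h))) / (2 * rho * h)

rho = 1e-3
e_f = 0.5 * (3.0 * np.pi**2 * rho)**(2.0 / 3.0)
print(mc2(e_per_n_free, rho) / e_f)  # ~ 2/3 for the free gas (exact analytic value)
```

An interacting $E/N(\rho)$ from the solution of the FHNC-EL equations can be passed to `mc2` in exactly the same way; the spinodal point is where the result crosses zero.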
Let us examine the question of stability from the point of view of the
existence of solutions of the FHNC-EL equations: {\em In general,\/}
the FHNC-EL equations cease to have solutions if the assumed wave
function is unstable against small perturbations. This is most clearly
seen for the case of density fluctuations: The term under the square
root of Eq. (\ref{eq:FermiRPA0}) must be positive. In the limit
$k\rightarrow 0+$ this leads to the condition
\begin{equation}
1+F_0^s \equiv
1 + \frac{4 m}{\hbar^2 k_{\rm F} ^2}\left(\frac{3}{4}\right)^2 \tilde V_{\rm p-h}(0+)
\rightarrow 1 + \frac{3}{\pi}a_0 k_{\rm F} > 0\,,
\label{eq:mc2coll}
\end{equation}
where the right-most expression is the low density limit, $a_0k_{\rm F}\ll 1$, in which
$F_0^s$ is small and the in-medium scattering length $a$ is well
approximated by the vacuum scattering length $a_0$, see Eq. (\ref{eq:mc2full}).
This limit can be regarded as the low density limit of the particle-hole
interaction,
\begin{eqnarray}
\tilde V_{\rm p-h}(0+)
&=& \tilde v_{\rm CW}(0+) + \rho\int d^3r \Gamma_{\!\rm dd}(r)w_{\rm I}(r)
\label{eq:Vphrho}\\
&
\rightarrow& \frac{4\pi\rho\hbar^2}{m} a_0
\quad\mbox{as}\quad\rho\rightarrow 0
\nonumber\end{eqnarray}
where we used that the induced interaction is of second order in the
bare interaction.
Hence, the system can be driven into an instability by
holding the potential fixed and simply increasing the density factor
in Eq. (\ref{eq:Vphrho}). Note that the above stability limit
(\ref{eq:mc2coll}) is valid for the local correlation operator
(\ref{eq:wavefunction}). In an improved calculation that does not
rely on a local correlation operator but rather includes CBF corrections
to all orders \cite{polish} the stability condition would
read
\begin{equation}
1 + \frac{3 m}{\hbar^2 k_{\rm F} ^2} \tilde V_{\rm p-h}(0+)
= 1 + \frac{2}{\pi}a k_{\rm F} > 0\,.
\label{eq:mc2full}
\end{equation}
The condition is immediately recognized as the stability condition of
Landau's Fermi Liquid theory, $F_0^s > -1$. Note that the ground state
theory formulated in Eqs. (\ref{eq:FermiRPA0})-(\ref{eq:GFHNC}) does
not contain self-energy insertions (called ``cyclic-chain'' diagrams
in the FHNC theory), which means that the effective mass is equal to
1. According to our findings in Ref. \onlinecite{ljium}, this is an
acceptable approximation at the low densities under consideration
here. Since the interaction contribution to $F_0^s$ is, at low
densities, proportional to $k_{\rm F}$, an instability against density
fluctuations could occur for sufficiently attractive interactions. This
has the consequence that the FHNC-EL equations cease to have a
solution.
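The two low-density stability conditions can be summarized in a few lines (an illustrative sketch of Eqs. (\ref{eq:mc2coll}) and (\ref{eq:mc2full}) in their limiting forms, not a replacement for the full FHNC-EL evaluation):

```python
import numpy as np

def stable_local(a0_kf):
    """Stability condition (mc2coll) for the local correlation operator,
    low-density form: 1 + (3/pi) * a0 * kF > 0."""
    return 1.0 + 3.0 * a0_kf / np.pi > 0.0

def stable_landau(a_kf):
    """Landau stability condition (mc2full), F_0^s > -1,
    low-density form: 1 + (2/pi) * a * kF > 0."""
    return 1.0 + 2.0 * a_kf / np.pi > 0.0

# The local form predicts a spinodal already at a0*kF = -pi/3 ~ -1.05,
# the Landau form at a*kF = -pi/2 ~ -1.57:
print(stable_local(-1.0), stable_local(-1.1))    # True False
print(stable_landau(-1.5), stable_landau(-1.6))  # True False
```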
We indeed found an instability of the solutions of the FHNC-EL
equations; however, this instability does {\em not\/} appear to be
caused by $F_0^s \rightarrow -1$. In fact, in our calculations we were
not able to come close to the point of spinodal instability. We show
in Figs.~\ref{fig:x0hc} and \ref{fig:x0lj} the value of $F_0^s$ as a
function of density for a family of potential strengths. For the SW
potential, we have, depending on the coupling strength $\epsilon$,
been able to solve the FHNC-EL equations from
$\rho=3.4\times 10^{-11}\sigma^{-3}$ up to a critical density that
depends on the coupling strength. At that density, $F_0^s$ begins to
drop very rapidly but it does not appear to approach the critical
value of $-1$ for a spinodal instability, see Fig.~\ref{fig:x0hc}. We
also note that an instability signified by $F_0^s\rightarrow -1$ means a
transition to a state with non-uniform density, which is not what
has been observed.
\begin{figure}
\centerline{\includegraphics[width=0.7\columnwidth,angle=-90]{f0s_hc.eps}}
\caption{(color online) The plot shows the dependence of the Landau
parameter $F_0^s$ for the attractive square-well potential as a
function of the density for a sequence of coupling strengths
$\epsilon = 0.1, 0.2, \ldots, 4.6$ (blue, dashed curves) and vacuum
scattering lengths $a_0/\sigma = -0.5,-1.0,\ldots,-4.0$ (red, solid
curves) that correspond to the dots in Fig. \ref{fig:scatplot}. The
curve that ends at the lowest density corresponds to the strongest
interaction $\epsilon = 4.6,\ a_0/\sigma=-11.2\,$, whereas the ones
corresponding to the weak interactions $0.1\le\epsilon\le 0.7$ are
stable beyond a density of $\rho=0.4\sigma^{-3}$.
\label{fig:x0hc}}
\end{figure}
Whereas a system of fermions interacting with an attractive SW potential
exhibits an instability as the density is increased, the
LJ model interaction leads, due to its repulsive core, to a
richer phase diagram that also features a high-density condensed
phase -- the liquid phase of $^3$He. At low densities, we see the same
picture as for the SW potential, namely that the system is
stabilized by the Pauli pressure, but the FHNC-EL equations cease to
have solutions above a certain density where the Landau parameter
$F_0^s$ is still far from its critical value $F_0^s=-1$. The situation
is different in the high-density regime: For sufficiently strong
interactions, the system also can develop a high-density condensed
phase. The noteworthy feature is that one can get much closer to the
spinodal instability limit $F_0^s=-1$ from above than from below,
where $F_0^s$ always stays significantly above $-1$.
\begin{figure}
\centerline{\includegraphics[width=0.7\columnwidth,angle=-90]{f0s_lj.eps}}
\caption{(color online) Same as Fig. \ref{fig:x0hc} for
the Lennard-Jones model of the interaction. The blue (dashed) lines
correspond to coupling strengths $\epsilon = 7.0, 9.0, 9.2, 9.3,\ldots, 9.9$.
Note that the
Lennard-Jones model also supports a high-density condensed phase:
The curves for the interaction strengths
$\epsilon = 7.0\ (a_0/\sigma=-1.12)$ (blue, dashed),
$\epsilon = 7.51\ (a_0/\sigma=-1.5)$ and $\epsilon = 8.01\
(a_0/\sigma=-2)$ (both red, solid) are discontinuous at high
density.
\label{fig:x0lj}}
\end{figure}
To examine the nature of the instability, we show in
Fig.~\ref{fig:amed_all} the in-medium scattering length $a$, see
Eq.~(\ref{eq:amedium}). The ratio $a/a_0$ is shown as a function of
$-k_{\rm F}a_0$ for increasing values of $|a_0|$, $a_0/\sigma=-0.5,-1.0,\dots,-4.0$
for the SW model and $a_0/\sigma=-1.5,-2.0,\dots,-4.0$ for the LJ model. We
do not show $a_0/\sigma=-0.5$ and $-1.0$ for the LJ model, because the system
is stable for all densities, as discussed above. Similar to the vacuum
scattering length, the in-medium scattering length exhibits a
singularity; however, the location of the singularity depends
obviously on both the density and the interaction model. Evidently,
medium corrections to the effective interactions $\tilde{\cal W}(k)$
have the effect that the in-medium scattering length exhibits a
singularity as $k_{\rm F}$ is increased. The larger $|a_0|$, the smaller
the critical $k_{\rm F}$ value where the divergence happens, see also
the next figure. $a/a_0$ is universal only for very low densities,
where all curves merge into a single curve converging towards unity in
the zero density limit. For finite density, $a/a_0$, and thus $k_{\rm F}
a$, depends on $a_0$, $k_{\rm F}$, and the interaction model, and not just
on the dimensionless parameter $k_{\rm F}a_0$.
The appearance of such an effect is not surprising because the leading
correction to the bare interaction is phonon-exchange which is
attractive, leading to an earlier divergence of the in-medium
scattering length $a$ than the bare vacuum scattering length $a_0$ as
the density is increased. Thus, we are led to the conclusion that the
instability found in our calculations is an indication of
phonon-induced dimerization. Further evidence for the appearance of a
phonon-exchange induced dimerized phase will be provided in the next
section where we discuss distribution functions. Since a dimerized
phase is not within the space of wave functions
(\ref{eq:wavefunction}), the FHNC-EL equations simply cease to have a
solution. A more appropriate variational wave function is perhaps the
``Pfaffian'' form suggested, among others, in
Refs. \cite{YangClarkBCS, astraPRL04, bajdichPRL06, PhysRevB.77.115112}.
Similar dimerization effects have been observed in two-dimensional
$^3$He-$^4$He mixtures, in double-layers of bosonic dipoles
\cite{PhysRevA.87.033624}, and also in the process of
$\alpha$-clustering of nuclear matter \cite{alpha,alpha2}.
\begin{figure}
\centerline{\includegraphics[width=1.0\columnwidth]{ainvzil.eps}}
\caption{(color online) The ratio between the in-medium scattering
length $a$, Eq. (\ref{eq:amedium}), and the vacuum scattering length
$a_0$ as function of $-k_{\rm F}a_0$, for both the LJ (full line) and the
SW potential (dashed line). The different curves correspond to
different values of $a_0/\sigma=-0.5,-1.0,\dots,-4.0$ (SW) and
$a_0/\sigma=-1.5,-2.0,\dots,-4.0$ (LJ), with the higher curves
corresponding to larger $|a_0|$. The deviation of $a/a_0$ from unity
is universal only for low $k_{\rm F}$. The divergence of $a/a_0$
indicates dimerization.
\label{fig:amed_all}}
\end{figure}
Fig.~\ref{fig:amed_all} shows essentially the same scenario for the SW
and the LJ model. At a given density, the critical scattering length
$a_0$ where dimerization occurs is model-dependent. But the
difference becomes smaller as the density is reduced. In order to
determine the critical scattering length, we have extrapolated the
in-medium scattering length to the point where it diverges. The result
for both interactions is shown in Fig. \ref{fig:acrit}, where we plot,
as a function of $k_{\rm F}\sigma$, the inverse of the critical scattering
length at which the system becomes unstable, for the SW and the LJ
interaction. The two curves approach each other as $k_{\rm F}\sigma$ drops
well below 0.1, which shows that the critical value of $\sigma/a_0$ becomes
universal in the low density limit realized in ultracold quantum
gases, although, as a consequence of many-body effects, this
critical value of $\sigma/a_0$ is not zero.
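The extrapolation used to locate the divergence can be sketched as follows. The data points below are synthetic and for illustration only; the working assumption is that the inverse ratio $a_0/a$ vanishes roughly linearly in $k_{\rm F}$ near the critical point:

```python
import numpy as np

# Synthetic illustration: suppose the inverse ratio a0/a, computed at a
# few densities, decreases roughly linearly and vanishes at the critical kF.
kf = np.array([0.010, 0.015, 0.020, 0.025])     # kF*sigma (synthetic)
a0_over_a = np.array([0.62, 0.47, 0.31, 0.16])  # synthetic values

slope, intercept = np.polyfit(kf, a0_over_a, 1)
kf_crit = -intercept / slope  # a0/a -> 0, i.e. a -> infinity
print(kf_crit)                # extrapolated critical kF*sigma
```

With the actual FHNC-EL output for $a(k_{\rm F})$ in place of the synthetic points, the root of the fit gives the critical density plotted in Fig.~\ref{fig:acrit}.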
\begin{figure}
\centerline{\includegraphics[width=1.0\columnwidth]{acrit.eps}}
\caption{(color online) The figure shows, for the square-well (blue line
with circular markers) and the Lennard-Jones (red line with circular
markers) potential models, the inverse of the critical
value of the vacuum scattering length $a_0$ below which a
non-dimerized Fermi liquid phase exists.
\label{fig:acrit}}
\end{figure}
\subsection{Distribution Functions}
\label{ssec:dists}
The pair distribution functions for parallel and anti-parallel spins,
$g_{\uparrow\uparrow}(r)$ (\ref{eq:guu}) and $g_{\uparrow\downarrow}(r)$ (\ref{eq:gud}),
contain information about correlations due to Fermi-statistics and,
more interestingly, due to interactions. The latter are captured by
the direct-direct correlation function $\Gamma_{dd}(r)$, which we show
in Fig.~\ref{fig:glj839hc372} for the LJ and SW models.
The respective potential strengths were chosen such that the
scattering length was $a_0=-2.5\,\sigma$. $\Gamma_{dd}(r)$ is shown
for three Fermi wave numbers $k_{\rm F}\sigma=0.001, 0.01,$ and $0.04$. One
can discern two regimes: the asymptotic regime where $\Gamma_{dd}(r)$
falls off as $1/r^2$ for $rk_{\rm F}\gtrsim 1$ due to many-body effects, see
Eq.~(\ref{eq:GammaLong}); and an intermediate regime $k_{\rm F}\sigma
\lesssim rk_{\rm F}\lesssim 1$ where $r$ is smaller than the average
particle distance and $\Gamma_{dd}(r)$ falls off like $1/r$ as
expected from two-body scattering in vacuum. The behavior in the
asymptotic regime can be obtained from the $k\to 0$ limit of
Eqs. (\ref{eq:FermiRPA0}) and (\ref{eq:GFHNC}) which leads to
Eq. (\ref{eq:GammaLong}). In the low-density limit, the speed of sound
is obtained from the equation of state (\ref{eq:lowdensFermi}). Since
only the speed of sound enters, the asymptotic form of
$\Gamma_{dd}(r)$ is independent of the interaction model. In
Fig.~\ref{fig:glj839hc372} this asymptotic behavior is illustrated by
straight lines.
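Using the low-density limit $\tilde V_{\rm p-h}(0+)\to 4\pi\rho\hbar^2a_0/m$, Eq.~(\ref{eq:GammaLong}) reduces to $\Gamma_{dd}(r)\sim -3a_0/(\pi k_{\rm F} r^2)$, while the vacuum form $(1-a_0/r)^2-1$ gives $\Gamma_{dd}(r)\approx -2a_0/r$ at intermediate distances. A short sketch (our simplification, not part of the original calculation) confirms that the two forms cross at $rk_{\rm F}$ of order unity:

```python
import numpy as np

a0, kf = -2.5, 0.01  # a0/sigma and kF*sigma, as in the figure

def gamma_vacuum(r):
    # intermediate regime: (1 - a0/r)**2 - 1 ~ -2*a0/r for r >> |a0|
    return -2.0 * a0 / r

def gamma_phonon(r):
    # asymptotic regime, Eq. (GammaLong) in the low-density limit
    return -3.0 * a0 / (np.pi * kf * r**2)

# The two forms match where r*kF = 3/(2*pi), i.e. at r*kF of order unity:
r_cross = 3.0 / (2.0 * np.pi * kf)
print(r_cross * kf)                                              # 3/(2*pi) ~ 0.48
print(np.isclose(gamma_vacuum(r_cross), gamma_phonon(r_cross)))  # True
```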
\begin{figure}
\centerline{\includegraphics[width=1.0\columnwidth]{glj839hc372.eps}}
\caption{(color online) The direct-direct correlation function
$\Gamma_{dd}(r)$ for $a_0/\sigma=-2.5$ and $k_{\rm F}\sigma=0.001, 0.01,$
and $0.04$ for the LJ model (upper pane) and the SW model (lower
pane). Straight lines indicate the asymptotic $1/r^2$ behavior
given by Eq. (\ref{eq:GammaLong}) and another straight line
indicates the $1/r$ behavior for intermediate distances.
\label{fig:glj839hc372}}
\end{figure}
For low densities, the spin-parallel pair distribution function
$g_{\uparrow\uparrow}(r)$ is dominated by Fermi statistics. The Pauli exclusion
principle ensures that $g_{\uparrow\uparrow}(0)=0$, regardless of interactions.
This is guaranteed by the statistical factor $1-\ell^2(rk_{\rm F} )$ in
Eq. (\ref{eq:guu}) which suppresses $g_{\uparrow\uparrow}(r)$ for $rk_{\rm F}\lesssim
1$, thus screening the interaction. Note also that
$\lim_{r\rightarrow 0}\left[C(r) +(\Delta \tilde X_{\rm
ee})_1(r)\right] = 0$. Therefore, the interaction effectively
plays no role in a dilute gas of spin-polarized fermions, which has
also been established in many experiments. For example, for $k_{\rm F}\sigma
=0.01$, the curves for $g_{\uparrow\uparrow}(r)$ are almost indistinguishable from
each other and from the distribution function of non-interacting
fermions for a wide sequence of values of the scattering length
between $a_0=-0.5$ and $-4.0$. $g_{\uparrow\uparrow}(r)$ is essentially identical
to the spin-parallel pair distribution of free fermions
$g^F_{\uparrow\uparrow}(r)$; this has also been verified by QMC
results~\cite{astraPRL04,morrisPRA10}. Thus, little or no non-trivial
information can be obtained from $g_{\uparrow\uparrow}(r)$, and we therefore
refrain from further discussing or showing this quantity.
\begin{figure}
{\includegraphics[width=1.0\columnwidth]{gant.eps}}
\caption{(color online) $g_{\uparrow\downarrow}(r)$ for $k_{\rm F} =0.01/\sigma$ and
$a_0/\sigma=-0.5, -1.0, -1.5, \ldots, -4.0$\,. The uppermost curve corresponds to
the largest value of $|a_0|$. We show results for the LJ (solid
red lines) and SW (dashed blue lines) potential. The inset shows
$g_{\uparrow\downarrow}(r)-1$ on a logarithmic scale to illustrate that for
$r\gtrsim 3\sigma$, $g_{\uparrow\downarrow}(r)-1$ is independent of the
interaction model.}
\label{fig:gant}
\end{figure}
The anti-parallel pair distribution function $g_{\uparrow\downarrow}(r)$ is not
suppressed by statistical correlations. Instead, $g_{\uparrow\downarrow}(r)$ is
dominated by correlation effects, which are described by the
direct--direct correlation function $\Gamma_{\!\rm dd}(r)$,
Eq.~(\ref{eq:GFHNC}). The effectively attractive interaction,
$a_0\,<\,0$, leads to an enhancement of $g_{\uparrow\downarrow}(r)$ as $r$ is
reduced. We show in Fig.~\ref{fig:gant} $g_{\uparrow\downarrow}(r)$ for $k_{\rm F}\sigma
=0.01$ and $a_0/\sigma=-0.5, -1.0,\ldots, -4.0$, for both the SW and
the LJ potential. For small distances $r$, $g_{\uparrow\downarrow}(r)$ is
dominated by the interaction potential, leading to a large value at
$r=0$ for the SW potential: The maximum value of $g_{\uparrow\downarrow}$ is
attained at $r=0$. For the largest $|a_0|$ considered here, we have
obtained solutions of the FHNC-EL equations where $g_{\uparrow\downarrow}(0)$ is
more than 150 times larger than $g_{\uparrow\downarrow}(\infty)$. The short range
repulsion of the LJ interaction suppresses the pair distribution for
$r<\sigma$, and therefore $g_{\uparrow\downarrow}(r)\rightarrow 0$ as $r
\rightarrow 0$. The maximum value of $g_{\uparrow\downarrow}(r)$, which is
attained at finite $r$, is therefore much lower than in the SW case.
One might argue that, since both models are unrealistic at short $r$, the
shape of $g_{\uparrow\downarrow}(r)$ for $r\lesssim\sigma$ is of only theoretical
interest. However, we will show in Section \ref{ssec:bcs} that
finite-range effects have a quite visible effect on the superfluid
energy gap and, hence, on the condensation energy. We also observe
that for $r\gtrsim 3\sigma$, $g_{\uparrow\downarrow}(r)$ becomes universal in the
sense that it is independent of the choice of interaction. This is
not a feature special to cold gases: The asymptotic behavior of
$g_{\uparrow\downarrow}(r)$ is determined by the speed of sound
\cite{FeenbergBook}, see also Eq. (\ref{eq:GammaLong}).
$g_{\uparrow\downarrow}(r)$ is still strongly enhanced compared to the asymptotic
limit $g_{\uparrow\downarrow}(r\to\infty)=1$. Furthermore, we find that
$g_{\uparrow\downarrow}(r)$ is very similar to the zero-energy scattering solution
of the two-body Schr\"odinger equation; this is simply because
$g_{\uparrow\downarrow}(r)$ is dominated by the dynamic correlation function
$1+\Gamma_{\!\rm dd}(r)$, {\em cf.} Eq. (\ref{eq:gud}). The situation
changes again as we approach the dimerization instability: The maximum
value of $1+\Gamma_{\!\rm dd}(r)$ starts to rise rapidly, indicating an
approaching singularity. The effect becomes stronger and appears at
lower density as the coupling strength of the underlying bare
interaction is increased.
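For the SW potential, the zero-energy scattering solution mentioned above is available in closed form. The sketch below (with $\hbar=m=\sigma=1$, an illustrative unit choice, so $\kappa=\sqrt{\epsilon}$ as before) evaluates $\psi(r)=u(r)/r$ with $u\propto\sin\kappa r$ inside and $u=r-a_0$ outside the well, normalized such that $\psi(r\to\infty)=1$; the matching condition at $r=\sigma$ reproduces the $a_0$ formula quoted earlier:

```python
import numpy as np

def psi_sw(r, eps):
    """Zero-energy s-wave scattering solution psi(r) = u(r)/r for the
    square well (hbar = m = sigma = 1), normalized so psi -> 1 at infinity:
    u(r) = sin(kappa*r)/(kappa*cos(kappa)) inside, u(r) = r - a0 outside."""
    kappa = np.sqrt(eps)
    a0 = 1.0 - np.tan(kappa) / kappa
    r = np.asarray(r, dtype=float)
    inside = np.sin(kappa * r) / (kappa * np.cos(kappa))
    outside = r - a0
    u = np.where(r < 1.0, inside, outside)
    with np.errstate(invalid="ignore", divide="ignore"):
        ratio = u / r
    # psi(0) = A*kappa = 1/cos(kappa) from the r -> 0 limit of u/r
    return np.where(r > 0.0, ratio, 1.0 / np.cos(kappa))

# Close to the resonance (kappa -> pi/2), psi(0)**2 becomes very large,
# mirroring the strong enhancement of g_ud(0) found for the SW model:
eps = (0.95 * np.pi / 2.0)**2
print(psi_sw(0.0, eps)**2)
```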
\begin{figure} {\includegraphics[width=1.0\columnwidth]{ghcmaxlog.eps}}
\caption{(color online) The upper pane shows, for the square-well
potential, the maximum value of $1+\Gamma_{\!\rm dd}(r)$ for
vacuum scattering lengths $a_0/\sigma = -1.0, -1.5, \ldots,
-4.0$. The uppermost curve, which ends at the smallest value of
$k_{\rm F}\sigma$, corresponds to the largest value of $|a_0|$. The lower
pane shows the ratio between the maximum value of
$1+\Gamma_{\!\rm dd}(r)$ and the maximum value of the vacuum
solution $|\psi(r)|^2$, {\em cf.} Eq. (\ref{eq:scatteq}).}
\label{fig:ghcmax}
\end{figure}
\subsection{BCS pairing}
\label{ssec:bcs}
We now return to the question of the density and momentum
dependence of the pairing interaction $\tilde{\cal W}(k)$. We have
already discussed above that the in-medium scattering length can
become singular due to phonon-exchange corrections.
Before we describe our calculations, we return to the momentum
dependence of the pairing interaction. We show in
Fig. \ref{fig:vpairofk} $\tilde{\cal W}(k)/\tilde{\cal W}(0+)$ for
two values of $k_{\rm F}$, $k_{\rm F}\sigma=0.01$ (top panel) and $k_{\rm F}\sigma=0.04$
(bottom panel). For small $k_{\rm F}$ such as $k_{\rm F}\sigma=0.01$ and less, the
regime where $\Gamma_{\!\rm dd}(r)$ behaves as $\sqrt{1+\Gamma_{\!\rm
dd}(r)} = 1-\frac{a_0}{r}$ is rather large, see
Fig.~\ref{fig:vpairofk}, and therefore the linear regime in
$\tilde{\cal W}(k)$ is well defined and agrees well with the
prediction (\ref{eq:Wofk}). For $k_{\rm F}\sigma=0.04$, the regime
where the $a_0/r$ behavior of the correlations is visible is much
smaller, hence the linear regime is less well defined and $\tilde{\cal
W}(k)$ also becomes model dependent. The important point to be made
here is, however, that in neither case, and especially not as $|a_0|$ gets
larger, is the simple estimate $\tilde{\cal W}(k)\approx\tilde{\cal
W}(0+)$ from low-density expansions valid for a significant
portion of the integration regime $0\le k\le 2k_{\rm F}$ which is needed for
the calculation of the $s$-wave pairing matrix elements, see
Eq.~(\ref{eq:V1S0}).
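To quantify this point, one can evaluate the linear form (\ref{eq:Wofk}) over the integration interval (with the overall factor $\tilde{\cal W}(0+)$ divided out; the values of $k_{\rm F}$ and $a$ are illustrative choices matching the figure):

```python
import numpy as np

def w_ratio(k, a):
    """Normalized pairing interaction W(k)/W(0+) in the linear regime,
    Eq. (Wofk): 1 - (pi/4) * a * k (density prefactor divided out)."""
    return 1.0 - (np.pi / 4.0) * a * k

kf = 0.04                # kF*sigma
for a in (-2.0, -4.0):   # in-medium scattering lengths a/sigma
    print(a, w_ratio(2.0 * kf, a))  # value at the upper integration limit
```

Already for $a/\sigma=-4$ the interaction at $k=2k_{\rm F}$ deviates from its $k\to 0$ value by some 25 percent, so replacing $\tilde{\cal W}(k)$ by a constant is a poor approximation in the matrix element.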
\begin{figure}
{\includegraphics[width=1.0\columnwidth]{vpairk.eps}}
\caption{(color online) The figures show the pairing interaction
$\tilde {\cal W}(k)$, normalized to its value at $k\to 0$, for a
sequence of vacuum scattering lengths $a_0/\sigma = -2.0, -2.5,
-3.0, -3.5, -4.0$ and both potential models (full line: LJ, dashed
line: SW) for $k_{\rm F}\sigma=0.01$ (top panel) and
$k_{\rm F}\sigma=0.04$ (bottom panel); larger values of $|a_0|$
correspond to higher curves. The linear regime for
$k_{\rm F}=0.01\sigma^{-1}$ agrees well with the slope predicted by
Eq. (\ref{eq:Wofk}). }
\label{fig:vpairofk}
\end{figure}
We have calculated the gap in the excitation spectrum at the Fermi
surface, $\Delta_{k_{\rm F}}$ (Eq.~(\ref{eq:gap})), for the LJ and the SW
interaction model, for a wide range of densities, characterized by the
Fermi wave number $k_{\rm F} $, and $s$-wave scattering lengths $a_0$. A
first overview is shown in Fig.~\ref{fig:GapSWLJbare}, where the higher
values of $\Delta_{k_{\rm F}}$ correspond to larger values of $|a_0|$.
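The dash-dotted reference curve $\Delta_F^{(0)}$ reduces, in units of $e_{\rm F}$, to a one-line function of $a_0k_{\rm F}$ (our reading of Eq.~(\ref{eq:GapLowdens}), i.e. Eq.~(\ref{eq:scaled}) with $\alpha=0$):

```python
import numpy as np

def gap_weak_coupling(a0_kf):
    """Leading weak-coupling gap Delta_F^(0) in units of the Fermi energy:
    (8/e**2) * exp(pi / (2 * a0 * kF)), for a0*kF < 0 and |a0*kF| << 1."""
    return 8.0 / np.e**2 * np.exp(np.pi / (2.0 * a0_kf))

# The exponential dependence on 1/(a0*kF) spans orders of magnitude:
for a0_kf in (-0.25, -0.5, -1.0):
    print(a0_kf, gap_weak_coupling(a0_kf))
```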
\begin{figure}
{\includegraphics[width=1.0\columnwidth]{Gapbare.eps}
}
\caption{(color online) The gap $\Delta_{k_{\rm F}}$ at the Fermi momentum in
units of the Fermi energy as a function of Fermi wave number $k_{\rm F}$
for the SW model (left panel) and the LJ model (right panel),
for $a_0/\sigma = -0.5, -1.0, \ldots, -4.0$
(higher values of $\Delta_{k_{\rm F}}$ correspond to larger values of $|a_0|$).
Full line: full numerical solution; dashed line: approximate
solution $\Delta_F$ (\ref{eq:GapApprox}); dash-dotted line:
Solution $\Delta_F^{(0)}$ obtained by setting $a=a_0$ in Eq. (\ref{eq:GapLowdens}).
}
\label{fig:GapSWLJbare}
\end{figure}
Fig.~\ref{fig:GapSWLJbare} provides two pieces of information: One is
an assessment of the accuracy of the approximate solution
(\ref{eq:GapApprox}), which is obviously quite good throughout the whole
regime of interaction strengths and densities. The major difference
comes, at high densities, from the deviation of the vacuum scattering
length from $a_F$. The general dependence of the gap on both the
interaction strength and the density is quite similar, in particular
the exponential dependence on the scattering length holds over many
orders of magnitude. This is consistent with the general feature
spelled out in Sec. \ref{ssec:Gapeq} that medium-- and finite--range
corrections are manifested in the pre-factor in
Eq. (\ref{eq:scaled}). This pre-factor is universal as long as the
in-medium scattering length is of the form (\ref{eq:aofa0}) which is
the case, among others, if the momentum dependence of $\tilde{\cal
W}(k)$ is linear. Deviations from this universal behavior can only
be expected from the quadratic behavior of $\tilde{\cal W}(k)$ for
$0\le k \lesssim k_{\rm F}$, which is a direct manifestation of many-body
effects. Indeed, this correlation effect is significant: We show in
Fig. \ref{fig:VSCkfkf} the ratio $a_F/a_0$, see Eq.~(\ref{eq:aFdef}),
in the density/interaction strength regimes where the gap is larger
than $10^{-10}e_{\rm F}$. Evidently, the behavior is, in that regime,
neither linear nor a universal function of $a_0k_{\rm F}$. Only at very
small values of $a_0k_{\rm F}$, where the gap is of the order of
$10^{-8}e_{\rm F}$, might a linear behavior be inferred, but the slope of
such a linear behavior depends on the interaction model. Calculations
at even lower density, where a linear regime might be found, require a
much denser mesh than used here to reliably resolve the region $0\le k
\lesssim k_{\rm F}$.
\begin{figure}
{\includegraphics[width=0.7\columnwidth,angle=-90]{Vkfkf_new.eps}}
\caption{The figure shows the ratio of the scaled pairing matrix
element $a_F$, Eq. (\ref{eq:aFdef}) to the vacuum scattering length
$a_0$ for the Lennard-Jones model (red, solid curves) and the
soft-core interaction model (blue, dashed curves) as a function of
$-a_0k_{\rm F}$. The different curves are for different values of
$a_0/\sigma=-0.5,-1.0,\dots,-4.0$ (SW) and
$a_0/\sigma=-1.5,-2.0,\dots,-4.0$ (LJ), with the higher curves
corresponding to larger $|a_0|$. }
\label{fig:VSCkfkf}
\end{figure}
\begin{figure}
{\includegraphics[width=1.0\columnwidth]{lj_bcs_results.NEW.eps}}
\caption{(color online) The superfluid gap $\Delta_{k_{\rm F}}$ as a
function of $-1/k_{\rm F}a_0$, obtained for many different values of $k_{\rm F}$
($k_{\rm F}\sigma\le 0.1$) and $a_0$, for the LJ model. The color of
the symbols indicates the $k_{\rm F}$ values. Left panel: ratio between
the gap $\Delta_{k_{\rm F}}$ and the approximation $\Delta_F^{(0)}$ (\ref{eq:GapLowdens}). Right
panel: ratio between the gap $\Delta_{k_{\rm F}}$ and the approximation
$\Delta_F$ (\ref{eq:GapApprox}), where a universal dependence on $k_{\rm F}a_0$ can
be observed for small $k_{\rm F}$. }
\label{fig:gapLJall}
\end{figure}
\begin{figure}
{\includegraphics[width=1.0\columnwidth]{hc_bcs_results.NEW.eps}}
\caption{(color online)
Same as Fig.~\ref{fig:gapLJall}, but for the SW model.
}
\label{fig:gapHCall}
\end{figure}
In order to disentangle medium and finite-range corrections, we show
in the right and left panels of Figs.~\ref{fig:gapLJall} (LJ model)
and \ref{fig:gapHCall} (SW model) the ratios $\Delta_{k_{\rm F}}/\Delta_{F}$
and $\Delta_{k_{\rm F}}/\Delta^{(0)}_{F}$ as defined in
Eqs. (\ref{eq:GapApprox}) and (\ref{eq:GapLowdens}), respectively, plotted as a
function of $-1/k_{\rm F}a_0$. The color of the
symbols encodes the $k_{\rm F}$ value of the corresponding data point,
between red for $k_{\rm F}\sigma=0.1$ and black for $k_{\rm F}\to 0$ -- the low
density regime which usually is assumed to be universal in the sense
that all quantities depend on $-1/k_{\rm F}a_0$ only. Data
for $k_{\rm F}\sigma>0.1$ is not shown in these figures. The two
ratios $\Delta_{k_{\rm F}}/\Delta_{F}$ and $\Delta_{k_{\rm F}}/\Delta^{(0)}_{F}$
give information on two different effects: The ratio
$\Delta_{k_{\rm F}}/\Delta_F$ is an assessment of the accuracy of the
approximations leading to Eq. (\ref{eq:GapApprox}) and reflects the
importance of the momentum dependence of the pairing
interaction. Evidently, this effect is visible and depends little on
the density; the ratio does not seem to go to unity with decreasing
density. This is simply a consequence of our analysis of
Fig. \ref{fig:vpairofk}: No matter how small the Fermi momentum is,
the momentum dependence of the pairing interaction is never given by
the naive application (\ref{eq:Wofk}) of low-density arguments but
rather reflects genuine many-body physics. For the range of
densities and coupling strengths considered, $\Delta_{k_{\rm F}}/\Delta_F$ is between
$60\%$ and $75\%$, but there is a rapid drop of $\Delta_{k_{\rm F}}/\Delta_F$
for $-1/k_{\rm F}a_0<5$, as the system becomes unstable against
dimerization. In other words, neglecting the momentum-dependence of
the pairing interaction, and thus the finite range of the interaction
(including the induced interaction), leads to an overestimation of
$\Delta_{k_{\rm F}}$ that becomes more severe as one approaches the
dimerization instability, driven by the divergence of the in-medium
scattering length $a$. We also note that, apart from some values at
larger $k_{\rm F}$, $\Delta_{k_{\rm F}}/\Delta_F$ collapses onto a single curve,
i.e. it is characterized by a universal dependence on $k_{\rm F}a_0$. The
curves are identical for the LJ and the SW model.
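The full numerical solutions discussed here solve the gap equation, Eq.~(\ref{eq:gap}), with the microscopic pairing matrix elements. Purely to illustrate the numerical procedure, the sketch below solves the $s$-wave BCS gap equation for a hypothetical separable attraction $V(k,k')=-g\,w(k)w(k')$ with a Gaussian form factor (not one of the paper's interaction models); the ansatz $\Delta(k)=\Delta_0\,w(k)$ then reduces the gap equation to a single condition that is monotone in $\Delta_0$:

```python
import math

def solve_gap(g, kF, b=1.0, kmax=10.0, n=2000):
    """Solve the s-wave BCS gap equation for a separable attraction
    V(k,k') = -g w(k) w(k'), with w(k) = exp(-(b k)^2 / 2), for which
    Delta(k) = Delta0 w(k) and the gap equation collapses to
        1 = g/(2 pi^2) * Int_0^kmax dk k^2 w(k)^2 / (2 E(k)),
        E(k) = sqrt((k^2/2 - mu)^2 + Delta0^2 w(k)^2),
    in units hbar = m = 1 with mu = kF^2/2.  The right-hand side is
    strictly decreasing in Delta0, so bisection is safe."""
    mu = 0.5 * kF * kF
    dk = kmax / n
    ks = [dk * (i + 0.5) for i in range(n)]          # midpoint rule
    w = [math.exp(-0.5 * (b * k) ** 2) for k in ks]

    def rhs(d0):
        s = 0.0
        for k, wk in zip(ks, w):
            E = math.hypot(0.5 * k * k - mu, d0 * wk)
            s += k * k * wk * wk / (2.0 * E)
        return g / (2.0 * math.pi ** 2) * s * dk

    lo, hi = 0.0, 1.0
    while rhs(hi) > 1.0:                             # bracket from above
        hi *= 2.0
    for _ in range(100):                             # bisection
        mid = 0.5 * (lo + hi)
        if rhs(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    d0 = 0.5 * (lo + hi)
    return d0, rhs(d0) - 1.0                         # gap, residual
```

Because the right-hand side decreases strictly with $\Delta_0$, the bisection converges whenever a nontrivial solution exists; the full calculation uses the complete momentum-dependent interaction instead of a separable kernel.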
The difference between our microscopic calculation and the low-density
expression (\ref{eq:GapLowdens}) is much more visible, see left panels of
Figs. \ref{fig:gapLJall} and \ref{fig:gapHCall}. This effect
has, of course, to do with the divergence of the in-medium scattering
length as discussed above. However, even far from the singularity, we
see very little evidence of an approach to a ``universal'' behavior.
Evidently one must go to much lower densities. Although we could run
our ground state calculations down to densities of
$10^{-11}\,\sigma^{-3}$, the gap itself becomes many orders of
magnitude smaller than the Fermi energy, {\em cf.\/}
Fig. \ref{fig:GapSWLJbare}; such calculations are therefore of only methodological interest.
\section{Summary}
We have in this paper reported ground state calculations for
low-density Fermi gases described by two model interactions, an
attractive square-well potential and a Lennard-Jones potential. We
have used the optimized Fermi-Hypernetted Chain (FHNC-EL) integral
equation method which has been proved to provide, in the density
regimes of interest here, an accuracy better than one percent
\cite{ljium}. We have examined the low-density expansion of the
energy for a local correlation operator, written in the conventional
Jastrow-Feenberg form (\ref{eq:wavefunction}); our results also apply
to fixed-node Monte Carlo calculations for wave functions written in
that form. Of course, such a result can only be obtained with
optimized correlation functions; using a parametrized Jastrow function
instead leads to an unpredictable answer, of which one can only say
that it should lie above the expansion (\ref{eq:lowdensfhnc0}). We
have demonstrated that a locally correlated wave function does not
reproduce the exact low-density limit and have cured the problem by
adding the second-order CBF corrections. We have, however, also
demonstrated that already at values of $a_0k_{\rm F}\approx 10^{-2}$ the
third term in the expansion (\ref{eq:lowdensFermi}) is overshadowed by
model-dependent corrections.
The most interesting result of our work is the appearance of a
divergence of solutions of the FHNC-EL equations well before the
divergence of the vacuum scattering length $a_0$ of the interaction
potential. We have interpreted this divergence as a phonon-exchange
driven dimerization of the system, similar to what one has at
zero density when the vacuum scattering length diverges. The
appearance of such a divergence is not a surprise: As a function of
coupling strength, the vacuum scattering length has a singularity
somewhat above the strength where we find the singularity. However,
within the medium, the bare interaction is supplemented by the induced
interaction $ w_{\rm I}(r)$, {\em cf.\/} Eq. (\ref{eq:inducedFermi0})
or (\ref{eq:Wofr}), which describes phonon exchange. This density
dependent correction to the bare interaction shifts the interaction
strength at which a bound state appears. The closer the bare
interaction is to the appearance of the bound state, the smaller the
correction needs to be. Since the induced interaction depends on the
density, the singularity appears, for stronger coupling of the bare
interaction, at lower density.
We have also studied, in the stable regime, the superfluid gap and its
dependence on the density and the interaction strength. Two
corrections apply to low density expansions: medium corrections and
finite-range corrections. We have shown that the most important
finite-range corrections are a direct manifestation of the many-body
nature of the system. For low densities, the gap can be reasonably
well approximated by neglecting finite-range corrections but accounting
for medium corrections, but the deviations from the full numerical
solution increase on approaching the dimerization instability.
A natural question is what phase lies ``on the other side'' of
the instability. It is plausible that this is a phase of dimers, which
could be described by a product of pair wave functions
\cite{YangClarkBCS,astraPRL04,PhysRevB.77.115112}. Full optimization of
such a wave function, containing both Jastrow correlations and an
antisymmetrized product of pair wave functions, requires first deriving
the diagrammatic expansion, which goes beyond the scope of this paper.
\section{Introduction}
Over the years, finite-size-scaling (FSS) techniques have proved
themselves an indispensable tool for computer simulation
investigations of critical phenomena in model systems, facilitating
accurate estimates of infinite volume quantities from simulations of
finite size. Of the many previous FSS simulation studies that have
been performed \cite{PRIVMAN}, most have focussed on critical
phenomena in lattice-based magnetic spin systems such as the Ising
\cite{BINDER3,FERRENBERG1}, $\phi^4$ \cite{NICOLAIDES}, XY
\cite{GOTTLOB} and Heisenberg models \cite{PECZAK,CHEN}. Among the
specific approaches employed to study such systems, use of the order
parameter distribution function has proved itself one of the most
powerful. The FSS properties of the order parameter distribution are
now routinely employed in studies of magnetic systems, facilitating
both the accurate location of the critical point and the detailed
elucidation of its character \cite{BINDER1}.
Only comparatively recently has attention turned to the task of
applying FSS techniques to the simulation of critical fluids. Work to
date has concentrated on attempting to carry over to fluids the order
parameter block distribution techniques developed in the magnetic
context \cite{ROVERE,BRUCE2,WILDING1}. In the process, however, it has
been found necessary to generalise the FSS equations to take account
of the absence in fluids of the energetic (`particle-hole' \cite{FN1})
symmetry that prevails in magnetic systems such as the Ising model.
Although this reduced symmetry is believed to have no bearing on the
universal critical behaviour of fluids (which, for systems with
short-ranged interactions, corresponds to the Ising universality class),
it has long been recognised that it leads to certain non-universal
effects. The most celebrated of these is a weak energy-like
singularity in the coexistence diameter on the approach to
criticality, the existence of which constitutes a failure of the law
of rectilinear diameter \cite{SENGERS}.
Within the renormalisation group framework, the critical behaviour of
a given system is characterised by the values of its relevant scaling
fields specifying the location of the effective hamiltonian with
respect to the fixed point \cite{WEGNER}. In models of the Ising
symmetry, these scaling fields are simply identifiable with the
thermodynamic fields, namely the (reduced) temperature and the applied
field. By contrast for fluids, the absence of particle-hole symmetry
implies that the scaling fields comprise {\em mixtures} (i.e. linear
combinations) of the temperature and chemical potential. As a
consequence of this `field mixing', the fluid scaling operators (the
quantities conjugate to the two relevant scaling fields) are also
predicted to differ from those of the symmetric Ising systems. In the
Ising model, the scaling operators are simply the order parameter
(i.e. magnetisation) and the energy density. In fluids however, they
are expected to be linear combinations of the order parameter
(particle density) and the energy density.
The modified forms of the scaling variables have recently been
incorporated within a FSS theory describing the interplay of energy
and density fluctuations in near-critical fluids
\cite{BRUCE2,WILDING1}. This theory provides a potentially powerful
framework for the detailed simulation study of critical phenomena in
fluids. Hitherto, however, only a limited appraisal of the theory has
been performed. Attention has focused on one predicted manifestation
of field mixing, namely a finite size correction to the limiting form
of the critical order parameter distribution. The existence of this
correction has indeed been confirmed in detailed Monte-Carlo
simulation studies of both the 2D Lennard-Jones fluid \cite{WILDING1}
and a 2D decorated lattice gas model \cite{WILDING2}. To date,
however, the full extent of the claims embodied in the mixed field FSS
theory has not been closely scrutinised.
In the present paper we address this matter with simulation studies of
two critical fluid systems: a decorated lattice gas model and a
polymer model. The paper is broadly organised as follows. We begin by
providing a short resum\'{e} of the mixed field FSS theory for the
near-critical density and energy fluctuations of fluids. We then
present simulation measurements of the joint distribution of the
density and energy fluctuations $p_L(\rho,u)$ at the liquid-vapour
critical point of a 3D decorated lattice gas model. Field mixing
transformations are performed that map $p_L(\rho,u)$ onto the fixed
point distribution of scaling operators \mbox{$\tilde{p}^\star(x,y)$}\ appropriate to the
Ising universality class. Effecting this data collapse yields
estimates of the field mixing parameters that control the degree of
field mixing, the values of which are found to be in excellent
agreement with analytic calculations. Field mixing
transformations of \mbox{$\tilde{p}^\star(x,y)$}\ are also used to generate the full
universal finite-size spectrum of the density and energy density
distributions of fluids. This analysis reveals, in particular, that
(compared to models of the Ising symmetry) the presence of field
mixing radically alters the limiting (large L) form of the critical
energy distribution.
Consideration is then given to the role of field mixing in the
sub-critical two phase regime of the decorated lattice gas model. An
examination is made of the effect of applying field mixing
transformations to the coexistence density and energy density
distributions. For sub-critical temperatures down to approximately
$0.9T_c$, it is found that the observed asymmetries of the coexistence
density distributions are well accounted for by the linear mixing of
the energy density into the ordering operator.
Finally we apply the techniques developed in the context of the
decorated lattice gas model, to a more realistic system, namely a 3D
polymer model. Simulation studies of the bond fluctuation model within
the grand canonical ensemble are used to obtain the joint distribution
of density and energy on the liquid vapour coexistence curve. The
critical temperature, chemical potential and field mixing parameters
of the model are accurately determined by requiring the collapse of
the measured scaling operator distributions onto their known universal
fixed point forms. In the sub-critical region close to the critical
point, the observed asymmetries of the density distribution are again
found to be well described by field mixing transformations.
\section{Background}
\label{sec:back}
In this section we provide a brief overview of the principal features
of the mixed field FSS theory of reference \cite{WILDING1}, placing it
within the context of the present work.
The systems we consider are assumed to be contained in a volume $L^d$
(with $d=3$ in the simulations described below) and thermodynamically
open so that the particle number can fluctuate. The observables on
which we shall focus are the particle number density:
\begin{equation}
\rho =L^{-d}N
\label{eq:phi}
\end{equation}
and the dimensionless energy density:
\begin{equation}
u=L^{-d}w^{-1}\Phi(\{{\bf r}\})
\label{eq:u}
\end{equation}
where $\Phi(\{{\bf r}\})$ is the configurational energy of the system
which we assume takes the general two-body form:
\begin{equation}
\Phi(\{{\bf r}\})=\sum_{i,j}\phi(|{\bf r_i}-{\bf r_j}|) ,
\end{equation}
with $\phi(r)$ the two-body interaction potential (e.g.
square-well or Lennard-Jones) whose associated coupling
strength (well-depth) we denote $w$.
Within the formalism of the grand canonical ensemble (GCE), the joint
distribution of density and energy fluctuations, $p_L(\rho,u)$, is
controlled by the reduced chemical potential $\mu$ and the coupling
strength $w$ (both in units of $k_BT$). The critical point of the
system is located by critical values of the chemical potential $\mu_c$
and coupling $w_c$. Deviations of $w$ and $\mu$ from their critical
values control the sizes of the two relevant scaling fields that
characterise the critical behaviour \cite{WEGNER}. In the absence of
the special symmetry prevailing in the Ising model, the relevant
scaling fields comprise (asymptotically) {\em linear combinations} of
the coupling and chemical potential difference \cite{REHR}:
\begin{equation}
\tau = w_c-w+s(\mu - \mu_c) \hspace{1cm} h=\mu - \mu_c+ r(w_c-w)
\label{eq:scaflds}
\end{equation}
where $\tau$ is the thermal scaling field and $h$ is the
ordering scaling field. The parameters \mbox{$s$}\ and \mbox{$r$}\ are
system-specific quantities controlling the degree of field mixing. In
particular $\mbox{$r$}$ is identifiable as the limiting critical gradient
of the coexistence curve in
the space of $\mu$ and $w$. The role of $s$ is somewhat
less tangible; it controls the degree to which the chemical potential
features in the thermal scaling field, manifest in
the widely observed critical singularity of the coexistence curve
diameter of fluids \cite{SENGERS}.
Conjugate to the two relevant scaling fields are scaling operators
\mbox{${\cal M }$}\ and \mbox{${\cal E}$} , which comprise linear combinations of the particle
density and energy density \cite{BRUCE2,WILDING1}:
\begin{equation}
\mbox{${\cal M }$} = \mbox{$\frac{1}{1-\mix \mixp}$} \left[ \rho - s u \right] \hspace{1cm} \mbox{${\cal E}$} = \mbox{$\frac{1}{1-\mix \mixp}$} \left[ u -
r \rho \right]
\label{eq:oplinks}
\end{equation}
The operator \mbox{${\cal M }$}\ (which is conjugate to the ordering field $h$) is
termed the ordering operator, while \mbox{${\cal E}$}\ (conjugate to the
thermal field) is termed the energy-like operator. In the
special case of models of the Ising symmetry, (for which $\mbox{$s$} = \mbox{$r$}
=0$), \mbox{${\cal M }$}\ is simply the magnetisation while \mbox{${\cal E}$}\ is the energy
density.
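Equations~\ref{eq:scaflds} and~\ref{eq:oplinks} amount to a linear change of basis between the thermodynamic variables and the scaling fields and operators, which the following sketch makes explicit (the numerical parameter values used in the accompanying checks are hypothetical):

```python
def scaling_fields(w, mu, wc, muc, s, r):
    """Relevant scaling fields (eq:scaflds): thermal field tau
    and ordering field h, mixed by the parameters s and r."""
    tau = (wc - w) + s * (mu - muc)
    h = (mu - muc) + r * (wc - w)
    return tau, h

def scaling_operators(rho, u, s, r):
    """Conjugate scaling operators (eq:oplinks): ordering operator M
    and energy-like operator E, linear combinations of density rho
    and energy density u with prefactor 1/(1 - s r)."""
    norm = 1.0 / (1.0 - s * r)
    M = norm * (rho - s * u)
    E = norm * (u - r * rho)
    return M, E
```

By construction the inverse relations are $\rho={\cal M}+s{\cal E}$ and $u={\cal E}+r{\cal M}$, and for $s=r=0$ the operators reduce to the Ising pair $(\rho,u)$.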
The joint distribution of density and energy is simply related to the
joint distribution of mixed operators:
\begin{equation}
p_L(\rho,u) = \mbox{$\frac{1}{1-\mix \mixp}$} p_L(\mbox{${\cal M }$} , \mbox{${\cal E}$})
\label{eq:pdflink}
\end{equation}
Near criticality, and in the limit of large system size,
$p_L(\mbox{${\cal M }$},\mbox{${\cal E}$})$ is expected to be describable by a finite-size-scaling
relation of the form \cite{WILDING1}:
\setcounter{abc}{1}
\begin{equation}
\label{eq:ansatz}
p_L(\mbox{${\cal M }$} , \mbox{${\cal E}$}) \simeq \mbox{$\Lambda _{\cal M}^+$} \mbox{$\Lambda _{{\cal E}}^+$} \tilde{p} (\mbox{$\Lambda _{\cal M}^+$} \delta \mbox{${\cal M }$} , \mbox{$\Lambda _{{\cal E}}^+$} \delta \mbox{${\cal E}$} ,
\mbox{$\Lambda _{{\cal M}}$} \mbox{$h$} , \mbox{$\Lambda _{{\cal E}}$} \mbox{$\tau$} )
\end{equation}
\addtocounter{equation}{-1}
\addtocounter{abc}{1}
where
\begin{equation}
\label{eq:Lamdefs}
\mbox{$\Lambda _{{\cal E}}$} = \mbox{$a_{{\cal E}}$} L^{1/\nu} \hspace{1cm} \mbox{$\Lambda _{{\cal M}}$} = \mbox{$a_{{\cal M}}$} L^{d-\beta/\nu}\hspace{1cm} \mbox{$\Lambda _{{\cal M}}$} \mbox{$\Lambda _{\cal M}^+$}
= \mbox{$\Lambda _{{\cal E}}$} \mbox{$\Lambda _{{\cal E}}^+$} = L^d
\end{equation}
\addtocounter{equation}{-1}
\addtocounter{abc}{1}
and
\begin{equation}
\label{eq:deltops}
\delta \mbox{${\cal M }$} \equiv \mbox{${\cal M }$} - <\mbox{${\cal M }$} >_c \hspace{1cm} \delta \mbox{${\cal E}$} \equiv \mbox{${\cal E}$} - <\mbox{${\cal E}$}
>_c
\end{equation}
\setcounter{abc}{0}
The subscripts c in equations~\ref{eq:deltops} signify that the averages are to
be taken at criticality. Given appropriate choices for the non-universal
scale factors \mbox{$a_{{\cal M}}$}\ and \mbox{$a_{{\cal E}}$}\ (equation~\ref{eq:Lamdefs}), the
function $\tilde{p}^\star$ is expected to be universal.
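For concreteness, the scale factors of equation~\ref{eq:Lamdefs} can be evaluated directly; the sketch below uses approximate 3D Ising exponents ($\nu\approx 0.63$, $\beta\approx 0.33$) and unit non-universal amplitudes purely for illustration:

```python
def scale_factors(L, d, nu, beta, aM=1.0, aE=1.0):
    """Finite-size scale factors of equation (eq:Lamdefs); the '+'
    factors follow from Lam_M * Lam_M^+ = Lam_E * Lam_E^+ = L^d."""
    LamE = aE * L ** (1.0 / nu)          # thermal direction
    LamM = aM * L ** (d - beta / nu)     # ordering direction
    return LamM, L ** d / LamM, LamE, L ** d / LamE
```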
Precisely at criticality, equation~\ref{eq:ansatz} implies simply
\begin{equation}
p_L(\mbox{${\cal M }$} , \mbox{${\cal E}$}) \simeq \mbox{$\Lambda _{\cal M}^+$} \mbox{$\Lambda _{{\cal E}}^+$} \tilde{p}^\star (\mbox{$\Lambda _{\cal M}^+$} \delta \mbox{${\cal M }$} , \mbox{$\Lambda _{{\cal E}}^+$} \delta
\mbox{${\cal E}$})
\label{eq:critlim}
\end{equation}
where $\mbox{$\tilde{p}^\star(x,y)$}=\tilde{p}(x,y,0,0)$ is a function
describing the universal and statistically scale invariant operator
fluctuations characteristic of the critical point.
In what follows, we shall employ Monte Carlo simulations to explicitly
test the prediction of equation~\ref{eq:critlim} for a decorated
lattice gas model and a polymer system, both members of the Ising
universality class. To do so, however, we first require an independent
estimate of the fixed point function \mbox{$\tilde{p}^\star(x,y)$}\ appropriate to the
3D Ising universality class. In practice, this is most readily obtained
by considering the prototype member of the Ising class, namely the 3D
Ising model itself. Owing to its lack of field mixing, the scaling
operators of the Ising model are simply $\mbox{${\cal M }$}\rightarrow m$ (the
magnetisation) and $\mbox{${\cal E}$}\rightarrow u$ (the energy density). Moreover,
the availability of highly accurate estimates for the Ising model
critical temperature circumvents the need to perform a time consuming
search for the critical point.
\section{Monte Carlo simulations}
\label{sec:mc}
\setcounter{equation}{0}
\setcounter{abc}{0}
\subsection{The 3D Ising model and the form of \mbox{$\tilde{p}^\star(x,y)$}\ }
\label{sec:ising}
Using a vectorised algorithm on a Cray YMP supercomputer, we have
performed high precision Monte Carlo simulation measurements of the
joint magnetisation and energy density distribution for the 3D Ising
model on a periodic lattice of side $L=20$. The measurements were
performed at the estimated (reduced) critical coupling
$K_c^I=0.2216595(26)$, as obtained in a previous high precision Monte
Carlo study \cite{FERRENBERG1}. Following an initial equilibration
period of $2\times10^6$ Monte Carlo steps per spin (MCS), the
magnetisation and energy density were sampled at intervals of $50$ MCS
(in order to reduce correlations), and the results accumulated in a
histogram. The final histogram (comprising $2\times10^7$ entries) was
formed from $12$ independent runs, thereby allowing the statistical
independence of the data to be assessed and statistical errors to be
assigned to the results. The resulting form of \mbox{$\tilde{p}^\star(x,y)$} , normalised
to unit integrated weight and scaled to unit variance along both axes,
is shown in figure~\ref{fig:j_ising}. The associated ordering
operator distribution $\mbox{$\tilde{p}^\star(x)$}=\int \mbox{$\tilde{p}^\star(x,y)$} dy$ , and energy
operator distribution $\mbox{$\tilde{p}^\star(y)$} =\int \mbox{$\tilde{p}^\star(x,y)$} dx$ are shown in
figure~\ref{fig:i_m&u}. One observes that while the form of \mbox{$\tilde{p}^\star(x)$}\
is doubly peaked and symmetric, that of \mbox{$\tilde{p}^\star(y)$}\ is singly peaked and
asymmetric.
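The normalisation applied above (unit integrated weight, abscissa rescaled to zero mean and unit variance) is straightforward; the helper below is a hypothetical one-dimensional version of it, not the analysis code actually used, with the same rescaling applied along each axis of the two-dimensional histogram:

```python
def unit_weight_unit_variance(values, counts):
    """Normalise a histogram, given as parallel lists of bin values and
    accumulated counts, to unit integrated weight, and rescale the
    abscissa so the distribution has zero mean and unit variance."""
    total = float(sum(counts))
    p = [c / total for c in counts]                     # unit weight
    mean = sum(x * q for x, q in zip(values, p))
    sd = sum((x - mean) ** 2 * q
             for x, q in zip(values, p)) ** 0.5
    xs = [(x - mean) / sd for x in values]              # unit variance
    return xs, p
```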
\subsection{The 3D decorated lattice gas model}
\label{sec:dlg_intro}
The decorated lattice gas model was first proposed in its two
dimensional form by Mermin as an example of a system exhibiting a
singular coexistence diameter \cite{MERMIN}. The model was
subsequently generalised to simple, body, and face centred cubic
lattices by other workers \cite{ZOLLWEG,MULHOLLAND1, MULHOLLAND2} and
studied for its interesting coexistence properties. The simple cubic
form of the model, on which we shall focus in the present work,
consists of an ordinary simple cubic lattice gas (whose sites we term
the primary sites), augmented (decorated) by additional secondary
sites on the interstitial bonds. Particles on primary sites are
assumed to interact with one another via a dimensionless coupling of
strength $\lambda$, while particles on the secondary sites interact
with those on primary sites via a dimensionless coupling $\eta$, but
do not interact with each other. A schematic representation of a unit
cell of the model is shown in figure~\ref{fig:dlatt}.
The configurational energy $\Phi(\{\sigma\})$ of the decorated lattice
gas model is given by
\begin{equation}
\Phi(\{\sigma\}) = \sum_{<i,j>}\eta\sigma_i\sigma_j +
\sum_{[m,n]}\lambda\sigma_m\sigma_n \label{eq:pot}
\label{eq:confen}
\end{equation}
with $\sigma_i=0,1$. The site indices $i$ and $j$ are taken to run
over nearest neighbour primary and secondary sites of the model, while $m$ and
$n$ run only over nearest neighbour primary sites. It is straightforward to
show
that the particle-hole symmetry that obtains in the ordinary lattice
gas is equivalent to the requirement that all sites have the same
average energy environment. The presence of two inequivalent
sublattices in the decorated model clearly violates this condition and
leads to field mixing.
Aside from its field mixing properties, the chief asset of the
decorated lattice gas model is its analytic tractability. The grand
partition function of the asymmetric model can be related by means of
analytic transforms to that of the ordinary lattice gas model which is
itself isomorphic to the Ising model. Specifically, one finds:
\begin{equation}
\Omega (\mu ,T)=(1+e^{\mu/kT})^{3{\overline N}}{\overline \Omega} ({\overline
\mu},{\overline T})
\label{eq:part}
\end{equation}
where $\Omega$ is the partition function of the decorated model and $\mu$
and $T$ are the chemical potential and temperature respectively. Bars
denote quantities in the ordinary lattice gas and ${\overline N}$ is
the number of primary sites in the model.
Introducing the dimensionless chemical potential $\xi=\mu/k_BT$,
equation~\ref{eq:part} leads to the following relationships
\cite{MULHOLLAND1}:
\setcounter{abc}{1}
\begin{eqnarray}
{\overline \xi} & = & \xi + 6\ln\left [ \frac{1+e^{\xi+
\eta}}{1+e^\xi}\right]\\
\addtocounter{equation}{-1}
\label{eq:relnsa}
\addtocounter{abc}{1}
{\overline \lambda} & = & \lambda + \ln \left [\frac{(1+e^\xi
)(1+e^{\xi+2\eta})}{(1+e^{\xi+\eta })^2}\right ]
\label{eq:relnsb}
\end{eqnarray}
\setcounter{abc}{0}
where ${\overline \xi}={\overline \mu}/k_B{\overline T}$ and ${\overline
\lambda}$ are respectively the dimensionless
chemical potential and the dimensionless nearest neighbour coupling constant of
the ordinary lattice gas.
In the 3D ordinary lattice gas the liquid-vapour coexistence line is
specified by the condition ${\overline \xi}=-3{\overline \lambda}$
\cite{LEE}. The location of the critical point that terminates this line
is not known exactly, but it is trivially related to that of the Ising
model:
\begin{equation}
{\overline \lambda_c}=4K_c^I, \hspace{1cm} {\overline \xi}_c=-3{\overline
\lambda}_c
\label{eq:crittemp}
\end{equation}
where
\begin{equation}
K_c^I= 0.2216595(26)
\label{eq:crit_is}
\end{equation}
is the estimated dimensionless critical coupling of the simple cubic
Ising model \cite{FERRENBERG1}. Estimates for the critical point
parameters are thus obtainable by feeding this value of $K_c^I$ into
equations~\ref{eq:relnsa} and~\ref{eq:relnsb}. Similarly, the
coexistence curve of the decorated model is obtainable by setting
${\overline \xi}=-3{\overline \lambda}$ in equations~\ref{eq:relnsa}
and \ref{eq:relnsb} to find:
\begin{equation}
\xi + 3\lambda+3\ln\left [ \frac{1+e^{\xi + 2\eta}}{1+e^\xi}\right ] = 0
\label{eq:coexsur}
\end{equation}
Note however, that since $\lambda$ and $\eta$ both enter only as
multiplicative factors in the configurational energy
(equation~\ref{eq:confen}), the coexistence curve is uniquely
parameterised by the value of the coupling ratio $\lambda/\eta$.
Varying this ratio allows one to tune the degree of field mixing
\cite{WILDING2}. Indeed for the special choice $\lambda/\eta=-1/3$,
the average energy environment is identical for atoms on both
sublattices of the model and particle-hole symmetry is restored.
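Equations~\ref{eq:relnsa} and \ref{eq:relnsb}, together with the coexistence condition of equation~\ref{eq:coexsur}, are explicit enough to evaluate numerically. The sketch below (with illustrative, hypothetical coupling values chosen so that $\lambda/\eta=0.1$) solves equation~\ref{eq:coexsur} for $\xi$ by bisection; a useful consistency check is that the mapped fields then satisfy the ordinary-lattice-gas coexistence condition ${\overline \xi}=-3{\overline \lambda}$:

```python
import math

def mapped_fields(xi, lam, eta):
    """Ordinary-lattice-gas fields from the decorated-model ones,
    equations (eq:relnsa) and (eq:relnsb)."""
    xi_bar = xi + 6.0 * math.log((1.0 + math.exp(xi + eta))
                                 / (1.0 + math.exp(xi)))
    lam_bar = lam + math.log((1.0 + math.exp(xi)) * (1.0 + math.exp(xi + 2.0 * eta))
                             / (1.0 + math.exp(xi + eta)) ** 2)
    return xi_bar, lam_bar

def coexistence_residual(xi, lam, eta):
    """Left-hand side of equation (eq:coexsur); zero on the
    liquid-vapour coexistence surface of the decorated model."""
    return xi + 3.0 * lam + 3.0 * math.log((1.0 + math.exp(xi + 2.0 * eta))
                                           / (1.0 + math.exp(xi)))

def coexistence_xi(lam, eta, lo=-200.0, hi=200.0):
    """Bisect equation (eq:coexsur) for xi at fixed couplings; the
    residual runs from -inf to +inf, so the bracket contains a root."""
    flo = coexistence_residual(lo, lam, eta)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if coexistence_residual(mid, lam, eta) * flo > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The limiting critical gradient $r$ could then be estimated by differencing $\lambda(\xi)$ along this numerically determined coexistence line.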
Knowledge of the mapping between the decorated lattice gas and the
ordinary lattice gas also permits an analytic calculation of the field
mixing parameters \mbox{$r$}\ and \mbox{$s$}\ for the model. The value of $r$ is
obtainable from equation~\ref{eq:coexsur} simply by calculating the
limiting critical gradient of the coexistence curve. A calculation of
\mbox{$s$}\ proceeds from the observation that in the ordinary lattice gas,
the field-like scaling field \mbox{$h$}\ coincides with the line ${\overline
\lambda}={\overline \lambda_c}$ in the space of ${\overline \xi }$ and
${\overline \lambda}$. It follows that in the decorated lattice gas
model, the direction of \mbox{$h$}\ can be obtained from
equation~\ref{eq:relnsb} by setting ${\overline \lambda}={\overline
\lambda_c}$, $\eta=\eta(\lambda)$ and solving for $\lambda$. The
value of \mbox{$s$}\ is then given by
\begin{equation}
\mbox{$s$} =\left ( \frac{\partial \lambda }{\partial \xi}\right )_c
\label{eq:s_calc}
\end{equation}
where the derivative is to be evaluated at criticality.
Compared to more realistic fluid models such as the Lennard-Jones
fluid, the great simplicity of the decorated lattice gas model renders
it highly computationally tractable. Moreover the prior availability
of accurate values for the critical parameters obviates the need for a
time consuming search of parameter space for the critical point, and
simplifies the task of data analysis. The model therefore provides
an ideal test-bed for simulation studies of critical point field
mixing.
\subsubsection{The critical limit}
\label{sec:critlim}
Using a Metropolis algorithm within the grand canonical ensemble (GCE)
\cite{BINDER4}, we have performed detailed simulation measurements of
the joint density and energy distribution $p_L(\rho,u)$ of the
decorated lattice gas model, at the estimated critical parameters
obtained as described above. All simulations were performed for a
choice of the coupling ratio $\lambda/\eta=0.1$, for which it is known
that the coexistence curve of the model closely resembles those of
many real fluids \cite{MULHOLLAND1}. In the course of the simulations,
three system sizes were studied having linear extent $L=12,L=20$ and
$L=32$. Periodic boundary conditions were employed throughout. Prior
to data collection, equilibration periods of $5\times10^6$ MCS were
utilised. Samples of the density and energy density were then
taken at intervals of $50$ MCS (to reduce correlations) and the
data stored in histograms. For each system size, $12$ independent runs
were performed in order to test the statistical independence of the
data and to assign statistical errors to the results. The final
histograms of $p_L(\rho,u)$ comprised $2\times10^7$ entries for the
$L=12$ and $L=20$ system sizes, and $1\times10^7$ entries for the
$L=32$ system size.
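The sampling protocol described above can be sketched in a few lines. The following toy implementation simulates a small 2D \emph{ordinary} lattice gas (not the 3D decorated model studied here) in the grand canonical ensemble with Metropolis updates, accumulating a joint histogram over particle number and energy; all parameter values are illustrative only:

```python
import random, math
from collections import Counter

def gce_lattice_gas(L=8, beta=0.5, mu=-1.0, eps=1.0, sweeps=200, seed=0):
    """Toy grand-canonical Metropolis sampling of a 2D lattice gas.
    Returns a histogram over (particle number, energy), the analogue
    of the joint histogram p_L(rho, u) accumulated in the text."""
    rng = random.Random(seed)
    occ = [[0] * L for _ in range(L)]
    N, E = 0, 0.0
    hist = Counter()
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nn = (occ[(i + 1) % L][j] + occ[(i - 1) % L][j]
                  + occ[i][(j + 1) % L] + occ[i][(j - 1) % L])
            if occ[i][j]:                  # attempt deletion of a particle
                dE, dN = eps * nn, -1      # removes nn bonds of energy -eps
            else:                          # attempt insertion
                dE, dN = -eps * nn, 1
            # Metropolis acceptance with grand-canonical weight e^{-b(E-muN)}
            if rng.random() < math.exp(-beta * (dE - mu * dN)):
                occ[i][j] ^= 1
                N += dN
                E += dE
        if sweep % 5 == 0:                 # sample at intervals of 5 sweeps
            hist[(N, E)] += 1
    return hist

h = gce_lattice_gas()   # 200 sweeps sampled every 5 sweeps: 40 entries
```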
The measured form of $p_L(\rho,u)$ for the decorated lattice gas model
is presented in figure~\ref{fig:j_latt} for the $L=20$ system size.
Clearly, apart from a general overall double-peaked structure, the form
of this distribution bears little resemblance to that of \mbox{$\tilde{p}^\star(x,y)$}\
(c.f. figure~\ref{fig:j_ising}) which represents the joint critical
order parameter and energy density distribution in the absence of
field mixing. To illustrate the differences it is instructive to
compare the fluid density distributions $p_L(\rho)=\int p_L(\rho,u)du$
with \mbox{$\tilde{p}^\star(x)$} , and the fluid energy density distribution $p_L(u)=\int
p_L(\rho,u)d\rho$ with \mbox{$\tilde{p}^\star(y)$}\ .
The forms of $p_L(\rho)$ for all three system sizes are shown in
figure~\ref{fig:l_bare}a. In contrast to \mbox{$\tilde{p}^\star(x)$}\
(figure~\ref{fig:i_m&u}a), all the density distributions exhibit a
pronounced asymmetry, qualitatively similar in form to that observed
in the 2D version of the same model \cite{WILDING2}. Even more conspicuous,
however, are the differences between the finite-size forms of $p_L(u)$
(figure~\ref{fig:l_bare}b), and that of \mbox{$\tilde{p}^\star(y)$}\
(figure~\ref{fig:i_m&u}b). Clearly while the latter is singly peaked,
the former are {\em doubly peaked}, with a form (were one to plot
$p_L(-u)$) reminiscent of the density distribution. As we shall
show, the explanation of these differences is to be found in the field
mixing that manifests the lack of particle-hole symmetry in fluids.
In order to expose the universality linking the critical point of the
decorated lattice gas model to that of the Ising model, it is
necessary to recast $p_L(\rho,u)$ in terms of the joint distribution
of scaling operators \mbox{$p_L({\cal M},{\cal E})$}\ , cf. equation~\ref{eq:critlim}. To do so
however, requires specification of the field mixing parameters \mbox{$s$}\
and \mbox{$r$}\ featuring in the definitions of \mbox{${\cal M }$}\ and \mbox{${\cal E}$}\
(equation~\ref{eq:oplinks}). In practice, these values may be readily
found by requiring that the {\em single} scaling operators
distributions \mbox{$p_L({\cal M})$}\ and \mbox{$p_L({\cal E})$}\ match their respective fixed point forms
\mbox{$\tilde{p}^\star(x)$}\ and \mbox{$\tilde{p}^\star(y)$}\ . Carrying out this procedure yields the
matchings shown in figure~\ref{fig:lat_ops}a and \ref{fig:lat_ops}b.
The associated estimates for the field mixing parameters are
$\mbox{$s$}=-0.143(8)$ and $\mbox{$r$}=-3.11(1)$. These values compare very
favourably with those calculable analytically, for which one finds
$\mbox{$s$}=-0.1428\cdots , \mbox{$r$}=-3.1163\cdots$. Such a high level of
accord indicates that the matching of the operator distributions to
the universal Ising forms is a potentially very accurate method for
determining the field mixing parameters.
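A simplified sketch of this matching procedure: rather than matching the full distribution, one may exploit the symmetry of the fixed point form \mbox{$\tilde{p}^\star(x)$}\ and tune \mbox{$s$}\ until the ordering operator distribution is symmetric, here measured by its skewness (a full analysis matches the whole distribution, and fits \mbox{$r$}\ from \mbox{$p_L({\cal E})$}\ analogously). The data below are synthetic, with a known mixing coefficient:

```python
import random, statistics

def skewness(xs):
    m = statistics.fmean(xs)
    sd = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * sd ** 3)

def fit_s(rho, u, grid):
    """Pick s minimising the skewness of M = rho - s*u: a simplified
    proxy for matching p_L(M) to the symmetric fixed-point form."""
    return min(grid,
               key=lambda s: abs(skewness([r - s * x
                                           for r, x in zip(rho, u)])))

rng = random.Random(2)
# Double-peaked "ordering" fluctuations and correlated "energy" samples:
m = [rng.choice([-1.0, 1.0]) + rng.gauss(0.0, 0.5) for _ in range(20000)]
u = [x * x + rng.gauss(0.0, 0.3) for x in m]
rho = [a + 0.15 * b for a, b in zip(m, u)]   # synthetic mixing, s_true = 0.15
s_hat = fit_s(rho, u, [i / 100 for i in range(31)])
```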
Having obtained estimates for \mbox{$s$}\ and \mbox{$r$} , one may then construct
the joint distribution of scaling operators \mbox{$p_L({\cal M},{\cal E})$} . The resulting form
is shown in figure~\ref{fig:tjhist}, and should be compared with that
of \mbox{$\tilde{p}^\star(x,y)$}\ shown in figure~\ref{fig:j_ising}. Clearly the agreement
between the operator distributions and the universal fixed point form
is gratifying, providing substantial corroboration of the mixed field
FSS theory.
\subsubsection{The subcritical region}
As the critical point is approached along the line of phase
coexistence, the known symmetries of the Ising problem imply that
\setcounter{abc}{1}
\begin{eqnarray}
\langle \mbox{${\cal M }$}\rangle^\pm-\langle\mbox{${\cal M }$}\rangle_c &=& \pm a |\mbox{$\tau$} |^\beta ,\\
\addtocounter{equation}{-1}
\label{eq:approacha}
\addtocounter{abc}{1}
\label{eq:approachb}
\langle \mbox{${\cal E}$}\rangle^\pm-\langle\mbox{${\cal E}$}\rangle_c &=& b |\mbox{$\tau$} |^{1-\alpha} +
\mbox{terms analytic at criticality}
\end{eqnarray}
\setcounter{abc}{0}
where $a$ and $b$ are critical amplitudes and $\pm$ denote limits as
the coexistence curve is approached from above ($\mbox{$h$}\rightarrow 0^+$)
or below ($\mbox{$h$}\rightarrow0^-$). Recalling that $\mbox{${\cal M }$}=\rho+\mbox{$s$}\mbox{${\cal E}$}$ then
yields the two branches of the coexistence curve densities near
criticality:
\begin{equation}
\rho_\pm-\rho_c = \pm a|\mbox{$\tau$}|^\beta+sb |\mbox{$\tau$} |^{1-\alpha} +
\mbox{terms analytic at criticality} ,
\label{eq:coexcv}
\end{equation}
which displays a singular diameter:
\begin{equation}
\rho_d-\rho_c=\frac{1}{2}(\rho_++\rho_-)-\rho_c\sim sb |\mbox{$\tau$} |^{1-\alpha}
\end{equation}
as is indeed observed experimentally \cite{SENGERS}.
At temperatures outside the critical region, the
relations~\ref{eq:approacha} and ~\ref{eq:approachb} are {\em not}
generally expected to hold and a crossover description to regular
classical behaviour is more appropriate
\cite{SINGH,SENGERS2,SENGERS3,SENGERS4,SENGERS5,ZCHEN}. Nevertheless
as observed in many computer simulations of various simple fluids, the
temperature dependence of the order parameter {\em is} quite
accurately described by Ising critical exponents over a remarkably
wide range of subcritical temperatures \cite{PANAGIO}, without the
need to introduce higher order (non-linear) terms in the Wegner
expansion of the scaling fields \cite{WEGNER}. In view of this it
seems of interest to examine the range of applicability of the scaling
form~\ref{eq:ansatz} in the subcritical two-phase region.
To this end we have obtained the joint density and energy density
distribution of the decorated lattice gas model at temperatures
$T=0.9T_c$ and $T=0.8T_c$, for a system of side $L=20$. In order to
circumvent the prohibitively large `tunneling' times between the
coexisting phases that normally plague GCE simulations in the
two-phase region, we have employed the multicanonical preweighting
scheme \cite{BERG}. This scheme uses weighted transitions to encourage
the system to sample those interfacial configurations that would
otherwise occur only very rarely. The weight factors are
chosen such that the sampled density distribution is approximately
flat in the density region between the peaks of the two coexisting
phases. After the simulation, the correct (`unweighted') coexistence
distribution is regained by dividing out the weight factors from
the sampled distribution. In this manner the effective tunneling
frequency between the coexisting phases may be increased by many
orders of magnitude, thereby facilitating very accurate estimates of
the coexistence form of $p_L(\rho,u)$.
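The preweighting and subsequent unweighting steps can be illustrated on a toy double-peaked distribution; here exact sampling from the flattened distribution stands in for the multicanonical Markov chain:

```python
import math, random
from collections import Counter

# Toy double-peaked distribution standing in for the coexistence p_L(rho).
rhos = [i / 50 for i in range(51)]
p = [math.exp(-80 * (r - 0.2) ** 2) + math.exp(-80 * (r - 0.8) ** 2)
     for r in rhos]
Z = sum(p)
p = [x / Z for x in p]

# Multicanonical weight factors eta chosen to flatten the sampled
# distribution: q(rho) ~ p(rho) * eta(rho) is then uniform.
eta = [1.0 / x for x in p]
q_raw = [pi * e for pi, e in zip(p, eta)]
sq = sum(q_raw)
q = [x / sq for x in q_raw]

rng = random.Random(3)
hist = Counter(rng.choices(range(len(rhos)), weights=q, k=200000))

# `Unweighting': divide the weight factors back out of the histogram.
est_raw = [hist[i] / eta[i] for i in range(len(rhos))]
norm = sum(est_raw)
est = [x / norm for x in est_raw]
err = max(abs(a - b) for a, b in zip(est, p))   # recovers p to ~1e-3
```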
Using the multicanonical preweighting scheme, samples of the density
and energy were accumulated every $20$ MCS and stored in
histograms. The final histograms for $p_L(\rho,u)$ comprised
approximately $5\times10^7$ entries. In figure~\ref{fig:coex_ops}a we
present the measured coexistence forms of $p_L(\rho)$ for $T=0.9T_c$
and $T=0.8T_c$. Also included in the figure is the critical point form
of $p_L(\rho)$ for the $L=20$ system size, as obtained
previously (c.f.\ subsection~\ref{sec:critlim}). Clearly the
distributions are all asymmetric, those for the two subcritical
temperatures having a form similar to those observed in an asymmetric
version of the 2D Blume-Emery-Griffiths spin model
\cite{BORGS}. Although the two peaks of the sub-critical distributions
have equal weight, (reflecting the thermodynamic condition for
coexistence \cite{EWR}), one observes that the high density peak is
much broader and shorter than the low density peak. Moreover the
magnitude of the change in position of the high density peak on
lowering the temperature is much greater than that in the low density
peak. It is this effect that gives rise to the asymmetric form of the
temperature-density phase diagram of fluids.
In figure~\ref{fig:coex_ops}b we present the corresponding forms of
the ordering operator distribution \mbox{$p_L({\cal M})$} . For each temperature the
value of the field mixing parameter \mbox{$r$}\ was obtained as the
gradient of the coexistence curve, while the value of \mbox{$s$}\ was
obtained by applying the prescription of equation~\ref{eq:s_calc}. One
sees that in this choice of variables, the distributions at $T=T_c$
and $T=0.9T_c$ are largely {\em symmetric}, i.e.\ both peaks have the
same height and width. Only the distribution for $T=0.8T_c$ exhibits
small deviations from symmetry, these being attributable to the
non-singular (and non-Ising) part of the partition function
(equation~\ref{eq:part}). Thus it would appear that in the present
case at least, the validity of the scaling form ~\ref{eq:ansatz}
extends some $10\%$ or more below the critical temperature. In this
temperature range, the asymmetry of the density distribution can
therefore be accurately ascribed to the linear mixing of the energy
density into the ordering operator. It remains to be seen however, to
what extent this finding holds true in more realistic fluid models as
well as those possessing very large coexistence curve asymmetries,
such as polar \cite{VANLEEUWEN}, ionic \cite{SENGERS1} or metallic
fluids \cite{JUNGST,GOLDSTEIN,GOLDSTEIN1}.
It is also instructive to compare the simulation results with analytic
calculations of the coexistence density diameter $\rho_d$ and the
ordering operator diameter $\mbox{${\cal M }$}_d$. The latter can be calculated
exactly, while the former may be obtained approximately by employing
the tabulated values of the Pad\'{e} Approximants for the temperature
dependence of the Ising model energy density \cite{SCESNEY}.
Performing these calculations for fractional temperatures $t=T/T_c$ in
the range $0.7\le t \le 1.0$ yields the results shown in
figure~\ref{fig:analyt}. The weak critical singularity in the
coexistence diameter is readily discernible from the figure although,
as expected, no such singularity obtains for $\mbox{${\cal M }$}_d(t)$, which is
analytic at the critical point. Some variation in $\mbox{${\cal M }$}_d$ is
nevertheless seen as a function of temperature, but this again arises from the
non-singular, non-Ising part of the partition function~\ref{eq:part}.
Also included in the figure are the simulation estimates of
$\langle\mbox{${\cal M }$}\rangle (t)$ and $\langle\rho\rangle(t)$ obtained from the
multicanonical simulations at $T=0.9T_c$ and $T=0.8T_c$, as well as the
conventional simulations at the critical point. Clearly a very good
overall agreement between the simulations and the analytical
predictions is apparent.
\subsection{The universal critical finite-size spectrum of \newline
$p_L(\rho)$ and $p_L(u)$}
In this subsection we return to a consideration of the critical point
forms of $p_L(\rho )$ and $p_L(u)$ in fluids, with the aim of gaining
an understanding of their shapes and finite-size-scaling behaviour. To
this end it is expedient to reexpress $\rho$ and $u$ in terms of the
scaling operators. Appealing to equation~\ref{eq:oplinks}, one finds
\begin{equation}
u=\mbox{${\cal E}$}-\mbox{$r$}\mbox{${\cal M }$} \hspace{1cm} \rho=\mbox{${\cal M }$} - s \mbox{${\cal E}$} ,
\label{eq:udenmix}
\end{equation}
so that the critical density and energy density distributions are
\begin{equation}
p_L(u)=p_L (\mbox{${\cal E}$} -r\mbox{${\cal M }$} ) \hspace{1cm} p_L(\rho)=p_L (\mbox{${\cal M }$} -s\mbox{${\cal E}$} )
\label{eq:prho&u}
\end{equation}
Now the structure of the scaling form~\ref{eq:ansatz} shows that the
typical size of the fluctuations in the energy-like operator will vary
with system size like $\delta\mbox{${\cal E}$}\sim L^{-(1-\alpha)/\nu}$, while the
typical size of the fluctuations in the ordering operator vary like
$\delta\mbox{${\cal M }$}\sim L^{-\beta/\nu}$ . It follows that for a given $L$, the
shape of the energy and density distributions can be identified with
the distribution of the variable
\begin{equation}
X_{\Theta} = a_{\cal M}^{-1}\delta \mbox{${\cal M }$} \cos \Theta
+ a_{\cal E}^{-1}\delta\mbox{${\cal E}$} \sin \Theta ,
\label{eq:Theta}
\end{equation}
with
\begin{equation}
\tan \Theta_{u} = \frac{a_{\mbox{${\cal E}$}}}{r a_{\mbox{${\cal M }$}}} L^{-(1-\alpha -\beta)/\nu}
\hspace{1cm} \mbox{and} \hspace{1cm} \tan \Theta_{\rho} = \frac{s a_{\mbox{${\cal E}$}}}{a_{\mbox{${\cal M }$}}}
L^{-(1-\alpha -\beta)/\nu}
\label{eq:Ldep}
\end{equation}
where the subscripts $u$ and $\rho$ signify that the value of $\Theta$
corresponds to the energy density and density distributions
respectively.
The distributions $p(X_\Theta)$ constitute a one-parameter class of
{\em universal} functions describing the density and energy
distributions of fluids at finite $L$. Geometrically, $\Theta$ can be
interpreted as defining a direction $OX_\Theta$ in the basal plane
formed by the $Ox$ and $Oy$ axes of figure~\ref{fig:j_ising}, making
an angle $\Theta$ with the $Ox$ axis. The form of $p(X_\Theta)$ is
then obtainable by projecting \mbox{$\tilde{p}^\star(x,y)$} , onto the vertical plane
which includes the line $OX_\Theta$. A representative selection of
such projections is shown in figure~\ref{fig:projs}. For
$\Theta=0^\circ$ one obtains simply the ordering operator distribution
\mbox{$\tilde{p}^\star(x)$}\ , while for $\Theta=90^\circ$ the form is that of the
energy-like operator distribution \mbox{$\tilde{p}^\star(y)$}\ . Intermediate between these values
a range of behaviour is obtained, representing the finite $L$ forms of
$p_L(\rho )$ and $p_L(u)$.
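The projection construction can be illustrated numerically with synthetic double-peaked and single-peaked operator fluctuations; for simplicity the non-universal scale factors are set to $a_{\cal M}=a_{\cal E}=1$:

```python
import math, random

def project(dM, dE, theta, aM=1.0, aE=1.0):
    """Samples of the projection variable X_Theta: a rotation by the
    angle Theta in the basal (x, y) plane of the operator fluctuations."""
    c, s = math.cos(theta), math.sin(theta)
    return [m * c / aM + e * s / aE for m, e in zip(dM, dE)]

rng = random.Random(4)
dM = [rng.choice([-1.0, 1.0]) + rng.gauss(0.0, 0.3) for _ in range(5000)]
dE = [rng.gauss(0.0, 1.0) for _ in range(5000)]

x0 = project(dM, dE, 0.0)            # Theta = 0: ordering-operator form
x90 = project(dM, dE, math.pi / 2)   # Theta = 90 deg: energy-like form
# The Theta = 0 projection is bimodal (almost no weight near zero),
# while the Theta = 90 deg projection is singly peaked at zero.
frac0 = sum(abs(x) < 0.3 for x in x0) / len(x0)
frac90 = sum(abs(x) < 0.3 for x in x90) / len(x90)
```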
Asymptotically (i.e. as $L\rightarrow \infty$), equation~\ref{eq:Ldep}
implies that both $\Theta_u$ and $\Theta_\rho$ approach zero
\cite{FN2} so that in this limit
\begin{equation}
p_L(u)= p_L (-r\mbox{${\cal M }$} )\simeq\mbox{$a_{{\cal M}}$} ^{-1}\mbox{$r$} L^{\beta/\nu
}\tilde{p}_{\cal M}^\star (-\mbox{$a_{{\cal M}}$} ^{-1}\mbox{$r$} L^{\beta/\nu} \delta \mbox{${\cal M }$} )\\
\label{eq:lim_u}
\end{equation}
\begin{equation}
p_L(\rho)= p_L (\mbox{${\cal M }$} )\simeq\mbox{$a_{{\cal M}}$} ^{-1} L^{\beta/\nu
}\tilde{p}_{\cal M}^\star (\mbox{$a_{{\cal M}}$} ^{-1} L^{\beta/\nu} \delta \mbox{${\cal M }$} )\\
\label{eq:lim_rho}
\end{equation}
It follows that for {\em any} finite \mbox{$s$}\ and \mbox{$r$} , the limiting
critical point forms of $p_L(\rho)$ and $p_L(u)$ both match the
critical ordering operator distribution \mbox{$\tilde{p}^\star(x)$} . The approach to
this limiting behaviour is indeed clearly evident in the distributions
of figure~\ref{fig:l_bare}. We note however that the limiting form of
$p_L(u)$ differs radically from that of the Ising model where, owing
to the absence of field mixing, $\lim_{L\to\infty}p_L(u)=
\mbox{$\tilde{p}^\star(y)$}$. The profound influence of field mixing on the critical
behaviour of fluids should therefore be apparent \cite{FN3}.
Finally in this subsection, we point out that precise knowledge of the
location of the critical point (obtained e.g.\ from the data collapse of
the scaling operators onto their fixed point forms), does {\em not}
imply the possibility of directly measuring the infinite-volume
density and energy density. To appreciate this, recall that
\begin{equation}
\langle u \rangle_c=\langle\mbox{${\cal E}$}\rangle_c-\mbox{$r$}\langle\mbox{${\cal M }$}\rangle_c
\hspace{1cm} \langle\rho\rangle_c=\langle\mbox{${\cal M }$}\rangle_c-\mbox{$s$}\langle\mbox{${\cal E}$}\rangle_c
\end{equation}
Now, while symmetry considerations dictate that the value of
$\langle\mbox{${\cal M }$}\rangle_c=\int \mbox{$p_L({\cal M})$} d\mbox{${\cal M }$}$ is independent of system size, no
such symmetry condition pertains to \mbox{$p_L({\cal E})$} , whose average value
$\langle\mbox{${\cal E}$}\rangle_c=\int d\mbox{${\cal E}$}\mbox{$p_L({\cal E})$}$ at criticality is expected to vary
with system size like
\begin{equation}
\langle\mbox{${\cal E}$}\rangle_c(L)-\langle\mbox{${\cal E}$}\rangle_c(\infty ) \sim L^{-(1-\alpha)/\nu}
\end{equation}
It follows that in order to extract infinite volume estimates of
$\rho_c$ and $u_c$ from simulations at the critical point, it is
necessary to extrapolate data from a number of different system sizes
to the thermodynamic limit. This procedure is illustrated in
figure~\ref{fig:Lshift} for the decorated lattice gas model, using
critical point data from the three system sizes $L=12,20,32$. The
measured values of $\langle \rho\rangle_c(L)$ and $\langle
u\rangle_c(L)$ are plotted against $L^{-(1-\alpha)/\nu}$. A least
squares fit to the data yields the infinite volume estimates
$\rho_c=0.3371(1)$ and $u_c=-0.8385(6)$. We remark that the existence
of these finite-size shifts implies that the equal weight criterion
\cite{EWR,BORGS} for the order parameter distribution, while correctly
identifying the coexistence curve in the subcritical regime, must fail close
to the critical point \cite{MUELLER}.
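This extrapolation amounts to an ordinary least-squares fit against $L^{-(1-\alpha)/\nu}$, as in the following sketch; approximate 3D Ising exponent values are assumed, and the data are synthetic rather than the measured values quoted above:

```python
def extrapolate_to_bulk(Ls, vals, alpha=0.11, nu=0.63):
    """Least-squares fit of a critical observable against
    L**(-(1-alpha)/nu); the intercept estimates the L -> infinity value.
    Approximate 3D Ising exponents are assumed for illustration."""
    xs = [L ** (-(1 - alpha) / nu) for L in Ls]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(vals) / n
    slope = ((sum(x * y for x, y in zip(xs, vals)) - n * xbar * ybar)
             / (sum(x * x for x in xs) - n * xbar * xbar))
    return ybar - slope * xbar   # intercept = infinite-volume estimate

# Synthetic data obeying the scaling form with bulk value 0.3371:
Ls = [12, 20, 32]
vals = [0.3371 + 0.05 * L ** (-(1 - 0.11) / 0.63) for L in Ls]
rho_inf = extrapolate_to_bulk(Ls, vals)
```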
\subsection{Liquid-vapour equilibria of a polymer model}
\label{sec:poly}
We now apply the techniques developed in the foregoing sections to the
study of critical phenomena and phase coexistence in a more realistic
model fluid, namely a polymer system. The model we consider is the
bond fluctuation model, a coarse grained lattice-based polymer model
which combines the essential qualitative features of real polymer
systems--- monomer excluded volume and connectivity--- with
computational tractability. Within the framework of the model, each
monomer occupies a whole unit cell of a periodic simple cubic
lattice. Excluded volume interactions are catered for by requiring
that no lattice site can simultaneously be occupied by two monomer
corners. Monomers along the polymer chains are connected by bond
vectors which can assume one of $108$ possible values, providing for
$87$ distinct bond angles and $5$ distinct bond lengths. For a more
detailed description of the model, the reader is referred to the
literature \cite{BFM}.
Using a grand canonical simulation algorithm, we have simulated chains
of length $N=20$ monomers, interacting via a short range square well
potential, the range of which was set at $\sqrt{6}$ (in units
of the lattice spacing). Chain insertions and deletions were
facilitated by use of the configurational bias Monte Carlo (CBMC)
method of Siepmann \cite{SIEPMANN}. The essential idea behind the CBMC
method is to improve the low acceptance rate associated with random
trial chain insertion, by `growing' chains of favourable energy into
the system. A bookkeeping scheme maintains a record of the statistical
bias associated with choosing favourable chain conformations, and this
bias is subsequently removed when the acceptance probability is
calculated. The CBMC technique has also recently been used in
conjunction with Gibbs ensemble Monte Carlo simulations of
liquid-vapour phase coexistence of off-lattice alkanes models
\cite{SIEPMANN1}.
The quantities measured in the simulations were the monomer density:
\begin{equation}
\rho=8nN/V
\end{equation}
and the dimensionless energy density:
\begin{equation}
u=8w^{-1}\Phi(\{r\})/V
\end{equation}
where $n$ is the number of chains, $\Phi(\{r\})$ is the configurational
energy, $w$ is the well depth and $V$ is the system volume which was
set at $V=40^3$. Here the factor of $8$ derives from the number of lattice
sites occupied by one monomer. In the course of the simulations,
measurements of $\rho$ and $u$ were performed at intervals of $5000$
chain insertion attempts and accumulated in the joint histogram
$p_L(\rho,u)$. The final histogram comprised some $3\times10^5$
entries.
In contrast to the decorated lattice gas model considered in previous
sections, the line of liquid-vapour phase coexistence is not known
{\em a priori} for the polymer system and must therefore be identified
empirically. The precise location of the coexistence curve is
prescribed by the equal weight criterion for the two peaks of the
density distribution \cite{EWR}. Unfortunately, the task of
identifying the coexistence curve using this criterion is an extremely
time consuming and computationally demanding one, since the density
distribution is generally very sensitive to small deviations from
coexistence. In practice, however, it suffices to obtain data for only
a few points close to the coexistence curve. The full coexistence
curve between these points can subsequently be constructed using
histogram reweighting techniques \cite{FERRENBERG,DEUTSCH3}. Provided
that the measured density distributions are doubly peaked and the
temperatures studied are not too widely separated, this technique
permits a very accurate determination of the coexistence curve locus.
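The reweighting step itself is straightforward: each histogram entry measured at $(\beta_0,\mu_0)$ is multiplied by the ratio of grand canonical Boltzmann weights at the new parameters. A minimal Ferrenberg--Swendsen style sketch, using a toy two-entry histogram:

```python
import math
from collections import Counter

def reweight(hist, beta0, mu0, beta1, mu1):
    """Single-histogram reweighting of a joint (N, E) histogram measured
    at (beta0, mu0) to nearby parameters (beta1, mu1)."""
    out = Counter()
    for (N, E), c in hist.items():
        # Ratio of grand-canonical weights exp(-beta*(E - mu*N)):
        w = math.exp(-(beta1 - beta0) * E + (beta1 * mu1 - beta0 * mu0) * N)
        out[(N, E)] = c * w
    Z = sum(out.values())
    return {k: v / Z for k, v in out.items()}

# Toy histogram: two "phases" at low and high particle number.
hist = Counter({(5, -2.0): 40, (45, -30.0): 60})
p_new = reweight(hist, beta0=1.0, mu0=-0.70, beta1=1.0, mu1=-0.72)
# Lowering the chemical potential shifts weight to the low-density peak.
```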
Starting with an initial well-depth $w=0.569$, the approximate value
of the coexistence chemical potential was determined by tuning $\mu$
until the density distribution exhibited two peaks. Again the
multicanonical preweighting scheme \cite{BERG} was employed in order
to overcome the otherwise very large tunnelling times between the
coexisting phases. A histogram extrapolation based on this data was
then used to estimate the value of the coexistence chemical potential
for a well-depth $w=0.56$, which lies close to the critical
well-depth. A further long run was carried out at this
near-coexistence point. By extrapolating the measured near-coexistence
histograms of $p_L(\rho,u)$ in conjunction with the equal weight
criterion, we were then able to construct a sizeable portion of the
coexistence curve (and its analytic extension). Representative forms
of the density distributions along the line of coexistence and its
analytic continuation are shown in figure~\ref{fig:poly_opcx}a. The
coexistence curve, expressed as a function of the well-depth $w$ and
chemical potential $\mu$ is shown in figure~\ref{fig:cxcurve}.
To locate the critical point along the line of phase coexistence, we
utilised the universal matching condition for the operator
distributions \mbox{$p_L({\cal M})$}\ and \mbox{$p_L({\cal E})$}\ . Again applying the histogram
reweighting technique, the well-depth, chemical potential and field
mixing parameters were tuned until the forms of \mbox{$p_L({\cal M})$}\ and \mbox{$p_L({\cal E})$}\ most
accurately matched the universal critical Ising forms of
figure~\ref{fig:i_m&u}. The results of performing this procedure are
shown in figure~\ref{fig:polyops}. Given that the system contains an
average of only about $100$ polymer chains, the quality of the data
collapse is remarkable. The mappings shown were effected for a choice
of the parameters
\begin{equation}
w_c=0.5584(1), \hspace{0.5cm}\mu_c=-5.16425(2),\hspace{0.5cm} s=-0.135(4),
\hspace{0.5cm}r=-2.55(2)
\end{equation}
where we have defined $\mu$ to be the chemical potential per monomer. The
corresponding critical density and energy density distributions are
shown in figure~\ref{fig:polycrit}. They yield the (finite-size) estimates
$\rho_c=0.199(3)$ and $u_c=-0.304(4)$.
Turning lastly to the subcritical two-phase regime, we have again
considered the effect of forming linear combinations of the density
and energy density. Figure~\ref{fig:polyops}b shows the form of the
ordering operator distribution \mbox{$p_L({\cal M})$}\ obtained at coexistence using the
histogram reweighting technique for the same values of $w$ shown in
figure~\ref{fig:polyops}a. In each instance the value of \mbox{$s$}\ was
chosen so that the two peaks of \mbox{$p_L({\cal M})$}\ had both equal heights and
equal weights, while the value of \mbox{$r$}\ was chosen so that \mbox{$p_L({\cal E})$}\ was
singly peaked. As was the case for the decorated lattice gas model,
simple field mixing transformations also appear to account for the
sub-critical coexistence curve asymmetries of the polymer density
distribution, at least over the limited range of $w$ studied here.
\section{Conclusions}
\label{sec:disc}
In summary we have provided explicit demonstration of the field mixing
transformations that link the fluctuation spectra of the order
parameter and energy in the critical fluid to those of the critical
Ising magnet. The results serve to underline the profound influence of
field mixing on the non-universal critical behaviour of fluids. This
influence is manifest most notably as a finite-size shift to the
measured critical density, and as an alteration to the limiting (large
$L$) form of the critical energy distribution. Field mixing is also
found to account for the observed asymmetries of the coexistence
density distribution over a sizeable portion of the sub-critical
region.
With regard to the general computational issues raised in this study,
it has been seen that effecting the data collapse of the fluid scaling
operator distributions onto their (independently known) universal
fixed point forms, provides a very powerful method for accurately
locating the critical point and determining the field mixing
parameters of model fluids. This use of the scaling operator
distributions represents the natural extension to fluids of the order
parameter distribution method deployed so successfully in the study of
symmetric spin models. Thus, in principle at least, there would
appear to be no barriers to attaining similar degrees of accuracy in
the study of critical fluids as has previously been achieved for
lattice spin systems.
The successes of the present work (and of an earlier FSS study of the
2D Lennard-Jones fluid \cite{WILDING1}) also attest to the utility of
the grand canonical ensemble for simulation studies of near-critical
fluids. The benefits of this ensemble stem principally from the fact
that density fluctuations are observable on the scale of the system
size itself, thus freeing the method of the interfacial effects and
additional length scales that complicate use of the `sub-block'
finite-size scaling technique within the canonical (NVT) ensemble
\cite{ROVERE,ROVERE1}. High quality results can therefore be obtained
using comparatively much smaller system sizes, with concomitant
savings in computational effort.
The ability to perform a full finite-size scaling analysis in the
near-critical region also represents an important advantage of the GCE
approach over the Gibbs ensemble Monte Carlo (GEMC) simulation
technique \cite{PANAGIO}. In the GEMC method, the fluctuating box size
seems to preclude a rigorous FSS analysis \cite{PANAGIO1,MON}, thus
seriously hindering the accurate location of the critical point. The
GEMC is, nevertheless, very efficient for locating the
temperature-density phase diagram in the sub-critical regime. For this
task, use of the bare GCE method is only feasible for temperatures
within a few percent of the critical temperature because the otherwise
high interfacial free energy results in prohibitively large
`tunneling' times between the coexisting phases. Nonetheless, as we
have shown, this problem is surmountable by combining the GCE with
recently developed multicanonical preweighting and histogram
reweighting techniques, thereby enabling accurate studies of the
coexistence density and energy fluctuations even well below the
critical temperature.
\subsection*{Acknowledgements}
The authors have benefitted from helpful discussions with B.A.
Berg, K. Binder and A.M. Ferrenberg. NBW acknowledges the financial
support of a Max Planck fellowship from the Max Planck Institut
f\"{u}r Polymerforschung, Mainz. Part of the simulations described
here were performed on the CRAY-YMP computers at the HLRZ J\"{u}lich
and the RHRK Universit\"{a}t Kaiserslautern. Partial support from the
Deutsche Forschungsgemeinschaft (DFG) under grant number Bi314/3-2 is
also gratefully acknowledged.
\section{Introduction}
Model-free reinforcement learning (RL) techniques offer a generic framework for optimizing decision making from raw feedback signals such as system performance~\cite{SuttonBarto1998}, thus not requiring an analytical model of the system. In recent years, deep reinforcement learning (DRL) approaches which combine RL with deep neural networks have enjoyed successes in a variety of domains such as games (Go \cite{SilverHuangMaddisonEtAl2016}, Atari~\cite{MnihKavukcuogluSilverEtAl2013,double_dqn,MnihDQN2015,Espeholt2018}), and applied domains such as industrial process control \cite{Hein2017a} or robotic manipulation \cite{Tobin2017}. RL approaches have also long appealed to computer systems researchers, with experimental applications in domains such as adaptive routing or server resource management spanning back over 20 years \cite{KumarMiikkulainen1997, KumarMiikkulainen1999, TesauroJongDasEtAl2006,TesauroDasChanEtAl2007}. The advent of deep RL in combination with widely available deep learning frameworks has renewed interest in this approach. More recent examples include automated TensorFlow device placements \cite{mirhoseini2017device, hierarchical2018}, client-side bit-rate selection for video streaming \cite{Mao2017}, and simplified cluster scheduling \cite{Mao2016}.
However, practical RL deployments in computer systems and data management remain difficult due to large training data requirements and expensive decision evaluations (e.g. multiple minutes to deploy a cluster configuration). RL algorithms also suffer from inferior predictability and stability compared to simpler heuristics \cite{Henderson2017, Mania2018}. Consequently, proof-of-concept successes in simplified and highly controlled simulations have infrequently led to practical deployments. Nonetheless, DRL remains appealing as it combines the ability of deep neural networks to identify and combine features in unforeseen ways with learning from raw system feedback. The long-term aim is to automate manual feature and algorithm design in computer systems and potentially learn complex behaviour outperforming manual designs.
In this work, we explore these limitations by outlining a software stack for practical DRL, with focus on guiding learning via existing log data or demonstrated examples. The key idea of our paper is that in modern data processing engines, fine-granular log data can be used to extract demonstrations of desired dynamic configurations. Such demonstrations can be used to pretrain a control model, which is subsequently refined when deployed in its concrete application context. To this end, we make the following contributions:
\begin{figure*}[t]
\centering
\includegraphics[scale=.5]{lift_wide_overview.pdf}
\caption{LIFT workflow.}
\label{fig:lift-overview}
\vspace{-5mm}
\end{figure*}
We present \textit{LIFT} (\S \ref{lift}), a high-level framework for \textbf{L}earn\textbf{I}ng \textbf{F}rom \textbf{T}races which provides common components to interface and map between systems and reinforcement learning, thus removing boilerplate code. We further introduce TensorForce, a highly modularized DRL library focusing on a declarative API for common algorithms, which serves as an algorithmic backend for LIFT. LIFT allows users to specify data layouts of states and action spaces which are used by TensorForce to generate TensorFlow models for executing RL tasks (\S \ref{tensorforce}). In the evaluation (\S \ref{evaluation}), we demonstrate the utility of our LIFT prototype in two experimental data management case studies. First, we use LIFT to generate a controller for automatic compound database indexing (\S \ref{traces}). Indexing is an attractive use case for RL as the optimal index set for an application depends on the complex interaction of workload, query operators within each query, data distribution, and query planner heuristics. While analytical solutions are difficult to build and vary per database and query planner, rich feedback from slow query logs enables RL controllers to identify effective solutions. Experimental results show that a LIFT-controller pretrained from imperfect rule-based demonstrations can be refined within a few hours to outperform various rule and expert baselines by up to $70\%$. We also use LIFT to learn task parallelism configurations on Heron \cite{Kulkarni2015}, a state-of-the-art stream processing engine.
Figure \ref{fig:lift-overview} illustrates LIFT's role and components. The slow query log from a database containing queries, the executed query plan, and execution statistics are read into LIFT. Via a user-defined schema and converter, LIFT interprets traces and/or provided rules as demonstrations to train an offline model. In the indexing case study, this is achieved by mapping query shape and existing indices to a state, the command required to create the index used to an action, and query performance to a reward. Traces must hence contain not only runtime performance but also corresponding configurations which can be used to reconstruct a command (action) leading to that configuration. For example, the slow query log may contain the query plan including index used, and this can be converted to the command creating that index. Schema layouts are passed to TensorForce to generate a corresponding TensorFlow graph. The states, actions, and rewards are then used to train a controller model to adopt the strategy (e.g. hand-designed rule or expert decision) behind prior indexing. Finally, LIFT is deployed in online mode to either refine indexing on an existing query set, or within a new application to replace manual tuning.
\section{Background}\label{background}
We give a brief introduction to RL with focus on practical concerns. RL is not a single optimization strategy but a class of methods used to solve the reinforcement learning problem. Informally, RL is utilized when no supervised feedback for a decision is available but reward signals indicating relative performance. For example, a cluster scheduler allocating tasks to resources may receive feedback from task completion times, but not whether a scheduling decision was optimal.
We consider the classic formulation wherein an agent interacts with an environment $\epsilon$ described by states $s \in \mathcal{S}$ and aims to learn a policy $\pi$ that governs which action $a \in \mathcal{A}$ to take in each state \cite{SuttonBarto1998}. At each discrete time step $t$, the agent takes an action $a_t$ according to its policy $\pi(a|s)$, transitions into a new state $s_{t+1}$ according to the environment dynamics, and observes a reward $r_t$ from a reward function $R(s,a)$. The goal of the agent is to maximize the expected cumulative discounted reward $\mathbb{E}[\sum_t \gamma^{t}r_t]$, where future rewards are discounted by $\gamma \in [0,1)$. State transitions and rewards are often assumed to be stochastic and to satisfy the Markov property, i.e. each transition depends only on the current state $s_t$ and action $a_t$, not on the full history.
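As a concrete illustration of the objective above, the discounted return for an observed reward sequence can be computed as follows (a minimal sketch for exposition, not part of LIFT):

```python
def discounted_return(rewards, gamma=0.99):
    """Compute the discounted cumulative reward sum_t gamma^t * r_t."""
    ret = 0.0
    for t, r in enumerate(rewards):
        ret += (gamma ** t) * r
    return ret

# Three steps of reward 1.0 with gamma = 0.5: 1 + 0.5 + 0.25 = 1.75
print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # 1.75
```

A small $\gamma$ makes the agent myopic, while $\gamma$ close to 1 weighs long-term consequences of configuration changes more heavily.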
In data management tasks, the state is typically represented as a combination of the current workload and configuration, embedded into a continuous vector space. To deal with the resulting large state spaces and generalize from seen to unseen states, RL is used in conjunction with \textit{value function approximators} such as neural networks where the expected cumulative return from taking an action $a$ in state $s$ is estimated by a function parametrized by trainable parameters $\theta$ (i.e. the neural network weights). Formally, the action-value function $Q^\pi$ is given as
\begin{align}
Q^{\pi}(s,a;\theta)=\mathbb{E}[R_t|s_t=s,a].
\end{align}
The goal of learning is to determine the optimal $Q^*(s,a)$ which maximizes expected returns. Concretely, when using Q-learning based algorithms, the neural network produces in its final layer one output per action representing its Q-value. The resulting policy is implicitly derived by greedily selecting the action with the highest Q-value while occasionally selecting random actions for exploration. Updates are performed by iteratively (over a sequence indexed by $i$) applying gradient descent to the loss~$J_i(\theta)$ \cite{MnihDQN2015}:
\begin{align}
J_i(\theta) = \mathbb{E}_{s,a\sim \pi}[(y_i - Q(s,a;\theta_i))^2]
\end{align}
with $y_i=R(s,a)+ \gamma~\max_{a'}Q(s',a';\theta_{i-1})$. Intuitively, this loss is the (squared) difference between the observed reward when taking $a$ in $s$ plus the discounted estimate of future returns from the new state $s'$, and the current estimate of $Q(s,a;\theta)$; in other words, it measures how much the Q-function has to be modified to account for observing a new reward.
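The target and loss computation can be sketched for a single transition as follows (plain Python for illustration; in practice the maximization and gradient step run inside the TensorFlow graph):

```python
def q_target(reward, next_q_values, gamma=0.99, terminal=False):
    """Q-learning target y = r + gamma * max_a' Q(s', a').

    Terminal states have no successor, so the bootstrap term is dropped.
    """
    if terminal:
        return reward
    return reward + gamma * max(next_q_values)

def q_loss(q_sa, target):
    """Squared temporal-difference error (y - Q(s, a))^2."""
    return (target - q_sa) ** 2

# Example: reward 1.0, next-state Q-values [0.5, 2.0], gamma 0.9
# gives y = 1.0 + 0.9 * 2.0 = 2.8, loss = (2.8 - 1.5)^2 = 1.69
y = q_target(1.0, [0.5, 2.0], gamma=0.9)
loss = q_loss(1.5, y)
```

The gradient of this loss with respect to $\theta$ then drives the network's Q-estimate toward the bootstrapped target.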
In Deep Q-learning as introduced by Mnih et al. \cite{MnihDQN2015}, experience tuples of the form $(s_t,a_t,r_t,s_{t+1})$ are collected and inserted into a replay buffer, and updates are performed by sampling random batches to compute gradients. Further, learning is stabilized by using a separate target network to evaluate the Q-target $y$, which is only synchronized with the training network with delay. In contrast, policy gradient (PG) methods directly update a parametrized policy function $\pi(a|s;\theta)$ such as a Gaussian or categorical distribution. This is typically (e.g. in the classical REINFORCE algorithm \cite{Williams1992}) achieved by obtaining sample estimates of current policy performance and updating $\theta$ in the direction $\nabla_\theta \log \pi(a_t|s_t;\theta)(R_t - b_t(s_t))$, where $b_t$ is a baseline reducing variance. Detailed surveys of contemporary work are given by Li and by Arulkumaran et al. \cite{li2017deep, arulkumaran2017brief}.
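The replay mechanism can be sketched in a few lines (a simplified illustration; full DQN implementations additionally maintain the delayed target network and may prioritize samples):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of (s, a, r, s', terminal) experience tuples."""

    def __init__(self, capacity=10000):
        # deque with maxlen evicts the oldest experiences automatically.
        self.buffer = deque(maxlen=capacity)

    def insert(self, state, action, reward, next_state, terminal):
        self.buffer.append((state, action, reward, next_state, terminal))

    def sample(self, batch_size):
        # Uniform random batches break the temporal correlation of
        # consecutive environment steps, stabilizing gradient estimates.
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

buffer = ReplayBuffer(capacity=100)
for i in range(5):
    buffer.insert(i, 0, 1.0, i + 1, False)
batch = buffer.sample(3)  # three random experience tuples
```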
RL approaches remain attractive due to their theoretical value proposition to learn from raw feedback. However, despite over two decades of research on RL in computer systems, practical applications remain difficult to realize due to various limitations. In the following, we discuss concrete issues before introducing LIFT.
\section{Practical issues}\label{practical}
RL algorithms are known to suffer from various limitations which we highlight here in the context of data management.
\head{Training data requirements.}
First, RL methods are notoriously sample-inefficient, and solving common benchmark tasks (e.g. Atari) in simulators can require up to $10^7$-$10^9$ problem interactions (states) when using recent approaches \cite{Espeholt2018}. In data management experiments, performing a single step (e.g. a scheduling decision) and observing its impact may take between seconds and hours (e.g. deciding on resources for a job and evaluating its runtime). Consequently, training through online interaction can be impractical for some tasks, and training in production systems is further undesirable as initial behavior is effectively random due to exploration. A common strategy to accelerate training is to train RL agents in simulation \cite{Mao2016, Mao2017}. This approach enables researchers to explore proof-of-concept experiments but also introduces the risk of making unrealistic assumptions and oversimplifying the problem domain, thus making successful simulation-to-real transfer unlikely. Some research domains have access to verified simulators (e.g. network protocols) but this is not the case for many ad-hoc problems in data management.
Another common approach is to execute online training on a staging environment or a smaller deployment of the system. For example, in their recent work on hierarchical device placement in TensorFlow \cite{hierarchical2018}, Mirhoseini et al. report that training their placement mechanism on a small scale deployment for 12.5 GPU-hours saves 265 GPU hours in subsequent training of a neural network. Here, RL was used as a direct search mechanism where the aim of training is to identify a single final configuration which is not modified later. Successful online training is further difficult if the goal of the controller is to react to unpredictable and sudden workload changes. This is because training workloads may not sufficiently cover the state space to generalize to drastic workload changes (while exploring the state space is usually possible in simulation).
\head{Hyper-parameters and algorithmic stability.}
DRL algorithms require more configuration and hyper-parameter tuning than other machine learning approaches, as users need to tune neural network hyper-parameters, the design of states, actions, and rewards, and the parameters of the reinforcement learning algorithm itself. A growing body of work in DRL attempts to address algorithmic limitations by more efficiently re-using training data, reducing variance of gradient estimates, and parallelizing training (especially in simulations) \cite{schulman2015trust, schulman2017proximal,Haarnoja2017, Espeholt2018}. Some of these efforts have recently received scrutiny as they have been shown difficult to reproduce~\cite{Henderson2017,Mania2018}, often due to the introduction of various additional hyper-parameters which again need to be tuned. This is complicated by the fact that RL algorithms are often evaluated on the task they were trained on (i.e. testing performance on the game the algorithm was trained on). RL is effectively used for optimization on a single task, and, as Mania et al. argue \cite{Mania2018}, some algorithmic improvements in recent work may stem from overfitting rather than fundamental improvements.
\head{Software tools.}
The reproducibility issues of RL algorithms are further exacerbated by a lack of standard tools. The practical successes of neural networks in diverse domains have led to the existence of widely adopted deep learning frameworks such as Google's TensorFlow \cite{abadi2016tensorflow}, Microsoft's CNTK \cite{seide2016cntk}, or Apache MXNet \cite{chen2015mxnet}. These libraries provide common operators for implementing and executing machine learning algorithms while abstracting away the complexity of directly interfacing hardware accelerators (e.g. GPUs, FPGAs, ASICs). However, RL algorithms cannot be used with similar ease, as existing research code bases primarily focus on simulation environments and thus require significant modifications to be used in practical applications. We introduce our RL library built on top of TensorFlow in \S\ref{tensorforce}.
The issues above continue to present significant obstacles in using RL. We investigate means to improve data efficiency and tooling by providing a software stack for deep RL focused on initializing controllers from pre-existing knowledge.
\section{LIFT}\label{lift}
\subsection{System overview}
We begin by giving a high level overview of our framework before discussing each component in detail. Generally, we distinguish between our algorithmic backend \textit{TensorForce}, and \textit{LIFT}, a collection of services which allow RL controllers to be deployed in different execution contexts, which we explain below (Figure \ref{fig:rl-stack}). Frameworks such as TensorFlow \cite{AbadiAgarwalBarhamEtAl2016} expose an API primarily on the abstraction level of numerical operators with an increasing number of modules containing neural network layers, optimizers, probability distributions, data set tools etc. However, currently no such modules exist within TensorFlow to expose RL functionality via similar APIs. TensorForce fills this gap by providing a unified API to a set of standard RL algorithms on top of TensorFlow.
The main abstractions LIFT operates on are RL models and system models. A model maps RL agent output to system actions (e.g. configuration changes), or system metrics to RL agent inputs (e.g. parsing log entries into states, actions, and rewards). LIFT's primary purpose is to facilitate RL usage in new systems by providing commonly used functionality pertaining to model serialization and evaluation, and further by defining system data layouts and automatically mapping them to the respective TensorFlow inputs and outputs. LIFT uses TensorForce as its backend in our example implementation but is independent of both TensorForce and TensorFlow, so as to be able to use any RL implementation providing a minimal common API. In the following, we discuss the design of TensorForce.
\begin{figure}[t]
\centering
\includegraphics[scale=.4]{lift_stack_paper.pdf}
\caption{LIFT stack for applied RL.}
\label{fig:rl-stack}
\vspace{-5mm}
\end{figure}
\subsection{TensorForce}\label{tensorforce}
Deep reinforcement learning is a rapidly evolving field and few standards exist with regard to usage outside controlled simulations. Various open source libraries such as OpenAI baselines \cite{openaibaselines}, Nervana coach \cite{nervanacoach}, or Ray Rllib \cite{Liang2017} exist. They are tightly coupled with simulation environments such as OpenAI gym \cite{brockman2016openai} which provide unified interfaces to tasks for evaluating and comparing algorithms. In our experiments, we have found these research frameworks to be difficult to deploy in practical use cases for two additional reasons.
First, open source reinforcement learning libraries frequently rely on fixed neural network architectures. For example, the code we analyzed typically created network output layers for actions based on descriptors provided by simulations only supporting restricted actions (e.g. only either discrete or continuous actions per step, but not both). Substantial code modifications are required to support multiple separate types of actions (tasks) per step. This is because the purpose of these reference implementations is primarily to reproduce research results on a particular set of benchmark tasks, as opposed to providing configurable, generic models. Second, as discussed in \S \ref{practical}, recent RL methods incorporate various optimization heuristics to help training efficiency and stability, thus increasing the number of tunable parameters. We found existing code bases to attempt reducing complexity by hard-coding heuristics of which users may be unaware. For example, one of the implementations we surveyed internally smoothes state vectors via an exponentially moving average, and clips reward values without documenting or exposing this feature. We hence introduce TensorForce, a general purpose DRL library which exposes a well-defined declarative interface to creating and transparently configuring state-of-the art algorithms.
\head{Design.}
Our aim is to give a unified interface to specify a decision model by describing its inputs and outputs without any restriction on the number and type of different inputs (states) or outputs (actions). Further, the specification contains the model to construct, network layers to use, and various further options to be applied such as exploration, input preprocessing (e.g. normalization or down-sampling) and output post-processing (e.g. noise), and algorithm-specific options such as memory size.
TensorForce is built on two principles: First, users should not be required to modify any library code to express their problem dynamics, as is often the case in current open source code, thus necessitating expressive configurations. Second, reinforcement learning use cases may drastically differ in design, e.g. environments may present continuous learning or episodic problems, algorithms may use memories to incorporate old experiences, or just learn from new observations. However, most of this arising complexity can be deterministically (depending on the model selected) handled internally. Consequently, we provide a unified API for all model and agent variants with just two methods at its core, one to request new actions for given states, one to observe rewards and notify the model of terminal states. Updates to the model are implicitly triggered according to configurations.
The advantage to our approach is that practitioners can explore different RL paradigms in their applications simply by loading another configuration without the need to modify application code (e.g. to explicitly trigger certain updates or model-specific events), or library code. The code is available open source under \url{https://github.com/reinforceio/tensorforce}.
\head{Features.}
TensorForce implements both classical algorithms serving as an entry point for practitioners as well as newer methods, which we briefly describe. From the family of Q-learning algorithms, our library implements the original deep Q-learning \cite{MnihDQN2015}, double deep Q-learning \cite{double_dqn}, normalized advantage functions for continuous Q-learning \cite{GuLillicrapSutskeverEtAl2016}, n-step Q-learning \cite{MnihBadiaMirzaEtAl2016}, and deep Q-learning from demonstrations incorporating expert knowledge \cite{Hester17}.
Further, we provide classic policy gradients (REINFORCE) \cite{Williams1992}, trust region policy optimization \cite{schulman2015trust}, and proximal policy optimization (PPO) \cite{schulman2017proximal} from the spectrum of policy-based methods, which all support categorical, continuous and bounded action spaces. It is worth pointing out that many new algorithms only modify classic Q-learning or policy gradients by slightly changing the loss functions, and implementing them only requires a few lines of code on top of existing TensorForce components.
\begin{lstlisting}[float,caption={Agent API example},belowskip=-6mm,label=li:tensorforce,moredelim={[is][emphstyle]{@@}{@@}}]
from tensorforce.agents import PPOAgent
# Create a Proximal Policy Optimization agent
agent = PPOAgent(
states=dict(type='float', shape=(10,)),
actions=dict(
discrete_action=dict(type='int', num_actions=10),
binary_action=dict(type='bool')
),
network=[
dict(type='dense', size=64),
dict(type='dense', size=64)
],
step_optimizer=dict(
type='adam',
learning_rate=1e-4
),
execution=dict(type='single'),
states_preprocessing= [dict(type='running_standardize')]
)
# Connect to a client
client = DBClient(host='localhost',port=8080)
while True:
# Poll client for new state, get prediction, execute
action = agent.act(state=client.get_state())
reward = client.execute(action)
# Observe feedback
agent.observe(reward=reward, terminal=False)
\end{lstlisting}
\head{Example usage.} We illustrate how users might interact with the API in Listing \ref{li:tensorforce}. Developers specify a configuration containing at least a network specification and a description of state and action formats. Here, a single state with 10 inputs and two separate actions per step, one boolean and one discrete with 10 options, are specified. Single-node execution is chosen, and incoming states are normalized via a state preprocessor. Crucially, a large number of commonly used heuristics is both optional and transparently configurable.
Next, a PPO (a state-of-the-art policy optimization method, e.g. used in OpenAI's recent work on DOTA \cite{openaidota}) agent is created using the configuration, and a client is instantiated to interact with an example remote system which we desire to control. The agent can now be used by retrieving new state signals from the client, which needs to map system state (e.g. load) to inputs, and requesting actions from the agent. The client must implement these actions by mapping numerical representations such as the index of a discrete action to a change in the system. Finally, the client passes the observed reward back to the agent as feedback. The agent will automatically trigger updates to the underlying TensorFlow graph based on algorithm semantics, e.g. episode-based, batch-based, or time-step based.
Developers are thus freed from dealing with low-level semantics of deep learning frameworks and can concentrate on mapping their system to inputs, rewards and actions. By changing a few lines in the configuration, algorithm, data collection, learning, or neural network logic can be fine-tuned. Finally, the JSON configurations can be conveniently passed to auto-tuners for hyper-parameter optimization.
\subsection{LIFT}
LIFT uses the declarative agent API and a small set of reusable components to realize three different execution modes which we describe in this section.
\head{Pretraining.} In pretraining mode, LIFT does not interact with a system but is provided with a trace data source such as a comma separated file, a database table, or a distributed file system. LIFT parses and maps these to demonstrations (described in detail in section \ref{traces}), creates an RL agent supporting pretraining, and imports data. It then executes and monitors pretraining through evaluators, i.e. by validating model performance, and finally by serializing the model.
\head{Agent-driven.} In agent-driven or \textbf{active execution}, LIFT alternates between interacting with the system (i.e. the environment) and the RL agent via the TensorForce API. Here, execution time is almost exclusively governed by waiting on the environment, as we show in \S\ref{evaluation}. The RL libraries we surveyed typically only offer agent-driven execution (e.g. OpenAI baselines) where this execution is tightly coupled with reinforcement learning logic. This is because training common simulation tasks such as the Arcade Learning Environment \cite{Bellemare2013} can be effectively parallelized to hundreds of instances due to marginal computational requirements per simulator process. These highly parallel training procedures are economically impractical for users without data center scale resources, as learning to control data processing systems requires significant I/O and compute.
\head{Environment-driven.} In environment-driven execution or \textbf{passive execution}, LIFT acts as a passive service as control flow is driven by external workload, e.g. a benchmark suite executed against a database. For example, LIFT may open up a websocket or RPC connection to a monitoring service to receive real-time performance metrics. The LIFT controller then continuously maps incoming metrics to states, passes them to the agent, and executes the necessary configuration changes on the system. Passive execution is primarily intended for deployment of trained models which can optionally perform incremental updates. All execution modes share a common set of components which users need to implement for their given system to facilitate the parsing and serialization overhead necessary to interface a system.
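The passive control flow just described can be sketched as follows (stub components for illustration; names such as `EchoAgent` and `passive_loop` are ours and not part of the LIFT API):

```python
class EchoAgent:
    """Stub agent: returns a fixed action and records observed rewards."""

    def __init__(self):
        self.rewards = []

    def act(self, state):
        return 0  # a real agent would query its policy here

    def observe(self, reward, terminal):
        self.rewards.append(reward)

def passive_loop(metrics_stream, to_state, to_reward, agent, apply_change):
    """Environment-driven mode: an external workload pushes metrics,
    and the controller reacts once per incoming event."""
    for metrics in metrics_stream:
        action = agent.act(to_state(metrics))
        apply_change(action)  # execute configuration change on the system
        agent.observe(reward=to_reward(metrics), terminal=False)

agent = EchoAgent()
applied = []
passive_loop(
    metrics_stream=[{"latency": 12.0}, {"latency": 8.0}],
    to_state=lambda m: [m["latency"]],
    to_reward=lambda m: -m["latency"],   # lower latency -> higher reward
    agent=agent,
    apply_change=applied.append,
)
```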
First, a \textbf{schema} is used to programmatically construct the layouts of states, actions and rewards. For example, in our compound indexing case study, the input size to the neural network depends on the number of available query operators and unique fields in the database. In our experience, successful application of RL initially requires frequent exploratory iterations over different state and action layouts. In LIFT, this is reflected by users implementing multiple exchangeable schemas. Downstream components for the execution modes use a schema to infer shape and type information.
Next, users implement a \textbf{model converter} as the central component for translating between RL model and controlled system via a small set of methods called throughout LIFT to i) map system output to agent states and agent actions (for pretraining), ii) map system output to rewards, and iii) map agent output to system configuration changes. LIFT's generic components for each execution mode then use converters to deserialize and parse log traces, and to perform offline (pretraining) and online (agent- or environment-driven) training.
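A converter for the indexing task might expose an interface of roughly the following shape (hypothetical method names and weights; the actual LIFT interface may differ):

```python
class ModelConverter:
    """Translates between system-level data and RL states/actions/rewards."""

    def system_to_state(self, log_entry):
        """Map a parsed log entry (query shape plus existing indices) to a state."""
        raise NotImplementedError

    def system_to_action(self, log_entry):
        """Reconstruct the action (index-creation command) behind a logged
        configuration, used for pretraining from demonstrations."""
        raise NotImplementedError

    def system_to_reward(self, runtime_ms, index_memory_mb, w1=0.5, w2=0.5):
        """Map runtime and memory statistics to a scalar reward
        (negative weighted combination, as in the indexing case study)."""
        return -w1 * index_memory_mb - w2 * runtime_ms

    def agent_to_system(self, action):
        """Map agent output (e.g. an integer) to a configuration change."""
        raise NotImplementedError

converter = ModelConverter()
reward = converter.system_to_reward(runtime_ms=10.0, index_memory_mb=2.0)  # -6.0
```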
We summarize the idea behind LIFT as motivated by two observations. First, unlike common RL simulation tasks, controlling data processing systems requires separation of environment and RL agent due to different resource needs and communication patterns (e.g. access to system metrics through RPC or other protocols). Second, using RL in practical contexts currently requires a large amount of boiler-plate code as no standard tools are available. LIFT enables researchers to focus on understanding their state, action and reward semantics and express them in a schema and system model, which generate the respective TensorFlow graphs via the TensorForce API. In the following section, we explain the pretraining process on the indexing case study.
\head{Implementation.} We implemented our LIFT prototype in $\approx$10000 lines of Python code which includes components for our example case studies. In this work, no low-latency access is required (e.g. for learning to represent data structures as described by Kraska et al. \cite{Kraska2017}) but we may implement a C++ serving layer in future case studies.
\section{Learning from traces}\label{traces}
\subsection{Problem setup}
We now illustrate the use of LIFT in an end-to-end example based on our compound database indexing application. In database management, effective query indexing strategies are crucial for meeting performance objectives. Index data structures can accelerate query execution times by multiple orders of magnitude by providing fast look-ups for specific query operators such as range comparisons (B-trees) or exist queries (Bloom filters). A single index can span multiple attributes, and query planners employ a wide range of heuristics to combine existing indices at runtime, e.g. by partial evaluation of a compound (multi-attribute) index. Determining optimal indices is complicated by space usage, maintenance cost, and the fact that indexing decisions cannot be made independently of runtime statistics, as index performance depends on attribute cardinality and workload distribution. In practice, indices are identified using various techniques ranging from offline tool-assisted analysis \cite{dbtuningsqlserver2005, Chaudhuri1998, Dageville2004} to online and adaptive indexing strategies \cite{Graefe2010, Idreos2011, Halim2012,Petraki2015}. Managed database-as-a-service (DBaaS) offerings sometimes offer a hybrid approach where indices for individual attributes are automatically created but users need to manually create compound indices.
We study MongoDB as a popular open source document database where data is organized as nested J/BSON documents. While a large body of work exists on adaptive indexing strategies for relational databases and columnar stores \cite{Petraki2015}, compound indexing in document databases has received less attention. Document databases are offered by all major cloud service providers, e.g. Microsoft's Azure CosmosDB offers native MongoDB support \cite{cosmosdb}, Amazon's AWS offers DynamoDB \cite{dynamodb}, and Google Cloud provides Cloud Datastore \cite{googlecloudstore}. The document database services we surveyed offer varying specialized query operators, index design, and query planners using different indexing heuristics. The aim of automatic online index selection is to omit this operational task from service users. We initially focus on common query operators available in most query dialects, as we plan to extend our work to other database layouts and query languages. Table \ref{operator-table} gives an operator overview. In MongoDB, queries themselves are nested documents.
\subsection{Modeling indexing decisions}
The MongoDB query planner uses a single index per query with the exception of $\$or$ expressions, where each sub-expression can use a separate index. An index may span between $1$ and $k$ schema fields and is specified via an ordered sequence of tuples $(f_1, s_1),\ldots,(f_k, s_k)$, where each tuple consists of a field name $f_i$ and a sort direction $s_i$ (ascending or descending). At runtime, the optimizer will use a number of heuristics to determine the best index to use.
Via index intersection, the optimizer can also partially utilize existing indices to resolve queries. For example, \textit{prefix intersection} means that for any index sequence of length $k$, the optimizer can also use any ordered prefix of length $1..k-1$ to resolve queries which do not contain all $k$ attributes in the full index. Consequently, while the tuple ordering of the index does not typically matter for individual queries, the number of indices for the entire query set can be drastically reduced if index creation considers potential prefix intersections with other queries. Similarly, sort-ordering in indices can be used to sort query results via sort intersection in case of matching sort patterns. For example, an index of the shape $[(f_1, ASC), (f_2, DESC)]$ can be used to sort ascending/descending and descending/ascending (i.e. inverted) sort patterns, but not ascending/ascending or descending/descending. Based on these indexing rules, we define the following state, action, and reward model.
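Prefix intersection can be illustrated with a simple comparison of ordered key sequences (a sketch of the rule for exposition, not MongoDB's actual planner logic):

```python
def prefix_covers(index_keys, query_attributes):
    """Return True if some ordered prefix of the index covers exactly
    the queried attributes.

    index_keys: ordered list of (field, direction) tuples,
                e.g. [("name", 1), ("age", -1)] with 1/-1 for ASC/DESC.
    query_attributes: set of attribute names referenced by the query.
    """
    prefix = []
    for field, _direction in index_keys:
        prefix.append(field)
        if set(prefix) == query_attributes:
            return True
    return False

index = [("name", 1), ("age", -1), ("city", 1)]
covered = prefix_covers(index, {"name", "age"})      # True: (name, age) is a prefix
not_covered = prefix_covers(index, {"age", "city"})  # False: not an ordered prefix
```

This is why key order matters across the whole query set: a single well-ordered compound index can replace several single-attribute indices.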
\begin{table}[t]
\caption{MongoDB basic operator overview.}
\label{operator-table}
\centering
\begin{tabular}{ll}
\toprule
Operators & MongoDB operator \\
\midrule
$=$,$ >$, $\geq$, $<$, $\leq$, not in & $\$eq$, $\$gt$, $\$gte$, $\$lt$,$\$lte$, $\$nin$ \\
and, or, nor, not & $\$and$ , $\$or$, $\$nor$, $\$not$ \\
limit, sort, count & $count()$, $limit(n)$, $sort(keys)$ \\
\bottomrule
\end{tabular}
\end{table}
\head{States.} Identifying the correct index for a query requires knowledge of the query shape, e.g. its operators and requested attributes. To leverage intersection, the state must also contain information on existing indices which could be used to evaluate a query. We parse queries via a tree-walk, strip concrete values from each sub-expression, and only retain a sequence of operators and attributes. If an index already exists on an attribute, we insert an additional token after the respective attribute to enable the agent to learn about index intersection and avoid adding unnecessary indices. For example, consider the following simple query counting entries with name "Jane":
\begin{alltt}
collection.find(\{\$eq: \{name: "Jane"\}\}).count()
\end{alltt}
Assuming an ascending index on the \textit{name} field already exists, the tokenized query looks as follows (with EOS representing the end-of-sentence):
\begin{alltt}
[\$eq name IDX_ASC count EOS]
\end{alltt}
These tokens are then converted to integers using a word embedding as commonly used in natural language processing applications to map a discrete set of words to a continuous vector space \cite{Mikolov2013}. In practice, a maximum fixed input length is assumed and shorter inputs are padded with zeros.
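The tokenization and padding step can be sketched as follows (a minimal illustration; the actual model additionally learns an embedding over these integer ids):

```python
def tokens_to_ids(tokens, vocabulary, max_length=8):
    """Map query tokens to integer ids and pad to a fixed input length.

    Unknown tokens are added to the vocabulary on the fly;
    id 0 is reserved for padding.
    """
    ids = []
    for token in tokens:
        if token not in vocabulary:
            vocabulary[token] = len(vocabulary) + 1  # 0 reserved for padding
        ids.append(vocabulary[token])
    return ids + [0] * (max_length - len(ids))

vocab = {}
sequence = tokens_to_ids(["$eq", "name", "IDX_ASC", "count", "EOS"], vocab)
# sequence == [1, 2, 3, 4, 5, 0, 0, 0]
```

The resulting fixed-length integer vector is what the embedding layer of the network consumes as a state.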
\head{Actions.}
For every query we seek to output an index (or none)
spanning at most $k$ attributes where $k$ is a small number as indices covering more than 2-4 attributes are rare in practice. This is also because compound indices containing arrays, which require multi-key indices (each array element indexed separately), scale poorly and can slow down queries. Additionally, as discussed above, index intersection makes indices order- and sort-sensitive, thus requiring to also output a sort order per attribute in a multi-key index.
The action scheme should scale independently of the number of attributes in the document schema. Consider a combinatorial action model where the agent is modelled with one explicit action per attribute, and a separate action output per possible index-key. A 3-key index task on 10 attributes would already result in thousands of action options per step ($10^3 \times 3=3000$) when including an extra action for the three possible sort patterns (both ascending/descending, descending-ascending, ascending-descending). This approach would not generalize to changing schemas or data sets. We propose a positional action model wherein the number of actions is linear in $k$. When receiving a query, we extract all query attributes and interpret an integer action as creating an index on the $i$th input attribute, thus allowing the agent to learn the importance of key-order for prefix intersection. To distinguish sort patterns, we create an extra action per key (one ascending, one descending, with ascending as default). This results in $1+2k$ actions for a $k$-key index, with one output reserved for no-op.
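One possible decoding of such positional actions, consistent with the example in Figure \ref{fig:index-action-model}, can be sketched as follows (the exact integer encoding is our assumption for illustration):

```python
def decode_action(action, query_attributes):
    """Decode one positional action integer into an index key or None (no-op).

    Assumed encoding: 0 -> no-op; odd values -> ascending index on the
    attribute at 1-based position (action + 1) // 2; even values ->
    descending index on the same position.
    """
    if action == 0:
        return None
    position = (action + 1) // 2               # 1-based input attribute position
    direction = 1 if action % 2 == 1 else -1   # 1 = ascending, -1 = descending
    return (query_attributes[position - 1], direction)

# A query on (name, age) with agent output (3, 0): action 3 means an
# ascending index on the second attribute, action 0 means no second key.
attributes = ["name", "age"]
index_keys = [decode_action(a, attributes) for a in (3, 0)]
# index_keys == [("age", 1), None]
```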
\begin{figure}[t]
\centering
\includegraphics[scale=.7]{index_action_model.pdf}
\caption{State and action parsing scheme for the indexing case study.}
\label{fig:index-action-model}
\vspace{-5mm}
\end{figure}
Figure \ref{fig:index-action-model} illustrates state and action parsing for $k=2$ and a simple query on \textit{name} and \textit{age} attributes. In the example, the \textit{name} field is already indexed so when the query is tokenized, a special index token ($IDX\_ASC$) is inserted to indicate the existing index. The tokenized sequence is mapped to integers via the embedding layer and passed through the network, which outputs $k$ integer actions. In the example, the agent decides to implement one additional single-key index by outputting 3 and 0, where 3 implies an ascending index on the second input attribute, and 0 is used for no-op if fewer than $k$ keys are required in the index.
\head{Rewards.}
The optimal indexing strategy is the minimal set of indices $\mathcal{I}$ meeting performance level objectives such as mean latency or 90th and 99th latency percentiles for a set of queries $\mathcal{Q}$. Let $t(q)$ be the time to execute a query $q \in \mathcal{Q}$ under an index set $\mathcal{I}$, and let $m(\mathcal{I})$ be the memory usage of the current index set. We set the reward $r(q)$ as the negative weighted combination of these, to allow expressing trade-offs between memory usage and runtime requirements:
$r(q) = -\omega_1 m(\mathcal{I}) - \omega_2 t(q)$.
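The reward is computed directly from the measured index set size and query runtime; the weight values below are placeholders for the trade-off parameters $\omega_1, \omega_2$:

```python
def index_reward(index_size, query_time, w_mem=0.5, w_lat=0.5):
    # r(q) = -w1 * m(I) - w2 * t(q): penalize both memory use and runtime
    return -w_mem * index_size - w_lat * query_time
```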
\subsection{Demonstrations and Pretraining}
We now describe the ideas behind learning from demonstrations as used in LIFT. Our approach is motivated by the observation that a human systems developer encountering a tuning problem can frequently use their expertise to come up with an initial heuristic. For example, in the indexing problem, a database expert can typically determine an effective configuration for a given application within a reasonable time frame (e.g. a few hours) with access to profiling tools. Distilling this intuitive expertise into a fully automated approach is difficult, and simple heuristics may perform well in small scenarios but fail at scale. Moreover, as discussed in \S \ref{practical}, training a RL model from scratch is expensive and difficult, while refining a model pretrained from not necessarily fully correct demonstrations may be more effective. We hence argue for an approach that leverages pre-existing domain knowledge by initializing training from demonstrations.
\head{Demonstration data.} In the indexing task, demonstrations may exist in the form of:
\begin{enumerate}
\item Query logs from applications configured by a database administrator, where indices are assumed to be correct; correctness implies fully meeting service level objectives (not necessarily being optimal).
\item Query logs from applications where indices were created using any heuristic understood to be sub-optimal and not necessarily meeting service objectives.
\item Queries and index pairs for which no runtime data is available, e.g. procedurally generated examples with either manually or heuristically chosen index recommendations (both correct and imperfect).
\end{enumerate}
The key difference between (1) and (2) is that when encountering a query for which an imperfect demonstration was available during pre-training, we do not mind testing other choices, while this is unnecessary if a demonstration was optimal for the given query. This difference in confidence must be reflected in the pretraining procedure. Further, the difference between (1), (2) and (3) is that in the latter, no reward is available without creating indices and measuring queries. Note that the key difference between demonstrations and simulation in our applications is the absence of information on system dynamics (i.e. state transitions).
A simulator for query indexing would provide insights into how addition and removal of an index affects performance. In contrast, a demonstration extracted from the slow query log of a database indicates how fast a query performed using the index chosen by the query planner, but not how much faster the index was versus not using an index, or a different index. We make use of all demonstration types but focus on (2) and (3), as we could not obtain existing traces from expert-configured systems and thus had to manually tune configurations.
\head{Algorithm.}
Hester et al. have described an algorithm to perform deep Q-learning from such expert demonstrations (DQfD) using the example of Atari games \cite{Hester17}. In their work, an agent is trained until it reaches sufficient performance, and games played by that agent are then given as demonstrations to a new agent. DQfD works by assigning an 'expert margin' to demonstration actions, extending double Q-learning \cite{double_dqn}, a Q-learning variant which corrects biased Q-estimates in the original DQN by decoupling action selection and action evaluation.
Specifically, the double DQN loss
\begin{align}
J_{DQ}(Q)= (R(s,a) +\gamma Q(s_{t+1},a^{max}_{t+1};\theta')- Q(s,a;\theta))^2
\end{align}
where
\begin{align}
a^{max}_{t+1} = \operatorname*{argmax}_a Q(s_{t+1},a;\theta)
\end{align}
uses the target network (as explained in \S \ref{background}, parametrized by $\theta'$) to evaluate the action selected using the training network (with parameters $\theta$). This is combined with another expert loss function $J_E$:
\begin{align}
J_E(Q) =\max_{a \in A}[Q(s, a) + l(s, a_E, a)] - Q(s,a_E)
\end{align}
Here, $l(s,a_E,a)$ is a function which outputs 0 for the expert action, and a margin value $>0$ otherwise. We convey the intuition of this loss function by recalling the action selection mechanism in Q-learning.
Recall that $Q(s,a;\theta)=\mathbb{E}[R_t|s_t=s,a]$, i.e. the expected return from taking action $a$ in state $s$. At each step, the neural network (parameterized by $\theta$) used to approximate $Q$ outputs Q-values for all available actions, and the action with the highest Q-value is selected. By adding the expert margin to the loss of Q-values of \textit{incorrect actions}, the agent is biased towards the expert actions, as a difference of at least the margin between expert actions and other actions is enforced \cite{Piot2014}. The DQfD agent keeps a separate memory of these expert demonstrations, which is first used to pretrain the agent and then combined with new online experiences at runtime so that the agent keeps being 'reminded' of the demonstrations.
What does the choice of $l(s,a_E,a)$ imply for imperfect or noisy demonstrations? First, a large margin makes it difficult to learn about better actions in a given state: even if, via exploration, a different action is selected and yields a higher return, an update may not change the Q-value of the better action beyond the margin. Second, the DQfD loss only enforces a difference in Q-values between the demonstrated action and all other actions; no assumptions are made about the relationship between non-expert actions (e.g. second highest, third highest Q-value). This behavior is desirable in the indexing example because even semantically similar indices (e.g. different order, partially covering the same fields) can result in much worse performance than the demonstrated index, so we initially do not want to express any preference among non-demonstrated indices. Consequently, we choose a very small margin $\leq 0.1$, which in practice results in a pretrained model that initially only slightly favors the demonstrated action.
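The two loss components above can be sketched in NumPy. This is an illustrative reimplementation of the formulas, not the DQfD reference code:

```python
import numpy as np

def double_dqn_target(reward, next_q_online, next_q_target, gamma=0.99):
    # Select the next action with the online network, but evaluate it
    # with the target network (decoupling selection from evaluation)
    a_max = int(np.argmax(next_q_online))
    return reward + gamma * next_q_target[a_max]

def expert_margin_loss(q_values, expert_action, margin=0.1):
    # l(s, a_E, a) is 0 for the expert action and `margin` otherwise;
    # J_E = max_a [Q(s, a) + l(s, a_E, a)] - Q(s, a_E)
    l = np.full_like(q_values, margin)
    l[expert_action] = 0.0
    return float(np.max(q_values + l) - q_values[expert_action])
```

Note that when the expert action already has the highest Q-value by at least the margin, $J_E$ is zero, so the demonstration no longer contributes gradient; a small margin therefore only weakly biases the model.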
\begin{algorithm}
\begin{algorithmic}
\State Initialize $agent$ with demo-model and demo-data $D$
\State Initialize LIFT $system\_model$, $model\_converter$
\State Load application queries $\mathcal{Q}_{test}$
\State // Fixed time budget or until objectives met
\For{ $i=1, N$}
\State $\mathcal{I}_{test} \gets \emptyset$, clear index set in DB
\For{ $q$ in $\mathcal{Q}_{test}$}
\State // Tokenize, include existing indices
\State $s(q) \gets model\_converter.to\_agent\_state(q, \mathcal{I}_{test})$
\State $index \gets agent.act(s(q))$
\State // Create index, execute query
\State $m(\mathcal{I}_{test}) \gets system\_model.act(index)$
\State $t(q) \gets system\_model.execute(q)$
\State // Compute reward from runtime and size
\State $r_q \gets -\omega_1 m(\mathcal{I}_{test}) - \omega_2 t(q)$
\State $agent.observe(r_q)$
\State Add $index$ to $\mathcal{I}_{test}$
\EndFor
\EndFor
\State // Final evaluation, create best $\mathcal{I}_{test}$:
\State // Measure final size $m(\mathcal{I}_{test})$, run $\mathcal{Q}_{test}$
\end{algorithmic}
\caption{Online training procedure.}
\label{training-procedure}
\end{algorithm}
\subsection{Putting it all together}
Algorithm \ref{training-procedure} shows pseudo-code for the online training procedure. Following pre-training on the demonstration data set, we start LIFT in online mode, initialize an agent with the demo model, and load the demo data. We then begin the episodic training procedure on a new set of queries $\mathcal{Q}_{test}$ we want to index. In each training episode, all indices are first removed from the database. Then, each query $q$ (sorted by length to improve intersection) is tokenized and the suggested index created. Recall that the tokenization includes the current index set $\mathcal{I}$ for the agent to learn the impact of existing indices. The size of the index set $m(\mathcal{I})$ and the runtime of the query $t(q)$ are used to inform the reward of the agent. For direct search tasks like indexing, we keep the list of index tuples associated with the highest reward during training. In the final evaluation, we recreate these indices and then run all queries 5 times on the full index set. For dynamic tasks where the agent is invoked repeatedly at runtime, we simply export the trained model which can then be used to control a system.
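Algorithm \ref{training-procedure} translates almost directly into Python. The \texttt{agent}, \texttt{system\_model} and \texttt{model\_converter} interfaces below are hypothetical stand-ins for the LIFT components, sketched under the assumption that index size and query runtime are returned by the system calls:

```python
def online_training(agent, system_model, model_converter, queries,
                    episodes, w_mem=0.5, w_lat=0.5):
    """Episodic online refinement; returns the best index set found."""
    best_reward, best_indices = float("-inf"), []
    for _ in range(episodes):
        system_model.drop_all_indices()        # clear index set in DB
        index_set, episode_reward = [], 0.0
        for q in sorted(queries, key=len):     # sort by length to improve intersection
            state = model_converter.to_agent_state(q, index_set)  # tokenize
            index = agent.act(state)
            size = system_model.create_index(index)   # m(I)
            runtime = system_model.execute(q)         # t(q)
            reward = -w_mem * size - w_lat * runtime
            agent.observe(reward)
            index_set.append(index)
            episode_reward += reward
        # Keep the index tuples associated with the highest episode reward
        if episode_reward > best_reward:
            best_reward, best_indices = episode_reward, list(index_set)
    return best_indices
```

The returned index set is what the final evaluation recreates before running all queries on the full index set.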
\section{Evaluation}\label{evaluation}
\subsection{Aims}
We evaluate our LIFT prototype through two case studies: 1) the indexing case study, in which we minimize latency and memory usage by learning compound index combinations, and 2) the stream processing resource management case study, in which we tune latency by setting parallelism levels under a varying workload. In both case studies, we used LIFT to implement a controller, manage demonstration data, and interact with the system. The difference is that the indexing task is an offline optimization (the index set is determined once, then deployed), while in the stream processing task we use a controller at runtime to react to varying workloads. The evaluation focuses on the utility of LIFT and TensorForce for solving data management tasks, and on understanding the impact of learning from demonstrations to overcome long training times.
\newcommand{.24}{.24}
\begin{figure*}[ht]
\centering
\begin{subfigure}[t]{.24\textwidth}
\includegraphics[ clip, scale=.24]{final_test_mean_latency.pdf}
\caption{\label{fig:latency-mean} Mean latency.}
\end{subfigure}
\begin{subfigure}[t]{.24\textwidth}
\includegraphics[ clip, scale=.24]{final_test_90_latency.pdf}
\caption{\label{fig:latency-90} 90th pct. latency.}
\end{subfigure}
\begin{subfigure}[t]{.24\textwidth}
\includegraphics[ clip, scale=.24]{final_test_99_latency.pdf}
\caption{\label{fig:latency-99}99th pct. latency.}
\end{subfigure}
\begin{subfigure}[t]{.24\textwidth}
\includegraphics[ clip, scale=.24]{final_test_index_sizes.pdf}
\caption{\label{fig:index-size} Normalized index size.}
\end{subfigure}
\\
\begin{subfigure}[t]{.24\textwidth}
\includegraphics[clip,scale=.24]{pretrain_accuracy.pdf}
\caption{\label{fig:pretraining-curve} Pretraining accuracy.}
\end{subfigure}
\begin{subfigure}[t]{.24\textwidth}
\includegraphics[ clip, scale=.24]{training_reward.pdf}
\caption{\label{fig:online-curve} Online adaption rewards.}
\end{subfigure}
\begin{subfigure}[t]{.24\textwidth}
\includegraphics[ clip, scale=.24]{overlap_latency_histogram.pdf}
\caption{\label{fig:latency-overlap}Per-query latency.}
\end{subfigure}
\begin{subfigure}[t]{.24\textwidth}
\includegraphics[ clip, scale=.24]{index_breakdown.pdf}
\caption{\label{fig:index-breakdown}Per query index keys.}
\end{subfigure}
\caption{Performance evaluation on the IMDB data set.}
\end{figure*}
\subsection{Compound indexing}
\head{Setup.}
We evaluate the indexing task both on a real-world dataset (IMDB \cite{imdb}) and using synthetic queries and data. The synthetic query client is based on the YCSB benchmark \cite{CooperSilbersteinTamEtAl2010}. YCSB generates keys and synthetic values to evaluate cloud database performance via a set of common workload mixtures but has no provisions for complex queries. We implemented a YCSB-style client and workload generator targeting secondary indexing. The client is configured with a schema containing attribute names and types. The workload generator receives a query configuration containing valid query operators, maximum allowed operator degrees, query height, and distribution of aggregation operators. It can then generate a specified number of queries and index suggestions based on provided rules. All experiments were run on a variety of commodity server class machines (e.g. 24 cores (Xeon E5-2600) and 198 GB RAM) and using MongoDB v3.6.4.
Queries and demonstrations are imported into LIFT's pretrain-controller, which instantiates a DQfD agent and parses queries and demonstrations to states and actions as described before. We then run a small number of pretraining steps until the agent has approximately adopted the rule. We use batch sizes of 32 queries on a neural network with 1 embedding layer and 1 dense layer with 128 neurons each. Learning is executed using an adaptive moment optimizer (Adam \cite{KingmaBa2014}) with a learning rate of $0.0005$, and an expert margin of $0.1$. To refine the pretrained model, we restart LIFT in online mode and train as described in Algorithm 1. The maximum number of attributes per index was $k=3$.
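For reference, the reported hyperparameters can be summarized as a configuration sketch; the key names are illustrative and do not claim to match the exact TensorForce configuration schema:

```python
# Hypothetical pretraining configuration mirroring the reported settings
pretrain_config = {
    "batch_size": 32,                  # queries per update
    "network": [
        {"type": "embedding", "size": 128},
        {"type": "dense", "size": 128},
    ],
    "optimizer": {"type": "adam", "learning_rate": 0.0005},
    "expert_margin": 0.1,              # DQfD margin for demonstrated actions
    "max_index_keys": 3,               # k, attributes per compound index
}
```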
\head{Indexing baselines.}
We consider human and rule-based baselines due to a lack of standard tools for automatic indexing in document stores. The first rule-based strategy we use to generate demonstrations is full indexing (\textit{Full} in the following), wherein we simply create a compound index covering all fields in a query (respecting its sort order), thus ensuring an index exists for every query. In the synthetic regime, where query shapes are re-sampled every experiment to evaluate generalization to different query sets, human baselines were uneconomical, and we experimented with other rule-based baselines. Partial indexing (\textit{Partial} hereafter) attempts to avoid unnecessary indices by considering existing indices on any attribute in a query, and only covering unindexed fields. Note that we do not claim these to be the most effective heuristics but merely initial guidance for a model. We also experimented with a rule based on operator and schema hints but this frequently did not perform well due to unforeseen edge cases. We refer to the following modes in the evaluation: no indexing (\textit{Default}), online learning from scratch without pretraining (\textit{Online}), pretraining without online refinement (\textit{Pretrain}), online learning following pretraining (\textit{Pretrain+Online}), human expert (\textit{Human}), and the two baselines described above.
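The two rule-based demonstration strategies can be sketched as follows; this simplified version ignores sort order and represents an index as a tuple of field names:

```python
def full_index(query_fields, existing_indices):
    # Full: one compound index covering every field in the query
    return tuple(query_fields)

def partial_index(query_fields, existing_indices):
    # Partial: cover only the fields not already covered by any existing index
    covered = {f for idx in existing_indices for f in idx}
    return tuple(f for f in query_fields if f not in covered)
```

Partial indexing thus relies on index intersection at query time, which is exactly the behaviour the learned agent must weigh against creating dedicated compound indices.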
\head{Basic behaviour.}
We first show results on the publicly available internet movie database (IMDB) datasets \cite{imdb}. We imported datasets for titles and ratings (\textit{title.akas}, \textit{title.basics}, \textit{title.ratings}) comprising $\approx$10 million documents. We manually defined a representative set of 20 queries such as 'How many comedies with length of 90 minutes or less were made before 2000?'. For this fixed set, we compared our method to human expert intuition. Using human baselines (which are common in deep learning tasks) in data management is difficult due to inherent bias and prior knowledge on experiment design. Generally, a human expert can identify effective indices for a small query set given unlimited trials to refine guesses. For a more interesting comparison, we hence devised a single-pass experiment where the expert was allowed to observe runtimes on the full indexing baseline and subsequently tried to estimate an index per query.
\begin{figure*}[ht]
\centering
\begin{subfigure}[t]{.24\textwidth}
\includegraphics[ clip, scale=.24]{index_size_ablation.pdf}
\caption{\label{fig:runtimes} Index creation times.}
\end{subfigure}
\begin{subfigure}[t]{.24\textwidth}
\includegraphics[ clip, scale=.24]{scalability_test_index_sizes.pdf}
\caption{\label{fig:scalability-size} Index size in scale-up.}
\end{subfigure}
\begin{subfigure}[t]{.24\textwidth}
\includegraphics[ clip, scale=.24]{scalability_test_mean_latency.pdf}
\caption{\label{fig:scalability-mean-latency} Test set mean latencies.}
\end{subfigure}
\begin{subfigure}[t]{.24\textwidth}
\includegraphics[ clip, scale=.24]{scalability_test_90_latency.pdf}
\caption{\label{fig:scalability-90-latency} Test set 90th pctl. latencies.}
\end{subfigure}
\caption{Scalability generalization analysis using the synthetic query client. Learning was performed on 10 million documents, a new set of test queries was evaluated on 100 million documents.}
\end{figure*}
Figures \ref{fig:latency-mean}, \ref{fig:latency-90} and \ref{fig:latency-99} give mean, 90th and 99th percentile latencies respectively on the final evaluation in which each query is executed 5 times (final results averaged over five trainings with different random seeds). The combined \textit{Pretrain+Online} strategy outperforms other methods significantly, in particular improving mean latency by $57\%$ and $62\%$ against \textit{Full} and \textit{Human} respectively, and by $74\%$ on 99th percentile latency against both. Differences in 90th percentile latencies were within standard error. In the human experiment, the expert attempted to reduce indices by leveraging intersection, creating only 14 indices versus 15 on average for \textit{Pretrain+Online}. The size of the created indices was however (marginally) bigger compared to \textit{Pretrain+Online} (as shown in Figure \ref{fig:index-size}), which achieved on average $25\%$ index size improvement against the \textit{Full} strategy. Note that MongoDB always creates a default index on the \textit{\_id} attribute, so the default size is not zero. We normalize sizes against the size of the full index to evaluate improvement. The expert's attempt to exploit intersection also underestimated the necessity of compound indices for some queries. The outcome illustrates the difficulty of solving the task without iterative manual analysis.
\textit{Pretrain} and \textit{Online} can perform similarly to full indexing. The performance of \textit{Pretrain} (in its degree of similarity to \textit{Full}) depends on whether pretraining is continued until the rule is fully adopted. We found early stopping at $70-80\%$ accuracy to be effective when using our imperfect rules to avoid overfitting (Figure \ref{fig:pretraining-curve}). \textit{Online} can sometimes find good configurations but tends to perform significantly worse than \textit{Pretrain+Online} in mean reward due to random initialization, as seen in Figure \ref{fig:online-curve}, which shows reward curves (i.e. combined size and latency) and 1 $\sigma$ confidence intervals over 5 runs. We break down individual queries and indices for \textit{Pretrain+Online} and \textit{Online}, i.e. standard RL. In Figure \ref{fig:latency-overlap}, we sort queries of one experiment by latency and show runtime differences (n.b. log scale), and in Figure \ref{fig:index-breakdown} the number of keys in the index decision (0-3) for each query (stacked on top of each other). Performance differences are concentrated in the five slowest queries, with the rest being effectively indexed by both strategies. Comparing keys used per query also shows that \textit{Pretrain+Online} created indices systematically spanning fewer keys than \textit{Online}. This does not necessarily imply smaller total index size, depending on the attributes indexed, but indicates more effective intersection.
\begin{table}[t]
\centering
\begin{tabular}{lll}
\cmidrule{1-3}
Workload means & Total time & Pct. \\
\midrule
Waiting on system & 64869 s ($\pm$ 4403 s) & 97.8 \% \\
Agent interaction/evaluation & 1446 s ($\pm$ 218 s) & 2.2 \% \\
Mean episode duration & 663 s ($\pm$ 42 s) & n/a \\
Min episode duration & 419 s ($\pm$ 88 s) & n/a \\
Max episode duration & 880 s ($\pm$ 62 s) & n/a \\
Pretrain+Online time to max & 43044 s ($\pm$ 16106 s) & n/a \\
Online time to max & 19088 s ($\pm$ 15011 s) & n/a \\
\bottomrule
\end{tabular}
\caption{\label{online-timing-table} Wall clock times on the IMDB data set. One episode refers to creating the entire application index set. On average, \textit{Pretrain+Online} reaches its max performance much later in the experiment as it keeps improving while \textit{Online} stops improving early.}
\vspace{-5mm}
\end{table}
\head{Timing.} Next, we analyze time spent in different training phases. Due to the small neural network size, pretraining could be comfortably performed within a few minutes on a CPU. This includes intermediate evaluations to test accuracy of the model on the set of rule-based demonstrations, and identifying conflicting rules. In Table \ref{online-timing-table}, we break down time spent interacting with TensorForce for requesting actions/performing updates, and time spent waiting on the environment and evaluating indexing decisions by running queries. $97.8\%$ of time was spent waiting on the database to finish creating and removing indices, and only $2.2\%$ was spent on fetching and evaluating actions/queries, and updating the RL model. Pretraining is negligible compared to online evaluation times, so pretraining is desirable if data is available. If LIFT is used for online training, employing pretraining only requires a few extra converter methods.
\head{Scalability.}
The indexing problem is complicated by step durations growing with problem scale. Figure \ref{fig:runtimes} shows index creation times for increasing collection sizes. At 100 million documents generated from our synthetic schema, creating an index set for a set of queries can take hours, resulting in weeks of online training. As RL algorithms struggle with data efficiency, we believe these scalability problems will continue to present obstacles for problems such as cluster scheduling. We explore an approach where training is performed on a small data set of 10 million documents. Newly sampled test queries are evaluated on the 100 million document collection without further refinement. Figures \ref{fig:scalability-size}, \ref{fig:scalability-mean-latency}, and \ref{fig:scalability-90-latency} show index size and latencies. All learned strategies created one index per query, with query runtimes increasing in proportion to document count, and \textit{Pretrain+Online} performing best. Latency metrics were dominated by a single long-running query with two expensive \texttt{\$gt} expressions which could not be meaningfully accelerated.
While scalability transfer results show some promise, we plan to investigate an approach where a model of the query planner is learned to be able to evaluate indices without needing to run them at full scale.
\begin{table}[h]
\centering
\begin{tabular}{lllll}
\cmidrule{1-5}
Workload means: & Mean & 90th & 99th & Norm. size \\
\midrule
\textit{Pretrain+Online} & 0.5 s & 1.7 s & 3.5 s & 0.43 \\
\textit{Online} & 0.55 s & 2.1 s & 3.5 s & 0.53 \\
\textit{Default} & 0.94 s & 2.7 s & 3.6 s & 0.03 \\
\textit{Full} & 0.51 s & 1.5 s & 3.9 s & 1.0 \\
\textit{Partial} & 0.96 s & 3.4 s & 4.4 s & 0.32 \\
\textit{Pretrain} & 0.59 s & 2.2 s & 4.1 s & 1.0 \\
\bottomrule
\end{tabular}
\caption{\label{variation-table} Performance variation when sampling different query sets per run. \textit{Mean}, \textit{90th}, and \textit{99th} refer to average latencies across the different query sets.}
\vspace*{-6mm}
\end{table}
\head{Generalization.} Traditional database benchmarks such as TPC-H use fixed sets of query templates, sampling different attribute values at runtime. This is problematic from a deep learning perspective, as the number of distinct query shapes (i.e. operator structure) is too small to evaluate generalization to unseen queries. Our synthetic benchmark client samples not only attribute values on fixed shapes, but also the query shapes themselves. We investigate query generalization via our synthetic client by sampling 5 different query sets and reporting on variation in learning performance. We insert 5 million documents with 15 attributes of varying data types (schema details provided in appendix). Next, $10,000$ queries and rule-based demonstrations are generated using \textit{Full} indexing as the demonstration rule. We did not see improvement when generating more examples, indicating these were sufficient to cover rule behaviour on the synthetic schema. We pretrain on these queries as before, then sample 20 new queries as the test set and perform online training. Table \ref{variation-table} gives an overview of performance variation across query sets. \textit{Pretrain+Online} saves more than $50\%$ space while performing better or comparably across latency metrics. \textit{Partial} saves even more space but fails to improve latency. Values are averaged across different tasks, thus means per task are expected to differ. Importantly, performance of our approach is not an artefact of a specific query set designed for this task but generalizes.
\subsection{Stream task parallelism}
\head{Problem setup.}
Distributed stream processing systems (DSPS) such as Storm \cite{storm}, Heron \cite{Kulkarni2015} or Flink \cite{flink} are widely used in large scale real time processing. To this end, DSPS have to meet strict service level objectives on message latency and throughput. Achieving these objectives requires careful tuning of various scheduling parameters, as processing instances may fail and workloads may vary with sudden spikes. Floratou et al. suggested the notion of self-regulating stream processing with Dhalion \cite{Floratou2017}, a rule-based engine on top of Heron which collects performance metrics, identifies symptoms of performance problems (e.g. instance failure), generates diagnoses, and tries to resolve issues by making adjustments (e.g. changing the packing plan). We use LIFT to learn to tune task parallelism in Heron using RL. Task parallelism corresponds to the number of physical cores allocated to a specific task in the processing topology. We use the same 3-stage word-count topology as described in Dhalion on a small cluster of 5 machines (1 master, 4 slaves).
\head{Model.}
We again use LIFT to implement state and action models, and to interface Heron's metric collection. For the state, we use a matrix containing CPU and memory usage, and time spent in back-pressure (a special message used by Heron to indicate lack of resources on a task) for all task instances. As actions, the agent outputs (integer) task parallelism values for each component in the topology. The reward is a linear combination of the square roots of normalized message latencies (to smooth outliers), throughput, and the fraction of available instances used.
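A sketch of such a reward follows; the specific signs and unit weights are assumptions (latency and instance usage penalized, throughput rewarded), since the paper only states the combination is linear:

```python
import math

def stream_reward(norm_latency, norm_throughput, frac_instances,
                  w_lat=1.0, w_tp=1.0, w_inst=1.0):
    # Square-rooting the normalized latency smooths outliers before weighting
    return (-w_lat * math.sqrt(norm_latency)
            + w_tp * norm_throughput
            - w_inst * frac_instances)
```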
\begin{figure}[t]
\centering
\includegraphics[scale=.55]{heron_test_plot.pdf}
\caption{Top: Rewards through varying workload. Bottom: Task parallelisms for splitter and count bolts.}
\label{fig:heron-rewards}
\vspace{-7mm}
\end{figure}
\head{Results.} Collecting data for the stream processing task is difficult, as each step requires multiple minutes of collecting metrics so performance can stabilize after changes, and of updating the topology by creating a new packing plan and deploying it. Due to an outstanding issue in the scheduler, we did not manage to run Dhalion itself. We could also not easily port its parallelism rule to LIFT because not all used metrics were exposed via the Heron tracker API. For the purpose of this experiment, we hence collected demonstration data from a simple threshold rule. The aim of this case study is therefore not to prove superiority over Dhalion, but to evaluate whether rule-based demonstrations can help RL in dynamic workload environments. We train and evaluate dynamic behavior by randomly sampling different workload changes, such as load moving up and down periodically, or changing from low to high/high to low. Figure \ref{fig:heron-rewards} shows results by comparing average reward over the duration of the evaluation, which presented the controller with all possible workloads in deterministic order. Each step corresponds to about 2-4 minutes real time to receive metrics and implement changes.
We defined an \textit{Expert} configuration which had predetermined good configurations for each workload change. The bottom row shows how parallelism settings for both bolts are adjusted by the different strategies over time. The \textit{Expert} systematically alternates between two configurations for each component, incurring temporary low rewards upon changes. The \textit{Pretrain+Online} agent managed to avoid temporary reward loss by anticipating workload changes, and by always keeping split parallelism high so as to have enough capacity for changes, thus outperforming the pre-tuned \textit{Expert} configurations slightly ($3\%$). This 'anticipation effect' is a consequence of the agent observing regularities in how workloads change. \textit{Online} failed to adopt an effective strategy within the same training time (1.5 days). Other methods performed worse, although the threshold rule-based model could have been improved by manually fitting thresholds to workload changes (thus being closer to \textit{Expert}).
We provide further analysis by comparing training rewards with and without pretraining in Figure \ref{fig:heron-training-rewards}. \textit{Online} without pretraining could, on average, not recover good configurations, thus spending most of the time at a low reward and only occasionally seeing high rewards when workloads matched its configuration. In contrast, \textit{Pretrain+Online} achieved much higher mean rewards, as after around 100 episodes of training it began to quickly recover from workload changes to reach (normalized) high reward regions again. Our experiments show the combination of pretraining and online refinement can produce effective results on dynamic workloads. A key question is whether workloads in practice exhibit irregularities which are difficult to address through manual tuning. We suspect the advantage of RL will increase for larger topologies with many different bolt types on heterogeneous resources.
\subsection{Discussion}
\head{Limitations.}
Our results indicate the potential of imperfect demonstrations when applying RL to data management tasks, improving latency metrics by up to $70\%$ in the IMDB case study against several baselines, and outperforming the expert configuration in Heron. Our experiments did not include some subtasks which prolong training times. For example, in the indexing task, we omitted considerations for indexing shards and replica sets. As RL applications in data management move from simulation to real world, they will incrementally cover additional subtasks. We also showed the difficulty of tackling tasks where step times increase with scale. Here, mechanisms such as pretraining and training on partial tasks provide a promising direction to eventually apply RL at data center scale.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[scale=0.35,width=1\linewidth]{online_only_heron_training_plot.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[scale=0.35,width=1\linewidth]{online_pretrain_heron_training_plot.pdf}
\end{subfigure}
\caption{Heron training rewards.\label{fig:heron-training-rewards}}
\vspace{-6mm}
\end{figure}
\head{Learning.}
The algorithmic limitations of current RL algorithms continue to present significant obstacles in real world applications. Learning from scratch can take infeasibly long and may also be unreliable due to the stochastic nature of training. Further, learning a task once and deploying the resulting model in different contexts is unlikely to succeed due to the sensitivity of RL algorithms \cite{Kansky2017}. We rely on online refinement following the pretraining procedure, which incurs little overhead after the initial implementation. The aim of our experiments was not to demonstrate the best way to e.g. perform workload management in stream processing. Recent work has illustrated how neural networks struggle with forecasting spiky workloads \cite{Ma2018}. We focused on evaluating whether pretraining can help the notoriously difficult application of RL to data management tasks. In summary, DRL remains a promising direction, as even results in this exploratory phase show the ability to learn complex behaviour which normally requires manual solution design.
\section{Related work}
\head{RL in data management.}
Early work exploring RL in computer systems can be found in routing (Q-routing) and protocol optimization \cite{BoyanLittman1994, KumarMiikkulainen1997,KumarMiikkulainen1999}. Subsequent research has covered a diverse range of domains such as cloud workload allocation, cluster scheduling, networking, or bitrate selection \cite{TesauroDasChanEtAl2007, DutreilhKirgizovMelekhovaEtAl2011, Mao2016, Mao2017, Valadarsky2017a}. Many of these works have outperformed existing approaches in simulation, but did not translate into real-world deployments due to the difficulties discussed elsewhere in this paper. Notably, the idea of using neural networks in combination with RL in systems can be found as early as 2006 in Tesauro et al.'s work on server resource allocation~\cite{TesauroJongDasEtAl2006}. The authors incorporated pre-existing knowledge by initially bootstrapping control from a rule, whereas we use offline demonstrations to reduce training times. RL for combinatorial selection has also found application in tuning compression of neural networks for mobile devices \cite{Liu2018}. Mirhoseini et al. demonstrated how to use attention-based and hierarchical methods known from neural machine translation to effectively perform TensorFlow device placements \cite{mirhoseini2017device, hierarchical2018}. The difference to our work is that each graph step takes only a few seconds, so online training can be performed more effectively. Sharma et al. used RL to learn single-key indices in relational databases, and simplified the problem by manually constructing features such as selectivity \cite{Sharma2018}.
\head{Adaptive indexing.}
A large body of work exists on indexing strategies which are widely used in practice. Offline indexing is performed using the design tuning tools provided by commercial database products, which require database administrators to manually interact with the tool and make the ultimate design decisions \cite{dbtuningsqlserver2005, Chaudhuri1998, Dageville2004}. Online indexing addresses this limitation by making decisions based on continuous online monitoring \cite{Schnaitter2006}. Adaptive (or holistic \cite{Petraki2015}) indexing (e.g. in columnar databases) enables even faster reaction to workload changes by building and refining indices via lightweight incremental modifications \cite{Graefe2010, Idreos2011, Halim2012}. A similar depth of work in indexing is not available for document databases, although many techniques are likely transferable \cite{Qader2018}. Commercial MongoDB services sometimes offer index recommendations based on longer-term workload patterns \cite{mlab2018}.
\head{ML in databases.} Recently, machine learning approaches have been explored in data management. Pavlo et al. proposed the idea of a self-driving database with initial focus on employing ML techniques for workload forecasting \cite{Pavlo2017}. In subsequent work, these forecasts were evaluated on their ability to help create SQL indices \cite{Ma2018}. Their work in particular found that neural networks were not as effective in capturing spiky loads as traditional time series techniques. OtterTune \cite{VanAken2017} automatically determines relevant database parameters to create end-to-end tuning pipelines \cite{Golovin2017}. BO is not easily applicable in problems like index selection or generally combinatorial problems, as it requires a similarity function (kernel) to interpolate a smooth objective function between data points. Defining a custom kernel between databases is difficult because semantically similar indices can perform vastly differently on a workload. Kraska et al. explored representing the index data structure itself as a neural network with the aim to learn to match data distributions and access patterns \cite{Kraska2017}. Bailis et al. subsequently argued that well tuned cuckoo hashing could still outperform these learned indices \cite{bailis2018learning}. We similarly argue that deep RL techniques are in an exploratory phase and cannot yet replace well established tuning methods.
\head{Learning from demonstrations.}
Learning from expert demonstration is a well studied notion in RL. DAGGER (for Dataset Aggregation) is a popular approach for imitation learning which requires an expert to continuously provide new input \cite{Ross2010}. While this is not directly compatible with learning from traces, we are considering future additions to LIFT where a human may interactively provide demonstrations between trials to further accelerate training. Other familiar approaches include behavioural cloning, recently also in the context of generative adversarial models \cite{Wang2017, Ho2016}. Snorkel is a system to help generate weakly supervised data via labeling functions, which is conceptually similar to our rule-based demonstrations \cite{Ratner2017}. In this work, we relied on DQfD as a conceptually simple extension to Deep Q-learning \cite{Hester17}. Its main advantage is that the large-margin function gives a simple way of assigning low or high confidence to demonstrations via a single tunable parameter, thus in practice also allowing the use of imperfect demonstrations. Gao et al. recently suggested employing a unified objective which incorporates imperfect demonstrations naturally by accounting for uncertainty using an entropy approach \cite{Gao2018}.
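To make the large-margin idea concrete, the following NumPy fragment is a minimal sketch of the supervised term that pushes the demonstrated action's Q-value above all alternatives by at least a margin. The function name, margin value, and shapes are our assumptions for illustration, not taken from DQfD's reference implementation:

```python
import numpy as np

def large_margin_loss(q_values, expert_action, margin=0.8):
    """DQfD-style supervised term for one state: zero only when the
    expert action's Q-value exceeds all others by at least `margin`."""
    penalties = np.full_like(q_values, margin, dtype=float)
    penalties[expert_action] = 0.0  # no penalty on the demonstrated action
    return float(np.max(q_values + penalties) - q_values[expert_action])
```

Lowering the margin expresses lower confidence in a demonstration, which is what makes imperfect traces usable in practice.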
\section{Conclusion}
In this paper, we discuss long evaluation times, algorithmic instability, and lack of widely available software as key obstacles to the applicability of RL in data management tasks. To help address these issues, we introduce LIFT, the first end-to-end software stack for applying RL to data management. As part of LIFT, we also introduce TensorForce, a practical deep reinforcement learning library providing a declarative API to common RL algorithms. The key idea of LIFT is to help developers leverage existing knowledge from trace data, rules or any other form of demonstrations to guide model creation. We demonstrate the practical potential of LIFT in two proof-of-concept case studies. If online-only training takes impractically long, our results show that pretraining can significantly improve final results.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
Large amounts of remote sensing images are produced daily from airborne and spaceborne sensors and can be used to monitor the state of our planet. Among the last generation sensors, the European Copernicus program has launched a series of satellites with multispectral sensors named Sentinel-2 (S2 hereafter). S2 has a revisit time between five days (at the Equator) and 2-3 days at mid-latitudes. With such a high revisit rate, change detection, \emph{i.e.} the comparison of images acquired over the same geographical area at different times to identify changes \cite{Liu:2019:review_multispectral_cd}, {allows for near real-time monitoring of dynamics that are observable through remote sensing, including} forest monitoring \cite{Verbesselt:2010:trend_seasonal_timeseries,Hamunyela:2016:spatial_context_bfast}, urbanisation mapping \cite{Deng:2009:urbanization,Huang:2017:urbanisation} and disaster monitoring \cite{Brunner:2010:multi_sensor_CD_disaster,Longbotham:2012:data_fusion_contest_cd}.
Many change detection methods have been proposed in the literature~\cite{Bovolo:2015:time_cd}. They tend to identify changes either by comparing classification maps~\cite{Vol10e} or by first extracting some kind of index to be thresholded to highlight changes \cite{Vol14b}. Recently, deep learning has been considered to learn how to align data spaces, so that changes are better highlighted and easier to detect~\cite{Lin:2019:multispectral_bilinearCNN,Zhan:2017:siamese_cd,Peng:2019:UNet++,Mou:2019:CNN+RNN,Saha:2019:deepCVA}.
Despite the success of these approaches, the {lack of} a relevant and large labeled dataset {limits their applicability~\cite{Zhu:2017:DL_remote_sensing}.}
{In computer vision tasks using natural images, it is common to use models that have been pre-trained on a large dataset for a loosely related task. However, differences in the number of bands and in image structure limit the usability of such models on S2 imagery. This exacerbates the need for a tailored change detection ground truth,} which is often difficult to obtain: especially when change is a rare anomaly (\emph{e.g.} after a disaster), there are no labeled sets to train deep learning models on.
To decrease the amount of supervision, one can revert to models using types of annotation requiring less human effort. One could exploit the geometry of data manifolds by using semi-supervised models, or change the type of annotations, for example by considering weak supervision, \emph{e.g.} image-level annotations rather than pixel-level ones~\cite{Kel19d}, or imprecise labels~\cite{daudt2019gad}. These approaches are successful, but still require some level of supervision provided by an annotator.
In this paper, we explore the possibility of reducing this requirement to a minimum. We consider strategies based on \emph{self-supervised learning}~\cite{Doersch:2015:self-supervised_spatial_context,Caron:2018:self-supervised_clustering}, where a neural network is trained using labels extracted directly from the images themselves. Rather than training the model on the change detection task, we train it on a \emph{pretext task} for which the labels can be extracted from the image pairs directly (\emph{e.g.} relative locations of patches). By doing so, we can pre-train the majority of the weights and then teach the model to recognize changes with a minimal amount of labels. We create a large and global dataset of S2 image pairs, S2MTCP, on which we train our self-supervised learning model, before fine-tuning it on the OSCD change detection dataset~\cite{Daudt:2018:OSCD_CD_dataset} for pixel-level change detection. The results show that state-of-the-art change detection is achievable with such a model pre-trained without labels, opening interesting perspectives on the usage of self-supervised learning in change detection.
\section{Methods}
In this section, we present our entire pipeline (Section~\ref{sec:overall}) and then detail the self-supervised pretext tasks used for pre-training (Section~\ref{sec:ssl}).
\subsection{Change detection pipeline}\label{sec:overall}
Let $I^1$ and $I^2$ be two multispectral images acquired over the same geographical area at time $t^1$ and $t^2$ respectively.
We want to pre-train a model on a set of unlabeled images $\{U = (I_u^1, I_u^2)_i\}_{i=1}^{N}${ such that it can be} easily fine-tuned on a {small} set of labeled image pairs $\{L = (I_c^1, I_c^2)_i\}_{i=1}^{M}$.
The overall pipeline comprises three phases: first the network is trained on the pretext task (see Section~\ref{sec:ssl}), then the layer with the best features for change detection is manually selected. Finally, these features are used in a second network performing change detection. Figure \ref{fig:methodology} presents the overview of the methodology.
\begin{figure}
\includegraphics[width=\textwidth]{figs/Overview_methodology.pdf}
\caption{Overview of the methodology.} \label{fig:methodology}
\end{figure}
\subsubsection{Phase 1: self-supervised pre-training.} Ideally, we would like the change detection network to be able to focus on learning the changed areas. To do so, one would hope that the low level features in the change detection network \emph{align} the two image radiometric spaces, so that
the {features} for $I_c^1$ and $I_c^2$ become similar for areas where no changes have occurred.
To facilitate this process, we learn such features using a self-supervised task on a large, unlabeled dataset, $U$. {This task has to be related to the task of change detection so that the learned features become useful.}
We test two different pretext tasks: (1) discriminate between overlapping and non-overlapping patches and (2) minimizing the difference between overlapping patches in feature space. Both pretext tasks are described in detail in the next Section \ref{sec:pretext_tasks}.
\subsubsection{Phase 2: feature selection.}
The deeper layers in the network are likely to be more task-specific, which means that earlier layers might be more suitable for the downstream task \cite{Gidaris:2018:self-supervised_rotation}. Therefore, we add a feature layer selection step to extract the feature layer that results in the highest change detection performance. Image pairs $(I_c^1, I_c^2)_i$ are passed as input to the network and, at each layer the activation features $\mathbf{f}_{l,i}^1$ and $\mathbf{f}_{l,i}^2$ are extracted. {A linear classifier is} then trained on top of features extracted from a specific layer $l$.
Performance of each layer is manually compared, and the layer with the highest performance is selected for the change detection task.
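As an illustration of this probing step, the sketch below fits a linear classifier on frozen features by gradient descent. It is a minimal NumPy version under our own assumptions (features of layer $l$ flattened into one row per patch, binary labels, hypothetical helper names), not the training setup used in the experiments:

```python
import numpy as np

def train_linear_probe(feats, labels, lr=0.1, epochs=200):
    """Fit a logistic-regression probe on frozen features from one layer."""
    w = np.zeros(feats.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid scores
        g = p - labels                              # gradient of the BCE loss
        w -= lr * feats.T @ g / len(labels)
        b -= lr * g.mean()
    return w, b

def probe_accuracy(feats, labels, w, b):
    """Fraction of patches classified correctly by the probe."""
    return float((((feats @ w + b) > 0) == labels).mean())
```

Running this probe once per layer and comparing the resulting accuracies is all the selection step requires, since the backbone itself stays frozen.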
\subsubsection{Phase 3: change detection.}
The selected layer is used to extract features from the change detection image pairs. We discriminate between unchanged ($\omega_{nc}$) and changed ($\omega_c$) pixels, based on the assumption that the unchanged pixels result in similar features and the changed pixels yield dissimilar features. Two classifiers are compared for this task: (1) a linear classifier and (2) Change vector analysis ({CVA}, \cite{Bovolo:2015:time_cd}). The linear classifier is trained in a supervised way on the complete training set $L$, by minimizing the weighted cross entropy loss. {CVA} is an unsupervised method and does not require any training. However, note that the classification with {CVA} is not fully unsupervised as {at this stage} ground reference maps were used to select the optimal feature layer. {However, solutions can be designed to make the selection procedure unsupervised.} \newline \newline
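The CVA classifier reduces to thresholding the magnitude of the per-pixel feature difference. The following NumPy sketch (helper names are ours; it uses Otsu's method on the gray-level histogram of the magnitude image) illustrates the idea under the assumption that the selected layer yields $(H, W, B)$ feature maps:

```python
import numpy as np

def otsu_threshold(x, bins=256):
    """Otsu's method: pick the bin centre maximising between-class variance."""
    hist, edges = np.histogram(x.ravel(), bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist / hist.sum()
    omega = np.cumsum(p)          # probability of the low (no-change) class
    mu = np.cumsum(p * centers)   # cumulative mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.where(np.isfinite(sigma_b), sigma_b, 0.0)
    return centers[np.argmax(sigma_b)]

def cva_change_map(feat1, feat2):
    """Binary change map from two (H, W, B) feature maps: threshold the
    change-vector magnitude of the feature difference."""
    rho = np.sqrt(((feat1 - feat2) ** 2).sum(axis=-1))
    return rho > otsu_threshold(rho)
```

Swapping `otsu_threshold` for the triangle method changes only the decision function, which is how the two CVA variants compared later differ.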
\if 0
Traditional \ac{CVA} comprises three steps. First the magnitude $\rho$ of the change vectors is computed, defined as
\begin{equation}
\rho = \sqrt{\sum_{b=1}^B (I_{c,b}^1-I_{c,b}^2)^2}
\end{equation}
where $b$ represents one dimension of the input feature vectors. Changed pixels will result in large magnitudes, whereas unchanged pixels show relatively low magnitude. As a second step a decision function is applied to distinguish between these two classes. The third step comprises the calculation of the angle between the change vectors to distinguish between different kinds of changes, given by
\begin{equation}
\alpha = arccos \left( \sum_{b=1}^B(t_b r_b) \middle/ \sqrt{\sum_{b=1}^B t_b^2 \sum_{b=1}^B r_b^2} \right)
\end{equation}
where $t_b$ and $r_b$ are the $b$th components of the change vectors $t$ and $r$ \cite{Bovolo:2007:CVA_polar}.
Any suitable thresholding technique can be employed as decision function. In this paper, two techniques are compared: Otsu's global thresholding \cite{Otsu:1979:threshold} and the triangle method as proposed in \cite{Zack:1977:triangle_threshold} and \cite{Rosin:2001:triangle}. Both methods automatically select a threshold based on gray-level histograms. Otsu's thresholding does this by minimizing the ratio of intraclass and interclass variation. This works well for bi-modal distributions. The triangle method in turn is especially suitable for unimodal, skewed distributions; it selects the threshold by drawing a straight line from the peak of the histogram to the first empty bin and maximizing the perpendicular distance between this line and the histogram.
\fi
\subsection{Pretext tasks for self-supervision}\label{sec:ssl} \label{sec:pretext_tasks}
{In self-supervised learning, a pretext task is an auxiliary learning objective on which the model is pre-trained. Although not identical to the final task (self-supervised learning is there to pre-train models when there are not enough labels for the final task), this auxiliary objective is designed such that it helps the model learn features that are expected to be useful on the final task.}
Several pretext tasks have been proposed in the self-supervised learning literature{: for example, \cite{Doersch:2015:self-supervised_spatial_context} predicts relative positions of nearby patches, while \cite{Gidaris:2018:self-supervised_rotation} rotates patches and predicts such rotations for enforcing invariances. Regardless of the specific implementations,} the common denominators are that (1) the pretext labels must be extracted from the images themselves without external supervision and (2) the pretext task must help learn features that are relevant for the real downstream task (in our case, detecting changes). In the previous section we discussed the need for the change detection network to learn features that project unchanged pixel pairs in the same part of the feature space (\emph{i.e.} unchanged areas become more similar \cite{Vol14b}).
To learn features in this direction, we propose two pretext tasks:
\begin{figure}[!t]
\centering
\begin{subfigure}{0.9\textwidth}
\includegraphics[width=0.95\linewidth]{figs/task1-2_patchlocations.png}
\caption{Location of patches in the images pair.}
\label{subfig:pretext_tasks_a}
\end{subfigure}
\hfill
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=0.95\linewidth]{figs/P_overlapping1-2.png}
\caption{Overlapping patch pair}
\label{subfig:pretext_tasks_b}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=0.95\linewidth]{figs/P_non-overlapping1_2.png}
\caption{Non-overlapping patch pair}
\label{subfig:pretext_tasks_c}
\end{subfigure}
\begin{subfigure}{0.67\textwidth}
\includegraphics[width=0.95\linewidth]{figs/P_triplet-task2.png}
\caption{Patch triplet}
\label{subfig:pretext_tasks_d}
\end{subfigure}
\caption[Illustration of patch sampling strategy.]{Illustration of the patch sampling strategy for the self-supervised learning tasks. (a) Patches are spatially and temporally randomly sampled in the unlabelled image pair $(I_u^1, I_u^2)$. The colored squares represent the patches locations. (b) Overlapping patch pair (red and green) for pretext Task 1. The associated pseudo label $y_{j} = 0$. (c) Non-overlapping patch pair (red and blue) for pretext Task 1. The associated pseudo label $y_{j} = 1$. (d) Patch triplet for pretext Task 2.
}
\label{fig:pretext_tasks}
\end{figure}
\begin{enumerate}
\item The first pretext task is defined by a binary classification that requires the network to predict whether or not a patch pair is overlapping. Each training example $\mathit{P_j}$ contains a patch pair $\{(\mathit{p^1}, \mathit{p^2})_j,y_j\}$. The associated pseudo label equals $y_{j} = 0$ for spatially overlapping pairs and $y_{j} = 1$ for spatially non-overlapping ones. The patch pairs are spatially and temporally randomly sampled from the unlabelled image pairs, and equally divided over the two classes. The task is illustrated in Figure \ref{subfig:pretext_tasks_a}-\ref{subfig:pretext_tasks_c}.
The underlying hypothesis is that sampling $\mathit{p^1}$ and $\mathit{p^2}$ randomly from either $I_u^1$ or $I_u^2$ teaches the model to ignore irrelevant radiometric variations due to acquisition conditions and to focus on relevant spatial similarity/dissimilarity between patches.
The parameters of the network are optimized by minimizing binary cross-entropy loss, given by
\begin{equation} \label{eq:loss_binary_cross_entroply}
L = - \left( y_{j} \cdot \log({P}(y_{j})) + (1-y_{j}) \cdot \log(1 - {P}(y_{j})) \right)
\end{equation}
where ${P}(y_j)$ is the probability of pseudo label $y_j$ given input $P_j$ as calculated by the logistic sigmoid function in the output layer of the network.
\item The second pretext task aims to learn image representations that project overlapping patches close to each other in the high dimensional feature space and non-overlapping patches far away. The patch sampling strategy is similar to the one of the first pretext task, with patches spatially and temporally randomly sampled in unlabelled image pairs. However, each training example $P_j$ contains one extra patch to form patch triplets $({p^1}, {p^2}, {p^3})_j$. Patches ${p^1}$ and ${p^2}$ are spatially overlapping, while ${p^3}$ is not (Figure \ref{subfig:pretext_tasks_a} and \ref{subfig:pretext_tasks_d}.).
The distance between features extracted from overlapping patches ${p^1}$ and ${p^2}$ should be close to zero, while the distance between feature extracted from disjoint patches ${p^1}$ and ${p^3}$ should be larger by a margin $m$. This can be accomplished by minimizing the triplet margin loss with an additional $\ell_1$ loss. The complete loss function is given by
\begin{equation}
L = \max\left(\|\mathbf{f}^1 - \mathbf{f}^2\|_2 - \|\mathbf{f}^1 - \mathbf{f}^3\|_2 + m,\, 0\right) + \gamma \cdot \|\mathbf{f}^1 - \mathbf{f}^2\|_1
\label{eq:pt2}
\end{equation}
where $\mathbf{f}^i$ is the feature vector for patch ${p^i}$ and $\gamma$ is a hyperparameter balancing the triplet and $\ell_1$ loss terms. %
\end{enumerate}
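The loss of the second pretext task can be written compactly in code. The sketch below is a minimal NumPy version of Eq.~(\ref{eq:pt2}) for a single triplet (the function name is ours; the $\mathbf{f}^i$ are assumed to be flattened feature vectors):

```python
import numpy as np

def pretext_task2_loss(f1, f2, f3, m=1.0, gamma=1.0):
    """Triplet margin loss plus an l1 term pulling overlapping patches together.
    f1, f2 come from overlapping patches; f3 from a disjoint patch."""
    d_pos = np.linalg.norm(f1 - f2)   # ||f1 - f2||_2
    d_neg = np.linalg.norm(f1 - f3)   # ||f1 - f3||_2
    triplet = max(d_pos - d_neg + m, 0.0)
    l1 = np.abs(f1 - f2).sum()
    return triplet + gamma * l1
```

When overlapping patches map to identical features and the disjoint patch is more than the margin away, the loss vanishes, which is exactly the geometry the pretext task is meant to induce.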
{The network for the first pretext tasks is implemented as a Siamese architecture with three convolutional layers per branch and a fusion layer, as shown in Fig.~\ref{fig:siamese}a, while the second one does not require the fusion layer, Fig.~\ref{fig:siamese}b.}
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[height=5cm]{./figs/Siamese} &
\includegraphics[height=5cm]{./figs/SiameseAPN} \\
(a) Pretext Task 1 & (b) Pretext Task 2 \\
\end{tabular}
\caption{Schematics of the architecture of the self-supervised CNNs.}
\label{fig:siamese}
\end{figure}
\section{Data and setup}
\subsection{Datasets}
\subsubsection{Change detection.}\label{sec:datal} For the change detection task, we use the OSCD benchmark dataset \cite{Daudt:2018:OSCD_CD_dataset} with annotated urban changes. It contains 24 S2 image pairs with dense reference labels $\{(I_c^1, I_c^2)_i, \Omega_i\}_{i=1}^{24}$ where $\Omega \in \{\omega_{nc}, \omega_c\}$. Images are approximately 600x600 pixels and contain scenes with different levels of urbanisation. The dataset is originally divided into 14 labeled pairs with freely available ground reference maps. The labels of the remaining 10 test pairs are only available through the DASE data portal (http://dase.grss-ieee.org/) for independent validation. In this work, 12 pairs are used as the training set; we use the two remaining pairs to evaluate the change maps qualitatively. Quantitative results in the discussion section are computed on the 10 undisclosed images, after upload of the obtained maps to the DASE data portal.
\subsubsection{Sentinel-2 multitemporal cities pairs (S2MTCP) dataset}
A dataset of S2 level 1C image pairs $U =\{(I_u^1, I_u^2)_i\}_{i=1}^{N}$, was created for self-supervised training. As the scope of this research is limited to urban change detection, the image pairs were focused on urban areas. Locations are selected based on two databases containing the central coordinates of major cities in the world \cite{dataset_simplemaps,dataset_geonames} {with more} than 200,000 inhabitants.
Image pairs $(I_u^1, I_u^2)_i$ are selected randomly from available S2 images of each location with less than one percent cloud cover. Bands with a spatial resolution lower than 10 m are resampled to 10 m and images are cropped to approximately 600x600 pixels centered on the selected coordinates. Hence, every image covers approximately 36 km$^2$. According to the Sentinel User Guide \cite{sentinel:2015:user_handbook}, level 1C processing includes spatial registration with sub-pixel accuracy. Therefore no image registration is performed.
The S2MTCP dataset contains $N=1520$ image pairs, spread over all inhabited continents, with the highest concentration of image pairs in North America, Europe and Asia (Fig.~\ref{fig:location_worldcities}). Some images are smaller than 600x600 pixels because their coordinates were located close to the edge of a Sentinel tile; these images were cropped at the tile border. {The dataset is available at the URL \url{https://zenodo.org/record/4280482}}.
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{./figs/worldcities.png}
\caption{Location of the cities sampled in the generated S2MTCP dataset.}
\label{fig:location_worldcities}
\end{figure}
\subsection{Setup}
\subsubsection{Self-supervised pretraining setup.} \label{sec:setup_pretext_task_performance}
We use 85\% of the S2MTCP dataset $U$ to train the model, and use 10\% to validate it. We keep the remaining 5\% as a blind test set for numerical evaluation.
The parameters are optimized using the Adam optimization algorithm \cite{kingma:2014:adam} with the suggested defaults for the hyperparameters ($\beta$1 = 0.9, $\beta$2= 0.999). The training is stopped when the validation loss does not decrease by 1\% in between epochs. We use a fixed learning rate of $0.001$ and weight decay ($0.0001$). The $\gamma$ parameter in Eq.~(\ref{eq:pt2}) is set to 1 experimentally. At each iteration, we sample 5 patch pairs (or triplets for pretext Task 2) from each image to generate {6350 patch pairs per epoch.} Data augmentation (90-degree rotations and horizontal/vertical flips) is applied.
\vspace{0.2cm}
To assess the performance on the pretext tasks, we use the blind test set extracted from $U$. For pretext Task 1, we report the success rate on the task itself as a percentage, while for Task 2, we consider the value of the loss. We also run the pretext tasks on the {12} images composing the OSCD test set {to assess domain shifts}. Note that no OSCD labels are used at this stage.
\subsubsection{Feature layer selection setup}
The performance of features $\mathbf{f}_l$ on the change detection task is compared using 3-fold cross validation on the OSCD labeled set. As discussed in Section~\ref{sec:datal}, the OSCD labeled set contains 12 image pairs ($(I_c^1, I_c^2), \Omega$); hence, we use 4 pairs per fold. We consider features (\emph{i.e.} activation maps) at different levels of the self-supervised model as candidates for the selection. In other words, we retain features $\mathbf{f}^{\{1,2\}}_l$, with $l=[1,...,3]$, where $l$ is the depth of the CNN considered (see schematics of Fig.~\ref{fig:siamese}) for images $I^1_c$ and $I^2_c$, respectively. We use the differences of the corresponding features as inputs for the change detection classifier. For pretext Task 1, we also consider $l=4$, \emph{i.e.} the subtraction layer where $\mathbf{f}^1_3$ and $\mathbf{f}^2_3$ are fused.
{The linear classifiers} are trained for a maximum of 250 epochs and stopped if the validation loss does not improve for 50 epochs. The same optimizer and augmentation used in the previous step are used.
We sample 100 patch pairs per image of the OSCD dataset.
To make sure that the results for each experiment (varying layer and pretext task) are comparable, the patches are passed to the classifiers in the same order. Performance is evaluated based on F1-score, sensitivity, specificity and precision.
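For reference, the four scores used throughout the evaluation follow directly from the confusion matrix of the binary change maps. The sketch below is a straightforward NumPy implementation (variable and function names are ours):

```python
import numpy as np

def cd_scores(y_true, y_pred):
    """Sensitivity, specificity, precision and F1 for binary change maps,
    with class 1 = change."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sens = tp / (tp + fn)                 # recall on the change class
    spec = tn / (tn + fp)
    prec = tp / (tp + fp)
    f1 = 2 * prec * sens / (prec + sens)
    return sens, spec, prec, f1
```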
\subsubsection{Change detection setup}
Two classifiers are compared for the change detection task:
\begin{itemize}
\item \textit{Supervised linear classifier}, trained in a supervised way on the OSCD training dataset. {This model consists of a single linear layer followed by a softmax activation function returning the probability scores $\{\omega_{c}, \omega_{nc}\}$. The threshold to obtain the binary change map was set based on the F1-score on the training set.}
\item \textit{CVA} \cite{Bruzzone:2013:cd_framework}, with detection threshold optimised using either Otsu's method or the triangle method~\cite{Rosin:2001:triangle}.
\end{itemize}
The CV folds and extracted patches are the same as in the feature layer selection step, and so are the optimization and augmentation strategies. The learning rate was decreased to $10^{-5}$.
\section{Results and discussion}
\subsubsection{Pretext tasks performance}
The validation and test results for pretext Task 1 (\emph{i.e.} predicting whether two patches are spatially overlapping) are reported in Table \ref{tab:results_pretexttask1}. The test accuracy was consistently high on both datasets: in all cases the model correctly predicted whether the patches were overlapping for over 97\% of the patch pairs. The low number of epochs required to reach this high accuracy indicates that the pretext task was easy to solve.
Regarding Task 2, the lowest validation loss was reached after 17 epochs and training stopped. The loss on the OSCD dataset was slightly higher than on the S2MTCP dataset (result not shown), as a result of a larger contribution of the triplet loss. We argue that this does not indicate overfitting, but rather a domain gap between the two datasets, since the difference between the validation and test loss on the S2MTCP dataset remains small.
\begin{table}
\caption{Performance on pretext Task 1, expressed in {Average} Accuracy (\%).}
\centering
\begin{tabular}{c|c|c|c} \hline
Dataset & Data split & Loss & Accuracy \\ \hline \hline
S2MTCP & validation & 0.043 & 98.93 \\
S2MTCP & test & 0.052 & 98.28 \\
OSCD & - & 0.083 & 97.67 \\
\hline
\end{tabular}
\label{tab:results_pretexttask1}
\end{table}
\subsubsection{Selection of optimal feature layer for change detection.}
Table \ref{tab:results_AA_layer_selection} presents the average accuracy over the three folds for change detection performed with features $\mathbf{f}_l$ for layers $l \in [1,4]$. The features of the second convolutional layer ($l=2$) perform best in both cases, although the differences are overall small. The performance of the deeper layers in the network trained on pretext task~1 decreases faster than the performance of the ones trained on pretext task~2. It is not surprising that features from deeper layers perform worse on the change detection task: Yosinski et al. \cite{Yosinski:2014:transferring_features} have shown that deeper layers of a CNN are specific to the task and dataset used for training, while the first layers are general-purpose. This effect has also been observed when transferring features from a pretext task to the target task in self-supervised learning \cite{Kolesnikov:2019:self_supervised_comparison}.
Based on these results, the second convolutional layer is selected for the change detection task.
\begin{table}[!t]
\caption[Evaluation of features per layer as measured by Average Accuracy on the change detection task.]{Evaluation of features per layer as measured by Average Accuracy (\%) on the change detection task by cross validation. $l \in [1,4]$ indicates which layer of the self-supervised model is used ($l=4$ is the fusion layer, available for pretext Task 1 only).
For each pretext task the best performance is highlighted in bold text.}
\centering
\begin{tabular}{l|l|l|l|l} \hline
Pretext task & $l=1$ & $l=2$ & $l=3$ & $l=4$ \\
\hline\hline
Pretext Task 1 & 76.06 & \textbf{77.82} & 75.92 & 74.26 \\
Pretext Task 2 & 78.03 & \textbf{79.11} & 78.19 & \\
\hline
\end{tabular}
\label{tab:results_AA_layer_selection}
\end{table}
\subsubsection{Numerical results on the OSCD test set.} As a final step, we compare the results of our self-supervised model with those obtained by fully supervised models on the undisclosed test set on the DASE algorithm testbed data portal (see Section~\ref{sec:datal} for details).
The best performance among the self-supervised approaches, top half of Tab.~\ref{tab:SOTA_change_detection}, was achieved by the model pretrained on pretext Task 2 combined with the {CVA} classifier using the triangle method. This leads to the highest F1-score. The CVA with the Otsu method has the highest sensitivity (recall, meaning that the most changes are detected), but at the price of a very low precision due to the very high number of false positives; see also the maps in Fig.~\ref{fig:visu}. This is most probably due to the setting of the Otsu threshold, which needs to be very high to favor sensitivity. The learned classifiers (`linear') in Table~\ref{tab:SOTA_change_detection} provide the best results for pretext Task 1 and the best specificity in both tasks, but show lower sensitivity scores, resulting in a slightly lower F1-score for pretext Task 2. Compared with the current state of the art on the OSCD dataset, the self-supervised models perform remarkably well, given their shallow architecture and the fact that they are pre-trained in an unsupervised way.
\begin{table}[!t]
\small
\caption[]{Comparison between supervised state-of-the-art (S-o-A) and self-supervised models on the undisclosed test set of OSCD. All metrics are expressed in percent. The best performance as measured by each metric is highlighted in bold text. `Linear' corresponds to a learned linear classifier for change detection.}
\centering
\begin{tabular}{lp{2.5cm}ccccc} \hline
&Method && Sensitivity & Specificity & Precision & F1 \\
\hline\hline
\multirow{6}{*}{\rotatebox{90}{Self-supervised}}
&Task 1 & CVA+Otsu & 65.78 & 86.18 & 20.60 & 31.37 \\
&Task 1 & CVA+Triangle & 41.14 & {96.11} & 36.55 & 38.71 \\
&Task 1 & linear & 50.00 & {96.66} & 37.98 & 43.17 \\
&Task 2 & CVA+Otsu & \textbf{83.85} & 81.99 & 20.24 & 32.61 \\
&Task 2 & CVA+Triangle & 52.80 & 95.76 & {40.42} & \textbf{45.79} \\
&Task 2 & linear & 35.37 & \textbf{97.76} & \textbf{46.30} & 43.17 \\
\hline\hline
\multirow{5}{*}{\rotatebox{90}{S-o-A}}
&Siamese & \cite{Daudt:2018:OSCD_CD_dataset} & \textbf{85.63} & 85.35 & 24.16 & 37.69 \\
&Early fusion& \cite{Daudt:2018:OSCD_CD_dataset} & 84.69 & 88.33 & 28.34 & 42.47 \\
&FC-EF& \cite{Daudt:2018:fully_convolutional} & 50.97 & \textbf{98.51} & \textbf{64.42} & 56.91 \\
&FC-Siam-Conv& \cite{Daudt:2018:fully_convolutional} & 65.15 & 95.23 & 42.39 & 51.36 \\
&FC-Siam-Diff& \cite{Daudt:2018:fully_convolutional} & 57.99 & 97.73 & 57.81 & \textbf{57.91} \\
\hline
\end{tabular}
\label{tab:SOTA_change_detection}
\end{table}
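For reference, the four metrics reported in the table can be computed from a predicted binary change map and the ground truth as in the short sketch below (a self-contained illustration with made-up toy arrays):

```python
import numpy as np

def change_metrics(pred, gt):
    """Sensitivity, specificity, precision and F1 (in %) for binary change maps."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)
    tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    sensitivity = tp / (tp + fn)        # recall: fraction of true changes found
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": 100 * sensitivity, "specificity": 100 * specificity,
            "precision": 100 * precision, "f1": 100 * f1}

pred = np.array([1, 1, 0, 0, 1, 0])     # toy prediction
gt   = np.array([1, 0, 0, 0, 1, 1])     # toy ground truth
m = change_metrics(pred, gt)
```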
\begin{figure}[!t]
\centering
\begin{tabular}{cc|ccc}
\multicolumn{1}{c}{} &&& Pretext Task 1 & Pretext Task 2\\
\includegraphics[width=.30\linewidth]{./figs/3_a.png} &&& \includegraphics[width=.30\linewidth]{./figs/Siamese_3_CVA_otsu.png} &\includegraphics[width=.30\linewidth]{./figs/Triplet_3_CVA_otsu.png}\\
Image, $t_1$&&& \multicolumn{2}{c}{CVA, Otsu's method} \\
\includegraphics[width=.30\linewidth]{./figs/3_B.png} &&& \includegraphics[width=.30\linewidth]{./figs/Siamese_3_CVA_triangle.png} &\includegraphics[width=.30\linewidth]{./figs/Task2CVA.png}\\
Image, $t_2$&&& \multicolumn{2}{c}{CVA, triangle method} \\
\includegraphics[width=.30\linewidth]{./figs/gt.png} &&& \includegraphics[width=.30\linewidth]{./figs/Siamese_3_C2_bestF1.png} &
\includegraphics[width=.30\linewidth]{./figs/Triplet_3_C2_bestF1.png}\\
Ground truth&&& \multicolumn{2}{c}{Linear classifier} \\
\end{tabular}
\caption{Example of change detection for the proposed method. {True positives are depicted in white, missed changes in green and false positives in magenta.}
}
\label{fig:visu}
\end{figure}
Finally, Fig.~\ref{fig:visu} illustrates some change maps for the Beirut image of the OSCD dataset. Looking at the maps, we observe that the CVA detection is accurate on the top right corner, but also that it tends to generate more false positives (in {magenta}), and, when using the Otsu method, {most of the} image is predicted as changed. We therefore conclude that Otsu's method is inferior to the other two, which can both be considered usable. Remarkably, the learned classifier reduces the false positives and shows the most visually pleasing results, but at the price of a less precise delineation of the change than CVA with the triangle method.
\section{Conclusions}
In this paper, we explored the possibility of pre-training a convolutional neural network for change detection without labels. We perform such training by forging a pretext task inherent in the data, which aims at learning a feature space where unchanged pixels are close to each other and far from abnormal situations. We use two self-supervised learning approaches and then fine-tune the network trained this way to detect changes. Experiments on the benchmark Sentinel-2 OSCD dataset show that training a model this way can lead to results close to the state of the art in deep learning change detection. {The code is available at \url{https://zenodo.org/record/4280482}}.
\bibliographystyle{ieeetr}
\section{Sample Information}
The hexagonal boron nitride/graphene heterostructure was assembled by the standard dry-transfer stamping method from mechanically exfoliated flakes. The stack was then deposited on a silicon chip with a 280 nm oxide layer. The contacts were made of sputtered Molybdenum-Rhenium alloy (50-50 by weight), a type-II superconductor with a high critical temperature of $8-10$ K. The junction has dimensions of 0.5 $\times$ 3 $\mu$m and has been previously used as a reference device in Ref.~\onlinecite{AnneMultiterminal}. The MoRe contacts were connected to measurement lines through Cr/Au leads and bonding pads. By comparison to the simulations, we believe these leads and bonding pads act as the part of the environment that determines the junction dynamics~\cite{Clarke1988}. We model them as a resistor $R_L$ in series with the capacitor $C_0$.
\section{Measurement Techniques}
While differential resistance maps are often measured using a lock-in amplifier, in this case hysteretic switching prevented us from using this technique. Instead, for each vertical line of the maps, an approximately 20 Hz triangle wave was applied and the resulting voltage profile was measured. 200 such measurements were then averaged to produce an $I-V$ curve and then numerically differentiated. These parameters allowed for fast measurements with reasonable averaging and minimal distortion of the applied wave by the low-temperature filters. We note that the exact extent of hysteresis is a function of the sweeping speed, with faster sweeping giving more pronounced hysteresis. This is particularly relevant when comparing to the numerical results, as the simulations used much shorter time evolutions thereby exaggerating the effects of the hysteresis.
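A minimal sketch of this averaging-and-differentiation procedure is shown below, with synthetic ohmic data standing in for the measured traces (the resistance value, noise level, and trace count are illustrative only):

```python
import numpy as np

# Synthetic stand-in for the measurement: 200 noisy I-V traces of an ohmic device.
rng = np.random.default_rng(1)
I = np.linspace(-1e-6, 1e-6, 401)        # bias current (A)
R_true = 300.0                           # illustrative resistance (ohms)
traces = R_true * I + rng.normal(0.0, 1e-6, size=(200, I.size))   # volts

V_avg = traces.mean(axis=0)              # average the repeated sweeps
dVdI = np.gradient(V_avg, I)             # numerical differential resistance
```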
\section{Simulations}
\begin{figure}[htb]
\centering
\includegraphics[width=0.35\textwidth]{Circuit.pdf}
\caption{Diagram of the circuit used to simulate the dynamics of the Josephson junction. In practice, $C_{j}$ is negligible and is omitted from further consideration.}
\label{circuit}
\end{figure}
To simulate the behavior of a Josephson junction subject to microwave radiation, we use a modified RCSJ model as illustrated in Fig.~\ref{circuit}~\cite{jarillo-herreroQuantumSupercurrentTransistors2006}. We start with a junction with critical current $I_C$, which is shunted by a resistor $R_j$ and a capacitor $C_j$, where $R_j$ represents the dissipation in the Josephson junction and $C_j$ is the capacitance between the two superconducting leads. In the experiment, the Josephson junction is further connected to four 150 $\mu$m $\times$ 100 $\mu$m bonding pads by Cr/Au leads. The capacitance of the bonding pads, $C_0$, and the resistance of the leads, $R_L$, must be taken into account to properly simulate the junction dynamics. The four bonding pads are arranged such that the effective capacitance is equal to that of one bonding pad to the back gate, which would yield 1.8 pF for 280 nm thick SiO$_{2}$. At room temperature, the capacitance between two bonding pads, connected to the chip carrier by bonding wires, was measured to be slightly higher, around 2.5 pF, which was the value used in the simulations. In practice, similar maps have been simulated using a range of $C_0$ values.
The resistance of the evaporated Cr (5 nm)/Au (45 nm) film was measured to be 0.5 Ohm/$\mathord{\scalerel*{\Box}{gX}}$, from which we estimate that $R_L$ is a few tens of Ohms for our typical devices. We use a reasonable value of $R_L=50$ Ohms for our simulations in Figure 3 of the main text and in the simulations below. Finally, $R_j=300$ Ohms is determined from the current corresponding to the center of the Shapiro plateaus in the Bessel function regime, $I_n=n\hbar \omega /(2eR_j)$. In accordance with the experiment, we assume that $R_j$ does not depend on magnetic field. The same value of $R_j=300$ Ohms is used to simulate all panels in Figure S2.
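As a quick sanity check of this calibration: the standard Shapiro relation $V_n = n\hbar\omega/2e$ combined with Ohm's law gives the plateau-center currents, which at a 5 GHz drive with $R_j = 300$ Ohms land in the tens of nA, consistent with the critical currents studied here. (Constants are standard values; variable names are ours.)

```python
import math

hbar = 1.0545718e-34          # J*s
e_charge = 1.6021766e-19      # C
R_j = 300.0                   # ohms, from the fit
omega = 2 * math.pi * 5e9     # 5 GHz drive

# Center of the n-th Shapiro plateau: I_n = n * hbar * omega / (2 * e * R_j)
I_n = [n * hbar * omega / (2 * e_charge * R_j) for n in range(1, 4)]
# I_1 is about 34 nA; each further plateau adds the same increment.
```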
\begin{figure*}[ht!]
\centering
\includegraphics[width=\textwidth]{sims.pdf}
\caption{Simulations of the differential resistance maps at 5 GHz for comparison to Figure 1a-d. The values of $I_{C}$ used are, from left to right, 540, 200, 80 and 40 nA. Other parameters are kept the same as in Figure 3 of the main text: $C_{0}=2.5$ pF, $R_L=50$ Ohms and $R_j=300$ Ohms. }
\label{FrequencySims}
\end{figure*}
The microwave injection from the antenna can be modeled by an AC current, $I_{AC}=I_{RF} \sin{\omega t}$, where $I_{RF}$ is the current amplitude and $\omega$ is the microwave frequency. Note that a significant amount of the applied power is dropped across the capacitors in our model. The thermal noise of the resistive components in the experiment generates a Gaussian current noise $I_N$ whose variance is proportional to the temperature $T$. We find that for good agreement with the data, the noise amplitude has to be taken higher than expected for thermal noise. This is not surprising, for two reasons: first, in simulations $I_N$ is applied to the outside of the junction circuit, where it would be partially filtered by $C_0$ and $R_L$ before reaching the junction. Second, while each point on the map is measured for millions of cycles, we simulate it over just $\sim 500$ drive cycles, so using a higher level of noise may be expected.
We therefore treat $I_N$ as a fitting parameter. In summary, the current source $I$ contains three components: the bias current $I_{bias}$, the microwave radiation current $I_{AC}$, and the thermal noise $I_N$.
\begin{equation}
\begin{aligned}
I&=I_{bias}+I_{RF} \sin{\omega t}+I_N(t)\\
&=C_0 \frac{dV}{dt}+ I_c \sin{\phi} +\frac{\hbar}{2 e R_j}\frac{d\phi}{dt}+\frac{\hbar C_j}{2 e}\frac{d^2 \phi}{dt^2}\\
V&=\frac{\hbar}{2e}\frac{d\phi}{dt}+R_L\left(I_c \sin{\phi} +\frac{\hbar}{2 e R_j}\frac{d\phi}{dt}+\frac{\hbar C_j}{2 e}\frac{d^2 \phi}{dt^2}\right)
\end{aligned}
\label{RCSJ:eq1}
\end{equation}
The dynamics of the circuit in Fig.~\ref{circuit} are described by Eq.~(\ref{RCSJ:eq1}), where $\phi$ is the superconducting phase difference across the junction, $V$ is the voltage across the capacitor $C_0$ and $I_{N} \propto \sqrt{T}$ is the standard deviation of the Gaussian noise. Solving this third-order differential equation numerically gives $\phi(t)$, from which we can derive the DC voltage across the junction, $V_j=\left<\frac{\hbar}{2e}\frac{d\phi}{dt}\right>$. Note that $C_j$ is about 4 orders of magnitude smaller than $C_0$ for the device studied here. We numerically found that $C_j$ can be neglected under this condition, simplifying Eq.~(\ref{RCSJ:eq1}) to a second-order differential equation. The experimental curves strongly depend on the bias sweeping direction. To emulate the bias sweep, we use the steady-state solution of $\phi(t)$ at a given $I_{bias}$ as the initial condition for solving the differential equation at the next value of bias, $I_{bias}+\delta I$, where $\delta I$ is the incremental bias step.
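Integrating Eq.~(\ref{RCSJ:eq1}) at experimental parameter values requires very small time steps. Purely as an illustration of the solve-then-average procedure and of the warm-start bias sweep described above, here is a dimensionless toy version in the overdamped limit (dropping $C_0$, $C_j$, $R_L$, the drive, and the noise), so this is not the simulation code used for the figures:

```python
import math

def dc_voltage(i_bias, phi0=0.0, dt=1e-3, t_total=200.0):
    """Euler-integrate the dimensionless overdamped junction,
    d(phi)/dt = i_bias - sin(phi), and return (<dphi/dt>, final phi)."""
    steps = int(t_total / dt)
    phi = phi0
    for _ in range(steps):
        phi += dt * (i_bias - math.sin(phi))
    return (phi - phi0) / t_total, phi

# Warm-start sweep: the steady solution at one bias seeds the next bias value.
phi, v_of_i = 0.0, {}
for i_bias in (0.5, 1.5, 2.0):
    v, phi = dc_voltage(i_bias, phi0=phi)
    v_of_i[i_bias] = v
# Below i_bias = 1 the phase locks (V ~ 0); above it, <dphi/dt> -> sqrt(i^2 - 1).
```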
Figure S2 shows simulated Shapiro maps at several values of the critical current, intended to be compared with the 5 GHz data of Figure 1. Remarkably, we are able to reproduce the four experimental maps in Figure 1a-d by changing only $I_{C}$, which is the only parameter we expect to be influenced by magnetic field. The values of $C_{0}=2.5$ pF, $R_L=50$ Ohms, and $R_j=300$ Ohms are kept the same in all panels.
Our model does not include heating from the RF drive, which has been recently identified to be an important effect~\cite{DeCecco2016}. We believe that such heating offers a natural explanation for the washed out features at higher RF power observed in the experimental data.
Overall, the simple model describes the main features of our data sufficiently well.
\section{Simulation of Gate Voltage dependence}
Next, we reproduce the gate voltage dependence measured in Figure 4. Between the four maps, we adjusted the values of $I_{c}$ and $R_{j}$, where the former can be obtained from the value of the switching current at zero RF power, and the latter could be roughly extracted from the positions of the Shapiro steps, $I_n=n\hbar \omega /(2eR_j)$. Additionally, the noise amplitude $I_{N}$ was taken to be $\propto 1/\sqrt{R_j}$. This is consistent with the expectation that the noise in the junction is given by the thermal noise of $R_{j}$. While it may be expected that the noise processes in our driven system may be more complicated, the simulations capture the general trends observed in the data of Figure 4 of the main text.
\begin{figure}[h!]
\centering
\includegraphics[width = 0.45\textwidth]{gatesim.pdf}
\caption{Simulations of differential resistance maps corresponding to Figure 4 of the main text, measured at different gate voltages. We use the values of $R_{j} =850,500,300,180$ Ohms and $I_{c} =350,500,600,800$ nA in panels (a) to (d).}
\label{GateSims}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width = \textwidth]{CPRSim.pdf}
\caption{Middle row: Simulations corresponding to Figures 1a and S2a with the skewness of the CPR increasing from left to right (the CPRs are shown in the insets). Even severe skewness does not give rise to fractional steps, although it does slightly alter the map in ways akin to changing parameters such as $I_{c}$ and $R_{j}$. Bottom row: Simulations corresponding to Figures 1d and S2d with the same range of CPR skewness. In this regime, the CPR gives rise to enhanced fractional steps. Top row: CPRs corresponding to the figures below.}
\label{CPRSims}
\end{figure*}
\section{CPR Dependence}
Figure 1d of the main text shows strong fractional Shapiro steps~\cite{AndersonDayem1964}, although there are no signs of fractional steps in the measurements with higher $I_{C}$. Our simulations are in agreement with the experimental results, showing that a skewed current-phase relation (CPR) has minimal effect on a sample in the strongly hysteretic regime. Intuitively, we understand the hysteresis of the high $I_{C}$ maps as arising from regions of overlapping stability of integer steps. Thus it may be expected that for such parameters the fractional steps are less stable compared to overlapping integer steps.
For comparison to the experiment, we took our simulation for Figures 1a and 1d and employed CPRs with varying degrees of skewness~\cite{EnglishCPR2016,Nanda2017}. In the top row, $I_{c}=540$ nA, corresponding to Figures 1a and S2a; in the bottom row, $I_{c}=40$ nA, corresponding to Figures 1d and S2d. The three columns correspond to: sinusoidal CPR (left), a slightly skewed CPR, $I(\phi) = I_{c}[\sin(\phi)-0.2\sin(2 \phi)+0.04\sin(3 \phi)]$ (middle); and a maximally skewed sawtooth CPR (right). The insets in the top panels demonstrate the corresponding CPRs. For large $I_{c}$ simulations, increasing the skewness of the CPR only slightly distorted the map, but did not give rise to any additional plateaus. For small $I_{c}$, increasing the skewness resulted in increasing fractional plateaus. Surprisingly, for small $I_c$ even a perfectly sinusoidal CPR shows some half-quantized steps (Figure S4d). We attribute this behavior to the high frequency environment, which gives rise to some effective skewness. Comparing these simulations to the measured data, we find that the slightly skewed CPR appears to most accurately reproduce the strength of the fractional steps, as expected.
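The three CPRs used in the columns of Fig.~\ref{CPRSims} can be written down directly. The short sketch below (illustrative variable names) evaluates them on a phase grid and locates the peak of the slightly skewed CPR, which sits past $\pi/2$, as expected for forward skewness:

```python
import numpy as np

phi = np.linspace(0.0, 2.0 * np.pi, 10001)

cpr_sin = np.sin(phi)                                              # sinusoidal
cpr_skew = np.sin(phi) - 0.2 * np.sin(2 * phi) + 0.04 * np.sin(3 * phi)
cpr_saw = ((phi + np.pi) % (2.0 * np.pi) - np.pi) / np.pi          # sawtooth

peak_sin = phi[np.argmax(cpr_sin)]     # at pi/2 for a sinusoidal CPR
peak_skew = phi[np.argmax(cpr_skew)]   # pushed past pi/2: forward-skewed
```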
\end{document}
\section{Introduction}
\label{sec:intro}
Many supervised learning tasks involve designing and optimizing a loss function
that is often different from the actual performance metric for evaluating models.
For example, cross-entropy is a popular loss function for training a multi-class
classifier, but when it comes to comparing models on a test set,
classification error is used instead.
Why not optimize the performance metric directly? Because
many metrics or output decoders are non-differentiable and cannot be optimized with gradient-based methods
such as stochastic gradient descent. Non-differentiability occurs when
the output of the task is discrete (e.g. class labels), or when the output is continuous
but the performance metric has discrete operations (e.g. percentage of real-valued predictions
within a certain range of the ground truth).
To address this issue, designing a differentiable
loss that serves as a surrogate to the original metric is standard practice.
For standard tasks with simple output and metrics, there exist
well-studied surrogate losses. For
example, cross-entropy or hinge loss for classification, both of
which have proven to work well in practice.
However, designing surrogate losses can sometimes incur substantial manual effort, including a large amount of
trial and error and hyper-parameter tuning.
For example, a standard evaluation of single-person human pose estimation---predicting the
2D locations of a set of body joints for a single person in an image---involves computing
the percentage of predicted body joints that are within a given radius of the ground
truth.
This performance metric is non-differentiable.
Existing work instead trains a deep network
to predict a heatmap for each type of body joints, minimizing the difference between the predicted
heatmap and a ``ground truth'' heatmap consisting of a Gaussian bump at the ground truth
location~\cite{tompson2014joint,newell2016stacked}.
The decision for what error function to use for comparing heatmaps and
how to design the ``ground truth'' heatmaps
are manually tuned to optimize performance.
This manual effort in conventional losses is tedious but necessary,
because a poorly designed loss can be misaligned with the final performance metric and
lead to ineffective training. As we show in the experiment section, without carefully-tuned loss hyper-parameters, conventional manual losses can work poorly.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1\linewidth]{overview_v3}
\end{center}
\caption{Computation graphs for conventional losses and UniLoss. Top: (a) testing for conventional losses. The decoder and evaluator are usually non-differentiable. (b) training for conventional losses. To avoid the non-differentiability, conventional methods optimize a manually-designed differentiable loss function instead during training. Bottom: (a) refactored testing in UniLoss. We refactor the testing so that the non-differentiability exists only in Sign($\cdot$) and the multi-variant function. (b) training in UniLoss with the differentiable approximation of refactored testing. $\sigma(\cdot)$ is the sigmoid function. We approximate the non-differentiable components in the refactored testing pipeline with interpolation methods. }
\label{fig:overview}
\end{figure*}
In this paper, we seek to reduce the efforts of manual design of surrogate losses by introducing a unified surrogate loss framework applicable to a wide range of tasks.
We provide a unified framework to mechanically generate a surrogate loss given a performance metric
in the context of deep
learning.
This means that we only need to specify the performance metric (e.g.
classification error) and the inference
algorithm---the network architecture, a ``decoder'' that converts the network output
(e.g. continuous scores) to the final
output (e.g. discrete class labels), and an ``evaluator'' that converts the labels to final metric---and the rest is taken care of as part of the
training algorithm.
We introduce UniLoss (Fig.~\ref{fig:overview}), a unified framework to generate surrogate losses for training deep networks with gradient descent.
We maintain the
basic algorithmic structure of mini-batch gradient descent: for each mini-batch, we perform
inference on all examples, compute a loss using the results and the ground truths, and generate
gradients using the loss to update the network parameters. Our novelty is that we generate all the surrogate losses in a unified framework for various tasks instead of manually designing it for each task.
The key insight behind UniLoss is that for many tasks and performance metrics, evaluating a
deep network on a set of training examples---pass the examples through the network, the output
decoder, and the evaluator to the performance metric---can be refactored into a sequence of four
transformations:
the training examples are first transformed to a set of real scores, then to some real numbers
representing comparisons (through subtractions) of certain pairs of the real-valued scores,
then to a set of binary values representing the signs of the comparisons, and finally to a single
real number.
Note that the four transforms do not necessarily correspond to
running the network inference, the decoder, and the evaluator.
Take multi-class classification as an example, the training examples are first transformed to
a set of scores (one per class per example), and then to pairwise comparisons (subtractions) between the scores for each example (i.e. the argmax operation), and then to a set of binary values, and finally to a classification accuracy.
The final performance metric is non-differentiable with respect to network weights because the decoder and the evaluator
are non-differentiable. But
this refactoring allows us to
generate a differentiable approximation of each non-differentiable transformation through
interpolation.
Specifically, the transformation from comparisons to binary variables is non-differentiable; we approximate it by using the sigmoid
function to interpolate the sign function.
The transformation from binary variables to the final metric may also be non-differentiable; we approximate it by multivariate interpolation.
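As a concrete illustration of the first approximation, the following NumPy sketch builds a sigmoid-relaxed accuracy for multi-class classification; the temperature $\tau$ and all variable names are illustrative choices rather than the exact implementation used in our experiments:

```python
import numpy as np

def soft_accuracy(scores, labels, tau=0.1):
    """Differentiable surrogate for accuracy: every comparison sign(s_y - s_j)
    is replaced by sigmoid((s_y - s_j) / tau)."""
    n, p = scores.shape
    s_true = scores[np.arange(n), labels][:, None]   # ground-truth-class scores
    c = s_true - scores                              # pairwise comparisons
    b = 1.0 / (1.0 + np.exp(-c / tau))               # soft sign, in (0, 1)
    mask = np.ones_like(b, dtype=bool)
    mask[np.arange(n), labels] = False               # drop the self-comparison
    per_sample = np.prod(np.where(mask, b, 1.0), axis=1)
    return per_sample.mean()

scores = np.array([[2.0, 0.0, -1.0],
                   [0.0, 3.0,  1.0],
                   [1.0, 0.5,  0.4]])
labels = np.array([0, 1, 2])
acc = soft_accuracy(scores, labels)   # close to the hard accuracy of 2/3
```

With well-separated scores the soft value approaches the hard accuracy, while remaining differentiable in the scores.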
The proposed UniLoss framework is general and can be applied to various tasks and performance metrics.
Given any performance metric involving discrete operations,
to the best of our knowledge, the discrete operations can always be refactored into step functions applied to differentiable real-number comparisons, followed by arbitrary operations on the resulting binary values, which fits our framework. Example tasks include classification scenarios such as accuracy in image classification, and precision and recall in object detection;
ranking scenarios such as average precision in binary classification, area under curve in image retrieval;
pixel-wise prediction scenarios such as mean IOU in segmentation, PCKh in pose estimation.
To validate its effectiveness, we perform experiments on three representative tasks from three different scenarios.
We show that UniLoss performs well on a classic classification setting, multi-class classification, compared with the well-established conventional losses.
We also demonstrate UniLoss's ability in a ranking scenario that involves ranking multiple images in an evaluation set: average precision (area under the
precision-recall curve) in unbalanced binary classification.
In addition, we experiment with pose estimation where the output is structured as pixel-wise predictions.
Our main contributions in this work are:
\begin{itemize}
\item We present a new perspective of finding surrogate losses: evaluation can be refactored as a sequence of four transformations, where each non-differentiable transformation can be tackled individually.
\item We propose a new method: a unified framework to generate losses for various tasks reducing task-specific manual design.
\item We validate the new perspective and the new method on three tasks and four datasets, achieving comparable performance with conventional losses.
\end{itemize}
\section{Related Work}
\subsection{Direct Loss Minimization}
The line of work on direct loss minimization is related to UniLoss because we share a similar idea of finding a good approximation of the performance metric.
There have been many efforts to directly optimize the metrics of specific classes of tasks.
For example, \cite{taylor2008softrank} optimized ranking metrics such as Normalized Discounted
Cumulative Gain by smoothing them with an assumed probabilistic distribution of
documents. \cite{henderson2016end} directly optimized mean average precision in object
detection by computing ``pseudo partial derivatives'' for various continuous variables.
\cite{nguyen2013algorithms} explored to optimize the 0-1 loss in binary classification by
search-based methods including branch and bound search, combinatorial search, and also coordinate descent on the relaxations of 0-1 losses.
\cite{Liu2016ICML} proposed to improve the conventional cross-entropy loss by multiplying a preset constant with the angle in the inner product of the $\mathtt{softmax}$ function to encourage large margins between classes.
\cite{fu2018end} proposed an end-to-end optimization approach for speech enhancement by directly optimizing
short-time objective intelligibility (STOI)
which is a differentiable performance metric.
In addition to the large algorithmic differences, these works also differ from ours in
that they are tightly coupled with specific tasks and applications.
\cite{hazan2010direct} and \cite{song2016training} proved that under mild conditions, optimizing a max-margin
structured-output loss is asymptotically equivalent to directly optimizing the performance
metrics.
Specifically, assume a model
in the form of a differentiable scoring function $S(x, y;w):\mathcal{X} \times \mathcal{Y}\rightarrow \mathbf{R}$ that
evaluates the compatibility of output $y$ with input $x$.
During inference, they predict the
$y_w$ with the best score:
\begin{equation}
y_w = \underset{y}{\operatorname{argmax}}~S(x,y;w).
\end{equation}
During training, in addition to this regular inference, they also
perform the \emph{loss-augmented inference}~\cite{tsochantaridis2005large,hazan2010direct}:
\begin{equation}
y^\dagger = \underset{y}{\operatorname{argmax}}~S(x,y;w) + \epsilon \xi(y_w, y),
\label{eqn:max-direct}
\end{equation}
where $\xi$ is the final performance metric (in terms of error), and $\epsilon$ is a small time-varying
weight.
\cite{song2016training} generalized this result from linear scoring functions
to arbitrary scoring functions, and developed an efficient loss-augmented inference algorithm
to directly optimize average precision in ranking tasks.
While the above max-margin losses can ideally work with many different performance
metrics $\xi$, its main limitation in practical use is that it can be highly nontrivial to design an efficient
algorithm for the loss-augmented inference, as it often requires some
clever factorization of the performance metric $\xi$ over the components of the structured
output $y$. In fact, for many metrics the loss-augmented inference is NP-hard and one must
resort to designing approximate algorithms, which further increases the difficulty of practical use.
In contrast, our method does not demand the same level of human ingenuity. The main human
effort involves refactoring the inference code and evaluation code to a particular
format, which may be further eliminated by automatic code analysis. There is no need to design a new inference algorithm over discrete outputs and analyze its efficiency.
The difficulty of designing loss-augmented inference algorithms for each individual task makes it impractical to compare fairly with max-margin methods on diverse tasks, because it is unclear how to design the inference algorithms.
Recently, some prior works propose to directly optimize the performance metric
by learning a parametric surrogate loss~\cite{huang2019addressing,wu2018learning,santos2017learning,josif2019learning,engilberge2019sodeep}.
During training, the model is updated to minimize the current surrogate loss while the parametric surrogate loss is also updated to align with the performance metric.
Compared to these methods,
UniLoss does not involve any learnable parameters in the loss.
As a result, UniLoss can be applied universally across different settings without any training, whereas a parametric surrogate loss has to be trained separately for different tasks and datasets.
Reinforcement Learning inspired algorithms have been used to optimize performance metrics
for structured output problems, especially those that can be formulated as taking a
sequence of actions~\cite{ranzato2015sequence,liu2017improved,caicedo2015active,yeung2016end,zhou2018improving}. For example,
\cite{liu2017improved} use policy
gradients~\cite{sutton2000policy} to optimize metrics for image captioning.
We differ from these
approaches in two key aspects. First, we do not need to formulate a task as
a sequential decision problem, which is natural for certain tasks such as text
generation, but unnatural for others such as human pose estimation. Second, these methods treat performance metrics as black boxes, whereas we assume access to
the code of the performance metrics, which is a valid assumption in most
cases. This access allows us to reason about
the code and generate better gradients.
\subsection{Surrogate Losses}
There has been a large body of literature studying surrogate losses, for tasks including multi-class
classification~\cite{bartlett2006convexity,zhang2004statistical,tewari2007consistency,crammer2001algorithmic,allwein2000reducing,ramaswamy2013convex,ramaswamy2014consistency}, binary classification~\cite{bartlett2006convexity,zhang2004statistical,ramaswamy2013convex,ramaswamy2014consistency} and pose estimation~\cite{tompson2014joint}.
Compared to them, UniLoss reduces the manual effort to design task-specific losses. UniLoss, as a general loss framework, can be applied to all these tasks and achieve comparable performance.
\section{UniLoss}
\subsection{Overview}
\label{sec:method_overview}
UniLoss provides a unified way to generate a surrogate loss for training deep networks with mini-batch gradient descent without task-specific design. In our general framework, we first re-formulate the evaluation process and then approximate the non-differentiable functions using interpolation.
\subsubsection{Original Formulation}
Formally, let $\mathbf{x} = (x_1, x_2, \ldots, x_n) \in \mathcal{X}^n$ be a set of $n$ training examples and
$\mathbf{y}=(y_1, y_2, \ldots, y_n) \in \mathcal{Y}^n$ be the ground truth. Let $\phi(\cdot;w):
\mathcal{X} \rightarrow \mathbf{R}^d$ be a deep network parameterized by weights $w$ that outputs a $d$-dimensional
vector; let
$\delta:\mathbf{R}^d \rightarrow \mathcal{Y}$ be a decoder that decodes the network output to a possibly
discrete final output; let $\xi: \mathcal{Y}^n \times \mathcal{Y}^n \rightarrow \mathbf{R}$
be an evaluator. $\phi$ and $\delta$ are applied in a mini-batch fashion on $\mathbf{x} = (x_1, x_2, \ldots, x_n)$; the performance $e$ of the deep network is then
\begin{equation}
e=\xi(\delta(\phi(\mathbf{x};w)), \mathbf{y}).
\label{eqn:e_ori}
\end{equation}
\subsubsection{Refactored Formulation}
Our approach seeks to find a surrogate loss to minimize $e$, with
the novel observation that in many cases $e$ can be refactored as
\begin{equation}
e = g (h (f(\phi(\mathbf{x};w),\mathbf{y}) ) ),
\label{eqn:reexpr}
\end{equation}
where $\phi(\cdot;w)$ is the same as in Eqn.~\ref{eqn:e_ori}, representing a deep neural network,
$f: \mathbf{R}^{n\times d} \times \mathcal{Y}^n
\rightarrow \mathbf{R}^{l}$ is differentiable and maps the output
real numbers and the ground truth to $l$ comparisons, each
representing the difference between a certain pair of real numbers, $h: \mathbf{R}^{l}
\rightarrow \{0,1\}^{l}$ maps the $l$ score differences to $l$ binary variables, and
$g:\{0,1\}^{l} \rightarrow \mathbf{R}$ computes the performance
metric from the binary variables. Note that $h$ has a restricted form that always maps continuous values to binary
values through the sign function, whereas $g$ can be an arbitrary
computation that maps binary values to a real number.
We give intermediate outputs some notations:
\begin{itemize}
\item Training examples $\mathbf{x},\mathbf{y}$
are transformed to scores $\mathbf{s} = (s_1, s_2, \ldots, s_{nd})$, where
$\mathbf{s} = \phi(\mathbf{x};w)$.
\item $\mathbf{s}$ is converted to comparisons (differences of two scores) $\mathbf{c} = (c_1, c_2, \ldots, c_l)$, where $\mathbf{c} = f(\mathbf{s}, \mathbf{y})$.
\item $\mathbf{c}$ is converted to
binary variables $\mathbf{b} = (b_1, b_2, \ldots, b_l)$ representing the binary outcome of the comparisons,
where $\mathbf{b} = h(\mathbf{c})$.
\item The binary variables are transformed to a single real number by $e=g(\mathbf{b})$.
\end{itemize}
This new refactoring of a performance metric allows us to
decompose the metric $e$ with $g$, $h$, $f$ and $\phi$, where
$\phi$ and
$f$ are
differentiable functions but
$h$ and $g$ are often non-differentiable.
The non-differentiability of $h$ and $g$ causes $e$ to be
non-differentiable with respect to network weights $w$.
\subsubsection{Differentiable Approximation}
Our UniLoss generates differentiable approximations of the
non-differentiable $h$ and $g$ through interpolation, thus making the metric $e$
optimizable with gradient descent.
Formally, UniLoss gives a differentiable approximation $\tilde{e}$ of $e$:
\begin{equation}
\tilde{e} = \tilde{g} (\tilde{h} (f( \phi(\mathbf{x};w),\mathbf{y}))),
\label{eqn:approx}
\end{equation}
where $f$ and $\phi$ are the same as in Eqn.~\ref{eqn:reexpr}, and $\tilde{h}$ and $\tilde{g}$
are the differentiable approximations of $h$ and $g$.
We explain a concrete example of multi-class classification and introduce the refactoring and
interpolation in detail based on this example in the following sections.
\subsection{Example: Multi-class Classification}
\label{sec:cls_example}
We take multi-class classification as an example to show how refactoring works. First, we give formal definitions of multi-class classification and the performance metric: prediction accuracy.
Input is a mini-batch of images $\mathbf{x}=(x_1, x_2, \ldots, x_n)$
and their corresponding ground truth labels are $\mathbf{y}=(y_1, y_2, \ldots, y_n)$
where $n$ is the batch size, $y_i\in \{1,2,\ldots, p\}$,
and $p$ is the number of classes (equal to the output dimension $d$ in Sec.~\ref{sec:method_overview}).
A network $\phi(\cdot;w)$ outputs a score matrix $\mathbf{s}=[s_{i,j}]_{n\times p}$,
where $s_{i,j}$ represents the score for the i-th image belonging to class j.
The decoder $\delta(\mathbf{s})$ decodes $\mathbf{s}$ into the discrete outputs
$\tilde{\mathbf{y}}=(\tilde{y}_1, \tilde{y}_2, \ldots, \tilde{y}_n)$
by
\begin{equation}
\label{eqn:argmax}
\tilde{y}_i= \underset{1\leq j \leq p}{\operatorname{argmax} }~s_{i,j},
\end{equation}
and $\tilde{y}_i$ represents the predicted label of the i-th image for $i=1,2,\ldots,n$.
The evaluator $\xi(\tilde{\mathbf{y}}, \mathbf{y})$ evaluates the accuracy $e$ from $\tilde{\mathbf{y}}$ and $\mathbf{y}$ by
\begin{equation}
\label{eqn:compare}
e=\frac{1}{n}\sum_{i=1}^n [y_i=\tilde{y}_i],
\end{equation}
where $[\cdot]$ is the Iverson bracket.
Putting the above together,
the predicted label for an image is correct if and only if
the score of its ground truth class is higher than the score of every other class:
\begin{equation}
\begin{split}
\label{eqn:final_pair}
[y_i=\tilde{y}_i]=\underset{j\ne y_i}{\underset{1\leq j \leq p}{\land}} [s_{i,y_i} - s_{i,j} > 0],
\text{ for all }1 \leq i \leq n,
\end{split}
\end{equation}
where $\land$ is logical and.
We thus refactor the decoding and evaluation process as a sequence of $f(\cdot)$ that transforms $\mathbf{s}$ to comparisons---$s_{i,y_i} - s_{i,j} \text{ for all } 1 \leq i \leq n, 1\leq j \leq p, \text{and } j\ne y_i$ ($n \times (p-1)$ comparisons in total), $h(\cdot)$ that transforms comparisons to binary values using $[\cdot > 0]$, and $g(\cdot)$ that transforms binary values to $e$ using logical and.
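As a quick sanity check of this refactoring, the decoder-evaluator form and the comparison-based form can be computed side by side. The snippet below is an illustrative sketch of our own (not the training code) and assumes the scores contain no ties:

```python
def accuracy_argmax(s, y):
    # delta then xi: decode each row by argmax, then average the Iverson brackets
    pred = [max(range(len(row)), key=row.__getitem__) for row in s]
    return sum(p == yi for p, yi in zip(pred, y)) / len(y)

def accuracy_refactored(s, y):
    # e = g(h(f(s, y))): f builds the p-1 score differences per image,
    # h thresholds them, and g takes the per-image logical-and and averages
    correct = []
    for row, yi in zip(s, y):
        c = [row[yi] - row[j] for j in range(len(row)) if j != yi]  # f
        b = [ci > 0 for ci in c]                                    # h
        correct.append(all(b))                                      # g, per image
    return sum(correct) / len(correct)
```

With distinct scores, the per-image logical-and of the comparisons coincides with the argmax test, so both functions return the same accuracy.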
Next, we introduce how to
refactor the above procedure into our formulation and approximate $g$ and $h$.
\subsection{Refactoring}
\label{sec:cls_decomp}
Given a performance metric, we refactor it in the form of Eqn.~\ref{eqn:reexpr}. We first transform the training images into scores
$\mathbf{s} = (s_1, s_2, \ldots, s_{nd})$.
We then get the score comparisons (differences of pairs of scores) $\mathbf{c} = (c_1, c_2, \ldots, c_l)$
using $\mathbf{c} = f(\mathbf{s},\mathbf{y})$. Each comparison is
$c_i= s_{k_i^1} - s_{k_i^2}, 1\leq i\leq l, 1\leq k_i^1, k_i^2 \leq nd$.
The function $h$ then transforms the comparisons to
binary values by $\mathbf{b} = h(\mathbf{c})$. $h$ is the step function,
i.e.,
$b_i=[c_i > 0], 1\leq i\leq l$.
The function $g$ then computes $e$ by $e=g(\mathbf{b})$,
where $g$ can be an arbitrary
computation that converts binary values to a real number. In practice, $g$ can be complex
and vary significantly across tasks and metrics.
For any performance metric whose decoder $\delta$ or evaluator $\xi$
in Eqn.~\ref{eqn:e_ori} involves discrete operations
(otherwise the metric $e$ is differentiable and trivial to handle),
the computation of $\xi(\delta(\cdot))$
can be refactored as a sequence of
optional continuous operations,
discrete operations that turn differentiable real numbers into discrete values,
and arbitrary subsequent operations.
To the best of our knowledge, the discrete operations always take the form of step functions, each of which can be expressed as a comparison of two numbers.
This refactoring is usually straightforward to obtain from the specification of the decoding
and evaluating procedures. The only manual effort is in identifying the discrete comparisons (binary variables).
Then we simply express the discrete comparisons as functions $f$ and $h$,
and represent the subsequent operations as function $g$.
In later sections we will show how to identify the binary variables for three commonly-used metrics in three scenarios, which can
be easily extended to other performance metrics.
Moreover, this process is largely a mechanical exercise, as it is equivalent to rewriting existing code in an alternative, rigid format.
\subsection{Interpolation}
The two usually non-differentiable functions $h$ and $g$ are approximated individually by interpolation.
\subsubsection{Scores to Binaries: $h$.}
In $\mathbf{b} = h(\mathbf{c})$, each element $b_i = [c_i > 0]$. We approximate
the step function $[\cdot]$ using the $\mathtt{sigmoid}$ function. That is,
$\tilde{\mathbf{b}} = \tilde{h}(\mathbf{c})=(\tilde{b}_1, \tilde{b}_2, \ldots, \tilde{b}_l)$, and each element
\begin{equation}
\tilde{b}_i = \mathtt{sigmoid}(c_i),
\label{eqn:sigmoid}
\end{equation}
where $1\leq i\leq l$.
We now have $\tilde{h}$ as the differentiable approximation of $h$.
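A minimal sketch (our own, using the plain sigmoid above with no temperature scaling) of the exact $h$ and its relaxation $\tilde{h}$:

```python
import math

def h_hard(c):
    # exact refactoring: step function applied to each comparison
    return [1.0 if ci > 0 else 0.0 for ci in c]

def h_soft(c):
    # differentiable surrogate: elementwise sigmoid of the score differences
    return [1.0 / (1.0 + math.exp(-ci)) for ci in c]
```

Rounding the soft values recovers the hard binaries whenever no comparison is exactly zero, which is what makes $\tilde{h}$ a faithful relaxation of $h$.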
\subsubsection{Binaries to Performance: $g$.}
\label{sec:method_g}
We approximate $g(\cdot)$ in $e=g(\mathbf{b})$ by multivariate interpolation over the input $\mathbf{b} \in
\{0,1\}^{l}$. More specifically,
we first sample a set of
configurations as ``anchors'' $\mathbf{a} = (\mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_t)$,
where $\mathbf{a}_i$ is a configuration of $\mathbf{b}$,
and compute the output values
$g(\mathbf{a}_1),g(\mathbf{a}_2), \ldots, g(\mathbf{a}_t)$, where
$g(\mathbf{a}_i)$ is the actual performance metric value
and $t$ is the number of anchors sampled.
We then get an interpolated function over the anchors as
$\tilde{g}(\cdot;\mathbf{a})$.
We finally get
$\tilde{e} = \tilde{g}(\tilde{\mathbf{b}};\mathbf{a})$, where $\tilde{\mathbf{b}}$ is computed from $\tilde{h}$, $f$ and $\phi$.
By choosing a
differentiable interpolation method, the $\tilde{g}$ function becomes trainable using gradient-based methods.
We use a common yet effective interpolation method: inverse distance weighting (IDW)~\cite{shepard1968two}:
\begin{equation}
\tilde{g}(\mathbf{u};\mathbf{a}) =
\begin{cases}
\frac{\sum_{i=1}^t{\frac{1}{d(\mathbf{u},\mathbf{a}_i)} g(\mathbf{a}_i)}}{\sum_{i=1}^t{\frac{1}{d(\mathbf{u},\mathbf{a}_i)}}}, & d(\mathbf{u},\mathbf{a}_i)\ne 0 \text{ for }1 \leq i \leq t; \\
g(\mathbf{a}_i), & d(\mathbf{u}, \mathbf{a}_i) = 0 \text{ for some }i. \\
\end{cases}
\end{equation}
where $\mathbf{u}$ represents the input to $\tilde{g}$ and $d(\mathbf{u},\mathbf{a}_i)$ is the Euclidean distance between $\mathbf{u}$ and $\mathbf{a}_i$.
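A minimal IDW implementation following the formula above (our sketch, with power-1 inverse distances); the zero-distance branch makes $\tilde{g}$ exact at the anchors:

```python
import math

def idw(u, anchors, values):
    # inverse distance weighting: sum_i g(a_i)/d(u,a_i), normalized by sum_i 1/d(u,a_i)
    num, den = 0.0, 0.0
    for a, g_a in zip(anchors, values):
        d = math.sqrt(sum((ui - ai) ** 2 for ui, ai in zip(u, a)))
        if d == 0.0:
            return g_a  # u coincides with anchor a_i: return its exact value
        num += g_a / d
        den += 1.0 / d
    return num / den
```

Because the weights are positive and normalized, the interpolated value always stays within the range of the anchor values, and it is differentiable in $\mathbf{u}$ away from the anchors.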
We select a subset of anchors based on the
current training examples. We use a mix of three types of anchors---good anchors with high performance values globally, bad anchors with low performance
values globally, and nearby anchors that are close to the current configuration, which is computed from the current training examples and network weights.
By using both the global information from the good and bad anchors
and the local information from the nearby anchors,
we are able to get an informative interpolation surface.
In contrast, a naive random anchor sampling strategy does not give an informative interpolation surface and cannot train the network at all in our experiments.
More specifically, we adopt a straightforward anchor sampling strategy for all tasks and metrics: we obtain good anchors by flipping some bits from the best anchor,
which is the ground truth.
The bad anchors are generated by randomly sampling binary values.
The nearby anchors are obtained by flipping some bits from the current configuration.
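The sampling strategy can be sketched as follows (our own reading; the number of flipped bits $k$ and the per-type count $t$ are hyperparameters not fixed in the text):

```python
import random

def flip_bits(b, k, rng):
    # flip k randomly chosen bits of a binary configuration
    b = list(b)
    for i in rng.sample(range(len(b)), k):
        b[i] = 1 - b[i]
    return b

def sample_anchors(b_best, b_current, t, k, rng):
    l = len(b_best)
    good = [flip_bits(b_best, k, rng) for _ in range(t)]             # near the ground-truth configuration
    bad = [[rng.randint(0, 1) for _ in range(l)] for _ in range(t)]  # random configurations
    nearby = [flip_bits(b_current, k, rng) for _ in range(t)]        # near the current configuration
    return good, bad, nearby
```

The good and nearby anchors sit at Hamming distance exactly $k$ from their respective reference configurations, while the bad anchors probe the surface globally.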
\section{Experimental Results}
To use our general framework UniLoss on each task, we refactor the evaluation process of the task into the format in Eqn.~\ref{eqn:reexpr}, and then approximate the non-differentiable functions $h$ and $g$ using the interpolation method in Sec. 3.
We validate the effectiveness of the UniLoss framework in three representative tasks in different scenarios: a ranking-related task using a set-based metric---average precision, a pixel-wise prediction task, and a common classification task.
For each task, we demonstrate how to formulate the evaluation process to our refactoring and compare our UniLoss with interpolation to the conventional task-specific loss.
More implementation details and analysis can be found in the appendix.
\subsection{Tasks and Metrics}
\subsubsection{Average Precision for Unbalanced Binary Classification}
Binary classification assigns an example to one of two
classes---positive and negative. Potential applications include face classification and image retrieval.
In most cases the numbers of positives and negatives are unbalanced, so a typical classification metric such as accuracy cannot properly reflect how good a model is. For example, when the ratio of positives to negatives is 1:9, predicting all examples as negative already achieves 90\% accuracy.
For such unbalanced binary classification, metrics such as precision, recall, and average precision (AP), i.e., the area under the
precision-recall curve, are more descriptive. We use AP as our target metric in this task.
It is notable that AP is fundamentally different from accuracy because it is a set-based metric. It can only be evaluated on a set of images, and involves not only the correctness of each image but also the ranking of multiple images.
This task and metric are chosen to demonstrate that UniLoss can effectively optimize for a set-based performance metric that requires ranking the images.
\subsubsection{PCKh for Single-Person Pose Estimation}
Single-person pose estimation predicts the
locations of human joints in a given image. It is usually formulated as a pixel-wise prediction problem, where the neural network outputs a score for each pixel indicating how likely that location is to be the joint.
Following prior work, we use PCKh (Percentage of Correct Keypoints with respect to head size) as the performance metric.
It computes the percentage of predicted joints that are within a given radius $r$ of the ground truth, where the radius is half of the head segment length.
This task and metric are chosen to validate the effectiveness of UniLoss in optimizing for a pixel-wise prediction problem.
\subsubsection{Accuracy for Multi-class Classification}
Multi-class classification is a common task that has a well-established conventional loss --- cross-entropy loss.
We use
accuracy (the percentage of correctly classified images) as our
metric, following common practice.
This task and metric are chosen to demonstrate that, even for the most common classification setting, UniLoss performs as effectively as the well-established conventional loss.
\subsection{Average Precision for Unbalanced Binary Classification}
\subsubsection{Dataset and Baseline}
We adapt the handwritten digit dataset MNIST into a binary classification task, predicting zeros versus non-zeros. Given an image
containing a single digit from 0 to 9, we classify it into the zero (positive) class or
the non-zero (negative) class, giving a positive-to-negative ratio of 1:9.
We create a validation split
by reserving 6k images from the original training set.
We use a 3-layer fully-connected neural network with
500 and 300 neurons in each hidden layer respectively. Our baseline model is trained with
a 2-class cross-entropy loss.
We train both baseline and UniLoss with a fixed learning rate of 0.01 for 30
epochs. We sample 16 anchors for each anchor type in our anchor interpolation for all of our experiments except in ablation studies.
\subsubsection{Formulation and Refactoring}
The evaluation process essentially ranks images using pair-wise comparisons and computes the area under the precision-recall curve based on the ranking. The ranking is determined by whether positive images score higher than negative images.
The output for a mini-batch of $n$ images is
$\mathbf{s}=(s_1, s_2, \ldots, s_n)$, where $s_i$ represents the predicted score of the i-th image
being positive. The
binary variables are
$\mathbf{b} = \{b_{i,j} = [c_{i,j} > 0] = [s_i - s_j > 0]\}$, where i ranges over the
positives and j over the negatives.
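Because AP depends only on how many negatives outrank each positive, and that is exactly what these binary variables encode, $g$ can be written down explicitly. The sketch below (our illustration, assuming distinct scores) checks such a comparison-based $g$ against AP computed directly from the ranking:

```python
def ap_from_scores(scores, labels):
    # reference AP: mean precision at the rank of each positive
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, total = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            hits += 1
            total += hits / rank
    return total / hits

def ap_from_binaries(b, n_neg):
    # g(b): b[i][j] = [s_i - s_j > 0] for positive i, negative j;
    # m_i = number of negatives ranked above positive i determines AP
    m = sorted(n_neg - sum(row) for row in b)
    return sum((k + 1) / (k + 1 + mk) for k, mk in enumerate(m)) / len(m)
```

The relative order of the positives among themselves does not enter, which is why the pos-neg comparisons alone suffice as binary variables.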
\subsubsection{Results}
UniLoss achieves an AP of 0.9988, on par with the baseline cross-entropy loss (0.9989).
This demonstrates that UniLoss can effectively optimize for a performance metric (AP) that
is complicated to compute and involves a batch of images.
\subsection{PCKh for Single-Person Pose Estimation}
\subsubsection{Dataset and Baseline}
We use MPII~\cite{andriluka14cvpr}, which has around 22K images for training and 3K images for testing. For simplicity, we perform experiments on the
head joint only, but our method can be applied to an arbitrary number of human
joints without any modification.
We adopt the Stacked Hourglass~\cite{newell2016stacked} as our model. The baseline loss is
the Mean Squared Error (MSE) between the predicted heatmaps and the manually-designed ``ground
truth'' heatmaps.
We train a
single-stack hourglass network for both UniLoss and MSE using
RMSProp~\cite{hinton2012lecture} with an initial learning rate of 2.5e-4 for 30 epochs, and then
divide it by 4 every 10 epochs until epoch 50.
\subsubsection{Formulation and Refactoring}
Assume the network generates a mini-batch of heatmaps $\mathbf{s}=(\mathbf{s}^1,\mathbf{s}^2,\ldots,\mathbf{s}^n)\in
\mathbf{R}^{n\times m}$,
where $n$ is the batch size, $m$ is the number of pixels in each image.
The pixel with the highest score in each heatmap is predicted as a key point during evaluation.
We denote the pixels within radius $r$ of the ground truth as
positive pixels and all other pixels as negative; each heatmap $\mathbf{s}^k$ can then be flattened as
$(s_{pos,1}^{k}, s_{pos,2}^{k}, \ldots, s_{pos,m_k}^{k},
s_{neg,1}^{k}, \ldots,$
$s_{neg,m-m_k}^{k})$,
where $m_k$ is the number of positive pixels in the k-th heatmap and $s_{pos,j}^{k}$ ($s_{neg,j}^{k}$) is the score of the j-th positive (negative) pixel in the k-th heatmap.
PCKh requires determining whether a positive pixel has the highest score in its heatmap.
Therefore, we need to compare each pair of positive and negative pixels and this leads to the binary variables
$\mathbf{b}=(b_{k,i,j}) \text{ for }
1\leq k \leq n$, $1\leq i \leq m_k$, $1\leq j \leq m-m_k$,
where $b_{k,i,j} = [s_{pos,i}^{k} - s_{neg,j}^{k} > 0]$, i.e. the comparison between the i-th positive pixel and the j-th negative pixel in the k-th heatmap.
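Here $g$ reduces to an or of ands over the comparisons: joint $k$ is counted correct iff some positive pixel outscores every negative pixel, which, for distinct scores, is the same as the argmax landing within the radius. An illustrative sketch (our own):

```python
def pckh_from_binaries(b):
    # b[k][i][j] = [s_pos_i - s_neg_j > 0] for heatmap k;
    # correct iff some positive pixel beats all negative pixels
    correct = [any(all(row) for row in heat) for heat in b]
    return sum(correct) / len(correct)

def pckh_direct(heatmaps, n_pos):
    # reference: argmax over the flattened heatmap (positive pixels listed first)
    correct = [max(range(len(h)), key=h.__getitem__) < n_pos[k]
               for k, h in enumerate(heatmaps)]
    return sum(correct) / len(correct)
```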
\subsubsection{Results}
Note that the manual design of the target heatmaps is part of the MSE loss for pose estimation,
and the loss's success heavily relies on that design.
If we naively set the pixels at the exact joints to 1 and all remaining pixels to 0 in the heatmaps, training diverges.
Instead, \cite{tompson2014joint} proposed designing target heatmaps as a 2D Gaussian bump centered on the ground truth joints, whose shape is controlled by its variance $\sigma$ and the bump size. The success of the MSE loss thus relies on the choices of $\sigma$ and the bump size.
UniLoss, on the other hand, requires no such design.
As shown in Table~\ref{tab:pose},
our UniLoss achieves a 95.77 PCKh, which is comparable to the 95.74 PCKh of MSE with the best $\sigma$. This
validates the effectiveness of UniLoss in optimizing for a pixel-wise prediction problem.
We further observe that the baseline is sensitive to the shape of the 2D Gaussian, as shown in Table~\ref{tab:pose}. A smaller $\sigma$ concentrates the target heatmaps on the ground truth joints and makes the optimization unstable, while a larger $\sigma$ produces vague training targets and decreases performance.
This demonstrates that conventional losses require dedicated manual design while UniLoss can be applied directly.
\begin{table}[!t]
\caption{PCKh of Stacked Hourglass with MSE and UniLoss on the MPII validation.}
\begin{center}
\begin{tabular}{@{}l@{\hskip 0.2in}rrrrrrr@{\hskip 0.2in}r@{}}
\toprule
Loss & MSE $\sigma=0.1$ & $\sigma=0.5 $ & $\sigma=0.7$ & $\sigma=1$ & $\sigma=3$ & $\sigma=5$ & $\sigma=10$ & UniLoss \\
\midrule
PCKh & \qquad 91.31 & 95.13 & 93.06 & 95.71 & 95.74 & 94.99 & 92.25 & \textbf{95.77} \\
\bottomrule
\end{tabular}
\end{center}
\label{tab:pose}
\end{table}
\begin{table}[!t]
\caption{Accuracy of ResNet-20 with CE loss and UniLoss on the CIFAR-10 and CIFAR-100 test set.}
\begin{center}
\begin{tabular}{@{}l@{\hskip 1in}r@{\hskip 1in}r@{}}
\toprule
Loss & CIFAR-10 & CIFAR-100\\
\midrule
CE (cross-entropy) Loss & 91.49 & 65.90\\
UniLoss & 91.64 & 65.92\\
\bottomrule
\end{tabular}
\end{center}
\label{tab:cifar}
\end{table}
\begin{table}[!t]
\caption{Ablation study for mini-batch sizes on CIFAR-10.}
\begin{center}
\begin{tabular}{@{}l@{\hskip 0.3in}rrrrrrrr@{}}
\toprule
Batch Size & 8 & 16 & 32 & 64 & 128 & 256 & 512 & 1024\\
\midrule
Accuracy & 87.89 & 90.05 & 90.82 & 91.23 & \textbf{91.64} & 90.94 & 89.10 & 87.20 \\
\bottomrule
\end{tabular}
\end{center}
\label{tlb:batchsize}
\end{table}
\begin{table}[!t]
\caption{Ablation study for number of anchors on the three tasks.}
\begin{center}
\begin{tabular}{@{}l@{\hskip 0.3in}c@{\hskip 0.3in}r@{}}
\toprule
Task & \#Anchors in Each Type & Performance \\
\midrule
& 5 & 0.9988\\
Bin. Classification (AP) & 10 & 0.9987\\
& 16 & 0.9988\\
\midrule
& 5 & 91.55\%\\
Classification (Acc) & 10 & 91.54\%\\
& 16 & 91.64\%\\
\midrule
& 5 & 94.79\%\\
Pose Estimation (PCKh) & 10 & 94.95\%\\
& 16 & 95.77\%\\
\bottomrule
\end{tabular}
\end{center}
\label{tlb:numpoints}
\end{table}
\subsection{Accuracy for Multi-class Classification}
\subsubsection{Dataset and Baseline}
We use CIFAR-10 and CIFAR-100
~\cite{krizhevsky2009learning}, with $32 \times 32$ images and 10/100 classes. They each have
50k training images and 10k test images. Following prior work~\cite{he2016deep}, we split
the training set into a 45k-5k train-validation split.
We use the ResNet-20 architecture~\cite{he2016deep}. Our
baselines are trained with cross-entropy (CE) loss. All experiments
are trained following the same augmentation and pre-processing techniques as in
prior work~\cite{he2016deep}.
We use an initial learning rate of 0.1,
divided by 10 and 100 at the 140th epoch and the 160th epoch, with a total of 200 epochs trained for both baseline and UniLoss on CIFAR-10.
On CIFAR-100, we train the baseline with the same training schedule and UniLoss with a 5x training schedule, because we only train 20\% of the binary variables at each step. For a fair comparison, we also train the baseline with the 5x training schedule but observe no improvement.
\subsubsection{Formulation and Refactoring}
As shown in Sec.~\ref{sec:cls_example}, given the output of a mini-batch of $n$ images
$\mathbf{s}=(s_{1,1},s_{1,2},\ldots,s_{n,p})$, we compare the score of the ground truth class against the scores of the other $p-1$ classes for each image. That is,
for the i-th image with the ground truth label $y_i$,
$b_{i,j} = [s_{i,y_i} - s_{i,j} > 0]$, where $1\leq j \leq p$, $j\ne y_i$, and $1\leq i\leq n$.
For tasks with many binary variables, such as CIFAR-100, we train a portion of the binary variables in each update to accelerate training.
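One plausible way to implement this subsampling (our guess at the mechanics; the text only fixes the fraction at 20\%) is to draw a random subset of the $(i,j)$ index pairs at each update:

```python
import random

def subsample_pairs(n, p, labels, frac=0.2, rng=None):
    # enumerate all comparison indices (i, j) with j != y_i,
    # then keep a random fraction of them for this update
    rng = rng or random.Random(0)
    pairs = [(i, j) for i in range(n) for j in range(p) if j != labels[i]]
    k = max(1, int(frac * len(pairs)))
    return rng.sample(pairs, k)
```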
\subsubsection{Results}
Our implementation of the baseline method obtains slightly better accuracy (91.49\%) than that reported
in~\cite{he2016deep} (91.25\%) on CIFAR-10, and obtains 65.9\% on CIFAR-100.
UniLoss performs similarly (91.64\% and 65.92\%) to the baselines on both datasets (Table~\ref{tab:cifar}), which shows that even when the conventional loss is well-established for the particular task and metric, UniLoss still matches it.
\subsection{Discussion of Hyper-parameters}
\subsubsection{Mini-batch Sizes}
We also use a mini-batch of
images for updates with UniLoss.
Intuitively, as long as the batch size is not extremely small or large,
it should be able to approximate the distribution of the whole dataset.
We explore different batch sizes on the CIFAR-10 multi-class classification task, as shown in
Table~\ref{tlb:batchsize}.
The results match our hypothesis---as long as the batch size is not extreme, the performance is similar. A batch size of 128 gives the best performance.
\subsubsection{Number of Anchors}
We explore different numbers of anchors in the three tasks.
We experiment with 5, 10, and 16 anchors for each of the good, bad, and nearby anchor types, i.e., 15, 30, and 48 anchors in total.
Table \ref{tlb:numpoints} shows that binary classification and classification are less sensitive to the number of anchors, while in pose estimation fewer anchors lead to slightly worse performance. This is related to the number of binary variables in each task: pose estimation produces a score for every pixel and thus has many more comparisons than binary and multi-class classification. With more binary variables, more anchors tend to be more beneficial.
\section{Conclusion and Limitations}
We have introduced UniLoss, a framework for generating surrogate losses in a unified way, reducing the amount of manual design of task-specific surrogate losses. The proposed framework is based on the observation that there exists a
common refactoring of the evaluation computation for many tasks and performance metrics.
Using this refactoring we generate
a unified differentiable approximation of the evaluation computation, through
interpolation.
We demonstrate that using
UniLoss, we can optimize for various tasks and performance metrics, achieving performance comparable to task-specific losses.
We now discuss some limitations of UniLoss.
One limitation is that the interpolation methods are not yet fully explored. We adopt the most straightforward yet effective choices in this paper, namely the sigmoid function and IDW interpolation, for simplicity and easy generalization across tasks.
There are potentially more sophisticated choices for the interpolation methods and for the anchor sampling strategy.
The second limitation is that the proposed anchor sampling strategy is biased towards the optimal configuration corresponding to the ground truth when multiple configurations can lead to the optimal performance.
The third limitation is that ranking-based metrics may result in a quadratic number of binary variables if a pairwise comparison is needed for every pair of scores. Fortunately, in many cases such as the ones discussed in this paper, the number of binary variables is not quadratic because many comparisons do not contribute to the performance metric.
The fourth limitation is that UniLoss currently still requires some manual effort (although less than designing a loss from scratch) to analyze the given code of the decoder and the evaluator for the refactoring. Combining automatic code analysis with our framework could further reduce the manual effort in loss design.
\section{Acknowledgements}
This work is partially supported by the National Science Foundation under Grant No. 1734266.
\bibliographystyle{splncs04}
\section{Introduction}
Early in the debate over the symmetry of the
superconducting order in the cuprates, a rather convincing
picture of the microwave properties of the YBCO-123 system was put
forward by Bonn {\it et al.}, \cite{bonnetal93} and later placed on a
microscopic foundation\cite{HPS,Rieck,SchachingerCarbotte}.
Crucial to this interpretation is the
observed collapse of the $d$-wave nodal quasiparticle scattering
rate as the system becomes
superconducting\cite{bonnetal91,nussetal,romeroetal},
leading to a
dramatic rise in the conductivity with decreasing temperature.
This rise is cut off when the inelastic mean free path becomes
comparable to the elastic one, and the conductivity subsequently
decreases because of the vanishing nodal carrier density as
temperature tends to zero. One consequence of this picture is that
the resulting conductivity peak should be suppressed
and occur at higher temperatures in dirtier samples.
In addition, the conductivity should approach
the universal value $\sigma_{0}\equiv e^2v_F/(h v_2)$ for zero
temperature and zero frequency as predicted by P.A. Lee\cite{PALee93} for
the case of isotropic scatterers, where
$v_F$ is the Fermi velocity and $v_2$ the gap velocity
at the node.
If one extracts $v_F/v_2$ from thermal conductivity\cite{TailleferDiraccone}
or angle resolved photoemission (ARPES) measurements\cite{ARPESDiraccone},
one finds that the universal value for both BSCCO and YBCO crystals should be
about $\sigma_0=5\times 10^5 \Omega^{-1}{\rm m}^{-1}$.
In YBCO, the residual value of the conductivity for $\omega$,$T\to$0
is difficult to determine, but appears to be approaching 3-4$\times \sigma_{0}$ in the best
crystals\cite{hosseinietal}. The peak in the conductivity occurs around
25K with an amplitude of approximately 100$\times \sigma_0$ for the lowest
frequency measured.
In BSCCO, the peak is located around 20K, but is only about 20\%
higher than the apparent residual value~\cite{leeetal}
of 8-10$\times \sigma_{0}$. Virtually no frequency dependence
is seen in the measured microwave frequency range~\cite{leeetal},
suggesting a very dirty material, in apparent
contradiction--within the ``standard" scenario--with the
low-temperature peak position. The longstanding puzzle of the low
temperature microwave peak together with indications of dirty
limit behavior have been analyzed as evidence for absorption into
a collective mode off resonance at low
frequencies\cite{orenstein1}, as well as a consequence of
nanoscale inhomogeneity\cite{orenstein2}.
In this work we argue that the temperature dependence of the
conductivity can be more naturally understood in terms of the
effect of extended scatterers present in the BSCCO crystal.
Current generation crystals are made typically with excess Bi,
deficiencies of Sr and Ca, and variable O content; cation
substitution is thought to occur frequently. Some aspects of this
defect distribution have been discussed recently by Eisaki
{\it et al.}\cite{Eisaki}. The net result of these defects is not only to
dope the nominally stoichiometric BSCCO crystal (pure BSCCO would
be an insulator), but to provide a relatively slowly varying
potential landscape for quasiparticles moving in the CuO$_2$
planes.
The effect of these extended scatterers on the normal state
has recently been discussed by Abrahams and Varma\cite{AV00}
and it has been pointed out by Zhu {\it et al.}\cite{ZHA}
that the broad momentum space peaks observed in Fourier transform
STM studies of BSCCO\cite{Howaldetal,Hoffmanetal,McElroyetal} can
only be explained by potential scatterers with finite range. In a
further application of this notion to ARPES, Zhu {\it et
al.}\cite{ZHS} showed that a large concentration of impurities with potentials
peaked in the forward direction could be present without
substantially
broadening quasiparticle states except near the node. Since the
microwave conductivity is dominated by nodal quasiparticles, it is
clearly important to ask what the effects of extended or forward
scatterers are in this case.
Since the work of Durst and Lee,\cite{DurstLee} we know that the
residual conductivity in the presence of extended scatterers can
be much larger than the ``universal" value $\sigma_{0}$. This
might account for the large value of the microwave conductivity
observed in the BSCCO-2212 system at low temperatures.
To make this case, however, one needs to
examine the influence of a finite scattering range at finite
temperatures and frequencies. We have therefore generalized the
analysis of Durst and Lee in this way
and applied the resulting treatment to
experiments on YBCO and BSCCO. For YBCO we find
that consideration of slightly extended strong scatterers provides
a better fit to the low temperature microwave conductivity than
pointlike strong scatterers. For BSCCO we conclude that it
is necessary to include a large concentration of weak-to-intermediate strength
extended impurities in addition to the strong in-plane
defects which are responsible for the unitary scattering resonances
observed by STM~\cite{Hudson}.
The outline of the paper is as follows.
In section~\ref{sec:Model}, we describe the model and derive
expressions for the self-energy and vertex function for
extended scatterers. Our approach, which is based on an extension of the work
by Durst and Lee~\cite{DurstLee}, aims at treating scattering
potentials with a range of at most a few lattice spacings and
is therefore in the opposite limit from
semiclassical calculations where the impurity
potentials extend over a few coherence lengths~\cite{Adagideli,Sheehy}.
In section~\ref{sec:YBCO} we apply our treatment
to the microwave conductivity of YBCO. We show that
the consideration of slightly extended instead of pointlike
strong potential scatterers improves the agreement with the
experimental spectra. In section~\ref{sec:BSCCO} we address the
microwave conductivity of BSCCO and demonstrate that it is necessary
to include a large concentration of weak extended scatterers
in order to explain the experimental spectra. A good fit is obtained
based on a realistic disorder model for
BSCCO which contains weak extended scatterers in addition to strong
pointlike impurities. Finally, in section IV, we present our
conclusions.
\section{Treatment of extended scatterers}
\label{sec:Model}
For low temperatures and low frequencies the quasiparticle
dispersion of a $d$-wave superconductor can be linearized around
the nodes. The resulting quasiparticle spectrum has the form of a
Dirac cone, whose anisotropy is determined by the ratio $v_F/v_2$
of the Fermi velocity $v_F=\partial \epsilon_k/ \partial
k=2\sqrt{2}t$ and the gap velocity $v_2=\partial \Delta_k/\partial
k=\Delta_0/\sqrt{2}$, where $t$ is the nearest neighbor hopping
parameter, $\Delta_0$ is the maximum gap value, and we have set
$a=\hbar=1$. At low temperatures and frequencies, quasiparticle
excitations are restricted to small regions around the nodes.
Therefore momentum transfer between quasiparticles is limited to
four wavevectors which connect the four nodes and include
intranode and internode scattering processes (see
Fig.~\ref{fig:NodalApprox}). Consequently a momentum dependent
impurity potential $V(k)$ can be represented by three parameters
$V_1$, $V_2$ and $V_3$ which correspond to the respective momentum
transfers. Based on these approximations and treating impurity
scattering in T-matrix approximation, an expression for the
microwave conductivity has been derived by Durst and
Lee~\cite{DurstLee}. They found that vertex corrections, which
arise from the momentum dependence of the impurity potential,
induce a dependence of the zero-temperature and zero-frequency
limit of the conductivity on the impurity potential and the
impurity concentration. In contrast to the case of pointlike
scatterers, no universal value of the electrical conductivity
therefore exists for extended scatterers. Durst and Lee,
however, did not further explore the frequency and temperature
dependence of the microwave conductivity. Here, we slightly modify
their approach and evaluate the conductivity for finite
frequencies and temperatures. Furthermore we consider the effect
of combining different types of scatterers and calculate the
microwave conductivity for BSCCO based on a realistic disorder
model.
\subsection{Self-energy}
\label{sec:SelfEnergy}
Before proceeding to two-particle quantities like the microwave
conductivity, it is instructive to focus first on single-particle
properties like the single-particle self-energy.
Using the Nambu notation, the single-particle self-energy in a
superconductor can be decomposed as:
\begin{equation}
\tilde \Sigma(k,\omega)= \sum_\alpha \Sigma_\alpha(k,\omega) \tilde \tau_\alpha \,,
\end{equation}
where $\tilde \tau_\alpha$ are the Pauli matrices and $\tilde \tau_0$ is the unit matrix.
Treating impurity scattering in T-matrix approximation gives rise to
the following self-energy
\begin{equation}
\tilde \Sigma(k,\omega)= n_i \tilde T_{kk}(\omega),
\end{equation}
where $n_i$ is the impurity concentration and $\tilde T_{kk}(\omega)$ is the
diagonal element of the T-matrix
\begin{equation}
\label{eq:TMatrixk}
\tilde T_{kk'}(\omega)= V_{kk'} \tilde \tau_3
+ \sum_{k''} V_{kk''} \tilde \tau_3 \tilde G(k'',\omega) \tilde T_{k''k'}(\omega)
\,.
\end{equation}
The self-energy $\tilde \Sigma(k,\omega)$ has to be determined
self-consistently in combination with the single-particle Green's function
\begin{equation}
\tilde G(k,\omega)^{-1}=\tilde G_0(k,\omega)^{-1}
-\tilde \Sigma(k,\omega)
\end{equation}
where the unperturbed Green's function is given as
\begin{equation}
\tilde G_0(k,\omega)=
\frac{\omega \tilde \tau_0 + \Delta_k \tilde \tau_1 +\epsilon_k \tilde
\tau_3}
{\omega^2-\epsilon_k^2-\Delta_k^2}\,.
\end{equation}
Following the approach of Durst and Lee~\cite{DurstLee}, we reduce the impurity
scattering potential to the four wave-vectors connecting
the nodes, i.e., $V_{kk'}$ is replaced by a $4\times4$-matrix in
nodal space
\begin{equation}
V_{kk'} \rightarrow \underline{V}=
\left( \begin{array}{cccc}
V_1 & V_2 & V_3 & V_2 \\
V_2 & V_1 & V_2 & V_3 \\
V_3 & V_2 & V_1 & V_2 \\
V_2 & V_3 & V_2 & V_1
\end{array} \right) \,,
\end{equation}
where $V_1$, $V_2$ and $V_3$ are the values of the impurity potential
at the wavevectors connecting the nodes, see Fig.~\ref{fig:NodalApprox}.
\begin{figure}[t]
\begin{center}
\leavevmode
\includegraphics[clip=true,width=.45\columnwidth]{NodalApprox.eps}
\caption{For low temperatures and low frequencies the momentum
transfer between quasiparticles is essentially limited to the four
wavevectors connecting the nodes of a $d$-wave
superconductor. Therefore the momentum dependent impurity potential
$V(k)$ can be represented by the values at the respective wavevectors,
i.e., by three parameters $V_1$, $V_2$ and $V_3$.}
\label{fig:NodalApprox}
\end{center}
\end{figure}
Using this simplification, the impurity potential can be
pulled out of the integral and the T-matrix becomes a
$4\times4$-matrix in nodal space
\begin{equation}
\tilde T_{jj'}(\omega)= V_{jj'} \tilde \tau_3
+ \tilde I_G(\omega) \tilde \tau_3 \sum_{j''} V_{jj''} \tilde T_{j''j'}(\omega),
\end{equation}
where $\tilde I_G(\omega)$ is the integral of the single-particle Green's
function over one quarter of the Brillouin zone
\begin{eqnarray}
\label{eq:SingleParticleGreen}
\tilde I_G(\omega) &=& \int_{0}^{\pi} \int_{0}^{\pi}
\frac{d^2k}{(2\pi)^2} \tilde G (k,\omega)
\approx I_G(\omega) \tilde \tau_0 \,.
\end{eqnarray}
This allows for an analytical solution~\cite{DurstLee} of the T-matrix
\begin{equation}
\tilde T_{jj'}= T_{jj'}^3 \tilde \tau_3 + T_{jj'}^0 \tilde \tau_0
\end{equation}
with
\begin{equation}
T_{jj'}^3=\left(\frac{\underline{V}}{1-I_G(\omega)^2\underline{V}^2}
\right)_{jj'} ,\,\,
T_{jj'}^0=\left(\frac{-I_G(\omega)\underline{V}^2}{1-I_G(\omega)^2\underline{V}^2}
\right)_{jj'} \,,
\end{equation}
where the denominators have to be calculated as inverse matrices
in nodal space.
This gives for the $\Sigma_0$-component of the self-energy:
\begin{eqnarray}
\label{eq:SelfEnT}
&&\!\!\!\!\!\!\!\Sigma_0(\omega)=-\frac{n_i}{4I_G(\omega)}
\Bigl(4 - \frac{2}{1\!-\!I^2_G(\omega)(V_1-V_3)^2} \\
&& \!\!\!\!\!\!\! - \frac{1}{1\!-\!I^2_G(\omega)(V_1-2V_2+V_3)^2}
- \frac{1}{1\!-\!I_G^2(\omega)(V_1+2V_2+V_3)^2} \Bigr) \nonumber
\end{eqnarray}
For an isotropic impurity potential, i.e., $V_1$=$V_2$=$V_3$=$V$ this
expression simplifies to:
\begin{equation}
\label{eq:SelfEnIso}
\Sigma_{0}^{\rm iso}(V,\omega)=-\frac{n_i}{4I_G(\omega)}
\Bigl( 1-\frac{1}{1-16 I_G^2(\omega) V^2} \Bigr) \,,
\end{equation}
which implies that the self-energy for a momentum-dependent impurity potential
in Eq.~(\ref{eq:SelfEnT}) can be decomposed into a sum of
self-energies corresponding to three different isotropic impurity
potentials
\begin{eqnarray}
\label{eq:SelfEnSum}
&&\Sigma_0(\omega)=
\Sigma_{0}^{\rm iso}(\frac{1}{4}(V_1+2V_2+V_3),\omega) \\
&&+\Sigma_{0}^{\rm iso}(\frac{1}{4}(V_1-2V_2+V_3),\omega)
+ 2 \Sigma_{0}^{\rm iso}(\frac{1}{4}(V_1-V_3),\omega) \,. \nonumber
\end{eqnarray}
Consequently, the self-energy for an anisotropic impurity potential in
the nodal approximation contains up to three resonances
corresponding to the different impurity strengths
$V=(V_1+2V_2+V_3)/4$, $V=(V_1-2V_2+V_3)/4$ and $V=(V_1-V_3)/4$, see
Fig.~\ref{fig:SelfEn3Peaks}.
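The structure of Eq.~(\ref{eq:SelfEnSum}) can be traced back to the
eigenvalues of the potential matrix $\underline{V}$, which is circulant in
nodal space:
\begin{equation}
\lambda_{1,2}=V_1\pm 2V_2+V_3 \,, \qquad \lambda_3=\lambda_4=V_1-V_3 \,.
\end{equation}
Since all diagonal elements of the T-matrix are equal by symmetry, each
eigenvalue channel contributes one isotropic-type term with effective
potential $V=\lambda_j/4$, and the double degeneracy of $\lambda_3$
accounts for the factor of 2 in the last term of Eq.~(\ref{eq:SelfEnSum}).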
\begin{figure}[t]
\leavevmode
\begin{center}
\includegraphics[clip=true,width=.8\columnwidth]{SelfEn3Peak.eps}
\caption{Imaginary part of the self-consistently calculated self-energy
$\Sigma_0(\omega)$ for an
anisotropic impurity potential characterized by the three parameters
$V_1/t=100$, $V_2/t=66$ and $V_3/t=52$ (solid line). The positions of
the resonances coincide with the resonances of
the self-energies for isotropic impurity potentials $V/t=71$,
$V/t=12$ and $V/t=5$. Here $\Delta_0/t=.29$ and $n_i=.00002$ have been used.}
\label{fig:SelfEn3Peaks}
\end{center}
\end{figure}
If the scattering strength of the impurities is weak, they can
be treated within Born approximation and the self-energy of an
extended weak impurity becomes:
\begin{equation}
\label{eq:SelfEnBorn}
\Sigma_0(\omega)=-n_i (V_1^2+2V_2^2+V_3^2) I_G(\omega) \,,
\end{equation}
i.e., the self-energy for an anisotropic impurity potential is
identical to the self-energy for an isotropic impurity potential with
$V=\sqrt{V_1^2+2V_2^2+V_3^2}$.
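The combination under the square root is simply the diagonal element of
$\underline{V}^2$, i.e., the squared potentials summed over the intranode
($V_1$), the two adjacent-node ($V_2$) and the opposite-node ($V_3$)
scattering channels:
\begin{equation}
\bigl(\underline{V}^2\bigr)_{jj}=V_1^2+2V_2^2+V_3^2
=\tfrac{1}{4}\bigl[(V_1+2V_2+V_3)^2+(V_1-2V_2+V_3)^2+2(V_1-V_3)^2\bigr]\,,
\end{equation}
involving the same eigenvalue combinations as in Eq.~(\ref{eq:SelfEnSum}).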
\subsection{Microwave conductivity}
In linear response the electrical conductivity is given as:
\begin{equation}
\sigma (\Omega,T)=-\frac{{\rm Im} \Pi_{\rm
ret}(\Omega,T)}{\Omega}\,,
\end{equation}
where $\Pi_{\rm ret}(\Omega,T)$ is the retarded current-current
correlation function, which can be obtained by analytical continuation
from
\begin{equation}
\Pi(i\Omega)=\frac{e^2v_f^2}{\beta}\sum_{i\omega,k}
{\rm Tr}[\tilde G(k,i\omega) \tilde G(k,i\omega+i\Omega)
\tilde \Gamma(k,i\omega+i\Omega)] \,,
\end{equation}
where $\tilde \Gamma(k,i\omega+i\Omega)$ is the vertex function, which
for a $d$-wave superconductor arises entirely
from the momentum dependence of the impurity potential~\cite{HWE88} and will be
calculated here as the sum of ladder diagrams.
For simplicity we neglect the $\Sigma_3$-component of the
self-energy and keep only the $\Sigma_0$-component. This
approximation becomes exact in the unitary limit for isotropic
scattering $V_1=V_2=V_3$ and for purely forward scattering
$V_2=V_3=0$, because the $\Sigma_3$-component vanishes in these
limits, and also in the Born approximation for all scattering
potentials. In terms of diagrams this means that all summations
are reduced to diagrams with an even number of impurity lines. In
order to obtain a conserving approximation~\cite{BaymKadanoff} for
the conductivity, the vertex corrections also have to be
restricted to diagrams containing even numbers of impurity lines,
see Fig.~\ref{fig:TmatrixEven}. This is a modification to the
treatment by Durst and Lee~\cite{DurstLee}, who have summed all
ladder diagrams with both even and odd numbers of impurity lines.
This modification is necessary when $\Sigma_3$ is neglected
because otherwise the vertex corrections violate analyticity at
finite $\omega$ and $T$. Furthermore, this modification leads to a
simplification of the expression for the conductivity obtained by
Durst and Lee~\cite{DurstLee}, see below. Note, however, that our
treatment still agrees with Durst and Lee~\cite{DurstLee} in the
Born approximation, which is not affected by this modification, and
also in the T-matrix approximation for the limit $\omega \to 0$
and $T \to 0$, i.e., the limit Durst and Lee have focused on.
Differences arise only for finite temperatures and frequencies in
the T-matrix approximation.
\begin{figure}[h]
\begin{center}
\leavevmode
\includegraphics[clip=true,width=.9\columnwidth]{SelfEnT.eps}
\\[.5cm]
\includegraphics[clip=true,width=.9\columnwidth]{OptCondTnew.eps}
\caption{T-matrix approximation for the self-energy $\Sigma_0$ and the
microwave conductivity $\sigma$. Because we neglect the
$\tilde \tau_3$-component of the self-energy, which corresponds to
neglecting all self-energy diagrams with an odd number of impurity
lines, diagrams with an odd number of impurity lines also have to be
excluded from the vertex corrections.}
\label{fig:TmatrixEven}
\end{center}
\end{figure}
Summing up all ladder diagrams with an even number of impurity
lines one arrives at the following expression for the current-current
correlation function:
\begin{equation}
\Pi(i\Omega)=\frac{e^2 v_f}{\pi v_2} \sum_{i\omega}
J(i\omega,i\Omega)
\end{equation}
with
\begin{equation}
\label{eq:J}
J(i\omega,i\Omega)=\frac{I_B(i\omega,i\Omega)}
{1-\gamma(i\omega,i\Omega)I_B(i\omega,i\Omega)}
\end{equation}
where $I_B(i\omega,i\Omega)$ is the momentum integrated particle-hole
bubble
\begin{equation}
I_B(i\omega,i\Omega) = \int_{0}^{\pi} \int_{0}^{\pi}
\frac{d^2k}{(2\pi)^2} \tilde G(k,i\omega) \tilde G(k,i\omega+i\Omega) \,.
\end{equation}
Note that here we calculate all integrals $I_B(i\omega,i\Omega)$ and
$I_G(\omega)$ numerically instead of replacing the elliptical
integration area by a circle as in Ref.~\onlinecite{DurstLee}, because
this approximation becomes inaccurate for small values of the gap,
i.e., close to $T_c$, where we parameterize the temperature dependence
of the gap in the following way:~\cite{DSH01}
\begin{equation}
\Delta_0(T)=\Delta_0 \tanh (\alpha \sqrt{T_c/T-1})
\end{equation}
with $\alpha=3.0$.
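This interpolation has the correct limiting behavior: for $T\to 0$ the gap
saturates at $\Delta_0$, while close to $T_c$ it reduces to a
mean-field-like square-root dependence,
\begin{equation}
\Delta_0(T)\approx \Delta_0\,\alpha\sqrt{1-T/T_c} \qquad (T\to T_c)\,.
\end{equation}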
The vertex function is given by
\begin{eqnarray}
\gamma(i\omega,i\Omega)&=&\frac{n_i}{4\pi v_f v_2} \times\\
&&(T_{11}^0(i\omega)T_{11}^0(i\omega+i\Omega)
+T_{11}^3(i\omega)T_{11}^3(i\omega+i\Omega) \nonumber \\
&&\!\!-T_{13}^0(i\omega)T_{13}^0(i\omega+i\Omega)
-T_{13}^3(i\omega)T_{13}^3(i\omega+i\Omega))\,. \nonumber
\end{eqnarray}
After analytical continuation the microwave conductivity can be
expressed as~\cite{DurstLee}
\begin{eqnarray}
\label{eq:Conductivity}
&&\sigma(\Omega)=\frac{e^2 v_f}{2 \pi^2 v_2}
\int d \omega \frac{n_F(\omega)-n_F(\omega+\Omega)}{\Omega}
\\
&&\bigl( {\rm Re} J(\omega-i\delta,\omega+\Omega+i\delta)
-{\rm Re} J(\omega+i\delta,\omega+\Omega+i\delta)
\bigr) \,. \nonumber
\end{eqnarray}
In Born approximation one arrives at a very similar expression for the
conductivity, but the vertex function $\gamma (i\omega, i\Omega)$ in
Eq.~(\ref{eq:J}) is replaced by the simpler form
\begin{equation}
\label{eq:VertexBorn}
\gamma^{\rm Born}=\frac{n_i}{4\pi v_f v_2} (V_1^2-V_3^2) \,.
\end{equation}
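The structure of Eq.~(\ref{eq:VertexBorn}) reflects the usual transport
weighting of the scattering channels by the relative current directions at
the nodes: intranode scattering ($V_1$) preserves the current direction,
scattering between opposite nodes ($V_3$) reverses it, and scattering
between adjacent nodes ($V_2$) connects orthogonal current directions and
thus drops out,
\begin{equation}
\gamma^{\rm Born}\propto \sum_{j'} \hat v_{f,j}\!\cdot\!\hat v_{f,j'}\, V_{jj'}^2
= (+1)\,V_1^2 + 0\cdot 2V_2^2 + (-1)\,V_3^2 \,.
\end{equation}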
So far we have focused on the effect of impurity scattering,
which is appropriate for low temperatures and low
frequencies. At higher temperatures, however, it
is essential to take into account inelastic
scattering processes as well, such as quasiparticle-quasiparticle scattering or
scattering off spin fluctuations. These inelastic
scattering processes are suppressed below $T_c$ due to the opening of
the superconducting gap in the quasiparticle excitation spectrum and
therefore the contribution of inelastic scattering increases rapidly
when $T_c$ is approached from the low temperature side. A full treatment of
inelastic scattering is beyond the scope of this paper.
It has been pointed out by Walker and Smith~\cite{WS00}
that the contribution of quasiparticle-quasiparticle scattering to the transport
lifetime is exponentially suppressed for low temperatures because only
Umklapp scattering processes can relax the current and a
finite excitation energy $\Delta_U$ is necessary to permit an
Umklapp scattering process for a realistic Fermi surface.
Thus, we include the effect of inelastic scattering by simply
adding the inverse transport lifetime $\tau_{\rm inel}^{-1}(T)$, which
has been obtained by Duffy {\it et al.}~\cite{DSH01} by extracting
the Umklapp scattering processes from scattering of
quasiparticles off spin fluctuations,
to the imaginary part of the self-energy $\Sigma_0(\omega)$ in
Eq.~(\ref{eq:SelfEnT}) or Eq.~(\ref{eq:SelfEnBorn})
\begin{equation}
\label{eq:SigmaInel}
\Sigma(\omega)=\Sigma_0(\omega)-i (2 \tau_{\rm inel}(T))^{-1}\,.
\end{equation}
Note that our simplified treatment of $\tau_{\rm
inel}^{-1}$ completely neglects the frequency dependence of
inelastic scattering and therefore limits our approach to small
frequencies in the microwave regime, and prevents us from
calculating the conductivity in the THz-range.
\section{Comparison with experimental spectra of YBCO}
\label{sec:YBCO}
The microwave conductivity has been investigated in detail for
pointlike scatterers within the self-consistent T-matrix approximation~\cite{HPS}
and good agreement with the experimental data of YBCO has been found.
The temperature dependence of the
microwave conductivity for pointlike
scatterers, see also Fig.~\ref{fig:FitYBCOiso},
can be summarized in the following way. For zero
temperature and zero frequency the conductivity approaches a universal
value~\cite{PALee93} $\sigma_0=e^2 v_f/h v_2$
due to the fact that at zero temperature
impurities give rise to a finite quasiparticle
density of states while at the same time they reduce the quasiparticle
lifetime. At low temperatures the conductivity rises with increasing $T$
due to an increase in the number of excited
quasiparticles. The exact temperature dependence is
determined by the density of states in a $d$-wave superconductor and
the frequency dependence of the impurity
scattering rate. Starting from the opposite side, i.e., decreasing the
temperature below $T_c$ the conductivity also increases rapidly because inelastic
scattering is suppressed below $T_c$ due to the opening
of the superconducting gap in the quasiparticle excitation
spectrum. This leads to the formation of a peak
at intermediate temperatures, whose position is
determined by the microwave frequency
and the impurity scattering strength. This peak moves
to higher temperatures and its amplitude decreases with
increasing microwave frequency and impurity concentration.
Our best fit to the experimental spectra of
YBCO~\cite{hosseinietal} using pointlike strong scatterers is
displayed in Fig.~\ref{fig:FitYBCOiso}. Commonly used parameters
for YBCO are: $\Delta_0=400K$ for the gap maximum, $v_f/v_2=14$
for the anisotropy of the Dirac cone and $T_c=88.7K$
\cite{TailleferDiraccone,hosseinietal}.
In order to compare our theoretical curves to the
experimental data the value of the universal conductivity
$\sigma_0=e^2/(\hbar \pi^2) v_f/v_2$ has to be translated into a
three dimensional conductivity which can be done via
$\sigma_0^{3D}=\sigma_0^{2D} n_c$, where $n_c$ is the number of
CuO$_2$-planes per unit length in the c-direction, which is
$n_c$=1/(5.9\AA) for YBCO.
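For illustration, with $v_f/v_2=14$ and $n_c=1/(5.9\,\mbox{\AA})$ this amounts to
\begin{equation}
\sigma_0^{3D}=\frac{e^2}{\hbar\pi^2}\frac{v_f}{v_2}\,n_c
\approx 2.4\times 10^{-4}\,{\rm S}\times\frac{14}{\pi^2}\times\frac{1}{5.9\,\mbox{\AA}}
\approx 0.6\times 10^{6}\,\Omega^{-1}{\rm m}^{-1}\,.
\end{equation}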
Because $\sigma \sim \lambda^{-3}$, the conductivity $\sigma$ is
very sensitive to the value of the penetration depth $\lambda$,
which has recently been measured~\cite{Pereg-Barnea} as considerably
smaller than previously published in the literature~\cite{Tallon},
i.e., $\lambda=1030 \pm 80$\AA$\,$ instead of $\lambda\simeq 1550$\AA.
This would increase the published values~\cite{hosseinietal} of the
microwave conductivity by a factor of four.
Indeed, it turns out that we obtain the best fit to the microwave
conductivity of YBCO when we assume the absolute values of the
conductivity to be roughly twice the previously published
data~\cite{hosseinietal}, which would correspond to a penetration
depth of approx. 1200\AA. Therefore we allow ourselves the freedom to
scale our calculated
curves by roughly a factor of 1/2 when comparing to the
published experimental values (the exact scaling factor is stated in
the figure captions).
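The rescaling follows directly from $\sigma\propto\lambda^{-3}$: relative
to $\lambda\simeq 1550\,$\AA$\,$ underlying the published data, a
penetration depth of $1200\,$\AA$\,$ corresponds to a correction factor
\begin{equation}
\Bigl(\frac{1550\,\mbox{\AA}}{1200\,\mbox{\AA}}\Bigr)^{3}\approx 2.2\,,
\end{equation}
i.e., the experimental conductivities would be roughly twice the published
ones and our calculated curves have to be scaled down accordingly, while
the measured $\lambda=1030\pm 80\,$\AA$\,$ would correspond to
$(1550/1030)^3\approx 3.4$, consistent with the factor of four quoted
above within the experimental uncertainty.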
\begin{figure}[t]
\begin{center}
\leavevmode
\includegraphics[clip=true,width=.9\columnwidth]{FitYBCOisoTest.eps}
\caption{Fit to the experimental spectra of YBCO (reproduced from
Ref.~\onlinecite{hosseinietal}) using pointlike strong scatterers with $V=100t$
and a concentration of $n_i=.000035$.
The magnitude of the conductivity has been scaled by a factor of .42,
which corresponds to assuming a penetration depth of 1200\AA.}
\label{fig:FitYBCOiso}
\end{center}
\end{figure}
As can be seen from Fig.~\ref{fig:FitYBCOiso} the assumption of
pointlike scatterers can reproduce the temperature and frequency
dependence of the microwave conductivity of YBCO quite well
(see Refs.~\onlinecite{HPS,Rieck,SchachingerCarbotte}). The
largest discrepancy arises for low temperatures and low frequencies,
where experimentally a nearly linear increase of the conductivity with
temperature is found whereas the theory based on pointlike scatterers
predicts a quadratic temperature dependence. It has been suggested by Hettler
and Hirschfeld~\cite{Hettler} that the theoretical lineshape becomes
more linear at low temperatures when the suppression of the order
parameter surrounding a strong pointlike scatterer is taken into
account. This low temperature behavior has been attributed to the
formation of a second resonance in the self-energy $\Sigma_0$ at low
frequencies. It is intriguing to note that we find a similar resonance in the
self-energy for slightly extended strong potential scatterers, see
Fig.~\ref{fig:SelfEn3Peaks}, and therefore it is interesting to investigate
whether the presence of slightly extended potential scatterers can also explain
the linear $T$-dependence of the microwave conductivity for low
temperatures. Our best fit to the experimental spectra of YBCO using
slightly extended potential scatterers is displayed in
Fig.~\ref{fig:FitYBCOext}.
Obviously the consideration of extended scatterers considerably
improves the agreement with the experimental data at low
temperatures.
\begin{figure}[t]
\begin{center}
\leavevmode
\includegraphics[clip=true,width=.9\columnwidth]{FitYBCOextTest.eps}
\caption{Fit to the experimental spectra of YBCO (reproduced from
Ref.~\onlinecite{hosseinietal}) using slightly extended strong scatterers with
$V_1=100t$, $V_2=92t$, $V_3=82t$
and a concentration of $n_i=.000015$.
The magnitude of the conductivity has been scaled by
a factor of .425 which corresponds to assuming a penetration depth of 1200\AA.}
\label{fig:FitYBCOext}
\end{center}
\end{figure}
\begin{figure}[b]
\begin{center}
\leavevmode
\includegraphics[clip=true,width=.98\columnwidth]{OptCondStrongImpV2Ftest.eps}
\caption{Microwave conductivity for strong scatterers with varying
degree of forward scattering. Impurity concentration:
$n_i=.000035$, maximum gap: $\Delta_0=.29t$. Left panel:
$\omega=1.14$GHz, right panel: $\omega=13.4$GHz. Inset:
Self-energy for the respective scattering potentials.}
\label{fig:OptCondStrongImpV}
\end{center}
\end{figure}
Surprisingly, the concentration of extended scatterers used for
the fit in Fig.~\ref{fig:FitYBCOext} is even lower than the
concentration of pointlike scatterers we have used for the fit in
Fig.~\ref{fig:FitYBCOiso}. Generally one would assume that due to
the forward scattering character of extended impurities a larger
concentration is necessary to obtain a similar scattering rate as
for pointlike impurities. To gain more insight into this unusual
behavior we show in Fig.~\ref{fig:OptCondStrongImpV} the microwave
conductivity for different extensions of the scattering potential
at two of the experimentally measured frequencies, i.e., 1.14 GHz
and 13.4 GHz. Increasing the forward scattering character of the
impurity potential slightly from $V_1=V_2=V_3=100t$ to $V_1=100t$,
$V_2=90t$ and $V_3=80t$ essentially reduces the height of the peak
in the conductivity at 1.14 GHz and makes the low temperature
increase more linear. For this small deviation from isotropic
scattering, vertex corrections are small and the variation of the
conductivity can be attributed to the formation of a second
resonance in the self-energy at intermediate frequencies, see
inset in Fig.~\ref{fig:OptCondStrongImpV} (see also discussion in
Sec.~\ref{sec:SelfEnergy}). The nearly linear increase of the
self-energy below the second resonance causes the more linear
$T$-dependence of the conductivity at low temperatures. Only when
the forward scattering character of the impurity potential is
further enhanced do the vertex corrections begin to outweigh the
growing self-energy and the conductivity rises above the values
obtained for isotropic scattering. For these more extended
scattering potentials the second resonance in the self-energy
moves to lower frequencies until it merges with the first resonance.
For the larger microwave frequency of 13.4 GHz, see right panel of
Fig.~\ref{fig:OptCondStrongImpV}, the anisotropy of the impurity
potential has less effect. This implies that for slightly extended
impurity potentials as considered for YBCO in
Fig.~\ref{fig:FitYBCOext} the frequency dependence becomes much
weaker and therefore a smaller concentration than in the case of
pointlike impurities is necessary to reproduce the experimentally
observed frequency dependence.
We emphasize that this analysis cannot rule out other
explanations\cite{Hettler,DurstLeetwins} of the
quasilinear in $T$ behavior observed at low frequencies.
\section{Comparison with experimental spectra of BSCCO}
\label{sec:BSCCO}
In this section we want to explore what can be learned about the
type of disorder contained in the BSCCO compounds by analyzing its
microwave conductivity, which has been measured by S.-F. Lee {\it et
al.}~\cite{leeetal}. The temperature and frequency dependence of
the microwave conductivity in BSCCO (see symbols in
Fig.~\ref{fig:BSCCOExp1}) is quite different from that in YBCO. The
absolute value of the microwave conductivity is smaller by almost
a factor of 10 indicating that BSCCO is a dirtier compound than
YBCO. This agrees well with the observation that the conductivity
of BSCCO changes noticeably only in the
THz-regime~\cite{orenstein1}, i.e., at much higher frequencies
than in YBCO. The characteristic peak in the microwave
conductivity, however, occurs at lower temperatures than in the
cleaner system YBCO contrary to predictions for strong pointlike
scatterers~\cite{HPS}. Furthermore this peak is much less
pronounced and resembles more a plateau, suggestive of the weak
scattering limit~\cite{HPS}. Finally, the conductivity apparently
does not approach the universal value $\sigma_0$ for the lowest
temperatures and frequencies measured. This might indicate the
presence of extended scatterers, which enhance the
zero-temperature and zero-frequency limit of the conductivity as
has been shown by Durst and Lee~\cite{DurstLee}.
\begin{figure}[h]
\begin{center}
\leavevmode
\begin{minipage}{.49\columnwidth}
\includegraphics[clip=true,width=.99\textwidth]{BSCCOiso.eps}
\end{minipage}
\begin{minipage}{.49\columnwidth}
\includegraphics[clip=true,width=.99\textwidth]{BSCCOExtWeak.eps}
\end{minipage}\\
\caption{Fit to the experimental microwave conductivity of BSCCO
(reproduced from Ref.~\onlinecite{leeetal}) using weak scatterers: (a)
pointlike scatterers with $V/t=1$ and $n=.049$, (b) weak extended
scatterers with $V_1/t$=2, $V_2/t$=.8, $V_3/t$=.4 and $n=.145$.
(The inelastic scattering rate has been increased by a factor of 2.6 in (a)
and 3 in (b) with respect to Ref.~\onlinecite{DSH01}).}
\label{fig:BSCCOExp1}
\end{center}
\end{figure}
In order to check the applicability of these scenarios for BSCCO we compare
in Fig.~\ref{fig:BSCCOExp1} respective fits to the microwave
conductivity using (i) only pointlike weak scatterers
(Fig.~\ref{fig:BSCCOExp1}(a)) and (ii) only extended weak scatterers
(Fig.~\ref{fig:BSCCOExp1}(b)). For the inelastic scattering rate
$\tau^{-1}_{\rm inel}(T)$ we assume the same temperature dependence as for
YBCO~\cite{DSH01} but we allow for a different prefactor in order to
account for discrepancies between YBCO and BSCCO (the exact prefactor is
stated in the figure captions).
Obviously both disorder models (i) and (ii) result in very good
fits of the experimental microwave conductivity of BSCCO. The main
difference is that the conductivity for isotropic scatterers
approaches the universal value $\sigma_0$ for $T \to 0$ whereas the conductivity
for extended scatterers (Fig.~\ref{fig:BSCCOExp1}(b)) with the
potential parameters $V_1/t=2$, $V_2/t=0.8$ and $V_3/t=0.4$ approaches
an enhanced value.
Unfortunately, it is not possible to distinguish between these two different
scenarios from the microwave conductivity alone, because there is no
experimental data available below $T=5$K.
\begin{figure}[b]
\begin{center}
\leavevmode
\begin{minipage}{.49\columnwidth}
\includegraphics[clip=true,width=.99\textwidth]{FreqIso.eps}
\end{minipage}
\begin{minipage}{.49\columnwidth}
\includegraphics[clip=true,width=.99\textwidth]{FreqExtWeak.eps}
\end{minipage}\\
\caption{Higher frequency conductivity for the disorder models of
Fig.~\ref{fig:BSCCOExp1}: (a)
pointlike scatterers with $V/t=1$ and $n=.049$, (b) weak extended
scatterers with $V_1/t$=2, $V_2/t$=.8, $V_3/t$=.4 and $n=.145$.}
\label{fig:HighFreq}
\end{center}
\end{figure}
Further insight could be gained by comparing the frequency
dependence of these two models. Impurity scattering alone would
predict a different frequency dependence for isotropic and
extended scatterers, as illustrated in Fig.~\ref{fig:HighFreq}.
Whereas the magnitude of the conductivity remains rather large at
low temperatures in the case of extended scatterers even for high
frequencies, it almost vanishes in the case of pointlike
impurities. Due to the large impurity concentration, however, this
frequency dependence is most pronounced in the THz-regime, as
observed experimentally~\cite{orenstein1}, where the contribution
of inelastic scattering cannot be neglected. Thus, the THz-data
present not only a probe of elastic impurity scattering but also
of inelastic scattering processes and are therefore not directly suitable
to distinguish between pointlike and extended impurities.
The only way we can proceed now is to try to exclude one of
the two models indirectly by analyzing an additional
observable. Thus we will argue in the following that disorder
model (i) containing 4.9\% weak isotropic scatterers with a scattering
strength of $V$=1$t$ would yield an unrealistically large normal state
scattering rate.
Assuming a normal state density of states of $1/8t$ yields
a normal state scattering rate of $\tau^{-1} \approx 0.6 T_c$ for $t$=120meV.
According to the Abrikosov-Gorkov
scaling law this would reduce $T_c$ by 25\%, which we
consider as an unreasonably large suppression because
$T_c \approx 93$K in the samples used for microwave conductivity in
Ref.~\onlinecite{leeetal}, which is close to the highest values of $T_c$
measured for the BSCCO-compounds.
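These numbers can be checked explicitly. In Born approximation the
normal-state scattering rate is $\tau^{-1}=2\pi n_i V^2 N_0$, so that
$N_0=1/8t$, $n_i=0.049$ and $V=t$ give
\begin{equation}
\tau^{-1}=2\pi\,(0.049)\,\frac{t}{8}\approx 0.039\,t\approx 4.6\,{\rm meV}
\approx 0.6\,T_c
\end{equation}
for $t=120\,$meV and $T_c\approx 93\,{\rm K}\approx 8\,$meV. The initial
slope of the Abrikosov-Gorkov suppression,
$\delta T_c/T_{c0}\approx \pi/(8 T_{c0}\tau)$, then yields
$\delta T_c/T_{c0}\approx \pi\,(0.6)/8\approx 0.24$, i.e., the 25\%
quoted above.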
Extended impurities, on the other hand, act mainly as small angle
scatterers and affect $T_c$ much less than isotropic
scatterers~\cite{Kee,otherTc}.
This allows us to assess model (i),
which consists only of weak pointlike scatterers, as very unlikely
and to conclude that at least a large fraction of the disorder in
the BSCCO compounds should be attributed to extended scatterers.
This could be confirmed by experiments on crystals at lower $T$.
So far we have focused on the effect of weak scatterers which we
attribute to the out-of-plane disorder introduced by doping.
Defects within the CuO$_2$-planes, like Zn-substitution for Cu, on
the other hand, are generally considered to act as strong
pointlike scatterers. These strong scattering defects have been
observed in STM-experiments~\cite{Hudson} on BSCCO-compounds and are
often assumed to be the main source of disorder in the
YBCO-compounds although their concentration is very low.
We therefore now address the question of whether our model for the
microwave conductivity in BSCCO is compatible with an additional
small concentration of strong pointlike impurities, which are most
likely also present in the compound used for measurements of the
microwave conductivity of BSCCO in Ref.~\onlinecite{leeetal}.
We incorporate this realistic disorder model, which consists of weak
extended and strong pointlike scatterers, by calculating the diagrams depicted in
Fig.~\ref{fig:Model2Imp}.
The weak extended scatterers are treated in
Born approximation and the strong pointlike impurities in T-matrix
approximation, where as before we consider only the $\Sigma_0$-component
of the self-energy, i.e., only diagrams of the T-matrix with an even
number of impurity lines. Because vertex corrections vanish for
pointlike scatterers, only the weak extended scatterers contribute
and the vertex corrections can be calculated in Born
approximation. Thus, the microwave conductivity $\sigma(\Omega)$
is still given by Eq.~(\ref{eq:Conductivity}) with the bubble $J(\omega,\Omega)$
as in Eq.~(\ref{eq:J}) and the vertex function
$\gamma^{\rm Born}$ given in Eq.~(\ref{eq:VertexBorn}).
Only the self-energy $\Sigma_0$ has to be calculated
self-consistently as the sum of Eq.~(\ref{eq:SelfEnIso}),
Eq.~(\ref{eq:SelfEnBorn}) and the inelastic scattering rate
\begin{eqnarray}
\Sigma_0(\omega,T) &=& -\frac{n_s}{4I_G(\omega)}
\Bigl( 1-\frac{1}{1-16 I_G^2(\omega) V_s^2} \Bigr) \\
&& - n_w (V_1^2+2V_2^2+V_3^2) I_G(\omega) -i (2 \tau_{\rm inel}(T))^{-1}. \nonumber
\end{eqnarray}
Here, $n_s$ is the concentration of strong pointlike impurities with a
scattering potential $V_s$, $n_w$ is the concentration of weak
extended scatterers characterized by the potential parameters $V_1$,
$V_2$, $V_3$ and $I_G(\omega)$ is the momentum integrated single
particle Green's function given by Eq.~(\ref{eq:SingleParticleGreen}).
\begin{figure}[t]
\begin{center}
\leavevmode
\includegraphics[clip=true,width=.9\columnwidth]{SelfEn2ImpD.eps}
\\[.5cm]
\includegraphics[clip=true,width=.9\columnwidth]{OptCond2ImpD.eps}
\caption{Diagrams for the self-energy $\Sigma$ and
microwave conductivity $\sigma$ considered in the realistic disorder
model for BSCCO. Circles denote the pointlike strong impurities, which
are treated in T-matrix approximation. Crosses denote the weak
extended scatterers, which are treated in Born approximation. Only the
extended, i.e., the weak scatterers contribute to the vertex corrections.}
\label{fig:Model2Imp}
\end{center}
\end{figure}
\begin{figure}[b]
\begin{center}
\leavevmode
\begin{minipage}{.49\columnwidth}
\includegraphics[clip=true,width=.99\textwidth]{AddStrong.eps}
\end{minipage}
\begin{minipage}{.49\columnwidth}
\includegraphics[clip=true,width=.99\textwidth]{AddStrongForward.eps}
\end{minipage}\\
\caption{Effect of adding strong pointlike scatterers for the
microwave conductivity at 14.4 GHz. (a) Weak extended
scatterers with parameters of Fig.~\ref{fig:BSCCOExp1}(b), i.e.,
$n_w=0.145$ and $V_1/t=2$, $V_2/t=0.8$, $V_3/t=0.4$, and
additional strong pointlike scatterers with $V/t=100$ and different
concentrations $n_s$. (b) Fixed concentration $n_s=0.001$
of strong pointlike impurities for varying concentration
and forward scattering potential of the weak scatterers.}
\label{fig:WeakExtStrong}
\end{center}
\end{figure}
The effect of adding a small concentration of strong pointlike
scatterers to the extended weak scatterers used in
Fig.~\ref{fig:BSCCOExp1}(b) is illustrated in
Fig.~\ref{fig:WeakExtStrong}. Additional strong pointlike scatterers
mainly reduce the conductivity, as can be seen in
Fig.~\ref{fig:WeakExtStrong}(a). In order to raise the conductivity
to its previous values the forward scattering character of the weak
impurities must therefore be enhanced.
On the other hand, this increases the
difference between the zero-temperature value and the maximum value of the
conductivity (see dashed line in Fig.~\ref{fig:WeakExtStrong}(b))
and necessitates a larger concentration of weak
extended scatterers. Finally, the lineshape of the conductivity
(solid line in Fig.~\ref{fig:WeakExtStrong}(b)) looks
similar to the one without strong pointlike impurities
but it is flatter at low temperatures than before. This poses
an upper limit for the concentration of strong scatterers compatible
with the experimental microwave conductivity of BSCCO.
A fit to the experimental microwave conductivity of BSCCO containing
0.05\% pointlike strong scatterers and 10\% weak extended scatterers is
shown in Fig.~\ref{fig:BSCCOExp2}. This is about the
largest concentration of pointlike strong scatterers still compatible
with the microwave conductivity. This concentration is
smaller than the 0.2\% observed in STM-experiments~\cite{Hudson} on BSCCO but it is very
plausible that the number of in-plane defects varies for different
compounds and maybe even between bulk and surface.
\begin{figure}[t]
\begin{center}
\leavevmode
\includegraphics[clip=true,width=.85\columnwidth]{BSCCOWeakStrong.eps}
\caption{Fit to the experimental microwave conductivity of BSCCO
(reproduced from Ref.~\onlinecite{leeetal}) using $n_w=.1$ weak extended
scatterers with $V_1/t$=3, $V_2/t$=.9, $V_3/t$=.5 and $n_s=.0005$
strong pointlike scatterers with $V_s/t=100$.
(The inelastic scattering rate has been increased by a factor of 3.4
with respect to Ref.~\onlinecite{DSH01}.)
\label{fig:BSCCOExp2}
\end{center}
\end{figure}
\section{Conclusions}
In summary, we have investigated the influence of extended scatterers
on the microwave conductivity of $d$-wave superconductors
by extending the approach of Durst and Lee~\cite{DurstLee}, which is
based on a nodal approximation for the quasiparticle spectrum and the
impurity potential, to finite temperatures and frequencies.
We have slightly modified the approach of Durst and Lee by considering
only vertex functions which contain an even number of impurity lines.
This modification is necessary to ensure analyticity at finite temperatures and
frequencies when only the normal self-energy component $\Sigma_0$ is
considered.
The effect of extended scatterers on the temperature and frequency
dependence of the microwave conductivity can be summarized as
follows: for a small concentration of slightly extended strong
scatterers a second resonance forms in the self-energy at
intermediate frequencies similar to treatments which consider the
suppression of the order parameter surrounding a strong pointlike
impurity~\cite{Hettler}. This results in a more linear temperature
dependence of the conductivity at low temperatures and therefore
improves the agreement with experimental spectra of YBCO at low
temperatures.
For more extended scatterers the vertex corrections begin to
dominate over the self-energy and the magnitude of the
conductivity increases.
The microwave conductivity of BSCCO is very different from that of YBCO and
cannot be understood in terms of strongly scattering pointlike impurities alone. We find
that a large concentration of weak extended scatterers is necessary to
explain the observed temperature and frequency dependence of the
microwave conductivity in BSCCO, where: (i) the impurity concentration
has to be large to explain the small magnitude of the
conductivity and the negligible frequency dependence in the microwave
range, (ii) the scattering strength has to be small to account for
the plateau-like lineshape of the conductivity at small temperatures
and (iii) the range of the scattering potential has to be spatially extended
in order to keep the $T_c$-suppression reasonably small~\cite{Kee}.
Finally, we have shown that adding a small concentration of
pointlike strong scatterers, which have been observed in
STM-experiments~\cite{Hudson}, to the weak extended scattering
potential, which we attribute to the out-of-plane disorder
introduced by doping, is still compatible with the microwave
conductivity of BSCCO. Although it would be necessary to refine
our treatment of inelastic scattering by accounting for its
frequency dependence and its contribution to vertex corrections in
order to address the conductivity in the
THz-range~\cite{orenstein1}, it is interesting to note that
elastic scattering alone would predict a very different frequency
dependence for pointlike and extended scatterers.
{\it Acknowledgments.} This work was supported by a Feodor-Lynen
Fellowship from the A. v. Humboldt Foundation (TSN) and ONR grant
N00014-04-0060 (PJH). The authors are grateful to D.A. Bonn, R. Harris,
I. Bosovic, L.-Y. Zhu, D.~J.~Scalapino and P.~W\"olfle for
stimulating conversations.
\section{Introduction}
Observations made by the SUMER instrument on board SoHO confirm that longitudinal standing (slow) magneto--acoustic waves suffer rapid damping \citep{Kliem02,Wang02,Wang03b}. Modelling of the oscillations observed by SUMER has been carried out by \cite{Taroyan07}, where excitation and damping mechanisms are investigated and it is shown how standing and propagating slow MHD waves can be differentiated from each other. The cause of the fast damping is still obscure and attracts remarkable attention aimed at identifying the dominant damping mechanism. The most frequently proposed mechanisms for damping the standing longitudinal oscillations are thermal conduction, radiation and viscosity \citep{Ofmanwang, Moortel03,Mendoza04,Taroyan05,Sigalotti07,Al-Ghafri12,Al-Ghafri14}. Radiation is found to dominate the damping of cool coronal loops whereas hot loops are damped by thermal conduction. For a recent review of standing slow waves in coronal loops see \cite{Wang11}.
Plasma cooling has been detected everywhere in the solar atmosphere \citep{viall12,viall13}. In the absence of coronal heating, the plasma starts to cool by radiation and thermal conduction \citep{Klimchuk12}. For example, \cite{Aschwanden08} have shown that coronal loops cool with a characteristic cooling time of the order of a few oscillation periods. \citet{Morton09b,Morton09c,Ruderman11a,Ruderman11b} investigated the effect of cooling on kink oscillations of coronal loops and found that the kink oscillations are amplified by the cooling. Further to this, propagating slow MHD waves in a homogeneous, radiatively cooling plasma were studied by \cite{Morton10a}; plasma cooling is reported to cause a strong damping of longitudinal slow MHD waves. Recently, \cite{Erdelyi11} investigated the behaviour of longitudinal magneto--acoustic oscillations in a slowly varying coronal plasma. In particular, the damping rate is found to be reduced by the presence of cooling.
\cite{Al-Ghafri12} and \cite{Al-Ghafri14} studied the effect of cooling on standing slow MHD waves in hot coronal loops that are damped by thermal conduction. In those studies the plasma cooling is attributed to an unspecified thermodynamic source. The individual effect of cooling is found to amplify the amplitude of the loop oscillations. Although the damping rate caused by thermal conduction increases at first, it gradually decreases in a hot corona because of the plasma cooling. As thermal conduction approaches infinity, the damping rate tends to zero, and slow magnetosonic waves propagate almost without damping at the slower, isothermal sound speed.
The present article discusses the combined effects of radiative cooling and thermal conduction on the damping of longitudinal standing MHD waves. A simple functional form for the radiation mechanism is chosen to ensure that the plasma temperature decreases exponentially with time, in accordance with observations. The cooling is assumed to be weak, with a characteristic cooling time scale much larger than the oscillation period. The paper is structured as follows. In Section~(\ref{the model}) we present our model and derive the main governing equation with boundary conditions. The analytical results are obtained with the aid of the WKB theory in Section~(\ref{analytic sol}). In Section~(\ref{numerical}) the individual and combined effects of radiation and thermal conduction are studied by evaluating the analytical solution numerically. Our discussions and conclusions are presented in Section~(\ref{conclusion}).
\section{The Model and Governing Equations}\label{the model}
We model a straight coronal loop in which the magnetic field is uniform and in the {\it z}-direction, {\it i.e.} $\mathbf{B}_0=B_0\mathbf{\hat z}$. The magnetic loop is of length $L$ with its ends at $z=\pm L/2$. The background plasma temperature (and hence pressure) is assumed to change as a function of time due to radiative cooling, while the density is constant.
The governing MHD equations for the background plasma take the following form
\begin{eqnarray}
&&\frac{\partial{\rho}}{\partial{t}}+\nabla.(\rho\mathbf{v}) =0,\label{Eq:cont}\\
&&\rho\frac{\partial{\mathbf{v}}}{\partial{t}}+\rho(\mathbf{v}.\nabla)\mathbf{v}=-\nabla{p}+\frac{1}{\mu_0}(\nabla\times\mathbf{B})\times\mathbf{B},\\
&&\frac{R}{\tilde\mu}\frac{\rho^\gamma}{(\gamma-1)}\left[\frac{\partial{}}{\partial{t}}\frac{T}{\rho^{\gamma-1}}
+(\mathbf{v}.\nabla)\frac{T}{\rho^{\gamma-1}}\right]=\nabla(\kappa_{\|}\nabla{T})-\rho^2Q(T) ,\\
&&{p}=\frac{R}{\tilde\mu}\rho{T},\label{Eq:gas-law}\\
&&\frac{\partial{\mathbf{B}}}{\partial{t}}=\nabla\times(\mathbf{v}\times\mathbf{B}).\label{Eq:induction}
\end{eqnarray}
Here, $\mathbf{v}$, $\mathbf{B}$, $\rho$, $p$ and $T$ represent the flow velocity, magnetic field, density, pressure and temperature respectively; $\mu_0$ is the magnetic permeability of free space, $R$ is the gas constant, $\tilde{\mu}$ is the mean molecular weight and $\gamma$ is the ratio of specific heats. The thermal conduction term is $\nabla(\kappa_{\|}\nabla{T})$ where $\kappa_{\|}=\kappa_0T^{5/2}$ and $\rho^2Q(T)$ is the general radiation term for optically thin losses.
\bigskip
Note that the radiative loss function is usually approximated by a piecewise continuous function \citep{Rosner78,Priest}, but it is assumed here to take a simple form to match the observed cooling, as described in the following.
Observations show that radiatively cooling coronal loops cool exponentially \citep{Aschwanden08,Morton09b,Morton09c} and the cooling profile has the form
\begin{equation}
T=T_{0i}\exp(-\frac{t}{\tau_c}),\label{cooling_time}
\end{equation}
where $T_{0i}$ is the initial temperature at $t=0$ and $\tau_c$ is the cooling time scale.
In order to match the observed exponential cooling of the background plasma, \cite{Erdelyi11} assigned thermal conduction as the essential cause of cooling in their model. Further to this, an unspecified thermodynamic source for the plasma cooling was suggested by \cite{Al-Ghafri12} and \cite{Al-Ghafri14} to investigate longitudinal MHD waves in a dissipative time-dependent plasma. Moreover, \cite{Morton10a} assumed that the plasma cools radiatively and that the radiative loss function has the form $\delta p$, i.e. the loss term follows Newtonian cooling. Hence, we assume here that the radiation term $\rho^2Q(T)\sim\delta p$. The background plasma state with no background flow therefore satisfies the equations
\begin{eqnarray}
&&v_0=0,\;\rho_0=\textrm{const}.,\;\Longrightarrow\; \nabla p_0=0,\label{Eq:back-motion}\\
&&p_0=\frac{R}{\tilde{\mu}}\rho_0T_0,\qquad {\it i.e.}\quad p_0\sim T_0,\label{Eq:back-gas_law}\\
&&\frac{R}{\tilde\mu}\rho_0\frac{\mathrm{d}{T_0}}{\mathrm{d}{t}}
=-\delta p_0, \;\Longrightarrow\; \frac{\mathrm{d}{T_0}}{\mathrm{d}{t}}=-\delta T_0,\label{Eq:back-energy}
\end{eqnarray}
where the index 0 denotes a background quantity and $\delta$ is a small quantity. Equation~(\ref{Eq:back-energy}) has the solution
\begin{equation}
T_0=T_{0i}\exp(-\delta t).\label{Eq:tem_decay}
\end{equation}
Comparing Equation~(\ref{Eq:tem_decay}) with Equation~(\ref{cooling_time}) gives $\delta=1/\tau_c$, which justifies taking $\delta$ to be small since the observed cooling times are $500\,\rm{s}<\tau_c<2000\,\rm{s}$ \citep{Morton10a}.
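The size of the corresponding dimensionless parameter $\epsilon=P\delta=P/\tau_c$ can be checked with a short numerical sketch. The loop length and temperature below are assumed typical values (not taken from a specific observation); for the longer observed cooling times $\epsilon$ is indeed a small parameter of a few tenths, while the shortest cooling times give $\epsilon$ of order unity, so the weak-cooling assumption is best suited to the slower events.

```python
import math

# Assumed typical coronal values (illustrative only).
gamma, mu_t, R = 5.0 / 3.0, 0.6, 8.3e3   # ratio of specific heats, mean
                                          # molecular weight, gas constant
L = 1.0e8                                 # loop length: 100 Mm
T0i = 1.0e6                               # initial temperature: 1 MK

c_si = math.sqrt(gamma * R * T0i / mu_t)  # initial sound speed
P = L / c_si                              # time unit P = L / c_si

# Dimensionless cooling parameter eps = P * delta = P / tau_c for the
# observed range of cooling times 500 s < tau_c < 2000 s.
eps_range = [P / tau_c for tau_c in (2000.0, 500.0)]
print(f"P = {P:.0f} s, eps in [{eps_range[0]:.2f}, {eps_range[1]:.2f}]")
```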
Now, we linearise the governing Equations~($\ref{Eq:cont}-\ref{Eq:induction}$) about the background state by writing all the variables in the form
$$
f(z,t)=f_0(t)+f_1(z,t),
$$
and neglecting nonlinear terms, where the subscript 1 indicates the perturbed quantities. Thus, the linearised MHD equations for the longitudinal motion can be reduced to
\begin{equation}
\frac{\partial^2{v}_{1}}{\partial t^2}-\frac{\gamma p_0}{\rho_0}\frac{\partial^2{v}_{1}}{\partial z^2} =-(\gamma-1)\frac{\kappa_0 T_0^{5/2}}{\rho_0}\frac{\partial^3T_1}{\partial z^3}-\delta\frac{\partial v_1}{\partial t}.\label{Eq:energy}
\end{equation}
It is more convenient to use non-dimensional variables to solve the governing Equation~(\ref{Eq:energy}). Hence, we introduce the dimensionless quantities
\begin{equation}
\tilde{t}=\frac{t}{P},\quad \tilde{z}=\frac{z}{L},\quad
\tilde{c}_s = \sqrt{\frac{T_0}{T_{0i}}}, \quad
\tilde{v}_{1} = \frac{{v}_{1}}{c_{si}}, \quad
\tilde{T}_{1}=\frac{T_{1}}{T_{0i}}, \quad
c_{si}^2 = \frac{\gamma RT_{0i}}{\tilde{\mu}},
\label{dimensionless}
\end{equation}
where $P$ is the period of the loop oscillation, $\tilde{c}_s$ is the dimensionless sound speed, $c_{si}$ is the initial sound speed, and we put $P = L/c_{si}$\/. In what follows we drop the tilde.
Applying the non-dimensionalisation to Equation~(\ref{Eq:energy}), we arrive at
\begin{equation}
\frac{\partial^2{v}_{1}}{\partial t^2}-c_s^2\frac{\partial^2{v}_{1}}{\partial z^2} =-\frac{\sigma}{\gamma}c_s^{5}\frac{\partial^3T_1}{\partial z^3}-\epsilon\frac{\partial v_1}{\partial t}.\label{Eq:energy_dim-less}
\end{equation}
Here, the dimensionless constants $\sigma$ and $\epsilon$ represent the strength of thermal conduction and radiation and are defined by
\begin{equation}
\sigma=\frac{(\gamma-1)\tilde{\mu}\kappa_0\,T^{5/2}_{0i}}
{RL\sqrt{\gamma\, p_{0i}\,\rho_0}}, \qquad \epsilon=P\delta.
\end{equation}
Both quantities $\sigma$ and $\epsilon$ are found to be small under standard coronal conditions in which $\gamma = 5/3$, $\tilde{\mu} \approx 0.6$, $R = 8.3\times10^3\,\textmd{m}^2\,\textmd{s}^{-2}\,\textmd{K}^{-1}$, and
$\kappa_0 \approx 10^{-11}\,\textmd{W}\,\textmd{m}^{-1}\,\textmd{K}^{-7/2}$\/. If we take $L = 100\;\textmd{Mm}$ and $T_{0i} = 0.6 - 5\:\textmd{MK}$ as typical coronal values then we obtain $0.0068 \lesssim \sigma \lesssim 0.48$.
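The quoted $\sigma$ range can be reproduced with a short sketch. Here $\kappa_0$ is interpreted as the Spitzer coefficient in SI units ($\mathrm{W\,m^{-1}\,K^{-7/2}}$, which makes $\sigma$ dimensionless), and the coronal density $\rho_0$ is an assumed typical value, since it is not quoted explicitly in the text.

```python
import math

gamma, mu_t, R = 5.0 / 3.0, 0.6, 8.3e3
kappa0 = 1.0e-11          # Spitzer coefficient [W m^-1 K^-7/2]
L = 1.0e8                 # loop length: 100 Mm
rho0 = 1.67e-12           # assumed density [kg m^-3] (n_e ~ 1e15 m^-3)

def sigma(T0i):
    """Dimensionless thermal-conduction parameter.  Using the gas law
    p0i = R rho0 T0i / mu_t, the denominator simplifies via
    sqrt(gamma p0i rho0) = rho0 c_si."""
    c_si = math.sqrt(gamma * R * T0i / mu_t)
    return (gamma - 1.0) * mu_t * kappa0 * T0i**2.5 / (R * L * rho0 * c_si)

for T in (0.6e6, 1.0e6, 3.0e6, 5.0e6):
    print(f"T0i = {T/1e6:.1f} MK -> sigma = {sigma(T):.4f}")
```

With this assumed density the four quoted values $\sigma=0.0068$, $0.019$, $0.17$ and $0.48$ are recovered to within a few per cent.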
Now, the governing Equation~(\ref{Eq:energy_dim-less}) can be written in the form of wave-like equation as
\begin{equation}
\frac{\partial}{\partial t}\left[\frac{1}{c^7_s}\left(\frac{\partial^2{v}_{1}}{\partial t^2}-c_s^2\frac{\partial^2{v}_{1}}{\partial z^2}\right)\right] = \frac{\sigma}{\gamma}\frac{\partial^2}{\partial z^2} \left[\gamma\frac{\partial}{\partial t}\left(\frac{1}{c_s^2}\frac{\partial v_1}{\partial t}\right)-\frac{\partial^2v_1}{\partial z^2}\right] - \epsilon\frac{\partial}{\partial t} \left(\frac{1}{c_s^7}\frac{\partial v_1}{\partial t}\right).\label{Eq:governing}
\end{equation}
Because we are interested in investigating the damping of standing waves, it is necessary to introduce boundary conditions at $z=\pm1/2$. As the loop ends are embedded in the dense photosphere, it is appropriate to assume that the perturbed velocity vanishes at these ends,
\begin{equation}
v_1 = 0 \quad \mbox{at} \quad z = \pm 1/2 .
\label{bound_v1}
\end{equation}
In the case $\epsilon=\sigma=0$ ({\it i.e.} in the absence of radiative cooling and thermal conduction), Equation~(\ref{Eq:governing}) reduces to a simple wave equation with constant sound speed (see \opencite{Al-Ghafri12}). The effect of cooling and thermal conduction on longitudinal slow MHD waves is investigated in the next sections.
\section{Analytical Solution}\label{analytic sol}
Now, we aim to solve the governing Equation~(\ref{Eq:governing}) using the WKB method. Thus, we introduce the slow time $t_1=\epsilon t$ and the rescaled conduction coefficient $\sigma=\epsilon\tilde{\sigma}$, so that Equation~(\ref{Eq:governing}) becomes
\begin{equation}
\frac{\partial}{\partial t_1}\left[\frac{1}{c^7_s}\left(\epsilon^2\frac{\partial^2{v}_{1}}{\partial t_1^2}-c_s^2\frac{\partial^2{v}_{1}}{\partial z^2}\right)\right] = \frac{\tilde{\sigma}}{\gamma}\frac{\partial^2}{\partial z^2} \left[\gamma\epsilon^2\frac{\partial}{\partial t_1}\left(\frac{1}{c_s^2}\frac{\partial v_1}{\partial t_1}\right)-\frac{\partial^2v_1}{\partial z^2}\right] - \epsilon^2\frac{\partial}{\partial t_1} \left(\frac{1}{c_s^7}\frac{\partial v_1}{\partial t_1}\right).\label{Eq:governing-slowVariable}
\end{equation}
Then, the WKB theory implies that the solution to Equation~(\ref{Eq:governing-slowVariable}) has the form
\begin{equation}
v_1(z,t_1) = Q(z,t_1)\exp\left(\mathrm{i}\epsilon^{-1}\Theta(t_1)\right),
\label{Eq:wkb1}
\end{equation}
where the function $Q$ is expanded in a power series with respect to $\epsilon$\/, i.e.,
\begin{equation}
Q = Q_0 + \epsilon\, Q_1 + \dots .
\label{power-series}
\end{equation}
Substituting Equations~(\ref{Eq:wkb1}) and (\ref{power-series}) into Equation~(\ref{Eq:governing-slowVariable}) and taking the largest order terms in $\epsilon$ (order of $\epsilon^{-3}$) we obtain
\begin{equation}
\frac{\partial^2Q_0}{\partial z^2}+\frac{\omega^2}{c_s^2}Q_0=0,\label{Eq:highest-order}
\end{equation}
where $\omega=\mathrm{d}\Theta/\mathrm{d} t_1$ and $Q_0$ is subject to the boundary conditions
\begin{equation}
Q_0=0 \qquad \textrm{at}\quad z=\pm\frac{1}{2},\label{Eq:condition-highest-order}
\end{equation}
according to Equation~(\ref{bound_v1}).
The general solution to the boundary-value problem, Equations~(\ref{Eq:highest-order}) and (\ref{Eq:condition-highest-order}), can be given by the Fourier series in the form
\begin{equation}
Q_0(z,t_1)=\sum_{n=0}^{\infty} A_n(t_1)\cos\left((2n+1)\pi z\right), \qquad \omega_n=(2n+1)\pi c_s,\;n=0,1,2,\cdots.\label{Eq:Standing_radiation}
\end{equation}
In this study, we are only interested in the fundamental longitudinal mode corresponding to $n=0$. Thus, Equation~(\ref{Eq:Standing_radiation}) reduces to
\begin{equation}
Q_0(z,t_1)=A(t_1)\cos\left(\pi z\right), \qquad \omega=\pi c_s,\label{Eq:Standing_radiation-fundamental}
\end{equation}
where $A=A_0$ and $\omega=\omega_0$. The function $A(t_1)$ is the amplitude of the standing oscillation. In order to find $A(t_1)$
we collect terms of the order of $\epsilon^{-2}$. Then, we obtain the equation
\begin{equation}
\frac{\partial^2Q_1}{\partial z^2}+\frac{\omega^2}{c_s^2}Q_1=\frac{\mathrm{i}}{c_s^2}\left[\left(\frac{9}{2}\omega+3\frac{\mathrm{d}\omega}{\mathrm{d} t_1}+ \tilde{\sigma}\omega^3 c_s^3 \right)Q_0+3\omega\frac{\partial Q_0}{\partial t_1} +\frac{5}{2}\frac{c^2_s}{\omega}\frac{\partial^2Q_0}{\partial z^2}+\frac{c^2_s}{\omega}\frac{\partial^3Q_0}{\partial t_1\partial z^2}- \frac{\tilde{\sigma}}{\gamma}\frac{c^7_s}{\omega}\frac{\partial^4Q_0}{\partial z^4} \right],\label{Eq:next-order}
\end{equation}
where the function $Q_1$ satisfies the boundary conditions
\begin{equation}
Q_1=0 \qquad \textrm{at}\quad z=\pm\frac{1}{2}.\label{Eq:condition-second-order}
\end{equation}
The Sturm--Liouville problem, Equations~(\ref{Eq:next-order}) and (\ref{Eq:condition-second-order}), has a non-trivial solution when the right-hand side of Equation~(\ref{Eq:next-order}) satisfies the compatibility condition. To obtain the compatibility condition we multiply Equation~($\ref{Eq:next-order}$) by $Q_0$, integrate with respect to $z$ over $[-1/2, 1/2]$ and use integration by parts. Hence, we arrive at
\begin{equation}
\int_{-1/2}^{1/2}\frac{\mathrm{i}}{c_s^2}\left[\left(\frac{9}{2}\omega+3\frac{\mathrm{d}\omega}{\mathrm{d} t_1}+ \tilde{\sigma}\omega^3 c_s^3\right)Q_0^2+3\omega\, Q_0\frac{\partial Q_0}{\partial t_1} +\frac{5}{2}\frac{c^2_s}{\omega}Q_0\frac{\partial^2Q_0}{\partial z^2}+\frac{c^2_s}{\omega}Q_0\frac{\partial^3Q_0}{\partial t_1\partial z^2}- \frac{\tilde{\sigma}}{\gamma}\frac{c^7_s}{\omega}Q_0\frac{\partial^4Q_0}{\partial z^4}\right]\mathrm{d} z=0.
\label{Eq:cooling-compatibility_cond}
\end{equation}
The solution to Equation~(\ref{Eq:cooling-compatibility_cond}), with the aid of Equation~(\ref{Eq:Standing_radiation-fundamental}), gives the amplitude of the standing wave in the form
\begin{equation}
A(t_1)=A(0)\exp\left(-\frac{1}{4}t_1+\frac{\tilde{\sigma}}{5}\left(\frac{\gamma-1}{\gamma}\right)\pi^2\,[c_s^5(t_1)-1]\right).
\end{equation}
In terms of the original variables this equation can be written as
\begin{equation}
A(t)=A(0)\exp\left(\frac{-\epsilon}{4}t+\frac{\sigma}{5\epsilon}\left(\frac{\gamma-1}{\gamma}\right)\pi^2\,[c_s^5(t)-1]\right),\label{Eq:wave-amplitude}
\end{equation}
where $c_s^5(t)=\exp(-5\epsilon t/2)$. In the limit $\epsilon\to0$ the quantity $\epsilon$ in the denominator of the second term in the exponent can be removed by using a Taylor expansion for the sound speed.
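As a quick numerical check (a sketch, with parameter values chosen for illustration only), the full amplitude expression can be evaluated and compared against its conduction-only Taylor limit:

```python
import math

GAMMA = 5.0 / 3.0

def amplitude(t, eps, sigma, A0=1.0):
    """Relative amplitude from Eq. (wave-amplitude):
    A(t) = A(0) exp(-eps t/4
                    + (sigma/(5 eps)) ((gamma-1)/gamma) pi^2 (c_s^5 - 1)),
    with c_s^5(t) = exp(-5 eps t / 2)."""
    cs5 = math.exp(-2.5 * eps * t)
    return A0 * math.exp(-0.25 * eps * t
                         + (sigma / (5.0 * eps))
                         * ((GAMMA - 1.0) / GAMMA) * math.pi**2 * (cs5 - 1.0))

def amplitude_conduction_only(t, sigma, A0=1.0):
    """Taylor-expanded limit eps -> 0 (Eq. amplitude-thermal)."""
    return A0 * math.exp(-0.5 * sigma * ((GAMMA - 1.0) / GAMMA)
                         * math.pi**2 * t)

# For very weak cooling the full expression approaches the eps -> 0 limit.
t, sigma = 10.0, 0.019
print(amplitude(t, 1e-6, sigma), amplitude_conduction_only(t, sigma))
```

The agreement for small $\epsilon$ confirms that the Taylor expansion reproduces the conduction-only damping rate.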
\section{Numerical Evaluations}\label{numerical}
In this section, the analytical solution describing the temporal evolution of the longitudinal standing-mode amplitude is evaluated numerically. Typical coronal conditions are used to calculate the wave amplitude, and the results are plotted to show the behaviour of MHD waves in radiatively cooling coronal loops.
\subsection{The effect of radiative cooling}
In the absence of thermal conduction ($\sigma=0$) Equation~(\ref{Eq:wave-amplitude}) reduces to
\begin{equation}
A(t)=A(0)\exp\left(\frac{-\epsilon}{4}t\right).\label{Eq:Amplitude-cooling}
\end{equation}
It is clear from Equation~(\ref{Eq:Amplitude-cooling}) that the amplitude of the loop oscillations decreases with time due to radiative cooling. To give more insight into the amplitude variations we take $\epsilon\in[0, 0.5]$ as typical values for the solar corona. Figure~\ref{cooling-effect} shows that the cooling causes a strong damping of the coronal loop oscillations. This result is applicable to TRACE loops with temperature $T_0=1-2$ MK, where radiation is the dominant damping mechanism.
\begin{figure}[!ht]
\centering
\includegraphics[height=0.45\textheight,width=0.6\textwidth]{Amplitude-cooling.eps}
\vspace{-0.8cm}
\caption{The amplitude of the standing wave with different values of $\epsilon$ $(0.0, 0.1, 0.3, 0.5)$ representing the effect of the cooling on the amplitude of standing wave. The time is measured in units of $L/c_{si}$.}\label{cooling-effect}
\end{figure}
In contrast with the result obtained by \cite{Al-Ghafri12}, who assumed cooling by an unspecified thermodynamic source, cooling by radiation attenuates the amplitude of the waves, and a strong cooling ($\epsilon=0.5$) leads to a strong damping.
\subsection{The effect of thermal conduction}
In the absence of cooling ($\epsilon=0$) Equation~(\ref{Eq:wave-amplitude}), after using a Taylor expansion for the sound speed, becomes
\begin{equation}
A(t)=A(0)\exp\left(-\frac{\sigma}{2}\left(\frac{\gamma-1}{\gamma}\right)\pi^2\,t\right).\label{Eq:amplitude-thermal}
\end{equation}
This expression is consistent with its counterpart in \cite{Al-Ghafri12}. The amplitude of the oscillation is damped, and the strength of the damping depends on the value of the thermal conduction parameter $\sigma$. Since $\sigma$ was taken to be of the order of $\epsilon$ in deriving the amplitude expression, Equation~(\ref{Eq:amplitude-thermal}) is physically applicable only for weak thermal conduction, $\sigma\ll1$. The variations of the amplitude are studied for the initial temperatures $T_0=[0.6, 1, 3, 5]\times10^6$ K, which correspond to $\sigma=[0.0068, 0.019, 0.17, 0.48]$.
\begin{figure}[!ht]
\centering
\includegraphics[height=0.45\textheight,width=0.6\textwidth]{Amplitude-thermal.eps}
\vspace{-0.8cm}
\caption{The amplitude of the standing wave with different values of $\sigma$ $(0.0068, 0.019, 0.17, 0.48)$ representing the effect of thermal conduction on the amplitude of standing wave. The time is measured in units of $L/c_{si}$.}\label{thermal-effect}
\end{figure}
In Figure~\ref{thermal-effect} we present the effect of varying the magnitude of the thermal conduction, $\sigma$, on the damping rate of the standing acoustic wave. We can see that increasing the thermal conduction gives rise to a strong decline in the amplitude of the standing oscillations. This result is expected because thermal conduction is the essential cause of damping for the observed hot coronal loops, especially in the temperature region $T_0\ge3$ MK.
\subsection{Combined effects of radiative cooling and thermal conduction}
Now, we investigate the combined effects of radiation and thermal conduction on damping the amplitude of standing slow magneto-acoustic oscillations in radiatively cooling coronal loops using Equation~(\ref{Eq:wave-amplitude}).
\begin{figure}
\centerline{\hspace*{0.015\textwidth}
\includegraphics[width=0.515\textwidth,clip=]{Amplitude00-StandingT=06MK.eps}
\hspace*{-0.03\textwidth}
\includegraphics[width=0.515\textwidth,clip=]{Amplitude11-StandingT=1MK.eps}
}
\vspace{-0.1\textwidth}
\centerline{\hspace*{0.015\textwidth}
\includegraphics[width=0.515\textwidth,clip=]{Amplitude33-StandingT=3MK.eps}
\hspace*{-0.03\textwidth}
\includegraphics[width=0.515\textwidth,clip=]{Amplitude55-StandingT=5MK.eps}
}
\vspace{-0.05\textwidth}
\caption{The dependence of the oscillation amplitude on time. Panels (a), (b), (c) and (d) correspond to $T_{0i} = 0.6$~MK ($\sigma = 0.0068$), $T_{0i} = 1$~MK ($\sigma=0.019$), $T_{0i} = 3$~MK ($\sigma = 0.17$) and $T_{0i} = 5$~MK ($\sigma = 0.48$), respectively. The time is measured in units of $L/c_{si}$.}
\label{Amp-damping}
\end{figure}
The temporal evolution of the longitudinal standing-mode amplitude for various values of $\epsilon$ and the initial loop temperature is exhibited in Figure~\ref{Amp-damping}. We can see remarkable changes in the wave amplitude for the different temperature regions. For example, Figures~\ref{Amp-damping}a and \ref{Amp-damping}b indicate that the emergence of cooling enhances the rate of damping caused by thermal conduction for loops with temperature $T_0\le1$ MK. On the other hand, the damping rate of the wave amplitude starts to decrease due to the plasma cooling when the loop temperature approaches $3$ MK, and the reduction in damping grows quickly with increasing cooling. In the absence of cooling, however, the damping still increases strongly with time, as displayed in Figures~\ref{Amp-damping}c and \ref{Amp-damping}d.
Overall, the scenario of damping in loops with temperature above 3 MK is in agreement with that obtained by \cite{Al-Ghafri14}.
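This temperature dependence can be checked directly from Equation~(\ref{Eq:wave-amplitude}). The sketch below (parameter values for illustration only) compares the amplitude with and without cooling for a cool and a hot loop:

```python
import math

GAMMA = 5.0 / 3.0
K = ((GAMMA - 1.0) / GAMMA) * math.pi**2   # recurring factor 0.4 pi^2

def amplitude(t, eps, sigma):
    """Eq. (wave-amplitude) with A(0) = 1; eps = 0 uses the Taylor limit."""
    if eps == 0.0:
        return math.exp(-0.5 * sigma * K * t)
    cs5 = math.exp(-2.5 * eps * t)
    return math.exp(-0.25 * eps * t + (sigma / (5.0 * eps)) * K * (cs5 - 1.0))

t, eps = 30.0, 0.3
cool, hot = 0.0068, 0.17   # sigma at T0i = 0.6 MK and 3 MK
# Cool loop: cooling enhances the damping (smaller amplitude).
print(amplitude(t, eps, cool), "<", amplitude(t, 0.0, cool))
# Hot loop: cooling reduces the damping (larger amplitude).
print(amplitude(t, eps, hot), ">", amplitude(t, 0.0, hot))
```

For the cool loop the cooled amplitude lies below the conduction-only one, while for the hot loop it lies well above it, reproducing the trend seen in Figure~\ref{Amp-damping}.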
\section{Discussion and Conclusion}\label{conclusion}
In this paper we have investigated the damping of standing longitudinal MHD waves due to radiation and thermal conduction in cooling coronal loops. The plasma cooling is assumed to be caused by radiation. The radiative loss function is postulated to have the form $\delta p$ to match the observed cooling, which has an approximately exponential profile. Consequently, the temperature in the loop decreases exponentially with a characteristic time scale that is much longer than the characteristic oscillation period. The assumption of a low-beta plasma reduces the MHD equations to a one-dimensional system for standing acoustic waves. We have used the WKB theory to obtain an analytic solution of the governing MHD equations.
Typical coronal conditions are applied to evaluate the evolution of the amplitude with time numerically. The results show that the radiative cooling enhances the damping rate of coronal loops with temperature $T_0\le1$ MK, while in the temperature region $T_0\ge3$ MK the damping is gradually reduced by the cooling. In comparison with the radiation mechanism, thermal conduction is found to be insufficient to cause a strong damping of very cool loops. However, the damping of hot coronal loops is mainly dominated by thermal conduction, and the amplitude of standing slow MHD waves experiences a rapid damping with time in the absence of cooling.
According to the results, the damping rate of the coronal oscillations increases continuously with temperature until reaching its maximum in the region $4\lesssim T \lesssim6$ MK, and decreases thereafter. Eventually, slow magnetosonic waves will propagate almost without damping when thermal conduction approaches infinity.
\begin{acks}
K.S. Al-Ghafri acknowledges the support by College of Applied Sciences - Ibri, (Ministry of Higher Education), Oman.
\end{acks}
\newenvironment{romenumerate}[1][0pt]
{\addtolength{\leftmargini}{#1}\begin{enumerate}%
\renewcommand{\labelenumi}{\textup{(\roman{enumi})}}%
\renewcommand{\theenumi}{\textup{(\roman{enumi})}}%
}{\end{enumerate}}
\newcounter{oldenumi}
\newenvironment{romenumerateq}
{\setcounter{oldenumi}{\value{enumi}}
\begin{romenumerate} \setcounter{enumi}{\value{oldenumi}}}
{\end{romenumerate}}
\newcounter{thmenumerate}
\newenvironment{thmenumerate}
{\setcounter{thmenumerate}{0}%
\renewcommand{\thethmenumerate}{\textup{(\roman{thmenumerate})}}%
\def\item{\par
\refstepcounter{thmenumerate}\textup{(\roman{thmenumerate})\enspace}}
}
{}
\newcounter{romxenumerate}
\newenvironment{romxenumerate}
{\begin{list}
{\upshape(\roman{romxenumerate})}
{\addtolength{\leftmargin}{-10pt}
\addtolength{\labelwidth}{-10pt}
\renewcommand{\theromxenumerate}{\textup{(\roman{romxenumerate})}}
\usecounter{romxenumerate}}
}
{\end{list}}
\newcounter{xenumerate}
\newenvironment{xenumerate}
{\begin{list}
{\upshape(\roman{xenumerate})}
{\setlength{\leftmargin}{0pt}
\setlength{\rightmargin}{0pt}
\setlength{\labelwidth}{0pt}
\setlength{\itemindent}{\labelsep}
\setlength{\topsep}{0pt}
\usecounter{xenumerate}} }
{\end{list}}
\newcommand\xfootnote[1]{\unskip\footnote{#1}$ $}
\newcommand\pfitem[1]{\par(#1):}
\newcommand\pfitemx[1]{\par#1:}
\newcommand\pfitemref[1]{\pfitemx{\ref{#1}}}
\newcommand\pfcase[2]{\smallskip\noindent\emph{Case #1: #2} \noindent}
\newcommand\step[1]{\par{#1.}}
\newcounter{steps}
\newcommand\stepx{\smallskip\noindent\refstepcounter{steps}%
\emph{Step \arabic{steps}. }}
\newcommand{\refT}[1]{Theorem~\ref{#1}}
\newcommand{\refC}[1]{Corollary~\ref{#1}}
\newcommand{\refL}[1]{Lemma~\ref{#1}}
\newcommand{\refR}[1]{Remark~\ref{#1}}
\newcommand{\refS}[1]{Section~\ref{#1}}
\newcommand{\refSS}[1]{Subsection~\ref{#1}}
\newcommand{\refP}[1]{Problem~\ref{#1}}
\newcommand{\refD}[1]{Definition~\ref{#1}}
\newcommand{\refE}[1]{Example~\ref{#1}}
\newcommand{\refF}[1]{Figure~\ref{#1}}
\newcommand{\refApp}[1]{Appendix~\ref{#1}}
\newcommand{\refTab}[1]{Table~\ref{#1}}
\newcommand{\refand}[2]{\ref{#1} and~\ref{#2}}
\newcommand\SJ{\marginal{SJ}}
\newcommand\kolla{\marginal{CHECK! SJ}}
\newcommand\linebreakx{\unskip\marginal{$\backslash$linebreak}\linebreak}
\begingroup
\count255=\time
\divide\count255 by 60
\count1=\count255
\multiply\count255 by -60
\advance\count255 by \time
\ifnum \count255 < 10 \xdef\klockan{\the\count1.0\the\count255}
\else\xdef\klockan{\the\count1.\the\count255}\fi
\endgroup
\newcommand\nopf{\qed}
\newcommand\noqed{\renewcommand{\qed}{}}
\newcommand\qedtag{\eqno{\qed}}
\DeclareMathOperator*{\sumx}{\sum\nolimits^{*}}
\DeclareMathOperator*{\sumxx}{\sum\nolimits^{**}}
\newcommand{\sumkix}[1]{\sum_{k=1}^{#1}}
\newcommand\set[1]{\ensuremath{\{#1\}}}
\newcommand\bigset[1]{\ensuremath{\bigl\{#1\bigr\}}}
\newcommand\Bigset[1]{\ensuremath{\Bigl\{#1\Bigr\}}}
\newcommand\biggset[1]{\ensuremath{\biggl\{#1\biggr\}}}
\newcommand\lrset[1]{\ensuremath{\left\{#1\right\}}}
\newcommand\xpar[1]{(#1)}
\newcommand\bigpar[1]{\bigl(#1\bigr)}
\newcommand\Bigpar[1]{\Bigl(#1\Bigr)}
\newcommand\biggpar[1]{\biggl(#1\biggr)}
\newcommand\lrpar[1]{\left(#1\right)}
\newcommand\bigsqpar[1]{\bigl[#1\bigr]}
\newcommand\Bigsqpar[1]{\Bigl[#1\Bigr]}
\newcommand\biggsqpar[1]{\biggl[#1\biggr]}
\newcommand\lrsqpar[1]{\left[#1\right]}
\newcommand\xcpar[1]{\{#1\}}
\newcommand\bigcpar[1]{\bigl\{#1\bigr\}}
\newcommand\Bigcpar[1]{\Bigl\{#1\Bigr\}}
\newcommand\biggcpar[1]{\biggl\{#1\biggr\}}
\newcommand\lrcpar[1]{\left\{#1\right\}}
\newcommand\abs[1]{|#1|}
\newcommand\bigabs[1]{\bigl|#1\bigr|}
\newcommand\Bigabs[1]{\Bigl|#1\Bigr|}
\newcommand\biggabs[1]{\biggl|#1\biggr|}
\newcommand\lrabs[1]{\left|#1\right|}
\def\rompar(#1){\textup(#1\textup)}
\newcommand\xfrac[2]{#1/#2}
\newcommand\xpfrac[2]{(#1)/#2}
\newcommand\xqfrac[2]{#1/(#2)}
\newcommand\xpqfrac[2]{(#1)/(#2)}
\newcommand\parfrac[2]{\lrpar{\frac{#1}{#2}}}
\newcommand\bigparfrac[2]{\bigpar{\frac{#1}{#2}}}
\newcommand\Bigparfrac[2]{\Bigpar{\frac{#1}{#2}}}
\newcommand\biggparfrac[2]{\biggpar{\frac{#1}{#2}}}
\newcommand\xparfrac[2]{\xpar{\xfrac{#1}{#2}}}
\newcommand\innprod[1]{\langle#1\rangle}
\newcommand\expbig[1]{\exp\bigl(#1\bigr)}
\newcommand\expBig[1]{\exp\Bigl(#1\Bigr)}
\newcommand\explr[1]{\exp\left(#1\right)}
\newcommand\expQ[1]{e^{#1}}
\def\xexp(#1){e^{#1}}
\newcommand\ceil[1]{\lceil#1\rceil}
\newcommand\floor[1]{\lfloor#1\rfloor}
\newcommand\lrfloor[1]{\left\lfloor#1\right\rfloor}
\newcommand\frax[1]{\{#1\}}
\newcommand\setn{\set{1,\dots,n}}
\newcommand\nn{[n]}
\newcommand\ntoo{\ensuremath{{n\to\infty}}}
\newcommand\Ntoo{\ensuremath{{N\to\infty}}}
\newcommand\asntoo{\text{as }\ntoo}
\newcommand\ktoo{\ensuremath{{k\to\infty}}}
\newcommand\mtoo{\ensuremath{{m\to\infty}}}
\newcommand\stoo{\ensuremath{{s\to\infty}}}
\newcommand\ttoo{\ensuremath{{t\to\infty}}}
\newcommand\xtoo{\ensuremath{{x\to\infty}}}
\newcommand\bmin{\wedge}
\newcommand\norm[1]{\|#1\|}
\newcommand\normp[1]{\|#1\|_p}
\newcommand\bignorm[1]{\bigl\|#1\bigr\|}
\newcommand\Bignorm[1]{\Bigl\|#1\Bigr\|}
\newcommand\downto{\searrow}
\newcommand\upto{\nearrow}
\newcommand\half{\tfrac12}
\newcommand\thalf{\tfrac12}
\newcommand\punkt{.\spacefactor=1000}
\newcommand\iid{i.i.d\punkt}
\newcommand\ie{i.e\punkt}
\newcommand\eg{e.g\punkt}
\newcommand\viz{viz\punkt}
\newcommand\cf{cf\punkt}
\renewcommand{\ae}{\vu}
\newcommand\whp{w.h.p\punkt}
\newcommand\ii{\mathrm{i}}
\newcommand\dto{\overset{\mathrm{d}}{\longrightarrow}}
\newcommand\pto{\overset{\mathrm{p}}{\longrightarrow}}
\newcommand\asto{\overset{\mathrm{a.s.}}{\longrightarrow}}
\newcommand\eqd{\overset{\mathrm{d}}{=}}
\newcommand\neqd{\overset{\mathrm{d}}{\neq}}
\newcommand\op{o_{\mathrm p}}
\newcommand\Op{O_{\mathrm p}}
\newcommand\bbR{\mathbb R}
\newcommand\bbC{\mathbb C}
\newcommand\bbN{\mathbb N}
\newcommand\bbT{\mathbb T}
\newcommand\bbQ{\mathbb Q}
\newcommand\bbZ{\mathbb Z}
\newcommand\bbZleo{\mathbb Z_{\le0}}
\newcommand\bbZgeo{\mathbb Z_{\ge0}}
\newcounter{CC}
\newcommand{\CC}{\stepcounter{CC}\CCx}
\newcommand{\CCx}{C_{\arabic{CC}}}
\newcommand{\CCname}[1]{\CC\xdef#1{\CCx}}
\newcommand{\CCreset}{\setcounter{CC}0}
\newcounter{cc}
\newcommand{\cc}{\stepcounter{cc}\ccx}
\newcommand{\ccx}{c_{\arabic{cc}}}
\newcommand{\ccdef}[1]{\xdef#1{\ccx}}
\newcommand{\ccreset}{\setcounter{cc}0}
\renewcommand\Re{\operatorname{Re}}
\renewcommand\Im{\operatorname{Im}}
\newcommand\E{\operatorname{\mathbb E{}}}
\newcommand\PP{\operatorname{\mathbb P{}}}
\newcommand\Var{\operatorname{Var}}
\newcommand\Cov{\operatorname{Cov}}
\newcommand\Corr{\operatorname{Corr}}
\newcommand\Exp{\operatorname{Exp}}
\newcommand\Po{\operatorname{Po}}
\newcommand\Bi{\operatorname{Bi}}
\newcommand\Bin{\operatorname{Bin}}
\newcommand\Be{\operatorname{Be}}
\newcommand\Ge{\operatorname{Ge}}
\newcommand\NBi{\operatorname{NegBin}}
\newcommand\Res{\operatorname{Res}}
\newcommand\fall[1]{^{\underline{#1}}}
\newcommand\rise[1]{^{\overline{#1}}}
\newcommand\supp{\operatorname{supp}}
\newcommand\sgn{\operatorname{sgn}}
\newcommand\Tr{\operatorname{Tr}}
\newcommand\ga{\alpha}
\newcommand\gb{\beta}
\newcommand\gd{\delta}
\newcommand\gD{\Delta}
\newcommand\gf{\varphi}
\newcommand\gam{\gamma}
\newcommand\gG{\Gamma}
\newcommand\gk{\varkappa}
\newcommand\gl{\lambda}
\newcommand\gL{\Lambda}
\newcommand\go{\omega}
\newcommand\gO{\Omega}
\newcommand\gs{\sigma}
\newcommand\gss{\sigma^2}
\newcommand\eps{\varepsilon}
\newcommand\ep{\varepsilon}
\renewcommand\phi{\xxx}
\newcommand\bJ{\bar J}
\newcommand\cA{\mathcal A}
\newcommand\cB{\mathcal B}
\newcommand\cC{\mathcal C}
\newcommand\cD{\mathcal D}
\newcommand\cE{\mathcal E}
\newcommand\cF{\mathcal F}
\newcommand\cG{\mathcal G}
\newcommand\cH{\mathcal H}
\newcommand\cI{\mathcal I}
\newcommand\cJ{\mathcal J}
\newcommand\cK{\mathcal K}
\newcommand\cL{{\mathcal L}}
\newcommand\cM{\mathcal M}
\newcommand\cN{\mathcal N}
\newcommand\cO{\mathcal O}
\newcommand\cP{\mathcal P}
\newcommand\cQ{\mathcal Q}
\newcommand\cR{{\mathcal R}}
\newcommand\cS{{\mathcal S}}
\newcommand\cT{{\mathcal T}}
\newcommand\cU{{\mathcal U}}
\newcommand\cV{\mathcal V}
\newcommand\cW{\mathcal W}
\newcommand\cX{{\mathcal X}}
\newcommand\cY{{\mathcal Y}}
\newcommand\cZ{{\mathcal Z}}
\newcommand\tJ{{\tilde J}}
\newcommand\ett[1]{\boldsymbol1\xcpar{#1}}
\newcommand\bigett[1]{\boldsymbol1\bigcpar{#1}}
\newcommand\Bigett[1]{\boldsymbol1\Bigcpar{#1}}
\newcommand\etta{\boldsymbol1}
\newcommand\smatrixx[1]{\left(\begin{smallmatrix}#1\end{smallmatrix}\right)}
\newcommand\limn{\lim_{n\to\infty}}
\newcommand\limN{\lim_{N\to\infty}}
\newcommand\qw{^{-1}}
\newcommand\qww{^{-2}}
\newcommand\qq{^{1/2}}
\newcommand\qqc{^{3/2}}
\newcommand\qqw{^{-1/2}}
\newcommand\qqq{^{1/3}}
\newcommand\qqqb{^{2/3}}
\newcommand\qqqw{^{-1/3}}
\newcommand\qqqbw{^{-2/3}}
\newcommand\qqqq{^{1/4}}
\newcommand\qqqqc{^{3/4}}
\newcommand\qqqqw{^{-1/4}}
\newcommand\qqqqcw{^{-3/4}}
\newcommand\intoi{\int_0^1}
\newcommand\intoo{\int_0^\infty}
\newcommand\intoooo{\int_{-\infty}^\infty}
\newcommand\oi{[0,1]}
\newcommand\ooo{[0,\infty)}
\newcommand\ooox{[0,\infty]}
\newcommand\oooo{(-\infty,\infty)}
\newcommand\setoi{\set{0,1}}
\newcommand\dtv{d_{\mathrm{TV}}}
\newcommand\dd{\,\mathrm{d}}
\newcommand\rv{random variable}
\newcommand\lhs{left-hand side}
\newcommand\rhs{right-hand side}
\newcommand\gnp{\ensuremath{G(n,p)}}
\newcommand\gnm{\ensuremath{G(n,m)}}
\newcommand\gnd{\ensuremath{G(n,d)}}
\newcommand\gnx[1]{\ensuremath{G(n,#1)}}
\newcommand\etto{\bigpar{1+o(1)}}
\newcommand\sL{{\mathsf L}}
\newcommand\sR{{\mathsf R}}
\newcommand\tl{T_\sL}
\newcommand\tr{T_\sR}
\newcommand\TT{\bar \cT}
\newcommand\ttn{\TT_n}
\newcommand\ctn{\cT_n}
\newcommand\ctk{\cT_k}
\newcommand\ctx[1]{\cT_{#1}}
\newcommand\ctnx[1]{\ctx{n,#1}}
\newcommand\ctnv{\ctnx v}
\newcommand\ctnl{\cT_{n,\sL}}
\newcommand\ctnr{\cT_{n,\sR}}
\newcommand\ccT{\tilde\cT}
\newcommand\ctt{\ccT_t}
\newcommand\cttau{\ccT_\tau}
\newcommand\vvx[1]{v_1\dotsm v_{#1}}
\newcommand\vvk{\vvx{k}}
\newcommand\Too{T_\infty}
\newcommand\Voo{V_\infty}
\newcommand\Cl{C_\sL}
\newcommand\Cr{C_\sR}
\newcommand\ctgl{\cT^{(\gl)}}
\newcommand\Fii{{}_1F_1}
\newcommand\gamm{\gam^2}
\newcommand\bst{binary search tree}
\newcommand\rbst{random \bst}
\newcommand\dV{\partial V}
\newcommand\gff{\gf'}
\newcommand\pxx[1]{\frac{2}{(#1+1)(#1+2)}}
\newcommand\pkk{\pxx{k}}
\newcommand\sumvtn{\sum_{v\in \ctn}}
\newcommand\sumvv{\sum_{v\in V_m'}}
\newcommand\sumvdv{\sum_{v\in\dV_m'}}
\newcommand\qp{^{1/p}}
\newcommand\qpp{^{1-1/p}}
\newcommand\FF{F^{(p)}}
\newcommand\CS{Cauchy--Schwarz}
\newcommand\CSineq{\CS{} inequality}
\newcommand\ER{Erd\H os--R\'enyi}
\newcommand\citex{\REM}
\newcommand\refx[1]{\texttt{[#1]}}
\newcommand\xref[1]{\texttt{(#1)}}
\hyphenation{Upp-sala}
\begin{document}
\begin{abstract}
We study maximal clades in random phylogenetic trees with the Yule--Harding
model or, equivalently, in binary search trees. We use probabilistic methods
to reprove and extend earlier results on moment asymptotics and asymptotic
normality. In particular, we give an explanation of the curious phenomenon
observed by Drmota, Fuchs and Lee (2014) that asymptotic normality holds, but
one should normalize using half the variance.
\end{abstract}
\maketitle
\section{Introduction}\label{S1}
Recall that there are two types of binary trees; we fix the notation as follows.
A \emph{full binary tree} is a rooted tree where each node has
either 0 or 2 children; in the latter case the two children are designated as
\emph{left child} and \emph{right child}.
A \emph{binary tree} is a rooted tree where each node has
0, 1 or 2 children; moreover, each child is designated as either
\emph{left child} or \emph{right child}, and each node has at most one
child of each type.
(Both versions can be regarded as ordered trees,
with the left child before the right when there are two children.)
It is convenient to regard also the empty tree $\emptyset$
as a binary tree (but not as
a full binary tree).
In a full binary tree, the leaves (nodes with no children)
are called \emph{external nodes}; the other nodes (having 2 children) are
\emph{internal nodes}.
There is a simple, well-known bijection between full binary trees and binary
trees: Given a full binary tree, its internal nodes form a binary tree;
this is a bijection, with inverse given by adding, to
any given binary tree, external nodes as children at all free places.
Note that a full binary tree with $n$ internal nodes has $n+1$ external
nodes, and thus $2n+1$ nodes in total. In particular, the bijection just
described yields a bijection between the full binary trees with $2n+1$ nodes
and the binary trees with $n$ nodes.
If $T$ is a binary, or full binary, tree, we let $T_\sL$ and $T_\sR$
be the subtrees rooted at the left and right child of the root, with
$T_\sL=\emptyset$ [$T_\sR=\emptyset$] if the root has no left [right] child.
A \emph{phylogenetic tree} is the same as a full binary tree.
In this context, the \emph{clade} of an external node $v$ is defined
to be the set of external nodes that are descendants of the parent of $v$.
(This is called a \emph{minimal clade} by \citet{BlumF} and \citet{ChangF10}.)
Note that two clades are either nested or disjoint; furthermore, each
external node belongs to some clade (for example its own).
Hence, the set of maximal
clades forms a partition of the set of external nodes.
We let $F(T)$ denote the number of maximal clades of a phylogenetic
tree $T$. (Except that for technical reasons, see \refS{Sbin}, we define
$F(T)=0$ for a phylogenetic tree $T$ with only one external node. Obviously,
this does not affect asymptotics.)
The maximal clades, and the number of them, were introduced by \citet{DBF07},
together with a biological motivation, and further studied by \citet{DFL}.
The phylogenetic trees that we consider are random; more precisely, we
consider the Yule--Harding model of a random phylogenetic tree $\TT_{n}$
with a given number $n$ of internal, and thus $n+1$ external, nodes.
These can be defined recursively, with $\TT_0$ the
unique phylogenetic tree with 1 node (the root), and $\TT_{n+1}$ obtained
from $\TT_n$ ($n\ge0$)
by choosing an external node uniformly at random and converting
it to an internal node with two external children.
(Alternatively, we obtain the same random model by constructing the tree
bottom-up by Kingman's coalescent \cite{Kingman},
see further \citet{Aldous-cladograms}, \citet{BlumF} and \citet{ChangF10}.)
Recall that, for any $n\ge1$, the number of internal nodes in the
left subtree $\TT_{n,\sL}$ (or the right subtree $\TT_{n,\sR}$) is uniformly
distributed on $\set{0,\dots,n-1}$,
and that conditioned on this number being $m$, $\TT_{n,\sL}$ has the same
distribution as $\TT_m$;
see also \refR{Rsplit}.
Under the bijection above, the Yule--Harding random tree $\TT_n$ corresponds
to the random \emph{binary search tree} $\cT_n$ with $n$ nodes, see \eg{}
\citet{SJ180} and \citet{Drmota}.
The random variable that we study is thus $X_n:=F(\ttn)$,
the number of maximal clades in the Yule--Harding model.
It was proved by \citet{DF10}
that the mean number of maximal clades
$\E X_n\sim\ga n$,
where
\begin{equation}\label{ga}
\ga = \frac{1-e\qww}4.
\end{equation}
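Numerically,
\begin{equation*}
\ga=\frac{1-e^{-2}}{4}\approx0.21617,
\end{equation*}
so a random phylogenetic tree with $n$ internal nodes contains on average
about $0.216\,n$ maximal clades.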
This was reproved by \citet{DFL},
in a sharper form:
\begin{theorem}[\cite{DF10,DFL}]
\label{Tmean}
\begin{equation}\label{tmean}
\E X_n=\E F(\ctn)=\ga n+O(1),
\end{equation}
where $\ga$ is given by \eqref{ga}.
\end{theorem}
Moreover, \citet{DFL} found also corresponding
results for the variance and higher central moments:
\begin{theorem}[\cite{DFL}]
\label{Tmom}
As \ntoo,
\begin{align}\label{Xvar}
\E\xpar{X_n-\E X_n}^2
&\sim 4\ga^2 n\log n, &&
\intertext{and for any fixed integer $k\ge3$,}
\E\xpar{X_n-\E X_n}^k &\sim (-1)^k\frac{2k}{k-2}\ga^k n^{k-1}.
\label{Xmom}
\end{align}
\end{theorem}
As a consequence of \eqref{Xvar}--\eqref{Xmom}, the limit distribution of
$F(\ttn)$ (after centering and normalization) cannot be found by the method
of moments.
Nevertheless,
\cite{DFL} further proved asymptotic normality,
where, unusually, the normalizing uses (the square root of) \emph{half} the
variance:
\begin{theorem}[\cite{DFL}] \label{TCLT}
As \ntoo,
\begin{equation}\label{tclt}
\frac{X_n-\E X_n}{\sqrt{2\ga^2 n\log n}} \dto N(0,1).
\end{equation}
\end{theorem}
Here and below, $\dto$ denotes convergence in distribution;
similarly, $\pto$ will denote convergence in probability.
Unspecified limits (including implicit ones such as $\sim$ and $o(1)$) will
be as \ntoo.
Furthermore, $Y_n=\op(a_n)$, for random variables $Y_n$ and
positive numbers $a_n$, means $Y_n/a_n\pto0$.
We let $C,C_1,C_2,\dots$ denote some unspecified positive constants.
The purpose of the present paper is to
use probabilistic methods to reprove these theorems, together with some
further results;
we hope that this can give additional insight,
and it might perhaps also suggest future
generalizations to other types of random trees.
In particular, we can explain the appearance of half the variance in
\refT{TCLT} as follows:
Fix a sequence of numbers $N=N(n)$, and say that a clade is \emph{small} if
it has at most $N+1$ elements, and \emph{large} otherwise.
(We use $N+1$ in the definition only for later notational convenience;
the subtree corresponding to a small clade has at most $N$ internal nodes.)
Let $X^N_n$ be the number of maximal small clades, \ie, the small clades
that are not contained in any other small clade. It turns out that a
suitable choice of $N$ is about $\sqrt n$; we give two versions in the next
theorem.
\begin{theorem}\label{Tsmall}
\begin{thmenumerate}
\item \label{tsmall12}
Let $N:=\sqrt{n}$.
Then $\Var(X^N_n)\sim 2\ga^2 n\log n$ and
\begin{equation}\label{tsmall}
\frac{X^N_n-\E X^N_n}{\sqrt{\Var X_n^N}} \dto N(0,1).
\end{equation}
Furthermore,
$X_n-X_n^N=\op\bigpar{\sqrt{\Var X_n^N}}$ and
$\E X_n-\E X_n^N=o\bigpar{\sqrt{\Var X_n^N}}$, so we may replace $X_n^N$
by $X_n$ in the numerator of \eqref{tsmall}.
However,
\begin{equation}\label{tsmallar}
\Var(X_n-X_n^N)\sim \Var(X_n^N)\sim 2\ga^2n\log n.
\end{equation}
\item
Let $\sqrt{n}\ll N \ll \sqrt{n\log n}$, for example $N:=\sqrt{n\log\log n}$.
Then the conclusions of \ref{tsmall12} still hold; moreover,
$\PP(X_n\neq X_n^N)\to0$.
\end{thmenumerate}
\end{theorem}
The theorem thus shows that the large clades are rare, and do not contribute
to the asymptotic distribution; however, when they appear, the large clades
give a large (actually negative) contribution to $X_n$, and as a result,
half the variance of $X_n$ comes from the large clades.
(When there is a large clade, there is less room for other clades, so $X_n$
tends to be smaller than usual. See also \eqref{ftv} and \eqref{f} below.)
For higher moments, the large clades play a similar, but even more extreme,
role.
Note that (for $n\ge2$)
with probability $2/n$,
the root of $\TT_n$ has one internal and
one external child, and then there is a clade consisting of all external nodes;
this is obviously the unique maximal clade, and thus $X_n=1$.
Since $\E X_n =\ga n+O(1)$ by
\refT{Tmean}, we thus have $X_n-\E X_n = -\ga n +O(1)$ with probability
$2/n$, and this single exceptional event gives a contribution
$\sim (-1)^k 2\ga^k n^{k-1}$ to $\E(X_n-\E X_n)^{k}$, which
explains a fraction $(k-2)/k$ of the moment \eqref{Xmom};
in particular, this explains
why the moment is of order $n^{k-1}$.
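In detail, this event contributes
\begin{equation*}
\frac{2}{n}\bigpar{-\ga n+O(1)}^{k}=(-1)^k2\ga^k n^{k-1}+O(n^{k-2})
\end{equation*}
to $\E(X_n-\E X_n)^k$, and $2\ga^kn^{k-1}$ is exactly the fraction
$(k-2)/k$ of the leading term $\frac{2k}{k-2}\ga^kn^{k-1}$ in \eqref{Xmom}.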
We shall see later that, roughly speaking, the moment asymptotic
in \eqref{Xmom} is completely explained by extremely
large clades of size $\Theta(n)$,
which appear in the $O(1)$ first generations of the tree.
This will also lead to a version of \eqref{Xmom} for absolute central
moments:
\begin{theorem}
\label{Tp}
For any fixed real $p>2$, as \ntoo,
\begin{equation}\label{tp}
\E\bigabs{X_n-\E X_n}^p \sim \frac{2p}{p-2}\ga^p n^{p-1}.
\end{equation}
\end{theorem}
In \refS{Sbin}, we transfer the problem from random phylogenetic trees to
\rbst, which we shall use in the proofs.
The theorems above are proved in Sections \ref{Smean}--\ref{Sga}.
\section{Binary trees}\label{Sbin}
We find it technically convenient to work with binary trees instead of full
binary trees (phylogenetic trees), so we use the bijection in \refS{S1} to
define $F(T)$ also for binary trees $T$. (We use the same notation $F$; this
should not cause any confusion.)
With this translation, our problem is thus to study $X_n:=F(\ctn)$, where $\ctn$
is the binary search tree with $n$ nodes.
The clades in a phylogenetic
tree correspond to the internal nodes that have at least one external child,
\ie, the nodes in the corresponding binary tree that have outdegree at most
1.
We call such nodes \emph{green}.
For a binary tree $T$,
the number $F(T)$ is thus the number of \emph{maximal green nodes},
\ie, the number
of green nodes that have no green ancestor. (This holds also for the
phylogenetic tree $T$ with a single node, and thus for the empty binary
tree, with our definition $F(T)=0$ in this case.)
It follows that,
for any binary tree $T$,
\begin{equation}\label{FT}
F(T):=
\begin{cases}
1 & \text{if $T$ has a green root},
\\
F(\tl)+F(\tr) & \text{otherwise}.
\end{cases}
\end{equation}
Define, for a binary tree $T$,
\begin{equation}\label{f}
f(T):= F(T)-F(\tl)-F(\tr)=
\begin{cases}
1-F(\tr), & \tl=\emptyset,T\neq\emptyset, \\
1-F(\tl), & \tr=\emptyset,T\neq\emptyset, \\
0, & \text{otherwise}.
\end{cases}
\end{equation}
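For example, if $T$ consists of a single node, then $\tl=\tr=\emptyset$ and
\eqref{f} gives
\begin{equation*}
f(T)=1-F(\emptyset)=1=F(T),
\end{equation*}
as it should, since the root is green and is the unique maximal green node.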
Then $F(T)$ is given by the recursion
\begin{equation}
F(T)=F(\tl)+F(\tr)+f(T),
\end{equation}
and thus
\begin{equation}\label{ftv}
F(T)=\sum_{v\in T}f(T_v),
\end{equation}
where $T_v$ is the subtree rooted at $v$, consisting of $v$ and all its
descendants.
In other words, $F(T)$ is the additive functional defined by the toll
function $f(T)$. The advantage of this point of view is that we have
eliminated the maximality condition and now sum over all subtrees $T_v$, and
that we can use general results for this type of sums, see
\citet{SJ296}.
We let $\cT$ denote the random binary search tree with a random number of
elements such that $\PP(|\cT|=n)=2/((n+1)(n+2))$, $n\ge1$.
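This is indeed a probability distribution, since the sum telescopes:
\begin{equation*}
\sum_{n=1}^\infty\frac{2}{(n+1)(n+2)}
=2\sum_{n=1}^\infty\Bigpar{\frac{1}{n+1}-\frac{1}{n+2}}
=2\cdot\frac12=1.
\end{equation*}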
The random binary tree $\cT$
can be constructed by a continuous-time branching process:
Let $(\ccT_t)_{t\ge0}$ be the growing tree that starts
with an isolated root at time $t=0$ and
such that each existing node gets
a left and a right child after random
waiting times that are independent and $\Exp(1)$;
we stop the process at a random time $\tau\sim\Exp(1)$,
independent of everything else, and can take $\cT=\ccT_\tau$, see
\citet{Aldous-fringe}
(where it is also proved that $\cT$ is the limit in
distribution of a random fringe tree in a binary search tree).
\section{The mean}\label{Smean}
Recall that $\ctn$ is the random binary search tree with $n$ nodes. Define
$\nu_n:=\E F(\ctn)$ and $\mu_n:=\E f(\ctn)$, with $F$ and $f$ as in \refS{Sbin}.
(In particular, $\nu_0=\mu_0=0$, while $\nu_1=\mu_1=1$ since
$F(\cT_1)=f(\cT_1)=1$.)
For $n\ge2$, $\ctnl$ is empty with probability $1/n$, and conditioned on
this event, $\ctnr$ has the same distribution as $\cT_{n-1}$.
The same holds if we interchange $\sL$ and $\sR$.
Hence, taking
the expectation in \eqref{f},
\begin{equation}\label{munu}
\mu_n = \tfrac{2}n\bigpar{1-\E F(\cT_{n-1})}
= \tfrac{2}n\bigpar{1-\nu_{n-1}} ,
\qquad n\ge2.
\end{equation}
Furthermore, we see that \eqref{f} implies
\begin{equation}\label{max}
\PP\bigpar{f(\ctn)\neq0}\le \xfrac{2}n .
\end{equation}
Since obviously $0\le F(T)\le |T|$, we have by \eqref{f} also
$-|T|\le f(T)\le 1$ and thus
\begin{equation}\label{maa}
|f(T)|\le |T|
\end{equation}
for any binary tree $T$.
In particular, this and \eqref{max} yield
\begin{equation}
\label{mua}
|\mu_n|\le\E|f(\ctn)|\le n\PP\bigpar{f(\ctn)\neq0}\le 2.
\end{equation}
It is now a simple consequence of general results that $\nu_n:=\E F(\ctn)$ is
asymptotically linear in $n$. Recall the random binary tree $\cT$ defined in
\refS{Sbin}.
\begin{lemma}
\label{LEF}
\begin{equation}\label{nun}
\nu_n:=\E F(\ctn) = n\ga+O(1),
\end{equation}
where
\begin{equation}\label{lefa}
\begin{split}
\ga&:=\E f(\cT)
=\sum_{n=1}^\infty \frac2{(n+1)(n+2)}\E f(\ctn)
=\sum_{n=1}^\infty \frac2{(n+1)(n+2)}\mu_n
\\&\phantom:
=\frac13+\sum_{n=2}^\infty \frac4{n(n+1)(n+2)}(1-\nu_{n-1}).
\end{split}
\end{equation}
\end{lemma}
\begin{proof}
An instance of \citet[Theorem 3.8]{SJ296}.
More explicitly, see
\cite[Theorem 3.4]{SJ296},
\begin{equation}
\label{jw}
\E F(\ctn) = (n+1) \sum_{k=1}^{n-1} \frac{2}{(k+1)(k+2)}\mu_k + \mu_n,
\end{equation}
which implies the result by \eqref{mua} and \eqref{munu}.
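In more detail: by \eqref{mua}, the sum defining $\ga$ in \eqref{lefa}
converges absolutely, with tail
\begin{equation*}
\sum_{k=n}^\infty\frac{2}{(k+1)(k+2)}|\mu_k|
\le\sum_{k=n}^\infty\frac{4}{(k+1)(k+2)}=O(1/n),
\end{equation*}
so \eqref{jw} yields $\E F(\ctn)=(n+1)\bigpar{\ga+O(1/n)}+O(1)=\ga n+O(1)$.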
\end{proof}
In order to prove \refT{Tmean}, it remains to show that $\ga$ defined in
\eqref{lefa} equals $(1-e\qww)/4$
as asserted in \eqref{ga}. In other words, we need the following.
\begin{lemma}
\label{Lga}
\begin{equation}
\E f(\cT)=\frac{1-e\qww}4.
\end{equation}
\end{lemma}
We can prove \refL{Lga} by probabilistic methods, using the construction of
$\cT$ by a branching process in \refS{Sbin}.
However, this proof is
considerably longer than the proof of \refT{Tmean} by singularity analysis
of generating functions in \cite{DF10} and \cite{DFL}; we nevertheless find
the probabilistic proof interesting, and perhaps useful for future
generalizations, but since the methods in it are not needed for other
results in the present paper, we postpone our proof of \refL{Lga} to \refS{Sga}.
\section{Variance}
Let $\gamm_n:=\Var(f(\ctn))$ and $\gss_n:=\Var(F(\ctn))$.
Then $\gamm_0=\gamm_1=\gss_0=\gss_1=0$ and, for $n\ge2$, using \eqref{f},
\begin{equation}
\label{noa}
\gamm_n = \E f(\ctn)^2 -\mu_n^2
=\frac{2}n\E\bigpar{F(\cT_{n-1})-1}^2-\mu_n^2
\le \frac{2}n n^2 =2n.
\end{equation}
Before proving the variance asymptotics in \eqref{Xvar},
we begin with a weaker estimate.
\begin{lemma}
\label{Lgss0}
For $n\ge1$,
\begin{equation}
\gss_n:=\Var F(\ctn) = O(n\log^2 n).
\end{equation}
\end{lemma}
\begin{proof}
By \cite[Theorem 3.9]{SJ296}, where it suffices to sum to $n$ since we may
replace $f(T)$ by $0$ for $|T|>n$ without changing $F(\ctn)$,
\begin{equation}\label{gssO}
\gss_n \le Cn\lrpar{\biggpar{\sum_{k=1}^n\frac{\gam_k}{k\qqc}}^2
+\sup_{k}\frac{\gamm_k}{k}+\sum_{k=1}^n\frac{\mu_k^2}{k^2}}
=O(n\log^2 n),
\end{equation}
using \eqref{noa} and \eqref{mua}, provided $n\ge2$. The case $n=1$ is trivial.
\end{proof}
Write $f(T)=g(T)+h(T)$, where
\begin{equation}\label{jg}
g(T):=
\begin{cases}
1-\nu_{|T|-1}, & \tl=\emptyset,T\neq\emptyset
\text{ or } \tr=\emptyset,T\neq\emptyset, \\
0, & \text{otherwise}.
\end{cases}
\end{equation}
and thus, see \eqref{f},
\begin{equation}\label{jh}
h(T):=
\begin{cases}
\nu_{|\tr|}-F(\tr), & \tl=\emptyset, \\
\nu_{|\tl|}-F(\tl), & \tr=\emptyset, \\
0, & \text{otherwise}.
\end{cases}
\end{equation}
Then $g(\cT_1)=1$, $h(\cT_1)=0$, and, for $k\ge2$,
using \eqref{munu} and \eqref{mua},
\begin{align}
\E g(\cT_k)&=\frac{2}{k}\bigpar{1-\nu_{k-1}}=\mu_k=O(1), \label{jeg}
\\
\E h(\cT_k)&=\frac{2}{k}\E\bigpar{\nu_{k-1}-F(\cT_{k-1})}=0 \label{jeh},
\end{align}
{and, using \refL{Lgss0},}
\begin{equation}
\Var h(\cT_k)=\frac{2}{k}\E\bigpar{\nu_{k-1}-F(\cT_{k-1})}^2
=\frac{2}k\gss_{k-1} = O(\log^2 k).
\label{jvh}
\end{equation}
Let, for an arbitrary binary tree $T$,
\begin{align}\label{GH}
G(T):=\sum_{v\in T} g(T_v)&&& \text{and}&& H(T):=\sum_{v\in T} h(T_v),
\end{align}
so by \eqref{ftv},
\begin{equation} \label{FGH}
F(T)=G(T)+H(T).
\end{equation}
\begin{lemma}
\label{LG}
For $n\ge1$,
\begin{align}
\E G(\ctn)&=\nu_n, \label{lgG}\\
\E H(\ctn)&=0, \label{lgH}\\
\Var H(\ctn)&=O(n). \label{lgVH}
\end{align}
\end{lemma}
\begin{proof}
By \cite[Theorem 3.4]{SJ296}, \cf{} \eqref{jw}, and \eqref{jeh},
\begin{equation}
\E H(\ctn) = (n+1) \sum_{k=1}^{n-1} \frac{2}{(k+1)(k+2)}\E h(\ctk) +
\E h(\ctn) = 0,
\end{equation}
which proves \eqref{lgH}. This implies \eqref{lgG},
since by \eqref{FGH},
\begin{equation}
\E G(\ctn)=\E F(\ctn)-\E H(\ctn) = \nu_n.
\end{equation}
Similarly,
by \cite[Theorem 3.9]{SJ296}, \cf{} \eqref{gssO}, and \eqref{jeh}--\eqref{jvh},
\begin{equation*}
\Var H(\ctn) \le C n \lrpar{\biggpar{\sum_{k=1}^\infty \frac{\log k}{k\qqc}}^2
+\sup_{k\ge1}\frac{\log^2 k}{k}+0}=O(n).
\qedhere
\end{equation*}
\end{proof}
We shall see that this means that
$H(\ctn)$ is asymptotically negligible, and thus it suffices to
consider $G(\ctn)$.
Note that $g(T)$ depends only on the sizes $|\tl|$ and $|\tr|$. This
enables us to easily estimate the variance of $G(\ctn)$.
\begin{theorem}
\label{TVG}
For all $n\ge1$,
\begin{equation}\label{tvg}
\Var G(\ctn) = 4\ga^2 n\log n + O(n).
\end{equation}
\end{theorem}
\begin{proof}
Write $g(T)=g(|T|,|\tl|,|\tr|)$.
(We only care about $g(k,j,l)$ when $j+l=k-1$, but use three arguments for
emphasis.)
Thus $g(k,0,k-1)=g(k,k-1,0)=1-\nu_{k-1}$ and otherwise $g(k,j,k-j-1)=0$.
Let, as in \cite[Theorem 1.29]{SJ296}, $I_k$ be uniformly distributed
on \set{0,\dots,k-1} and
\begin{equation}\label{psi}
\begin{split}
\psi_k &:= \E\bigpar{\nu_{I_k}+\nu_{k-1-I_k}+g(k,I_k,k-1-I_k)-\nu_k}^2
\\&
=\frac{1}k\sum_{j=1}^{k-2}(\nu_{j}+\nu_{k-1-j}-\nu_k)^2
+\frac{2}k \bigpar{\nu_{k-1}+1-\nu_{k-1}-\nu_k}^2
\\&
=\frac{1}k\sum_{j=1}^{k-2}(\nu_{j}+\nu_{k-1-j}-\nu_k)^2
+\frac{2}k (\nu_{k}-1)^2
\\&
=O(1)+\frac{2}k\bigpar{\ga k+O(1)}^2 = 2\ga^2 k+O(1),
\end{split}
\end{equation}
where we used that $\nu_j=\ga j+O(1)$ by \refT{Tmean}.
By \cite[Lemma 7.1]{SJ296}, then
\begin{equation}\label{varg}
\begin{split}
\Var G(\ctn)& = (n+1)\sum_{k=1}^{n-1}\frac{2}{(k+1)(k+2)}\psi_k+\psi_n
\\&
= (n+1)\sum_{k=1}^{n-1}\frac{4\ga^2 k+O(1)}{(k+1)(k+2)}+O(n)
= (n+1)\sum_{k=1}^{n-1}\frac{4\ga^2 }{k}+O(n)
\\&
=4\ga^2 n\log n+O(n).
\end{split}
\raisetag{\baselineskip}
\end{equation}
\end{proof}
We can now prove \eqref{Xvar} in \refT{Tmom}. (Higher moments are treated in
\refS{Smom}.)
\begin{theorem}
\label{TVF}
As \ntoo,
\begin{equation}\label{tvf}
\Var F(\ctn) = 4\ga^2 n\log n + o(n\log n).
\end{equation}
\end{theorem}
This follows from \eqref{FGH},
\eqref{tvg} and \eqref{lgVH} by Minkowski's inequality
(the triangle inequality for $\sqrt{\Var{}}$).
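In detail, \eqref{FGH} and Minkowski's inequality give
\begin{equation*}
\Bigabs{\sqrt{\Var F(\ctn)}-\sqrt{\Var G(\ctn)}}\le\sqrt{\Var H(\ctn)}
=O\bigpar{\sqrt n},
\end{equation*}
so by \eqref{tvg}, $\sqrt{\Var F(\ctn)}=2\ga\sqrt{n\log n}+O\bigpar{\sqrt n}$,
and squaring yields \eqref{tvf}.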
\section{Asymptotic normality}\label{SCLT}
We prove the central limit theorem \refT{TCLT} by a martingale central limit
theorem for a suitable martingale that we construct in this section.
Consider the infinite binary tree $\Too$, where each node has two
children, and denote its root by $o$.
We may regard any binary tree $T$ as a subtree of $\Too$ with the
same root $o$. (In the general sense that the node set $V(T)$ is a
subset of $\Voo:=V(\Too)$, and that the left and right children are the same
as in $\Too$, when they exist.)
In particular we regard the \rbst{}
$\ctn$ as a subtree of $\Too$.
Order the nodes in $\Too$ in breadth-first order as $v(1)=o,v(2),\dots$,
and let $V_j:=\set{v(1),\dots,v(j)}$ be the set
of the first $j$ nodes. Let $\cF_j$ be the $\gs$-field generated by the
sizes $|\cT_{n,v,\sL}|$ and $|\cT_{n,v,\sR}|$ of the two child
subtrees of $\ctn$
at each node $v\in V_j$. Equivalently, we may regard $V_j$ as the internal
nodes in a full binary tree; let $\dV_j$ be the corresponding set of $j+1$
external nodes. Then $\cF_j$ is generated by the subtree sizes $|\cT_{n,v}|$
for all $v\in \dV_j$, together with the indicators $\ett{v\in \ctn}$,
$v\in V_j$, that describe $\ctn\cap V_j$. (We regard the subtree $\cT_{n,v}$
as defined for all $v\in \Voo$, with $\cT_{n,v}=\emptyset$ if $v\notin\ctn$.)
Then, conditioned on $\cF_j$, $\ctn$ consists of some given
subtree of $V_j$
together with attached subtrees $\cT_{n,v}$ at all nodes $v\in\dV_j$; these
are independent \bst{s} of some given orders.
We allow here $j=0$; $V_0=\emptyset$ and $\cF_0$ is the trivial $\gs$-field.
\begin{remark}\label{Rsplit}
As is well-known, see \eg{} \cite{Drmota}, another construction of the
\rbst{} $\ctn$ ($n\ge1$)
is to let the random variable $I_n$ be uniformly
distributed on \set{0,\dots,n-1}, and to let $\ctn$ be
defined recursively such that, given $I_n$,
$\ctnl$ and $\ctnr$ are independent binary
search trees with $|\ctnl|=I_n$ and $|\ctnr|=n-1-I_n$.
(When the tree is used to sort $n$ keys, $I_n$ tells how many of the keys
are assigned to the left subtree.)
The pair $(I_n,n-1-I_n)$ thus tells how the tree is split at the root, and
there is a similar pair for each node.
Then $\cF_j$ is generated by these pairs (\ie, splits) for the nodes
$v(1),\dots,v(j)$.
\end{remark}
Recall that $g(T)$ by \eqref{jg}
depends only on the sizes $|\tl|$ and $|\tr|$. Hence,
$\cF_j$ specifies the value of $g(\cT_{n,v})$ for every $v\in V_j$, and
it follows that
\begin{equation}\label{jepp}
\E\bigpar{G(\ctn)\mid\cF_j}
= \E\Bigpar{\sum_{v\in \Voo} g(\cT_{n,v})\Bigm|\cF_j}
= \sum_{v\in V_j} g(\cT_{n,v}) + \sum_{v\in \dV_j} \nu_{|\cT_{n,v}|}.
\end{equation}
Since the sequence of $\gs$-fields $(\cF_j)_0^\infty$ is increasing,
the sequence
$M_{n,j}:=\E\bigpar{G(\ctn)\mid\cF_j}$, $j\ge0$, is a martingale (for any
fixed $n$). It follows from \eqref{jepp}
that the martingale differences are
\begin{equation}\label{gDM}
\Delta M_{n,j}
:=M_{n,j}-M_{n,j-1}
=g(\cT_{n,v(j)})+\nu_{|\ctnx{v(j)_\sL}|} +\nu_{|\ctnx{v(j)_\sR}|}
-\nu_{|\ctnx{v(j)}|},
\end{equation}
where $v(j)_\sL$ and $v(j)_\sR$ are the children of $v(j)$.
It follows easily that, with $\psi_k$ defined in \eqref{psi},
\begin{equation}
\E\bigpar{|\Delta M_{n,j}|^2\mid\cF_{j-1}}
=
\E\bigpar{|\Delta M_{n,j}|^2\mid |\cT_{n,v(j)}|}
=\psi_{|\ctnx{v(j)}|}.
\end{equation}
Consequently, the conditional square function is given by
\begin{equation}
\begin{split}
W_n:=\sum_{j=1}^\infty \E\bigpar{|\Delta M_{n,j}|^2\mid\cF_{j-1}}
=\sum_{v\in\Voo}\psi_{|\ctnx{v}|}
=\sum_{v\in\ctn}\psi_{|\ctnx{v}|}.
\end{split}
\end{equation}
(It suffices to sum over $v\in \ctn$, since $\psi_0=0$.)
This is again a sum of the same type as \eqref{ftv} and \eqref{GH}, for the
random tree $\ctn$.
(Note that the toll function $\psi_{|T|}$ here depends only on the size
of $T$.)
In particular,
\cite[Theorem 3.4]{SJ296} applies
(in this case we can also use
\cite{Devroye1},
\cite{Devroye2} or
\cite{FlajoletGM1997}); this yields
\begin{equation}\label{ew}
\E W_n = (n+1) \sum_{k=1}^{n-1} \frac{2}{(k+1)(k+2)}\psi_k +
\psi_n.
\end{equation}
If $j$ is large enough, say $j\ge 2^n$, then $V(\ctn)\subseteq V_j$ and thus
$M_{n,j}=G(\ctn)$.
In particular, $G(\ctn)=M_{n,\infty}$.
Thus, by a standard (and simple) martingale identity,
$\Var G(\ctn)=\Var M_{n,\infty}=\E W_n$; hence \eqref{ew} yields the first
equality in \eqref{varg}. (This is no coincidence; the proof just given of
\eqref{ew} is essentially the same as the proof of
\cite[Lemma 7.1]{SJ296} that was used in \eqref{varg}, but stated in martingale
formulation.)
We now split the sum $G(\ctn)$ into two parts, roughly corresponding to
small and large clades. We fix a cut-off $N=N(n)$; for definiteness and
simplicity we choose
$N=N(n):=\sqrt n$, but we note that the arguments below hold with a few
minor modifications for any $N\ge\sqrt n$ with $N=o(\sqrt{n\log n})$.
We then define, for binary trees $T$,
\begin{align}
g'(T)&:= g(T)\ett{|T|\le N} \label{g'}\\
g''(T)&:= g(T)\ett{|T|> N}=g(T)-g'(T). \label{g''}
\end{align}
In analogy with \eqref{ftv} and \eqref{GH}, we define further
\begin{align}
G'(T):=\sum_{v\in T} g'(T_v)
&&&
\text{and} && G''(T):=\sum_{v\in T} g''(T_v);
\end{align}
thus
$G(T)=G'(T)+G''(T)$. We shall see that, asymptotically, $G'(\ctn)$ and
$G''(\ctn)$ contribute equal amounts to the variance,
but nevertheless $G''(\ctn)$ is negligible (in probability).
We begin with the main term $G'(\ctn)$.
\begin{lemma}\label{LG'}
As \ntoo,
\begin{align}
\Var\bigpar{G'(\ctn)}
&= 2\ga^2 n\log n + O(n), \label{varg'}
\\
\label{lg'}
\frac{G'(\ctn)-\E G'(\ctn)}{\sqrt{2\ga^2 n\log n}}
&\dto N(0,1).
\end{align}
\end{lemma}
\begin{proof}
We define $\nu'_n:=\E G'(\ctn)$.
Note that $g'(T)$ depends only on the sizes
$|\tl|$ and $|\tr|$.
Hence we can repeat the argument above and define a martingale
$M'_{n,j}:=\E\bigpar{G'(\ctn)\mid\cF_j}$, $j\ge0$, with
$G'(\ctn)=M'_{n,\infty}$ and martingale differences
\begin{equation}\label{blid}
\Delta M'_{n,j}=\gff(\ctnx{v(j)}),
\end{equation}
where we define, \cf{} \eqref{gDM},
\begin{equation}\label{gff}
\gff(T):=g'(T)+\nu'_{|\tl|}+\nu'_{|\tr|}-\nu'_{|T|}.
\end{equation}
By \cite[Theorem 3.4]{SJ296} again, \cf{} \eqref{jw} and \eqref{ew},
using $\E g(\ctk)=\mu_k=O(1)$ by \eqref{jeg},
\begin{equation}
\begin{split}
\nu'_m&=(m+1)\sumkix{m-1}\pkk \E g'(\ctk)+\E g'(\cT_m)
\\&=(m+1)\sumkix{(m-1)\land N}\pkk \E g(\ctk)+O(1)
\\&=(m+1)\sumkix{N}\pkk \mu_k+O(1).
\end{split}
\end{equation}
Hence, \eqref{gff} yields, after cancellations,
\begin{equation}\label{gff2}
\gff(T)=g'(T)+O(1)
=
\begin{cases}
g(T)+O(1), & |T|\le N,
\\
O(1), & |T|>N.
\end{cases}
\end{equation}
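The cancellation here is elementary; writing $S:=\sumkix{N}\pkk \mu_k=O(1)$, so that $\nu'_m=(m+1)S+O(1)$ for every $m$, and using $|\tl|+|\tr|=|T|-1$, we have
\begin{equation*}
\nu'_{|\tl|}+\nu'_{|\tr|}-\nu'_{|T|}
=\bigpar{(|\tl|+1)+(|\tr|+1)-(|T|+1)}S+O(1)
=S+O(1)=O(1).
\end{equation*}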
Let
\begin{equation}\label{psi'}
\psi'_k:=\E |\gff(\ctk)|^2.
\end{equation}
Then, by \eqref{gff2}, \eqref{jg} and \eqref{nun},
\cf{} \eqref{psi},
\begin{equation}\label{paddington}
\psi'_k
=
\begin{cases}
\E\bigpar{g(\ctk)+O(1)}^2=2\ga^2 k+O(1), & k\le N,
\\
O(1), & k>N.
\end{cases}
\end{equation}
Furthermore, by \eqref{blid} and \eqref{psi'},
\begin{equation}
\begin{split}
\E\bigpar{|\Delta M'_{n,j}|^2\mid\cF_{j-1}}
=
\E\bigpar{|\gff(\ctnx{v(j)})|^2\mid |\ctnx{v(j)}|}
=\psi'_{|\ctnx{v(j)}|}.
\end{split}
\end{equation}
Hence,
the conditional square function of $(M'_{n,j})_j$ is
\begin{equation}\label{magno}
\begin{split}
W'_n:=\sum_{j=1}^\infty \E\bigpar{|\Delta M'_{n,j}|^2\mid\cF_{j-1}}
=\sum_{v\in\Voo}\psi'_{|\ctnx{v}|}
=\sum_{v\in\ctn}\psi'_{|\ctnx{v}|}.
\end{split}
\end{equation}
Yet another application of \cite[Theorem 3.4]{SJ296} yields, using
\eqref{paddington},
\begin{equation}\label{elfbrink}
\begin{split}
\E W_n'&=(n+1)\sum_{k=1}^{n-1}\pkk\psi'_k+\psi'_n
\\&
=(n+1)\sumkix{N}\frac{4\ga^2 k}{(k+1)(k+2)}+O(n)
\\&
=4\ga^2 n\log N+O(n) = 2\ga^2 n\log n+O(n).
\end{split}
\end{equation}
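The middle step in \eqref{elfbrink} uses the elementary estimate, recorded here for convenience,
\begin{equation*}
\sumkix{N}\frac{4\ga^2 k}{(k+1)(k+2)}
=4\ga^2\sumkix{N}\Bigpar{\frac{1}{k}+O\Bigpar{\frac1{k^2}}}
=4\ga^2\log N+O(1),
\end{equation*}
together with $\log N=\log\sqrt n=\tfrac12\log n$.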
Since $\Var G'(\ctn)= \Var\bigpar{M'_{n,\infty}}=\E W'_n$,
\eqref{varg'} follows from \eqref{elfbrink}.
Moreover, the representation \eqref{magno} and
\cite[Theorem 3.9]{SJ296} (again summing only to $n$, as we may) yield,
noting that the toll function $\psi'_{|T|}$
depends only on the size of $T$,
using \eqref{paddington},
\begin{equation}
\label{anna}
\Var(W_n')
\le C n \sum_{k=1}^n \frac{(\psi'_k)^2}{k^2}
\le C_1 n \sum_{k=1}^N 1 +
C_2 n \sum_{k=1}^n \frac{1}{k^2}
=O(nN)=O(n^2).
\end{equation}
Hence, $\Var\bigpar{W'_n/(n\log n)}\to0$ as \ntoo, which together with
\eqref{elfbrink} implies
\begin{equation}\label{olof}
\frac{W'_n}{n\log n} \pto 2\ga^2.
\end{equation}
Note also that $g(T)=O(|T|)$ by \eqref{jg} and \eqref{nun}, and thus
\eqref{gff2} implies $\gff(T)=O(N)$ for all trees $T$. Thus \eqref{blid}
yields
\begin{equation}\label{ulla}
\sup_j\frac{\abs{\Delta M'_{n,j}}}{\sqrt{n\log n}}
=O\Bigparfrac{N}{\sqrt{n\log n}}
=o(1).
\end{equation}
We now apply the central limit theorem for martingale triangular arrays, in
the form in \cite[Corollary 1]{BrownE} (see also \cite[Theorem 3.1]{HH}),
which shows that \eqref{olof} and \eqref{ulla} together imply
\begin{equation}
\frac{G'(\ctn)-\E G'(\ctn)}{\sqrt{n\log n}}
=
\frac{M'_{n,\infty}-\E M'_{n,\infty}}{\sqrt{n\log n}}
\dto N\bigpar{0,2\ga^2}.
\end{equation}
(Actually, \cite[Corollary 1]{BrownE} assumes instead of \eqref{ulla} only a
conditional Lindeberg condition, which is a trivial consequence of the
uniform bound
\eqref{ulla}.)
\end{proof}
\begin{remark}
We used the breadth-first order above merely as one convenient choice. It is
perhaps more natural to consider, instead of the sets $V_j$,
arbitrary node sets $V$ of (finite)
subtrees of $\Too$ that include the root $o$. This would give us, instead of
$(M_{n,j})_j$, a
martingale indexed by binary trees. However, we have no use for this
exotic object here, and use instead the standard martingales
above.
\end{remark}
\begin{lemma}
\label{Llarge}
As \ntoo,
\begin{align}
\E |G''(\ctn)| &=O\bigpar{\sqrt n}, \label{lle}\\
\Var(G''(\ctn)) &=2\ga^2 n\log n+O(n).\label{llvar}
\end{align}
\end{lemma}
\begin{proof}
By \eqref{g''}, \eqref{jg} and \eqref{jeg},
\begin{equation}
\E|g''(\ctk)|
=|\E g(\ctk)|\cdot\ett{k>N}
= O(1)\cdot\ett{k>N}
\end{equation}
and thus, using the triangle inequality and \cite[Theorem 3.4]{SJ296},
\begin{equation*}
\begin{split}
\E |G''(\ctn)| \le (n+1) \sum_{k=\floor N+1}^{n-1}\pkk\E |g''(\ctk)|+\E|g''(\ctn)|
=O\Bigparfrac{n}{N}
, \end{split}
\end{equation*}
yielding \eqref{lle}.
For the variance, we use either
\cite[Theorem 1.29]{SJ296}
as in the proof of \refT{TVF}, or the (essentially equivalent) martingale
argument in \eqref{blid}--\eqref{elfbrink} and conclude that, with
some $\psi''_k$ satisfying
\begin{equation}\label{padd}
\psi''_k
=
\begin{cases}
O(1), & k\le N,
\\
\E\bigpar{g(\ctk)+O(1)}^2=2\ga^2 k+O(1), & k>N,
\end{cases}
\end{equation}
we have
\begin{equation*}
\begin{split}
\Var G''(\ctn)& = (n+1)\sum_{k=1}^{n-1}\frac{2}{(k+1)(k+2)}\psi''_k+\psi''_n
\\&
= (n+1)\sum_{k=\floor N+1}^{n-1}\frac{4\ga^2 k}{k^2}+O(n)
\\&
=4\ga^2 n\log (n/N)+O(n)
=2\ga^2 n\log n+O(n).
\qedhere
\end{split}
\end{equation*}
\end{proof}
\begin{proof}[Proof of \refT{TCLT}]
It follows from \eqref{lle} that
\begin{equation}
\frac{G''(\ctn)-\E G''(\ctn)}{\sqrt{2\ga^2 n\log n}}
\pto 0,
\end{equation}
which together with \eqref{lg'} yields
\begin{equation}\label{asnG}
\frac{G(\ctn)-\E G(\ctn)}{\sqrt{2\ga^2 n\log n}}
\dto N(0,1).
\end{equation}
Similarly, \eqref{lgVH} implies
\begin{equation}
\frac{H(\ctn)-\E H(\ctn)}{\sqrt{2\ga^2 n\log n}}\pto0,
\end{equation}
which together with \eqref{asnG} yields \eqref{tclt}, recalling
$X_n=F(\ctn)=G(\ctn)+H(\ctn)$ by \eqref{FGH}.
\end{proof}
\begin{proof}[Proof of \refT{Tsmall}]
(i).
Define, similarly to \eqref{g'}--\eqref{g''},
\begin{align}
f'(T):= f(T)\ett{|T|\le N}, &&&
f''(T):= f(T)\ett{|T|> N}, \label{f'}
\\
h'(T):= h(T)\ett{|T|\le N}, &&&
h''(T):= h(T)\ett{|T|> N} \label{h'},
\end{align}
and corresponding sums
$ F'(T):=\sum_{v\in T}f'(T_v)$
and similarly $F''(T)$, $H'(T)$, $H''(T)$.
The argument in \eqref{FT}--\eqref{ftv} is easily modified and shows that
\begin{equation}\label{FGH'}
X_n^N=F'(\ctn)=G'(\ctn)+H'(\ctn).
\end{equation}
The same proof as for \refL{LG} yields also
\begin{align}\label{mackmyra}
\Var H'(\ctn)=O(n)&&& \text{and}&& \Var H''(\ctn)=O(n).
\end{align}
Hence, \eqref{tsmall} follows from \refL{LG'} and \eqref{FGH'}.
Furthermore,
\begin{align}\label{FGH''}
X_n-X_n^N = F''(\ctn)=G''(\ctn)+H''(\ctn).
\end{align}
By \eqref{FGH'} and \eqref{FGH''},
\eqref{tsmallar} follows from \eqref{varg'} and \eqref{llvar},
using \eqref{mackmyra} and Minkowski's inequality.
Similarly,
\begin{equation}
\E|X_n-X_n^N| \le \E|G''(\ctn)|+\E|H''(\ctn)| = O(\sqrt n),
\end{equation}
using \eqref{lle}, \eqref{mackmyra} and H\"older's inequality, together with
$\E H''(\ctn)=0$, which is proved as \eqref{lgH}.
(ii).
The conclusions of (i) hold by the same proofs (with some minor
modifications in some estimates).
Moreover, let $Z_{n,k}$ be the number of clades of size $k+1$.
Then, for $n\ge2$, the expected number is given by
\begin{equation}
\E Z_{n,k} =
\begin{cases}
\frac{4n}{k(k+1)(k+2)}, & k<n, \\
\frac{2}{n}, & k=n, \\
0, & k>n,
\end{cases}
\end{equation}
see \cite[Theorem 1]{ChangF10}. (This can be seen as another example of
\cite[Theorem 3.4]{SJ296}.)
Consequently,
\begin{equation}
\begin{split}
\PP(X_n\neq X_n^N)
&\le \PP\Bigpar{\sum_{k>N} Z_{n,k} \ge1}
\\&
\le \E\sum_{k>N} Z_{n,k}
=
\sum_{k=\floor N+1}^{n-1}\frac{4n}{k(k+1)(k+2)}+\frac2n
\\&
=O\Bigparfrac{n}{N^2} + O\Bigparfrac1{n} =o(1),
\end{split}
\end{equation}
which completes the proof.
\end{proof}
\section{Higher moments}\label{Smom}
We begin the proof of \refT{Tp} by proving a weaker estimate.
We let $\normp{X}:=(\E X^p)^{1/p}$ for any random variable $X$.
Recall that $\nu_n:=\E F(\ctn)$.
\begin{lemma}\label{Lxp}
For any fixed real $p>2$, and all $n\ge1$,
\begin{equation}\label{lxp}
\E \bigabs{F(\ctn)-\nu_n}^p
\le C(p) n^{p-1}.
\end{equation}
Equivalently,
\begin{equation}\label{lxp2}
\bignorm{F(\ctn)-\nu_n}_p
= O(n\qpp).
\end{equation}
\end{lemma}
\begin{proof}
Fix $p>2$ and let $m\ge1$ be chosen below. (The constants $C_i$ below may
depend on $p$ but not on $m$.)
Let $V_j$ and $\cF_j$ be as in \refS{SCLT}, and write $V'_m:=V_{2^m-1}$,
$\cF'_m:=\cF_{2^m-1}$.
Thus $\dV'_m$ consists of the $2^m$ nodes in $\Too$ of depth $m$, and $V_m'$
consists of the $2^m-1$ nodes of smaller depth.
It follows from \eqref{ftv} that, for any binary tree $T$,
\begin{equation}\label{xa}
F(T)=\sum_{v\in V'_m} f(T_v) + \sum_{v\in \dV'_m} F(T_v).
\end{equation}
Furthermore, by \eqref{tmean},
\begin{equation}\label{xb}
\begin{split}
\sum_{v\in\dV'_m}\nu_{|T_v|}
&= \sum_{v\in\dV'_m}\bigpar{\ga {|T_v|} +O(1)}
= \ga\sum_{v\in\dV'_m}|T_v| +O(2^m)
\\&
=\ga|T| +O(2^m)
=\nu_{|T|} +O(2^m).
\end{split}
\end{equation}
Hence, by combining \eqref{xa} and \eqref{xb},
\begin{equation}\label{isis}
F(T)-\nu_{|T|}
=\sum_{v\in V'_m} f(T_v) + \sum_{v\in \dV'_m} \bigpar{F(T_v)-\nu_{|T_v|}}
+O(2^m).
\end{equation}
We shall use this decomposition for the \bst{} $\ctn$.
Note first that by \eqref{max}--\eqref{maa},
\begin{equation}\label{osiris}
\E |f(\ctn)|^p \le n^p\PP\bigpar{f(\ctn)\neq0} \le 2n^{p-1}.
\end{equation}
(This holds for any $p>0$ and generalises \eqref{mua}, which is the case $p=1$.)
Hence, for any $v\in \Voo$,
\begin{equation}
\E\bigpar{|f(\ctnv)|^p\bigm||\ctnv|}
\le 2|\ctnv|^{p-1}\le 2n^{p-1},
\end{equation}
and thus
\begin{equation}\label{tor}
\E|f(\ctnv)|^{p}
\le 2n^{p-1}.
\end{equation}
Let $Y:=\sum_{v\in V'_m}f(\ctnv)$ be the first sum in \eqref{isis} for $T=\ctn$.
By Minkowski's inequality and \eqref{tor},
\begin{equation}\label{oden}
\norm{Y}_p
\le \sum_{v\in V_m'}\norm{f(\ctnv)}_p
\le 2^m 2^{1/p} n^{(p-1)/p}.
\end{equation}
Let
$Z:=\sum_{v\in\dV_m'}\bigpar{F(\ctnv)-\nu_{|\ctnv|}}$
be the second sum in \eqref{isis} for $T=\ctn$.
The $\gs$-field $\cF'_m$ specifies the sizes of the subtrees $\ctnv$
for $v\in\dV_m'$, and conditioned on $\cF'_m$, these subtrees are
independent, each distributed as a \bst{} $\cT_{n(v)}$ of its given size $n(v)$. Hence,
conditionally on $\cF_m'$,
the terms in the sum $Z$ are independent and have means zero,
so we can apply Rosenthal's inequality
\cite[Theorem 3.9.1]{Gut}, which yields
\begin{multline}\label{manne}
\E\bigpar{|Z|^p\mid\cF_m'}
\le \CCname{\CCmanne}\sumvdv\E \bigpar{\abs{F(\ctnv)-\nu_{|\ctnv|}}^p\mid\cF_m'}
\\
+ \CCx\Bigpar{\sumvdv\E \bigpar{\abs{F(\ctnv)-\nu_{|\ctnv|}}^2\mid\cF_m'}}^{p/2}
.
\end{multline}
We note first that by \eqref{Xvar},
\begin{equation}
\begin{split}
\E \bigpar{\abs{F(\ctnv)-\nu_{|\ctnv|}}^2\mid\cF_m'}
\le \CC |\ctnv|\log|\ctnv|
\le \CCx |\ctnv|\log n,
\end{split}
\end{equation}
and thus
\begin{equation}
\begin{split}
\sumvdv\E \bigpar{\abs{F(\ctnv)-\nu_{|\ctnv|}}^2\mid\cF_m'}
\le \CCx \sumvdv |\ctnv|\log n
\le \CCx n\log n.
\end{split}
\end{equation}
Hence the second term on the \rhs{} in \eqref{manne} is
$\le \CC (n\log n)^{p/2}$. Taking the expectation in \eqref{manne} we thus
obtain
\begin{equation}\label{frej}
\E|Z|^p
\le \CCmanne \sumvdv\E {\abs{F(\ctnv)-\nu_{|\ctnv|}}^p}
+\CCname{\CCfrej}(n\log n)^{p/2}.
\end{equation}
Let $A_n:=\E|F(\ctn)-\nu_n|^p$.
We can write \eqref{isis} for $T=\ctn$ as
\begin{equation}\label{osis}
F(\ctn)-\nu_n=Y+Z+O(2^m).
\end{equation}
Thus, by Minkowski's inequality, \eqref{oden}
and \eqref{frej},
\begin{equation}\label{rom}
\begin{split}
A_n& =\E\bigabs{Y+Z+O(2^m)}^p
\le 3^p\bigpar{\E |Y|^p+\E|Z|^p+O(2^{mp})}
\\&
\le \CC 2^{mp}n^{p-1}
+\CCname{\CCz}\E|Z|^p+\CC 2^{mp}
\le
\CCz\E|Z|^p
+\CCname{\CCrom} 2^{mp}n^{p-1}.
\end{split}
\end{equation}
Furthermore, \eqref{frej} can be written
\begin{equation}\label{freja}
\E|Z|^p
\le \CCmanne \sumvdv\E A_{|\ctnv|}
+\CCfrej(n\log n)^{p/2}.
\end{equation}
We prove the lemma by induction, and assume that
$A_k\le Ck^{p-1}$ for all $k<n$.
Since $|\ctnv|<n$ for every $v\in\dV_m'$, \eqref{freja} and the inductive
hypothesis yield
\begin{equation}\label{vidar}
\E|Z|^p
\le \CCmanne C\sumvdv\E |\ctnv|^{p-1}
+\CCfrej(n\log n)^{p/2}.
\end{equation}
If $v$ is a child of the root, then $|\ctnv|$ is uniformly distributed on
\set{0,\dots,n-1}, so $|\ctnv|\eqd\floor{nU}\le nU$, where $U\sim U(0,1)$ is
uniformly distributed on $\oi$. By induction in $m$, it follows that for any
$v\in \dV_m'$,
\begin{equation}
|\ctnv|\le n\prod_{i=1}^m U_i,
\end{equation}
with $U_1,\dots,U_m$ independent and $U(0,1)$-distributed.
Consequently,
\begin{equation}\label{scott}
\E |\ctnv|^{p-1}
\le\E\Bigpar{ n^{p-1} \prod_{i=1}^m U_i^{p-1}}
= n^{p-1} \prod_{i=1}^m \E U_i^{p-1}
= n^{p-1} (1/p)^{m},
\end{equation}
since $\E U_i^{p-1}=\intoi u^{p-1}\dd u=1/p$.
There are $2^m$ nodes in $\dV_m'$, and thus \eqref{vidar} yields
\begin{equation}\label{balder}
\E|Z|^p
\le \CCmanne C 2^m(1/p)^m n^{p-1}
+\CCfrej(n\log n)^{p/2},
\end{equation}
which together with \eqref{rom} yields, since $(n\log n)^{p/2}=O(n^{p-1})$
when $p>2$,
\begin{equation}\label{njord}
\begin{split}
A_n
&\le
\CCz\CCmanne C(2/p)^m n^{p-1}
+\CCz\CCfrej(n\log n)^{p/2}
+{\CCrom} 2^{mp}n^{p-1}
\\
&\le
\CCz\CCmanne C(2/p)^m n^{p-1}
+\CCname\CCnjord 2^{mp}n^{p-1}.
\end{split}
\end{equation}
Now choose $m$ such that $(2/p)^m\CCz\CCmanne<1/2$ (which is possible
because $p>2$). Then choose $C:=2^{mp+1}\CCnjord$. With these choices,
\eqref{njord} yields
\begin{equation}
A_n\le\tfrac12 C n^{p-1} + \tfrac12 C n^{p-1}= C n^{p-1}.
\end{equation}
In other words, we have proved the inductive step:
$A_k\le C k^{p-1}$ for $k<n$ implies $A_n\le C n^{p-1}$.
Consequently, this is true for all $n\ge0$, \ie, \eqref{lxp} holds.
(The initial cases $n=0$ and $n=1$ are trivial, since $A_0=A_1=0$.)
\end{proof}
\begin{lemma}
\label{L6}
For any fixed real $p>2$, as \ntoo,
\begin{align}
\normp{F(\ctn)}& \sim \ga n ,\label{l6F}
\\
\norm{f(\ctn)}_p& \sim 2\qp\ga n\qpp. \label{l6f}
\end{align}
\end{lemma}
\begin{proof}
By Minkowski's inequality, \eqref{lxp2} and \eqref{tmean},
\begin{equation}
\bignorm{F(\ctn)}_p =\bigabs{\E F(\ctn)}+ O(n\qpp)
=\ga n+ O(n\qpp)\sim \ga n,
\end{equation}
which is \eqref{l6F}.
For $n\ge2$, it follows from \eqref{f} that
\begin{equation}\label{halma}
\E|f(\ctn)|^p = \frac{2}n\E|1-F(\ctx{n-1})|^p
= \frac{2}n\norm{F(\ctx{n-1})-1}_p^p
\sim 2\ga^p n^{p-1},
\end{equation}
since \eqref{l6F} obviously implies also $\normp{F(\ctn)-1}\sim\ga n$.
\end{proof}
The idea in the proof of \refT{Tp} is to approximate
$\E|X_n-\E X_n|^p=\E\bigabs{\sum_v\bigpar{f(\ctnv)-\E f(\ctnv)}}^p$
by $\E\sum_v\bigabs{f(\ctnv)-\E f(\ctnv)}^p$, or simpler by
$\E\sum_v\bigabs{f(\ctnv)}^p=\sum_v\E\bigabs{f(\ctnv)}^p$.
The heuristic reason for this is that the
moment $\E\bigabs{\sum_v\bigpar{f(\ctnv)-\E f(\ctnv)}}^p$ is dominated by
the event that there is one large term (corresponding to one large clade,
\cf{} the discussion before \refT{Tp}), and then
\begin{equation}\label{app}
\Bigabs{\sum_v\bigpar{f(\ctnv)-\E f(\ctnv)}}^p
\approx \sum_v\bigabs{f(\ctnv)-\E f(\ctnv)}^p
\approx \sum_v\abs{f(\ctnv)}^p.
\end{equation}
We shall justify this in several steps. We begin by finding the expectation
of the final sum in \eqref{app}, \cf{} the sought result \eqref{tp}.
\begin{lemma}
\label{Lbb} As \ntoo,
\begin{equation}\label{lbb}
\E \sumvtn\abs{f(\ctnv)}^p\sim
\frac{2p}{p-2}\ga^p n^{p-1}.
\end{equation}
\end{lemma}
\begin{proof}
We apply again \cite[Theorem 3.4]{SJ296} and obtain
\begin{equation}
\begin{split}
\E \sum_{v\in\ctn}\abs{f(\ctnv)}^p=
(n+1)\sum_{k=1}^{n-1}\pkk\E|f(\ctk)|^p+\E|f(\ctn)|^p.
\end{split}
\end{equation}
By \eqref{halma},
\begin{equation}
\pkk\E|f(\ctk)|^p \sim \frac{2}{k^2}\cdot 2\ga^pk^{p-1}=4\ga^p k^{p-3}
\end{equation}
as \ktoo, and it follows that, as \ntoo, using $p>2$,
\begin{equation*}
\begin{split}
\E \sum_{v\in\ctn}\abs{f(\ctnv)}^p
&\sim
(n+1)\sum_{k=1}^{n-1} 4\ga^p k^{p-3}+2\ga^p n^{p-1}
\\&
\sim n\frac{4\ga^p}{p-2} n^{p-2}+2\ga^p n^{p-1}
= \frac{2p}{p-2}\ga^p n^{p-1}.
\qedhere
\end{split}
\end{equation*}
\end{proof}
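For the record, the last step in the proof of \refL{Lbb} uses, for $p>2$, the standard asymptotics and elementary identity
\begin{equation*}
\sum_{k=1}^{n-1}k^{p-3}\sim\frac{n^{p-2}}{p-2}
\qquad\text{and}\qquad
\frac{4}{p-2}+2=\frac{2p}{p-2}.
\end{equation*}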
Next we take again some $m\ge1$ and use the notation in the proof of
\refL{Lxp}. Since we now have proved \eqref{lxp}, the proof of \refL{Lxp}
shows that \eqref{balder} holds for every $n$, and thus, since $p>2$,
\begin{equation}
\begin{split}
\norm{Z}_p
&\le \CC (2/p)^{m/p}n\qpp+O\bigpar{(n\log n)\qq}
\\&
= \CCx (2/p)^{m/p}n\qpp+o\bigpar{n\qpp}.
\end{split}
\end{equation}
Consequently, by \eqref{osis} and Minkowski's inequality,
\begin{equation}\label{this}
\bigabs{ \norm{F(\ctn)-\nu_n}_p - \norm{Y}_p}
\le \norm{Z}_p+O(2^m)
= \CCx (2/p)^{m/p}n\qpp+o\bigpar{n\qpp}.
\end{equation}
In particular, \eqref{this} and \eqref{lxp2} imply
$ \norm{Y}_p= O(n\qpp)$.
By the mean value theorem,
\begin{equation}
\label{mean}
|x^p-y^p|\le p|x-y|\max\set{x^{p-1},y^{p-1}}
\end{equation}
for any $x,y\ge0$; hence \eqref{this} implies, using also \eqref{lxp2} again,
\begin{equation}\label{erika}
\begin{split}
\E| F(\ctn)-\nu_n|^p - \E |Y|^p
=O\bigpar{(2/p)^{m/p}n^{p-1}}+o\bigpar{n^{p-1}}.
\end{split}
\end{equation}
Let $\gd>0$ be a small positive number to be chosen later, and
let $J_v$ be the indicator of the event that $v$ is green and
$|\ctnv|\ge\gd n$.
(The idea is that the significant contributions only come from nodes $v$
with $J_v=1$.)
\begin{lemma}\label{Ljm}
For each fixed $m\ge1$ and $\gd>0$, and all $n\ge1$,
\begin{align}
\PP\Bigpar{\sumvv J_v \ge 1}&
\le 2^{m+1}\gd\qw n\qw
= O\bigpar{n\qw}
,\label{ljm1}
\\
\PP\Bigpar{\sumvv J_v \ge 2}&
\le 2^{2m+1}\gd\qww n\qww
= O\bigpar{n\qww}
.\label{ljm2}
\end{align}
\end{lemma}
\begin{proof}
We use again the $\gs$-fields $\cF_j$ from \refS{SCLT}. Since $\cF_{j-1}$
specifies $|\ctnx{v_j}|$, but not how this subtree is split at $v_j$, we
have
\begin{equation}\label{jm}
\PP(J_{v_j}=1\mid \cF_{j-1}) \le \frac{2}{|\ctnx{v_j}|}
\ett{|\ctnx{v_j}|\ge\gd n} \le \frac{2}{\gd n},
\end{equation}
and thus, by taking the expectation,
$\PP(J_{v_j}=1)\le 2/(\gd n)$. Since there are $<2^m$ nodes in
$V_m'$, \eqref{ljm1} follows.
Furthermore, for any two nodes $v_i$ and $v_j$ with $i<j$,
$J_{v_i}$ is determined by $\cF_{j-1}$, and \eqref{jm} thus gives also
\begin{equation}
\PP(J_{v_i}J_{v_j}=1\mid \cF_{j-1})
= \E(J_{v_i}J_{v_j}\mid \cF_{j-1})
=J_{v_i} \PP(J_{v_j}=1\mid \cF_{j-1})
\le \frac{2}{\gd n}J_{v_i}.
\end{equation}
Thus, by taking the expectation and using \eqref{jm} again,
$ \PP(J_{v_i}J_{v_j}=1) \le 4/(\gd n)^2$. Summing over the less than
$\binom{2^m}2<2^{2m-1}$ pairs $(v_i,v_j)$ with $v_i,v_j\in V_m'$ yields
\eqref{ljm2}.
\end{proof}
\begin{proof}[Proof of \refT{Tp}]
We show this in several steps.
\stepx
Define
\begin{equation}\label{y1}
Y_1:=\sum_{v\in V_m'} J_vf(\ctnv).
\end{equation}
Since $f(\ctnv)=0$ unless $v$ is green, we have
\begin{equation}\label{regn}
Y-Y_1=\sum_{v\in V_m'} (1-J_v)f(\ctnv)
=\sum_{v\in V_m'} f(\ctnv)\ett{|\ctnv|<\gd n}.
\end{equation}
For each $v$, it follows from \eqref{osiris}
by conditioning on $|\ctnv|$ that
\begin{equation}\label{mowgli}
\E\bigabs{f(\ctnv)\ett{|\ctnv|<\gd n}}^p\le 2{(\gd n)^{p-1}}.
\end{equation}
Hence, \eqref{regn} and Minkowski's inequality yield
\begin{equation}\label{sno}
\begin{split}
\bigabs{\norm{Y}_p-\norm{Y_1}_p}
&\le
\norm{Y-Y_1}_p
\le\sum_{v\in V_m'}\norm{ f(\ctnv)\ett{|\ctnv|<\gd n}}_p
\\&
\le {2^{m+1/p} (\gd n)\qpp}.
\end{split}
\end{equation}
Thus $\normp{Y_1}=O(n\qpp)+O(2^m\gd\qpp n\qpp)$, and \eqref{mean} yields
\begin{equation}\label{magnus}
\E |Y|^p -\E |Y_1|^p = O\bigpar{(2^m\gd\qpp+2^{mp}\gd^{p-1})n^{p-1}}.
\end{equation}
\stepx
Similarly, using \eqref{mowgli} again,
\begin{equation}\label{jesper}
\begin{split}
\E\Bigpar{\sum_{v\in V_m'} |f(\ctnv)|^p
-\sum_{v\in V_m'} J_v|f(\ctnv)|^p}
&
=\sum_{v\in V_m'}\E \bigpar{|f(\ctnv)|^p\ett{|\ctnv|<\gd n}}
\\&
\le{ 2^{m+1} (\gd n)^{p-1}}.
\end{split}
\raisetag{\baselineskip}
\end{equation}
\stepx
By \eqref{y1},
$|Y_1|^p - \sumvv|J_vf(\ctnv)|^p =0$ unless $\sumvv J_v\ge2$,
and in the latter case we have by \eqref{maa} the trivial bounds
$|Y_1|^p\le (2^m n)^p$
and $\sumvv|J_vf(\ctnv)|^p \le 2^m n^p$, and thus
$\bigabs{|Y_1|^p - \sumvv|J_vf(\ctnv)|^p}\le 2^{mp}n^p$.
Consequently, by \eqref{ljm2},
\begin{equation}
\E \Bigabs{|Y_1|^p - \sumvv|J_vf(\ctnv)|^p}
\le 2^{mp}n^p \PP\Bigpar{\sumvv J_v\ge2}
=O(n^{p-2}).
\end{equation}
Thus,
for fixed $m\ge1$ and $\gd>0$,
\begin{equation}\label{emma}
\E|Y_1|^p - \sumvv\E |J_vf(\ctnv)|^p
= O\bigpar{n^{p-2}}
= o\bigpar{n^{p-1}}.
\end{equation}
\stepx
Define $\FF(T):=\sum_{v\in T}|f(T_v)|^p$. Then, in analogy with \eqref{xa},
\begin{equation}\label{xap}
\FF(T)=\sum_{v\in V'_m} |f(T_v)|^p + \sum_{v\in \dV'_m} \FF(T_v).
\end{equation}
Note that \refL{Lbb} implies $\E\FF(\ctn)=O(n^{p-1})$.
Hence, by first conditioning on $\cF'_m$, and using \eqref{scott},
\begin{equation}
\begin{split}
\E \sum_{v\in \dV'_m} \FF(\ctnv) \le \CC \E \sumvdv|\ctnv|^{p-1}
=\CCx (2/p)^m n^{p-1}.
\end{split}
\end{equation}
Taking $T=\ctn$ in \eqref{xap} and taking the expectation, we thus find
\begin{equation}\label{sofie}
\begin{split}
\E \sum_{v\in \ctn}|f(\ctnv)|^p
- \E \sumvv|f(\ctnv)|^p
= O\bigpar{(2/p)^m n^{p-1}}.
\end{split}
\end{equation}
\stepx
Finally,
combining \eqref{erika}, \eqref{magnus}, \eqref{emma}, \eqref{jesper},
\eqref{sofie} and \eqref{lbb}, we obtain
\begin{equation}
\begin{split}
\E| F(\ctn)-\nu_n|^p
&=
\frac{2p}{p-2}\ga^p n^{p-1}
+ O\bigpar{(2/p)^{m/p}n^{p-1}}
+O\bigpar{2^m\gd\qpp n^{p-1}}
\\&
\qquad
+O\bigpar{2^{mp}\gd^{p-1}n^{p-1}}
+o(n^{p-1}).
\end{split}
\raisetag{\baselineskip}
\end{equation}
For any $\eps>0$, we can make each of the error terms on the \rhs{} less than
$\eps n^{p-1}$ by first choosing $m$ large and then $\gd$ small, and finally
$n$ large.
Consequently,
$\E| F(\ctn)-\nu_n|^p = \frac{2p}{p-2}\ga^p n^{p-1}+o(n^{p-1})$.
\end{proof}
\begin{proof}[Proof of \eqref{Xmom}]
Now $p=k$ is an integer.
If $k$ is even, then \eqref{Xmom} is the same as \eqref{tp}, so we may
assume that $p=k\ge3$ is odd.
In this case, \eqref{mean} holds for all real $x,y$.
Thus for any random
variables $X$ and $Y$, using also H\"older's inequality,
\begin{equation}
\begin{split}
\E|X^p-Y^p|
&\le p\E\bigpar{|X-Y|\,|X|^{p-1}+|X-Y|\,|Y|^{p-1}}
\\&
\le p \normp{X-Y}\bigpar{\normp{X}^{p-1}+\normp{Y}^{p-1}}.
\end{split}
\end{equation}
It is now easy to modify the proof of \refT{Tp} and obtain
\begin{equation}\label{scar}
\E\bigpar{F(\ctn)-\nu_n}^p
=
\E \sumvtn{f(\ctnv)}^p + o\bigpar{n^{p-1}}.
\end{equation}
Furthermore, it follows from \eqref{f} that $f(T)\le0$ unless $|T|=1$.
Hence,
\begin{equation}\label{lett}
\sumvtn{f(\ctnv)}^p =
-\sumvtn\abs{f(\ctnv)}^p + O(n).
\end{equation}
The estimate \eqref{Xmom} now follows from \eqref{scar}, \eqref{lett} and
\eqref{lbb}.
\end{proof}
\section{Proof of \refL{Lga}}\label{Sga}
Define a \emph{chain} of length $k$ in a (binary) tree $T$
to be a sequence of $k$ nodes $v_1\dotsm v_k$ such that $v_{i+1}$ is a (strict)
descendant of $v_i$ for each $i=1,\dots,k-1$. In other words, $v_1,\dots,v_k$ are some
nodes (in order) on some path from the root. We say that the chain
$v_1\dotsm v_k$ is \emph{green} if
all nodes $v_1,\dots,v_k$ are green. (The nodes between the $v_i$'s may have
any colour.)
For a binary tree $T$ and $k\ge1$,
let $F_k(T)$ be the number of green chains $v_1\dotsm v_k$ in $T$,
and let $f_k(T)$
be the number of such chains where $v_1$ is the root.
Obviously, \cf{} \eqref{ftv},
\begin{equation}\label{fktv}
F_k(T)=\sum_{v\in T} f_k(T_v).
\end{equation}
These functionals are useful to us because of the following simple
relations, that are cases of inclusion-exclusion.
\begin{lemma}\label{L2}
For any binary tree $T$,
\begin{align}
f(T)&=\sum_{k=1}^\infty (-1)^{k-1} f_k(T),\label{fksum}
\\
F(T)&=\sum_{k=1}^\infty (-1)^{k-1} F_k(T). \label{Fksum}
\end{align}
\end{lemma}
\begin{proof}
Let $v$ be a node in $T$ and consider the contribution to the sum in
\eqref{Fksum}
of all chains with final node $v_k=v$. This is clearly 0 if $v$ is not
green, and it is 1 if $v$ is a maximal green node; furthermore, if $v$ is
green but has $j\ge1$ green ancestors, then the contribution is easily seen
to be $\sum_{i=0}^j\binom ji(-1)^i=(1-1)^j=0$. Hence the \rhs{} of \eqref{Fksum} is the number of
maximal green nodes, \ie, $F(T)$.
For \eqref{fksum} we can argue similarly: Both sides are 0 unless the root
$o$ is green. If it is, the chain $o$ gives contribution 1, and by
inclusion-exclusion, the chains with a given final node $v\neq o$ yield
together a contribution $-1$
if $v$ is green and there are no green nodes between $v$ and $o$, and 0
otherwise. Hence the sum equals $f(T)$ by \eqref{f}.
(Alternatively, \eqref{fksum} follows by induction from \eqref{Fksum},
\eqref{ftv} and \eqref{fktv}.)
\end{proof}
\begin{lemma}\label{L3}
For every $k\ge1$,
\begin{equation}
\E f_k(\cT)=\frac{k(k+3)}{(k+1)(k+2)}\cdot\frac{2^{k-1}}{k!}
= \frac{2^{k-1}}{k!} - \frac{2^{k}}{(k+2)!}.
\end{equation}
\end{lemma}
\begin{proof}
We use the construction of $\cT=\cttau$ in \refS{Sbin}, which we formulate as
follows. Consider again the infinite binary tree $\Too$,
and grow $\ctt$ as a subtree of $\Too$, \cf{} \refS{SCLT}.
To do this, we
equip each node $v$ in $\Too$ with two clocks $\Cl(v)$ and
$\Cr(v)$. These are started when $v$ is added to the growing tree $\ctt$,
and each chimes after a random time with an exponential distribution with
mean 1; when the clock chimes we add a left or right child, respectively, to
$v$.
There is also a \emph{doomsday clock} $C_0$, started at 0 and with the same
$\Exp(1)$ distribution; when it chimes (at time $\tau$),
the process is stopped and the tree $\cttau$ is output. All clocks are
independent of each other.
Fix a chain $\vvk$ in the infinite tree $\Too$, with $v_1=o$, the root.
Let $\ell_i\ge0$ be the number of nodes between $v_i$ and $v_{i+1}$.
We compute the probability that $\vvk$ is a green chain in $\cT=\cttau$
by following the construction of $\ctt$ as time progresses,
checking in several steps
whether still $\vvk$ is a candidate for a green chain, and
computing the probability of this.
(We use throughout the proof the Markov property and the memoryless property
of the exponential distribution.)
We assume for notational convenience that the path from $v_1$ to $v_k$
always uses the left child of each node.
(By symmetry, this does not affect the result.)
\step1
If $k>1$, we first need that $v_1=o$ has a left child but no right child (in
order to be green); in particular, of the three clocks
$\Cl(v_1)$, $\Cr(v_1)$, $C_0$ that run from the beginning, $\Cl(v_1)$ has to
chime first. This has probability $1/3$.
\step2
Given that Step 1 succeeds,
$v_1$ gets a left child $w_1$. If $\ell_1>0$, we need a left
child of $w_1$, and still no right child at $v_1$. (But we do not care
whether we get a right child at $w_1$ or not.) Hence we need that
$\Cl(w_1)$ chimes first among the
three clocks $\Cl(w_1)$, $\Cr(v_1)$, $C_0$ (ignoring all other clocks).
This has probability $1/3$.
This is repeated for $\ell_1$ nodes; thus, the total probability that Steps 1
and 2 succeed is $3^{-(\ell_1+1)}$.
\step3
This takes us to $v_2$. If $k>2$, we need a left child but no right child at
$v_2$, and still no right child at $v_1$. Hence, the next chime from the
four clocks $\Cl(v_2)$, $\Cr(v_2)$, $\Cr(v_1)$, $C_0$ has to come from
$\Cl(v_2)$. This has probability $1/4$.
\step4
Similarly for each of the $\ell_2$ nodes between $v_2$ and $v_3$; again the
probability of success at each of these nodes is $1/4$. Hence the
probability that Steps 3 and 4 succeed is $4^{-(\ell_2+1)}$.
\step5
Steps 3 and 4 are repeated for $v_i$ for each $i<k$, yielding a probability
$(i+2)^{-(\ell_i+1)}$ of success for each $i$.
\step6
Finally, we have obtained $v_k$, and wait for the doomsday clock. Until it
chimes, we must not get any right child at $v_1,\dots,v_{k-1}$, and we must
get at most one child at $v_k$.
Hence, among the $k+2$ clocks $\Cr(v_1),\dots,\Cr(v_k)$, $\Cl(v_k)$, $C_0$,
the next chime must be either from $C_0$ (probability $1/(k+2)$), or from
$\Cl(v_k)$ or $\Cr(v_k)$, followed by $C_0$ (probability
$\frac{2}{k+2}\cdot\frac{1}{k+1}$). The probability of success in this step
is thus
\begin{equation}
\frac{1}{k+2}+ \frac{2}{k+2}\cdot\frac{1}{k+1}
=
\frac{k+3}{(k+1)(k+2)}.
\end{equation}
Combining the six steps above, we see that the probability that $\vvk$ is a
green chain in $\cttau$ is
\begin{equation}
\frac{k+3}{(k+1)(k+2)}\prod_{i=1}^{k-1} \Bigparfrac{1}{i+2}^{\ell_i+1}.
\end{equation}
Given $\ell_1,\dots,\ell_{k-1}$, there are $\prod_{i=1}^{k-1} 2^{\ell_i+1}$
choices of the chain $\vvk$, all with the same probability, so summing over
all $\ell_1,\dots,\ell_{k-1}\ge0$, we obtain
\begin{equation*}
\begin{split}
\E f_k(\cT)
&=
\frac{k+3}{(k+1)(k+2)}\prod_{i=1}^{k-1}
\sum_{\ell_i=0}^\infty\Bigparfrac{2}{i+2}^{\ell_i+1}
=
\frac{k+3}{(k+1)(k+2)}\prod_{i=1}^{k-1}
\frac{2}{i}
\\&
=
\frac{k+3}{(k+1)(k+2)}\cdot\frac{2^{k-1}}{(k-1)!}
=
\frac{k(k+3)}{(k+1)(k+2)}\cdot\frac{2^{k-1}}{k!}.
\end{split}
\qedhere
\end{equation*}
\end{proof}
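We note also that the second expression for $\E f_k(\cT)$ in \refL{L3} follows from the first by the elementary identity
\begin{equation*}
\frac{k(k+3)}{(k+1)(k+2)}\cdot\frac{2^{k-1}}{k!}
=\Bigpar{1-\frac{2}{(k+1)(k+2)}}\frac{2^{k-1}}{k!}
=\frac{2^{k-1}}{k!}-\frac{2^{k}}{(k+2)!},
\end{equation*}
using $(k+1)(k+2)-k(k+3)=2$ and $(k+2)!=(k+1)(k+2)\,k!$.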
\begin{proof}[Proof of \refL{Lga}]
By Lemmas \ref{L2} and \ref{L3}, and a simple calculation,
\begin{equation*}
\begin{split}
\E f(\cT) = \sum_{k=1}^\infty (-1)^{k-1} \E f_k(\cT)
= \sum_{k=1}^\infty\lrpar{\frac{(-2)^{k-1}}{k!} + \frac{(-2)^{k}}{(k+2)!}}
=\frac{1-e^{-2}}{4},
\end{split}
\end{equation*}
noting that we may take the expectation inside the sum since
it also follows from \refL{L3} that
$\sum_{k=1}^\infty \E |f_k(\cT)| = \sum_{k=1}^\infty \E f_k(\cT)<\infty$.
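In detail, since $e^{-2}=\sum_{j=0}^\infty(-2)^j/j!$, the two series evaluate to
\begin{equation*}
\sum_{k=1}^\infty\frac{(-2)^{k-1}}{k!}=-\frac{e^{-2}-1}{2}=\frac{1-e^{-2}}{2}
\qquad\text{and}\qquad
\sum_{k=1}^\infty\frac{(-2)^{k}}{(k+2)!}
=\frac14\sum_{j=3}^\infty\frac{(-2)^{j}}{j!}
=\frac{e^{-2}-1}{4},
\end{equation*}
whose sum is $(1-e^{-2})/4$.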
\end{proof}
Recall that this, together with \refL{LEF}, completes our probabilistic
proof of \refT{Tmean}.
\begin{remark}
If we in the proof above change the doomsday clock and let it have an
arbitrary rate $\gl>0$, and denote the resulting random binary tree by
$\ctgl$, then the same argument yields
\begin{equation}
\begin{split}
\E f_k(\ctgl)
&=
\frac{k+\gl+2}{(k+\gl)(k+\gl+1)}\prod_{i=1}^{k-1}
\sum_{\ell_i=0}^\infty\Bigparfrac{2}{i+\gl+1}^{\ell_i+1}
\\&
=
\frac{k+\gl+2}{(k+\gl)(k+\gl+1)}\prod_{i=1}^{k-1}
\frac{2}{i+\gl-1}
\\&
=
\frac{(k+\gl-1)(k+\gl+2)}{(k+\gl)(k+\gl+1)}\frac{2^{k-1}}{\gl\rise k}
\\&
=\frac{2^{k-1}}{\gl\rise k}-\frac{2^{k}}{\gl\rise {k+2}}.
\end{split}
\end{equation}
Thus by \refL{L2},
letting $\Fii$ denote the confluent hypergeometric function, see
\eg{} \cite[\S\S13.1--13.2 and 16.1--16.2]{NIST},
\begin{equation}\label{swf}
\begin{split}
\E f(\ctgl)
&= \sum_{k=1}^\infty (-1)^{k-1} \E f_k(\ctgl)
= \sum_{k=1}^\infty\lrpar{\frac{(-2)^{k-1}}{\gl\rise k}+\frac{(-2)^{k}}{\gl\rise{k+2}}}
\\&
=-\frac12\bigpar{\Fii(1;\gl;-2)-1}
+\frac14\Bigpar{\Fii(1;\gl;-2)
-\Bigpar{1-\frac{2}{\gl}+\frac{2\cdot2}{\gl(\gl+1)}}}
\\&
=\frac{1}4+\frac{\gl-1}{2\gl(\gl+1)}-\frac14\Fii(1;\gl;-2).
\end{split}
\raisetag\baselineskip
\end{equation}
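Formula \eqref{swf} can be checked numerically, using the series $\Fii(1;\gl;z)=\sum_{k\ge0} z^k/\gl\rise{k}$ (valid since $(1)_k=k!$); a short Python sketch:

```python
from math import prod

def rise(x, k):
    # rising factorial x(x+1)...(x+k-1), with rise(x, 0) = 1
    return prod(x + i for i in range(k))

def F11(lam, z=-2.0, K=80):
    # 1F1(1; lam; z) = sum_k z^k / rise(lam, k), since (1)_k = k!
    return sum(z ** k / rise(lam, k) for k in range(K))

for lam in (0.5, 1.0, 2.0, 3.7):
    series = sum((-2.0) ** (k - 1) / rise(lam, k)
                 + (-2.0) ** k / rise(lam, k + 2) for k in range(1, 80))
    closed = 0.25 + (lam - 1) / (2 * lam * (lam + 1)) - 0.25 * F11(lam)
    assert abs(series - closed) < 1e-10
```

For $\gl=1$ one recovers $\Fii(1;1;-2)=e^{-2}$ and hence the value $(1-e^{-2})/4$ of \refL{Lga}.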
Furthermore, if $\gl>1$ we can compute $\E F(\ctgl)$ by the same method; the
only difference is that we also allow a path of length $\ell_0\ge0$ from the
root
to $v_1$, which gives an additional factor $(1+\gl)^{-\ell_0}$ for each
$\vvk$, leading to
\begin{equation}
\E F_k(\ctgl)=\sum_{\ell_0=0}^\infty \parfrac{2}{\gl+1}^{\ell_0} \E f_k(\ctgl)
=\frac{\gl+1}{\gl-1}\E f_k(\ctgl),
\end{equation}
and hence, using both parts of \refL{L2},
\begin{equation}\label{swF}
\E F(\ctgl)
=\sum_{k=1}^\infty(-1)^{k-1}\E F_k(\ctgl)
=\frac{\gl+1}{\gl-1}\E f(\ctgl)
.
\end{equation}
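The factor $(\gl+1)/(\gl-1)$ is simply the geometric sum $\sum_{\ell_0\ge0}(2/(\gl+1))^{\ell_0}$, convergent for $\gl>1$; numerically, in Python:

```python
# Truncated geometric sum over l0 against the closed factor (lam+1)/(lam-1).
for lam in (1.5, 2.0, 4.0):
    s = sum((2 / (lam + 1)) ** l for l in range(500))
    assert abs(s - (lam + 1) / (lam - 1)) < 1e-9
```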
Moreover, a simple argument shows that, for any $n\ge1$,
\begin{equation}
\PP(|\ctgl|=n)=\prod_{i=2}^n\frac{i}{i+\gl}\cdot\frac{\gl}{n+1+\gl}
=\frac{\gl n!}{(2+\gl)\rise n},
\end{equation}
and conditioned on $|\ctgl|=n$, $\ctgl$ has the same distribution as $\ctn$,
\ie,
$(\ctgl\mid|\ctgl|=n)\eqd\ctn$. Hence,
\begin{equation}
\E F(\ctgl) = \sum_{n=1}^\infty \frac{\gl n!}{(2+\gl)\rise n} \nu_n,
\end{equation}
which can be interpreted as an unusual type of generating function for the
sequence $(\nu_n)$; note that \eqref{swF} and \eqref{swf} yield an explicit
expression for it.
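As a consistency check, the probabilities $\PP(|\ctgl|=n)$ above sum to $1$ over $n\ge1$; this can be seen numerically (here for $\gl\ge1$, where the tail, of order $n^{-(1+\gl)}$, is summed accurately) by iterating the ratio $p_n/p_{n-1}=n/(n+1+\gl)$:

```python
def total_prob(lam, N=10**6):
    # P(|T_lam| = n) = lam * n! / (2+lam)^rise(n); iterate the ratio
    # p_n / p_{n-1} = n / (n + 1 + lam) to avoid huge factorials.
    p = lam / (2 + lam)              # the n = 1 term
    s = p
    for n in range(2, N + 1):
        p *= n / (n + 1 + lam)
        s += p
    return s

for lam in (1.0, 2.0):
    assert abs(total_prob(lam) - 1.0) < 1e-4
```

For $\gl=1$ one has $p_n = 2/((n+1)(n+2))$, so the partial sum up to $N$ is exactly $1-2/(N+2)$.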
\end{remark}
\newcommand\AAP{\emph{Adv. Appl. Probab.} }
\newcommand\JAP{\emph{J. Appl. Probab.} }
\newcommand\JAMS{\emph{J. \AMS} }
\newcommand\MAMS{\emph{Memoirs \AMS} }
\newcommand\PAMS{\emph{Proc. \AMS} }
\newcommand\TAMS{\emph{Trans. \AMS} }
\newcommand\AnnMS{\emph{Ann. Math. Statist.} }
\newcommand\AnnPr{\emph{Ann. Probab.} }
\newcommand\CPC{\emph{Combin. Probab. Comput.} }
\newcommand\JMAA{\emph{J. Math. Anal. Appl.} }
\newcommand\RSA{\emph{Random Struct. Alg.} }
\newcommand\ZW{\emph{Z. Wahrsch. Verw. Gebiete} }
\newcommand\DMTCS{\jour{Discr. Math. Theor. Comput. Sci.} }
\newcommand\AMS{Amer. Math. Soc.}
\newcommand\Springer{Springer-Verlag}
\newcommand\Wiley{Wiley}
\newcommand\vol{\textbf}
\newcommand\jour{\emph}
\newcommand\book{\emph}
\newcommand\inbook{\emph}
\def\no#1#2,{\unskip#2, no. #1,}
\newcommand\toappear{\unskip, to appear}
\newcommand\arxiv[1]{\url{arXiv:#1}}
\newcommand\arXiv{\arxiv}
\section{Introduction}
In algebraic number theory, the task of determining the class number of a number field $k$ remains to this day a classical and difficult open problem, even though explicit formulas and relations between class numbers are available.\\
When this task is out of reach, it is already satisfactory to determine the exact power of a prime $p$ dividing the class number of $k$.\\
Let $\Gamma = \mathbb{Q}(\sqrt[5]{n})$ be a pure quintic field, where $n$ is a $5^{th}$ power-free natural number greater than one, and let $k_0 = \mathbb{Q}(\zeta_5)$ be the $5^{th}$ cyclotomic field. Then $k = \Gamma(\zeta_5)$ is the normal closure of $\Gamma$ and a pure metacyclic field. A natural question is when the class numbers $h_\Gamma$ of $\Gamma$ and $h_k$ of $k$ are divisible by $5$; this divisibility has been studied by several researchers. Following Honda's study of the cubic case \cite{Honda}, C.~Parry \cite{Pa} investigated this question and established the relation between $h_\Gamma$ and $h_k$:\\
\centerline{$5^5h_k = uh_\Gamma^4$}\\
where $u$ is a divisor of $5^6$. He also proved that $h_\Gamma$ is divisible by $5$ if and only if $h_k$ is divisible by $5^2$. The authors of \cite{Mani} gave several results on the rank of the $5$-Sylow subgroup of the ideal class group of $\Gamma$ and $k$, by means of tools of genus theory and Kummer duality.\\
Building on the theory developed in \cite{Mani}, we gave in \cite{FOU1} a full classification of all normal closures $k$ whose $5$-class group is of type $(5, 5)$, and while illustrating this classification by numerical examples obtained with PARI/GP \cite{PRI}, we noticed that for many values of certain forms of $n$, the $5$-class group of $k$ is cyclic of order $5$, which means that $h_k$ is exactly divisible by $5$. These calculations led us to conjecture that this divisibility holds for those forms of $n$, see [\cite{FOU1}, Conjecture 4.1].\\
In this paper, we are interested in giving a proof of this conjecture. In fact we shall prove the following main theorem:
\begin{theorem} \label{theo 1.1}
Let $\Gamma\,=\,\mathbb{Q}(\sqrt[5]{n})$ be a pure quintic field, where $n$ is a $5^{th}$ power-free natural number. Let $k$ be the normal closure of $\Gamma$. Denote by $h_{\Gamma, 5}$ and $h_{k, 5}$ the $5$-class numbers of $\Gamma$ and $k$, respectively. Let $q_1$ and $q_2$ be primes such that $q_1, q_2\,\equiv \, \pm2\, (\mathrm{mod}\, 5)$. If the natural number $n$ takes one of the following forms:
\begin{equation}
n\,=\,\left\lbrace
\begin{array}{ll}
q_1^{e_1}q_2 & \text{ with } \quad q_i\,\equiv \,\pm7\, (\mathrm{mod}\, 25)\\
5q_1 & \text{ with }\quad q_1\,\equiv \,\pm7\, (\mathrm{mod}\, 25)\\
5q_1q_2 & \text{ with } \quad q_1 \,\text{ or }\, q_2 \not\equiv \,\pm7\, (\mathrm{mod}\, 25)\\
\end{array}
\right.
\end{equation}
and neither $q_2$ nor $5$ is a quintic residue modulo $q_1$, then $h_{\Gamma, 5}$ is trivial and $h_{k, 5} = 5$.
\end{theorem}
This result will be underpinned by numerical examples obtained with the computational number theory system PARI/GP \cite{PRI}.
\section{Norm residue symbol}
Let $L/K$ be an abelian extension of number fields with conductor $f$. For each finite or infinite prime ideal $\mathcal{P}$ of $K$, we denote by $f_{\mathcal{P}}$ the largest power of $\mathcal{P}$ that divides $f$. Let
$\beta \in K^{*}$; we determine an auxiliary number $\beta_0$ by the two conditions $\beta_0\equiv\beta\,(\mathrm{mod}\, f_\mathcal{P})$ and $\beta_0\equiv1\,(\mathrm{mod}\, \frac{f}{f_\mathcal{P}})$. Let $\mathcal{Q}$ be an ideal coprime to $\mathcal{P}$ such that $(\beta_0) = \mathcal{P}^a\mathcal{Q}$ ($a=0$ if $\mathcal{P}$ is infinite). We denote by
\begin{center}
\large{ $\left(\frac{\beta,L}{\mathcal{P}} \right) = \left(\frac{L/K}{\mathcal{Q}} \right)$}
\end{center}
the Artin map in $L/K$ applied to $\mathcal{Q}$.\\
Let $K$ be a number field containing the $m^{th}$ roots of unity, where $m\in \mathbb{N}$. Then for each $\alpha,\beta \in K^{*}$ and each prime ideal $\mathcal{P}$ of $K$, we define the norm residue symbol by:
\begin{center}
\large{ $\left(\frac{\beta,\alpha}{\mathcal{P}} \right)_m = \frac{\left(\frac{\beta,K(\sqrt[m]{\alpha})}{\mathcal{P}} \right)\sqrt[m]{\alpha}}{\sqrt[m]{\alpha}}$}
\end{center}
Therefore, if the prime ideal $\mathcal{P}$ is unramified in the field $K(\sqrt[m]{\alpha})$, then we write:
\begin{center}
\large{ $\left(\frac{\alpha}{\mathcal{P}} \right)_m = \frac{\left(\frac{K(\sqrt[m]{\alpha})}{\mathcal{P}} \right)\sqrt[m]{\alpha}}{\sqrt[m]{\alpha}}$}
\end{center}
\begin{remark}
\large{Notice that $\left(\frac{\beta,\alpha}{\mathcal{P}} \right)_m$ and $\left(\frac{\alpha}{\mathcal{P}} \right)_m$} are two $m^{th}$ roots of unity.
\end{remark}
\large{Following \cite{Hass}, the principal properties of the norm residue symbol are given as follows:}
\begin{Properties}\label{normprop}
\large{\item[$(1)$] $\left(\frac{\beta_1\beta_2,\alpha}{\mathcal{P}} \right)_m = \left(\frac{\beta_1,\alpha}{\mathcal{P}} \right)_m\left(\frac{\beta_2,\alpha}{\mathcal{P}} \right)_m$};
\large{\item[$(2)$] $\left(\frac{\beta,\alpha_1\alpha_2}{\mathcal{P}} \right)_m = \left(\frac{\beta,\alpha_1}{\mathcal{P}} \right)_m\left(\frac{\beta,\alpha_2}{\mathcal{P}} \right)_m$};
\large{\item[$(3)$] $\left(\frac{\beta,\alpha}{\mathcal{P}} \right)_m = \left(\frac{\alpha,\beta}{\mathcal{P}} \right)_m^{-1}$};
\item[$(4)$] If $\mathcal{P}$ does not divide the conductor $f(\sqrt[m]{\alpha})$ of $K(\sqrt[m]{\alpha})$ and appears in $(\beta)$ with exponent $b$, then:
\large{$\left(\frac{\beta,\alpha}{\mathcal{P}} \right)_m = \left(\frac{\alpha}{\mathcal{P}} \right)_m^{-b}$ };
\large{\item[$(5)$] $\left(\frac{\beta,\alpha}{\mathcal{P}} \right)_m = 1$ if and only if $\beta$ is a norm residue of $K(\sqrt[m]{\alpha})$ modulo $f(\sqrt[m]{\alpha})$ };
\large{\item[$(6)$] $\left(\frac{\tau\beta,\tau\alpha}{\tau\mathcal{P}} \right)_m = \tau\left(\frac{\beta,\alpha}{\mathcal{P}} \right)_m$ for each automorphism $\tau$ of $K$ };
\large{\item[$(7)$] ${\displaystyle \prod_{\mathcal{P}} \left(\frac{\beta,\alpha}{\mathcal{P}} \right)_m} = 1$} for all finite or infinite prime ideals;
\item[$(8)$]If $K'$ is a finite extension of $K$, $\alpha \in K^{*},\beta' \in K'$ then:
\begin{center}
\large{${\displaystyle \prod_{\mathcal{P'}|\mathcal{P}} \left(\frac{\beta',\alpha}{\mathcal{P'}} \right)_m} = \left(\frac{\mathcal{N}_{K'/K}(\beta'),\alpha}{\mathcal{P}} \right)_m$}
\end{center}
\item[$(9)$]Let $\alpha,\beta \in K^{*}$ and the conductors $f(\sqrt[m]{\alpha})$, $f(\sqrt[m]{\beta})$ of respectively $K(\sqrt[m]{\alpha})$, $K(\sqrt[m]{\beta})$ are co-prime then, the classical reciprocity law:
\begin{center}
\large{$\left(\frac{\beta}{(\alpha)} \right)_m = \left(\frac{\alpha}{(\beta)} \right)_m$}
\end{center}
\end{Properties}
\large{For more basic properties of the norm residue symbol in number fields, we refer the
reader to \cite{Hass}.\\
Notice that in the rest of the article, we will use the quintic norm residue symbol $(m = 5)$. If we deal with a principal ring of integers, we will write the norm residue symbol as follows:}
\begin{center}
\large{$\left(\frac{\beta,\alpha}{(\pi)} \right)_5 = \left(\frac{\beta,\alpha}{\pi} \right)_5$ and $\left(\frac{\alpha}{(\pi)} \right)_5 = \left(\frac{\alpha}{\pi} \right)_5$}
\end{center}
\section{Ambiguous ideal classes}
Let $\Gamma = \mathbb{Q}(\sqrt[5]{n})$ be a pure quintic field, where $n$ is a $5^{th}$ power-free natural number, and $k_0 = \mathbb{Q}(\zeta_5)$ be the cyclotomic field generated by a primitive $5^{th}$ root of unity $\zeta_5$. Then $k = \Gamma(\zeta_5)$ is the normal closure of $\Gamma$, and a cyclic Kummer extension of degree $5$ of $k_0$. By $C_{k, 5}$ we denote the $5$-ideal class group of $k$, and by $C_{k, 5}^{(\sigma)}$ the subgroup of ambiguous ideal classes under the action of $Gal(k/k_0) = \langle \sigma \rangle$.\\
In [\cite{Hass2}, Theorem 13], Hasse specifies rank $C_{k, 5}^{(\sigma)}$ as follows:\\
\centerline{rank $C_{k, 5}^{(\sigma)}$ = $d+q^*-(r+1+o)$}\\
where\\
$\bullet$ $d =$ number of ramified primes in $k/k_0$.\\
$\bullet$ $r =$ rank of the free abelian part of the group of units $E_{k_0}$ of $k_0$, so $r = 1$.\\
$\bullet$ $o = 1$ because $k_0$ contains a primitive $5^{th}$ root of unity.\\
$\bullet$ $q^*$ is defined by $[N_{k/k_0}(k-\{0\})\cap E_{k_0} : N_{k/k_0}(E_{k_0})] = 5^{q^*}$. Here $N_{k/k_0}$ is the relative norm from $k$ to $k_0$.\\
We obtain that rank $C_{k, 5}^{(\sigma)}$ = $d+q^*-3$.\\
We note that $ N_{k/k_0}(E_{k_0}) = E_{k_0}^5$ and $[E_{k_0} : E_{k_0}^5] = 5^2$, so we get that $q^* \in \{0, 1, 2\}$.\\
The group $E_{k_0}$ of units is generated by $\zeta_5$ and $\zeta_5+1$, then according to the definition of $q^*$, we see that:
\begin{center}
$q^*$ = $ \begin{cases}
2 & \text{if }\, \zeta_5, \zeta_5+1 \in N_{k/k_0}(k^*),\\
1 & \text{if }\, \zeta_5^i(\zeta_5+1)^j \in N_{k/k_0}(k^*)\,\text{ for some } i \text{ and } j,\\
0 & \text{if }\, \zeta_5^i(\zeta_5+1)^j \notin N_{k/k_0}(k^*)\, \text{for}\hspace{2mm} 0\leq i,j\leq 4 \text{ and}\hspace{2mm} i+j\neq 0.\\
\end{cases}$
\end{center}
The following lemma gives us some results, which allow us to determine the value of $q^*$.
\begin{lemma}\label{lem 3.1}
Let $k_0 = \mathbb{Q}(\zeta_5)$ and let $k =k_0(\sqrt[5]{n})$, where $n = u\lambda^{e_\lambda}\pi_1^{e_1}\cdots\pi_g^{e_g}$, with $u$ a unit in $\mathbb{Z}[\zeta_5]$, $\lambda = 1-\zeta_5$ the unique prime over $5$, and $\pi_1,\dots,\pi_g$ prime elements in $\mathbb{Z}[\zeta_5]$. Then:\\
$\bullet$ $\zeta_5\, \in N_{k/k_0}(k-\{0\})\, \Longleftrightarrow\, N_{k_0/\mathbb{Q}}((\pi_i))\, \, \equiv \, 1 \,(\mathrm{mod}\,25)$ for all $i$.\\
$\bullet$ $\zeta_5^i(\zeta_5+1)^j\, \in N_{k/k_0}(k-\{0\})\, \Longleftrightarrow\,$
every prime $\pi$ dividing $n$ has the property that $\zeta_5^i(\zeta_5+1)^j$ is a $5^{th}$ power modulo $(\pi)$ in $\mathbb{Z}[\zeta_5]$, for all $i, j$.\\
$\bullet$ $(\lambda)$ ramifies in $k/k_0$ $\Longleftrightarrow\,$ $n \not\equiv \, \pm1, \pm7 \,(\mathrm{mod}\,\lambda^5)$.\\
\end{lemma}
\begin{proof}
See [\cite{Mani}, Lemma 5.1].
\end{proof}
The order of the subgroup $C_{k, 5}^{(\sigma)}$ of ambiguous ideal classes, whenever $n$ takes one of the forms of Theorem \ref{theo 1.1}, is given by the following proposition:
\begin{proposition}\label{prop 3.1}
Let $\Gamma$, $k_0$ and $k$ be as above. Let $q_1$, $q_2$ be prime numbers such that\\ $q_1 , q_2 \equiv \pm2 \,(\mathrm{mod}\, 5)$. If $n$ takes one of the following forms
\begin{equation}
n\,=\,\left\lbrace
\begin{array}{ll}
q_1^{e_1}q_2 & \text{ with } \quad q_i\,\equiv \,\pm7\, (\mathrm{mod}\, 25)\\
5q_1 & \text{ with }\quad q_1\,\equiv \,\pm7\, (\mathrm{mod}\, 25)\\
5q_1q_2 & \text{ with } \quad q_1 \,\text{ or }\, q_2 \not\equiv \,\pm7\, (\mathrm{mod}\, 25)\\
\end{array}
\right.
\end{equation}
Then the subgroup $C_{k, 5}^{(\sigma)}$ of ambiguous ideal classes is cyclic of order $5$.
\end{proposition}
\begin{proof}
We will successively treat the three forms to calculate $d$ and $q^*$ defined before, in order to show that rank $C_{k, 5}^{(\sigma)} = 1$.\\
We note that if $q$ is a prime of $\mathbb{Z}$ such that $q \equiv \pm2 \,(\mathrm{mod}\, 5)$, then by [\cite{washint}, Theorem 2.13], $q$ remains inert in $k_0$ and $N_{k_0/\mathbb{Q}}(q) = q^4$.\\
Since $k$ is the normal closure of $\Gamma$, then $k = k_0(\sqrt[5]{n})$. We see that if a prime $p$ of $\mathbb{Z}$ is ramified in $\Gamma$, then all primes of $k_0$ above $p$ are ramified in $k$. Since in proposition \ref{prop 3.1} we deal with primes $q_i \equiv \pm2 \,(\mathrm{mod}\, 5)$, $(i = 1, 2)$, which are inert in $k_0$, so if $q_i$ are ramified in $\Gamma$, then they are ramified in $k$ too.\\
$\bullet$ If $n = q_1q_2$ such that $q_i \equiv \pm7 \,(\mathrm{mod}\, 25)$, then $n \equiv \pm1 \,(\mathrm{mod}\, 25)$ and by Lemma \ref{lem 3.1}, $\lambda$ is not ramified in $k/k_0$. The primes $q_i$ are ramified in $k$, because $\mathrm{disc}(\Gamma/\mathbb{Q}) = 5^5n^4$ and the $q_i$ divide this discriminant. Hence we have $d = 2$.\\
Since $q_i \equiv \pm7 \,(\mathrm{mod}\, 25)$, we get $N_{k_0/\mathbb{Q}}(q_i) = q_i^4 \equiv 1 \,(\mathrm{mod}\, 25)$, and by Lemma \ref{lem 3.1} we have $\zeta_5\, \in N_{k/k_0}(k-\{0\})$. By the same reasoning we have $\zeta_5+1\, \in N_{k/k_0}(k-\{0\})$, so $q^* = 2$, whence rank $C_{k, 5}^{(\sigma)} = 1$.
$\bullet$ If $n = 5q_1$ such that $q_1 \equiv \pm7 \,(\mathrm{mod}\, 25)$, we have $n \not\equiv \pm1,\pm7 \,(\mathrm{mod}\, 25)$, so $\lambda$ is ramified in $k/k_0$. As before, $q_1$ is also ramified in $k/k_0$, whence $d = 2$.\\
Since $q_1 \equiv \pm7 \,(\mathrm{mod}\, 25)$, we proceed as in the previous point to prove that $q^* = 2$. Hence rank $C_{k, 5}^{(\sigma)} = 1$.
$\bullet$ If $n = 5q_1q_2$ such that $q_1$ or $ q_2 \not\equiv \pm7 \,(\mathrm{mod}\, 25)$: here $\lambda, q_1$ and $q_2$ are ramified in $k/k_0$, so $d = 3$. Since $q_1$ or $ q_2 \not\equiv \pm7 \,(\mathrm{mod}\, 25)$, Lemma \ref{lem 3.1} shows that $\zeta_5\, \not\in N_{k/k_0}(k-\{0\})$, which implies that $q^* < 2$. More precisely, according to the proof of [\cite{Mani}, Theorem 5.13], we have $q^* = 1$. Hence rank $C_{k, 5}^{(\sigma)} = 1$.\\
We have proved that for the three forms of the natural number $n$ we have rank $C_{k, 5}^{(\sigma)} = 1$, which means that $C_{k, 5}^{(\sigma)}$ is a cyclic subgroup of $C_{k, 5}$ of order $5$.
\end{proof}
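The congruence facts used in this proof are elementary; for the primes $q \equiv \pm7 \,(\mathrm{mod}\, 25)$ appearing in the tables of Section 5, they can be checked in a few lines of Python:

```python
# (i)  q = +-2 (mod 5) has order 4 in (Z/5Z)*, hence q is inert in Q(zeta_5)
#      (Washington, Thm 2.13);
# (ii) q = +-7 (mod 25) gives q^4 = 1 (mod 25), i.e. the norm condition
#      N(q) = q^4 = 1 (mod 25) of Lemma 3.1.
for q in (7, 43, 107, 157, 193, 257, 293, 307, 443, 457):
    assert q % 5 in (2, 3)                      # q = +-2 (mod 5)
    order = min(e for e in (1, 2, 4) if pow(q, e, 5) == 1)
    assert order == 4                           # so q is inert in Q(zeta_5)
    assert q % 25 in (7, 18)                    # q = +-7 (mod 25)
    assert pow(q, 4, 25) == 1                   # norm = 1 (mod 25)
```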
\section{Proof of main theorem}
Let $k$ be the normal closure of a pure quintic field $\Gamma = \mathbb{Q}(\sqrt[5]{n})$, where $n$ takes one of the forms mentioned above; $k$ is also a cyclic Kummer extension of degree $5$ of the cyclotomic field\\ $k_0 = \mathbb{Q}(\zeta_5)$. We put $Gal(k/k_0) = \langle \sigma \rangle$. By $k_5^{(1)}$ we denote the Hilbert $5$-class field of $k$, that is, the maximal abelian unramified extension of $k$ of degree a power of $5$. By class field theory, $C_{k, 5} \simeq Gal(k_5^{(1)}/k)$.\\
Next we define the genus field of $k/k_0$, which we denote by $k^*$, to be the maximal abelian extension of $k_0$ contained in $k_5^{(1)}$. Then using the isomorphism $C_{k, 5} \simeq Gal(k_5^{(1)}/k)$, we see that $Gal(k_5^{(1)}/k^*)$ can be identified with a subgroup of $C_{k, 5}$, which is called the principal genus. By class field theory this subgroup is $C_{k, 5}^{1-\sigma}$. It is easy to see that $Gal(k^{*}/k) \simeq C_{k, 5}/C_{k, 5}^{1-\sigma}$, and we have the following lemma:
\begin{lemma}\label{lem 4.1}
Let $k, C_{k, 5}, C_{k, 5}^{(\sigma)}$ and $C_{k, 5}^{1-\sigma}$ as above. Then we have:\\
\centerline{rank $C_{k, 5}^{(\sigma)} = $ rank $C_{k, 5}/C_{k, 5}^{1-\sigma}$ = $1$}
\end{lemma}
\begin{proof}
Since the $5^{th}$ cyclotomic field $k_0$ has a trivial class group, we can adapt the proof of [\cite{GER1}, Lemma 2.3], and by Proposition \ref{prop 3.1} we have rank $C_{k, 5}^{(\sigma)} = 1$.
\end{proof}
From Lemma \ref{lem 4.1}, we deduce that rank $Gal(k^{*}/k) =$ rank $C_{k, 5}^{(\sigma)} = 1$, which means that $[k^* : k] = 5$. By Kummer theory, there exists $x_1 \in k$ such that $k^* = k(\sqrt[5]{x_1})$.\\
In [\cite{Mani}, section 6], the $5$-class group of the pure quintic field $\Gamma$ is investigated by giving an upper bound on its rank as follows:\\
\centerline{rank $C_{\Gamma, 5} \leq \min\{t,\, t-s + \mathrm{rank}\, (C_{k, 5}/C_{k, 5}^{1-\sigma})^+\}$,}
where $t$ is the rank of $C_{k, 5}^{(\sigma)}$, $s = \mathrm{rank}\, (C_{k, 5}^{(\sigma)}.C_{k, 5}^{1-\sigma})/C_{k, 5}^{1-\sigma}$ and $(C_{k, 5}/C_{k, 5}^{1-\sigma})^+$ is defined as in [\cite{Mani}, Lemma 6.1].\\
We note that according to [\cite{Mani}, Theorem 6.6], if the natural number $n$ is not divisible by any prime $p \,\equiv\, 1\, (\mathrm{mod}\, 5)$, then $\mathrm{rank}\, (C_{k, 5}/C_{k, 5}^{1-\sigma})^+ = 0$, which is the case in our situation, whence the upper bound on rank $C_{\Gamma, 5}$ becomes $t-s$.\\
The following theorem allows us to compute the value of $s$ in terms of the rank of a matrix with entries in $\mathbb{F}_5$, the finite field with $5$ elements.
\begin{theorem}\label{theo 4.1}
Let $k_0 = \mathbb{Q}(\zeta_5)$ and $k = k_0(\sqrt[5]{n})$, where $n$ takes one of the forms above and decomposes in $k_0$ as $n = u\lambda^{e_\lambda}\pi_1^{e_1}\cdots\pi_g^{e_g}$, with $u$ a unit, $\lambda = 1-\zeta_5$ the unique prime in $k_0$ above $5$, $\pi_i $ primes of $k_0$, $e_\lambda \in \{0, 1, 2, 3, 4\}$ and $e_i \in \{1, 2, 3, 4\}$. Let $k^* = k(\sqrt[5]{x_1})$ be the genus field of $k/k_0$.\\
Let $\alpha_{1j} \in \mathbb{F}_5$ such that
\begin{center}
$\zeta_5^{\alpha_{1j}}$ = \large{$\left(\frac{x_1, n}{\pi_j} \right)_5$} for $1 \leq j \leq g$\\
\vspace{0.4cm}
\hspace{1.6cm} $\zeta_5^{\alpha_{1(g+1)}}$ = \large{$\left(\frac{x_1, \lambda}{\lambda} \right)_5$} if $\lambda$ is ramified in $k/k_0$.\\
\end{center}
If $M$ is the matrix $(\alpha_{1j})$, then $s = \mathrm{rank}\, M$.
\end{theorem}
\begin{proof}
See [\cite{Mani}, Theorem 5.10] for $t = 1$.
\end{proof}
In the remainder, we compute the value of $s$ for the three forms of the natural number $n$.\\
By the definition of the matrix $M$ in Theorem \ref{theo 4.1}, we see that $s = \mathrm{rank}\, M = 1$ if and only if some $\alpha_{1j} \neq 0$.
$\bullet$ If $n = q_1q_2$ such that $q_i \equiv \pm7 \,(\mathrm{mod}\, 25)$, $(i = 1, 2)$: according to [\cite{Limura}, Lemma 3.3], we have $k^* = k(\sqrt[5]{q_1})$, so we put $x_1 = q_1$, and since the $q_i$, $(i = 1, 2)$, are inert in $k_0$, we put $\pi_1 = q_1$. Then we can get the value of $\alpha_{11}$ by computing \large{$\left(\frac{x_1, n}{\pi_1} \right)_5$} $=$ \large{$\left(\frac{q_1,\, q_1q_2}{q_1} \right)_5$}. We have\\
\large{$\left(\frac{q_1,\, q_1q_2}{q_1} \right)_5$} = \large{$\left(\frac{q_1, q_1}{q_1} \right)_5$}\large{$\left(\frac{q_1, q_2}{q_1} \right)_5$} by $(1)$, $(2)$ of properties \ref{normprop}.\\\\
\vspace{0.5cm}
and since\\
\vspace{0.5cm}
\large{$\left(\frac{q_1, q_1}{q_1} \right)_5$} = $1$ because $q_1$ is norm in $k_0(\sqrt[5]{q_1})/k_0$.\\
\large{$\left(\frac{q_1, q_2}{q_1} \right)_5$} = \large{$\left(\frac{q_2}{q_1} \right)_5^{-1}$} by $(4)$ of properties \ref{normprop}.\\
we deduce that $\zeta_5^{\alpha_{11}}$ = \large{$\left(\frac{q_2}{q_1} \right)_5^{-1}$} $\neq 1$ because $q_2$ is not a quintic residue modulo $q_1$; hence $\alpha_{11} \neq 0$, which implies that $s = 1$.
$\bullet$ If $n = 5q_1$ such that $q_1 \equiv \pm7 \,(\mathrm{mod}\, 25)$: by [\cite{Limura}, Lemma 3.3], we have $k^* = k(\sqrt[5]{q_1})$, so we apply the same reasoning as in the previous point; it suffices to replace $q_2$ by $5$. Hence we have $s= 1$.
$\bullet$ If $n = 5q_1q_2$ such that $q_1$ or $q_2 \not\equiv \pm7 \,(\mathrm{mod}\, 25)$: without loss of generality we may assume that $q_2 \not\equiv \pm7 \,(\mathrm{mod}\, 25)$. We have $k^* = k(\sqrt[5]{5q_2})$. Put $x_1 = 5q_2$ and $\pi_1 = q_1$; then we calculate \large{$\left(\frac{x_1, n}{\pi_1} \right)_5$} $=$ \large{$\left(\frac{5q_2,\, 5q_1q_2}{q_1} \right)_5$}. We have\\
\large{$\left(\frac{5q_2,\, 5q_1q_2}{q_1} \right)_5$} = \large{$\left(\frac{5, 5}{q_1} \right)_5$}\large{$\left(\frac{5, q_1}{q_1} \right)_5$}\large{$\left(\frac{5, q_2}{q_1} \right)_5$}\large{$\left(\frac{q_2, 5}{q_1} \right)_5$}\large{$\left(\frac{q_2, q_1}{q_1} \right)_5$}\large{$\left(\frac{q_2, q_2}{q_1} \right)_5$} by $(1)$, $(2)$ of properties \ref{normprop}.\\\\
\vspace{0.5cm}
and since\\
\vspace{0.5cm}
\large{$\left(\frac{5, 5}{q_1} \right)_5$} = $1$ by $(2)$ and $(4)$ of properties \ref{normprop}.\\
\large{$\left(\frac{5, q_1}{q_1} \right)_5$} = \large{$\left(\frac{5}{q_1} \right)_5$} $\neq 1$ by $(4)$ of properties \ref{normprop}.\\
\large{$\left(\frac{5, q_2}{q_1} \right)_5$}\large{$\left(\frac{q_2, 5}{q_1} \right)_5$} $= 1$ by $(3)$ of properties \ref{normprop}.\\
\large{$\left(\frac{q_2, q_1}{q_1} \right)_5$} = \large{$\left(\frac{q_2}{q_1} \right)_5$} $\neq 1$ by $(4)$ of properties \ref{normprop}.\\
\large{$\left(\frac{q_2, q_2}{q_1} \right)_5$} = $1$ by $(2)$ and $(4)$ of properties \ref{normprop}.\\
We deduce that $\zeta_5^{\alpha_{11}} \neq 1$, because neither $q_2$ nor $5$ is a quintic residue modulo $q_1$, which implies that $s = 1$.\\
In summary, we have proved that for the three forms of the natural number $n$ we have $s = 1$, and by Proposition \ref{prop 3.1} we have $t = 1$, whence the upper bound on rank $C_{\Gamma, 5}$ is $0$, which means that the $5$-class number of $\Gamma$ is trivial.\\
To finish the proof, we use the result of C. Parry in \cite{Pa}, which states that $5$ divides $h_\Gamma$ if and only if $5^2$ divides $h_k$. Since the $5$-class number of $\Gamma$ is trivial, $5^2$ does not divide $h_k$; but we proved that rank $C_{k,5}^{(\sigma)} = 1$, so $h_k$ is exactly divisible by $5$.
\section{Numerical examples}
Let $\Gamma\,=\,\mathbb{Q}(\sqrt[5]{n})$ be a pure quintic field and $k$ its normal closure. Using the system PARI/GP \cite{PRI}, we compute the $5$-class numbers of $\Gamma$ and $k$ for each form of the natural number $n$. The following tables illustrate our main result, Theorem \ref{theo 1.1}.
\begin{center}
Table 1: $n\,=\, q_1q_2$ with $q_i\,\equiv \,\pm7\, (\mathrm{mod}\, 25)$\\
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$q_1$ & \small$q_1\,(\mathrm{mod}\,25)$& $q_2$ & \small$q_2\,(\mathrm{mod}\,25)$& $n\,=\,q_1q_2$ & $h_{k,5}$ & $h_{\Gamma,5}$\\
\hline
7 & 7 & 43 & -7 & 301 & 5 & 1 \\
7 & 7 & 193 & -7 & 1351 & 5 & 1 \\
7 & 7 & 293 & -7 & 2051 & 5 & 1 \\
107 & 7 & 43 & -7 & 4601 & 5 & 1 \\
157 & 7 & 43 & -7 & 6751 & 5 & 1 \\
457 & 7 & 43 & -7 & 19651 & 5 & 1 \\
107 & 7 & 193 & -7 & 20651 & 5 & 1 \\
557 & 7 & 43 & -7 & 23951 & 5 & 1 \\
607 & 7 & 43 & -7 & 26101 & 5 & 1 \\
157 & 7 & 193 & -7 & 30301 & 5 & 1 \\
107 & 7 & 293 & -7 & 31351 & 5 & 1 \\
757 & 7 & 43 & -7 & 32551 & 5 & 1 \\
857 & 7 & 43 & -7 & 36851 & 5 & 1 \\
907 & 7 & 43 & -7 & 39001 & 5 & 1 \\
107 & 7 & 443 & -7 & 47401 & 5 & 1 \\
257 & 7 & 193 & -7 & 49601 & 5 & 1 \\
307 & 7 & 193 & -7 & 59251 & 5 & 1 \\
157 & 7 & 443 & -7 & 69551 & 5 & 1 \\
257 & 7 & 293 & -7 & 75301 & 5 & 1 \\
457 & 7 & 443 & -7 & 202451 & 5 & 1 \\
\hline
\end{tabular}
\end{center}
\newpage
\begin{center}
Table 2 : $n\,=\, 5q_1$ with $q_1\,\equiv \,\pm7\, (\mathrm{mod}\, 25)$\\
\begin{tabular}{|c|c|c|c|c|}
\hline
$q_1$ & $q_1\,(\mathrm{mod}\,25)$ &$n\,=\,5q_1$ & $h_{k,5}$ & $h_{\Gamma,5}$ \\
\hline
7 &7 & 35 & 5 & 1 \\
43 & -7 & 215 &5 & 1 \\
107 & 7 & 535 & 5 & 1 \\
157 & 7 & 785 & 5 & 1 \\
193 & -7& 965 & 5 & 1 \\
257 & 7 & 1285 & 5 & 1 \\
293 & -7 & 1465 & 5 & 1 \\
307 & 7 & 1535 & 5 & 1 \\
443 & -7 & 2215 & 5 & 1 \\
457 & 7 & 2285 & 5 & 1 \\
557 & 7 & 2785 & 5 & 1 \\
607 & 7 & 3035 &5 & 1 \\
643 & -7& 3215 & 5 & 1 \\
757 & 7 & 3785 & 5 & 1 \\
857 & 7 & 4285 & 5 & 1 \\
907 & 7 & 4535 &5 & 1 \\
\hline
\end{tabular}
\end{center}
\vspace{0.1cm}
\begin{center}
Table 3: $n\,=\, 5q_1q_2$ with $q_1$ or $q_2$ $\not\equiv \,\pm7\, (\mathrm{mod}\, 25)$\\
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$q_1$ & \small$q_1\,(\mathrm{mod}\,25)$& $q_2$ & \small$q_2\,(\mathrm{mod}\,25)$& $n\,=\,5q_1q_2$ & $h_{k,5}$ & $h_{\Gamma,5}$\\
\hline
2 & 2 & 3 & 3 & 30 & 5 & 1 \\
7 & 7 & 3 & 3 & 105 & 5 & 1 \\
2 & 2 & 13 & 13 & 130 & 5 & 1 \\
2 & 2 & 23 & -2 & 230 & 5 & 1 \\
17 & 17 & 3 & 3 & 255 & 5 & 1 \\
2 & 2 & 53 & 3 & 530 & 5 & 1 \\
37 & 12 & 3 & 3 & 555 & 5 & 1 \\
47 & -3 & 3 & 3 & 705 & 5 & 1 \\
67 & 17 & 3 & 3 & 1005 & 5 & 1 \\
17 & 17 & 23 & -2 & 1955 & 5 & 1\\
37 & 12 & 13 & 13 & 2405 & 5 & 1 \\
47 & -3 & 13 & 13 & 3055 & 5 & 1 \\
47 & -3 & 23 & -2 & 5405 & 5 & 1 \\
47 & -3 & 43 & -7 & 10105 & 5 & 1 \\
107 & 7 & 23 & -2 & 12305 & 5 & 1 \\
67 & 17 & 53 & 3 & 17755 & 5 & 1 \\
97 & -3 & 43 & -7 & 20855 & 5 & 1 \\
\hline
\end{tabular}
\end{center}
\section{Introduction}
\label{sec:intro}
Data-collection rates in high energy physics (HEP) experiments, particularly those at the Large Hadron Collider (LHC),
are a continuing challenge and resulting datasets require large amounts of computing power to process.
For example, the \mbox{LHCb}\xspace experiment~\cite{Alves:2008zz} processes an event rate of 1\ensuremath{{\rm \,MHz}}\xspace in a software-based
trigger~\cite{LHCb-DP-2014-002}. The purpose of this trigger is to reduce the output
data rate to manageable levels, \mbox{\itshape i.e.}\xspace to fit in the available storage resources offline.
This amounts to a reduction from 60\,GB per second to an output data rate of 0.6\,GB per second.
In order to accomplish such a remarkable real-time data reduction in the software based trigger,
novel ideas have been introduced, such as the real-time alignment and calibration of the detector~\cite{Xu:2016mik},
in addition to the concept of real-time analysis~\cite{Aaij:2016rxn}, whereby a subset of the particles from the proton collisions need only
be saved, and not the raw data from the sub-detectors.
The aforementioned data-reduction strategy is similar across all LHC experiments, where
software based selections are applied in low-latency environments.
Machine learning (ML) is becoming an increasingly important tool to filter datasets,
be it with the identification of interesting event topologies, or the distinction
between individual particle species. For the case of \mbox{LHCb}\xspace data-taking, over 600
unique signatures are searched for in parallel in real time, each with its own set of requirements.
However, only a handful at present make use of machine learning.
A large ecosystem is available for analysts to create machine learning classifiers;
the TMVA~\cite{Hocker:2007ht} and Neurobayes~\cite{Feindt:2006pm} tools being among the most widely used.
More recent examples gaining popularity include Scikit-Learn~\cite{Pedregosa:2012toh}
and Keras~\cite{keras}. It has been proven in many LHC analyses that
ML classifiers account for differences in the correlations of
training variables between signal and background events, therefore enabling more
powerful data reduction.
Despite this, the majority of searches for interesting signatures are performed
without the use of ML classifiers. Often the reason for this is the relative difficulty of
implementing a preferred ML classifier in the {\tt C++/Python} combination
of event selection frameworks~\cite{Barrand:2001ny}. Another
reason is the required algorithm speed. Methods such as Bonsai
Boosted Decision Trees (BBDTs)~\cite{Gligorov:2012qt} have been developed in order
to enable the quick evaluation of models. The BBDT approach relies on the
discretization of inputs such that all possible combinations along with
the associated classifier response is known before the model is evaluated.
One potential drawback of the BBDT approach is that the number of input variables is limited
in order to limit the number of possible combinations.
We present in this article a package that allows an analyst to
train a drone neural network that learns the important features of a
given machine learning classifier from any chosen package such as Scikit-Learn or Keras.
The resulting parameters are then fed into a {\tt C++} algorithm that
executes in HEP production environments. The details of the
drone training are provided in Sec.~\ref{sec:dlearn}. This is followed
by real examples using simulated data in Sec.~\ref{sec:hep}. The advantages
of the approach are discussed in Sec.~\ref{sec:storage} and a summary is
provided in Sec.~\ref{sec:summary}.
\section{Drone learning}
\label{sec:dlearn}
The training of the drone network requires that the original network is
extensively probed in the parameter space in which accuracy is desired.
The principle utilised in the training of the drone is that sufficient
approximation of the original network is achieved with sufficient expansion
of the hyperparameter space of the drone, and that the same global minimum
of the loss function can be found, as reported in Ref.~\cite{losssurfaces}.
The ability of a neural network with a continuous, bounded, non-constant activation
function to approximate functions to an arbitrary degree has been indeed known
since the early 1990s~\cite{HORNIK1991251}.
\subsection{Initial drone structure and corresponding training}
The drone chosen for use in this article is initialised as a
neural network with a single intermediate (hidden) layer of 5
nodes using a standard sigmoid activation function. The network
has the number of inputs determined from the number of desired
characteristics of the decay signature. A single output is taken
from the network and a linear model is used to relate layers.
The model is made to approximate the original classifier through
a supervised learning technique, though not in the traditional sense.
Instead of a label as {\tt signal} or {\tt background} taken from the training data, the
output of the original classifier is used as a label. This means that the
loss function is defined as
\begin{align}
\mathcal{L} = \sum_i \left( F(\vec{x}_i) - G(\vec{x}_i) \right)^2,
\end{align}
where $F(\vec{x}_i)$ and $G(\vec{x}_i)$ are the outputs
of the original and drone models on datapoint
$i$ of the mini-batch, respectively. The advantage of such a loss function is per-event
equivalence of the original and drone model, in addition to equivalence
of performance. For the drone training detailed in this article, standard
mini-batch stochastic gradient descent is used. A feature of this method
is that the drone classifier does not see the labels of the training data,
but rather learns the same properties from the original classifier.
This is therefore a neural network that learns from another neural network in an
empirical manner.
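As an illustrative sketch (plain Python, not the toolkit's actual implementation), the per-mini-batch loss above amounts to:

```python
def drone_loss(original_outputs, drone_outputs):
    # Squared difference between the original classifier outputs F(x_i)
    # and the drone outputs G(x_i) over one mini-batch; note that no
    # signal/background labels enter this expression.
    return sum((f - g) ** 2
               for f, g in zip(original_outputs, drone_outputs))
```

The drone is thus trained purely against the original model's responses.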
\subsection{Model morphing during the learning phase}
In order to keep the hyperparameter space to the minimum required level,
additional degrees of freedom are added only when required.
This removes the possibility of choosing an incorrect size of the
drone network. During the learning phase, the following conditions are required
to trigger the extension of the hidden layer in the $j^{\rm th}$ epoch:
\begin{align}
\delta_{j} &\equiv |\mathcal{L}_j-\mathcal{L}_{j-1}|/\mathcal{L}_j < \kappa,\label{eq:cond1}\\
\sigma_{j} &\equiv m (1 - e^{-b(\hat{t} + n)})\delta_{j}\mathcal{L}_j \nonumber\\
\mathcal{L}_j &< \hat{\mathcal{L}} - \sigma_{j} \label{eq:cond2},
\end{align}
where $\kappa$ is the required threshold, $\sigma_{j}$ is the required minimum improvement
of the loss function and $\hat{\mathcal{L}}$ is the value of the loss function when
the hidden layer was last extended. The required improvement starts from a minimum at $n$,
increases with epoch number after previous extension $\hat{t}$ and steepness $b$
until a maximum at $m$. The precise values of the parameters
$\kappa$, $n$, $m$, $b$ are not of particular importance. Rather, the topology described by
eqs.~\ref{eq:cond1} and \ref{eq:cond2} is crucial. The relative loss function improvement,
$\delta_{j}$, can never realistically be larger than $1$ and the limit, $\kappa$, at which
no significant improvement occurs is acceptably set at 0.02 (smaller than $2\sigma$
standard deviations). The descent in loss space, $\hat{\mathcal{L}} - \mathcal{L}_j$,
is further required to be significantly large, minimizing the chance of getting stuck in
isolated local minima. The function, $\sigma_{j}$, is chosen to increase this requirement
with each epoch for two reasons: it is bounded, and it can approach its asymptote arbitrarily fast.
It scales $\delta_{j}$ such that the loss descent must be significant
before an update is triggered. Since $\delta_{j}$ is expected to decrease with epoch number,
the minimum and maximum values of $\sigma_{j}$ are chosen as follows:
\begin{align}
\sigma_{j}(\hat{t} = 0) &\equiv \min(\sigma_{j}) \equiv 2.5\delta_{j}\mathcal{L}_j \implies 5\sigma ~\text{std.dev.} \\
\sigma_{j}(\hat{t} = \infty) &\equiv \max(\sigma_{j}) \equiv 25\delta_{j}\mathcal{L}_j \implies 50\sigma ~\text{std.dev.}
\end{align}
The steepness, $b$, is chosen such that the transition from the minimum to maximum takes
on average 50 epochs. This ensures a change cannot be triggered immediately
after a previous one and the learning can still proceed if more freedom is indeed required.
Also, it allows the network to stabilize after a big change.
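As a sketch, the trigger logic of eqs.~\ref{eq:cond1} and \ref{eq:cond2} can be written as follows; the default values of $n$ and $m$ here are illustrative assumptions, chosen so that $\sigma_j$ interpolates between roughly $2.5\,\delta_{j}\mathcal{L}_j$ and $25\,\delta_{j}\mathcal{L}_j$:

```python
import math

def extension_triggered(loss_j, loss_prev, loss_hat, t_hat,
                        kappa=0.02, b=0.04, m=25.0, n=2.6):
    # loss_j, loss_prev: loss at this and the previous epoch;
    # loss_hat: loss when the hidden layer was last extended;
    # t_hat: epochs since the last extension.
    delta_j = abs(loss_j - loss_prev) / loss_j      # relative improvement
    sigma_j = m * (1.0 - math.exp(-b * (t_hat + n))) * delta_j * loss_j
    # Extend only if learning has stalled AND the descent since the
    # last extension is significantly large.
    return delta_j < kappa and loss_j < loss_hat - sigma_j
```

Both conditions must hold simultaneously before the hidden layer is extended.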
When the conditions in eqs.~\ref{eq:cond1} and \ref{eq:cond2} are met, the linear model
is updated to extend the weights matrices and bias vectors
to accommodate the layer addition.
The associated neurons are initialised with a zero weight
to ensure continuity of the loss function value.
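The zero-weight initialisation can be sketched with a minimal pure-Python model of a single-hidden-layer network (not the toolkit code). Since the new node's input weights, bias and output weight are all zero, the network output, and hence the loss value, is unchanged at the moment of extension:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_in, b_hid, w_out, b_out):
    # Single hidden layer: linear -> sigmoid -> linear.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(w_in, b_hid)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

def add_hidden_node(w_in, b_hid, w_out):
    # Append one hidden node with all associated weights set to zero,
    # ensuring continuity of the loss function value.
    return (w_in + [[0.0] * len(w_in[0])],
            b_hid + [0.0],
            w_out + [0.0])
```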
\section{High energy physics applications}
\label{sec:hep}
\subsection{B physics}
\subsubsection{Data sample}
In order to demonstrate the functionality of the toolkit, data samples generated
from the RapidSim package~\cite{rapid} are used. The interesting signal is chosen
to be the ${\ensuremath{\B^0_\squark}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace(\ensuremath{\rightarrow}\xspace\mu\mu )\phi (\ensuremath{\rightarrow}\xspace\ensuremath{K}\xspace\PK )$ decay, and the background is
the ${\ensuremath{\D^0}}\xspace\ensuremath{\rightarrow}\xspace\pi\pi\pi\pi$ decay. A total of 10000 candidates is generated for each decay.
\subsubsection{Training of the original classifier}
\label{sec:orig_training}
The machine learning classifier, using the Keras framework~\cite{keras,adam},
is constructed as a locally connected first layer (in which filters are applied
to different regions in contrast to a full convolution layer), followed by a pooling layer,
and a standard dense layer. The exact definition can be found below.
\begin{lstlisting}
from keras.models import Sequential
from keras.layers import (LocallyConnected1D,
                          GlobalMaxPooling1D, Dense)

classifier = Sequential()
classifier.add(LocallyConnected1D(
    filters = 90, kernel_size = 2,
    activation = 'sigmoid',
    input_shape = (len(setTrain[0]), 1)))
classifier.add(GlobalMaxPooling1D())
classifier.add(Dense(30, activation = 'sigmoid'))
classifier.add(Dense(1, activation = 'sigmoid'))
classifier.compile(optimizer = 'adam',
    loss = 'binary_crossentropy',
    metrics = ['accuracy'])
\end{lstlisting}
The neural network is trained using kinematic properties of the respective decays.
These include the pseudorapidity, $\eta$, and momentum transverse to the direction of the
input proton beams, \mbox{$p_{\rm T}$}\xspace, of the decaying particle. In addition, the minimum and maximum \mbox{$p_{\rm T}$}\xspace and $\eta$
of the final state particles are used. The signal and background distributions of the input variables
are shown in Fig.~\ref{fig:inputs}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.33\textwidth]{pt_comp.pdf}
\includegraphics[width=0.33\textwidth]{eta_comp.pdf}
\includegraphics[width=0.33\textwidth]{minpt_comp.pdf}
\includegraphics[width=0.33\textwidth]{maxpt_comp.pdf}
\includegraphics[width=0.33\textwidth]{mineta_comp.pdf}
\includegraphics[width=0.33\textwidth]{maxeta_comp.pdf}
\caption{\small Comparison of the signal and background distributions
used to train the Keras B decay classifier.}
\label{fig:inputs}
\end{figure*}
In the training of the original classifier, half of the data is
reserved in order to test for overtraining.
\subsection{Jet separation}
\label{sec:hepGPD}
\subsubsection{Data sample}
A further example demonstrates a classifier's ability to separate different
kinds of jets. The data sample to show this has been generated from Pythia~\cite{Sjostrand:2007gs}
simulating pp collisions at 14\ifthenelse{\boolean{inbibliography}}{\ensuremath{~T\kern -0.05em eV}\xspace}{\ensuremath{\mathrm{\,Te\kern -0.1em V}}}\xspace.
The jets themselves are reconstructed in the Rivet analysis framework~\cite{Buckley:2010ar}
and are created with the FastJet~\cite{Cacciari:2011ma} package using the $K_t$ algorithm~\cite{Salam:2007xv}
(the definition
of the $K_t$ variable and a review of jet reconstruction algorithms
may be found in Refs.~\cite{kt} and~\cite{Atkin:2015msa}, respectively).
A jet \mbox{$p_{\rm T}$}\xspace requirement of 20\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace is imposed on all jets.
All other parameters remain at the default values for Rivet version 2.5.4.
The signal sample is chosen to correspond to a $qg\ensuremath{\rightarrow}\xspace Wq$ type of interaction,
whereas the background is chosen to correspond to a $gg \ensuremath{\rightarrow}\xspace gg$ type. These correspond
to the Rivet analyses named {\tt MC\_WJETS} and {\tt MC\_QCD}, respectively.
Jets that originate from gluons in the final state form a background to many
analyses, therefore efficient rejection of such processes is important in making
measurements~\cite{Komiske:2016rsd}.
\subsubsection{Training of the original classifier}
The machine learning classifier chosen is also a Keras-based convolutional neural net,
constructed in a similar way to that described in Sec.~\ref{sec:orig_training}.
\begin{lstlisting}
classifier = Sequential()
classifier.add(LocallyConnected1D(
    filters = 90, kernel_size = 2,
    activation = 'sigmoid',
    input_shape = (len(setTrain[0]), 1)))
classifier.add(GlobalMaxPooling1D())
classifier.add(Dense(30, activation = 'sigmoid'))
classifier.add(Dense(1, activation = 'sigmoid'))
classifier.compile(optimizer = 'adam',
    loss = 'binary_crossentropy',
    metrics = ['accuracy'])
\end{lstlisting}
The training data is based on the properties of the measured jets. The list of features
consists of the azimuthal angle, $\phi$, and $\eta$ of the jet; the spread of the neutral and
charged contributions to the jet in the $\phi$ and $\eta$ variables; along with average and energy-weighted
kinematic variables. In total, 17 different features are used.
The signal and background distributions of the input variables
are shown in Fig.~\ref{fig:inputsGPD}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.33\textwidth]{input_gpd_eta.png}
\includegraphics[width=0.33\textwidth]{input_gpd_eta_spread_charged.png}
\includegraphics[width=0.33\textwidth]{input_gpd_eta_spread_neutral.png}
\includegraphics[width=0.33\textwidth]{input_gpd_phi.png}
\includegraphics[width=0.33\textwidth]{input_gpd_phi_spread_charged.png}
\includegraphics[width=0.33\textwidth]{input_gpd_phi_spread_neutral.png}
\includegraphics[width=0.33\textwidth]{input_gpd_hAvEta.png}
\includegraphics[width=0.33\textwidth]{input_gpd_hAvPhi.png}
\includegraphics[width=0.33\textwidth]{input_gpd_hEnergy.png}
\includegraphics[width=0.33\textwidth]{input_gpd_hSumPT.png}
\includegraphics[width=0.33\textwidth]{input_gpd_nAvEta.png}
\includegraphics[width=0.33\textwidth]{input_gpd_nAvPhi.png}
\includegraphics[width=0.33\textwidth]{input_gpd_nEnergy.png}
\includegraphics[width=0.33\textwidth]{input_gpd_nSumPT.png}
\includegraphics[width=0.33\textwidth]{input_gpd_kt2.png}
\includegraphics[width=0.33\textwidth]{input_gpd_mass.png}
\includegraphics[width=0.33\textwidth]{input_gpd_mom.png}
\caption{\small Comparison of the signal and background distributions
used to train the Keras jet separation classifier.}
\label{fig:inputsGPD}
\end{figure*}
\subsection{Drone conversions}
The drone neural networks are trained following the procedure outlined in Sec.~\ref{sec:dlearn}.
In total, 300 epochs are used with
the learning rate of the stochastic gradient descent set to 0.05.
The value of $\kappa$ is chosen to be 0.02, the value of $b$ is chosen to
be 0.04 and the value of $m$ is chosen to be 50.
The loss history of the drone approximations is shown in Fig.~\ref{fig:loss}
as a function of epoch number.
The convergence is also shown in Fig.~\ref{fig:iterdiff}, which shows
the difference in the value of the loss function with respect to the previous
epoch. The epochs that trigger an increase in the number of hyperparameters
are also overlaid.
In total, an increase was triggered 10 times, both for the B decay case and
for the jet separation classifier.
The total number
of parameters in the final drone neural networks are therefore 121 and 286 for the B decay drone
and the jet separation drone, respectively. It is interesting
to note that with the algorithm design of Sec.~\ref{sec:dlearn}, the introduction
of the new parameter space causes the drone networks to learn faster, as evidenced by
increases in Fig.~\ref{fig:iterdiff} with continuing descent of the loss functions.
The performance of the original classifiers compared to that of the drone classifiers is shown in Fig.~\ref{fig:roc}.
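The quoted totals are consistent with single-hidden-layer drones that grew from 5 to 15 hidden nodes, assuming each of the 10 triggers added one node, with 6 input features in the B decay case and 17 in the jet separation case:

```python
def drone_param_count(n_inputs, n_hidden):
    # Single hidden layer with one output node:
    # input weights + hidden biases + output weights + output bias.
    return n_inputs * n_hidden + n_hidden + n_hidden + 1
```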
\begin{figure*}[t]
\centering
\includegraphics[width=0.45\textwidth]{loss_history.pdf}
\includegraphics[width=0.45\textwidth]{loss_history_gpd.pdf}
\caption{\small
Convergence of the loss function during the drone training
for the case of the B
decay (left) and jet separation (right) examples.
}
\label{fig:loss}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=0.45\textwidth]{diff_history.pdf}
\includegraphics[width=0.45\textwidth]{diff_history_gpd.pdf}
\caption{\small
Difference in the loss function with respect to the previous iteration
for the case of the B
decay (left) and jet separation (right) examples.
The green triangles
depict the epoch number in which the number of hyperparameters was increased.
}
\label{fig:iterdiff}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=0.45\textwidth]{roc.pdf}
\includegraphics[width=0.45\textwidth]{roc_gpd.pdf}
\caption{\small
Signal efficiency versus background rejection of the original classifier (red) and drone
approximation (blue)
for the case of the B
decay (left) and jet separation (right) examples.
}
\label{fig:roc}
\end{figure*}
\section{Drone storage, transferability and suitability
for low-latency environments}
\label{sec:storage}
The hyperparameters and structure of the drone are required to be
portable and easily stored for later usage. For this the {\tt JSON} format was chosen as
mediator. It is human-readable and easily accessible in the {\tt Python} and {\tt C++}
environments commonly used in HEP. Thus, it is readily deployable in both personal and production environments.
A tool is provided to export and save a drone neural network to a {\tt JSON}
formatted file which preserves the input \& output structure,
the layers and nodes, all hyperparameters and activation functions.
The drone configuration is later read in by an equivalent tool into the production software framework,
which then constructs a class object based on the Keras model. The {\tt C++} class implements
a flexible member structure that is capable of completely reproducing the original drone. The production
implementation may be used at all data reduction levels, from a low-latency trigger
up to the latest stages of data handling and output.
A major advantage of this method is that analysts and users have the full freedom of latest developments
of industry standards, but need only to support a more manageable implementation in the low-latency
software. This is further aided by projects such as ONNX~\cite{ONNX}, which enable classifiers from a wider
range of software packages to be converted to a framework in which an approximation converter
is available.
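A minimal sketch of such an export is shown below; the field names are illustrative, and the toolkit's actual {\tt JSON} schema may differ in detail:

```python
import json

def export_drone(path, layers):
    # Each layer is (weights, biases, activation); preserving these is
    # sufficient to reconstruct the drone in another framework.
    model = {'layers': [{'weights': w, 'biases': b, 'activation': act}
                        for (w, b, act) in layers]}
    with open(path, 'w') as f:
        json.dump(model, f)

def load_drone(path):
    with open(path) as f:
        return json.load(f)['layers']
```

The human-readable file can then be parsed by an equivalent reader in the {\tt C++} production framework.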
The identical performance shown in Fig.~\ref{fig:roc} is clearly the ideal scenario, even though
such good agreement is not always required to give better results than other low-latency methods.
However, it is worth noting that the drones created in the examples of Sec.~\ref{sec:hep} are faster to
evaluate. The numbers of hyperparameters are compared in Table~\ref{tab:comp_param}, and the time taken for
each model evaluation, determined on a desktop with an Intel Core i5-7267U processor, is shown in Table~\ref{tab:comp}.
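Taking ratios of the per-evaluation timings in Table~\ref{tab:comp}, the drones are roughly a factor of eight faster:

```python
# Per-evaluation timings (seconds) quoted in the text.
original = {'B decay': 3.87e-4, 'jet separation': 4.79e-4}
drone    = {'B decay': 4.8e-5,  'jet separation': 6.2e-5}

speedup = {name: original[name] / drone[name] for name in original}
```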
\begin{table}[t]
\centering
\caption{Hyperparameter number comparisons of the original models and drone
approximations for the HEP examples. \label{tab:comp_param}}
\begin{tabular}{l|rr}
& original model & drone \\
\hline
B decay & 4,111 & 121 \\
jet separation & 7,081 & 286 \\
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\caption{Processing time comparisons of the original models and drone
approximations for the HEP examples. \label{tab:comp}}
\begin{tabular}{l|rr}
& original model & drone \\
\hline
B decay & $3.87 \times 10^{-4}$ s & $4.8 \times 10^{-5}$ s \\
jet separation & $4.79 \times 10^{-4}$ s & $6.2 \times 10^{-5}$ s \\
\end{tabular}
\end{table}
\section{Summary}
\label{sec:summary}
It has been demonstrated that for the case of a high energy physics
event selection application, a drone neural network is able to accurately
approximate and learn the features of a neural network with a different
structure. The proposed algorithm design allows the drone to learn the
aforementioned features without ever having access to the training data,
or indeed any data, but only with appropriate questioning of the original model.
The equivalency of the outputs of the drone and original model enables an
analyst to treat both the original and the drone in the same way. The creation
of a drone in a standardised form permits an analyst to use any desired machine-learning
package to isolate a decay signature, and from this create a classifier
guaranteed to be suitable for execution in the {\tt C++} real-time data selection frameworks.
\section*{Acknowledgements}
\noindent
We acknowledge support from
the NWO (The Netherlands) and STFC (United Kingdom).
We are indebted to the communities behind the multiple open
source software packages on which we depend.
This project has received funding from the European Union’s Horizon
2020 research and innovation programme under the Marie Skłodowska-Curie
grant agreement No 676108.
\section{Claims and Insights}
This short note is accompanied by a Google Colab
notebook\footnote{%
Available
at~\href{https://github.com/google-research/google-research/tree/master/m\_theory/colab/hamming78.ipynb}{https://github.com/google-research/google-research/tree/master/m\_theory/colab/hamming78.ipynb}, and also alongside the arXiv source code of this article.
The reader can launch this via web browser by navigating to
\href{https://colab.research.google.com/}{https://colab.research.google.com/},
selecting `GitHub' as source for a new notebook, and pasting the above url.}
(based on TensorFlow~\cite{Abadi2016}) that
numerically demonstrates the validity of each of these claims:
\begin{enumerate}
\item (From \cite{Bobev:2019dik}, Eq.\ (7.5)): One can embed
$\mathcal{M}_{14}:=(SU(1,1)/U(1))^{\times7}$ in such a way into
$E_{7(+7)}$ that the holomorphic superpotential is in $1{:}1$
correspondence with the code words of the 1-bit error correcting
(7,4,3) Hamming code~\cite{hamming1950}:
\begin{equation}
\label{eq:hamming7}
\begin{aligned}
\mathcal{W}_7 := \\ \\ \\ \\
\end{aligned}\quad
\begin{gathered}
+\zeta_1\zeta_2\zeta_3\zeta_4\zeta_5\zeta_6\zeta_7\\
+ \zeta_3\zeta_5\zeta_6\zeta_7
+ \zeta_2\zeta_4\zeta_5\zeta_7
+ \zeta_2\zeta_3\zeta_4\zeta_6
+ \zeta_1\zeta_3\zeta_4\zeta_5
+ \zeta_1\zeta_4\zeta_6\zeta_7
+ \zeta_1\zeta_2\zeta_5\zeta_6
+ \zeta_1\zeta_2\zeta_3\zeta_7
\\
+ \zeta_1\zeta_2\zeta_4 + \zeta_1\zeta_3\zeta_6 + \zeta_1\zeta_5\zeta_7
+ \zeta_2\zeta_6\zeta_7 + \zeta_2\zeta_3\zeta_5 + \zeta_3\zeta_4\zeta_7
+ \zeta_4\zeta_5\zeta_6\\
+ 1 \,.
\end{gathered}
\end{equation}
\item Expanding the $(7,4,3)$~Hamming code with a parity bit to the
self-dual $(8,4,4)$~Hamming code, we can define a corresponding
hypothesized holomorphic superpotential as follows, by adding
a factor~$\zeta_8$ to those summands that have an odd number
of~$\zeta$-factors:
\begin{equation}
\label{eq:hamming8}
\begin{aligned}
\mathcal{W}_8 :=\\ \\ \\ \\
\end{aligned}\quad
\begin{gathered}
+\zeta_1\zeta_2\zeta_3\zeta_4\zeta_5\zeta_6\zeta_7\zeta_8 \\
+\zeta_3\zeta_5\zeta_6\zeta_7
+\zeta_2\zeta_4\zeta_5\zeta_7
+\zeta_2\zeta_3\zeta_4\zeta_6
+\zeta_1\zeta_3\zeta_4\zeta_5
+\zeta_1\zeta_4\zeta_6\zeta_7
+\zeta_1\zeta_2\zeta_5\zeta_6
+\zeta_1\zeta_2\zeta_3\zeta_7
\\
+\zeta_1\zeta_2\zeta_4\zeta_8 +\zeta_1\zeta_3\zeta_6\zeta_8
+\zeta_1\zeta_5\zeta_7\zeta_8 +\zeta_2\zeta_6\zeta_7\zeta_8
+\zeta_2\zeta_3\zeta_5\zeta_8 +\zeta_3\zeta_4\zeta_7\zeta_8
+\zeta_4\zeta_5\zeta_6\zeta_8\\
+1 \,.
\end{gathered}
\end{equation}
Observing that the scalar potential corresponding to such a holomorphic superpotential
on $\left(SU(1,1)/U(1)\right)^{\times 8}$ indeed does have many
equilibria that align nicely (after rescaling the cosmological
constant) with equilibria reported in~\cite{fischbacher2009many}
for~$\mathcal{N}=16,\;D=3\;SO(8)\times SO(8)$ supergravity,
one may conjecture that one can indeed obtain this
``$(8,4,4)$~Hamming code holomorphic superpotential''
from the~$A_1$-tensor of maximal $D=3$ supergravity.
This indeed holds -- the details can be found in appendix~\ref{app:V_from_W}.
\item \label{claim:hamming8} Starting from the commonly used roots for the $\mathfrak{e}_{8(+8)}$ algebra,
where the~$120-8=112$ roots of the compact~$\mathfrak{spin}(16)$ subalgebra are given by
$(\pm 1; \pm 1;0;0;0;0;0;0) + \{\text{permutations}\}$, and the
$128$~``$\mathfrak{spin}(16)$-spinor'' roots
corresponding to the generators used to define the scalar manifold of
$SO(8)\times SO(8)$ supergravity~\cite{Nicolai:2000sc,Nicolai:2001sv},
$(\pm \frac{1}{2}; \pm \frac{1}{2};\pm \frac{1}{2};\pm \frac{1}{2};\pm \frac{1}{2};\pm \frac{1}{2};\pm \frac{1}{2};\pm \frac{1}{2})$
(where the total number of $(-)$ signs is \emph{even}),
it is possible to choose eight positive roots from the~$128$ such
that when adding the corresponding eight negative roots to the set,
no pair taken from these 16 roots have the same sign in exactly
two positions\footnote{This would be a requirement for the
commutator of the associated ladder operators to belong to
$\mathfrak{spin}(16)$, but not the $\mathfrak{u}(1)^8$ generated by
the commutators of the ladder operators for each positive root and
its associated negative root.}.
For any such choice, adding the corresponding negative roots,
and encoding a $(+)$-sign as 1 and a $(-)$-sign as 0 (or vice versa) gives us
sixteen eight-bit code words that correspond to a self-dual $(8, 4, 4)$ Hamming
code.\footnote{A related well-known observation is that scaling the self-dual $E_8$
lattice to integer coordinates and then taking coordinates modulo 2 yields the $(8, 4, 4)$
self-dual Hamming code. Doing the same for the $E_7$ root lattice yields the
$(7, 3, 4)$ `little Hamming code', while doing this for the dual $E_7$ weight
lattice ($E_7^*$) yields the $(7, 4, 3)$ Hamming code, see e.g.~\cite{conway2013sphere,belitz2011applications}.}
These sixteen roots then correspond to a $\mathfrak{sl}(2)^{\times8}$ subalgebra
of~$\mathfrak{e}_8$.
\item Performing~$\omega$-deformation~\cite{dallagata2012vacua,dall2012evidence,deWit:2013ija,Dall_Agata_2014} of
$\mathcal{N}=8,\,D=4\,SO(8)$ supergravity~\cite{deWit:1982bul},
the superpotential in Eq.~(\ref{eq:hamming7}) acquires phase factors
$\phi:=\exp(-i\omega)$ on summands with an \emph{odd} number
of~$\zeta$-factors and $\bar\phi=\exp(+i\omega)$ on summands with an \emph{even} number
of~$\zeta$-factors:
\begin{equation}
\label{eq:hamming7c}
\begin{aligned}
\mathcal{W}_{7c} :=\\ \\ \\ \\
\end{aligned}\quad
\begin{gathered}
+\zeta_1\zeta_2\zeta_3\zeta_4\zeta_5\zeta_6\zeta_7\phi\\
+\zeta_3\zeta_5\zeta_6\zeta_7\bar\phi
+\zeta_2\zeta_4\zeta_5\zeta_7\bar\phi
+\zeta_2\zeta_3\zeta_4\zeta_6\bar\phi
+\zeta_1\zeta_3\zeta_4\zeta_5\bar\phi
+\zeta_1\zeta_4\zeta_6\zeta_7\bar\phi
+\zeta_1\zeta_2\zeta_5\zeta_6\bar\phi
+\zeta_1\zeta_2\zeta_3\zeta_7\bar\phi
\\
+\zeta_1\zeta_2\zeta_4\phi +\zeta_1\zeta_3\zeta_6\phi
+\zeta_1\zeta_5\zeta_7\phi +\zeta_2\zeta_6\zeta_7\phi
+\zeta_2\zeta_3\zeta_5\phi +\zeta_3\zeta_4\zeta_7\phi
+\zeta_4\zeta_5\zeta_6\phi\\
+\bar\phi \,.
\end{gathered}
\end{equation}
Observing that the scalar potential does not change if the
superpotential gets multiplied by a complex number of magnitude~1,
and multiplying the above expression with~$\phi$ shows
$\bar\phi\mathcal{W}_{7c}=\mathcal{W}_{8|\zeta_8=\phi^2}$. Indeed,
one finds that for~$\omega=\pi/8$, the corresponding scalar
potential on~$(SU(1,1)/U(1))^{\times7}$ has equilibria for which the
cosmological constants closely correspond to known solutions of the
`dyonic SO(8)' gauging with~$\omega=\pi/8$~\cite{Berman_SO8c_2021}.
The relation between
the scalar potentials and superpotentials is given in
appendix~\ref{app:V_from_W}.
\item The above properties suggest that, at least
on~$(SL(2)/U(1))^{\times7}\sim(SU(1,1)/U(1))^{\times7}$, we
might be able to retrieve the scalar potential of $D=4\,SO(8)$
supergravity from that of~$D=3\,SO(8)\times SO(8)$ supergravity by
taking some suitable~$\zeta_8\to 1$ limit\footnote{Given that
the~$\zeta$ parameters are coordinates in the Poincar\'e disc model
of the hyperbolic plane, this is at infinite distance from the origin.}
-- and correspondingly,
get the scalar potential of dyonic $D=4\,SO(8)_c$ supergravity by
taking some $\zeta_8\to \exp(i\omega)$ limit. Hence, it seems natural to
expect that a corresponding limit may exist for the full scalar
potential: Using the $E_{7(+7)}\times SL(2)\subset E_{8(+8)}$ embedding
for which we
have~${\bf 248}\mapsto({\bf 133}, {\bf 1})+({\bf 56}, {\bf 2})+({\bf 1}, {\bf 3})$,
the~$SL(2)$ becomes the eighth~$SL(2)$ in~$E_{8}$ that commutes with the
seven~$SL(2)$s whose noncompact directions yield~$\mathcal{M}_{14}$.
Considering the triality-symmetric constructions
of~$\mathfrak{e}_7=\mathfrak{spin}(8)+{\bf 35}_v+{\bf 35}_s+{\bf 35}_c$ and
$\mathfrak{e}_8=\mathfrak{spin}(8)^L+\mathfrak{spin}(8)^R+({\bf 8}^L_v,{\bf 8}^R_v)+({\bf 8}^L_s,{\bf 8}^R_s)+({\bf 8}^L_c,{\bf 8}^R_c)$,
it is clear how $\mathfrak{e}_7+\mathfrak{sl}(2)$ is obtained from the
`symmetric' pieces of the decomposition of $\mathfrak{e}_8$ with respect
to the\footnote{Given that we can apply a triality relabeling on one of the
$\mathfrak{spin}(8)$ algebras, there is more than one way to take a diagonal.
The relevant diagonal here does not involve a triality rotation.}
diagonal~$\mathfrak{spin}(8)$ subalgebra of~$\mathfrak{spin}(8)^L+\mathfrak{spin}(8)^R$.
Using the corresponding embedding of the~$\mathfrak{e}_{7(7)}+\mathfrak{sl}(2)$
$D=4$ scalar manifold coset generators~${\bf 35}_s+{\bf 35}_c+{\bf 1}_s+{\bf 1}_c$
(`symmetric traceless~$8\times 8$ matrices over the spinors and co-spinors
from $\mathfrak{e}_7$ plus multiples-of-the-identity trace-parts
from $\mathfrak{sl}(2)$') into the space of $D=3$ scalar manifold coset
generators~$({\bf 8}^L_s,{\bf 8}^R_s)+({\bf 8}^L_c,{\bf 8}^R_c)$ via a linear
function~$E(v_{70}, s, c): \mathbb{R}^{70+2}\to\mathbb{R}^{128}$, one finds for
the $D=3$ scalar potential of~$SO(8)\times SO(8)$ supergravity:
$g_{D=3}^{-2}V_{D=3}(E(\vec 0, s, 0))<0$, and for $s>0$: $|\nabla V|>0$.
These are non-equilibrium points with negative cosmological constant.
If we now introduce an auxiliary function\footnote{
The factor $-6$ is for alignment with the usual normalization of the~$D=4$ scalar potential.
}~$H:\mathbb{R}^{70+1}\to\mathbb{R}$ as:
\begin{equation}
H(\vec v, s):= (-6)\,\cdot\,\frac{V_{D=3}(E(\vec v, s, 0))}{V_{D=3}(E(\vec 0, s, 0))},
\end{equation}
then we may conjecture that $H$ is related to the $D=4$ scalar potential of~$SO(8)$
supergravity~$g_{D=4}^{-2}V_{D=4}: \mathbb{R}^{70}\to\mathbb{R}$ via:
\begin{equation}
V_{D=4}(\vec v)=\lim_{s\to\infty}H(\vec v, s).
\end{equation}
Numerical evidence strongly suggests that this hypothesis holds
\emph{on the full 70-dimensional scalar manifold of~$\mathcal{N}=8,\,D=4\,SO(8)$ Supergravity}!
\item The generalization to dyonic-$SO(8)$ also holds. Specifically, with
\begin{equation}
\label{eq:correspondence}
H_c(\vec v, s,\omega):= (-6)\,\cdot\,\frac{V_{D=3}(E(\vec v, s\cos(2\,\omega), s\sin(2\,\omega)))}{V_{D=3}(E(\vec 0, s\cos(2\,\omega), s\sin(2\,\omega)))},
\end{equation}
we find:
\begin{equation}
V_{D=4}(\vec v, \omega)=\lim_{s\to\infty}H_c(\vec v, s, \omega).
\end{equation}
(As one would expect from the~$\omega$-invariance of
the~$SO(8)$-symmetric vacuum of $SO(8)$ supergravity, one actually
finds~$V_{D=3}(E(\vec 0, s\cos(2\,\omega), s\sin(2\,\omega)))=V_{D=3}(E(\vec 0, s, 0))$,
so the above expression, presented `in symmetric form', can be simplified.)
Appendix~\ref{app:claim6log} shows the numerical evidence,
verifiable by running the accompanying Google Colab notebook.
\end{enumerate}
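The code-word structure underlying claims 1--3 can be verified directly; the following sketch (ordinary Python, independent of the accompanying Colab notebook) checks that the monomial supports of $\mathcal{W}_7$ in Eq.~(\ref{eq:hamming7}) form the $(7,4,3)$ Hamming code, and that the parity extension of Eq.~(\ref{eq:hamming8}) yields the self-dual $(8,4,4)$ code:

```python
from itertools import product

# Supports of the sixteen monomials of W_7 (indices of the zeta factors).
W7_SUPPORTS = [
    (1, 2, 3, 4, 5, 6, 7),
    (3, 5, 6, 7), (2, 4, 5, 7), (2, 3, 4, 6), (1, 3, 4, 5),
    (1, 4, 6, 7), (1, 2, 5, 6), (1, 2, 3, 7),
    (1, 2, 4), (1, 3, 6), (1, 5, 7), (2, 6, 7),
    (2, 3, 5), (3, 4, 7), (4, 5, 6),
    (),
]

def word(support, n=7):
    return tuple(1 if i + 1 in support else 0 for i in range(n))

def is_linear(code):
    # A binary code is linear iff it is closed under bitwise XOR.
    return all(tuple(a ^ b for a, b in zip(u, v)) in code
               for u, v in product(code, repeat=2))

W7 = {word(s) for s in W7_SUPPORTS}
# Appending a parity bit to each word gives the code underlying W_8.
W8 = {w + (sum(w) % 2,) for w in W7}
```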
\section{Discussion}
The maximal (32 supercharges) gauged $D=2+1$ supergravity of Nicolai
and Samtleben~\cite{Nicolai:2000sc,Nicolai:2001sv} so far has been
mostly regarded as an exotic curiosity, as to this date there is no
known way to embed it into M theory. Correspondingly, it has perhaps
not yet received as much attention as this note suggests it should
have -- given that we observe that it indeed seems to be closely
related to the $S^7$-compactification of 11-dimensional supergravity,
i.e. the de Wit-Nicolai model -- as well as the dyonic deformations
of that model~\cite{dall2012evidence}, for which there
is currently no known M theory embedding
either~\cite{lee2015new,Inverso_2017}. Naturally, this then means that
taking the limit in a different way will also allow us to retrieve
scalar potentials of other gaugings with already-known M theory
embeddings, such as that of `dyonic ISO(7)
supergravity'~\cite{Guarino_2015,Guarino_2016}.
\subsection{Early Clues}
As it is often useful to understand the intuition that underlies an
idea, it may be appropriate to list some major clues that contributed
to generating the idea of exploring the final claim in the list
presented above. In chronological order, these clues were:
\begin{itemize}
\item The (stable and also unstable) equilibria of maximal
supergravities often have remarkable similarities across different
dimensions. Notably, this also holds in particular for~$D=4$ and $D=3$.
For example, whereas maximal $D=5$ supergravity has an
$SU(2)\times U(1)\;\mathcal{N}=2$
vacuum, maximal $D=4$ supergravity has an~$SU(3)\times U(1)\;\mathcal{N}=2$
vacuum; in~$D=4$, we see a $G_2\;\mathcal{N}=1$ vacuum,
whereas in~$D=3$, we find $G_2\times G_2\;\mathcal{N}=(1,1)$, etc. --
see~\cite{Khavaev:1998fb,warner1983some,fischbacher2009many}.
\item As the problem of finding equilibria can be expressed entirely
in terms of geometric invariants, the relevant properties of the
equilibria can be expressed in terms of algebraic numbers.
There is a general tendency for the~$D=3$ expressions to often have
remarkably low algebraic complexity
(see e.g.~\cite{Fischbacher:2002fx}), just as if~$D=4$ had
to rebalance terms to make up for some loss of a more fundamental
symmetry.
\item John Baez's article about triality and the exceptional
groups~\cite{baez:e8} clearly was inspirational for structuring
the code that does calculations in $E_7$ in such a way that it
emphasizes the role of triality, despite virtually all of the other
literature only using (anti)self-dual four-form language for $E_7$.
\item Closely studying the long list of equilibria of~$SO(8)$
supergravity~\cite{Comsa:2019rcz} reveals some remarkable coincidences,
such as the existence of a triplet of equilibria with residual symmetry
$SO(4)$ where embeddings of~$SO(4)$ into~$SO(8)$ are related by
triality. Likewise, there are closely-related-via-triality pairs
of solutions, such as
\xhref{S0668704}{https://arxiv.org/src/1906.00207v4/anc/extrema/S0668740/physics.pdf}--\xhref{S0698771}{https://arxiv.org/src/1906.00207v4/anc/extrema/S0698771/physics.pdf},
\xhref{S0869596}{https://arxiv.org/src/1906.00207v4/anc/extrema/S0869596/physics.pdf}--\xhref{S0983994}{https://arxiv.org/src/1906.00207v4/anc/extrema/S0983994/physics.pdf},
\xhref{S1068971}{https://arxiv.org/src/1906.00207v4/anc/extrema/S1068971/physics.pdf}--\xhref{S1301601}{https://arxiv.org/src/1906.00207v4/anc/extrema/S1301601/physics.pdf},
etc., that are related by triality (see
also~\cite{Borghese:2013dja}, as well as~\cite{fischbacher2010new}).
\item There have been various earlier indications that the 7-bit
Hamming code is useful to understand some nontrivial aspects of M theory~\cite{Borsten_2012,Gunaydin:2020ric}.
\end{itemize}
\subsection{Outlook}
It certainly is bemusing to observe how intuition related to binary
error correcting codes did provide a relevant clue here towards
uncovering a relation between $D=4$ and $D=3$ supergravities --
especially with a view on Wheeler's ``it from bit''
essay~\cite{Wheeler:1989ftm} which proposes an agenda that includes
``\emph{[Translating] the quantum versions of string theory and of
Einstein's geometrodynamics from the language of continuum to the
language of bits}''. One may wonder whether there are more
interesting insights that could be obtained by focusing on the
relation between remarkable lattices and binary codes -- noting
however that the (even unimodular Lorentzian) $E_{10}$ root
lattice~\cite{Gebert:e10} does not directly correspond to an error
correcting binary code -- likely due to the implicit notion of
`Euclidean distance' in the definition of error-correcting codes. This
might, however, be fixable, and suggests that a study of the relation
between~$\mathfrak{e}_{10}$ and generalized binary codes might bear
fruit.
While our focus here was exclusively on the scalar potential, this is
of course closely linked to the entire structure of the model
supersymmetry. Nominally, we are here observing a correspondence
between $D=3$ and $D=4$ supergravity in some ``AdS radius goes to
zero'' (i.e. $g^{-2}V\to-\infty$) limit. To do this, we had to fix,
ad hoc, one scalar parameter and move it towards infinity without
Supergravity offering a mechanism to stabilize this configuration. We
may, at this point, only speculate whether M theory also in this
setting ``fights against being squeezed'' by growing new spatial
dimensions via some tower of massive excitations (which would mean:
degrees of freedom not present in the supergravity truncation)
collapsing to zero mass. Given our current understanding of M theory,
this speculation is however too outlandish to be taken seriously.
More tangibly, the observation that there is an~$SO(8)$ subgroup
of~$E_{8(8)}$ that rotates the eight commuting~$SL(2)$s may prove
useful to extract additional information about the structure of
the $D=4$ potential, given that this~$SO(8)$ cannot be a subgroup
of~$E_{7}$ (since it mixes the seven $U(1)$s sitting inside $E_7$
with the one outside). This might lead to an explanation for some
observations about the equilibria of the~$D=4$ scalar potential
that are currently hard to explain, such as the high degeneracies
in the mass spectra of the
equilibrium~\xhref{S1800000}{https://arxiv.org/src/1906.00207v4/anc/extrema/S1800000/physics.pdf}
\emph{despite complete breaking of~$SO(8)$} with zero residual
symmetry -- neither Lie nor discrete. Signs of a hidden~$E_{8(+8)}$
symmetry in maximal~$D=4$ supergravity are, of course,
not new~(e.g.~\cite{Ananth:2017nbi}),
and so the hope is that the rather concrete new puzzle piece
explained in this work will lead to new angles of attack to
resolve the question about the underlying symmetries of M theory.
\vskip2em
\noindent
\noindent{\bf Acknowledgments}
Thomas Fischbacher would like to thank Moritz Firsching for
independently confirming claim 3, and also Jyrki Alakuijala, George
Toderici, Ashok Popat, Rahul Sukthankar, Jay Yagnik, and Jeff Dean for
on-going support and encouragement of research that comprises
unusual but scientifically successful applications of TensorFlow. We
also would like to thank Gianluca Inverso, David Berman, Nikolay
Bobev, Fridrik Freyr Gautason, and Hermann Nicolai for useful
discussions. Krzysztof Pilch is supported in part by DOE grant
DE-SC0011687.
\section{Introduction and main results}
\subsection*{Reflected Brownian motion in orthants}
Reflected Brownian motion (RBM) in orthants $\mathbb R_+^d$ is a fundamental stochastic process. Starting from the eighties, it has been studied in depth, with focuses on its definition and semimartingale properties \cite{VaWi-85,Wi-85b,Wi-95}, its recurrence or transience \cite{Wi-85a,HoRo-93,Ch-96,BrDaHa-10,Br-11}, the possible particular (e.g., product) form of its stationary distribution \cite{HaWi-87,DiMo-09}, the asymptotics of its stationary distribution \cite{HaHa-09,DaMi-11}, its Lyapunov functions \cite{DuWi-94,Sa-17}, its links with other stochastic processes \cite{LG-87,Du-04,Le-17}, its use to approximate large queuing networks \cite{Ha-78,Fo-84,BaFa-87}, numerical methods to compute the stationary distribution \cite{DaHa-92}, links with complex analysis \cite{Fo-84,BaFa-87,FrRa-19,BMEPFrHaRa-21}, PDEs \cite{HaRe-81a}, etc. The RBM is characterized by a covariance matrix $\Sigma$, a drift vector $\mu$ and a reflection matrix $R$. We will provide in Section~\ref{sec:absorption} a precise definition. While $\Sigma$ and $\mu$ correspond to the Brownian behavior of the process in the interior of the cone, the matrix $R$ describes how the process is reflected on the boundary faces of the orthant. In the semimartingale case, RBM admits a simple description using local times on orthant faces, see~\eqref{eq:RBM_semimart}.
\subsection*{Dimensions $1$ and $2$}
The techniques to study RBM in an orthant very heavily depend on the dimension. In dimension $1$, RBM with zero drift in the positive half-line $\mathbb R_+$ is equal, in distribution, to the absolute value of a standard Brownian motion, via the classical Tanaka formula; if the drift is non-zero, the RBM in $\mathbb R_+$ is connected to the so-called bang-bang process \cite{Sh-81}. Most of the computations can be performed explicitly, using closed-form expressions for the transition kernel.
The case of dimension $2$ is historically the one which attracted the most attention, and is now well understood. Thanks to a simple linear transform, RBM in a quadrant with an arbitrary covariance matrix is equivalent to RBM in a wedge with identity covariance, see~\cite[Appendix]{FrRa-19}. The very first question is to characterize the parameters of the RBM (opening of the cone $\beta$ and reflection angles $\delta,\varepsilon$, see Figure~\ref{fig:dim-2}) leading to a semimartingale RBM, as tools from stochastic calculus then become available. The condition takes the form $\alpha<1$, see \cite{Wi-85a}, with
\begin{equation}
\label{eq:def_alpha}
\alpha=\frac{\delta+\varepsilon-\pi}{\beta}.
\end{equation}
As a second step, conditions for transience and recurrence were derived, see \cite{HoRo-93,Wi-85a}. Complex analysis techniques prove to be quite efficient in dimension $2$, see \cite{BaFa-87,BMEPFrHaRa-21}. In particular, this method leads to explicit expressions for the Laplace transforms of quantities of interest (stationary distribution in the recurrent case \cite{FrRa-19,BMEPFrHaRa-21}, Green functions in the transient case \cite{Fr-20}, escape and absorption probabilities \cite{ErFrHu-20,FoFrIv-22}).
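To fix ideas, here is how the parameter $\alpha$ behaves in a simple configuration (an illustration of ours, not part of the original text).

```latex
% Normal reflection on both faces of the wedge means
% \delta = \varepsilon = \pi/2, so that
\begin{equation*}
\alpha=\frac{\pi/2+\pi/2-\pi}{\beta}=0<1,
\end{equation*}
% and the RBM is then a semimartingale for any opening \beta. More
% generally, the semimartingale condition \alpha<1 rewrites as
% \delta+\varepsilon<\pi+\beta, i.e., the reflection vectors must not
% point too strongly towards the corner.
```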
\subsection*{Higher dimension}
As opposed to the previous cases, the case of $d>2$ is much more mysterious. However, necessary and sufficient conditions for the process to be a semimartingale are known, and read as follows: denote the reflection matrix by
\begin{equation}
\label{eq:def_reflection_matrix}
R=\begin{pmatrix}
1 & r_{12} & \dots & r_{1d} \\
r_{21} & 1 & \dots & r_{2d} \\
\vdots & \vdots & \ddots & \vdots \\
r_{d1} & r_{d2} & \dots & 1
\end{pmatrix}.
\end{equation}
The column vector
\begin{equation}
\label{eq:def_reflection_vector}
R_j=\begin{pmatrix}
r_{1j} \\
\vdots \\
r_{d j}
\end{pmatrix}
\end{equation}
represents the reflection vector on the orthant face $x_j=0$. Then the RBM is a semimartingale if and only if the matrix $R$ is completely-$\mathcal S$, in the following sense, see~\cite{ReWi-88,TaWi-93}.
By definition, a principal sub-matrix of $R$ is any matrix of the form $(r_{ij})_{(i,j)\in I^2}$, where $I$ is a non-empty subset of $\{1,\ldots,d\}$, possibly equal to $\{1,\ldots,d\}$. If $x$ is a vector in $\mathbb R^d$, we will write $x> 0$ (resp.\ $x\geqslant 0$) to mean that all its coordinates are positive (resp.\ non-negative). We define $x<0$ and $x\leqslant 0$ in the same way. The definition extends to matrices.
\begin{definition}[$\mathcal S$-matrix]
A square matrix $R$ is an $\mathcal{S}$-matrix if there exists $x \geqslant 0$ such that
$Rx > 0$. Moreover, $R$ is completely-$\mathcal{S}$ if all its principal sub-matrices are $\mathcal{S}$-matrices.
\end{definition}
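The $\mathcal S$ property of a small matrix can be probed numerically. The sketch below (our illustration; a grid-search heuristic, not an exact algorithm) tests, for a $2\times2$ matrix, whether some convex combination $x=(t,1-t)$ satisfies $Rx>0$; by homogeneity, restricting the search to the simplex loses no generality.

```python
# Heuristic S-matrix test in dimension 2: search the simplex x = (t, 1 - t)
# for a point with R x > 0 componentwise.  A fine grid certifies "S";
# failure of the search only suggests (but does not prove) "not S".
def is_S_2x2(R, steps=1000):
    for k in range(steps + 1):
        t = k / steps
        x = (t, 1.0 - t)
        y0 = R[0][0] * x[0] + R[0][1] * x[1]
        y1 = R[1][0] * x[0] + R[1][1] * x[1]
        if y0 > 0.0 and y1 > 0.0:
            return True
    return False
```

For instance, the identity matrix is $\mathcal S$, while $R=\left(\begin{smallmatrix}1&-2\\-2&1\end{smallmatrix}\right)$ is not, since $Rx>0$ would require both $x_1>2x_2$ and $x_2>2x_1$; its strict principal sub-matrices, all equal to $(1)$, are nevertheless $\mathcal S$-matrices.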
Apart from the semimartingale property, very little is known about multidimensional RBM. In particular, necessary and sufficient conditions for transience or recurrence are not yet fully known in the general case, even though some conditions are known under additional hypotheses on $R$ \cite{HaWi-87bis,Ch-96}, or in dimension 3 \cite{ElBeYa-00,BrDaHa-10}. For example, if $R$ is assumed to be a non-singular $\mathcal{M}$-matrix (which means that $R$ is an $\mathcal{S}$-matrix whose off-diagonal entries are all non-positive), then $R^{-1}\mu <0$ is a necessary and sufficient condition for positive recurrence. Moreover, contrary to the two-dimensional case, no explicit expressions are available in general for quantities of interest such as the stationary distribution.
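The $\mathcal M$-matrix criterion just mentioned can be made concrete in dimension $2$ (a worked example of ours, with made-up numbers).

```latex
% Take
%   R = \begin{pmatrix} 1 & -1/2 \\ -1/2 & 1 \end{pmatrix},
% a non-singular M-matrix: its off-diagonal entries are non-positive, and
% x = (1,1) gives Rx = (1/2, 1/2) > 0. Since
%   R^{-1} = \tfrac{4}{3}\begin{pmatrix} 1 & 1/2 \\ 1/2 & 1 \end{pmatrix},
% the positive recurrence criterion R^{-1}\mu < 0 reads
\begin{equation*}
\mu_1+\tfrac{1}{2}\,\mu_2<0
\qquad\text{and}\qquad
\tfrac{1}{2}\,\mu_1+\mu_2<0,
\end{equation*}
% satisfied e.g. by \mu = (-1,-1) but not by \mu = (-1,1).
```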
\begin{figure}
\includegraphics[scale=0.7]{rebondwedge1.pdf}
\caption{Wedge and angles of reflection.}
\label{fig:dim-2}
\end{figure}
\subsection*{The historical skew symmetry condition}
The only notable and exceptional case, in which everything is known and behaves smoothly, is the so-called skew symmetric case, as discovered by Harrison \cite{Ha-78} in dimension $2$, and Harrison and Williams \cite{HaWi-87} in arbitrary dimension. They prove that the RBM stationary distribution has a remarkable product form
\begin{equation}
\label{eq:parameters_product_form}
\pi(x_1,\ldots, x_d) = c_1\cdots c_d\exp(-c_1x_1-\cdots -c_dx_d)
\end{equation}
if and only if the following relation between the covariance and reflection matrices holds:
\begin{equation}
\label{eq:skew_symmetry}
2\Sigma = R\cdot \diag \Sigma + \diag \Sigma \cdot R^\top.
\end{equation}
In the latter case, the stationary distribution admits the exponential product form given by \eqref{eq:parameters_product_form}, with parameters equal to
\begin{equation*}
(c_1,\ldots,c_d)^\top = -2\cdot (\diag \Sigma)^{-1} \cdot R^{-1} \cdot \mu,
\end{equation*}
with $\mu$ denoting the drift vector.
In dimension 2, if we translate this model from the quadrant to a wedge, condition \eqref{eq:skew_symmetry} is equivalent to $\alpha=0$, see \cite[Sec.~5.2]{FrRa-19} and our Figure~\ref{fig2:dim-2}.
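The algebra behind condition \eqref{eq:skew_symmetry} is easy to machine-check. The following minimal sketch (ours; plain Python, with the rate computation written out for dimension $2$) verifies the skew symmetry relation entrywise and evaluates the product-form rates $-2(\diag \Sigma)^{-1}R^{-1}\mu$:

```python
# Entrywise check of 2*Sigma = R.diag(Sigma) + diag(Sigma).R^T, and the
# exponential rates c = -2 diag(Sigma)^{-1} R^{-1} mu of the product form
# (2x2 inverse written out by hand).
def skew_symmetric(Sigma, R, tol=1e-12):
    d = len(R)
    return all(
        abs(2.0 * Sigma[i][j]
            - R[i][j] * Sigma[j][j]      # (R . diag Sigma)_{ij}
            - Sigma[i][i] * R[j][i])     # (diag Sigma . R^T)_{ij}
        < tol
        for i in range(d) for j in range(d)
    )

def product_form_rates_2d(Sigma, R, mu):
    det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
    y0 = (R[1][1] * mu[0] - R[0][1] * mu[1]) / det   # (R^{-1} mu)_1
    y1 = (R[0][0] * mu[1] - R[1][0] * mu[0]) / det   # (R^{-1} mu)_2
    return (-2.0 * y0 / Sigma[0][0], -2.0 * y1 / Sigma[1][1])
```

With $\Sigma=I$ and $r_{12}=-r_{21}=\frac{1}{2}$, the condition holds, and for the drift $\mu=(-1,-1)$ one finds $c=(0.8,\,2.4)>0$, consistent with positive recurrence since $R^{-1}\mu<0$.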
Models having this skew symmetry are very popular, as they offer the possibility of computing the stationary distribution in closed form. No generalization of the skew symmetry is known, except in dimension $2$, where according to \cite{DiMo-09}, the stationary distribution is a sum of $n\geq1$ exponential terms as in \eqref{eq:parameters_product_form} (with suitable normalization) if and only if $\alpha=-n$, where the parameter $\alpha$ is as in \eqref{eq:def_alpha}. The recent article \cite{BMEPFrHaRa-21} goes much further, generalizing this result and finding new conditions on $\alpha$ under which the density admits simplified expressions.
The concept of skew symmetry has been explored in other cases than orthants, see for example \cite{Wi-87,OCOr-14}.
\begin{figure}
\includegraphics[scale=1.3]{rebondspeciaux1.png}
\hspace{1cm}
\includegraphics[scale=1.3]{rebondspeciaux2.png}
\caption{On the left, the standard skew symmetry condition in a wedge, corresponding to the condition $\alpha=0 $; on the right, the dual skew symmetry condition $\alpha=1 $.}
\label{fig2:dim-2}
\end{figure}
\subsection*{Our approach and contributions}
In this paper, we will \underline{not} work under the completely-$\mathcal S$ hypothesis. More precisely, we will assume that:
\begin{assumption}
\label{as:quasi_comp-S-1}
The reflection matrix $R$ is not $\mathcal S$.
\end{assumption}
\begin{assumption}
\label{as:quasi_comp-S-2}
All principal, strict sub-matrices of $R$ are completely-$\mathcal S$.
\end{assumption}
Before going further, observe that the appearance of $\mathcal S$-matrices is very natural in the present context. Indeed, for instance, $R$ is an $\mathcal S$-matrix if and only if there exists a convex combination of reflection vectors which belongs to the interior of the orthant. Such a condition would allow us to define the process as a semimartingale after the time of hitting the origin. Similarly, the fact that a given principal sub-matrix of $R$ is $\mathcal S$ translates into the property that it is possible to define the process as a semimartingale after its visits on the corresponding face.
Therefore, as we shall prove, the probabilistic counterpart of Assumptions~\ref{as:quasi_comp-S-1} and \ref{as:quasi_comp-S-2} is that we can define the process $(Z_t)_{t\geq0}$ as a semimartingale before time
\begin{equation}
\label{eq:def_absorption_time}
T:=\inf \{t>0: Z_t=0 \}\leq \infty,
\end{equation}
but not for $t\geq T$. For this reason, we will call $T$ in \eqref{eq:def_absorption_time} the absorption time:
if the process hits the apex of the cone, then $T<\infty$ and we will say that the process is absorbed at the origin. Indeed, because of Assumption~\ref{as:quasi_comp-S-1}, there is no convex combination of reflection vectors belonging to the orthant, and consequently, we cannot define the process as a semimartingale after time $T$. However, our process is a semimartingale in the random time interval $[0,T]$; this will be proved in Proposition~\ref{prop:existence_ASRBM}.
We will also assume that:
\begin{assumption}
\label{as:drift>0}
The drift of the RBM is positive, that is, all coordinates of $\mu$ are positive.
\end{assumption}
Under Assumptions~\ref{as:quasi_comp-S-1}, \ref{as:quasi_comp-S-2} and \ref{as:drift>0}, our process exhibits the following dichotomy: either it hits the origin of the cone in finite time, i.e., $T<\infty$, or it goes to infinity (in the direction of the drift) before hitting the apex, i.e., $T=\infty$ and $\vert Z_t\vert \to\infty$ as $t\to\infty$. See Figure~\ref{fig:abs-esc}. We will prove this dichotomy in Proposition~\ref{prop:dichotomy}.
This leads us to ask the following questions: what is the absorption probability
\begin{equation}
\label{eq:absorption_probability}
f(x)=\mathbb P_x[T<\infty]?
\end{equation}
Equivalently, what is the escape probability
\begin{equation*}
\mathbb P_x[T=\infty]=1-\mathbb P_x[T<\infty]=\mathbb{P}_x [\vert Z_t\vert\to \infty ]?
\end{equation*}
These questions are not only of theoretical nature: they also admit natural interpretations in population biology problems, in terms of extinction times of multitype populations \cite{LaRa-13}, or in risk theory, in terms of ruin of companies that collaborate to cover their mutual deficits \cite{AlAzMu-17,BaBoReWi-14,IvBo-15}.
\begin{figure}
\includegraphics[scale=0.7]{absorption2.pdf}
\hspace{1cm}
\includegraphics[scale=0.7]{escape2.pdf}
\caption{Two examples of paths of the process $(Z_t)_{t\geq0}$, with starting point $x$ marked in red. On the left, we have $T<\infty$, meaning that the process is absorbed in finite time at the apex of the cone. On the right, the process seems to escape to infinity, meaning that $T=\infty$.}
\label{fig:abs-esc}
\end{figure}
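The dichotomy can be explored by simulation. Below is a crude Euler sketch, entirely ours: the time step, thresholds and test parameters are ad hoc choices, and the covariance is taken to be the identity. Inside the orthant the scheme follows the drifted Brownian increments; whenever a coordinate goes negative, the path is pushed back along the corresponding reflection vector $R_i$ (column $i$ of $R$); a path is declared absorbed when it comes near the apex, and escaped when it travels far in the direction of the drift.

```python
import math
import random

def estimate_absorption(x, mu, R, trials=200, dt=1e-3,
                        near=0.01, far=10.0, horizon=100.0, seed=0):
    """Crude Monte Carlo sketch of P_x[T < infinity] for a 2-dimensional
    absorbed RBM with identity covariance.  Whenever Z^i < 0, the path is
    pushed along the i-th column R_i of the reflection matrix until that
    coordinate returns to 0 (an Euler-scheme heuristic, not exact)."""
    rng = random.Random(seed)
    absorbed = 0
    for _ in range(trials):
        z = list(x)
        t = 0.0
        while t < horizon:
            t += dt
            s = math.sqrt(dt)
            z = [z[i] + mu[i] * dt + s * rng.gauss(0.0, 1.0) for i in range(2)]
            for _ in range(5):                  # a few corrective pushes
                for i in range(2):
                    if z[i] < 0.0:
                        push = -z[i]            # R[i][i] = 1 resets z[i] to 0
                        z = [z[k] + push * R[k][i] for k in range(2)]
            if max(z) < near:                   # near the apex: absorbed
                absorbed += 1
                break
            if z[0] + z[1] > far:               # far along the drift: escaped
                break
        # paths still alive at the horizon are counted as escaped
    return absorbed / trials
```

For instance, with $R=\left(\begin{smallmatrix}1&-1\\-1&1\end{smallmatrix}\right)$ and $\mu=(1,1)$, a configuration satisfying Assumptions~\ref{as:quasi_comp-S-1}, \ref{as:quasi_comp-S-2} and \ref{as:drift>0}, the estimate can be compared with the exact value $f(x)=e^{-2(x_1+x_2)}$ given by Theorem~\ref{thm:main} below.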
Because of its somewhat dual nature, the problem of computing the absorption (or escape) probability is a priori as difficult as the problem of computing the stationary distribution in the semimartingale, recurrent case. Therefore, a natural question is to find an analogue of the skew symmetry \cite{Ha-78,HaWi-87} in this context, which we recalled here in \eqref{eq:parameters_product_form} and \eqref{eq:skew_symmetry}. The main result of the article is given in Theorem~\ref{thm:main} below. It is stated under four assumptions; while the first three have already been introduced, the final one, Assumption~\ref{as:neumann}, is of a more technical nature and will be presented in Section~\ref{sec:PDE}. We conjecture that Assumption~\ref{as:neumann} is always true. For $x=(x_1, \ldots , x_d)\in \mathbb{R}_+^d$, $f(x)=\mathbb P_x[T<\infty]$ denotes the absorption probability~\eqref{eq:absorption_probability}.
\begin{theorem}[Dual skew symmetry in an orthant]
\label{thm:main}
Under Assumptions~\ref{as:quasi_comp-S-1}, \ref{as:quasi_comp-S-2}, \ref{as:drift>0} and \ref{as:neumann},
the following statements are equivalent:
\begin{enumerate}[label={\rm(\roman{*})},ref={\rm(\roman{*})}]
\item\label{thm:main_it1}The absorption probability has a product form, i.e., there exist functions $f_1,\ldots ,f_d$ such that
\begin{equation*}
f(x)=f_1(x_1)f_2(x_2) \cdots f_d(x_d).
\end{equation*}
\item\label{thm:main_it2}The absorption probability is exponential, i.e., there exists $a\in\mathbb{R}^d\setminus \{0\}$ such that
\begin{equation*}
f(x)= \exp(a\cdot x).
\end{equation*}
\item\label{thm:main_it3}The reflection vectors $R_1,\ldots,R_d$ defined in \eqref{eq:def_reflection_matrix} and \eqref{eq:def_reflection_vector} are coplanar, that is,
\begin{equation*}
\det R =0.
\end{equation*}
\end{enumerate}
When these properties are satisfied, the vector $a=(a_1,\ldots,a_d)$ in \ref{thm:main_it2} has negative coordinates and is the unique non-zero vector such that
\begin{equation}
\label{eq:a}
a R =0\quad \text{and}\quad \tfrac{1}{2}\,a \Sigma \cdot a +a \mu=0.
\end{equation}
\end{theorem}
We refer to Figures~\ref{fig2:dim-2} and \ref{fig:dim-3} for a geometric illustration of the condition $\det R =0$ appearing in \ref{thm:main_it3}. See Figure~\ref{fig:ellipsoide} for a geometric illustration of the exponential decay rate $a$ in \eqref{eq:a}. When the parameters satisfy the assumptions (and conclusions) of Theorem~\ref{thm:main}, we will say that the model satisfies the \textit{dual skew symmetry} condition. This terminology will be explained in more detail in Remark~\ref{rem:dual}. In the case of dimension $2$, Theorem~\ref{thm:main} is proved in \cite{ErFrHu-20}. Assumption~\ref{as:neumann} will be discussed in Remark~\ref{rem:as4}. Note that the proof of \ref{thm:main_it3}$\Rightarrow$\ref{thm:main_it2}$\Rightarrow$\ref{thm:main_it1} does not use Assumption~\ref{as:neumann}.
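In dimension $2$, the theorem can be verified by hand on a simple configuration; the following worked example (ours, with identity covariance) also shows where the exponential decay rate comes from.

```latex
% Worked example: d = 2, \Sigma = I, \mu = (\mu_1,\mu_2) > 0 and
%   R = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix},
% so that \det R = 0, while R is not an S-matrix: Rx > 0 would require
% both x_1 > x_2 and x_2 > x_1. Since (1,1) R = 0, the local times cancel
% in the sum S_t = Z_t^1 + Z_t^2, which is therefore a one-dimensional
% Brownian motion with drift \mu_1+\mu_2 and variance 2, and Z hits the
% origin exactly when S hits 0. The classical hitting probability of a
% drifted Brownian motion then gives
\begin{equation*}
f(x)=\mathbb{P}_x[T<\infty]=\exp\bigl(-(\mu_1+\mu_2)(x_1+x_2)\bigr),
\end{equation*}
% an exponential \exp(a\cdot x) with a = -(\mu_1+\mu_2)(1,1), which is
% indeed a left null vector of R with negative coordinates.
```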
\begin{figure}
\includegraphics[scale=0.6]{dual3dcapture1.JPG}
\caption{Condition $\det R =0$ in a $3$-dimensional orthant: the reflection vectors $R_1$, $R_2$ and $R_3$ are coplanar.}
\label{fig:dim-3}
\end{figure}
\begin{figure}
\includegraphics[clip=true,trim=0cm 1cm 0cm 0cm,scale=1.3]{ellipsoide.png}
\caption{In red color: the ellipsoid with equation $\frac{1}{2}\,x\Sigma \cdot x +x\mu =0$; in blue: $\ker R^\top$ (of dimension one by Lemma~\ref{lemme2}); in green: the exponential decay rate $a$.}
\label{fig:ellipsoide}
\end{figure}
\subsection*{Structure of the paper}
\begin{itemize}
\item Section \ref{sec:absorption}: We define properly the process and show some of its pathwise properties. In particular, Proposition~\ref{prop:dichotomy} shows the dichotomy behavior (absorption vs.\ escape at infinity).
\item Section \ref{sec:PDE}: We state and prove a PDE for the density of the absorption probability (Proposition~\ref{prop:PDE}). This PDE is dual to the one satisfied by the stationary distribution in the recurrent case.
\item Section \ref{sec:skew}: We provide a proof of our main Theorem~\ref{thm:main}.
\item Section \ref{sec:gen}: We propose a generalization of Theorem~\ref{thm:main} with absorption on facets, not necessarily the origin.
\end{itemize}
\section{Definition and first properties of the absorbed reflected Brownian motion}
\label{sec:absorption}
\subsection*{Existence and definition}
Let $(W_t)_{t\geq0}$ be a $d$-dimensional Brownian motion of covariance matrix $\Sigma$. Let $\mu \in\mathbb{R}^d$ be a drift, and let $R$ be a $d$-dimensional square matrix \eqref{eq:def_reflection_matrix} with coefficients $1$ on the diagonal.
\begin{proposition}[Existence of an absorbed SRBM]
\label{prop:existence_ASRBM}
Under Assumption \ref{as:quasi_comp-S-2}, there exists an absorbed SRBM in the orthant, i.e., a semimartingale defined up to the absorption time $T\leq\infty$ as in \eqref{eq:def_absorption_time} and such that for all $t \leqslant T$,
\begin{equation}
\label{eq:RBM_semimart}
Z_t = x +W_t+\mu t + R L_t,
\end{equation}
where $L_t$ is a vector whose $i$th coordinate $L_t^i$ is a continuous, non-decreasing process starting from $0$, which increases only when the $i$th coordinate of the process is zero (i.e., only on $\{t : Z_t^i = 0\}$), and which is called the local time on the corresponding orthant face.
\end{proposition}
Under the additional hypothesis that $R$ is completely-$\mathcal S$, Proposition~\ref{prop:existence_ASRBM} is most classical: in this case, the RBM is well defined as the semimartingale \eqref{eq:RBM_semimart}, actually for any $t\in[0,\infty)$. Our contribution here is to prove that if $R$ is not an $\mathcal S$-matrix (our Assumption~\ref{as:quasi_comp-S-1}) and is therefore not completely-$\mathcal S$, then it is still possible to define the RBM as a semimartingale on the time interval $[0,T]$.
\begin{proof}
Although Proposition~\ref{prop:existence_ASRBM} is not formally proved in Taylor's PhD thesis \cite{Ta-90}, all necessary tools may be found there. More precisely, Taylor proves that when $R$ is completely-$\mathcal S$ (i.e., our Assumption~\ref{as:quasi_comp-S-2} plus the fact that $R$ is $\mathcal S$), then the RBM in the orthant $\mathbb R_+^d$ exists as a semimartingale globally on $[0,\infty)$. The proof in \cite{Ta-90} is then split into two parts:
\begin{itemize}
\item First, \cite[Chap.~4]{Ta-90} shows that the SRBM exists on $[0,T]$, with $T$ defined in \eqref{eq:def_absorption_time}. The fact that $R$ is an $\mathcal S$-matrix is nowhere used in the proof: the only hypothesis needed is that all principal, strict sub-matrices are completely-$\mathcal S$ (our Assumption~\ref{as:quasi_comp-S-2}).
\item As a second step, in \cite[Chap.~5]{Ta-90} (see in particular her Lemma~5.3), Taylor proves that if $R$ is an $\mathcal S$-matrix, then it is possible for the process started at the origin to escape the origin and to be well defined as a semimartingale.
\end{itemize}
Using only the first part of her arguments readily entails our Proposition~\ref{prop:existence_ASRBM}.
\end{proof}
\subsection*{Absorption and escape in asymptotic regimes}
We first prove two results which are intuitively clear, namely, that the absorption probability tends to one (resp.\ zero)\ when the starting point approaches the origin (resp.\ infinity), see Proposition~\ref{prop:absorption_origin} (resp.\ Proposition~\ref{prop:absorption_infinity}). Then, we will prove in Proposition~\ref{prop:dichotomy} the dichotomy already mentioned: either the process is absorbed in finite time, or it escapes to infinity as time goes to infinity. By convention, we will write $x\to 0$ (resp.\ $x\to\infty$) to mean that $\vert x\vert\to0$ (resp.\ $\vert x\vert\to\infty$) in the cone.
\begin{proposition}[Absorption starting near the origin]
\label{prop:absorption_origin}
One has
\begin{equation*}
\lim_{x\to 0}\mathbb{P}_x [T<\infty] = 1.
\end{equation*}
\end{proposition}
\begin{proposition}[Absorption starting near infinity]
\label{prop:absorption_infinity}
One has
\begin{equation*}
\lim_{x \to \infty}\mathbb{P}_x [T<\infty] = 0.
\end{equation*}
\end{proposition}
\begin{proposition}[Complementarity of escape and absorption]
\label{prop:dichotomy}
When $T=\infty$, then almost surely $\lim_{t\to\infty} \vert Z_t\vert = \infty$, i.e.,
\begin{equation*}
\mathbb{P}_x \left[ \left. \lim_{t\to\infty} \vert Z_t\vert = \infty \right\vert T=\infty \right] = 1.
\end{equation*}
This implies that $\mathbb{P}_x[T=\infty]=\mathbb{P}_x[\vert Z_{t\wedge T}\vert \to \infty ]$.
\end{proposition}
\begin{proof}[Proof of Proposition~\ref{prop:absorption_origin}]
Let us define $\tau_x = \inf \{t>0 : x+W_t+\mu t <0 \}$ (by convention $\inf \emptyset =\infty$), and consider the set
\begin{equation*}
\{ \tau_x <\infty \} = \{ \exists t>0 \text{ such that } x+W_t+\mu t <0 \}.
\end{equation*}
The proof consists in two steps. We first prove that $\{ \tau_x <\infty \} \subset \{ T<\infty \}$ and then show that $\lim_{x\to 0}\mathbb{P}[ \tau_x <\infty ] =1$.
\begin{enumerate}[label={\rm Step~\arabic{*}.},ref={\rm Step~\arabic{*}}]
\item\label{it:step1}
Assume that $\tau_x < \infty$ and fix a $t<\infty$ such that
$x+W_t+\mu t<0$. We are going to show that $T\leqslant t$.
We proceed by contradiction and assume that $t<T$. Then from \eqref{eq:RBM_semimart} we get that
\begin{equation*}
R L_t = Z_t - x-W_t-\mu t >0.
\end{equation*}
The last inequality comes from the fact that $Z_t\geqslant 0$ and that $x+W_t+\mu t <0$.
Remembering that $L_t\geqslant 0$, the fact that $ R L_t>0$ implies that $R$ is an $\mathcal{S}$-matrix, which contradicts Assumption~\ref{as:quasi_comp-S-1}.
We conclude that $T\leqslant t<\infty$. We have thus shown that $\{ \tau_x <\infty \} \subset \{ T<\infty \}$.
\item\label{it:step2} By Blumenthal's zero--one law, we have
\begin{equation*}
\mathbb{P}[ \tau_0=0 ] =1,
\end{equation*}
since
\begin{equation*}
\{\tau_0 =0\}=\bigcap_{t>0} \left\{ \inf_{s \leqslant t} (W_s+\mu s) <0 \right\} \in \mathcal{F}_{0+}= \bigcap_{t>0} \mathcal{F}_t,
\end{equation*}
where $\mathcal{F}_t =\sigma \{ W_s , s\leqslant t \}$.
This implies that $\mathbb{P}[\tau_0 <\infty ]=1$. We deduce that almost surely, there exists $t_0$ such that $W_{t_0}+\mu t_0<0$; then, for all $x<-W_{t_0}-\mu t_0$, we have $\tau_x <\infty$. Hence
$\mathds{1}_{\{\tau_x < \infty\}}\underset{x\to 0}{\longrightarrow} 1 \text{ a.s.}$, and by dominated convergence we have
\begin{equation*}
\mathbb{P}[ \tau_x <\infty ]=\mathbb{E}[\mathds{1}_{\{\tau_x < \infty\}}] \underset{x\to 0}{\longrightarrow} 1.
\end{equation*}
\end{enumerate}
Thanks to \ref{it:step1} and \ref{it:step2}, we conclude that $\mathbb{P}_x[T<\infty]\geqslant \mathbb{P}[ \tau_x <\infty ]$ and therefore $\underset{x\to 0}{\lim}\mathbb{P}_x [T<\infty] = 1$, using the above estimate.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:absorption_infinity}]
Introduce the event
\begin{equation*}
B_x = \{\forall t\geqslant 0,\, x+W_t+\mu t >0\}.
\end{equation*}
On the event $B_x$, we have
$Z_t=x+W_t+\mu t$ for all $t\geqslant 0$ (the process never touches the boundary of the orthant, so there is no reflection). We deduce that $Z_t>0$ for all $t\geqslant 0$, and hence $B_x \subset \{ T=\infty \}$. Therefore,
\begin{equation*}
\mathbb P[B_x]\leqslant \mathbb{P}_x [T=\infty].
\end{equation*}
To conclude, we are going to show that $\lim_{x\to \infty} \mathbb{P}[B_x]=1$. It comes from the fact that a.s.\ $\inf_{t>0} \{ W_t +\mu t \}> -\infty$, since $\mu>0$ by Assumption~\ref{as:drift>0}. For all $x> -\inf_{t>0} \{ W_t +\mu t \}$, we have $x+W_t+\mu t >0$ for all $t$. We deduce that
$
\mathds{1}_{\{\forall t\geqslant 0,\, x+W_t+\mu t >0\}}\underset{x\to \infty}{\longrightarrow} 1 \text{ a.s.}
$,
and by dominated convergence, we have
\begin{equation*}
\mathbb{P}[ B_x ]=\mathbb{E}[\mathds{1}_{\{\forall t\geqslant 0,\, x+W_t+\mu t >0\}}] \underset{x\to \infty}{\longrightarrow} 1.\qedhere
\end{equation*}
\end{proof}
Before proving Proposition~\ref{prop:dichotomy}, we first recall some useful definitions and properties related to recurrence and transience of Markov processes. All of them are most classical, but having them here stated clearly will facilitate our argument.
These results and their proofs may be found in~\cite{Az-66}.
Consider a continuous, strong Feller Markov process $X_t$ on a locally
compact state space $E$ with countable basis. For $V\subset E$, let us define $\tau_V = \inf \{t > 0 : X_t \in V \}$.
\begin{itemize}
\item The point $x\in E$ is said to be recurrent if for all neighbourhoods $V$ of $x$,
\begin{equation*}
\mathbb{P}_x\bigl[\limsup_{t\to\infty} \mathds{1}_V (X_t)=1\bigr]=1.
\end{equation*}
\item If a point is not recurrent, it is said to be transient. In this case, by \cite[Thm.~III~1]{Az-66}, there exists a neighbourhood $V$ of $x$ such that $\mathbb{P}_x\bigl[\limsup_{t\to\infty} \mathds{1}_V (X_t)=1\bigr]=0$.
\item The point $x$ is said to lead to $y$ if for all neighbourhoods $V$ of $y$, we have $\mathbb{P}_x [\tau_V < \infty] > 0$.
The points $x$ and $y$ are said to communicate if $x$ leads to $y$ and $y$ leads to $x$. This defines an equivalence relation.
\item If two states communicate, they are both transient or both recurrent \cite[Prop.~IV~2]{Az-66}.
\item If all points are transient, then $X_t$ tends to $\infty$ as $t\to \infty$ almost surely \cite[Prop.~III~1]{Az-66}.
\end{itemize}
\begin{proof}[Proof of Proposition~\ref{prop:dichotomy}]
Define the process $(\widetilde{Z}_t)_{t\geq 0}$ as the process $(Z_t)_{t\geq 0}$ conditioned never to hit $0$ in finite time. The transition semigroup of this new Markov process $(\widetilde{Z}_t)$ is defined, for $x\in \mathbb{R}_+^d \setminus \{ 0\}$ and $V \subset \mathbb{R}_+^d \setminus \{ 0\}$, by
\begin{equation*}
\mathbb{P}_x[\widetilde{Z}_t \in V]=\mathbb{P}_x[{Z}_t \in V \vert T=\infty].
\end{equation*}
All points of $\mathbb{R}_+^d \setminus \{0\}$ communicate, and thus constitute a single equivalence class. We deduce that either all of them are transient or all of them are recurrent. It is thus enough to show that one of them is transient.
Let us take a point in the interior of $\mathbb{R}_+^d \setminus \{0\}$, for example $x=(1,\ldots, 1)$. Since $\mu >0$, by standard properties of Brownian motion we have
\begin{equation*}
\mathbb{P}[\forall t \geqslant 0,\,x+W_t+\mu t>0]>0.
\end{equation*}
In dimension one, this property directly derives from \cite[Eq.~1.2.4(1)]{BoSa-12} (on p.~252); it easily generalizes to all dimensions.
When this event of positive probability occurs, the process never touches the boundary; thus $\widetilde{Z}_t= x+W_t+\mu t \to \infty$ and $\limsup_{t\to\infty} \mathds{1}_V (\widetilde{Z}_t)=0$ for any relatively compact neighbourhood $V$ of $x$. We have shown that there exists a neighbourhood $V$ of $x$ such that $\mathbb{P}_x [\limsup_{t\to\infty} \mathds{1}_V (\widetilde{Z}_t)=1]<1$, which implies that $x$ is not recurrent and is therefore transient.
Using \cite[Prop.~III~1]{Az-66} allows us to conclude, since as recalled above, if all points are transient, then the process tends to infinity almost surely.
\end{proof}
\section{Partial differential equation for the absorption probability}
\label{sec:PDE}
In a classical way, the generator of the Brownian motion in the interior of the orthant is defined by
\begin{equation*}
\mathcal{G}f (x)= \lim_{t\to 0} \frac{\mathbb{E}_x[f(Z_t)] - f(x) }{t}= \frac{1}{2}(\nabla \cdot \Sigma \nabla f) (x) + (\mu \cdot \nabla f) (x),
\end{equation*}
where we assume that $f$ is bounded in the first equality and that $f$ is twice differentiable in the second equality. In the rest of the paper, the following assumption is made.
\begin{assumption}
\label{as:neumann}
For all continuous, bounded functions $g$, the transition semigroup
\begin{equation*}
x\mapsto P_t g(x) := \mathbb{E}_x [g(Z_{t\wedge T})]
\end{equation*}
is differentiable, and satisfies the Neumann boundary condition $R_i \cdot \nabla P_t g(x)=0$ on the $i$th face of the orthant $x_i=0$.
\end{assumption}
\begin{remark}[Plausibility of Assumption~\ref{as:neumann}]
\label{rem:as4}
Several pieces of evidence suggest that this hypothesis is true:
\begin{itemize}
\item By \cite[Cor.~3.3]{An-09}, Assumption~\ref{as:neumann} is true provided we replace $T$ by the first hitting time of the intersection of two faces, or assuming that the process does not hit the intersection of two faces.
\item As a consequence of the above, Assumption~\ref{as:neumann} is true in dimension two.
\item By \cite{DeZa-05}, Assumption~\ref{as:neumann} holds true in the particular case of orthogonal reflections.
\item Assumption~\ref{as:neumann} is stated as a conjecture in \cite[(8.2b)]{HaRe-81a}; however, the latter article does not attempt a rigorous treatment of these regularity questions.
\item The paper \cite{LiRa-19} shows in full generality the pathwise differentiability with respect to the starting point $x$. We believe that a way to attack the proof of Assumption~\ref{as:neumann} could be to combine the results of \cite{LiRa-19} with the computations made in the proof of \cite[Cor.~3.3]{An-09}.
\end{itemize}
\end{remark}
\begin{proposition}[Partial differential equation]
\label{prop:PDE}
Under Assumptions~\ref{as:quasi_comp-S-1}, \ref{as:quasi_comp-S-2}, \ref{as:drift>0} and \ref{as:neumann},
the absorption probability \eqref{eq:absorption_probability} is the unique function $f$ which is
\begin{itemize}
\item bounded and continuous in the interior of the orthant $\mathbb R_+^d$ and on its boundary,
\item continuously differentiable in the interior of the orthant and on its boundary (except perhaps at the corner),
\end{itemize}
and which further satisfies the PDE:
\begin{itemize}
\item $\mathcal{G} f =0$ on the orthant (harmonicity),
\item $R_i \cdot \nabla f = 0$ on the $i$th face of the orthant $x_i=0$ (Neumann boundary condition),
\item $f(0)=1$ and $\lim_{x\to \infty} f(x)=0$ (limit values).
\end{itemize}
\end{proposition}
\begin{proof}
The proof is similar to \cite[Prop.~11]{ErFrHu-20}. We start with the \textit{sufficient condition}. Dynkin's formula leads to
\begin{equation*}
\mathbb{E}_x[f(Z_{t\wedge T})] = f(x) +\mathbb{E}_x \int_0^{t\wedge T} \mathcal{G}f (Z_s) \mathrm{d}s +
\sum_{i=1}^d \mathbb{E}_x \int_0^{t\wedge T}
(R_i \cdot \nabla f) (Z_s)
\end{equation*}
There is a technical subtlety in applying Dynkin's formula, as the latter requires functions with $\mathcal C^2$-regularity, which is a priori not satisfied at the origin in our setting. However, we may first apply the formula with $T_n=\inf\{t>0 : \vert Z_t\vert <\frac{1}{n}\}<T$ in place of $T$, since $f$ has the desired regularity on $\mathbb R_+^d\setminus \{x:\vert x\vert<\frac{1}{n}\}$. One then concludes by letting $n\to\infty$, as $T_n$ increases to $T$.
Since $f$ is assumed to satisfy the PDE stated in Proposition~\ref{prop:PDE}, the two integral terms vanish ($\mathcal{G}f=0$ in the orthant, and $L^i$ increases only on the face $x_i=0$, where $R_i\cdot\nabla f=0$), so that $\mathbb{E}_x[f(Z_{t\wedge T})]=f(x)$. We further compute
\begin{equation*}
f(x) = \mathbb{E}_x[f(Z_{t\wedge T})] =
\mathbb{E}_x[f(Z_{t\wedge T})\mathds{1}_{T\leqslant t}]
+
\mathbb{E}_x[f(Z_{t\wedge T})\mathds{1}_{T> t}]
=f(0)\, \mathbb P_x[T\leqslant t]+
\mathbb{E}_x[f(Z_{t\wedge T})\mathds{1}_{T> t}].
\end{equation*}
As $t\to\infty$, the above quantity converges to
\begin{equation*}
f(0)\mathbb P_x[T<\infty] +
\mathbb{E}_x\left[\lim_{t\to\infty}f(Z_{t})\mathds{1}_{T=\infty}\right]
= \mathbb P_x[T<\infty],
\end{equation*}
where the last equality comes from the limit values $\lim_{x\to \infty} f(x)=0$ and $f(0)=1$, and from Proposition~\ref{prop:dichotomy}, which together imply that on the event $\{T=\infty\}$ we have $\vert Z_{t}\vert\to\infty$ and thus $f(Z_t)\to 0$ as $t\to\infty$.
We immediately deduce that
$f(x)=\mathbb P_x[T<\infty]$.
We now move to the \textit{necessary condition}. We denote the absorption probability~\eqref{eq:absorption_probability} by $f$ and show that it satisfies the PDE of Proposition~\ref{prop:PDE}. Consider the event $\{ T<\infty \} \in \mathcal{F}_\infty$ and define
\begin{equation*}
M_t=\mathbb{E} [\mathds{1}_{\{ T<\infty \}} \vert \mathcal{F}_{t \wedge T}],
\end{equation*}
which is an $(\mathcal{F}_t)$-martingale. Observe that $M_0=f(x)$ and, by the Markov property, $M_t=f(Z_{t \wedge T})$. We deduce that $\mathbb{E}_x[f(Z_{t \wedge T})]=\mathbb{E}[M_t \vert \mathcal{F}_0]=M_0=f(x)$.
By definition of $\mathcal{G}$, we obtain that for $x$ in the interior of the orthant,
\begin{equation*}
\mathcal{G}f (x)= \lim_{t\to 0} \frac{\mathbb{E}_x[f(Z_t)] - f(x) }{t} = 0.
\end{equation*}
The Neumann boundary condition and the differentiability follow from the fact that $f(x)=\mathbb{E}_x[f(Z_{t \wedge T})]$ and from Assumption~\ref{as:neumann}.
The limit values follow from our Propositions~\ref{prop:absorption_origin} and \ref{prop:absorption_infinity}.
\end{proof}
\begin{remark}[Duality between absorption probability and stationary distribution]
\label{rem:dual}
Let us define the dual generator $
\mathcal{G}^* f (x)= \frac{1}{2}(\nabla \cdot \Sigma \nabla f) (x) - (\mu \cdot \nabla f) (x)
$
as well as the matrix $R^* = 2 \Sigma -R \ \textnormal{diag} (\Sigma )$, whose columns are denoted by $R^*_i$. In the recurrent case, the stationary distribution satisfies the following PDE, see \cite[Eq.~(8.5)]{HaRe-81a}:
\begin{itemize}
\item $\mathcal{G}^* f =0$ in the orthant,
\item $R^*_i \cdot \nabla f -2 \mu_i f= 0$ on the $i$th face of the orthant defined by $x_i=0$.
\end{itemize}
As a consequence, the absorption probability satisfies a PDE (Proposition~\ref{prop:PDE}), which is dual to the one which holds for the stationary distribution.
\end{remark}
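To make the duality concrete, the following sketch (with an illustrative two-dimensional choice of $\Sigma$ and $R$, ours and not from the article) verifies both the skew symmetry relation \eqref{eq:skew_symmetry} and the identity $R^* = \textnormal{diag}(\Sigma)\, R^\top$, which follows from it by pure algebra.

```python
# Illustrative 2-d parameters (ours, not from the article), chosen so
# that the skew symmetry relation 2 Sigma = R diag(Sigma) + diag(Sigma) R^T
# holds; here diag(Sigma) = Id, so the relation reads R + R^T = 2 Sigma.
Sigma = [[1.0, 0.3],
         [0.3, 1.0]]
R = [[1.0, 0.5],
     [0.1, 1.0]]    # 0.5 + 0.1 = 2 * 0.3

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

D = [[Sigma[0][0], 0.0], [0.0, Sigma[1][1]]]        # diag(Sigma)
RD = matmul(R, D)
DRt = matmul(D, transpose(R))
Rstar = [[2 * Sigma[i][j] - RD[i][j] for j in range(2)] for i in range(2)]

# Skew symmetry holds ...
ok_skew = all(abs(2 * Sigma[i][j] - RD[i][j] - DRt[i][j]) < 1e-12
              for i in range(2) for j in range(2))
# ... and is then algebraically equivalent to R* = diag(Sigma) R^T.
ok_dual = all(abs(Rstar[i][j] - DRt[i][j]) < 1e-12
              for i in range(2) for j in range(2))
print(ok_skew, ok_dual)  # True True
```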
\section{Dual skew symmetry: proof of the main result}
\label{sec:skew}
This section is devoted to the proof of Theorem~\ref{thm:main}, which establishes the dual skew symmetry condition. We first prove two technical lemmas on the reflection matrix $R$ in \eqref{eq:def_reflection_matrix}.
\begin{lemma}
\label{lemme}
If $R$ satisfies Assumptions~\ref{as:quasi_comp-S-1} and \ref{as:quasi_comp-S-2}, then for all $i$, there exists $j\neq i$ such that $r_{ij} \neq 0$.
\end{lemma}
\begin{proof}
It is enough to prove Lemma~\ref{lemme} for $i=1$, the other cases being similar.
Consider $\widetilde R$, the principal submatrix of $R$ obtained by removing the first row and the first column. This matrix is completely-$\mathcal{S}$ by Assumption~\ref{as:quasi_comp-S-2}, so that there exists $\widetilde X=(x_2,\ldots,x_d)^\top\geqslant 0$
such that $\widetilde R \widetilde X >0$. Consider now $\widetilde C_1=(r_{21},\ldots,r_{d1})^\top$, the first column of $R$ without its first coordinate. Let us choose $\lambda>0$ large enough so that $ \widetilde C_1 + \lambda \widetilde R \widetilde X >0$. If we had $r_{1j}=0$ for all $j\neq 1$, then for
$
X=(1,\lambda x_2,\ldots,\lambda x_d)^\top$
we would have
\begin{equation*}
RX
=
\begin{pmatrix} 1 & 0 \cdots 0 \\
\widetilde C_1 & \widetilde R \\
\end{pmatrix}
\begin{pmatrix}
1 \\
\lambda x_2 \\
\vdots \\
\lambda x_d
\end{pmatrix}
=\begin{pmatrix} 1 \\ \widetilde C_1+\lambda \widetilde R \widetilde X \end{pmatrix}>0
\end{equation*}
and then $R$ would be an $\mathcal{S}$-matrix, contradicting our Assumption~\ref{as:quasi_comp-S-1}.
\end{proof}
\begin{lemma}
\label{lemme2}
If $R$ satisfies Assumptions~\ref{as:quasi_comp-S-1} and \ref{as:quasi_comp-S-2}, and if in addition $\det R=0$, then $R$ has rank $d-1$, and there exist a positive column vector $U>0$ in $\ker R$ and a positive row vector $a>0$ such that $aR=0$.
\end{lemma}
\begin{proof}
The rank of the matrix $R$ is obviously $\leqslant d-1$, since $\det R=0$. We now show that the rank is $\geqslant d-1$.
Let $\widetilde R_j$ be the submatrix of $R$ obtained by removing the $j$th row and the $j$th column. These matrices are $\mathcal{S}$-matrices by Assumption~\ref{as:quasi_comp-S-2}, and we can choose
\begin{equation*}
\widetilde X_j=\begin{pmatrix}
\widetilde x_{1j} \\
\vdots \\
\widetilde x_{(j-1)j} \\
\widetilde x_{(j+1)j} \\
\vdots \\
\widetilde x_{dj}
\end{pmatrix} \geqslant 0
\quad \text{such that}
\quad
\widetilde R_j \widetilde X_j
=\widetilde Y_j=\begin{pmatrix}
\widetilde y_{1j} \\
\vdots \\
\widetilde y_{(j-1)j} \\
\widetilde y_{(j+1)j} \\
\vdots \\
\widetilde y_{dj}
\end{pmatrix}
>0
.
\end{equation*}
We now define the column vectors $X_j^\varepsilon=(\widetilde x_{ij})_{i=1,\ldots, d}$, setting $\widetilde x_{jj}=\varepsilon$ for some $\varepsilon>0$. We have
\begin{equation*}
R X_j^\varepsilon=Y_j^\varepsilon
=(y_{ij}^\varepsilon)_{i=1,\ldots, d},
\end{equation*}
where $y_{ij}^\varepsilon= \varepsilon r_{ij}+ \widetilde y_{ij}>0$ for $i \neq j$ and $\varepsilon>0$ small enough, and $y_{jj}^\varepsilon=\varepsilon + \widetilde L_j \widetilde X_j$, where we set
\begin{equation*}
\widetilde L_j=(r_{j1}, \ldots, r_{j(j-1)},r_{j(j+1)},\ldots ,r_{jd}),
\end{equation*}
the $j$th row of $R$ with the $j$th coordinate $r_{jj}=1$ excluded. Since $R$ is not an $\mathcal{S}$-matrix by Assumption~\ref{as:quasi_comp-S-1}, the vector $Y_j^\varepsilon$ cannot have all its coordinates positive; as $y_{ij}^\varepsilon>0$ for $i\neq j$, we must have $y_{jj}^\varepsilon=\varepsilon+ \widetilde L_j \widetilde X_j \leqslant 0$. We deduce that $y_{jj}^0= \widetilde L_j \widetilde X_j \leqslant -\varepsilon <0$.
Then, introducing the vectors
\begin{equation*}
X_j=\frac{1}{-y_{jj}^0 } X_j^0 \geqslant 0
\quad \text{and}
\quad
Y_j=(y_{ij})_{i=1,\ldots ,d} =R X_j,
\end{equation*}
we have
\begin{equation*}
Y_j=R X_j=\begin{pmatrix}
y_{1j} \\
\vdots \\
y_{(j-1)j} \\
- 1 \\
y_{(j+1)j} \\
\vdots \\
y_{dj}
\end{pmatrix} ,
\quad\text{where } y_{ij}=\frac{\widetilde y_{ij}}{-y_{jj}^0 } >0 \text{ for } i\neq j.
\end{equation*}
Denoting the matrix $P=(X_1, \ldots, X_d ) \geqslant 0$, we have
\begin{equation*}
-RP= \begin{pmatrix}
1 & -y_{12} & \dots & -y_{1d} \\
-y_{21} & 1 & \dots & -y_{2d} \\
\vdots & \vdots & \ddots & \vdots \\
- y_{d1} & -y_{d2} & \dots & 1
\end{pmatrix}= 2 \textnormal{Id} -T,
\text{ where } T=\begin{pmatrix}
1 & y_{12} & \dots & y_{1d} \\
y_{21} & 1 & \dots & y_{2d} \\
\vdots & \vdots & \ddots & \vdots \\
y_{d1} & y_{d2} & \dots & 1
\end{pmatrix}>0.
\end{equation*}
All coefficients of $T$ are positive. Consequently, by the Perron--Frobenius theorem, $T$ has a unique maximal eigenvalue $r$, the associated eigenspace is one-dimensional, and there exists a positive eigenvector $V$ associated to $r$. Remark that since $\det R=0$, we have $\det (2 \textnormal{Id} -T)=\det(-RP)=0$, so that $2$ is an eigenvalue of $T$. Hence $r\geqslant 2$, and there are two cases to treat.
\begin{itemize}
\item Assume first that the maximal eigenvalue is $r>2$, and let $V>0$ be an associated positive eigenvector, $TV=rV$. We deduce that $-RPV=2V-TV=(2-r)V$ and then $R(PV)=(r-2)V>0$, where $PV \geqslant 0$ since $P \geqslant 0$ and $ V > 0$. We would thus have shown that $R$ is an $\mathcal{S}$-matrix, which contradicts Assumption~\ref{as:quasi_comp-S-1}. So we must be in the situation where $r=2$.
\item If $r=2$ is the maximal eigenvalue of $T$, and $V>0$ is the positive eigenvector such that $TV=2V$, then $RU=0$ for $U=PV>0$. Furthermore, $\dim \ker(2\textnormal{Id}-T)=1$, so that $d-1=\text{rank}\, (2\textnormal{Id}-T)=\text{rank}\, (RP) \leqslant \text{rank}\, R$, and $R$ has rank $d-1$.
\end{itemize}
Left eigenspaces of $T$ are (right) eigenspaces of $T^\top$. If $a$ is such that $aR=0$, then $a(2\textnormal{Id}-T)=-aRP=0$, so that $a$ belongs to the left eigenspace of $T$ associated to the eigenvalue $2$. By the Perron--Frobenius theorem applied to $T^\top$, we deduce that we can choose $a>0$.
\end{proof}
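The Perron--Frobenius step of the proof is easy to visualize numerically. Below is a minimal power-iteration sketch on a hypothetical positive matrix $T$ of our choosing whose maximal eigenvalue is exactly $r=2$, the borderline case distinguished in the proof.

```python
# Power iteration on a positive matrix T with maximal eigenvalue r = 2,
# mirroring the borderline case r = 2 in the proof above.  T is a
# hypothetical example of ours, not derived from a specific RBM.
T = [[1.0, 1.0],
     [1.0, 1.0]]

v = [1.0, 0.3]                       # arbitrary positive starting vector
lam = 0.0
for _ in range(50):
    w = [sum(T[i][k] * v[k] for k in range(2)) for i in range(2)]
    lam = max(w)                     # estimate of the Perron root r
    v = [wi / lam for wi in w]       # renormalize

print(lam, v)  # lam converges to 2, v to the positive eigenvector (1, 1)
```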
We now prove a result showing that the hitting probability of the origin is never $0$, for all starting points.
\begin{lemma}
For all $x\in \mathbb{R}_+^d$, $f(x) > 0$.
\label{lemma:fpositive}
\end{lemma}
\begin{proof}
By Proposition~\ref{prop:absorption_origin}, there exists a point $y_0$ in the interior of the orthant such that $f(y_0)> 0$. By continuity of $f$ (Proposition~\ref{prop:PDE}), we can find an open neighbourhood $U$ of $y_0$ such that $f(y) > 0$ for all $y \in U$. Then we conclude that
\begin{equation*}
f(x)=\mathbb{E}_x [f(Z_{t\wedge T})]= \int_{\mathbb{R}_+^d} f(y)\, \mathbb{P}_x(Z_{t\wedge T} \in \mathrm{d}y) \geqslant \int_{U} f(y)\, \mathbb{P}_x(Z_{t\wedge T} \in \mathrm{d}y) >0.
\end{equation*}
(The first equality above was already established in the proof of Proposition~\ref{prop:PDE}.)
\end{proof}
Let us now prove the main result.
\begin{proof}[Proof of Theorem~\ref{thm:main}]
\ref{thm:main_it1} $\Rightarrow$ \ref{thm:main_it2}:
We assume that $f(x)=f_1(x_1)\cdots f_d(x_d)$ and we denote $\partial \ln f_i = f'_i /f_i$ (note that by Proposition~\ref{prop:PDE} the functions $f$ and $f_i$ are differentiable, and by Lemma~\ref{lemma:fpositive} we have $f_i(x_i) \neq 0$ for all $i$ and all $x_i$). On the boundary $x_i=0$, the Neumann boundary condition of Proposition~\ref{prop:PDE} implies that
\begin{equation*}
0=\frac{R_i \cdot \nabla f}{f}=R_i \cdot
\begin{pmatrix}
\partial \ln f_1 (x_1) \\
\vdots \\
\partial \ln f_d (x_d)
\end{pmatrix}
\text{ for } x_i=0.
\end{equation*}
In particular, for all $j \neq i$, taking $x_{i'}=0$ for all $i'\neq j$, we obtain
\begin{equation*}
R_i \cdot
\begin{pmatrix}
\partial \ln f_1 (0) \\
\vdots \\
\partial \ln f_j (x_j) \\
\vdots \\
\partial \ln f_d (0)
\end{pmatrix}
=0.
\end{equation*}
We deduce that for all $i$ and $j$ such that $i \neq j$, the function
$r_{ji}\, \partial \ln f_j (x_j) $
is constant, equal to $- \sum_{j'\neq j} r_{j'i}\, \partial \ln f_{j'} (0)$. By Lemma~\ref{lemme} (with the roles of $i$ and $j$ exchanged), for all $j$ there exists $i \neq j$ such that $r_{ji}\neq 0$.
This implies that $\partial \ln f_j (x_j) $ is constant, and hence that $f_j$ is exponential: there exists $a_j$ such that $f_j(x_j)=e^{a_j x_j}$. The limit value $\lim_{x\to\infty} f(x) =0$ implies that $a=(a_1,\ldots,a_d) \neq 0$.
\ref{thm:main_it2} $\Rightarrow$ \ref{thm:main_it1}: This implication is trivial by taking $f_i(x_i)=e^{a_i x_i}$.
\ref{thm:main_it2} $\Rightarrow$ \ref{thm:main_it3}: If $f(x)=e^{ax}$ satisfies the PDE of Proposition~\ref{prop:PDE}, then $R_i \cdot \nabla f(x)= a R_i e^{ax}=0 $ on the boundary face $x_i=0$. We obtain that $ a R_i =0$ for all $i$ and then that $a R =0$. We deduce that $\det R=0$ since $a\neq 0$.
\ref{thm:main_it3} $\Rightarrow$ \ref{thm:main_it2}: If $\det R =0$, then by Lemma~\ref{lemme2} one has $\dim \ker R=1$, and we can choose a row vector $a'>0$ such that $ a' R =0$. Then $a=- \frac{a'\mu}{a' \Sigma \cdot a'}\, a'<0$ is the unique non-zero vector which satisfies $aR=0$ and $a \Sigma \cdot a +a \mu =0 $. It is then easy to verify that $e^{a\cdot x}$ satisfies the PDE of Proposition~\ref{prop:PDE}, the boundary condition at infinity coming from the fact that $a<0$.
\end{proof}
\section{A generalization of Theorem~\ref{thm:main}: absorption on a facet}
\label{sec:gen}
Theorem~\ref{thm:main} can be generalized to the case where the RBM is absorbed at a facet of the orthant, with equation
\begin{equation*}
x_{i_1}=\cdots=x_{i_k}=0,
\end{equation*}
for some fixed $k\in \{1,\ldots, d \}$. The situation where $k=d$ is the case of an absorption at the apex of the cone, which is treated in detail in the present article. For the sake of brevity and to avoid too many technicalities, we will not prove this generalization in this article, even though
all intermediate steps in the proof may be extended.
In the general case of a facet, let us state three assumptions which generalize Assumptions~\ref{as:quasi_comp-S-1},~\ref{as:quasi_comp-S-2} and~\ref{as:drift>0}. Let us define $\widetilde R$ (resp.\ $\widetilde \Sigma$) as the principal sub-matrix of $R$ (resp.\ $\Sigma$) in which we keep only the rows and columns with indices $i_1,\ldots,i_k$.
\begin{itemize}
\item The new Assumption~\ref{as:quasi_comp-S-1} is that the reflection matrix $\widetilde R$ is not $\mathcal{S}$.
\item The new second assumption is that all principal sub-matrices of $R$ which do not contain $\widetilde R$ are completely-$\mathcal{S}$.
\item The third assumption about the positivity of the drift $\mu>0$ remains unchanged (even though we could probably weaken this hypothesis).
\end{itemize}
Under these assumptions, we may define the reflected Brownian motion $(Z_t)_{t\geq 0}$ until time
\begin{equation*}
\widetilde T = \inf \{t>0 : Z_t^{i_1}=\cdots =Z_t^{i_k}=0 \},
\end{equation*}
where $Z^i$ stands for the $i$th coordinate of $Z$. Let us denote the absorption probability
\begin{equation*}
\widetilde f(x)=\mathbb P_x[\widetilde T<\infty].
\end{equation*}
Then Theorem~\ref{thm:main} may be extended as follows. The following assertions are equivalent:
\begin{enumerate}[label={\rm(\roman{*}')},ref={\rm(\roman{*}')}]
\item $\widetilde f$ has a product form.
\item $\widetilde f$ is exponential, i.e., $\widetilde f(x)= \exp (a_{i_1}x_{i_1}+ \cdots +a_{i_k}x_{i_k} )$ with $a_{i_j}\neq 0$.
\item $\det \widetilde R =0$.
\end{enumerate}
In this case, the vector $\widetilde a=(a_{i_1}, \ldots ,a_{i_k} )$ is negative and is the unique non-zero vector such that $\widetilde a \widetilde R =0$ and $\widetilde a \widetilde\Sigma \cdot \widetilde a + \widetilde a \widetilde\mu=0$, where we defined the column vector $\widetilde \mu =({\mu_{i_j}})_{j=1,\ldots , k}$.
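To illustrate the facet version, here is a numerical sketch on a hypothetical three-dimensional example of our choosing, absorbed on the facet $x_1=x_2=0$: we extract $\widetilde R$, $\widetilde \Sigma$, $\widetilde \mu$ and verify the two defining relations for $\widetilde a$.

```python
# A hypothetical 3-d example of ours, for absorption on the facet
# x1 = x2 = 0: the submatrix Rtilde (indices 1, 2) is singular and not
# S, while the principal submatrices not containing Rtilde (index sets
# {1}, {2}, {3}, {1,3}, {2,3}) are completely-S; the drift is positive.
R = [[1.0, -1.0, 0.2],
     [-1.0, 1.0, 0.2],
     [0.2, 0.2, 1.0]]
Sigma = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
mu = [1.0, 1.0, 1.0]

idx = [0, 1]                                    # facet indices i1, i2
Rt = [[R[i][j] for j in idx] for i in idx]      # Rtilde
St = [[Sigma[i][j] for j in idx] for i in idx]  # Sigmatilde
mut = [mu[i] for i in idx]                      # mutilde

assert Rt[0][0] * Rt[1][1] - Rt[0][1] * Rt[1][0] == 0.0  # det Rtilde = 0

# As in the main theorem: take a' > 0 with a' Rtilde = 0 and rescale.
ap = [1.0, 1.0]
t = (-sum(ap[i] * mut[i] for i in range(2))
     / sum(ap[i] * St[i][j] * ap[j] for i in range(2) for j in range(2)))
at = [t * v for v in ap]                        # atilde = (-1, -1)

atR = [sum(at[i] * Rt[i][j] for i in range(2)) for j in range(2)]
residual = (sum(at[i] * St[i][j] * at[j] for i in range(2) for j in range(2))
            + sum(at[i] * mut[i] for i in range(2)))
print(at, atR, residual)  # atilde Rtilde = (0, 0) and residual = 0
```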
\subsection*{Acknowledgments}
We are grateful to Philip Ernst and to John Michael Harrison for very interesting discussions about topics related to this article. We thank an anonymous referee for very useful remarks and suggestions.
\section{Introduction and main results}
\subsection*{Reflected Brownian motion in orthants}
Reflected Brownian motion (RBM) in orthants $\mathbb R_+^d$ is a fundamental stochastic process. Starting from the eighties, it has been studied in depth, with focus on its definition and semimartingale properties \cite{VaWi-85,Wi-85b,Wi-95}, its recurrence or transience \cite{Wi-85a,HoRo-93,Ch-96,BrDaHa-10,Br-11}, the possible particular (e.g., product) form of its stationary distribution \cite{HaWi-87,DiMo-09}, the asymptotics of its stationary distribution \cite{HaHa-09,DaMi-11}, its Lyapunov functions \cite{DuWi-94,Sa-17}, its links with other stochastic processes \cite{LG-87,Du-04,Le-17}, its use to approximate large queuing networks \cite{Ha-78,Fo-84,BaFa-87}, numerical methods to compute the stationary distribution \cite{DaHa-92}, links with complex analysis \cite{Fo-84,BaFa-87,FrRa-19,BMEPFrHaRa-21}, PDEs \cite{HaRe-81a}, etc. The RBM is characterized by a covariance matrix $\Sigma$, a drift vector $\mu$ and a reflection matrix $R$. We will provide in Section~\ref{sec:absorption} a precise definition. While $\Sigma$ and $\mu$ correspond to the Brownian behavior of the process in the interior of the cone, the matrix $R$ describes how the process is reflected on the boundary faces of the orthant. In the semimartingale case, RBM admits a simple description using local times on orthant faces, see~\eqref{eq:RBM_semimart}.
\subsection*{Dimensions $1$ and $2$}
The techniques to study RBM in an orthant very heavily depend on the dimension. In dimension $1$, RBM with zero drift in the positive half-line $\mathbb R_+$ is equal, in distribution, to the absolute value of a standard Brownian motion, via the classical Tanaka formula; if the drift is non-zero, the RBM in $\mathbb R_+$ is connected to the so-called bang-bang process \cite{Sh-81}. Most of the computations can be performed explicitly, using closed-form expressions for the transition kernel.
The case of dimension $2$ is historically the one which attracted the most attention, and it is now well understood. Thanks to a simple linear transform, RBM in a quadrant with an arbitrary covariance matrix is equivalent to RBM in a wedge with identity covariance matrix, see~\cite[Appendix]{FrRa-19}. The very first question is to characterize the parameters of the RBM (opening of the cone $\beta$ and reflection angles $\delta,\varepsilon$, see Figure~\ref{fig:dim-2}) leading to a semimartingale RBM, as tools from stochastic calculus then become available. The condition takes the form $\alpha<1$, see \cite{Wi-85a}, with
\begin{equation}
\label{eq:def_alpha}
\alpha=\frac{\delta+\varepsilon-\pi}{\beta}.
\end{equation}
As a second step, conditions for transience and recurrence were derived, see \cite{HoRo-93,Wi-85a}. Complex analysis techniques prove to be quite efficient in dimension $2$, see \cite{BaFa-87,BMEPFrHaRa-21}. In particular, this method leads to explicit expressions for the Laplace transforms of quantities of interest (stationary distribution in the recurrent case \cite{FrRa-19,BMEPFrHaRa-21}, Green functions in the transient case \cite{Fr-20}, escape and absorption probabilities \cite{ErFrHu-20,FoFrIv-22}).
\subsection*{Higher dimension}
As opposed to the previous cases, the case $d>2$ is much more mysterious. However, necessary and sufficient conditions for the process to be a semimartingale are known, and read as follows: denote the reflection matrix by
\begin{equation}
\label{eq:def_reflection_matrix}
R=\begin{pmatrix}
1 & r_{12} & \dots & r_{1d} \\
r_{21} & 1 & \dots & r_{2d} \\
\vdots & \vdots & \ddots & \vdots \\
r_{d1} & r_{d2} & \dots & 1
\end{pmatrix}.
\end{equation}
The column vector
\begin{equation}
\label{eq:def_reflection_vector}
R_j=\begin{pmatrix}
r_{1j} \\
\vdots \\
r_{d j}
\end{pmatrix}
\end{equation}
represents the reflection vector on the orthant face $x_j=0$. Then the RBM is a semimartingale if and only if the matrix $R$ is completely-$\mathcal S$, in the following sense, see~\cite{ReWi-88,TaWi-93}.
By definition, a principal sub-matrix of $R$ is any matrix of the form $(r_{ij})_{(i,j)\in I^2}$, where $I$ is a non-empty subset of $\{1,\ldots,d\}$, possibly equal to $\{1,\ldots,d\}$. If $x$ is a vector in $\mathbb R^d$, we will write $x> 0$ (resp.\ $x\geqslant 0$) to mean that all its coordinates are positive (resp.\ non-negative). We define $x<0$ and $x\leqslant 0$ in the same way. The definition extends to matrices.
\begin{definition}[$\mathcal S$-matrix]
A square matrix $R$ is an $\mathcal{S}$-matrix if there exists $x \geqslant 0$ such that
$Rx > 0$. Moreover, $R$ is completely-$\mathcal{S}$ if all its principal sub-matrices are $\mathcal{S}$-matrices.
\end{definition}
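For small matrices, the $\mathcal S$ property can be explored numerically: a finite grid search over nonnegative vectors may exhibit a witness $x\geqslant 0$ with $Rx>0$ (certifying the property), while an unsuccessful search only suggests that the matrix is not $\mathcal S$. Both matrices below are illustrative examples of ours; the second one is provably not $\mathcal S$, since the two coordinates of $Rx$ sum to zero.

```python
from itertools import product

def s_witness(R, grid=(0.0, 0.5, 1.0, 2.0, 4.0)):
    """Search a finite grid of nonnegative vectors x for R x > 0.

    A returned witness certifies that R is an S-matrix; returning None
    only *suggests* that R is not S (the grid is finite)."""
    d = len(R)
    for x in product(grid, repeat=d):
        Rx = [sum(R[i][j] * x[j] for j in range(d)) for i in range(d)]
        if all(v > 0 for v in Rx):
            return list(x)
    return None

identity = [[1.0, 0.0], [0.0, 1.0]]
R = [[1.0, -1.0], [-1.0, 1.0]]   # provably not S: (Rx)_1 + (Rx)_2 = 0

print(s_witness(identity))  # some positive witness is found
print(s_witness(R))         # None
```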
Apart from the semimartingale property, very little is known about multidimensional RBM. In particular, necessary and sufficient conditions for transience or recurrence are not yet fully known in the general case, even though some conditions are available under additional hypotheses on $R$ \cite{HaWi-87bis,Ch-96} or in dimension $3$ \cite{ElBeYa-00,BrDaHa-10}. For example, if $R$ is assumed to be a non-singular $\mathcal{M}$-matrix (which means that $R$ is an $\mathcal{S}$-matrix whose off-diagonal entries are all non-positive), then $R^{-1}\mu <0$ is a necessary and sufficient condition for positive recurrence. Moreover, contrary to the two-dimensional case, no explicit expressions are in general available for quantities of interest such as the stationary distribution.
\begin{figure}
\includegraphics[scale=0.7]{rebondwedge1.pdf}
\caption{Wedge and angles of reflection.}
\label{fig:dim-2}
\end{figure}
\subsection*{The historical skew symmetry condition}
The only notable and exceptional case, in which everything is known and behaves smoothly, is the so-called skew symmetric case, as discovered by Harrison \cite{Ha-78} in dimension $2$, and Harrison and Williams \cite{HaWi-87} in arbitrary dimension. They prove that the RBM stationary distribution has a remarkable product form
\begin{equation}
\label{eq:parameters_product_form}
\pi(x_1,\ldots, x_d) = c_1\cdots c_d\exp(-c_1x_1-\cdots -c_dx_d)
\end{equation}
if and only if the following relation between the covariance and reflection matrices holds:
\begin{equation}
\label{eq:skew_symmetry}
2\Sigma = R\cdot \diag \Sigma + \diag \Sigma \cdot R^\top.
\end{equation}
In the latter case, the stationary distribution admits the exponential product form given by \eqref{eq:parameters_product_form}, with parameters equal to
\begin{equation*}
(c_1,\ldots,c_d)^\top = -2\cdot (\diag \Sigma)^{-1} \cdot R^{-1} \cdot \mu,
\end{equation*}
with $\mu$ denoting the drift vector.
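As an illustration of the two displayed formulas, the following sketch uses hypothetical two-dimensional parameters of our choosing, with $\Sigma = \textnormal{Id}$ so that the skew symmetry condition reduces to $r_{12}+r_{21}=0$, and computes the exponents $c_i$ of the product-form stationary distribution.

```python
# Hypothetical 2-d skew symmetric example (ours): Sigma = Id, so the
# skew symmetry condition 2 Sigma = R diag(Sigma) + diag(Sigma) R^T
# reduces to r12 + r21 = 0; mu is chosen so that R^{-1} mu < 0.
Sigma = [[1.0, 0.0], [0.0, 1.0]]
R = [[1.0, 0.5], [-0.5, 1.0]]
mu = [-1.0, -1.0]

# Check the skew symmetry relation (diag(Sigma) = Id here).
assert all(abs(2 * Sigma[i][j] - (R[i][j] + R[j][i])) < 1e-12
           for i in range(2) for j in range(2))

# Hand-rolled 2x2 inverse of R.
det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
Rinv = [[R[1][1] / det, -R[0][1] / det],
        [-R[1][0] / det, R[0][0] / det]]
Rinv_mu = [sum(Rinv[i][j] * mu[j] for j in range(2)) for i in range(2)]

# c = -2 (diag Sigma)^{-1} R^{-1} mu, and diag(Sigma) = Id here.
c = [-2 * v for v in Rinv_mu]
print(Rinv_mu, c)  # R^{-1} mu = (-0.4, -1.2) < 0 and c = (0.8, 2.4) > 0
```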
In dimension 2, if we translate this model from the quadrant to a wedge, condition \eqref{eq:skew_symmetry} is equivalent to $\alpha=0$, see \cite[Sec.~5.2]{FrRa-19} and our Figure~\ref{fig2:dim-2}.
Models having this skew symmetry are very popular, as they offer the possibility of computing the stationary distribution in closed form. No generalization of the skew symmetry is known, except in dimension $2$, where according to \cite{DiMo-09}, the stationary distribution is a sum of $n\geq1$ exponential terms as in \eqref{eq:parameters_product_form} (with suitable normalization) if and only if $\alpha=-n$, where the parameter $\alpha$ is as in \eqref{eq:def_alpha}. The recent article \cite{BMEPFrHaRa-21} goes much further, generalizing this result again and finding new conditions on $\alpha$ under which the density of the stationary distribution admits simplified expressions.
The concept of skew symmetry has been explored in other cases than orthants, see for example \cite{Wi-87,OCOr-14}.
\begin{figure}
\includegraphics[scale=1.3]{rebondspeciaux1.png}
\hspace{1cm}
\includegraphics[scale=1.3]{rebondspeciaux2.png}
\caption{On the left, the standard skew symmetry condition in a wedge, corresponding to the condition $\alpha=0 $; on the right, the dual skew symmetry condition $\alpha=1 $.}
\label{fig2:dim-2}
\end{figure}
\subsection*{Our approach and contributions}
In this paper, we will \underline{not} work under the completely-$\mathcal S$ hypothesis. More precisely, we will assume that:
\begin{assumption}
\label{as:quasi_comp-S-1}
The reflection matrix $R$ is not $\mathcal S$.
\end{assumption}
\begin{assumption}
\label{as:quasi_comp-S-2}
All principal, strict sub-matrices of $R$ are completely-$\mathcal S$.
\end{assumption}
Before going further, observe that the appearance of $\mathcal S$-matrices is very natural in the present context. Indeed, $R$ is an $\mathcal S$-matrix if and only if there exists a convex combination of the reflection vectors which belongs to the interior of the orthant. Such a condition would allow us to define the process as a semimartingale after the time of hitting the origin. Similarly, the fact that a given principal sub-matrix of $R$ is $\mathcal S$ translates into the property that it is possible to define the process as a semimartingale after its visits to the corresponding face.
Therefore, as we shall prove, the probabilistic counterpart of Assumptions~\ref{as:quasi_comp-S-1} and \ref{as:quasi_comp-S-2} is that we can define the process $(Z_t)_{t\geq0}$ as a semimartingale before time
\begin{equation}
\label{eq:def_absorption_time}
T:=\inf \{t>0: Z_t=0 \}\leq \infty,
\end{equation}
but not for $t\geq T$. For this reason, we will call $T$ in \eqref{eq:def_absorption_time} the absorption time:
if the process hits the apex of the cone, then $T<\infty$ and we will say that the process is absorbed at the origin. Indeed, because of Assumption~\ref{as:quasi_comp-S-1}, there is no convex combination of the reflection vectors belonging to the interior of the orthant, and consequently, we cannot define the process as a semimartingale after time $T$. However, our process is a semimartingale on the random time interval $[0,T]$; this will be proved in Proposition~\ref{prop:existence_ASRBM}.
We will also assume that:
\begin{assumption}
\label{as:drift>0}
The drift of the RBM is positive, that is, all coordinates of $\mu$ are positive.
\end{assumption}
Under Assumptions~\ref{as:quasi_comp-S-1}, \ref{as:quasi_comp-S-2} and \ref{as:drift>0}, our process exhibits the following dichotomy: either it hits the origin of the cone in finite time, i.e., $T<\infty$, or it goes to infinity (in the direction of the drift) before hitting the apex, i.e., $T=\infty$ and $\vert Z_t\vert \to\infty$ as $t\to\infty$. See Figure~\ref{fig:abs-esc}. We will prove this dichotomy in Proposition~\ref{prop:dichotomy}.
This leads us to ask the following questions: what is the absorption probability
\begin{equation}
\label{eq:absorption_probability}
f(x)=\mathbb P_x[T<\infty]?
\end{equation}
Equivalently, what is the escape probability
\begin{equation*}
\mathbb P_x[T=\infty]=1-\mathbb P_x[T<\infty]=\mathbb{P}_x [\vert Z_t\vert\to \infty \text{ as } t\to\infty]?
\end{equation*}
These questions are not only of theoretical nature: they also admit natural interpretations in population biology problems, in terms of extinction times of multitype populations \cite{LaRa-13}, or in risk theory, in terms of ruin of companies that collaborate to cover their mutual deficits \cite{AlAzMu-17,BaBoReWi-14,IvBo-15}.
\begin{figure}
\includegraphics[scale=0.7]{absorption2.pdf}
\hspace{1cm}
\includegraphics[scale=0.7]{escape2.pdf}
\caption{Two examples of paths of the process $(Z_t)_{t\geq0}$, with starting point $x$ marked in red. On the left, we have $T<\infty$, meaning that the process is absorbed in finite time at the apex of the cone. On the right, the process seems to escape to infinity, meaning that $T=\infty$.}
\label{fig:abs-esc}
\end{figure}
Because of its somewhat dual nature, the problem of computing the absorption (or escape) probability is a priori as difficult as the problem of computing the stationary distribution in the semimartingale, recurrent case. Therefore, a natural question is to find an analogue in this context of the skew symmetry condition \cite{Ha-78,HaWi-87}, recalled here in \eqref{eq:parameters_product_form} and \eqref{eq:skew_symmetry}. The main result of the article is given in Theorem~\ref{thm:main} below. It is stated under four assumptions; while the first three have already been introduced, the final one, Assumption~\ref{as:neumann}, is of a more technical nature and will be presented in Section~\ref{sec:PDE}. We conjecture that Assumption~\ref{as:neumann} is always true. For $x=(x_1, \ldots , x_d)\in \mathbb{R}_+^d$, $f(x)=\mathbb P_x[T<\infty]$ denotes the absorption probability~\eqref{eq:absorption_probability}.
\begin{theorem}[Dual skew symmetry in an orthant]
\label{thm:main}
Under Assumptions~\ref{as:quasi_comp-S-1}, \ref{as:quasi_comp-S-2}, \ref{as:drift>0} and \ref{as:neumann},
the following statements are equivalent:
\begin{enumerate}[label={\rm(\roman{*})},ref={\rm(\roman{*})}]
\item\label{thm:main_it1}The absorption probability has a product form, i.e., there exist functions $f_1,\ldots ,f_d$ such that
\begin{equation*}
f(x)=f_1(x_1)f_2(x_2) \cdots f_d(x_d).
\end{equation*}
\item\label{thm:main_it2}The absorption probability is exponential, i.e., there exists $a\in\mathbb{R}^d\setminus \{0\}$ such that
\begin{equation*}
f(x)= \exp(a\cdot x).
\end{equation*}
\item\label{thm:main_it3}The reflection vectors $R_1,\ldots,R_d$ defined in \eqref{eq:def_reflection_matrix} and \eqref{eq:def_reflection_vector} are coplanar, that is,
\begin{equation*}
\det R =0.
\end{equation*}
\end{enumerate}
When these properties are satisfied, the vector $a=(a_1,\ldots,a_d)$ in \ref{thm:main_it2} has negative coordinates and is the unique non-zero vector such that
\begin{equation}
\label{eq:a}
a R =0\quad \text{and}\quad a \Sigma \cdot a +a \mu=0.
\end{equation}
\end{theorem}
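The characterization \eqref{eq:a} is easy to verify numerically. The sketch below uses a hypothetical two-dimensional example of our choosing, with $\det R=0$, positive drift and $R$ not $\mathcal S$; granting the technical Assumption~\ref{as:neumann}, the theorem then gives $f(x)=e^{-x_1-x_2}$ for these parameters.

```python
# Illustrative 2-d parameters of ours satisfying the dual skew symmetry
# condition det R = 0, with positive drift and R not an S-matrix.
Sigma = [[1.0, 0.0], [0.0, 1.0]]
mu = [1.0, 1.0]                      # positive drift (Assumption 3)
R = [[1.0, -1.0], [-1.0, 1.0]]      # det R = 0

det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
assert det == 0.0

# A positive row vector a' with a' R = 0 is a' = (1, 1); rescale it by
# t = -(a' mu) / (a' Sigma a') to enforce a Sigma . a + a mu = 0.
ap = [1.0, 1.0]
ap_mu = sum(ap[i] * mu[i] for i in range(2))
ap_S_ap = sum(ap[i] * Sigma[i][j] * ap[j] for i in range(2) for j in range(2))
t = -ap_mu / ap_S_ap
a = [t * v for v in ap]              # a = (-1, -1) < 0

aR = [sum(a[i] * R[i][j] for i in range(2)) for j in range(2)]
quad = sum(a[i] * Sigma[i][j] * a[j] for i in range(2) for j in range(2))
lin = sum(a[i] * mu[i] for i in range(2))
print(a, aR, quad + lin)  # a R = (0, 0) and a Sigma . a + a mu = 0
```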
We refer to Figures~\ref{fig2:dim-2} and \ref{fig:dim-3} for a geometric illustration of the condition $\det R =0$ appearing in \ref{thm:main_it3}. See Figure~\ref{fig:ellipsoide} for a geometric illustration of the exponential decay rate $a$ in \eqref{eq:a}. When the parameters satisfy the assumptions (and conclusions) of Theorem~\ref{thm:main}, we will say that the model satisfies the \textit{dual skew symmetry} condition. This terminology will be explained in more detail in Remark~\ref{rem:dual}. In the case of dimension $2$, Theorem~\ref{thm:main} is proved in \cite{ErFrHu-20}. Assumption~\ref{as:neumann} will be discussed in Remark~\ref{rem:as4}. Note that the proof of \ref{thm:main_it3}$\Rightarrow$\ref{thm:main_it2}$\Rightarrow$\ref{thm:main_it1} does not use Assumption~\ref{as:neumann}.
\begin{figure}
\includegraphics[scale=0.6]{dual3dcapture1.JPG}
\caption{Condition $\det R =0$ in a $3$-dimensional orthant: the reflection vectors $R_1$, $R_2$ and $R_3$ are coplanar.}
\label{fig:dim-3}
\end{figure}
\begin{figure}
\includegraphics[clip=true,trim=0cm 1cm 0cm 0cm,scale=1.3]{ellipsoide.png}
\caption{In red: the ellipsoid with equation $x\Sigma \cdot x + x\mu =0$; in blue: $\ker R^\top$ (one-dimensional by Lemma~\ref{lemme2}); in green: the exponential decay rate $a$.}
\label{fig:ellipsoide}
\end{figure}
\subsection*{Structure of the paper}
\begin{itemize}
\item Section \ref{sec:absorption}: We define the process properly and show some of its pathwise properties. In particular, Proposition~\ref{prop:dichotomy} shows the dichotomy behavior (absorption vs.\ escape to infinity).
\item Section \ref{sec:PDE}: We state and prove a PDE for the density of the absorption probability (Proposition~\ref{prop:PDE}). This PDE is dual to the one satisfied by the stationary distribution in the recurrent case.
\item Section \ref{sec:skew}: We provide a proof of our main Theorem~\ref{thm:main}.
\item Section \ref{sec:gen}: We propose a generalization of Theorem~\ref{thm:main} with absorption on facets, not necessarily the origin.
\end{itemize}
\section{Definition and first properties of the absorbed reflected Brownian motion}
\label{sec:absorption}
\subsection*{Existence and definition}
Let $(W_t)_{t\geq0}$ be a $d$-dimensional Brownian motion of covariance matrix $\Sigma$. Let $\mu \in\mathbb{R}^d$ be a drift, and let $R$ be a $d$-dimensional square matrix \eqref{eq:def_reflection_matrix} with coefficients $1$ on the diagonal.
\begin{proposition}[Existence of an absorbed SRBM]
\label{prop:existence_ASRBM}
Under Assumption \ref{as:quasi_comp-S-2}, there exists an absorbed SRBM in the orthant, i.e., a semimartingale defined up to the absorption time $T\leq\infty$ as in \eqref{eq:def_absorption_time} and such that for all $t \leqslant T$,
\begin{equation}
\label{eq:RBM_semimart}
Z_t = x +W_t+\mu t + R L_t,
\end{equation}
where $L_t$ is a vector whose $i$th coordinate $L_t^i$ is a continuous, non-decreasing process starting from $0$, which increases only when $Z_t^i = 0$; it is called the local time on the corresponding face of the orthant.
\end{proposition}
Under the additional hypothesis that $R$ is completely-$\mathcal S$, Proposition~\ref{prop:existence_ASRBM} is most classical: in this case, the RBM is well defined as the semimartingale \eqref{eq:RBM_semimart}, actually for any $t\in[0,\infty)$. Our contribution here is to prove that if $R$ is not an $\mathcal S$-matrix (our Assumption~\ref{as:quasi_comp-S-1}) and is therefore not completely-$\mathcal S$, then it is still possible to define the RBM as a semimartingale on the time interval $[0,T]$.
\begin{proof}
Although Proposition~\ref{prop:existence_ASRBM} is not formally proved in Taylor's PhD thesis \cite{Ta-90}, all the necessary tools may be found there. More precisely, Taylor proves that when $R$ is completely-$\mathcal S$ (i.e., our Assumption~\ref{as:quasi_comp-S-2} plus the fact that $R$ is $\mathcal S$), then the RBM in the orthant $\mathbb R_+^d$ exists as a semimartingale globally on $[0,\infty)$. The proof in \cite{Ta-90} is split into two parts:
\begin{itemize}
\item First, \cite[Chap.~4]{Ta-90} shows that the SRBM exists on $[0,T]$, with $T$ defined in \eqref{eq:def_absorption_time}. The fact that $R$ is an $\mathcal S$-matrix is nowhere used in this part of the proof: the only hypothesis needed is that all strict principal sub-matrices are completely-$\mathcal S$ (our Assumption~\ref{as:quasi_comp-S-2}).
\item As a second step, in \cite[Chap.~5]{Ta-90} (see in particular her Lemma~5.3), Taylor proves that if $R$ is an $\mathcal S$-matrix, then it is possible for the process started at the origin to escape the origin and to be well defined as a semimartingale.
\end{itemize}
The first part of her arguments alone readily entails our Proposition~\ref{prop:existence_ASRBM}.
\end{proof}
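To make Assumptions~\ref{as:quasi_comp-S-1} and \ref{as:quasi_comp-S-2} more concrete, the following two-dimensional computation (an illustration added here for the reader's convenience, not part of the original argument) characterizes the admissible reflection matrices when $d=2$.

```latex
% Two-dimensional illustration (added; elementary computation).
For $d=2$, write
\begin{equation*}
R=\begin{pmatrix} 1 & r_{12}\\ r_{21} & 1\end{pmatrix}.
\end{equation*}
The strict principal sub-matrices are the $1\times 1$ matrices $(1)$, which are
trivially completely-$\mathcal S$, so Assumption~\ref{as:quasi_comp-S-2} always
holds. If $r_{12}\geqslant 0$, the vector $x=(1,t)^\top$ with $t>\max(0,-r_{21})$
satisfies $Rx>0$, so $R$ is an $\mathcal S$-matrix; the case $r_{21}\geqslant 0$
is symmetric. If $r_{12}<0$ and $r_{21}<0$, then $Rx>0$ with $x\geqslant 0$
forces $x_1>\vert r_{12}\vert x_2$ and $x_2>\vert r_{21}\vert x_1$, hence
$r_{12}r_{21}<1$; conversely, if $r_{12}r_{21}<1$, one may take $x_2=1$ and
$x_1\in(\vert r_{12}\vert, 1/\vert r_{21}\vert)$. Therefore
Assumption~\ref{as:quasi_comp-S-1} holds exactly when
\begin{equation*}
r_{12}<0,\qquad r_{21}<0,\qquad r_{12}\,r_{21}\geqslant 1.
\end{equation*}
```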
\subsection*{Absorption and escape in asymptotic regimes}
We first prove two results which are intuitively clear, namely, that the absorption probability tends to one (resp.\ zero)\ when the starting point approaches the origin (resp.\ infinity), see Proposition~\ref{prop:absorption_origin} (resp.\ Proposition~\ref{prop:absorption_infinity}). Then, we will prove in Proposition~\ref{prop:dichotomy} the dichotomy already mentioned: either the process is absorbed in finite time, or it escapes to infinity as time goes to infinity. By convention, we will write $x\to 0$ (resp.\ $x\to\infty$) to mean that $\vert x\vert\to0$ (resp.\ $\vert x\vert\to\infty$) in the cone.
\begin{proposition}[Absorption starting near the origin]
\label{prop:absorption_origin}
One has
\begin{equation*}
\lim_{x\to 0}\mathbb{P}_x [T<\infty] = 1.
\end{equation*}
\end{proposition}
\begin{proposition}[Absorption starting near infinity]
\label{prop:absorption_infinity}
One has
\begin{equation*}
\lim_{x \to \infty}\mathbb{P}_x [T<\infty] = 0.
\end{equation*}
\end{proposition}
\begin{proposition}[Complementarity of escape and absorption]
\label{prop:dichotomy}
When $T=\infty$, then almost surely $\lim_{t\to\infty} \vert Z_t\vert = \infty$, i.e.,
\begin{equation*}
\mathbb{P}_x \left[ \left. \lim_{t\to\infty} \vert Z_t\vert = \infty \right\vert T=\infty \right] = 1.
\end{equation*}
This implies that $\mathbb{P}_x[T=\infty]=\mathbb{P}_x[\vert Z_{t\wedge T}\vert \to \infty ]$.
\end{proposition}
\begin{proof}[Proof of Proposition~\ref{prop:absorption_origin}]
Let us define $\tau_x = \inf \{t>0 : x+W_t+\mu t <0 \}$ (by convention $\inf \emptyset =\infty$), and consider the set
\begin{equation*}
\{ \tau_x <\infty \} = \{ \exists t>0 \text{ such that } x+W_t+\mu t <0 \}.
\end{equation*}
The proof consists in two steps. We first prove that $\{ \tau_x <\infty \} \subset \{ T<\infty \}$ and then show that $\lim_{x\to 0}\mathbb{P}[ \tau_x <\infty ] =1$.
\begin{enumerate}[label={\rm Step~\arabic{*}.},ref={\rm Step~\arabic{*}}]
\item\label{it:step1}
Assume that $\tau_x < \infty$ and fix a $t<\infty$ such that
$x+W_t+\mu t<0$. We are going to show that $T\leqslant t$.
We proceed by contradiction and assume that $t<T$. Then from \eqref{eq:RBM_semimart} we get that
\begin{equation*}
R L_t = Z_t - x-W_t-\mu t >0.
\end{equation*}
The last inequality comes from the fact that $Z_t\geqslant 0$ and that $x+W_t+\mu t <0$.
Remembering that $L_t\geqslant 0$, the fact that $ R L_t>0$ implies that $R$ is an $\mathcal{S}$-matrix, which contradicts Assumption~\ref{as:quasi_comp-S-1}.
We conclude that $T\leqslant t<\infty$. We have thus shown that $\{ \tau_x <\infty \} \subset \{ T<\infty \}$.
\item\label{it:step2} By Blumenthal's zero--one law, we have
\begin{equation*}
\mathbb{P}[ \tau_0=0 ] =1,
\end{equation*}
since
\begin{equation*}
\{\tau_0 =0\}=\bigcap_{t>0} \left\{ \inf_{s \leqslant t} (W_s+\mu s) <0 \right\} \in \mathcal{F}_{0+}= \bigcap_{t>0} \mathcal{F}_t,
\end{equation*}
where $\mathcal{F}_t =\sigma \{ W_s , s\leqslant t \}$.
This implies that $\mathbb{P}[\tau_0 <\infty ]=1$. We deduce that almost surely, there exists $t_0$ such that $W_{t_0}+\mu t_0<0$, and then for all $x<-W_{t_0}-\mu t_0$ we have $\tau_x <\infty$. Then
$\mathds{1}_{\{\tau_x < \infty\}}\underset{x\to 0}{\longrightarrow} 1 \text{ a.s.}$, and by dominated convergence we have
\begin{equation*}
\mathbb{P}[ \tau_x <\infty ]=\mathbb{E}[\mathds{1}_{\{\tau_x < \infty\}}] \underset{x\to 0}{\longrightarrow} 1.
\end{equation*}
\end{enumerate}
Thanks to \ref{it:step1} and \ref{it:step2}, we conclude that $\mathbb{P}_x[T<\infty]\geqslant \mathbb{P}[ \tau_x <\infty ]$, and therefore $\lim_{x\to 0}\mathbb{P}_x [T<\infty] = 1$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:absorption_infinity}]
Introduce the event
\begin{equation*}
B_x = \{\forall t\geqslant 0,\, x+W_t+\mu t >0\}.
\end{equation*}
On the event $B_x$, we have
$Z_t=x+W_t+\mu t$ for all $t\geqslant 0$ (the process never touches the boundary of the orthant, so there is no reflection on the boundary). We deduce that $Z_t>0$ for all $t\geqslant 0$, and hence that $B_x \subset \{ T=\infty \}$. Therefore,
\begin{equation*}
\mathbb P[B_x]\leqslant \mathbb{P}_x [T=\infty].
\end{equation*}
To conclude, we are going to show that $\lim_{x\to \infty} \mathbb{P}[B_x]=1$. This comes from the fact that a.s.\ $\inf_{t>0} \{ W_t +\mu t \}> -\infty$, since $\mu>0$ by Assumption~\ref{as:drift>0}. For all $x> -\inf_{t>0} \{ W_t +\mu t \}$, we have $x+W_t+\mu t >0$ for all $t\geqslant 0$. We deduce that
$
\mathds{1}_{B_x}\underset{x\to \infty}{\longrightarrow} 1 \text{ a.s.}
$,
and by dominated convergence, we have
\begin{equation*}
\mathbb{P}[ B_x ]=\mathbb{E}[\mathds{1}_{B_x}] \underset{x\to \infty}{\longrightarrow} 1.\qedhere
\end{equation*}
\end{proof}
Before proving Proposition~\ref{prop:dichotomy}, we first recall some useful definitions and properties related to recurrence and transience of Markov processes. All of them are classical, but stating them clearly here will facilitate our argument.
These results and their proofs may be found in~\cite{Az-66}.
Consider a continuous, strong Feller Markov process $X_t$ on a locally
compact state space $E$ with countable basis. For $V\subset E$, let us define $\tau_V = \inf \{t > 0 : X_t \in V \}$.
\begin{itemize}
\item The point $x\in E$ is said to be recurrent if for all neighbourhoods $V$ of $x$,
\begin{equation*}
\mathbb{P}_x[\limsup_{t\to\infty} \mathds{1}_V (X_t)=1]=1.
\end{equation*}
\item If a point is not recurrent, it is said to be transient. In this case, by \cite[Thm.~III~1]{Az-66}, there exists a neighbourhood $V$ of $x$ such that $\mathbb{P}_x[\limsup_{t\to\infty} \mathds{1}_V (X_t)=1]=0$.
\item The point $x$ is said to lead to $y$ if for all neighbourhoods $V$ of $y$, we have $\mathbb{P}_x [\tau_V < \infty] > 0$.
The points $x$ and $y$ are said to communicate if $x$ leads to $y$ and $y$ leads to $x$. This defines an equivalence relation.
\item If two states communicate, they are both transient or both recurrent \cite[Prop.~IV~2]{Az-66}.
\item If all points are transient, then $X_t$ tends to $\infty$ as $t\to \infty$ almost surely \cite[Prop.~III~1]{Az-66}.
\end{itemize}
\begin{proof}[Proof of Proposition~\ref{prop:dichotomy}]
Define the process $(\widetilde{Z}_t)_{t\geq 0}$ as the process $(Z_t)_{t\geq 0}$ conditioned never to hit $0$ in a finite time. The transition semigroup of this new Markov process $\widetilde{Z}_t$ is defined, for $x\in \mathbb{R}_+^d \setminus \{ 0\}$ and $V \subset \mathbb{R}_+^d \setminus \{ 0\}$, by
\begin{equation*}
\mathbb{P}_x[\widetilde{Z}_t \in V]=\mathbb{P}_x[{Z}_t \in V \vert T=\infty].
\end{equation*}
All points of $\mathbb{R}_+^d \setminus \{0\}$ communicate; they thus form a single equivalence class. We deduce that they are either all transient or all recurrent, so it is enough to show that one of them is transient.
Let us take a point in the interior of $\mathbb{R}_+^d \setminus \{0\}$, for example $x=(1,\ldots, 1)$. Since $\mu >0$, by standard properties of Brownian motion we have
\begin{equation*}
\mathbb{P}[\forall t \geqslant 0,\,x+W_t+\mu t>0]>0.
\end{equation*}
In dimension one, this property directly derives from \cite[Eq.~1.2.4(1)]{BoSa-12} (on p.~252); it easily generalizes to all dimensions.
When this event of positive probability occurs, the process never touches the boundary, thus $\widetilde{Z}_t= x+W_t+\mu t \to \infty$ and $\limsup_{t\to\infty} \mathds{1}_V (\widetilde{Z}_t)=0$ for any relatively compact neighbourhood $V$ of $x$. We have thus shown that there exists a neighbourhood $V$ of $x$ such that $\mathbb{P}_x [\limsup_{t\to\infty} \mathds{1}_V (\widetilde{Z}_t)=1]<1$, which implies that $x$ is not recurrent, hence transient.
Using \cite[Prop.~III~1]{Az-66} allows us to conclude, since as recalled above, if all points are transient, then the process tends to infinity almost surely.
\end{proof}
\section{Partial differential equation for the absorption probability}
\label{sec:PDE}
In a classical way, the generator of the Brownian motion in the interior of the orthant is defined by
\begin{equation*}
\mathcal{G}f (x)= \lim_{t\to 0} \frac{\mathbb{E}_x[f(Z_t)] - f(x) }{t}= \frac{1}{2}(\nabla \cdot \Sigma \nabla f) (x) + (\mu \cdot \nabla f) (x),
\end{equation*}
where we assume that $f$ is bounded in the first equality and that $f$ is twice differentiable in the second equality. In the rest of the paper, the following assumption is made.
\begin{assumption}
\label{as:neumann}
For all continuous, bounded functions $g$, the transition semigroup
\begin{equation*}
x\mapsto P_t g(x) := \mathbb{E}_x [g(Z_{t\wedge T})]
\end{equation*}
is differentiable, and satisfies the Neumann boundary condition $R_i \cdot \nabla P_t g(x)=0$ on the $i$th face of the orthant $x_i=0$.
\end{assumption}
\begin{remark}[Plausibility of Assumption~\ref{as:neumann}]
\label{rem:as4}
Several pieces of evidence suggest that this hypothesis is true:
\begin{itemize}
\item By \cite[Cor.~3.3]{An-09}, Assumption~\ref{as:neumann} is true provided we replace $T$ by the first hitting time of the intersection of two faces, or assuming that the process does not hit the intersection of two faces.
\item As a consequence of the above, Assumption~\ref{as:neumann} is true in dimension two.
\item By \cite{DeZa-05}, Assumption~\ref{as:neumann} holds true in the particular case of orthogonal reflections.
\item Assumption~\ref{as:neumann} is stated as a conjecture in \cite[(8.2b)]{HaRe-81a}; however, the latter article does not attempt to prove these regularity questions rigorously.
\item The paper \cite{LiRa-19} shows in full generality the pathwise differentiability with respect to the starting point $x$. We believe that a way to attack the proof of Assumption~\ref{as:neumann} could be to combine the results of \cite{LiRa-19} with the computations made in the proof of \cite[Cor.~3.3]{An-09}.
\end{itemize}
\end{remark}
\begin{proposition}[Partial differential equation]
\label{prop:PDE}
Under Assumptions~\ref{as:quasi_comp-S-1}, \ref{as:quasi_comp-S-2}, \ref{as:drift>0} and \ref{as:neumann},
the absorption probability \eqref{eq:absorption_probability} is the unique function $f$ which is
\begin{itemize}
\item bounded and continuous in the interior of the orthant $\mathbb R_+^d$ and on its boundary,
\item continuously differentiable in the interior of the orthant and on its boundary (except perhaps at the corner),
\end{itemize}
and which further satisfies the PDE:
\begin{itemize}
\item $\mathcal{G} f =0$ on the orthant (harmonicity),
\item $R_i \cdot \nabla f = 0$ on the $i$th face of the orthant $x_i=0$ (Neumann boundary condition),
\item $f(0)=1$ and $\lim_{x\to \infty} f(x)=0$ (limit values).
\end{itemize}
\end{proposition}
\begin{proof}
The proof is similar to \cite[Prop.~11]{ErFrHu-20}. We start with the \textit{sufficient condition}. Dynkin's formula leads to
\begin{equation*}
\mathbb{E}_x[f(Z_{t\wedge T})] = f(x) +\mathbb{E}_x \int_0^{t\wedge T} \mathcal{G}f (Z_s) \mathrm{d}s +
\sum_{i=1}^d \mathbb{E}_x \int_0^{t\wedge T}
(R_i \cdot \nabla f) (Z_s)\,
\mathrm{d}L^i_s.
\end{equation*}
There is a technical subtlety in applying Dynkin's formula, as the latter requires $\mathcal C^2$-regularity, which is a priori not satisfied at the origin in our setting. However, we may first apply the formula with $T_n=\inf\{t>0 : \vert Z_t\vert <\frac{1}{n}\}\leqslant T$ in place of $T$, since $f$ has the desired regularity on $\mathbb R_+^d\setminus \{x:\vert x\vert<\frac{1}{n}\}$. One then concludes by letting $n\to\infty$, as $T_n$ converges increasingly to $T$.
Since $f$ is assumed to satisfy the PDE stated in Proposition~\ref{prop:PDE}, both integral terms vanish, so that the right-hand side is simply equal to $f(x)$. We further compute
\begin{equation*}
f(x) = \mathbb{E}_x[f(Z_{t\wedge T})] =
\mathbb{E}_x[f(Z_{t\wedge T})\mathds{1}_{T\leqslant t}]
+
\mathbb{E}_x[f(Z_{t\wedge T})\mathds{1}_{T> t}]
=f(0)\, \mathbb P_x[T\leqslant t]+
\mathbb{E}_x[f(Z_{t\wedge T})\mathds{1}_{T> t}].
\end{equation*}
As $t\to\infty$, the above quantity converges to
\begin{equation*}
f(0)\mathbb P_x[T<\infty] +
\mathbb{E}_x\left[\lim_{t\to\infty}f(Z_{t})\mathds{1}_{T=\infty}\right]
= \mathbb P_x[T<\infty],
\end{equation*}
where the last equality comes from $f(0)=1$ and from the fact that the second expectation vanishes: by Proposition~\ref{prop:dichotomy}, $\vert Z_{t}\vert\to\infty$ almost surely on the event $\{T=\infty\}$, so that $f(Z_t)\to 0$ by the limit value $\lim_{x\to \infty} f(x)=0$.
We immediately deduce that
$f(x)=\mathbb P_x[T<\infty]$.
We now move to the \textit{necessary condition}. We denote the absorption probability~\eqref{eq:absorption_probability} by $f$ and show that it satisfies the PDE of Proposition~\ref{prop:PDE}. Consider the event $\{ T<\infty \} \in \mathcal{F}_\infty$ and define
\begin{equation*}
M_t=\mathbb{E} [\mathds{1}_{\{ T<\infty \}} \vert \mathcal{F}_{t \wedge T}],
\end{equation*}
which is an $\mathcal{F}_t$-martingale. Observe that $M_0=f(x)$ and, by the Markov property, $M_t=f(Z_{t \wedge T})$. We deduce that $\mathbb{E}_x[f(Z_{t \wedge T})]=\mathbb{E}[M_t \vert \mathcal{F}_0]=M_0=f(x)$.
By definition of $\mathcal{G}$, we obtain that for $x$ in the interior of the orthant,
\begin{equation*}
\mathcal{G}f (x)= \lim_{t\to 0} \frac{\mathbb{E}_x[f(Z_t)] - f(x) }{t} = 0.
\end{equation*}
The Neumann boundary condition and the differentiability follow from the fact that $f(x)=\mathbb{E}_x[f(Z_{t \wedge T})]$ and from Assumption~\ref{as:neumann}.
The limit values follow from our Propositions~\ref{prop:absorption_origin} and \ref{prop:absorption_infinity}.
\end{proof}
\begin{remark}[Duality between absorption probability and stationary distribution]
\label{rem:dual}
Let us define the dual generator $
\mathcal{G}^* f (x)= \frac{1}{2}(\nabla \cdot \Sigma \nabla f) (x) - (\mu \cdot \nabla f) (x)
$
as well as the matrix $R^* = 2 \Sigma -R \ \textnormal{diag} (\Sigma )$, whose columns are denoted by $R^*_i$. In the recurrent case, the stationary distribution satisfies the following PDE, see \cite[Eq.~(8.5)]{HaRe-81a}:
\begin{itemize}
\item $\mathcal{G}^* f =0$ in the orthant,
\item $R^*_i \cdot \nabla f -2 \mu_i f= 0$ on the $i$th face of the orthant defined by $x_i=0$.
\end{itemize}
As a consequence, the absorption probability satisfies a PDE (Proposition~\ref{prop:PDE}), which is dual to the one which holds for the stationary distribution.
\end{remark}
\section{Dual skew symmetry: proof of the main result}
\label{sec:skew}
This section is devoted to the proof of Theorem~\ref{thm:main}, which establishes the dual skew symmetry condition. We first prove two technical lemmas on the reflection matrix $R$ in \eqref{eq:def_reflection_matrix}.
\begin{lemma}
\label{lemme}
If $R$ satisfies Assumptions~\ref{as:quasi_comp-S-1} and \ref{as:quasi_comp-S-2}, then for all $i$, there exists $j\neq i$ such that $r_{ij} \neq 0$.
\end{lemma}
\begin{proof}
It is enough to prove Lemma~\ref{lemme} for $i=1$, the other cases being similar.
Consider the principal submatrix $\widetilde R$ of $R$ obtained by removing the first line and the first column. This matrix is completely-$\mathcal{S}$ by Assumption~\ref{as:quasi_comp-S-2}, so that there exists $\widetilde X=(x_2,\ldots,x_d)^\top\geqslant 0$
such that $\widetilde R \widetilde X >0$. Consider now $\widetilde C_1=(r_{21},\ldots,r_{d1})^\top$, which is the first column of $R$ without its first coordinate. Let us choose $\lambda>0$ large enough such that $ \widetilde C_1 + \lambda \widetilde R \widetilde X >0$. If for all $j\neq 1$ we have $r_{1j}=0$, then for
$
X=(1,\lambda x_2,\ldots,\lambda x_d)^\top$
we would have
\begin{equation*}
RX
=
\begin{pmatrix} 1 & 0 \cdots 0 \\
\widetilde C_1 & \widetilde R \\
\end{pmatrix}
\begin{pmatrix}
1 \\
\lambda x_2 \\
\vdots \\
\lambda x_d
\end{pmatrix}
=\begin{pmatrix} 1 \\ \widetilde C_1+\lambda \widetilde R \widetilde X \end{pmatrix}>0
\end{equation*}
and then $R$ would be an $\mathcal{S}$-matrix, contradicting our Assumption~\ref{as:quasi_comp-S-1}.
\end{proof}
\begin{lemma}
\label{lemme2}
If $R$ satisfies Assumptions~\ref{as:quasi_comp-S-1} and \ref{as:quasi_comp-S-2}, and if in addition $\det R=0$, then $R$ has rank $d-1$, and there exist a positive column vector $U>0$ in $\ker R$ and a positive row vector $a>0$ such that $aR=0$.
\end{lemma}
\begin{proof}
The rank of the matrix $R$ is obviously $\leqslant d-1$, since $\det R=0$. We now show that the rank is $\geqslant d-1$.
Let $\widetilde R_j$ be the submatrix of $R$ obtained by removing the $j$th line and the $j$th column. These matrices are $\mathcal{S}$-matrices by Assumption~\ref{as:quasi_comp-S-2}, and we can choose
\begin{equation*}
\widetilde X_j=\begin{pmatrix}
\widetilde x_{1j} \\
\vdots \\
\widetilde x_{(j-1)j} \\
\widetilde x_{(j+1)j} \\
\vdots \\
\widetilde x_{dj}
\end{pmatrix} \geqslant 0
\quad \text{such that}
\quad
\widetilde R_j \widetilde X_j
=\widetilde Y_j=\begin{pmatrix}
\widetilde y_{1j} \\
\vdots \\
\widetilde y_{(j-1)j} \\
\widetilde y_{(j+1)j} \\
\vdots \\
\widetilde y_{dj}
\end{pmatrix}
>0
.
\end{equation*}
We now define the column vectors $X_j^\varepsilon=(\widetilde x_{ij})_{i=1,\ldots, d}$, setting $\widetilde x_{jj}=\varepsilon$ for some $\varepsilon>0$. We have
\begin{equation*}
R X_j^\varepsilon=Y_j^\varepsilon
=(y_{ij}^\varepsilon)_{i=1,\ldots, d},
\end{equation*}
where $y_{ij}^\varepsilon= \varepsilon r_{ij}+ \widetilde y_{ij}>0$ for $i \neq j$ and $\varepsilon>0$ small enough, and $y_{jj}^\varepsilon=\varepsilon + \widetilde L_j \widetilde X_j$, where we set
\begin{equation*}
\widetilde L_j=(r_{1j}, \ldots, r_{(j-1)j},r_{(j+1)j},\ldots ,r_{dj})
\end{equation*}
the $j$th line of $R$, with the $j$th coordinate $r_{jj}=1$ excluded. Since $R$ is not an $\mathcal{S}$-matrix by our Assumption~\ref{as:quasi_comp-S-1}, we must have $y_{jj}^\varepsilon=\varepsilon+ \widetilde L_j \widetilde X_j \leqslant 0$. We deduce that $y_{jj}^0= \widetilde L_j \widetilde X_j \leqslant -\varepsilon <0$.
Then, introducing the vectors
\begin{equation*}
X_j=\frac{1}{-y_{jj}^0 } X_j^0 \geqslant 0
\quad \text{and}
\quad
Y_j=(y_{ij})_{i=1,\ldots ,d} =R X_j,
\end{equation*}
we have
\begin{equation*}
Y_j=R X_j=\begin{pmatrix}
y_{1j} \\
\vdots \\
y_{(j-1)j} \\
- 1 \\
y_{(j+1)j} \\
\vdots \\
y_{dj}
\end{pmatrix} ,
\quad\text{where } y_{ij}=\frac{\widetilde y_{ij}}{-y_{jj}^0 } >0 \text{ for } i\neq j.
\end{equation*}
Denoting the matrix $P=(X_1, \ldots, X_d ) \geqslant 0$, we have
\begin{equation*}
-RP= \begin{pmatrix}
1 & -y_{12} & \dots & -y_{1d} \\
-y_{21} & 1 & \dots & -y_{2d} \\
\vdots & \vdots & \ddots & \vdots \\
- y_{d1} & -y_{d2} & \dots & 1
\end{pmatrix}= 2 \textnormal{Id} -T,
\text{ where } T=\begin{pmatrix}
1 & y_{12} & \dots & y_{1d} \\
y_{21} & 1 & \dots & y_{2d} \\
\vdots & \vdots & \ddots & \vdots \\
y_{d1} & y_{d2} & \dots & 1
\end{pmatrix}>0.
\end{equation*}
All coefficients of $T$ are positive. Consequently, by the Perron--Frobenius theorem, $T$ has a unique maximal eigenvalue $r$, its associated eigenspace is one-dimensional, and there exists a positive eigenvector $V$ associated with $r$. Remark that since $\det R=0$, we have $\det (2 \textnormal{Id} -T)=\det(-RP)=0$, so that $2$ is an eigenvalue of $T$. Then $r\geqslant 2$, and there are two cases to treat.
\begin{itemize}
\item Assume first that the maximal eigenvalue satisfies $r>2$. Let $V>0$ be an associated positive eigenvector, $TV=rV$. We deduce that $-RPV=2V-TV=(2-r)V$, and then $R(PV)=(r-2)V>0$, where $PV \geqslant 0$ since $P \geqslant 0$ and $ V > 0$. This shows that $R$ is an $\mathcal{S}$-matrix, which contradicts Assumption~\ref{as:quasi_comp-S-1}. So we must be in the situation where $r=2$.
\item If $r=2$ is the maximal eigenvalue of $T$, and $V>0$ is the positive eigenvector such that $TV=2V$, then $RU=0$ for $U=PV>0$. Furthermore, $\dim \ker(2\,\textnormal{Id}-T)=1$, so that $d-1=\text{rank}\, (2\,\textnormal{Id}-T)=\text{rank}\, (RP) \leqslant \text{rank}\, R$, and hence $R$ has rank $d-1$.
\end{itemize}
Left eigenspaces of $T$ are (right) eigenspaces of $T^\top$. If $a$ is such that $aR=0$, then $a(2\,\textnormal{Id}-T)=-aRP=0$, so that $a$ belongs to the left eigenspace of $T$ associated with the eigenvalue $2$. By the Perron--Frobenius theorem applied to $T^\top$, we deduce that we can choose $a>0$.
\end{proof}
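As a sanity check for Lemma~\ref{lemme2}, here is a minimal two-dimensional example (our addition, not part of the proof).

```latex
% Minimal example for Lemma lemme2 (added illustration).
Take $d=2$ and
\begin{equation*}
R=\begin{pmatrix} 1 & -1\\ -1 & 1\end{pmatrix},
\end{equation*}
so that $\det R=0$. Since $Rx=(x_1-x_2,\,x_2-x_1)^\top$ cannot have both
coordinates positive, $R$ is not an $\mathcal S$-matrix
(Assumption~\ref{as:quasi_comp-S-1}), while its strict principal sub-matrices
$(1)$ are completely-$\mathcal S$ (Assumption~\ref{as:quasi_comp-S-2}). As
predicted by the lemma, $R$ has rank $d-1=1$, the column vector
$U=(1,1)^\top>0$ spans $\ker R$, and the row vector $a=(1,1)>0$ satisfies
$aR=0$.
```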
We now prove that the probability of hitting the origin is positive for every starting point.
\begin{lemma}
For all $x\in \mathbb{R}_+^d$, $f(x) > 0$.
\label{lemma:fpositive}
\end{lemma}
\begin{proof}
By Proposition~\ref{prop:absorption_origin}, there exists a point $y_0$ in the interior of the orthant such that $f(y_0)> 0$. By continuity of $f$ (Proposition~\ref{prop:PDE}), we can find an open neighbourhood $U$ of $y_0$ such that $f(y) > 0$ for all $y \in U$. Then we conclude that
\begin{equation*}
f(x)=\mathbb{E}_x [f(Z_{t\wedge T})]= \int_{\mathbb{R}_+^d} f(y)\, \mathbb{P}_x(Z_{t\wedge T} \in \mathrm{d}y) \geqslant \int_{U} f(y)\, \mathbb{P}_x(Z_{t\wedge T} \in \mathrm{d}y) >0.
\end{equation*}
(The first equality in the previous equation has already been proved in the proof of Proposition~\ref{prop:PDE}).
\end{proof}
Let us now prove the main result.
\begin{proof}[Proof of Theorem~\ref{thm:main}]
\ref{thm:main_it1} $\Rightarrow$ \ref{thm:main_it2}:
We assume that $f(x)=f_1(x_1)\cdots f_d(x_d)$ and we denote $\partial \ln f_i = f'_i /f_i$ (note that, by Proposition~\ref{prop:PDE}, the functions $f$ and $f_i$ are differentiable, and by Lemma~\ref{lemma:fpositive}, $f_i(x_i) \neq 0$ for all $i$ and all $x_i$). On the boundary face $x_i=0$, the Neumann boundary condition of Proposition~\ref{prop:PDE} implies that
\begin{equation*}
0=\frac{R_i \cdot \nabla f}{f}=R_i \cdot
\begin{pmatrix}
\partial \ln f_1 (x_1) \\
\vdots \\
\partial \ln f_d (x_d)
\end{pmatrix}
\text{ for } x_i=0.
\end{equation*}
In particular, for all $j \neq i$, taking $x_{i'}=0$ for all $i'\neq j$, we obtain
\begin{equation*}
R_i \cdot
\begin{pmatrix}
\partial \ln f_1 (0) \\
\vdots \\
\partial \ln f_j (x_j) \\
\vdots \\
\partial \ln f_d (0)
\end{pmatrix}
=0.
\end{equation*}
We deduce that for all $i$ and $j$ such that $i \neq j$, the function
$r_{ij} \partial \ln f_j (x_j) $
is a constant, which we can compute as $- \sum_{j'\neq j} r_{ij'} \partial \ln f_{j'} (0)$. By Lemma~\ref{lemme}, for all $j$ there exists $i \neq j$ such that $r_{ij}\neq 0$.
This implies that $\partial \ln f_j (x_j) $ is constant, and hence that $f_j$ is exponential: there exists $a_j$ such that $f_j(x_j)=e^{a_j x_j}$. The limit value $\lim_{x\to\infty} f(x) =0$ implies that the vector $a=(a_1,\ldots,a_d)$ is non-zero.
\ref{thm:main_it2} $\Rightarrow$ \ref{thm:main_it1}: This implication is trivial by taking $f_i(x_i)=e^{a_i x_i}$.
\ref{thm:main_it2} $\Rightarrow$ \ref{thm:main_it3}: If $f(x)=e^{ax}$ satisfies the PDE of Proposition~\ref{prop:PDE}, then $R_i \cdot \nabla f(x)= a R_i e^{ax}=0 $ on the boundary face $x_i=0$. We obtain that $ a R_i =0$ for all $i$ and then that $a R =0$. We deduce that $\det R=0$ since $a\neq 0$.
\ref{thm:main_it3} $\Rightarrow$ \ref{thm:main_it2}: If $\det R =0$, then by Lemma~\ref{lemme2} one has $\dim \ker R=1$, and we can choose $a' \in\mathbb{R}^d$ such that $a'>0$ and $ a' R =0$. Then $a=- \frac{a'\mu}{a' \Sigma \cdot a'}\, a'<0$ is the unique non-zero vector which satisfies $aR=0$ and $a \Sigma \cdot a +a \mu =0 $. It is then easy to verify that $e^{ax}$ satisfies the PDE of Proposition~\ref{prop:PDE}; the limit value at infinity comes from the fact that $a<0$.
\end{proof}
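To illustrate Theorem~\ref{thm:main}, let us record an explicit example (our addition; the normalization of $a$ follows the proof of \ref{thm:main_it3} $\Rightarrow$ \ref{thm:main_it2} above).

```latex
% Explicit example for Theorem thm:main (added illustration).
Take $d=2$, $\Sigma=\textnormal{Id}$, a drift $\mu=(\mu_1,\mu_2)>0$ and
\begin{equation*}
R=\begin{pmatrix} 1 & -1\\ -1 & 1\end{pmatrix},
\end{equation*}
so that $\det R=0$. With $a'=(1,1)$ we have $a'R=0$, $a'\Sigma\cdot a'=2$ and
$a'\mu=\mu_1+\mu_2$, so that
\begin{equation*}
a=-\frac{a'\mu}{a'\Sigma\cdot a'}\,a'=-\frac{\mu_1+\mu_2}{2}\,(1,1)
\qquad\text{and}\qquad
f(x)=\exp\Big(-\frac{\mu_1+\mu_2}{2}\,(x_1+x_2)\Big),
\end{equation*}
which is of product (indeed exponential) form, satisfies $f(0)=1$ and tends to
$0$ at infinity.
```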
\section{A generalization of Theorem~\ref{thm:main}: absorption on a facet}
\label{sec:gen}
Theorem~\ref{thm:main} can be generalized to the case where the RBM is absorbed at a facet of the orthant, with equation
\begin{equation*}
x_{i_1}=\cdots=x_{i_k}=0,
\end{equation*}
for some fixed $k\in \{1,\ldots, d \}$. The situation where $k=d$ is the case of an absorption at the apex of the cone, which is treated in detail in the present article. For the sake of brevity and to avoid excessive technicality, we will not prove this generalization here, even though all intermediate steps in the proof may be extended.
In the general case of a facet, let us state three assumptions which generalize Assumptions~\ref{as:quasi_comp-S-1},~\ref{as:quasi_comp-S-2} and~\ref{as:drift>0}. Let us define $\widetilde R$ (resp.\ $\widetilde \Sigma$) as the principal sub-matrix of $R$ (resp.\ $\Sigma$) obtained by keeping only the $i_1$th up to the $i_k$th lines and columns.
\begin{itemize}
\item The new Assumption~\ref{as:quasi_comp-S-1} is that the reflection matrix $\widetilde R$ is not an $\mathcal{S}$-matrix.
\item The new second assumption is that all principal sub-matrices of $R$ which do not contain $\widetilde R$ are completely-$\mathcal{S}$.
\item The third assumption about the positivity of the drift $\mu>0$ remains unchanged (even though we could probably weaken this hypothesis).
\end{itemize}
Under these assumptions, we may define the reflected Brownian motion $(Z_t)_{t\geq 0}$ until time
\begin{equation*}
\widetilde T = \inf \{t>0 : Z_t^{i_1}=\cdots =Z_t^{i_k}=0 \},
\end{equation*}
where $Z^i$ stands for the $i$th coordinate of $Z$. Let us denote the absorption probability
\begin{equation*}
\widetilde f(x)=\mathbb P_x[\widetilde T<\infty].
\end{equation*}
Then Theorem~\ref{thm:main} may be extended as follows. The following assertions are equivalent:
\begin{enumerate}[label={\rm(\roman{*}')},ref={\rm(\roman{*}')}]
\item $\widetilde f$ has a product form.
\item $\widetilde f$ is exponential, i.e., $\widetilde f(x)= \exp (a_{i_1}x_{i_1}+ \cdots +a_{i_k}x_{i_k} )$ with $a_{i_j}\neq 0$.
\item $\det \widetilde R =0$.
\end{enumerate}
In this case, the vector $\widetilde a=(a_{i_1}, \ldots ,a_{i_k} )$ is negative and is the unique non-zero vector such that $\widetilde a \widetilde R =0$ and $\widetilde a \widetilde\Sigma \cdot \widetilde a + \widetilde a \widetilde\mu=0$, where we defined the column vector $\widetilde \mu =({\mu_{i_j}})_{j=1,\ldots , k}$.
\subsection*{Acknowledgments}
We are grateful to Philip Ernst and to John Michael Harrison for very interesting discussions about topics related to this article. We thank an anonymous referee for very useful remarks and suggestions.
\small
\section{Introduction}
\medskip
In this paper, we study infinite tensor products of some algebraic structures.
In the literature, infinite tensor products are often defined as inductive limits of finite tensor products (see e.g. \cite{Black}, \cite{Bru}, \cite{Flor}, \cite{GN}, \cite{Pow}).
As far as we know, the only alternative approach so far is the one by J. von Neumann, concerning \emph{infinite direct products of Hilbert spaces} (see \cite{vN}).
Some authors used this approach to define infinite tensor products of other functional analytic structures (see e.g. \cite{BC}, \cite{Gill} and \cite{Gui}).
The work of von Neumann attracted the attention of many physicists who are interested in ``quantum mechanics with infinite degrees of freedom'', as well as mathematicians whose interest is in the field of operator algebras (see e.g. \cite{AN}, \cite{BeC}, \cite{BC}, \cite{EK}, \cite{GS}, \cite{Sto71}, \cite{TW}).
\medskip
However, von Neumann's approach is not appropriate for purely algebraic objects.
The aim of this article is to study ``genuine infinite algebraic tensor products'' (i.e. ones that are defined in terms of multilinear maps instead of through inductive limits) of some algebraic structures.
There are several motivations behind this study.
\smnoind 1. Conceptually speaking, it is natural to define ``infinite tensor products'' as the object through which every multilinear map on a given infinite family of objects factors via a unique linear map (see Definition \ref{def:inf-ten-prod}).
As infinite direct products of Hilbert spaces are important in both Physics and Mathematics, it is believed that such infinite tensor products of algebraic structures are also important.
\smnoind 2. We want to construct an infinite tensor product of Hilbert spaces that is easier for non-analysts to grasp (compare with the infinite direct product as defined by J. von Neumann; see Lemma \ref{lem:phi0-innpro} and Remark \ref{rem:cp-inf-direct-prod}(d)) and is more natural (see Theorem \ref{thm:ten-prod-hil-cT-mod}, Example \ref{eg:ten-Hil-mod} and Example \ref{eg:ten-prod-rep-BC}).
\smnoind 3. Given a family of groups $\{G_i\}_{i\in I}$, it is well-known that the group algebra of the group
$${\bigoplus}_{i\in I} G_i := \big\{[g_i]_{i\in I}\in \PI G_i: g_i = e \text{ for all but finitely many }i\in I\big\}$$
is an inductive limit of finite tensor products.
However, if one wants to consider the group algebra $\BC[\PI G_i]$, one is forced to consider a ``bigger version of tensor products'' (see Example \ref{eg:gp-alg-inf-prod-gp}).
\medskip
In this article, the algebraic structures that we are concerned with are vector spaces, unital $^*$-algebras, inner-product spaces, as well as $^*$-representations of unital $^*$-algebras on Hilbert spaces.
In our study, we discovered some interesting phenomena of infinite tensor products that do not have counterparts in the case of finite tensor products.
Most of these phenomena are related to a certain object, $\Omega_{I;X}$, defined as in Remark \ref{rem:alt-con-ten-prod}(d), which ``encodes the asymptotic information'' of a given family $\{X_i\}_{i\in I}$.
\medskip
In Section 2, we will begin our study by defining the infinite tensor product $(\BOI X_i, \Theta_X)$ of a family $\{X_i\}_{i\in I}$ of vector spaces.
Two particular concerns are bases of $\BOI X_i$ as well as the relationship between $\BOI X_i$ and inductive limits of finite tensor products of $\{X_i\}_{i\in I}$ (which depend on choices of fixed elements in $\PI X_i$).
In order to do these, we obtain a direct sum decomposition of $\BOI X_i$ indexed by a set $\Omega_{I;X}$ (see Theorem \ref{thm:bas-inf-ten}), with all the direct summands being inductive limits of finite tensor products (see Proposition \ref{prop:ju-inj}(b)).
From this, we also know that the canonical map
$$\Psi: \BOI L(X_i;Y_i) \to L(\BOI X_i;\BOI Y_i)$$
is injective (but not surjective).
As a consequence, $\BOI X_i$ is automatically a faithful module over the big unital commutative algebra $\BOI \BC$ (see Corollary \ref{cor:ten-prod-as-mod} and Example \ref{eg:u-ten-prod}).
Moreover, one may regard the canonical map
$$\Theta_\BC : \PI \BC \to \BOI \BC$$
as a generalised multiplication (see Example \ref{eg:u-ten-prod}(a)).
In this sense, one can make sense of infinite products like $(-1)^I$.
\medskip
Clearly, $\BOI A_i$ is a unital $^*$-algebra if all $A_i$ are unital $^*$-algebras.
We will study in Section 3 a natural $^*$-subalgebra $\BOI^\ut A_i$ of $\BOI A_i$, which is a direct sum over a subgroup $\Omega^\ut_{I;A}$ of the semi-group $\Omega_{I;A}$.
The reasons for considering this subalgebra are that it has good representations (see the discussion after Proposition \ref{prop:ten-prod-st-alg}), and it is
big enough to contain $\BC[\PI G_i]$ when $A_i = \BC[G_i]$ for all $i\in I$ (see Example \ref{eg:gp-alg-inf-prod-gp}(a)).
Moreover, if all $A_i$ are generated by their unitary elements (in particular, if $A_i$ are group algebras or $C^*$-algebras), then $\BOI^\ut A_i$ is the linear span of the tensor products of unitary elements in $A_i$.
We will show that $\BOI^\ut A_i$ can be identified with the crossed product of a twisted action, in the sense of Busby and Smith (i.e., a cocycle action with a $2$-cocycle), of $\Omega^\ut_{I;A}$ on $\BOI^e A_i$ (the unital $^*$-algebra inductive limit of finite tensor products of $A_i$).
Moreover, it is shown that $\BOI^\ut \BC$ can be identified with the group algebra of $\Omega_{I;\BC}^\ut$ (Corollary \ref{cor:boiut-C}).
We will also study the center of $\BOI^\ut A_i$ in the case when $A_i$ is generated by its unitary elements (for all $\ii$).
\medskip
In Section 4, we will consider tensor products of inner-product spaces.
If $\{H_i\}_{i\in I}$ is a family of inner-product spaces, we define a natural inner-product on a subspace $\BOI^\un H_i$ of $\BOI H_i$ (see Lemma \ref{lem:phi0-innpro}(b)).
In the case of Hilbert spaces, the completion $\bar\bigotimes_{i\in I}^{\phi_1} H_i$ of $\BOI^\un H_i$ is a ``natural dilation'' of the infinite direct product $\prod \tsi H_i$ as defined by J. von Neumann in \cite{vN} (see Remark \ref{rem:cp-inf-direct-prod}(b)).
Note that the construction of $\bar\bigotimes_{i\in I}^{\phi_1} H_i$ is purely algebraic and more natural (see Example \ref{eg:ten-Hil-mod} and Example \ref{eg:ten-prod-rep-BC}).
Note also that one can construct $\prod \tsi H_i$ in a similar way as $\bar\bigotimes_{i\in I}^{\phi_1} H_i$ (see Remark \ref{rem:cp-inf-direct-prod}(d)).
On the other hand, there is an inner-product $\BC[\Omega^\ut_{I;\BC}]$-module structure on $\BOI^\un H_i$ which produces $\bar\bigotimes_{i\in I}^{\phi_1} H_i$ (see Theorem \ref{thm:ten-prod-hil-cT-mod}), as well as many other pre-inner-products on $\BOI^\un H_i$ (see Remark \ref{rem:hil-mod}(a)).
\medskip
Section 5 will be devoted to the study of $^*$-representations of unital $^*$-algebras.
More precisely, if $\Psi_i:A_i\to \CL(H_i)$ is a unital $^*$-representation ($i\in I$), we define a canonical $^*$-representation
$$\BOI^{\phi_1} \Psi_i\ :\ \BOI^\ut A_i\ \to \ \CL\big(\bar\bigotimes_{i\in I}^{\phi_1} H_i\big).$$
We will show in Theorem \ref{thm:inf-ten-c-st-alg}(c) that if all $\Psi_i$ are injective, then $\BOI^{\phi_1} \Psi_i$ is also injective.
This is equivalent to the canonical $^*$-representations of $\BOI^\ut \CL(H_i)$ on $\bar\bigotimes_{i\in I}^{\phi_1} H_i$ being injective, and is related to the ``strong faithfulness'' of the canonical action of $\Omega^\ut_{I;\CL(H)}$ on $\Omega^\un_{I;H}$ (see Remark \ref{rem-act-OA-OH}(b)).
Note, however, that the corresponding tensor type representation of $\BOI^\ut \CL(H_i)$ on $\prod \tsi H_i$ is non-injective.
Consequently, if $(H_i,\pi_i)$ is a unitary representation of a group $G_i$ that induces an injective $^*$-representation of $\BC[G_i]$ on $H_i$ ($i\in I$), then we obtain an injective ``tensor type'' $^*$-representation of $\BC[\PI G_i]$ on $\bar\bigotimes_{i\in I}^{\phi_1} H_i$ (see Corollary \ref{cor:ten-rep-prod-gps}).
On the other hand, we will show that $\bigoplus_{\rho\in \PI S(A_i)} \big( {\bar\bigotimes}_{i\in I}^{\phi_1} H_{\rho_i}, \BOI^{\phi_1} \pi_{\rho_i}\big)$ is an injective $^*$-representation of $\BOI^\ut A_i$ when all $A_i$ are $C^*$-algebras (Corollary \ref{cor:spat-ten-prod}).
Finally, we show that if all $A_i$ are unital Hilbert algebras, then so is $\BOI^\ut A_i$.
\medskip
\begin{ntn}
i). In this article, all the vector spaces, algebras as well as inner-product spaces are over the complex field $\BC$, although some results remain valid if one considers the real field instead.
\smnoind
ii). Throughout this article, $I$ is an infinite set, and $\KF$ is the set of all non-empty finite subsets of $I$.
\smnoind
iii). For any vector space $X$, we write $X^\times := X\setminus \{0\}$ and put $X^*$ to be the set of linear functionals on $X$.
If $Y$ is another vector space, we denote by $X\otimes Y$ and $L(X;Y)$ respectively, the algebraic tensor product of $X$ and $Y$, and the set of linear maps from $X$ to $Y$.
We also write $L(X) := L(X;X)$.
\smnoind
iv). If $\{X_i\}_{i\in I}$ is a family of vector spaces and $x\in \PI X_i$, we denote by $x_i$ the ``$i^{\rm th}$-coordinate'' of $x$ (i.e. $x = [x_i]_{i\in I}$).
If $x,y\in \PI X_i$ such that $x_i = y_i$ except for a finite number of $i\in I$, we write
\begin{quotation}
$x_i = y_i\ $ e.f.
\end{quotation}
\smnoind
v). If $V$ is a normed space, we denote by $\CL(V)$ and $V'$ the set of bounded linear operators on $V$ and the set of bounded linear functionals on $V$, respectively.
Moreover, we set $\sph(V):= \{x\in V: \|x\| =1\}$ as well as $B_1(V):= \{x\in V: \|x\| \leq 1\}$.
\smnoind
vi). If $A$ is a unital $^*$-algebra, we denote by $e_A$ the identity of $A$ and $U_A:= \{a\in A: a^*a = e_A = aa^*\}$.
\end{ntn}
\medskip
\section{Tensor products of vector spaces}
\medskip
\emph{In this section, $\{X_i\}_{i\in I}$ and $\{Y_i\}_{i\in I}$ are families of non-zero vector spaces.}
\medskip
\begin{defn}\label{def:inf-ten-prod}
Let $Y$ be a vector space. A map $\Phi: \PI X_i\to Y$ is said to be \emph{multilinear} if $\Phi$ is linear on each variable.
Suppose that $\bigotimes_{i\in I} X_i$ is a vector space and $\Theta_X: \PI X_i\to \bigotimes_{i\in I} X_i$ is a multilinear map such that for any vector space $Y$ and any multilinear map $\Phi: \PI X_i\to Y$, there exists a unique linear map $\tilde \Phi: \bigotimes_{i\in I} X_i \to Y$ with $\Phi = \tilde \Phi\circ \Theta_X$.
Then $\left(\bigotimes_{i\in I} X_i, \Theta_X\right)$ is called the \emph{tensor product} of $\{X_i\}_{i\in I}$.
We will denote $\tsi x_i := \Theta_X(x)$ ($x\in \PI X_i$) and set $X^{\otimes I} := \bigotimes_{i\in I} X_i$ if all $X_i$ are equal to the same vector space $X$.
\end{defn}
\medskip
Let us first give the following simple example showing that non-trivial multilinear maps with an infinite number of variables do exist.
They are also crucial for some constructions later on.
\medskip
\begin{eg}\label{eg:multi-linear}
(a) Let $\PI^1 \BC := \{\beta\in \PI \BC: \beta_i = 1 \text{ e.f.}\}$ and set
$$\varphi_1(\beta)
\ :=\ \begin{cases}
\Pi_{i\in I} \beta_i \ &\text{if } \beta\in \PI^1 \BC\\
0 & \text{otherwise}.
\end{cases}$$
It is not hard to check that $\varphi_1$ is a non-zero multilinear map from $\PI \BC$ to $\BC$.
If $\phi_1: \BOI\BC \to \BC$ is the linear functional induced by $\varphi_1$ (the existence of $\BOI \BC$ will be established in Proposition \ref{prop:exist-ten-prod}(a)), then $\phi_1$ is an involutive unital map.
\smnoind
(b) Let $\PI^0 \BC := \{\beta\in \PI \BC: \sum_{i\in I} \abs{\beta_i -1} < \infty\}$.
For each $\beta\in \PI^0 \BC$, the net $\{\Pi_{i\in F} \beta_i\}_{F\in \KF}$ converges to a complex number, denoted by $\PI \beta_i$ (see e.g. \cite[2.4.1]{vN}).
We define $\varphi_0(\beta) :=\ \Pi_{i\in I} \beta_i$ whenever $\beta\in \PI^0 \BC$ and set $\varphi_0|_{\PI\BC \setminus \PI^0 \BC} \equiv 0$.
As in part (a), $\varphi_0$ induces an involutive unital linear functional $\phi_0$
on $\BOI\BC$.
\end{eg}
\medskip
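To see the difference between $\varphi_1$ and $\varphi_0$ concretely, the following computation may help (it is an illustration under the extra assumption $I = \BN$, and is not needed in the sequel).

```latex
Assume, for this illustration only, that $I = \BN$.
\smnoind
(i) If $\beta_n := 1 + 4^{-n}$ ($n\in \BN$), then
$\sum_{n\in \BN} \abs{\beta_n - 1} = \sum_{n\in \BN} 4^{-n} = 1/3 < \infty$,
so that $\beta\in \PI^0 \BC \setminus \PI^1 \BC$.
Hence $\varphi_0(\beta) = \Pi_{n\in \BN}\ \!(1 + 4^{-n}) \neq 0$,
while $\varphi_1(\beta) = 0$.
\smnoind
(ii) If $\gamma_n := (-1)^n$ ($n\in \BN$), then $\gamma$ lies outside both
$\PI^1 \BC$ and $\PI^0 \BC$ (as $\sum_{n\in \BN} \abs{\gamma_n - 1} = \infty$),
so that $\varphi_1(\gamma) = \varphi_0(\gamma) = 0$, even though every finite
partial product $\Pi_{n\in F}\ \!\gamma_n$ ($F\in \KF$) has modulus $1$.
```

\medskip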
Clearly, infinite tensor products are unique (up to linear bijections) if they exist.
The existence of infinite tensor products follows from an argument similar to that for finite tensor products, but we give an outline here for future reference.
\medskip
\begin{prop}\label{prop:exist-ten-prod}
(a) The tensor product $\left(\bigotimes_{i\in I} X_i, \Theta_X\right)$ exists.
\smallskip\noindent
(b) If $\{A_i\}_{i\in I}$ is a family of algebras (respectively, $^*$-algebras), then $\bigotimes_{i\in I} A_i$ is an algebra (respectively, a $^*$-algebra) with
$(\tsi a_i)(\tsi b_i) := \tsi a_ib_i$ (and $(\tsi a_i)^* := (\tsi a_i^*)$) for $a,b\in \PI A_i$.
\smnoind
(c) If $\Psi_i: A_i \to L(X_i)$ is a homomorphism for each $i\in I$, there is a canonical homomorphism $\TBOIM \Psi_i: \bigotimes_{i\in I} A_i \to L\left(\bigotimes_{i\in I} X_i\right)$ such that $\big(\TBOIM \Psi_i\big)(\tsi a_i)\tsi x_i = \tsi \Psi_i(a_i)x_i$ ($a\in \PI A_i$ and $x\in \PI X_i)$.
\smnoind
(d) If $A = \bigoplus_{n=0}^\infty A_n$ is a graded algebra and $\bigoplus_{n=0}^\infty M_n$ is a graded left $A$-module, then $\bigoplus_{n=0}^\infty \bigotimes_{k\geq n} M_k$ is a graded $A$-module with
$a_m (\otimes_{k\geq n} x_k) = \otimes_{k\geq n} a_mx_k\in \bigotimes_{k \geq m+n} M_k$ ($a_m\in A_m; x\in \Pi_{k\geq n} M_k)$.
\end{prop}
\begin{prf}
Parts (b), (c) and (d) follow from the universal property of tensor products,
and we will only give a brief account for part (a).
Let $V$ be the free vector space generated by elements in $\PI X_i$ and $\Theta_0: \PI X_i\to V$ be the canonical map.
Set $W:= {\rm span}\ \!W_0$, where
\begin{eqnarray}\label{eqt:def-W0}
\lefteqn{W_0 \ :=\ \big\{\lambda \Theta_0(u) + \Theta_0(v) - \Theta_0(w): \lambda\in \BC; u,v,w\in \PI X_i; \exists i_0\in I \text{ with }} \nonumber\\
&& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \lambda u_{i_0} + v_{i_0} = w_{i_0} \text{ and } u_j = v_j = w_j, \forall j\in I\setminus\{i_0\}\big\}.
\end{eqnarray}
If we put $\bigotimes_{i\in I} X_i := V/W$, and set $\Theta_X$ to be the composition of $\Theta_0$ with the quotient map from $V$ to $V/W$, then they will satisfy the requirement in Definition \ref{def:inf-ten-prod}.
\end{prf}
\medskip
In the following remark, we list some observations that may be used implicitly throughout this article.
\medskip
\begin{rem}\label{rem:alt-con-ten-prod}
(a) As $\Theta_X$ is multilinear, $\BOI X_i = {\rm span}\ \! \Theta_X\big(\PI X_i^\times \big)$.
\smnoind
(b) If $I_1$ and $I_2$ are non-empty disjoint subsets of $I$ with $I = I_1\cup I_2$, it follows, from the universal property, that $\bigotimes_{i\in I} X_i \cong \big(\bigotimes_{i\in I_1} X_i \big) \otimes \big( \bigotimes_{j\in I_2} X_j \big)$ canonically.
\smnoind
(c) $\BOI (X_i\otimes Y_i) \cong (\BOI X_i)\otimes (\BOI Y_i)$ canonically.
\smnoind
(d) For any $x, y\in \PI X_i^\times$, we denote
\begin{equation*}
x \sim y \quad \text{if} \quad x_i = y_i \ \ e.f.
\end{equation*}
Obviously, $\sim$ is an equivalence relation on $\PI X_i^\times$, and we set $[x]_\sim$ to be the equivalence class of $x\in \PI X_i^\times$.
Let $\Omega_{I;X}$ be the collection of such equivalence classes.
It is not hard to see that $\Omega_{I;\BC}$ is a quotient group of $\PI \BC^\times$, and that it acts freely on $\Omega_{I;X}$.
\smnoind
(e) The element $\tsi 1\in \BC^{\otimes I}$ is non-zero.
In fact, if $\tsi 1 = 0$, then $\BC^{\otimes I} = (0)$ (by Proposition \ref{prop:exist-ten-prod}(b)), and this implies that the only multilinear map from $\PI \BC$ to $\BC$ is zero, which contradicts Example \ref{eg:multi-linear}.
\end{rem}
\medskip
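Before proceeding, the following minimal example (with $X_i = \BC$; it is an illustration only) shows why the classes in $\Omega_{I;X}$ matter, in sharp contrast with the finite case.

```latex
Let $X_i := \BC$ ($\ii$), and let $u, v\in \PI \BC^\times$ be the constant
families $[1]_{i\in I}$ and $[2]_{i\in I}$, respectively.
Then $u_i \neq v_i$ for \emph{every} $i\in I$, so that $u \nsim v$ and
$[u]_\sim \neq [v]_\sim$ in $\Omega_{I;\BC}$.
By Theorem \ref{thm:bas-inf-ten} below, $\tsi 1$ and $\tsi 2_i$ lie in
distinct direct summands of $\BC^{\otimes I}$ and are therefore linearly
independent, whereas for any $F\in \KF$ one has
$${\bigotimes}_{i\in F}\ \!2 \ =\ 2^{\abs{F}}\ {\bigotimes}_{i\in F}\ \!1$$
in the finite tensor product ${\bigotimes}_{i\in F}\ \!\BC$.
```

\medskip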
The ``asymptotic object'' $\Omega_{I;X}$ as defined in part (d) above is crucial in the study of genuine infinite tensor products, as can be seen in our next result.
Let us first introduce some more notation.
For every $u\in \PI X_i^\times$, we set
$$\PI^u X_i := \{x\in \PI X_i: x\sim u\}
\quad \text{and}\quad
\BOI^u X_i := {\rm span}\ \! \Theta_X(\PI^u X_i).$$
If $u\sim v$, then $\PI^u X_i = \PI^v X_i$, and we will also denote $\PI^{[u]_\sim} X_i := \PI^{u} X_i$ as well as $\bigotimes_{i\in I}^{[u]_\sim} X_i := \bigotimes_{i\in I}^{u} X_i$.
\medskip
\begin{thm}\label{thm:bas-inf-ten}
$\bigotimes_{i\in I} X_i = \bigoplus_{\omega\in \Omega_{I;X}} \bigotimes_{i\in I}^{\omega} X_i$.
\end{thm}
\begin{prf}
Suppose that $x^{(1)},...,x^{(n)}\in \PI X_i^\times$ and $0 = n_0 < \cdots < n_N = n$ is a sequence satisfying
$x^{(n_k+1)} \sim \cdots \sim x^{(n_{k+1})}$ for $k \in \{0,...,N-1\}$, but $x^{(n_k)}\nsim x^{(n_l)}$ whenever $1\leq k\neq l\leq N$.
We first show that if $\nu_1,...,\nu_n\in \BC$ with $\sum_{l=1}^n \nu_l \Theta_X(x^{(l)}) = 0$, then
$${\sum}_{l= n_k + 1}^{n_{k+1}} \nu_l \Theta_X(x^{(l)})\ =\ 0 \qquad (k = 0,..., N-1).$$
In fact, by the proof of Proposition \ref{prop:exist-ten-prod}(a), there exist $m\in \BN$, $\mu_1,..., \mu_m\in \BC$ and $\lambda_k \Theta_0(u^{(k)}) + \Theta_0(v^{(k)}) - \Theta_0(w^{(k)})\in W_0$ ($k=1,...,m$) such that
\begin{equation*}
{\sum}_{l=1}^n \nu_l\Theta_0(x^{(l)})
= {\sum}_{k=1}^m \mu_k\big(\lambda_k \Theta_0(u^{(k)}) + \Theta_0(v^{(k)}) - \Theta_0(w^{(k)})\big).
\end{equation*}
Observe that if one of the elements in $\{ u^{(k)}, v^{(k)}, w^{(k)}\}$ is equivalent to $x^{(1)}$ (under $\sim$), then so are the other two (see \eqref{eqt:def-W0}).
After renaming, one may assume that $u^{(k)}\sim v^{(k)}\sim w^{(k)}\sim x^{(1)}$ for $k=1,...,m_1$, but none of $u^{(k)}$, $v^{(k)}$ nor $w^{(k)}$ is equivalent to $x^{(1)}$ when $k\in \{m_1+1,...,m\}$.
Since the two sets
$$\{x^{(n_1+1)},...,x^{(n)}\}\cup \{u^{(m_1+1)},...,u^{(m)}\}\cup
\{v^{(m_1+1)},...,v^{(m)}\}\cup \{w^{(m_1+1)},...,w^{(m)}\}$$
and $\{x^{(1)},...,x^{(n_1)}\}\cup \{u^{(1)},...,u^{(m_1)}\}\cup \{v^{(1)},...,v^{(m_1)}\}\cup \{w^{(1)},...,w^{(m_1)}\}$ are disjoint and elements in $\Theta_0\left( \PI X_i\right)$ are linearly independent in $V$, we have
\begin{equation*}
{\sum}_{l= 1}^{n_1} \nu_l \Theta_0(x^{(l)})
\ - \ {\sum}_{k=1}^{m_1} \mu_k\big(\lambda_k \Theta_0(u^{(k)}) + \Theta_0(v^{(k)}) - \Theta_0(w^{(k)})\big)
\ = \ 0.
\end{equation*}
This implies that $\sum_{l= 1}^{n_1} \nu_l \Theta_X(x^{(l)}) = 0$.
Similarly, $\sum_{l= n_k+1}^{n_{k+1}} \nu_l \Theta_X(x^{(l)}) = 0$ for $k = 1,..., N-1$.
The above shows that $\left(\bigotimes_{i\in I}^{\omega_{M}} X_i\right) \cap \left(\sum_{k=1}^{M-1} \bigotimes_{i\in I}^{\omega_k} X_i\right) = \{0\}$ whenever $\omega_1,...,\omega_M$ are distinct elements in $\Omega_{I;X}$.
On the other hand, for every $x\in \PI X_i^\times$, one has $\Theta_X(x)\in \bigotimes_{i\in I}^{[x]_\sim} X_i$.
These give the required equality.
\end{prf}
\medskip
For any $F\in \KF$ and $u\in \PI X_i^\times$, one has a linear map $$J_F^u\ :\ {\bigotimes}_{i\in F} X_i\ \longrightarrow\ \BOI^u X_i$$
given by $J_F^u(\otimes_{i\in F}\ \! x_i) := \otimes_{j\in I}\ \! \ti x_j$ ($x_i\in X_i$), where
$\ti x_j := x_j$ when $j\in F$, and
$\ti x_j := u_j$ when $j\in I\setminus F$.
\medskip
For any $F,G\in \mathfrak{F}$ with $F\subseteq G$, a similar construction gives a linear map $J_{G;F}^u: \bigotimes_{i\in F} X_i \to \bigotimes_{i\in G} X_i$.
It is clear that $\left(\bigotimes_{i\in F} X_i, J^u_{G;F}\right)_{F\subseteq G\in \mathfrak{F}}$ is an inductive system in the category of vector spaces with linear maps as morphisms.
\medskip
\begin{prop}\label{prop:ju-inj}
(a) $J_F^u$ is injective for any $u\in \PI X_i^\times$ and $F\in \mathfrak{F}$.
Consequently, $\Theta_X(u)\neq 0$.
\smnoind
(b) The inductive limit of $\left(\bigotimes_{i\in F} X_i, J^u_{G;F}\right)_{F\subseteq G\in \mathfrak{F}}$ is $\left( \bigotimes_{i\in I}^u X_i, \{J_F^u\}_{F\in \mathfrak{F}}\right)$.
\end{prop}
\begin{prf}
(a) Suppose that $a\in \ker J_F^u$ and $\psi\in (\bigotimes_{i\in F} X_i)^*$.
For each $j\in I\setminus F$, choose $f_j\in X_j^*$ with $f_j(u_j) = 1$.
Remark \ref{rem:alt-con-ten-prod}(b) and the universal property give a linear map $\check \psi: \bigotimes_{i\in I} X_i \to \BC^{{\otimes I}}$ satisfying
$$\check\psi(\otimes_{i\in I} x_i) = \psi(\otimes_{i\in F}\ \!x_i)\left(\otimes_{j\in I\setminus F}\ \! f_j(x_j)\right) \qquad (x\in \PI X_i).$$
Thus, $\psi(a)(\tsi 1) = \check\psi(J_F^u(a)) = 0$, which implies that $a = 0$ (as $\psi$ is arbitrary) as required.
On the other hand, if $i_0\in I$, then $\Theta_X(u) = J_{\{i_0\}}^u(u_{i_0})\neq 0$.
\smnoind
(b) This follows directly from part (a).
\end{prf}
\medskip
Part (b) of the above implies that $\Theta_X(C^\omega)$ is a basis for $\BOI^\omega X_i$, where $C^\omega$ is as defined in the following result.
\medskip
\begin{cor}\label{cor:sub-sp}
(a) Let $c: \Omega_{I;X} \to \PI X_i^\times$ be a cross-section.
For each $\omega\in \Omega_{I;X}$ and $\ii$, we pick a basis $B^\omega_i$ of $X_i$ that contains $c(\omega)_i$ and set
$$C^\omega := \{x\in \PI^{\omega} X_i: x_i\in B^\omega_i, \forall i\in I\}.$$
If $C := \bigcup_{\omega\in \Omega_{I;X}} C^\omega$,
then $\Theta_X(C)$ is a basis for $\bigotimes_{i\in I} X_i$.
\smnoind
(b) If $\Phi_i:X_i\to Y_i$ is an injective linear map ($i\in I$),
the induced linear map $\BOI \Phi_i: \BOI X_i \to \BOI Y_i$ is injective.
\end{cor}
\medskip
\begin{prop}\label{prop:inj-TBOIM}
The map $\Psi: \BOI L(X_i;Y_i) \to L(\BOI X_i; \BOI Y_i)$ (given by the universal property) is injective.
\end{prop}
\begin{prf}
Suppose that $T^{(1)}, ..., T^{(n)}\in \PI L(X_i;Y_i)^\times$ are mutually inequivalent elements (under $\sim$), $F\in \mathfrak{F}$, $R^{(1)},...,R^{(n)}\in \bigotimes_{i\in F} L(X_i;Y_i)$ with $S^{(k)} := J_F^{T^{(k)}}(R^{(k)})$ ($k=1,...,n$) satisfying
\begin{equation*}
\Psi\big({\sum}_{k=1}^n S^{(k)}\big)\ =\ 0.
\end{equation*}
Using an induction argument, it suffices to show that $S^{(1)} = 0$.
If $n=1$, we take any $x\in \PI X_i^\times$ with $T_i^{(1)}x_i\neq 0$ ($\ii$).
If $n >1$, we claim that there is $x\in \PI X_i^\times$ such that $$[T^{(1)}_ix_i]_\ii\in \PI Y_i^\times\ \text{ and }\ [T^{(k)}_ix_i]_\ii \nsim [T^{(1)}_ix_i]_\ii \ \ (k =2,...,n).$$
In fact, let $I^k:= \{i\in I: T^{(k)}_i \neq T^{(1)}_i\}$, which is an infinite set for any $k=2,...,n$.
For any $i\in I$, we put $N_i := \{k\in \{2,...,n\}: i\in I^k\}$ and
take any $x_i \in X_i \setminus \big(\bigcup_{k\in N_i} \ker (T_i^{(k)} - T_i^{(1)})\cup\ker T_i^{(1)}\big)$ (note that $X_i$ cannot be a finite union of proper subspaces).
Thus, $T^{(1)}_ix_i \neq 0$ (for each $\ii$) and $T^{(k)}_ix_i \neq T^{(1)}_ix_i$ (for $k\in\{2,...,n\}$ and $i\in I^k$).
Now, we have
$$\Psi(S^{(1)})\big(\BOI^x X_i\big) \cap \Big({\sum}_{k=2}^n \Psi(S^{(k)})\big(\BOI^x X_i\big)\Big) = (0)$$
by Theorem \ref{thm:bas-inf-ten} and the fact that $\Psi(S^{(l)})\big(\BOI^x X_i\big)\subseteq \BOI^{y^{(l)}} Y_i$,
where $y^{(l)}_i = T^{(l)}_ix_i$ ($\ii; l= 1,...,n$).
Consequently, $\Psi(S^{(1)})\big|_{\BOI^x X_i} = 0$.
As $T^{(1)}_ix_i\neq 0$ ($i\in I$), it is easy to see that $R^{(1)} = 0$ as required.
\end{prf}
\medskip
Note that $\Psi$ is not surjective even if $X_i=Y_i =\BC$ ($\ii$) since in this case, $\Psi$ is a homomorphism and $\BOI \BC$ is commutative while $L(\BOI \BC)$ is not.
\medskip
The following result follows from Proposition \ref{prop:exist-ten-prod}(c), Corollary \ref{cor:sub-sp}(b) and Proposition \ref{prop:inj-TBOIM}; it says that an infinite tensor product of vector spaces is automatically a faithful module over a big commutative algebra.
\medskip
\begin{cor}\label{cor:ten-prod-as-mod}
If $X_i$ is a faithful $A_i$-module ($\ii$), then $\BOI X_i$ is a faithful $\BOI A_i$-module.
In particular, $\bigotimes_{i\in I} Y_i$ is a faithful unital $\BC^{\otimes I}$-module.
\end{cor}
\medskip
\begin{eg}\label{eg:u-ten-prod}
(a) If $\beta \in \PI \mathbb{C}^\times$, then ${\bigotimes}_{i\in I}^\beta \mathbb{C} = \mathbb{C}\cdot \tsi \beta_i$.
In fact, for any $F\in \mathfrak{F}$ and $\mu_i\in \mathbb{C}$ ($i\in F$), we have
$J_F^\beta({\otimes}_{i\in F}\ \!\mu_i) = \left({\Pi}_{i\in F} \ \!\mu_i/\beta_i\right)(\tsi \beta_i)$.
\smnoind
(b) Let $n\in \BN$, $I_1, ..., I_n$ be infinite disjoint subsets of $I$ with $I = \bigcup_{k=1}^n I_k$ and $\overline{\beta}=(\beta_1, ..., \beta_n)\in (\BC^\times)^n$.
Define $\widetilde{\beta}\in \PI \BC^\times$ by $\widetilde{\beta}_i = \beta_k$ whenever $i\in I_k$.
Then $\overline{\beta} \mapsto [\widetilde{\beta}]_\sim$
is an injective group homomorphism from $(\BC^\times)^n$ to $\Omega_{I;\BC}$.
\smnoind
(c) Let $G$ be a subgroup of $\BT^n\subseteq (\BC^\times)^n$ (where $\BT:=\{t\in \BC: \abs{t} =1\}$).
If $\overline{\beta^{(1)}},...,\overline{\beta^{(m)}}$ are distinct elements in $G$ and $\widetilde{\beta^{(1)}},...,\widetilde{\beta^{(m)}}\in \PI \BC^\times$ are as in part (b), then $\tsi \widetilde{\beta^{(1)}_i},...,\tsi \widetilde{\beta^{(m)}_i}$ are linearly independent in $\BC^{\otimes I}$.
Therefore, the $^*$-subalgebra of $\BC^{\otimes I}$ generated by $\{\tsi \widetilde{\beta}_i: \overline{\beta}\in G\}$ is $^*$-isomorphic to
the group algebra $\BC[G]$.
\end{eg}
\medskip
As $\tsi \alpha_i = (\PI \alpha_i) (\tsi 1)$ whenever $\alpha_i =1$ e.f., one may regard $\tsi \alpha_i$ as a generalisation of the product.
In this way, one can consider infinite products like $(-1)^I$.
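\medskip
To make this precise, one may note the following computation with $(-1)^I := \tsi (-1)$ (an illustration; it is not used later).

```latex
By Proposition \ref{prop:exist-ten-prod}(b),
$$\big(\tsi (-1)\big)\cdot \big(\tsi (-1)\big) \ =\ \tsi \big((-1)(-1)\big)
\ =\ \tsi 1,$$
while $\tsi (-1) \notin \BC\cdot \tsi 1$, because the constant families
$[-1]_{i\in I}$ and $[1]_{i\in I}$ differ at every coordinate and hence are
inequivalent under $\sim$.
Consequently, $\BC\cdot \tsi 1 + \BC\cdot \tsi (-1)$ is a two-dimensional
unital $^*$-subalgebra of $\BC^{\otimes I}$, which is $^*$-isomorphic to the
group algebra $\BC[\{\pm 1\}]$; this agrees with Example
\ref{eg:u-ten-prod}(c) applied to $G = \{\pm 1\} \subseteq \BT$.
```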
\medskip
\section{Tensor products of unital $^*$-algebras}
\medskip
\emph{Throughout this section, $A_i$ is a unital $^*$-algebra with identity $e_i$ ($\ii$), and we set $\Omega^\ut_{I;A}:=\PI U_{A_i}/\sim$.}
\medskip
Notice that in this case, $\Omega_{I;A}$ is a $^*$-semi-group with identity and $\Omega^\ut_{I;A}$ can be regarded as a subgroup of $\Omega_{I;A}$ with the inverse being the involution on $\Omega_{I;A}$.
Moreover, $\bigotimes_{i\in I} A_i$ is a $\Omega_{I;A}$-graded $^*$-algebra in the sense that for any $\omega, \omega'\in \Omega_{I;A}$,
\begin{equation}\label{eqt:grading}
\Big({\bigotimes}_{i\in I}^{\omega} A_i\Big) \cdot \Big({\bigotimes}_{i\in I}^{\omega'} A_i\Big)
\ \subseteq\ {\bigotimes}_{i\in I}^{\omega\omega'} A_i
\ \text{ and }\
\Big({\bigotimes}_{i\in I}^{\omega} A_i\Big)^*
\ \subseteq\ {\bigotimes}_{i\in I}^{\omega^*} A_i.
\end{equation}
\medskip
By Proposition \ref{prop:ju-inj}(b), $\BOI^e A_i$ can be identified with the unital $^*$-algebra inductive limit of finite tensor products of $A_i$.
We will study the following $^*$-subalgebra that contains $\BOI^e A_i$:
$$\BOI^\ut A_i\ :=\ {\bigoplus}_{\omega\in \Omega^\ut_{I;A}} \BOI^\omega A_i.$$
The motivation behind the consideration of this subalgebra is partially from Example \ref{eg:gp-alg-inf-prod-gp}(a) below, and partially because it has good representations (see the discussion after Proposition \ref{prop:ten-prod-st-alg} below).
Moreover, if all $A_i$ are linear spans of $U_{A_i}$ (in particular, if they are $C^*$-algebras or group algebras), then $\BOI^\ut A_i$ is the linear span of $\Theta_A(\PI U_{A_i})$.
If $A_i = A$ for all $i\in I$, we denote $A^{\otimes I}_\ut := \BOI^\ut A_i$.
\medskip
\begin{eg}\label{eg:gp-alg-inf-prod-gp}
(a) Let $G_i$ be a group and $\BC[G_i]$ be its group algebra ($i\in I$).
If $\Lambda: \PI G_i \to \PI U_{\BC[G_i]}$ is the canonical map, then $\lambda := \Theta_{\BC[G]} \circ \Lambda$ gives a $^*$-isomorphism from $\BC[\PI G_i]$ to the $^*$-subalgebra
$$\BOI^{\Lambda(\PI G_i)} \BC[G_i]\ := \ {\sum}_{t\in \PI G_i} \BOI^{\Lambda(t)} \BC[G_i] \ \subseteq \ \BOI^\ut \BC[G_i].$$
In fact, $\lambda$ induces a $^*$-homomorphism from $\BC[\PI G_i]$ to $\BOI^\ut \BC[G_i]$.
Let $q: {\Pi}_\ii G_i \to {\Pi}_\ii G_i/{\oplus}_{i\in I} G_i$ be the quotient map.
For a fixed $s\in \PI G_i$, if we set
$${\bigoplus}_{i\in I}^s G_i\ :=\ \big\{t\in \PI G_i: q(t) = q(s)\big\},$$
then $s^{-1} \big(\bigoplus_{i\in I}^s G_i \big) = \bigoplus_{i\in I} G_i$.
Thus, $\{\lambda(t): t\in \bigoplus_{i\in I}^s G_i\}$ is a set of linearly independent elements in $\BOI \BC[G_i]$ (as $\lambda|_{\BC[\bigoplus_{i\in I} G_i]}$ is a bijection onto $\BOI^e \BC[G_i]$).
On the other hand, if $s^{(1)}, ..., s^{(n)}\in \PI G_i$ such that $q(s^{(k)})\neq q(s^{(l)})$ whenever $k\neq l$, then $\lambda(s^{(1)}),...,\lambda(s^{(n)})$ are linearly independent in $\BOI\BC[G_i]$ (see Theorem \ref{thm:bas-inf-ten}).
Consequently, $\{\lambda(t): t\in \PI G_i\}$ forms a basis for $\BOI^{\Lambda(\PI G_i)} \BC[G_i]$.
\smnoind
(b) It is well-known that there is a twisted action $(\alpha, u)$, in the sense of Busby and Smith, of $\Omega_{I;G}:= {\Pi}_\ii G_i/{\oplus}_{i\in I} G_i$ on $\BC[\bigoplus_{i\in I} G_i]\cong \BOI^e \BC[G_i]$ (see \cite[2.1]{BS}) such that
$\BC[\PI G_i]$ is $^*$-isomorphic to the algebraic crossed-product $\BOI^e \BC[G_i]\rtimes_{\alpha,u} \Omega_{I;G}$.
\end{eg}
\medskip
There is a canonical action $\Xi$ of $\PI U_{A_i}$ on $\BOI^\ut A_i$ given by inner-automorphisms, i.e.
$$\Xi_u(a) := (\tsi u_i)\cdot a\cdot (\tsi u^*_i)
\qquad \big(u\in \PI U_{A_i}; a\in \BOI^\ut A_i\big).$$
This induces an action $\Xi^e$ of $\PI U_{A_i}$ on the subalgebra $\BOI^e A_i$.
The following result gives an identification of $\BOI^\ut A_i$ as the algebraic crossed-product (see e.g. \cite[p.166]{RSW}) of a cocycle twisted action (i.e. a twisted action in the sense of Busby and Smith) of $\Omega^\ut_{I;A}$ on $\BOI^e A_i$ induced by $\Xi^e$.
\medskip
Before we give this result, let us recall that an abelian group $G$ is \emph{divisible} if for any $g\in G$ and $n\in \BN$, there is $h\in G$ with $g = h^n$.
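The following standard examples may be kept in mind (they are illustrations, and are not used in the proofs).

```latex
The circle group $\BT$ is divisible: for $g = e^{\mathrm{i}\theta}\in \BT$ and
$n\in \BN$, the element $h := e^{\mathrm{i}\theta/n}$ satisfies $h^n = g$.
Consequently, $\BT^k = U_{\BC^k}$ is divisible for every $k\in \BN$, and so is
any increasing union of such groups.
On the other hand, the group $\{\pm 1\}$ is not divisible, since $-1$ has no
square root in $\{\pm 1\}$.
```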
\medskip
\begin{thm}\label{thm:boiut-tw-cr-pd}
(a) There is a cocycle twisted action $(\check\Xi, m)$ of $\Omega^\ut_{I;A}$ on $\BOI^e A_i$ such that $\BOI^\ut A_i$ is $\Omega^\ut_{I;A}$-graded $^*$-isomorphic to $(\BOI^e A_i)\rtimes_{\check \Xi, m} \Omega^\ut_{I;A}$.
\smnoind
(b) Suppose that all $A_i$ are commutative.
If $\BOI^e A_i$ is a unital $^*$-subalgebra of a commutative $^*$-algebra $B$ with $U_B$ being divisible, $\BOI^\ut A_i$ is $\Omega_{I;A}^\ut$-graded $^*$-isomorphic to a unital $^*$-subalgebra of $B\otimes \BC[\Omega_{I;A}^\ut]$.
If $U_{\BOI^e A_i}$ is itself divisible, $\BOI^\ut A_i \cong (\BOI^e A_i)\otimes \BC[\Omega_{I;A}^\ut]$ as $\Omega_{I;A}^\ut$-graded $^*$-algebras.
\end{thm}
\begin{prf}
Let $c:\Omega^\ut_{I;A} \to \PI U_{A_i}$ be a cross-section with $c([e]_\sim) = e$.
\smnoind
(a) For any $\mu,\nu\in \Omega^\ut_{I;A}$, we set
$$\check \Xi_\mu\ :=\ \Xi^e_{c(\mu)} \quad \text{and} \quad m(\mu,\nu)\ :=\ \tsi c(\mu)_ic(\nu)_ic(\mu\nu)^{-1}_i.$$
As $c(\mu)c(\nu)\sim c(\mu\nu)$, we have $m(\mu,\nu)\in \BOI^e A_i$.
It is easy to check that $(\check \Xi, m)$ is a twisted action in the sense of Busby and Smith.
Furthermore, we define $\Psi: (\BOI^e A_i)\rtimes_{\check \Xi, m} \Omega^\ut_{I;A} \to \BOI^\ut A_i$ by
$$\Psi(f)\ :=\ {\sum}_{\omega\in \Omega^\ut_{I;A}} f(\omega)(\tsi c(\omega)_i)
\qquad \big(f\in (\BOI^e A_i)\rtimes_{\check \Xi, m} \Omega^\ut_{I;A}\big).
$$
It is not hard to verify that $\Psi$ is a bijective $\Omega^\ut_{I;A}$-graded $^*$-homomorphism.
\smnoind
(b) Let $\PI^e U_{A_i} := \PI^e A_i \cap \PI U_{A_i}$.
By Baer's theorem, $\Theta_A|_{\PI^e U_{A_i}}$ can be extended to a group homomorphism $\varphi: \PI U_{A_i} \to U_B$.
Since
$$\varphi(c(\mu))\varphi(c(\nu))\varphi(c(\mu\nu))^{-1}
\ = \ \tsi c(\mu)_ic(\nu)_ic(\mu\nu)^{-1}_i
\qquad (\mu,\nu\in \Omega^\ut_{I;A}),$$
the map $\Phi: \BOI^\ut A_i \to B \otimes \BC[\Omega^\ut_{I;A}]$ given by
\begin{equation}\label{eqt:Phi}
\Phi(a)
:= (a \cdot \tsi c(\omega)^{-1}_i)\varphi(c(\omega)) \otimes \lambda(\omega)
\qquad
\big(a\in \BOI^\omega A_i; \omega\in \Omega^\ut_{I;A}\big)
\end{equation}
is a $\Omega^\ut_{I;A}$-graded $^*$-homomorphism.
If ${\sum}_{\omega\in \Omega^\ut_{I;A}} a^\omega\in \ker \Phi$ (with $a^\omega\in \BOI^\omega A_i$), then for every $\omega\in \Omega^\ut_{I;A}$, one has $(a^\omega \cdot \tsi c(\omega)^{-1}_i)\varphi(c(\omega)) = 0$, which implies $a^\omega = 0$, and hence $\Phi$ is injective.
The image of $\Phi$ is the linear span of
$$\big\{b\varphi(c(\omega))\otimes \lambda(\omega): b\in \BOI^eA_i; \omega\in \Omega^\ut_{I;A}\big\},$$
and it is clear that $\Phi$ is surjective if $B = \BOI^eA_i$.
\end{prf}
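\medskip
As a sanity check (assuming, for simplicity, that all $A_i$ are commutative, so that the inner action $\check\Xi$ is trivial), one can verify the $2$-cocycle identity for $m$ directly.

```latex
For $\mu,\nu,\rho\in \Omega^\ut_{I;A}$, commutativity and the relation
$(\tsi a_i)(\tsi b_i) = \tsi a_ib_i$ give
\begin{eqnarray*}
m(\nu,\rho)\ \!m(\mu,\nu\rho)
&=& \tsi c(\nu)_i c(\rho)_i c(\nu\rho)^{-1}_i c(\mu)_i c(\nu\rho)_i
    c(\mu\nu\rho)^{-1}_i
\ =\ \tsi c(\mu)_i c(\nu)_i c(\rho)_i c(\mu\nu\rho)^{-1}_i\\
&=& \tsi c(\mu)_i c(\nu)_i c(\mu\nu)^{-1}_i c(\mu\nu)_i c(\rho)_i
    c(\mu\nu\rho)^{-1}_i
\ =\ m(\mu,\nu)\ \!m(\mu\nu,\rho),
\end{eqnarray*}
which is the cocycle identity
$\check\Xi_\mu\big(m(\nu,\rho)\big)\ \!m(\mu,\nu\rho)
= m(\mu,\nu)\ \!m(\mu\nu,\rho)$ in this commutative case.
```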
\medskip
\begin{rem}\label{rem:ut-prod}
(a) The cocycle twisted action $(\check\Xi, m)$ depends on the choice of a cross-section, and different cross-sections may give different twisted actions (although their crossed-products are all isomorphic).
On the other hand, the map $\Phi$ in part (b) also depends on the choice of a cross-section as well as the choice of an extension of $\Theta_A|_{\PI^e U_{A_i}}$.
\smnoind
(b) If $S_i$ is a set and $A_i$ is a $^*$-subalgebra of $\ell^\infty(S_i)$ ($i\in I$), then by Theorem \ref{thm:boiut-tw-cr-pd}(b), $\BOI^\ut A_i$ is a $^*$-subalgebra of $\ell^\infty(\PI S_i)\otimes \BC[\Omega^\ut_{I;A}]$.
Our first proof of this fact uses \cite[18.4]{Cal} and \cite[7.1]{EM}.
\smnoind
(c) If all $A_i$ are commutative, then
$\BOI^\ut A_i \cong (\BOI^e A_i)\otimes \BC[\Omega_{I;A}^\ut]$ as $\Omega_{I;A}^\ut$-graded $^*$-algebras if and only if there is a group homomorphism $\pi: \Omega^\ut_{I;A} \to U_{\BOI^\ut A_i}$ such that $\pi(\omega)\in \BOI^\omega A_i$ ($\omega\in \Omega^\ut_{I;A}$).
In fact, if such a $\pi$ exists, one may replace $(a\cdot \tsi c(\omega)^{-1}_i)\varphi(c(\omega))$ in \eqref{eqt:Phi} with $a\pi(\omega^{-1})$ and show that the corresponding $\Phi$ is a $^*$-isomorphism.
\end{rem}
\medskip
Clearly, the second statement of Theorem \ref{thm:boiut-tw-cr-pd}(b) applies to the case when $A_i = \BC^{n_i}$ for some $n_i\in \BN$ ($\ii$).
In particular, Theorem \ref{thm:boiut-tw-cr-pd}(b) and its argument give the following corollary.
\medskip
\begin{cor}\label{cor:boiut-C}
If $\varphi_1$ is as in Example \ref{eg:multi-linear}(a) and $\varphi: \PI \BT\to \BT$ is a group homomorphism that extends $\varphi_1|_{\PI^1 \BT}$ (its existence is guaranteed by Baer's theorem), then $\Phi(\tsi \alpha_i) := \varphi(\alpha) \lambda([\alpha]_\sim)$ ($\alpha \in \PI \BT$) is a well-defined $^*$-isomorphism from
$\BC^{\otimes I}_\ut$ onto $\BC[\Omega^\ut_{I;\BC}]$.
\end{cor}
\medskip
Conversely, it is clear that if $\varphi: \PI \BT\to \BT$ is any map such that $\Phi$ as defined in the above is a well-defined $^*$-isomorphism, then $\varphi$ is a group homomorphism extending $\varphi_1|_{\PI^1 \BT}$.
On the other hand, there is a simpler proof for Corollary \ref{cor:boiut-C}.
In fact, for $\alpha,\beta\in \PI \BT$ with $\alpha\sim \beta$, one has $\varphi(\alpha)^{-1}\cdot \tsi \alpha_i = \varphi(\beta)^{-1}\cdot \tsi \beta_i$.
Thus, $[\alpha]_\sim \mapsto \varphi(\alpha)^{-1} \cdot \tsi \alpha_i$ is a well-defined group homomorphism from $\Omega^\ut_{I;\BC}$ to $U_{\BC^{\otimes I}_\ut}$ such that $\{\varphi(\alpha)^{-1}\cdot \tsi \alpha_i: [\alpha]_\sim \in \Omega^\ut_{I;\BC}\}$ is a basis for $\BC^{\otimes I}_\ut$.
\medskip
\begin{eg}\label{eg:mT-Omega}
For any subgroup $G\subseteq \mathbb{T}^n$, the algebra in Example \ref{eg:u-ten-prod}(c) is a $^*$-subalgebra of $\BC^{\otimes I}_\ut$.
\end{eg}
\medskip
In the remainder of this section, we will show that the center of $\BOI^\ut A_i$ is the tensor product of centers of $A_i$ when $A_i = {\rm span}\ \!U_{A_i}$ for all $i\in I$.
\medskip
If $A$ is an algebra and $G$ is a group, we denote by $Z(A)$ and $Z(G)$ the center of $A$ and the center of $G$ respectively.
Clearly, the inclusion $\PI U_{Z(A_i)} \subseteq \PI U_{A_i}$ induces an injective group homomorphism from $\Omega^\ut_{I;Z(A)}$ to $\Omega^\ut_{I;A}$, and we regard the former as a subgroup of the latter.
\medskip
\begin{thm}\label{thm:center}
Suppose that there is $F_0\in \KF$ with $A_i = {\rm span}\ \!U_{A_i}$ for any $i\in I_0:= I\setminus F_0$.
\smnoind
(a) $Z(\Omega^\ut_{I;A}) = \Omega^\ut_{I;Z(A)}$.
Moreover, $Z(\Omega^\ut_{I;A}) = \Omega^\ut_{I;A}$ if and only if all but a finite number of $A_i$ are commutative.
\smnoind
(b) Every element in $\Omega^\ut_{I;A}\setminus Z(\Omega^\ut_{I;A})$ has an infinite conjugacy class.
\smnoind
(c) $Z\big(\BOI^\ut A_i\big) = \BOI^\ut Z(A_i)$.
\end{thm}
\begin{prf}
(a) It is obvious that $\Omega^\ut_{I;Z(A)}\subseteq Z(\Omega^\ut_{I;A})$.
Suppose $u\in \PI U_{A_i}$ with $[u]_\sim \notin \Omega^\ut_{I;Z(A)}$.
There is an infinite subset $J\subseteq I_0$ such that $u_i\notin Z(A_i)$ ($i\in J$).
For each $i\in J$, one can find $v_i\in U_{A_i}$ such that $u_iv_i \neq v_iu_i$.
For any $i\in I\setminus J$, we put $v_i = e_{i}$.
Then $[v]_\sim \in \Omega^\ut_{I;A}$ and $[u]_\sim [v]_\sim \neq [v]_\sim[u]_\sim$.
Consequently, $[u]_\sim \notin Z(\Omega^\ut_{I;A})$.
This argument also shows that if the set $\{i\in I: Z(A_i)\neq A_i\}$ is infinite, then $Z(\Omega^\ut_{I;A}) \neq \Omega^\ut_{I;A}$.
Conversely, it is clear that $\Omega^\ut_{I;Z(A)} = \Omega^\ut_{I;A}$ if all but a finite number of $A_i$ are commutative.
\smnoind
(b) Suppose that $[u]_\sim \in \Omega^\ut_{I;A}\setminus Z(\Omega^\ut_{I;A})$ and $\{i_n\}_{n\in \BN}$ is a sequence of distinct elements in $I_0$ such that $u_{i_n}\notin Z(A_{i_n})$ ($n\in \BN$).
For each $n\in \BN$, choose $v_{i_n}\in U_{A_{i_n}}$ with $v_{i_n}u_{i_n}v_{i_n}^* \neq u_{i_n}$.
For any prime number $p$, we
set $w^{(p)}_{i_n} := v_{i_n}$ ($n\in \BN p$), and $w^{(p)}_i := e_{i}$ if $i\in I\setminus \{i_{n}: n\in \BN p\}$.
If $p$ and $q$ are distinct prime numbers, then
$$w^{(q)}_{i_n} u_{i_n} (w^{(q)}_{i_n})^*
\ =\ u_{i_n}
\ \neq\ w^{(p)}_{i_n} u_{i_n} (w^{(p)}_{i_n})^*
\qquad (n\in \BN p \setminus \BN q).$$
Consequently, $w^{(q)} u (w^{(q)})^*
\nsim w^{(p)} u (w^{(p)})^*$, and the conjugacy class of $[u]_\sim$ is infinite.
\smnoind
(c) Since $Z(\BOI^\ut A_i) = {\bigotimes}_{i\in F_0} Z(A_i)\otimes Z({\bigotimes}_{i\in I_0}^\ut A_i)$, we may assume that $A_i = {\rm span}\ \! U_{A_i}$ for all $\ii$.
In this case, $Z(\BOI^\ut A_i) = \big(\BOI^\ut A_i\big)^\Xi$, where
$\big(\BOI^\ut A_i\big)^\Xi$ is the fixed point algebra of the action $\Xi$ as defined above.
Moreover, one has $\BOI^\ut Z(A_i) \subseteq Z(\BOI^\ut A_i)$ and it remains to show that $\big(\BOI^\ut A_i\big)^\Xi \subseteq \BOI^\ut Z(A_i)$.
Let $v^{(1)},...,v^{(n)}\in \PI U_{A_i}$ be mutually inequivalent elements, $F\in \mathfrak{F}$ and $b_1,...,b_n\in \BOF A_i\setminus \{0\}$ such that $a := {\sum}_{k=1}^n J_F^{v^{(k)}}(b_k)\in \big(\BOI^\ut A_i\big)^\Xi$.
We first claim that $[v^{(k)}]_\sim \in \Omega^\ut_{I;Z(A)}$ ($k = 1,...,n$).
Suppose on the contrary that $[v^{(1)}]_\sim \notin \Omega^\ut_{I;Z(A)} = Z(\Omega^\ut_{I;A})$.
For every $u\in \PI U_{A_i}$, one has
$$\Xi_u\big(J_F^{v^{(1)}}(b_k)\big)\ \in \ \big(\BOI^{[uv^{(1)}u^*]_\sim} A_i\big) \setminus \{0\}.$$
As $\Xi_u(a) = a$, we see that $[uv^{(1)}u^*]_\sim \in \{[v^{(1)}]_\sim, ..., [v^{(n)}]_\sim\}$, which contradicts the fact that $\{[uv^{(1)}u^*]_\sim: [u]_\sim \in \Omega^\ut_{I;A}\}$ is an infinite set (by part (b)).
By enlarging $F$, we may assume that $v^{(k)}\in \PI U_{Z(A_i)}$ ($k = 1,...,n$).
For each $u\in \PI U_{A_i}$ and $k\in \{1,...,n\}$, one has $\Xi_u(J_F^{v^{(k)}}(b_k)) = J_F^{v^{(k)}}(b_k)$ and so, $b_k \in Z(\BOF A_i)$.
Therefore, $a \in \BOI^\ut Z(A_i)$ as expected.
\end{prf}
\medskip
The reader should note that $\BOI^\ut Z(A_i)$ equals $\bigoplus_{\omega \in Z(\Omega^\ut_{I;A})} \BOI^\omega Z(A_i)$ instead of $\bigoplus_{\omega \in \Omega^\ut_{I;A}} \BOI^\omega Z(A_i)$ (strictly speaking, the latter object does not make sense).
\medskip
\begin{eg}
(a) If $n_i\in \BN$ ($i\in I$), then
$Z\big(\BOI^\ut M_{n_i}(\BC)\big) \cong \BC^{\otimes I}_\ut$.
\smnoind
(b) If $G_i$ are icc groups, then $Z(\BOI^\ut \BC[G_i]) \cong \BC^{\otimes I}_\ut$ canonically.
\end{eg}
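\medskip
Both parts are consequences of part (c) of the theorem above. As a sketch for (a), using the standard fact that $Z(M_n(\BC)) = \BC\cdot 1$:

```latex
% Part (c) of the theorem above, applied with A_i = M_{n_i}(\BC):
Z\big({\BOI}^\ut M_{n_i}(\BC)\big)
  \ =\ {\BOI}^\ut Z\big(M_{n_i}(\BC)\big)
  \ =\ {\BOI}^\ut \BC
  \ \cong\ \BC^{\otimes I}_\ut.
```

Part (b) follows in the same way: in an icc group only the identity has a finite conjugacy class, so $Z(\BC[G_i]) = \BC\cdot\lambda_e$.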
\medskip
We end this section with the following brief discussion on the non-unital case.
Suppose that $\{A_i\}_\ii$ is a family of $^*$-algebras, not necessarily unital.
If $M(A_i)$ is the double centraliser algebra of $A_i$ ($\ii$), we define an ideal, $\BOI^\ut A_i$, of $\BOI^\ut M(A_i)$ as follows:
$$\BOI^\ut A_i
\ :=\ {\rm span}\ \!\big\{J_F^u(a): F\in \KF; a\in {\bigotimes}_{i\in F} A_i; u\in \PI U_{M(A_i)}\big\}.$$
In general, $\BOI^\ut A_i$ is not a subset of $\BOI A_i$.
In a similar fashion, we define
$$\BOI^e A_i\ :=\ {\rm span}\ \!\big\{J_F^u(a): F\in \KF; a\in {\bigotimes}_{i\in F} A_i; u\in \PI U_{M(A_i)}; u\sim e\big\},$$ which is an ideal of $\BOI^e M(A_i)$.
By the proof of Theorem \ref{thm:boiut-tw-cr-pd}(a), one may identify $\BOI^\ut A_i$ as the ideal of $(\BOI^e M(A_i))\rtimes_{\check \Xi, m} \Omega^\ut_{I;M(A)}$ consisting of functions from $\Omega^\ut_{I;M(A)}$ to $\BOI^e A_i$ having finite supports.
\medskip
\section{Tensor products of inner-product spaces}
\medskip
\emph{Throughout this section, $(H_i, \langle \cdot, \cdot\rangle)$ is a non-zero inner-product space ($i\in I$).
Moreover, we denote $\Omega^\un_{I;H} := \PI \sph(H_i)/\sim$. }
\medskip
If $B$ is a unital $^*$-algebra and $X$ is a unital left $B$-module, a map $\langle \cdot, \cdot \rangle_B : X\times X \to B$ is called a \emph{(left) Hermitian $B$-form on $X$} if
$\langle ax + y, z \rangle_B = a \langle x, z \rangle_B + \langle y, z \rangle_B$ and $\langle x, y \rangle_B^* = \langle y, x \rangle_B$ ($x,y,z\in X; a\in B$).
It is easy to see that a Hermitian $B$-form on $X$ can be regarded as a $B$-bimodule map $\theta : X\otimes \ti X \to B$ satisfying $\theta(x\otimes \ti y)^* = \theta (y\otimes \ti x)$ (where $\ti X$ is the conjugate vector space of $X$, regarded as a unital right $B$-module in the canonical way).
Consequently, part (a) of the following result follows readily from the universal property of tensor products, while part (b) is easily verified.
\medskip
\begin{prop}\label{prop:ten-inn-prod}
(a) There is a Hermitian $\BC^{\otimes I}$-form on $\bigotimes_{i\in I} H_i$ such that
$\left< \tsi x_i, \tsi y_i \right>_{\BC^{\otimes I}}
:= \otimes_{i\in I} \left< x_i, y_i \right>$ ($x,y\in \PI H_i$).
\smnoind
(b) For a fixed $\mu\in \Omega^\un_{I;H}$, one has $\left< \Theta_H(x), \Theta_H(y)\right>_{\BC^{\otimes I}}
= \PI \langle x_i, y_i \rangle (\tsi 1)$ $(x,y\in \PI^\mu H_i)$.
This induces an inner-product on $\BOI^\mu H_i$ which coincides with the one given by the inductive limit of $\big(\bigotimes_{i\in F} H_i, J^\mu_{G;F}\big)_{F\subseteq G\in \mathfrak{F}}$, in the category of inner-product spaces with isometries as morphisms.
\end{prop}
\medskip
We want to construct a nice inner-product space from the above Hermitian $\BC^{\otimes I}$-form.
A naive thought is to appeal to a construction in Hilbert $C^*$-modules that produces a Hilbert space from a positive linear functional on $\BC^{\otimes I}$.
However, the difficulty is that there is no canonical order structure on $\BC^{\otimes I}$.
Nevertheless, we will do a similar construction using the functional $\phi_1$ in Example \ref{eg:multi-linear}(a).
In this case, one can only consider a subspace of $\BOI H_i$ (see Example \ref{eg:bad-expan} below).
\medskip
\begin{lem}\label{lem:phi0-innpro}
Suppose that $\bigotimes_{i\in I}^\ct H_i:= {\rm span}\ \!\Theta_H(\PI B_1(H_i))$, $\bigotimes_{i\in I}^\un H_i:= {\rm span}\ \!\Theta_H(\PI \sph(H_i))$ and
$$\langle \xi, \eta \rangle_{\phi_1}
\ :=\ \phi_1( \langle \xi, \eta \rangle_{\BC^{\otimes I}})
\qquad \big(\xi,\eta\in \BOI H_i\big).$$
\smnoind
(a) For any $\mu\in \Omega^\un_{I;H}$, the restriction of
$\langle \cdot, \cdot \rangle_{\phi_1}$ on $\bigotimes_{i\in I}^\mu H_i\times \bigotimes_{i\in I}^\mu H_i$ coincides with the inner-product in Proposition \ref{prop:ten-inn-prod}(b).
\smnoind
(b) $\left< \cdot, \cdot \right>_{\phi_1}$ is a positive sesquilinear form on $\bigotimes_{i\in I}^\ct H_i$ and is an
inner-product on $\bigotimes_{i\in I}^\un H_i$.
Moreover, if
\begin{equation*}
K\ :=\ \Big\{y\in \BOI^\ct H_i: \langle x, y\rangle_{\phi_1} = 0, \forall x\in \BOI^\ct H_i \Big\},
\end{equation*}
then $\bigotimes_{i\in I}^\ct H_i = K \oplus \bigotimes_{i\in I}^\un H_i$ (as vector spaces).
\smnoind
(c) If $I = I_1 \cup I_2$ and $I_1 \cap I_2 = \emptyset$, then $\bigotimes_{i\in I}^\un H_i = (\bigotimes_{i\in I_1}^\un H_i) \otimes (\bigotimes_{j\in I_2}^\un H_j)$ as inner-product spaces.
\end{lem}
\begin{prf}
(a) This part is clear.
\smnoind
(b) It is obvious that $\left< \cdot, \cdot \right>_{\phi_1}$ is a sesquilinear form on $\bigotimes_{i\in I}^\ct H_i$.
Let
$$E\ :=\ \big\{x\in \PI B_1(H_i): \|x_i\| < 1 \text{ for an infinite number of } i\in I\big\}$$
and $\ti K:= {\rm span}\ \! \Theta_H(E)$.
Clearly, $\bigotimes_{i\in I}^\ct H_i = \ti K\oplus \bigotimes_{i\in I}^\un H_i$.
Moreover, if $u\in \PI B_1(H_i)$ and $v\in E$, then $\langle u_i, v_i \rangle \neq 1$ for an infinite number of $i\in I$, which implies that $\langle \tsi u_i, \tsi v_i\rangle_{\phi_1} = 0$.
Consequently, $\ti K\subseteq K$.
We claim that $\left< \xi, \xi \right>_{\phi_1} \geq 0$ for any $\xi\in \bigotimes_{i\in I}^\ct H_i$.
Suppose that $\xi = \sum_{k=1}^n \lambda_k \tsi u^{(k)}_i$ with $\lambda_1,...,\lambda_n\in \BC$ and $u^{(1)}, ..., u^{(n)}\in \PI B_1(H_i)$.
Then
$$\left< \xi, \xi \right>_{\phi_1}
\ = \ {\sum}_{k,l=1}^n \lambda_k \bar\lambda_l \phi_1\big(\otimes_{i\in I} \langle u^{(k)}_i, u^{(l)}_i\rangle\big).$$
As in the above, $\phi_1\big(\otimes_{i\in I} \langle u^{(k)}_i, u^{(l)}_i\rangle\big) = 0$ if either $u^{(k)}$ or $u^{(l)}$ is in $E$.
Thus, by rescaling, we may assume that
$$u^{(1)}, ..., u^{(n)}\in \PI \sph(H_i).$$
Furthermore, we assume that there exist $0 = n_0 < \cdots < n_m = n$ such that
$u^{(n_p+1)} \sim \cdots \sim u^{(n_{p+1})}$ for all $p \in \{0,...,m-1\}$, but $u^{(n_p)}\nsim u^{(n_q)}$ whenever $1\leq p\neq q\leq m$.
It is not hard to check that $u^{(k)}\sim u^{(l)}$ if and only if $\langle u^{(k)}_i, u^{(l)}_i \rangle = 1$ e.f. (as $\|u^{(k)}_i\|, \|u^{(l)}_i\|\leq 1$).
Consequently, if $0\leq p\neq q\leq m-1$, \begin{equation}\label{eqt:phi0-disj}
\phi_1\big(\otimes_{i\in I} \langle u^{(k)}_i, u^{(l)}_i\rangle \big) = 0
\quad \text{when } n_p < k \leq n_{p+1} \text{ and } n_q < l \leq n_{q+1}.
\end{equation}
Therefore, in order to show $\left< \xi, \xi \right>_{\phi_1} \geq 0$, it suffices to consider the case when $u^{(k)} \sim u^{(l)}$ for all $k,l \in \{1,...,n\}$, which is the same as $\xi\in \bigotimes_{i\in I}^{u^{(1)}} H_i$.
Thus, $\left< \xi, \xi \right>_{\phi_1} \geq 0$ by part (a).
Next, we show that $\left< \cdot, \cdot \right>_{\phi_1}$ is an inner-product on $\bigotimes_{i\in I}^\un H_i$.
Suppose that $\xi = \sum_{k=1}^n \lambda_k \tsi u^{(k)}_i$ with $\lambda_1,...,\lambda_n\in \BC$ and $u^{(1)}, ..., u^{(n)}\in \PI \sph(H_i)$ such that $\left< \xi, \xi \right>_{\phi_1} = 0$.
If $n_0,...,n_m$ are as in the above, then
$$\phi_1\Big( \big< {\sum}_{k=n_{p}+1}^{n_{p+1}} \lambda_k \tsi u^{(k)}_i, {\sum}_{l=n_{q}+1}^{n_{q+1}} \lambda_l \tsi u^{(l)}_i\big>_{\BC^{\otimes I}} \Big)
\ = \ 0,$$
because of \eqref{eqt:phi0-disj} and the positivity of $\langle \cdot, \cdot\rangle_{\phi_1}$.
Hence, we may assume $u^{(k)} \sim u^{(l)}$ for all $k,l \in \{1,...,n\}$, and apply part (a) to conclude that $\xi = 0$.
Finally, as $\left< \cdot, \cdot \right>_{\phi_1}$ is an inner-product on $\BOI^\un H_i$ and we have both $\bigotimes_{i\in I}^\ct H_i = \ti K\oplus \bigotimes_{i\in I}^\un H_i$ and $\ti K \subseteq K$, we obtain
$K \subseteq \ti K$ as well.
\smnoind
(c) Observe that the linear bijection $\Psi: (\bigotimes_{i\in I_1} H_i) \otimes (\bigotimes_{j\in I_2} H_j) \to \bigotimes_{i\in I} H_i$ as in Remark \ref{rem:alt-con-ten-prod}(b) restricts to a surjection from $(\bigotimes_{i\in I_1}^\un H_i) \otimes (\bigotimes_{j\in I_2}^\un H_j)$ to $\bigotimes_{i\in I}^\un H_i$.
Moreover, for any $u, u'\in \Pi_{i\in I_1} \sph(H_i)$ and $v,v'\in \Pi_{j\in I_2} \sph(H_j)$, we have $(u,v)\sim (u',v')$ as elements in $\PI \sph(H_i)$ if and only if $u\sim u'$ and $v\sim v'$.
Thus, the argument in part (b) tells us that
$$\left< (\otimes_{i\in I_1} u_i) \otimes (\otimes_{j\in I_2} v_j),
(\otimes_{i\in I_1} u_i') \otimes (\otimes_{j\in I_2} v_j')\right>_{\phi_1}
\ =\ \langle \otimes_{i\in I_1} u_i,
\otimes_{i\in I_1} u_i'\rangle_{\phi_1}
\langle \otimes_{j\in I_2} v_j,
\otimes_{j\in I_2} v_j'\rangle_{\phi_1}.$$
This shows that $\Psi\big|_{(\bigotimes_{i\in I_1}^\un H_i) \otimes (\bigotimes_{j\in I_2}^\un H_j)}$ is inner-product preserving.
\end{prf}
\medskip
\emph{We denote by $\bar\bigotimes_{i\in I}^\mu H_i$ and $\bar\bigotimes_{i\in I}^{\phi_1} H_i$ the completions of $\bigotimes_{i\in I}^\mu H_i$ and ${\bigotimes}_{i\in I}^\un H_i$, respectively, under the norms induced by $\langle \cdot,\cdot \rangle_{\phi_1}$.}
\medskip
\begin{eg}\label{eg:bad-expan}
If $H_i = \BC$ ($i\in I$), then the sesquilinear form $\langle \cdot, \cdot \rangle_{\phi_1}$ is not positive on the whole space $\BOI H_i$ since
$\big\langle (\tsi 1/2 - \tsi 2), (\tsi 1/2 - \tsi 2)\big\rangle_{\phi_1} = -2$.
\end{eg}
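\medskip
The stated value may be verified directly. Since $I$ is infinite, neither $\tsi 1/4$ nor $\tsi 4$ is equivalent to $\tsi 1$, so $\phi_1$ kills the two diagonal terms, while both cross terms give $\phi_1(\tsi 1) = 1$:

```latex
\big\langle \tsi 1/2 - \tsi 2,\ \tsi 1/2 - \tsi 2 \big\rangle_{\phi_1}
  \ =\ \phi_1\big(\tsi 1/4\big) - \phi_1(\tsi 1) - \phi_1(\tsi 1) + \phi_1(\tsi 4)
  \ =\ 0 - 1 - 1 + 0
  \ =\ -2.
```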
\medskip
Set $\PI^{\rm eu} H_i:= \{x\in \PI H_i: x_i\in \sph(H_i) \text{ except for a finite number of }i\}$, and let
$K$ be an inner-product space.
A multilinear map $\Phi: \PI^{\rm eu} H_i \to K$ (i.e. $\Phi$ is coordinatewise linear) is said to be \emph{componentwise inner-product preserving} if for any $\mu,\nu\in \Omega^\un_{I;H}$,
$$\left<\Phi(x), \Phi(y)\right> = \delta_{\mu,\nu} \PI\ \! \langle x_i, y_i \rangle
\qquad (x\in \PI^\mu H_i; y\in \PI^\nu H_i)$$
where $\delta_{\mu,\nu}$ is the Kronecker delta.
\medskip
\begin{thm}\label{thm:univ-prop-Hil-sp-ten-prod}
(a) $\bar\bigotimes_{i\in I}^{\phi_1} H_i \cong \bar\bigoplus_{\mu\in \Omega_{I;H}^\un}^{\ell^2} \bar\bigotimes_{i\in I}^{\mu} H_i$ canonically as Hilbert spaces.
\smnoind
(b) $\Theta_H|_{\PI^{\rm eu} H_i}: \PI^{\rm eu} H_i \to \BOI^\un H_i$ is a componentwise inner-product preserving multilinear map. For any inner-product space $K$ and any componentwise inner-product preserving multilinear map $\Phi: \PI^{\rm eu} H_i \to K$, there is a unique isometry $\ti \Phi: \BOI^\un H_i \to K$ such that $\Phi = \ti \Phi\circ \Theta_H|_{\PI^{\rm eu} H_i}$.
\end{thm}
\begin{prf}
(a) Clearly, $\BOI^\un H_i = \sum_{\mu\in \Omega^\un_{I;H}} \BOI^{\mu} H_i$.
Moreover, as in the proof of Lemma \ref{lem:phi0-innpro}(b), the two subspaces $\BOI^{\mu} H_i$ and $\BOI^{\nu} H_i$ are orthogonal if $\mu$ and $\nu$ are distinct elements in $\Omega^\un_{I;H}$.
The rest of the argument is standard.
\smnoind
(b) It is easy to see that $\Theta_H|_{\PI^{\rm eu} H_i}$ is componentwise inner-product preserving.
The uniqueness of $\ti \Phi$ follows from the fact that $\Theta_H(\PI^{\rm eu} H_i)$ generates $\BOI^\un H_i$.
To show the existence of $\ti \Phi$, we first define a multilinear map $\Phi_0: \PI H_i \to K$ by setting $\Phi_0 = \Phi$ on $\PI^{\rm eu} H_i$ and
$\Phi_0 = 0$ on $\PI H_i \setminus \PI^{\rm eu} H_i$.
Let $\ti \Phi_0: \BOI H_i \to K$ be the induced linear map and set $\ti \Phi := \ti \Phi_0|_{\BOI^\un H_i}$.
Suppose that $u,v\in \PI \sph(H_i)$, $\xi\in \BOI^{u} H_i$ and $\eta\in \BOI^v H_i$.
If $u\nsim v$, then $\langle \xi, \eta\rangle_{\phi_1} = 0 =
\langle \ti \Phi(\xi), \ti \Phi(\eta)\rangle$.
Otherwise, there exist $F\in \mathfrak{F}$ and $\xi_0, \eta_0\in \bigotimes_{i\in F} H_i$ such that $\xi = J^{u}_F(\xi_0)$, $\eta = J^{v}_F(\eta_0)$ and $u_i = v_i$ if $i\in I\setminus F$.
In this case,
$\langle \ti \Phi(\xi), \ti \Phi(\eta)\rangle
= \langle \xi_0, \eta_0\rangle
= \langle \xi, \eta\rangle_{\phi_1}$.
\end{prf}
\medskip
\begin{eg}\label{eg:ten-Hil-sp}
Suppose that $\Phi$ and $\varphi$ are as in Corollary \ref{cor:boiut-C}, and $\{\delta_\mu\}_{\mu\in \Omega^\un_{I;\BC}}$ is the canonical orthonormal basis for $\ell^2\big(\Omega^\un_{I;\BC}\big)$.
Note that $\Omega_{I;\BC}^\ut = \Omega_{I;\BC}^\un$ and consider the linear bijection $J: \BC[\Omega^\ut_{I;\BC}] \to \BC[\Omega^\un_{I;\BC}]$ given by $J(\lambda([\alpha]_\sim)) := \delta_{[\alpha]_\sim}$ ($\alpha\in \PI \BT$).
By Example \ref{eg:u-ten-prod}(a) and Theorem \ref{thm:univ-prop-Hil-sp-ten-prod}(a), the map $J\circ \Phi$ induces a Hilbert space isomorphism $\hat\Phi:\bar\bigotimes_{i\in I}^{\phi_1} \BC \to \ell^2\big(\Omega^\un_{I;\BC}\big)$ such that $\hat\Phi(\tsi \beta_i) = \varphi(\beta)\delta_{[\beta]_\sim}$ ($\beta\in \PI\BT$).
\end{eg}
\medskip
We would like to compare $\bar\bigotimes_{i\in I}^{\phi_1} H_i$ with the infinite directed product as defined in \cite{vN}, when $\{H_i\}_\ii$ is a family of Hilbert spaces.
Let us first recall from \cite[Definition 3.3.1]{vN} that $x\in \PI H_i$ is a \emph{$C_0$-sequence} if $\sum_{i\in I} \big| \|x_i\| - 1 \big|$ converges.
As in \cite[Definition 3.3.2]{vN}, if $x$ and $y$ are $C_0$-sequences such that $\sum_{i\in I} \big| \langle x_i, y_i\rangle - 1 \big|$ converges, then we write $x \approx y$.
Denote by $[x]_\approx$ the equivalence class of $x$ under $\approx$, and by $\Gamma_{I;H}$ the set of all such equivalence classes (see \cite[Definition 3.3.3]{vN}).
\medskip
Let $\prod\otimes_{i\in I} H_i$ be the infinite direct product Hilbert space as defined in \cite{vN}, and
$\prod\otimes_\ii \ \!x_i$ be the element in $\prod{\otimes}_{i\in I} H_i$ corresponding to a $C_0$-sequence $x$ as in \cite[Theorem IV]{vN}.
Notice that if $x\in \PI^{\rm eu} H_i$, then $x$ is a $C_0$-sequence, and we have a multilinear map
$$\Upsilon: \PI^{\rm eu} H_i\ \longrightarrow\ \prod{\otimes}_{i\in I} H_i.$$
On the other hand, for any $\mathfrak{C}\in \Gamma_{I;H}$, we denote $\prod\otimes_{i\in I}^\mathfrak{C} H_i$ to be the closed subspace of $\prod\otimes_{i\in I} H_i$ generated by $\{ \prod\otimes_\ii\ \! x_i : x\in \mathfrak{C}\}$
(see \cite[Definition 4.1.1]{vN}).
\medskip
\begin{prop}\label{prop:cp-vN}
Let $\{H_i\}_{i\in I}$ be a family of Hilbert spaces.
\smnoind
(a) $[x]_\sim \mapsto [x]_\approx$ ($x\in \PI \sph(H_i)$) gives a well-defined surjection $\kappa_H: \Omega^\un_{I;H} \to \Gamma_{I;H}$.
Moreover, for any $x,y\in \PI \sph(H_i)$, there is a bijection between $\kappa_H^{-1}([x]_\approx)$ and $\kappa_H^{-1}([y]_\approx)$.
\smnoind
(b) There exists a linear map $\ti \Upsilon: \BOI^\un H_i \to \prod{\otimes}_{i\in I} H_i$ such that $\Upsilon = \ti \Upsilon \circ \Theta_H |_{\PI^{\rm eu} H_i}$ and $\ti \Upsilon\mid_{\BOI^\mu H_i}$ extends to a Hilbert space isomorphism $\ti\Upsilon^\mu: \bar\bigotimes_{i\in I}^{\mu} H_i \to \prod\otimes_{i\in I}^{\kappa_H(\mu)} H_i$ ($\mu\in \Omega^\un_{I;H}$).
\end{prop}
\begin{prf}
(a) Clearly, if $x\sim z$, then $x\approx z$; hence $\kappa_H$ is well-defined.
Moreover, \cite[Lemma 3.3.7]{vN} tells us that $\kappa_H$ is surjective.
Furthermore, there exists a unitary $u_i\in \CL(H_i)$ such that $u_ix_i = y_i$ ($i\in I$), and $[u_i]_{i\in I}$ induces the required bijective correspondence in the second statement.
\smnoind
(b) By the argument of Theorem \ref{thm:univ-prop-Hil-sp-ten-prod}(b), one can construct a linear map $\ti \Upsilon$ such that $\Upsilon = \ti \Upsilon \circ \Theta_H |_{\PI^{\rm eu} H_i}$.
By the argument of part (a), we see that $\ti \Upsilon\left(\BOI^{[u]_\sim} H_i\right) \subseteq \prod\otimes_{i\in I}^{[u]_\approx} H_i$ ($u\in \PI \sph(H_i)$).
Furthermore, by Lemma \ref{lem:phi0-innpro}(a), Proposition \ref{prop:ten-inn-prod}(b) and \cite[Theorem IV]{vN}, we see that $\ti \Upsilon|_{\BOI^{[u]_\sim} H_i}$ is an isometry.
Finally, $\ti \Upsilon|_{\BOI^{[u]_\sim} H_i}$ has dense range (by \cite[Lemma 4.1.2]{vN}).
\end{prf}
\medskip
Notice that $\ti \Upsilon$ is, in general, unbounded, but
Remark \ref{rem:cp-inf-direct-prod}(b) below tells us that $\bar\bigotimes_\ii^{\phi_1} H_i$ is a ``natural dilation'' of $\prod \tsi H_i$.
On the other hand, Remark \ref{rem:cp-inf-direct-prod}(d) says that it is possible to construct $\prod \tsi H_i$ in a similar way as that for $\bar\bigotimes_\ii^{\phi_1} H_i$.
Note, however, that the construction of $\bar\bigotimes_\ii^{\phi_1} H_i$ is purely algebraic, and $\bar\bigotimes_\ii^{\phi_1} H_i$ itself seems to be more natural (see Theorem \ref{thm:ten-prod-hil-cT-mod} and Example \ref{eg:ten-prod-rep-BC} below).
\medskip
\begin{rem}\label{rem:cp-inf-direct-prod}
Suppose that $\{H_i\}_{i\in I}$ is a family of Hilbert spaces.
\smnoind
(a) $\sim$ and $\approx$ are different even in the case when $I = \mathbb{N}$ and $H_i = \BC$ ($i\in \BN$) because one can find $x,y\in \Pi_{i\in \BN} \BT$ with $x_i \neq y_i$ for all $i\in \BN$ but $\sum_{i=1}^\infty \big|\langle x_i, y_i \rangle - 1\big|$ converges.
In fact, $\kappa_H^{-1}([x]_\approx)$ is an infinite set.
\smnoind
(b) By \cite[Lemma 4.1.1]{vN}, we have
\begin{equation*}\label{eqt:decomp-inf-dir-prod}
\prod{\otimes}_{i\in I} H_i\ =\ {\bar\bigoplus}_{\mathfrak{C}\in \Gamma_{I;H}}^{\ell^2} \prod{\otimes}_{i\in I}^\mathfrak{C} H_i.
\end{equation*}
Therefore, Theorem \ref{thm:univ-prop-Hil-sp-ten-prod}(a) and Proposition \ref{prop:cp-vN} tell us that for a fixed $\gamma_0\in \Gamma_{I;H}$, one has a canonical Hilbert space isomorphism $${\bar\bigotimes}_{i\in I}^{\phi_1} H_i
\ \cong\ \ell^2\big(\kappa_H^{-1}(\gamma_0)\big) \bar \otimes \big(\prod {\otimes}_{i\in I} H_i\big).$$
\smnoind
(c) For each $i\in I$, let $K_i$ be an inner-product space such that $H_i$ is the completion of $K_i$.
Then $\bar\bigotimes_{i\in I}^{\phi_1} K_i$ is, in general, not canonically isomorphic to $\bar\bigotimes_{i\in I}^{\phi_1} H_i$ because $\Omega_{I;K}^\un \subsetneq \Omega_{I;H}^\un$ if $K_i\subsetneq H_i$ for an infinite number of $i\in I$.
On the other hand, if $I$ is countable, for any $x\in \PI \sph(H_i)$, there exists $y\in \PI \sph(K_i)$ such that $x\approx y$.
This shows that the restriction, $\kappa_{H;K}$, of $\kappa_H$ on $\Omega^\un_{I;K}$ is also a surjection onto $\Gamma_{I;H}$.
However, we do not know whether the cardinalities of the sets $\kappa_{H;K}^{-1}(\mathfrak{C})$ are the same for different $\mathfrak{C}\in \Gamma_{I;H}$.
\smnoind
(d) If $\phi_0$ is as in Example \ref{eg:multi-linear}(b), it is easy to see that
$$\langle \prod \otimes u_i, \prod \otimes v_i\rangle
= \phi_0\big(\langle \tsi u_i, \tsi v_i\rangle_{\BC^{\otimes I}} \big)
\qquad (u,v\in \PI^\un H_i).$$
Thus, the sesquilinear form $\phi_0\big(\langle\cdot, \cdot\rangle_{\BC^{\otimes I}} \big)$ produces $\prod \otimes H_i$.
If one wants a self-contained alternative construction for $\prod \otimes H_i$, one needs to establish the positivity of $\phi_0\big(\langle\cdot, \cdot\rangle_{\BC^{\otimes I}} \big)$, which can be reduced to showing the positivity when all $H_i$ are of the same finite dimension.
\end{rem}
\medskip
In the remainder of this section, we show that $\BOI^\un H_i$ can be completed into a $C^*(\Omega^\ut_{I;\BC})$-module, which gives many pre-inner products on $\BOI^\un H_i$ including $\langle\cdot , \cdot\rangle_{\phi_1}$.
In the following, we use the convention that the $A$-valued inner-product of an inner-product $A$-module is $A$-linear in the first variable (where $A$ is a pre-$C^*$-algebra).
On the other hand, we recall that if $G$ is a group and $\lambda_g$ is the canonical image of $g$ in $\BC[G]$, the map ${\sum}_{g\in G} \alpha_g \lambda_g \mapsto \alpha_e$ ($\alpha_g\in \BC$), where $e\in G$ is the identity, extends to a faithful tracial state $\chi_G$ on $C^*(G)$.
\medskip
\begin{thm}\label{thm:ten-prod-hil-cT-mod}
(a) There exists an inner-product $\BC[\Omega^\ut_{I;\BC}]$-module structure on $\BOI^\un H_i$.
If $\bar\bigotimes_{i\in I}^{\rm mod} H_i$ is the Hilbert $C^*(\Omega^\ut_{I;\BC})$-module given by the completion of this $\BC[\Omega^\ut_{I;\BC}]$-module, we have a canonical Hilbert space isomorphism
\begin{equation}\label{eqt:ten-phi=mod-ten}
{\bar\bigotimes}_{i\in I}^{\phi_1} H_i\ \cong\ \big({\bar\bigotimes}_{i\in I}^{\rm mod} H_i\big) \bar\otimes_{\chi_{\Omega^\ut_{I;\BC}}} \BC.
\end{equation}
\smnoind
(b) If $G\subseteq \Omega^\ut_{I;\BC}$ is a subgroup and $\mathcal{E}_G: C^*(\Omega^\ut_{I;\BC}) \to C^*(G)$ is the canonical conditional expectation, there is an inner-product $\BC[G]$-module structure on $\BOI^\un H_i$, whose completion coincides with the Hilbert $C^*(G)$-module $\big({\bar\bigotimes}_{i\in I}^{\rm mod} H_i\big) \bar\otimes_{\mathcal{E}_G} C^*(G)$.
\end{thm}
\begin{prf}
(a) Clearly, $\BOI^\un H_i$ is a $\BC^{\otimes I}_\ut$-submodule of the $\BC^{\otimes I}$-module $\BOI H_i$ (see Proposition \ref{prop:exist-ten-prod}(c)).
Moreover,
one has a linear ``truncation''
$E$ from $\BC^{\otimes I} = \big({\bigoplus}_{\omega\in \Omega_{I;\BC}\setminus \Omega^\ut_{I;\BC}} \BOI^\omega \BC\big) \oplus \BC^{\otimes I}_\ut$ to $\BC^{\otimes I}_\ut$ sending $(\alpha, \beta)$ to $\beta$.
Define
$$\langle \xi, \eta \rangle_{\BC^{\otimes I}_\ut}
:= E\big( \langle \xi, \eta \rangle_{\BC^{\otimes I}} \big)
\qquad \big(\xi, \eta\in \BOI^\un H_i\big),$$
which is a Hermitian $\BC^{\otimes I}_\ut$-form because by \eqref{eqt:grading}, we have
$$E(ab) = E(a)b \quad \text{and}\quad E(a^*) = E(a)^*
\qquad (a\in \BC^{\otimes I};b\in \BC^{\otimes I}_\ut).$$
For any $u,v\in \PI \sph(H_i)$, we write $u\sim_\s v$ if there exists $\beta\in \PI \BT$ such that $u_i = \beta_i v_i$ e.f.
Then $\sim_\s$ is an equivalence relation on $\PI \sph(H_i)$ satisfying
\begin{equation}\label{eqt:sims-equiv}
u\sim_\s v\quad \text{if and only if}\quad \langle \tsi u_i, \tsi v_i \rangle_{\BC^{\otimes I}} \in \BC^{\otimes I}_\ut.
\end{equation}
Let $\Phi$ and $\varphi$ be as in Corollary \ref{cor:boiut-C}.
Suppose that $\xi = \sum_{k=1}^n \alpha_k \tsi u^{(k)}_i$ with $\alpha_1,...,\alpha_n\in \BC$ and $u^{(1)}, ..., u^{(n)}\in \PI \sph(H_i)$.
We first show that $\Phi(\langle \xi, \xi \rangle_{\BC^{\otimes I}_\ut}) \in C^*(\Omega^\ut_{I;\BC})_+$.
As in the proof of Lemma \ref{lem:phi0-innpro}(b), it suffices to consider the case when $u^{(k)} \sim_\s u^{(1)}$ for any $k \in \{1,...,n\}$ (because of Relation \eqref{eqt:sims-equiv}).
Let $F\in \mathfrak{F}$ and $\beta^{(1)},...,\beta^{(n)}\in \PI \BT$ such that $u^{(k)}_i = \beta^{(k)}_iu^{(1)}_i$ ($i\in I\setminus F$; $k=1,...,n$).
For any $k,l\in \{1,...,n\}$, we have
$$
\Phi\big((\Pi_{i\in F} \langle u^{(k)}_i, u^{(l)}_i \rangle_i) (\otimes_{i\in I\setminus F}\ \!\beta_i^{(k)}
\overline{\beta_i^{(l)}})\big)
\ =\ \langle \ti\varphi_F(u^{(k)}), \ti\varphi_F(u^{(l)}) \rangle_F,$$
where $\ti\varphi_F(u^{(k)}) := \big(\varphi(\beta^{(k)}) \Pi_{i\in F} \beta^{(k)}_i\big)^{-1} (\otimes_{i\in F}\ \! u^{(k)}_i)\otimes \lambda_{[\beta^{(k)}]_\sim}$ and
$\langle \cdot, \cdot \rangle_F$ is the canonical $\BC[\Omega^\ut_{I;\BC}]$-valued inner-product on $(\bigotimes_{i\in F} H_i) \otimes \BC[\Omega^\ut_{I;\BC}]$.
Therefore,
\begin{equation*}\label{eqt:mod-pos}
\Phi(\langle \xi ,\xi \rangle_{\BC^{\otimes I}_\ut})
\ = \ \left\langle {\sum}_{k=1}^n \alpha_k \ti\varphi_F(u^{(k)}), {\sum}_{k=1}^n \alpha_k \ti\varphi_F(u^{(k)}) \right\rangle_F
\ \geq \ 0.
\end{equation*}
Next, we show that $\chi_{\Omega^\ut_{I;\BC}}\circ \Phi\circ E = \phi_1$.
Let $\alpha\in \PI \BC^\times$.
If $\alpha\nsim 1$, then $\chi_{\Omega^\ut_{I;\BC}}\circ \Phi\circ E(\tsi \alpha_i) = 0$ (as $\Phi(E(\tsi \alpha_i))\notin \BC\cdot \lambda_{[1]_\sim}\setminus \{0\}$, whether or not $[\alpha]_\sim\in \Omega^\ut_{I;\BC}$) and we also have $\phi_1(\tsi \alpha_i) = 0$.
If $\alpha \sim 1$, then $\tsi \alpha_i = (\PI \alpha_i)(\tsi 1) = (\PI \alpha_i)\lambda_{[1]_\sim}$, which implies that
$\chi_{\Omega^\ut_{I;\BC}}(\Phi(\tsi \alpha_i))
= \PI \alpha_i
= \phi_1(\tsi \alpha_i)$.
Thus, we have
\begin{equation}\label{eqt:cp-chi-phi0}
\chi_{\Omega^\ut_{I;\BC}}\big(\Phi(\langle \xi, \eta \rangle_{\BC^{\otimes I}_\ut})\big)
= \langle \xi, \eta \rangle_{\phi_1}
\qquad \big(\xi, \eta\in \BOI^\un H_i\big).
\end{equation}
As a consequence, if $\Phi(\langle \xi, \xi \rangle_{\BC^{\otimes I}_\ut}) = 0$, we know from Lemma \ref{lem:phi0-innpro}(b) that $\xi = 0$.
This gives an inner-product $\BC[\Omega^\ut_{I;\BC}]$-module structure on $\BOI^\un H_i$.
Furthermore, the required isomorphism $\bar\bigotimes_{i\in I}^{\phi_1} H_i \cong (\bar\bigotimes_{i\in I}^{\rm mod} H_i) \bar \otimes_{\chi_{\Omega^\ut_{I;\BC}}} \BC$ also follows from
\eqref{eqt:cp-chi-phi0}.
\smnoind
(b) Since $\BOI^\un H_i$ is a $\BC[G]$-module (under the identification of $\BC[G]$ with $\bigoplus_{\omega\in G} \BOI^\omega \BC$ under the $^*$-isomorphism $\Phi$ in Corollary \ref{cor:boiut-C}), every element in $(\BOI^\un H_i)\otimes_{\BC[G]} \BC[G]$ is of the form $\xi\otimes_{\BC[G]} 1$ for some $\xi\in \BOI^\un H_i$.
Moreover, if $\xi,\eta\in \BOI^\un H$, then
\begin{equation}\label{eqt:G-mod}
\langle \xi\otimes_{\BC[G]} 1, \eta\otimes_{\BC[G]} 1\rangle_{({\bar\bigotimes}_{i\in I}^{\rm mod} \BC)\bar\otimes_{\mathcal{E}_G} C^*(G)}
\ =\ \mathcal{E}_G(\Phi(\langle \xi, \eta\rangle_{\BC^{\otimes I}_\ut}))
\ =\ \Phi(E_G(\langle \xi, \eta\rangle_{\BC^{\otimes I}})),
\end{equation}
where $E_G$ is the linear ``truncation'' map from $\BC^{\otimes I}$ to $\bigoplus_{\omega\in G} \BOI^\omega \BC$ defined as in part (a).
Therefore, $\Phi(E_G(\langle \cdot, \cdot\rangle_{\BC^{\otimes I}}))$ is a positive Hermitian $\BC[G]$-form on $\BOI^\un H_i$.
Obviously, $\chi_{\Omega^\ut_{I;\BC}} = \chi_G\circ \mathcal{E}_G$, and by \eqref{eqt:cp-chi-phi0},
$$\chi_G(\Phi(E_G(\langle \xi, \eta\rangle_{\BC^{\otimes I}})))
= \chi_{\Omega^\ut_{I;\BC}}(\Phi(\langle \xi, \eta\rangle_{\BC^{\otimes I}_\ut}))
= \langle \xi, \eta\rangle_{\phi_1}
\qquad \big(\xi,\eta\in \BOI^\un H\big).$$
This implies that $\Phi(E_G(\langle \cdot, \cdot\rangle_{\BC^{\otimes I}}))$ is non-degenerate (since $\langle\cdot, \cdot\rangle_{\phi_1}$ is non-degenerate by Lemma \ref{lem:phi0-innpro}(b)).
Now, Equation \eqref{eqt:G-mod} tells us that the Hilbert $C^*(G)$-module $\big({\bar\bigotimes}_{i\in I}^{\rm mod} H_i\big)\bar\otimes_{\mathcal{E}_G} C^*(G)$ is the completion of $\BOI^\un H_i$ under the norm induced by the $\BC[G]$-valued inner-product $\Phi(E_G(\langle \cdot, \cdot\rangle_{\BC^{\otimes I}}))$.
\end{prf}
\medskip
Let $\{e\}$ be the trivial subgroup of $\Omega^\ut_{I;\BC}$.
Since one can identify $E_{\{e\}}$ with $\phi_1$ (through the argument of Theorem \ref{thm:ten-prod-hil-cT-mod}(b)), one has
$${\bar\bigotimes}_{i\in I}^{\phi_1} H_i \ \cong \ \big({\bar\bigotimes}_{i\in I}^{\rm mod} H_i\big)\bar\otimes_{\mathcal{E}_{\{e\}}} \BC.$$
\medskip
\begin{rem}\label{rem:hil-mod}
(a) For any subgroup $G\subseteq \Omega^\ut_{I;\BC}$ and any faithful state $\varphi$ on $C^*(G)$,
the Hilbert space
$$\Big(\big({\bar\bigotimes}_\ii^{\rm mod} H_i\big)\bar\otimes_{\mathcal{E}_G} C^*(G)\Big) \bar\otimes_\varphi \BC$$
induces an inner-product on $\BOI^\un H_i$.
\smnoind
(b) If $x\in \PI^0 \BC$ (see Example \ref{eg:multi-linear}(b)), then $\sup_{i\in I} \abs{x_i} < \infty$.
This, together with the surjectivity of $\kappa_\BC$ (see Proposition \ref{prop:cp-vN}(a)), tells us that $\Gamma_{I;\BC}$ is a group under the multiplication: $[x]_\approx\cdot [y]_\approx := [xy]_\approx$ (where $(xy)_i := x_iy_i$ for any $i\in I$).
Moreover, $\kappa_\BC: \Omega^\ut_{I;\BC} = \Omega^\un_{I;\BC} \to \Gamma_{I;\BC}$ is a group homomorphism, which induces a surjective $^*$-homomorphism $\bar \kappa_\BC: C^*(\Omega^\ut_{I;\BC}) \to C^*(\Gamma_{I;\BC})$.
\smnoind
(c) It is natural to ask whether $\big((\bar\bigotimes_\ii^{\rm mod} H_i) \bar\otimes_{\bar\kappa_\BC} C^*(\Gamma_{I;\BC})\big)\bar\otimes_{\chi_{\Gamma_{I;\BC}}}\BC$ is isomorphic to $\prod \tsi H_i$ canonically.
Unfortunately, this is not the case in general.
In fact, for any $x,y\in \PI^\un H_i$, we denote $x \approx_\BT y$ if there exists $\alpha\in \PI \BT$ with $\alpha\approx 1$ such that $x_i = \alpha_i y_i$ e.f.
It is easy to check that $\approx_\BT$ is an equivalence relation lying strictly between $\sim$ and $\approx$ in general.
Moreover, one has
$$\big< ((\tsi x_i)\otimes_{\bar\kappa_\BC} 1)\otimes_{\chi_{\Gamma_{I;\BC}}} 1, ((\tsi y_i)\otimes_{\bar\kappa_\BC} 1)\otimes_{\chi_{\Gamma_{I;\BC}}} 1\big>
= 0
\quad \text{whenever } x\not\approx_\BT y,$$
while $\big< \prod \tsi x_i, \prod \tsi y_i\big> = 0$ whenever $x\not\approx y$.
Note, however, that if all $H_i = \BC$, then $\approx_\BT$ and $\approx$ coincide, and one can show that the two Hilbert spaces
$\big((\bar\bigotimes_\ii^{\rm mod} \BC) \bar\otimes_{\bar\kappa_\BC} C^*(\Gamma_{I;\BC})\big)\bar\otimes_{\chi_{\Gamma_{I;\BC}}}\BC$ and $\prod \tsi \BC$ coincide canonically.
\end{rem}
\medskip
\begin{eg}\label{eg:ten-Hil-mod}
(a) It is clear that $\bar\bigotimes_\ii^{\rm mod} \BC = C^*(\Omega^\ut_{I;\BC})$.
For any state $\varphi$ on $C^*(\Omega^\ut_{I;\BC})$, the Hilbert space $(\bar\bigotimes_\ii^{\rm mod} \BC)\bar\otimes_\varphi \BC$ is the GNS construction of $\varphi$.
\smnoind
(b) If $G$ is a subgroup of $\Omega^\ut_{I;\BC}$, we have
$$\big({\bar\bigotimes}_{i\in I}^{\rm mod} \BC\big)\bar\otimes_{\mathcal{E}_G} C^*(G)
\ \cong\ \ell^2(\Omega^\ut_{I;\BC}/G) \bar\otimes C^*(G).$$
In fact, let $q: \Omega^\ut_{I;\BC}\to \Omega^\ut_{I;\BC}/G$ be the quotient map and $\sigma: \Omega^\ut_{I;\BC}/G \to \Omega^\ut_{I;\BC}$ be a cross-section.
One has a bijection from $\Omega^\ut_{I;\BC}$ to $(\Omega^\ut_{I;\BC}/G)\times G$ sending $\omega$ to $(q(\omega), \sigma(q(\omega)^{-1})\omega)$.
This induces a bijective linear map $\Delta : \BC[\Omega^\ut_{I;\BC}] \to \bigoplus_{\Omega^\ut_{I;\BC}/G} \BC[G]$ such that
for any $\omega\in \Omega^\ut_{I;\BC}$ and $\varepsilon \in \Omega^\ut_{I;\BC}/G$,
$$\Delta(\lambda_{\omega})_\varepsilon
\ := \ \begin{cases}
\lambda_{\sigma(\varepsilon^{-1})\omega} \ \ & \text{if }q(\omega)= \varepsilon\\
0 & \text{otherwise}.
\end{cases}$$
Let $\Phi: \BOI^\un \BC = \BC^{\otimes I}_\ut \to \BC[\Omega^\ut_{I;\BC}]$ and $\varphi:\PI \BT\to \BT$ be as in Corollary \ref{cor:boiut-C}.
Suppose that $\alpha,\beta\in \PI \BC^\times$.
If $[\alpha\beta^{-1}]_\sim$ does not belong to $G$, then $E_G(\langle \tsi \alpha_i, \tsi \beta_i\rangle_{\BC^{\otimes I}}) = 0$, and
$$\big\langle \Delta\circ\Phi\big(\tsi \alpha_i\big), \Delta\circ\Phi\big(\tsi \beta_i\big) \big\rangle_{\bigoplus_{\Omega^\ut_{I;\BC}/G}^{\ell^2} \BC[G]}
\ = \ 0.$$
On the other hand, if $[\alpha\beta^{-1}]_\sim\in G$, then
\begin{eqnarray*}
\big\langle \Delta\circ\Phi\big(\tsi \alpha_i\big), \Delta\circ\Phi\big(\tsi \beta_i\big) \big\rangle_{\bigoplus_{\Omega^\ut_{I;\BC}/G}^{\ell^2} \BC[G]}
& = & \varphi(\alpha \beta^{-1}) \lambda_{[\alpha\beta^{-1}]_\sim}\\
& = & \Phi(\tsi \alpha_i\beta^{-1}_i)
\ = \ \Phi(E_G(\langle \tsi \alpha_i, \tsi \beta_i\rangle_{\BC^{\otimes I}})).
\end{eqnarray*}
This shows that $\Delta\circ \Phi$ is an inner-product $\BC[G]$-module isomorphism from $\BOI^\un \BC$ (equipped with the inner-product $\BC[G]$-module structure as in Theorem \ref{thm:ten-prod-hil-cT-mod}(b)) onto $\bigoplus_{\Omega^\ut_{I;\BC}/G}^{\ell^2} \BC[G]$.
\end{eg}
\medskip
\section{Tensor products of $^*$-representations of $^*$-algebras}
\medskip
\emph{In this section, $\{(A_i, H_i, \Psi_i)\}_{i\in I}$ is a family of unital $^*$-representations, in the sense that $A_i$ is a unital $^*$-algebra, $H_i$ is a Hilbert space and $\Psi_i: A_i \to \CL(H_i)$ is a unital $^*$-homomorphism ($i\in I$).}
\medskip
Suppose that $\Psi_0:=\TBOIM\Psi_i: \BOI A_i \to L(\BOI H_i)$ is the map as in Proposition \ref{prop:exist-ten-prod}(c).
It is easy to check that
\begin{equation}\label{eqt:ten-psi-inv}
\big< \Psi_0 (a) \xi, \eta\big>_{\BC^{\otimes I}}
=\big< \xi, \Psi_0(a^*) \eta\big>_{\BC^{\otimes I}}
\qquad \big(a\in {\bigotimes}_{i\in I} A_i;\xi,\eta\in {\bigotimes}_{i\in I} H_i\big).
\end{equation}
Furthermore, one has the following result (which is more or less well-known).
\medskip
\begin{prop}\label{prop:ten-prod-st-alg}
For any $\mu\in \Omega^\un_{I;H}$, the map $\TBOIM\Psi_i$ induces a unital $^*$-representation ${\bigotimes}_{i\in I}^\mu\Psi_i: \bigotimes_{i\in I}^e A_i \to \CL(\bar\bigotimes_{i\in I}^\mu H_i)$.
If, in addition, all $\Psi_i$ are injective, then so is $\BOI^\mu\Psi_i$.
\end{prop}
\medskip
Consequently, one has a unital $^*$-representation of $\BOI^e A_i$ on the Hilbert space $\bar\bigotimes^{\phi_1}_{i\in I} H_i$.
However, it seems impossible to extend it to a unital $^*$-representation of $\BOI A_i$ on $\bar\bigotimes^{\phi_1}_{i\in I} H_i$.
The biggest $^*$-subalgebra of $\BOI A_i$ that we can think of, for which such an extension is possible, is the subalgebra $\BOI^\ut A_i$.
Example \ref{eg:ten-prod-rep-BC}(a) also tells us that it is probably the right subalgebra to consider.
\medskip
Let us digress a little bit and give another $^*$-representation of $\BOI^\ut A_i$, which is a direct consequence of Proposition \ref{prop:ten-prod-st-alg}, Theorem \ref{thm:boiut-tw-cr-pd}(a) and \cite[Theorem 4.1]{BS} (it is not hard to verify that the representation as given in \cite[Theorem 4.1]{BS} is injective when $\BOI^{\mu}\Psi_i$ is injective).
Note, however, that such a $^*$-representation is not canonical since it depends on the choice of a cross-section $c:\Omega^\ut_{I;A} \to \PI U_{A_i}$ (see Remark \ref{rem:ut-prod}(a)).
\medskip
\begin{cor}
Suppose that $\Psi_i$ are injective.
For any $\mu\in \Omega^\un_{I;H}$, the injection $\BOI^{\mu}\Psi_i$ induces an injective unital $^*$-representation of
$\BOI^\ut A_i$ on $(\bar\bigotimes_{i\in I}^\mu H_i)\otimes \ell^2(\Omega^\ut_{I;A})$.
\end{cor}
\medskip
Let us now go back to the discussion of the tensor product type representation of $\BOI^\ut A_i$.
Observe that $\{\Psi_i\}_\ii$ induces a canonical action $\alpha^\Psi: \Omega^\ut_{I;A}\times \Omega^\un_{I;H} \to \Omega^\un_{I;H}$.
For simplicity, we will denote $\alpha^\Psi_\omega(\mu)$ by $\omega\cdot \mu$ ($\omega\in \Omega^\ut_{I;A}; \mu\in \Omega^\un_{I;H}$).
\smnoind
\medskip
\begin{thm}\label{thm:inf-ten-c-st-alg}
(a) The map $\TBOIM \Psi_i$ induces a unital $^*$-representation $\bigotimes_{i\in I}^{\phi_1} \Psi_i: \BOI^\ut A_i \to \CL\big( \bar\bigotimes_{i\in I}^{\phi_1} H_i \big)$.
\smnoind
(b) $\big( \bar\bigotimes_{i\in I}^{\phi_1} H_i, ({\bigotimes}_{i\in I}^{\phi_1} \Psi_i)|_{\BOI^eA_i}\big) = \bigoplus_{\mu\in \Omega_{I;H}^\un} \big(\bar\bigotimes_{i\in I}^{\mu} H_i, {\bigotimes}_{i\in I}^{\mu} \Psi_i \big)$.
\smnoind
(c) If all $\Psi_i$ are injective, then so is $\BOI^{\phi_1}\Psi_i$.
\end{thm}
\begin{prf}
(a) Set $\Psi_0:=\TBOIM \Psi_i$.
For any $\mu\in \Omega^\un_{I;H}$, $\omega\in \Omega^\ut_{I;A}$ and $a\in \PI^\omega A_i$, it is clear that
\begin{equation}\label{eqt:Psi-otimes-mu}
\Psi_0(\tsi a_i)\big(\BOI^\mu H_i \big)\ \subseteq\ \BOI^{\omega\cdot \mu} H_i.
\end{equation}
Suppose that $u\in \omega$ and $F\in \mathfrak{F}$ are such that $a_i = u_i$ for $i\in I\setminus F$.
If $\xi= J_{F'}^x(\xi_0)$ where $x\in \mu$, $F'\in \mathfrak{F}$ with $F\subseteq F'$ and $\xi_0\in \bigotimes_{i\in F'} H_i$, then
$$\left< \Psi_0(\tsi a_i) \xi,
\Psi_0(\tsi a_i) \xi \right>_{\BC^{\otimes I}}
\ =\ \big< \big({\bigotimes}_{i\in F} \Psi_i(a_i)\otimes \id\big)\xi_0, \big({\bigotimes}_{i\in F} \Psi_i(a_i)\otimes \id\big)\xi_0 \big> (\tsi 1).$$
This means that $\Psi_0(\tsi a_i)$ is bounded on $\big(\BOI^\un H_i,\langle \cdot, \cdot\rangle_{\phi_1}\big)$ (see Theorem \ref{thm:univ-prop-Hil-sp-ten-prod}(a) and Proposition \ref{prop:ten-inn-prod}(b)) and produces a unital homomorphism $\BOI^{\phi_1} \Psi_i: \BOI^\ut A_i \to \CL\big( \bar\bigotimes_{i\in I}^{\phi_1} H_i \big)$.
Now, Relation \eqref{eqt:ten-psi-inv} tells us that $\BOI^{\phi_1} \Psi_i$ preserves the involution.
\smnoind
(b) This part follows directly from the argument of part (a).
\smnoind
(c) Set $\Psi := \BOI^{\phi_1} \Psi_i$.
Suppose that $v^{(1)}, ..., v^{(n)}\in \PI U_{A_i}$ are mutually inequivalent elements, $F\in \mathfrak{F}$, $b^{(1)},...,b^{(n)}\in \bigotimes_{i\in F} A_i$ and $a^{(k)} := J_F^{v^{(k)}}(b^{(k)})$ ($k=1,...,n$) such that
\begin{equation*}\label{eqt:ker-Psi}
\Psi\big({\sum}_{k=1}^n a^{(k)}\big)\ =\ 0.
\end{equation*}
By induction, it suffices to show that $a^{(1)} = 0$.
By replacing $a^{(k)}$ with $(v^{(1)})^{-1}a^{(k)}$ if necessary, we may assume that $v^{(1)}_i = e_i$ ($\ii$).
If $n=1$, we take an arbitrary $\xi\in \PI \sph(H_i)$.
If $n >1$, we claim that there exists $\xi\in \PI\sph(H_i)$ such that
\begin{equation}\label{eqt:strong-faith}
\xi\ \nsim\ [V^{(k)}_i\xi_i]_\ii
\qquad (k=2,...,n),
\end{equation}
where $V^{(k)}_i := \Psi_i(v^{(k)}_i)$.
In fact, if $k\in \{2,...,n\}$ and $i\in I^k := \{i\in I: v^{(k)}_i \neq e_i\}$ (which is an infinite set), the subset $\sph (H_i) \cap \ker (V^{(k)}_i - \id_{H_i})$ is nowhere dense in $\sph (H_i)$ as $\ker (V^{(k)}_i - \id_{H_i})$ is a proper closed subspace of $H_i$ (note that $\Psi_i$ is injective).
For any $i\in I$, we consider $N_i := \{k\in\{2,...,n\}: i\in I^k\}$.
By the Baire Category Theorem, for every $i\in I$, one can choose $\xi_i\in \sph(H_i) \setminus \bigcup_{k\in N_i} \ker (V^{(k)}_i - \id_{H_i})$.
Now, $\xi:=[\xi_i]_\ii$ will satisfy Relation \eqref{eqt:strong-faith}.
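The avoidance step above can be illustrated numerically in finite dimensions; the following sketch uses two arbitrary sample unitaries on $\mathbb{C}^2$ (not taken from the paper) and exhibits a unit vector moved by both.

```python
import numpy as np

# Finite-dimensional illustration of the avoidance step: for finitely
# many unitaries V_k with V_k != id, the fixed subspaces ker(V_k - id)
# are proper closed subspaces, so a unit vector xi avoiding all of
# them exists.  The two unitaries on C^2 below are sample choices.
V1 = np.diag([1.0, -1.0])                  # fixes span{e_1}
V2 = np.array([[0.0, 1.0], [1.0, 0.0]])    # fixes span{(1, 1)}

xi = np.array([1.0, 2.0]) / np.sqrt(5.0)   # in neither fixed subspace

assert np.isclose(np.linalg.norm(xi), 1.0)
for V in (V1, V2):
    assert not np.allclose(V @ xi, xi)     # xi is moved by every V_k
```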
Since $\Psi(a^{(1)})\big(\BOI^\xi H_i\big) \subseteq \BOI^\xi H_i$ and
$$\BOI^\xi H_i\ \cap\ {\sum}_{k=2}^n\Psi(a^{(k)})\big(\BOI^\xi H_i\big)\ =\ \{0\}$$
(because of Theorem \ref{thm:bas-inf-ten} as well as \eqref{eqt:Psi-otimes-mu} and \eqref{eqt:strong-faith}),
we have $\Psi(a^{(1)})|_{\BOI^\xi H_i} = 0$.
Therefore, part (b) and Proposition \ref{prop:ten-prod-st-alg} tell us that $a^{(1)} = 0$.
\end{prf}
\medskip
\begin{rem}\label{rem-act-OA-OH}
(a) By the argument of Theorem \ref{thm:inf-ten-c-st-alg}(c), if all $\Psi_i$ are injective, then $\alpha^\Psi$ is \emph{strongly faithful} in the sense that for any finite subset $F\subseteq \Omega^\ut_{I;A} \setminus \{e\}$, there exists $\mu\in \Omega^\un_{I;H}$ with $\omega\cdot \mu \neq \mu$ ($\omega\in F$).
\smnoind
(b) If $y, z\in \PI H_i$ are $C_0$-sequences and $u,v \in \PI U_{A_i}$, then
\begin{equation}\label{eqt:defn-ti-alpha}
y \approx z \quad \text{if and only if} \quad [\Psi_i(u_i)y_i]_{i\in I}\approx [\Psi_i(u_i)z_i]_{i\in I}
\end{equation}
and $[\Psi_i(u_i)y_i]_{i\in I}\approx [\Psi_i(v_i)y_i]_{i\in I}$ whenever $u\sim v$.
Thus, $\{\Psi_i\}_{i\in I}$ induces an action
$\ti \alpha^\Psi: \Omega^\ut_{I;A}\times \Gamma_{I;H} \to \Gamma_{I;H}$.
Again, we write $\omega\cdot \gamma$ for $\ti \alpha^\Psi_\omega(\gamma)$ ($\omega\in \Omega^\ut_{I;A}; \gamma\in\Gamma_{I;H}$).
The map $\kappa_H$ in Proposition \ref{prop:cp-vN}(a) is \emph{equivariant} in the sense that $\kappa_H\circ \alpha^\Psi_\omega = \ti\alpha^\Psi_\omega\circ \kappa_H$ ($\omega\in \Omega^\ut_{I;A}$).
\smnoind
(c) If all $A_i$ are $C^*$-algebras and all $\Psi_i$ are irreducible, then $\alpha^\Psi$ is transitive.
\end{rem}
\medskip
\begin{cor}\label{cor:rep-on-inf-dir-prod}
There is a unital $^*$-representation $\prod \tsi \Psi_i: \BOI^\ut A_i \to \CL( \prod \tsi H_i )$ such that
for any $\mu\in \Omega^\un_{I;H}$, $\omega\in \Omega^\ut_{I;A}$ and $b\in \BOI^\omega A_i$,
\begin{equation}\label{eqt:PIti<->BOI}
\big(\prod \tsi \Psi_i\big)(b)\circ \ti\Upsilon^{\mu}
\ = \ \ti\Upsilon^{\omega\cdot \mu} \circ \big(\BOI^{\phi_1}\Psi_i\big)(b)\big|_{\bar\bigotimes_{i\in I}^\mu H_i},
\end{equation}
where $\ti\Upsilon^\mu$ is as in Proposition \ref{prop:cp-vN}(b).
\end{cor}
\begin{prf}
By Proposition \ref{prop:cp-vN}(b),
there is a bounded linear map
$$\big(\prod \tsi \Psi_i\big)(b): \prod \otimes_\ii^{\kappa_H(\mu)} H_i \to \prod \otimes_\ii^{\omega\cdot\kappa_H(\mu)} H_i$$
such that Equality \eqref{eqt:PIti<->BOI} holds (see also Remark \ref{rem-act-OA-OH}(b)).
Since we have $\sup_{\mu\in \Omega^\un_{I;H}} \big\|(\BOI^{\phi_1}\Psi_i)(b)|_{\bar\bigotimes_{i\in I}^\mu H_i}\big\| < \infty$ (because of Theorem \ref{thm:inf-ten-c-st-alg}(a)), we know from Proposition \ref{prop:cp-vN}(a) and \cite[Lemma 4.1.1]{vN} that $(\prod \tsi \Psi_i)(b)$ induces an element in $\CL(\prod \tsi H_i )$, which clearly gives a $^*$-representation.
\end{prf}
\medskip
It is natural to ask whether $\prod \tsi \Psi_i$ is injective when all $\Psi_i$ are.
However, $\prod \tsi \Psi_i$ is never injective, as can be seen in
Example \ref{eg:ten-prod-rep-BC}(b) and the discussion following it.
\medskip
\begin{eg}\label{eg:ten-prod-rep-BC}
For any $\ii$, let $A_i = \BC = H_i$ and $\iota_i: A_i \to \CL(H_i)$ be the canonical map.
Suppose that $\Phi$, $\varphi$ and $\hat \Phi$ are as in Example \ref{eg:ten-Hil-sp}.
\smnoind
(a) Let $\Lambda: \BC[\Omega^\ut_{I;\BC}] \to \CL(\ell^2(\Omega^\ut_{I;\BC}))$ be the left regular representation.
For every $\alpha,\beta\in \PI \BT$, one has
$$\big(\hat\Phi^*\circ \Lambda(\lambda_{[\alpha]_\sim})\circ \hat\Phi\big)(\tsi \beta_i)
\ = \ \varphi(\alpha^{-1}) \tsi \alpha_i\beta_i
\ = \ \big(\BOI^{\phi_1} \iota_i\big)(\Phi^{-1}(\lambda_{[\alpha]_\sim}))(\tsi \beta_i).$$
Consequently, $\BOI^{\phi_1} \iota_i$ can be identified with $\Lambda$ (under $\Phi$ and $\hat \Phi$).
\smnoind
(b) Let $\alpha\in \PI\BT$ be such that $\alpha\nsim 1$ but $\alpha \approx 1$ with $\PI \alpha_i =1$.
If $\beta\in \PI \BC$ is a $C_0$-sequence with $\|\prod \tsi \beta_i\| = 1$, one has $\|\prod \tsi \alpha_i\beta_i\| = 1$ and
$$\big\langle \prod \tsi \alpha_i\beta_i, \prod \tsi \beta_i \big\rangle
\ =\ 1,$$
which imply that $\prod \tsi \alpha_i\beta_i = \prod \tsi \beta_i$.
Therefore, $(\prod \tsi \iota_i)(\tsi \alpha_i) = \id$ but $\tsi \alpha_i \neq \tsi 1$.
Consequently, $\prod \tsi \iota_i$ is non-injective (actually, $(\prod \tsi \iota_i)\circ \Phi^{-1}$ is non-injective as a group representation of $\Omega^\ut_{I;\BC}$).
\end{eg}
\medskip
In general, even $\big(\prod \tsi \Psi_i\big)|_{\BOI^\ut \BC e_i}$ is non-injective.
In fact, suppose that $\alpha$ is as above.
For any $C_0$-sequence $\xi\in \PI H_i$ with $\|\prod \tsi \xi_i\| = 1$, the same argument as in
Example \ref{eg:ten-prod-rep-BC}(b) tells us that $\prod \tsi \alpha_i\xi_i = \prod \tsi \xi_i$.
Thus, $\big(\prod \tsi \Psi_i\big)(\tsi e_i - \tsi \alpha_i e_i) = 0$.
\medskip
On the other hand, by Theorem \ref{thm:inf-ten-c-st-alg} and Corollary \ref{cor:rep-on-inf-dir-prod}, there exist canonical $^*$-homomorphisms
$$J^{\phi_1}: \BOI^\ut \CL(H_i) \to \CL\big(\bar\bigotimes_\ii^{\phi_1} H_i\big)
\ \ \text{and} \ \
J^\Pi: \BOI^\ut \CL(H_i) \to \CL\big(\prod \tsi H_i\big).$$
Notice that $J^{\phi_1}$ is injective but $J^\Pi$ is never injective.
\medskip
\begin{cor}\label{cor:ten-rep-prod-gps}
Let $\pi_i: G_i\to U_{\CL(H_i)}$ be a unitary representation of a group $G_i$, for each $i\in I$.
\smnoind
(a) There exist canonical unitary representations $\BOI^{\phi_1} \pi_i$ and $\prod \tsi \pi_i$ of $\PI G_i$ on $\bar\bigotimes_\ii^{\phi_1} H_i$ and $\prod \tsi H_i$ respectively.
\smnoind
(b) If the induced $^*$-representation $\hat \pi_i: \BC[G_i]\to \CL(H_i)$ is injective for all $i\in I$, the induced $^*$-representation $\widehat{\BOI^{\phi_1} \pi_i}$ of $\BC[\PI G_i]$ is also injective.
\end{cor}
\begin{prf}
(a) Let $\BOI^\ut \pi_i := \Theta_{\CL(H)}\circ \PI \pi_i: \PI G_i \to \BOI^\ut \CL(H_i)$.
Then
$$\BOI^{\phi_1} \pi_i\ :=\ J^{\phi_1}\circ \BOI^\ut \pi_i
\quad \text{and}\quad
\prod\tsi \pi_i\ :=\ J^{\Pi}\circ \BOI^\ut \pi_i$$
are the required representations.
\smnoind
(b) By Theorem \ref{thm:inf-ten-c-st-alg}(c), $\BOI^{\phi_1} \hat\pi_i$ is injective.
As $\widehat{\BOI^{\phi_1} \pi_i}$ is the restriction of $\BOI^{\phi_1} \hat\pi_i$ on $\BC[\PI G_i]$ (see Example \ref{eg:gp-alg-inf-prod-gp}(a)), it is also injective.
\end{prf}
\medskip
\begin{cor}
$\prod \tsi \Psi_i$ is never irreducible, and neither is $\BOI^{\phi_1}\Psi_i$.
\end{cor}
\begin{prf}
Let $\tau_i: \BC\to A_i$ be the canonical unital map and set $\check \Psi_i := \Psi_i\circ \tau_i$ ($i\in I$).
Suppose that $\alpha, \beta\in \PI \BT$ with $\alpha \not\approx \beta$ and $\xi\in \PI^\un H_i$.
Then $[\alpha_i\xi_i]_\ii \not\approx [\beta_i\xi_i]_\ii$ and the two unit vectors
$$\big(\prod \tsi\check\Psi_i\big)(\tsi \alpha_i)\big(\prod \tsi \xi_i\big)
\quad \text{and} \quad
\big(\prod \tsi\check\Psi_i\big)(\tsi \beta_i)\big(\prod \tsi \xi_i\big)$$
are orthogonal.
Consequently, $\dim\ \!(\prod \tsi \check \Psi_i)(\BC^{\otimes I}_\ut) > 1$.
As $(\prod \tsi \Psi_i)\circ (\BOI \tau_i) = \prod \tsi \check\Psi_i$, we have $(\prod \tsi \check\Psi_i)(\BC^{\otimes I}_\ut) \subseteq Z\big((\prod \tsi \Psi_i)(\BOI^\ut A_i)\big)$ and $\prod \tsi \Psi_i$ is not irreducible.
A similar but easier argument also shows that $\BOI^{\phi_1}\Psi_i$ is not irreducible.
\end{prf}
\medskip
For any $C^*$-algebra $A$, we denote by $S(A)$ the state space of $A$ and by $(H_\omega, \pi_\omega, \xi_\omega)$ the GNS construction of $\omega\in S(A)$.
We would like to consider a natural injective $^*$-representation of $\BOI^\ut A_i$ defined in terms of $(H_{\omega_i}, \pi_{\omega_i})$.
\medskip
If $\rho\in \PI S(A_i)$ and $\check \rho$ is defined as
$$\check \rho(a) := \big\langle \big(\BOI^{\phi_1} \pi_{\rho_i}\big)(a)(\tsi \xi_{\rho_i}), (\tsi \xi_{\rho_i})\big\rangle
\qquad \big(a\in \BOI^\ut A_i\big),$$
then the closure of $\big(\BOI^{\phi_1} \pi_{\rho_i}\big)(\BOI^\ut A_i)(\tsi \xi_{\rho_i})$ will coincide with $H_{\check \rho} := \bar\bigoplus_{\omega\in \Omega^\ut_{I;A}}\bar\bigotimes_{i\in I}^{\omega\cdot [\xi_\rho]_\sim} H_{\rho_i} \subseteq \bar\bigotimes_\ii^{\phi_1} H_{\rho_i}$.
We set
$\pi_{\check \rho}(a) := \big(\BOI^{\phi_1} \pi_{\rho_i}\big)(a)|_{H_{\check\rho}}$.
Notice that if all $\rho_i$ are pure states, then $H_{\check\rho} = \bar\bigotimes_\ii^{\phi_1} H_{\rho_i}$ (see Remark \ref{rem-act-OA-OH}(c)).
\medskip
\begin{cor}\label{cor:spat-ten-prod}
Let $A_i$ be a $C^*$-algebra ($\ii$).
The $^*$-representation $\Psi_A:=\bigoplus_{\rho\in \PI S(A_i)} (H_{\check \rho}, \pi_{\check\rho})$ is injective.
Consequently, the $^*$-representation $\Phi_A:=\bigoplus_{\rho\in \PI S(A_i)} \big( {\bar\bigotimes}_{i\in I}^{\phi_1} H_{\rho_i}, \BOI^{\phi_1} \pi_{\rho_i}\big)$ is also injective.
\end{cor}
\begin{prf}
Suppose that $(H_i, \Psi_i)$ is a universal $^*$-representation of $A_i$ ($i\in I$).
Let $F, v^{(1)}, ..., v^{(n)}, b^{(1)},...,b^{(n)}$ as well as $a^{(1)},...,a^{(n)}$ be as in the proof of Theorem \ref{thm:inf-ten-c-st-alg}(c) with
$\Psi_A\Big({\sum}_{k=1}^n a^{(k)}\Big) = 0$.
Again, it suffices to show that $a^{(1)} = 0$, and we may assume that $v^{(1)}_i = e_i$ ($i\in I$).
If $n = 1$, we take any $x\in \PI \sph(H_i)$.
If $n>1$, we take an element $x\in \PI \sph(H_i)$ satisfying
$$x\ \nsim\ \big[\Psi_i\big(v^{(k)}_i\big)x_i\big]_{i\in I}
\qquad (k = 2,...,n)$$
(the argument of Theorem \ref{thm:inf-ten-c-st-alg}(c) ensures its existence).
Let us set
$\rho_i(a):= \langle \Psi_i(a)x_i,x_i\rangle$
when $i\in I\setminus F$,
and pick any $\rho_i\in S(A_i)$ when $i\in F$.
For every $i\in I\setminus F$, one may regard $\big(H_{\rho_i}, \pi_{\rho_i}\big)$ as a subrepresentation of $(H_i, \Psi_i)$ such that $\xi_{\rho_i}\in H_{\rho_i}$ is identified with $x_i\in H_i$.
Then $x$ can be considered as an element in $H_{\check\rho}$.
Since $x \nsim \big[\pi_{\rho_i}\big(v^{(k)}_i\big)x_i\big]_{i\in I}$ for all $2\leq k \leq n$, the argument of Theorem \ref{thm:inf-ten-c-st-alg}(c) tells us that
$$\big(\BOI^{[x]_\sim} \pi_{\rho_i}\big)(a^{(1)})\eta
\ =\ 0
\qquad \big(\eta\in \BOI^x H_{\rho_i}\big).$$
Consequently, $\big(\BOF\pi_{\rho_i}\big)\big(b^{(1)}\big) = 0$ and
$b^{(1)} =0$ (as $\rho_i$ is arbitrary when $i\in F$).
The second statement follows readily from the first one.
\end{prf}
\medskip
Notice also that $\big( {\bar\bigotimes}_{i\in I}^{\phi_1} H_{\rho_i}, \BOI^{\phi_1} \pi_{\rho_i}\big)$ is in general not a cyclic representation, and $(H_{\check \rho}, \pi_{\check\rho})$ can be regarded as a cyclic analogue of it.
\medskip
We end this paper with the following result concerning tensor products of Hilbert algebras.
\medskip
\begin{cor}\label{cor:hil-alg}
Let $\{A_i\}_\ii$ be a family of unital Hilbert algebras (see e.g. \cite[Definition VI.1.1]{Take}) such that $\|e_i\| =1$ ($\ii$).
Then $A:=\BOI^\ut A_i$ is also a unital Hilbert algebra with $\|\tsi e_i\| = 1$.
\end{cor}
\begin{prf}
Note that since $\| e_i \| =1$, one has $\|u_i\| = 1$ for any $u_i\in U_{A_i}$.
Thus, we have $\BOI^\ut A_i \subseteq \BOI^\un A_i$, which gives an inner product $\langle \cdot, \cdot\rangle_A$ on $A$.
Observe that $\BOI^{\omega} A_i$ is orthogonal to $\BOI^{\omega'} A_i$ (in terms of $\langle \cdot, \cdot\rangle_A$) whenever $\omega$ and $\omega'$ are distinct elements in $\Omega^\ut_{I;A}$.
Thus, in order to show that the involution on $A$ is an isometry, it suffices to check that $\|x^*\| = \|x\|$ whenever $x\in \BOI^\omega A_i$ and $\omega\in \Omega^\ut_{I;A}$.
In fact, for any $u\in \PI U_{A_i}$, $F\in \KF$ and $a\in {\bigotimes}_{i\in F} A_i$, we have
$$\|J_F^u(a)^*\|
\ = \ \|J^{u^*}_F(a^*)\|
\ = \ \|a^*\|
\ = \ \|a\|
\ = \ \|J^u_F(a)\|,$$
because the involution on ${\bigotimes}_{i\in F} A_i$ is an isometry.
Let $H_i$ be the completion of $A_i$ (with respect to the inner-product) and $\Psi_i: A_i \to \CL(H_i)$ be the canonical unital $^*$-representation ($\ii$).
Since
$$\BOI^{\phi_1} \Psi_i(a)b
\ = \ ab
\qquad (a,b\in A),$$
Theorem \ref{thm:inf-ten-c-st-alg}(a) tells us that for each $x\in A$, one has $\langle xy,z\rangle_A = \langle y,x^*z\rangle_A$ ($y,z\in A$) and $\sup_{\|y\|\leq 1}\|xy\| < \infty$.
Finally, as $A$ is unital, we see that $A$ is a Hilbert algebra (with $\|\tsi e_i\| =1$).
\end{prf}
\medskip
Consequently, if all $A_i$ are weakly dense unital $^*$-subalgebras of finite von Neumann algebras, then so is $\BOI^\ut A_i$.
\medskip
\section{Introduction}
The plastic deformation of crystalline materials is primarily carried out by the motion of a large number of atomistic line defects, i.e. dislocations. Based on the accumulated knowledge about the behavior of individual dislocations \cite{Hirth}, discrete dislocation dynamics (DDD) models \cite{Kubin1992,Ghoniem2000,Xiang2003,Cai2007,Mordehai,Gu2015} have been well developed for the study of crystal plasticity in a wide range of mechanical problems. For engineer applications, however, DDD models are limited to samples of small size (order of microns or below), because of their high computational costs. Hence continuum models, where material microstructures are represented by continuous density distributions of dislocations resulting from the local homogenization of the discrete dislocation networks, are practically preferred \cite{Nye1953,Kroener1963,NelsonToner1981,Mura1987,Groma1997,ElAzab2000,Acharya2001,Arsenlis2002,Groma2003,
Hochrainer2007,Xiang2009_JMPS,ZhuXH2010,Zhu2014_IJP,GeersJMPS2014,Hochrainer2014,Geers_JMPS2015,Zhu_continuum3D,Acharya2015,Schulz2015,
Finel2016,Ngan_JMPS2016,Monavari2016,ZhuScripta2016,dipole1D,ZhuNiuXiang2016,Ngan2017}.
In order to incorporate the orientation-dependent dislocation densities and the anisotropic dislocation interaction and motion in the continuum model, we have employed a pair of dislocation density potential functions (DDPFs) to describe the dislocation distribution \cite{Xiang2009_JMPS,ZhuXH2010,ZhuXiangCMS2012,ZX2014,Zhu_continuum3D}. In this representation, the intersections of the contour lines (of integer multiples of the length of the Burgers vector) of the two DDPFs $\phi$ and $\psi$ are the locations of the dislocations; see Sec.~\ref{sec:ddpf} for the model in two dimensions (where dislocations are infinite straight lines). Essentially, the DDPF $\psi$ characterizes the local distribution of the active slip planes, and the DDPF $\phi$ restricted to a slip plane describes the local dislocation distribution within that plane. As a result, the derived continuum dislocation dynamics model takes the form of a PDE system in the two DDPFs $\phi$ and $\psi$, instead of equations in the single variable of scalar dislocation density used in the existing two-dimensional continuum models reviewed above for geometrically necessary dislocations.
While previous continuum model based on DDPFs focused on dislocation glide within slip planes \cite{Xiang2009_JMPS,ZhuXH2010,Zhu2014_IJP,Zhu_continuum3D},
the continuum dislocation dynamics equations derived in this paper incorporate both dislocation motions of glide and climb. The continuum dislocation model based on DDPFs
also provides a mathematical framework for rigorous analysis of the properties of the interaction and dynamics of dislocations and further incorporation of other important dislocation mechanisms at the continuum level (such as the Frank-Read sources \cite{Zhu2014_IJP} and dynamics of dislocation dipoles \cite{dipole1D,ZhuNiuXiang2016}).
In dislocation-density-based continuum models that are derived from the DDD model, the leading order dislocation interaction is given by an integral over the dislocations in the entire system and is referred to as the long-range dislocation interaction. The correction terms that improve a continuum model as an approximation to the DDD model often take the form of higher order derivatives of dislocation densities that depend only on the local arrangement of dislocations, and are referred to as the short-range dislocation interaction terms.
In the existing dislocation-density-based models of plasticity,
although the long-range dislocation-dislocation interactions are well-captured by direct averaging, the short-range interactions have to be incorporated with special treatments.
This is because the mutual interaction force between two dislocations can grow as strong as the order of $1/r$, where $r$ is the dislocation spacing, which leads to a strong dependence of dislocation dynamics on the local discrete arrangement of dislocations and further influences the plastic behavior of materials. However, when a discrete dislocation network is treated by a dislocation continuum, such short-range interactions are averaged to zero.
Therefore, the development of continuum modelling of dislocations highly relies on effective ways to capture the short-range interactions on a coarse-grained scale.
For two-dimensional dislocation configurations where all dislocations are infinitely straight and mutually parallel, Groma et al. \cite{Groma2003} developed a continuum formulation for the short-range dislocation interaction based on a statistical approach, and this statistical method was further extended by Dickel et al. \cite{Dickel2014} to identify the role played by dislocation dipoles in crystal plasticity. However, it has been argued by comparisons with discrete dislocation dynamics simulations (Roy et al. \cite{Roy2008}) that the short-range dislocation interaction formulas obtained from statistical approaches do not necessarily apply to deterministic distributions of dislocations.
A class of representative two-dimensional dislocation configurations widely studied in the literature are distributions (pile-ups) of dislocation walls consisting of straight and mutually parallel dislocations (e.g. \cite{Roy2008,Schuouwenaars2010,Voskoboinikov2007JMPS,Cameron2011,Geers2013,Zhu_2Ddipoles2014,Schulz2014,Schulz2015}). In this scenario, dislocation-dislocation interactions take place in both directions that are in and normal to the dislocation slip planes.
To the best of the authors' knowledge, most available analytical results employing dislocation densities were obtained for regular dislocation wall structures, where dislocations within each wall are vertically aligned and uniformly spaced in the direction normal to the dislocation slip planes. For example, in their comparisons with results of the discrete dislocation model, Roy et al. \cite{Roy2008} also used semi-continuum analysis (in which discreteness normal to the slip planes is maintained) for pile-ups of dislocation walls.
Using matched asymptotic techniques, Voskoboinikov and coworkers \cite{Voskoboinikov2007JMPS} calculated the discrete positions of a simple dislocation structure formed by one horizontal row of straight dislocations near a dislocation lock, where the dislocation density becomes singular in a continuum setting.
Hall \cite{Cameron2011} generalized the approach in Ref.~\cite{Voskoboinikov2007JMPS} to determine the discrete positions of dislocation walls of infinite length near grain boundaries. For regular dislocation walls, Geers and coworkers \cite{Geers2013} identified five regimes for the interaction energy, characterized by a single parameter depending on the driving force and the horizontal and vertical spacings between neighbouring dislocations, and they also studied the continuum limit of the equilibrium state in each regime as the number of regular walls tends to infinity. The analysis in Ref.~\cite{Geers2013} also suggests that a single field variable describing the dislocation density is not sufficient for the discrete-to-continuum transition for configurations with regular dislocation walls. Zhu and Chapman \cite{Zhu_2Ddipoles2014} examined the equilibria of periodically arranged dipole walls, and a natural transition between dipolar configurations was found to be controlled by the dipole height-to-width ratio.
By investigating the local behavior of the mean-field stress exerted by a row of dislocations, Schulz {\it et al.} added to the continuum system a dislocation density gradient term depending on the mesh size in their finite element calculations \cite{Schulz2014}. Schmitt {\it et al.} derived a continuum internal stress formula for dislocation glide by homogenization of dislocation microstructures under the assumption that the geometrically necessary dislocations form regular walls \cite{Schulz2015}. The formula they obtained is similar in form to the short-range dislocation interaction term obtained by Groma et al. \cite{Groma2003} using a statistical approach.
In this paper, we first systematically examine perturbed regular edge dislocation wall structures and derive continuum short-range interaction formulas from the discrete dislocation dynamics model by asymptotic analysis.
The derived accurate short-range interaction formulation for such
representative deterministic dislocation distributions, together with the available results in the literature reviewed above, gives a more complete understanding of the nature
of the short-range dislocation interactions for parallel dislocations with the same Burgers vector in the continuum model.
In particular,
our continuum short-range formulation is expressed by higher order derivatives of the dislocation distribution; although it is similar to the continuum formula derived using other approaches \cite{Groma2003,Schulz2015}, the exact expressions are different. Moreover, by using two field variables (two DDPFs), our continuum formulation incorporates the anisotropy of the short-range dislocation interactions in directions along or normal to the dislocation slip planes, in addition to the anisotropic dislocation motions of glide and climb.
Such anisotropy is not included in the available continuum short-range interaction formulas \cite{Groma2003,Schulz2015}, and although it was examined in Ref.~\cite{Geers2013} for regular dislocation walls, no continuum formulation is available in the existing literature to account for such dislocation interaction anisotropy in general cases.
We then incorporate these continuum short-range interaction contributions in our continuum PDE model. These terms are local in the sense that they depend on the first and second partial derivatives of the DDPFs instead of their integrals. The full continuum force (including both the long-range and short-range continuum forces) provides a good approximation to the discrete dislocation dynamics model.
Mathematically, these new terms in the continuum model serve as stabilizing terms that maintain the same stability properties as the discrete dislocation dynamics model. Moreover, since these short-range interaction terms are in the form of second order partial derivatives of the DDPFs $\phi$ and $\psi$, they also serve as regularization terms to the continuum long-range force terms that are in the form of integrals of first partial derivatives of $\phi$ and $\psi$.
The rest of this paper is organized as follows.
In Sec.~\ref{sec:ddd}, we review the discrete dislocation dynamics model, from which the continuum formulation of short-range interactions will be derived. In Sec.~\ref{sec:ddpf}, we present the continuum framework for dislocation walls based on the representation of dislocation density potential functions, where the force on dislocations consists only of the long-range Peach-Koehler force.
In Sec.~\ref{sec:long-rang}, we show that without short-range interactions, the continuum long-range Peach-Koehler force is inconsistent with the Peach-Koehler force in the discrete dislocation model for many common dislocation distributions.
In Sec.~\ref{sec:continuum-short}, we derive continuum expressions for the dislocation short-range interactions from the discrete dislocation dynamics model. We focus on the dislocation configurations identified in Sec.~\ref{sec:long-rang} where
the continuum long-range force fails to provide a stabilizing effect, in contrast with the discrete model.
In Sec.~\ref{eqn:contiuum-ddpf}, we present the DDPF-based continuum dislocation dynamics model that incorporates both the long-range and the short range continuum forces.
In Sec.~\ref{sec:stability}, we show that the new continuum model is indeed able to stabilize the perturbed dislocation structures as the discrete dislocation model does.
In Sec.~\ref{sec:numerical}, numerical simulations are performed to validate the continuum model.
\section{Discrete dislocation dynamics model}
\label{sec:ddd}
In this section, we briefly review the discrete dislocation dynamics model, from which the continuum formulation of short-range interactions will be derived.
We consider a system of parallel straight edge dislocations, see Fig.~\ref{fig:a}. In this case, the dislocation dynamics can be reduced to a two-dimensional spatial problem, in which the dislocations are points in the plane orthogonal to the direction of the dislocation lines, which are parallel to the $z$-axis. The Burgers vector $\boldsymbol{b}$ is along the $x$-axis. The locations of dislocations are denoted by the points $\{(x_m,y_n)\}$ for integers $m$ and $n$.
\begin{figure}[htbp]
\centering
\includegraphics[width=80mm]{a-eps-converted-to.pdf}
\caption{A system of parallel straight edge dislocations.}
\label{fig:a}
\end{figure}
The Peach-Koehler force $f$ on a dislocation is a configurational force associated with the change of free energy $\mathrm{d}W$ due to a displacement $\mathrm{d}l$ of the dislocation: $\mathrm{d}W=-f\mathrm{d}l$. The Peach-Koehler force per unit length on the dislocation is related to the stress field by \cite{Hirth}
\begin{equation}\label{eq:PK}
\textbf{f}=(\boldsymbol{\sigma}\cdot\textbf{b})\times\boldsymbol{\tau}=
\left(\begin{array}{ccc}
\sigma_{\scriptscriptstyle xx} & \sigma_{\scriptscriptstyle xy} & \sigma_{\scriptscriptstyle xz}\\
\sigma_{\scriptscriptstyle yx} & \sigma_{\scriptscriptstyle yy} & \sigma_{\scriptscriptstyle yz}\\
\sigma_{\scriptscriptstyle zx} & \sigma_{\scriptscriptstyle zy} & \sigma_{\scriptscriptstyle zz}\\
\end{array}\right)
\left(\begin{array}{ccc}
b\\0\\0\\
\end{array}\right)
\times\left(\begin{array}{ccc}
0\\0\\1\\
\end{array}\right)
=\left(\begin{array}{ccc}
b \sigma_{\scriptscriptstyle xy}\\-b\sigma_{\scriptscriptstyle xx}\\0\\
\end{array}\right),
\end{equation}
where $\textbf{b}$ is the Burgers vector with the magnitude $b$, $\boldsymbol{\tau}=(0,0,1)$ is the dislocation line direction, and $\boldsymbol{\sigma}$ is the stress tensor. The component of the Peach-Koehler force in the $x$ direction is parallel to the plane containing both the Burgers vector and the dislocation line direction (which is the slip plane), and is the glide force. The component of the Peach-Koehler force in the $y$ direction is normal to the direction of the Burgers vector $\boldsymbol{b}$ and the dislocation line direction, and is the climb force. From Eq.~(\ref{eq:PK}), we have the glide force $f_{\text{g}}=b \sigma_{\scriptscriptstyle xy}$ and the climb force $f_{\text{c}}=-b\sigma_{\scriptscriptstyle xx}$.
Using isotropic linear elasticity theory, an edge dislocation located at the point $(0, 0)$ generates the following stress field \cite{Hirth}
\begin{equation}\sigma_{\scriptscriptstyle xy}(x,y)=\sigma_{\scriptscriptstyle yx}(x,y)=\frac{\mu b}{2\pi(1-\nu)}\frac{x(x^2-y^2)}{(x^2+y^2)^2}\triangleq G_1(x, y),\label{eqn:g1}\end{equation}
\begin{equation}\sigma_{\scriptscriptstyle xx}(x,y)=\frac{-\mu b}{2\pi(1-\nu)}\frac{y(3x^2+y^2)}{(x^2+y^2)^2}\triangleq G_2(x, y),\label{eqn:g2}\end{equation}
where $\mu$ is the shear modulus and $\nu$ is the Poisson ratio; the remaining stress components either vanish or do not enter the Peach-Koehler force in Eq.~(\ref{eq:PK}).
Therefore, for a dislocation located at $(x_{m_0}, y_{n_0})$, the glide force on it generated by another dislocation located at $(x_{m},y_{n})$ is
$\frac{\mu b^2}{2\pi(1-\nu)}\frac{(x_{m_0}-x_m)((x_{m_0}-x_m)^2-(y_{n_0}-y_n)^2)}{[(x_{m_0}-x_m)^2+(y_{n_0}-y_n)^2]^2}
$.
By superposition, the total glide force acting on the dislocation located at $(x_{m_0}, y_{n_0})$ is
\begin{equation}\label{eq:glide-dd}f_{\text{g}}^{\text{dd}} (x_{m_0},y_{n_0})= \frac{\mu b^2}{2\pi(1-\nu)}\sum_{(m,n)\neq(m_0,n_0)}\frac{(x_{m_0}-x_{m})[(x_{m_0}-x_{m})^2-(y_{n_0}-y_{n})^2]}
{[(x_{m_0}-x_{m})^2+(y_{n_0}-y_{n})^2]^2}.
\end{equation}
Similarly, the total climb force acting on the dislocation located at $(x_{m_0}, y_{n_0})$ is
\begin{equation}\label{eq:climb-dd}f_{\text{c}}^{\text{dd}} (x_{m_0},y_{n_0})= \frac{\mu b^2}{2\pi(1-\nu)}\sum_{(m,n)\neq(m_0,n_0)}\frac{(y_{n_0}-y_{n})[3(x_{m_0}-x_{m})^2+(y_{n_0}-y_{n})^2]}
{[(x_{m_0}-x_{m})^2+(y_{n_0}-y_{n})^2]^2}.
\end{equation}
With applied stress, the total glide and climb forces acting on the dislocation located at $(x_{m_0}, y_{n_0})$ can be written as
\begin{eqnarray}
f_{\text{g}} (x_{m_0},y_{n_0})=f_{\text{g}}^{\text{dd}} (x_{m_0},y_{n_0})+b \sigma_{\scriptscriptstyle xy}^0,\label{eq:glide-dd-app}\\
f_{\text{c}} (x_{m_0},y_{n_0}) =f_{\text{c}}^{\text{dd}} (x_{m_0},y_{n_0})-b \sigma_{\scriptscriptstyle xx}^0,\label{eq:climb-dd-app}
\end{eqnarray}
where $\sigma_{\scriptscriptstyle xx}^0$ and $\sigma_{\scriptscriptstyle xy}^0$ are the components of the applied stress tensor.
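The discrete force formulas above can be exercised directly. The following sketch evaluates Eqs.~(\ref{eq:glide-dd})--(\ref{eq:climb-dd-app}) for a finite set of dislocations (the infinite arrays considered in this paper are truncated to the listed points); the material constants are illustrative assumptions, not values taken from this paper.

```python
import math

# Illustrative material parameters (assumed values, not from this paper)
MU, NU, BURG = 27.0e9, 0.347, 2.56e-10  # shear modulus (Pa), Poisson ratio, |b| (m)

def pk_forces(points, i0, sxy0=0.0, sxx0=0.0):
    """Total glide and climb forces (per unit length) on dislocation i0 of a
    finite set of parallel edge dislocations, following Eqs. (glide-dd) and
    (climb-dd), plus the applied-stress terms of Eqs. (glide/climb-dd-app)."""
    pref = MU * BURG**2 / (2.0 * math.pi * (1.0 - NU))
    x0, y0 = points[i0]
    fg = fc = 0.0
    for i, (x, y) in enumerate(points):
        if i == i0:
            continue
        dx, dy = x0 - x, y0 - y
        r2 = dx * dx + dy * dy
        fg += pref * dx * (dx * dx - dy * dy) / r2**2        # glide kernel
        fc += pref * dy * (3.0 * dx * dx + dy * dy) / r2**2  # climb kernel
    return fg + BURG * sxy0, fc - BURG * sxx0
```

For two dislocations on the same slip plane the climb force vanishes and the glide force is repulsive, while a dislocation midway between two equally spaced neighbors feels no net force, as expected from the kernels.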
In discrete dislocation dynamics, the local dislocation velocity $\textbf{v}$ is given by the following mobility law in terms of the Peach-Koehler force \cite{Kubin1992,Ghoniem2000,Xiang2003,Cai2007} as
$\textbf{v}=\textbf{M}\cdot\textbf{f}$,
where $\textbf{M}$ is the mobility tensor and $\textbf{f}$ is the Peach-Koehler force. Following \cite{Xiang2003}, the mobility tensor can be written as
$\textbf{M}=m_{\text{g}}(\textbf{I}-\textbf{n}\otimes\textbf{n})+m_{\text{c}}\textbf{n}\otimes\textbf{n}$,
where $m_{\text{g}}$ is the mobility constant for dislocation glide, $m_{\text{c}}$ is the mobility constant for dislocation climb, $\textbf{I}$ is the identity matrix, and $\textbf{n}$ is the normal direction of the slip plane. For the edge dislocation array being considered, $\textbf{n}=(0,1,0)^T$, and the dislocation velocity is given by
\begin{equation}\label{eq:velocity}
\textbf{v}
=\left(\begin{array}{ccc}
v_{\text{g}} \\ v_{\text{c}}\\0\\
\end{array}\right)
=\left(\begin{array}{ccc}
m_{\text{g}} f_{\text{g}} \\m_{\text{c}} f_{\text{c}}\\0\\
\end{array}\right),
\end{equation}
where the Peach-Koehler force is $\textbf{f}=(f_{\text{g}}, f_{\text{c}},0)^T$.
Note that when the line direction of all the dislocation lines is changed to $\boldsymbol{\tau}=(0,0,-1)$, the Peach-Koehler force components in Eqs.~(\ref{eq:glide-dd}) and (\ref{eq:climb-dd}) do not change because both the dislocation line direction and the stress change their signs. In this case, the total glide and climb forces with applied stress in Eqs.~(\ref{eq:glide-dd-app}) and (\ref{eq:climb-dd-app}) change to
$f_{\text{g}} (x_{m_0},y_{n_0})=f_{\text{g}}^{\text{dd}} (x_{m_0},y_{n_0})-b \sigma_{\scriptscriptstyle xy}^0$ and
$f_{\text{c}} (x_{m_0},y_{n_0}) =f_{\text{c}}^{\text{dd}} (x_{m_0},y_{n_0})+b \sigma_{\scriptscriptstyle xx}^0$.
\section{Continuum formulation for dynamics of dislocation ensembles using dislocation density potential functions}
\label{sec:ddpf}
We consider the system of parallel straight edge dislocations as shown in Fig.~\ref{fig:a}. The number of dislocations in both the vertical and horizontal directions is large and can be considered infinite.
As with all the existing continuum dislocation dynamics models reviewed in the introduction, our continuum model is able to describe smoothly varying dislocation structures and holds in an averaged sense for general dislocation structures by homogenizing the discrete dislocations within some representative volumes centered at each point \cite{Zhu_continuum3D}.
To represent the resulting dislocation continuum, we employ a pair of dislocation density potential functions (DDPFs) \cite{Zhu_continuum3D} $\phi(x,y)$ and $\psi(x,y)$, such that, for this two-dimensional problem,
the intersections of the contour lines
\begin{equation}
\phi(x,y)=ib \ \ {\rm and}\ \ \psi(x,y)=jb,
\end{equation}
$i,j=0,\pm1,\pm2,\cdots$, are the dislocation lines, see Fig.~\ref{fig:case1-density}. Given a smoothly varying dislocation structure, the local slip planes are represented by the contour lines of the DDPF $\psi$, while the dislocation lines within a slip plane are described locally by the contour lines of another DDPF $\phi$ restricted on that plane.
\begin{figure}[htbp]
\centering
\includegraphics[width=80mm]{density-rep212-eps-converted-to.pdf}\\
\caption{Representation of dislocation ensembles by the dislocation density potential functions (DDPFs). Given a smoothly varying dislocation structure, the contour lines of one DDPF $\psi$ coincide with the slip planes, while the dislocation lines within a slip plane are given by the contour lines of another DDPF $\phi$ restricted on that plane. The local average active slip plane spacing $d_{\rm sl}$ and the local dislocation spacing within a slip plane $d_{\rm in}$ are given by Eqs.~(\ref{eqn:dsl}) and (\ref{eqn:din}), respectively. Note that in general $\nabla\phi$ is not necessarily normal to $\nabla\psi$.}\label{fig:case1-density}
\end{figure}
With this continuum representation of dislocation distributions,
the local dislocation line direction is determined from the DDPFs by
\begin{equation}
\label{eq:tau}
\boldsymbol{\tau}=\frac{\nabla \phi\times\nabla \psi}{\|\nabla \phi\times\nabla \psi\|}.
\end{equation}
The local normal direction of the dislocation slip plane is in the direction of $\nabla\psi$, and the local average active slip plane spacing is given by
\begin{equation}
\label{eqn:dsl}
d_{\rm sl}=\frac{b}{\|\nabla \psi\|}.
\end{equation}
Using the fact that the local dislocation line direction is in the direction of $\nabla\phi\times\nabla\psi$, it can be calculated that the local dislocation spacing within a slip plane is
\begin{equation}
\label{eqn:din}
d_{\rm in}=\frac{b\|\nabla \psi\|}{\|\nabla\phi\times\nabla\psi\|}.
\end{equation}
(In fact, $d_{\rm in}=b/\|\nabla\phi_{\rm sl}\|$, where $\nabla\phi_{\rm sl}$ denotes the component of $\nabla\phi$ within the slip plane.)
For the two-dimensional problem considered in this paper, the Nye dislocation density tensor is reduced to a scalar dislocation density function $\rho(x, y)$, which is the number of dislocations per unit area \cite{Nye1953,Kroener1963,Xiang2006}. (In fact, here the Nye dislocation density tensor $\pmb \alpha=\rho(x,y)\mathbf b\otimes \pmb \tau$, where $\mathbf b=(b,0,0)$ and $\pmb \tau=(0,0,1)$ or $(0,0,-1)$.)
Here we define $\rho(x,y)$ to be the signed dislocation density, which is positive when the dislocations are in the $+z$ direction and negative when they are in the $-z$ direction.
Since the local dislocation number density in the DDPF framework is
$ \frac{1}{d_{\rm in}d_{\rm sl}}=\frac{\|\nabla \phi\times\nabla \psi\|}{b^2}$,
the signed dislocation density can then be written as
\begin{equation}\label{eqn:density}
\rho(x,y)=\frac{1}{b^2}(\nabla \phi\times\nabla \psi\cdot\textbf{k}),
\end{equation}
where $\textbf{k}$ is the unit vector in the $+z$ direction.
For example, consider the case when the distribution of dislocations is uniform in the $y$ direction (normal to the slip plane) and nonuniform in the $x$ direction (within the slip plane). The DDPFs that describe this dislocation distribution are $\phi(x,y)=\phi(x)$ and $\psi(x,y)=\frac{by}{D}$, where $D$ is the uniform active slip plane spacing. The dislocation density in this case is $\rho(x,y)=\frac{\phi'(x)}{bD}$.
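As a minimal sketch, the local quantities of Eqs.~(\ref{eqn:dsl}), (\ref{eqn:din}) and (\ref{eqn:density}) can be evaluated directly from the two DDPF gradients; for the example just given ($\phi=\phi(x)$, $\psi=by/D$) this recovers $\rho=\phi'(x)/(bD)$. The function below is illustrative, not part of the model itself.

```python
import math

def ddpf_geometry(grad_phi, grad_psi, b):
    """Local slip-plane spacing d_sl, in-plane spacing d_in, and signed density
    rho from the 2D DDPF gradients (Eqs. (dsl), (din), (density)).
    Each gradient is a pair (d/dx, d/dy)."""
    px, py = grad_phi
    qx, qy = grad_psi
    cross = px * qy - py * qx                    # z-component of grad(phi) x grad(psi)
    d_sl = b / math.hypot(qx, qy)                # Eq. (dsl)
    d_in = b * math.hypot(qx, qy) / abs(cross)   # Eq. (din)
    rho = cross / b**2                           # Eq. (density)
    return d_sl, d_in, rho
```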
For dislocation dynamics problems, the DDPFs $\phi$ and $\psi$ also depend on time $t$ and their evolution implicitly describes the dynamics of dislocations at the continuum level, which is
\begin{equation}\label{eq:evn-eqs1}
\left\{
\begin{array}{l}
\phi_t+\textbf{v}\cdot\nabla \phi=0,\\
\psi_t+\textbf{v}\cdot\nabla \psi=0,
\end{array}
\right.
\end{equation}
where $\textbf{v}=(v_{\text{g}}, v_{\text{c}})^T$ is the local dislocation velocity at the continuum level and is calculated from the continuum Peach-Koehler force $\textbf{f}=(f_{\text{g}}, f_{\text{c}})^T$ following the mobility law in Eq.~\eqref{eq:velocity} in its two-dimensional form. Here the continuum glide force $f_{\text{g}}$ and the continuum climb force $f_{\text{c}}$ are
\begin{eqnarray}
f_{\text{g}}=f_{\text{g}}^{\text{dc}}+(\pmb\tau\cdot {\mathbf k})b\sigma_{xy}^0,\label{eqn:fglide-tot}\\
f_{\text{c}}=f_{\text{c}}^{\text{dc}}-(\pmb\tau\cdot {\mathbf k})b\sigma_{xx}^0,
\end{eqnarray}
where $f_{\text{g}}^{\text{dc}}$ and $f_{\text{c}}^{\text{dc}}$ are the continuum glide and climb forces due to the stress field of dislocations, and the second term in each equation is the force due to the applied stress.
The leading order continuum Peach-Koehler force due to the long-range dislocation interaction is given below in terms of the DDPFs $\phi$ and $\psi$, using the dislocation density $\rho$ in Eq.~(\ref{eqn:density}):
\begin{equation}\label{eq:con-glide}
f_{\text{g}}^{\text{dc,0}} (x,y)=\frac{\mu b^2}{2\pi(1-\nu)}\int^{+\infty}_{-\infty}\int^{+\infty}_{-\infty}\frac{(x-x_1)[(x-x_1)^2-(y-y_1)^2]}
{[(x-x_1)^2+(y-y_1)^2]^2}\rho(x_1,y_1)dx_1dy_1,
\end{equation}
\begin{equation}\label{eq:con-climb}
f_{\text{c}}^{\text{dc,0}} (x,y)=\frac{\mu b^2}{2\pi(1-\nu)}\int^{+\infty}_{-\infty}\int^{+\infty}_{-\infty}\frac{(y-y_1)[3(x-x_1)^2+(y-y_1)^2]}
{[(x-x_1)^2+(y-y_1)^2]^2}\rho(x_1,y_1)dx_1dy_1.
\end{equation}
These continuum long-range forces are obtained by straightforward averaging from the discrete dislocation dynamics model in Eqs.~(\ref{eq:glide-dd}) and (\ref{eq:climb-dd}) \cite{Nye1953,Kroener1963,Mura1987}.
While previous continuum models based on DDPFs focused on dislocation glide within slip planes \cite{Xiang2009_JMPS,ZhuXH2010,Zhu_continuum3D},
the continuum dislocation dynamics equations in Eq.~(\ref{eq:evn-eqs1}) incorporate both dislocation glide and climb. Compared with the level set discrete dislocation dynamics method \cite{Xiang2003}, in which only the intersection of the {\it zero} level sets of two level set functions is meaningful, the continuum dislocation dynamics equations of the two DDPFs hold {\it everywhere} in the simulation domain, i.e. the intersections of {\it all} the level set pairs of the two DDPFs are meaningful here.
As will be discussed in Sec.~\ref{sec:long-rang}, it is essential to include in the continuum Peach-Koehler force the contributions due to short-range dislocation interactions, whose accurate expressions will be derived in the next few sections.
\section{Inconsistency between the continuum long-range force and the discrete dislocation model}
\label{sec:long-rang}
We observe that the continuum Peach-Koehler forces based on the long-range dislocation interaction in Eqs.~\eqref{eq:con-glide} and \eqref{eq:con-climb} are not always consistent with the forces from the discrete dislocation dynamics model, especially when the long-range dislocation interaction vanishes. For example, when the distribution of dislocations is uniform in the $y$ direction, the dislocation density only depends on the spatial variable $x$, i.e. $\rho(x,y)=\rho(x)$.
Substituting this density into Eq.~(\ref{eq:con-glide}) and using $\int^{+\infty}_{-\infty}\frac{(x^2-y^2)}{(x^2+y^2)^2}dy=0$, we have
\begin{eqnarray}\nonumber
f_{\text{g}}^{\text{dc,0}} (x,y)&=&\frac{\mu b^2}{2\pi(1-\nu)}\int^{+\infty}_{-\infty}\int^{+\infty}_{-\infty}\frac{(x-x_1)[(x-x_1)^2-(y-y_1)^2]}{[(x-x_1)^2+(y-y_1)^2]^2}\rho(x_1)dx_1dy_1\\ \nonumber
&=& \frac{\mu b^2}{2\pi(1-\nu)}\int^{+\infty}_{-\infty}(x-x_1)\rho(x_1)dx_1\int^{+\infty}_{-\infty}
\frac{[(x-x_1)^2-(y-y_1)^2]}{[(x-x_1)^2+(y-y_1)^2]^2}dy_1\vspace{1ex}\\
&=&0.
\end{eqnarray}
Thus the continuum glide force in Eq.~(\ref{eq:con-glide}) vanishes for this case.
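The cancellation rests on the identity $\int^{+\infty}_{-\infty}(x^2-y^2)/(x^2+y^2)^2dy=0$, whose antiderivative in $y$ is $y/(x^2+y^2)$. A quick numerical check (a sketch, with the improper integral truncated at an assumed cutoff $\pm Y$):

```python
def glide_kernel_y_integral(x, Y, n=200000):
    """Midpoint-rule value of int_{-Y}^{Y} (x^2 - y^2)/(x^2 + y^2)^2 dy.
    The exact antiderivative is y/(x^2 + y^2), so the truncated integral equals
    2Y/(x^2 + Y^2), which tends to 0 as Y -> infinity."""
    h = 2.0 * Y / n
    s = 0.0
    for k in range(n):
        y = -Y + (k + 0.5) * h
        s += (x * x - y * y) / (x * x + y * y) ** 2
    return s * h
```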
We then calculate the glide force for this case using the discrete dislocation dynamics model.
Since the distribution of dislocations is uniform in the $y$ direction, the locations of dislocations can be written as $\{(x_m,y_{n_0}+jD)|m,j=0,\pm 1,\pm2,\cdots\}$, where $D$ is the uniform inter-dislocation spacing in the $y$ direction.
On the dislocation located at $(x_{m_0},y_{n_0})$, the
glide force calculated from the discrete dislocation dynamics formula in Eq.~(\ref{eq:glide-dd}) is
\begin{eqnarray}\label{eq:f-g-y}\nonumber
f_{\text{g}}^{\text{dd}} (x_{m_0},y_{n_0})
&=&\frac{\mu b^2}{2\pi(1-\nu)} \sum_{m\neq m_0} \sum_{j=-\infty}^{+\infty}{\frac{(x_{m_0}-x_{m})[(x_{m_0}-x_{m})^2-(jD)^2] }{[(x_{m_0}-x_{m})^2+(jD)^2]^2}}\\
&=&\frac{\mu \pi b^2}{(1-\nu)D^2}\sum_{m\neq m_0} \frac{x_{m_0}-x_{m}}{\cosh(2\pi \frac{x_{m_0}-x_{m}}{D})-1}.
\end{eqnarray}
This glide force in general is nonzero. This disagreement shows that in the continuum model, in addition to the leading order contribution from the long-range dislocation interaction, it is essential to incorporate short-range dislocation interactions at higher orders in the coarse-graining process from the discrete dislocation dynamics model.
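The resummation over the vertical index in Eq.~(\ref{eq:f-g-y}) uses the identity $\sum_{j=-\infty}^{+\infty}x[x^2-(jD)^2]/[x^2+(jD)^2]^2=(2\pi^2/D^2)\,x/[\cosh(2\pi x/D)-1]$, which can be checked numerically; since the terms decay only like $1/j^2$, a large (assumed) truncation index is used below.

```python
import math

def wall_sum_truncated(x, D, N=100000):
    """Truncated lattice sum  sum_{j=-N}^{N} x*(x^2-(jD)^2)/(x^2+(jD)^2)^2."""
    s = 1.0 / x                              # the j = 0 term
    for j in range(1, N + 1):
        d2 = (j * D) ** 2
        s += 2.0 * x * (x * x - d2) / (x * x + d2) ** 2  # +j and -j terms are equal
    return s

def wall_sum_closed(x, D):
    """Closed form implied by Eq. (f-g-y): (2*pi^2/D^2) * x/(cosh(2*pi*x/D) - 1)."""
    return 2.0 * math.pi**2 / D**2 * x / (math.cosh(2.0 * math.pi * x / D) - 1.0)
```

For $D\to\infty$ only the $j=0$ term survives and both sides reduce to $1/x$, a useful sanity check on the closed form.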
In this paper, we will derive continuum formulas for these short-range dislocation interactions. We first identify all the cases in which the glide or climb force due to the long-range dislocation interaction vanishes.
The long-range forces are easily calculated in the Fourier space, in which the force formulas in Eq.~(\ref{eq:con-glide}) and (\ref{eq:con-climb}) become
\begin{equation}\label{eq:Fourier-glide-con}
\hat{f}_{\text{g}}^{\text{dc,0}} (k_1, k_2)=4\pi^2b \hat{G_1}(k_1, k_2)\hat{\rho}(k_1, k_2)=-\frac{2\mu b^2}{1-\nu}\frac{\mathrm{i}k_1k_2^2}{(k_1^2+k_2^2)^2}\hat{\rho}(k_1, k_2),
\end{equation}
\begin{equation}\label{eq:Fourier-climb-con}
\hat{f}_{\text{c}}^{\text{dc,0}} (k_1, k_2)= 4\/\pi^2b\hat{G_2}(k_1,k_2)\hat{\rho}(k_1, k_2)=-\frac{2\mu b^2}{1-\nu}\frac{\mathrm{i}k_2^3}{(k_1^2+k_2^2)^2}\hat{\rho}(k_1,k_2),
\end{equation}
where $\hat{f}(k_1,k_2)$ denotes the Fourier coefficient of $f$ associated with the mode $e^{\mathrm{i}(k_1x+k_2y)}$, $\mathrm{i}$ is the imaginary unit, and $k_1, k_2$ are the wave numbers. Recall that the functions $G_1(x,y)$ and $G_2(x,y)$ are defined in Eqs.~\eqref{eqn:g1} and \eqref{eqn:g2}.
{\bf (i) The long-range glide force vanishes, i.e. $f_{\text{g}}^{\text{dc,0}}=0$.} This is equivalent to $\hat{f}_{\text{g}}^{\text{dc,0}} (k_1, k_2)=0$ for any $k_1$ and $k_2$.
Following Eq.~(\ref{eq:Fourier-glide-con}), $\hat{f}_{\text{g}}^{\text{dc,0}} (k_1, k_2)=0$ requires $k_1k_2^2\hat{\rho}(k_1, k_2)=0$, i.e. for each $(k_1,k_2)$, either $k_1=0$, or $k_2=0$, or $\hat{\rho}(k_1,k_2)=0$. Thus all the solutions of $f_{\text{g}}^{\text{dc,0}}=0$ are given by
\begin{eqnarray}\nonumber
\rho(x, y)&=&\sum_{k_1}\sum_{k_2}\hat{\rho}(k_1, k_2)e^{\mathrm{i}(k_1x+k_2y)}\\ \nonumber
&=&\sum_{k_1\neq 0}\hat{\rho}(k_1, 0)e^{\mathrm{i}k_1x}+\sum_{k_2\neq 0}\hat{\rho}(0, k_2)e^{\mathrm{i}k_2y}\\
&=&\rho_1(x)+\rho_2(y),\label{eqn:vanish1}
\end{eqnarray}
where $\rho_1(x)$ and $\rho_2(y)$ are some functions.
{\bf (ii) The long-range climb force vanishes, i.e. $f_{\text{c}}^{\text{dc,0}}=0$}. This is equivalent to $\hat{f}_{\text{c}}^{\text{dc,0}} (k_1, k_2)=0$ for any $k_1$ and $k_2$. Following Eq.~(\ref{eq:Fourier-climb-con}), $\hat{f}_{\text{c}}^{\text{dc,0}} (k_1, k_2)=0$ requires $k_2^3\hat{\rho}(k_1, k_2)=0$, i.e. for each $(k_1,k_2)$, either $k_2=0$ or $\hat{\rho}(k_1,k_2)=0$. Thus all the solutions of $f_{\text{c}}^{\text{dc,0}}=0$ are given by
\begin{eqnarray}\nonumber
\rho(x, y)&=&\sum_{k_1}\sum_{k_2}\hat{\rho}(k_1, k_2)e^{\mathrm{i}(k_1x+k_2y)}\\ \nonumber
&=&\sum_{k_1\neq 0}\hat{\rho}(k_1, 0)e^{\mathrm{i}k_1x} \\
&=&\rho_3(x) ,\label{eqn:vanish2}
\end{eqnarray}
where $\rho_3(x)$ is some function.
In summary, the long-range glide force in the continuum model vanishes if and only if the dislocation density has the form $\rho(x,y)=\rho_1(x)+\rho_2(y)$, and the long-range climb force in the continuum model vanishes if and only if the dislocation density has the form $\rho(x,y)=\rho_3(x)$ (which means that
the dislocation distribution is uniform in the $y$ direction). However, the forces calculated from the discrete dislocation dynamics model are not necessarily zero, see the example in Eq.~(\ref{eq:f-g-y}).
In these cases, in the coarse-graining process from the discrete dislocation dynamics model, it is essential to keep the next-order forces that represent the short-range dislocation interaction due to the discreteness of dislocation distributions.
In the next section, we examine these cases and derive continuum force expressions to capture such short-range interactions of dislocations.
\section{Continuum force formulation due to short-range dislocation interactions}\label{sec:continuum-short}
In this section, we derive continuum expressions for the dislocation short-range interactions from the discrete dislocation dynamics model.
We focus on
the dislocation configurations identified in Sec.~\ref{sec:long-rang} where
the continuum long-range force fails to provide the stabilizing effect present in the discrete model. These dislocation distributions are uniform either within the slip planes (in the $x$ direction) or in the direction normal to the slip planes (in the $y$ direction), i.e.,
\begin{equation}\label{eqn:linear_rho}
\rho=\rho(x) \ {\rm or}\ \rho(y).
\end{equation}
We consider dislocation configurations that are not far from a uniform distribution (i.e. in the linear regime of the deviations). The perturbations are small in the sense of the maximum norm. We neglect the force due to the applied stress in this section.
Using the representation of DDPFs described in Sec.~\ref{sec:ddpf}, such a perturbed uniform dislocation wall can be described by
\begin{equation}
\phi=\frac{b}{B}x+\tilde{\phi}, \ \ \ \ \psi=\frac{b}{D}y+\tilde{\psi},
\end{equation}
where $B$ is the inter-dislocation spacing in a slip plane and $D$ is the inter-slip plane spacing in the uniform dislocation wall. From the formula of $\rho$ in Eq.~(\ref{eqn:density}), it is easy to show that, in the linear regime, Eq.~(\ref{eqn:linear_rho}) leads to the following necessary condition: the perturbations in the DDPFs $\phi$ and $\psi$ are each functions of $x$ only or of $y$ only, i.e.
\begin{equation}
\left\{
\begin{array}{l}
\tilde{\phi}=\tilde{\phi}(x) \ {\rm or}\ \tilde{\phi}(y)\vspace{1ex}\\
\tilde{\psi}=\tilde{\psi}(x) \ {\rm or}\ \tilde{\psi}(y).
\end{array}
\right.
\end{equation}
These dislocation configurations can be summarized into four cases as shown in Fig.~\ref{fig:case1-4}.
In Case 1, the dislocation distribution is uniform in the direction normal to the slip planes, but nonuniform in a slip plane. This dislocation structure can be described using DDPFs $\phi$ and $\psi$ as $\phi=\phi(x)$, $\psi=by/D$.
In Case 2, each row of dislocations has a small perturbation in the direction normal to the slip planes,
and the perturbations are uniform in the direction normal to the slip planes. This dislocation structure is given by $\phi=bx/B$, $\psi=by/D+\tilde{\psi}(x)$, where $\tilde{\psi}(x)$ is some function.
In Case 3, the dislocation distribution is uniform in any slip plane, but nonuniform in the direction normal to the slip planes. This dislocation structure is given by $\phi=bx/B$, $\psi=\psi(y)$.
Finally, in Case 4, each column of dislocations has a small perturbation, and the perturbations are uniform for all the columns of dislocations. This dislocation structure is given by $\phi=bx/B+\tilde{\phi}(y)$, $\psi=by/D$.
We then derive for each of these four cases a continuum formula of the short-range dislocation interaction force from the discrete dislocation dynamics model reviewed in Sec.~\ref{sec:ddd}. In this discrete to continuum process, we employ asymptotic analysis under the assumption that $L\gg B,D$, where $L$ is the length unit of the continuum model. This means that a large number of dislocations are contained in a unit area of the domain of the continuum model.
Note that in this limit process, $b/B$ and $b/D$ are fixed finite (small) numbers, and $B$ and $D$ are greater than a few multiples of the Burgers vector length $b$ so that the core regions of different dislocations do not overlap.
Note that although we use a linearization assumption, the obtained continuum model still holds for configurations that deviate significantly from the uniform distributions. See the numerical examples in Sec.~\ref{sec:numerical}.
\begin{figure}[htbp] \centering
\subfigure[Case 1]{\label{fig:case1}\includegraphics[width=2.5in]{case1eps-eps-converted-to.pdf}}
\subfigure[Case 2]{\label{fig:case2}\includegraphics[width=2.5in]{case2eps-eps-converted-to.pdf}}
\subfigure[Case 3]{\label{fig:case3}\includegraphics[width=2.5in]{case3eps-eps-converted-to.pdf}}
\subfigure[Case 4]{\label{fig:case4}\includegraphics[width=2.5in]{case4eps-eps-converted-to.pdf}}
\caption{Four cases of dislocation distributions with vanishing glide or climb force due to the long-range dislocation interaction. Case 1: $\phi=\phi(x)$, $\psi=by/D$. Case 2: $\phi=bx/B$, $\psi=by/D+\tilde{\psi}(x)$.
Case 3: $\phi=bx/B$, $\psi=\psi(y)$. Case 4: $\phi=bx/B+\tilde{\phi}(y)$, $\psi=by/D$. See the text for the description of each case.} \label{fig:case1-4}
\end{figure}
\subsection{Case 1}
The structure of dislocations in this case is shown schematically in Fig.~\ref{fig:case1}: the distribution is uniform in the direction normal to the slip planes (in the $y$ direction), but nonuniform in a slip plane (in the $x$ direction). This dislocation structure is described by
\begin{equation}\label{eqn:case1}
\phi=\phi(x)=\frac{b}{B}x+\tilde{\phi}(x), \ \ \psi=\psi(y)=\frac{b}{D}y,
\end{equation}
where $\tilde{\phi}(x)$ is some small perturbation such that $\tilde{\phi}(x)\ll b$ and $\phi'(x)>0$.
Using Eq.~(\ref{eqn:density}), the dislocation density $\rho=\rho(x)=\frac{1}{D}(\frac{1}{B}+\frac{\tilde{\phi}'(x)}{b})$, and accordingly, the continuum Peach-Koehler force due to the long-range dislocation interaction vanishes as shown in Sec.~\ref{sec:long-rang}. We will derive a continuum formula of the short-range dislocation interaction force from the discrete dislocation dynamics model.
We first consider the glide force. In this case, the discrete dislocation dynamics model in Eq.~(\ref{eq:glide-dd}) gives the following expression for the glide force on the dislocation located at $(x_m,y_n=nD)$:
\begin{eqnarray}\label{eq:case1-dis}
f_{\text{g}}^{\rm dd}(x_m,y_n)&=&\frac{\mu b^2}{2\pi(1-\nu)}\sum_{j\neq m} \sum_{k=-\infty}^{+\infty}
{\frac{(x_m-x_j)[(x_m-x_j)^2-(kD)^2]}{[(x_m-x_j)^2+(kD)^2]^2}}\nonumber\\
&=&\frac{\pi \mu b^2}{(1-\nu)D^2}\sum_{j\neq m}\frac{x_m-x_j}{\cosh 2\pi \frac{x_m-x_j}{D}-1}\nonumber\\
&=&\frac{\pi \mu b^2}{(1-\nu)D^2}\sum_{j=1}^{+\infty}\left(\frac{x_m-x_{m+j}}{\cosh 2\pi \frac{x_m-x_{m+j}}{D}-1}+\frac{x_m-x_{m-j}}{\cosh 2\pi \frac{x_m-x_{m-j}}{D}-1}\right).
\end{eqnarray}
We will derive a continuum expression from Eq.~\eqref{eq:case1-dis} in the limit where the length unit of the continuum model satisfies $L\gg B, D, b$. The continuum expression will be based on the DDPF $\phi(x)$ in Eq.~(\ref{eqn:case1}) such that $\phi(x_m)=mb$, $m=0,\pm 1, \pm 2, \cdots$.
We then have
\begin{equation}\label{eq:case1-x-phi-02}
x_m-x_{m+j}=-jB+\frac{B}{b}[\tilde{\phi}(x_{m+j})- \tilde{\phi}(x_{m})].
\end{equation}
Using the assumption $\tilde{\phi}\ll b$, we can make the following Taylor expansion at $x_m-x_{m+j}=-jB$:
\begin{equation}\label{eq:case1-x-phi-03}
{\textstyle \frac{x_m-x_{m+j}}{\cosh 2\pi \frac{x_m-x_{m+j}}{D}-1}=\frac{-jB}{\cosh 2\pi \frac{jB}{D}-1}+\frac{B}{b}\cdot\frac{\cosh 2\pi \frac{jB}{D}-1- 2\pi \frac{jB}{D}\sinh2\pi \frac{jB}{D}}{(\cosh 2\pi \frac{jB}{D}-1)^2}[\tilde{\phi}(x_{m+j})- \tilde{\phi}(x_{m})]+\cdots.} \end{equation}
We can then approximate the glide force in Eq.~(\ref{eq:case1-dis}) by
\begin{equation}\label{eq:case1-dis-02}
{\textstyle f_{\text{g}}^{\rm dd}(x_m,y_n)
\approx \frac{\pi \mu b^2}{(1-\nu)D^2}\sum_{j=1}^{+\infty} \frac{B}{b}\frac{\cosh 2\pi \frac{jB}{D}-1- 2\pi \frac{jB}{D}\sinh2\pi \frac{jB}{D}}{(\cosh 2\pi \frac{jB}{D}-1)^2}[\tilde{\phi}(x_{m-j})+\tilde{\phi}(x_{m+j})- 2\tilde{\phi}(x_{m})].}
\end{equation}
Following Eq.~(\ref{eq:case1-x-phi-02}), we have
\begin{equation}
\tilde{\phi}(x_{m-j})+\tilde{\phi}(x_{m+j})- 2\tilde{\phi}(x_{m})
=\frac{b}{B}(2x_m-x_{m+j}-x_{m-j})
\approx-\frac{b}{B}(jb)^2x_{\phi\phi}
=\frac{b^3}{B}\frac{\phi_{xx}}{\phi^3_x}j^2.
\end{equation}
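This second-difference identity can be verified numerically for an assumed perturbation $\tilde{\phi}=\epsilon\sin(qx)$ (hypothetical parameters, chosen for illustration only), inverting $\phi(x_m)=mb$ by Newton's method since $\phi'(x)>0$:

```python
import math

b, B, eps, q = 1.0, 2.0, 0.02, 0.3   # assumed perturbation parameters (illustrative)

def phi(x):
    """A perturbed Case-1 DDPF: phi(x) = (b/B) x + eps*sin(q x)."""
    return b / B * x + eps * math.sin(q * x)

def x_of(m):
    """Invert phi(x_m) = m*b by Newton's method (phi is monotone here)."""
    x = B * m
    for _ in range(50):
        x -= (phi(x) - m * b) / (b / B + eps * q * math.cos(q * x))
    return x

def second_diff(m, j):
    """(b/B)*(2 x_m - x_{m+j} - x_{m-j}), which the Taylor expansion predicts
    to be approximately (b^3/B) * (phi_xx/phi_x^3) * j^2."""
    return b / B * (2.0 * x_of(m) - x_of(m + j) - x_of(m - j))
```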
Note that since we have assumed $\phi'(x)>0$, $x$ can also be considered as a function of $\phi$. Thus Eq.~(\ref{eq:case1-dis-02}) can be approximated by
\begin{eqnarray}\label{eq:case1-dis-03}
f_{\text{g}}^{\rm dd}(x_m,y_n) \nonumber
&\approx &\frac{\pi \mu b^4}{(1-\nu)D^2}\frac{\phi_{xx}}{\phi^3_x}\sum_{j=1}^{+\infty} \frac{\cosh 2\pi \frac{jB}{D}-1- 2\pi \frac{jB}{D}\sinh2\pi \frac{jB}{D}}{(\cosh 2\pi \frac{jB}{D}-1)^2}j^2 \\
&=&\frac{\pi \mu bB}{1-\nu}\phi_{xx}\sum_{j=1}^{+\infty} \frac{[\cosh 2\pi \frac{jB}{D}-1- 2\pi \frac{jB}{D}\sinh2\pi \frac{jB}{D}](\frac{jB}{D})^2}{(\cosh 2\pi \frac{jB}{D}-1)^2}\nonumber\\
&=&-\frac{\pi \mu bD}{1-\nu}g_1\left(\frac{B}{D}\right)\phi_{xx},
\end{eqnarray}
where the function $g_1(s)$ is defined as
\begin{equation}
\label{eq:fg1}
g_1(s)=\sum_{j=1}^{+\infty} \frac{[2\pi js\sinh(2\pi js)-\cosh(2\pi js)+1](js)^2s}{[\cosh(2\pi js)-1]^2}.
\end{equation}
In the continuum model, it would be more convenient to have a simple formula for the coefficient instead of the summation in Eq.~\eqref{eq:fg1}. Obtaining an analytical formula for such a summation is difficult. In the following, we derive an approximate formula for it.
First, when $B/D$ is small, the summation in Eq.~(\ref{eq:fg1}) can be considered as an approximation to some integral with $\Delta x=B/D$ as follows
\begin{eqnarray}
\label{eq:fg1_app}
g_1\left(\frac{B}{D}\right)&=&\frac{1}{2}\sum_{j\neq 0} \frac{[2\pi \frac{jB}{D}\sinh2\pi \frac{jB}{D}-\cosh 2\pi \frac{jB}{D}+1](\frac{jB}{D})^2}{(\cosh 2\pi \frac{jB}{D}-1)^2}\cdot\frac{B}{D}\nonumber\\
&\approx&\frac{1}{2}\left[\int^{+\infty}_{-\infty} \frac{(2\pi x\sinh2\pi x-\cosh 2\pi x+1)x^2}{(\cosh 2\pi x-1)^2}dx\right.\nonumber\\
&&-\left.\lim_{x\rightarrow 0}\frac{(2\pi x\sinh2\pi x-\cosh 2\pi x+1)x^2}{(\cosh 2\pi x-1)^2}\cdot \frac{B}{D}\right]\nonumber\\
&=&\frac{1}{2}\left(\int^{+\infty}_{-\infty} \frac{2\pi x^3\sinh2\pi x}{(\cosh 2\pi x-1)^2}dx
-\int^{+\infty}_{-\infty} \frac{x^2}{\cosh 2\pi x-1}dx
-\frac{1}{2\pi^2}\frac{B}{D}\right)\nonumber\\
&=&\frac{1}{2}\left(\int^{+\infty}_{-\infty} \frac{2x^2}{\cosh 2\pi x-1}dx
-\frac{1}{2\pi^2}\frac{B}{D}\right)\nonumber\\
&=&\frac{1}{6\pi}-\frac{1}{4\pi^2}\frac{B}{D}.
\end{eqnarray}
Note that in these calculations, the approximation from the summation in the first line to the integral in the second line is based on the trapezoidal rule and the fact that the integrand decays exponentially as $x\rightarrow\pm\infty$.
Thus by Eqs.~\eqref{eq:case1-dis-03}--\eqref{eq:fg1_app}, we have the following continuum approximation of the glide force on the dislocation
\begin{equation}\label{eq:case1-result3}
f_{\text{g}}^{\rm dc}=-\frac{\mu b^2}{6(1-\nu)|\psi_y|}\left( 1-\frac{3}{2\pi}\frac{|\psi_y|}{|\phi_x|}\right)\phi_{xx}.
\end{equation}
Here we have used $\frac{b}{B}\approx|\phi_x|$ and $\frac{b}{D}=|\psi_y|$ by Eq.~(\ref{eqn:case1}).
Note that the above approximation holds when $B/D$ is small. When $B/D$ is large, all the terms in the summation in $g_1$ are exponentially small, controlled by $e^{-2\pi B/D}$, and accordingly $g_1$ is exponentially small. On the other hand, the important property $g_1> 0$ always holds. Thus when $B/D$ is large, we use $\varepsilon/(6\pi)$ to approximate $g_1$, where $\varepsilon$ is some small positive constant.
That is,
\begin{equation}\label{eq:truncationg1}
g_1(s)\approx\left\{
\begin{array}{ll}
\frac{1}{6\pi}-\frac{s}{4\pi^2}, & \hbox{if $1-\frac{3}{2\pi}s>\varepsilon$}; \vspace{1ex}\\
\frac{\varepsilon}{6\pi}, & \hbox{otherwise for}\ s\geq 0.
\end{array}
\right.
\end{equation}
Fig.~\ref{fig:case1-01} shows a good match between the approximation of the function $g_1(s)$ and its exact formula in Eq.~\eqref{eq:fg1} for different values of $s$.
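This match can also be checked directly by summing the series in Eq.~\eqref{eq:fg1} numerically; the truncation rule below is an assumption justified by the exponential decay of the terms.

```python
import math

def g1_series(s, N=5000):
    """Direct evaluation of g1(s) from Eq. (fg1); terms decay like z^2*exp(-z)
    with z = 2*pi*j*s, so the sum is truncated once z is large."""
    total = 0.0
    for j in range(1, N + 1):
        z = 2.0 * math.pi * j * s
        if z > 40.0:          # remaining terms are negligible (and cosh would overflow)
            break
        c = math.cosh(z) - 1.0
        total += (z * math.sinh(z) - c) * (j * s) ** 2 * s / c**2
    return total

def g1_approx(s, eps=0.02):
    """Piecewise approximation of Eq. (truncationg1)."""
    if 1.0 - 1.5 / math.pi * s > eps:
        return 1.0 / (6.0 * math.pi) - s / (4.0 * math.pi**2)
    return eps / (6.0 * math.pi)
```

For small $s$ the two agree closely, while for large $s$ both are small and positive, consistent with the discussion above.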
Using the approximations in the two regimes obtained above, we have the following continuum approximation of the glide force on the dislocation for all values of $B/D$:
\begin{equation}\label{eq:case1-result3new}
f_{\text{g}}^{\rm dc}=-\frac{\mu b^2}{6(1-\nu)|\psi_y|}\left[ 1-\frac{3}{2\pi}\frac{|\psi_y|}{|\phi_x|}\right]_{\varepsilon +}\phi_{xx},
\end{equation}
where the notation $[\cdot]_{\varepsilon +}$ is defined as
\begin{equation}\label{eq:truncation}
[h]_{\varepsilon +}=\left\{
\begin{array}{ll}
h, & \hbox{if $h>\varepsilon$;} \\
\varepsilon, & \hbox{if $h\leq\varepsilon$.}
\end{array}
\right.
\end{equation}
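A minimal sketch of evaluating Eq.~\eqref{eq:case1-result3new} with the truncation $[\cdot]_{\varepsilon+}$ follows; the default material constants are illustrative assumptions, not values used in this paper.

```python
import math

def clamp_eps(h, eps=0.02):
    """The truncation [h]_{eps+} of Eq. (truncation)."""
    return h if h > eps else eps

def fg_case1(phi_x, psi_y, phi_xx, mu=27.0e9, nu=0.347, b=2.56e-10, eps=0.02):
    """Continuum short-range glide force of Eq. (case1-result3new); material
    constants are illustrative defaults, not values from this paper."""
    bracket = clamp_eps(1.0 - 1.5 / math.pi * abs(psi_y) / abs(phi_x), eps)
    return -mu * b * b / (6.0 * (1.0 - nu) * abs(psi_y)) * bracket * phi_xx
```

The clamp keeps the bracketed coefficient positive, so the force always opposes the curvature of $\phi$, which is the diffusion-like stabilizing behavior discussed below.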
We would like to remark that in addition to its accuracy, the form of the continuum force formula in Eq.~\eqref{eq:case1-result3new} is also essential to maintain the strict stability of the evolution equations, see Eq.~\eqref{eq:case1-evn3}.
\begin{figure}[htbp]
\centering
\includegraphics[width=4.5in]{case1numericalcontinuum-eps-converted-to.pdf}
\caption{Comparison of the approximation of the function $g_1(s)$ in Eq.~\eqref{eq:truncationg1} (the red piecewise linear curve) and its exact formula in Eq.~\eqref{eq:fg1} (the blue dashed curve, calculated numerically) for different values of $s$, where $\varepsilon=0.02$.}
\label{fig:case1-01}
\end{figure}
Note that when the line direction of these dislocations changes to $\boldsymbol{\tau}=( 0,0,-1)$, we may have
$\phi_x<0$, and this case can be included by modifying the continuum glide force in Eq.~(\ref{eq:case1-result3new}) as
\begin{equation}\label{eq:case1-result5}
f_{\text{g}}^{\rm dc}=-{\rm sgn}(\phi_x)\frac{\mu b^2}{6(1-\nu)|\psi_y|}\left[ 1-\frac{3}{2\pi}\frac{|\psi_y|}{|\phi_x|}\right]_{\varepsilon +}\phi_{xx},
\end{equation}
where the function ${\rm sgn}(s)$ gives the sign of $s$. This continuum expression does not depend on the sign of $\psi_y$.
Next we derive continuum expression of the climb force for this case. On the dislocation at $(x_m,y_n)$, the climb force from the discrete dislocation dynamics model in Eq.~(\ref{eq:climb-dd}) is
\begin{equation}\label{eq:case1-dis-climb}
f_{\text{c}}^{\rm dd}(x_m,y_n)=\frac{\mu b^2}{2\pi(1-\nu)}\sum_{j\neq m} \sum_{k=-\infty}^{+\infty}
{\frac{(0-kD)[3(x_m-x_j)^2+(0-kD)^2]}{[(x_m-x_j)^2+(0-kD)^2]^2}}
=0.
\end{equation}
Thus the continuum expression of the climb force in this case is
\begin{equation}\label{eq:case1-result4}
f_{\text{c}}^{\rm dc}\equiv 0.
\end{equation}
Substituting the continuum expressions of $f_{\text{g}}^{\rm dc}$ and $f_{\text{c}}^{\rm dc}$ in Eqs.~(\ref{eq:case1-result5}) and (\ref{eq:case1-result4}) into the evolution equation of $\phi$ in (\ref{eq:evn-eqs1}), with the mobility law in Eq.~\eqref{eq:velocity},
the final form of the evolution equation for Case 1 is
\begin{eqnarray}\label{eq:case1-evn3}
\phi_t- \frac{m_{\text{g}}\mu b^2}{6(1-\nu)}\frac{ |\phi_x|}{|\psi_y| }\left[ 1 -\frac{3}{2\pi}\frac{|\psi_y|}{|\phi_x|}\right]_{\varepsilon+}\phi_{xx}=0.
\end{eqnarray}
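Because the truncated coefficient is nonnegative, Eq.~\eqref{eq:case1-evn3} behaves as a (degenerate) forward diffusion equation for $\phi$. The following Python sketch, with illustrative non-physical constants ($m_{\text{g}}\mu b^2/(6(1-\nu))=1$, $|\psi_y|=1$, background slope $\phi_x\approx 1$), integrates it with an explicit scheme and shows a small perturbation of the uniform profile decaying:

```python
import math

# phi = x + perturbation on a periodic cell; illustrative units with
# coefficient c(phi_x) = |phi_x| * max(1 - (3/(2*pi))/|phi_x|, eps)
N, L, dt, eps = 128, 10.0, 1e-4, 0.02
dx = L / N
pert = [0.05 * math.sin(2 * math.pi * i * dx / L) for i in range(N)]

def step(p):
    q = [0.0] * N
    for i in range(N):
        ip, im = (i + 1) % N, (i - 1) % N
        phi_x = 1.0 + (p[ip] - p[im]) / (2 * dx)   # background slope 1
        coeff = abs(phi_x) * max(1.0 - (3.0 / (2.0 * math.pi)) / abs(phi_x), eps)
        q[i] = p[i] + dt * coeff * (p[ip] - 2 * p[i] + p[im]) / dx ** 2
    return q

amp0 = max(abs(v) for v in pert)
for _ in range(2000):
    pert = step(pert)
amp1 = max(abs(v) for v in pert)
print(amp0, amp1)  # the perturbation amplitude decays
```

The time step satisfies the explicit stability restriction $\Delta t\,c/\Delta x^2<1/2$, so the decay seen here reflects the diffusive character of the equation rather than a numerical artifact.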
\subsection{Case 2}
The structure of dislocations in this case is shown schematically in Fig.~\ref{fig:case2}.
Each row of dislocations has a small perturbation in the direction normal to the slip planes (in the $y$ direction),
and the perturbations are uniform in the $y$ direction.
This dislocation structure is described by
\begin{equation}\label{eqn:case2}
\phi=\frac{b}{B}x, \ \ \psi=\frac{b}{D}y+\tilde{\psi}(x),
\end{equation}
where $\tilde{\psi}(x)$ is some small perturbation with $\tilde{\psi}(x)\ll b$ and $\tilde{\psi}(x)\ll Bb/D$.
The continuum Peach-Koehler force due to the long-range dislocation interaction vanishes as shown in Sec.~\ref{sec:long-rang}.
In the discrete model of this case, if we denote the locations of the dislocations on the $\psi=0$ row by
$(x_j=jB, y_j)$ for $j=0,\pm 1, \pm 2, \cdots$, i.e.,
\begin{equation}\label{eq:case2-y}
\frac{b}{D}y_j+\tilde{\psi}(x_j)=0,
\end{equation}
the glide force on the dislocation $(x_{m},y_{m})$ using Eq.~(\ref{eq:glide-dd}) is
\begin{eqnarray}\label{eq:case2-dis-glide}
\nonumber
f_{\text{g}}^{\text{dd}}(x_{m},y_{m}) &=&\frac{\mu b^2}{2\pi(1-\nu)}\sum_{j\neq m}\sum_{k=-\infty}^{+\infty}
{\frac{(x_m-x_j)[(x_m-x_j)^2-(y_m-(y_j+kD))^2]}{[(x_m-x_j)^2+(y_m-(y_j+kD))^2]^2}}\\
&=&\frac{\pi \mu b^2}{ (1-\nu)D^2}\sum_{j\neq m}\frac{ (x_m-x_j)[\cosh2\pi \frac{x_m-x_j}{D} \cos2\pi \frac{y_m-y_j}{D}-1]}{(\cosh 2\pi \frac{x_m-x_j}{D}-\cos 2\pi \frac{y_m-y_j}{D})^2}\nonumber\\
&\approx&\frac{\pi \mu b^2}{ (1-\nu)D^2}\nonumber\\
&&\cdot\sum_{j\neq m}{\textstyle \frac{ (x_m-x_j)\left[\cosh2\pi \frac{x_m-x_j}{D}-1-\left(\cosh2\pi \frac{x_m-x_j}{D}+2\right) \left(1-\cos2\pi \frac{y_m-y_j}{D}\right)
\right]}{\left(\cosh 2\pi \frac{x_m-x_j}{D}-1\right)^2}}.\nonumber\\
\end{eqnarray}
Here we have summed up the contributions from each column first. When $j=m$, the glide force on the dislocation $(x_{m},y_{m})$ imposed by the vertical dislocation array containing this dislocation itself is zero. The last approximation is obtained by Taylor expansions
using the fact that $\cosh 2\pi \frac{x_m-x_j}{D}-1\gg 1- \cos2\pi \frac{y_m-y_j}{D}$ for $j\neq m$,
which is due to $x_j=jB$ and $y_j\ll D, B$. The latter can be derived from the assumption $\tilde{\psi}(x)\ll b,\ Bb/D$ and the definition of $y_j$ in Eq.~\eqref{eq:case2-y}.
Next we derive a continuum expression from the summation in Eq.~\eqref{eq:case2-dis-glide} when $B, D << L$, the length unit of the continuum model. As in Eq.~\eqref{eq:case1-dis} in Case 1, the summation in Eq.~\eqref{eq:case2-dis-glide} can be written in a symmetric way as
\begin{eqnarray}\label{eq:case2-dis-glide2}
\nonumber
f_{\text{g}}^{\text{dd}}(x_{m},y_{m})
&\approx&\frac{\pi \mu b^2}{ (1-\nu)D^2}\\
&&\cdot\sum_{j=1}^{+\infty}\left\{{\textstyle \frac{ (x_m-x_{m+j})\left[\cosh2\pi \frac{x_m-x_{m+j}}{D}-1-\left(\cosh2\pi \frac{x_m-x_{m+j}}{D}+2\right) \left(1-\cos2\pi \frac{y_m-y_{m+j}}{D}\right)
\right]}{\left(\cosh 2\pi \frac{x_m-x_{m+j}}{D}-1\right)^2}}\right.\nonumber\\
&&+\left.{\textstyle \frac{ (x_m-x_{m-j})\left[\cosh2\pi \frac{x_m-x_{m-j}}{D}-1-\left(\cosh2\pi \frac{x_m-x_{m-j}}{D}+2\right) \left(1-\cos2\pi \frac{y_m-y_{m-j}}{D}\right)
\right]}{\left(\cosh 2\pi \frac{x_m-x_{m-j}}{D}-1\right)^2}}\right\}.\nonumber\\
\end{eqnarray}
Using $x_j=jB$, Eq.~\eqref{eq:case2-y}, and the assumption $y_j\ll D$, we can calculate as in Case 1 that
\begin{eqnarray}\label{eq:case2-dis-02}
f_{\text{g}}^{\rm dd}(x_m,y_m) &\approx &\frac{2\mu \pi^3}{ (1-\nu)D^2}\sum_{j=1}^{+\infty}jB\frac{\cosh 2\pi \frac{jB}{D}+2 }{(\cosh 2\pi \frac{jB}{D}-1)^2} \nonumber\\
&&\cdot [\tilde{\psi}(x_{m+j})- \tilde{\psi}(x_{m-j})][\tilde{\psi}(x_{m+j})- 2\tilde{\psi}(x_{m})+\tilde{\psi}(x_{m-j})]\nonumber\\
&\approx &\frac{4\mu\pi^3D^2 }{1-\nu}\tilde{\psi}_{xx}\tilde{\psi}_x \sum_{j=1}^{+\infty}\left(\frac{jB}{D}\right)^4\frac{\cosh 2\pi \frac{jB}{D}+2 }{(\cosh 2\pi \frac{jB}{D}-1)^2}\nonumber\\
&= &O(\tilde{\psi}^2)\nonumber\\
&\approx &0.
\end{eqnarray}
Note that we only keep linear terms of the small perturbation $\tilde{\psi}$. Thus the continuum expression of the glide force in this case is
\begin{equation}\label{eq:case2-result4}
f_{\text{g}}^{\rm dc}\equiv 0.
\end{equation}
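The $O(\tilde{\psi}^2)$ scaling of the glide force can be checked against the column-summed discrete formula (second line of Eq.~\eqref{eq:case2-dis-glide}): doubling the perturbation amplitude should quadruple the force. A Python sketch with illustrative values $B=D=1$, an arbitrary perturbation $y_j=\mathrm{amp}\cdot D\sin(2\pi j/10+0.7)$, and the constant prefactor dropped:

```python
import math

def glide_force(amp, N=30, B=1.0, D=1.0, m=0):
    """Column-summed glide sum (prefactor dropped) on dislocation m."""
    y = lambda j: amp * D * math.sin(2.0 * math.pi * j / 10.0 + 0.7)
    total = 0.0
    for j in range(m - N, m + N + 1):
        if j == m:
            continue
        dx = (m - j) * B
        dy = y(m) - y(j)
        ch = math.cosh(2.0 * math.pi * dx / D)
        co = math.cos(2.0 * math.pi * dy / D)
        total += dx * (ch * co - 1.0) / (ch - co) ** 2
    return total

f1, f2 = glide_force(1e-3), glide_force(2e-3)
print(f2 / f1)  # approximately 4: the force is quadratic in the perturbation
```

The unperturbed and linear-in-$\tilde{\psi}$ contributions cancel by symmetry, so the leading nonzero contribution is quadratic, matching Eq.~\eqref{eq:case2-dis-02}.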
Next we will derive a continuum expression of the climb force in this case. The discrete expression given by Eq.~\eqref{eq:climb-dd} is
\begin{eqnarray}\label{eq:case2-dis-climb}
\nonumber
f_{\text{c}}^{\text{dd}}(x_{m},y_{m}) &=&\frac{\mu b^2}{2\pi(1-\nu)}\sum_{j\neq m}\sum_{k=-\infty}^{+\infty}
\frac{(y_m-(y_j+kD))[3(x_m-x_j)^2+(y_m-(y_j+kD))^2]}{[(x_m-x_j)^2+(y_m-(y_j+kD))^2]^2}\\ \nonumber
&=&\frac{\mu b^2}{ 2(1-\nu)D}\sum_{j\neq m}\frac{ \sin2\pi\frac{y_m-y_j}{D}}{(\cosh2\pi \frac{x_m-x_j}{D} -\cos2\pi \frac{y_m-y_j}{D})^2} \\\nonumber
&~&\cdot\left\{\cosh 2\pi \frac{x_m-x_j}{D}-\cos 2\pi \frac{y_m-y_j}{D}+2\pi \frac{x_m-x_j}{D} \sinh2\pi \frac{x_m-x_j}{D}\right\}\\
&\approx&\frac{\pi\mu b^2}{ (1-\nu)D^2}\sum_{j\neq m}\frac{(y_m-y_j)(\cosh 2\pi \frac{x_m-x_j}{D}-1+2\pi \frac{x_m-x_j}{D} \sinh2\pi \frac{x_m-x_j}{D})}{(\cosh2\pi \frac{x_m-x_j}{D} -1)^2}.\nonumber\\
\end{eqnarray}
Using the same method as before, Eq.~\eqref{eq:case2-dis-climb} can be approximated by
\begin{eqnarray}\label{eq:case2-con-climb}
f_{\text{c}}^{\text{dd}}(x_{m},y_{m})
\approx\frac{\pi \mu bD^2}{(1-\nu)B}\tilde{\psi}_{xx}\sum_{j=1}^{+\infty} \frac{[\cosh 2\pi \frac{jB}{D}-1+ 2\pi \frac{jB}{D}\sinh2\pi \frac{jB}{D}](\frac{jB}{D})^2}{(\cosh 2\pi \frac{jB}{D}-1)^2}\cdot\frac{B}{D}.\nonumber\\
\end{eqnarray}
Further using $|\psi_y|=\frac{b}{D}$, $|\phi_x|=\frac{b}{B}$, and taking into account the dislocations in the opposite direction (i.e., $\psi_y<0$), as in Case 1, we have
\begin{eqnarray}\label{eq:case2-con-climb2}
f_{\text{c}}^{\text{dc}}={\rm sgn}(\psi_y)
\frac{\pi \mu b^2|\phi_x|}{(1-\nu)|\psi_y|^2}g_2\left(\frac{|\psi_y|}{|\phi_x|}\right)\psi_{xx},
\end{eqnarray}
where the function $g_2$ is defined as $g_2(\frac{B}{D})=\sum_{j=1}^{+\infty} \frac{[\cosh 2\pi \frac{jB}{D}-1+ 2\pi \frac{jB}{D}\sinh2\pi \frac{jB}{D}](\frac{jB}{D})^2}{(\cosh 2\pi \frac{jB}{D}-1)^2}\cdot\frac{B}{D}$.
Substituting the obtained $f_{\text{g}}^{\text{dc}}$ and $f_{\text{c}}^{\text{dc}}$ into the evolution equation of $\psi$ in Eq.~\eqref{eq:evn-eqs1}, with the mobility law in Eq.~\eqref{eq:velocity}, the evolution equation of dislocations for this case can be written as
$\psi_t+
\frac{\pi m_{\text{c}} \mu b^2|\phi_x|}{(1-\nu)|\psi_y|}g_2\left(\frac{|\psi_y|}{|\phi_x|}\right)\psi_{xx}=0$. It is easy to see that $g_2(s)>0$ for $s>0$, which means that the obtained evolution equation is not well-posed. In order to obtain a well-posed equation, we could keep higher order derivative terms in the continuum approximation, but this would make the equation very complicated. Alternatively, we simply choose a simple second-order regularization term to ensure the well-posedness of the continuum model, which leads to the following evolution equation for Case 2:
\begin{eqnarray}\label{eq:case2-evn-3}
\psi_t- \frac{m_{\text{c}}\mu b^2 }{6(1-\nu) }\varepsilon\psi_{xx} =0,
\end{eqnarray}
where $\varepsilon>0$ is the same small parameter as that in Eq.~\eqref{eq:truncationg1}.
\subsection{Case 3}
The structure of dislocations in this case is shown schematically in Fig.~\ref{fig:case3}, which is uniform in each slip plane (in the $x$ direction), but the slip planes of these dislocations are nonuniformly spaced (in the $y$ direction). This dislocation structure is described by
\begin{equation}\label{eqn:case3}
\phi=\frac{b}{B}x, \ \ \psi=\frac{b}{D}y+\tilde{\psi}(y),
\end{equation}
where $\tilde{\psi}(y)$ is some small perturbation such that $\psi_y=\frac{b}{D}+\tilde{\psi}'(y)>0$.
Using Eq.~(\ref{eqn:density}), the dislocation density in this case is
\begin{equation}\label{eqn:case3density}
\rho=\rho(y)=\frac{1}{B}\left(\frac{1}{D}+\frac{\tilde{\psi}'(y)}{b}\right).
\end{equation}
Based on the conclusions in Sec.~\ref{sec:long-rang} (Eqs.~(\ref{eqn:vanish1}) and (\ref{eqn:vanish2})),
the continuum long-range glide force vanishes, whereas the continuum long-range climb force does not. Therefore, in this case, the integral expression in Eq.~(\ref{eq:con-climb}) is able to give a nonvanishing leading order continuum approximation for the climb force, and we only need to derive a continuum formula for the short-range glide force.
Using the discrete model in Eq.~(\ref{eq:glide-dd}), the glide force on the dislocation located at
$(x_m=mB,y_n)$ in this case is
\begin{equation}\label{eq:case3-dis-glide}
f^{\text{dd}}_{\text{g}}(x_m,y_n)=
\frac{\mu b^2}{2\pi(1-\nu)}\sum_{k\neq n} \sum_{j=-\infty}^{+\infty}
{\frac{-jB[(jB)^2-(y_n-y_k)^2]}{[(jB)^2+(y_n-y_k)^2]^2}}
=0.
\end{equation}
This means that the glide force in this case indeed vanishes. Therefore, in this case,
\begin{equation}\label{eq:case3-result4}
f_{\text{g}}^{\rm dc}\equiv 0.
\end{equation}
{\bf Remark}: In this case, we have shown that the integral expression in Eq.~(\ref{eq:con-climb}) is able to give a nonvanishing leading order continuum approximation for the climb force. It is interesting to note that this integral expression with the dislocation density $\rho$ in Eq.~(\ref{eqn:case3density}) in this case can be further simplified to a local expression: $f_{\text{c}}^{\text{dc,0}}=\frac{2\mu b}{(1-\nu)B}\tilde{\psi}(y)$, if the perturbation $\tilde{\psi}$ goes to zero at infinity.
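The vanishing of the sum in Eq.~\eqref{eq:case3-dis-glide} is a symmetry statement: for each fixed pair of rows the summand is odd in $j$, so the contributions at $\pm j$ cancel. A short Python check with illustrative spacings:

```python
# Summand of Eq. (eq:case3-dis-glide) for one pair of rows, summed over j != 0:
# it is odd in j, so the total cancels.
def row_pair_sum(B, dy, N=200):
    total = 0.0
    for j in range(-N, N + 1):
        if j == 0:
            continue
        total += -j * B * ((j * B) ** 2 - dy ** 2) / ((j * B) ** 2 + dy ** 2) ** 2
    return total

print(row_pair_sum(1.0, 0.37))  # 0 up to rounding
```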
\subsection{Case 4}
The structure of dislocations in this case is shown schematically in Fig.~\ref{fig:case4}.
Each column of dislocations has a small perturbation in their own slip planes (in the $x$ direction),
and the perturbations are uniform in the $x$ direction.
This dislocation structure is described by
\begin{equation}\label{eqn:case4}
\phi=\frac{b}{B}x+\tilde{\phi}(y), \ \ \psi=\frac{b}{D}y,
\end{equation}
where $\tilde{\phi}(y)$ is some small perturbation with $\tilde{\phi}(y)\ll b$ and $\tilde{\phi}(y)\ll Db/B$.
The continuum Peach-Koehler force due to the long-range dislocation interaction vanishes as shown in Sec.~\ref{sec:long-rang} because the scalar dislocation density calculated by Eq.~\eqref{eqn:density} is a constant.
In the discrete model of this case, we denote the locations of the dislocations on the $\phi=0$ column by
$(x_k, y_k=kD)$ for $k=0,\pm 1, \pm 2, \cdots$, i.e.,
\begin{equation}\label{eq:case4-x}
\frac{b}{B}x_k+\tilde{\phi}(y_k)=0.
\end{equation}
The glide force on the dislocation $(x_{n},y_{n})$ using Eq.~(\ref{eq:glide-dd}) is
\begin{eqnarray} \label{eq:case4-dis-glide-1}\nonumber
f^{\text{dd}}_{\text{g}}(x_{n},y_{n})&=&\frac{\mu b^2}{2\pi(1-\nu)}\sum_{k\neq n}\sum_{j=-\infty}^{+\infty}
{\frac{(x_n-(x_k+jB))[(x_n-(x_k+jB))^2-(y_n-y_k)^2]}{[(x_n-(x_k+jB))^2+(y_n-y_k)^2]^2}}\\ \nonumber
&=&\frac{\mu b^2}{2(1-\nu)B}\sum_{k\neq n}\frac{\sin 2\pi \frac{x_n-x_k}{B}}{(\cosh 2\pi \frac{y_n-y_k}{B}-\cos2\pi \frac{x_n-x_k}{B})^2}\\ \nonumber
&~&\cdot \left( \cosh2\pi \frac{y_n-y_k}{B}-\cos2\pi \frac{x_n-x_k}{B}-2\pi \frac{y_n-y_k}{B}\sinh 2\pi \frac{y_n-y_k}{B}\right)\\
&\approx&\frac{\mu b^2}{2(1-\nu)B}\sum_{k\neq n}\frac{\sin 2\pi \frac{x_n-x_k}{B}
(\cosh2\pi \frac{y_n-y_k}{B}-1-2\pi \frac{y_n-y_k}{B}\sinh 2\pi \frac{y_n-y_k}{B})}{(\cosh 2\pi \frac{y_n-y_k}{B}-1 )^2}.\nonumber\\
\end{eqnarray}
Here we have summed up the contributions from each row first. When $k=n$, the glide force on the dislocation $(x_{n},y_{n})$ imposed by the row of dislocations containing this dislocation itself is zero.
The last approximation is obtained by Taylor expansions
using the fact that $\cosh 2\pi \frac{y_n-y_k}{B}-1\gg 1- \cos2\pi \frac{x_n-x_k}{B}$ for $k\neq n$,
which is due to $y_k=kD$ and $x_k\ll B, D$. The latter can be derived from the assumption $\tilde{\phi}(y)\ll b,\ Db/B$ and the definition of $x_k$ in Eq.~\eqref{eq:case4-x}. The relative error of this approximation is $O\big(({\displaystyle \max_k}|x_k|/D)^2\big)$.
Following Eq.~\eqref{eq:case4-x}, we have the Taylor expansion that
\begin{eqnarray}\label{eq:case4-x-tilde} \nonumber
x_k-x_n &=&-\frac{B}{b}\tilde{\phi}(y_k)+\frac{B}{b}\tilde{\phi}(y_n)\\
&=&-\frac{B}{b}\tilde{\phi}_y(y_n)(y_k-y_n)-\frac{B}{2b}\tilde{\phi}_{yy}(y_n)(y_k-y_n)^2+O((y_k-y_n)^3).
\end{eqnarray}
Using Eqs.~\eqref{eq:case4-dis-glide-1} and \eqref{eq:case4-x-tilde} and $y_k=kD$, we have
\begin{eqnarray} \label{eq:case4-dis-glide-2}
f^{\text{dd}}_{\text{g}}(x_{n},y_{n})&\approx&\frac{\mu b^2}{2(1-\nu)B}\sum_{k=1}^{+\infty}\left(\sin 2\pi \frac{x_n-x_{n+k}}{B}+\sin 2\pi \frac{x_n-x_{n-k}}{B}\right)\nonumber\\
&&\cdot\frac{
\cosh2\pi \frac{kD}{B}-1-2\pi \frac{kD}{B}\sinh 2\pi \frac{kD}{B}}{(\cosh 2\pi \frac{kD}{B}-1 )^2}\nonumber\\
&\approx&\frac{\pi\mu b}{(1-\nu)B}\tilde{\phi}''(y_n)\sum_{k=1}^{+\infty}\frac{
(\cosh2\pi \frac{kD}{B}-1-2\pi \frac{kD}{B}\sinh 2\pi \frac{kD}{B})(kD)^2}{(\cosh 2\pi \frac{kD}{B}-1 )^2}.
\end{eqnarray}
Using the definition of the function $g_1$ in Eq.~\eqref{eq:fg1} and the approximation in Eq.~\eqref{eq:truncationg1}, we have the continuum approximation
\begin{equation}\label{eq:case4-glide-result}
f^{\text{dc}}_{\text{g}}=-\frac{\pi\mu b B^2}{(1-\nu)D}\ g_1\left(\frac{D}{B}\right)\tilde{\phi}_{yy}\approx-\frac{\mu b^2 |\psi_y| }{6(1-\nu)|\phi_x|^2}\left[1-\frac{3}{2\pi}\frac{|\phi_x|}{|\psi_y|}\right]_{\varepsilon+}\phi_{yy}.
\end{equation}
Here we have used $\tilde{\phi}_{yy}=\phi_{yy}$.
As in Case 1, when the line direction of these dislocations changes to $\boldsymbol{\tau}=( 0,0,-1)$, we may have
$\phi_x<0$, and this case can be included by modifying the continuum glide force in Eq.~\eqref{eq:case4-glide-result} as
\begin{equation}\label{eq:case4-result5}
f^{\text{dc}}_{\text{g}}=-{\rm sgn}(\phi_x)\frac{\mu b^2 |\psi_y| }{6(1-\nu)|\phi_x|^2}\left[1-\frac{3}{2\pi}\frac{|\phi_x|}{|\psi_y|}\right]_{\varepsilon+}\phi_{yy}.
\end{equation}
This continuum expression does not depend on the sign of $\psi_y$.
As in the previous cases, we also calculate the continuum approximation of the climb force in this case from the discrete model in Eq.~(\ref{eq:climb-dd}), and the result is
\begin{eqnarray} \nonumber
f^{\text{dd}}_{\text{c}}(x_{n},y_{n})&=&\frac{\mu b^2}{2\pi(1-\nu)}\sum_{k\neq n}\sum_{j=-\infty}^{+\infty}
{\frac{(y_n-y_k)[3(x_n-(x_k+jB))^2+(y_n-y_k)^2]}{[(x_n-(x_k+jB))^2+(y_n-y_k)^2]^2}}\\ \nonumber
&=&\frac{\mu b^2}{2(1-\nu)B}\sum_{k\neq n}\frac{1}{(\cosh 2\pi \frac{y_n-y_k}{B}-\cos2\pi \frac{x_n-x_k}{B})^2}\\ \nonumber
&&{\textstyle \cdot [-2\pi \frac{y_n-y_k}{B}(\cosh 2\pi \frac{y_n-y_k}{B}\cos2\pi \frac{x_n-x_k}{B}-1)}\\ \nonumber
&&{\textstyle +2 \sinh 2\pi \frac{y_n-y_k}{B}(\cosh 2\pi \frac{y_n-y_k}{B} -\cos2\pi \frac{x_n-x_k}{B})]}\\ \nonumber
&=&O(\tilde{\phi}'(y_n)\tilde{\phi}''(y_n))\\
&\approx&0. \label{eq:case4-dis-climb-2}
\end{eqnarray}
Again, we have used the fact that $\cosh 2\pi \frac{y_n-y_k}{B}-1\gg 1- \cos2\pi \frac{x_n-x_k}{B}$ for $k\neq n$ to obtain the expansions.
Therefore, in this case,
\begin{equation}\label{eq:case4-result4}
f_{\text{c}}^{\rm dc}\equiv 0.
\end{equation}
Substituting Eqs.~\eqref{eq:case4-result5} and \eqref{eq:case4-result4} into Eq.~(\ref{eq:evn-eqs1}), we have the following evolution equation for this case:
\begin{equation}\label{eq:case4-evn-3}
\phi_t
- \frac{m_{\text{g}}\mu b^2 }{6(1-\nu)}\frac{ |\psi_y| }{|\phi_x|}\left[1-\frac{3}{2\pi}\frac{|\phi_x|}{|\psi_y|}\right]_{\varepsilon+}\phi_{yy}=0.
\end{equation}
\section{Continuum dislocation dynamics model incorporating short-range interactions}\label{eqn:contiuum-ddpf}
In this section, we present the continuum dislocation dynamics model that incorporates the short range dislocation interactions discussed in the previous section.
\subsection{The continuum dislocation dynamics model based on DDPFs}
We have shown in Sec.~\ref{sec:long-rang} that a continuum model with only the long-range Peach-Koehler force is not always able to capture the behaviors of discrete dislocation dynamics. It will be shown in Sec.~\ref{sec:stability} that such inconsistency leads to insufficiency in the stabilizing effect of the continuum model compared with the discrete dislocation dynamics model. As a result, in numerical simulations using such a continuum model, there is no effective mechanism to eliminate some numerical oscillations generated during the simulations.
In Sec.~\ref{sec:ddpf}, we have presented
the framework of our DDPF-based continuum dislocation dynamics model, see Eq.~\eqref{eq:evn-eqs1}.
We incorporate into our continuum model the continuum short-range forces obtained in the previous section for the cases where the continuum long-range glide or climb force vanishes.
With these short-range terms and including the contributions of the applied stress field, the continuum dislocation dynamics equations in Eq.~\eqref{eq:evn-eqs1} become
\begin{equation}\label{eq:evn-general-1}
\left\{
\begin{array}{l}
\phi_t
+{\mathbf v}\cdot\nabla\phi
= \frac{m_{\text{g}}\mu b^2}{6(1-\nu)}\frac{ |\phi_x|}{|\psi_y| }\left[ 1 -\frac{3}{2\pi}\frac{|\psi_y|}{|\phi_x|}\right]_{\varepsilon+}\phi_{xx}
+\frac{m_{\text{g}}\mu b^2 }{6(1-\nu)}\frac{ |\psi_y| }{|\phi_x|}\left[1-\frac{3}{2\pi}\frac{|\phi_x|}{|\psi_y|}\right]_{\varepsilon+}\phi_{yy}, \vspace{1ex}\\
\psi_t
+{\mathbf v}\cdot\nabla\psi
=\frac{m_{\text{c}}\mu b^2 }{6(1-\nu) }\varepsilon\psi_{xx},
\end{array}
\right.
\end{equation}
where
\begin{eqnarray}
{\mathbf v}&=&(v_{\text g},v_{\text c}),\vspace{1ex}\label{eqn:6-2}\\
v_{\text g}&=& {\textstyle \frac{m_{\text{g}}}{b}} (\pmb\tau\cdot\mathbf k)\ G_1*(\nabla \phi \times \nabla \psi\cdot \boldsymbol{k})+m_{\text{g}}(\pmb\tau\cdot {\mathbf k})b\sigma_{xy}^0,\vspace{1ex}\nonumber\\
&=&{\textstyle \frac{m_{\text{g}} \mu }{2\pi(1-\nu)}(\pmb\tau\cdot\mathbf k)
\int^{+\infty}_{-\infty}\int^{+\infty}_{-\infty}
\frac{(x-x_1)[(x-x_1)^2-(y-y_1)^2]}{[(x-x_1)^2+(y-y_1)^2]^2}
[\nabla \phi (x_1,y_1)\times \nabla \psi(x_1,y_1)\cdot \boldsymbol{k}] \ dx_1dy_1} \vspace{1ex}\nonumber\\
&~&+m_{\text{g}}(\pmb\tau\cdot {\mathbf k})b\sigma_{xy}^0, \vspace{1ex}\\
v_{\text c}&=&-{\textstyle \frac{m_{\text{c}}}{b}} (\pmb\tau\cdot\mathbf k)\ G_2*(\nabla \phi \times \nabla \psi\cdot \boldsymbol{k})-m_{\text{c}}(\pmb\tau\cdot {\mathbf k})b\sigma_{xx}^0, \vspace{1ex}\nonumber\\
&=&{\textstyle \frac{m_{\text{c}} \mu }{2\pi(1-\nu)}(\pmb\tau\cdot\mathbf k)
\int^{+\infty}_{-\infty}\int^{+\infty}_{-\infty}
\frac{(y-y_1)[3(x-x_1)^2+(y-y_1)^2]}{[(x-x_1)^2+(y-y_1)^2]^2}
[\nabla \phi (x_1,y_1)\times \nabla \psi(x_1,y_1)\cdot \boldsymbol{k}] \ dx_1dy_1} \vspace{1ex}\nonumber\\
&~& -m_{\text{c}}(\pmb\tau\cdot {\mathbf k})b\sigma_{xx}^0,\vspace{1ex}\\
\pmb \tau&=&{\textstyle \frac{\nabla \phi \times \nabla \psi}{\|\nabla \phi \times \nabla \psi\|}},\vspace{1ex}\\
\mathbf k&=&(0,0,1)^T,
\end{eqnarray}
with
\begin{eqnarray}
G_1(x,y)&=&{\textstyle \frac{\mu b}{2\pi(1-\nu)}\frac{x(x^2-y^2)}{(x^2+y^2)^2}},\vspace{1ex}\label{eqn:6-7}\\
G_2(x,y)&=&{\textstyle -\frac{\mu b}{2\pi(1-\nu)}\frac{y(3x^2+y^2)}{(x^2+y^2)^2}}.\label{eqn:6-8}
\end{eqnarray}
Under the assumptions that the length of the Burgers vector $b\ll L$, where $L$ is the unit length of the continuum model, and the average dislocation spacing $B\sim D\ll L$, it is easy to find that the ratio of the second order partial derivative terms to the long-range terms $\mathbf v\cdot \nabla \phi$ and $\mathbf v\cdot \nabla \psi$ in Eq.~\eqref{eq:evn-general-1} is $O(b/L)\ll 1$. Here we have used the fact that $\phi_x=O(b/B)$, $\phi_{xx}=O(b/(BL))$, and similar orders for other partial derivatives of $\phi$ and $\psi$.
Recall that the continuum short-range interaction terms provide good approximations to the discrete dislocation model when the continuum long-range force vanishes for some non-trivial perturbed dislocation walls. For a general dislocation distribution described by the continuum model, the full continuum force (including both the long-range and short-range continuum forces) still provides a good approximation to the discrete dislocation dynamics model under the assumption that a point in the continuum model corresponds to one of these microstructures of perturbed regular dislocation walls, which is a common technique for coarse-graining from microscopic or mesoscopic models to continuum models.
Mathematically, these short-range terms in the continuum model serve as stabilizing terms that maintain the same stability properties as the discrete dislocation dynamics model, as will be shown in Sec.~\ref{sec:stability}.
Recall also that the main advantage of the continuum model based on DDPFs \cite{Xiang2009_JMPS,ZhuXH2010,Zhu2014_IJP,Zhu_continuum3D} is being able to describe the orientation-dependent dislocation densities of curved dislocations. The dislocation glide within its slip plane due to the long-range Peach-Koehler force is regularized by the local curvature term due to the line tension effect.
In the continuum dynamics equations in Eq.~\eqref{eq:evn-general-1} obtained in this paper, the short-range interaction terms are in the form of second partial derivatives of the DDPFs and are able to provide regularization in the cross-section of the dislocations for both glide and climb.
Combining these two regularization effects, we expect to have a full well-posed continuum dislocation dynamics model based on DDPFs. Moreover, the use of two DDPFs $\phi$ and $\psi$ in the continuum dislocation dynamics model enables the study of the anisotropic behaviors of dislocation ensembles within and out of their slip planes. These will be further explored in the future work.
\subsection{Continuum model for dislocation glide}\label{subsec:glidemodel}
In this subsection, we consider the dynamics of dislocations by glide only. In this case, we assume that the average inter-slip-plane distance is $D$ \cite{Zhu_continuum3D}, that is, $\psi(x,y)=\frac{b}{D}y$ is always fixed, and the dislocations move only in the $x$ direction. Applying our continuum model in Eq.~\eqref{eq:evn-general-1} to this case, we obtain
\begin{eqnarray}\label{eq:evn-glide}
&&\phi_t
+{\textstyle \frac{m_{\text{g}}}{D}}|\phi_x| G_1*\phi_x+ m_{\text{g}}b\sigma_{xy}^0|\phi_x|\vspace{1ex}\nonumber\\
&&\hspace{0.5in} = {\textstyle\frac{m_{\text{g}}\mu bD}{6(1-\nu)} |\phi_x|\left[ 1 -\frac{3b}{2\pi D|\phi_x|}\right]_{\varepsilon+}\phi_{xx}
+\frac{m_{\text{g}}\mu b^3 }{6(1-\nu)D|\phi_x|}\left[1-\frac{3D|\phi_x|}{2\pi b}\right]_{\varepsilon+}\phi_{yy}},
\end{eqnarray}
where
\begin{eqnarray}
&& G_1*\phi_x(x,y)={\textstyle \frac{ \mu b}{2\pi(1-\nu)}
\int^{+\infty}_{-\infty}\int^{+\infty}_{-\infty}
\frac{(x-x_1)[(x-x_1)^2-(y-y_1)^2]}{[(x-x_1)^2+(y-y_1)^2]^2}
\phi_x (x_1,y_1) \ dx_1dy_1}.
\end{eqnarray}
In this case, the continuum model in Eq.~\eqref{eq:evn-glide} can be written as:
\begin{eqnarray}\label{eq:evn-glide-phi}
\phi_t+v_{\text{g}}\phi_x=0,
\end{eqnarray}
where the total glide velocity $v_{\text{g}}=m_{\text{g}}f_{\text{g}}$,
the continuum total glide force $f_{\text{g}}=f_{\text{g}}^{\text{dc}}+{\rm sgn}(\phi_x)b\sigma_{xy}^0$ as given by Eq.~\eqref{eqn:fglide-tot}, and the continuum force due to interactions between dislocations
\begin{eqnarray}\label{eq:evn-glide-conserv}
f_{\text{g}}^{\rm dc}={\textstyle {\rm sgn}(\phi_x)\left\{
\frac{1}{D} G_1*\phi_x
-\frac{\mu bD}{6(1-\nu)} \left[ 1 -\frac{3b}{2\pi D|\phi_x|}\right]_{\varepsilon+}\phi_{xx}
-\frac{\mu b^3 }{6(1-\nu)D\phi_x^2}\left[1-\frac{3D|\phi_x|}{2\pi b}\right]_{\varepsilon+}\phi_{yy}\right\}}\nonumber\\
\end{eqnarray}
including both the long-range interaction force (the first term) and the short-range interaction forces (the remaining two terms) on the dislocations.
When the dislocation distribution is uniform in the $y$ direction, which is Case 1 in Sec.~\ref{sec:continuum-short}, Eq.~\eqref{eq:evn-glide} reduces to
\begin{eqnarray}\label{eq:case1-evn3-simulation}
\phi_t+ m_{\text{g}}b\sigma_{xy}^0|\phi_x|- \frac{m_{\text{g}}\mu bD}{6(1-\nu)} |\phi_x|\left[ 1 -\frac{3b}{2\pi D|\phi_x|}\right]_{\varepsilon+}\phi_{xx}=0.
\end{eqnarray}
In this case, the continuum total force in Eq.~\eqref{eq:evn-glide-conserv} reduces to Eq.~\eqref{eq:case1-result5}.
\subsection{Comparison with scalar dislocation density based continuum models}
In this subsection,
we examine the evolution of the signed dislocation density $\rho$ defined in
Eq.~\eqref{eqn:density} in terms of the DDPFs $\phi$ and $\psi$.
We first consider the continuum model of $\phi$ and $\psi$ in the form of Eq.~\eqref{eq:evn-eqs1}. From Eqs.~\eqref{eqn:density} and \eqref{eq:evn-eqs1}, we can calculate that
\begin{equation}\label{eq:evn-rho}
\rho_t+\nabla\cdot(\rho {\mathbf v})=0,
\end{equation}
where $\textbf{v}=(v_{\text{g}}, v_{\text{c}})^T$ is the dislocation velocity.
In fact,
\begin{eqnarray} \label{eq:evn-rho-0}
\rho_t&=&\frac{1}{b^2}(\phi_x\psi_y-\psi_x\phi_y)_t\nonumber\\ \nonumber
&=&\frac{1}{b^2}(\phi_{xt}\psi_y+\phi_x\psi_{yt}-\psi_{xt}\phi_y-\psi_x\phi_{yt})\\ \nonumber
&=&\frac{1}{b^2}\{(-\boldsymbol{v}\cdot\nabla\phi)_{x}\psi_y+(-\boldsymbol{v}\cdot\nabla\psi)_{y}\phi_x-(-\boldsymbol{v}\cdot\nabla\psi)_{x}\phi_y-(-\boldsymbol{v}\cdot \nabla\phi)_{y}\psi_x\}\\ \nonumber
&=&\frac{1}{b^2}\{(-v_1\phi_x\psi_y+v_1\psi_x\phi_y)_x+(-v_2\phi_x\psi_y+v_2\psi_x\phi_y)_y\} \\
&=&-\nabla\cdot (\rho\boldsymbol{v}).
\end{eqnarray}
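The calculation in Eq.~\eqref{eq:evn-rho-0} is a pure differentiation identity, valid for arbitrary smooth $\phi$, $\psi$, and velocity field $\boldsymbol{v}$. A quick symbolic check with SymPy, using arbitrary illustrative fields (and $b=1$):

```python
import sympy as sp

x, y = sp.symbols('x y')
# arbitrary smooth fields: two DDPFs and two velocity components
phi = sp.sin(x) * sp.cos(2 * y) + x
psi = sp.exp(-x * y) + y
v1 = x ** 2 * sp.cos(y)
v2 = sp.sin(x + y)

rho = phi.diff(x) * psi.diff(y) - psi.diff(x) * phi.diff(y)  # b^2 * rho with b = 1
phi_t = -(v1 * phi.diff(x) + v2 * phi.diff(y))   # advection of phi
psi_t = -(v1 * psi.diff(x) + v2 * psi.diff(y))   # advection of psi

# d(rho)/dt induced by the advection of phi and psi
rho_t = (phi_t.diff(x) * psi.diff(y) + phi.diff(x) * psi_t.diff(y)
         - psi_t.diff(x) * phi.diff(y) - psi.diff(x) * phi_t.diff(y))
divergence = (rho * v1).diff(x) + (rho * v2).diff(y)

residual = sp.simplify(sp.expand(rho_t + divergence))
print(residual)  # 0
```

The residual vanishes identically, confirming that advecting $\phi$ and $\psi$ transports their Jacobian $\rho$ in conservative form.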
In most of the continuum dislocation dynamics models in the literature, the evolution equation is written in the conservative form in Eq.~\eqref{eq:evn-rho}.
Here we only consider the geometrically necessary dislocations. When only the long-range Peach-Koehler force is considered, the dislocation velocity $\textbf{v}$ is expressed by the mobility law in Eq.~\eqref{eq:velocity} and the long-range force $\textbf{f}=(f_{\text{g}}, f_{\text{c}})^T$ in Eqs.~\eqref{eq:con-glide} and \eqref{eq:con-climb} in terms of $\rho$. These form a closed evolution equation for the dislocation density $\rho$.
However, the modified continuum dislocation dynamics model incorporating the short-range interaction terms in Eq.~\eqref{eq:evn-general-1} in general cannot be fully described by the evolution of $\rho$.
The reason is that our continuum model incorporates the anisotropy of dislocation structure and motion within and out of the slip planes, whereas the single scalar dislocation density $\rho$ is only able to describe isotropic dislocation structure and motion.
When we only consider the glide motion of dislocations as in Sec.~\ref{subsec:glidemodel}, following Eq.~\eqref{eqn:density}, the dislocation density is $\rho=(\nabla \phi \times \nabla \psi\cdot \boldsymbol{k})/b^2 =\frac{1}{bD}\phi_x$, and Eq.~\eqref{eq:evn-glide-phi} can be written as
\begin{eqnarray}\label{eq:evn-glide-conserv-rho0}
\rho_t+(\rho v_{\text{g}})_x=0,
\end{eqnarray}
where $v_{\text{g}}=m_{\text{g}}f_{\text{g}}$,
$f_{\text{g}}=f_{\text{g}}^{\text{dc}}+{\rm sgn}(\rho)b\sigma_{xy}^0$, and
\begin{eqnarray}\label{eq:evn-glide-conserv-rho}
f_{\text{g}}^{\rm dc}={\textstyle {\rm sgn}(\rho)\left\{
b G_1*\rho
-\frac{\mu b^2D^2}{6(1-\nu)}\left[ 1 -\frac{3}{2\pi D^2\rho}\right]_{\varepsilon+}\rho_x
-\frac{\mu b }{6(1-\nu)D^3}\frac{1 }{\rho^2}\left[1-\frac{3D^2\rho}{2\pi}\right]_{\varepsilon+}\phi_{yy}\right\}}.\nonumber\\
\end{eqnarray}
Although Eq.~\eqref{eq:evn-glide-conserv-rho0} is in a conservative form of the dislocation density $\rho$, the continuum total glide force in Eq.~\eqref{eq:evn-glide-conserv-rho} also depends on $\phi_{yy}$, which cannot be simply expressed in terms of $\rho$.
In particular,
for the dislocation structure of Case 4 shown in Fig.~\ref{fig:case4}, the dislocation density is $\rho\equiv1/(BD)$; thus the representation by $\rho$ alone cannot distinguish this dislocation structure from a uniform distribution.
For the dislocation structure of Case 1 shown in Fig.~\ref{fig:case1} (without the applied stress), our continuum model
in Eq.~\eqref{eq:case1-evn3}
can indeed be rewritten as an evolution equation of the dislocation density $\rho$ following $\rho=\frac{1}{bD}\phi_x$, which is
\begin{eqnarray}\label{eq:case1-evn4}
\rho_t- \frac{m_{\text{g}}\mu b^2}{6(1-\nu)}\left(D^2|\rho|\left[ 1 -\frac{3}{2\pi D^2\rho}\right]_{\varepsilon+}\rho_x\right)_x=0.
\end{eqnarray}
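Since Eq.~\eqref{eq:case1-evn4} is in conservative (flux) form, a flux-based finite-volume discretization preserves the total dislocation content up to rounding. A minimal Python sketch with illustrative constants ($m_{\text{g}}\mu b^2/(6(1-\nu))=1$, $D=1$):

```python
import math

N, L, dt, eps = 100, 10.0, 2e-5, 0.02
dx = L / N
rho = [1.0 + 0.1 * math.sin(2 * math.pi * i * dx / L) for i in range(N)]

def flux(rl, rr):
    """Flux -D^2*|rho|*[1 - 3/(2*pi*D^2*rho)]_{eps+}*rho_x at a cell face (D=1)."""
    rm = 0.5 * (rl + rr)
    coeff = abs(rm) * max(1.0 - 3.0 / (2.0 * math.pi * rm), eps)
    return -coeff * (rr - rl) / dx

mass0 = sum(rho) * dx
for _ in range(500):
    F = [flux(rho[i], rho[(i + 1) % N]) for i in range(N)]
    rho = [rho[i] - dt * (F[i] - F[i - 1]) / dx for i in range(N)]
mass1 = sum(rho) * dx
print(mass0, mass1)  # the total dislocation content is conserved
```

The flux differences telescope on the periodic cell, so mass conservation holds to machine precision regardless of the nonlinearity in the coefficient.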
In this case, only the local short-range force is nonvanishing, which is
\begin{eqnarray}\label{eq:evn-glide-conserv-rho-case1}
f_{\text{g}}^{\rm dc}=-{\rm sgn}(\rho)
\frac{m_{\text{g}}\mu b^2D^2}{6(1-\nu)} \left[ 1 -\frac{3}{2\pi D^2\rho}\right]_{\varepsilon+}\rho_x.
\end{eqnarray}
In the continuum formulas available in the literature for this case, derived using different methods \cite{Groma2003,Schulz2015}, the local forces are proportional to $\rho_x/|\rho|$ when only the geometrically necessary dislocations are considered. The corresponding term in our continuum model for this case,
in Eqs.~\eqref{eq:case1-evn4} and \eqref{eq:evn-glide-conserv-rho-case1}, is proportional to $D^2\rho_x$,
which means that for this special case the isotropic dislocation density $\rho$ in the denominator in the models in the literature should be replaced by the more accurate expression $1/D^2$, where $D$ is the average inter-dislocation distance normal to the slip plane.
Again we can see that our model using the two DDPFs $\phi$ and $\psi$ is able to capture the anisotropy of dislocation structure and motion within and out of the slip planes, which cannot be achieved by using the traditional scalar dislocation density $\rho$.
\section{Stability using the new continuum model}\label{sec:stability}
In this section, we examine the stability of the uniform dislocation distributions using the derived continuum model in Eq.~\eqref{eq:evn-general-1}. Consider a uniform distribution of dislocations represented by $\phi_0=\frac{b}{B}x$, $\psi_0=\frac{b}{D}y$. This uniform distribution subject to a small perturbation can be written as
\begin{equation}
\left\{
\begin{array}{l}
\phi=\frac{b}{B}x+\tilde{\phi}(x,y,t), \\
\psi=\frac{b}{D}y+\tilde{\psi}(x,y,t),
\end{array}
\right.
\end{equation}
where $\tilde{\phi}(x,y,t)$ and $\tilde{\psi}(x,y,t)$ are small perturbation functions.
Using Eq.~\eqref{eqn:density}, the dislocation density for this distribution up to linear order of the small perturbations is
\begin{equation}
\rho(x, y, t)=\frac{(\nabla {\phi} \times \nabla {\psi})\cdot\boldsymbol{k} }{b^2}\approx\frac{1}{BD}+\frac{1}{bD}\tilde{\phi}_x+\frac{1}{bB}\tilde{\psi}_y.
\end{equation}
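This linear-order expansion can be reproduced symbolically; the following sketch (using sympy, with the symbols \texttt{px}, \texttt{py}, \texttt{qx}, \texttt{qy} introduced here as stand-ins for the perturbation derivatives $\tilde{\phi}_x$, $\tilde{\phi}_y$, $\tilde{\psi}_x$, $\tilde{\psi}_y$) expands $(\nabla\phi\times\nabla\psi)\cdot\boldsymbol{k}/b^2$ and discards terms that are quadratic in the perturbations:

```python
import sympy as sp

b, B, D, eps = sp.symbols('b B D epsilon', positive=True)
px, py, qx, qy = sp.symbols('px py qx qy')  # perturbation derivatives (illustrative names)

# phi = (b/B)x + eps*phi~,  psi = (b/D)y + eps*psi~
phi_x, phi_y = b/B + eps*px, eps*py
psi_x, psi_y = eps*qx, b/D + eps*qy

rho = (phi_x*psi_y - phi_y*psi_x)/b**2   # (grad phi x grad psi).k / b^2
rho_lin = rho.expand().series(eps, 0, 2).removeO().subs(eps, 1)
print(rho_lin)   # equals 1/(B*D) + px/(b*D) + qy/(b*B)
```

The result reproduces the three terms of the linearized density above.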
Substituting the above $\phi$ and $\psi$ into the continuum model in Eq.~\eqref{eq:evn-general-1} with Eqs.~\eqref{eqn:6-2}--\eqref{eqn:6-8}, the linearized evolution equations of $\tilde{\phi}(x,y,t)$ and $\tilde{\psi}(x,y,t)$, written in Fourier space, are
\begin{eqnarray}\nonumber
\hat{\tilde{\phi}}_t
&=&-\frac{2m_{\text{g}}\mu b^2 }{1-\nu}\left\{\frac{1}{BD}\frac{k_1^2k_2^2}{(k_1^2+k_2^2)^2}
+\frac{D}{12B }\left[ 1 -\frac{3}{2\pi}\frac{B}{D}\right]_{\varepsilon+}k_1^2
+\frac{ B }{12D}\left[1-\frac{3}{2\pi}\frac{D}{B}\right]_{\varepsilon+}k_2^2\right\}
\hat{\tilde{\phi}}\\
&~&-\frac{2m_{\text{g}}\mu b^2 }{(1-\nu)B^2}\frac{k_1k_2^3}{(k_1^2+k_2^2)^2}\hat{\tilde{\psi}}, \label{eq:evn-fourier-21}\\
\hat{\tilde{\psi}}_t
&=&-\frac{2m_{\text{c}}\mu b^2 }{(1-\nu)D^2}\frac{k_1k_2^3}{(k_1^2+k_2^2)^2}\hat{\tilde{\phi}}
-\frac{2m_{\text{c}}\mu b^2 }{1-\nu}\left[\frac{1}{BD}\frac{k_2^4}{(k_1^2+k_2^2)^2}
+\frac{1 }{12}\varepsilon k_1^2\right]\hat{\tilde{\psi}}, \label{eq:evn-fourier-22}
\end{eqnarray}
where $k_1$ and $k_2$ are frequencies in the $x$ and $y$ directions, respectively. Here we have used
$\hat{G_1}(k_1,k_2)=-i\frac{\mu b}{2\pi^2(1-\nu)}\frac{k_1k_2^2}{(k_1^2+k_2^2)^2}$ and
$\hat{G_2}(k_1,k_2)=-i\frac{\mu b}{2\pi^2(1-\nu)}\frac{k_2^3}{(k_1^2+k_2^2)^2}$
for $G_1(x,y)$ and $G_2(x,y)$ in Eqs.~\eqref{eqn:6-7} and \eqref{eqn:6-8}.
The evolution of $\hat{\tilde{\phi}}$ and $\hat{\tilde{\psi}}$ described by Eqs.~\eqref{eq:evn-fourier-21} and \eqref{eq:evn-fourier-22} is determined by the two eigenvalues of the coefficient matrix solved from
the characteristic polynomial
\begin{equation}\label{eq: characteristic polynomial}\left|
\begin{array}{cc}
\lambda+A+a_1+a_2&R \\
C& \lambda+S+s_1\\
\end{array}
\right|=0,
\end{equation}
where
\begin{eqnarray} \nonumber
&A=\frac{2m_{\text{g}}\mu b^2 }{(1-\nu)BD}\frac{k_1^2k_2^2}{(k_1^2+k_2^2)^2}, \ \
R=\frac{2m_{\text{g}}\mu b^2 }{(1-\nu)B^2}\frac{k_1k_2^3}{(k_1^2+k_2^2)^2}, \vspace{1ex}\\ \nonumber
&C=\frac{2m_{\text{c}}\mu b^2 }{(1-\nu)D^2}\frac{k_1k_2^3}{(k_1^2+k_2^2)^2}, \ \
S=\frac{2m_{\text{c}}\mu b^2 }{(1-\nu)BD}\frac{k_2^4}{(k_1^2+k_2^2)^2}, \vspace{1ex}\\ \nonumber
&a_1=\frac{m_{\text{g}}\mu b^2D }{6(1-\nu)B }\left[ 1 -\frac{3}{2\pi}\frac{B}{D}\right]_{\varepsilon+}k_1^2, \ \
a_2=\frac{m_{\text{g}}\mu b^2 B }{6(1-\nu)D}\left[1-\frac{3}{2\pi}\frac{D}{B}\right]_{\varepsilon+}k_2^2, \vspace{1ex}\\ \nonumber
&s_1=\frac{\varepsilon m_{\text{c}}\mu b^2 }{6(1-\nu)}k_1^2.
\end{eqnarray}
The Fourier coefficients of the small perturbations $\hat{\tilde{\phi}}$ and $\hat{\tilde{\psi}}$ decay when the two eigenvalues $\lambda_1,\lambda_2<0$.
Due to $AS=RC$, the characteristic polynomial in Eq.~\eqref{eq: characteristic polynomial} becomes
\begin{equation}
\lambda^2+(A+a_1+a_2+S+s_1)\lambda+(a_1+a_2)(S+s_1)+As_1=0.
\end{equation}
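This reduction can be checked by expanding the determinant with $RC$ replaced by $AS$, e.g.:

```python
import sympy as sp

lam, A, R, C, S, a1, a2, s1 = sp.symbols('lambda A R C S a_1 a_2 s_1')

# Determinant of Eq. (characteristic polynomial) with RC = AS substituted
det = (lam + A + a1 + a2)*(lam + S + s1) - A*S
target = lam**2 + (A + a1 + a2 + S + s1)*lam + (a1 + a2)*(S + s1) + A*s1
print(sp.simplify(det - target))   # 0
```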
Thus the two eigenvalues are
\begin{eqnarray}
\lambda_{1,2}&=&{\textstyle \frac{-(A+a_1+a_2+S+s_1)\pm\sqrt{(A+a_1+a_2+S+s_1)^2-4[(a_1+a_2)(S+s_1)+As_1]}}{2}}\label{eqn:eigen1}\\
&=&{\textstyle \frac{-(A+a_1+a_2+S+s_1)\pm\sqrt{(A+a_1+a_2-S-s_1)^2+4AS}}{2}}\label{eqn:eigen2}.
\end{eqnarray}
Note that $A, S, a_1, a_2, s_1\geq0$. By Eq.~\eqref{eqn:eigen2}, we know that both $\lambda_1$ and $\lambda_2$ are real. It is easy to conclude from Eq.~\eqref{eqn:eigen1} that $\lambda_{1,2}<0$ when $k_1\neq 0$ or $k_2\neq 0$ (because the term $4[(a_1+a_2)(S+s_1)+As_1]>0$ in this case), and $\lambda_1=\lambda_2=0$ when $k_1=k_2=0$. Therefore, when $(k_1,k_2)\neq (0,0)$, $\hat{\tilde{\phi}}$ and $\hat{\tilde{\psi}}$ always decay and the uniform distribution of dislocations is stable using the derived continuum model in Eq.~\eqref{eq:evn-general-1}.
This stability result provides a basis for the well-posedness of the continuum model in Eq.~\eqref{eq:evn-general-1} as well as for the stability of numerical solutions of it. These topics will be further explored in future work.
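These sign conditions can be spot-checked numerically. The sketch below uses illustrative nondimensional parameter values (not taken from the paper) and interprets the cut-off $[\cdot]_{\varepsilon+}$ as $\max(\cdot,\varepsilon)$, which is an assumption of this sketch; it evaluates the eigenvalues via Eq.~\eqref{eqn:eigen2}:

```python
import numpy as np

# Illustrative nondimensional parameters (not values from the paper)
mu, b, nu = 1.0, 1.0, 1.0/3.0
m_g, m_c = 1.0, 0.5
B, D, eps = 30.0, 50.0, 0.1

def eigenvalues(k1, k2):
    q = (k1**2 + k2**2)**2
    A  = 2*m_g*mu*b**2/((1 - nu)*B*D) * k1**2*k2**2/q
    S  = 2*m_c*mu*b**2/((1 - nu)*B*D) * k2**4/q
    a1 = m_g*mu*b**2*D/(6*(1 - nu)*B) * max(1 - 3*B/(2*np.pi*D), eps) * k1**2
    a2 = m_g*mu*b**2*B/(6*(1 - nu)*D) * max(1 - 3*D/(2*np.pi*B), eps) * k2**2
    s1 = eps*m_c*mu*b**2/(6*(1 - nu)) * k1**2
    disc = np.sqrt((A + a1 + a2 - S - s1)**2 + 4*A*S)  # real discriminant, Eq. (eigen2)
    tot = A + a1 + a2 + S + s1
    return (-tot + disc)/2, (-tot - disc)/2

for k1, k2 in [(1.0, 0.0), (0.0, 1.0), (2.0, 3.0)]:
    l1, l2 = eigenvalues(k1, k2)
    print(k1, k2, l1, l2)   # both eigenvalues negative in each case
```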
When only the continuum long-range Peach-Koehler force is considered, i.e., the second partial derivative terms on the right-hand side of the PDE system in Eq.~\eqref{eq:evn-general-1} vanish, the linearized equations for the small perturbations $\tilde{\phi}$ and $\tilde{\psi}$ in the Fourier space are
\begin{eqnarray}
\hat{\tilde{\phi}}_t
&=&-\frac{2m_{\text{g}}\mu b^2 }{1-\nu}\frac{1}{BD}\frac{k_1^2k_2^2}{(k_1^2+k_2^2)^2}
\hat{\tilde{\phi}}-\frac{2m_{\text{g}}\mu b^2 }{(1-\nu)B^2}\frac{k_1k_2^3}{(k_1^2+k_2^2)^2}\hat{\tilde{\psi}}, \label{eq:evn-fourier-212}\\
\hat{\tilde{\psi}}_t
&=&-\frac{2m_{\text{c}}\mu b^2 }{(1-\nu)D^2}\frac{k_1k_2^3}{(k_1^2+k_2^2)^2}\hat{\tilde{\phi}}
-\frac{2m_{\text{c}}\mu b^2 }{1-\nu}\frac{1}{BD}\frac{k_2^4}{(k_1^2+k_2^2)^2}
\hat{\tilde{\psi}}. \label{eq:evn-fourier-222}
\end{eqnarray}
As in the discussion in Sec.~\ref{sec:long-rang}, when $k_1=0$ or $k_2=0$, there is no stabilizing force (the long-range glide force) for $\tilde{\phi}$; and when $k_2=0$, there is no stabilizing force (the long-range climb force) for $\tilde{\psi}$. In these cases, numerical oscillations in simulations cannot be stabilized without the second-order partial derivative terms.
Recall that the second order partial derivative terms in Eq.~\eqref{eq:evn-general-1} are based on the short-range interactions of dislocations. Those terms coming from the short-range glide forces (in the $\phi$-equation) agree with the glide forces using the discrete dislocation model for uniform dislocation distributions subject to small perturbations in the glide direction. For the climb force,
a regularization term (in the $\psi$-equation) is added in addition to the stabilizing effect provided by the
long-range climb force.
\section{Numerical simulations}
\label{sec:numerical}
In this section, we perform numerical simulations to validate the derived continuum model. The equations are nondimensionalized before the simulations, and we set the Poisson ratio $\nu=1/3$.
\subsection{Comparisons of the continuum force with the discrete model}
We first examine the total glide force in the continuum model including both the long-range and short-range contributions given by Eq.~\eqref{eq:evn-glide-conserv} by comparisons with the discrete dislocation dynamics model.
Recall that the continuum short-range glide force terms are derived from the discrete dislocation model for uniform dislocation distributions subject to small perturbations in the glide direction.
We first consider the dislocation distributions of Case 1 in Sec.~\ref{sec:continuum-short}, where the dislocation distributions are uniform in the $y$ direction. This problem is reduced to a one-dimensional problem depending only on $x$.
{\bf Example 1}
\begin{figure}[htbp]\centering
\subfigure[]
{\label{fig:case1-eg2-xphi1}\includegraphics[width=2.0in]{Case1GlideErf-Phi10B1-eps-converted-to.pdf}}
\subfigure[]
{\label{fig:case1-eg2-r1}\includegraphics[width=2.0in]{Case1GlideErf-Force10B1-eps-converted-to.pdf}}\\
\subfigure[]
{\label{fig:case1-eg2-xphi2}\includegraphics[width=2.0in]{Case1GlideErf-Phi5B1-eps-converted-to.pdf}}
\subfigure[]
{\label{fig:case1-eg2-r2}\includegraphics[width=2.0in]{Case1GlideErf-Force5B1-eps-converted-to.pdf}}\\
\subfigure[]
{\label{fig:case1-eg2-xphi3}\includegraphics[width=2.0in]{Case1GlideErf-Phi1B1-eps-converted-to.pdf}}
\subfigure[]
{\label{fig:case1-eg2-r3}\includegraphics[width=2.0in]{Case1GlideErf-Force1B1-eps-converted-to.pdf}}
\caption{Example 1: Continuum glide force compared with the discrete model for distributions of dislocation walls for different values of concentration width $w$ (defined in Eq.~\eqref{eqn:width}). The concentration width $w=10B$ in (a) and (b), $w=5B$ in (c) and (d), and $w=B$ in (e) and (f). Images (a), (c), and (e) show the profile of $\phi(x)$ (red curve) and locations of the dislocation walls for each value of the width $w$. The black dots on the horizontal line indicate the locations of the dislocation walls, and the blue dots show the corresponding values of $\phi$ in the continuum model. Images (b), (d), and (f) show values of the glide force $f_{\rm g}$ on the dislocation walls calculated by using the continuum model (red circles) and by using the discrete dislocation model (blue stars).
} \label{fig:case1-eg2}
\end{figure}
Assume the dislocation distribution is described by
\begin{equation}\label{eqn:width}
\phi(x)=
\begin{cases}
-\frac{Nb}{2} &\mbox{if $x=-\frac{NB}{2}$ }\\
\frac{Nb}{2}{\rm erf}(\frac{x}{w}) &\mbox{if $-\frac{NB}{2}<x<\frac{NB}{2}$ }\\
\frac{Nb}{2} &\mbox{if $x=\frac{NB}{2}$ }
\end{cases}
\end{equation}
where ${\rm erf}(x)=\frac{2}{\sqrt{\pi}}\int_0^x e^{-u^2} du$, and $\psi(y)=\frac{b}{D}y$. Periodic boundary conditions are assumed in the $x$ direction. We set $D=50b$, $B=30b$, and $N=40$. The dislocation walls are concentrated within a central region of width $w$. We perform simulations for the cases of $w=10B$, $w=5B$, and $w=B$. The profiles of the DDPF $\phi$ and the locations of the dislocation walls are shown in Fig.~\ref{fig:case1-eg2} (a), (c), and (e), and the corresponding glide forces calculated by the continuum model in Eq.~\eqref{eq:evn-glide-conserv} (which reduces to Eq.~\eqref{eq:case1-result5} in this case) and by the discrete dislocation model are plotted in Fig.~\ref{fig:case1-eg2} (b), (d), and (f), respectively. It can be seen that the results of the continuum model agree
excellently with those of the discrete model for smoothly varying (the case of $w=10B$ in Fig.~\ref{fig:case1-eg2} (a),(b)) and even concentrated (the case of $w=5B$ in Fig.~\ref{fig:case1-eg2} (c),(d)) distributions of dislocation walls. For the extremely concentrated distribution of dislocation walls shown in Fig.~\ref{fig:case1-eg2} (e) with $w=B$, the overall continuum approximation in Fig.~\ref{fig:case1-eg2} (f) is still reasonably good. At the two ends of the concentrated distribution, where the dislocation density changes dramatically, our continuum approximation gives the strongest force, as in the discrete model, although there are discrepancies in the exact values. (Recall that the continuum formulations are derived based on smoothly-varying dislocation densities.)
{\bf Example 2}
In this example, we examine the continuum glide force in Eq.~\eqref{eq:case1-result5} for different values of the ratio $B/D$ for distributions of dislocation walls with uniform active slip plane spacing. Recall that $B$ is the average inter-dislocation distance within a slip plane and $D$ is the average slip plane spacing. For these dislocation distributions, we choose the DDPFs $\psi(y)=\frac{b}{D}y$ and $\phi(x)$ determined by the following equation
\begin{equation}\label{eq:case1-x-phi1}
\frac{b}{B}x=\phi+b\sin\left(\frac{2\pi\phi}{40b}\right).
\end{equation}
This is a uniform dislocation wall distribution with a perturbation in the $x$ direction, and the DDPF
$\phi$ can be written as $\phi(x)=\frac{b}{B}x+\tilde {\phi}(x)$, where $\tilde {\phi}(x)$ is a small perturbation; see Fig.~\ref{fig:case1b}(a). The period of this distribution is $N=40$ dislocation walls. We fix $D=50b$ and vary the value of $B$.
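Since Eq.~\eqref{eq:case1-x-phi1} gives $x$ explicitly as a function of $\phi$, the wall positions can be generated directly if one wall is associated with each increment of $b$ in $\phi$ (the DDPF contour-line interpretation, assumed in this sketch):

```python
import numpy as np

b, B = 1.0, 30.0   # lattice units: b = 1, with B = 30b as one sample value
N = 40             # dislocation walls per period

# Implicit relation (b/B)x = phi + b*sin(2*pi*phi/(40b)), read as x(phi);
# wall n corresponds to phi = n*b.
phi_n = b*np.arange(-N//2, N//2)
x_n = (B/b)*(phi_n + b*np.sin(2*np.pi*phi_n/(40*b)))

spacing = np.diff(x_n)
print(spacing.mean())   # close to the mean spacing B = 30
```

The resulting wall spacings oscillate around $B$, as expected for a small sinusoidal perturbation of the uniform distribution.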
\begin{figure}[htpb]
\centering
\subfigure[]
{\label{fig:case1a1}\includegraphics[width=1.5in]{Case1GlideSin-Phi1-eps-converted-to.pdf}}
\subfigure[]
{\label{fig:case1b1}\includegraphics[width=1.5in]{Case1GlideSin-ForceB151-eps-converted-to.pdf}}
\subfigure[]
{\label{fig:case1b2}\includegraphics[width=1.5in]{Case1GlideSin-ForceB401-eps-converted-to.pdf}}\\
\subfigure[]
{\label{fig:case1b3}\includegraphics[width=1.5in]{Case1GlideSin-ForceB501-eps-converted-to.pdf}}
\subfigure[]
{\label{fig:case1b4}\includegraphics[width=1.5in]{Case1GlideSin-ForceB1001-eps-converted-to.pdf}}
\subfigure[]
{\label{fig:case1b5}\includegraphics[width=1.5in]{Case1GlideSin-ForceB2001-eps-converted-to.pdf}}
\caption{Example 2: Continuum glide force compared with that of the discrete model for different values of the ratio $B/D$ for distributions of dislocation walls with uniform active slip plane spacing (given by Eq.~\eqref{eq:case1-x-phi1}). (a) The profile of the DDPF $\phi$ (red curve) and locations of the dislocation walls. The black dots on the horizontal line indicate the locations of the dislocation walls, and the blue dots show the corresponding values of $\phi$ in the continuum model. Images (b)-(f) show the continuum glide force (red circles) compared with the force calculated from the discrete dislocation model (blue stars) for the cases of (b) $B=15b$, (c) $B=40b$, (d) $B=50b$, (e) $B=100b$, and (f) $B=200b$, respectively.}\label{fig:case1b}
\end{figure}
The values of the glide force calculated by the continuum model and comparisons with the results of the discrete dislocation model are shown in Fig.~\ref{fig:case1b}(b)-(f) for the cases of $B=15b,40b,50b,100b,200b$.
When the inter-dislocation wall distance $B$ is smaller than the slip plane spacing $D$, as shown in Fig.~\ref{fig:case1b}(b), the continuum glide force agrees excellently with the force in the discrete model. In this case, the glide force is significant: around $10^{-3}\mu b$, in agreement with the strong interaction between neighboring dislocation walls. When the inter-dislocation wall distance $B$ is comparable with the slip plane spacing $D$, as shown in Fig.~\ref{fig:case1b}(c) and (d), the continuum glide force agrees well with the force in the discrete model, with small errors. In this case, the glide force becomes smaller: around $10^{-4}\mu b$, which is again consistent with the weak interaction between neighboring dislocation walls. When the inter-dislocation wall distance $B$ is much greater than the slip plane spacing $D$, the interaction between neighboring dislocation walls should be negligible, which is reflected by the small values of the forces calculated by the continuum and the discrete models shown in Fig.~\ref{fig:case1b}(e) and (f) (at or below $10^{-6}\mu b$ and $10^{-10}\mu b$, respectively). In this sense, the continuum model still provides a good approximation to the discrete model in this case, although the values calculated by the two models are not necessarily exactly the same. These differences, at negligible orders, are due to the simplification of our continuum model in Eq.~\eqref{eq:case1-result5} from its exact form in Eqs.~\eqref{eq:case1-dis-03} and \eqref{eq:fg1} using the simplification in Eq.~\eqref{eq:truncationg1}.
{\bf Example 3}
\begin{figure}[htbp]
\centering
\subfigure[Continuum model (full)]
{\label{fig:2d-glide-D30B50-dis}\includegraphics[width=2.2in]{Continuum-eps-converted-to.pdf}}
\subfigure[Continuum long-range force]
{\label{fig:2d-glide-D30B50-con}\includegraphics[width=2.2in]{Continuum0-eps-converted-to.pdf}}\\
\subfigure[Discrete model]
{\label{fig:2d-glide-D30B50-con0}\includegraphics[width=2.2in]{Discrete-eps-converted-to.pdf}}\\
\subfigure[Error of continuum model (full)]
{\label{fig:2d-glide-D30B50-err}\includegraphics[width=2.2in]{error-eps-converted-to.pdf}}
\subfigure[Error of continuum long-range force]
{\label{fig:2d-glide-D30B50-err0}\includegraphics[width=2.2in]{error0-eps-converted-to.pdf}}
\caption{Example 3: Continuum glide force and comparison with that of the discrete dislocation model for a general dislocation distribution given by Eq.~\eqref{eqn:generaldistribution}. The force unit is $\mu b$.}\label{fig:fullforce}
\end{figure}
In this example, we examine the continuum glide force in Eq.~\eqref{eq:evn-glide-conserv} for a general dislocation distribution. The dislocation distribution is given by
\begin{equation}\label{eqn:generaldistribution}
\left\{
\begin{array}{l}
\phi(x,y)=\frac{b}{B}x+
0.02 \sin \left(\frac{2\pi}{L_1}10 x\right) \sin \left( \frac{2\pi}{L_1}2y\right), \\ \psi(x,y)=\frac{b}{D}y+
0.02\sin \left(\frac {2\pi}{L_2}2 x \right)\sin \left( \frac{2\pi}{L_2}5 y\right),
\end{array}
\right.
\end{equation}
where $D=50b$, $B=30b$, $L_1=40B$ and $L_2=60D$. Here $L_1$ and $L_2$ are the periods of the perturbations in the $x$ and $y$ directions, respectively, and the wavenumbers of the perturbations in DDPFs $\phi$ and $\psi$ are $(10,2)$ and $(2,5)$, respectively.
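For such a general distribution, the dislocation density field can be evaluated numerically from Eq.~\eqref{eqn:density} by finite differences; a minimal sketch (grid size chosen for illustration, not matching the simulations in the paper):

```python
import numpy as np

b = 1.0
B, D = 30.0, 50.0
L1, L2 = 40*B, 60*D

n = 256
x = np.linspace(0.0, L1, n, endpoint=False)
y = np.linspace(0.0, L2, n, endpoint=False)
X, Y = np.meshgrid(x, y, indexing='ij')

# DDPFs of the general distribution, Eq. (generaldistribution)
phi = (b/B)*X + 0.02*np.sin(2*np.pi*10*X/L1)*np.sin(2*np.pi*2*Y/L1)
psi = (b/D)*Y + 0.02*np.sin(2*np.pi*2*X/L2)*np.sin(2*np.pi*5*Y/L2)

phi_x, phi_y = np.gradient(phi, x, y, edge_order=2)
psi_x, psi_y = np.gradient(psi, x, y, edge_order=2)
rho = (phi_x*psi_y - phi_y*psi_x)/b**2   # (grad phi x grad psi).k / b^2

print(rho.mean()*B*D)   # close to 1: mean density is approximately 1/(BD)
```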
Fig.~\ref{fig:fullforce} shows the values of the continuum glide force calculated by Eq.~\eqref{eq:evn-glide-conserv} and comparisons with the results obtained using the discrete dislocation model. It can be seen that the glide force profile calculated by the continuum model including both the long-range and short-range interactions (Fig.~\ref{fig:fullforce}(a)) excellently captures the overall features of the glide force distribution calculated by the discrete dislocation model (Fig.~\ref{fig:fullforce}(c)), whereas
the continuum long-range glide force alone (Fig.~\ref{fig:fullforce}(b)) loses much of the detailed information contained in the discrete result. Moreover, as shown in Fig.~\ref{fig:fullforce}(d) and (e), the full continuum force reduces the maximum error of the continuum long-range force by half, even though the continuum short-range terms are derived only from special distributions of dislocations.
\subsection{Dynamics simulations}
In this subsection, we present some simulation results for the dynamics of the dislocation structures and compare the results with those of the discrete model. We consider the dislocation distribution of Case 1 in Sec.~\ref{sec:continuum-short}. In this case, the continuum model is given by Eq.~\eqref{eq:case1-evn3-simulation}.
We fix a uniform active slip plane spacing $D=50b$, i.e., $\psi_y=b/D$.
The initial state of the evolution is a system of $N=40$ dislocation walls with average spacing $B=30b$. The left half of these dislocation walls consists of dislocations with line direction along $+z$, and the right half consists of dislocations with line direction along $-z$. Initially, these dislocation walls are equally spaced. An initial profile of the DDPF $\phi$ is shown by the blue curve in Fig.~\ref{fig:ddpf-evolve}. We use periodic boundary conditions in the simulations. The dislocation walls at the two ends of the simulation domain are fixed.
We evolve the dislocation system under applied shear stresses $\sigma_{xy}^0=-0.0009\mu b$ and $-0.009\mu b$.
\begin{figure}[htbp]\centering
\label{fig-evn1-potential-trend}\includegraphics[width=3in]{phievolution-0003-eps-converted-to.pdf}
\caption{Evolution of the dislocation wall system (represented by the evolution of the DDPF $\phi$) and equilibrium locations of dislocation walls (dots on the $x$-axis) under applied shear stress $\sigma_{xy}^0=-0.0009\mu b$. The blue curve is the initial profile of $\phi(x)$, and the red curve is the profile of $\phi(x)$ of the final, equilibrium state. }\label{fig:ddpf-evolve}
\end{figure}
\begin{figure}[htbp]\centering
\subfigure[Wall locations for $\sigma_{xy}^0=-0.0009\mu b$]
{\label{fig-evn1-equlibrium-density}\includegraphics[width=2.2in]{phicomparisonbetweencontinuumanddiscrete-eps-converted-to.pdf}}
\subfigure[Wall density for $\sigma_{xy}^0=-0.0009\mu b$]
{\label{fig-evn1-equilibrium-comprison}\includegraphics[width=2.2in]{density-0003-eps-converted-to.pdf}}\\
\subfigure[Wall locations for $\sigma_{xy}^0=-0.009\mu b$]
{\label{fig-evn2-equlibrium-density}\includegraphics[width=2.2in]{comparisondiscretecontinuumresults-003-eps-converted-to.pdf}}
\subfigure[Wall density for $\sigma_{xy}^0=-0.009\mu b$]
{\label{fig-evn2-equilibrium-comprison}\includegraphics[width=2.2in]{density-003-eps-converted-to.pdf}}
\caption{Equilibrium dislocation wall pile-ups calculated using our continuum model and comparisons with the results of the discrete dislocation model under the applied shear stress $\sigma_{xy}^0=-0.0009\mu b$ (images (a) and (b)) and $\sigma_{xy}^0=-0.009\mu b$ (images (c) and (d)). The profiles of the DDPF $\phi$ and locations of the dislocation walls in the equilibrium states are shown in (a) for $\sigma_{xy}^0=-0.0009\mu b$ and (c) for $\sigma_{xy}^0=-0.009\mu b$. The black dots on the $x$-axis indicate the locations of the dislocation walls, and the blue dots show the corresponding values of $\phi$ in the continuum model. The densities of the dislocation walls (given by $\phi'(x)$) in the equilibrium states are shown in (b) for $\sigma_{xy}^0=-0.0009\mu b$ and (d) for $\sigma_{xy}^0=-0.009\mu b$.}\label{fig:dynamics-equilibrium}
\end{figure}
Evolution of the dislocation wall system represented by the DDPF $\phi$ under applied shear stress $\sigma_{xy}^0=-0.0009\mu b$ is shown in Fig.~\ref{fig:ddpf-evolve}. It can be seen that during the evolution,
some opposite-direction dislocation wall pairs initially in the middle annihilate, and the remaining dislocation walls pile up at the two ends of the domain. Finally, an equilibrium state is reached, in which the $+z$ dislocation walls are piled up at the left end of the domain and the $-z$ dislocation walls are piled up at the right end of the domain.
The obtained equilibrium dislocation wall distributions under applied shear stresses $\sigma_{xy}^0=-0.0009\mu b$ and $-0.009\mu b$, together with comparisons with the results obtained by the discrete dislocation model, are shown in Fig.~\ref{fig:dynamics-equilibrium}.
In both cases, the simulation results using the continuum model agree excellently with the results of the discrete dislocation model for these pile-ups of dislocation walls, even though the dislocation wall density is high in the pile-ups and vanishes in the middle of the domain.
\section{Conclusions}
In this study, we have considered
systems of parallel straight dislocation walls and have identified four cases of these dislocation structures where the continuum long-range glide or climb force vanishes but the corresponding Peach-Koehler force from the discrete dislocation model does not. We have developed continuum descriptions
for the short-range dislocation interactions for these four cases by using asymptotic analysis.
The obtained continuum short-range interaction formulas are incorporated
in the continuum model for dislocation dynamics based on a pair of
dislocation density potential functions that represent continuous distributions of dislocations.
This derived continuum model is able to describe the anisotropic dislocation interaction and motion. It has been shown that after incorporating these short-range interaction terms, the continuum model provides a strong stabilizing effect, as the discrete dislocation dynamics model does.
Since these short-range interaction terms are in the form of second order partial derivatives of the DDPFs $\phi$ and $\psi$, they also serve as regularization terms in the evolution equations of $\phi$ and $\psi$.
The derived continuum model is validated by
comparisons with the discrete dislocation dynamical simulation results.
Multiple pairs of DDPFs can be employed in the continuum model to describe the dynamics of dislocations with multiple Burgers vectors \cite{Zhu_continuum3D}. The short-range interactions between dislocations with different Burgers vectors may involve dislocation reaction and dissociation in addition to the elastic interactions \cite{Hirth,ZhuScripta2016}. Continuum formulations incorporating these interactions will be explored in future work.
\section{Introduction}
Many fields of astrophysics aim to measure increasingly faint signals. For instance, there is great interest at present in detecting and characterizing the B-mode \cite{Zaldarriaga:1997p247} of the polarized Cosmic Microwave Background (CMB). The strength of this signal is not yet determined by theory, but a strong upper limit is 170~nK \cite{Page:2007p205}, a tiny fraction of the CMB total intensity signal ($\sim2.7$~K).
An instrument built to detect very faint signals will almost certainly be heavily affected by systematic errors. It is increasingly important to be able to model the effects of receiver systematic errors on the measured signal, and on the receiver sensitivity. We want to be able to predict the level of receiver systematic errors, show their impact on the gathered data, and make quantitative comparisons between different receiver architectures.
Previous analytic and semi-analytic approaches to characterizing systematic effects in receivers have usually employed Jones matrices to describe receiver components and Mueller matrices to characterize the effects of receiver systematics on the observed Stokes parameters \cite{Hu:2003p1726, ODea:2007p1678}. The use of Jones matrices to describe individual receiver components has several shortcomings. Only the forward path of the signal through the instrument is modeled -- internal scattering caused by reflections from poorly matched components is not included; and Jones matrix modeling is unable to describe component noise, and hence receiver sensitivity. Modeling a receiver with a full analytic description of the outputs and sensitivity in terms of individual component parameters allows us to identify which parameters of each component in a receiver are most important, and concentrate our efforts on improving them.
This paper introduces a technique and software for developing full analytic descriptions of receiver outputs and sensitivities in terms of lab-measurable errors in individual components. In this technique, components are modeled by electrical scattering matrices. When describing a network of components with Jones matrices, the forward-path cascaded response can be obtained through simple matrix manipulation and multiplication. The scattering matrix formulation does not share this simplicity of calculation: only the case of cascaded 2-port devices is amenable to a relatively simple analytic solution. This paper describes an algorithm for calculating the response of arbitrarily connected networks of components. We present software which implements this algorithm, and apply it to two common polarimeter architectures: differencing polarimeters, and pseudo-correlation polarimeters.
This software allows us to make robust analytic comparisons of receiver architectures. Errors in individual receiver components can be parameterized and propagated into the description of the receiver performance, e.g. the instrument Mueller matrix. We hence have a powerful tool for guiding the instrument design process and diagnosing the causes of non-ideal instrument behavior.
\section{Electrical Scattering Matrices}
We model the behavior of individual receiver components, and the full receiver, using electrical scattering matrices. The electrical scattering matrix (hereafter referred to as the scattering matrix) is a representation of a network using the ideas of incident, reflected, and transmitted waves. It provides a complete description of an $N$-port network as seen at its $N$ ports. A significant advantage of modeling receiver components with scattering matrices is that \emph{noise} can easily be included in the modeling. The noise produced by a device is modeled with a noise wave vector; see e.g. \cite{Wedge:1992p149}.
Consider the arbitrary $N$-port network shown in Figure~\ref{fig:N-port_network}. We denote the incident wave at port $i$ by $V_{i}^{+}$, the reflected wave by $V_{i}^{-}$, and the noise wave produced by the network at that port by $c_{i}$. These quantities are related by the scattering matrix $\mathbf{S}$ and noise wave vector $\mathbf{c}$ as follows:
\begin{equation}
\renewcommand{\arraystretch}{1.4}
\begin{bmatrix}
V_{1}^{-} \\
\vphantom{\vdots} V_{2}^{-} \\
\vphantom{\vdots} \vdots \\
\vphantom{\vdots} V_{N}^{-}
\end{bmatrix}
=
\begin{bmatrix}
S_{11} & S_{12} & \cdots & S_{1N} \\
\vphantom{\vdots} S_{21} & & & \vdots \\
\vphantom{\vdots} \vdots & & & \\
\vphantom{\vdots} S_{N1} & \cdots & & S_{NN}
\end{bmatrix}
\begin{bmatrix}
V_{1}^{+} \\
\vphantom{\vdots} V_{2}^{+} \\
\vphantom{\vdots} \vdots \\
\vphantom{\vdots} V_{N}^{+}
\end{bmatrix}
+
\begin{bmatrix}
c_{1} \\
\vphantom{\vdots} c_{2} \\
\vphantom{\vdots} \vdots \\
\vphantom{\vdots} c_{N}
\end{bmatrix}
\label{eqn:scattering_matrix_description}
\end{equation}
The noise wave voltages $c_i$ of an $N$-port network are complex time-varying random variables characterized by a correlation matrix $\mathbf{C}$
\begin{align*}
\mathbf{C} = & \langle \mathbf{c} \otimes \mathbf{c}^{\dagger} \rangle \\
= &
\begin{bmatrix}
\langle \vert c_1 \vert^{2} \rangle & \langle c_1 c_2^* \rangle & \cdots & \langle c_1 c_N^* \rangle \\
\vphantom{\vdots} \langle c_2 c_1^* \rangle & & & \vdots \\
\vphantom{\vdots} \vdots & & & \\
\vphantom{\vdots} \langle c_N c_1^* \rangle & \cdots & & \langle \vert c_N \vert^{2} \rangle
\end{bmatrix}
\end{align*}
where the angle brackets indicate time averaging, $\dagger$ indicates the conjugate transpose operation, and $\otimes$ indicates the outer product (or Kronecker product). The diagonal terms of $\mathbf{C}$ give the noise power deliverable at each port in a 1~Hz bandwidth. The off-diagonal terms are correlation products. The noise correlation matrix $\mathbf{C}$ for a passive network is determined from its scattering matrix $\mathbf{S}$ by \cite{Wedge:1991p159}
\begin{equation}
\mathbf{C} = kT(\mathbf{I}-\mathbf{S}\mathbf{S}^{\dagger}) \label{eqn:noise_correlation_matrix_for_passive_device}
\end{equation}
where $k$ is Boltzmann's constant, $T$ is the physical temperature of the network, and $\mathbf{I}$ is the identity matrix. The noise correlation matrix for an active network can be determined by measurement or modeling.
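For example, for a matched 3~dB attenuator at physical temperature $T$ (an illustrative passive 2-port with $|S_{21}|^2=1/2$ and no reflections), Equation~\ref{eqn:noise_correlation_matrix_for_passive_device} gives a diagonal noise correlation matrix with entries $kT/2$:

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant [J/K]
T = 290.0            # physical temperature [K]

# Matched 3 dB attenuator: |S21|^2 = 1/2, no reflections (illustrative example)
S = np.array([[0.0, 1/np.sqrt(2)],
              [1/np.sqrt(2), 0.0]], dtype=complex)

C = k_B*T*(np.eye(2) - S @ S.conj().T)   # Eq. (noise correlation, passive network)
print(C.real/(k_B*T))                    # diagonal entries 0.5, i.e. kT/2 per Hz
```

The attenuator thus delivers half of the thermal noise power $kT$ per unit bandwidth at each port, with no correlation between ports.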
\section{Solving Arbitrary Networks}
Consider the arbitrarily connected network of $N$-port networks shown in Figure~\ref{fig:arbitrary_N_port_network}. We need an algorithm to calculate the scattering matrix $\mathbf{S}$ and noise wave vector $\mathbf{c}$ which describe the connected network. The algorithm derived here is an extension of the algorithm described in \cite{Filipsson:1981p60}, with added noise wave vector manipulation. Similar algorithms are used numerically in {\verb SUPERMIX } \cite{Ward:1999p26}.
First, let us consider the effect of connecting together ports $k$ and $m$ of an $N$-port network described by Equation~\ref{eqn:scattering_matrix_description}. Connecting the ports means that $V_{k}^{+} = V_{m}^{-}$ and $V_{k}^{-} = V_{m}^{+}$. Manipulation of rows $k$ and $m$ of Equation~\ref{eqn:scattering_matrix_description} then gives us the expressions
\begin{align}
V_m^- & = \sum_{i\neq k,m} \frac{S_{mi}}{1-S_{mk}}V_{i}^{+} + \frac{S_{mm}}{1-S_{mk}}V_{k}^{-} + \frac{c_{m}}{1-S_{mk}} \label{eqn:Vmminus_expression}\\
V_k^- & = \sum_{i\neq k,m} \frac{S_{ki}}{1-S_{km}}V_{i}^{+} + \frac{S_{kk}}{1-S_{km}}V_{m}^{-} + \frac{c_{k}}{1-S_{km}} \label{eqn:Vkminus_expression}
\end{align}
By substituting Equations~\ref{eqn:Vmminus_expression} and \ref{eqn:Vkminus_expression} into each other, we obtain explicit expressions for $V_{m}^{-}$ and $V_k^{-}$ in terms of the incident waves alone. Substituting these expressions into Equation~\ref{eqn:scattering_matrix_description}, we obtain a new expression for the reflected wave $V_{i}^{-}$:
\begin{align}
\nonumber V_i^- = & \sum_{j\neq k,m} \Bigg[ S_{ij} + \frac{(1-S_{km})(1-S_{mk})}{(1-S_{km})(1-S_{mk})-S_{kk}S_{mm}} \\
\nonumber & \Bigg( \frac{S_{ik}S_{mj}}{1-S_{mk}} + \frac{S_{ik}S_{mm}S_{kj}}{(1-S_{mk})(1-S_{km})} \\
\nonumber & + \frac{S_{im}S_{kj}}{1-S_{km}} + \frac{S_{im}S_{kk}S_{mj}}{(1-S_{km})(1-S_{mk})}\Bigg) \Bigg]V_j^+ \\
\nonumber & + c_i + S_{ik} \Bigg( \frac{(1-S_{km})c_{m}+S_{mm}c_{k}}{(1-S_{km})(1-S_{mk})-S_{kk}S_{mm}}\Bigg) \\
& + S_{im} \Bigg( \frac{(1-S_{mk})c_{k}+S_{kk}c_{m}}{(1-S_{km})(1-S_{mk})-S_{kk}S_{mm}}\Bigg) \label{eqn:new_Viminus_expression}
\end{align}
From Equation~\ref{eqn:new_Viminus_expression} we obtain replacement expressions for the elements $S_{ij}$ of $\mathbf{S}$ and the noise waves $c_i$:
\begin{align}
\nonumber S_{ij}^{\textrm{new}} = & S_{ij} + A \Big[S_{ik}S_{mj}(1-S_{km}) + S_{ik}S_{kj}S_{mm} \\
& + S_{im}S_{kj}(1-S_{mk})+S_{im}S_{mj}S_{kk}\Big] \label{eqn:S_replacement_formula} \\
\nonumber c_{i}^{\textrm{new}} = & c_{i} + A \Big[ \big(S_{im}S_{kk}+S_{ik}(1-S_{km})\big)c_{m} \\
& +\big(S_{ik}S_{mm}+S_{im}(1-S_{mk})\big)c_{k}\Big] \label{eqn:c_replacement_formula} \\
\nonumber \textrm{where } A = & \frac{1}{(1-S_{km})(1-S_{mk})-S_{kk}S_{mm}}
\end{align}
Rows and columns $k$ and $m$ are then removed from $\mathbf{S}$ and rows $k$ and $m$ are removed from $\mathbf{c}$ to create the scattering matrix and noise vector which describe the new $(N-2)$-port network.
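As a concrete illustration, the replacement step can be sketched in a few lines of pure Python (this is our own numerical illustration with 0-based indices, not the symbolic MATLAB implementation described later):

```python
def connect_ports(S, c, k, m):
    """Connect ports k and m (0-based) of an N-port described by
    scattering matrix S (list of lists) and noise wave vector c.
    Applies the replacement formulae for S_ij and c_i, then deletes
    rows/columns k and m, returning the (N-2)-port description."""
    A = 1.0 / ((1 - S[k][m]) * (1 - S[m][k]) - S[k][k] * S[m][m])
    keep = [i for i in range(len(S)) if i not in (k, m)]
    S_new = [[S[i][j]
              + A * (S[i][k] * S[m][j] * (1 - S[k][m])
                     + S[i][k] * S[k][j] * S[m][m]
                     + S[i][m] * S[k][j] * (1 - S[m][k])
                     + S[i][m] * S[m][j] * S[k][k])
              for j in keep] for i in keep]
    c_new = [c[i]
             + A * ((S[i][m] * S[k][k] + S[i][k] * (1 - S[k][m])) * c[m]
                    + (S[i][k] * S[m][m] + S[i][m] * (1 - S[m][k])) * c[k])
             for i in keep]
    return S_new, c_new
```

Cascading two matched thru lines by connecting their inner ports, for example, yields a single thru whose transmission is the product of the individual transmissions, with the inner noise wave carried out to the surviving ports.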
\begin{figure}
\centering
\subfloat[]{
\includegraphics[width=3.35in]{N-port_network.pdf}
\label{fig:N-port_network}
}
\\
\subfloat[]{
\includegraphics[width=2in]{arbitrary_Nport_network.pdf}
\label{fig:arbitrary_N_port_network}
}
\caption{(a) An arbitrary $N$-port network. The total signal at each port $p_i$ consists of an incident signal $V_i^+$, a reflected signal $V_i^-$, and a noise signal $c_i$. The reflected signal $V_i^-$ is a weighted sum of the incident signal at port $i$ and transmitted signals from the other ports of the network, the coefficients of the sum being the elements of the network scattering matrix $\mathbf{S}$. The noise signals are given by the noise wave vector $\mathbf{c}$. (b) An arbitrarily connected network of N-port devices.}
\label{anotherlabel}
\end{figure}
\subsection{Algorithm}\label{par:network_evaluation_algorithm}
To obtain the scattering matrix and noise wave vector which describe the arbitrarily connected network shown in Figure~\ref{fig:arbitrary_N_port_network}, begin by forming the scattering matrix and noise wave vector which describe the unconnected network:
\begin{align}
\mathbf{S}
=
\begin{bmatrix}
\mathbf{S}_1 & \mathbf{0} & \cdots & \mathbf{0} \\
\vphantom{\vdots} \mathbf{0} & \mathbf{S}_2 & & \vdots \\
\vphantom{\vdots} \vdots & & & \\
\vphantom{\vdots} \mathbf{0} & \cdots & & \mathbf{S}_N
\end{bmatrix}
,
\mathbf{c}
=
\begin{bmatrix}
\mathbf{c}_1 \\
\vphantom{\vdots} \mathbf{c}_2 \\
\vphantom{\vdots} \vdots \\
\vphantom{\vdots} \mathbf{c}_N
\end{bmatrix}
\label{eqn:scattering_of_unconnected_network}
\end{align}
We then successively form each connection in the network. For each connection, find the rows and columns $k$ and $m$ of $\mathbf{S}$ and $\mathbf{c}$ in Equation~\ref{eqn:scattering_of_unconnected_network} which correspond to the ports being connected. Use the replacement formulae given by Equations~\ref{eqn:S_replacement_formula} and \ref{eqn:c_replacement_formula} to adjust the $\mathbf{S}$ matrix and $\mathbf{c}$ vector. Remove rows and columns $k$ and $m$ from $\mathbf{S}$, and rows $k$ and $m$ from $\mathbf{c}$. Repeat for each remaining connection until we are left with the scattering matrix $\mathbf{S}$ and noise vector $\mathbf{c}$ which describe the fully connected network.
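As a sketch of the first step (again pure Python with 0-based indices, our own illustration rather than the MATLAB implementation), forming the unconnected-network description is a simple block-diagonal assembly, with port $p$ of device $q$ occupying global row $\mathrm{offset}_q + p$:

```python
def unconnected_network(S_blocks, c_blocks):
    """Form the block-diagonal scattering matrix and stacked noise
    wave vector describing the unconnected network (pure-Python
    lists; device q's port p becomes global port offset_q + p)."""
    N = sum(len(Sb) for Sb in S_blocks)
    S = [[0.0] * N for _ in range(N)]
    offset = 0
    for Sb in S_blocks:
        n = len(Sb)
        for i in range(n):
            for j in range(n):
                S[offset + i][offset + j] = Sb[i][j]
        offset += n
    # noise wave vectors are simply stacked
    c = [ci for cb in c_blocks for ci in cb]
    return S, c
```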
\section{Software Implementation} \label{sec:software_implementation}
We have derived an algorithm in \S\ref{par:network_evaluation_algorithm} for finding the scattering matrix and noise wave vector which describe an arbitrarily connected network. We need to be able to apply it to arbitrary receivers with parameterized scattering matrices describing the receiver components and obtain analytic expressions for the outputs and noise in terms of the component parameters.
The algorithm must be implemented in a programming language with the ability to manipulate symbolic algebraic expressions. This programming language must also support pointers (or an equivalent data structure) to allow the creation of a navigable network. We implemented the algorithm in MATLAB\footnote{http://www.mathworks.com/}, which has a powerful and well-developed symbolic algebra toolbox. While it does not have a native pointer data type (as of version R2008b), a third party open-source pointer library\footnote{http://code.google.com/p/pointer/} adds this capability. The software package we developed to perform this modeling is called {\verb SNS }\footnote{Download at http://www.astro.caltech.edu/$\sim$ogk/SNS/}.
\subsection{Representing a Network}
\begin{figure}
\centering
\includegraphics[width=3.25in]{representing_a_network_nodes_Nports.pdf}
\caption{Schematic showing the nature of node and $N$-port objects and how they connect to each other in the software.}
\label{fig:representing_a_network}
\end{figure}
A network is represented by nodes and $N$-port objects, as shown in Figure~\ref{fig:representing_a_network}. They are both pointer objects. Each $N$-port object contains an array of references to the nodes connected to each of its $N$ ports, a scattering matrix $\mathbf{S}$, a noise wave vector $\mathbf{c}$, and a variable $N$, the number of ports of the object.
Each node object contains two pointers, which refer to the $N$-port objects the node connects to in the ``forward'' and ``backward'' directions, together with the port numbers at which those connections are made ($N_{fp}$ and $N_{bp}$ respectively). Note that the forward and backward directions are completely arbitrary; they are merely a helpful concept when trying to visualize the operation of the algorithms which act on the network.
The network is constructed by creating all the node and $N$-port objects using functions called {\verb makeNode() } and {\verb makeNport() }, assigning scattering matrices and noise wave vectors to the $N$-port objects, and connecting each node to its forward and backward $N$-port objects. A {\verb connectNode } function hides the complexity of assigning references to the appropriate array locations and assigning the appropriate port numbers to variables.
Nodes are classified into four types: input, output, central, and terminated. We want to calculate the performance of a network in terms of the response seen at the outputs due to signals presented at the inputs. The central and terminated nodes are removed by the network-solving program.
Once all the objects have been created, assigned matrices and vectors, and connected, it suffices to describe the network by the four arrays of nodes. Due to the fully connected nature of the network representation it is possible to start at any node and navigate to any other node by following the appropriate links between objects.
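The same doubly linked structure can be sketched in any language with object references. The following pure-Python classes are a hypothetical analogue of the MATLAB objects and the {\verb connectNode } helper (the names and the 0-based port numbering are ours, not part of {\verb SNS }):

```python
class Nport:
    """An N-port device: scattering matrix S, noise wave vector c,
    and references to the node attached to each of its N ports."""
    def __init__(self, S, c):
        self.S, self.c = S, c
        self.N = len(S)
        self.nodes = [None] * self.N

class Node:
    """A connection point referring to the N-ports it joins in the
    (arbitrary) forward and backward directions, plus the port
    numbers used on each side."""
    def __init__(self):
        self.fwd = self.bwd = None   # forward / backward N-port objects
        self.fp = self.bp = None     # corresponding port numbers

def connect_node(node, bwd, bp, fwd, fp):
    """Wire up references on both the node and the N-port objects,
    mirroring the connectNode() helper (ports are 0-based here)."""
    node.bwd, node.bp, node.fwd, node.fp = bwd, bp, fwd, fp
    if bwd is not None:
        bwd.nodes[bp] = node
    if fwd is not None:
        fwd.nodes[fp] = node
```

Because every object holds references to its neighbors, one can start at any node and reach any other by following links, exactly as described above.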
\subsection{Solving a Network} \label{sec:solving_a_network}
The network-solving program accepts four arrays of nodes: the inputs, outputs, central nodes, and terminated nodes. It returns the scattering matrix and noise wave vector for the connected network, where the central and terminated nodes have been removed through application of the algorithm described in \S\ref{par:network_evaluation_algorithm}.
The first step the software performs is to remove the terminated nodes, if there are any. The software assumes that the terminations are perfectly matched and at a common physical temperature. It modifies the object scattering matrices to remove the terminated nodes, and adds the noise terms produced by the terminated nodes to the noise wave vectors.
The network \emph{sans} terminated nodes is then passed to a recursive network shrinking program. This program begins with the first central node and applies the algorithm given by Equations~\ref{eqn:S_replacement_formula} and \ref{eqn:c_replacement_formula} to a sub-network consisting of the two $N$-port objects connected to that particular node. A new $N$-port object is created and assigned the resulting scattering matrix and noise wave vector. All the nodes which were connected to the now-defunct $N$-port objects are reconnected to this new $N$-port object at the appropriate ports. A new network is formed by excluding the central node just considered and the program is recursively called on this new network. This continues until there are no more central nodes, at which point the scattering matrix and noise wave vector of the single remaining $N$-port object are returned.
Applying the algorithm in this fashion, rather than to the entire network at once, keeps the size of the matrix $\mathbf{S}$ in Equation~\ref{eqn:scattering_of_unconnected_network} small, speeding up computation. This is not an optimal approach to network shrinking, but we have found it to be significantly faster than applying the algorithm to the full unconnected network when the network contains more than a few $N$-port objects.
The description given above glosses over the significant complexity in keeping track of which nodes should be connected where, and certain configurations of nodes and $N$-port objects which would cause the default implementation of the algorithm to fail. The majority of the code is dedicated to performing these functions; only a small fraction of the code actually carries out the calculations described by the algorithm.
\subsection{Example Code Listing}
\begin{figure}[!t]
\includegraphics[width=3.49in]{example_network.pdf}
\caption{An arbitrary network of $N$-port devices to illustrate the software. Nodes are indicated by open circles, $N$-port devices by rectangles. $N$-port device $i$ is described by scattering matrix $\mathbf{S}_{i}$ and noise wave vector $\mathbf{c}_{i}$. Input nodes to the network are $\textrm{In}_{i}$, central nodes $\textrm{Cn}_{i}$, terminated nodes $\textrm{Tn}_{i}$, and output nodes $\textrm{On}_{i}$.}
\label{fig:software_example_network}
\end{figure}
To illustrate the operation of the program consider the network shown in Figure~\ref{fig:software_example_network}. It is represented in software by connected lists of nodes and $N$-port objects, as shown in the following code listing:
{\tiny
\begin{verbatim}
P1 = makeNport(); P2 = makeNport(); P3 = makeNport();
P4 = makeNport(); P5 = makeNport();
P1.S = S1; P2.S = S2; P3.S = S3; P4.S = S4; P5.S = S5;
P1.c = c1; P2.c = c2; P3.c = c3; P4.c = c4; P5.c = c5;
In1 = makeNode(); In2 = makeNode(); On1 = makeNode();
On2 = makeNode(); Tn1 = makeNode(); Cn1 = makeNode();
Cn2 = makeNode(); Cn3 = makeNode();
Cn4 = makeNode(); Cn5 = makeNode();
connectNode(In1,[],1,P1,1); connectNode(In2,[],1,P1,4);
connectNode(Cn1,P1,2,P3,1); connectNode(Cn2,P1,3,P4,1);
connectNode(Cn3,P3,2,P2,1); connectNode(Cn4,P4,2,P2,4);
connectNode(Cn5,P3,3,P5,1); connectNode(Tn1,P2,3,[],1);
connectNode(On1,P5,2,[],1); connectNode(On2,P2,2,[],1);
inputs = {In1 In2}; outputs = {On1 On2};
cnodes = {Cn1 Cn2 Cn3 Cn4 Cn5}; tnodes = {Tn1};
[S, c] = getScatteringRecursive(inputs,outputs,cnodes,tnodes);
\end{verbatim}
}
The {\verb getScatteringRecursive } program performs the actions described in \S\ref{sec:solving_a_network}, and returns the scattering matrix $\mathbf{S}$ and noise wave vector $\mathbf{c}$ for the resulting 4-port object. Nodes $\textrm{In}_{1}$, $\textrm{In}_{2}$, $\textrm{On}_{1}$, and $\textrm{On}_{2}$ are connected sequentially to ports 1 to 4 of this object.
\section{Polarimetry}
The example presented in the coming section, \S\ref{sec:arch_comparison}, compares two receiver architectures commonly used to measure linear polarization at radio to sub-mm wavelengths. This section provides the necessary background: it briefly summarizes polarization, shows how a receiver may be described by a Mueller matrix, and shows how to derive a receiver Mueller matrix from the scattering matrix produced by the software described in \S\ref{sec:software_implementation}.
\subsection{Brief Summary of Polarization}
An electromagnetic signal is said to be polarized if there is some lasting amplitude or phase relation between its orthogonal modes. The coherency vector \cite{Hamaker:1996p442} captures this relation:
\begin{align*}
\mathbf{e} = & \langle
\begin{bmatrix}
E_{x}(t)E_{x}^{*}(t) \\
E_{x}(t)E_{y}^{*}(t) \\
E_{y}(t)E_{x}^{*}(t) \\
E_{y}(t)E_{y}^{*}(t)
\end{bmatrix}
\rangle \\
= & \langle \mathbf{E} \otimes \mathbf{E}^{*} \rangle
\end{align*}
Here $\mathbf{E}$ is the complex vector of the orthogonal modes $E_{x}(t)$ and $E_{y}(t)$ of the signal, $\langle \ldots \rangle$ indicates time averaging, and $\otimes$ indicates the outer product.
If the signal $\mathbf{E}$ is acted on by an object described by a Jones matrix $\mathbf{J}$, i.e. $\mathbf{E}_{out} = \mathbf{JE}$, then the new coherency vector is given by
\begin{align}
\mathbf{e}_{out} = & (\mathbf{J} \otimes \mathbf{J}^{*})\mathbf{e} \label{eqn:rd:coherency_vector_from_Jones}
\end{align}
The polarization state of a signal is usually described by the Stokes parameters, $I$, $Q$, $U$, and $V$. $I$ describes the total intensity of the signal, $Q$ and $U$ describe the linear polarization state, and $V$ describes the circular polarization state. The Stokes vector $\mathbf{e}^{S}$ is obtained from the coherency vector $\mathbf{e}$ by
\begin{align}
\mathbf{e}^{S} = &
\begin{bmatrix}
I \\ Q \\ U \\ V
\end{bmatrix}
= \mathbf{Te} \label{eqn:rd:Stokes_vector_from_coherency} \\
\textrm{where } \mathbf{T} = &
\begin{bmatrix}
1 & 0 & 0 & 1 \\
1 & 0 & 0 & -1 \\
0 & 1 & 1 & 0 \\
0 & -i & i & 0
\end{bmatrix} \label{eqn:rd:T_matrix_definition}
\end{align}
We see that $\mathbf{T}$ is a coordinate transformation of the coherency vector to the abstract Stokes frame.
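For a fully polarized monochromatic signal the time averages reduce to products of the complex amplitudes, and the transformation is easy to sketch (pure Python, our own illustration):

```python
# Rows of T map the coherency vector e = <E (x) E*> to (I, Q, U, V)
T = [[1, 0, 0, 1],
     [1, 0, 0, -1],
     [0, 1, 1, 0],
     [0, -1j, 1j, 0]]

def stokes(Ex, Ey):
    """Stokes vector of a fully polarized signal with complex
    amplitudes Ex, Ey (time averages reduce to plain products)."""
    e = [Ex * Ex.conjugate(), Ex * Ey.conjugate(),
         Ey * Ex.conjugate(), Ey * Ey.conjugate()]
    return [sum(T[r][i] * e[i] for i in range(4)) for r in range(4)]
```

A purely $x$-polarized signal gives $(I,Q,U,V) \propto (1,1,0,0)$, a $45^{\circ}$ linear signal $(1,0,1,0)$, and circular signals $(1,0,0,\pm 1)$.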
The Stokes parameters are a convenient and powerful way of describing the polarization state of an electromagnetic signal. From Equation~\ref{eqn:rd:Stokes_vector_from_coherency} we have:
\begin{align}
\nonumber I =& \langle \vert E_{x}(t) \vert^{2} \rangle + \langle \vert E_{y}(t) \vert^{2} \rangle \\%\label{eqn:rd:stokes_I_linear}\\
\nonumber Q =& \langle \vert E_{x}(t) \vert^{2} \rangle - \langle \vert E_{y}(t) \vert^{2} \rangle \\%\label{eqn:rd:stokes_Q_linear}\\
\nonumber U = & 2\langle \Re\{ E_{x}(t)E_{y}^{*}(t) \} \rangle \\
\nonumber = & \langle E_{x}(t)E_{y}^{*}(t)\rangle + \langle E_{x}^{*}(t)E_{y}(t) \rangle \\% \label{eqn:rd:stokes_U_linear} \\
\nonumber V = & 2\langle \Im\{ E_{x}(t)E_{y}^{*}(t) \} \rangle \\
= & -i[ \langle E_{x}(t)E_{y}^{*}(t)\rangle - \langle E_{x}^{*}(t)E_{y}(t) \rangle ] \label{eqn:Stokes_parameter_definitions}
\end{align}
\paragraph{Mueller Calculus}
Suppose that a signal defined by the complex electric field vector $\mathbf{E}$ and coherency vector $\mathbf{e}$ is modified by an object described by the Jones matrix $\mathbf{J}$. From Equations~\ref{eqn:rd:coherency_vector_from_Jones} and \ref{eqn:rd:Stokes_vector_from_coherency} we see that the output signal $\mathbf{E}_{out} = \mathbf{JE}$ will be described by the Stokes vector
\begin{align*}
\mathbf{e}^{S}_{out} = & \mathbf{T}(\mathbf{J} \otimes \mathbf{J}^{*})\mathbf{T}^{-1} \mathbf{e}^{S} \\
= & \mathbf{M}\mathbf{e}^{S}
\end{align*}
The matrix $\mathbf{M} = \mathbf{T}(\mathbf{J} \otimes \mathbf{J}^{*})\mathbf{T}^{-1}$ is called the Mueller matrix. It represents the action of the object characterized by Jones matrix $\mathbf{J}$ in the Stokes vector space.
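This construction is easy to evaluate numerically. Since $\mathbf{T}\mathbf{T}^{\dagger} = 2\mathbf{I}$, the inverse is simply $\mathbf{T}^{-1} = \mathbf{T}^{\dagger}/2$, and a pure-Python sketch (our own illustration) is:

```python
T = [[1, 0, 0, 1],
     [1, 0, 0, -1],
     [0, 1, 1, 0],
     [0, -1j, 1j, 0]]
# T T^dagger = 2 I, so the inverse is the conjugate transpose over 2
Tinv = [[T[j][i].conjugate() / 2 for j in range(4)] for i in range(4)]

def mueller(J):
    """Mueller matrix M = T (J (x) J*) T^{-1} of a 2x2 Jones matrix J.
    The imaginary parts vanish for any physical J, so reals are kept."""
    K = [[J[i // 2][j // 2] * J[i % 2][j % 2].conjugate()  # J (x) J*
          for j in range(4)] for i in range(4)]
    TK = [[sum(T[i][k] * K[k][j] for k in range(4))
           for j in range(4)] for i in range(4)]
    return [[sum(TK[i][k] * Tinv[k][j] for k in range(4)).real
             for j in range(4)] for i in range(4)]
```

For a Jones rotation by $\theta$ this reproduces the familiar rotation of the $(Q,U)$ sub-space by $2\theta$, with $I$ and $V$ unchanged.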
Mueller calculus is a matrix method for manipulating Stokes vectors. We denote the Mueller matrix elements as
\begin{align*}
\mathbf{M} = &
\begin{bmatrix}
M_{II} & M_{IQ} & M_{IU} & M_{IV} \\
M_{QI} & M_{QQ} & M_{QU} & M_{QV} \\
M_{UI} & M_{UQ} & M_{UU} & M_{UV} \\
M_{VI} & M_{VQ} & M_{VU} & M_{VV}
\end{bmatrix}
\end{align*}
Mueller matrices are a convenient means of describing the action of an astronomical polarimeter. Of particular interest are the $M_{QI}$ and $M_{UI}$ parameters, which describe the leakage of the total intensity $I$ into the measured linear polarization vector components. Much of the radio and mm/sub-mm spectrum is only slightly linearly polarized, hence non-zero values of $M_{QI}$ and $M_{UI}$ can imply serious contamination of the measured linear polarization vector by the total intensity signal.
\subsection{Deriving Receiver Mueller Matrix}
\begin{figure}
\centering
\includegraphics[width=2.5in]{typical_receiver_ExEy_inputs.pdf}
\caption{An arbitrary receiver, where orthogonal linear polarizations $E_{x}(t)$ and $E_{y}(t)$ are presented at inputs 1 and 2 respectively. $D$ is the output at port $m$. The receiver is described by scattering matrix $\mathbf{S}$.}
\label{fig:rd:mueller_matrix_from_scattering_matrix}
\end{figure}
Say we have calculated the scattering matrix which describes the behavior of a receiver. For polarimeters, a natural way of expressing the receiver's performance is with a Mueller matrix. We need to translate the receiver scattering matrix into a Mueller matrix which describes the action of the receiver on the Stokes vector of the incident electromagnetic signal.
Consider the arbitrary receiver shown in Figure~\ref{fig:rd:mueller_matrix_from_scattering_matrix}. Orthogonal linear polarizations $E_{x}(t)$ and $E_{y}(t)$, representing either signals in transmission lines, orthogonal modes in waveguide, or orthogonal modes in free-space, are connected to ports 1 and 2 of the receiver respectively. Receiver output $D$ is connected to port $m$. The receiver is described by the scattering matrix $\mathbf{S}$.
The output $E_{m}(t)$ seen at port $m$ is given by (assuming that the connections to ports 3 to $N$ are reflectionless):
\begin{align}
\nonumber E_m(t) = & S_{m1}E_{x}(t) + S_{m2}E_{y}(t)
\end{align}
The power contained in the signal $E_{m}(t)$ is then measured. At radio wavelengths this is often achieved through the use of a square-law detector diode. At mm and sub-mm wavelengths a bolometer might be used. The measured power $P_{D}$ is given by:
\begin{align}
\nonumber P_{D} = & \alpha \langle E_{m}(t)E_{m}(t)^{*}\rangle \\
\nonumber = & \alpha\Big[\langle \vert E_{x}(t) \vert^{2}\rangle \vert S_{m1}\vert^{2} + \langle \vert E_{y}(t) \vert^{2}\rangle \vert S_{m2}\vert^{2} \\
\nonumber &+ \langle E_{x}(t)E_{y}^{*}(t) \rangle S_{m1}S_{m2}^{*} \\
& + \langle E_{x}^{*}(t)E_{y}(t) \rangle S_{m1}^{*}S_{m2} \Big] \label{eqn:rd:P_output_scattering}
\end{align}
where $\langle \ldots \rangle$ indicates time averaging, $\alpha$ is a proportionality constant dependent on the power detection method, and we assume that the instrument scattering matrix parameters are constant over the averaging period. Now let
\begin{align}
\nonumber P_{D} = & M_{DI}I + M_{DQ}Q + M_{DU}U + M_{DV}V \\
\nonumber = & M_{DI}\langle\vert E_{x}(t) \vert^{2} + \vert E_{y}(t) \vert^{2}\rangle \\
\nonumber & + M_{DQ}\langle\vert E_{x}(t) \vert^{2}-\vert E_{y}(t) \vert^{2}\rangle \\
\nonumber & + M_{DU}\langle E_{x}(t)E_{y}^{*}(t) + E_{x}^{*}(t)E_{y}(t) \rangle \\
& - iM_{DV}\langle E_{x}(t)E_{y}^{*}(t) - E_{x}^{*}(t)E_{y}(t) \rangle \label{eqn:rd:P_output_Mueller}
\end{align}
where we have used the definition of the Stokes parameters given in Equation~\ref{eqn:Stokes_parameter_definitions}.
By comparing Equations~\ref{eqn:rd:P_output_scattering} and \ref{eqn:rd:P_output_Mueller} we can obtain the contribution of each Stokes parameter to the power measured at output D:
\begin{align}
\nonumber M_{DI} = & \frac{\alpha}{2}\{ \vert S_{m1}\vert^{2} + \vert S_{m2}\vert^{2} \} \\
\nonumber M_{DQ} = & \frac{\alpha}{2}\{ \vert S_{m1}\vert^{2} - \vert S_{m2}\vert^{2} \} \\
\nonumber M_{DU} = & \frac{\alpha}{2}\{ S_{m1}S_{m2}^{*} + S_{m1}^{*}S_{m2} \} \\
M_{DV} = & \frac{i\alpha}{2}\{ S_{m1}S_{m2}^{*} - S_{m1}^{*}S_{m2} \} \label{eqn:Mueller_from_S}
\end{align}
We can derive the receiver Mueller matrix by applying this technique to all the outputs of the receiver.
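Equation~\ref{eqn:Mueller_from_S} is simple to evaluate once the relevant scattering parameters are in hand; a pure-Python sketch (our own illustration) for a single output is:

```python
def detector_mueller_row(Sm1, Sm2, alpha=1.0):
    """Contribution of each Stokes parameter to the power at a
    detector fed by S_m1*Ex + S_m2*Ey."""
    MDI = alpha / 2 * (abs(Sm1) ** 2 + abs(Sm2) ** 2)
    MDQ = alpha / 2 * (abs(Sm1) ** 2 - abs(Sm2) ** 2)
    MDU = alpha / 2 * (Sm1 * Sm2.conjugate()
                       + Sm1.conjugate() * Sm2).real
    MDV = (1j * alpha / 2 * (Sm1 * Sm2.conjugate()
                             - Sm1.conjugate() * Sm2)).real
    return MDI, MDQ, MDU, MDV
```

A detector fed equally and in phase by both modes ($S_{m1}=S_{m2}=1/\sqrt{2}$) measures $\frac{\alpha}{2}(I+U)$, while a $90^{\circ}$ relative phase between the two paths turns the $U$ response into a $V$ response.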
\section{Polarimeter Architecture Comparison} \label{sec:arch_comparison}
Two basic types of architectures are used to measure the polarization of an electromagnetic signal: differencing polarimeters, an architecture commonly used in polarization sensitive bolometer-based polarimeters \cite{Jones:2007p1886}; and pseudo-correlation (or correlation) polarimeters, an architecture commonly used in coherent, LNA or mixer based, polarimeters such as QUIET \cite{Newburgh:2010p1887}.
Differencing polarimeters measure the difference in power between orthogonal linear modes of the electromagnetic signal; see the definition of $Q$ in Equation~\ref{eqn:Stokes_parameter_definitions} for inspiration. Correlation, or pseudo-correlation, architectures measure the polarization state by measuring the correlation between orthogonal modes. Correlation polarimeters are required to preserve the phase of the incident signal. They are hence only feasible if coherent (i.e. phase-preserving) amplifiers or mixers are available.
The choice of which architecture to use for a particular experiment is often dominated by sensitivity considerations. At low frequencies (below $\sim 60$~GHz) the availability of low-noise coherent amplifiers has favored correlation architectures \cite{Zmuidzinas:1999p1898}. At higher frequencies, the fundamental quantum limits to which amplifier noise is subject have favored direct detection technologies such as bolometers, and hence differencing polarimeter architectures, for continuum polarimetry experiments. However, continuing improvement in coherent amplifier technology at high frequencies has pushed their performance closer to the quantum limit, e.g. \cite{Gaier:2003p142}. As coherent amplifier technology improves, sensitivity may no longer be the deciding factor between technologies, and hence architectures.
Differencing and correlation architectures measure the polarization information of the incident signal in very different ways, and hence suffer from different sources of systematic error. A careful analysis of the fundamental strengths and weaknesses of each architecture is needed. {\verb SNS } is well suited to perform this analysis. In this section we use {\verb SNS } to derive receiver Mueller matrices for examples of these two polarimeter architectures. This analysis highlights the different sources of systematic error in these architectures.
\begin{figure}
\centering
\includegraphics[width=3.49in]{receiver_diagrams.pdf}
\caption{Examples of two commonly used polarimeter architectures. (left) A pseudo-correlation polarimeter. (right) A differencing polarimeter. Orthogonal waveguide modes $E_{x}$ and $E_{y}$ are circularized or rotated, and extracted from waveguide by an orthomode transducer (OMT). In the pseudo-correlation architecture the signals are further processed. The powers in signals $D_{1}$ to $D_{4}$ are detected and processed to obtain the Stokes parameters, as explained in the text.}
\label{fig:receiver_diagrams}
\end{figure}
\subsection{Differencing Polarimeters}
An example of a differencing polarimeter architecture is shown in Figure~\ref{fig:receiver_diagrams}~(right). The powers in orthogonal linear modes $D_{1}$ and $D_{2}$ are detected and differenced to obtain one of the linear polarization parameters. Differencing polarimeters have a much simpler architecture than pseudo-correlation polarimeters. However, they measure only a single linear Stokes parameter; a duplicate receiver oriented at $45^{\circ}$ to the first is needed to measure the second linear Stokes parameter.
\subsection{Pseudo-correlation Polarimeters}
An example of a pseudo-correlation polarimeter architecture is shown in Figure~\ref{fig:receiver_diagrams}~(left). The Stokes parameters of linear polarization in a circular basis are given by:
\begin{align}
\nonumber E_{l}(t) =& \frac{1}{\sqrt{2}}[E_{x}(t) - iE_{y}(t)] \\
\nonumber E_{r}(t) =& \frac{1}{\sqrt{2}}[E_{x}(t) + iE_{y}(t)] \\
\nonumber Q =& 2\langle \Re\{ E_{l}(t)E_{r}^{*}(t) \} \rangle \\
U = & -2\langle \Im\{ E_{l}(t)E_{r}^{*}(t) \} \rangle \label{eqn:rd:stokes_U_circular}
\end{align}
Pseudo-correlation polarimeters measure the linear Stokes vector by correlating circular polarization signals $E_{l}$ and $E_{r}$ with ($D_{3}$ and $D_{4}$) and without ($D_{1}$ and $D_{2}$) a $90^{\circ}$ phase shift. The powers in signals $D_{1}$ to $D_{4}$ are detected and combined to obtain the linear Stokes parameters.
While the architecture of a pseudo-correlation polarimeter is more complex than that of a differencing polarimeter, it does provide some significant advantages. For instance, both linear polarization parameters can be measured with a single optical assembly, providing twice the information for the same focal plane area occupied.
\subsection{Parameterized Scattering Matrices} \label{sec:matrix_parametrizations}
Some of the components in the receivers shown in Figure~\ref{fig:receiver_diagrams} have parameterized scattering matrices. While it is possible to describe every component in a receiver with a suitable parameterized scattering matrix, the resulting analytic expressions for the outputs soon become too complicated to be useful when written down.
In this analysis the components are assumed to be perfectly matched, i.e. the diagonal elements of the parameterized scattering matrices are zero.
\subsubsection{Circularizer}
The circularizer is a circular phase shifter that translates orthogonal linear polarizations into orthogonal circular polarizations. It introduces a $90^{\circ}$ phase shift into one orthogonal linear mode, and is oriented at $45^{\circ}$ to the OMT linear axis.
A possible parameterization of the circularizer's scattering matrix as shown in Figure~\ref{fig:receiver_diagrams} is:
\begin{align*}
\mathbf{S}_{circ} = \frac{L_{c}}{\sqrt{2}}
\begin{bmatrix}
0 & 1 & 1 & 0 \\
1 & 0 & 0 & -e^{i(\frac{\pi}{2}+\theta_{c})} \\
1 & 0 & 0 & e^{i(\frac{\pi}{2}+\theta_{c})} \\
0 & -e^{i(\frac{\pi}{2}+\theta_{c})} & e^{i(\frac{\pi}{2}+\theta_{c})} & 0
\end{bmatrix}
\end{align*}
Here $L_{c}^{2}$ is the insertion loss of the circularizer, and $\theta_{c}$ is the error in the $90^{\circ}$ phase shift. The circularizer is assumed to be otherwise perfect.
\subsubsection{Faraday Rotator}
The Faraday rotator shown in the differencing polarimeter in Figure~\ref{fig:receiver_diagrams} is a component sometimes used in polarization sensitive bolometer (PSB) receivers \cite{Keating:2008p1850}. It modulates the measured polarization signal by introducing a variable rotation to the plane of linear polarization of the incident signal. A similar effect can be achieved with a rotating birefringent half-waveplate, or a wire grid.
The scattering matrix for the rotator considered here is:
\begin{align*}
\mathbf{S}_{fr} =
\begin{bmatrix}
0 & \cos(\theta_{ps}) & \sin(\theta_{ps}) & 0 \\
\cos(\theta_{ps}) & 0 & 0 & -\sin(\theta_{ps}) \\
\sin(\theta_{ps}) & 0 & 0 & \cos(\theta_{ps}) \\
0 & -\sin(\theta_{ps}) & \cos(\theta_{ps}) & 0
\end{bmatrix}
\end{align*}
Here $2\theta_{ps}$ is the time-dependent linear plane rotation introduced by the Faraday rotator.
\subsubsection{OMT}
The OMT extracts orthogonal linear modes from the waveguide. The scattering matrix parameterization considered here is:
\begin{align*}
\mathbf{S}_{OMT} =
\begin{bmatrix}
0 & D_{x} & d_{yx} & 0 \\
D_{x} & 0 & 0 & d_{xy} \\
d_{yx} & 0 & 0 & D_{y}\\
0 & d_{xy} & D_{y} & 0
\end{bmatrix}
\end{align*}
Here $D_{x}$ and $D_{y}$ measure the insertion loss for each orthogonal mode, while $d_{xy}$ and $d_{yx}$ measure the leakage of one mode into the other. From conservation of energy considerations in a passive component we have the constraints $\abs{D_{x}} = \abs{D_{y}} = \abs{D}$, $\abs{d_{xy}} = \abs{d_{yx}} = \abs{d}$, and $\abs{D}^{2}+\abs{d}^{2}=L_{omt}^{2}$, where $L_{omt}^{2}$ is the insertion loss of the OMT. The parameters may have arbitrary phase.
\subsubsection{LNAs and Phase Switch}
The scattering matrices for the LNAs in the pseudo-correlation polarimeter are given by:
\begin{align*}
\mathbf{S}_{L,R} =
\begin{bmatrix}
0 & 0 \\
G_{L,R} & 0
\end{bmatrix}
\end{align*}
Here $G_{L}$ and $G_{R}$ are the complex voltage gains of the left and right circular polarization amplifiers respectively.
The phase switch in the pseudo-correlation polarimeter modulates the phase of one of the signal arms relative to the other. Its scattering matrix is given by:
\begin{align*}
\mathbf{S}_{ps} =
\begin{bmatrix}
0 & e^{i\theta_{ps}} \\
e^{i\theta_{ps}} & 0
\end{bmatrix}
\end{align*}
Here $\theta_{ps}$ is the time-dependent phase shift introduced by the phase switch, usually shifting between $0^{\circ}$ and $180^{\circ}$.
\subsection{Polarimeter Mueller Matrix Elements}
We now build a connected model of each polarimeter in {\verb SNS } using the matrix parameterizations given in \S\ref{sec:matrix_parametrizations}. We use Equation~\ref{eqn:Mueller_from_S} to obtain expressions for the powers measured at each receiver output in terms of the incident signal Stokes parameters.
For the ideal pseudo-correlation polarimeter, i.e. where all the components are perfect, the outputs are:
\begin{align}
\nonumber P_{D_{1}} = & \frac{1}{2} I - \frac{1}{2} U \\
\nonumber P_{D_{2}} = & \frac{1}{2} I + \frac{1}{2} U \\
\nonumber P_{D_{3}} = & \frac{1}{2} I -\frac{1}{2} Q \\
P_{D_{4}} = & \frac{1}{2} I + \frac{1}{2} Q \label{eqn:correlation_polarimeter_diode_powers}
\end{align}
Here we have assumed that $\theta_{ps} = 0$ and $\alpha = 1$. To measure the Stokes parameters we take $Q_{m} = P_{D_{4}} - P_{D_{3}}$, $U_{m} = P_{D_{2}}- P_{D_{1}}$, and $I_{m} = \frac{1}{2}(P_{D_{1}}+P_{D_{2}}+P_{D_{3}}+P_{D_{4}})$.
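These combinations recover the Stokes parameters exactly in the ideal case, as the following pure-Python sketch (our own illustration) demonstrates:

```python
def stokes_from_diodes(P1, P2, P3, P4):
    """Recover (I_m, Q_m, U_m) from the four detected powers of the
    ideal pseudo-correlation polarimeter (theta_ps = 0, alpha = 1)."""
    Qm = P4 - P3
    Um = P2 - P1
    Im = 0.5 * (P1 + P2 + P3 + P4)
    return Im, Qm, Um
```

Note that this recovery, like Equation~\ref{eqn:correlation_polarimeter_diode_powers} itself, assumes equal detector responses.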
For the ideal differencing polarimeter the outputs are:
\begin{align}
\nonumber P_{D_{1}} = & \frac{1}{2} I + \frac{1}{2}\cos(2\theta_{ps}) Q - \frac{1}{2}\sin(2\theta_{ps}) U \\
P_{D_{2}} = & \frac{1}{2} I - \frac{1}{2}\cos(2\theta_{ps}) Q + \frac{1}{2}\sin(2\theta_{ps}) U \label{eqn:differencing_polarimeter_diode_powers}
\end{align}
Which linear Stokes parameter we measure depends on the plane rotation introduced by the Faraday rotator, and is given by $L_{m} = P_{D_{1}}-P_{D_{2}}=\cos(2\theta_{ps}) Q - \sin(2\theta_{ps}) U $. The measured total intensity is $I_{m} = P_{D_{1}}+P_{D_{2}}$.
The Mueller matrix parameters of particular interest in CMB polarization studies are: $M_{II}$, $M_{QQ}$, and $M_{UU}$, the diagonal elements of the Mueller matrix; $M_{QI}$ and $M_{UI}$, the leakage of the total intensity signal into the (generally) small linear polarization signal; and $M_{QU}$ and $M_{UQ}$, which measure the rotation of the linear polarization vector by the receiver.
These Mueller matrix parameters for the pseudo-correlation polarimeter are:
{\small
\begin{align}
\nonumber M_{II} = & \frac{L_{c}^{2}L_{omt}^{2}}{2}\Big[ \abs{G_{L}}^{2} + \abs{G_{R}}^{2} \Big] \\
\nonumber M_{QQ} = & L_{c}^{2}\abs{G_{L}G_{R}}\Big[ \abs{D}^{2}\cos(\theta_{3}) + \abs{d}^{2}\cos(\theta_{4}) \Big] \\
\nonumber M_{UU} = & L_{c}^{2}\abs{G_{L}G_{R}}\Big[ \cos(\theta_{c})\{ \abs{D}^{2}\cos(\theta_{3}) - \abs{d}^{2}\cos(\theta_{4}) \} \\
\nonumber & - \sin(\theta_{c})\abs{Dd}\{\sin(\theta_{1}) + \sin(\theta_{2}) \} \Big] \\
\nonumber M_{QI} = & L_{c}^{2}\abs{G_{L}G_{R}}\abs{Dd}\Big[ \cos(\theta_{2}) +\cos(\theta_{1}) \Big] \\
\nonumber M_{UI} = & L_{c}^{2}\abs{G_{L}G_{R}}\abs{Dd}\Big[\sin(\theta_{1}) - \sin(\theta_{2}) \Big] \\
\nonumber M_{QU} = & L_{c}^{2}\abs{G_{L}G_{R}}\Big[ \cos(\theta_{c})\{ \abs{D}^{2}\sin(\theta_{3}) - \abs{d}^{2}\sin(\theta_{4}) \} \\
\nonumber & + \sin(\theta_{c})\abs{Dd}\{\cos(\theta_{2}) - \cos(\theta_{1}) \} \Big] \\
M_{UQ} = & -L_{c}^{2}\abs{G_{L}G_{R}}\Big[ \abs{D}^{2}\sin(\theta_{3}) + \abs{d}^{2}\sin(\theta_{4}) \Big] \label{eqn:correlation_pol_mueller_matrix_elements} \\
\nonumber \textrm{where } \theta_{1} = & \theta_{D_{y}} - \theta_{d_{xy}} - (\theta_{G_{L}} - \theta_{G_{R}}+\theta_{ps}) \\
\nonumber \theta_{2} = & \theta_{D_{x}} - \theta_{d_{yx}} + (\theta_{G_{L}} - \theta_{G_{R}}+\theta_{ps}) \\
\nonumber \theta_{3} = & \theta_{D_{x}} - \theta_{D_{y}} + (\theta_{G_{L}} - \theta_{G_{R}}+\theta_{ps}) \\
\nonumber \theta_{4} = & \theta_{d_{xy}} - \theta_{d_{yx}} + (\theta_{G_{L}} - \theta_{G_{R}}+\theta_{ps})
\end{align}
}
Here $X = \abs{X}e^{i\theta_{X}}$. We have implicitly assumed that the responses of all the power detectors are equal and stable.
To keep the comparison between the architectures reasonable, we need to include varying power detection sensitivity in the differencing polarimeter. Let the power detection proportionality constants (see Equation~\ref{eqn:rd:P_output_scattering}) be $\alpha_{1}$ and $\alpha_{2}$ for outputs 1 and 2 respectively. We also need to decide on the rotation angle of the Faraday rotator to specify which linear Stokes parameter we actually measure. Let $\theta_{ps} = \pm 45^{\circ}$ (i.e. Stokes U). We then obtain the Mueller matrix parameters:
{\small
\begin{align}
\nonumber M_{II} = & \frac{L_{omt}^{2}}{2}\Big[ \alpha_{1} + \alpha_{2}\Big] \\
\nonumber M_{UU} = & \frac{\sin(2\theta_{ps})}{2}\Big[ \alpha_{1} + \alpha_{2}\Big] \Big[ \abs{D}^{2} - \abs{d}^{2} \Big] -\cos(2\theta_{ps})\abs{Dd} \\
\nonumber & \Big[ \alpha_{1}\cos(\theta_{D_{x}}-\theta_{d_{xy}}) - \alpha_{2} \cos(\theta_{D_{y}}-\theta_{d_{yx}}) \Big] \\
\nonumber M_{UI} = & \frac{L_{omt}^{2}}{2}\Big[ \alpha_{1} - \alpha_{2}\Big] \\
\nonumber M_{UQ} = & \frac{\cos(2\theta_{ps})}{2}\Big[ \alpha_{2} - \alpha_{1}\Big]\Big[ \abs{D}^{2} - \abs{d}^{2} \Big] -\sin(2\theta_{ps})\abs{Dd}\\
& \Big[ \alpha_{1}\cos(\theta_{D_{x}}-\theta_{d_{xy}}) - \alpha_{2} \cos(\theta_{D_{y}}-\theta_{d_{yx}}) \Big] \label{eqn:differencing_pol_mueller_matrix_elements}
\end{align}
}
\subsubsection{Discussion} \label{sec:architecture_comparison}
One of the greatest sources of systematic error in polarimeters is leakage of the total intensity signal into the measured linear polarization amplitude, $P = \sqrt{Q^{2}+U^{2}}$. The fractional contribution to $P$ from total intensity leakage, $\Delta P_{I} = M_{PI}/M_{II}$, is given by:
\begin{align*}
\Delta P_{I} = \frac{\sqrt{M_{QI}^{2}+M_{UI}^{2}}}{M_{II}}
\end{align*}
Assume that we have two differencing polarimeters oriented such that they measure $Q$ and $U$ respectively, identical except for their values of $\alpha$. The ``$Q$'' polarimeter has values $\alpha_{1},\alpha_{2}$, while the ``$U$'' polarimeter has $\alpha_{3},\alpha_{4}$. For the pseudo-correlation and differencing polarimeters we then have:
\begin{align*}
\Delta P_{I}^{c} = & \frac{2\sqrt{2}\abs{G_{L}G_{R}}}{\abs{G_{L}}^{2}+\abs{G_{R}}^{2}}\abs{d}\sqrt{(1-\abs{d}^{2})(1+\cos(\theta_{1}+\theta_{2}))} \\
\Delta P_{I}^{d} = & 2\frac{\sqrt{(\alpha_{1}-\alpha_{2})^{2}+(\alpha_{3}-\alpha_{4})^{2}}}{\alpha_{1}+\alpha_{2}+\alpha_{3}+\alpha_{4}}
\end{align*}
As a simplification, assume that $\Delta \alpha = (\alpha_{1}-\alpha_{2})/\alpha_{2}=(\alpha_{3}-\alpha_{4})/\alpha_{4}$, and let $\Delta \abs{G}^{2} = \frac{\abs{G_{L}}^{2}-\abs{G_{R}}^{2}}{\abs{G_{R}}^{2}}$. Take the worst-case phase scenario, where $\cos(\theta_{1}+\theta_{2})=1$. We now have:
\begin{align}
\Delta P_{I}^{c} = & 4\abs{d}\sqrt{1-\abs{d}^{2}}\frac{\sqrt{1+\Delta \abs{G}^{2}}}{2+\Delta \abs{G}^{2}} \label{eqn:corr_ItoP_leakage} \\
\Delta P_{I}^{d} = & \frac{\sqrt{2}\Delta \alpha}{2+\Delta \alpha} \label{eqn:diff_ItoP_leakage}
\end{align}
Equations~\ref{eqn:corr_ItoP_leakage} and \ref{eqn:diff_ItoP_leakage} are plotted in Figure~\ref{fig:ItoPleakage_comparison}. Several attributes are noteworthy: $\Delta P_{I}$ is \emph{independent} of the OMT cross polarization $\abs{d}^{2}$ for the differencing polarimeter, but is heavily dependent on the power sensitivity imbalance $\Delta \alpha$; $\Delta P_{I}$ is almost independent of gain imbalance $\Delta \abs{G}^{2}$ for the pseudo-correlation polarimeter, but is dependent on the OMT cross polarization.
\begin{figure}
\centering
\includegraphics[width=3.25in]{DiffVSCorr_MPI_leakage.pdf}
\caption{Comparison of the effect of imbalance on the fractional total intensity to polarization leakage for pseudo-correlation and differencing polarimeter architectures. For the pseudo-correlation architecture, $\Delta = \Delta\abs{G}^{2}$. For the differencing architecture $\Delta = \Delta\alpha$. $\abs{d}^{2}$ is the OMT cross polarization.}
\label{fig:ItoPleakage_comparison}
\end{figure}
Figure~\ref{fig:ItoPleakage_comparison} clearly illustrates the difference between the polarimeter architectures in terms of total intensity to polarization leakage. Correlation polarimeters are very insensitive to what is generally the most unstable parameter in a coherent receiver: fluctuating LNA gain. They are moderately sensitive to OMT cross polarization $\abs{d}^{2}$. The comparatively high sensitivity of differencing polarimeters to power detection imbalance can be reduced if $\Delta\alpha$ is stable and well-known; the data can then be corrected and the leakage of $I$ into $P$ reduced.
\subsection{Pseudo-Correlation Polarimeter Noise Temperature}
A very powerful benefit of using scattering matrices to model receivers is the ability to perform noise analysis. We specify parameterized noise wave vectors for the pre-LNA components and ignore any noise produced by the components downstream of the LNAs, as their contribution will be negligible if the LNA gain is high.
If the noise wave vector of the pseudo-correlation polarimeter is given by $\mathbf{c}$, then the noise power measured at the output $D_{i}$ in a 1~Hz bandwidth is given by $P_{i} = \alpha \langle c_{j}c_{j}^{*}\rangle$, where $c_{j}$ is the noise wave vector element corresponding to output $D_{i}$.
Suppose that component $k$ of $M$ total components has $N$ ports, and is specified by the scattering matrix $\mathbf{S}^{k}$ and the noise wave vector $\mathbf{c}^{k}$. $c_{j}$ is given by:
\begin{align*}
c_{j} = & \sum_{k=1}^{M} c_{j}^{k}, \textrm{where } c_{j}^{k} = \sum_{i=1}^{N}b_{i}^{k}c_{i}^{k}
\end{align*}
Noise waves from different devices are not correlated: $\langle c_{i}^{k}(c_{j}^{m})^{*} \rangle = 0$ for $k\neq m$. So, $P_{i}$ is given by:
\begin{align*}
P_{i} = & \alpha \sum_{k=1}^{M}P_{i}^{k} \\
\textrm{where }P_{i}^{k} = & \sum\sum\Big( \mathbf{C}^{k}\cdot \big(\mathbf{b}^{k} \otimes (\mathbf{b}^{k})^{\dagger}\big) \Big)
\end{align*}
Here $\mathbf{C}^{k}$ is the noise correlation matrix for component $k$, $\cdot$ is the matrix dot product, $\mathbf{b}^{k}$ is the vector $[b_{1}^{k} \ldots b_{N}^{k}]^{\textrm{T}}$, $\otimes$ is the outer product, and $^{\dagger}$ is the hermitian conjugate. The $\sum\sum$ indicates a sum over all the matrix elements.
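In code, $P_{i}^{k}$ reduces to an element-wise product of the noise correlation matrix with the outer product $\mathbf{b}^{k} \otimes (\mathbf{b}^{k})^{\dagger}$, followed by a sum over all elements. A minimal NumPy sketch (not part of SNS, which is MATLAB-based):

```python
import numpy as np

def component_noise_power(C, b):
    """P_i^k: element-wise product of C^k with b (x) b-dagger, summed over
    all matrix elements."""
    b = np.asarray(b, dtype=complex)
    outer = np.outer(b, b.conj())          # b (x) b-dagger
    return np.sum(C * outer).real          # element-wise product, then sum

# Uncorrelated noise waves (diagonal C): the result is just sum_i C_ii |b_i|^2.
C = np.eye(3)
b = [1.0, 0.5j, 0.25]
print(component_noise_power(C, b))  # 1 + 0.25 + 0.0625 = 1.3125
```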
The noise correlation matrices for the passive pre-LNA components are obtained using Equation~\ref{eqn:noise_correlation_matrix_for_passive_device}. To get the noise correlation matrices for the LNAs we make two simplifications to the HEMT noise correlation matrix model in \cite{Wedge:1992p149}. First, the off-diagonal terms of an LNA noise correlation matrix ($\langle c_{1}c_{2}^{*}\rangle$ and $\langle c_{1}^{*}c_{2}\rangle$) are much smaller than the diagonal terms, so we take them to be zero. Second, $\langle \vert c_{2} \vert^{2} \rangle \simeq \vert S_{21} \vert^{2} \langle \vert c_{1} \vert^{2} \rangle \simeq k \vert S_{21} \vert^{2}T_{\textrm{N}}$, where $T_{\textrm{N}}$ is the amplifier noise temperature and $k$ is Boltzmann's constant.
Receiver noise temperature is referenced to the input. We consider the receiver temperature $T_{i}$ at output $D_{i}$ to be the temperature of a thermal source seen equally at each input which produces the same total output noise power in a noiseless receiver:
\begin{align}
\nonumber P_{i} = & \alpha \Big( \sum_{j=1}^{N_{\textrm{in}}}\abs{S_{ij}}^{2} \Big) T_{i} \\
\therefore T_{i} = & \frac{\sum_{k=1}^{M} P_{i}^{k}}{\sum_{j=1}^{N_{\textrm{in}}}\abs{S_{ij}}^{2}}
\end{align}
Here $N_{\textrm{in}}$ is the number of inputs to the receiver, and $\mathbf{S}$ is the receiver scattering matrix.
Applying this technique to the pseudo-correlation polarimeter we derive the receiver temperatures for the outputs $D_{1}$ to $D_{4}$:
\begin{align}
\nonumber T_{1} = & T_{c} + \frac{T_{p}}{L_{c}^{2}}\Big[ \frac{G}{E-F} - 1 \Big] + \frac{T_{N}}{L_{c}^{2}}\Big[ \frac{G}{E-F}\Big] \\
\nonumber T_{2} = & T_{c} + \frac{T_{p}}{L_{c}^{2}}\Big[ \frac{G}{E+F} - 1 \Big] + \frac{T_{N}}{L_{c}^{2}}\Big[ \frac{G}{E+F}\Big] \\
\nonumber T_{3} = & T_{c} + \frac{T_{p}}{L_{c}^{2}}\Big[ \frac{G}{E-H} - 1 \Big] + \frac{T_{N}}{L_{c}^{2}}\Big[ \frac{G}{E-H}\Big] \\
T_{4} = & T_{c} + \frac{T_{p}}{L_{c}^{2}}\Big[ \frac{G}{E+H} - 1 \Big] + \frac{T_{N}}{L_{c}^{2}}\Big[ \frac{G}{E+H}\Big] \label{eqn:T_expressions} \\
\textrm{where }\nonumber T_{c} = & T_{p}\Big[\frac{1}{L_{c}^{2}}-1\Big] \\
\nonumber G = & \abs{G_{L}}^{2}+\abs{G_{R}}^{2} \\
\nonumber E = & \big(\abs{G_{L}}^{2}+\abs{G_{R}}^{2}\big)L_{omt}^{2} \\
\nonumber F = & 2\abs{G_{L}G_{R}}\abs{Dd}\Big[ \cos(\theta_{1}) + \cos(\theta_{2}) \Big] \\
\nonumber H = & 2\abs{G_{L}G_{R}}\abs{Dd}\Big[\sin(\theta_{1}) - \sin(\theta_{2}) \Big]
\end{align}
Here we have assumed that the amplifiers have noise temperature $T_{N}$, and that the circularizer and OMT are at a physical temperature of $T_{p}$. $\theta_{1}$ and $\theta_{2}$ are given in Equation~\ref{eqn:correlation_pol_mueller_matrix_elements}.
\subsubsection{Discussion}
Which parameters in the multiparameter expression for $T_{1}$ in Equation~\ref{eqn:T_expressions} have the greatest impact on the receiver temperature? The most obvious are $L_{c}^{2}$ and $L_{omt}^{2}$, the insertion losses of the circularizer and OMT respectively. Setting those aside, how do the other parameters affect the receiver temperature?
The minimum value that $T_{1}$ can have is $T_{N}$. Our sensitivity impact metric is then $T_{1}/T_{N} \geq 1$. We fix $L_{c}^{2} = -0.1$~dB and $L_{omt}^{2}=-0.2$~dB, and set $\abs{G_{R}}=1$, $\abs{G_{L}}=\sqrt{1+\Delta\abs{G}^{2}}$, $\theta_{G_{L}}=0$, and $\theta_{D_{x}}=0$ (only relative phases matter here). This leaves us with six parameters: $\abs{d}^{2}$, $\Delta\abs{G}^{2}$, $\theta_{G_{R}}$, $\theta_{D_{y}}$, $\theta_{d_{xy}}$, and $\theta_{d_{yx}}$.
\begin{figure}
\centering
\includegraphics[width=3.49in]{CorrPol_T3sensitivity_small.png}
\caption{Plots of $T_{1}/T_{N}$ against various parameters. We see that the receiver is most sensitive to the cross polarization, $\abs{d}^{2}$, and is negligibly sensitive to the other parameters. We assume $T_{N}=T_{P}=15$~K. See text for details.}
\label{fig:T3_sensitivity}
\end{figure}
We generate random sets of physically realistic values for these parameters, and evaluate the metric $T_{1}/T_{N}$ for each set. We plot $T_{1}/T_{N}$ against each parameter under consideration in Figure~\ref{fig:T3_sensitivity}. It is immediately clear that the most important parameter in this set is $\abs{d}^{2}$. Discarding the least important parameters we now have:
\begin{align}
\nonumber T_{1} \backsimeq & \frac{1}{L_{c}^{2}}\frac{T_{p}+T_{N}}{L_{omt}^{2}-2\abs{d}\sqrt{L_{omt}^{2}-\abs{d}^{2}}} - T_{p} \\
\backsimeq & \frac{T_{p}(1-L_{c}^{2}L_{omt}^{2})+T_{N}}{L_{c}^{2}L_{omt}^{2}} \textrm{ since } \abs{d}^{2} \ll L_{omt}^{2} \label{eqn:T1_simplified}
\end{align}
This simplified expression for $T_{1}$ is exactly what we would derive using a conventional noise temperature analysis, indicating that the software has calculated the noise temperature correctly.
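As a spot check of Equation~\ref{eqn:T1_simplified}, the fiducial losses quoted above can be plugged in directly (a sketch; dB insertion losses are converted to linear power ratios):

```python
import math

def db_to_linear(db):
    """Convert a loss in dB to a linear power ratio."""
    return 10.0 ** (db / 10.0)

def t1_simplified(Tp, Tn, Lc2_db, Lomt2_db):
    """T1 ~ (Tp(1 - Lc^2 Lomt^2) + Tn) / (Lc^2 Lomt^2)."""
    L = db_to_linear(Lc2_db) * db_to_linear(Lomt2_db)
    return (Tp * (1.0 - L) + Tn) / L

# Fiducial values from the text: Lc^2 = -0.1 dB, Lomt^2 = -0.2 dB, Tp = Tn = 15 K.
print(t1_simplified(15.0, 15.0, -0.1, -0.2))  # ~17.1 K, i.e. T1/Tn ~ 1.14
```

In the lossless limit the expression correctly collapses to $T_{1} = T_{N}$.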
\section{Conclusions}
We have presented {\verb SNS }, a MATLAB-based software library written to aid in the design and analysis of receiver architectures. It uses electrical scattering matrices and noise wave vectors to describe receiver architectures of arbitrary topology and complexity.
We use {\verb SNS } to compare two polarimeter architectures commonly used to perform measurements of the polarized CMB: differencing polarimeters, an architecture commonly used in PSB-based polarimeters; and pseudo-correlation polarimeters, an architecture commonly used in coherent polarimeters. This analysis highlights the differing sources of systematic error in these architectures: $I$ to $P$ leakage in pseudo-correlation polarimeters is almost immune to gain imbalance, but sensitive to OMT cross polarization; while $I$ to $P$ leakage in differencing polarimeters is immune to OMT cross polarization, but very sensitive to power detection imbalance.
We show how {\verb SNS } can be used to calculate analytical expressions for the receiver noise temperature of arbitrary receivers. Analytic expressions for the receiver temperature of a pseudo-correlation polarimeter are derived, and are found to be consistent with those obtained from conventional receiver temperature calculations.
\section*{Acknowledgments}
Thanks go to Dr Paul Grimes for helpful discussions and pointers to papers, to Prof. Mike Jones for helpful suggestions, support and motivation, and to Nikolai Yu. Zolotykh for developing the {\verb pointer } library for MATLAB. OGK acknowledges the support of a Dorothy Hodgkin Award in funding his studies while a student at Oxford, and the support of a W.M. Keck Institute for Space Studies Postdoctoral Fellowship at Caltech.
\section{Introduction}
Clusters of galaxies are the largest and most massive gravitationally bound systems in the Universe.
They are considered, at the same time, unique astrophysical laboratories and powerful cosmological probes \citep[e.g.,][]{White93, Bartlett94, Via99, Borgani01, Vikhlinin09, Rozo10, Mantz10, Allen11,Benson13, Bocquet15}.
Clusters grow from the highest density peaks in the early universe and thus their mass function is a tracer of the underlying cosmology \citep[e.g.,][]{Press74, Bahcall93, Gonzalez12}.
Due to the steep dependence of number density on mass in the dark matter halo mass function, deriving cluster masses accurately is of paramount importance, and large observational efforts have been devoted to this goal over the past three decades.
Different indirect methods, each of them leveraging unique observables of these systems, have been developed in the literature in order to weigh the most massive structures in the universe. These are:
(i) measuring the richness of a cluster, i.e. counting the number of galaxies associated with that cluster within a given radius \citep[e.g.,][]{Abell58, Zwicky68, Carlberg96, Yee99, Yee03, Rozo09, Rykoff12, Rykoff14, Andreon14, Andreon15, Andreon16, Saro15}; (ii) measuring the radial velocities of the cluster members, which yields the velocity dispersion of the cluster and can be used to derive the cluster's mass from the virial theorem, under the assumption that the structure is virialized \citep[e.g.,][]{Girardi96, Mercurio03, Demarco05, Demarco07}; (iii) measuring the X-ray emission of the hot intracluster medium, which, if the gas is in hydrostatic equilibrium, yields the mass by factoring in the gas density and temperature distribution \citep[e.g.,][]{Gioia90, Vikhlinin98, Bohringer00, Pacaud07, Suhada12, Ettori13}; (iv) measuring the inverse-Compton scattering of cosmic microwave background (CMB) photons off the energetic electrons in the hot intracluster gas, whose resultant characteristic spectral distortion to the CMB is known as the Sunyaev-Zel'dovich effect \citep[SZE,][]{Sunyaev72, Staniszewski09, Hasselfield13, Planck15, Bleem15}; and (v) measuring the coherent distortion that weak gravitational lensing produces on background galaxies, which has the advantage of requiring no prior knowledge of the baryon fraction of the cluster or its dynamical state \citep[e.g.,][]{Bartelmann01,Hoekstra07, Mahdavi08, High12, Hoekstra12, vonderLinden14, Umetsu14, Sereno15}.
While there are large cluster samples selected from optical and near-infrared photometric surveys up to $z<1.5$ \citep[e.g.,][]{Gladders00, Koester07, Menanteau10, Hao10, Brodwin11, Wen12, Rykoff14, Ascaso14, Bleem15a}, in recent years mid-infrared photometric surveys with {\it Spitzer} have extended the landscape.
The Infrared Array Camera \citep[IRAC,][]{Fazio04} onboard the {\it Spitzer} Space Telescope has proven to be a sensitive tool for studying galaxy clusters. Ongoing {\it Spitzer} wide-area surveys are proving effective at identifying large samples of galaxy clusters down to low masses at $1.5<z<2$ \citep[e.g., SDWFS, SWIRE, CARLA, SSDF;][]{Eisenhardt08,Papovich08,Wilson09,Demarco10,Galametz10,Stanford12,Zeimann12,Brodwin13,Galametz13,Muzzin13,Wylezalek13,Rettura14}, where current X-ray and SZE observations are restricted to only the most massive systems at these redshifts \citep{Brodwin11, Muzzin13}.
Even larger samples of clusters at $0.4<z<2.0$ will soon be available from upcoming and planned large scale surveys like the Dark Energy Survey (DES, The Dark Energy Survey Collaboration 2005), KiDS \citep{deJong13}, {\it Euclid}
\citep{Laureijs11}, LSST (LSST Dark Energy Science Collaboration 2012), and {\it WFIRST}. However, until the next generation SZE instrumentation (e.g., ACTpol, SPTpol, SPT3G - in any case only covering the Southern sky) or next generation X-ray telescopes (e.g., eRosita, Athena) become available, measuring the masses of the bulk of the high redshift clusters at 0.4$<$z$\lesssim$2 remains challenging.
In order to provide an efficient and reliable mass proxy for high redshift clusters up to z$\sim $2, in this paper, we calibrate a richness-mass relation using archival 4.5$\mu$m data on a sample
of published X-ray and SZE-selected clusters at 0.4$<$z$<$2.0. At these redshifts, the 4.5$\mu$m band traces rest-frame near-infrared light from the galaxies which is emitted by the high mass to light ratio stellar population. Thus, if the integrated mass function of galaxies is correlated with the cluster dark matter halo in which they reside, the near-infrared richness should provide a reasonable tracer of cluster mass.
This method of mass measurement has an advantage over the others described above: it is purely photometric, requires no a priori knowledge of the dynamical state of the cluster, and is observationally easy to obtain. We require only the cluster position, an approximate redshift estimate, and at least 90-second-depth coverage of IRAC 4.5$\mu$m data over a single pointing with a 5$^{\prime}\times$5$^{\prime}$ field of view.
The plan of the paper is as follows. In Section 2 we describe the archival cluster sample we have adopted throughout the work and describe how the cluster masses were derived. In Section 3 we present the {\it Spitzer} photometric cataloging procedure adopted. In Section 4 we present the definition of our richness indicator and study its dependence on survey depth and the aperture radius adopted. In Section 5 we calibrate the mass-richness relation for each sub-sample individually and combined. In Section 6 we discuss our results, the possibility of extending our method to other mid-infrared (MIR) all-sky surveys, and the implications of our findings for future wide-field infrared surveys such as those that will be undertaken with {\it Euclid}. In Section 7 we summarize the results.
Throughout, we adopt a $\Omega_{\Lambda} = 0.7$, $\Omega_{m} = 0.3$ and $H_{0} = 70\ \rm{km} \rm{s}^{-1} \rm{Mpc}^{-1}$ cosmology, and use magnitudes in the AB system.
\section{Sample Selection}
In the following sections we present the cluster samples drawn from the literature and the archival {\it Spitzer} data adopted in our analysis. Our aim is to assemble a large sample of clusters with known masses and redshifts for which archival IRAC data at 4.5 $\mu$m is publicly available. We define two cluster subsamples, based on literature X-ray masses and literature SZE masses.
\subsection{X-ray Clusters Sample}
The starting point for this sample is the Meta-catalog of X-ray detected clusters of galaxies (MCXC), a catalog of compiled properties of X-ray detected clusters of galaxies \citep[][and references therein]{Piffaretti11}. This catalog is based on the {\it ROSAT} All Sky Survey \citep[RASS,][]{Voges99} data on 1743 clusters at $0.003< z<1.261$ that have been homogeneously evaluated within the radius, $R_{500}$, corresponding to an overdensity of 500 times the critical density. For each cluster, the MCXC provides redshift\footnote{typical redshift uncertainty is $\sigma_{z}<$ 0.001 \citep[see discussion in][]{Liu15}}, coordinates, $R_{500}$, and X-ray luminosity in the $0.1-2.4$ keV band, $L_{500, [0.1-2.4 keV]}$. Based on the values published in \citet{Piffaretti11}, for each cluster we also derive the angular size, $\theta_{500}$= $R_{500}/DA(z)$, where $DA(z)$ is the angular diameter distance.
In order to define a richness parameter to be used as a proxy for cluster mass, $M_{500}$, we need to define the aperture radius in which galaxies should be counted and the redshift range for which this radius is still representative of the cluster $R_{500}$. To this aim, in Fig.~\ref{theta500} we show the entire MCXC sample $\theta_{500}$ vs. redshift relation. The red horizontal line indicates the 2.5 arcmin radius of a single {\it Spitzer}/IRAC field-of-view. The red asterisks indicate the mean $\theta_{500}$ per redshift bin of $\Delta z=0.1$, the error bars are the standard deviation of the mean per redshift bin. We note that at $z>0.4$ (dot-dashed line) the average $\theta_{500}$ of the sample is included within the {\it Spitzer}/IRAC field-of-view. Therefore we adopt this lower redshift cut to the cluster samples considered in our study. There are 142 clusters in MCXC at $z>0.4$.
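The conversion from $R_{500}$ to $\theta_{500}$ only requires the angular diameter distance for the adopted cosmology; a self-contained sketch (in practice a library such as astropy would be used instead):

```python
import math

C_KMS, H0, OM, OL = 299792.458, 70.0, 0.3, 0.7  # cosmology adopted in this work

def E(z):
    """Dimensionless Hubble parameter H(z)/H0 for flat LCDM."""
    return math.sqrt(OM * (1.0 + z) ** 3 + OL)

def angular_diameter_distance(z, n=2000):
    """D_A(z) in Mpc via a trapezoidal comoving-distance integral."""
    dz = z / n
    integrand = [1.0 / E(i * dz) for i in range(n + 1)]
    dc = dz * (0.5 * (integrand[0] + integrand[-1]) + sum(integrand[1:-1]))
    return (C_KMS / H0) * dc / (1.0 + z)

def theta500_arcmin(r500_mpc, z):
    """theta_500 = R500 / D_A(z), converted from radians to arcminutes."""
    return math.degrees(r500_mpc / angular_diameter_distance(z)) * 60.0

# An R500 = 1 Mpc cluster at z = 0.4 subtends ~3.1 arcmin; at fixed R500 the
# angular size shrinks toward the 2.5 arcmin IRAC half field at higher z.
print(round(theta500_arcmin(1.0, 0.4), 1))  # 3.1
```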
The most reliable X-ray masses are obtained by solving the equation of hydrostatic equilibrium, which requires measurements of the density and temperature gradients of the X-ray emitting gas \citep[see discussion in][]{Maughan07}. This is only possible for nearby bright clusters, therefore it remains a challenge for the majority of clusters detected in X-ray surveys, especially at high redshifts where
surface brightness dimming effects become significant. Thus, in most cases, cluster masses are estimated from simple properties such as X-ray luminosities ($L_{X}$) or a single global temperature ($kT$) via the calibration of scaling relations.
To derive an estimate of the total cluster mass, $M_{500}$, within $R_{500}$, we adopt the most recent calibrations, in particular in their redshift evolution, of the relations between X-ray global properties and cluster total mass, as presented in \cite{Reichert11}.
\citet[][see also references therein]{Reichert11} obtained these relations by homogenizing published estimates of X-ray luminosity and total mass. These values were rescaled at different radii and overdensities by using their dependence upon the gas density which was described by a $\beta$-model \citep{Cavaliere76}.
\citet{Reichert11} scaling relations, together with the MCXC luminosities, are then used here to run the following iterative process:
(i) an input temperature is assumed; (ii) a conversion from the MCXC $L_{500, [0.1-2.4 keV]}$ luminosity to the pseudo-bolometric (0.01--100 keV) value, $L_{500, bol}$, is derived assuming the thermal {\tt apec} model in XSPEC \citep{Arnaud96}, adopting the temperature assumed at step (i) and a metal abundance of 0.3 times the solar value; (iii) a value of the mass within an overdensity of 500 with respect to the critical density of the universe at the cluster's redshift is then calculated from Eq.~26 in \cite{Reichert11},
\begin{equation}
\frac{M_{500, X}}{10^{14} M_{\odot}}=(1.64 \pm 0.07)\cdot\bigg(\frac {L_{500, bol}}{10^{44} erg \cdot s^{-1}}\bigg)^{0.52 \pm 0.03} \cdot \bigg(\frac{H(z)}{H_{0}}\bigg)^{\alpha},
\label{lum-mass equation}
\end{equation}
where $H(z)=H_{0}\sqrt{\Omega_{\Lambda} +\Omega_{m}(1+z)^3 +\Omega_{k}(1+z)^2}$, $\Omega_{k}=(1-\Omega_{m}-\Omega_{\Lambda})$ and $\alpha=-0.90^{+0.35}_{-0.15}$;
(iv) a new temperature is recovered from the $M$-$T$ relation (Eq.~23 in \cite{Reichert11}) and compared to the input value assumed at step (i) ; (v) the calculations are repeated if the relative difference between these two values is larger than 5 percent.
We also consider a correction to the given luminosity due to the change in the initial $R_{500}$.
This correction, typically a few percent, is obtained as described in \cite{Piffaretti11}, by evaluating the relative change of the square of the gas density profile integrated over a cylinder of radius $r=R_{500}$ and height $2 \times 5 \, R_{500}$.
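Once $L_{500,bol}$ is in hand, Equation~\ref{lum-mass equation} is straightforward to evaluate; a sketch of that single step (the XSPEC-based bolometric conversion and the iteration on temperature are not reproduced here):

```python
import math

OM, OL = 0.3, 0.7  # cosmology adopted in this work

def Ez(z):
    """H(z)/H0 for a flat cosmology (Omega_k = 0)."""
    return math.sqrt(OL + OM * (1.0 + z) ** 3)

def m500_from_lbol(lbol_1e44, z, A=1.64, B=0.52, alpha=-0.90):
    """M500 in units of 1e14 Msun from Eq. 26 of Reichert et al. (2011)."""
    return A * lbol_1e44 ** B * Ez(z) ** alpha

# At z = 0 a 1e44 erg/s cluster maps onto the normalization, 1.64e14 Msun;
# the negative evolution exponent lowers the inferred mass at fixed L as z grows.
print(m500_from_lbol(1.0, 0.0))  # 1.64
print(m500_from_lbol(1.0, 1.0))  # < 1.64
```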
\begin{figure}[htb]
\epsscale{1.0}
\includegraphics[scale=.35]{Fig1-eps-converted-to.pdf}
\caption{$\theta_{500}$ as a function of redshift for the entire MCXC sample of X-ray clusters. The horizontal red line indicates half the typical field of view of a single {\it Spitzer}/IRAC pointing. At $z>0.4$ (dot-dashed line) the average $\theta_{500}$ of the sample is included in the {\it Spitzer}/IRAC field-of-view. Asterisks are average values in bins of redshift of size $\Delta z =0.1$. The final X-ray sample analyzed in this study is indicated with solid red circles.
}
\label{theta500}
\end{figure}
\begin{figure}[htb]
\epsscale{1.0}
\includegraphics[scale=.3]{Fig2-eps-converted-to.pdf}
\caption{The {\it ROSAT}-based bolometric luminosities derived in this work are plotted against those measured with {\it Chandra} by \citet{Maughan12} for a sub-sample of 26 clusters in common.}
\label{Maughan}
\end{figure}
As a consistency test, for a subsample of common clusters, we can also compare the bolometric luminosities we have obtained with those independently derived by \citet{Maughan12}.
\citet{Maughan12} have used a sample of 115 galaxy clusters at $0.1<z<1.3$ observed with {\it Chandra} to investigate the relation between X-ray bolometric luminosity and $Y_X$ (the product of gas mass and temperature) and found a tight $L_X-Y_X$ relation \citep{Maughan07}. They also demonstrate that cluster masses can be reliably estimated from simple luminosity measurements in low quality data where direct masses, or measurements of $Y_X$, are not possible.
There are 26 clusters in common between our {\it ROSAT}-based sample and their {\it Chandra} sample. In Fig.~\ref{Maughan} we compare the bolometric luminosities obtained independently and find the values to be in very good agreement.
We then searched for {\it Spitzer}/IRAC archival observations homogeneously covering at least an area within 2.5 arcmin radius from the cluster center coordinates, and with a minimum exposure time of 90 seconds. This depth ensures that we reach at least a 5$\sigma$ sensitivity limit of 21.46 AB mag (9.4 $\mu Jy$) at 4.5 $\mu$m (see Section 3.1 for further discussion of required depth).
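The quoted flux and magnitude limits are related through the AB zero point of 3631 Jy; a quick conversion sketch:

```python
import math

def ab_mag_from_flux(f_uJy):
    """AB magnitude from a flux density in microJansky (zero point 3631 Jy)."""
    return -2.5 * math.log10(f_uJy * 1e-6 / 3631.0)

def flux_from_ab_mag(m_ab):
    """Flux density in microJansky from an AB magnitude."""
    return 3631.0 * 10.0 ** (-0.4 * m_ab) * 1e6

# The quoted 5-sigma limit: 9.4 uJy at 4.5 um corresponds to ~21.47 AB,
# consistent with the 21.46 AB quoted in the text to within rounding.
print(round(ab_mag_from_flux(9.4), 2))
```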
These requirements result in a final X-ray-selected sample comprised of 47 galaxy clusters at $0.4< z<1.27$ (indicated by red circles in Fig.~\ref{theta500}). We note that a few large clusters (indicated by red circles above the red line in Fig.~\ref{theta500}) have still been considered throughout this work. This is because the mean $\theta_{500}$ in those redshift bins is smaller than the IRAC field
of view. It also ensures an adequate sample size and avoids biasing our derived richness-mass
relation against large, less-concentrated clusters.
The derived cluster mass and redshift distributions of our X-ray sample are illustrated in Fig.~\ref{sample} (blue circles and histograms).
\begin{figure}
\epsscale{1.0}
\includegraphics[scale=.475]{Fig3-eps-converted-to.pdf}
\caption{Cluster mass and redshift distributions of the X-ray- (blue) and SZE-selected (red) cluster samples studied in this work. Both subsamples extend over similar ranges of the $M_{500} - z$ plane, and the median values are indicated by the dashed lines of the corresponding color.}
\label{sample}
\end{figure}
\subsection{SZE Clusters Sample}
Recent years have seen rapid progress in both the quality and quantity of SZE measurements, using a variety of instruments. Several programs have therefore been launched in the past few years with the aim of measuring, through the SZ effect, the total masses of large samples of clusters, both for cosmology and astrophysics studies.
{\it Spitzer}/IRAC coverage of some of the SZE survey fields has been requested by various investigators, and targeted SZE observations of existing MIR-selected clusters have also been obtained.
We have therefore mined the {\it Spitzer}/IRAC archive and drawn a heterogeneous sample of spectroscopically confirmed, SZE-selected clusters from a number of these programs.
Applying the same redshift and photometric coverage selection criteria illustrated in \S 2.1, the final SZE-selected sample considered in our study is comprised of 69 galaxy clusters at $0.4< z<2.0$.
In particular, our sample is comprised of 4 clusters from the {\it Planck} Cluster Catalog \citep{Planck15}, 4 clusters from the Massive Distant Clusters of {\it WISE} Survey \citep[MADCoWS,][]{Brodwin15}, 1 cluster from the IRAC Distant Cluster Survey \citep[IDCS,][]{Brodwin12}, 1 cluster from the {\it XMM}-Newton Large Scale Structure Survey \citep[XLSSU,][]{Pierre11, Mantz14}, and 59 clusters from the SPT-SZ Cluster Survey \citep[SPT-SZ,][]{Bleem15}.
Cluster masses, $M_{500, SZ}$, as reported in the aforementioned papers, are based on the spherically integrated Comptonization measurement, $Y_{500, SZ}$, obtained by either the {\it Planck} space telescope, the Combined Array for Research in Millimeter-wave Astronomy (CARMA\footnote{\url{https://www.mmarray.org}}) or the South Pole Telescope \citep[SPT;][]{Carlstrom11, Austermann12, Story13}.
Cluster mass and redshift distributions of the final SZE-selected sub-sample are illustrated in Fig.~\ref{sample} (red circles and histograms) and can be compared with the X-ray sample shown therein.
\section{{\it Spitzer} Data}
Publicly available {\it Spitzer}/IRAC data for each cluster in our sample is accessible via the {\it Spitzer Heritage Archive} (SHA). All of the IRAC data for the X-ray-selected sample were acquired during the initial cryogenic mission, while all but four of the SZE sample datasets were acquired during the post-cryogenic {\it Warm Mission}. The {\it Warm} and {\it Cryo} missions have been put onto the same calibration scale in the SHA-provided data products, so we expect no differences between the missions to be relevant to this work.
\subsection{Source Extraction}
The publicly accessible Spitzer Enhanced Imaging Products\footnote{\url{http://irsa.ipac.caltech.edu/data/SPITZER/Enhanced/SEIP}} (SEIP) provide super mosaics (combining data from multiple programs where available) and a source list of photometry for sources observed during the cryogenic mission of {\it Spitzer}. The SEIP includes data from the four channels of IRAC (3.6, 4.5, 5.8, 8 $\mu$m) and the 24 $\mu$m channel of the Multi-Band Imaging Photometer for Spitzer (MIPS) where available. In addition to the {\it Spitzer} photometry, the source list also contains photometry for positional counterparts found in the AllWISE release of the Wide-Field Infrared Survey Explorer (WISE) and in the Two Micron All Sky Survey (2MASS). To ensure high reliability, strict cuts were placed on extracted sources, and some legitimate sources may appear to be missing. These sources were removed by cuts in size, compactness, blending, shape, and SNR, along with multi-band detection requirements. In most fields, the completeness of the source list is well matched to expectations for an SNR = 10 cutoff, as reliability is favored over completeness. However, the list may be incomplete in areas of high surface brightness and/or high source surface density. For this work, this is most relevant for objects near bright sources or near the centers of clusters, where the source density is higher.
Following the recommendations in \citet{Surace04}, for our richness estimate we adopt the aperture-corrected IRAC 4.5 $\mu$m flux density measured within an aperture of diameter 3.8 arcseconds from the SEIP source list. The chosen aperture is twice the instrumental FWHM, which provides accurate photometry with an aperture correction for a point source already applied, as is customary for cluster studies with {\it Spitzer} in the literature \citep[e.g.,][]{Bremer06, Rettura06}. The IRAC PSF has a FWHM of $\sim$2 arcsec; thus star/galaxy separation in {\it Spitzer} data, especially at faint fluxes, is not straightforward. We therefore describe in \S 4 how we account and correct for foreground stars in our richness estimates.
For the {\it Warm Mission} data a SEIP source list is not available in the {\it Spitzer} archive. However we have adopted the same SEIP source extraction pipeline and applied it ourselves in exactly the same way as for the {\it Cryo} mission clusters.
\begin{figure}[t!]
\epsscale{1.0}
\includegraphics[scale=.3]{Fig4_new3-eps-converted-to.pdf}
\caption{4.5$\mu$m number counts for four representative clusters in our sample (black dashed histogram), compared to number counts of the deeper SpUDS control field (red solid histogram). The left column shows the number counts for two X-ray clusters while the right column shows the number counts for two SZE clusters.}
\label{numcounts}
\end{figure}
\begin{figure}[t!]
\epsscale{1.0}
\includegraphics[scale=.55]{Fig5-eps-converted-to.pdf}
\caption{Histogram of the 4.5$\mu$m depths reached by the archival data available for the X-ray (blue) and the SZE (red) samples. The dashed lines indicate the median depth of each sample.}
\label{histodepth}
\end{figure}
\begin{figure*}[h!]
\epsscale{1.0}
\includegraphics[scale=.6]{Fig6-eps-converted-to.pdf}
\caption{{\it Top-left Panel:} $[4.5]$-band image of a representative cluster in our sample at $z=1.132$. The white dashed circle has a radius $r=2'$. {\it Top-right Panel:} Positions of all the sources extracted by the photometric pipeline from the 4.5$\mu$m band image of the cluster, indicated by black diamonds. Sources with magnitudes $[4.5]< 21$ AB are indicated by open red diamonds. The black circle has a radius $r=2'$, centered on the reported cluster center. Magnitude-selected sources that are also within the circle are indicated with filled red symbols. {\it Bottom-left Panel:} [4.5] magnitude distribution of all sources in the {\it Spitzer}/IRAC image of this cluster. The red dot-dashed line indicates the magnitude cut adopted consistently throughout this work. The number of sources, $N_{Cluster}$, brighter than $[4.5]_{cut}=21$ AB and within 2$\arcmin$ of the cluster center is indicated. {\it Bottom-right Panel:} Distribution of the number of sources in the control field brighter than $[4.5]= 21$ and within $r<2'$ of each source extracted in the SpUDS photometric catalog. The red line indicates a 2-$\sigma$ clipped Gaussian fit of the distribution. The red dot-dashed line indicates the mean of the Gaussian fit, $<N_{Field}>$, which is used for the source background correction throughout this work, as described in the text. Clearly, the cluster field has more than twice as many objects within the aperture.}
\label{method}
\end{figure*}
\begin{figure}[t!]
\epsscale{1.0}
\includegraphics[scale=.31]{Fig7-eps-converted-to.pdf}
\caption{{\it Spitzer}/IRAC images of four clusters in our sample with similar M$_{500,SZ}$ but different measured richness or redshift. The white dashed circles have a radius $r=2'$, centered on the reported cluster. Images (A) and (B) show clusters at similar redshift, $z\sim 0.5$, while (C) and (D) are instead at $z\sim 1$. Despite having similar redshifts and masses, (A) and (B) show large differences in their richness values, suggesting that there may be other dependencies. Images (A) and (C), however, show clusters of the same mass found with similar richness despite being at different redshifts.}
\label{Panel_method}
\end{figure}
\subsection{Survey Depth}
As we deal with a heterogeneous sample that has been observed by {\it Spitzer} at varying depths, for consistency of our analysis we aim to calibrate our method to a depth that is reached by all our archival data.
For illustration purposes, we show in Fig.~\ref{numcounts} the number counts of four representative clusters in our samples along with the number counts derived from a reference deep {\it Spitzer} legacy program that we adopt as a control field. The {\it Spitzer} UKIDSS Ultra Deep Survey (SpUDS, PI: J. Dunlop) data used here come from a program covering $\sim$1 deg$^{2}$ in the UKIRT Infrared Deep Sky Survey, Ultra Deep Survey field \citep[UKIDSS UDS,][]{Dye06}, centered at R.A.=02$^{h}$:18$^{m}$:45$^{s}$, Decl.$=-05^{\circ}$:00$^{'}$:00$^{''}$. Note that we use the SEIP source list photometry available in the archive for our control (SpUDS) field as well. The SpUDS survey reaches greater sensitivities than the data on the majority of our clusters, in particular for the SZE sample, as shown by the examples in the right column of Fig.~\ref{numcounts}.
As shown in Fig.~\ref{histodepth} for the entire sample, the IRAC coverage of our samples is not uniform. The median depth of the SZE cluster observations reaches $[4.5]=21$ AB, for instance, while the median depth for the X-ray sample reaches $[4.5]=22.5$ AB. For the sake of overall consistency of our analysis, and to calibrate our method to a depth that the vast majority of current and future {\it Spitzer} surveys can easily reach (with even a 90 second exposure), we adopt $[4.5]_{cut}=21$ AB as the magnitude cut for all subsequent analyses. We will further investigate the dependence of richness estimates on image depth in \S 4.1.
For galaxy stellar populations formed at high redshift, a negative k-correction provides a nearly constant 4.5$\mu$m flux density over a wide redshift range. An $L_{[4.5]}^{*}$ galaxy formed at $z_{f} = 3$ will have $[4.5] \sim 21$ (AB) at $0.4 \lesssim z \lesssim 2.0$, which is sufficiently bright that it is robustly detected in even just a 90 second integration with {\it Spitzer} \citep[e.g.,][]{Eisenhardt08}.
While we recognize that using the simple approach of a single apparent magnitude cut at $0.4<z<2.0$ would introduce a bias for optical mass-richness relations, an infrared relation is not significantly affected because of the k-correction. Adopting the \citet{Mancone10} results on the redshift evolution of the characteristic absolute magnitude $M^{*}_{[4.5]}(z)$, we note that at 4.5$\mu$m, over the redshift range spanned by our sample, stellar population evolution and redshift evolution roughly compensate, so that we sample a similar rest-frame luminosity range of the cluster galaxy population as a function of redshift. Because of this evolution seen through the 4.5$\mu$m band filter at $0.4\lesssim z \lesssim 2$, our adopted apparent magnitude limit $[4.5]_{cut}$ always corresponds to a roughly similar absolute magnitude, $M_{[4.5]_{cut}}\sim M^{*}_{[4.5]}(z)+1$, over this large redshift range. We find in fact that $M_{[4.5]_{cut}}$ varies only by 0.3 mag, between $M^{*}_{[4.5]}(z)$+0.87 (at $z\gtrsim$1.2) and $M^{*}_{[4.5]}(z)$+1.17 (at $z\sim$0.5). In \S 4.1, based on a subsample of clusters for which deeper data are available, we study the dependence of richness estimates on survey depth and parameterize a linear relation (Eq.~3) to account for these effects. Accordingly, a variation in magnitude cut by 0.3 mag results in a variation in richness of $\Delta R \sim 6~gals \cdot Mpc^{-2}$, hence in logarithmic scale $\Delta \log R \sim 0.05$, which is small and will not significantly increase the scatter in the mass-richness relation we derive in \S 5.
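As a quick numerical cross-check of the estimate above (an illustrative sketch, not part of our measurement pipeline), the slope of the linear richness-depth relation of \S 4.1 converts the 0.3 mag variation in the effective magnitude limit into the quoted richness and log-richness changes; the fiducial richness of 50 $gals \cdot Mpc^{-2}$ assumed below is hypothetical, chosen only to illustrate the logarithmic estimate:

```python
import math

# Slope of the linear richness-depth relation of Sec. 4.1
# (galaxies per Mpc^2 per magnitude of depth).
SLOPE = 18.97

delta_mag = 0.3                      # variation in effective magnitude limit
delta_R = SLOPE * delta_mag          # ~5.7 galaxies / Mpc^2, quoted as ~6

# Hypothetical fiducial richness, only for the log-scale estimate.
R_fid = 50.0
delta_logR = math.log10((R_fid + delta_R) / R_fid)  # ~0.05 dex

print(delta_R, delta_logR)
```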
\section{Derivation of {\it Spitzer} 4.5$\mu$m Richness }
\begin{figure}[t!]
\epsscale{1.0}
\includegraphics[scale=.575]{Fig8-eps-converted-to.pdf}
\caption{The dependence of the average richness, $<R_{[4.5]}>$, on the adopted 4.5$\mu$m magnitude cut, $[4.5]_{cut}$, and aperture radius for the reference X-ray `deep sample'. Error bars indicate the standard deviation of the mean.}
\label{magdepth}
\end{figure}
The richness of a cluster is a measure of the surface density of galaxies associated with that cluster within a given radius. Because of the presence of background and foreground field galaxies and foreground stars, one cannot identify which individual sources in the vicinity of a cluster belong to the cluster. Richness is therefore a statistical measure of the galaxy population of a cluster, based on some operational definition of cluster membership and an estimate of the foreground/background contribution to be subtracted.
Furthermore, as we aim to provide an efficient and inexpensive 4.5$\mu$m photometric proxy of cluster mass within $R_{500}$, we need to adopt an aperture radius large enough to minimize the Poisson scatter in richness, while taking into account the typical $R_{500}$ of clusters at $z>0.4$ and the angular size constraint set by the single-pointing {\it Spitzer} field-of-view. Thus we define a richness parameter, $R_{[4.5]}$, as the background-subtracted projected surface density of sources with $[4.5]<21$ AB within 2 arcminutes of the cluster center, expressed in units of $galaxies \cdot Mpc^{-2}$.
We first measure the number of objects in the vicinity of the cluster, $N_{Cluster}$, with $[4.5]<21$\,mag within 2 arcminutes of the cluster center determined from the SZE or X-ray data (bottom-left panel of Fig.~\ref{method}). In order to estimate the number of background sources (stars and galaxies) to subtract, we use the SpUDS survey to derive a mean blank-field surface density of sources above the same magnitude limit.
To estimate this, we measure the number of sources above the magnitude limit within an aperture radius of 2 arcminutes of each source with $[4.5]<21$ in the SEIP photometric catalog of the SpUDS field. We then fit a Gaussian to the distribution, iteratively clipping at 2$\sigma$ (see bottom-right panel of Fig.~\ref{method}). The resulting mean of the distribution, $<N_{Field}>=76~gals$, is then subtracted from $N_{Cluster}$.
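The iteratively clipped background estimate can be sketched as follows (a minimal pure-Python illustration; the actual measurement fits a Gaussian model to the clipped distribution, which is approximated here by the clipped sample mean, and the input counts are hypothetical):

```python
import statistics

def clipped_mean(counts, nsigma=2.0, max_iter=10):
    """Iteratively sigma-clip a list of aperture counts and return
    the mean of the surviving values, approximating the mean of a
    2-sigma clipped Gaussian fit."""
    data = list(counts)
    for _ in range(max_iter):
        mu = statistics.mean(data)
        sigma = statistics.pstdev(data)
        kept = [x for x in data if abs(x - mu) <= nsigma * sigma]
        if len(kept) == len(data):   # converged: nothing clipped
            break
        data = kept
    return statistics.mean(data)

# Hypothetical aperture counts with one outlier (a rich cluster
# sightline) contaminating an otherwise flat background:
print(clipped_mean([74, 75, 76, 77, 78, 300]))  # -> 76
```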
This method of background subtraction, however, assumes that the stellar density in the SpUDS field is the same as that in the cluster field, which need not be true given the structure of our Galaxy.
As we deal with an all-sky, archival sample of clusters, we correct for the variation of the foreground star counts with Galactic latitude. Using the \citet{Wainscoat92} model predictions\footnote{web tool available at: \url{http://irsa.ipac.caltech.edu/applications/BackgroundModel/}} for the IR point-source sky, we can estimate the number of stars with $[4.5]<21$ within 2 arcminutes of the center of each cluster in our sample, $N_{S}$, and compare it to the average value for the SpUDS field, $N_{S, Field}=9.4$. Thus we can correct our richness estimate at the location of each cluster for the difference in star counts by subtracting the difference between these numbers:
\begin{equation}
R_{[4.5]}= N_{Cluster} - <N_{Field}> - (N_{S} - N_{S, Field})
\label{rich corr}
\end{equation}
$N_{Cluster}$ and $N_{S}$ for each cluster in our sample are listed in Tables 1 and 2.
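Equation~1 amounts to a simple count correction; a minimal sketch (the values for $<N_{Field}>$ and $N_{S, Field}$ are the ones quoted in the text, while the example cluster counts are hypothetical):

```python
N_FIELD_MEAN = 76.0   # <N_Field>: mean SpUDS background counts per aperture
N_S_FIELD = 9.4       # average predicted star counts for the SpUDS field

def corrected_counts(n_cluster, n_stars):
    """Background- and star-count-corrected source counts (Eq. 1),
    before normalization by the aperture area in Mpc^2."""
    return n_cluster - N_FIELD_MEAN - (n_stars - N_S_FIELD)

# Hypothetical cluster with 160 sources and 12 predicted foreground stars:
print(corrected_counts(160, 12))  # 160 - 76 - (12 - 9.4) = 81.4
```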
To test the fidelity of the calibrated model of the Galaxy adopted here, we have also compared the \citet{Wainscoat92} predictions with those from a more recent model of the Galaxy, TRILEGAL \citep{Girardi12}.
At each of the 116 cluster positions, TRILEGAL was run 10 times with varying input parameters (IMFs, extinction laws, models of the thin/thick disk, halo and bulge) to output the mean and standard deviation of the number of stars within 2 arcminutes. We find the results of the two models in remarkably good agreement: at the coordinates of our sample clusters, the median difference between the outputs of the two models is only $\sim$1.5 stars within a 2 arcminute radius, and the mean difference is $\sim$4.3 stars. This difference is two orders of magnitude smaller than the typical total source counts, $N_{Cluster}$, at the location of the clusters (see Tables~1 and 2) and hence negligible with respect to the typical (Poissonian) errors reported here.
Finally, we normalize by the surface area subtended by the 2$\arcmin$ radius aperture at the redshift of each cluster and express $R_{[4.5]}$ in units of $gals \cdot Mpc^{-2}$ throughout the paper (unless specified otherwise). We note that since projected areas evolve slowly with redshift, particularly at high redshift, our method is also suitable for clusters for which only a photometric redshift is available. For instance, for a $z_{phot}$=1.0 cluster, even a large redshift uncertainty of $\Delta z = \pm 0.1$ would result in a variation in area of just $\sim5\%$, implying a small variation in the inferred $R_{[4.5]}$.
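The weak sensitivity of the aperture area to redshift can be verified with a short flat-$\Lambda$CDM calculation (an illustrative sketch assuming $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_m = 0.3$, not necessarily the exact cosmology adopted elsewhere in this work):

```python
import math

H0, OM = 70.0, 0.3            # assumed flat LCDM parameters
C = 299792.458                # speed of light, km/s
D_H = C / H0                  # Hubble distance, Mpc

def angular_diameter_distance(z, n=2000):
    """D_A(z) in Mpc from a trapezoidal integral of 1/E(z)."""
    dz = z / n
    E = lambda zz: math.sqrt(OM * (1 + zz) ** 3 + (1 - OM))
    integral = sum((1.0 / E(i * dz) + 1.0 / E((i + 1) * dz)) * 0.5 * dz
                   for i in range(n))
    return D_H * integral / (1 + z)

def aperture_area(z, radius_arcmin=2.0):
    """Physical area (Mpc^2) subtended by a circular aperture."""
    theta = radius_arcmin / 60.0 * math.pi / 180.0   # radians
    r_mpc = angular_diameter_distance(z) * theta
    return math.pi * r_mpc ** 2

# Area variation for a z_phot = 1.0 cluster with Delta z = +/- 0.1:
a = aperture_area(1.0)
variation = max(abs(aperture_area(1.1) - a), abs(aperture_area(0.9) - a)) / a
print(variation)  # ~5%
```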
The derived richness values for our sample of clusters are listed in Tables 1 and 2. Richness uncertainties account for Poisson fluctuations in the background and cluster counts, as well as the uncertainty in the mean background counts shown in Fig.~\ref{method}.
We note that we do not adopt a color criterion in our richness definition. The $[3.6]-[4.5]$ color is known to be degenerate with redshift at $z \lesssim 1.3$, but can be used as an effective redshift indicator \citep[e.g.,][]{Papovich08,Muzzin13,Wylezalek13,Rettura14} at $z > 1.3$. The method takes advantage of the fact that the $[3.6]-[4.5]$ color is a linear function of redshift at $1.3 \lesssim z \lesssim 1.5$, while at $z \gtrsim 1.5$ the color reaches a plateau out to $z \sim 3$ (see also left panel of Fig.~\ref{models}). While an IRAC color cut $[3.6]-[4.5] > -0.1$ (AB) is effective at identifying galaxies at $z > 1.3$, due to the color degeneracy at lower redshifts at least one additional shallow optical band would be required to alleviate contamination from foreground interlopers at $z <0.3$ \citep[see discussion in][]{Muzzin13}.
Since optical data are unavailable for the large majority of our archival sample and $>90\%$ of our sample is comprised of clusters at $z < 1.3$, we do not include a color cut in our definition of richness. We also note that, by measuring richness at 4.5\,$\mu$m, corresponding to rest-frame near-infrared bands at the redshifts spanned by our sample, we trace the masses of galaxies better than optical richness estimates do, because stellar mass-to-light ratios show less scatter in the NIR than in the optical \citep[e.g.,][]{Bell01}.
\subsection{Dependence of Richness on Survey Depth and Aperture Radius}
In order to investigate the effect of the chosen aperture radius and the depth of the IRAC $4.5 \mu$m data on richness estimates, we performed a series of tests on a subsample of clusters for which the IRAC data are deep enough to allow us to measure richness values at different sensitivity levels.
As shown in Fig.~\ref{histodepth}, the X-ray sample contains a `deep' subsample of 36 clusters whose depth is $\geq 22.5$ mag AB. We measure the average richness (and standard deviation of the mean) of this sample down to various depths, $21 < [4.5]_{cut} < 22.5$, and within different aperture radii, $0.5'<r<2'$. As shown in Fig.~\ref{magdepth}, richness increases with the adopted magnitude cut, which is not surprising since there are typically more galaxies at fainter luminosities for a canonical luminosity function. This test validates the importance of adopting a uniform magnitude cut when dealing with a heterogeneous, archival sample. We also note that the slope and standard deviation of the richness are much smaller for the larger, adopted, 2 arcminute radius aperture than for the smaller apertures.
For the adopted 2 arcminute aperture radius, the dependence of the mean richness, $<R_{[4.5]}>$, on the adopted magnitude cut, $[4.5]_{cut}$, is best fitted with the linear relation
\begin{equation}
\frac{<R_{[4.5]}>}{galaxies \cdot Mpc^{-2}} = (-358.71) + (18.97) \times \frac{[4.5]_{cut}}{AB\hspace{0.1cm}mag}.
\label{richeness depth equation}
\end{equation}
This relation allows us to quantify the expected increase in the average richness value at increasing depths of the observations, due simply to a photometric effect, not to cluster-to-cluster variations or dynamical state. By means of extrapolation, this relation could also be used to predict the expected average richness value for samples of clusters at similar redshifts in upcoming, wide-area, infrared surveys (see discussion in \S 6.4).
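Evaluated at the magnitude cuts used in this work, the relation above gives (a direct numerical check, not a new measurement):

```python
def mean_richness(mag_cut):
    """Mean richness (galaxies / Mpc^2) of the `deep' X-ray
    subsample as a function of the adopted [4.5] magnitude cut,
    from the best-fit linear relation of Sec. 4.1."""
    return -358.71 + 18.97 * mag_cut

print(mean_richness(21.0))   # ~39.7 at the adopted cut
print(mean_richness(22.5))   # ~68.1 at the deep-subsample limit
```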
\begin{figure}[t!]
\epsscale{1.0}
\includegraphics[angle=-90,scale=.52]{Fig9-eps-converted-to.pdf}
\caption{The 4.5$\mu$m richness-mass relation for a sample of 47 X-ray selected clusters (top panel) and 69 SZE-selected clusters (bottom panel) at $0.4<z<2.0$. The dashed lines correspond to the best straight-line fits to data with errors in both coordinates for each sample respectively. The dotted lines indicate the 68.3$\%$ confidence regions of each fit.}
\label{MassRichnessXvsSZ}
\end{figure}
\begin{figure}[h!]
\epsscale{1.0}
\includegraphics[scale=.285]{Fig10-eps-converted-to.pdf}
\caption{The 4.5$\mu$m richness-mass relation for a sample of 47 X-ray selected clusters (blue circles) and 69 SZE-selected clusters (red circles) at $0.4<z<2.0$. The blue and red dashed lines correspond to the best straight-line fits to the individual samples shown previously in Figure 9. The solid line indicates the fit to the combined sample and the dotted lines indicate the 68.3$\%$ confidence regions of this fit.}
\label{MassRichness}
\end{figure}
\begin{figure}[h!]
\epsscale{1.0}
\includegraphics[angle=-90,scale=.625]{Fig11-eps-converted-to.pdf}
\caption{{\it Top:} Histogram of richness for the X-ray (blue) and the SZE (red) samples. The dashed lines indicate the median richness of each sample. {\it Bottom:} The dependence of richness on cluster mass and aperture radius for the X-ray (blue) and the SZE (red) samples.}
\label{Profile}
\end{figure}
\section{Calibrating a Cluster Mass-Richness relation at $0.4<z<2.0$}
In this section, we calibrate mass-richness relations based on our richness estimates defined in \S 4 and on total cluster masses as described in \S 2.1 and \S 2.2. In Fig.~\ref{MassRichnessXvsSZ} we show the relations we find for the 47 clusters of the X-ray sample (top panel) and the 69 clusters of the SZE sample (bottom panel). We perform a least-squares fit of the data for each sample individually with a single linear relation in log-quantities, taking errors in both variables into account, and find:
\begin{equation}
\log \frac{M_{500,X}}{M_{\odot}}= (13.68 \pm 0.17) + (0.57 \pm 0.23) \cdot \log \frac{R_{[4.5]}}{gals \cdot Mpc^{-2}},
\label{richXequation}
\end{equation}
and
\begin{equation}
\log \frac{M_{500,SZ}}{M_{\odot}}= (13.34 \pm 0.21) + (0.93 \pm 0.22) \cdot \log \frac{R_{[4.5]}}{gals \cdot Mpc^{-2}}.
\label{richYZequation}
\end{equation}
In Fig.~\ref{MassRichness} we show the relation we find for the 116 clusters of the combined sample (solid black line):
\begin{equation}
\log \frac{M_{{500}}}{M_{\odot}}= (13.56 \pm 0.25) + (0.74 \pm 0.18) \cdot \log \frac{R_{[4.5]}}{gals \cdot Mpc^{-2}}.
\label{richAllequation}
\end{equation}
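The combined-sample relation can be inverted directly into a quick mass estimate; a minimal sketch (the example richness value is hypothetical):

```python
import math

def mass_from_richness(R):
    """M_500 (solar masses) from the 4.5 micron richness
    (galaxies / Mpc^2), using the combined-sample best fit."""
    logM = 13.56 + 0.74 * math.log10(R)
    return 10.0 ** logM

# A hypothetical cluster with R_[4.5] = 40 gals / Mpc^2:
M = mass_from_richness(40.0)
print(f"{M:.2e}")   # ~5.6e14 Msun (with the ~0.25 dex scatter of the fit)
```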
To test the robustness of our fit, we have also run a bootstrap Monte Carlo test, in which the mass-richness relation is repeatedly resampled, to reveal whether a small subset of clusters could dramatically alter the result of the fit. We run the least-squares fitting algorithm 1000 times, each time randomly discarding 25\% of the sample, and then infer the mean and standard deviation of the resulting intercept and slope distributions. We find these values in excellent agreement with those of Eq.~\ref{richAllequation}.\\
We have also checked whether a small subsample of high-mass clusters could significantly alter the result of the fit. Even excluding the three most massive clusters in the sample, the resulting values of intercept and slope and their errors are still consistent within 1$\sigma$ with those presented in Eq.~\ref{richAllequation}.
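The bootstrap test described above can be sketched as follows (an illustrative Python example on synthetic data: ordinary least squares stands in for the errors-in-both-variables fit used in the paper, and all numerical inputs are assumptions):

```python
import random

def fit_line(xs, ys):
    """Ordinary least-squares intercept and slope."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

random.seed(1)
# Synthetic "log richness / log mass" points around a known line.
xs = [random.uniform(1.0, 2.3) for _ in range(116)]
ys = [13.56 + 0.74 * x + random.gauss(0, 0.25) for x in xs]

slopes = []
for _ in range(1000):
    keep = random.sample(range(len(xs)), int(0.75 * len(xs)))  # toss 25%
    a, b = fit_line([xs[i] for i in keep], [ys[i] for i in keep])
    slopes.append(b)

mean_slope = sum(slopes) / len(slopes)
print(mean_slope)  # close to the input slope of 0.74
```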
Based on the 68.3\% confidence regions of the fits (dotted lines), we estimate the associated errors in mass at fixed richness to be $\pm 0.25$ dex. We will discuss the dependence of the scatter of this relation on concentration in \S 6.2. The intrinsic scatter of the relation is measured in the $R_{[4.5]}$ direction around the best-fitting $R_{[4.5]}-M$ relation for that sample, and is denoted $\sigma_{R_{[4.5]} | M}$. We find $\sigma_{R_{[4.5]} | M}$=0.32 dex for our sample.
We compare the measured scatter in our relation with literature richness and mass estimates. Using an $r$-band luminosity-based optical richness estimator, $R_{L}$, \citet{Planck14} found the associated error in mass at fixed richness to be $\pm 0.27$ dex. The intrinsic scatter of their $R_{L}-M$ relation, $\sigma_{R_{L} | M}$=0.35, is also similar to the value that we have found. We recall that $R_{L}$ was defined by \citet{Wen12} for a large sample of low-redshift Sloan Digital Sky Survey \citep[SDSS,][]{York00} selected clusters for which X-ray masses were provided by \citet{Piffaretti11}. We note that their method cannot be extended to all clusters in our sample because SDSS lacks coverage of the Southern sky and because optical data deeper than those available from SDSS would be required to detect the bulk of cluster galaxies at $0.6 < z \lesssim 2$.
Also at lower redshifts, $0.03 <z< 0.55$, \citet{Andreon10} and \citet{Andreon15} have used multi-band SDSS photometry to define an optical richness estimator, $n_{200}$, aimed at counting red cluster members within a specified luminosity range and color as a proxy of the total mass, $M_{200}$, within the $R_{200}$ radius. These predicted masses are found to have a smaller 0.16 dex scatter with mass, but require optical photometry in at least two bands.
As a comparison with other observable-mass scaling relations, we note that \citet{Maughan07} studied the $L_{X}-M$ relation for a sample of 115 clusters at $0< z< 1.3$ and found the associated error in mass at fixed luminosity to be $\pm 0.21$ dex. They measured the intrinsic scatter in the $L_{X}-M$ relation to be $\sigma_{L | M}$= 0.39 when all of the core emission is included in their $L_{X}$ measurements. Interestingly, they also demonstrated that the scatter can be greatly reduced, to $\sigma_{L | M}$= 0.17, by excising the core emission in their $L_{X}$ measurements.
Furthermore \citet{Rozo14a, Rozo14b} have used SZE data from {\it Planck} and X-ray data from {\it Chandra} and {\it XMM-Newton} to calibrate $Y_{X}-M$, $Y_{SZ}-Y_{X}$ and $Y_{SZ}-M$ relations for different samples of clusters at $z<0.3$. They found low values for the scatters of the $Y_{SZ}-M$ relations, $\sigma_{Y_{SZ} | M}$= 0.12-0.20.
To summarize, the method we have proposed here requires only shallow, single pointing, single band observations and an estimate of the cluster center position and redshift to provide reliable richness-based cluster mass estimates. The very low observational cost associated with our approach makes it potentially available for very large samples of clusters at $0.4 < z < 2.0$.
\begin{figure}[t!]
\epsscale{1.0}
\includegraphics[scale=.4]{Fig12-eps-converted-to.pdf}
\caption{The 4.5 $\mu$m richness vs. mass relation color-coded by galaxy concentration for the combined sample. The solid line indicates the linear fit to the sample where also errors in both variables are taken into account. The dotted lines indicate the 68.3$\%$ confidence regions of this fit.}
\label{Concentration}
\end{figure}
\begin{figure*}[t!]
\epsscale{1.0}
\includegraphics[scale=.375]{Fig13-eps-converted-to.pdf}
\caption{Comparison of richness estimates (left panel) and total luminosity density of all sources measured within $r=120"$ from each cluster center (right panel) based on data from {\it Spitzer} at 4.5$\mu$m and {\it WISE} at 4.6$\mu$m. The solid lines represent the identity (1:1) lines. The dashed lines correspond to the best straight-line fit to the data with errors in both coordinates.}
\label{wise}
\end{figure*}
\section{Discussion}
In this section we discuss the richness distribution and galaxy surface density profile for our samples. We also examine potential sources of the scatter in the mass-richness relation, including sample selection and galaxy concentration. We then explore the possibility of extending our method to other MIR all-sky surveys like {\it WISE}, and the implication of our findings on future, wide-field near-infrared cluster surveys like {\it Euclid}.
\subsection{Richness Distribution and Profiles}
In the top panel of Fig.~\ref{Profile} we show the distributions of richness for our samples. The X-ray sample (blue histogram and blue dashed line) shows a larger median richness than the SZE sample (red histogram and red dashed line). This difference is statistically significant, as shown by the Kolmogorov-Smirnov (KS) test, which yields a probability $P_{KS} \sim 10^{-6}$ that the observed distributions of richness are drawn from the same parent population.
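The KS comparison can be illustrated with a minimal two-sample statistic (a pure-Python sketch; in practice a library routine such as scipy.stats.ks_2samp also supplies the associated p-value):

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum
    distance between the two empirical cumulative distributions."""
    a, b = sorted(sample_a), sorted(sample_b)
    na, nb = len(a), len(b)
    d = 0.0
    for x in sorted(set(a) | set(b)):
        f_a = sum(v <= x for v in a) / na
        f_b = sum(v <= x for v in b) / nb
        d = max(d, abs(f_a - f_b))
    return d

# Identical samples give D = 0; fully disjoint samples give D = 1.
print(ks_statistic([1, 2, 3], [1, 2, 3]), ks_statistic([0, 0], [5, 5]))
```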
In the bottom panel of Fig.~\ref{Profile} we show the dependence of richness on aperture radius for both cluster samples. We divide each sample into two equally populated bins of X-ray or SZE derived cluster mass. We calculate the richness profiles by measuring the background-subtracted projected surface density of galaxies with $[4.5]<21$ AB within $r= 30", 60", 120"$ from the cluster center.
We find the shapes of the richness profiles to be similar for both samples and in each mass bin, and the richness values consistent within the large scatter. However, the low-mass X-ray sample appears to be as rich as the high-mass SZE sample; hence, at fixed cluster mass, X-ray clusters appear to have, on average, higher near-infrared richness than SZE-selected clusters.
One possibility is that this difference is due to systematics in the derived masses for the X-ray and SZE samples. For example, if, despite the careful re-calibration in \S 2, the X-ray masses are underestimated, then the higher richness of the X-ray clusters would imply that they are more massive clusters. Given the extensive work on calibrating X-ray and SZ observables with mass, this seems unlikely.
Yet, without having SZ, X-ray and richness estimates of the same clusters in this redshift range, we cannot definitively rule it out.
Alternatively, could this be due to a selection effect, whereby X-ray surveys, at fixed cluster mass, typically select richer systems in which galaxy merging has been, on average, less effective than in SZE-selected clusters? X-ray surveys are indeed usually considered biased toward selecting more relaxed, more evolved systems \citep[e.g.,][]{Eckert11}, because of the presence of a surface brightness peak in the so-called `cool core' clusters. This clear peak of X-ray emission is more easily detected in wide, shallow surveys such as those of ROSAT, and it is considered to be associated with a decrease in the gas temperature, hence typical of relaxed structures.
On the other hand, most of the clusters newly discovered by {\it Planck} via SZE show clear indication of morphological disturbances in their X-ray images, suggesting a more active dynamical state \citep{PlanckIX11}. Interestingly, \citet{Rossetti16}, measuring the offset between the X-ray peak and the BCG population, a known indicator of an active cluster dynamical state \citep[e.g.,][]{Sanderson09, MannEbeling12}, have found evidence of the dynamical state of SZ-selected clusters to be significantly different from X-ray-selected samples, with a higher fraction of non-relaxed, merging systems.
In the hierarchical cluster formation scenario, clusters form by the infall of less massive groups along filaments.
Therefore, while it is possible that a larger fraction of the SZE-selected clusters in our sample are at a less developed stage of cluster formation, if the difference in richness between our samples were to be explained by X-ray clusters being more evolved and having accreted more galaxies within $R_{500}$, then they should also be the most massive. This is not found, as clearly shown in Fig.~\ref{sample}, unless there are mass calibration uncertainties larger than those discussed.
An intriguing possibility is that there are differences between the cluster samples in the intrinsic baryon fraction within $R_{500}$ relative to the cosmological baryon fraction.
If X-ray clusters at these redshifts harbor a larger baryon fraction per unit dark matter halo mass than SZE-selected clusters within the aperture radius, this could account for their lower total masses; moreover, more efficient cooling of the intracluster medium by thermal bremsstrahlung and the resulting increased star-formation efficiency could produce a larger population of luminous galaxies, translating into a higher richness. Indeed, simulations such as the Millennium simulations show a factor of two variation in the baryon fraction of $>10^{14}$\,M$_{\sun}$ dark matter halos, which could easily account for both the differences in derived masses and richness values. Even within the X-ray cluster sample, \citet{Vikhlinin09} show that the baryon fraction increases with increasing mass among their clusters, which is likely the origin of the richness-mass correlation that we derive. A comparison between the density profiles of SZE and X-ray clusters in \citet{PlanckIX11} shows that SZE clusters have shallower density profiles than X-ray-selected clusters, which may again argue that the baryons in SZE-selected clusters are predominantly at larger radii than in the X-ray sample.
However, extracting effects such as these from the data would again require SZE, X-ray and richness estimates of the same clusters in this redshift range, whereas our study has rather heterogeneous samples with different origins that we have attempted to place on the same calibration scale. Beyond stating these possibilities, it is therefore challenging to definitively identify the origin of the observed difference.
\subsection{Galaxy Concentration}
In an attempt to understand the source of the scatter in the mass-richness relation, we also measure the galaxy concentration of our cluster samples, defined as the ratio between the richness measured within $r= 60"$ and $r=120"$. By definition, a higher value of galaxy concentration corresponds to a system with a steeper galaxy surface density profile.
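Since richness is a surface density, this concentration is just the ratio of two aperture surface densities; a schematic version (hypothetical input counts, with the background correction omitted for brevity and an assumed angular scale of $\sim$8 kpc/arcsec at $z\sim1$):

```python
import math

def surface_density(n_sources, radius_arcsec, mpc_per_arcsec):
    """Projected surface density (sources / Mpc^2) within a
    circular aperture of the given angular radius."""
    r_mpc = radius_arcsec * mpc_per_arcsec
    return n_sources / (math.pi * r_mpc ** 2)

def concentration(n_60, n_120, mpc_per_arcsec):
    """Ratio of the surface densities within r = 60'' and r = 120''."""
    return (surface_density(n_60, 60.0, mpc_per_arcsec) /
            surface_density(n_120, 120.0, mpc_per_arcsec))

# Hypothetical counts: half of the r < 120'' sources lie within
# r < 60'' (a centrally concentrated system).
print(concentration(40, 80, 0.008))  # density ratio = (40/80) * 4 = 2.0
```

Note that the angular-to-physical scale cancels in the ratio, so the concentration depends only on the counts.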
As shown in Fig.~\ref{Concentration}, there is a hint that the clusters that deviate most from the fitted mass-richness relation are also the ones with the most centrally concentrated galaxy surface density profiles.
If we were to include a correction for galaxy concentration, the scatter of the mass-richness relation would slightly decrease, such that the associated error in mass at fixed richness becomes $\pm 0.22$ dex, indicating that surface density concentration plays a significant role in the scatter.
Possible origins for this are blending and source confusion in the IRAC images, beam dilution when the 2$\arcmin$ aperture is much larger than the cluster overdensity, or the impact of galaxy merging on richness estimates.
Images of galaxy clusters that are more centrally concentrated are more likely to be affected by blending of cluster galaxies in the core, given the {\it Spitzer} angular resolution. This could result in underestimating their richness. The amount of confusion is linearly proportional to the surface density of galaxies: a cluster that is 4 times as concentrated in surface density would have its richness underestimated by 0.6 dex, which is the offset of the red points in Fig.~\ref{Concentration} from the best-fit line.
Alternatively, systems with higher central galaxy concentration may have their richness estimates, calculated within $r=120"$, biased because the chosen aperture is too large, making their richness relatively smaller than that of less centrally concentrated systems. However, if this were purely an observational bias, we should have found a correlation between galaxy concentration and $\theta_{500}$ derived for the hot gas in the intracluster medium, a proxy of cluster size, which is not apparent.
If mergers were responsible for the scatter in the richness-mass relation, clusters of the same total mass, that might have experienced more or less merger events in their cores with respect to their outskirts, would result in lower or higher concentration measurements, and would therefore be found to have lower or higher values of richness. In the most extreme cases, as shown in Fig.~\ref{Panel_method}, galaxy clusters of the same total mass (as probed by their gas), and at the same lookback time, may have experienced very different evolutionary processes, resulting in large differences in richness and concentration inferred from the number and location of their cluster members.
Thus, we conclude that the scatter in the richness-mass relation that we derive is likely due to a combination of source confusion and differences in evolutionary history of the clusters.
\begin{figure*}[t!]
\epsscale{1.0}
\includegraphics[scale=.5]{Fig14-eps-converted-to.pdf}
\caption{Evolution of the $[3.6]-[4.5]$ color (left panel) and $H-[4.5]$ color (middle panel) with redshift for a set of Bruzual \& Charlot (2003) stellar population models with exponentially declining star formation rates with $\tau=0.1$ Gyr (early-type galaxy) and $\tau=1.0$ Gyr (star forming galaxy). These colors are used to translate our measure of [4.5]\,$\mu$m richness into a H-band richness estimate. The right panel shows the predicted richness (in $gals \cdot arcmin^{-2}$) for Euclid clusters at $0.4<z<2.0$, in the wide area survey ($H_{cut}$=24 AB), as a function of cluster mass.}
\label{models}
\end{figure*}
\subsection{{\it WISE} 4.5$\mu$m Richness}
The AllWISE\footnote{\url{http://wise2.ipac.caltech.edu/docs/release/allwise/}} program combined data from the cryogenic Wide-field Infrared Survey Explorer mission \citep[{\it WISE};][]{Wright10} and the post-cryogenic NEOWISE survey \citep{Mainzer11} to deliver a survey of the full MIR sky. Since all-sky catalogs in the W2 band at $4.6 \mu$m are available, they
could potentially allow us to extend our method outside the {\it Spitzer} footprint. Therefore, in this section we test whether our proposed method of deriving MIR richness estimates can be applied robustly to the publicly available {\it WISE} data.
To this aim, we use archival {\it WISE} 4.6$\mu$m data available at the location of all our clusters, and apply the same method described in \S 4. The SEIP Source List contains W2 photometry for positional counterparts found in the AllWISE release, both for all clusters in our sample and for the control SpUDS field. In Fig.~\ref{wise} (left panel) we show the richness estimates based on data from {\it Spitzer} and {\it WISE} for our SZE sample. The dashed line indicates the straight-line fit, to be compared to the 1:1 (solid) line. We note that there are several catastrophic outliers and that {\it Spitzer}-based richness values are systematically higher than their {\it WISE}-based counterparts.
We note that {\it WISE} is less sensitive than {\it Spitzer} and that its angular resolution is also poorer ($6.4"$ vs. $2.0"$).
To test whether the catastrophic discrepancies between the {\it Spitzer} and {\it WISE} richness estimates could be ascribed to confusion, we sum the flux densities of all objects detected by the two instruments within $r=120"$ of each cluster center. We also subtract a median flux density measured in the background field to obtain a flux overdensity at the location of the cluster.
Source confusion makes groups of sources in a high-resolution image appear as a single bright
source in a low-resolution image. By adding the flux densities, we remove the effect of confusion, which would otherwise bias richness estimates low.
We then use the redshift of the cluster to translate the summed flux overdensity into a luminosity surface density.
As shown in the right panel of Fig.~\ref{wise}, we find that the total luminosity density of each cluster appears to be conserved, as the two instruments provide matching measurements. Therefore we can ascribe the aforementioned discrepancies solely to the poorer angular resolution of {\it WISE}, with richness estimates depending strongly on the particular projected geometry of the cluster galaxies. At the {\it WISE} image quality, we expect a higher number of sources to be blended, resulting in lower counts of galaxies per cluster and hence lower richness values than those measured by {\it Spitzer}.
For example, a source overdensity of 30 galaxies in the 2$\arcmin$ radius aperture would correspond to 10 gals Mpc$^{-2}$, which in turn would correspond to a ratio of 11 {\it WISE} beams
per source. This is well below the classical confusion limit. In reality, the confusion is even worse, since the average underlying foreground source density also contributes to the confusion noise.
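The beams-per-source figure quoted above can be reproduced with a short back-of-the-envelope computation (a sketch on our part: we take the effective beam area to be $\pi\times(6.4'')^2$, an assumption, since the exact beam definition is not restated here):

```python
import math

# Rough confusion estimate for WISE at the r = 120" aperture scale.
# Assumption (ours, for illustration): the effective beam area is taken
# as pi * (6.4")^2 using the WISE W2 resolution quoted in the text.
aperture_area = math.pi * 120.0 ** 2   # aperture area, arcsec^2
beam_area = math.pi * 6.4 ** 2         # one WISE W2 beam, arcsec^2
n_beams = aperture_area / beam_area    # ~350 beams inside the aperture
beams_per_source = n_beams / 30.0      # ~11.7 for a 30-galaxy overdensity
```

A classical confusion limit of a few tens of beams per source then places such an overdensity well into the confused regime at {\it WISE} resolution.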
To summarize, we deem {\it WISE}-based richness estimates to be poorer proxies for cluster mass, preventing us from effectively extending our method beyond the {\it Spitzer} footprint. Calibrating a mass--richness relation for the {\it WISE} dataset will require a different technique and is therefore beyond the scope of this paper.
\subsection{Future Wide Field Near-Infrared Cluster Surveys}
The upcoming {\it Euclid} and {\it WFIRST} missions aim to survey large portions of the extra-galactic sky in the near-infrared (e.g., H band) to measure the effects of Dark Energy, but also have distant cluster studies as a key scientific goal. The {\it Euclid} wide-area survey, in particular, will observe 15000 deg$^{2}$, almost the entire extragalactic celestial sphere, down to a 5$\sigma$ point source
depth of H=24 mag (AB). They plan to use photometric-redshift overdensities to identify clusters, but this requires ancillary ground-based optical data which is currently being taken.
In this section, based on the results of our analysis, we try to provide a simple prediction of the expected richness values for {\it Euclid}-selected clusters and the range of masses that will be accessible.
{\it Euclid} is expected to detect $\sim 2 \times 10^{6}$ clusters at all redshifts, with $\sim 4 \times 10^{4}$ of them at $1 < z < 2$ and cluster masses $M_{200} \gtrsim 8 \times 10^{13} M_{\odot}$ \citep{Sartoris16}. The cluster sample in our study spans a similar mass and redshift range, hence we can attempt to predict the average richness expected for clusters at $0.4 <z < 2.0$ as a function of mass in the {\it Euclid} survey.
In Fig.~\ref{models} we show the evolution of the $[3.6]-[4.5]$ (left panel) and $H-[4.5]$ (middle panel) colors with redshift for a set of \citet{BC03} stellar population models with exponentially declining star formation rates. We show both the typical color evolution expected for an early-type galaxy (assuming $\tau=0.1$ Gyr) and a star forming galaxy ($\tau=1.0$ Gyr) as described in \citet{Rettura10,Rettura11}. As pointed out by several authors \citep[e.g.,][]{Papovich08, Muzzin13, Wylezalek13, Rettura14} the $[3.6]-[4.5]$ color is fairly insensitive to different modes of star formation out to $z\sim 3$ and can be used as a good redshift indicator at $z>1.3$. The $H-[4.5]$ color, instead, is more sensitive to galaxy star formation history, in particular between $1 < z < 2$.
According to the models shown in the middle panel of Fig.~\ref{models}, we expect the $H-[4.5]$ color of a galaxy to vary between $-0.7$ and $0.75$ AB (dashed lines), depending on its type, at $0.4 <z < 2.0$ (gray shaded area). This implies that, to match the {\it Euclid} $H_{cut}$=24 AB depth, we need an equivalent {\it Spitzer} 4.5 $\mu$m survey to reach $[4.5]_{cut}=24.7$ AB.
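Explicitly, the required depth is set by the bluest model color, since a galaxy with $H-[4.5]=-0.7$ at $H_{cut}=24$ is the faintest object at 4.5 $\mu$m that the equivalent survey must still detect:
$$
[4.5]_{cut} \;=\; H_{cut} - \min\left(H-[4.5]\right) \;=\; 24 - (-0.7) \;=\; 24.7\ {\rm AB}.
$$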
In \S 4.1, based on our `deep' sub-sample of 36 clusters for which the deepest IRAC coverage was available in the {\it Spitzer} archive, we derived a relation between the survey depth and the average richness of our cluster sample. As we have demonstrated, we are already below the knee of the galaxy luminosity function at these redshifts, and we do not expect the slope of the relation to change as we go deeper.
Using Eq. 3, we can then predict the richness of clusters at an equivalent depth of $[4.5]_{cut}=24.7$, i.e. down to $H_{cut}$=24. We predict the following levels of richness (in galaxies Mpc$^{-2}$) for {\it Euclid}-detected clusters as a function of cluster mass: $\log R_{[H]}=1.78 \pm 0.26$ ($\log M_{500} < 14.5$), $\log R_{[H]}=1.87 \pm 0.21$ ($14.5 \leq \log M_{500} \leq 14.75$), $\log R_{[H]}=1.99 \pm 0.16$ ($\log M_{500} > 14.75$).
In the right panel of Fig.~\ref{models} we show the expected mean richness (and standard deviation of the mean) in bins of cluster mass for the {\it Euclid} clusters at $0.4 <z < 2.0$, propagating
the uncertainty in the fit required to extrapolate to these faint flux densities. For immediate comparison with the future observational data, the figure reports the expected richness values in units of $galaxies \cdot arcmin^{-2}$. We conclude that typical {\it Euclid} clusters with masses of about 3$\times10^{14}$\,M$_{\sun}$ will show galaxy overdensities of $\sim$12 $galaxies \cdot arcmin^{-2}$.
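The conversion between physical and angular surface densities can be sketched numerically. This is our own cross-check, assuming a flat $\Lambda$CDM cosmology with $H_0=70$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_m=0.3$ (the adopted cosmology is not restated in this section): at $z\sim1$ the $r=120''$ aperture spans roughly 1 Mpc in radius, so a 30-galaxy overdensity corresponds to the $\sim$10 galaxies Mpc$^{-2}$ quoted earlier in the text.

```python
import math

# Flat LambdaCDM; H0 and Omega_m are assumed values, not taken from the paper.
H0, OM = 70.0, 0.3          # km/s/Mpc, dimensionless matter density
C_KMS = 299792.458          # speed of light, km/s

def comoving_distance(z, n=2000):
    """Comoving distance in Mpc via trapezoidal integration of c dz / H(z)."""
    dz = z / n
    total = 0.0
    for i in range(n + 1):
        zi = i * dz
        weight = 0.5 if i in (0, n) else 1.0
        total += weight / math.sqrt(OM * (1 + zi) ** 3 + (1 - OM))
    return C_KMS / H0 * total * dz

def kpc_per_arcsec(z):
    d_a = comoving_distance(z) / (1 + z)           # angular diameter distance, Mpc
    return d_a * 1000.0 * math.pi / (180 * 3600)   # physical scale, kpc/arcsec

scale = kpc_per_arcsec(1.0)              # ~8 kpc/arcsec at z = 1
r_mpc = 120.0 * scale / 1000.0           # the r = 120" aperture radius in Mpc
density = 30.0 / (math.pi * r_mpc ** 2)  # ~10 galaxies per Mpc^2
```

The same scale factor, squared, converts the predicted richness per Mpc$^{2}$ into the per-arcmin$^{2}$ values reported in the right panel of Fig.~\ref{models}.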
\section{Summary}
In this paper we have studied a sample of 116 X-ray and Sunyaev-Zeldovich effect selected galaxy clusters at $0.4 < z < 2.0$ observed by {\it Spitzer} at 4.5$\mu$m. Together, they span more than a decade in total cluster mass.
With the aim of providing a simple and efficient observable that easily translates into a proxy for cluster mass, we have defined a 4.5$\mu$m richness parameter that requires just a single pointing of IRAC imaging and a short exposure time ($\sim$90 sec) reaching a depth of [4.5$]<$21 AB mag.
We have obtained the following results:
\begin{itemize}
\item{} We have derived {\it ROSAT}-based X-ray bolometric luminosities and masses that are in agreement with independent studies performed using {\it Chandra} data by \citet{Maughan07, Maughan12}.
\item{} By analyzing deeper IRAC imaging data, available for a subsample of systems, we have studied and parameterized the dependence of our richness parameter on survey depth and aperture radius. We have found that richness measured in the larger radius adopted here, r=$2\arcmin$, is less sensitive to variations in depth.
\item{} We have calibrated a mass-richness relation for both subsamples, individually and combined. We have fitted linear relations in log-log space and estimated the associated error in mass at fixed richness to be $\pm 0.17, 0.22, 0.25$ dex
for the X-ray, SZE and combined samples, respectively. We find a slight dependence of the scatter on galaxy concentration, defined as the ratio between the richness measured within apertures of 1 and 2 arcminutes.
\item{}
We have measured the intrinsic log-scatter of our 4.5 $\mu$m richness-mass relation for the combined sample, $\sigma_{R_{[4.5]} | M}$=0.32 dex. This value is similar to the one obtained by \citet{Planck14} adopting a deeper SDSS-based optical richness estimator at lower redshifts. The scatter associated with our observable is larger than those obtained by \citet{Andreon15} and \citet{Rozo14b}, who, however, adopted richness estimates that require deeper multiband observations, which are challenging to obtain, particularly at high redshift.
\item{} We have found that similar {\it WISE}-based 4.6$\mu$m richness estimates would provide poorer proxies of cluster mass due to the lower angular resolution of the data with respect to {\it Spitzer}/IRAC, which results in source confusion.
\item{} Finally, we provide a calibration of the average richness as a function of cluster mass in the near-infrared, which can be applied to galaxy overdensities that will be detected by the upcoming {\it Euclid} mission through its wide-area near-infrared survey.
\end{itemize}
As {\it Spitzer} continues to survey large areas of the sky during its extended {\it Warm Mission}, our results make richness-based cluster mass estimates available for large samples of clusters up to $z \sim 2$ at a very low observational cost.
\acknowledgments A.R. is grateful to the Spitzer Archival Team for providing access to advanced data products and is thankful to Dr. Peter Capak for providing access to the SEIP photometry pipeline. A.R. is grateful to Drs. M. Nonino, L. Girardi for discussions and providing access to the TRILEGAL model runs. A.R. is grateful to Drs. Mark Brodwin, Anthony Gonzalez, Ben Maughan, Adam Mantz, Mauro Sereno, Loredana Vetere for interesting discussions, comments and suggestions that improved this manuscript. This work is based on data obtained with the {\it Spitzer Space Telescope}, which is operated by the Jet Propulsion Lab (JPL), California Institute of Technology (Caltech), under a contract with NASA. Support was provided by NASA through contract number 1439211 and 1484822 issued by JPL/Caltech.
\clearpage
\begin{table} [h!]
\caption{X-ray Selected Cluster Sample}
\label{table:1}
\centering
\tiny
\begin{tabular}{l l l l l c c c c}
\hline
\hline
Clus ID & RA & DEC & NAME & z & $\log M_{500, X}$ & $\log R_{[4.5]}$ & $N_{Cluster}$ & $N_{S}$ \\
& (deg., J2000) & (deg., J2000) & & & ($M_\odot$) & ($galaxies \cdot Mpc^{-2}$) & & \\
\hline
\hline
26 & 4.6408 & 16.4381 & MACS J0018.5+1626 & 0.5456 & 14.995 $\pm$ 0.035 & 1.645$^{+0.061}_{-0.071}$ & 162 & 14.32 \\
46 & 7.64 & 26.3044 & WARP J0030.5+2618 & 0.5 & 14.506 $\pm$ 0.029 & 1.716$^{+0.056}_{-0.065}$ & 173 & 19.31 \\
51 & 8.9971 & 85.2214 & WARP J0035.9+8513 & 0.8317 & 14.462 $\pm$ 0.034 & 1.721$^{+0.056}_{-0.064}$ & 242 & 37.17 \\
145 & 25.3846 & -30.5783 & 400d J0141-3034 & 0.442 & 14.514 $\pm$ 0.028 & 1.604$^{+0.064}_{-0.074}$ & 134 & 8.74 \\
156 & 28.1721 & -13.9703 & WARP J0152.7-1357 & 0.833 & 14.638 $\pm$ 0.035 & 1.448$^{+0.075}_{-0.091}$ & 149 & 8.60 \\
187 & 34.1404 & -17.7908 & WARP J0216.5-1747 & 0.578 & 14.435 $\pm$ 0.030 & 1.119$^{+0.106}_{-0.140}$ & 101 & 8.75 \\
200 & 37.6108 & 18.6061 & 400d J0230+1836 & 0.799 & 14.671 $\pm$ 0.035 & 1.520$^{+0.070}_{-0.083}$ & 167 & 15.70 \\
268 & 52.1504 & -21.6678 & 400d J0328-2140 & 0.59 & 14.545 $\pm$ 0.031 & 1.819$^{+0.050}_{-0.057}$ & 208 & 10.59 \\
276 & 53.2925 & -24.9447 & 400d J0333-2456 & 0.475 & 14.477 $\pm$ 0.029 & 1.459$^{+0.074}_{-0.089}$ & 123 & 10.73 \\
312 & 58.9971 & -37.6961 & 400d J0355-3741 & 0.473 & 14.528 $\pm$ 0.029 & 1.683$^{+0.058}_{-0.067}$ & 155 & 12.39 \\
316 & 61.3512 & -41.0042 & 400d J0405-4100 & 0.686 & 14.524 $\pm$ 0.032 & 1.525$^{+0.069}_{-0.082}$ & 156 & 13.23 \\
355 & 73.5462 & -3.015 & MACS J0454.1-0300 & 0.5377 & 14.954 $\pm$ 0.034 & 1.676$^{+0.059}_{-0.068}$ & 178 & 25.66 \\
380 & 80.2937 & -25.51 & 400d J0521-2530 & 0.581 & 14.491 $\pm$ 0.030 & 1.357$^{+0.083}_{-0.102}$ & 135 & 23.86 \\
382 & 80.5575 & -36.4136 & 400d J0522-3624 & 0.472 & 14.412 $\pm$ 0.028 & 1.589$^{+0.065}_{-0.076}$ & 150 & 22.29 \\
405 & 85.7117 & -41.0014 & RDCS J0542-4100 & 0.642 & 14.585 $\pm$ 0.032 & 1.552$^{+0.067}_{-0.080}$ & 170 & 26.83 \\
550 & 132.1983 & 44.9392 & RX J0848.7+4456 & 0.574 & 14.100 $\pm$ 0.028 & 1.375$^{+0.081}_{-0.100}$ & 126 & 13.56 \\
551 & 132.2346 & 44.8711 & RX J0848.9+4452 & 1.261 & 14.328 $\pm$ 0.041 & 0.889$^{+0.133}_{-0.193}$ & 105 & 13.54 \\
557 & 133.3058 & 57.9956 & 400d J0853+5759 & 0.475 & 14.435 $\pm$ 0.028 & 1.683$^{+0.058}_{-0.067}$ & 157 & 14.01 \\
586 & 141.6521 & 12.7164 & 400d J0926+1242 & 0.489 & 14.405 $\pm$ 0.028 & 1.567$^{+0.066}_{-0.078}$ & 141 & 14.00 \\
601 & 145.7796 & 46.9975 & RXC J0943.1+4659 & 0.4069 & 14.679 $\pm$ 0.030 & 1.939$^{+0.044}_{-0.049}$ & 192 & 10.53 \\
621 & 149.0121 & 41.1189 & 400d J0956+4107 & 0.587 & 14.465 $\pm$ 0.030 & 1.322$^{+0.086}_{-0.107}$ & 118 & 9.88 \\
631 & 150.5321 & 68.98 & 400d J1002+6858 & 0.5 & 14.455 $\pm$ 0.029 & 1.568$^{+0.066}_{-0.078}$ & 142 & 13.35 \\
634 & 150.7671 & 32.9078 & 400d J1003+3253 & 0.4161 & 14.711 $\pm$ 0.030 & 1.830$^{+0.050}_{-0.056}$ & 168 & 9.63 \\
713 & 164.2479 & -3.6244 & MS1054.4-0321 & 0.8309 & 14.661 $\pm$ 0.035 & 1.454$^{+0.075}_{-0.090}$ & 154 & 12.61 \\
743 & 169.375 & 17.7458 & 400d J1117+1744 & 0.547 & 14.417 $\pm$ 0.029 & 1.525$^{+0.069}_{-0.082}$ & 137 & 8.73 \\
747 & 170.0321 & 43.3019 & WARP J1120.1+4318 & 0.6 & 14.634 $\pm$ 0.032 & 1.547$^{+0.068}_{-0.080}$ & 146 & 8.31 \\
748 & 170.2429 & 23.4428 & 400d J1120+2326 & 0.562 & 14.522 $\pm$ 0.030 & 1.647$^{+0.061}_{-0.071}$ & 159 & 8.37 \\
825 & 180.5571 & 57.8647 & 400d J1202+5751 & 0.677 & 14.508 $\pm$ 0.032 & 1.372$^{+0.081}_{-0.100}$ & 129 & 9.45 \\
864 & 185.3542 & 49.3019 & 400d J1221+4918 & 0.7 & 14.605 $\pm$ 0.033 & 1.580$^{+0.065}_{-0.077}$ & 163 & 8.55 \\
865 & 185.5079 & 27.1553 & 400d J1222+2709 & 0.472 & 14.417 $\pm$ 0.028 & 1.522$^{+0.069}_{-0.083}$ & 127 & 8.11 \\
873 & 186.74 & 33.5472 & WARP J1226.9+3332 & 0.888 & 14.779 $\pm$ 0.038 & 1.281$^{+0.089}_{-0.113}$ & 127 & 8.02 \\
971 & 198.0808 & 39.0161 & 400d J1312+3900 & 0.404 & 14.426 $\pm$ 0.027 & 1.688$^{+0.058}_{-0.067}$ & 139 & 8.48 \\
1020 & 203.585 & 50.5181 & ZwCl 1332.8+5043 & 0.62 & 14.530 $\pm$ 0.031 & 1.682$^{+0.058}_{-0.068}$ & 176 & 9.29 \\
1050 & 206.875 & -11.7489 & RXC J1347.5-1144 & 0.4516 & 15.221 $\pm$ 0.037 & 1.726$^{+0.056}_{-0.064}$ & 162 & 15.81 \\
1063 & 208.57 & -2.3628 & 400d J1354-0221 & 0.546 & 14.418 $\pm$ 0.029 & 1.687$^{+0.058}_{-0.067}$ & 169 & 12.94 \\
1066 & 209.3308 & 62.545 & 400d J1357+6232 & 0.525 & 14.474 $\pm$ 0.029 & 1.635$^{+0.061}_{-0.072}$ & 154 & 11.18 \\
1089 & 213.7962 & 36.2008 & WARP J1415.1+3612 & 0.7 & 14.473 $\pm$ 0.032 & 1.498$^{+0.071}_{-0.085}$ & 149 & 9.61 \\
1094 & 214.1171 & 44.7772 & NSCS J141623+444558 & 0.4 & 14.531 $\pm$ 0.028 & 1.778$^{+0.053}_{-0.060}$ & 154 & 9.77 \\
1107 & 215.9492 & 24.0781 & MACS J1423.8+2404 & 0.543 & 14.909 $\pm$ 0.034 & 1.642$^{+0.061}_{-0.071}$ & 157 & 10.25 \\
1171 & 229.4829 & 31.4597 & WARP J1517.9+3127 & 0.744 & 14.332 $\pm$ 0.032 & 1.033$^{+0.115}_{-0.158}$ & 105 & 12.11 \\
1184 & 231.1679 & 9.9597 & WARP J1524.6+0957 & 0.516 & 14.576 $\pm$ 0.030 & 1.522$^{+0.069}_{-0.083}$ & 140 & 15.69 \\
1264 & 250.4679 & 40.0247 & 400d J1641+4001 & 0.464 & 14.520 $\pm$ 0.029 & 1.564$^{+0.066}_{-0.078}$ & 142 & 18.84 \\
1410 & 275.4087 & 68.4644 & RX J1821.6+6827 & 0.8156 & 14.453 $\pm$ 0.034 & 1.135$^{+0.104}_{-0.137}$ & 132 & 29.93 \\
1506 & 309.6225 & -1.4214 & RX J2038.4-0125 & 0.673 & 14.373 $\pm$ 0.031 & 1.208$^{+0.096}_{-0.124}$ & 165 & 62.22 \\
1519 & 314.0908 & -4.6308 & MS2053.7-0449 & 0.583 & 14.425 $\pm$ 0.030 & 1.799$^{+0.052}_{-0.058}$ & 231 & 40.92 \\
1548 & 322.3579 & -7.6917 & MACS J2129.4-0741 & 0.594 & 14.873 $\pm$ 0.034 & 1.652$^{+0.060}_{-0.070}$ & 181 & 24.79 \\
1658 & 345.7004 & 8.7306 & WARP J2302.8+0843 & 0.722 & 14.377 $\pm$ 0.031 & 1.158$^{+0.102}_{-0.133}$ & 117 & 16.19 \\\hline
\end{tabular}
\end{table}
\clearpage
\begin{table}[h!]
\caption{SZE Selected Cluster Sample}
\label{table:2}
\centering
\tiny
\begin{tabular}{l l l l l c c c c}
\hline
\hline
Clus ID & RA & DEC & NAME & z & $\log M_{500, SZ}$ & $\log R_{[4.5]}$ & $N_{Cluster}$ & $N_{S}$ \\
& (deg., J2000) & (deg., J2000) & & & ($M_\odot$) & ($galaxies \cdot Mpc^{-2}$) & & \\
\hline
\hline
OBJ1 & 3.05417 & 16.0375 & MOO J0012+1602$^{(1)}$ & 0.944 & 14.146$^{+0.071}_{-0.085}$ & 1.505$^{+0.069}_{-0.082}$ & 172 & 14.39 \\
OBJ4 & 49.8517 & -0.4225 & MOO J0319-0025 & 1.194 & 14.491$^{+0.027}_{-0.029}$ & 0.902$^{+0.126}_{-0.178}$ & 104 & 12.19 \\
OBJ5 & 153.535004 & 0.64056 & MOO J1014+0038 & 1.27 & 14.531$^{+0.025}_{-0.026}$ & 1.423$^{+0.075}_{-0.091}$ & 165 & 13.49 \\
OBJ7 & 228.677917 & 13.77528 & MOO J1514+1346 & 1.059 & 14.342$^{+0.055}_{-0.064}$ & 1.501$^{+0.069}_{-0.083}$ & 176 & 14.02 \\
OBJ8 & 216.637299 & 35.139889 & IDCS J1426.5+3508$^{(2)}$ & 1.75 & 14.415$^{+0.055}_{-0.063}$ & 0.990$^{+0.120}_{-0.166}$ & 109 & 9.94 \\
OBJ10 & 34.432999 & -3.76 & XLSSU J021744.1-034536$^{(3)}$ & 1.91 & 14.127$^{+0.017}_{-0.018}$ & 0.973$^{+0.122}_{-0.171}$ & 107 & 9.52 \\
OBJ9 & 86.655128 & -53.757099 & SPT-CL J0546-5345$^{(4)}$ & 1.067 & 14.703$^{+0.034}_{-0.037}$ & 1.346$^{+0.075}_{-0.091}$ & 161 & 27.47 \\
OBJ11 & 310.248322 & -44.860229 & SPT-CL J2040-4451 & 1.478 & 14.522$^{+0.041}_{-0.045}$ & 1.395$^{+0.072}_{-0.087}$ & 177 & 28.46 \\
OBJ12 & 31.442823 & -58.48521 & SPT-CL J0205-5829 & 1.322 & 14.675$^{+0.034}_{-0.037}$ & 1.194$^{+0.095}_{-0.122}$ & 130 & 12.67 \\
OBJ13 & 316.52063 & -58.745075 & SPT-CL J2106-5844 & 1.132 & 14.922$^{+0.031}_{-0.033}$ & 1.418$^{+0.072}_{-0.086}$ & 172 & 24.45 \\
OBJ16 & 355.299103 & -51.328072 & SPT-CL J2341-5119 & 1.003 & 14.747$^{+0.033}_{-0.036}$ & 0.950$^{+0.120}_{-0.166}$ & 105 & 12.16 \\
OBJ17 & 93.964989 & -57.776272 & SPT-CL J0615-5746 & 0.972 & 15.023$^{+0.031}_{-0.033}$ & 1.410$^{+0.068}_{-0.081}$ & 176 & 35.12 \\
OBJ18 & 326.64624 & -46.550034 & SPT-CL J2146-4633 & 0.933 & 14.737$^{+0.035}_{-0.038}$ & 1.228$^{+0.088}_{-0.110}$ & 132 & 17.61 \\
OBJ20 & 83.400879 & -50.09008 & SPT-CL J0533-5005 & 0.881 & 14.578$^{+0.040}_{-0.044}$ & 0.802$^{+0.126}_{-0.179}$ & 100 & 24.49 \\
OBJ21 & 15.729427 & -49.26107 & SPT-CL J0102-4915 & 0.8701 & 15.159$^{+0.030}_{-0.033}$ & 1.496$^{+0.071}_{-0.085}$ & 162 & 10.60 \\
OBJ22 & 9.175811 & -44.184902 & SPT-CL J0036-4411 & 0.869 & 14.512$^{+0.047}_{-0.052}$ & 1.414$^{+0.077}_{-0.094}$ & 147 & 10.13 \\
OBJ23 & 72.27417 & -49.024605 & SPT-CL J0449-4901 & 0.792 & 14.69 $^{+0.036}_{-0.039}$ & 1.384$^{+0.076}_{-0.092}$ & 146 & 17.81 \\
OBJ24 & 359.922974 & -50.164902 & SPT-CL J2359-5009 & 0.775 & 14.557$^{+0.041}_{-0.045}$ & 1.013$^{+0.114}_{-0.154}$ & 104 & 11.54 \\
OBJ25 & 353.105713 & -53.967545 & SPT-CL J2332-5358 & 0.402 & 14.723$^{+0.036}_{-0.039}$ & 1.291$^{+0.083}_{-0.103}$ & 105 & 12.88 \\
OBJ26 & 325.139099 & -57.457577 & SPT-CL J2140-5727 & 0.4054 & 14.531$^{+0.048}_{-0.054}$ & 1.476$^{+0.065}_{-0.077}$ & 126 & 20.02 \\
OBJ27 & 69.574867 & -54.321243 & SPT-CL J0438-5419 & 0.4214 & 15.033$^{+0.031}_{-0.034}$ & 1.582$^{+0.061}_{-0.071}$ & 137 & 17.77 \\
OBJ28 & 87.904144 & -57.155659 & SPT-CL J0551-5709 & 0.423 & 14.696$^{+0.037}_{-0.041}$ & 1.626$^{+0.054}_{-0.062}$ & 154 & 28.82 \\
OBJ29 & 62.815441 & -48.321751 & SPT-CL J0411-4819 & 0.4235 & 14.913$^{+0.032}_{-0.035}$ & 1.387$^{+0.075}_{-0.091}$ & 115 & 14.52 \\
OBJ30 & 323.916351 & -57.44091 & SPT-CL J2135-5726 & 0.427 & 14.789$^{+0.034}_{-0.037}$ & 1.638$^{+0.057}_{-0.065}$ & 148 & 20.49 \\
OBJ31 & 321.146179 & -61.410179 & SPT-CL J2124-6124 & 0.435 & 14.715$^{+0.037}_{-0.040}$ & 1.453$^{+0.065}_{-0.077}$ & 130 & 22.74 \\
OBJ32 & 52.728668 & -52.469772 & SPT-CL J0330-5228 & 0.4417 & 14.824$^{+0.034}_{-0.036}$ & 1.570$^{+0.064}_{-0.075}$ & 134 & 13.24 \\
OBJ33 & 77.337387 & -53.705322 & SPT-CL J0509-5342 & 0.4607 & 14.704$^{+0.037}_{-0.040}$ & 1.028$^{+0.091}_{-0.116}$ & 104 & 21.00 \\
OBJ34 & 60.968086 & -57.323669 & SPT-CL J0403-5719 & 0.4664 & 14.574$^{+0.044}_{-0.049}$ & 1.475$^{+0.069}_{-0.081}$ & 129 & 15.99 \\
OBJ35 & 103.962601 & -52.567741 & SPT-CL J0655-5234 & 0.4703 & 14.707$^{+0.038}_{-0.042}$ & 1.351$^{+0.056}_{-0.065}$ & 158 & 56.19 \\
OBJ36 & 326.468201 & -56.747559 & SPT-CL J2145-5644 & 0.48 & 14.840$^{+0.033}_{-0.036}$ & 1.688$^{+0.055}_{-0.063}$ & 164 & 19.35 \\
OBJ37 & 308.801147 & -52.851883 & SPT-CL J2035-5251 & 0.5279 & 14.793$^{+0.035}_{-0.038}$ & 1.741$^{+0.050}_{-0.057}$ & 194 & 29.60 \\
OBJ38 & 354.352264 & -59.704929 & SPT-CL J2337-5942 & 0.775 & 14.926$^{+0.031}_{-0.034}$ & 1.402$^{+0.076}_{-0.092}$ & 144 & 14.14 \\
OBJ39 & 82.019592 & -53.002384 & SPT-CL J0528-5300 & 0.7678 & 14.562$^{+0.041}_{-0.046}$ & 1.273$^{+0.080}_{-0.098}$ & 137 & 23.77 \\
OBJ40 & 345.466888 & -55.776756 & SPT-CL J2301-5546 & 0.748 & 14.429$^{+0.052}_{-0.060}$ & 1.090$^{+0.102}_{-0.133}$ & 111 & 14.34 \\
OBJ41 & 31.279436 & -64.545746 & SPT-CL J0205-6432 & 0.744 & 14.532$^{+0.045}_{-0.050}$ & 1.303$^{+0.083}_{-0.103}$ & 130 & 14.62 \\
OBJ42 & 310.8284 & -50.593838 & SPT-CL J2043-5035 & 0.7234 & 14.656$^{+0.040}_{-0.043}$ & 1.474$^{+0.066}_{-0.077}$ & 165 & 27.70 \\
OBJ43 & 314.217407 & -54.993736 & SPT-CL J2056-5459 & 0.718 & 14.545$^{+0.043}_{-0.048}$ & 1.491$^{+0.065}_{-0.077}$ & 165 & 25.30 \\
OBJ44 & 315.093262 & -45.805138 & SPT-CL J2100-4548 & 0.7121 & 14.466$^{+0.057}_{-0.066}$ & 1.340$^{+0.075}_{-0.091}$ & 142 & 24.02 \\
OBJ45 & 47.629108 & -46.783417 & SPT-CL J0310-4647 & 0.7093 & 14.635$^{+0.040}_{-0.044}$ & 1.091$^{+0.105}_{-0.140}$ & 107 & 11.51 \\
OBJ46 & 19.598965 & -51.943447 & SPT-CL J0118-5156 & 0.705 & 14.575$^{+0.044}_{-0.049}$ & 1.449$^{+0.074}_{-0.090}$ & 143 & 11.00 \\
OBJ47 & 0.249912 & -57.806423 & SPT-CL J0000-5748 & 0.7019 & 14.659$^{+0.037}_{-0.040}$ & 1.378$^{+0.078}_{-0.096}$ & 135 & 13.07 \\
OBJ48 & 68.254105 & -56.502499 & SPT-CL J0433-5630 & 0.692 & 14.496$^{+0.050}_{-0.056}$ & 1.080$^{+0.098}_{-0.127}$ & 112 & 17.78 \\
OBJ49 & 80.301186 & -51.076565 & SPT-CL J0521-5104 & 0.6755 & 14.614$^{+0.039}_{-0.043}$ & 1.494$^{+0.066}_{-0.078}$ & 159 & 22.39 \\
OBJ50 & 38.255245 & -58.327393 & SPT-CL J0233-5819 & 0.663 & 14.594$^{+0.041}_{-0.046}$ & 1.391$^{+0.077}_{-0.094}$ & 134 & 13.00 \\
OBJ51 & 33.106094 & -46.950199 & SPT-CL J0212-4657 & 0.6553 & 14.770$^{+0.034}_{-0.038}$ & 1.473$^{+0.073}_{-0.087}$ & 142 & 10.42 \\
OBJ52 & 335.712189 & -48.573456 & SPT-CL J2222-4834 & 0.6521 & 14.734$^{+0.036}_{-0.039}$ & 1.477$^{+0.070}_{-0.084}$ & 147 & 15.01 \\
OBJ53 & 77.920914 & -51.904373 & SPT-CL J0511-5154 & 0.645 & 14.611$^{+0.040}_{-0.044}$ & 1.376$^{+0.074}_{-0.089}$ & 139 & 21.06 \\
OBJ54 & 85.716667 & -41.004444 & SPT-CL J0542-4100 & 0.642 & 14.713$^{+0.038}_{-0.041}$ & 1.405$^{+0.069}_{-0.082}$ & 148 & 26.83 \\
OBJ55 & 40.861546 & -59.512436 & SPT-CL J0243-5930 & 0.6352 & 14.661$^{+0.038}_{-0.042}$ & 1.427$^{+0.074}_{-0.090}$ & 137 & 13.56 \\
OBJ56 & 319.731659 & -50.932484 & SPT-CL J2118-5055 & 0.6254 & 14.557$^{+0.047}_{-0.053}$ & 1.301$^{+0.078}_{-0.095}$ & 130 & 21.38 \\
OBJ57 & 326.531036 & -48.780003 & SPT-CL J2146-4846 & 0.623 & 14.592$^{+0.044}_{-0.049}$ & 1.585$^{+0.062}_{-0.072}$ & 165 & 17.95 \\
OBJ58 & 89.925095 & -52.826031 & SPT-CL J0559-5249 & 0.609 & 14.762$^{+0.034}_{-0.037}$ & 1.349$^{+0.070}_{-0.083}$ & 143 & 30.61 \\
OBJ59 & 314.587891 & -56.14529 & SPT-CL J2058-5608 & 0.606 & 14.468$^{+0.053}_{-0.060}$ & 1.294$^{+0.076}_{-0.092}$ & 132 & 25.23 \\
OBJ60 & 326.69574 & -57.614769 & SPT-CL J2146-5736 & 0.6022 & 14.570$^{+0.043}_{-0.047}$ & 1.417$^{+0.072}_{-0.086}$ & 139 & 19.48 \\
OBJ61 & 356.184692 & -42.720924 & SPT-CL J2344-4243 & 0.596 & 15.081$^{+0.031}_{-0.033}$ & 1.625$^{+0.062}_{-0.072}$ & 162 & 10.91 \\
OBJ62 & 64.345047 & -47.813923 & SPT-CL J0417-4748 & 0.581 & 14.870$^{+0.033}_{-0.035}$ & 1.258$^{+0.086}_{-0.107}$ & 117 & 14.87 \\
OBJ63 & 83.608215 & -59.625652 & SPT-CL J0534-5937 & 0.5761 & 14.439$^{+0.055}_{-0.064}$ & 1.223$^{+0.079}_{-0.096}$ & 125 & 25.93 \\
OBJ64 & 352.960846 & -50.863926 & SPT-CL J2331-5051 & 0.576 & 14.748$^{+0.034}_{-0.037}$ & 1.256$^{+0.088}_{-0.111}$ & 114 & 12.34 \\
OBJ65 & 327.181213 & -61.277969 & SPT-CL J2148-6116 & 0.571 & 14.649$^{+0.039}_{-0.043}$ & 1.473$^{+0.067}_{-0.080}$ & 144 & 20.27 \\
OBJ66 & 74.116264 & -51.27684 & SPT-CL J0456-5116 & 0.5615 & 14.707$^{+0.036}_{-0.040}$ & 1.450$^{+0.069}_{-0.083}$ & 139 & 19.00 \\
OBJ67 & 38.187614 & -52.957821 & SPT-CL J0232-5257 & 0.5559 & 14.729$^{+0.036}_{-0.040}$ & 1.432$^{+0.075}_{-0.090}$ & 129 & 11.75 \\
OBJ68 & 305.027344 & -63.243397 & SPT-CL J2020-6314 & 0.5361 & 14.515$^{+0.048}_{-0.054}$ & 1.306$^{+0.069}_{-0.082}$ & 137 & 33.81 \\
OBJ69 & 304.483551 & -62.978218 & SPT-CL J2017-6258 & 0.5346 & 14.587$^{+0.042}_{-0.047}$ & 1.359$^{+0.066}_{-0.078}$ & 142 & 34.27 \\
OBJ70 & 346.729767 & -65.091042 & SPT-CL J2306-6505 & 0.5298 & 14.758$^{+0.036}_{-0.039}$ & 1.742$^{+0.053}_{-0.060}$ & 182 & 16.94 \\
OBJ71 & 56.724724 & -54.650532 & SPT-CL J0346-5439 & 0.5297 & 14.738$^{+0.036}_{-0.039}$ & 1.541$^{+0.066}_{-0.077}$ & 143 & 14.41 \\
26 & 4.640833 & 16.438056 & MACS J0018.5+1626$^{(5)}$ & 0.5456 & 14.938$^{+0.040}_{-0.044}$ & 1.645$^{+0.061}_{-0.071}$ & 162 & 14.31 \\
355 & 73.54625 & -3.015 & MACS J0454.1-0300$^{(5)}$ & 0.5377 & 14.858$^{+0.054}_{-0.061}$ & 1.676$^{+0.059}_{-0.068}$ & 178 & 25.66 \\
621 & 149.012083 & 41.118889 & 400d J0956+4107$^{(5)}$ & 0.587 & 14.844$^{+0.049}_{-0.055}$ & 1.322$^{+0.086}_{-0.107}$ & 118 & 9.89 \\
1050 & 206.875 & -11.748889 & RXC J1347.5-1144$^{(5)}$ & 0.4516 & 15.026$^{+0.029}_{-0.031}$ & 1.726$^{+0.056}_{-0.064}$ & 162 & 15.81 \\
\hline
\end{tabular}
Note: $M_{500, SZ}$ values as reported by (1) \citet{Brodwin15}, (2) \citet{Brodwin12}, (3) \citet{Mantz14}, (4) \citet{Bleem15}, (5) \citet{Planck15}
\end{table}
\clearpage
{\it Facilities:} \facility{Spitzer}
\section{Introduction}
Let $\P^n$ be the projective space of dimension $n$ over the field $\R$
of all real numbers or the field $\C$ of all complex numbers.
In \cite{Ti}, a \emph{planarization} was defined as a mapping
$\Phi:U\to\P^n$, where $U\subset\P^2$ is an open subset, such that
$\Phi(\lambda\cap U)$ is a subset of a hyperplane, for every line $\lambda\subset\P^2$.
Studying planarizations is closely related to studying maps taking lines
to curves of certain linear systems, cf. \cite{Ti}; a classical result of this type
is the M\"obius--von Staudt theorem \cite{Mob,vS} about maps taking lines to lines,
sometimes called the Fundamental Theorem of Projective Geometry.
We will always assume that the planarizations are sufficiently smooth, i.e.,
sufficiently many times differentiable.
If the ground field is $\C$, then we assume analyticity.
The main result of this paper is a complete description of all planarizations in case $n=3$.
\begin{mainthm}
Let $\Phi:U\to\P^3$ be a planarization.
Then there is a nonempty open subset $V\subset U$, for which
the planarization $\Phi|_V:V\to\P^3$ is trivial, or co-trivial, or quadratic, or
dual quadratic.
\end{mainthm}
We need to explain the terminology.
A planarization $\Phi:V\to\P^3$ is said to be \emph{trivial} if
$\Phi(V)$ is a subset of a plane.
A planarization $\Phi:V\to\P^3$ is said to be \emph{co-trivial} if
there exists a point $b\in\P^3$ such that, for every line $\lambda\subset\P^2$,
the set $\Phi(\lambda\cap V)$ lies in a plane containing $b$.
Of course, logically, co-trivial planarizations include trivial planarizations.
On the other hand, there are ``more'' trivial planarizations than co-trivial
planarizations that are not trivial.
This is one of the reasons for distinguishing trivial planarizations
as a separate class; the second reason being a partial duality between trivial and
co-trivial planarizations.
Trivial planarizations can be described in terms of an arbitrary map
from $\P^2$ to $\P^2$, and co-trivial planarizations can be described in
terms of an arbitrary function on $\P^2$.
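For instance (our illustration; the explicit formula is not given above), working in the affine chart $x_0=1$, any function $h$ on $V$ defines a co-trivial planarization
$$
\Phi(x_1,x_2)=[h(x_1,x_2):1:x_1:x_2],
$$
since the image of a line $\alpha x_1+\beta x_2+\gamma=0$ lies in the plane $\gamma y_1+\alpha y_2+\beta y_3=0$, and every such plane contains the point $[1:0:0:0]$.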
A map $\Phi:V\to\P^3$ is said to be a \emph{quadratic rational map} if in some
(hence any) system of homogeneous coordinates it is given by quadratic
homogeneous polynomials.
In other words, there are quadratic homogeneous polynomials $Q_0$, $Q_1$,
$Q_2$ and $Q_3$ in $x_0$, $x_1$, $x_2$ such that $\Phi$ maps a point
with homogeneous coordinates $[x_0:x_1:x_2]$ to the point with homogeneous
coordinates $[y_0:y_1:y_2:y_3]$, where
$$
y_\alpha=Q_\alpha(x_0,x_1,x_2),\quad \alpha=0,1,2,3.
$$
It is easy to see that every quadratic rational map takes lines to conics,
in particular, every quadratic rational map is a planarization.
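Indeed, here is the one-line check (a standard argument, spelled out for completeness): restricting $\Phi$ to a parametrized line $t\mapsto a+tb$ gives
$$
y_\alpha(t)=Q_\alpha(a+tb),\quad \alpha=0,1,2,3,
$$
four polynomials of degree at most two in $t$. As the space of such polynomials is $3$-dimensional, there is a nontrivial linear relation $\sum_\alpha c_\alpha y_\alpha(t)\equiv0$, so the image of the line lies in the plane $\sum_\alpha c_\alpha y_\alpha=0$, and the quadratic parametrization shows that it is a conic.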
Some quadratic planarizations are neither trivial nor co-trivial.
Another class of examples is provided by duality.
Let $\Phi:V\to\P^3$ be a planarization.
Recall that the dual projective plane $\P^{2*}$ consists of lines in $\P^2$,
and the dual projective space $\P^{3*}$ consists of planes in $\P^3$.
Let $V^*$ be the subset of $\P^{2*}$ consisting of all lines $\lambda\subset\P^2$
with the following property: the set $\Phi(\lambda\cap V)$ lies in a unique plane $P_\lambda$.
Note that the set $V^*$ is open, possibly empty.
The dual planarization $\Phi^*:V^*\to\P^{3*}$ is by definition the map taking
$\lambda$ to $P_\lambda$.
Given a coordinate representation of $\Phi$, it is easy to
write explicit formulas for $\Phi^*$.
It turns out that a planarization dual to a quadratic rational map is a special kind of cubic rational map.
Such a planarization is called \emph{dual quadratic}.
It is rather obvious that the duality is symmetric: if $\Phi:V\to\P^3$ is a
planarization, $\Phi^*:V^*\to\P^{3*}$ is the dual planarization with a
nonempty domain $V^*$, and $V^{**}\cap V\ne\emptyset$, then $\Phi=\Phi^{**}$ on
$V^{**}\cap V$.
It is proved in \cite{Ti} that a planarization $\Phi:U\to\P^3$ that is neither
trivial nor co-trivial must be a rational map of degree two or three, at
least on some nonempty open subset of $U$.
Thus, to prove the Main Theorem, it suffices to describe all cubic
planarizations.
A cubic planarization is defined globally as a rational map from
$\P^2$ to $\P^3$ (it may have some points of indeterminacy).
Moreover, it suffices to assume that the ground field is $\C$.
Thus the description of all planarizations reduces to some question
of classical complex algebraic geometry.
In the following sections, we will answer this question.
It is natural to consider the following equivalence relation on the set of all
planarizations. Given two planarizations $\Phi:V\to\P^3$, $\Phi':V'\to\P^3$ we say that
they are \emph{equivalent} if they coincide on some nonempty open set after
suitable projective coordinate changes in $\P^2$ and in $\P^3$.
In other terms, there are projective automorphisms $\eta\in \PGL_2$, $\mu\in \PGL_3$,
and a nonempty open subset $W\subset V\cap\eta^{-1}(V')$
such that $\Phi=\mu\circ\Phi'\circ\eta$ on $W$.
In Section \ref{s:norf}, we describe all equivalence classes of
planarizations over real numbers by specifying a representative in each class.
These representatives will also be called \emph{normal forms} of planarizations,
so that every planarization is projectively equivalent to some normal form.
Of course, there are infinitely many classes of trivial and co-trivial planarizations.
These classes can be described by means of function parameters, i.e., they
depend on some arbitrary functions.
Other than that, there are 16 classes.
Our classification is based on the classification of equivalence classes of quadratic
rational maps \cite{CSS}.
\subsubsection*{Organization of the paper}
In Section \ref{s:cubmap}, we recall some basic properties of cubic rational
maps and the associated linear webs of plane cubic curves.
In Section \ref{s:plan}, we address specific properties of cubic planarizations.
The main result of this section is that a cubic planarization that
is neither trivial nor co-trivial and that has only finitely many points
of indeterminacy must map $\P^2$ many-to-one to its image surface.
In Section \ref{s:desc}, we complete the description of
all cubic planarizations thus proving the Main Theorem.
Section \ref{s:quad} is a digression needed for a classification of
all planarizations up to equivalence.
In this section, we classify all quadratic planarizations.
Finally, in Section \ref{s:norf}, we give a list of normal forms
for planarizations.
\section{Cubic maps and base points}
\label{s:cubmap}
In this section, all spaces and maps are defined over complex numbers.
Let $\Phi:\P^2\dashrightarrow\P^3$ be a cubic rational map sending
a point of $\P^2$ with homogeneous coordinates $[x_0:x_1:x_2]$
to a point of $\P^3$ with homogeneous coordinates
$[y_0:y_1:y_2:y_3]$, where
$$
y_\al=\vp_\al(x_0,x_1,x_2),\quad \al=0,1,2,3,
$$
and $\vp_\alpha$ is a homogeneous polynomial in three variables of degree 3.
Recall that an \emph{indeterminacy point} of $\Phi$ is a point $x$ in $\P^2$
such that $\vp_\alpha(x)=0$ for all $\al=0$, 1, 2, 3.
This is precisely a point that does not have an image under $\Phi$.
Recall that $\Phi$ defines a linear system $\Lc_{\Phi}$
of plane cubics of dimension 3
(a three-dimensional linear system is called a \emph{linear web}).
By definition, $\Lc_{\Phi}$ is generated by the cubics $\vp_\alpha=0$,
i.e., the equation of any cubic in $\Lc_{\Phi}$ has the form
$$
c_0\vp_0+c_1\vp_1+c_2\vp_2+c_3\vp_3=0,
$$
where the coefficients $c_0$, $c_1$, $c_2$, $c_3$ are complex numbers not
vanishing simultaneously, thus $[c_0:c_1:c_2:c_3]$ can be thought
of as a point in $\P^3$, or, in more invariant terms, as a point
in the dual projective space $\P^{3*}$ defining a plane $P$ in $\P^3$.
The plane cubic $\vk_P$ associated with $P$ and given by the equation displayed
above contains the set of all points $x\in\P^2$ such that $\Phi(x)\in P$.
Indeterminacy points of $\Phi$ are also called the \emph{base points}
of $\Lc_{\Phi}$.
We will write $B_\Phi$ for the set of all base points of $\Lc_\Phi$.
Every cubic from $\Lc_{\Phi}$ contains the set $B_\Phi$.
If $B_\Phi$ contains an irreducible curve $\beta$, then this curve has degree at most three.
Let $\xi=0$ be an irreducible equation defining the curve $\beta$.
Note that the equations of all cubics from $\Lc_\Phi$ are divisible by $\xi$.
Therefore, the restriction of $\Phi$ to the complement of $\beta$ coincides
with the rational map $\xi^{-1}\Phi$ of degree $3-\deg(\beta)$.
For this reason, we will mostly assume that $B_\Phi$ is a finite set of points.
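Concretely, here is a worked instance of the degree drop, with $\beta$ a line.

```latex
If, say, $B_\Phi$ contains the line $\{\xi=0\}$, then every $\vp_\al$ is
divisible by $\xi$, i.e., $\vp_\al=\xi\,q_\al$ with $q_\al$ quadratic, and on
the complement of this line
$$
[\vp_0:\vp_1:\vp_2:\vp_3]=[\xi q_0:\xi q_1:\xi q_2:\xi q_3]
=[q_0:q_1:q_2:q_3],
$$
so $\Phi$ agrees there with a quadratic rational map.
```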
There is a natural way of assigning multiplicity to every point $b\in B_\Phi$.
Namely, the multiplicity $m(b)$ is equal to the minimum intersection index
of two cubics in $\Lc_\Phi$ at $b$.
Since all cubics in $\Lc_\Phi$ pass through $b$, we have $m(b)\ge 1$.
We will write $|B_\Phi|$ for the number of points in $B_\Phi$ counting multiplicities.
In other terms, we have by definition
$$
|B_\Phi|=\sum_{b\in B_\Phi} m(b).
$$
In what follows, we will write $S_\Phi=\Phi(\P^2\sm B_\Phi)$ for the image of $\Phi$.
The following two propositions are classical and well known but we recall the
proofs for completeness.
Suppose that $B_\Phi$ is finite and that $S_\Phi$ has dimension two.
By \cite[Theorem II.7]{Bea} (elimination of indeterminacy),
there exists a compact projective surface $X$, a regular morphism $\pi:X\to\P^2$
that is a finite composition of blow-ups, and a regular morphism
$\Psi:X\to\P^3$ with the property $\Psi=\Phi\circ\pi$, which holds where
the right-hand side is defined.
By the Specialization Principle, cf. \cite[Theorem (3.25)]{Mum}, and since the
dimension of $\Psi(X)\supset S_\Phi$ is 2, a generic point in $\Psi(X)$ has
exactly $k$ preimages in $X$, where $k$ is the degree of the field
extension $\C(X)/\Psi^*\C(\Psi(X))$.
The difference $X\sm\pi^{-1}(\P^2\sm B_\Phi)$ consists of exceptional curves,
whose images under $\Psi$ lie in a proper Zariski closed subset of $\Psi(X)$.
Therefore, a generic point of $S_\Phi$ has exactly $k$ preimages in
$\P^2\sm B_\Phi$.
We will call the number $k$ the \emph{topological degree} of
$\Phi:\P^2\sm B_\Phi\to S_\Phi$.
\begin{prop}
\label{p:algdeg}
Suppose that $|B_\Phi|<\infty$ and $\dim(S_\Phi)=2$.
Let $k$ be the topological degree of $\Phi:\P^2\sm B_\Phi\to S_\Phi$.
Then the projective closure of $S_\Phi$ is a surface of degree
$(9-|B_\Phi|)/k$, in particular, $|B_\Phi|<9$.
\end{prop}
\begin{proof}
Let $d$ denote the degree of the surface $\ol S_\Phi$, the closure of $S_\Phi$ in $\P^3$.
By the Kleiman transversality theorem, there is a proper Zariski closed set
$\Zc_1$ of lines in $\P^3$ such that every line $L\not\in\Zc_1$ intersects
the set $S_\Phi$ transversely at exactly $d$ points.
There is an exceptional subvariety $E$ of $S_\Phi$ of dimension at most one such that
all points outside of $E$ have exactly $k$ preimages under
$\Phi:\P^2\sm B_\Phi\to S_\Phi$.
There is a proper Zariski closed set $\Zc_2$ of lines in $\P^3$ containing all
lines intersecting $E$.
Every line $L\subset\P^3$ defines a pencil $\Lc_\Phi(L)\subset\Lc_\Phi$
consisting of all cubics $\vk_P$, where $P$ runs through all planes containing $L$.
Let $\nu(L)$ be the sum of intersection indices of two generic curves
in $\Lc_\Phi(L)$ at points of $B_\Phi$.
Clearly, there is a proper Zariski closed set $\Zc_3$ of lines containing
all lines $L$ with $\nu(L)\ne |B_\Phi|$.
Consider a line $L$ not in $\Zc_1\cup\Zc_2\cup\Zc_3$.
Then the set $S_\Phi\cap L$ consists of $d$ transverse intersection points.
The line $L$ can be represented as the intersection of two planes $P_1$ and $P_2$.
Let $\vk_1$ and $\vk_2$ be the corresponding cubics in $\Lc_\Phi$.
Since $L\not\in\Zc_1\cup\Zc_2$, the intersection $\vk_1\cap\vk_2$ is a
disjoint union of $dk$ transverse intersection points and the set $B_\Phi$.
Since $L\not\in\Zc_3$, the sum of intersection indices of $\vk_1$ and $\vk_2$
at points of $B_\Phi$ is equal to $|B_\Phi|$.
By the Bezout theorem, we have $9=3^2=dk+|B_\Phi|$.
\end{proof}
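The formula $dk=9-|B_\Phi|$ already constrains the possibilities sharply; for instance:

```latex
If $k=1$, then $d=9-|B_\Phi|$; if $k=3$, then $|B_\Phi|\in\{0,3,6\}$ and,
correspondingly, $d=3$, $2$ or $1$. The case $k=3$ is the one that eventually
arises for planarizations in Section \ref{s:desc}.
```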
\begin{prop}
\label{p:curveB}
Let $\lambda\subset\P^2$ be a line such that $\Phi(\lambda\sm B_\Phi)$ lies
in a nonsingular conic. Then $\lambda$ intersects the set $B_\Phi$.
\end{prop}
\begin{proof}
Assume, for contradiction, that $\lambda$ is disjoint from $B_\Phi$.
Let $C$ be the conic containing the set $\Phi(\lambda)$.
Similarly to the discussion presented above, there is a well-defined
\emph{topological degree} $d_\lambda$ of the mapping $\Phi:\lambda\to C$.
A generic point of $C$ has exactly $d_\lambda$ preimages in $\lambda$,
and these preimages have multiplicity one in the sense that
the differential of $\Phi:\lambda\to C$ does not vanish at these points.
Let $\xi=0$ be an equation of a generic plane in $\P^3$.
Then $\xi$ can be thought of as a section of the line bundle $\Oc_{\P^3}(1)$
on $\P^3$.
The restriction of $\xi$ to $C$ has two simple zeros.
On the other hand, the section $\xi\circ\Phi$ of the line bundle $\Oc_\lambda(3)$
on $\lambda$ has 3 zeros counting multiplicities.
We obtain that $3=2 d_\lambda$, a contradiction.
\begin{comment}
It follows that either one of these zeros lies in $B_\Phi$, or
two of these 3 points map to the same point of $C$ (this includes
the case, where the two zeros coincide; in this case, the
differential of $\Phi|_\lambda$ must vanish there).
We assume the latter: two of the three zeros map to the same point of $C$.
Since $\xi$ is generic, it follows that $\Phi|_\lambda$ is many-to-one.
But then there must be at least four zeros of the section $\xi\circ\Phi$
counting multiplicities.
A contradiction, which shows that $\lambda$ must intersect $B_\Phi$, as claimed.
\end{comment}
Alternatively, the proposition can be easily derived
from Lemmas 2.6 and 2.7 of \cite{Ti}.
\end{proof}
\section{Planarizations}
\label{s:plan}
As before, we consider a cubic rational map $\Phi$.
We now assume that $\Phi$ is a \emph{planarization}, i.e., the $\Phi$-image of
every line in $\P^2$ is a subset of some plane in $\P^3$.
We say that $\Phi$ is \emph{strictly cubic} if there is no Zariski open subset $U\subset\P^2$
such that the restriction of $\Phi$ to $U$ coincides with some quadratic rational map.
As we have seen, for every strictly cubic planarization $\Phi$, the set $B_\Phi$ is
finite.
Consider any line $\lambda\subset\P^2$.
It is called \emph{non-special} if $\Phi(\lambda)$ is not a line.
By Proposition \ref{p:curveB}, or by the M\"obius--von Staudt theorem,
a generic line in $\P^2$ is non-special for $\Phi$.
Since $\Phi$ is a planarization, for every non-special line $\lambda$, there is a unique plane
$P_\lambda$ in $\P^3$ containing $\Phi(\lambda)$.
The preimage $\Phi^{-1}(P_\lambda)$ lies in a unique cubic curve
$\vk_\lambda\in\Lc_\Phi$ containing $\lambda$.
In fact, $\Phi^{-1}(P_\lambda)$ coincides with $\vk_\lambda\sm B_\Phi$.
Then $\vk_\lambda=\lambda\cup \si_\lambda$, where $\si_\lambda$ is a conic.
The curves $\Phi(\lambda)$ and $\Phi(\si_\lambda)$ are two plane curves in $\P^3$.
In this section, we will prove the following theorem.
\begin{thm}
\label{t:deg}
Suppose that $\Phi:\P^2\dashrightarrow\P^3$ is a strictly cubic planarization that is
neither trivial nor co-trivial.
Then the topological degree of $\Phi:\P^2\sm B_\Phi\to S_\Phi$ is bigger than one.
\end{thm}
\subsection{General properties of cubic planarizations}
We assume in this section that $\Phi$ is a strictly cubic planarization such that the
dimension of $S_\Phi$ is two, and prove some general properties of $\Phi$.
\begin{lem}
\label{l:gen-cub}
For a generic choice of $\lambda$, the curve $\Phi(\lambda)$ is cubic.
\end{lem}
\begin{proof}
Suppose that the degree of $\Phi(\lambda)$ is at most 2 for a Zariski open
set of lines $\lambda$.
Then, by \cite[Lemma 2.8]{Ti}, the map $\Phi$ is not strictly cubic,
i.e., all the $\varphi_\al$ have a nontrivial common factor, contradicting
the assumption that $B_\Phi$ is finite.
The result also follows from Proposition \ref{p:curveB}.
\end{proof}
\begin{lem}
\label{l:fibers}
Suppose that $\Phi$ is not co-trivial.
Then all fibers of $\Phi$ are finite, i.e., there is no
semi-algebraic subset of dimension one in $\P^2$ mapping to a point.
\end{lem}
\begin{proof}
Suppose that $\Gamma\subset\P^2\sm B_\Phi$ is a semi-algebraic subset of
dimension one such that $\Phi(\Gamma)$ is a point.
Note that a generic line $\lambda\subset\P^2$ intersects $\Gamma$.
Therefore, the plane $P_\lambda$ passes through the point $\Phi(\Gamma)$.
It follows that $\Phi$ is co-trivial.
\end{proof}
Recall that, by our assumption, the image $S_\Phi$ has dimension two.
We will write $J_\Phi$ for the semi-algebraic set in $\P^2$,
on which the Jacobian of $\Phi:\P^2\sm B_\Phi\to S_\Phi$ vanishes.
In other terms, $J_\Phi$ consists of all points $p\in\P^2$ such that
the differential $d_p\Phi$ of $\Phi$ at $p$ is degenerate, i.e.,
has a nontrivial kernel.
Since $S_\Phi$ has dimension two, by the Sard lemma,
the Jacobian of $\Phi$ cannot vanish everywhere.
Therefore, the dimension of $J_\Phi$ is at most one.
\begin{prop}
\label{p:Jac}
Suppose that $\dim(J_\Phi)=1$.
Then every component of $J_\Phi$ of dimension one is mapped to a subset of a plane.
\end{prop}
To prove Proposition \ref{p:Jac}, we need the following simple and general lemma:
\begin{lem}
\label{l:impos}
If a germ of a holomorphic curve $T\subset\P^3$ has the property that
all tangent lines of $T$ pass through some point $b\in\P^3$, then
in fact $T$ lies in a line passing through $b$.
\end{lem}
\begin{proof}
We may choose homogeneous coordinates in $\P^3$ so that $b=[0:0:0:1]$.
In the affine chart $(x_1,x_2,x_3)\mapsto [1:x_1:x_2:x_3]$, the lines
passing through $b$ are tangent to the vertical line field.
The only integral curves of the vertical line field are vertical lines.
\end{proof}
\begin{proof}[Proof of Proposition \ref{p:Jac}]
Let $K$ be a component of $J_\Phi$, whose dimension is one.
The set $\Phi(K)$ is not a point, by Lemma \ref{l:fibers}.
Since $K$ is not mapped to a point under $\Phi$, the
restriction of $\Phi$ to $K$ has only finitely many critical points.
Note that, if $K$ lies in a line, then the statement follows from
the definition of a planarization.
Thus, we may assume that $K$ is not a subset of a line.
There are proper Zariski closed subsets $\Zc_1$, $\Zc_2$ and $\Zc_3$
of $K\times K$ with the following properties:
\begin{enumerate}
\item if $p$ or $q$ is a critical point of the restriction of $\Phi$
to $K$, then $(p,q)\in\Zc_1$;
\item if $(p,q)\not\in\Zc_2$, then the line connecting $p$ and $q$ is non-special;
\item let $\lambda$ be the line through $p$ and $q$; if the restriction
of the differential $d_p\Phi$ to the tangent line of $\lambda$ at $p$ vanishes
or the restriction of the differential $d_q\Phi$ to the tangent line of $\lambda$
at $q$ vanishes, then $(p,q)\in\Zc_3$.
\end{enumerate}
We now assume that $(p,q)$ does not belong to $\Zc_1\cup \Zc_2\cup \Zc_3$.
Since $p$, $q\in J_\Phi$, the curve $\Phi(K)$ is tangent to
$\Phi(\lambda)$ at points $\Phi(p)$ and $\Phi(q)$.
Moreover, since $(p,q)\not\in\Zc_1\cup\Zc_3$, the points $\Phi(p)$ and
$\Phi(q)$ are nonsingular for $\Phi(K)$ and $\Phi(\lambda)$, so that these
two varieties have well-defined tangent lines at $\Phi(p)$ and $\Phi(q)$.
It follows that the curve $\Phi(K)$ is tangent to the plane $P_\lambda$
at $\Phi(p)$ and $\Phi(q)$.
Tangent lines of $\Phi(K)$ at $\Phi(p)$ and $\Phi(q)$ lie in the
same plane $P_\lambda$, hence they intersect.
Since this is true for a Zariski dense set of pairs $(p,q)$, it follows
that every pair of tangent lines of $\Phi(K)$ intersect.
Fix two tangent lines $\Lambda_1$ and $\Lambda_2$ of $\Phi(K)$.
Any other tangent line $\Lambda$ of $\Phi(K)$ must intersect
both $\Lambda_1$ and $\Lambda_2$.
Thus, either $\Lambda$ lies in the plane containing $\Lambda_1\cup\Lambda_2$,
or $\Lambda$ passes through the intersection point $\Lambda_1\cap\Lambda_2$.
We see that the only possibilities for $\Phi(K)$ are that
\begin{enumerate}
\item all tangent lines
of this curve lie in the same plane, or
\item all tangent lines of this curve
pass through the same point.
\end{enumerate}
In case (2), we have that $\Phi(K)$ is a subset of some line
by Lemma \ref{l:impos}, therefore, $\Phi(K)$ is a subset of a plane.
In case (1), we have that all tangent lines of $\Phi(K)$ belong
to the same plane, therefore, the curve $\Phi(K)$ itself lies in this
plane.
\end{proof}
\begin{prop}
\label{p:Jac-fin}
Suppose that $\Phi$ is strictly cubic, not trivial, not co-trivial, and not dual quadratic.
Suppose also that the topological degree of $\Phi$ is equal to one.
Then the set $J_\Phi$ is finite, possibly empty.
\end{prop}
To prove this proposition, we need the following lemma:
\begin{lem}
\label{l:Sdeg3}
Suppose that $\Phi$ is strictly cubic, the topological degree of $\Phi$ is one,
and $S_\Phi$ is a two-dimensional subset of some surface of degree 3.
Then $\Phi$ is trivial or co-trivial.
\end{lem}
\begin{proof}
By Proposition \ref{p:algdeg}, we have $|B_\Phi|=6$.
For every line $\lambda\subset\P^2$ disjoint from $B_\Phi$, consider the
corresponding conic $\si_\lambda$.
Then $\si_\lambda$ contains the set $B_\Phi$.
Moreover, we have $(\si_\lambda\cdot\varkappa)_{B_\Phi}\ge 6$, where $\varkappa$ is
any cubic from the linear web $\Lc_\Phi$, and $(\si_\lambda\cdot\varkappa)_{B_\Phi}$
denotes the sum of the intersection multiplicities of $\si_\lambda$ and $\varkappa$
at all points of $B_\Phi$.
Indeed, if $\lambda\cap B_\Phi=\0$, then the inequality
$(\si_\lambda\cdot\varkappa)_{B_\Phi}\ge 6$ follows from
$((\si_\lambda+\lambda)\cdot\varkappa)_{B_\Phi}\ge 6$.
The general case follows from the upper-semicontinuity of the intersection multiplicities.
If a line $\lambda'$ is disjoint from $B_\Phi$, then
$(\si_\lambda\cdot\si_{\lambda'})_{B_\Phi}\ge 6$.
On the other hand, two different conics either share a line component
or intersect in at most 4 points, counting multiplicities.
It follows that all conics $\si_\lambda$ share a line component $\lambda_0$,
in particular, all $\si_\lambda$ have a point $a\notin B_\Phi$ in common.
\begin{comment}
Suppose first that $(\lambda_0\cdot\varkappa)_{B_\Phi}\ge 4$ for all
$\varkappa\in\Lc_{B_\Phi}$.
Consider any cubic $\vk\in\Lc_\Phi$.
It must contain the line $\lambda_0$; otherwise, by the Bezout
theorem, the index of $\vk$ and $\lambda_0$ would be three.
It follows that all cubics in $\Lc_\Phi$ contain $\lambda_0$, hence
$B_\Phi\supset\lambda_0$, a contradiction with our assumption that $\Phi$
is strictly cubic.
The only case it remains to consider is where all conics $\si_\lambda$
are the same union of two lines $\lambda_0$ and $\lambda_1$, and we have
$(\lambda_i\cdot\varkappa)\ge 3$ for $i=0$, 1, and all $\varkappa\in\Lc_\Phi$.
Take any point $a \in \lambda_0\cup\lambda_1$,
\end{comment}
Then, for every line $\lambda\subset\P^2$, the plane $P_\lambda$ containing the
image of $\lambda$ contains also $\Phi(a)$,
which means that $\Phi$ must be co-trivial.
\end{proof}
\begin{proof}[Proof of Proposition \ref{p:Jac-fin}]
Assume the contrary: there is a component $K$ of $J_\Phi$ that has dimension one.
By Proposition \ref{p:Jac}, the image $\Phi(K)$ is a plane curve.
Suppose that neither $K$ nor $\Phi(K)$ is a subset of a line.
We will write $P$ for the plane containing $\Phi(K)$.
By the proof of Proposition \ref{p:Jac}, the image of a generic line
$\lambda\subset\P^2$ under the map $\Phi$ is tangent to
$\Phi(K)$ at two or more points.
Moreover, we may assume that the tangent lines of $\Phi(K)$ at
these points do not coincide and therefore define a unique plane.
It follows that $P_\lambda=P$.
Since $\lambda$ is generic, this implies that $\Phi$ is trivial.
Suppose now that $\Phi(K)$ lies in a line $L\subset\P^3$.
Then, since, for a generic line $\lambda\subset\P^2$, the curve
$\Phi(\lambda)$ is tangent to $\Phi(K)$, the plane $P_\lambda$
must contain $L$.
It follows that $\Phi$ is co-trivial.
Finally, suppose that $K$ is a subset of a line but $\Phi(K)$
is not a subset of a line.
Then $\Phi(K)$ lies in a plane algebraic curve $\Xi$ of degree two or three.
Consider the dual planarization $\Phi^*$.
If $\Phi$ is neither trivial nor co-trivial, then $\Phi^*$ is defined on
some nonempty Zariski open subset of $\P^{2*}$.
If $\Phi^*$ is trivial, then $\Phi$ must be co-trivial.
If $\Phi^*$ is co-trivial, then $\Phi$ must be trivial.
Thus we may assume that $\Phi^*$ is neither trivial nor co-trivial.
Note that, for a generic line $\lambda\subset\P^2$, the image
$\Phi(\lambda)$ is tangent to $\Phi(K)$ at the point $\Phi(\lambda\cap K)$.
It follows that the plane $P_\lambda$ is tangent to $\Phi(K)$.
We see that the image $\Phi^*(\P^{2*}\sm B_{\Phi^*})$ lies in the
set of all tangent planes of $\Xi$.
The set $\Xi^*$ of all tangent planes of $\Xi$ is a cone in $\P^{3*}$
(indeed, every plane in $\Xi^*$ is an element of a linear pencil of
planes, i.e., of a line in $\P^{3*}$, containing the plane of $\Xi$).
A plane section of the cone $\Xi^*$ not passing through the vertex
of this cone is a curve projectively equivalent to the dual curve of $\Xi$.
Since $\Phi^*$ is a planarization, it must be a cubic rational map by \cite{Ti}.
It follows that the projectively dual curve of $\Xi$ has degree at most three.
Then the degree of the surface $\Xi^*$ is also at most three.
If the image of $\P^{2*}\sm B_{\Phi^*}$ under $\Phi^*$ has dimension one,
then this image lies in a plane, hence $\Phi^*$ is trivial.
Thus we may assume that $\Phi^*(\P^{2*}\sm B_{\Phi^*})$ has dimension two,
i.e., includes an open subset of $\Xi^*$.
It now follows from Lemma \ref{l:Sdeg3} that the planarization $\Phi$
is trivial or co-trivial.
\end{proof}
\subsection{Proof of Theorem \ref{t:deg}}
In this section, we assume that $\Phi$ satisfies all assumptions of Theorem \ref{t:deg},
namely, that $\Phi$ is not trivial, not co-trivial, and is strictly cubic.
It follows that $S_\Phi$ has dimension two and that the set $B_\Phi$ is finite.
First note that, if $\Phi$ is dual quadratic, then the conclusion of
Theorem \ref{t:deg} holds.
Indeed, let $\Phi^*$ be the dual planarization of $\Phi$.
It is defined on some nonempty Zariski open subset of $\P^{2*}$.
Since $\Phi$ is dual quadratic, the map $\Phi^*$ is a quadratic rational map.
Recall that lines in $\P^{2*}$ correspond to points in $\P^2$:
namely, a point $a\in\P^2$ defines the line $a^*\in\P^{2*}$
consisting of all lines in $\P^2$ passing through $a$.
If a line $a^*\subset\P^{2*}$ is non-special for $\Phi^*$, we will write
$P^*_a$ for the plane in $\P^{3*}$ containing the set $\Phi^*(a^*\sm B_{\Phi^*})$.
The plane $P^*_a$, of course, identifies with the point $\Phi(a)$.
The plane $P^*_a$ defines a conic in $\P^{2*}$ containing $a^*$ (namely, the $\Phi^*$-preimage of $P^*_a$).
This conic consists of $a^*$ and another line $a_1^*$.
Clearly, we have $a_1\ne a$ for a generic $a\in\P^2$ (there is no nontrivial
\emph{linear} system of double lines in $\P^2$), and $\Phi(a_1)=\Phi(a)$.
It follows that the topological degree of $\Phi$ is bigger than one.
We may now assume that $\Phi$ is not dual quadratic.
\begin{prop}
\label{p:cusp}
It is impossible that, for a Zariski dense set of lines $\lambda\subset\P^2$,
the images $\Phi(\lambda)$ are cuspidal cubics.
\end{prop}
\begin{proof}
We will write $\gamma$ for the Zariski closure of the set of all
points $p$ with the following property: there is a line $\lambda\ni p$, whose
$\Phi$-image is a cuspidal cubic curve, the point $\Phi(p)$ being the cusp of this curve.
If $p$ and $\lambda$ are as above, then $p\in J_\Phi$.
It follows that $\gamma\subset J_\Phi$ has dimension at most one.
Since a generic line intersects $\gamma$, we conclude that $\gamma$ is a curve
rather than a finite set of points.
Assuming that $\Phi$ is not trivial, not co-trivial,
not dual quadratic, and is of topological degree one,
we get a contradiction with Proposition \ref{p:Jac-fin}, which states that
$J_\Phi$, and hence $\gamma$, is a finite set.
\end{proof}
By Lemma \ref{l:gen-cub}, for a generic line $\lambda\subset\P^2$, the set
$\Phi(\lambda)$ is a cubic curve.
A plane rational cubic curve is either nodal or cuspidal.
Thus we have two cases.
Suppose first that, for a generic line $\lambda$, the cubic $\Phi(\lambda)$ is cuspidal.
Then, by Proposition \ref{p:cusp}, the planarization $\Phi$ is trivial, or co-trivial,
or dual quadratic.
Thus we may assume that, for a non-empty Zariski open set of lines $\lambda$, the
cubic $\Phi(\lambda)$ is nodal.
For every line $\lambda\subset\P^2$ such that $\Phi(\lambda)$ is a nodal cubic, we
let $\Sigma(\lambda)$ be the set of points of $\lambda$ mapping to the singular point of $\Phi(\lambda)$.
Thus the set $\Sigma(\lambda)$ consists of two points, and these
two points are mapped to the node of the nodal cubic $\Phi(\lambda)$.
Let $\Gamma$ be the Zariski closure of the union of $\Sigma(\lambda)$ over all lines $\lambda\subset\P^2$ such that $\Phi(\lambda)$ is a nodal cubic.
\begin{lem}
\label{l:GammaP2}
The set $\Gamma$ coincides with the whole of $\P^2$.
\end{lem}
\begin{proof}
Assume the contrary: the set $\Gamma$ has dimension one or less.
Then the set $\Zc$ of lines $\lambda$ such that $\Phi:\lambda\cap\Gamma\to\P^3$
is not injective has dimension at most one.
Indeed, given a point $a\in\Gamma$, there are only finitely many lines
connecting $a$ with some other point in the finite set $\Phi^{-1}(\Phi(a))$
(the latter set is finite by Lemma \ref{l:fibers}).
Consider a line $\lambda\not\in\Zc$; moreover, we may assume that $\Phi(\lambda)$
is a nodal cubic.
Then the set $\Sigma(\lambda)\subset\lambda\cap\Gamma$ consists of two distinct
points mapped to the node of $\Phi(\lambda)$.
This is a contradiction with the fact that $\Phi$ is injective on the
set $\lambda\cap\Gamma$.
\end{proof}
It follows from Lemma \ref{l:GammaP2} that the topological degree of the map $\Phi$
is strictly bigger than one, thus Theorem \ref{t:deg} is proved.
\section{Description of cubic planarizations}
\label{s:desc}
In this section, we give a complete description of cubic planarizations thus
completing the description of all planarizations.
We will assume throughout this section that $\Phi$ is a strictly cubic planarization
that is neither trivial nor co-trivial.
Then $S_\Phi$ has dimension two, and the set $B_\Phi$ is finite.
By Theorem \ref{t:deg}, the topological degree of the map
$\Phi:\P^2\sm B_\Phi\to S_\Phi$ is at least two.
We can now make this result stronger.
\begin{prop}
If $\Phi$ is not dual quadratic, then
the topological degree of the map $\Phi:\P^2\sm B_\Phi\to S_\Phi$ is
equal to three.
\end{prop}
\begin{proof}
Consider the dual planarization $\Phi^*$.
Recall that it is defined on some nonempty Zariski open subset of $\P^{2*}$.
By the classification of planarizations, the map $\Phi^*$ must be cubic.
Moreover, $\Phi^*$ is neither trivial, nor co-trivial
(otherwise $\Phi$ would be trivial or co-trivial).
Consider the set $B_{\Phi^*}\subset\P^{2*}$ of all indeterminacy points of $\Phi^*$.
Since $\Phi$ is not dual quadratic, the planarization $\Phi^*$ is strictly cubic.
It follows that the set $B_{\Phi^*}$ is finite.
All facts established earlier for $\Phi$ apply also to $\Phi^*$.
In particular, a generic fiber of the map $\Phi^*$ consists of at least
two points, and a generic line $a^*\subset\P^{2*}$ is mapped to
a nodal cubic under $\Phi^*$.
If a line $a^*\subset\P^{2*}$ is non-special for $\Phi^*$, then we will write
$P^*_{a^*}$ for the unique plane in $\P^{3*}$ containing $\Phi^*(a^*)$.
Recall that $P^*_{a^*}$ is identified with $\Phi(a)$ under the
natural identification between $\P^{3**}$ and $\P^3$.
Similarly to the properties of $\Phi$, the full preimage of $P^*_{a^*}$ under $\Phi^*$
is a cubic curve $\vk^*_{a^*}$ consisting of $a^*$ and some conic
$\sigma^*_{a^*}$.
Since the topological degree of $\Phi$ is at least two, we know that, for a generic line
$a_1^*\subset\P^{2*}$, there is another line in $\P^{2*}$ mapping
to the same plane $P^*_{a_1^*}$ under $\Phi^*$.
This line is a component of the cubic $\vk^*_{a_1^*}$ different from $a_1^*$,
hence a component of the conic $\si^*_{a_1^*}$, which therefore splits into
the union of two lines.
We will write $a_2^*$ and $a_3^*$ for these two lines.
Thus, the cubic $\vk^*_{a_1^*}$ splits into the union of the three lines
$a_1^*$, $a_2^*$ and $a_3^*$.
Generically, these three lines are different, and they map to
the same plane $P^*_{a_1^*}$.
This property of $\Phi^*$ translates to the following property of $\Phi$:
a generic point of $S_\Phi$ (corresponding to the plane $P^*_{a_1^*}$)
has exactly three preimages $a_1$, $a_2$, $a_3$.
\begin{comment}
Note that the restriction of $\Phi^*$ to a generic line $\lambda$ has
topological degree one, i.e., a generic fiber of this restriction is a singleton.
Consider the equivalence relation $\sim$ on $\P^2\sm B_{\Phi^*}$ defined as follows:
two points $a$, $b$ are equivalent if $\Phi^*(a)=\Phi^*(b)$.
We know that a generic equivalence class of $\sim$ consists of at least two points.
If $a$ runs through a generic line $\lambda$, then the equivalent to $a$ points
run through the union $\lambda'\cup\lambda''$, since the restriction of $\Phi^*$
to $\lambda$ has topological degree one.
\end{comment}
\end{proof}
We now assume that the topological degree of the map $\Phi:\P^2\sm B_\Phi\to S_\Phi$
is equal to three, and the same is true for the dual planarization $\Phi^*$.
By Proposition \ref{p:Jac-fin}, we may also assume that $J_\Phi$ is finite.
Let $d$ denote the degree of the surface $\ol S_\Phi$.
By Proposition \ref{p:algdeg}, we have $3d=9-|B_\Phi|$.
It follows that $d$ is at most three, i.e., the surface $\ol S_\Phi$ is at most cubic.
Suppose that $\ol S_\Phi$ is a cubic surface.
It follows that the set $B_\Phi$ is empty, hence $S_\Phi$ is compact.
It is a classical fact that $S_\Phi$ contains at least one line
(recall that a smooth cubic surface contains 27 lines, and any cubic surface can
be approximated by smooth cubic surfaces).
Let $L$ be a line contained in $S_\Phi$.
Since $J_\Phi$ is finite, the $\Phi$-preimage of a generic point in $L$
consists of exactly three points.
This means that there are three different lines $\lambda_0$, $\lambda_1$ and
$\lambda_2$ mapping to $L$.
Indeed, the set $\Phi^{-1}(L)$ is contained in a cubic curve $\Phi^{-1}(P)$,
where $P\subset\P^3$ is any plane containing the line $L$.
On the other hand, there are three distinct elements of $\P^{2*}$ mapping to
$P\in\P^{3*}$ under $\Phi^*$, for a generic choice of $P\supset L$.
We see that $\Phi^{-1}(L)=\Phi^{-1}(P)$ is a union of three distinct lines,
each mapping one-to-one to $L$.
This leads to a contradiction, because we can take $P$ passing through $L$
and some other (generic) point of $S_\Phi$; then $\Phi^{-1}(L)\ne\Phi^{-1}(P)$.
The contradiction thus obtained with the assumption that
the topological degree of $\Phi$ is three concludes the proof of the Main Theorem.
\section{Quadratic planarizations}
\label{s:quad}
Throughout this section, we suppose that
$\Phi:\P^2\dashrightarrow\P^3$ is a quadratic rational map
such that the image of $\Phi$ lies in some quadratic surface $S$
but does not lie in a plane.
We will classify all such quadratic maps up to projective equivalence.
The classification must be classical, but we could not find a modern reference.
The following theorem describes the classification over $\C$.
\begin{thm}
\label{t:qclass}
Suppose that the ground field is $\C$.
Then $\Phi$ is equivalent to one and only one of the following three maps:
\begin{align*}
\Phi_{1}:&[x_0:x_1:x_2]\mapsto [x_0^2:x_0x_1:x_0x_2:x_1x_2],\\
\Phi_{2}:&[x_0:x_1:x_2]\mapsto [x_0^2:x_0x_1:x_1^2:x_0x_2],\\
\Phi_{3}:&[x_0:x_1:x_2]\mapsto [x_0^2:x_0x_1:x_1^2:x_2^2].
\end{align*}
The planarizations $\Phi_1$ and $\Phi_2$ are co-trivial.
The dual planarization of $\Phi_3$ is equivalent to $\Phi_3$.
\end{thm}
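As a quick sanity check (not needed for the proof), the images of the three normal forms do lie in quadrics:

```latex
For $\Phi_1$, the coordinates $[y_0:y_1:y_2:y_3]$ of the image satisfy
$y_0y_3=x_0^2x_1x_2=y_1y_2$, a nondegenerate quadric; for $\Phi_2$ and
$\Phi_3$, they satisfy $y_0y_2=x_0^2x_1^2=y_1^2$, a quadratic cone.
```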
The corresponding real classification differs only in that the
complex equivalence class of $\Phi_1$ splits into two real equivalence
classes $\Phi_{1a}$ and $\Phi_{1b}$.
\begin{thm}
\label{t:qclassR}
Suppose that the ground field is $\R$.
Then $\Phi$ is equivalent to one and only one of the following four maps:
\begin{align*}
\Phi_{1a}:&[x_0:x_1:x_2]\mapsto [x_0^2:x_0x_1:x_0x_2:x_1x_2],\\
\Phi_{1b}:&[x_0:x_1:x_2]\mapsto [x_0^2:x_0x_1:x_0x_2:x_1^2+x_2^2],\\
\Phi_{2}:&[x_0:x_1:x_2]\mapsto [x_0^2:x_0x_1:x_1^2:x_0x_2],\\
\Phi_{3}:&[x_0:x_1:x_2]\mapsto [x_0^2:x_0x_1:x_1^2:x_2^2].
\end{align*}
The planarizations $\Phi_{1a}$, $\Phi_{1b}$ and $\Phi_2$ are co-trivial.
The dual planarization of $\Phi_3$ is equivalent to $\Phi_3$.
\end{thm}
In Section \ref{ss:C}, we prove Theorem \ref{t:qclass}, and
in Section \ref{ss:R}, we prove Theorem \ref{t:qclassR}.
\subsection{Complex classification}
\label{ss:C}
In this section, we assume that the ground field is $\C$.
The proof of Theorem \ref{t:qclass} consists of several lemmas.
We first assume that the quadric $S$ is non-degenerate.
\begin{lem}
\label{l:nondeg-par}
If the surface $S$ is given by the equation $u_0u_1=u_2u_3$ with respect
to some system of homogeneous coordinates $[u_0:u_1:u_2:u_3]$ in $\P^3$,
then $\Phi$ has the form
$$
[x_0:x_1:x_2]\mapsto [\psi_0\psi_1:\psi_2\psi_3:\psi_0\psi_2:\psi_1\psi_3],
$$
where $\psi_\alpha$, $\alpha=0$, $\dots$, $3$,
are homogeneous linear forms in $x_0$, $x_1$, $x_2$.
\end{lem}
\begin{proof}
The map $\Phi$ can be written in coordinates as
$u_\alpha=\varphi_\alpha(x_0,x_1,x_2)$, where $x_0$, $x_1$, $x_2$
are homogeneous coordinates in $\P^2$, and the index $\alpha$ runs from 0 to 3.
We claim that every quadratic polynomial $\varphi_\alpha$ is reducible.
Indeed, if one of these polynomials, say, $\varphi_0$ is irreducible,
then, by the unique factorization property, $\varphi_2$ or $\varphi_3$
is divisible by $\varphi_0$, hence is proportional to $\varphi_0$.
It follows however that the image of $\Phi$ lies in a plane, a contradiction.
Thus every $\varphi_\alpha$ is a product of two linear factors.
We write $\varphi_0$ as $\psi_0\psi_1$, where $\psi_0$ and $\psi_1$
are linear homogeneous polynomials in $x_0$, $x_1$, $x_2$.
Then $\varphi_2$ or $\varphi_3$ is divisible by $\psi_0$.
Relabeling $\varphi_2$ and $\varphi_3$ if necessary, we may assume that
$\varphi_2$ is divisible by $\psi_0$.
Set $\varphi_2=\psi_0\psi_2$, where $\psi_2$ is some linear polynomial.
It now follows from the identity $\varphi_0\varphi_1=\varphi_2\varphi_3$ that
$\psi_1\varphi_1=\psi_2\varphi_3$.
We see that $\varphi_3$ is divisible by $\psi_1$, therefore, $\varphi_3$
can be written as $\psi_1\psi_3$.
It follows that $\varphi_1=\psi_2\psi_3$.
\end{proof}
We can now classify all maps $\Phi$, for which $S$ is non-singular.
\begin{lem}
\label{l:C-nonsing}
Suppose that $S$ is nonsingular.
Then $\Phi$ is equivalent to the following map:
\begin{align*}
\Phi_{1}:&[x_0:x_1:x_2]\mapsto [x_0^2:x_0x_1:x_0x_2:x_1x_2].
\end{align*}
In particular, $\Phi$ is co-trivial.
\end{lem}
\begin{proof}
There is a system of homogeneous coordinates $u_0$, $u_1$, $u_2$, $u_3$
in the target space $\P^3$ such that the surface $S$ is given by the
equation $u_0u_1=u_2u_3$.
By Lemma \ref{l:nondeg-par}, the map $\Phi$ has the form
$[x_0:x_1:x_2]\mapsto [\psi_0\psi_1:\psi_2\psi_3:\psi_0\psi_2:\psi_1\psi_3]$,
where $\psi_\alpha$ are linear forms in $x_0$, $x_1$, $x_2$.
The set of indeterminacy points of $\Phi$ is equal to
$B_\Phi=\{\psi_0=\psi_3=0\}\cup\{\psi_1=\psi_2=0\}$.
Indeed, if $\psi_0\ne 0$, then we must have $\psi_1=\psi_2=0$,
and if $\psi_1\ne 0$, then we must have $\psi_0=\psi_3=0$.
We claim that the set $B_\Phi$ consists of exactly two points.
The system of equations $\psi_0=\psi_3=0$ defines a point $a$.
Indeed, otherwise the linear functionals $\psi_0$ and $\psi_3$
must be proportional, and we may assume $\psi_0=\psi_3$.
In this case, we have $\varphi_0=\varphi_3$, which means that
the image of $\Phi$ lies in a plane section of $S$, a contradiction with our assumption.
Similarly, the system of equations $\psi_1=\psi_2=0$ defines a point $b$.
It remains to show that $a\ne b$.
Indeed, otherwise the map $\Phi$ factors through the central projection
of $\P^2\sm\{a\}$ onto $\P^1$.
It follows that the image of $\Phi$ lies in a conic, a contradiction with
our assumption.
Thus we have $B_\Phi=\{a,b\}$.
Consider the linear web of conics $\Lc_\Phi$ associated with $\Phi$.
All conics of $\Lc_\Phi$ pass through $a$ and $b$.
On the other hand, the linear system $\Lc$ of all conics passing through $a$
and $b$ has dimension 3.
Therefore, $\Lc_\Phi=\Lc$.
We can now choose homogeneous coordinates $[x_0:x_1:x_2]$ in $\P^2$
so that $a=[0:1:0]$ and $b=[0:0:1]$.
Then $\Lc$ is spanned by the following degenerate conics: $x_0^2=0$,
$x_0x_1=0$, $x_0x_2=0$ and $x_1x_2=0$.
The map $\Phi$ corresponding to this choice of generators coincides with $\Phi_1$,
as desired.
The planarization $\Phi_1$ is co-trivial: indeed, every line is mapped
under $\Phi_1$ to a plane passing through $[0:0:0:1]$.
\end{proof}
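The co-triviality of $\Phi_1$ rests on the polynomial identity $a\varphi_0+b\varphi_1+c\varphi_2=x_0(ax_0+bx_1+cx_2)$, which shows that the line $ax_0+bx_1+cx_2=0$ maps into the plane $au_0+bu_1+cu_2=0$ through $[0:0:0:1]$. A quick numerical check of this identity (an aside):

```python
# Check the identity a*u0 + b*u1 + c*u2 == x0*(a*x0 + b*x1 + c*x2)
# for Phi_1: every line a*x0 + b*x1 + c*x2 = 0 maps into the plane
# a*u0 + b*u1 + c*u2 = 0 through [0:0:0:1].
import random

def phi1(x0, x1, x2):
    return (x0**2, x0*x1, x0*x2, x1*x2)

random.seed(1)
for _ in range(100):
    a, b, c = (random.randint(-20, 20) for _ in range(3))
    x0, x1, x2 = (random.randint(-20, 20) for _ in range(3))
    u0, u1, u2, u3 = phi1(x0, x1, x2)
    assert a*u0 + b*u1 + c*u2 == x0*(a*x0 + b*x1 + c*x2)
```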
Continuing the complex classification of quadratic planarizations, we now assume
that $S$ is contained in a degenerate quadric.
\begin{lem}
\label{l:cone-par}
Suppose that $S$ is given by the
equation $u_1^2=u_0u_2$ with respect to some system of homogeneous coordinates
$[u_0:u_1:u_2:u_3]$ in $\P^3$.
Then the map $\Phi$ has the form
$$
[x_0:x_1:x_2]\mapsto [x_0^2:x_0x_1:x_1^2:\varphi_3(x_0,x_1,x_2)],
$$
where $\varphi_3$ is some homogeneous quadratic form in the variables
$x_0$, $x_1$, $x_2$.
\end{lem}
\begin{proof}
Suppose that $\Phi$ is given by the equations $u_\alpha=\varphi_\alpha(x_0,x_1,x_2)$.
As before, we argue that $\varphi_1$ is reducible, otherwise it would be
proportional either to $\varphi_0$ or to $\varphi_2$.
Similarly, $\varphi_0$ and $\varphi_2$ are reducible.
We can write $\varphi_1$ as $\psi_0\psi_1$, where $\psi_0$ and $\psi_1$ are linear functions.
Then $\varphi_0$ or $\varphi_2$ is divisible by $\psi_0$; we may assume the former
and write $\varphi_0=\psi_0\tilde\psi_0$.
It follows from the equation $\varphi_1^2=\varphi_0\varphi_2$ that
$\psi_0\psi_1^2=\tilde\psi_0\varphi_2$.
Therefore, $\tilde\psi_0$ is proportional to $\psi_0$ or to $\psi_1$.
In the former case, we have $\varphi_0=\psi_0^2$ and $\varphi_2=\psi_1^2$,
up to a projective coordinate change in $\P^3$ (multiplying the homogeneous
coordinates by different constants).
In the latter case, $\varphi_0$ would be proportional to $\varphi_1$,
a contradiction with our assumption.
Thus we have
$\varphi_1=\psi_0\psi_1$, $\varphi_0=\psi_0^2$, $\varphi_2=\psi_1^2$
for some non-proportional linear forms $\psi_0$, $\psi_1$.
We can choose the homogeneous coordinates $[x_0:x_1:x_2]$ in $\P^2$
so that $\psi_0=x_0$ and $\psi_1=x_1$.
The map $\Phi$ now takes the form
$[x_0:x_1:x_2]\mapsto [x_0^2:x_0x_1:x_1^2:\varphi_3(x_0,x_1,x_2)]$,
where $\varphi_3$ is a quadratic form in the variables $x_0$, $x_1$, $x_2$,
as desired.
\end{proof}
The following lemma provides normal forms for $\Phi$ in the case, where
$S$ is an irreducible cone.
\begin{lem}
\label{l:C-sing}
Suppose that $S$ is a singular
irreducible quadric, i.e., a quadratic cone.
Then $\Phi$ is equivalent to at least one of the maps
\begin{align*}
\Phi_{2}:&[x_0:x_1:x_2]\mapsto [x_0^2:x_0x_1:x_1^2:x_2x_0],\\
\Phi_{3}:&[x_0:x_1:x_2]\mapsto [x_0^2:x_0x_1:x_1^2:x_2^2].
\end{align*}
\end{lem}
\begin{proof}
There is a homogeneous coordinate system $u_0$, $u_1$, $u_2$, $u_3$
in the space $\P^3$ such that the cone $S$ is given by the equation $u_1^2=u_0u_2$.
By Lemma \ref{l:cone-par}, we may assume that the map $\Phi$ has the form
$[x_0:x_1:x_2]\mapsto [x_0^2:x_0x_1:x_1^2:\varphi_3(x_0,x_1,x_2)]$,
where $\varphi_3$ is a quadratic form in the variables $x_0$, $x_1$, $x_2$.
We may change $\varphi_3$ by adding any linear combination of $x_0^2$, $x_0x_1$, $x_1^2$,
i.e., by adding any quadratic form in $x_0$, $x_1$ only:
this can be implemented by means of a projective coordinate change in the target
space $\P^3$.
Thus we may assume that
$\varphi_3(x_0,x_1,x_2)=x_2(a_0x_0+a_1x_1+a_2x_2)$.
Suppose first that $a_2=0$.
Then at least one of the coefficients $a_0$, $a_1$ is nonzero.
Assume e.g. that $a_0\ne 0$ (the case $a_1\ne 0$ is obtained
from this case by interchanging $x_0$ and $x_1$).
Then we set $\wt x_0=a_0x_0+a_1x_1$, $\wt x_1=x_1$, $\wt x_2=x_2$.
In the new variables, the map $\Phi$ has the form
$\Phi:[\wt x_0:\wt x_1:\wt x_2]\mapsto [U_0:U_1:U_2:\wt x_2\wt x_0]$,
where $U_0$, $U_1$ and $U_2$ are linearly independent quadratic
forms in $\wt x_0$, $\wt x_1$.
Since the space of quadratic forms in $\wt x_0$, $\wt x_1$ is three-dimensional,
the monomials $\wt x_0^2$, $\wt x_0\wt x_1$, $\wt x_1^2$ can be
represented as linear combinations of $U_0$, $U_1$, $U_2$.
Therefore, changing homogeneous coordinates in the target space $\P^3$,
we can reduce $\Phi$ to the form
$\Phi:[\wt x_0:\wt x_1:\wt x_2]\mapsto
[\wt x_0^2:\wt x_0\wt x_1:\wt x_1^2:\wt x_2\wt x_0]$,
i.e., to the form $\Phi_2$.
Suppose now that $a_2\ne 0$.
Then we make the following change of variables:
$x_0=\wt x_0$, $x_1=\wt x_1$,
$x_2=c_0\wt x_0+c_1\wt x_1+c_2\wt x_2$.
In the new variables, the map $\Phi$ has the form
\begin{align*}
[\wt x_0:\wt x_1:\wt x_2]\mapsto [\wt x_0^2:\wt x_0\wt x_1:\wt x_1^2:
\wt\varphi_3(\wt x_0,\wt x_1,\wt x_2)],\\
\wt\varphi_3(\wt x_0,\wt x_1,\wt x_2)=
(c_0\wt x_0+c_1\wt x_1+c_2\wt x_2)\left((a_0+a_2c_0)\wt x_0+(a_1+a_2c_1)\wt x_1+
a_2c_2\wt x_2\right)=\\
=c_2\wt x_2\left((a_0+2a_2c_0)\wt x_0+(a_1+2a_2c_1)\wt x_1+a_2c_2\wt x_2\right)+\dots.
\end{align*}
The dots mean a quadratic form in $\wt x_0$, $\wt x_1$.
We now set
$$
c_0=-\frac{a_0}{2a_2},\quad c_1=-\frac{a_1}{2a_2},\quad c_2=\frac 1{\sqrt{a_2}}
$$
(we choose any one of the two complex values of $\sqrt{a_2}$).
Then we have $\wt\varphi_3=\wt x_2^2+\dots$, where dots mean a
quadratic form in $\wt x_0$, $\wt x_1$.
The latter can be
killed by a suitable change of variables in the target space (more precisely,
by adding a certain linear combination of the coordinates $u_0$, $u_1$, $u_2$
to the last coordinate $u_3$).
Thus we reduced $\Phi$ to the form
$[\wt x_0:\wt x_1:\wt x_2]\mapsto [\wt x_0^2:\wt x_0\wt x_1:\wt x_1^2:\wt x_2^2]$,
i.e., to the form $\Phi_3$.
\end{proof}
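The completion-of-the-square step in the case $a_2\ne 0$ can be double-checked numerically (an aside; we take $a_2=s^2$ for an integer $s$ so that $\sqrt{a_2}$ is exact and the computation can be carried out over the rationals):

```python
# Check that with c0 = -a0/(2*a2), c1 = -a1/(2*a2), c2 = 1/sqrt(a2),
# the form phi3~ - X2^2 involves X0, X1 only.  We take a2 = s**2 so
# that sqrt(a2) = s is exact, and use exact rational arithmetic.
from fractions import Fraction
import random

def phi3_tilde(a0, a1, a2, c0, c1, c2, X0, X1, X2):
    # phi_3 after the substitution x2 = c0*X0 + c1*X1 + c2*X2
    x2 = c0*X0 + c1*X1 + c2*X2
    return x2*(a0*X0 + a1*X1 + a2*x2)

random.seed(2)
for _ in range(50):
    a0 = Fraction(random.randint(-9, 9))
    a1 = Fraction(random.randint(-9, 9))
    s = random.randint(1, 9)
    a2 = Fraction(s*s)                    # a2 = s^2, sqrt(a2) = s exactly
    c0, c1, c2 = -a0/(2*a2), -a1/(2*a2), Fraction(1, s)
    for _ in range(10):
        X0, X1, X2 = (Fraction(random.randint(-9, 9)) for _ in range(3))
        g = phi3_tilde(a0, a1, a2, c0, c1, c2, X0, X1, X2) - X2*X2
        # g must not depend on X2: compare with the value at X2 = 0
        assert g == phi3_tilde(a0, a1, a2, c0, c1, c2, X0, X1, 0)
```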
Finally, we need to distinguish between $\Phi_2$ and $\Phi_3$, i.e.,
to prove that these two maps are not equivalent.
To this end, it suffices to compute the dual planarizations of
$\Phi_2$ and $\Phi_3$ and observe that equivalent planarizations must
have equivalent dual planarizations.
The planarization $\Phi_2$ is co-trivial: the image of every line
is contained in a plane passing through $[0:0:1:0]$.
On the other hand, a straightforward computation shows that the dual
planarization of $\Phi_3$ is equivalent to $\Phi_3$, in particular, is
not trivial.
This concludes the proof of Theorem \ref{t:qclass}.
\subsection{Real classification}
\label{ss:R}
In this subsection, we assume that the ground field is $\R$.
The proof of Theorem \ref{t:qclassR} splits into the following two lemmas.
\begin{lem}
\label{l:R-nonsing}
Suppose that $S$ is nonsingular.
Then $\Phi$ is equivalent to one and only one of the following maps:
\begin{align*}
\Phi_{1a}:&[x_0:x_1:x_2]\mapsto [x_0^2:x_0x_1:x_0x_2:x_1x_2],\\
\Phi_{1b}:&[x_0:x_1:x_2]\mapsto [x_0^2:x_0x_1:x_0x_2:x_1^2+x_2^2].
\end{align*}
\end{lem}
\begin{proof}
Consider the linear web of conics $\Lc_\Phi$ associated with the
complexification of $\Phi$.
By the proof of Lemma \ref{l:C-nonsing}, the web $\Lc_{\Phi}$ has two
different complex base points $a\ne b$ and consists of all conics
passing through $a$ and $b$.
There are two possibilities: $a$ and $b$ can be real or complex conjugate.
Suppose first that $a$ and $b$ are real.
Then, as in the proof of Lemma \ref{l:C-nonsing}, we show that
$\Phi$ is equivalent to $\Phi_{1a}$.
Suppose now that $a$ and $b$ are complex conjugate.
Performing a suitable change of homogeneous coordinates $[x_0:x_1:x_2]$ in $\P^2$
over real numbers, we may assume that $a=[0:1:i]$ and $b=[0:1:-i]$.
Then the web of all conics passing through $a$ and $b$ is spanned by
the degenerate conics $x_0^2=0$, $x_0x_1=0$, $x_0x_2=0$ and $x_1^2+x_2^2=0$.
Note that, in the affine chart $x_0=1$ with affine coordinates $x_1$, $x_2$,
the web $\Lc_\Phi$ is exactly the web of all circles.
With this choice of generating conics, $\Phi$ coincides with $\Phi_{1b}$.
\end{proof}
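As an aside, one can check directly that the four generating conics vanish at the complex conjugate base points $a=[0:1:i]$ and $b=[0:1:-i]$, e.g.\ with Python complex arithmetic:

```python
# The four generators x0^2, x0*x1, x0*x2, x1^2 + x2^2 of the web all
# vanish at the complex conjugate base points [0:1:i] and [0:1:-i]
# (the "circular points" of the web of circles in the chart x0 = 1).
gens = [
    lambda x0, x1, x2: x0**2,
    lambda x0, x1, x2: x0*x1,
    lambda x0, x1, x2: x0*x2,
    lambda x0, x1, x2: x1**2 + x2**2,
]
for pt in [(0, 1, 1j), (0, 1, -1j)]:
    for g in gens:
        assert g(*pt) == 0
```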
\begin{lem}
\label{l:R-sing}
Suppose that $S$ is singular but irreducible.
Then $\Phi$ is equivalent to one and only one of the following maps:
\begin{align*}
\Phi_{2}:&[x_0:x_1:x_2]\mapsto [x_0^2:x_0x_1:x_1^2:x_2x_0],\\
\Phi_{3}:&[x_0:x_1:x_2]\mapsto [x_0^2:x_0x_1:x_1^2:x_2^2].
\end{align*}
\end{lem}
\begin{proof}
The proof of Lemma \ref{l:C-sing} applies verbatim over reals,
except that it is not always possible to take $\sqrt{a_2}$.
If $a_2<0$, then we set instead $c_2=1/\sqrt{-a_2}$ and reduce
$\Phi$ to the form
$[\wt x_0:\wt x_1:\wt x_2]\mapsto [\wt x_0^2:\wt x_0\wt x_1:\wt x_1^2:-\wt x_2^2]$.
However, changing the sign of the last coordinate in $\P^3$ brings it
back to the form $\Phi_3$.
Since $\Phi_2$ and $\Phi_3$ are not equivalent over complex numbers,
neither are they over reals.
\end{proof}
\section{Normal forms of planarizations}
\label{s:norf}
In this section, we give a complete list of local normal forms of planarizations.
We assume that the ground field is $\R$.
\begin{thm}
\label{t:nf}
Suppose that $U\subset\P^2$ is an open subset and $\Phi:U\to\P^3$ is a planarization.
Then, for every open subset $V\subset U$ there exists a $($possibly smaller$)$
open subset $W\subset V$ such that $\Phi:W\to\P^3$ is projectively equivalent to at least one of the following normal forms:
\begin{itemize}
\item[$(T)$:] $[x:y:z]\mapsto [f(x,y,z):g(x,y,z):h(x,y,z):0]$
\item[$(CT)$:] $[x:y:z]\mapsto [x:y:z:f(x,y,z)]$
\item[$(Q_1)$:] $[x:y:z]\mapsto [xy:xz:yz:x^2+y^2+z^2]$
\item[$(Q_2)$:] $[x:y:z]\mapsto [xy:xz:yz:x^2 - y^2 + z^2]$
\item[$(Q_3)$:] $[x:y:z]\mapsto [x^2+y^2:y^2+z^2:xz:yz]$
\item[$(Q_4)$:] $[x:y:z]\mapsto [x^2-y^2:xy:yz:z^2]$
\item[$(Q_5)$:] $[x:y:z]\mapsto [xz-yz:x^2:y^2:z^2]$
\item[$(Q_6)$:] $[x:y:z]\mapsto [x^2:xz-y^2:yz:z^2]$
\item[$(Q_7)$:] $[x:y:z]\mapsto [y^2-z^2:xy:xz:yz]$
\item[$(Q_8)$:] $[x:y:z]\mapsto [xy:xz:y^2:z^2]$
\item[$(Q_9)$:] $[x:y:z]\mapsto [xy:xz-y^2:yz:z^2]$
\item[$(Q_{10})$:] $[x:y:z]\mapsto [x^2:xy:y^2:z^2]$
\item[$(C_1)$:] $[x:y:z]\mapsto [z(x^2+y^2):y(x^2+z^2):x(y^2+z^2):xyz]$
\item[$(C_2)$:] $[x:y:z]\mapsto [z(x^2-y^2):y(x^2+z^2):x(y^2-z^2):xyz]$
\item[$(C_3)$:] $[x:y:z]\mapsto [x^2z:z(x^2+y^2):x(x^2+y^2-z^2):y(x^2+y^2+z^2)]$
\item[$(C_4)$:] $[x:y:z]\mapsto [x^2y:x(x^2-y^2):z(x^2+y^2):yz^2]$
\item[$(C_5)$:] $[x:y:z]\mapsto [x^2(x+y):y^2(x+y):z^2(x-y):xyz]$
\item[$(C_6)$:] $[x:y:z]\mapsto [x^3:xy^2:2xyz-y^3:z(xz-y^2)]$.
\end{itemize}
Here $f$, $g$ and $h$ are sufficiently smooth degree 1 homogeneous functions of $x$, $y$, $z$.
In normal form $(T)$, the mapping $[x:y:z]\mapsto [f:g:h]$ represents an arbitrary
sufficiently smooth embedding of $W$ into $\P^2$.
\end{thm}
In Theorem \ref{t:nf}, the form $(T)$ represents all trivial planarizations,
and the form $(CT)$ represents all co-trivial planarizations.
These items correspond to infinitely many projective equivalence classes.
However, note that there are finitely many classes of nontrivial non-co-trivial
planarizations.
\begin{proof}
By the Main Theorem, every point $a\in U$ has a neighborhood $V\subset U$
such that the planarization $\Phi:V\to\P^3$ is trivial, co-trivial, quadratic
or dual quadratic.
Suppose first that $\Phi:V\to\P^3$ is trivial.
This means by definition that there is a plane $P\subset\P^3$ such that
$\Phi(V)\subset P$.
By a projective coordinate change, we may assume that the plane $P$
is given in homogeneous coordinates $[u_0:u_1:u_2:u_3]$ by $u_3=0$.
Then the map $\Phi:V\to\P^3$ has form $(T)$.
Suppose now that the planarization $\Phi:V\to\P^3$ is co-trivial but not trivial.
Then, by definition, there is a point $b\in\P^3$ such that, for every line $L\subset\P^2$,
there is a plane $P_L$ containing the set $\Phi(L\cap V)\cup\{b\}$.
We may assume that $b=[0:0:0:1]$.
Since $\Phi:V\to\P^3$ is not trivial, there is a nonempty open subset $W\subset V$
such that $\Phi(W)\not\ni b$.
Let $\P^2(b)$ be the projective plane formed by all lines in $\P^3$ passing through $b$,
and let $\pi:\P^3\sm\{b\}\to\P^2(b)$ be the canonical projection mapping a point
$c\ne b$ to the line $bc$.
Note that the map $\Psi=\pi\circ\Phi:W\to\P^2(b)$ has the following property.
For every line $L\subset\P^2$, the set $\Psi(W\cap L)$ is a subset of a line.
By the M\"obius--von Staudt theorem, a map with this property must be
a restriction of a projective transformation or a mapping from $W$ to a line,
possibly after replacing $W$ with a smaller open set.
In the second case, $\Phi:W\to\P^3$ is trivial, and therefore is equivalent to
the form $(T)$.
In the first case, the map $\Psi$ is given by $[x:y:z]\mapsto [x:y:z]$
provided that we choose a suitable system of projective coordinates in $\P^2(b)$.
Then the map $\Phi:W\to\P^3$ is given by
$[x:y:z]\mapsto [x:y:z:f(x,y,z)]$ for some (sufficiently smooth) function $f$.
Suppose now that the planarization $\Phi:V\to\P^3$ is quadratic but
neither trivial nor co-trivial.
Note that the image $\Phi(V)$ lies in a surface $S$ of degree 2, 3 or 4.
If $S$ has degree 2, then $\Phi$ is equivalent to one of the maps $\Phi_{1a}$, $\Phi_{1b}$,
$\Phi_2$, $\Phi_3$ from Theorem \ref{t:qclassR}.
Since, by our assumption, $\Phi$ is not co-trivial, it must be equivalent to $\Phi_3$;
the latter is denoted by $(Q_{10})$ in the statement of the theorem.
Suppose now that $S$ has degree 3 or 4.
In this case, we refer to the results of \cite{CSS}.
By \cite{CSS}, every quadratic rational map $\Phi$ such that $\Phi(\P^2\sm B_\Phi)$ is dense
in a surface of degree 3 or 4 is equivalent to one of the maps $(Q_1)$--$(Q_9)$.
Finally, suppose that the planarization $\Phi:V\to\P^3$ is dual quadratic but
not trivial, not co-trivial, and not quadratic.
Then its dual planarization is equivalent to one of the maps $(Q_1)$--$(Q_{10})$.
A straightforward computation shows that the dual planarizations to
$(Q_1)$--$(Q_6)$ are equivalent, respectively, to $(C_1)$--$(C_6)$.
The planarizations $(Q_7)$--$(Q_9)$, characterized by the property that the
corresponding surfaces in $\P^3$ are cubic, turn out to be equivalent to
their dual planarizations.
In particular, the dual planarizations of $(Q_7)$--$(Q_9)$ are quadratic.
\end{proof}
The equations of the surfaces parameterized by dual-quadratic planarizations
$(C_1)$--$(C_6)$ are
\begin{itemize}
\item[$(C_1)$:] $4t^3 - t(u^2 + v^2 + w^2) + uvw = 0$
\item[$(C_2)$:] $4t^3 + t(u^2 - v^2 + w^2) + uvw = 0$
\item[$(C_3)$:] $4vu^2 + u(t^2 - 4v^2 + w^2) - vw^2 = 0$
\item[$(C_4)$:] $4tu^2 - uw^2 + tv^2 = 0$
\item[$(C_5)$:] $u(vw - t^2) + vt^2 = 0$
\item[$(C_6)$:] $u(4tv - w^2) + v^3 = 0$,
\end{itemize}
where $[u:v:w:t]$ are homogeneous coordinates in $\P^3$.
These equations are obtained by eliminating the three variables $x$, $y$, $z$ from
the four equations
$$
[u:v:w:t]=\Phi[x:y:z].
$$
We used \textit{Mathematica} to perform the computations.
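The result of the elimination can be verified independently (an aside; the coordinates $[u:v:w:t]$ are matched, in order, to the components of the maps $(C_1)$--$(C_6)$ as listed in Theorem \ref{t:nf}): substituting each parameterization into its equation, the latter vanishes identically, which the following Python sketch checks at random integer points with exact integer arithmetic.

```python
# Verify that each implicit equation vanishes on the image of the
# corresponding parameterization (C_1)-(C_6), at random integer points.
import random

cases = [
    (lambda x, y, z: (z*(x**2 + y**2), y*(x**2 + z**2), x*(y**2 + z**2), x*y*z),
     lambda u, v, w, t: 4*t**3 - t*(u**2 + v**2 + w**2) + u*v*w),        # (C_1)
    (lambda x, y, z: (z*(x**2 - y**2), y*(x**2 + z**2), x*(y**2 - z**2), x*y*z),
     lambda u, v, w, t: 4*t**3 + t*(u**2 - v**2 + w**2) + u*v*w),        # (C_2)
    (lambda x, y, z: (x**2*z, z*(x**2 + y**2),
                      x*(x**2 + y**2 - z**2), y*(x**2 + y**2 + z**2)),
     lambda u, v, w, t: 4*v*u**2 + u*(t**2 - 4*v**2 + w**2) - v*w**2),   # (C_3)
    (lambda x, y, z: (x**2*y, x*(x**2 - y**2), z*(x**2 + y**2), y*z**2),
     lambda u, v, w, t: 4*t*u**2 - u*w**2 + t*v**2),                     # (C_4)
    (lambda x, y, z: (x**2*(x + y), y**2*(x + y), z**2*(x - y), x*y*z),
     lambda u, v, w, t: u*(v*w - t**2) + v*t**2),                        # (C_5)
    (lambda x, y, z: (x**3, x*y**2, 2*x*y*z - y**3, z*(x*z - y**2)),
     lambda u, v, w, t: u*(4*t*v - w**2) + v**3),                        # (C_6)
]

random.seed(3)
for phi, F in cases:
    for _ in range(50):
        x, y, z = (random.randint(-30, 30) for _ in range(3))
        assert F(*phi(x, y, z)) == 0
```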
We provide figures of these surfaces below, see Figures 1--3.
The surfaces that are parameterized by maps $(Q_1)$--$(Q_9)$ have been studied in \cite{CSS}.
In particular, pictures of all these surfaces can be found there.
\begin{figure}[H]
\includegraphics[height=4cm]{planar1.eps}\hspace{1cm}
\includegraphics[height=4cm]{planar2.eps}
\caption{The surfaces parameterized by $(C_1)$ (left) and by
$(C_2)$ (right) in the affine chart $t=1$.}
\end{figure}
\begin{figure}[H]
\includegraphics[height=4cm]{planar3.eps}\hspace{1cm}
\includegraphics[height=4cm]{planar4.eps}
\caption{The surfaces parameterized by $(C_3)$ in the affine chart $t=1$ (left)
and by $(C_4)$ in the affine chart $w=1$ (right).}
\end{figure}
\begin{figure}[H]
\includegraphics[height=4cm]{planar5.eps}\hspace{1cm}
\includegraphics[height=4cm]{planar6.eps}
\caption{The surfaces parameterized by $(C_5)$ in the affine chart $t=1$ (left)
and by $(C_6)$ in the affine chart $w=1$ (right).}
\end{figure}
\section{Introduction}
Computer simulation is an essential tool for studying physical properties
of many-particle systems. The Metropolis-type Monte Carlo simulation
\cite{metro53} with a single spin flip has been a success as a standard method
of simulation of many-particle systems. However, the single-spin-flip
algorithm often suffers from the problem of slow dynamics
or the critical slowing down; that is, the relaxation time
diverges at the critical temperature.
To overcome this difficulty, a cluster-flip algorithm was proposed
by Swendsen and Wang \cite{sw87}.
They applied the Fortuin-Kasteleyn \cite{fk72}
representation to identify clusters of spins.
The problem of the thermal phase transition is mapped onto
the geometric percolation problem in the cluster formalism.
In the cluster algorithm, spins in the cluster are updated
at a time. In the Swendsen-Wang (SW) algorithm, all the spins
are partitioned into clusters; thus, the SW algorithm is
called the multi-cluster algorithm.
Wolff \cite{wolff89} proposed another type of cluster algorithm,
that is, a single-cluster algorithm, where only a single cluster
is generated, and the spins of that cluster are updated.
Although the cluster algorithm was originally formulated
for the scalar order parameter, such as the Potts model,
Wolff \cite{wolff89} introduced the idea of embedded cluster
to deal with systems of vector spins, such as
the classical XY model or the classical Heisenberg model.
Computational physics develops with advances in computer technology.
Recently the use of general purpose computing on graphics processing
unit (GPU) is a hot topic in computer science.
Drastic reduction of processing times can be realized in
scientific computations.
Using the common unified device architecture (CUDA) released by NVIDIA,
it is now easy to implement algorithms on GPU
using standard C or C++ language with CUDA specific extension.
Preis {\it et al.} \cite{preis09} studied the two-dimensional (2D) and
three-dimensional (3D) Ising models
by using the Metropolis algorithm with CUDA.
They used a variant of sublattice decomposition
for a parallel computation on GPU.
The spins on one sublattice do not interact with other spins
on the same sublattice. Therefore one can update all spins
on a sublattice in parallel in the Metropolis simulation.
As a result, they obtained speedups of 60 times for the 2D Ising model
and 35 times for the 3D Ising model compared to a current CPU core.
Recently, the GPU acceleration of the multispin coding
of the Ising model was discussed \cite{block10}.
Moreover, many attempts for simulating lattice spin models on GPU
using the Metropolis algorithm were reported
\cite{Levy,Bernaschi_GPU_spin_glass,weigel_spin_model}.
Since the Metropolis algorithm has the problem of slow dynamics
as mentioned above, and this problem becomes more conspicuous
as the system size increases, it is highly desirable
to apply the GPU-based calculation to cluster algorithms.
Only limited trials have been reported so far.
The present authors \cite{komura11} have proposed the GPU-based
calculation with CUDA for the Wolff single-cluster algorithm,
where parallel computations are performed for the newly added spins
in the growing cluster.
Hawick {\it et al.} \cite{Hawick_single_cluster} have studied
the CUDA implementation of the Wolff algorithm
using a modified connected component labeling
for the assignment of the cluster. They put more emphasis on
the hybrid implementation of Metropolis and Wolff updates and
the optimal choice of the ratio of both updates.
Quite recently, Weigel \cite{weigel11} has studied parallelization of
cluster labeling and cluster update algorithms for calculations
with CUDA. He realized the SW multi-cluster algorithm
by using the combination of self-labeling algorithm and
label relaxation algorithm or hierarchical sewing algorithm.
In this paper, we present the GPU-based calculation with CUDA
for the SW multi-cluster algorithm of 2D classical spin systems.
We realize the SW cluster algorithm by using the connected component
labeling algorithm for the assignment of clusters.
The rest of the paper is organized as follows.
In section 2, we briefly describe the standard way of implementing
the SW algorithm on CPU.
In section 3, we explain two types of connected component labeling
which are used in the present calculation, and the idea of
implementing the SW cluster algorithm on GPU.
In section 4, we compare the performance of GPU calculation
with that of CPU calculation.
The summary and discussion are given in section 5.
\section{Swendsen-Wang cluster algorithm}
We start with the Potts model whose Hamiltonian is given by
\begin{equation}
\mathcal{H} = -J \sum_{<i,j>}(\delta_{S_{i},S_{j}}-1),
\quad S_{i} = 1, 2, \cdots, q,
\end{equation}
and for $q$ = 2 this corresponds to the Ising model.
Here, $J$ is the coupling and $S_{i}$ is the Potts spin
on the lattice site $i$. The summation is taken over
the nearest neighbor pairs $<i,j>$.
Periodic boundary conditions are employed.
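For concreteness, the energy of a configuration under this Hamiltonian can be evaluated as follows (a minimal pure-Python sketch with $J=1$ and a small illustrative lattice):

```python
# Minimal sketch (illustrative, J = 1): energy of a Potts configuration
# on an L x L square lattice with periodic boundary conditions,
# H = -J * sum_<i,j> (delta(S_i, S_j) - 1), i.e. each unequal
# nearest-neighbor pair costs J.
def potts_energy(spins, J=1.0):
    L = len(spins)
    E = 0.0
    for i in range(L):
        for j in range(L):
            for di, dj in ((1, 0), (0, 1)):      # right and down bonds
                if spins[i][j] != spins[(i + di) % L][(j + dj) % L]:
                    E += J
    return E

uniform = [[1]*4 for _ in range(4)]
checker = [[(i + j) % 2 + 1 for j in range(4)] for i in range(4)]
assert potts_energy(uniform) == 0.0      # all delta terms equal 1
assert potts_energy(checker) == 32.0     # all 32 bonds are unequal
```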
Swendsen and Wang proposed a Monte Carlo algorithm
of multi-cluster flip \cite{sw87}.
There are three main steps in the SW algorithm:
(1) Construct a bond lattice of active or non-active bonds.
(2) The active bonds partition the spins into clusters which are
identified and labeled using a cluster-labeling algorithm.
(3) All spins in each cluster are set randomly to one of $q$.
The cluster identification problem is a variant of
connected component labeling, which is an algorithmic
application of graph theory.
For an efficient cluster-labeling algorithm,
the Hoshen-Kopelman algorithm \cite{Hoshen_Kopelman}, which was
first introduced in context of cluster percolation, is
often used. The Hoshen-Kopelman algorithm
is a special version of the class of union-and-find algorithms
\cite{cormen}, and
has an advantage over
other methods in low computer memory usage and
short computational time.
The actual spin-update process of the SW cluster algorithm
on a CPU can be formulated as follows \cite{janke,landau}:
\begin{itemize}
\item[(i)] Choose a site $i$.
\item[(ii)] Look at each of the nearest neighbors $j$. If $S_j$
is equal to $S_i$, generate bond between site $i$ and $j$
with probability $p=1-e^{-\beta}$, where $\beta$ is
the inverse temperature $J/T$.
\item[(iii)]
Choose the next spin and go to (i) until all sites are checked.
\item[(iv)]
Apply the Hoshen-Kopelman algorithm \cite{Hoshen_Kopelman} to identify all clusters.
\item[(v)]
Choose a cluster.
\item[(vi)]
Assign the spins $S_i$ in the cluster to one of $q$
with probability $1/q$.
\item[(vii)]
Choose another cluster and go to (vi) until all clusters are checked.
\item[(viii)] Go to (i).
\end{itemize}
The procedures from (i) to (iii) correspond to the step of
active bond generation.
The procedure (iv) corresponds to the step of cluster labeling.
Those from (v) to (vii) correspond
to the step of spin flip.
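The steps (i)--(vii) above can be sketched serially in pure Python (an illustration only, not the authors' code, with a simple union--find standing in for the Hoshen--Kopelman step; the lattice size, $q$ and $\beta$ are illustrative choices):

```python
# Serial sketch of one SW update for the q-state Potts model on an
# L x L periodic lattice.  A simple union-find with path compression
# replaces the Hoshen-Kopelman step.
import math
import random

def sw_update(spins, q, beta, rng):
    L = len(spins)
    N = L * L
    p = 1.0 - math.exp(-beta)                # bond probability, beta = J/T
    parent = list(range(N))

    def find(i):                             # proper label of site i
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # steps (i)-(iii): activate bonds between equal nearest neighbors
    for i in range(L):
        for j in range(L):
            for ni, nj in ((i + 1) % L, j), (i, (j + 1) % L):
                if spins[i][j] == spins[ni][nj] and rng.random() < p:
                    parent[find(i * L + j)] = find(ni * L + nj)
    # step (iv): identify clusters; (v)-(vii): set each cluster at once
    new_spin = [rng.randint(1, q) for _ in range(N)]
    return [[new_spin[find(i * L + j)] for j in range(L)] for i in range(L)]

rng = random.Random(0)
spins = [[rng.randint(1, 2) for _ in range(8)] for _ in range(8)]
spins = sw_update(spins, q=2, beta=1.0, rng=rng)
```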
In the Hoshen-Kopelman cluster-labeling algorithm, integer labels
are assigned to each spin in a cluster. Each cluster has
its own distinct set of labels. The proper label of a cluster,
which is defined to be the smallest label of any spin in the cluster,
is found by the following function.
The array \verb+label+ is used, and
if \verb+label+ is a label belonging to a cluster,
the \verb+label[label]+ is the index of another label
in the same cluster which has a smaller value
if such a smaller value exists. The proper label for the cluster
is found by evaluating \verb+label[label]+ repeatedly.
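The label-chasing just described can be sketched as follows (a schematic pure-Python fragment, not the authors' code):

```python
# Resolve the proper label of a cluster in the Hoshen-Kopelman scheme:
# follow label[label] repeatedly until it stops changing.
def proper_label(label, l):
    while label[l] != l:
        l = label[l]
    return l

# example chain: 5 -> 3 -> 1 -> 0, so the proper label of 5 is 0
label = [0, 0, 2, 1, 4, 3]
assert proper_label(label, 5) == 0
assert proper_label(label, 2) == 2
```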
\section{GPU calculation of the Swendsen-Wang cluster algorithm}
Since the calculations of the step of active bond generation
and the step of spin flip are done independently on each site,
these steps are well suited for parallel computation on GPU.
On the other hand, in the step of cluster labeling
the assignment of label of cluster is done on each site
piece by piece sequentially; thus the cluster-labeling algorithm
such as the Hoshen-Kopelman algorithm cannot be directly
applied to the parallel computation on GPU.
\begin{figure}
\begin{center}
\includegraphics[width=1.0\linewidth]{figure1.eps}
\caption{\label{fig:fig1}
Two steps of iterations of the "Label Equivalence" method
proposed by Hawick {\it et al.} \cite{Hawick_labeling}.
The connection of sites in the same cluster is represented
by the same color, and the arrow shows the neighboring sites
to check for comparison.
The thread number is denoted by "index". The variable for saving
label and the temporal variable are represented by "label" and "R",
respectively.
The scanning function compares "label" of each site with
that of the nearest-neighbor sites.
If "label" of the nearest-neighbor site is smaller than "label"
of that site, "R[label]" is updated to the smallest one.
The equivalence chain of "R" is resolved in the analysis function
from the starting site to the new site if "label[index]" is equal to "index".
The labeling function updates "label"
by label[index] $\leftarrow$ R[label[index]]. Although some clusters
are not represented by the same "label" at the end of the 1st step
in this case, all the sites reaches the final label by two steps
of iteration. The process of update of "R" in the 2nd step
is also shown in the figure.
}
\end{center}
\end{figure}
Recently, Hawick {\it et al.} \cite{Hawick_labeling}
studied the cluster-labeling algorithm efficient
for GPU calculation. Checking four implementations of
multi-pass labeling method, they proposed the labeling
method of "Label Equivalence", which is the most efficient
among four proposals.
The procedure of their algorithm is explained in figure \ref{fig:fig1}.
Their algorithm consists of three kernel functions, that is,
scanning function, analysis function and labeling function,
and two variables for labeling; one is a variable
for saving the label, "label" in figure \ref{fig:fig1}, and the other is
a temporal variable for updated label, "R" in figure \ref{fig:fig1}.
The scanning function compares the label of each site with
that of the nearest-neighbor sites when the bond between
each site and the nearest-neighbor site is active.
If the label of the nearest-neighbor site is smaller than the label
of that site, the temporal variable with the label number,
R[label[index]] in figure \ref{fig:fig1}, is updated to the smallest one.
For the update of the temporal variable on the scanning function,
the atomic operation
\verb+atomicMin()+ is used. Atomic operations provided by CUDA
are performed without interference from any other threads.
The analysis function resolves the equivalence chain of "R" obtained
in the scanning function; the temporal variable
\verb+R[index]+
is updated from the starting site
to the new site, which is similar to the method of
the Hoshen-Kopelman algorithm.
Each thread checks the temporal variable and the label on each site.
When the label number, "label", is equal to the thread number, "index",
each thread tracks back the temporal variable until
the temporal variable, "R", remains unchanged.
Since each thread executes this operation concurrently,
the final value is reached quickly.
The labeling function updates the label for saving
by \verb+label[index]+ $\leftarrow$ \verb+R[label[index]]+.
In the cluster-labeling algorithm due to Hawick {\it et al.},
the loop over the three functions is iterated until the scanning
function no longer changes any label.
Only a small number of iterations is needed; 4096$\times$4096 systems
with free boundary conditions were labeled in 9 or fewer iterations \cite{Hawick_labeling}.
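A serial pure-Python illustration of the scheme may help fix the ideas (each inner loop stands in for one parallel kernel launch over all threads; the bond structure is an illustrative input, and \verb+atomicMin+ degenerates to an ordinary minimum in this serial setting):

```python
# Serial illustration of the "Label Equivalence" scheme: scanning,
# analysis and labeling passes over all sites, iterated until no label
# changes.  `bonds` lists the active-bond neighbors of each site; each
# for-loop stands in for one parallel kernel launch.
def label_equivalence(bonds):
    n = len(bonds)
    label = list(range(n))
    R = list(range(n))
    changed = True
    while changed:
        changed = False
        for i in range(n):                        # scanning kernel
            m = min([label[j] for j in bonds[i]], default=label[i])
            if m < label[i]:
                R[label[i]] = min(R[label[i]], m)  # atomicMin, serially
                changed = True
        for i in range(n):                        # analysis kernel
            if label[i] == i:
                r = R[i]
                while R[r] != r:                  # resolve the chain
                    r = R[r]
                R[i] = r
        for i in range(n):                        # labeling kernel
            label[i] = R[label[i]]
    return label

# two clusters on 6 sites: {0,1,2} bonded, {3,4} bonded, {5} isolated
bonds = [[1], [0, 2], [1], [4], [3], []]
lab = label_equivalence(bonds)
assert lab[0] == lab[1] == lab[2]
assert lab[3] == lab[4]
assert len({lab[0], lab[3], lab[5]}) == 3
```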
\begin{figure}
\begin{center}
\includegraphics[width=1.0\linewidth]{figure2.eps}
\caption{
\label{fig:fig2}
The first step of iterations of the refinement of Label Equivalence
method proposed by Kalentev {\it et al.} \cite{Kalentev}.
The meanings of the colors and arrows are the same as in figure \ref{fig:fig1}.
The thread number is denoted by "index".
The scanning function compares "label" of each site with
that of the nearest-neighbor sites.
The equivalence chain is resolved in the analysis function
from the starting site to the new site if "label[label]" is not equal
to "label", which results in the update of "label".
Since the cluster labeling due to Kalentev {\it et al.} is the refined
version of that due to Hawick {\it et al.}, the output of the labeling
is the same as in figure \ref{fig:fig1}. In this case some clusters are
not represented by the same "label" at the end of the 1st step,
but all the sites reach the final label after two steps of iteration.
}
\end{center}
\end{figure}
More recently, Kalentev {\it et al.} \cite{Kalentev} reported
a refinement of the algorithm due to Hawick {\it et al.}
The procedure of their algorithm is shown in figure \ref{fig:fig2}.
First, they used only one variable for labeling instead of two,
because there is no need for a temporal reference;
the implementation was thus improved in terms of memory consumption.
This means that the number of kernel functions is reduced
from three to two, because the labeling function
is no longer needed.
Second, they changed the execution condition on the analysis function
from "when \verb+label[index]+ is equal to \verb+index+" to
"when \verb+label[label]+ is not equal to \verb+label+".
Finally, they eliminated the atomic operation.
The update of the labeling is iterated until the
scanning function produces no further changes;
thus, even if a collision between threads occurs because of
the absence of atomic operations,
it is resolved during the next iterative step.
With the refinements due to Kalentev {\it et al.}, both an improvement
in computational speed and a reduction in memory usage were realized.
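The refined scheme can be sketched in the same sequential style. Here a single label array is updated in two passes per iteration, without atomics, and the analysis pass fires when \verb+label[label]+ differs from \verb+label+; again this is a sketch of ours, not the original CUDA code.

```python
# Sequential emulation of the refined (Kalentev et al.) scheme:
# one label array, two passes per iteration, no atomic operations.
# bond[i] == 1 means site i is connected to site i + 1.

def kalentev_label(bond):
    n = len(bond) + 1
    label = list(range(n))
    changed = True
    while changed:
        changed = False
        # scanning: pull the smaller root label across each bond
        for i, b in enumerate(bond):
            if b:
                m = min(label[label[i]], label[label[i + 1]])
                if m < label[label[i]]:
                    label[label[i]] = m
                    changed = True
                if m < label[label[i + 1]]:
                    label[label[i + 1]] = m
                    changed = True
        # analysis: executed when label[label] != label (chain resolution)
        for i in range(n):
            if label[label[i]] != label[i]:
                r = label[i]
                while label[r] != r:
                    r = label[r]
                label[i] = r
    return label
```

In the sequential emulation no write is ever lost; in the parallel version a lost update caused by a thread collision is simply repaired on the next iteration, as described above.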
We adapt the two cluster-labeling algorithms, that due to Hawick {\it et al.}
\cite{Hawick_labeling} and that due to Kalentev {\it et al.} \cite{Kalentev},
to the SW multi-cluster algorithm of Monte Carlo simulation.
In these cluster-labeling algorithms, the labels of the clusters are not
assigned serially. To flip the spins in a cluster of the SW algorithm,
however, we do not need to know a serial number for the label of the cluster:
we assign a new spin to every possible label number, even if no cluster
with that label exists. Because of parallel computation,
assigning a new spin to all possible label numbers
requires no extra cost.
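A sketch of this spin-flip step (our code; the function name is hypothetical): one random new spin value is drawn for every possible label number, and each site simply looks up the value belonging to its own label, so no compaction of labels into serial cluster numbers is needed.

```python
import random

# SW spin flip after labeling: draw a new q-state spin for every
# possible label value 0 .. N-1, whether or not a cluster with that
# label exists, then let each site read the value for its label.

def sw_flip(label, q, rng=random):
    new_spin = [rng.randrange(q) for _ in range(len(label))]  # one value per label
    return [new_spin[l] for l in label]                       # per-site lookup
```

All sites sharing a label necessarily receive the same new spin, which is exactly the SW cluster update.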
To improve the computational speed and save memory, we store
the information on spin, bond and label in one word; this idea
was used by Hawick {\it et al.} \cite{Hawick_single_cluster}.
When treating a system with many spin states, however,
we separate the spin information from the one-word encoding.
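As an illustration of such one-word packing (the field widths below are our own choice for the sketch; the paper does not specify them):

```python
# Pack spin, bond and label fields into one integer word.
# Field widths are illustrative: 2 bits of spin, 2 bits of bonds,
# the remaining bits for the label.

SPIN_BITS, BOND_BITS = 2, 2

def pack(spin, bonds, label):
    return spin | (bonds << SPIN_BITS) | (label << (SPIN_BITS + BOND_BITS))

def unpack(word):
    spin = word & ((1 << SPIN_BITS) - 1)
    bonds = (word >> SPIN_BITS) & ((1 << BOND_BITS) - 1)
    label = word >> (SPIN_BITS + BOND_BITS)
    return spin, bonds, label
```

A single global-memory read then delivers all three pieces of per-site information at once.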
Finally, we note that random numbers are generated with
the linear congruential generator proposed
by Preis {\it et al.} \cite{preis09}.
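A per-thread linear congruential generator in this spirit can be sketched as follows; the constants are the common 32-bit ``quick and dirty'' LCG parameters, and whether they coincide exactly with those used by Preis {\it et al.} is an assumption of this sketch.

```python
# Minimal per-thread linear congruential generator: each thread keeps
# its own 32-bit seed and advances it independently. Constants are the
# common 32-bit LCG parameters (an assumption, not verified against
# the original implementation of Preis et al.).

MASK = 0xFFFFFFFF

def lcg_step(seed):
    return (1664525 * seed + 1013904223) & MASK

def lcg_uniform(seed):
    """Advance the seed and return (new_seed, uniform value in [0, 1))."""
    seed = lcg_step(seed)
    return seed, seed / 2**32
```

On the GPU, each thread stores its seed in a register and needs no communication with other threads to draw a random number.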
\section{Results}
We have tested the performance of our code on NVIDIA GeForce GTX580
and GTX285.
For comparison, we ran the code on a current CPU,
an Intel(R) Xeon(R) W3680 @ 3.33GHz, using only one core.
As the compiler, we used gcc 4.1.2 with the -O3 option.
We first show the data for the 2D $q$-state Potts models.
For the cluster-labeling algorithm, we use both the algorithm
due to Hawick {\it et al.} \cite{Hawick_labeling} and
that due to Kalentev {\it et al.} \cite{Kalentev}.
We compare the GPU computational time with the CPU computational time
at the critical temperature, $T_{c}/J = 1/\ln(1+\sqrt{2}) = 1.1346$
for the $q=2$ Potts model (Ising model) and
$T_{c}/J = 1/\ln(1+\sqrt{3}) = 0.9950$ for the $q=3$ Potts model.
The average computational times per spin update
at the critical temperature for the $q=2$ Potts model
and the $q=3$ Potts model
are tabulated in tables \ref{tb:GPU_CPU_time_q=2_Potts} and
\ref{tb:GPU_CPU_time_q=3_Potts}, respectively.
There, the time for a spin update only and
that including the measurement of energy and magnetization are given.
The measured time per spin flip is shown in units of nanoseconds.
The linear system sizes are $L$=256, 512, 1024, 2048 and 4096.
We can see from tables \ref{tb:GPU_CPU_time_q=2_Potts} and
\ref{tb:GPU_CPU_time_q=3_Potts} that the computational time of
our GPU implementation of the SW algorithm is almost constant
for $L \ge 1024$. Moreover, the computational speed with the algorithm
of Kalentev {\it et al.} is superior to that with the algorithm of
Hawick {\it et al.} for all system sizes.
For $q=2$ with $L=4096$ on the GTX580, the performance is
2.96 nanoseconds per spin flip with the algorithm of Hawick {\it et al.}
and 2.51 nanoseconds per spin flip with the algorithm of
Kalentev {\it et al.}
Comparing the performance on the GTX580 with that on the CPU,
the algorithm of Kalentev {\it et al.} gives a speedup of
12.4 times for a spin flip, and 12.6 times when the measurement of energy
and magnetization is included, for the $q=2$ Potts model with $L=4096$.
The number of iterations at the critical temperature
is about 6.6, 7.1, 7.6, 8.1 and 8.6 on average for $L$ = 256, 512,
1024, 2048 and 4096, respectively; that is, the loop count gradually
increases with system size.
We here mention the amount of memory used.
The register usage is 10 to 13 bytes per thread, and
the shared-memory usage is 2048 bytes per block for each kernel function;
these values are independent of system size.
Using the "GPU Occupancy Calculator", we checked that the GPU occupancy
is 100\% for each kernel function, which indicates that the best
performance of the GPU is attained.
\begin{table*}[htbp]
\begin{center}
\begin{tabular}{lllllll}
\hline
& & $L$=256 & $L$=512 & $L$=1024 & $L$=2048 & $L$=4096 \\
\hline
GTX580 & update only & 5.02 nsec & 3.48 nsec & 3.02 nsec & 2.96 nsec & 2.96 nsec\\
\ \ Hawick {\it et al.} & + measurement & 5.73 nsec & 3.94 nsec & 3.40 nsec & 3.32 nsec & 3.34 nsec\\
GTX580 & update only & 4.76 nsec & 3.10 nsec & 2.58 nsec & 2.51 nsec & 2.51 nsec\\
\ \ Kalentev {\it et al.} & + measurement & 5.47 nsec & 3.54 nsec & 2.98 nsec & 2.86 nsec & 2.87 nsec\\
GTX285 & update only & 10.0 nsec & 6.96 nsec & 6.14 nsec & 6.03 nsec & 6.04 nsec\\
\ \ Hawick {\it et al.} & + measurement & 11.2 nsec & 7.63 nsec & 6.70 nsec & 6.55 nsec & 6.60 nsec\\
GTX285 & update only & 8.76 nsec & 5.86 nsec & 5.12 nsec & 5.00 nsec & 5.07 nsec\\
\ \ Kalentev {\it et al.} & + measurement & 9.90 nsec & 6.52 nsec & 5.66 nsec & 5.51 nsec & 5.60 nsec\\
Xeon(R) W3680 & update only & 28.9 nsec & 30.0 nsec & 31.3 nsec & 31.1 nsec & 31.2 nsec\\
& + measurement & 33.6 nsec & 34.6 nsec & 36.4 nsec & 36.1 nsec & 36.3 nsec\\
\hline
\end{tabular}
\caption{\label{tb:GPU_CPU_time_q=2_Potts}Average computational time per
spin flip
at $T_c$ for the $q=2$ Potts model. The time for a spin
update only and that including the measurement of energy and magnetization are
given.}
\end{center}
\end{table*}
\begin{table*}[htbp]
\begin{center}
\begin{tabular}{lllllll}
\hline
& & $L$=256 & $L$=512 & $L$=1024 & $L$=2048 & $L$=4096 \\
\hline
GTX580 & update only & 4.85 nsec& 3.41 nsec& 2.94 nsec & 2.88 nsec & 2.89 nsec\\
\ \ Hawick {\it et al.} & + measurement & 5.70 nsec& 3.93 nsec& 3.39 nsec & 3.31 nsec & 3.31 nsec\\
GTX580 & update only & 4.54 nsec& 3.02 nsec& 2.51 nsec & 2.43 nsec & 2.44 nsec\\
\ \ Kalentev {\it et al.}& + measurement & 5.39 nsec& 3.54 nsec& 2.97 nsec & 2.86 nsec & 2.85 nsec\\
GTX285 & update only & 9.92 nsec& 6.92 nsec& 6.09 nsec & 5.94 nsec & 5.96 nsec\\
\ \ Hawick {\it et al.} & + measurement & 11.2 nsec& 7.72 nsec& 6.77 nsec & 6.60 nsec & 6.61 nsec\\
GTX285 & update only & 8.51 nsec& 5.76 nsec& 5.01 nsec & 4.88 nsec & 4.96 nsec\\
\ \ Kalentev {\it et al.}& + measurement & 9.84 nsec& 6.56 nsec& 5.67 nsec & 5.54 nsec & 5.60 nsec\\
Xeon(R) W3680 & update only & 29.2 nsec& 29.2 nsec& 31.5 nsec & 31.4 nsec & 31.7 nsec\\
& + measurement & 35.1 nsec& 34.9 nsec& 37.3 nsec & 37.5 nsec & 37.6 nsec\\
\hline
\end{tabular}
\caption{\label{tb:GPU_CPU_time_q=3_Potts}Average computational time per spin flip
at $T_c$ for the $q=3$ Potts model. The time for a spin
update only and that including the measurement of energy and magnetization are given.}
\end{center}
\end{table*}
Next, we discuss the temperature dependence of our GPU implementation
of the SW algorithm.
We plot the temperature dependence of the GPU computational time
for the $q=2$ Potts model and the $q=3$ Potts model
with $L=1024$ in figures \ref{fig:fig3}(a) and (b), respectively.
There, we show the average computational time per spin flip
with the two cluster-labeling algorithms in units of nanoseconds.
From figures \ref{fig:fig3}(a) and (b) we can see that
the computational time is nearly independent
of temperature.
Thus, our GPU implementation of the SW algorithm
is effective over the whole range of temperatures.
We observe that
the computational time becomes slightly longer near
the critical temperature, which reflects the fact
that the loop count of the iteration in the cluster labeling
increases there.
This may be due to the complex shapes of the clusters
near the critical temperature.
\begin{figure}
\begin{center}
\includegraphics[width=0.4\linewidth]{figure3a.eps}
\hspace{0.02\linewidth}
\includegraphics[width=0.4\linewidth]{figure3b.eps}
\caption{\label{fig:fig3}
(a) Temperature dependence of the computational time for GPU computation for the $q=2$
Potts model with $L=1024$ and (b) that for the $q=3$
Potts model with $L=1024$. }
\end{center}
\end{figure}
As an illustration, we plot the moment ratio,
\begin {equation}
U(T) = \frac{\langle M(T)^4 \rangle}{\langle M(T)^2 \rangle^2},
\end{equation}
which is essentially the Binder ratio \cite {Binder}
except for the normalization,
of the $q=2$ Potts model and the $q=3$ Potts model
in figures \ref {fig:fig4}(a) and (b), respectively.
The square of the order parameter of the $q$-state Potts model is
calculated as
\begin{equation}
M^2 = \frac{q \ \sum_{k=1}^q n[k]^2-N^2}{q-1},
\end{equation}
where $n[k]$ is the number of spins with the state $k$, and
$N$ is the total number of spins.
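The measurement formulas above can be sketched directly (our code; \verb+moment_ratio+ takes a list of $M^2$ samples collected over the Monte Carlo run):

```python
from collections import Counter

# Squared Potts order parameter from the state counts n[k]:
#   M^2 = (q * sum_k n[k]^2 - N^2) / (q - 1),
# and the moment ratio U = <M^4> / <M^2>^2 from M^2 samples.

def potts_m2(spins, q):
    n = Counter(spins)
    N = len(spins)
    return (q * sum(c * c for c in n.values()) - N * N) / (q - 1)

def moment_ratio(m2_samples):
    mean_m2 = sum(m2_samples) / len(m2_samples)
    mean_m4 = sum(m * m for m in m2_samples) / len(m2_samples)
    return mean_m4 / mean_m2 ** 2
```

For a fully ordered configuration $M^2 = N^2$, while an equal split over the $q$ states gives $M^2 = 0$, as expected of an order parameter.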
As an example, we give the data obtained by using the cluster-labeling
algorithm due to Hawick {\it et al.}
We discarded the first 10,000 Monte Carlo updates and
used the next 100,000 updates for measurement.
The crossing of the data with different sizes reproduces
the known results of the critical temperatures.
\begin{figure}
\begin{center}
\includegraphics[width=0.4\linewidth]{figure4a.eps}
\hspace{0.02\linewidth}
\includegraphics[width=0.4\linewidth]{figure4b.eps}
\caption{\label{fig:fig4}
(a) Moment ratio of the $q=2$ Potts model for $L$=256, 512, 1024 and 2048 and
(b) that of the $q=3$ Potts model for $L$=256, 512, 1024 and 2048.}
\end{center}
\end{figure}
Next, we extend our GPU-based calculation of SW multi-cluster
algorithm to the system of vector spins.
We treat the $q$-state clock model, and the Hamiltonian is given by
\begin{equation}
\mathcal{H} = -J\sum_{<i,j>} \bm{S}_i \cdot \bm{S}_j,
\end{equation}
where $\bm{S}_i$ is a planar unit vector,
$(\cos \theta_i, \sin \theta_i)$, at site $i$;
$\theta_i$ takes the value of $\theta_i = 2\pi p_i/q$
with $p_i=1, 2, \cdots, q$. When $q$ tends to infinity,
the clock model becomes the classical XY model.
To make a cluster flip, we use the idea of
embedded cluster introduced by Wolff \cite{wolff89}.
We project vector spins to form Ising spin clusters.
The essential part of the GPU implementation is the same
as the case of the Potts model.
We note that the proper use of shared memories is effective
especially for the calculation of the inner product of vectors.
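The embedding can be sketched as follows: for a random axis direction $\phi$, the sign of the projection of each spin defines the embedded Ising variable, and flipping that variable reflects the spin about the line perpendicular to $\phi$. This is a sketch of ours, not the GPU kernel.

```python
import math

# Wolff's embedded-cluster trick for planar spins, in angle notation.

def embed_ising(thetas, phi):
    """Embedded Ising variable: sign of the projection of each spin
    on the axis direction phi."""
    return [1 if math.cos(t - phi) >= 0 else -1 for t in thetas]

def reflect(theta, phi):
    """Reflect a spin about the line perpendicular to direction phi;
    this flips the sign of the projection cos(theta - phi)."""
    return (2 * phi + math.pi - theta) % (2 * math.pi)
```

Clusters are then grown and flipped on the embedded Ising variables exactly as in the Potts case, which is why the GPU implementation carries over essentially unchanged.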
As an example, we take the $q$-state clock model with $q$=6.
This model is known to show two Kosterlitz-Thouless
transitions \cite{KT}, at temperatures $T_1$ and $T_2$.
The numerical estimates of $T_1/J$ and $T_2/J$ are
around 0.7 and 0.9, respectively \cite{tomita2002a}.
We test the performance of the present implementation
near the upper critical temperature.
The average computational time per spin update
at $T/J=0.9$ for the $q=6$ clock model
is tabulated in table \ref{tb:GPU_CPU_time_q=6_up_clock}.
For the cluster-labeling algorithm, we use both the algorithm
of Hawick {\it et al.} \cite{Hawick_labeling} and
that of Kalentev {\it et al.} \cite{Kalentev}.
The computational time for a spin update only and
that including the measurement of energy and magnetization are given.
The linear system sizes are $L$=256, 512, 1024, 2048 and 4096.
The measured time per spin flip is shown in units of nanoseconds.
For the measurement of physical quantities, we also measure
the correlation function with distances $L/4$ and $L/2$.
The correlation function is defined as follows:
\begin{equation}
G(r,T) = \langle S_i^xS_{i+r}^x+S_i^yS_{i+r}^y \rangle,
\end{equation}
and the ratio of the correlation functions at different
distances is a good estimator for the analysis of the
Kosterlitz-Thouless transition \cite{tomita2002b}.
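A straightforward estimator of $G(r,T)$ along one row of planar spins can be sketched as follows (our code; the simulation measures it at distances $L/4$ and $L/2$ and analyses the ratio $G(L/2)/G(L/4)$):

```python
import math

# Planar-spin correlation function G(r) = <S_i . S_{i+r}> estimated
# along a periodic row of spins given by their angles.

def corr(thetas, r):
    n = len(thetas)
    return sum(math.cos(thetas[i] - thetas[(i + r) % n])
               for i in range(n)) / n
```

A fully aligned row gives $G(r)=1$ for every $r$, while an antiferromagnetically alternating row gives $G(1)=-1$.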
\begin{table*}[htbp]
\begin{center}
\begin{tabular}{lllllll}
\hline
& & $L$=256 & $L$=512 & $L$=1024 & $L$=2048 & $L$=4096 \\
\hline
GTX580 & update only & 4.88 nsec& 3.36 nsec & 2.94 nsec & 2.85 nsec & 2.88 nsec\\
\ \ Hawick {\it et al.} & + measurement & 6.20 nsec& 4.11 nsec & 3.57 nsec & 3.44 nsec & 3.47 nsec\\
GTX580 & update only & 4.49 nsec& 2.93 nsec & 2.48 nsec & 2.38 nsec & 2.42 nsec\\
\ \ Kalentev {\it et al.}& + measurement & 5.84 nsec& 3.68 nsec & 3.12 nsec & 2.98 nsec & 3.01 nsec\\
GTX285 & update only & 10.4 nsec& 7.37 nsec & 6.51 nsec & 6.21 nsec & 6.26 nsec\\
\ \ Hawick {\it et al.} & + measurement & 12.7 nsec& 8.76 nsec & 7.68 nsec & 7.32 nsec & 7.36 nsec\\
GTX285 & update only & 8.65 nsec& 5.86 nsec & 5.15 nsec & 4.97 nsec & 5.09 nsec\\
\ \ Kalentev {\it et al.}& + measurement & 10.9 nsec& 7.25 nsec & 6.32 nsec & 6.08 nsec & 6.19 nsec\\
Xeon(R) W3680 & update only & 83.4 nsec& 83.2 nsec & 84.5 nsec & 86.4 nsec & 86.3 nsec\\
& + measurement & 99.4 nsec& 108.6 nsec& 114.5 nsec & 124.7 nsec & 128.8 nsec\\
\hline
\end{tabular}
\caption{\label{tb:GPU_CPU_time_q=6_up_clock}Average computational time per spin flip
at $T/J=0.9$ for the $q=6$ clock model. The time for a spin
update only and that including the measurement
of energy, magnetization and the correlation function with distances $L/4$ and $L/2$ are given.}
\end{center}
\end{table*}
Although the calculation of the clock model on the CPU takes
much more time than that of the Potts model,
the computational time of the GPU-based calculation
of the clock model is almost the same as that
of the Potts model. The proper use of shared memory
may contribute to the good performance for the clock model.
For the $q=6$ clock model with $L=4096$ on the GTX580,
the performance with the cluster-labeling algorithm
of Hawick {\it et al.} is 2.88 nanoseconds per spin flip and
that with the algorithm of Kalentev {\it et al.} is
2.42 nanoseconds per spin flip.
The speedup over the CPU calculation with the algorithm
of Kalentev {\it et al.} is 35.6 times for a spin flip,
and 42.7 times when the measurement of energy, magnetization
and the correlation function with distances $L/4$ and $L/2$ is included,
for the $q=6$ clock model with $L=4096$.
The temperature dependence of our GPU-based calculation
of the SW algorithm for the $q=6$ clock model is plotted
in figure \ref{fig:fig5}. The linear system size is $L=1024$.
We show the average computational time per spin flip
in units of nanoseconds. From figure \ref{fig:fig5} we can see that
the computational time depends only weakly on the temperature.
Thus, our GPU implementation of the SW
multi-cluster algorithm is also effective for the clock model
over the whole range of temperatures.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\linewidth]{figure5.eps}
\caption{\label{fig:fig5}
Temperature dependence of the computational time for GPU computation for the $q=6$
clock model with $L=1024$. }
\end{center}
\end{figure}
As an illustration, we plot the ratio of the correlation function
\begin{equation}
R(T) = \frac{G(L/2,T)}{G(L/4,T)}
\end{equation}
of the $q=6$ clock model in figure \ref{fig:fig6}.
We discarded the first 10,000 Monte Carlo updates and
used the next 400,000 updates for measurement.
From figure \ref{fig:fig6}, we see that the curves for different
sizes overlap in the intermediate Kosterlitz-Thouless phase
($T_1<T<T_2$) and splay out in the low-temperature ordered
and high-temperature disordered phases.
The graph reproduces the result shown in Fig. 2 of
Ref.~\cite{tomita2002b} for small sizes.
Recently, the estimation
of the two transition temperatures of the $q=6$ clock model
has become a subject of controversy \cite{hwang,baek}.
A detailed finite-size-scaling analysis of the clock models
will be given elsewhere.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{figure6.eps}
\caption{\label{fig:fig6}
Temperature dependence of the ratio of the correlation function for the $q=6$ clock model for $L$=256, 512, 1024 and 2048. }
\end{center}
\end{figure}
\section{Summary and discussion}
We have formulated a GPU parallel computing of the SW multi-cluster
algorithm by using the two connected component
labeling algorithms, the algorithm by Hawick {\it et al.}
\cite{Hawick_labeling} and that by Kalentev {\it et al.}
\cite{Kalentev}, for the assignment of clusters.
Starting with the $q$-state Potts model, we also extended
our implementation to systems of vector spins
using the idea of embedded cluster by Wolff \cite{wolff89}.
We have tested the $q$-state Potts models with $q$=2 and 3
and the $q$-state clock model with $q=6$ by use of our implementation
of the SW algorithm.
As a result, the GPU computational time using the
cluster-labeling algorithm by Kalentev {\it et al.} is
2.51 nanoseconds per spin update for the $q=2$ Potts model
and 2.42 nanoseconds per spin update for the $q=6$ clock model
on the GTX580 with the linear size $L=4096$ at the critical temperature.
The performance of the algorithm by Kalentev {\it et al.}
is superior to that of Hawick {\it et al.} for all models and sizes.
This confirms the effectiveness of the refinements by Kalentev {\it et al.},
namely the elimination of the atomic operation and
the reduction of the number of kernel functions.
We found that the computational time of our implementation
is almost constant for linear sizes $L \ge 1024$
and depends only weakly on temperature.
Now we compare the performance of our implementation of the
SW multi-cluster algorithm with that of Weigel \cite{weigel11}.
He uses the self-labeling algorithm combined with either
the label-relaxation algorithm or the hierarchical sewing algorithm.
In contrast to breadth-first search and the tree-based
union-and-find approach, the self-labeling algorithm is used
to partition a set of elements into disjoint subsets;
the label-relaxation and hierarchical sewing algorithms
are then used to consolidate cluster labels.
The GPU computational time of his algorithm was reported as
2.70 nanoseconds per spin update for the $q=2$ Potts model at
the critical temperature with the linear size $L=8192$
on the GTX580.
The performance of Weigel's algorithm depends strongly
on system size and temperature, and this speed of 2.70 nanoseconds
per spin update is reached only for $L=8192$.
The performance becomes much worse for $L<8192$ and
at temperatures below the critical temperature.
On the other hand,
the GPU computational time of our algorithm
is 2.51 nanoseconds for the same model with $L=4096$
on the same GPU, the GTX580,
and our implementation of the SW algorithm
has little dependence on system size and temperature.
We have shown data up to $L=4096$ in this paper
because we use a one-dimensional index in launching a CUDA kernel.
Since the amount of memory on the GTX580 is 1.5 Gbyte,
we can treat systems up to $L=8192$ by using a two-dimensional index.
The algorithm employed here implements the labeling
over the whole lattice instead of partitioning.
Because of the flexibility of our implementation,
it will be interesting to apply the present formulation
to multi-GPU calculations.
Both our implementation and that by Weigel have their own advantages.
The application of GPUs to cluster algorithms has only just begun,
and this problem deserves further attention.
\section*{Acknowledgment}
This work was supported by a Grant-in-Aid for Scientific Research from
the Japan Society for the Promotion of Science.
\section{
Main Result} \label {sec_4} \subsection {
Theorem} \label {thm_4.1} {\it
Let $\Ycal$ be a closed subspace of $\overline {\Ccal \sb 0}(\X)$ such that $\Ycal$ is a subset of $\lkpar \Fcal L \sp 1 \rkpar (\X)$ where $X = \Rn$ or $X = \Tn$. If $\Ycal$ is reflexive then it is of finite dimension.} \par \ \par \noindent
The theorem is proved in \S\S \ref {sbsc_7.1} and \ref {sbsc_7.2} on pages \pageref {sbsc_7.1} and \pageref {sbsc_7.2}, respectively. \section {
Preparation} \label {sec_5} \subsection {}
We refer to the definition of weakly sequential completeness of $\Wcal$ in \S \ref {def_3.3.1} on this page. \subsubsection {} \label {sbsbsc_5.1.1}
The space $L \sp 1 (a,b)$ is weakly sequentially complete. This is a theorem of Steinhaus. Cf.\ \cite {Steinhaus_1919}. More generally, the space $L \sp 1 (S, \Sigma, \mu)$ for a positive measure space $(S, \Sigma, \mu)$ is weakly sequentially complete. Cf.\ e.g.\ Dunford, Schwartz \cite [theorem IV.8.6 p.\ 290] {Dunford_Schwartz_1958}. \subsubsection {} \label {sbsbsc_5.1.2}
If $\lmpar e \sb m : m \in \Zbf \sb + \rmpar$ is the canonical basis in $\co \overline {\Zbf \sb +}$ then $(e \sb 1 + e \sb 2 + \dots + e \sb m) \sb {m \in \Zbf \sb +}$ is weakly Cauchy but does not converge weakly. Hence the space $\co \overline {\Zbf \sb +}$ is not weakly sequentially complete. Cf.\ e.g.\ Albiac, Kalton \cite [example 2.3.11 p.\ 38] {Albiac_Kalton_2006}. \subsection {
Lemma} \label {lma_5.2} {\it
Let $\Wcal \sb 1$ be a closed subspace of $\Wcal$. If $\Wcal$ is weakly sequentially complete, then $\Wcal \sb 1$ is weakly sequentially complete.} \par \ \par \noindent
For the sake of completeness we provide a proof. \subsection* {\it
Proof:} We need to show that every weakly Cauchy sequence in $\Wcal \sb 1$ converges weakly to an element of $\Wcal \sb 1$. \par
Assume that the sequence $(x \sb m) \sb {m \in {\Zbf \sb +}}$ in $\Wcal \sb 1$ is weakly Cauchy. Then the same sequence in $\Wcal$ is weakly Cauchy and hence it has a weak limit $x \in \Wcal$. If $x \notin \Wcal \sb 1$ then according to the separation theorem of Hahn and Banach there is $x \sp * \in \Wcal \sp *$ such that $\langle \Wcal \sb 1, x \sp * \rangle = \{ 0 \}$ and $\langle x, x \sp * \rangle = 1$. This contradicts $x$ being the weak limit given above. \subsection {
Theorem \rm (Pe\l{}czy\'nski \cite [lemma 2 p.\ 214] {Pelczynski_1960}. Cf.\ also Lindenstrauss, Tzafriri \cite [proposition 2.a.2 p.\ 53] {Lindenstrauss_Tzafriri_1977}.)} {\it
Let $\Ycal$ be a closed subspace of $\lsp {\Zbf \sb +} 1$ of infinite dimension. Then $\Ycal$ contains a subspace $\Zcal$ which is isomorphic to $\lsp {\Zbf \sb +} 1$.} \subsection {
Lemma} {\it
A closed subspace of a reflexive space is reflexive.} \subsection* {\it
Comment on the proof:} A proof using the separation theorem of Hahn and Banach and the theorem of Banach and Alaoglu is outlined in Rudin \cite [exercise 1 p.\ 111] {Rudin_1991}. \subsection {
Corollary} \label {cor_5.5} {\it
Let $\Ycal$ be a closed subspace of $\lsp {\Zbf \sb +} 1 $ of infinite dimension. Then $\Ycal$ is not reflexive.} \subsection {
Theorem \rm (Cf.\ Katznelson \cite [\S 1.4 p.\ 108] {Katznelson_1968}, \cite [\S 1.4 p.\ 137] {Katznelson_2004} and Zygmund \cite [theorem VI.6.1 p.\ 247] {Zygmund_2002}.)} {\it
Let $a \in \co {} \Zbf$ and consider for a fixed number $q > 1$
\begin {equation} \label {eq_5.1}
g(x) \, = \, \sum \sb {m \in \Zbf} a(m) e \sp {i\xx \sb m} \end{equation}
with $\x \sb {-m} = -\x \sb m$, $\x \sb 1 > 0$, $\x \sb {m + 1} > q\x \sb m$ and $\x \sb m \in \Zbf$ for all $m \in \Zbf \sb +$. Then there is a number $C$ independent of $a$ such that
\begin {equation*}
\nm a {} {\lsp \Zbf 1} \, \leq \, C \nm g {} {\Ccal (\Tbf)}. \end{equation*}} \subsection {
Corollary} \label {cor_5.8} {\it
Let $\Gcal$ be the set of $g$ appearing in \eqref {eq_5.1} and let $T$ be the linear mapping from $\Gcal$ to $\lsp \Zbf 1$ given by $Tg = a$. Then $T$ has a unique extension, which we also denote by $T$, to the closure of $\Gcal$ in $\Ccal (\Tbf)$ such that
\begin {equation*}
\nm {Tg} {} {\lsp \Zbf 1} \, \leq \, C \nm g {} {\Ccal (\Tbf)} \end{equation*}
where $C$ is independent of $g$.} \section
{Some Lemmata for the Proof of the Main Result} \label {sec_6} \subsection {
Lemma} \label {lma_6.1} {\it
Assume that $\Ycal$ is a closed subspace of $\Co \overline n$ such that $\Ycal$ is a subset of $\lkpar \Fcal L \sp 1 \rkpar \lpar \Rn \rpar$. Then there are positive numbers $\alpha$ and $C \sb {\ref {eq_6.01}}$ independent of $\Fcal f \in \Ycal$ such that
\begin {equation} \label {eq_6.01}
\nm {\Fcal f} {} {\Co \overline n} \, \leq \, C \sb {\ref {eq_6.01}} \nm {\Fcal f} {} {\Ccal \lpar \alpha \overline \Bn \, \rpar}. \end{equation}} \subsection*{\it
Proof:} To simplify notation we write $\overline{\Ccal \sb 0}$, $L \sp 1$ and $c \sb 0$ instead of $\Co \overline n$, $\Lsp n 1$ and $\co {} {\Zbf \sb +}$ respectively. \subsubsection {} \label {sbsbsc_6.1.1}
Let $\Xcal = \Fcal \sp {-1} \Ycal$. For all $f \in \Xcal$ we have
\begin {equation} \label {eq_6.02}
\nm {\Fcal f} {} {\overline{\Ccal \sb 0}} \, \leq \, \nm f {} {L \sp 1} \end{equation}
and $\Xcal$ is a closed subspace of $L \sp 1$. According to the open mapping theorem there is a number $C \sb {\ref {eq_6.03}}$ independent of $f \in \Xcal$ such that
\begin {equation} \label {eq_6.03}
\nm f {} {L \sp 1} \, \leq \, C \sb {\ref {eq_6.03}}\nm {\Fcal f} {} {\overline{\Ccal \sb 0}}. \end{equation} \subsubsection {}
Let $\eps \sb m$ be a positive number for each $m \in \Zbf \sb +$ such that
\begin{equation*}
\eps \, = \, \sum \sb 1 \sp \infty \eps \sb m \, < \, 1. \end{equation*}
For the purpose of deriving a contradiction we assume that for each choice of positive numbers $\alpha$ and $C$ there is an $\Fcal f \in \Ycal$ such that
\begin{equation*}
C \nm {\Fcal f} {} {\Ccal \left (\alpha \overline \Bn \, \right )} \, < \, \nm {\Fcal f} {} {\overline{\Ccal \sb 0}}. \end{equation*}
As basis for a recursion we choose for $\alpha = \alpha \sb 1 > 0$ and $C = 1/\eps \sb 1$ a function $\Fcal f \sb 1 \in \Xcal$ with $\nm {\Fcal f \sb 1} {} {\overline{\Ccal \sb 0}} = 1$ such that
\begin{equation*}
\nm {\Fcal f \sb 1} {} {\Ccal \left (\alpha \sb 1 \overline \Bn \, \right )} \, < \, \eps \sb 1. \end{equation*}
If $\Fcal f \sb l \in \Ycal$ with $\nm {\Fcal f \sb l} {} {\overline{\Ccal \sb 0}} = 1$ for $l \in \{ 1, \dots , m \}$ as well as $\alpha \sb m > 0$ have been chosen we choose $\alpha \sb {m + 1}$ so that
\begin{equation*}
\sup \lmpar \babs {\lkpar \Fcal f \sb l \rkpar (\x)} : l \in \{ 1, \dots , m \}, \, \axi \geq \alpha \sb {m + 1} \rmpar \, \leq \, \eps \sb {m + 1} \quad \text {and} \quad \alpha \sb {m + 1} > \alpha \sb m. \end{equation*}
By our assumption we can find $\Fcal f \sb {m + 1} \in \Ycal$ with $\nm {\Fcal f \sb {m + 1}} {} {\overline{\Ccal \sb 0}} = 1$ such that
\begin{equation*}
\nm {\Fcal f \sb {m + 1}} {} {\Ccal \left (\alpha \sb {m + 1} \overline \Bn \, \right )} \, < \, \eps \sb {m + 1}. \end{equation*}
We have thus constructed the set $\Phi = \{ f \sb m \in \Xcal : m \in \Zbf \sb + \} \subset L \sp 1$ such that
\begin{align} \label {eq_6.04}
\sup \lmpar \babs {\lkpar \Fcal f \sb l \rkpar (\x)} : l \in \{ 1, \dots , m \}, \, \axi \geq \alpha \sb {m + 1} \rmpar \, & \leq \, \eps \sb {m + 1}, \\
\label {eq_6.05}
\nm {\Fcal f \sb m} {} {\overline{\Ccal \sb 0}} \, &= \, 1 \\ \intertext
{and} \label {eq_6.06}
\nm {\Fcal f \sb {m + 1}} {} {\Ccal \left (\alpha \sb {m + 1} \overline \Bn \, \right )} \, &< \, \eps \sb {m + 1} \end{align}
for each $m \in \Zbf \sb +$ as well as the increasing sequence $\lpar \alpha \sb m \rpar \sb {m \in \Zbf \sb +}$ of positive numbers. \subsubsection {}
For each $m \in \Zbf \sb +$ we choose $b \sb m \in \Rn$ such that $\babs {\lkpar \Fcal f \sb m \rkpar (b \sb m)} = \nm {\Fcal f \sb m} {} {\overline{\Ccal \sb 0}}$ $= 1$. Then $\alpha \sb m < \babs {b \sb m} < \alpha \sb {m + 1}$. Given any $a \in c \sb 0$ such that $a(N + r) = 0$ for some $N \in \Zbf \sb +$ and for all $r \in \Zbf \sb +$ we let $k$ be such that $\babs {a(k) \lkpar \Fcal f \sb k \rkpar \lpar b \sb k \rpar} = \nm a {} {\overline {c \sb 0}}$. Write
\begin{equation*}
\nm a {} {\overline {c \sb 0}} \, = \, \babs {\sum \sb {m = 1} \sp N a(m) \lkpar \Fcal f \sb m \rkpar \lpar b \sb k \rpar \ - \ \sideset {} {\hskip .1mm \sp \prime} \sum \sb {m = 1} \sp N a(m) \lkpar \Fcal f \sb m \rkpar \lpar b \sb k \rpar}, \end{equation*}
where we have omitted the term $a(k) \lkpar \Fcal f \sb k \rkpar \lpar b \sb k \rpar$ from the second sum. We apply the triangle inequality to get
\begin{equation*}
\nm a {} {\overline{c \sb 0}} \, \leq \, \nm {\sum \sb 1 \sp N a(m) \Fcal f \sb m} {} {\overline{\Ccal \sb 0}} \, + \, \babs {a(k)} \lpar \sum \sb 1 \sp {k - 1} \babs {\lkpar \Fcal f \sb m \rkpar \lpar b \sb k \rpar} + \sum \sb {k + 1} \sp N \babs {\lkpar \Fcal f \sb m \rkpar \lpar b \sb k \rpar} \rpar. \end{equation*}
In the parenthesis we use \eqref {eq_6.04} and \eqref {eq_6.06} on page \pageref {eq_6.06} for the first and second group of terms respectively. (If $k = 1$ or $k = N$ then there is only one group of terms.) We get
\begin{equation*}
\nm a {} {\overline {c \sb 0}} \, < \, \nm {\sum \sb 1 \sp N a(m) \Fcal f \sb m} {} {\overline{\Ccal \sb 0}} + \nm a {} {\overline {c \sb 0}} \, \eps. \end{equation*}
We have proved that there is a number $C \sb {\ref {eq_6.07}}$ independent of $a \in c \sb 0$ such that
\begin {equation} \label {eq_6.07}
\nm a {} {\overline {c \sb 0}} \, \leq \, C \sb {\ref {eq_6.07}} \nm {\sum \sb {m \in \Zbf \sb +} a(m) \Fcal f \sb m} {} {\overline{\Ccal \sb 0}}. \end{equation}
\subsubsection {}
There is a vector $b \in \Rn$ such that
\begin{equation*}
\nm {\sum \sb 1 \sp N a(m) \Fcal f \sb m} {} {\overline{\Ccal \sb 0}} \, = \, \babs {\sum \sb 1 \sp N a(m) \lkpar \Fcal f \sb m \rkpar (b)} \, \leq \, \nm a {} {\overline {c \sb 0}} \sum \sb 1 \sp N \babs {\lkpar \Fcal f \sb m\rkpar (b)}. \end{equation*}
Furthermore, there is a unique positive integer $k$ such that $\alpha \sb k \leq |b| < \alpha \sb {k + 1}$. Hence we write
\begin{equation*}
\nm {\sum \sb 1 \sp N a(m) \Fcal f \sb m} {} {\overline{\Ccal \sb 0}} \, \leq \, \nm a {} {\overline {c \sb 0}} \lkpar \lpar \sum \sb 1 \sp {k - 1} + \sum \sb {k + 1} \sp N \rpar \babs {\lkpar \Fcal f \sb m\rkpar (b)} + \babs {\lkpar \Fcal f \sb k \rkpar (b)} \rkpar. \end{equation*}
In the parenthesis we again use \eqref {eq_6.04} and \eqref {eq_6.06} on page \pageref {eq_6.06} for the first and second group of terms respectively. We get
\begin{equation*}
\nm {\sum \sb 1 \sp N a(m) \Fcal f \sb m} {} {\overline{\Ccal \sb 0}} \, < \, \nm a {} {\overline {c \sb 0}} \lpar \eps + 1 \rpar. \end{equation*}
We have proved that there is a number $C \sb {\ref {eq_6.08}}$ independent of $a \in c \sb 0$ such that
\begin {equation} \label {eq_6.08}
\nm {\sum \sb {m \in \Zbf \sb +} a(m) \Fcal f \sb m} {} {\overline{\Ccal \sb 0}} \, \leq \, C \sb {\ref {eq_6.08}} \nm a {} {\overline {c \sb 0}}. \end{equation} \subsubsection {}
For all $a \in c \sb 0$ we have
\begin {equation} \label {eq_6.09}
\nm a {} {\overline {c \sb 0}} \, \leq \, C \sb {\ref {eq_6.07}} \nm {\sum \sb {m \in \Zbf \sb +} a(m) f \sb m} {} {L \sp 1} \end{equation}
according to \eqref {eq_6.07} and \eqref {eq_6.02} on page \pageref {eq_6.07} and \pageref {eq_6.02} respectively. On the other hand, for all $a \in c \sb 0$ we have
\begin {equation} \label {eq_6.10}
\nm {\sum \sb {m \in \Zbf \sb +} a(m) f \sb m} {} {L \sp 1} \, \leq \, C \sb {\ref {eq_6.03}} C \sb {\ref {eq_6.08}} \nm a {} {\overline {c \sb 0}} \end{equation}
according to \eqref {eq_6.03} and \eqref {eq_6.08} on page \pageref {eq_6.03} and \pageref {eq_6.08} respectively. \subsubsection {} \label {sbsbsc_6.1.6}
Let $\Xcal \sb 1$ be the closed linear span of $\Phi$ in $L \sp 1$, and let $\lmpar e \sb m : m \in \Zbf \sb + \rmpar$ be the canonical basis of $\overline {c \sb 0}$. The estimates \eqref {eq_6.09} and \eqref {eq_6.10} give that the mapping $e \sb m \mapsto f \sb m$ can be extended to an isomorphism $\overline {c \sb 0} \longrightarrow \Xcal \sb 1$. But $\Xcal \sb 1$ is a closed subspace of a weakly sequentially complete space and hence $\Xcal \sb 1$ is according to lemma \ref {lma_5.2} on page \pageref {lma_5.2} weakly sequentially complete. Hence $\overline {c \sb 0}$ is weakly sequentially complete. We now invoke \S \ref {sbsbsc_5.1.2} on page \pageref {sbsbsc_5.1.2} to obtain a contradiction. \subsection {}
In the proof of lemma \ref {lma_6.1} we use only boundedness and linearity \linebreak properties of the Fourier transformation $\Fcal$. Symmetry properties of that linear mapping are not needed for the argument. \subsubsection {\bf
Proposition} {\it
Let $\Wcal$ be weakly sequentially complete and let the mapping $T: \Wcal \longrightarrow \Co \overline n$ be bounded, linear and injective. Assume that $\Ycal$ is a closed subspace of $\Co \overline n$ such that $\Ycal$ is a subset of $T\Wcal$. Then there are positive numbers $\alpha$ and $C$ independent of $F \in \Ycal$ such that
\begin{equation*}
\nm F {} {\Co \overline n} \, \leq \, C \nm F {} {\Ccal \left (\alpha \overline \Bn \, \right )}. \end{equation*}} \subsection {
Lemma} \label {lma_6.3} {\it
Assume that $\Ycal$ is a closed subspace of $\co \overline \Zn$ such that $\Ycal$ is a subset of $\lkpar \Fcal L \sp 1 \rkpar \lpar \Zn \rpar$. Then there is a positive integer $\alpha$ and a number $C \sb {\ref {eq_6.11}}$ both independent of $\Fcal f \in \Ycal$ such that
\begin {equation} \label {eq_6.11}
\nm {\Fcal f} {} {\co \overline \Zn} \, \leq \, C \sb {\ref {eq_6.11}} \, \sup \lmpar \babs {\lkpar \Fcal f \rkpar (\xi)} : \axi \leq \alpha \rmpar. \end{equation}} \subsection*{\it
Proof:} The proof is by imitation of the proof of lemma \ref {lma_6.1} on page \pageref {lma_6.1} with some modifications due to the fact that the frequencies are points in $\Zn$. It is given here for the sake of completeness. \par
To simplify notation we write $L \sp 1$ and $\nm F {} {}$ instead of $L \sp 1 \lpar \Tn \rpar$ and \linebreak $\nm F {} {\co \overline \Zn}$ respectively. Observe that we keep the notation for $\co {} {\Zbf \sb +}$. \subsubsection {}
Let $\Xcal = \Fcal \sp {-1} \Ycal$. For all $f \in \Xcal$ we have
\begin {equation} \label {eq_6.12}
\nm {\Fcal f} {} {} \, \leq \, \nm f {} {L \sp 1} \end{equation}
and $\Xcal$ is a closed subspace of $L \sp 1$. According to the open mapping theorem there is a number $C \sb {\ref {eq_6.13}}$ independent of $f \in \Xcal$ such that
\begin {equation} \label {eq_6.13}
\nm f {} {L \sp 1} \, \leq \, C \sb {\ref {eq_6.13}}\nm {\Fcal f} {} {}. \end{equation} \subsubsection {}
Let $\eps \sb m$ be a positive number for each $m \in \Zbf \sb +$ such that
\begin{equation*}
\eps \, = \, \sum \sb 1 \sp \infty \eps \sb m \, < \, 1. \end{equation*}
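Any positive summable sequence with sum strictly below one serves here; one admissible choice (ours, purely for concreteness) is

```latex
\begin{equation*}
\eps \sb m \, = \, 2 \sp {-m - 1}, \qquad
\eps \, = \, \sum \sb {m = 1} \sp \infty 2 \sp {-m - 1} \, = \, \frac 1 2 \, < \, 1.
\end{equation*}
```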
For the purpose of deriving a contradiction we assume that for each choice of $\alpha$ and $C$, where $\alpha$ is a positive integer and $C$ is a positive number, there is an $\Fcal f \in \Ycal$ such that
\begin{equation*}
C \, \sup \lmpar \babs {\lkpar \Fcal f \rkpar (\xi)} : \axi \leq \alpha \rmpar \, < \, \nm {\Fcal f} {} {}. \end{equation*}
As basis for a recursion we choose for a positive integer $\alpha = \alpha \sb 1$ and $C = 1/\eps \sb 1$ a function $\Fcal f \sb 1 \in \Xcal$ with $\nm {\Fcal f \sb 1} {} {} = 1$ such that
\begin{equation*}
\sup \lmpar \babs {\lkpar \Fcal f \sb 1 \rkpar (\xi)} : \axi \leq \alpha \sb 1 \rmpar \, < \, \eps \sb 1. \end{equation*}
If $\Fcal f \sb l \in \Ycal$ with $\nm {\Fcal f \sb l} {} {} = 1$ for $l \in \{ 1, \dots , m \}$ as well as a positive integer $\alpha \sb m$ have been chosen we choose an integer $\alpha \sb {m + 1}$ so that
\begin{equation*}
\sup \lmpar \babs {\lkpar \Fcal f \sb l \rkpar (\xi)} : l \in \{ 1, \dots , m \}, \, \axi \geq \alpha \sb {m + 1} \rmpar \leq \eps \sb {m + 1} \quad \text {and} \quad \alpha \sb {m + 1} > \alpha \sb m + 1. \end{equation*}
By our assumption we can find $\Fcal f \sb {m + 1} \in \Ycal$ with $\nm {\Fcal f \sb {m + 1}} {} {} = 1$ such that
\begin{equation*}
\sup \lmpar \babs {\lkpar \Fcal f \sb {m + 1} \rkpar (\xi)} : \axi \leq \alpha \sb {m + 1} \rmpar \, < \, \eps \sb {m + 1}. \end{equation*}
We have thus constructed the set $\Phi = \{ f \sb m \in \Xcal : m \in \Zbf \sb + \} \subset L \sp 1$ such that
\begin{align} \label {eq_6.14}
\sup \lmpar \babs {\lkpar \Fcal f \sb l \rkpar (\xi)} : l \in \{ 1, \dots , m \}, \, \axi \geq \alpha \sb {m + 1} \rmpar \, & \leq \, \eps \sb {m + 1}, \\
\label {eq_6.15}
\nm {\Fcal f \sb m} {} {} \, &= \, 1 \\ \intertext
{and} \label {eq_6.16}
\sup \lmpar \babs {\lkpar \Fcal f \sb {m + 1} \rkpar (\xi)} : \axi \leq \alpha \sb {m + 1} \rmpar \, &< \, \eps \sb {m + 1} \end{align}
for each $m \in \Zbf \sb +$ as well as the increasing sequence $\lpar \alpha \sb m \rpar \sb {m \in \Zbf \sb +}$ of positive integers. \subsubsection {}
For each $m \in \Zbf \sb +$ we choose $b \sb m \in \Zn$ such that $\babs {\lkpar \Fcal f \sb m \rkpar (b \sb m)} = \nm {\Fcal f \sb m} {} {}$ $= 1$. Then $\alpha \sb m < \babs {b \sb m} < \alpha \sb {m + 1}$. Given any $a \in \co {} {\Zbf \sb +}$ such that $a(N + r) = 0$ for some $N \in \Zbf \sb +$ and for all $r \in \Zbf \sb +$ we let $k$ be such that $\babs {a(k) \lkpar \Fcal f \sb k \rkpar \lpar b \sb k \rpar} = \nm a {} {\co \overline {\Zbf \sb +}}$. Write
\begin{equation*}
\nm a {} {\co \overline {\Zbf \sb +}} \, = \, \babs {\sum \sb {m = 1} \sp N a(m) \lkpar \Fcal f \sb m \rkpar \lpar b \sb k \rpar - \sideset {} {\hskip .1mm \sp \prime} \sum \sb {m = 1} \sp N a(m) \lkpar \Fcal f \sb m \rkpar \lpar b \sb k \rpar}, \end{equation*}
where we have omitted the term $a(k) \lkpar \Fcal f \sb k \rkpar \lpar b \sb k \rpar$ from the second sum. We apply the triangle inequality to get
\begin{equation*}
\nm a {} {\co \overline {\Zbf \sb +}} \, \leq \, \nm {\sum \sb 1 \sp N a(m) \Fcal f \sb m} {} {} \, + \, \babs {a(k)} \lkpar \sum \sb 1 \sp {k - 1} \babs {\lkpar \Fcal f \sb m \rkpar \lpar b \sb k \rpar} + \sum \sb {k + 1} \sp N \babs {\lkpar \Fcal f \sb m \rkpar \lpar b \sb k \rpar} \rkpar. \end{equation*}
In the parenthesis we use \eqref {eq_6.14} and \eqref {eq_6.16} on page \pageref {eq_6.14} for the first and second group of terms respectively. (If $k = 1$ or $k = N$ then there is only one group of terms.) We get
\begin{equation*}
\nm a {} {\co \overline {\Zbf \sb +}} \, < \, \nm {\sum \sb 1 \sp N a(m) \Fcal f \sb m} {} {} + \nm a {} {\co \overline {\Zbf \sb +}} \eps. \end{equation*}
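The constant in the conclusion below can be made explicit; since $\eps < 1$, the last display rearranges (a one-line step we spell out) to

```latex
\begin{equation*}
\lpar 1 - \eps \rpar \nm a {} {\co \overline {\Zbf \sb +}}
\, < \, \nm {\sum \sb 1 \sp N a(m) \Fcal f \sb m} {} {},
\end{equation*}
```

so $C = \lpar 1 - \eps \rpar \sp {-1}$ works for all finitely supported $a$; the general case follows since such sequences are dense in $\co {} {\Zbf \sb +}$.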
We have proved that there is a number $C \sb {\ref {eq_6.17}}$ independent of $a \in \co {} {\Zbf \sb +}$ such that
\begin {equation} \label {eq_6.17}
\nm a {} {\co \overline {\Zbf \sb +}} \, \leq \, C \sb {\ref {eq_6.17}} \nm {\sum \sb {m \in \Zbf \sb +} a(m) \Fcal f \sb m} {} {}. \end{equation}
\subsubsection {}
There is a vector $b \in \Zn$ such that
\begin{equation*}
\nm {\sum \sb 1 \sp N a(m) \Fcal f \sb m} {} {} \, = \, \babs {\sum \sb 1 \sp N a(m) \lkpar \Fcal f \sb m \rkpar (b)} \, \leq \, \nm a {} {\co \overline {\Zbf \sb +}} \sum \sb 1 \sp N \babs {\lkpar \Fcal f \sb m\rkpar (b)}. \end{equation*}
Furthermore, there is a unique positive integer $k$ such that $\alpha \sb k \leq |b| < \alpha \sb {k + 1}$. Hence we write
\begin{equation*}
\nm {\sum \sb 1 \sp N a(m) \Fcal f \sb m} {} {} \, \leq \, \nm a {} {\co \overline {\Zbf \sb +}} \lkpar \lpar \sum \sb 1 \sp {k - 1} + \sum \sb {k + 1} \sp N \rpar \babs {\lkpar \Fcal f \sb m \rkpar (b)} + \babs {\lkpar \Fcal f \sb k \rkpar (b)} \rkpar. \end{equation*}
In the parenthesis we again use \eqref {eq_6.14} and \eqref {eq_6.16} on page \pageref {eq_6.14} for the first and second group of terms respectively. We get
\begin{equation*}
\nm {\sum \sb 1 \sp N a(m) \Fcal f \sb m} {} {} \, < \, \nm a {} {\co \overline {\Zbf \sb +}} \lpar \eps + 1 \rpar. \end{equation*}
We have proved that there is a number $C \sb {\ref {eq_6.18}}$ independent of $a \in \co {} {\Zbf \sb +}$ such that
\begin {equation} \label {eq_6.18}
\nm {\sum \sb {m \in \Zbf \sb +} a(m) \Fcal f \sb m} {} {} \, \leq \, C \sb {\ref {eq_6.18}} \nm a {} {\co \overline {\Zbf \sb +}}. \end{equation} \subsubsection {}
For all $a \in \co {} {\Zbf \sb +}$ we have
\begin {equation} \label {eq_6.19}
\nm a {} {\co \overline {\Zbf \sb +}} \, \leq \, C \sb {\ref {eq_6.17}} \nm {\sum \sb {m \in \Zbf \sb +} a(m) f \sb m} {} {L \sp 1} \end{equation}
according to \eqref {eq_6.17} and \eqref {eq_6.12} on page \pageref {eq_6.17} and \pageref {eq_6.12} respectively. On the other hand, for all $a \in \co {} {\Zbf \sb +}$ we have
\begin {equation} \label {eq_6.20}
\nm {\sum \sb {m \in \Zbf \sb +} a(m) f \sb m} {} {L \sp 1} \, \leq \, C \sb {\ref {eq_6.13}} C \sb {\ref {eq_6.18}} \nm a {} {\co \overline {\Zbf \sb +}} \end{equation}
according to \eqref {eq_6.13} and \eqref {eq_6.18} on page \pageref {eq_6.13} and \pageref {eq_6.18} respectively. \subsubsection {}
Using \eqref {eq_6.19} and \eqref {eq_6.20} the proof is now concluded in a way similar to the proof of lemma \ref {lma_6.1}. See \S \ref {sbsbsc_6.1.6} on page \pageref {sbsbsc_6.1.6}. \subsection {
Notation} \label {sbsc_6.4} Let $c \sb k$ be an integer for each $k \in \{ 1, 2, \dots , n \}$ and let $Q = \{ x \in \Rn : c \sb k \leq x \sb k \leq c \sb k + 1, \; k \in \{ 1, 2, \dots , n\}\}$. We form $\Qcal$, the countable collection of all $Q$. In the union representation
\begin{equation*}
\bigcup \sb {Q \in \Qcal} Q \, = \, \Rbf \sp n \end{equation*}
the intersection of a pair of terms in the left hand side has Lebesgue measure $0$. If $\beta$ is a positive number we replace $c \sb k$ and $c \sb k + 1$ by $\beta c \sb k$ and $\beta(c \sb k + 1)$ respectively so as to obtain $\beta Q$. For $\beta \neq 1$ the union of $\beta Q$ has the same disjointness property as for $\beta = 1$. \par
For fixed $\beta > 0$ it is clear that
\begin {equation} \label {eq_6.21}
\sup \lmpar \babs {x - x \sp \prime} : x, x \sp \prime \in \beta Q, Q \in \Qcal \rmpar \, \leq \, \beta \sqrt n. \end{equation} \subsection {
Lemma} \label {lma_6.5} {\it
Let $\Xcal$ be a reflexive subspace of $\Lsp n 1$ of infinite dimension. Then for each choice of positive numbers $\beta$ and $C \sb {\ref {eq_6.22}}$ there is an $f \in \Xcal$ such that
\begin {equation} \label {eq_6.22}
\sum \sb {Q \in \Qcal} \babs {\int \sb {\beta Q} f \ } \, < \, C \sb {\ref {eq_6.22}} \nm f {} {\Lsp n 1}. \end{equation}} \subsection*{\it
Proof:} For the purpose of deriving a contradiction we assume that there is a choice of positive numbers $\beta$ and $C$ independent of $f \in \Xcal$ such that
\begin{equation*}
C \nm f {} {\Lsp n 1} \, \leq \, \sum \sb {Q \in \Qcal} \babs {\int \sb {\beta Q} f \ }. \end{equation*}
Since we also have
\begin{equation*}
\sum \sb {Q \in \Qcal} \babs {\int \sb {\beta Q} f \ } \, \leq \, \sum \sb {Q \in \Qcal} \int \sb {\beta Q} \babs f \, = \, \nm f {} {\Lsp n 1} \end{equation*}
the mapping $T : \Xcal \longrightarrow \ell \sp 1 (\Zbf \sb +)$ given by
\begin{equation*}
[Tf](m) \, = \, \int \sb {\beta Q \sb m} f \ , \quad Q \sb m \in \beta\Qcal \end{equation*}
is an isomorphism between $\Xcal$ and a closed subspace of $\ell \sp 1 (\Zbf \sb +)$ of infinite dimension.
According to corollary \ref {cor_5.5} on page \pageref {cor_5.5} this is impossible. This is the contradiction sought for. \subsection {
Notation} The interval $[0,1[$ is the disjoint union of half-open intervals $[1 - 2 \sp {-k}, 1 - 2 \sp {-k - 1}[$ where $k$ runs through $\Nbf$. The length of each interval is the reciprocal of a dyadic integer. This construction may be transformed to any half-open interval $[a,b[$ using the bijection
\begin{equation*}
\ph : [0,1[ \ \longrightarrow [a,b[ \, , \quad \la \longmapsto (1 - \la)a + \la b. \end{equation*} \par
Let $N$ be a positive integer. The interval $[-\pi,\pi[$ is the disjoint union of $N$ half-open intervals $[a,b[$ of equal length $\beta = 2\pi/N$. For each such interval \linebreak $[a,b[$ we apply the construction using subintervals of $[0,1[$ of length of a \linebreak reciprocal dyadic integer and the bijection $\ph$. As $[a,b[$ runs through finitely many subintervals of $\Tbf = [-\pi,\pi[$ we obtain a countable disjoint union \linebreak representing that interval. By definition, a term in this union representation is a {\it $\beta$-admissible interval}. \par
Let $\beta = 2\pi/N$ be given for some positive integer $N$. For each factor $\Tbf$ in the cartesian product $\Tn$ we pick a $\beta$-admissible interval and form the cartesian product $R$ of those $n$ intervals. We also form $\Rcal \sb \beta$, the countable collection of all $R$. In the union representation
\begin{equation*}
\bigcup \sb {R \in \Rcal \sb \beta} R \, = \, \Tbf \sp n \end{equation*}
the intersection of a pair of terms in the left hand side has Lebesgue measure $0$. \par
For fixed $\beta > 0$ it is clear that there is a positive number $C$ such that
\begin {equation} \label {eq_6.23}
\sup \lmpar \babs {x - x \sp \prime} : x, x \sp \prime \in R, R \in \Rcal \sb \beta \rmpar \, \leq \, \beta C. \end{equation} \subsection {
Lemma} \label {lma_6.7} {\it
Let $\Xcal$ be a reflexive subspace of $L \sp 1 (\Tn)$ of infinite dimension. Then for each choice of positive numbers $\beta$ and $C \sb {\ref {eq_6.24}}$, where $\beta = 2\pi/N$ for $N \in \Zbf \sb +$, there is an $f \in \Xcal$ such that
\begin {equation} \label {eq_6.24}
\sum \sb {R \in \Rcal \sb \beta} \babs {\int \sb R f \ } \, < \, C \sb {\ref {eq_6.24}} \nm f {} {L \sp 1 \lpar \Tn \rpar}. \end{equation}} \subsection*{\it
Proof:} We imitate the proof of lemma \ref {lma_6.5} on page \pageref {lma_6.5} whereby
\begin{equation*}
[Tf](m) \, = \, \int \sb {R \sb m} f \ , \quad R \sb m \in \Rcal \sb \beta. \end{equation*} \section
{Proof of Theorem \ref {thm_4.1}} \label {sec_7} \subsection {\it
Proof of theorem {\rm \ref {thm_4.1}} on page {\rm \pageref {thm_4.1}} in the case $X = \Rn$:} \label {sbsc_7.1} \subsubsection {}
For each $f \in L \sp 1 = \Lsp n 1$, for each $m \in \Zbf \sb +$ and for each $\beta > 0$ we have with the notation from \S \ref {sbsc_6.4} on page \pageref {sbsc_6.4}
\begin{multline*}
\babs {\int \sb {\beta Q \sb m} e \sp {-i\xx} f(x) \dx} \, \leq \, \int \sb {\beta Q \sb m} \babs {e \sp {-i\xx} - e \sp {-iq \sb m \x}} \babs {f(x)} \dx \, + \\
+ \, \babs {e \sp {-iq \sb m \x} \int \sb {\beta Q \sb m} f \ } \, \leq \, \int \sb {\beta Q \sb m} \babs {x - q \sb m} \axi \babs {f(x)} \dx + \babs {\int \sb {\beta Q \sb m} f \ } \end{multline*}
for any $q \sb m \in Q \sb m$. Summing with respect to $m$, taking $\sup$ with respect to $\x$ and invoking \eqref {eq_6.21} gives
\begin {equation} \label {eq_7.1}
\nm {\Fcal f} {} {\Ccal \left (\alpha \overline \Bn \, \right )} \leq \sup \sb {\axi \leq \alpha} \sum \sb 1 \sp \infty \babs {\int \sb {\beta Q \sb m} e \sp {-i\xx} f(x) \dx} \leq \alpha \beta \sqrt n \nm f {} {L \sp 1} + \sum \sb 1 \sp \infty \babs {\int \sb {\beta Q \sb m} f }. \end{equation} \subsubsection {}
Assume that $\Ycal$ fulfills the assumptions of the theorem. As in the proof of lemma \ref {lma_6.1} (cf.\ \S \ref {sbsbsc_6.1.1} on page \pageref {sbsbsc_6.1.1}) the space $\Xcal = \Fcal \sp {-1} \Ycal$ is according to the open mapping theorem isomorphic to $\Ycal$. We have
\begin {equation} \label {eq_7.2}
\nm f {} {L \sp 1} \, \leq \, C \sb {\ref {eq_6.03}} \nm {\Fcal f} {} {\Co \overline n} \, \leq \, C \sb {\ref {eq_6.03}} C \sb {\ref {eq_6.01}} \nm {\Fcal f} {} {\Ccal \left (\alpha \overline \Bn \, \right )} \end{equation}
where we have used lemma \ref {lma_6.1} on page \pageref {lma_6.1} in the second inequality. Collecting the estimates \eqref {eq_7.1} and \eqref {eq_7.2} gives that there is a number $C \sb {\ref {eq_7.3}}$ independent of $f$ such that
\begin {equation} \label {eq_7.3}
\nm f {} {L \sp 1} \, \leq \, C \sb {\ref {eq_7.3}} \lkpar \alpha \beta \sqrt n \nm f {} {L \sp 1} + \sum \sb 1 \sp \infty \babs {\int \sb {\beta Q \sb m} f \ } \rkpar. \end{equation} \subsubsection {}
By assumption, $\Ycal$ is reflexive. Hence $\Xcal$ is reflexive. For the purpose of deriving a contradiction we now assume that $\Xcal$ is of infinite dimension. Then the assumptions of lemma \ref {lma_6.5} on page \pageref {lma_6.5} are fulfilled, and hence for
\begin{equation*}
\beta \, < \, \frac 1 {2 \, C \sb {\ref {eq_7.3}} \, \alpha \sqrt n} \quad \text {and} \quad C \sb {\ref {eq_6.22}} \, = \, \frac 1 {2C \sb {\ref {eq_7.3}}} \end{equation*}
there is an $f \in \Xcal$ such that
\begin{equation*}
\nm f {} {L \sp 1} \leq C \sb {\ref {eq_7.3}} \lkpar \alpha \beta \sqrt n \nm f {} {L \sp 1} + \sum \sb 1 \sp \infty \babs {\int \sb {\beta Q \sb m} f \ } \rkpar < \frac 1 2 \nm f {} {L \sp 1} + \frac 1 2 \nm f {} {L \sp 1} = \nm f {} {L \sp 1}. \end{equation*}
This is the contradiction sought for. \subsection {\it
Remarks on the proof of theorem {\rm \ref {thm_4.1}} on page {\rm \pageref {thm_4.1}} in the case $X = \Tn$:} \label {sbsc_7.2} The proof is by imitation of the proof in the case $X = \Rn$ which was just completed. Lemmata \ref {lma_6.1} and \ref {lma_6.5} on page \pageref {lma_6.1} and \pageref {lma_6.5} respectively are replaced by lemmata \ref {lma_6.3} and \ref {lma_6.7} on page \pageref {lma_6.3} and \pageref {lma_6.7} respectively. Inequality \eqref {eq_6.21} on page \pageref {eq_6.21} is replaced by inequality \eqref {eq_6.23} on page \pageref {eq_6.23}.
\section
{Examples of Closed Non-Reflexive Subspaces} \label {sec_8} \subsection {}
The examples provided here are based on Karlander's idea in \cite [p.\ 312] {Karlander_1997} using lacunary trigonometric series. \subsection {
The Case $X = \Rn$.} \label {sbsc_8.2} Let $H$ be any {\it positive} function in $\Lsp n 1 \cap \Co \overline n$. We say that the function $F$ belongs to the space $\Ycal$ if and only if there is an $a \in \lsp {\Zbf \sb +} 1$ such that
\begin {equation*}
F(\x) \, = \, \lkpar Ta \rkpar (\x) \, = \, H(\x) \sum \sb {k \in \Zbf \sb+} a(k) e \sp {i2 \sp k \x \sb 1}, \quad \x \, = \, (\x \sb 1, \dots , \x \sb n) \in \Rn. \end{equation*}
Then $F$ is the Fourier transform of an $\Lsp n 1$-function. \subsubsection {}
We have
\begin {equation*}
\nm {Ta} {} {\Co \overline n} \, \leq \, \nm H {} {\Co \overline n} \nm a {} {\lsp {\Zbf \sb +} 1} \end{equation*}
and so $T$ is a bounded linear bijection from $\lsp {\Zbf \sb +} 1$ to $\Ycal$. Assume that we can show that $\Ycal$ is closed. Then, according to the open mapping theorem, $T \sp {-1}$ is a bounded linear bijection from $\Ycal$ to $\lsp {\Zbf \sb +} 1$, and we may conclude that $\Ycal$ and $\lsp {\Zbf \sb +} 1$ are isomorphic. In particular, $\Ycal$ is not reflexive. This shows that we may obtain closed subspaces of $\lkpar \Fcal L \sp 1 \rkpar \lpar \Rn \rpar$ of infinite dimension if we drop the reflexivity requirement. \subsubsection {}
We now show that $\Ycal$ is closed. \par
Assume that $F \sb m$ is in $\Ycal$ for each $m \in \Zbf \sb +$ and that $F \sb m$ converges to $F$ in $\Co \overline n$ as $m \to \infty$. If $G \sb m = F \sb m/H$ and if $K$ is a compact set then there is a function $G \in \Ccal \lpar \Rn \rpar$ such that $G \sb m$ converges to $G$ in $\Ccal (K)$ as $m \to \infty$. But
\begin{equation*}
G \sb m (\x) \, = \, \sum \sb {k \in \Zbf \sb+} a \sb m (k) e \sp {i2 \sp k \x \sb 1} \end{equation*}
for some $a \sb m \in \lsp {\Zbf \sb +} 1$. We now invoke corollary \ref {cor_5.8} on page \pageref {cor_5.8} to conclude that there is a function $a \in \lsp {\Zbf \sb +} 1$ such that $a \sb m$ converges to $a$ in $\lsp {\Zbf \sb +} 1$ as $m \to \infty$. For fixed $\x \in \Rn$ we have
\begin{equation*}
F(\x) \, = \, \lim \sb {m \to \infty} H(\x) \sum \sb {k \in \Zbf \sb+} a \sb m (k) e \sp {i2 \sp k \x \sb 1} \, = \, H(\x) \sum \sb {k \in \Zbf \sb+} a(k) e \sp {i2 \sp k \x \sb 1}. \end{equation*}
We have proved that $F \in \Ycal$ and hence $\Ycal$ is closed. \subsection {
The Case $X = \Tn$.} Let $H$ be any {\it positive} function in $\lsp \Zn 1$. We say that the function $F$ belongs to the space $\Ycal$ if and only if there is an $a \in \lsp {\Zbf \sb +} 1$ such that
\begin {equation*}
F(\x) \, = \, H(\x) \sum \sb {k \in \Zbf \sb+} a(k) e \sp {i2 \sp k \x \sb 1}, \quad \x \, = \, (\x \sb 1, \dots , \x \sb n) \in \Zn. \end{equation*}
Then $F$ is the Fourier transform of an $L \sp 1 \lpar \Tn \rpar$-function. \par
In a way similar to the argument in \S \ref {sbsc_8.2} on page \pageref {sbsc_8.2} one may prove that $\Ycal$ is a closed subspace of $\co \overline \Zn$ such that $\Ycal$ is a subset of $\lkpar \Fcal L \sp 1 \rkpar \lpar \Zn \rpar$ and such that $\Ycal$ is isomorphic to $\lsp {\Zbf \sb +} 1$. Thus, also in this case we may obtain closed subspaces $\Ycal$ of $\co \overline \Zn$ of infinite dimension such that $\Ycal$ is a subset of $\lkpar \Fcal L \sp 1 \rkpar \lpar \Zn \rpar$ if we drop the reflexivity requirement.
The shape of the stellar Initial Mass Function (IMF) and whether it is
universal or not are key issues in astrophysics.
For clusters within 2 kpc, there is no compelling evidence for variations in
the stellar IMF \citep[e.g.][]{meyer00,kro02,chabrier_conf} or the brown dwarf
IMF \citep[e.g.][]{andersen08}.
However, these clusters only span a limited range in total cluster mass
($10^2-10^3$ M$_\odot$) and all have a metallicity similar to the solar value.
Thus, we are forced to observe more extreme regions of star formation in
search of variations in the IMF as a function of environment.
It has been suggested that the shape of the IMF and in particular the
characteristic mass where the IMF flattens from a Salpeter power--law
could depend on the metallicity in the molecular cloud out of which the
stars are formed.
\citet{low}, \citet{larson}, and \citet{omukai} suggest that a lower
metallicity results in higher temperatures in the molecular cloud which
would increase the Jeans mass.
This would in turn result in a top heavy IMF relative to the solar
metallicity IMF\@.
The closest place with massive metal--poor young star clusters is the
Large Magellanic Cloud (LMC).
The metallicity is only $\frac{1}{3}-\frac{1}{2}$ the solar value
\citep{smith} and star clusters can be studied in some detail despite a
distance of $\sim$50 kpc \citep{westerlund}.
Of particular interest is the 30 Dor cluster which is powering the most
luminous HII region in the Local Group \citep{kennicutt}.
The cluster has a mass of at least 2.2$\times 10^4$ M$_\odot$ within a radius
of 4.7 pc \citep{hunter95} and is a relatively low-mass analog to the more
distant starburst clusters.
R136 lies at the center of the 30 Dor cluster and has long commanded
significant attention: Once thought to be a single $\sim$1000 M$_\odot$ star
\citep{cassinelli}, the region is now known to host numerous O stars
\citep{melnick85,weigelt,pehlemann,campbell}.
The whole 30 Dor region, with a size of 200 pc, appears to have an age
spread of $\sim$20 Myr \citep{mcgregor,selman2} with stars still forming
\citep{rubio,maercker}.
R136 appears to have a much smaller age spread of at most a few Myr
\citep{melnick85,brandl96,masseyhunter}.
An age of 2 Myr or less is inferred from spectroscopy of the O stars in
the very cluster center \citep{masseyhunter}, whereas the intermediate mass
population is thought to be $\sim$3--4 Myr old \citep{hunter95}.
\citet{masseyhunter} obtained HST spectroscopy of the 65 bluest and most
luminous sources within 17\arcsec\ of the cluster center.
They derived the IMF over the mass range 15--120 M$_\odot$ and found it to be
well approximated by a power--law $\frac{dN}{d\log M}\propto M^{\Gamma}$ with a
slope of $\Gamma=-1.3\pm0.1$, consistent with a Salpeter slope IMF\@
\citep{salpeter}.
\citet{hunter95,hunter96} obtained \filter{F555W} (\filter{V}) and
\filter{F814W} (\filter{i}) band optical photometry utilizing HST/WFPC2 in
order to resolve the cluster's intermediate mass stellar population.
The IMF derived for different annuli out to a radius of 4.7 pc was found to
be in the range $-1.46 < \Gamma < -1.17$ for the mass range 2.8--15 M$_\odot$,
again consistent with a Salpeter slope IMF\@.
\citet{masseyhunter} combined their results for the high--mass IMF with the
results from \citet{hunter95,hunter96} in order to constrain the IMF from
2.8 M$_\odot$ up to 120 M$_\odot$.
Comparing the observed number of high--mass stars with the number predicted by
the intermediate--mass IMF of \citet{hunter96}, they found the two consistent
with a single power--law IMF with a Salpeter slope, i.e. $\Gamma=-1.35$.
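This consistency check amounts to integrating a single power--law IMF over two mass intervals and comparing the predicted counts; a schematic sketch in Python (the normalization and the comparison itself are illustrative, not the published numbers):

```python
import numpy as np

def powerlaw_counts(gamma, m_lo, m_hi, norm=1.0):
    """Number of stars in [m_lo, m_hi] M_sun for an IMF
    dN/dlogM = norm * M**gamma, with gamma != 0."""
    # dN = norm * M**gamma dlogM = (norm / ln 10) * M**(gamma - 1) dM
    return norm / np.log(10.0) * (m_hi**gamma - m_lo**gamma) / gamma

# For a Salpeter slope, the expected ratio of 15-120 M_sun stars
# to 2.8-15 M_sun stars from a single power law:
gamma = -1.35
ratio = (powerlaw_counts(gamma, 15.0, 120.0)
         / powerlaw_counts(gamma, 2.8, 15.0))
```

Comparing such a predicted ratio with the observed one tests whether one slope describes both mass ranges; for a Salpeter slope the high--mass interval contains roughly a tenth as many stars as the intermediate--mass interval.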
Combining the two data sets used in \citet{hunter95,hunter96}, \citet{sirianni}
derived the IMF between 1.35 M$_\odot$ and 6.5 M$_\odot$, extending the IMF
determination into the mass range where the stars are still in their
pre--main sequence phase.
The IMF was derived in a box with the dimensions
$\sim$30\farcs4$\times$26\farcs8 (7.6pc$\times$6.7pc), but
excluding the innermost 13\farcs6$\times$8\farcs6 (3.5pc$\times$2.2pc).
Again, a Salpeter slope was found down to 2 M$_\odot$, but the IMF was found
to be flatter than Salpeter, $\Gamma=-0.27\pm0.08$, between 1.35 M$_\odot$ and
2 M$_\odot$, suggesting that the characteristic mass in this massive,
metal--poor cluster is higher than the $\sim$0.5 M$_\odot$ found in the
Galactic field \citep{kro02}.
The foreground extinction (A$_\mathrm{V}=0.7$ mag) and the differential
extinction (A$_\mathrm{V}\sim0-2$ mag) within the cluster \citep{brandl96}
make it desirable to observe the cluster in the infrared, for example in
the \filter{H} band, where the extinction is less than 20\% of that in the
\filter{V} band.
In addition, pre--main sequence stars are often associated with
circumstellar disks and outflows, which introduce additional
extinction for the cluster's low--mass content.
We have observed R136 with HST/NICMOS Camera 2 through the \filter{F160W}
band, which is similar to a ground--based \filter{H} filter.
The observations were aimed at being sensitive to objects below 1 M$_\odot$
for a stellar population with an age of 3 Myr.
Preliminary results have previously been presented in \citet{HZ1,HZ2}, and
\citet{MA}.
The paper is structured as follows.
The data and their reduction are described in Section 2.
Section 3 shows the results for the \filter{F160W} band imaging.
The IMF is derived in Section 4 and compared with the IMF derived by
\citet{sirianni}.
We point out several plausible reasons for the different results in
the optical and near--infrared, including mass segregation and differential
extinction.
Finally, our conclusions are presented in Section 5.
\section{Data reduction and photometry}
\subsection{Observations}
We have obtained HST/NICMOS Camera 2 images through the \filter{F160W}\ band of the
central 56\arcsec$\times$57\arcsec\ region around R136 in the 30 Dor
cluster (HST program ID 7370).
The observations were centered on the cluster
(RA,DEC)=(05:38:43.3,$-$69:06:08) and on two adjacent control fields centered
on (05:38:42.4,$-$68:52:00), and (05:38:56.9,$-$68:52:00).
The observing dates were Oct 14 and 16, 1997.
The field-of-view of the 256$\times$256 pixel NICMOS Camera 2 is 19\arcsec
$\times$19\arcsec\ with a pixel scale of
0\farcs075, resulting in Nyquist sampling of diffraction--limited \filter{F160W}\ band
data.
Each position in a 3$\times$3 mosaic centered on R136 was observed four
times with small dithers of $\sim$16 pixels.
The data were obtained in non--destructive MULTIACCUM mode such that the
photometry of the bright stars can be recovered from the first short
integration in each exposure.
The integration time for each dither position was 896 seconds, resulting in a
total integration time of 3584 seconds for each position in the mosaic.
The two control fields were observed in a similar manner.
The location of the mosaic is shown in Fig.~\ref{overviewfig} and the NICMOS
mosaic is shown in Fig.~\ref{30dormos}.
The faintest stars visible with the stretch used here have an \filter{F160W}\ magnitude
of $\sim$21.5 mag, corresponding to a mass of 0.8 M$_\odot$, based on the
pre-main sequence models of \cite{siess}, adopting an age of 3 Myr
\citep{hunter95}, half solar metallicity, and an extinction of
A$_\mathrm{V}=1.85$ mag (see Section 3.2).
For comparison, the corresponding detection limit in an uncrowded environment without nebulosity would be $\sim$23.5 mag according to the NICMOS exposure time calculator.
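The conversion behind the quoted mass limit is the standard distance--modulus relation; a minimal sketch, assuming a distance of 50 kpc and taking A$_{F160W}\approx0.18\,$A$_\mathrm{V}$ (the precise extinction ratio is our assumption; the text only states it is below 20\%):

```python
import math

def absolute_mag(m_app, d_pc, a_band):
    """Absolute magnitude from apparent magnitude m_app, distance in pc,
    and extinction a_band (mag) in the observed band."""
    return m_app - 5.0 * math.log10(d_pc / 10.0) - a_band

# Faintest stars visible in the mosaic stretch: F160W ~ 21.5 mag.
a_h = 0.18 * 1.85          # assumed A_F160W from the adopted A_V = 1.85 mag
M_h = absolute_mag(21.5, 50_000.0, a_h)
```

A pre--main sequence model grid (such as the one adopted in the text) then maps this absolute magnitude to a mass at the assumed age.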
\subsection{Data reduction}
Each individual image was processed through the {\tt calnica} and
{\tt calnicb} pipelines as well as the {\tt biaseq} and {\tt pedsky}
procedures within the IRAF environment.
The tasks are described in detail in the NICMOS Data Handbook.
We used synthetic dark frames and flat fields created for the appropriate
instrument temperature at each exposure.
The {\tt biaseq} task corrects differences in bias levels for each chip
between different sub-exposures.
The {\tt pedsky} task corrects differences in the bias level
for each quadrant of the chip when the array is reset before the exposure.
The data for each position in the mosaic were combined using the
{\tt drizzle} task.
The reduced pixel size (0\farcs0375) was chosen as half the detector pixel size.
Bad pixels, bad columns, and the coronagraphic hole were flagged as bad
pixels before the images were combined.
\subsection{Source detection and photometry}
Source detection was done using {\tt daofind} and photometry was performed
via point spread function (PSF) photometry utilizing {\tt allstar} within
the IRAF environment.
It was difficult to obtain a good PSF model from the data due to the high
degree of crowding and the spatial variability of the PSF.
Instead, the TINYTIM software \citep{hook} was used to create a synthetic
PSF.
TINYTIM allows one to create a PSF that varies as a function of location on the array.
The source detection and photometry were performed on each individual position
in the mosaic due to the linearly varying PSF.
A hot template star (O5V) was used for the spectral energy distribution in
order to achieve the best fit for the brightest stars to limit their
residuals.
The TINYTIM PSF was created for five different positions on the NICMOS
Camera 2 array and the PSFs were placed in an empty frame with the same
number of pixels as the NICMOS Camera 2 array.
Four frames were created with offsets between each PSF identical to the
offsets used for the science data in order to replicate the data as closely
as possible.
The four PSF frames were then combined with {\tt drizzle} in the same
manner as the science data and a linearly--varying PSF was created from
the drizzled frame.
Source detection is complicated due to the diffraction features present in
NICMOS data.
Adoption of a low threshold for source detection led to numerous diffraction
spots from bright stars being erroneously identified as fainter stars.
Instead the source detection was done in the following way in order to limit
false detections.
We first detected the brightest stars (brighter than 1000$\sigma$) in each
frame and used {\tt allstar} to remove these with the synthetic PSF.
A search for fainter stars (brighter than 500$\sigma$) was then performed in
the frame with the bright stars removed.
Since the removal of the brightest stars also removed the diffraction pattern
associated with them, we did not detect the diffraction spots as stars.
The two star lists (the brightest stars and the fainter stars) were joined
into one and these stars were removed from the original frame, again using
{\tt allstar}.
Fainter stars are then found from the frame with the already detected stars
removed.
This process was iterated until stars at 10 $\sigma$ peak pixel intensity
over the background were detected and removed.
The frame with the stars removed was then ring--median filtered to remove
stellar residuals but to retain the large-scale nebulosity in each frame.
The median--filtered image was then subtracted from the original frame and the
star detection process was repeated in this frame, now continuing to a
detection threshold of 5$\sigma$.
A 5$\sigma$ rather than e.g. a 3$\sigma$ threshold was selected to limit the
risk of false detections due to noise spikes.
Finally, we verified by visual inspection that every detection was indeed a
point source and not a spurious detection due to the diffraction
spikes and spots from bright stars.
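The iterative detect--and--subtract scheme above can be sketched as follows; the detection and PSF subtraction are reduced to toy one--dimensional stand--ins (a Gaussian ``PSF'', hand--picked thresholds) purely to illustrate the control flow, not the actual {\tt daofind}/{\tt allstar} calls:

```python
import numpy as np

def detect(image, sigma, nsigma):
    """Indices of local maxima brighter than nsigma * sigma."""
    peaks = []
    for i in range(1, len(image) - 1):
        if (image[i] > nsigma * sigma
                and image[i] >= image[i - 1]
                and image[i] >= image[i + 1]):
            peaks.append(i)
    return peaks

def subtract_psf(image, peaks, psf_sigma=1.5):
    """Subtract a Gaussian 'PSF' scaled to the peak value at each detection."""
    x = np.arange(len(image))
    out = image.copy()
    for p in peaks:
        out = out - out[p] * np.exp(-0.5 * ((x - p) / psf_sigma) ** 2)
    return out

rng = np.random.default_rng(0)
sigma = 1.0
x = np.arange(200)
image = rng.normal(0.0, sigma, x.size)
for pos, amp in [(50, 2000.0), (120, 30.0)]:    # one bright, one faint star
    image = image + amp * np.exp(-0.5 * ((x - pos) / 1.5) ** 2)

catalog = []
residual = image
for nsigma in (1000, 500, 100, 50, 10):         # decreasing thresholds
    peaks = detect(residual, sigma, nsigma)
    catalog.extend(peaks)
    residual = subtract_psf(residual, peaks, psf_sigma=1.5)
```

Because each pass removes the stars (and hence the diffraction structure) found so far, the faint star near the bright one only has to compete with the residual noise, mirroring the procedure in the text.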
The main interest here is in the low--mass (faint) stellar content in R136
and one concern is the detection of residuals from the bright stars as false
stellar objects.
Some false sources were detected by {\tt daofind} but were rejected during the
PSF fitting routine.
A few remained from the brightest stars.
They typically produced at most a few false detections in the diffraction
spikes that were $\sim$6--7 mag fainter than the bright source.
We removed these detections, together with other false detections, through
the visual inspection of all sources.
We have further utilized the artificial star experiments described below to
examine how many detections are false due to the residuals from bright stars.
We had only false positives associated with the brightest artificial stars
(\filter{F160W} $<$ 12 mag).
For artificial stars fainter than \filter{F160W}$\sim14$ mag, no false
detections were present.
The false detections for the bright stars were located at the diffraction
spikes and would have been identified in the manual inspection of the source
list.
We found a total of 10108 uniquely detected sources with a formal error
smaller than 0.1 mag and brighter than \filter{F160W}=22.5 mag in the 9 frames.
Below this magnitude limit the incompleteness is substantial, as discussed
below.
Table~\ref{sources} presents the list of detected stars.
\subsection{Completeness corrections}
The effects of crowding were examined by placing artificial stars in the
individual frames using the PSF created from the synthetic TINYTIM model.
The artificial stars followed a luminosity function with a similar slope to
that of the observed stars (see Section 3) but with a surface density 10\%\
that of the detected number of stars to avoid affecting the crowding
characteristics of the real stars.
We performed 100 artificial star experiments for each frame, for a total of
10 times more artificial stars than real stars.
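The recovery fractions in Fig.~\ref{30dor_corrections} are, per magnitude
bin, simply the ratio of recovered to injected artificial stars, and the
completeness correction divides the observed counts by that fraction.
A minimal sketch (the bin edges and star lists below are hypothetical):

```python
import numpy as np

def recovery_fraction(injected_mags, recovered_mags, bin_edges):
    """Completeness per magnitude bin: N(recovered) / N(injected)."""
    n_in, _ = np.histogram(injected_mags, bins=bin_edges)
    n_out, _ = np.histogram(recovered_mags, bins=bin_edges)
    # avoid division by zero in bins with no injected stars
    return np.divide(n_out, n_in, out=np.zeros_like(n_out, float),
                     where=n_in > 0)

def completeness_corrected_counts(observed_counts, frac):
    """Divide each observed bin by its recovery fraction."""
    return np.where(frac > 0,
                    observed_counts / np.maximum(frac, 1e-9), 0.0)
```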
Fig.~\ref{30dor_corrections} shows the resulting recovery fractions as a
function of the input magnitudes for several annuli around the cluster
center.
The difference in the size of the error bars as a function of distance from
the cluster center is due to the lower number of artificial stars placed in
the central parts of the cluster.
This is a consequence of adding 10\% artificial stars relative to observed
stars in each artificial star experiment and the relative number of stars in
each annulus.
The IMF is not determined in regions with such low completeness.
We are mainly interested in the low--mass stellar content of the cluster,
which lies below the 50\%\ completeness limit in the central parts.
The uncertainty in the completeness corrections for the inner parts of the
cluster will therefore not affect the conclusions drawn for the stellar
populations further out.
The completeness is a strong function of the radial distance from the center.
For the outer regions of the cluster, 50\% or more of the stars brighter than
\filter{F160W}$=21.5$ mag are detected, whereas only the very brightest stars are
detected in the innermost region.
In an annulus at 0.6-1 pc radius from the center, we detect 50\% or more of
the stars brighter than \filter{F160W}$=18.0$ mag.
Adopting the PMS models of \citet{siess} and the main sequence models of
\citet{marigo}, \filter{F160W}$=21.5$ mag corresponds to a 0.8 M$_\odot$, half solar
metallicity, 3 Myr old object, whereas \filter{F160W}$=18$ mag corresponds to a
7.5 M$_\odot$ star, assuming an extinction of A$_\mathrm{V}=1.85$ mag in both
cases.
\subsection{Photometric accuracy}
We have investigated the accuracy of the derived photometry by using the
stars detected in the overlap regions of several fields.
Fig.~\ref{brandl_comp} shows the difference in derived magnitude for stars
detected in the overlap regions of the mosaic.
Dots denote stars outside a 2 pc radius and plus signs denote stars between
1.25--2 pc radius.
The \filter{F160W}\ band photometry has been compared with the ground--based \filter{H}\ band
photometry obtained using adaptive optics observations by \citet{brandl96}.
Fig.~\ref{brandl_comp} shows the magnitude difference between the adaptive
optics photometry and this study based on 829 stars common to both datasets.
Stars were considered detected in both datasets if the spatial position
coincided within 2.5 drizzled pixels, corresponding to 0\farcs094.
Some scatter is present between the two datasets, especially for the fainter
stars.
However, the median difference between the magnitudes derived for the two
datasets is less than 6\% for objects \filter{F160W}$< 18$ mag.
We have in the following treated the \filter{F160W} observations as a
standard Cousins \filter{H} band.
Nevertheless, there appears to be a tendency for the fainter stars to be
brighter in the \filter{F160W}\ data than in the \filter{H} band data of \citet{brandl96}.
The tendency for the fainter stars to be skewed towards fainter \filter{H}\ band
magnitudes is an effect also seen in other comparisons between HST/NICMOS
and AO data \citep[e.g.][]{stolte}, who suggest it is due to the extended
halos present around bright stars in AO observations.
\section{Results}
The immediate results from the \filter{F160W}\ band HST photometry are presented.
After discussing the luminosity function for different annuli, the luminosity
profile for the cluster is derived.
The \filter{F160W}\ band data are combined with the optical HST data by \citet{hunter95}
and the color--magnitude diagrams are presented.
Utilizing the two color--magnitude diagrams we show that the spread
observed for the higher mass stars is consistent with that expected due to
reddening.
We estimate the average age for the stellar population and discuss the
possible presence of an age spread.
\subsection{Luminosity functions}
The star counts in the central 0.6 pc radius region are heavily affected by low
number statistics, crowding even for the brightest stars, and relatively
uncertain incompleteness corrections.
We therefore focus on the sources outside 0.6 pc in this paper.
Fig.~\ref{LFs} shows the \filter{F160W}\ band luminosity functions for the 0.6--7 pc
radius region of the 30 Dor cluster divided into several radial bins to show
the difference in photometric depth due to crowding.
Overplotted are the completeness--corrected LFs, where each bin has been
divided by the corresponding recovery fraction from the artificial star
experiments.
The completeness--corrected luminosity functions are relatively smooth and
have been fitted with power-laws down to the 50\% completeness limit.
The derived slopes with their 1$\sigma$ uncertainties and the 50\%\
completeness limits are presented in Table~\ref{HLF_slopes}.
Although the slope in the inner annulus is found to be shallower, the derived
slopes are consistent with each other within 2$\sigma$, with an average slope
of 0.31; the shallower inner slope is thus not significant.
The completeness--corrected combined histogram for the stars detected in the
two off--cluster control fields is shown in the lower right panel in
Fig.~\ref{LFs}.
From the histogram, it can be seen that the field star contamination found
from the star counts is $\approx$10--15\%\ for the faintest stars in the 5--7
pc annulus and less closer to the center as well as for brighter stars.
Although the contamination of field stars is found to be relatively small it
is not negligible and they are therefore statistically subtracted from the
cluster population in the following analysis (Section 4).
\subsection{The optical--near infrared color--magnitude diagrams}
Next, the \filter{F160W}\ band photometry is combined with the optical data presented by
\citet{hunter95}.
A star was considered detected in both surveys if the spatial position agreed
within 2.5 drizzled NICMOS Camera 2 pixels (0\farcs094).
In the cases where two optical stars were located within the search radius of
the star detected in the NICMOS Camera 2 observations, the brightest star was
chosen as the match.
We find in total 2680 sources in common with the \citet{hunter95} survey,
which detected 3623 stars in the inner 35\arcsec\ of the cluster.
Of those sources, 1848 have a combined formal photometric error in the
\filter{F555W}--\filter{F160W} color of less than 0.1 mag.
Within the area covered by \citet{hunter95} we detect a total of 5095 sources.
Most of the stars detected by the NICMOS survey but not the WFPC2 observations
are fainter than \filter{F160W}$=$ 20 mag.
Assuming an object age of 3 Myr and an average extinction of
A$_\mathrm{V}=1.85$ mag (see below), a similar object would have a magnitude
in the \filter{F814W} band of $\sim$22 mag.
Objects with more extinction will be even harder to detect in the
\filter{F814W} band.
\citet{hunter95} detect essentially no stars within a 1 pc radius at
this magnitude or fainter.
Only 1 in 4 stars in the magnitude interval \filter{F814W}=21--22 mag was
detected outside 1 pc.
It is thus not surprising that a significant population of faint stars is
detected in the NICMOS survey relative to the WFPC2 survey.
Nevertheless, the lower spatial resolution of the NICMOS observations results
in a low recovery fraction at these magnitudes in the central few pc of the
cluster.
The majority of the sources not detected in the NICMOS survey but at optical
wavelengths are located within a radius of 1 pc.
The lack of detection is due to the lower spatial resolution in this study
relative to the optical HST data.
The resolution is almost a factor of two better in the \filter{F814W} band
than in the \filter{F160W} band.
Sources not detected in the NICMOS data outside 1 pc are mainly due to
crowding as well.
Indeed, visual inspection of the location of the stars detected in the optical
but not near--infrared shows they are often located either very close to the core
or on the first Airy ring of a bright source.
The \filter{F555W}-\filter{F160W}\ versus \filter{F160W} color--magnitude diagram is
shown in Fig.~\ref{CMD}.
Overplotted are a 3 Myr isochrone for the high--mass stars adopted from the
\citet{marigo} models and 2, 3, and 4 Myr isochrones adopted from
\citet{siess} for stars below 7 M$_\odot$.
The stars above 7 M$_\odot$, and up to the maximum mass to which we fit the
IMF in Section 4.1 (20 M$_\odot$), are all expected to be on the main sequence.
Both isochrones were calculated adopting a metallicity of half the solar
value, typical for the LMC \citep{smith}.
The two isochrones have a small offset in both the \filter{V} (0.06 mag) and
\filter{H} (0.07 mag) band.
We have forced the \citet{marigo} isochrone to match the \citet{siess}
isochrone at 7 M$_\odot$.
It is evident there is a significant scatter in the color--magnitude diagram.
The scatter is likely due to a combination of binary systems (both physical
and chance alignments), differential extinction, photometric errors and a
possible age spread.
The median extinction is found for the main sequence part of the isochrone.
For objects in the range 7--20 M$_\odot$, we find a median extinction of
A$_\mathrm{V}=1.85$ mag which is slightly higher than the reddening found by
\citet{selman2} in the inner part of the 30 Dor region.
At masses below 7 M$_\odot$, the spread in the color--magnitude diagram is
larger but almost exclusively extends to the red part of the diagram.
This indicates the lower--mass objects on average have an excess amount of
extinction relative to the higher mass objects.
\citet{selman2} observed stars more massive than $10$ M$_\odot$ and would not
detect the additional reddening for the lower--mass sources.
The possible sources of the additional reddening are described in Sec.~3.3.
We have estimated an average age for the cluster by utilising the fact that
the isochrone is almost horizontal in the color range
\filter{F555W}-\filter{F160W}=1.5--2.5 mag and around
\filter{F160W}$\sim$19 mag.
The median \filter{F160W} magnitude is 19.0 mag in this region of the
color--magnitude diagram.
Adopting an average extinction of A$_\mathrm{V}=1.85$ mag, this corresponds
to the \filter{F160W} magnitudes of the 3 Myr isochrone in the same
color range.
We have thus adopted 3 Myr as the mean age of the low mass cluster population
and a 3 Myr isochrone is adopted to create a mass--luminosity relation in
order to turn the luminosities into masses for objects below 7 M$_\odot$ and
the 3 Myr \citet{marigo} isochrone above.
We will in Sect.~4 examine the effects on the derived IMF of adopting an age
spread of 2 Myr.
The right hand panel in Fig.~\ref{CMD} shows the \filter{I}--\filter{F160W}\ versus
\filter{F160W}\ color--magnitude diagram.
It is evident that the clustering around the isochrone is tighter than for
the \filter{V}--\filter{F160W}\ versus \filter{F160W}\ color magnitude diagram.
This is expected if a large part of the scatter is due to differential
extinction.
We can calculate the scatter around the main sequence in both color--magnitude
diagrams and compare with the difference predicted from extinction.
If the spread in the color--magnitude diagrams is due to extinction, we expect
the spread in the \filter{V}--\filter{F160W}\ versus \filter{F160W}\ diagram to be the ratio of
the extinction in each color, i.e. $(1-0.192)/(0.62-0.192)=1.88$ times larger
than in the \filter{I}--\filter{F160W}\ versus \filter{F160W}\ color--magnitude diagram.
Since the isochrone is almost vertical in both diagrams, we have calculated
the standard deviation around the reddened isochrone in both color--magnitude
diagrams.
We have used the stars with good photometry, better than 5\%\ in each filter,
and in the magnitude range $13 < \filter{F160W} < 17$ mag.
The standard deviations found for the \filter{V}--\filter{F160W}\ and \filter{I}--\filter{F160W}\
color--magnitude diagrams are 0.60 mag and 0.36 mag, respectively, and the
ratio is 1.7.
If the measurement errors are taken into account this ratio increases.
The typical errors for the culled sample are 0.04, 0.02, and 0.03 mag for the
V, I, and \filter{F160W}\ bands, respectively.
After taking the measurement errors into account, the ratio is found to be
1.9, assuming the measurement errors in the two filters are independent.
Due to blending, this is not necessarily the case, so 1.9 is an upper limit.
We thus find the ratio to be between 1.7 and 1.9, in agreement with the
scatter being due to differential extinction.
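The predicted ratio follows directly from the adopted extinction
coefficients; a quick numerical check using the values $A_I/A_V=0.62$ and
$A_H/A_V=0.192$ from the text:

```python
# Relative extinction coefficients used in the text (A_lambda / A_V)
A_V, A_I, A_H = 1.0, 0.62, 0.192

red_VH = A_V - A_H  # reddening in V-H per magnitude of A_V
red_IH = A_I - A_H  # reddening in I-H per magnitude of A_V

ratio = red_VH / red_IH
print(round(ratio, 2))  # -> 1.89, consistent with the 1.88 quoted in the text
```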
Since the amount of differential extinction does not affect the \filter{F160W}\ band
photometry significantly, the single band photometry presented here is
competitive with the 2--band optical photometry.
There is a unique translation from the \filter{F160W}\ band magnitude to the object
mass for the majority of the mass range.
For the optical photometry, the color information is used to determine the
extinction and the mass function is thus effectively determined by the
de--reddened \filter{V} band magnitude.
\subsection{The differences between the optical and near--infrared HST observations}
The main advantage of the optical relative to the near--infrared HST
photometry is the improved resolution due to the smaller diffraction limit.
The stellar content can therefore be resolved to lower masses closer to
the cluster core than is possible with the near--infrared observations.
However, phenomena associated with the star formation process can introduce
additional reddening that can complicate the derivation of the low--mass IMF
from optical data.
The low--mass objects may still be associated with a circumstellar disk.
There is evidence from e.g. the Orion Nebula Cluster that circumstellar
disks can survive the UV radiation from massive stars \citep{robberto}.
Even if the disks are being evaporated by the radiation field from the early
type stars, the evaporated material will be a further source of reddening.
Patchy extinction associated with the 30 Dor complex and located in the
foreground of R 136 will be an additional source of differential reddening.
There are signs in the optical images presented in Fig. 1 of \citet{sirianni}
of patches of extinction, e.g. to the east--north--east of the cluster center.
If variable extinction is present, or if a significant fraction of the stars
are associated with disks or outflows, an extinction--limited sample has to
be created in order to avoid a bias against detection of the
low--mass stars.
The near--infrared photometry is affected by differential extinction as well,
but the effect is less than 20\% of that measured in the \filter{V} band.
Thus, whereas an IMF derived from optical observations without an
extinction--limited sample might be severely affected for the low--mass
objects, the effect on near--infrared observations is modest.
Therefore, in the outer parts of the cluster where crowding is a smaller
issue than closer to the center, the near--infrared observations are more
suitable to detect and characterise the low--mass stellar population in the
cluster.
On the other hand, single band photometry has the disadvantage that there
is no information on the age of individual objects.
We investigate in the next Section how this might affect the derived IMF.
We note that if differential extinction is present, the situation is no
better for the optical photometry.
Even though the cluster was observed through two filters in the optical,
there is still a degeneracy between age and extinction.
\citet{sirianni} converted the \filter{V}--\filter{i} photometry into an
effective temperature and used that effective temperature to obtain a
bolometric correction.
Without de--reddening the sources, the age of a cluster member can be in
error and hence the mass estimates will be uncertain.
\section{Analysis}
We construct a mass--luminosity relation by combining the main sequence
models by \citet{marigo}, and the pre--main sequence models of \citet{siess}
in order to infer the stellar mass from the \filter{F160W} band magnitude.
We then derive the mass functions for R136 outside 0.6 pc where the 50\%
completeness limit corresponds to a stellar mass below 10 M$_\odot$.
Deriving the IMF this way is a well established procedure
\citep[][]{lada,muench02}.
We further discuss the potential effect of extinction on the derived IMF\@.
Finally, we search for evidence for mass segregation in the outer parts of
the cluster using the cumulative luminosity functions.
\subsection{Deriving the mass function}
A mass--luminosity relation is needed to convert the derived \filter{F160W}
band magnitude for each star to a mass.
We use the \citet{siess} isochrones for stars below 7 M$_\odot$ and the
\citet{marigo} 3 Myr isochrone for the more massive stars as discussed in
Section 3.2.
The age of the cluster is first assumed to be 3 Myr and is later varied to
examine the effects on the derived IMF for different cluster ages.
Stars below $\sim$3 M$_\odot$ are on the pre--main sequence isochrone whereas
the more massive stars up to our upper mass limit of 20 M$_\odot$ (see below)
are on the main sequence.
The adopted mass--luminosity relation is shown in Fig.~\ref{MLrel}.
We have limited knowledge of the extinction for the majority of our objects.
Instead, we have adopted an average extinction of A$_\mathrm{V}=1.85$ mag, as
determined from the \filter{V-H} versus \filter{H} color--magnitude diagram
in Fig.~\ref{CMD}.
Since the amount of extinction ranges between A$_\mathrm{V}=0.7-3$ mag
\citep{brandl96}, the extinction for an individual object might be wrong by up
to A$_\mathrm{V}\sim 1$ mag, corresponding to a maximum error of $<$ 0.2 mag
in the \filter{F160W}\ band.
This corresponds to an error of $\sim$10\% when the luminosity is transformed
into a mass.
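These numbers can be checked with the extinction coefficient used earlier
($A_H/A_V=0.192$) and a rough placeholder slope $L\propto M^{2}$ for the
mass--luminosity relation of Fig.~\ref{MLrel} (the value of $\alpha$ is an
assumption for illustration, not taken from the models):

```python
dAv = 1.0             # mag, extinction error for an individual object
dm_h = 0.192 * dAv    # resulting F160W error, < 0.2 mag as stated
dlogL = 0.4 * dm_h    # magnitude error converted to dex in luminosity
alpha = 2.0           # hypothetical L ~ M^alpha slope (placeholder)
dlogM = dlogL / alpha
frac_err = 10**dlogM - 1.0
print(round(dm_h, 3), round(frac_err, 2))  # -> 0.192 0.09, i.e. ~10% in mass
```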
Fig.~\ref{massfuncs} shows the derived mass functions outside 0.6 pc for a
3 Myr isochrone after field stars have been subtracted statistically in
each annulus.
The mass functions are in general smooth and well fit by power--laws.
However, there appears to be some structure in the derived IMFs at
intermediate masses, 2--4 M$_\odot$, which is the region where the pre--main
sequence track joins the main sequence.
The mass--luminosity relation is plagued by a non--monotonic feature in this
mass range (see Fig.~\ref{MLrel}), which marks the radiative--convective
gap \citep{mayne} and the transition region from
pre--main sequence to main sequence \citep[see also][]{stolte04}.
A similar structure in the derived IMF is seen in the results from e.g.
NGC 3603 but at a slightly higher mass since the cluster is
younger \citep[e.g.][]{stolte2}.
The turn--on mass is higher for a younger cluster.
Thus we would expect the kink in the mass--luminosity relation to move to
higher masses for a younger cluster.
Since this is what is seen when comparing NGC 3603 and R 136, the structure is
indeed a feature of the isochrones and not intrinsic to the cluster.
The number of stars in each mass bin is provided in Table~\ref{numbers}.
Power--laws have been fitted to each of the histograms in order to derive
the slopes of the mass function in each annulus.
The fit was done over the mass range from 20 M$_\odot$ down to the 50\%\
completeness limit for each annulus.
The mass for stars above $\sim$20 M$_\odot$ is very poorly constrained from
near--infrared observations due to uncertainties in the bolometric
corrections \citep[e.g.][]{massey03}.
The derived slopes $\Gamma$, where $dN/d\log M\propto M^{\Gamma}$, are
indicated in Fig.~\ref{massfuncs} and are also presented in Table~\ref{slopes}.
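Since $dN/d\log M\propto M^{\Gamma}$, fitting the slope amounts to a
straight--line fit of $\log(dN/d\log M)$ against $\log M$.
A minimal sketch (the bin values below are illustrative, not the measured
histograms):

```python
import numpy as np

def fit_gamma(mass_bin_centers, counts_per_dlogm):
    """Least-squares slope of log(dN/dlogM) vs. log M, i.e. Gamma."""
    x = np.log10(mass_bin_centers)
    y = np.log10(counts_per_dlogm)
    gamma, intercept = np.polyfit(x, y, 1)
    return gamma

# Illustrative check: a pure Salpeter histogram returns Gamma = -1.35
masses = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
counts = 100.0 * masses ** -1.35
```

In practice each bin would also carry a Poisson weight, which
{\tt np.polyfit} accepts through its {\tt w} argument.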
The derived slopes for annuli outside 1 pc are consistent with each other
within 2$\sigma$ error bars.
For the 3--5 pc and 5--7 pc annuli, where the data are complete to below
2 M$_\odot$, the slopes are found to be $-1.2\pm0.1$ and $-0.9\pm0.2$,
respectively.
These are slightly shallower than the slope of $\Gamma=-1.28\pm0.05$ derived
by \citet{sirianni} above 2 M$_\odot$, but in our case the IMF continues as a
power--law down to 0.8 M$_\odot$.
Has the fact that we used the whole mass range for our power-law fit
washed out a possible
flattening at the low mass end? To test this possibility, we have
additionally fitted a separate power--law to the low--mass part of the IMF.
Only the part of the mass function that is not influenced by the kink in the
mass luminosity relation is used.
This region is limited to masses below 1.7 M$_\odot$ for the 3 Myr isochrone.
It is therefore only for the 5--7 pc annulus that a reasonable mass range is
covered to fit the IMF.
We find the slope to be $\Gamma=-0.9\pm0.2$, which is shallower than, but
consistent at the 2$\sigma$ level with, a Salpeter IMF, and is consistent
with the slope derived for the full mass range.
We have derived the IMF in the same boxes as done by \citet{sirianni}.
The completeness correction was calculated independently for each box before
the IMFs were combined to the average IMF for direct comparison with the
IMF presented by \citet{sirianni}.
The 50\% completeness limit for the NICMOS data varies from 2.8 to
1.4 M$_\odot$ for the four boxes.
Following Sirianni et al., we have derived an average completeness limit
for the three regions of 2.2 M$_\odot$.
As is evident, the agreement is good over the common mass range.
We appear to underestimate the number of stars at $\sim$6 M$_\odot$ compared
to \citet{sirianni}; however, those appear to be recovered at 8 M$_\odot$.
The color--magnitude diagrams show a large spread in the main sequence to pre--main
sequence transition at \filter{F160W}=18--19 mag.
Although it was shown in Section 3.2 that this scatter can be explained by
differential extinction, it cannot be ruled out that an age spread is present
as well, as suggested in previous studies \citep{hunter95,masseyhunter}.
It is therefore reasonable to take a star formation history different than a
single burst at 3 Myr into account.
We show in Fig.~\ref{IMF_spread} the IMF in the outer two annuli assuming a
cluster age of 2 and 4 Myr, respectively.
We also show an `average' IMF, found as the average of the IMFs derived for
the age range 2--4 Myr in 0.5 Myr increments.
The lower mass limit in the average IMF was determined from the 4 Myr
isochrone which provides the most restrictive mass limit.
We find that both in the case of a 2 and 4 Myr isochrone the IMF is well fit
by power--laws.
The derived slopes are steeper assuming an older isochrone relative to the
younger ones.
There is no indication for a flattening below 2 M$_\odot$ in either case.
The average IMF is also found to be represented by a power--law with a
slope consistent with a Salpeter slope.
The slopes of the derived power--laws are given in Table~\ref{slopes}.
For the average IMF, the number of stars averaged over the different ages
in each mass bin is derived.
Error bars for the average IMF have been determined as the standard
deviation around the mean number of objects in each mass bin.
As was the case for the 3 Myr isochrone, the slopes of the IMF for
different assumed ages have also been calculated and are provided in
Table~\ref{slopes}.
The slopes are found to be shallower than a Salpeter slope, but consistent
with it at the $\sim2\sigma$ level.
The slopes are also consistent with those derived for all masses up to
20 M$_\odot$, as was the case assuming the 3 Myr isochrone.
The lack of a flattening in the IMF below 2 M$_\odot$ is in contrast to the
results presented by \citet{sirianni}, who derived the IMF closer to the
cluster center.
There can be several possible reasons for the difference in the derived
IMF slope in the two surveys.
First, due to the different spatial resolution in the two studies, the
NICMOS IMF is derived further away from the center of the cluster than the
WFPC2 IMF by \citet{sirianni}.
The IMF was derived in the areas shown in Fig.~\ref{30dormos} as regions B,C,
and D.
Thus, all of their surveyed area is outside a radius of 1 pc and the
majority of their surveyed area is between 2 and 5 pc where crowding precludes
NICMOS
from detecting stars less massive than 2.2 M$_\odot$ for a 3 Myr isochrone.
One possibility for the difference in the derived slopes for the NICMOS and
WFPC2 data can therefore be a variation of the IMF as a function of radius.
Another possible reason can be differential extinction as suggested by
\citet{selman2}.
Both possibilities are discussed in subsections 4.2 and 4.3.
\subsection{The effect of differential extinction}
As was suggested by \citet{selman2}, the presence of differential extinction
can potentially alter the low--mass end of the IMF if an extinction-limited
sample is not used.
In order to estimate the possible effect on the IMF if differential extinction
is not taken into account, we have constructed a simple model of the cluster
which includes differential extinction and the depth of the
dataset from \citet{sirianni}.
A Salpeter slope IMF and a cluster age of 3 Myr were assumed.
Each object within the artificial cluster was assigned a \filter{V}
band magnitude based on its mass from the 3 Myr isochrone computed for
a half solar metallicity by \citet{siess}.
The objects were then reddened by a foreground extinction chosen randomly
from a normal distribution with a standard deviation of 0.7 mag, peaking
at A$_\mathrm{V}$=1.85 mag.
If the extinction was found to be less than A$_\mathrm{V}=0.7$ mag,
a new extinction was calculated.
Stars were then considered detected if their reddened magnitude was within
the 50\%\ completeness limit presented by \citet{sirianni}.
The model is obviously an oversimplification of the real situation.
Nevertheless it is expected to illustrate how the derived IMF might differ
from the underlying IMF.
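The toy model above can be sketched as follows.
The mass--to--\filter{V} relation and the detection limit in this sketch are
placeholders chosen for illustration, not the \citet{siess} isochrone or the
actual \citet{sirianni} completeness limit:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_salpeter(n, m_lo=0.8, m_hi=20.0, gamma=-1.35):
    """Draw masses from dN/dlogM ~ M^gamma by inverse-transform sampling."""
    a = gamma
    lo, hi = m_lo**a, m_hi**a
    return (lo + rng.random(n) * (hi - lo)) ** (1.0 / a)

def v_mag(mass):
    """Placeholder mass--magnitude relation (NOT the Siess isochrone)."""
    return 22.0 - 6.0 * np.log10(mass)

def sample_extinction(n):
    """Normal A_V, sigma 0.7 mag, peaked at 1.85 mag, redrawn below 0.7."""
    av = rng.normal(1.85, 0.7, n)
    while np.any(av < 0.7):
        bad = av < 0.7
        av[bad] = rng.normal(1.85, 0.7, bad.sum())
    return av

masses = sample_salpeter(200_000)
detected = v_mag(masses) + sample_extinction(masses.size) < 24.0  # toy limit

# Ratio of detected stars below vs. above 2 Msun, as computed in the text;
# reddening plus a fixed depth depletes the low-mass side of the ratio
ratio = (detected & (masses < 2.0)).sum() / (detected & (masses >= 2.0)).sum()
```

Even this crude version reproduces the qualitative effect: the detected ratio
falls below the intrinsic Salpeter value because the faint, reddened
low--mass stars drop out first.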
Figure~\ref{checksirianni} shows the input Salpeter IMF (solid line), the
derived IMF (dashed line) together with the measurements by \citet{sirianni}.
The model mimics a flattening in the observed IMF similar to that deduced
by \citet{sirianni}.
The ratio of the number of stars below to that above 2 M$_\odot$ has been
calculated both for the model cluster and the data
from \citet{sirianni}.
For the model cluster it is found to be 0.87, which is in reasonable
agreement with the ratio of 0.76 derived from the observations.
\subsection{Cumulative mass functions in the outer parts of R136}
Another explanation for the difference between the results obtained here
and the results by \citet{sirianni} can be mass segregation.
We have searched for evidence for mass segregation in the two outer annuli
in our survey.
We used the luminosity functions instead of the mass functions to avoid
additional uncertainties due to the mass--luminosity relation.
The results obtained for the mass functions are very similar to those from
the luminosity functions.
The cumulative luminosity distributions are shown in Fig.~\ref{cumu_LF}
for the outer two radial bins. These are the only bins where the 50\%
completeness limit is below 2 M$_\odot$. The two cumulative distributions
are very similar.
We have performed a Kolmogorov--Smirnov test to quantify the similarity of the cumulative luminosity distributions.
The maximum difference between the two distributions is 0.039, and the probability of obtaining a difference this large if the two samples were drawn from the same parent distribution is 10\%.
Thus, there is no strong evidence (less than 2$\sigma$) for mass segregation in the outer parts of the cluster.
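Such a two--sample Kolmogorov--Smirnov comparison can be reproduced with
standard tools; the magnitude arrays below are synthetic stand--ins for the
two annuli, not our measured star lists:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Synthetic luminosity samples for two annuli drawn from the same parent
mags_inner = rng.normal(19.0, 1.5, 500)
mags_outer = rng.normal(19.0, 1.5, 700)

result = ks_2samp(mags_inner, mags_outer)
# result.statistic is the maximum difference between the two cumulative
# distributions; a large result.pvalue means the data give no grounds to
# reject a common parent distribution
```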
The fact that there is little evidence for mass segregation outside 3 pc does
not exclude the possibility that the cluster is mass segregated out to a
radius of several pc.
Both \citet{malumuth} and \citet{brandl96} found evidence for mass segregation
of the massive stars in the center of the cluster.
\citet{brandl96} showed the half--mass relaxation time to be 7.8$\cdot10^7$
yr, much longer than the cluster age.
They also point out that the massive stars will experience mass segregation
on a much shorter time scale than the lower mass stars; the time scale
depends inversely on the stellar mass.
It is thus not surprising, from a dynamical point of view, that there is
no evidence for mass segregation outside the half--mass radius of
1.7 pc \citep{hunter95}.
On the other hand, this does not rule out the possibility that the cluster
might be mass segregated at birth closer to the cluster center.
Evidence for mass segregation has been found in e.g. the Orion Nebula
Cluster (ONC) \citep{hillenbrandhartmann,bonnell}.
\citet{hillenbrandhartmann} showed evidence for mass segregation down to
stellar masses of 1--2 M$_\odot$.
Due to the youth of the ONC, they concluded the mass segregation had to be at
least partly primordial.
It is thus possible that R136 is also affected by primordial mass segregation
close to the cluster center and that mass segregation is the reason for the
difference between the NICMOS and WFPC2 IMFs.
\subsection{Cluster mass}
We can obtain a rough estimate of the cluster mass from the near--infrared
observations.
The main limitation in our mass estimate is the amount of confusion due to
crowding in the cluster centre: our data mainly sample the IMF down to
1.4 M$_\odot$ and below outside 3 pc.
Nevertheless, we can utilise the mass estimates within 2 pc from
\citet{hunter95} to complement our mass estimate down to 2.1 M$_\odot$.
Between 0.15 and 2 pc, the results are extrapolated from the local
completeness limit mass down to 2.1 M$_\odot$ assuming an
underlying Salpeter IMF\@.
Due to crowding, no stars less massive than 20 M$_\odot$ have been detected
within the central 0.15 pc radius.
The mass in the very center has been estimated from the surface density
profile down to 2.8 M$_\odot$ in \citet{hunter95} to be
$4\cdot10^4$ M$_\odot$pc$^{-2}$, resulting in a mass of 3700 M$_\odot$
down to a lower mass limit of 2.1 M$_\odot$.
We find the cluster total mass down to 2.1 M$_\odot$ to be
$5\cdot10^4$ M$_\odot$.
The directly determined mass down to 2.8 M$_\odot$ within 4.7 pc is found to
be $2.0\cdot10^4$ M$_\odot$, almost the same as found by \citet{hunter95}.
If the IMF follows a Salpeter slope down to 0.5 M$_\odot$ as observed in the
Galactic field and nearby lower--mass clusters \citep{kro02}, the total mass
in the central region would be roughly double the amount given above, and
the total cluster mass would be close to $\sim10^5$M$_\odot$.
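The `roughly double' statement can be checked by integrating a Salpeter mass
function, $dN/dm\propto m^{-2.35}$; the upper mass limit of 120 M$_\odot$
below is an assumption for illustration, not a value taken from the text:

```python
def salpeter_mass(m_lo, m_hi, x=2.35):
    """Total mass in [m_lo, m_hi] for dN/dm ~ m^-x (arbitrary norm);
    the integrand m * m^-x integrates to m^(2-x) / (2-x)."""
    a = 2.0 - x
    return (m_hi**a - m_lo**a) / a

m_high = salpeter_mass(2.1, 120.0)  # mass already counted above 2.1 Msun
m_low = salpeter_mass(0.5, 2.1)     # extrapolated low-mass contribution
factor = (m_high + m_low) / m_high
print(round(factor, 2))  # -> 1.86, i.e. the mass roughly doubles
```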
The velocity dispersion, and hence the dynamical mass, of the whole NGC
2070 region, including R 136 has been determined by \citet{bosch}.
The dynamical mass was determined to be 4.5$\cdot10^5$M$_\odot$, almost 5
times higher than expected for R 136 alone, but consistent with the
photometric mass for the same area \citep{selman2}.
If we take into account that the half mass radius of R 136 is 1.7 pc
\citep{hunter95}, compared to 14 pc for the whole NGC 2070 region and
assuming the velocity dispersion is the same in the inner parts of the
cluster, we would expect a dynamical mass of
4.5$\cdot10^5\cdot 1.7/14$M$_\odot$=5.5$\cdot10^4$M$_\odot$ which is lower
than the mass expected if the IMF is consistent with a Galactic IMF down to
0.5 M$_\odot$.
Thus, at face value, the velocity dispersion would be low enough that the
cluster can stay bound.
However, a measurement of the velocity dispersion for the inner regions is
necessary to directly compare the photometric mass with the dynamical mass.
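The scaling estimate above is a one-line calculation; a minimal sketch with the numbers quoted in the text (assuming, as stated, the same velocity dispersion in the inner parts of the cluster):

```python
m_dyn_ngc2070 = 4.5e5  # dynamical mass of NGC 2070 [M_sun] (Bosch et al.)
r_hm_r136 = 1.7        # half-mass radius of R136 [pc] (Hunter et al.)
r_hm_ngc2070 = 14.0    # half-mass radius of NGC 2070 [pc]

# M_dyn ~ sigma^2 * r, so for the same velocity dispersion sigma the
# dynamical mass scales linearly with the half-mass radius
m_dyn_r136 = m_dyn_ngc2070 * r_hm_r136 / r_hm_ngc2070  # ~5.5e4 M_sun
```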
\subsection{ The surface brightness profile}
We can directly derive the surface brightness profile of the region around
R136 in the 30 Dor cluster since the data does not suffer from saturated stars.
Although bright stars will saturate through the one hour exposure, the
non-destructive readout mode ensures that only the first reads are used to
derive the magnitude of the brightest stars.
The surface brightness profile is shown in Fig.~\ref{lightprof}.
Between $\sim$0.2 and 2 pc, the light profile is well fit by a power--law, whereas inside 0.2 pc the light profile appears to be flattening.
We have therefore fitted the light profile with a power--law modified by a core radius, similar to the approach in \citet{EFF}.
Constraining the fit to inside 2 pc, we find a slope of $-1.54\pm0.02$, slightly shallower than the $-1.72\pm0.06$ derived outside 0.1 pc by \citet{campbell} using \filter{F336W} observations with the Planetary Camera onboard HST.
The core radius is found to be $0.025\pm0.004$ pc, which is less than the resolution of the observations and is thus likely an upper limit.
Previous HST optical studies determined a small core radius, $r_c\le 0.02$ pc
\citep{hunter95}, consistent with our findings here.
However, since the derived core radius is smaller than the resolution of the observations, the evidence for it is weak.
One or two bright stars off center by only a small amount could mimic a cluster core.
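The text does not spell out the fitted functional form; a common choice following \citet{EFF} is a power law modified by a core, $\mu(r)=\mu_0[1+(r/a)^2]^{\gamma/2}$. A short sketch (pure Python; the parameter values are illustrative, taken close to the fit results above) shows the expected behaviour: the logarithmic slope approaches the power-law exponent well outside the core and flattens to zero inside it:

```python
import math

def mu(r, mu0=1.0, a=0.025, gamma=-1.54):
    """EFF-style surface brightness: a power law of logarithmic slope
    gamma at r >> a that flattens to a constant inside r ~ a (r, a in pc)."""
    return mu0 * (1.0 + (r / a) ** 2) ** (gamma / 2.0)

def log_slope(r, eps=1e-4):
    """d(log mu)/d(log r) by central differences."""
    return (math.log(mu(r * (1.0 + eps))) - math.log(mu(r * (1.0 - eps)))) \
        / (math.log(1.0 + eps) - math.log(1.0 - eps))

slope_out = log_slope(1.0)    # well outside the core: close to gamma = -1.54
slope_in = log_slope(0.002)   # well inside the core: close to 0 (flat)
```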
\subsection{Comparison with other massive clusters and the implications of low--mass stars in R136}
How does the low--mass end of the IMF in 30 Dor compare with that determined
for other massive and dense stellar clusters?
A top--heavy IMF in massive dense clusters has been suggested on theoretical
grounds \citep[e.g.][]{silk}.
The most convincing example of a young cluster with a present-day mass
function departing significantly from a Salpeter IMF above 1 M$_\odot$ is the
Arches cluster \citep{stolte,figer}.
\citet{stolte} found an average slope of $\Gamma=-0.9\pm0.15$ for the central
parsec of the Arches cluster, flatter than a Salpeter slope of -1.35.
Deeper observations found the present-day mass function in the Arches cluster
to be well approximated by a power--law with a slope of $\Gamma=-0.91\pm0.08$
down to 1.3 M$_\odot$ \citep{kim}.
However, recent work taking differential extinction into account suggests that the slope of the power--law is only slightly shallower than the Salpeter slope, $\Gamma=-1.1\pm0.2$ \citep{espinoza}.
\citet{portegies} note that even if the observed IMF is slightly flatter than a Salpeter IMF, this can be explained by mass segregation.
The mass segregation would be accelerated in the cluster due
to the strong gravitational field from the Galactic Center.
By adopting realistic parameters for a model cluster and an appropriate
distance from the Galactic center, they found that an input Salpeter slope
IMF would be transformed to the observed present day mass function via
strong dynamical evolution.
\citet{stolte2} showed that the IMF of the cluster powering the NGC 3603 HII
region was well fitted by a power--law but with a slope flatter than
Salpeter, $\Gamma=-0.91\pm0.15$.
They further showed evidence for mass segregation for the more massive
stars, M$> 4$ M$_\odot$.
The data indicated a slight flattening of the low--mass content
(M$<$ 3 M$_\odot$).
NGC 3603 is younger than the Arches cluster and not affected by a strong
tidal gravitational field. Thus it is expected to be less
influenced by dynamical
mass segregation.
The even more massive starburst clusters appear to be the primary sites
(unit cells)
of star formation in starburst galaxies, including
interacting/colliding galaxies such as The Antennae or The Cartwheel.
If starburst clusters are the basic building blocks of certain star--forming
galaxies, their stellar content (IMF) will affect much of the observed
chemical and photometric evolution of galaxies, both at the
present epoch and perhaps even more so in the high-redshift
past \citep{charlot}.
Several observational claims have been made that the IMF in unresolved
starburst clusters is top--heavy \citep{rieke}, although observations of the
Antennae gave a mixed result \citep{mengel}.
However, it has been suggested that the high mass--to--light ratios found
in some young starburst clusters are artificially inflated because the
clusters are not in virial equilibrium, owing to gas expulsion
from the clusters \citep{goodwinbastian}.
During the first 50 Myr of the cluster, the velocity dispersion and hence the
cluster mass might be overestimated if the cluster is assumed to be virialized.
\citet{goodwinbastian} suggest that the top--heavy IMFs inferred in young
unresolved extragalactic star clusters
might be spurious due to their non-virialized dynamical state.
With the present dataset it is clear that the IMF in the outer parts of R136
continues as a power--law down to 1 M$_\odot$, similar to what is found in
other star clusters, and the slope is similar to that found in the field.
Whether this is true for the cluster as a whole depends on the cause for the
flattening observed closer to the cluster center.
It would be interesting to know the IMF if the observations could
be extended closer to the characteristic mass where the
Galactic field star IMF flattens \citep[0.5 M$_\odot$;][]{kro02}, a mass that
can be reached in massive young clusters ($\le$ 4 Myr) in the LMC
with AO systems.
It has long been suggested R136 might be a proto--globular cluster
\citep{meylan,larson93}.
The question has been whether R136 would remain bound over a Hubble time.
One consequence of a top--heavy IMF is that the cluster would dissolve
soon after gas expulsion and mass loss due to evolution of the
high--mass stars.
However, the detection of stars in R136 less massive than 1 M$_\odot$
gives the first {\it direct} evidence that low--mass stars are formed in a
starburst cluster.
The fact that the IMF in the outer parts of R136 appears to be a Salpeter IMF
down to at least 1 M$_\odot$ gives support to the notion the cluster might be
a proto--globular cluster, albeit a light one.
Early gas expulsion and subsequent mass loss through stellar evolution will
disrupt star clusters deficient in low--mass stars during the first 5 Gyr of
the clusters' lives \citep{chernoff,goodwin_old}.
However, a determination of the velocity dispersion in the inner parts of the
cluster is necessary to determine its final fate.
Thus, the presence of low--mass stars is a necessary, but not sufficient,
condition for the cluster to evolve into a globular cluster.
The median mass of Galactic globular clusters is 8.1$\cdot10^4$ M$_\odot$
\citep{mandushev}, comparable to the mass of R136.
Even if R 136 remains bound, it will lose some mass and might end up
as a low--mass globular cluster.
\section{Conclusions}
We have analyzed HST/NICMOS \filter{F160W} band data covering the central
14pc$\times$14.25pc around R136 in the NGC 2070 cluster in the LMC.
We have reached the following conclusions:
\begin{itemize}
\item{From the color--magnitude diagram obtained by combining our photometry
with previously published HST/WFPC 2 \filter{F555W} data we constrain the age
of the lower--mass stellar content in the cluster to be 2--4 Myr, consistent
with previous estimates.
We derive individual masses for the objects detected adopting a 3 Myr
isochrone.}
\item{We have detected stars in the cluster down to 0.5 M$_\odot$ at
$r > 5$ pc, assuming an age of 3 Myr.}
\item{The derived IMF is consistent with a Salpeter slope IMF with no
evidence for a flattening at low masses down to the 50\%\ completeness limit
corresponding to a mass of 1.1 M$_\odot$ outside a radius of 5 pc for a
3 Myr population and 1.4 M$_\odot$ if the oldest stars are 4 Myr. }
\item{The result is in disagreement with the flattening of the IMF below 2
M$_\odot$ observed by \citet{sirianni} using optical data covering a region
closer to the cluster center.
We suggest two possible reasons for the discrepancy: differential extinction
and mass segregation.}
\item{We find no evidence for mass segregation outside 3 pc, but with the
current data, we cannot rule out that closer to the center the low--mass
stars are segregated.}
\item{From the radial surface brightness profile we have derived a core
radius for the cluster of 0.025 pc (0\farcs1), consistent with previous
estimates by \citet{hunter95}.}
\item{The mass of the cluster within 7 pc, from 25 M$_\odot$ down to
2.1 M$_\odot$, is estimated to be 5$\cdot10^4$ M$_\odot$. If the IMF continues
with a Salpeter slope down to 0.5 M$_\odot$ the total mass estimate will
double.}
\item{The total mass of the cluster combined with the large number of
low--mass stars suggests that the 30 Dor cluster may survive to become a
proto--globular cluster depending on the cluster velocity dispersion.}
\end{itemize}
\acknowledgements
We thank Richard Larson for discussions in the early phases of the project,
Eddie Bergeron for assistance with the drizzle software, and Matthew
Kenworthy for commenting on an early version of the manuscript.
M. A. and H. Z. acknowledge support from the DLR grant 50OR9912:
``Data analysis of NICMOS/HST images of the 30 Dor cluster'' and partial
funding through the DLR grant 50OR0401.
M. A. thanks the Astrophysikalisches Institut Potsdam for providing a
stimulating and supportive environment for carrying out this Thesis work.
Additional support was funded through the European Commission Fifth Framework
Programme Research Training Network ``The Formation and Evolution of Young
Stellar Clusters'' (HPRN-CT-2000-00155).
The Astronomische Gesellschaft is acknowledged for providing funding for
travel.
Support for this work was provided by NASA through grant number
GO-07370.01-96A from the Space Telescope Science Institute, which is
operated by the Association of Universities for Research in Astronomy,
Inc., under NASA contract NAS 5-26555.
Facilities: \facility{
This paper is based on observations made with the NASA/ESA
{ \it Hubble Space Telescope}, operated by the Space Telescope Science
Institute, which is operated by the Association of Universities for
Research in Astronomy, Inc., under NASA contract NAS5-26555.}
\section{Introduction}
Thermal transport in semiconductors and dielectrics is determined by phonon scattering processes. Intrinsic phonon-phonon scattering includes three-phonon, four-phonon and higher-order phonon processes. The scattering rate $\tau^{-1}$, reciprocal of the phonon relaxation time $\tau$, is essential in predicting the lattice thermal conductivity $\kappa$ based on the Boltzmann transport equation (BTE)\,\cite{Srivastava_book, Broido2007apl}
\begin{equation}
\kappa_z=\frac{1}{V}\sum_\lambda v_{z,\lambda}^2 c_\lambda\tau_\lambda,
\label{eq_k_BTE}
\end{equation}
where $V$ is the volume, $\lambda\equiv(\mathbf{k},j)$ specifies a phonon mode with wave vector $\mathbf{k}$ and dispersion branch $j$, $v_z$ is the phonon group velocity projection along the transport $z$ direction, and $c_\lambda$ is the phonon specific heat per mode. Starting from the third-order anharmonic Hamiltonian and Fermi's golden rule (FGR), Maradudin \textit{et al.}\,\cite{Maradudin1962prb, Maradudin1962pss} proposed an anharmonic lattice dynamics (ALD) method to predict intrinsic three-phonon scattering rates in solids. Debernardi \textit{et al.}\,\cite{Deb1995prl} performed the ALD calculations based on density functional theory (DFT) to obtain $\tau_\lambda$. Recently, significant advances have been achieved by Broido \textit{et al.} by combining ALD and BTE to predict $\kappa$\,\cite{Broido2007apl}. The ALD method based on first-principles force constants or classical interatomic potentials has since been extensively used\,\cite{Turney2009prb1, Lindsay2009prb, Esf2011prb, Lindsay2012prl}. A recent review on this topic can be found in Ref.\,\cite{Feng2014Jn}. However, the current ALD method is limited to evaluating three-phonon scattering rates and does not capture four-phonon and higher-order scattering owing to the computational challenge, and hence its accuracy is limited to relatively low temperatures, typically far from the melting point. For example, at 1000 K the lattice thermal conductivity of Si predicted by only considering three-phonon scattering is $\sim$41 W/mK\,\cite{Esf2011prb}, which is higher than the experimental value of $\sim$30 W/mK\,\cite{Glass1964PR}.
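Once $\omega_\lambda$, $v_{z,\lambda}$ and $\tau_\lambda$ are known for every mode, Eq.\,(\ref{eq_k_BTE}) is a weighted sum over modes. A minimal sketch (pure Python; the three-mode input data are hypothetical, purely for illustration):

```python
import math

HBAR = 1.0545718e-34  # reduced Planck constant [J s]
KB = 1.380649e-23     # Boltzmann constant [J/K]

def mode_specific_heat(omega, temp):
    """Phonon specific heat per mode, c = kB x^2 e^x / (e^x - 1)^2,
    with x = hbar*omega/(kB*T)."""
    x = HBAR * omega / (KB * temp)
    return KB * x * x * math.exp(x) / (math.exp(x) - 1.0) ** 2

def kappa_bte(modes, volume, temp):
    """kappa_z = (1/V) * sum_lambda v_z^2 * c * tau.
    `modes` is a list of (omega [rad/s], v_z [m/s], tau [s]) tuples."""
    return sum(v * v * mode_specific_heat(w, temp) * t
               for (w, v, t) in modes) / volume

# hypothetical three-mode input, purely for illustration
modes = [(1e13, 5000.0, 10e-12), (2e13, 4000.0, 5e-12), (4e13, 2000.0, 2e-12)]
kappa = kappa_bte(modes, volume=1e-27, temp=300.0)  # W/(m K)
```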
Although the study of four-phonon scattering has a long history, it was limited to the qualitative interpretation of the experimental data\,\cite{Joshi1970prb, Ecsedy1977prb}. Recently, Lindsay \textit{et al.} examined the phase space for four-phonon scattering processes \cite{Lindsay2008jp}. Turney \textit{et al.} have discussed the higher-order anharmonicity of the interatomic potential in argon, by comparing the three-phonon scattering rates obtained by the ALD method to the total phonon scattering rates obtained by molecular dynamics (MD) and normal mode analysis (NMA)\,\cite{Turney2009prb1}. Sapna and Singh\,\cite{Sapna2013mplb} estimated the four-phonon scattering rates in carbon nanotubes using an analytical model involving approximations such as the Callaway model, the Debye model, etc. Although NMA can predict the total scattering rates, it cannot separate three-phonon and higher-order phonon processes and does not provide scattering probability of each individual scattering process\,\cite{Feng2014Jn}. Therefore, a direct and rigorous calculation of four-phonon scattering rates in the ALD framework is of great significance for a better understanding of phonon transport and a more accurate prediction of $\kappa$.
In this work, we derive an ALD formalism for four-phonon scattering by extending the derivation of Maradudin \textit{et al.}\,\cite{Maradudin1962prb}. Bulk argon, a strongly anharmonic material, is used as a benchmark material to demonstrate the approach and the importance of four-phonon scattering in thermal transport. This is followed by the study of three less anharmonic materials -- bulk diamond, silicon and germanium. The accuracy of our calculations is demonstrated by the agreement of the scattering rates and lattice thermal conductivities between ALD (with four-phonon scattering included) and MD, and by comparison with experimental results where appropriate.
\section{Derivation of four-phonon scattering rate}
\label{Sec_formula}
The Hamiltonian of crystals can be written as the summation of the harmonic and anharmonic parts based on perturbation theory\,\cite{Maradudin1962prb,Ziman_book}
\begin{equation}
\hat{H} = \hat{H}_0 + \hat{H}_3 + \hat{H}_4 + \cdots,
\end{equation}
where the harmonic part $\hat{H}_0$, first-order perturbation $\hat{H}_3$ and second-order perturbation $\hat{H}_4$ are\,\cite{Maradudin1962prb}
\begin{eqnarray}
&&\hat{H}_0=\sum_\lambda \hbar\omega_\lambda (a_\lambda^\dagger a_\lambda +1/2), \\
&&\hat{H}_3 = \sum_{\lambda \lambda_1 \lambda_2} H_{\lambda\lambda_1\lambda_2}^{(3)}\left(a_{-\lambda}^\dagger + a_\lambda\right) (a_{-\lambda_1}^\dagger + a_{\lambda_1}) (a_{-\lambda_2}^\dagger + a_{\lambda_2}), \\
&&\hat{H}_4 = \sum_{\lambda \lambda_1 \lambda_2 \lambda_3} H_{\lambda\lambda_1\lambda_2\lambda_3}^{(4)}(a_{-\lambda}^\dagger + a_\lambda) (a_{-\lambda_1}^\dagger + a_{\lambda_1}) (a_{-\lambda_2}^\dagger + a_{\lambda_2}) (a_{-\lambda_3}^\dagger + a_{\lambda_3}),
\end{eqnarray}
respectively. Here $a_\lambda^\dagger$ and $a_\lambda$ are the creation and annihilation operators with $a_\lambda^\dagger|n_\lambda\rangle=\sqrt{n_\lambda+1}|n_\lambda+1\rangle$ and $a_\lambda|n_\lambda\rangle=\sqrt{n_\lambda}|n_\lambda-1\rangle$ respectively. $\omega_\lambda$ is the angular frequency of the phonon mode $\lambda$. The expressions for $H_{\lambda\lambda_1\lambda_2}^{(3)}$ and $H_{\lambda\lambda_1\lambda_2\lambda_3}^{(4)}$ given in Ref.\,\cite{Maradudin1962prb} are
\begin{equation}
H_{\lambda\lambda_1\lambda_2}^{(3)} = \frac{\hbar^{3/2}}{2^{3/2}\times 6N^{1/2}} \Delta_{\mathbf{k}\!+\!\mathbf{k}_1\!+\!\mathbf{k}_2,\mathbf{R}} \frac{V_{\lambda\lambda_1\lambda_2}^{(3)}} {\sqrt{\omega_\lambda\omega_{\lambda_1}\omega_{\lambda_2}}},
\end{equation}
\begin{equation}
H_{\lambda\lambda_1\lambda_2\lambda_3}^{(4)} = \frac{\hbar^{2}}{2^{2}\times 24N} \Delta_{\mathbf{k}\!+\!\mathbf{k}_1\!+\!\mathbf{k}_2\!+\!\mathbf{k}_3,\mathbf{R}} \frac{V_{\lambda\lambda_1\lambda_2\lambda_3}^{(4)}} {\sqrt{\omega_\lambda\omega_{\lambda_1}\omega_{\lambda_2}\omega_{\lambda_3}}},
\end{equation}
\begin{equation}
\label{eq_V3_H}
V_{\lambda\lambda_1\lambda_2}^{(3)}\!=\!\sum_{b\!,l_1b_1\!,l_2b_2}\sum_{\alpha\alpha_1\!\alpha_2}\Phi_{0b\!,l_1b_1\!,l_2b_2}^{\alpha\alpha_1\alpha_2}\frac{e_{\alpha b}^\lambda e_{\alpha_1 b_1}^{\lambda_1} e_{\alpha_2 b_2}^{\lambda_2}}{\sqrt{\bar{m}_b \bar{m}_{b_1} \bar{m}_{b_2}}} e^{ i \mathbf{k_1}\cdot\mathbf{r}_{l_1}} e^{ i \mathbf{k_2}\cdot \mathbf{r}_{l_2}},
\end{equation}
\begin{equation}
\label{eq_V4_H}
V_{\lambda\lambda_1\lambda_2\lambda_3}^{(4)}=\sum_{b,l_1b_1,l_2b_2,l_3b_3}\sum_{\alpha\alpha_1\alpha_2\alpha_3}\Phi_{0b,l_1b_1,l_2b_2,l_3b_3}^{\alpha\alpha_1\alpha_2\alpha_3}\frac{e_{\alpha b}^\lambda e_{\alpha_1 b_1}^{\lambda_1} e_{\alpha_2 b_2}^{\lambda_2} e_{\alpha_3 b_3}^{\lambda_3}}{\sqrt{\bar{m}_b \bar{m}_{b_1} \bar{m}_{b_2} \bar{m}_{b_3}}} e^{ i \mathbf{k_1}\cdot \mathbf{r}_{l_1}} e^{ i \mathbf{k_2}\cdot \mathbf{r}_{l_2}} e^{ i \mathbf{k_3}\cdot \mathbf{r}_{l_3}},
\end{equation}
where $N$ is the total number of $\mathbf{k}$ points. $\mathbf{R}$ is a reciprocal lattice vector. The Kronecker deltas $\Delta_{\mathbf{k}+\mathbf{k}_1+\mathbf{k}_2,\mathbf{R}}$ and $\Delta_{\mathbf{k}+\mathbf{k}_1+\mathbf{k}_2\!+\mathbf{k}_3,\mathbf{R}}$ describe the momentum selection rule and have the property that $\Delta_{m,n}= 1$ (if $m=n$), or 0 (if $m\neq n$). $l$, $b$, and $\alpha$ label the indexes of the unit cells, basis atoms, and ($x$,$y$,$z$) directions, respectively. $\Phi_{0b\!,l_1b_1\!,l_2b_2}^{\alpha\alpha_1\alpha_2}$ and $\Phi_{0b,l_1b_1,l_2b_2,l_3b_3}^{\alpha\alpha_1\alpha_2\alpha_3}$ are the third- and fourth-order force constants, respectively. $e$ is the phonon eigenvector. $\bar{m}_b$ is the average atomic mass at the lattice site $b$.
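The ladder-operator matrix elements quoted above, which generate the occupation factors in the transition probabilities below, can be verified directly in a truncated Fock space; a minimal sketch (pure Python; the truncation size is an illustrative choice):

```python
import math

NMAX = 6  # truncated Fock space spanning |0> ... |NMAX>

def fock(n):
    """Basis vector |n> as a coefficient list."""
    v = [0.0] * (NMAX + 1)
    v[n] = 1.0
    return v

def a_dagger(state):
    """Creation operator: a†|n> = sqrt(n+1) |n+1>."""
    out = [0.0] * (NMAX + 1)
    for n in range(NMAX):
        out[n + 1] += math.sqrt(n + 1.0) * state[n]
    return out

def a_op(state):
    """Annihilation operator: a|n> = sqrt(n) |n-1>."""
    out = [0.0] * (NMAX + 1)
    for n in range(1, NMAX + 1):
        out[n - 1] += math.sqrt(n) * state[n]
    return out

# number operator: a† a |3> = 3 |3>
state = a_dagger(a_op(fock(3)))
```

The squared matrix element $|\langle n+1|a^\dagger|n\rangle|^2 = n+1$ is exactly the origin of the $(1+n)$ factors in the transition probabilities.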
Considering a three-phonon process $\lambda\rightarrow\lambda_1+ \lambda_2$, for example, the initial state is $|i\rangle=|n_\lambda+1,n_{\lambda_1},n_{\lambda_2}\rangle$ and the final state is $|f\rangle=|n_\lambda,n_{\lambda_1}+1,n_{\lambda_2}+1\rangle$. Based on FGR, the transition probability from $|i\rangle$ to $|f\rangle$ is proportional to
\begin{equation}
\frac{2\pi}{\hbar}\left|\langle f|\hat{H}_3|i\rangle\right|^2\delta(E_i-E_f) \sim \left|\sqrt{n_\lambda}\sqrt{1\!+\!n_{\lambda_1}}\sqrt{\!1\!+\!n_{\lambda_2}\!}\right|^2\cdot\left|H_{\lambda\lambda_1\lambda_2}^{(3)}\right|^2 \sim n_\lambda (1\!+\!n_{\lambda_1})(\!1\!+\!n_{\lambda_2}\!)\left|H_{\lambda\lambda_1\lambda_2}^{(3)}\right|^2.
\end{equation}
Similarly the transition probability of the process $\lambda\leftarrow\lambda_1+ \lambda_2$ is proportional to
\begin{equation}
\frac{2\pi}{\hbar}\left|\langle i|\hat{H}_3|f\rangle\right|^2\delta(E_i-E_f) \sim \left|\sqrt{1+n_\lambda}\sqrt{n_{\lambda_1}}\sqrt{\!n_{\lambda_2}\!}\right|^2\cdot\left|H_{\lambda\lambda_1\lambda_2}^{(3)}\right|^2 \sim (\!1\!+\!n_\lambda\!)n_{\lambda_1}n_{\lambda_2}\left|H_{\lambda\lambda_1\lambda_2}^{(3)}\right|^2.
\end{equation}
The time rate of the occupation number change of the mode $\lambda$ due to three-phonon\,\cite{Maradudin1962prb, Feng2014Jn, Klemens_book, Ziman_book, Kaviany_book} and four-phonon scattering (Fig.\,\ref{fig_scattering}) can be written as
\begin{eqnarray}
\label{BTE2}
\frac{\partial n_\lambda}{\partial t} |_s =
&& -\sum_{\lambda_1\lambda_2}\left\{\frac{1}{2}\big[n_\lambda (1\!+\!n_{\lambda_1})(\!1\!+\!n_{\lambda_2}\!)\!-\!(\!1\!+\!n_\lambda\!)n_{\lambda_1}n_{\lambda_2}\big]\mathcal{L}_- \!+\! \big[n_\lambda n_{\lambda_1}(\!1\!+\!n_{\lambda_2}\!)\!-\!(\!1\!+\!n_\lambda\!)(\!1\!+\!n_{\lambda_1}\!)n_{\lambda_2}\big]\mathcal{L}_+\right\} \nonumber \\
& & -\sum_{\lambda_1\lambda_2\lambda_3}\bigg\{ \frac{1}{6}\big[n_\lambda (\!1\!+\!n_{\lambda_1}\!)(\!1\!+\!n_{\lambda_2}\!)(\!1\!+\!n_{\lambda_3}\!)\!-\!(\!1\!+\!n_\lambda\!)n_{\lambda_1}n_{\lambda_2}n_{\lambda_3}\big]\mathcal{L}_{--} \nonumber \\
& & \ \ \ \ \ \ \ \ \ \ \ \ \!+ \frac{1}{2}\big[n_\lambda n_{\lambda_1}(1\!+\!n_{\lambda_2})(\!1\!+\!n_{\lambda_3}\!)\!-\!(\!1\!+\!n_\lambda\!)(\!1\!+\!n_{\lambda_1}\!)n_{\lambda_2}n_{\lambda_3}\big]\mathcal{L}_{+-} \nonumber \\
& & \ \ \ \ \ \ \ \ \ \ \ \ \!+ \frac{1}{2}\big[n_\lambda n_{\lambda_1}n_{\lambda_2}(\!1\!+\!n_{\lambda_3}\!)\!-\!(\!1\!+\!n_\lambda\!)(\!1\!+\!n_{\lambda_1}\!)(1\!+\!n_{\lambda_2})n_{\lambda_3}\big]\mathcal{L}_{++} \bigg\}
\end{eqnarray}
The first summation on the right hand side represents the three-phonon scattering rate of the mode $\lambda$, with the first term accounting for the splitting process $\lambda\rightarrow\lambda_1+ \lambda_2$ and the second the combination process $\lambda+\lambda_1\rightarrow \lambda_2$. The physical meaning of the first term is the difference between the transition rates of $\lambda\rightarrow\lambda_1+ \lambda_2$ and $\lambda\leftarrow\lambda_1+ \lambda_2$, and thus indicates the decay rate of $n_\lambda$ due to the splitting process. Similarly, the second term illustrates the transition rate difference between $\lambda+\lambda_1\rightarrow \lambda_2$ and $\lambda+\lambda_1\leftarrow \lambda_2$, indicating the decay rate of $n_\lambda$ due to the combination process. $\mathcal{L}_\pm$ contains the information of the intrinsic transition probability and the transition selection rules for energy and momentum, $\omega_\lambda\pm\omega_{\lambda_1}-\omega_{\lambda_2}=0$ and $\mathbf{k}\pm\mathbf{k}_1-\mathbf{k}_2=\mathbf{R}$, with $\mathbf{R}=0$ implying the normal ($N$) process and $\mathbf{R}\neq 0$ the Umklapp ($U$) process. The second summation accounts for the four-phonon scattering of the mode $\lambda$, with the first parentheses representing the process $\lambda\rightarrow \lambda_1 + \lambda_2+\lambda_3$, the second the process $\lambda+ \lambda_1 \rightarrow \lambda_2+\lambda_3$, and the third $\lambda+ \lambda_1 + \lambda_2\rightarrow\lambda_3$. Similarly, $\mathcal{L}_{\pm\pm}$ accounts for the transition probabilities and the selection rules $\omega_\lambda\pm\omega_{\lambda_1}\pm\omega_{\lambda_2}-\omega_{\lambda_3}=0$ and $\mathbf{k}\pm\mathbf{k}_1\pm\mathbf{k}_2-\mathbf{k}_3=\mathbf{R}$. The minus sign before each scattering term indicates that the perturbation $n_\lambda'$ to the equilibrium Bose-Einstein distribution $n_\lambda^0$ is decreasing with time, i.e., the phonon distribution tends to recover its equilibrium state, due to the scattering. 
The factors 1/6 and 1/2 in Eq.\,(\ref{BTE2}) account for the sixfold count and double count in the summation, respectively.
In the single mode relaxation time approximation (SMRTA)\,\cite{Kaviany_book, Feng2014Jn}, the mode $\lambda$ is suddenly stimulated to an excited state and has the occupation number
\begin{equation}
n_\lambda = n_\lambda^0+n_\lambda',\label{eq_n}
\end{equation}
while other modes stay in equilibrium, i.e.,
\begin{eqnarray}
\label{n_Iterative}
n_{\lambda_1} &=& n_{\lambda_1}^0, \label{eq_nprime} \\
n_{\lambda_2} &=& n_{\lambda_2}^0, \label{eq_n2prime} \\
n_{\lambda_3} &=& n_{\lambda_3}^0. \label{eq_n3prime}
\end{eqnarray}
By substituting Eqs.\,(\ref{eq_n})-(\ref{eq_n3prime}) into Eq.\,(\ref{BTE2}) and using the fact that
\begin{eqnarray}
\lambda\!\rightarrow\!\lambda_1\!+\!\lambda_2\!&:&
n_\lambda^0(1+n_{\lambda_1}^0)(1+n_{\lambda_2}^0)-(1+n_\lambda^0)n_{\lambda_1}^0 n_{\lambda_2}^0 =0 \label{eq_math_0}\\
\lambda\!+\!\lambda_1\!\rightarrow\!\lambda_2\!&:&
n_\lambda^0n_{\lambda_1}^0(1+n_{\lambda_2}^0) - (1+n_\lambda^0)(1+n_{\lambda_1}^0)n_{\lambda_2}^0 =0 \label{eq_math_2}\\
\lambda\!\rightarrow\!\lambda_1\!+\!\lambda_2\!+\!\lambda_3\!&:&
n_\lambda^0(1+n_{\lambda_1}^0)(1+n_{\lambda_2}^0)(1+n_{\lambda_3}^0)-(1+n_{\lambda}^0)n_{\lambda_1}^0 n_{\lambda_2}^0
n_{\lambda_3}^0 =0\\
\lambda\!+\!\lambda_1\!\rightarrow\!\lambda_2\!+\!\lambda_3\!&:&
n_\lambda^0n_{\lambda_1}^0(1+n_{\lambda_2}^0)(1+n_{\lambda_3}^0)-(1+n_{\lambda}^0)(1+n_{\lambda_1}^0) n_{\lambda_2}^0 n_{\lambda_3}^0 =0\\
\lambda\!+\!\lambda_1\!+\!\lambda_2\!\rightarrow\!\lambda_3\!&:&
n_\lambda^0n_{\lambda_1}^0 n_{\lambda_2}^0(1+n_{\lambda_3}^0)-(1+n_{\lambda}^0)(1+n_{\lambda_1}^0)(1+n_{\lambda_2}^0) n_{\lambda_3}^0 =0
\end{eqnarray}
and the fact
\begin{eqnarray}
\lambda\!\rightarrow\!\lambda_1\!+\!\lambda_2\!&:& (1+n_{\lambda_1}^0)(1+n_{\lambda_2}^0)-n_{\lambda_1}^0 n_{\lambda_2}^0 = \frac{n_{\lambda_1}^0 n_{\lambda_2}^0}{n_{\lambda}^0} = 1+n_{\lambda_1}^0+n_{\lambda_2}^0, \label{eq_math1}\\
\lambda\!+\!\lambda_1\!\rightarrow\!\lambda_2\!&:& n_{\lambda_1}^0(1+n_{\lambda_2}^0) - (1+n_{\lambda_1}^0)n_{\lambda_2}^0 = \frac{(1+n_{\lambda_1}^0)n_{\lambda_2}^0}{n_{\lambda}^0} = n_{\lambda_1}^0- n_{\lambda_2}^0, \label{eq_math2}\\
\lambda\!\rightarrow\!\lambda_1\!+\!\lambda_2\!+\!\lambda_3\!&:&
(1+n_{\lambda_1}^0)(1+n_{\lambda_2}^0)(1+n_{\lambda_3}^0)-n_{\lambda_1}^0 n_{\lambda_2}^0
n_{\lambda_3}^0 = \frac{n_{\lambda_1}^0 n_{\lambda_2}^0 n_{\lambda_3}^0}{n_{\lambda}^0},\label{eq_math3}\\
\lambda\!+\!\lambda_1\!\rightarrow\!\lambda_2\!+\!\lambda_3\!&:&
n_{\lambda_1}^0(1+n_{\lambda_2}^0)(1+n_{\lambda_3}^0)-(1+n_{\lambda_1}^0) n_{\lambda_2}^0 n_{\lambda_3}^0 = \frac{(1+n_{\lambda_1}^0) n_{\lambda_2}^0 n_{\lambda_3}^0}{n_{\lambda}^0}, \label{eq_math4}\\
\lambda\!+\!\lambda_1\!+\!\lambda_2\!\rightarrow\!\lambda_3\!&:&
n_{\lambda_1}^0 n_{\lambda_2}^0(1+n_{\lambda_3}^0)-(1+n_{\lambda_1}^0)(1+n_{\lambda_2}^0) n_{\lambda_3}^0 = \frac{(1+n_{\lambda_1}^0) (1+n_{\lambda_2}^0) n_{\lambda_3}^0}{n_{\lambda}^0},\label{eq_math5}
\end{eqnarray}
Eq.\,(\ref{BTE2}) is reduced to
\begin{eqnarray}
\label{BTE3}
\frac{\partial n_\lambda'}{\partial t} |_s =& & -n_\lambda'\sum_{\lambda_1\lambda_2}\left\{ \frac{1}{2}(1+n_{\lambda_1}^0+n_{\lambda_2}^0)\mathcal{L}_- \!+\! (n_{\lambda_1}^0- n_{\lambda_2}^0)\mathcal{L}_+ \right\} \nonumber\\
& & -n_\lambda'\sum_{\lambda_1\lambda_2\lambda_3}\bigg\{ \frac{1}{6} \frac{n_{\lambda_1}^0 n_{\lambda_2}^0 n_{\lambda_3}^0}{n_{\lambda}^0} \mathcal{L}_{--} \nonumber \!+ \frac{1}{2} \frac{(1+n_{\lambda_1}^0) n_{\lambda_2}^0 n_{\lambda_3}^0}{n_{\lambda}^0} \mathcal{L}_{+-} \nonumber \!+ \frac{1}{2}\frac{(1+n_{\lambda_1}^0) (1+n_{\lambda_2}^0) n_{\lambda_3}^0}{n_{\lambda}^0} \mathcal{L}_{++} \bigg\} \nonumber \\
=& & -n_\lambda'(\tau_{3,\lambda}^{-1}+ \tau_{4,\lambda}^{-1}),
\end{eqnarray}
where $\tau_{3,\lambda}^{-1}$ and $\tau_{4,\lambda}^{-1}$ are
\begin{equation}
\label{tau3}
\tau_{3,\lambda}^{-1}= \sum_{\lambda_1\lambda_2}\left\{ \frac{1}{2}(1+n_{\lambda_1}^0+n_{\lambda_2}^0)\mathcal{L}_- \!+\! (n_{\lambda_1}^0- n_{\lambda_2}^0)\mathcal{L}_+ \right\} ,
\end{equation}
\begin{equation}
\label{tau4}
\tau_{4,\lambda}^{-1}= \sum_{\lambda_1\lambda_2\lambda_3}\bigg\{ \frac{1}{6} \frac{n_{\lambda_1}^0 n_{\lambda_2}^0 n_{\lambda_3}^0}{n_{\lambda}^0} \mathcal{L}_{--} \!+ \frac{1}{2} \frac{(1+n_{\lambda_1}^0) n_{\lambda_2}^0 n_{\lambda_3}^0}{n_{\lambda}^0} \mathcal{L}_{+-} \!+ \frac{1}{2}\frac{(1+n_{\lambda_1}^0) (1+n_{\lambda_2}^0) n_{\lambda_3}^0}{n_{\lambda}^0} \mathcal{L}_{++} \bigg\}.
\end{equation}
Thus, the scattering rate based on the SMRTA is
\begin{equation}
\label{eq_solution_SMRTA}
\tau_\lambda^{-1}= \tau_{3,\lambda}^{-1}+ \tau_{4,\lambda}^{-1}.
\end{equation}
The exact solution to the BTE beyond the SMRTA including four-phonon scattering is quite complicated and will be presented in our subsequent work. Since the focus of this paper is the importance of four-phonon scattering relative to three-phonon scattering, the SMRTA suffices to demonstrate the essential features.
Equations (\ref{eq_math_0})-(\ref{eq_math5}) are derived based on the energy conservation law. For example, Eqs.\,(\ref{eq_math_0}) and (\ref{eq_math1}) are derived by substituting the $\omega$ of the Bose-Einstein distribution $e^{\hbar\omega/k_BT}=1+1/n_\lambda^0$ into the energy conservation (selection rule) $\omega_\lambda=\omega_{\lambda_1}+\omega_{\lambda_2}$, giving the relation $1+1/n_\lambda^0=(1+1/n_{\lambda_1}^0)(1+1/n_{\lambda_2}^0)$.
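These occupation-number identities follow from inserting the Bose--Einstein distribution into the energy selection rule, and are easy to verify numerically; a minimal check in reduced units $\hbar/k_BT=1$ (pure Python, arbitrary frequencies obeying the selection rule):

```python
import math

def n0(omega):
    """Equilibrium Bose-Einstein occupation in reduced units hbar/(kB*T) = 1."""
    return 1.0 / (math.exp(omega) - 1.0)

# frequencies obeying the energy selection rule w = w1 + w2
w1, w2 = 0.7, 1.1
w = w1 + w2
n, n1, n2 = n0(w), n0(w1), n0(w2)

# detailed balance for the splitting process: vanishes identically
detailed_balance = n * (1 + n1) * (1 + n2) - (1 + n) * n1 * n2

# occupation factor of the splitting term, equal to n1*n2/n and to 1 + n1 + n2
lhs = (1 + n1) * (1 + n2) - n1 * n2
```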
The expressions for $\mathcal{L}_\pm$ and $\mathcal{L}_{\pm\pm}$ are given by FGR,
\begin{eqnarray}
\label{Lpm}
\mathcal{L}_\pm&=& 18\times 2\frac{2\pi}{\hbar}\left|H_{\lambda\lambda_1\lambda_2}^{(3)}\right|^2\delta(E_i-E_f)\label{eq_Lpm1}\\ &=&\frac{\pi\hbar}{4N}\!\left|V_{\pm}^{\!(3)\!}\right|^2\! \Delta_\pm \frac{\delta(\omega_\lambda\!\pm\!\omega_{\lambda_1}\!-\!\omega_{\lambda_2})}{\omega_\lambda\omega_{\lambda_1}\omega_{\lambda_2}}\label{Lpm2},
\end{eqnarray}
\begin{eqnarray}
\label{Lpmpm}
\mathcal{L}_{\pm\pm}&=& 96\times 2\frac{2\pi}{\hbar}\left|H_{\lambda\lambda_1\lambda_2\lambda_3}^{(4)}\right|^2\delta(E_i-E_f)\label{eq_Lpmpm1}\\
&=& \frac{\pi\hbar}{4N} \frac{\hbar}{2N} \!\left|V_{\pm\pm}^{\!(4)\!}\right|^2\! \Delta_{\pm\pm} \frac{\delta(\omega_\lambda\pm\omega_{\lambda_1}\pm\omega_{\lambda_2}-\omega_{\lambda_3})}{\omega_\lambda\omega_{\lambda_1}\omega_{\lambda_2}\omega_{\lambda_3}}\label{Lpmpm2},
\end{eqnarray}
where $V_{\pm}^{(3)}$ and $V_{\pm\pm}^{(4)}$ are
\begin{equation}
\label{eq_V3}
V_{\pm}^{(3)}\!=\!\sum_{b\!,l_1b_1\!,l_2b_2}\sum_{\alpha\alpha_1\!\alpha_2}\Phi_{0b\!,l_1b_1\!,l_2b_2}^{\alpha\alpha_1\alpha_2}\frac{e_{\alpha b}^\lambda e_{\alpha_1 b_1}^{\pm\lambda_1} e_{\alpha_2 b_2}^{-\lambda_2}}{\sqrt{\bar{m}_b \bar{m}_{b_1} \bar{m}_{b_2}}} e^{\pm i \mathbf{k_1}\!\cdot \!\mathbf{r}_{l_1}} e^{- i \mathbf{k_2}\!\cdot\! \mathbf{r}_{l_2}},
\end{equation}
\begin{equation}
\label{eq_V4}
V_{\pm\pm}^{(4)}=\sum_{b,l_1b_1,l_2b_2,l_3b_3}\sum_{\alpha\alpha_1\alpha_2\alpha_3}\Phi_{0b,l_1b_1,l_2b_2,l_3b_3}^{\alpha\alpha_1\alpha_2\alpha_3}\frac{e_{\alpha b}^\lambda e_{\alpha_1 b_1}^{\pm\lambda_1} e_{\alpha_2 b_2}^{\pm\lambda_2} e_{\alpha_3 b_3}^{-\lambda_3}}{\sqrt{\bar{m}_b \bar{m}_{b_1} \bar{m}_{b_2} \bar{m}_{b_3}}} e^{\pm i \mathbf{k_1}\cdot \mathbf{r}_{l_1}} e^{\pm i \mathbf{k_2}\cdot \mathbf{r}_{l_2}} e^{- i \mathbf{k_3}\cdot \mathbf{r}_{l_3}}.
\end{equation}
In Eq.\,(\ref{eq_Lpm1}), the factor 18 accounts for the topologically equivalent pairing schemes that were explained in Ref.\,\cite{Maradudin1962prb}. Analogously, the factor 96 in Eq. (\ref{eq_Lpmpm1}) comes from the fact that in Fig.\,5 of Ref.\,\cite{Maradudin1962prb} the phonon $\lambda$ can pair with any of the four phonons at the lower vertex, and the $\mathbf{k}j'$ can pair with any four at the upper vertex, while the three remaining phonons at lower vertex can pair with the three remaining phonons at the upper vertex in six ways. In both Eqs. (\ref{eq_Lpm1}) and (\ref{eq_Lpmpm1}), the factor 2 in front of $\frac{2\pi}{\hbar}$ accounts for the difference between scattering rate and self-energy linewidth. The delta function $\delta(E)$ is replaced by $\delta(\omega)/\hbar$. The Kronecker deltas $\Delta_\pm$ and $\Delta_{\pm\pm}$ are short for $\Delta_{\mathbf{k}\pm\mathbf{k}_1-\mathbf{k}_2,\mathbf{R}}$ and $\Delta_{\mathbf{k}\pm\mathbf{k}_1\pm\mathbf{k}_2\!-\mathbf{k}_3,\mathbf{R}}$, respectively.
\begin{figure}[tbph]
\centering
\includegraphics[width= 3.3in]{4phonon_diag.eps}
\caption{ The sketches of four-phonon scattering processes. (a)-(c) The splitting, redistribution, and combination processes, respectively. Each category contains $N$ processes ($\mathbf{R}=\mathbf{0}$) and $U$ processes ($\mathbf{R}\neq\mathbf{0}$).}\label{fig_scattering}
\end{figure}
\section{Mitigating the computational cost}
We use the central difference method to obtain the second-, third-, and fourth-order IFCs as listed below:
\begin{equation}
\Phi_{at_1,at_2}^{\alpha_1\alpha_2}=\frac{1}{(2\Delta)^2}\sum_{s_1,s_2}^{-1,1}s_1s_2E(r_{at_1}^{\alpha_1}+s_1\Delta,r_{at_2}^{\alpha_2}+s_2\Delta),
\end{equation}
\begin{equation}
\Phi_{at_1,at_2,at_3}^{\alpha_1\alpha_2\alpha_3}=\frac{1}{(2\Delta)^3}\sum_{s_1,s_2,s_3}^{-1,1}s_1s_2s_3E(r_{at_1}^{\alpha_1}+s_1\Delta,r_{at_2}^{\alpha_2}+s_2\Delta,r_{at_3}^{\alpha_3}+s_3\Delta),
\end{equation}
\begin{equation}
\Phi_{at_1,at_2,at_3,at_4}^{\alpha_1\alpha_2\alpha_3\alpha_4}=\frac{1}{(2\Delta)^4}\sum_{s_1,s_2,s_3,s_4}^{-1,1}s_1s_2s_3s_4E(r_{at_1}^{\alpha_1}+s_1\Delta,r_{at_2}^{\alpha_2}+s_2\Delta,r_{at_3}^{\alpha_3}+s_3\Delta,r_{at_4}^{\alpha_4}+s_4\Delta).
\end{equation}
Here $r_{at}^{\alpha}$ represents the $\alpha$ component of the equilibrium position of the atom $at$, $\Delta$ is a small displacement, and $E$ is the energy. In the central difference method, the energy derivatives at the position $\mathbf{r}$ are obtained from the finite differences between $E(r^{\alpha}+\Delta)$ and $E(r^{\alpha}-\Delta)$ ($\alpha=x,y,z$), and thus $s_1$, $s_2$, etc. take the two values ``+1'' and ``-1'', representing the plus and minus signs, respectively. The calculation of the $m$th-order IFCs requires the energies of $(6Nn_b)^m$ atomic configurations.
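As a minimal numerical sketch of the central-difference formulas above (the `energy` callable, the atom indexing, and the harmonic test potential used below are illustrative placeholders, not the interatomic potentials used in this work), the second-order case reads:

```python
import numpy as np

def second_order_ifc(energy, r0, i1, a1, i2, a2, delta=1e-3):
    """Second-order IFC Phi^{a1 a2}_{i1 i2} by central differences.

    energy(positions) -> float is a user-supplied potential-energy
    function, r0 is the (n_atoms, 3) array of equilibrium positions,
    and delta is the small displacement.  Illustrative sketch only.
    """
    total = 0.0
    for s1 in (-1, 1):          # signs of the first displacement
        for s2 in (-1, 1):      # signs of the second displacement
            r = r0.copy()
            r[i1, a1] += s1 * delta
            r[i2, a2] += s2 * delta
            total += s1 * s2 * energy(r)
    return total / (2.0 * delta) ** 2

# Usage with a 1D harmonic dimer E = 0.5*k*(x1-x0)^2: the self term
# Phi_{00}^{xx} evaluates to +k and the cross term Phi_{01}^{xx} to -k.
r0 = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
k_spring = 3.0
E = lambda r: 0.5 * k_spring * (r[1, 0] - r[0, 0]) ** 2
phi_self = second_order_ifc(E, r0, 0, 0, 0, 0)
phi_cross = second_order_ifc(E, r0, 0, 0, 1, 0)
```

The third- and fourth-order IFCs follow the same pattern with three and four nested sign loops, respectively.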
The computational cost and the memory requirement of fourth-order ALD calculations by Eqs.\,(\ref{Lpmpm2}) and (\ref{eq_V4}) are $9NN_cn_b^2$ times those of the third-order calculations by Eqs.\,(\ref{Lpm2}) and (\ref{eq_V3}), where $n_b$ and $N_c$ are the total number of basis atoms in a primitive cell and the total number of primitive cells in the domain, respectively.
To obtain the phonon scattering rates, the computational cost needs to be reduced. In first-principles-based three-phonon scattering calculations, the most time-consuming part is obtaining the IFCs, and symmetries were typically employed to reduce the computational cost\,\cite{Esfar2008prb, Li2012prb}. In contrast, in our classical potential-based four-phonon scattering calculation, the biggest challenge is the calculation of the scattering matrices rather than of the IFCs, since the phase space allows $10^3$-$10^4$ times more four-phonon processes than three-phonon processes for the $\mathbf{k}$ meshes studied in this work. Therefore, the essential tasks are 1) to reduce the number of IFCs, which directly affects the computational cost of each scattering matrix element $V_\pm^{(3)}$ and $V_{\pm\pm}^{(4)}$, and 2) to reduce the dimensions of $V_\pm^{(3)}$ and $V_{\pm\pm}^{(4)}$ by excluding in advance the mode combinations that do not satisfy the momentum and energy selection rules. In the calculation of IFCs, even if every pair of atoms is within the potential cutoff radius, the force constant does not necessarily need to be counted. Typically, the value of a third- or fourth-order IFC does not depend on the choice of the small atomic displacement $\Delta$. If, however, the value of an IFC is found to be near 0 and to depend strongly on the displacement $\Delta$ (e.g., the value varies by orders of magnitude near 0 when $\Delta$ is doubled, and does not converge for any choice of $\Delta$), this IFC is considered negligible compared to the numerical accuracy. By testing a sufficient number of cases and identifying the conditions that produce such infinitesimal IFCs, one can exclude in advance the atomic combinations that satisfy these conditions. To validate the accuracy of the calculation after employing these optimizations, we have compared the three-phonon scattering rates before and after employing them.
No difference was found within the recorded precision for the three-phonon scattering rates, indicating that these optimizations do not sacrifice the calculation accuracy. These optimizations may not be significant for three-phonon scattering calculations, since the total computational cost is relatively low; however, they are essential for making the four-phonon calculations practical. Even after these optimizations, the $V_{\pm\pm}^{(4)}$ matrices for four-phonon processes may still largely exceed the maximum memory of computers. This problem can be solved by separating the calculation into several steps and writing/reading the data to/from files. Last but not least, to predict the thermal conductivity, only the phonon scattering rates in the irreducible BZ need to be calculated.
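The selection-rule pre-screening of task 2) above can be sketched as a simple filter applied to each candidate mode combination before any matrix element is evaluated. The function below, its argument layout, and the default tolerance are illustrative assumptions; a production code would apply such a test while looping over the full $\mathbf{k}$-mesh:

```python
import numpy as np

def allowed_combination(k, k1, k2, k3, w, w1, w2, w3,
                        recips, tol=0.3, sign1=1, sign2=1):
    """Momentum/energy pre-screening for a four-phonon process
    lambda +/- lambda1 +/- lambda2 -> lambda3.

    k, k1, k2, k3: wave vectors in reduced coordinates; w, w1, w2, w3:
    mode frequencies (THz); recips: candidate reciprocal lattice
    vectors R (including 0); tol: energy-conservation tolerance.
    """
    # Momentum selection rule: k +/- k1 +/- k2 - k3 must equal some R.
    dk = k + sign1 * k1 + sign2 * k2 - k3
    momentum_ok = any(np.allclose(dk, R) for R in recips)
    # Energy selection rule within the tolerance window.
    energy_ok = abs(w + sign1 * w1 + sign2 * w2 - w3) < tol
    return momentum_ok and energy_ok
```

Only combinations that pass this test need to enter the $V_{\pm\pm}^{(4)}$ evaluation, which is where the pre-screening pays off.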
For argon, the Lennard-Jones potential\,\cite{Ashcroft_book} with a cutoff radius of 8.5 \AA{} is used to describe the interatomic interaction. The scattering rates are calculated on the mesh of $16\!\times\!16\!\times\!8$ $\mathbf{k}$-points in the Brillouin zone. For diamond, Si, and Ge, the Tersoff potential\,\cite{Tersoff1989} and a $16\!\times\!16\!\times\!16$ $\mathbf{k}$-mesh are used. The details of normal mode analysis and Green-Kubo method based on molecular dynamics are described in Appendix\,\ref{append}.
\section{Results}
\subsection{Benchmark on Lennard-Jones argon: Large four-phonon scattering rates}
Taking bulk argon as a benchmark material, which has been extensively studied\,\cite{Feng2014Jn, McGaughey2004prb, Turney2009prb1, Kab2007jap}, we have calculated the spectral scattering rates $\tau_{3,\lambda}^{-1}$ and $\tau_{4,\lambda}^{-1}$ as shown in Figs.\,\ref{TA_Ar} and\,\ref{fig_argon_comp}. Interestingly, we found that $\tau_{4,\lambda}^{-1}$ is comparable to $\tau_{3,\lambda}^{-1}$ at mid and high temperatures. To benchmark the accuracy of the calculation, we carried out MD simulations and frequency-domain NMA to probe the linewidth $\tau_{{\rm NMA},\lambda}^{-1}$ of the phonon spectral energy density, which includes the total scattering rates of all orders. It can be seen that $\tau_{3,\lambda}^{-1}+\tau_{4,\lambda}^{-1}$ agrees well with $\tau_{{\rm NMA},\lambda}^{-1}$ for both the TA and LA branches throughout the frequency and temperature range as shown in Figs.\,\ref{TA_Ar}(d) and \ref{fig_argon_comp}\,(b). In addition, the values of $\tau_{3,\lambda}^{-1}$ and $\tau_{{\rm NMA},\lambda}^{-1}$ agree well with those predicted by Turney \textit{et al.}\,\cite{Turney2009prb1} using ALD and the time-domain NMA, respectively.
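The frequency-domain NMA benchmark above extracts $\tau_{{\rm NMA},\lambda}^{-1}$ from the linewidth of the spectral energy density (SED) peak of each mode; for a Lorentzian peak the scattering rate is twice the half-width at half maximum. The following is a minimal numerical sketch of that post-processing step, not the exact fitting procedure used for the results here:

```python
import numpy as np

def lorentzian(w, w0, gamma, amp):
    """Single-mode SED model centered at w0 with HWHM gamma."""
    return amp / ((w - w0) ** 2 + gamma ** 2)

def linewidth_from_sed(w, sed):
    """Estimate the half-width at half maximum (HWHM) of an SED peak
    on a frequency grid; the scattering rate is then 2 * HWHM."""
    peak = sed.argmax()
    half = sed[peak] / 2.0
    above = np.where(sed >= half)[0]          # indices above half max
    return 0.5 * (w[above[-1]] - w[above[0]])  # HWHM = FWHM / 2

# Usage: recover the linewidth of a synthetic peak at 10 (HWHM 0.5).
w = np.linspace(0.0, 20.0, 20001)
sed = lorentzian(w, 10.0, 0.5, 1.0)
gamma_est = linewidth_from_sed(w, sed)
tau_inv = 2.0 * gamma_est
```

In practice the SED from MD contains noise, so a least-squares Lorentzian fit is preferable to this direct half-maximum reading.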
\begin{figure}[tphb]
\begin{center}
\includegraphics[width= 0.4\linewidth]{TA_Ar.eps}
\end{center}
\caption{(a) The Brillouin Zone for face-centered cubic structures (Ar, diamond, Si and Ge). (b) Dispersion relation of Ar from $\Gamma$ to X. (c) $\tau_{3,\lambda}^{-1}$ and $\tau_{4,\lambda}^{-1}$ of the TA branch with respect to the reduced wave vector ($\Gamma$-X) in argon at 20, 50, and 80 K, which are represented by different colors. (d) $\tau_{3,\lambda}^{-1}+\tau_{4,\lambda}^{-1}$ is compared to the linewidth $\tau_{{\rm NMA},\lambda}^{-1}$ predicted in frequency-domain NMA based on MD.}\label{TA_Ar}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width= 3.4in]{LA_k_34Ar.eps}
\caption{Phonon scattering rates and thermal conductivity of argon. (a) The $\tau_{3,\lambda}^{-1}$ and $\tau_{4,\lambda}^{-1}$ of the LA branch as a function of the reduced wave vector $\mathbf{k}^*$ from $\Gamma$ to X in argon at 20 K, 50 K and 80 K. (b) The summation of $\tau_{3,\lambda}^{-1}$ and $\tau_{4,\lambda}^{-1}$ is compared to the linewidth $\tau_{{\rm NMA},\lambda}^{-1}$ predicted by frequency-domain NMA based on MD simulations. (c) The relative importance of the four-phonon scattering rates, $\tau_{4,\lambda}^{-1}/\tau_{3,\lambda}^{-1}$, for the LA branch at 20 K, 50 K and 80 K. (d) The temperature dependences $\tau_{3,\lambda}^{-1}\sim T$ and $\tau_{4,\lambda}^{-1}\sim T^2$ for the two modes $\mathbf{k}^*$=(0.5,0,0) and $\mathbf{k}^*$=(0.625,0,0). (e) The $\kappa$ values of argon predicted from $\tau_{3,\lambda}^{-1}$, $\tau_{3,\lambda}^{-1}+\tau_{4,\lambda}^{-1}$, and $\tau_{{\rm NMA},\lambda}^{-1}$ as a function of temperature, with the inset showing the ratio $\kappa_{3+4}/\kappa_{3}$. $\kappa_{\rm NMA}$(Q) and $\kappa_{\rm NMA}$(C) indicate that the specific heat $c_\lambda$ in Eq.\,(\ref{eq_k_BTE}) is calculated with the quantum (Bose-Einstein) and classical (Boltzmann) phonon distributions, respectively. The phonon dispersion used in the calculation of $\kappa_{\rm NMA}$ is from a lattice dynamics (LD) calculation, to be consistent with the $\kappa_3$ and $\kappa_{3+4}$ calculations.}\label{fig_argon_comp}
\end{figure}
The reason why $\tau_{4,\lambda}^{-1}$ is comparable to $\tau_{3,\lambda}^{-1}$ is that, although each four-phonon process is of higher order and thus has a much lower scattering probability, the momentum and energy selection rules allow a much greater number of four-phonon processes. For instance, each phonon mode participates in $\sim 10^3\!-\!10^4$ three-phonon processes but $\sim 10^7$ four-phonon processes for argon on a $16\!\times\!16\!\times\!8$ $\mathbf{k}$-mesh. In Fig.\,\ref{fig_argon_comp}\,(c), we show the relative importance of four-phonon scattering, $\tau_{4,\lambda}^{-1}/\tau_{3,\lambda}^{-1}$, with respect to the reduced wave vector. The mid-frequency LA phonons have the highest $\tau_{4,\lambda}^{-1}$, since all three types of four-phonon processes in Fig.\,\ref{fig_scattering} are allowed to happen.
Another important observation is that at high temperatures $\tau_{3,\lambda}^{-1}$ increases linearly whereas $\tau_{4,\lambda}^{-1}$ increases quadratically with temperature\,\cite{Joshi1970prb}, as shown in Fig.\,\ref{fig_argon_comp}\,(d). These temperature dependences result from Eqs.\,(\ref{tau3}) and (\ref{tau4}), which roughly indicate $\tau_{3,\lambda}^{-1}\sim n^0$ and $\tau_{4,\lambda}^{-1}\sim (n^0)^2$, leading to $\tau_{3,\lambda}^{-1}\sim T$ and $\tau_{4,\lambda}^{-1}\sim T^2$ since $n^0$ is proportional to $T$ at high temperatures.
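The high-temperature limit $n^0\approx k_BT/\hbar\omega$ that underlies these scalings can be checked numerically in a few lines; the sample mode frequency below is arbitrary and chosen only for illustration:

```python
import math

def bose_einstein(omega, T):
    """Equilibrium Bose-Einstein occupation n0 for a mode of angular
    frequency omega (rad/s) at temperature T (K)."""
    hbar = 1.0545718e-34  # J s
    kB = 1.380649e-23     # J / K
    return 1.0 / (math.exp(hbar * omega / (kB * T)) - 1.0)

# At high T, n0 ~ kB*T/(hbar*omega): doubling T roughly doubles n0,
# so a rate ~ n0 scales as T and a rate ~ (n0)^2 scales as T^2.
omega = 1.0e13  # rad/s, illustrative mode frequency
ratio = bose_einstein(omega, 2000.0) / bose_einstein(omega, 1000.0)
```

The residual deviation of `ratio` from 2 measures how far the chosen mode is from the classical limit at these temperatures.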
The importance of four-phonon scattering for the lattice thermal conductivity is studied by calculating $\kappa_3$, $\kappa_{3+4}$, and $\kappa_{\rm NMA}$ based on $\tau_{3,\lambda}^{-1}$, $\tau_{3,\lambda}^{-1}+\tau_{4,\lambda}^{-1}$, and $\tau_{{\rm NMA},\lambda}^{-1}$, respectively. To reduce the computational cost, we use the isotropic assumption by taking the phonon modes in the $\Gamma$-$X$ direction to calculate the thermal conductivity. Equation\,(\ref{eq_k_BTE}) is converted to the continuous form\,\cite{Feng2014Jn} $\kappa_z=\frac{1}{(2\pi)^3}\sum_j\int c_\lambda v_{\lambda,z}^2 \tau_\lambda d\mathbf{k} =\frac{4\pi}{3}\frac{1}{(2\pi)^3}\sum_j \int c_\lambda v_\lambda^2 \tau_\lambda k^2 dk$ by using the facts that $\sum_\mathbf{k}=\frac{V}{(2\pi)^3}\int d\mathbf{k}=\frac{V}{(2\pi)^3}\int 4\pi k^2 dk$ and that the angular average of $|v_{\lambda,z}|^2$ gives $v_\lambda^2/3$. As shown in Fig.\,\ref{fig_argon_comp}\,(e), $\kappa_{3+4}$ and $\kappa_{\rm NMA}$ agree well with each other as well as with the values predicted by Turney \textit{et al.}\,\cite{Turney2009prb1}. In contrast, $\kappa_3$ is considerably over-predicted, especially at high temperatures. For a clearer insight, we plot the ratio $\kappa_{3+4}/\kappa_3$ as a function of temperature in the inset. Four-phonon scattering reduces the $\kappa$ of argon by 35\%-65\% at temperatures of 20-80 K. The results clearly demonstrate the importance of four-phonon scattering for thermal transport in a strongly anharmonic material or at high temperature. We note that $\kappa_{\rm NMA}$ is based on MD simulations, which follow the classical (Boltzmann) distribution, while the ALD calculations are based on the quantum (Bose-Einstein) phonon distribution. Thus, the agreement between $\kappa_{3+4}$ and $\kappa_{\rm NMA}$ is better at higher temperatures, where quantum statistics approach classical statistics.
For low temperatures we did not attempt to replace the Bose-Einstein distribution in the ALD formula with the Boltzmann distribution in order to compare with the MD results, since this approach is not exact (See Sec.\,\ref{Sec_Boltz} for details).
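The continuous isotropic form of $\kappa_z$ above can be evaluated with a simple trapezoidal rule once the per-mode quantities are tabulated along $\Gamma$-X. This is a sketch under the stated isotropic assumption, not the exact implementation used here; the array layout is illustrative:

```python
import numpy as np

def kappa_isotropic(k, c, v, tau):
    """Isotropic estimate kappa = (4*pi/3) (2*pi)^-3 * int(c v^2 tau k^2 dk)
    for one branch, with k, c, v, tau sampled along Gamma-X.
    Sum the result over branches j for the total conductivity."""
    integrand = c * v ** 2 * tau * k ** 2
    # Trapezoidal integration over the radial wave vector.
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(k))
    return (4.0 * np.pi / 3.0) / (2.0 * np.pi) ** 3 * integral

# Usage check with c = v = tau = 1 on k in [0, 1]:
# the integral of k^2 is 1/3, so kappa = 1/(18*pi^2) analytically.
k = np.linspace(0.0, 1.0, 2001)
ones = np.ones_like(k)
val = kappa_isotropic(k, ones, ones, ones)
```

Consistent units for $c_\lambda$, $v_\lambda$, $\tau_\lambda$, and $k$ are of course required to get $\kappa$ in W/(m K).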
\subsection{Significant four-phonon scattering rates in diamond, Si and Ge at high temperatures}
We next study the importance of four-phonon scattering in the less anharmonic materials diamond, silicon, and germanium. As shown in Fig.\,\ref{t_all}, the general temperature dependences $\tau_{3,\lambda}^{-1}\sim T$ and $\tau_{4,\lambda}^{-1}\sim T^2$ are observed for both acoustic and optical phonons in all these materials. In Fig.\,\ref{fig_t43CSiGe}, we choose $T=$ 300 and 1000 K to show the relative importance of four-phonon scattering, $\tau_{4,\lambda}^{-1}/\tau_{3,\lambda}^{-1}$, as a function of the reduced wave vector from $\Gamma$ to $X$. At room temperature, $\tau_{4,\lambda}^{-1}/\tau_{3,\lambda}^{-1}$ for the acoustic branches is roughly below 0.1, confirming the weaker anharmonicity of these three materials compared to argon. As $T$ rises to 1000 K, $\tau_{4,\lambda}^{-1}/\tau_{3,\lambda}^{-1}$ increases to 0.1-1 for most acoustic phonons in silicon and germanium, indicating that four-phonon scattering becomes comparable to three-phonon scattering. Among these materials, which share the same lattice structure, diamond has the strongest bonds and the least anharmonicity, while germanium has the softest bonds and the most anharmonicity. In Figs.\,\ref{t_all} and\,\ref{fig_t43CSiGe}, it is clearly seen that four-phonon scattering is more important for more strongly anharmonic materials and at higher temperatures. In contrast to acoustic phonons, optical phonons typically have much higher four-phonon scattering rates, comparable to three-phonon scattering rates even at low temperatures. The accuracy of the results is demonstrated by the general agreement between $\tau_{3,\lambda}^{-1}+\tau_{4,\lambda}^{-1}$ and $\tau_{{\rm NMA},\lambda}^{-1}$. In Fig.\,\ref{tGe}, we compare $\tau_{3,\lambda}^{-1}+\tau_{4,\lambda}^{-1}$ to $\tau_{{\rm NMA},\lambda}^{-1}$ in Ge at the high temperatures of 800 K and 1200 K. Reasonable agreement is found for the acoustic phonons, considering the uncertainty of MD simulations.
If four-phonon scattering is excluded, no agreement can be achieved. One interesting finding is that $\tau_{3,\lambda}^{-1}+\tau_{4,\lambda}^{-1}$ of optical phonons is typically lower than $\tau_{{\rm NMA},\lambda}^{-1}$, indicating a possibility of high five-phonon scattering rates of optical phonons.
\begin{figure}[tbph]
\includegraphics[width= 0.45\linewidth]{t_all_noPbTe.eps}
\includegraphics[width= 0.45\linewidth]{t_all_log_noPbTe.eps}
\caption{The $\tau_{3,\lambda}^{-1}$ (blue) and $\tau_{4,\lambda}^{-1}$ (red) of all the resolvable modes (excluding the $\Gamma$ point) from $\Gamma$ to $X$ as a function of temperature in diamond, Si and Ge. Each subfigure contains 32 curves. The 16 blue curves are $\tau_{3,\lambda}^{-1}$ for eight longitudinal and eight transverse modes with the reduced wave vectors $\mathbf{k}^*$=($\zeta$/8,0,0), where $\zeta$ is an integer from 1 to 8. The 16 red curves are $\tau_{4,\lambda}^{-1}$ for the same modes. In (b), we show the logarithmic plots corresponding to (a) to demonstrate the power-law dependence of $\tau_{4,\lambda}^{-1}$ on temperature.}
\label{t_all}
\end{figure}
\begin{figure}[tbph]
\centering
\includegraphics[width= 3.5in]{tCSiGe2.eps}
\caption{The ratio $\tau_{4,\lambda}^{-1}/\tau_{3,\lambda}^{-1}$ with respect to the reduced wave vector ($\Gamma$-X) for the TA [blue square], LA [blue circle], TO [red square] and LO [red circle] branches at 300 K and 1000 K in (a,b) diamond, (c,d) silicon and (e,f) germanium. The green dashed lines at $\tau_{4,\lambda}^{-1}/\tau_{3,\lambda}^{-1}$=10\% help to guide the eye.}\label{fig_t43CSiGe}
\end{figure}
\begin{figure}[tbph]
\begin{center}
\includegraphics[width= 0.46\textwidth]{t_Ge.eps}
\end{center}
\caption{The comparison between $\tau_{3,\lambda}^{-1}+\tau_{4,\lambda}^{-1}$ (solid curves) and $\tau_{{\rm NMA},\lambda}^{-1}$ (dashed curves) in Ge as a function of the reduced wave vector ($\Gamma$-$X$) at 800 K (blue) and 1200 K (red), for the TA, LA, LO, and TO branches.}\label{tGe}
\end{figure}
\begin{figure}[tpbh]
\centering
\includegraphics[width= 3.5in]{kCSiGe.eps}
\caption{The lattice thermal conductivity $\kappa$ values of diamond, silicon, and germanium predicted from $\tau_{3,\lambda}^{-1}$, $\tau_{3,\lambda}^{-1}+\tau_{4,\lambda}^{-1}$, $\tau_{{\rm NMA},\lambda}^{-1}$, and the GK method as a function of temperature, with the inset showing the ratio $\kappa_{3+4}/\kappa_{3}$.}\label{fig_CSiGe}
\end{figure}
The lattice thermal conductivities of diamond, silicon and germanium are shown in Fig.\,\ref{fig_CSiGe}. $\kappa_3$ and $\kappa_{3+4}$ match well at low temperatures, indicating that four-phonon scattering is negligible there. At room temperature, $\kappa_{3+4}$ is lower than $\kappa_3$ by 1\%, 8\%, and 15\% for diamond, silicon, and germanium, respectively, as shown in the inset. As the temperature increases to 1000 K, this discrepancy grows to 15\%, 25\%, and 36\%, respectively. The discrepancy between a previously predicted $\kappa_3$ of Si at 1000 K using first principles\,\cite{Esf2011prb} and the experimental value\,\cite{Glass1964PR} is about 27\%, which is consistent with our calculations. These results indicate that even in weakly anharmonic materials, four-phonon scattering may play a critical role at high temperatures. A good agreement between $\kappa_{3+4}$ and $\kappa_{\rm NMA}$ as well as $\kappa_{\rm GK(MD)}$ is found for silicon and germanium at high temperatures in Fig.\,\ref{fig_CSiGe}. The corresponding comparison for diamond is not made since diamond has a high Debye temperature ($\sim$2200 K), below which the $\kappa_{3+4}$ obtained from quantum mechanics is not comparable to the $\kappa_{\rm NMA}$ and $\kappa_{\rm GK(MD)}$ from classical MD. Since we use empirical interatomic potentials, which are approximations to the true atomic interactions, the numbers presented here should be understood with caution or on a semi-quantitative basis.
\section{Discussion}
\subsection{Issue in the Boltzmann distribution-based ALD formula}
\label{Sec_Boltz}
In Sec.\,\ref{Sec_formula}, the ALD formulas are derived using the Bose-Einstein distribution, starting from Eq.\,(\ref{eq_math_0}). In the following, we derive the ALD formula based on the Boltzmann distribution, taking three-phonon scattering as an example. Equations\,(\ref{eq_math_0}) and (\ref{eq_math1}), i.e., the relation for $\lambda\!\rightarrow\!\lambda_1\!+\!\lambda_2$: $1+\frac{1}{n_\lambda^0}=(1+\frac{1}{n_{\lambda_1}^0})(1+\frac{1}{n_{\lambda_2}^0})$, become
\begin{equation}
\label{eq_Boltzmann1}
\lambda\!\rightarrow\!\lambda_1\!+\!\lambda_2\!: \frac{1}{n_\lambda^0}=\frac{1}{n_{\lambda_1}^0}+\frac{1}{n_{\lambda_2}^0}.
\end{equation}
Equations\,(\ref{eq_math_2}) and (\ref{eq_math2}), i.e., the relation for $\lambda\!+\lambda_1\!\rightarrow\!\lambda_2$: $(1+\frac{1}{n_\lambda^0})(1+\frac{1}{n_{\lambda_1}^0})=1+\frac{1}{n_{\lambda_2}^0}$, become
\begin{equation}
\label{eq_Boltzmann2}
\lambda\!+\lambda_1\!\rightarrow\!\lambda_2\!: \frac{1}{n_\lambda^0}+\frac{1}{n_{\lambda_1}^0}=\frac{1}{n_{\lambda_2}^0}.
\end{equation}
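As a short check, these relations follow directly if the classical occupation is taken in its equipartition form $n_\lambda^0=k_BT/\hbar\omega_\lambda$ (the high-temperature limit of the Bose-Einstein distribution); for the splitting process with $\omega_\lambda=\omega_{\lambda_1}+\omega_{\lambda_2}$,

```latex
\frac{1}{n_\lambda^0}=\frac{\hbar\omega_\lambda}{k_B T}
  =\frac{\hbar(\omega_{\lambda_1}+\omega_{\lambda_2})}{k_B T}
  =\frac{1}{n_{\lambda_1}^0}+\frac{1}{n_{\lambda_2}^0},
```

which is Eq.\,(\ref{eq_Boltzmann1}); Eq.\,(\ref{eq_Boltzmann2}) follows analogously from $\omega_\lambda+\omega_{\lambda_1}=\omega_{\lambda_2}$.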
By substituting Eqs.\,(\ref{eq_n})-(\ref{eq_n3prime}) into Eq.\,(\ref{BTE2}) and using the relations in Eqs.\,(\ref{eq_Boltzmann1}) and (\ref{eq_Boltzmann2}), we obtain
\begin{equation}
\label{BTE3_B}
\frac{\partial n_\lambda'}{\partial t} |_s = -\sum_{\lambda_1\lambda_2}\left\{ \frac{1}{2}\left[(1+n_{\lambda_1}^0+n_{\lambda_2}^0)n_\lambda'+n_\lambda^0\right]\mathcal{L}_- \!+\! \left[(n_{\lambda_1}^0- n_{\lambda_2}^0)n_\lambda'-n_{\lambda_2}^0\right]\mathcal{L}_+ \right\}.
\end{equation}
This equation contains two additional constant terms, $+n_\lambda^0$ and $-n_{\lambda_2}^0$, in the brackets, compared to Eq.\,(\ref{BTE3}) based on the Bose-Einstein distribution. These constant terms cause the decay of the perturbation $n_\lambda'$ to be non-exponential, and thus an exact relaxation time cannot be well defined, unless the two terms can be neglected or cancel each other during the summation over $\lambda_1,\lambda_2$. However, we have found that they are not negligible, and the cancellation is not guaranteed. For example, if $\lambda$ is an optical phonon near the $\Gamma$ point, the right-hand side of Eq.\,(\ref{BTE3_B}) only contains the first half (the splitting process).
Therefore, directly employing the Boltzmann occupation number in the Bose-Einstein distribution-based ALD formula is not an exact approach to capturing the classical effect: the Boltzmann distribution-based ALD formula cannot well define a phonon relaxation time.
\subsection{Role of normal processes} For some materials an appropriate handling of $U$ and $N$ processes is important for the prediction of $\kappa$\,\cite{Omini1995pb}. For example, when $N$ processes dominate, the scattering does not introduce thermal resistance directly, and an exact solution of the linearized BTE is required beyond the SMRTA\,\cite{Omini1995pb, Broido2007apl}. Such physics has been found to be important for three-phonon scattering in graphene\,\cite{Lindsay2010SLG}, where the Umklapp percentage $\tau_{U}^{-1}\%$ of the total scattering rate is very low. In this work, however, the SMRTA is still valid for the phonon transport in argon, silicon, and germanium, since $\tau_{U}^{-1}\%$, especially for the acoustic phonons that dominate the lattice thermal conductivity, is not low for either three-phonon scattering\,\cite{Ward2010prb} or four-phonon scattering. For the latter, $\tau_{U}^{-1}\%$ increases monotonically with increasing temperature and wave vector, as shown in Fig.\,\ref{U_all}, where we plot the percentage of Umklapp scattering in the total scattering rates for three-phonon processes, $\tau_{3,\lambda,U}^{-1}/\tau_{3,\lambda}^{-1}$, and four-phonon processes, $\tau_{4,\lambda,U}^{-1}/\tau_{4,\lambda}^{-1}$. For both acoustic and optical phonons, $\tau_U^{-1}$\% in three- and four-phonon scattering increases monotonically with increasing temperature. For acoustic phonons, three- and four-phonon processes have similar $\tau_U^{-1}$\%, whereas for optical branches, four-phonon scattering has a much higher $\tau_U^{-1}$\% than three-phonon scattering.
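The $N$/$U$ classification reduces to checking whether the momentum balance of a process requires a nonzero reciprocal lattice vector $\mathbf{R}$. A minimal sketch in reduced (fractional) coordinates, assuming the momentum selection rule already holds so that $\mathbf{R}$ has integer components:

```python
import numpy as np

def is_umklapp(k, k1, k2, k3, sign1=1, sign2=1, tol=1e-8):
    """Classify a momentum-conserving four-phonon process
    lambda +/- lambda1 +/- lambda2 -> lambda3 as Normal (R = 0)
    or Umklapp (R != 0).  Wave vectors are in reduced coordinates;
    illustrative helper, assuming the selection rule is satisfied."""
    R = k + sign1 * k1 + sign2 * k2 - k3
    return not np.allclose(R, np.zeros(3), atol=tol)

# Usage: a combination whose wave vectors sum outside the first BZ
# needs R = (1,0,0) and is therefore a U process.
u_case = is_umklapp(np.array([0.5, 0.0, 0.0]), np.array([0.5, 0.0, 0.0]),
                    np.array([0.25, 0.0, 0.0]), np.array([0.25, 0.0, 0.0]))
n_case = is_umklapp(np.array([0.25, 0.0, 0.0]), np.array([0.1, 0.0, 0.0]),
                    np.array([0.05, 0.0, 0.0]), np.array([0.4, 0.0, 0.0]))
```

Accumulating the rates of the two classes separately yields the $\tau_{U}^{-1}\%$ curves of Fig.\,\ref{U_all}.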
\begin{figure}[tphb]
\centering
\includegraphics[width= 3.3in]{Ar_U.eps}
\includegraphics[width= 3.3in]{Si_U.eps}
\caption{(a) Ar and (b) Si are taken as examples to show the percentage of $U$ processes in three-phonon (solid curves) and four-phonon (dashed curves) scattering. The wavevector is along $\Gamma$-$X$. The temperature is 10-80 K for argon, and 100-1200 K for Si. The temperatures from low to high are represented by the colors from red to purple, and the corresponding curves are from bottom to top.}\label{U_all}
\end{figure}
\subsection{Negligible three-phonon scattering to the second order}
We note that two three-phonon processes, $\lambda_1+\lambda_2\rightarrow\lambda'$ and $\lambda'\rightarrow\lambda_3+\lambda_4$, may be combined to give three-phonon scattering to the second order, which is another type of fourth-order process\,\cite{Ziman_book}, as shown in Fig.\,\ref{fig_3_2nd}\,(b). Here $\lambda'$ is an intermediate virtual state. The energy is conserved from the initial state $\lambda_1+\lambda_2$ to the final state $\lambda_3+\lambda_4$, while it is not necessarily conserved in the first step or in the second step alone\,\cite{Ziman_book}. The energy denominators of three-phonon scattering
\begin{equation}
\frac{\langle i|\hat{H}_3|f\rangle}{|E_i-E_f|}
\label{eq_H3_transition}
\end{equation}
and four-phonon scattering
\begin{equation}
\frac{\langle i|\hat{H}_4|f\rangle}{|E_i-E_f|}
\label{eq_H4_transition}
\end{equation}
vanish due to the energy conservation law $E_i=E_f$. Here $|i\rangle$ and $|f\rangle$ represent the initial and final states, respectively. For example, in the three-phonon scattering process $\lambda_1+\lambda_2\rightarrow\lambda_3$, $|i\rangle$ represents the state $|n_{\lambda_1}+1,n_{\lambda_2}+1,n_{\lambda_3}\rangle$, and $|f\rangle$ represents the state $|n_{\lambda_1},n_{\lambda_2},n_{\lambda_3}+1\rangle$. In contrast to Eqs.\,(\ref{eq_H3_transition}) and (\ref{eq_H4_transition}), the transition matrix element in the combined three-phonon process is
\begin{equation}
\frac{\langle i|\hat{H}_3|vir\rangle\langle vir|\hat{H}_3|f\rangle}{|E_i-E_{vir}|}.
\label{eq_H32_transition}
\end{equation}
The discussion of the denominator in Eq.\,(\ref{eq_H32_transition}) can be divided into two cases. In Case 1, the energy is not conserved in the first or the second step alone\,\cite{Ziman_book}, so the energy denominator for the transition is not small, and the transition rate is therefore not large, as discussed in Ref.\,\cite{Carruthers1962pr1}. In Case 2, the energy conservation condition for the first step is nearly or exactly satisfied. This process was named ``the resonance in three-phonon scattering'' and discussed by Carruthers\,\cite{Carruthers1962pr1}. In this case, although the scattering is of the same order as the intrinsic four-phonon scattering, the number of scattering events that satisfy the energy and momentum selection rules is only $10^{-3}$-$10^{-5}$ of that of the intrinsic four-phonon scattering in our study. This is because the resonant three-phonon scattering has the strong requirement that the intermediate state be an existing phonon mode on the $\mathbf{k}$-mesh, while the intrinsic four-phonon scattering has no such requirement. For example, for Si with a $16\times16\times16$ $\mathbf{k}$-mesh and an energy conservation tolerance of 1.24 meV (0.3 THz), the TA mode at $\mathbf{k}^*=(0.5,0,0)$ has 4.6$\times10^{7}$ intrinsic four-phonon events but only 2.7$\times10^4$ resonant three-phonon events. For the TA mode at $\mathbf{k}^*=(0.625,0,0)$, the number of intrinsic four-phonon events is similarly about 4.6$\times10^{7}$, while the number of resonant three-phonon events is only 36. Therefore, the overall second-order three-phonon scattering rate is negligible compared to the intrinsic four-phonon scattering and is not considered in our work. Nevertheless, it is definitely worth a quantitative study in the future.
\begin{figure}[htpb]
\begin{center}
\includegraphics[width= 0.5\linewidth]{3phonon_diag_second_order.eps}
\end{center}
\caption{The diagram examples for the comparison between (a) the intrinsic four-phonon scattering and (b) the three-phonon scattering to the second order.}\label{fig_3_2nd}
\end{figure}
\section{Conclusions}
To conclude, a rigorous and direct method to calculate four-phonon scattering rates $\tau_{4,\lambda}^{-1}$ within the ALD framework has been developed. We have obtained $\tau_{4,\lambda}^{-1}$ by explicitly determining the quantum mechanical scattering probability matrices for the full Brillouin zone. By investigating bulk argon, diamond, silicon, and germanium, we have found the key features of four-phonon scattering: (1) $\tau_{4,\lambda}^{-1}$ increases quadratically with temperature, one order higher than $\tau_{3,\lambda}^{-1}$; (2) $\tau_{4,\lambda}^{-1}$ is more important in more strongly anharmonic bulk materials; (3) for optical phonons, fourth- and higher-order phonon scattering is much more important than three-phonon scattering even at low temperature, a finding that could also be important in studies of optical properties, electron-phonon coupling, photovoltaics, etc.\,\cite{Bao2012jqsrt}; (4) the relative ratio of the Umklapp scattering rate in four-phonon processes is generally comparable to or even larger than that in three-phonon processes; and (5) three-phonon scattering to the second order is negligible compared to the intrinsic four-phonon process, although they are of the same perturbation order. In particular, $\tau_{4,\lambda}^{-1}$ can reduce the thermal conductivity of Si by $\sim$25\% at 1000 K. The existing practice of ALD is limited to three-phonon scattering, so its accuracy is only guaranteed at relatively low temperatures. By including $\tau_{4,\lambda}^{-1}$ with our approach, ALD becomes applicable at both low and high temperatures.
\section*{ACKNOWLEDGMENTS}
We appreciate the help from Alan J. H. McGaughey, Lucas Lindsay, and Christopher A. Robinson for proofreading our manuscript. Simulations were performed at the Rosen Center for Advanced Computing (RCAC) of Purdue University. The work was partially supported by the National Science Foundation (Award No. 1150948) and the Defense Advanced Research Projects Agency (Award No. HR0011-15-2-0037).
\section{Introduction}
\IEEEPARstart{D}{uring} the months between October 2015 and March 2016, Greece witnessed an unprecedented influx of refugees across its sea borders with Turkey. Tens or even hundreds of boats, each carrying 50 people or more, were landing on a daily basis throughout the Greek islands closest to the Turkish coastline. One of the most important factors in forecasting the influx rate within a short time frame of 24-48 hours is the weather conditions in these sea passages, specifically wind intensity and direction, as these are directly associated with the severity of the sea state and, hence, the danger involved in the crossing. The most lethal events of boats capsizing and sinking happened mostly on days and nights when winds were growing in power, or shortly after steady strong winds had already produced high waves in these areas.
Unfortunately, weather conditions are only one of the factors affecting the forecasting of influx rates. Compared to longer trips, e.g. between Libya and Italy, the case of Greece involves a relatively short trip of 5-7 n.m. and a couple of hours at most for a fully loaded boat with a working engine. This means that even in bad weather, some groups of refugees dared to make the crossing (sometimes forced to by the smugglers), even in very risky conditions. The result was 3,771 registered deaths and many more missing from hundreds of capsizing and sinking events during 2015 alone. According to the United Nations High Commissioner for Refugees (UNHCR) \cite{UNHCR-data3-url}, the International Organization for Migration (IOM) \cite{IOM-data-url} and the Medecins Sans Frontieres (MSF) \cite{MSF-rep-url}, during 2015 more than a million people reached Europe from Turkey and North Africa, seeking safety and asylum.
In order to better organize Search \& Rescue (SAR) sea operations
and logistical support for the humanitarian relief teams at the first
reception islands, it is crucial that some level of influx forecasting
is available. Indeed, previous works have shown that this is a realistic
goal that can be addressed with data-driven methods in the short-term
\cite{influxsmug_2016,influxanal_arxiv_2016,influxanal_safeevros_2016},
regardless of the more general factors and political decisions that
affect the refugee flows in the long-term. Hence, it is extremely
important to have appropriate tools for up-to-date weather forecasting
that use local sources and weather stations, specifically for wind speed and direction at these sea passages. The idea is to have these forecasts available as inputs for the influx prediction modules, together with proper time-series analysis of both the weather data and the influx data itself.
It should be noted that the alternative of using full weather data
from sources like the Greek Meteorological Service (EMY) or NOAA is
good for postmortem analysis and training datasets, but inappropriate
in actual deployment mode for three main reasons: (a) these data are usually massive and small mobile devices cannot parse them; (b) they require reliable Internet access for large downloads; and (c) they are available mostly as forecasts 3-6 hours beforehand, rather than as real-time data from local weather stations. Therefore, having simple
prediction models for winds that can run fast and offline is extremely
useful if they are to be deployed in actual SAR operations.
In this study, five areas of first-reception refugee influx `hotspots'
are used as the baseline for simple predictive modeling in the Aegean
Sea. More specifically, local weather stations in the islands of Lesvos
(Petra and Thermi), Chios, Samos and Kos, the five most active sea
passages and landing areas during the specific high-peak period, are
used as input data for analyzing the statistics of winds (average
speed, gust, direction). Furthermore, these five input sources are
used in designing simple linear regressors for covering other intermediate
areas of interest via cross-site interpolation. Experimental results
are presented separately for each location, as well as test cases
of two linear regressors, one for near-spot and one for far-spot wind
prediction.
\section{Material and Data Overview}
\subsection{Weather data series}
The data used in this study are a subset of the 2015-2016 weather
data feeds from Meteo.gr, an open-access Internet portal \cite{Meteo-data-url}
maintained by the National Observatory of Greece for providing weather
data from ground stations. These data were collected and aggregated
in the special dataset GR-RWL1-O15J16 \cite{AegeanW-dataset-url},
already used in other works for training predictive models that combine
localized weather with refugee influx time series.
As mentioned above, five locations with local weather stations were
used as the testbed for this study, specifically in the islands of
Chios, Kos, Lesvos (Petra and Thermi) and Samos. These were the five
most active sea passages and landing areas during the specific high-peak
period (1-Oct-2015 to 31-Jan-2016), associated with more than 85\%
of the total influx in the Aegean Sea, and they are used as input
data for analyzing the statistics of winds, namely the daily values
of average speed, gust and direction. Figure \ref{fig:Weather-locations-Greece}
shows the islands and the approximate locations (center of circle)
of the local weather stations used as data feeds in this study.
\begin{figure*}[tbph]
\begin{centering}
\textsf{\includegraphics[width=17cm]{Greece-central-AegeanSea-map2marked}}
\par\end{centering}
\caption{\label{fig:Weather-locations-Greece}Five locations of refugee sea
passages in the Aegean Sea islands, with local weather stations and
detailed wind data (Oct.2015-Jan.2016): (1) Chios, (2) Kos, (3) Lesvos/Petra,
(4) Lesvos/Thermi, (5) Samos.}
\end{figure*}
\subsection{Software packages and hardware}
The main software packages that were used in this study were:
\begin{itemize}
\item Mathworks MATLAB v8.6 (R2015b), including: Signal Processing Toolbox,
System Identification Toolbox, Statistics \& Machine Learning Toolbox
\cite{Matlab-url}.
\item Additional toolboxes for MATLAB (own \& third-party) for specific
algorithms, as referenced later on in the corresponding sections.
\item WEKA v3.7.13 (x64). Open-Source Machine Learning Suite \cite{Weka-url}.
\item Spreadsheet applications: Microsoft Excel 2007, LibreOffice Sheet
5.1 (x64).
\item Custom-built programming tools in Java and C/C++ for data manipulation
(import/export).
\end{itemize}
The data experiments and processing were conducted using: (a) Intel
i7 quad-core @ 2.0 GHz / 8 GB RAM / MS-Windows 8.1 (x64), and (b)
Intel Atom N270 dual-core @ 1.6 GHz / 2 GB RAM / Ubuntu Linux 16.04
LTS (x32).
\section{Methods and Algorithms}
The statistical and frequency properties of the daily arrivals data
series were analyzed via pairwise correlation and full system identification,
specifically by Auto-Regressive Moving Average (ARMA) approximations,
as described below.
\subsection{Auto-correlation analysis}
Pairwise correlation produces a quantitative metric for the statistical
dependencies between values of two data series at different lags.
In the case when a single data series is compared to itself, the \emph{auto-correlation}
corresponds to the statistical dependencies between subsequent values
of the same series \cite{Hsu-SigSys-1995,Hamming-filters-1989}. Hence,
value pairs with high correlation correspond to regular patterns in
the series, i.e., encode periodicity at smaller or larger scales,
according to the selected lag:
\begin{equation}
R_{yy}\left(t_{1},t_{2}\right)=R\left(k\right)=\frac{\mathrm{E}\left[\left(y\left(t\right)-\mu_{t}\right)\cdot\left(y\left(t+k\right)-\mu_{t+k}\right)\right]}{\sigma_{t}\cdot\sigma_{t+k}}\label{eq:autocorr-def}
\end{equation}
where $y\left(t\right)$ is the time series, $\mu_{t}$ and $\sigma_{t}$
are the mean and standard deviation, $k$ is the lag and $R\left(k\right)$
is the corresponding auto-correlation vector of length $2k+1$. In
this study, the wind variables of average speed and gust were analyzed
via auto-correlation with a lag limit of $k\pm122$ against the current
day, i.e., within the full 123-days time frame available (four months).
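As a concrete illustration of Eq.\ref{eq:autocorr-def}, the normalized auto-correlation over the full $\pm122$-day lag range can be sketched as follows in Python/NumPy. The series here is synthetic (a noisy weekly-periodic signal standing in for a wind data series), not the actual station data:

```python
import numpy as np

def autocorr(y, max_lag):
    """Normalized auto-correlation R(k) for lags -max_lag..+max_lag:
    zero-mean products, normalized so that the zero-lag peak R(0) = 1."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    denom = np.dot(y, y)                      # normalizes R(0) to 1
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.array([np.dot(y[:len(y) - abs(k)], y[abs(k):]) / denom
                  for k in lags])
    return lags, r

# Synthetic 123-day series with a weekly component plus noise
rng = np.random.default_rng(0)
t = np.arange(123)
speed = 10 + 3 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 1, size=123)
lags, r = autocorr(speed, 122)               # full +/-122-day lag range
```

The weekly component shows up as secondary peaks at lags that are multiples of 7, while the noise pulls the plot away from the ideal triangular shape discussed later.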
\subsection{ARMA system identification}
More generic and powerful than auto-correlation or standard linear
regression alone, the \emph{Auto-Regressive Moving Average} (ARMA)
model is the standard approach for describing any linear digital filter
or signal generator in the time domain. It is essentially a combination
of an auto-correlation component that relates the current outputs
to previous ones and a smoothing component that averages the inputs
over a fixed-size window.
The typical linear ARMA model is described as \cite{Hamming-filters-1989,Porat-signals-1994,Ther-RndSig-1992,Han-ARMA-1980}:
\begin{equation}
A_{m}\left(z\right)\ast\overrightarrow{y}\left(t\right)=B_{k}\left(z\right)\ast\overrightarrow{u}\left(t\right)+e\left(t\right)\label{eq:ARMA-generic}
\end{equation}
where $\overrightarrow{u}\left(t\right)$ is the input vector
of size $k$ at time step $t$, $\overrightarrow{y}\left(t\right)$
is the output vector of size $m$ (i.e., the current plus the $m-1$
previous ones), $B_{k}\left(z\right)$ is the convolution kernel for
the inputs, $A_{m}\left(z\right)$ is the convolution kernel for the
outputs and $e\left(t\right)$ is the residual model error. Normally,
$A_{m}\left(z\right)$ and $B_{k}\left(z\right)$ are vectors of scalar
coefficients that can be fixed, if the model is static, or variable,
if the model is adaptive (constantly ``retrained'').
Both coefficient vectors, as well as their sizes, are subject to optimization
of the model design according to some criterion, which typically is
the minimization of the residual error $e\left(t\right)$. In practice,
this is defined as $e\left(t\right)=\left\Vert \hat{y}\left(t\right)-y\left(t\right)\right\Vert _{2}^{2}$,
where $\left\Vert .\right\Vert _{2}$ is the standard Euclidean norm,
$\hat{y}\left(t\right)$ is the ARMA-approximated output and $y\left(t\right)$
is the true (measured) process output. The sizes $m$ and $k$ are
the \emph{orders} of the model and they are usually estimated either
by information-theoretic algorithms \cite{Hamming-filters-1989,Porat-signals-1994,Han-ARMA-1980}
or by exploiting known properties (if any) of the generating process,
e.g. with regard to its periodicity. Such a model is described as
ARMA($m$,$k$), where AR($m$) is the auto-regressive component and
MA($k$) is the moving-average component.
In approximation form, expanding the convolutions and estimating the
current output $\hat{y}\left(t\right)$, the analytical form of Eq.\ref{eq:ARMA-generic}
is:
\begin{equation}
\hat{y}\left(t\right)=\sum_{i=1}^{m}\left(a_{i}\cdot y\left(t-i\right)\right)+\sum_{j=0}^{k}\left(b_{j}\cdot u\left(t-j\right)\right)+e\left(t\right)\label{eq:ARMA-analytical}
\end{equation}
The error term $e\left(t\right)$ in Eq.\ref{eq:ARMA-analytical}
can also be expanded to multiple terms of a separate convolution kernel,
similarly to $A_{m}\left(z\right)$ and $B_{k}\left(z\right)$, but
it is most commonly grouped into one scalar factor, i.e., with an
order of one. In such cases, the model can be described as ARMA($m$,$k$,$q$)
where $q>1$ is the order of the convolution kernel for the error
term.
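In code, the analytical form of Eq.\ref{eq:ARMA-analytical} reduces to a pair of dot products over the recent history. The sketch below (Python/NumPy) uses illustrative coefficients, not values estimated from the wind data:

```python
import numpy as np

def arma_predict(a, b, y_hist, u_hist):
    """One-step ARMA(m,k) prediction in the analytical form:
    y_hat(t) = sum_{i=1..m} a_i * y(t-i) + sum_{j=0..k} b_j * u(t-j).
    y_hist = [y(t-1), ..., y(t-m)]; u_hist = [u(t), u(t-1), ..., u(t-k)]."""
    return float(np.dot(a, y_hist) + np.dot(b, u_hist))

# Illustrative AR(2)+MA(1) model:
# y_hat = 0.5*y(t-1) + 0.2*y(t-2) + 0.3*u(t) + 0.1*u(t-1)
y_hat = arma_predict([0.5, 0.2], [0.3, 0.1],
                     y_hist=[4.0, 2.0], u_hist=[1.0, 3.0])
# 0.5*4 + 0.2*2 + 0.3*1 + 0.1*3 = 3.0
```

Note that system-identification tools often store $A_{m}\left(z\right)$ with a leading 1 and opposite-signed AR terms; the plain textbook form of the equation is used here.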
When applied to a signal generated by a process of unknown statistical
properties, an ARMA approximation of it reveals a variety of important
properties regarding this process. In practice, the (estimated) order
$m$ of the AR component shows how strong the statistical coupling
is between subsequent outputs, while the order $k$ of the MA component
shows the ``memory'' of the process, i.e., how far in the past inputs
the process ``sees'' in order to produce the current output.
\subsection{Interpolation for missing values}
In the selected time frame, the weather dataset for Samos contained
one day of missing data (10-Jan-2016) due to maintenance of the weather
station there. In order to fill-in the missing data, interpolation
was applied separately for each target wind variable, i.e., average
speed, gust and direction.
More specifically, moving averages of two and four points (missing
point in the middle) were tested, as well as cubic spline interpolation
(QS) \cite{Schaum-MathTabl-2012}, as shown in Table \ref{tab:Samos-missing-value}
for the `average wind speed' parameter. The results from leave-one-out
cross-validation tests across the entire Samos data series per-dimension
confirmed, as expected, that QS is the most resilient to interpolation
errors (RMSE), compared to the MA(2) and MA(4) methods. Moreover,
using more than two points immediately adjacent to the missing one
produces slightly larger interpolation error when MA is employed.
\begin{table}[htbp]
\caption{\label{tab:Samos-missing-value}Interpolation values for missing data
(10-Jan-2016) and leave-one-out cross-validation error over the entire
Samos `average wind speed' data series (123 days).}
\centering{
\begin{tabular}{|c|c|c|c|}
\hline
Wind Speed (km/h) & Method & Value & RMSE\tabularnewline
\hline
\hline
\multirow{3}{*}{average} & MA(2) & 3.00 & 3.25\tabularnewline
\cline{2-4}
& MA(4) & 3.85 & 3.52\tabularnewline
\cline{2-4}
& \textbf{QS} & 2.57 & \textbf{2.97}\tabularnewline
\hline
\multirow{3}{*}{gust} & MA(2) & 25.75 & 9.81\tabularnewline
\cline{2-4}
& MA(4) & 35.80 & 10.67\tabularnewline
\cline{2-4}
& \textbf{QS} & 21.73 & \textbf{9.33}\tabularnewline
\hline
\end{tabular}
\end{table}
These results hint that there may be large short-term fluctuations
present in the data series and a higher-order polynomial, QS or other,
has to be employed for accurate interpolation of the missing values.
This is also evident by the fact that the actual interpolated values
from MA(2) in Table \ref{tab:Samos-missing-value} are much closer
to the QS values, measured as overall the most accurate w.r.t. RMSE
(see Eq.\ref{eq:rmse-def}), than the ones produced by MA(4). Hence,
using the simple rule-of-thumb that assumes only small changes from
the previous 24-hour time frame seems to be unsafe in practice for
accurate forecasting.
The QS interpolation used in this case throughout the study is expected
to be adequately accurate, namely $\pm3$ km/h for average wind speed
and about $\pm9.3$ km/h for wind gust, which for the wind scale
for Samos (see next section) translates to less than $\pm1$B for
both `average' and `gust' parameters. Therefore, this single-day interpolated
instance in the Samos data series can be considered as non-intrusive
for the inherent statistics and the validity of the main experimental
protocol.
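The three interpolation candidates of Table \ref{tab:Samos-missing-value} can be sketched as below (Python/NumPy). A local cubic fit through the four nearest days stands in for the QS spline used in the study, and the series is synthetic rather than the Samos data:

```python
import numpy as np

def fill_missing(days, values, missing_day, method="cubic"):
    """Estimate one missing daily value via MA(2), MA(4), or a local cubic
    fit through the four nearest days (a stand-in for QS interpolation)."""
    days = np.asarray(days, float)
    values = np.asarray(values, float)
    order = np.argsort(np.abs(days - missing_day))
    if method == "ma2":                  # two immediately adjacent days
        return float(values[order[:2]].mean())
    if method == "ma4":                  # four nearest days
        return float(values[order[:4]].mean())
    idx = np.sort(order[:4])             # four nearest days, in time order
    coef = np.polyfit(days[idx], values[idx], 3)
    return float(np.polyval(coef, missing_day))

days = np.array([1, 2, 3, 4, 6, 7, 8, 9])   # day 5 missing
vals = np.sin(days / 3.0)                   # smooth synthetic "wind speed"
est_cubic = fill_missing(days, vals, 5, "cubic")
est_ma2 = fill_missing(days, vals, 5, "ma2")
```

On a smooth series like this one, the cubic estimate lands much closer to the true value than MA(2), mirroring the RMSE ranking in the table.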
\section{Experiments and Results}
The following subsections present the analysis and modeling of the
wind data series for the five sites, including (a) statistics per
location, (b) correlation analysis and (c) ARMA for predictive modeling.
\subsection{Statistics per location}
The following figures present the statistics of wind direction, average
speed and gusts for the five sites. Figures \ref{fig:Chios-wdir},
\ref{fig:Kos-wdir}, \ref{fig:LesvosP-wdir}, \ref{fig:LesvosT-wdir}
and \ref{fig:Samos-wdir} illustrate the normalized polar histograms
of dominant winds (daily average). Figures \ref{fig:Chios-wavg},
\ref{fig:Kos-wavg}, \ref{fig:LesvosP-wavg}, \ref{fig:LesvosT-wavg}
and \ref{fig:Samos-wavg} illustrate the normalized histograms of
the average speed. Figures \ref{fig:Chios-wgust}, \ref{fig:Kos-wgust},
\ref{fig:LesvosP-wgust}, \ref{fig:LesvosT-wgust} and \ref{fig:Samos-wgust}
illustrate the normalized histograms of gusts. All statistics are for
the time frame used in this study, i.e., 1-Oct-2015 to 31-Jan-2016
(123 days).
\subsection{Correlation analysis }
The following figures present the pairwise auto-correlation analysis
of wind average speed and gust for the five sites. In each plot, the
entire dates range is used (1-Oct-2015 to 31-Jan-2016) in one-step
lags. The `urand' diagonal (dotted) is displayed as a guideline for
comparison against a Gaussian random walk with mean and standard deviation
same as the corresponding gust data series.
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[width=8.5cm]{Chios-WindsDir-polar}}
\par\end{centering}
\caption{\label{fig:Chios-wdir}Normalized polar histogram of dominant winds
(daily average) for Chios.}
\end{figure}
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[width=8.5cm]{Chios-WindsSpeed-histo}}
\par\end{centering}
\caption{\label{fig:Chios-wavg}Normalized histogram of average wind speed
for Chios.}
\end{figure}
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[width=8.5cm]{Chios-GustWindsSpeed-histo}}
\par\end{centering}
\caption{\label{fig:Chios-wgust}Normalized histogram of wind gusts for Chios.}
\end{figure}
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[width=8.5cm]{Kos-WindsDir-polar}}
\par\end{centering}
\caption{\label{fig:Kos-wdir}Normalized polar histogram of dominant winds
(daily average) for Kos.}
\end{figure}
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[width=8.5cm]{Kos-WindsSpeed-histo}}
\par\end{centering}
\caption{\label{fig:Kos-wavg}Normalized histogram of average wind speed for
Kos.}
\end{figure}
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[width=8.5cm]{Kos-GustWindsSpeed-histo}}
\par\end{centering}
\caption{\label{fig:Kos-wgust}Normalized histogram of wind gusts for Kos.}
\end{figure}
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[width=8.5cm]{LesvosPetra-WindsDir-polar}}
\par\end{centering}
\caption{\label{fig:LesvosP-wdir}Normalized polar histogram of dominant winds
(daily average) for Lesvos/Petra.}
\end{figure}
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[width=8.5cm]{LesvosPetra-WindsSpeed-histo}}
\par\end{centering}
\caption{\label{fig:LesvosP-wavg}Normalized histogram of average wind speed
for Lesvos/Petra.}
\end{figure}
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[width=8.5cm]{LesvosPetra-GustWindsSpeed-histo}}
\par\end{centering}
\caption{\label{fig:LesvosP-wgust}Normalized histogram of wind gusts for Lesvos/Petra.}
\end{figure}
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[width=8.5cm]{LesvosThermi-WindsDir-polar}}
\par\end{centering}
\caption{\label{fig:LesvosT-wdir}Normalized polar histogram of dominant winds
(daily average) for Lesvos/Thermi.}
\end{figure}
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[width=8.5cm]{LesvosThermi-WindsSpeed-histo}}
\par\end{centering}
\caption{\label{fig:LesvosT-wavg}Normalized histogram of average wind speed
for Lesvos/Thermi.}
\end{figure}
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[width=8.5cm]{LesvosThermi-GustWindsSpeed-histo}}
\par\end{centering}
\caption{\label{fig:LesvosT-wgust}Normalized histogram of wind gusts for Lesvos/Thermi.}
\end{figure}
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[width=8.5cm]{Samos-WindsDir-polar}}
\par\end{centering}
\caption{\label{fig:Samos-wdir}Normalized polar histogram of dominant winds
(daily average) for Samos.}
\end{figure}
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[width=8.5cm]{Samos-WindsSpeed-histo}}
\par\end{centering}
\caption{\label{fig:Samos-wavg}Normalized histogram of average wind speed
for Samos.}
\end{figure}
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[width=8.5cm]{Samos-GustWindsSpeed-histo}}
\par\end{centering}
\caption{\label{fig:Samos-wgust}Normalized histogram of wind gusts for Samos.}
\end{figure}
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[width=8.5cm]{Chios-WindsAutocorr}}
\par\end{centering}
\caption{\label{fig:Chios-winds-autocorr}Auto-correlation plot (normalized)
of wind average and gust for Chios. }
\end{figure}
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[width=8.5cm]{Kos-WindsAutocorr}}
\par\end{centering}
\caption{\label{fig:Kos-winds-autocorr}Auto-correlation plot (normalized)
of wind average and gust for Kos.}
\end{figure}
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[width=8.5cm]{LesvosP-WindsAutocorr}}
\par\end{centering}
\caption{\label{fig:LesvosP-winds-autocorr}Auto-correlation plot (normalized)
of wind average and gust for Lesvos/Petra.}
\end{figure}
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[width=8.5cm]{LesvosT-WindsAutocorr}}
\par\end{centering}
\caption{\label{fig:LesvosT-winds-autocorr}Auto-correlation plot (normalized)
of wind average and gust for Lesvos/Thermi.}
\end{figure}
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[width=8.5cm]{Samos-WindsAutocorr}}
\par\end{centering}
\caption{\label{fig:Samos-winds-autocorr}Auto-correlation plot (normalized)
of wind average and gust for Samos.}
\end{figure}
Table \ref{tab:Winds-corr} presents the pairwise correlations between
the three wind data series per site. Statistically significant correlations
are marked in bold for $a\leq0.05$ and in plain type for $a\leq0.1$;
italics indicate a non-significant result. The same notation is used
in Table \ref{tab:Winds-loc-corr},
presenting the cross-site correlations for wind average speed.
\begin{table}[htbp]
\caption{\label{tab:Winds-corr}Correlations between wind variables for all
five sites (see text for details).}
\centering{
\begin{tabular}{|c|c|c|c|}
\hline
Site & avg/gust & avg/dir & gust/dir\tabularnewline
\hline
\hline
\multirow{1}{*}{Chios} & \textbf{0.775} & 0.519 & \emph{0.086}\tabularnewline
\hline
\multirow{1}{*}{Kos} & \textbf{0.642} & \emph{0.410} & \textbf{0.685}\tabularnewline
\hline
\multirow{1}{*}{Lesvos/P} & \textbf{0.856} & \emph{0.022} & \emph{0.319}\tabularnewline
\hline
\multirow{1}{*}{Lesvos/T} & \textbf{0.739} & \textbf{0.837} & \emph{0.452}\tabularnewline
\hline
\multirow{1}{*}{Samos} & \textbf{0.830} & \emph{0.027} & \emph{0.176}\tabularnewline
\hline
\end{tabular}
\end{table}
\begin{table}[htbp]
\caption{\label{tab:Winds-loc-corr}Cross-site correlations for wind average
speed (see text for details).}
\centering{
\begin{tabular}{|c|c|c|c|c|}
\hline
Chios & Kos & Lesvos/P & Lesvos/T & Samos\tabularnewline
\hline
\hline
1 & \textbf{0.250} & \textbf{0.636} & \textbf{0.884} & 0.168\tabularnewline
\hline
& 1 & \textbf{0.277} & \textbf{0.225} & \textbf{0.216}\tabularnewline
\hline
& & 1 & \textbf{0.672} & 0.166\tabularnewline
\hline
& & & 1 & \textbf{0.204}\tabularnewline
\hline
& & & & 1\tabularnewline
\hline
\end{tabular}
\end{table}
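The bold/plain/italic significance annotation used in Tables \ref{tab:Winds-corr} and \ref{tab:Winds-loc-corr} can be reproduced with a plain Pearson $t$-test. The sketch below (Python/NumPy) is illustrative, not the code used in the study; the two-sided critical $t$-values for $df\approx121$ are approximate, taken from standard tables:

```python
import numpy as np

def pearson_r(x, y):
    """Plain Pearson correlation coefficient of two equal-length series."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    return float(np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y)))

def significance(r, n):
    """Classify a correlation as in the tables, via the t-statistic
    t = |r| * sqrt((n - 2) / (1 - r^2)); critical values ~1.98 (a=0.05)
    and ~1.66 (a=0.10) for df ~ 121 are approximate, from t-tables."""
    t = abs(r) * np.sqrt((n - 2) / (1.0 - r * r))
    if t >= 1.98:
        return "a<=0.05"   # bold in the tables
    if t >= 1.66:
        return "a<=0.10"   # plain type
    return "n.s."          # italics
```

With $n=123$ daily values, a Chios--Kos correlation of 0.250 clears the $a\leq0.05$ threshold while the Chios--Samos value of 0.168 falls in the $a\leq0.10$ band, consistent with the typographic marking in Table \ref{tab:Winds-loc-corr}.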
\subsection{ARMA for predictive modeling}
As described above, ARMA can provide model identification for analysis
and/or prediction. In this study, various AR, MA and ARMA designs
were employed for the three wind variables (speed average, gust, direction)
with a focus on forecasting future values from a historic time frame.
More specifically, in the case of AR a sequence of previous data points
from the \emph{same} series was used to predict future values; in
the case of MA a sequence of previous \emph{and current} data points
from the corresponding series of \emph{other} sites was used to predict
future values; and in the case of ARMA these two approaches are combined.
In practice, the MA part functions as input aggregator, i.e., regression
against the other sites, and the AR part functions as output filter,
i.e., regression against the same site's history.
Figure \ref{fig:ARX-Kos-4sites-wavg} illustrates the results of ARMA
predictive modeling for Kos' wind average speed data series, for various
AR and MA configurations. Even when using an AR component of order
7 or 10 and a MA component of 5 or 10 (historic values from the four
other sites), no model seems to converge accurately in terms of prediction.
In contrast, Figure \ref{fig:ARX-LesvosT-4sites-wavg} illustrates
how a similar ARMA(7,5) model achieves a very efficient approximation
for the Lesvos/Thermi site.
\begin{figure*}[tbph]
\begin{centering}
\textsf{\includegraphics[width=17cm]{Pred_MA-Kos-4sites_fixed}}
\par\end{centering}
\caption{\label{fig:ARX-Kos-4sites-wavg}ARMA(\emph{m},\emph{k}) predictive
modeling for Kos' wind average speed data series (see text for details).}
\end{figure*}
\begin{figure*}[tbph]
\begin{centering}
\textsf{\includegraphics[width=17cm]{Pred_MA-LesvosThermi-4sites_fixed}}
\par\end{centering}
\caption{\label{fig:ARX-LesvosT-4sites-wavg}ARMA(\emph{m},\emph{k}) predictive
modeling for Lesvos/Thermi's wind average speed data series (see text
for details).}
\end{figure*}
The fitness $F$ is a measure of accuracy of the approximation model
and for a specific data series is defined analytically as:
\begin{equation}
F=100\cdot\left(1-\frac{\left\Vert y-\hat{y}\right\Vert _{2}}{\left\Vert y-\bar{y}\right\Vert _{2}}\right)\label{eq:matlab-fitness}
\end{equation}
where $y$ is the true data point, $\hat{y}$ is the model's approximation,
$\bar{y}$ is the data series mean and $\left\Vert .\right\Vert _{2}$
is the standard Euclidean norm. For comparison, RMSE is defined as:
\begin{equation}
err\left(y,\hat{y}\right)_{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y-\hat{y}\right)^{2}}\label{eq:rmse-def}
\end{equation}
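Both metrics translate directly into code; a minimal Python/NumPy rendering of Eqs.\ref{eq:matlab-fitness} and \ref{eq:rmse-def}, with an illustrative series:

```python
import numpy as np

def fitness(y, y_hat):
    """Fit percentage F = 100 * (1 - ||y - y_hat||_2 / ||y - mean(y)||_2)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return 100.0 * (1.0 - np.linalg.norm(y - y_hat)
                    / np.linalg.norm(y - y.mean()))

def rmse(y, y_hat):
    """Root mean squared error between measured and approximated series."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

y = np.array([10.0, 12.0, 9.0, 11.0])        # illustrative measured series
f_perfect = fitness(y, y)                    # perfect model -> 100
f_mean = fitness(y, np.full(4, y.mean()))    # mean-only model -> 0
```

By construction, $F=100\%$ for a perfect approximation, while a model that only predicts the series mean scores $F=0\%$; negative values indicate a model worse than the mean.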
From Figure \ref{fig:ARX-LesvosT-4sites-wavg} it is clearly evident
that the ARMA(7,5) approximation is very efficient and successfully
tracks the wind average speed data series very closely, using 7 previous
values from the same site and 4 previous plus the current one from
the other four sites. In other words, the model implements a spatio-temporal
linear regressor with a total of 27 free parameters (trained) to make
a 1-day look-ahead prediction. The exact performance of the model
is $F=62.29(\%)$, FPE=5.064 (Akaike's final prediction error \cite{Akaike_FPE_1999})
and RMSE=1.884 (km/h). The trained ARMA(7,5) model is described below:
\begin{equation}
\begin{array}{cc}
A_{7}\left(z\right) & =1-0.125\cdot z^{-1}-0.081\cdot z^{-2}-0.199\cdot z^{-3}\\
& +0.231\cdot z^{-4}+0.036\cdot z^{-5}\\
& -0.059\cdot z^{-6}+0.019\cdot z^{-7}
\end{array}\label{eq:ARMA750-Avec}
\end{equation}
and:
\begin{equation}
\begin{array}{cc}
B_{5,1}\left(z\right) & =0.780-0.105\cdot z^{-1}-0.111\cdot z^{-2}\\
& -0.039\cdot z^{-3}+0.217\cdot z^{-4}
\end{array}\label{eq:ARMA750-Bvec1}
\end{equation}
\begin{equation}
\begin{array}{cc}
B_{5,2}\left(z\right) & =0.333-0.186\cdot z^{-1}+0.179\cdot z^{-2}\\
& -0.121\cdot z^{-3}-0.034\cdot z^{-4}
\end{array}\label{eq:ARMA750-Bvec2}
\end{equation}
\begin{equation}
\begin{array}{cc}
B_{5,3}\left(z\right) & =0.083+0.029\cdot z^{-1}+0.033\cdot z^{-2}\\
& -0.028\cdot z^{-3}-0.008\cdot z^{-4}
\end{array}\label{eq:ARMA750-Bvec3}
\end{equation}
\begin{equation}
\begin{array}{cc}
B_{5,5}\left(z\right) & =0.020-0.069\cdot z^{-1}+0.002\cdot z^{-2}\\
& +0.058\cdot z^{-3}-0.029\cdot z^{-4}
\end{array}\label{eq:ARMA750-Bvec5}
\end{equation}
In these polynomials, $z^{-n}$ is the delay factor of the kernel,
as described by the analytical form of Eq.\ref{eq:ARMA-analytical}.
Hence, the coefficient of $z^{-n}$ in $A_{7}\left(z\right)$ is essentially
$a_{n}$, i.e., the magnitude for the auto-regressive factor (output)
$n$ days back. The second index $k$ in $B_{5,k}\left(z\right)$
refers to the corresponding site, in the same order as presented above.
Since Lesvos/Thermi ($k=4$) is the target site, all the other sites
are associated to their corresponding polynomials, i.e., Chios with
Eq.\ref{eq:ARMA750-Bvec1}, Kos with Eq.\ref{eq:ARMA750-Bvec2}, Lesvos/Petra
with Eq.\ref{eq:ARMA750-Bvec3} and Samos with Eq.\ref{eq:ARMA750-Bvec5}.
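To make the polynomials concrete, the 1-day look-ahead at Lesvos/Thermi can be sketched as below (Python/NumPy). One hedge is needed: this assumes the standard system-identification convention $A_{7}\left(z\right)\ast y=\sum_{k}B_{5,k}\left(z\right)\ast u_{k}$, under which the AR coefficients enter the prediction with opposite sign. The history values are placeholders, not the actual station data:

```python
import numpy as np

# Coefficients of z^-1..z^-7 in A7(z) (leading 1 omitted) and of
# z^0..z^-4 in each B_{5,k}(z), transcribed from the trained model.
A = np.array([-0.125, -0.081, -0.199, 0.231, 0.036, -0.059, 0.019])
B = {
    "Chios":        np.array([0.780, -0.105, -0.111, -0.039,  0.217]),
    "Kos":          np.array([0.333, -0.186,  0.179, -0.121, -0.034]),
    "Lesvos/Petra": np.array([0.083,  0.029,  0.033, -0.028, -0.008]),
    "Samos":        np.array([0.020, -0.069,  0.002,  0.058, -0.029]),
}

def predict_thermi(y_hist, u_hist):
    """1-day look-ahead wind average speed at Lesvos/Thermi.
    y_hist: [y(t-1),...,y(t-7)] for Thermi; u_hist: site -> [u(t),...,u(t-4)].
    From A(z)*y = sum_k B_k(z)*u_k: y(t) = -A_tail.y_hist + sum_k B_k.u_k."""
    return float(-np.dot(A, y_hist)
                 + sum(np.dot(B[s], u_hist[s]) for s in B))

# Placeholder histories (km/h) just to exercise the arithmetic
y_hist = np.full(7, 10.0)
u_hist = {s: np.full(5, 10.0) for s in B}
pred = predict_thermi(y_hist, u_hist)
```

This makes the model's footprint explicit: 27 trained scalars and a handful of vector operations per forecast, which is what allows it to run offline on low-end hardware.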
\section{Discussion}
The methods presented in this study fall under the general concept
of regression, i.e., using available values from single or combined
spatio-temporal data series to predict their future evolution. In
this sense, even the missing data interpolation described earlier
for Samos could be used as a very simple and fast approach to do this
in practice. However, it is clear that this is not the optimal approach
in terms of accuracy, as the results from ARMA illustrate later on.
Thus, it is imperative to conduct in-depth statistical and correlation
analysis in each data series separately and in combination, in order
to estimate their spatio-temporal dependencies and inherent complexity,
which are to be used as guidelines for the design of efficient ARMA
or other similar model approximations.
The polar histograms of wind speeds reveal that there are clearly
dominant wind directions in all five sites: for Chios it is North,
for Kos it is West, for Lesvos/Petra it is South-East, for Lesvos/Thermi
it is West/North-West and for Samos it is South/South-East. In the
three latter cases the evidence is very strong, showing a very compact
peak towards the source of the dominant wind direction. This is extremely
useful when the wind models are to be associated statistically with
corresponding sea condition models, for example the average wave height,
since the local geographical morphology becomes more or less irrelevant
in these approximations.
In all the sites, the histograms of the wind average speeds present
an almost-Gaussian probability distribution function (pdf) and, thus,
the corresponding mean, standard deviation, etc, for each case can
be considered statistically valid. On the other hand, the wind gusts
at all sites are highly skewed if the same wind speed margins are used.
These pdfs may also be approximated by Gaussians, but further investigation
is required in terms of descriptive statistics and parametric distributions,
as it is clearly evident that they include multiple peaks, large `flat'
ranges, etc.
The auto-correlation plots reveal the inherent dependencies between
subsequent values in the data series, separated by specific time lags.
In general, it is expected that non-random signals present a triangular-shaped
plot centered at the zero-lag peak, which exhibits the maximum auto-correlation
value. The larger the deviation from this triangular shape, the larger
the stochastic factors in the signal. In other words, for a perfectly
random signal of pure white noise, the corresponding plot should be
`flat' instead of triangular-shaped. In the plots illustrated here,
the wind average speeds in the five sites present smaller or larger
stochastic properties. In particular, Samos seems to be the most unpredictable
case, with large drops immediately before/after the zero-lag peak,
as well as multiple small peaks in other lag values. On the other
hand, Kos seems to be the most deterministic case, as the plot is
almost perfectly triangular-shaped. It is expected that data series
with large deviations from the `ideal' triangular shape, if approximated
by linear models like ARMA, will require larger convolution kernels,
i.e., higher-order AR components, in order to capture longer historic
sequences.
The correlations between the wind variables, illustrated in Table
\ref{tab:Winds-corr}, reveal that there is verified statistical dependency
($a\leq0.05$) between average speed and gusts, as expected, in all
five sites. Furthermore, there is strong dependency ($a\leq0.05$)
between dominant wind direction and (a) average speed in Lesvos/Thermi
and (b) gust in Kos, as well as (c) average speed in Chios at a lower
level ($a\leq0.10$). In these cases, it is expected that corresponding
sea condition models, especially average wave height and direction,
will be more accurate and useful than in the other sites.
Furthermore, the cross-correlation results, illustrated in Table \ref{tab:Winds-loc-corr},
reveal that there is a strong association ($a\leq0.05$) between the
sites that are geographically adjacent or adequately near to each
other. In particular, the average wind speed at Chios is strongly
correlated with the two Lesvos sites, especially the one at Thermi.
Looking at the map in Figure \ref{fig:Weather-locations-Greece},
it is clear why: the two islands are near to each other with open
sea between them, especially between Lesvos/Thermi and Chios, which
exhibits the largest cross-correlation value in the Table. There is
also a very strong correlation between the two sites at Lesvos, as
expected. Nevertheless, the link between the two Lesvos sites is still
weaker than the one between Lesvos/Thermi and Chios, probably because
in the first case there is a large geographical obstacle between them
- the large mountain mass of the island of Lesvos itself. It is worth noticing
that the distance across the two sites at Lesvos is about 34 km, while
the distance across the Lesvos/Thermi and Chios sites is about 92
km, more than 2.7 times larger, but entirely over open sea or flat
land. These quantified observations are extremely important when designing
cross-site interpolation models, as only this kind of statistical
analysis can reveal such unexpected and counter-intuitive results
regarding the weight that needs to be assigned to each spatial data
node.
The results from the ARMA model examples more or less confirm the
conclusions drawn from the cross-correlations between sites. The average
wind speed at Kos is difficult to predict from the values of the other
four sites, as Figure \ref{fig:ARX-Kos-4sites-wavg} illustrates,
because their cross-correlations are marginally around 0.25 even for
the closest one (Samos). This example was selected specifically to
show that the most remote site in terms of spatial distribution is
the most difficult to forecast via linear predictive modeling like
ARMA. There is still some correlation present between the sites over
large geographical areas, primarily because the Aegean Sea is enclosed
from three sides and spans over a basin of roughly 280 x 500 km (main
area). This makes it very rare to have drastically different wind
conditions over its islands, except in areas with specific geographical
features, as in the case of the narrow `closed' sea passage north
of the Lesvos/Petra site, compared to the more `open' passage east
of the Lesvos/Thermi site.
The ARMA(7,5) model detailed for the Lesvos/Thermi site is a typical
example of how spatio-temporal linear regressors can achieve very
accurate predictions in the short term. In particular, the average
wind speed that is modeled can be approximated with an expected error
(RMSE) of less than $\pm1.9$ km/h, which is almost 37\% better than
the error achieved by the cubic spline interpolation (QS) as described
for the Samos data series, despite the fact that in the second case
the interpolation is for an intermediate point (i.e., both sides bounded
by true values) instead of 1-day look-ahead extrapolation (i.e., only
one side bounded). Given the fact that such ARMA models are very simple
to implement, based on vector operations over 30 or less parameters
(trained), it is clearly evident that they can be extremely useful
when analytical weather forecasts like NOAA or computationally intensive
simulation-based models cannot be used in practice, e.g. as part of
a mobile or web application.
\section{Conclusion}
In this study, the three main wind variables (average speed, gust,
direction) were investigated for five main locations that have been
the most active `hotspots' in terms of refugee influx in the Aegean
Sea islands during the Oct/2015 - Jan/2016 period.
The analysis of the three-per-site data series included standard statistical
analysis and parametric distributions (Gaussians), auto-correlation
analysis, as well as cross-correlation analysis between the sites.
Various ARMA models were designed and trained in order to estimate
the feasibility and accuracy of such spatio-temporal linear regressors
for predictive analytics, compared also with standard moving average
and cubic spline interpolation used for missing values.
The results demonstrated that such data-driven statistical approaches are
extremely useful in identifying unexpected and sometimes counter-intuitive
associations between the available spatial data nodes. It is worth
noting that such discoveries verify and quantify important semantic
information that is related to special geographical features, such
as narrow sea passages and large obstacles to wind flows. This is
very important when designing corresponding models for short-term
forecasting of sea conditions, especially average wave height and direction,
which is in fact what defines the associated weather risk of crossing
these passages in refugee influx patterns.
\appendices{}
\bibliographystyle{IEEEtran}
\section{Introduction}
The Harris criterion \cite{Harris} addresses the stability of criticality against weak randomness in an average sense. Consider a spin system with a critical temperature $T_{c}$, reduced by disorder. Since disorder breaks translational symmetry, it is natural to consider an average local critical temperature $\langle T_{c}(\bm{x}) \rangle$ in a correlated volume $\xi^{d}$, where $\xi$ is the correlation length associated with the criticality and $d$ is the spatial dimension. When the variation of the average local critical temperature in the correlated volume, $\Delta \langle T_{c}(\bm{x}) \rangle \sim \xi^{- d / 2}$, is smaller than the distance from the global ordering temperature, $t \sim \xi^{-1 / \nu}$ with the correlation-length critical exponent $\nu$, the Harris criterion $\Delta \langle T_{c}(\bm{x}) \rangle < t$, i.e., $d \nu > 2$, tells us that the nature of the clean critical point is stable against weak randomness. When the Harris criterion is violated, i.e., $d \nu \leq 2$, disorder becomes relevant at the critical point and is expected to change the nature of the clean critical point. If the resulting fixed-point value of disorder turns out to be finite, the Harris criterion is fulfilled with a modified correlation-length critical exponent $\nu'$ at such a disordered critical point. In particular, if the strength of randomness continues to increase toward infinity, referred to as an infinite randomness fixed point, the resulting disorder physics is governed by extreme inhomogeneity of the system \cite{Vojta_Review,Vlad_Review}, and the average description is no longer meaningful. In this situation, local ``ordering'' is allowed even though macroscopic coherence is prohibited; such a locally ordered region is called a rare region. 
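As a concrete check of the criterion, one can insert standard exponent values; the $\nu$ values below are well-known numerical estimates quoted here only for illustration.

```python
# Harris-criterion check: weak disorder is irrelevant when d*nu > 2.
# The nu values are standard literature estimates, quoted for illustration.
examples = {
    "3D Ising (nu ~ 0.630)": (3, 0.630),
    "3D Heisenberg (nu ~ 0.711)": (3, 0.711),
    "2D Ising (nu = 1)": (2, 1.0),
}
for name, (d, nu) in examples.items():
    verdict = "irrelevant" if d * nu > 2 else "relevant (criterion violated)"
    print(f"{name}: d*nu = {d * nu:.2f} -> disorder {verdict}")
```

For the three-dimensional Ising universality class, $d \nu \approx 1.89 < 2$, so even weak randomness is a relevant perturbation, while the three-dimensional Heisenberg class with $d \nu \approx 2.13$ is Harris-stable.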
``Rare'' in the name originates from the exponentially low probability of finding such a region, given by $p_{rr}(L) \sim \exp ( - c L^{d} )$ with a positive numerical constant $c$, where $L$ is the length scale of the region and the exponent scales with the volume of the region. Such a rare region behaves as a super spin whose dynamics is extremely slow; it is responsible for a singularity in the free energy and thus dominates thermodynamics, referred to as the Griffiths singularity \cite{Griffiths,McCoy_Wu}. As a result, rare region effects dominate the critical physics not only at but also near the strong disorder critical point, in a regime called the Griffiths phase.
Such rare region effects can be described by an averaged susceptibility $\chi_{av} \sim \int_{0}^{L_{sys}} d L \, p_{rr}(L) \chi_{rr}(L)$, where $\chi_{rr}(L)$ is the rare region susceptibility. Based on the rare region susceptibility, rare region effects have been classified into three categories \cite{Vojta_Review,Vlad_Review}. Class A: The rare region susceptibility is given by $\chi_{rr}(L) \sim L^{a}$. As a result, the Griffiths singularity is weak, essentially unobservable. Class B: The rare region susceptibility is given by $\chi_{rr}(L) \sim \exp(a L^{d})$ with a positive numerical constant $a$. As a result, the averaged susceptibility diverges inside the Griffiths phase. This happens when the rare region lies at the lower critical dimension, prohibiting the rare region from static ordering and promoting quantum tunneling between degenerate ordered states. Class C: When the rare region is above the lower critical dimension, the rare region susceptibility already diverges at finite size $L$, meaning that such a rare region undergoes a phase transition toward an ordered state at least locally and tunneling events are suppressed. As a result, the rare regions become randomly frozen and the phase transition is smeared \cite{Smeared}. This argument based on the lower critical dimension of the rare region serves as a useful criterion in the case of strong disorder. However, it is not straightforward to obtain quantitative predictions within this criterion, in particular at finite temperatures.
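The distinction between these classes can be made quantitative by evaluating the averaged susceptibility numerically. The sketch below, with illustrative constants $c$, $a$ and $d = 3$ (not values taken from any particular system), shows that the class-A average saturates with system size while the class-B average grows without bound once $a > c$.

```python
# chi_av ~ int_0^{L_sys} dL p_rr(L) chi_rr(L) for rare-region classes A and B.
# Constants are illustrative; class B diverges with L_sys when a > c.
import numpy as np
from scipy.integrate import quad

d, c, a = 3, 1.0, 1.2

def chi_av(chi_rr, L_sys):
    val, _ = quad(lambda L: np.exp(-c * L**d) * chi_rr(L), 0.0, L_sys)
    return val

chi_A = lambda L: L**2               # class A: power-law chi_rr
chi_B = lambda L: np.exp(a * L**d)   # class B: exponential chi_rr

for L_sys in (2.0, 4.0, 8.0):
    print(f"L_sys={L_sys}: class A {chi_av(chi_A, L_sys):.4f}, "
          f"class B {chi_av(chi_B, L_sys):.3e}")
```

The class-A integral converges to a finite value ($1/3$ for these constants), while the class-B integrand $\exp[(a-c)L^{d}]$ is dominated by the largest rare regions available in the sample.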
In this paper, we revisit this long-standing issue concerning the criterion for quantum Griffiths phenomena and smeared phase transitions. We study quantum phase transitions at the critical disorder strength of Anderson localization, the physics of which is governed by extreme inhomogeneity, as discussed above. The idea is to find the distribution of the critical temperature from that of the disorder. Focusing on rare regions, we develop a local mean-field theory to describe their ordering behaviors, where the wave-function multifractality \cite{Multifractality} is introduced into the self-consistent equation for an order parameter \cite{PTK1,PTK2,PTC}. As a result, we find a distribution function for the critical temperature, where both the Stoner transition and the Kondo effect are examined. The distribution function of the Kondo temperature ($T_{K}$) shows a power-law divergent character in the $T_{K} \rightarrow 0$ limit regardless of the Kondo coupling constant. On the other hand, the distribution function of the ferromagnetic transition temperature ($T_{c}$) displays an abrupt change from a power-law divergent behavior to a vanishing tendency in the $T_{c} \rightarrow 0$ limit, upon increasing an interaction parameter for the ferromagnetic instability across a critical value. The critical value turns out to be slightly larger than that of the clean limit. The typical transition temperature, given by a geometric average, reflects the power-law divergent character of the distribution function in the regime of low critical temperatures. Thus, these results imply that the typical Kondo temperature always vanishes due to the finite density of the distribution function, while the typical ferromagnetic transition temperature shows a phase transition at the critical interaction parameter. 
This leads us to propose a criterion for quantum Griffiths phenomena and smeared phase transitions at finite temperatures: Above the typical transition temperature, quantum Griffiths phenomena occur, while below it, smeared phase transitions result. We suggest that the ferromagnetic transition at Anderson localization shows an evolution from quantum Griffiths phenomena to smeared transitions around the critical interaction parameter at low temperatures. It should be pointed out that although the lower critical dimension of a rare region is replaced with a typical value of the critical temperature for local ordering, these two points of view are not inconsistent.
Two critical assumptions have been made: One is the applicability of mean-field theory to a rare region at the nanoscale, and the other is the independence between rare regions. Both approximations can be improved, by introducing fluctuation corrections into the mean-field description \cite{Fluctuation_Corrections} and by taking into account couplings between rare regions, respectively.
\section{Local mean-field theory at the Anderson metal-insulator transition}
\subsection{Stoner instability}
\subsubsection{Formulation}
We start from an effective Hubbard Hamiltonian
\begin{eqnarray}
&& \mathcal{H} = \int d^{d} \boldsymbol{r} \Big\{ c_{\sigma}^{\dagger}(\tau,\boldsymbol{r}) \Bigl( - \frac{\boldsymbol{\nabla_{\boldsymbol{r}}}^{2}}{2m} - \mu + v(\boldsymbol{r}) \Bigr) c_{\sigma}(\tau,\boldsymbol{r}) \nonumber \\ && + U c_{\uparrow}^{\dagger}(\tau,\boldsymbol{r}) c_{\uparrow}(\tau,\boldsymbol{r}) c_{\downarrow}^{\dagger}(\tau,\boldsymbol{r}) c_{\downarrow}(\tau,\boldsymbol{r}) \Big\} ,
\end{eqnarray}
where $c_{\sigma}(\tau,\boldsymbol{r})$ is an electron annihilation operator at time $\tau$ and position $\bm{r}$ with spin $\sigma$. $U$ is an effective local interaction parameter, $\mu$ is an electron chemical potential, and $v(\boldsymbol{r})$ is an external electric potential, randomly distributed.
In order to investigate ferromagnetic quantum phase transitions in disordered metals, we perform the Hubbard-Stratonovich transformation for the spin-triplet channel. Taking into account the disorder average in the presence of random electric potentials, we reach the following expression for an effective free energy
\begin{eqnarray} && \mathcal{F} = - \frac{1}{\beta} \int_{-\infty}^{\infty} d v(\boldsymbol{r}) P[v(\boldsymbol{r})] \ln \int D c_{\sigma}(\tau,\boldsymbol{r})
D \boldsymbol{\Phi}(\tau,\boldsymbol{r}) \nonumber \\ && \exp \Bigl[ - \int_{0}^{\beta} d \tau \int d^{d} \boldsymbol{r}
\Bigl\{ c_{\sigma}^{\dagger}(\tau,\boldsymbol{r}) \Bigl( \partial_{\tau} - \mu + \frac{U}{6}
- \frac{\boldsymbol{\nabla_{\boldsymbol{r}}}^{2}}{2m} \nonumber \\ && + v(\boldsymbol{r}) \Bigr) c_{\sigma}(\tau,\boldsymbol{r})
- c_{\sigma}^{\dagger}(\tau,\boldsymbol{r}) [ \boldsymbol{\Phi}(\tau,\boldsymbol{r})
\cdot \boldsymbol{\tau}]_{\sigma\sigma'} c_{\sigma'}(\tau,\boldsymbol{r}) \nonumber \\ && + \frac{3}{2 U}
[\boldsymbol{\Phi}(\tau,\boldsymbol{r})]^{2} \Bigr\} \Bigr] , \label{Free_Energy_Start} \end{eqnarray}
where $\boldsymbol{\Phi}(\tau,\boldsymbol{r})$ is an effective magnetic field, coupled to a spin-density field and determined self-consistently within the mean-field approximation. $P[v(\bm{r})]$ is a distribution function, given by the Gaussian $P[v(\bm{r})] = \mathcal{N}_{v} \exp\Bigl(- \int d^{d} \bm{r} \frac{v^{2}(\bm{r})}{2 \Gamma_{v}} \Bigr)$ for example, where $\Gamma_{v}$ is a variance and $\mathcal{N}_{v}$ is determined by the normalization condition $\int_{-\infty}^{\infty} d v(\bm{r}) P[v(\bm{r})] = 1$.
The basic idea is to reformulate this effective free energy, resorting to the eigenfunction basis for each configuration of random electric potentials, given by
\begin{eqnarray} && \Bigl( - \frac{\boldsymbol{\nabla_{\boldsymbol{r}}}^{2}}{2m} - \mu_{r}
+ v(\boldsymbol{r}) \Bigr) \Psi_{n}(\boldsymbol{r}) = \varepsilon_{n} \Psi_{n}(\boldsymbol{r}),
\label{Eigenfunction} \end{eqnarray}
where $\Psi_{n}(\boldsymbol{r})$ is an eigenfunction with an eigenvalue $\varepsilon_{n}$ and an effective chemical potential $\mu_{r} = \mu - \frac{U}{6}$ in a fixed disorder configuration $v(\boldsymbol{r})$. Performing the ``Fourier transformation'' of the electron field as
\begin{eqnarray}
&& c_{\sigma}(\tau,\bm{r}) = \frac{1}{\beta} \sum_{i\omega} e^{- i \omega \tau} \sum_{n} \Psi_{n}(\boldsymbol{r}) \psi_{\sigma n}(i\omega), \label{Fourier_Disorder}
\end{eqnarray}
where $\psi_{\sigma n}(i\omega)$ is an electron field in the disorder basis, and rewriting the free energy [Eq. (\ref{Free_Energy_Start})] within this representation, we obtain an effective mean-field free energy
\begin{eqnarray} && \mathcal{F} = - \frac{1}{\beta} \int_{-\infty}^{\infty} d v(\bm{r}) P[v(\bm{r})] \ln \int D \psi_{\sigma n}(i\omega) \nonumber \\ &&
\exp \Bigl[ - \sum_{i\omega} \sum_{n} \psi_{\sigma n}^{\dagger}(i\omega) \Bigl( [- i\omega + \varepsilon_{n}]
\delta_{nn'} \delta_{\sigma\sigma'} \nonumber \\ && - \sum_{n'} \int d^{d} \bm{r} \Psi_{n}^{\dagger}(\bm{r}) \Psi_{n'}(\bm{r})
\bm{\Phi}(\bm{r}) \cdot \bm{\sigma}_{\sigma\sigma'} \Bigr) \psi_{\sigma' n'} (i\omega) \nonumber \\ &&
- \beta \int d^{d} \bm{r} \frac{3}{2 U} [\boldsymbol{\Phi}(\boldsymbol{r})]^{2} \Bigr] ,
\end{eqnarray}
where the magnetization order parameter is assumed to be static and determined self-consistently in the mean-field analysis. The Gaussian integral over $\psi_{\sigma n}(i\omega)$ gives rise to the following expression for the mean-field free energy
\begin{eqnarray}
&& \mathcal{F} \approx - T \int_{-\infty}^{\infty} d v(\bm{r}) P[v(\bm{r})] \nonumber \\ && \sum_{n} \Bigl[
\ln \Bigl\{ 1 + \exp\Bigl( - \frac{\varepsilon_{n} - \int d^{d} \bm{r} |\Psi_{n}(\bm{r})|^{2} {\Phi}(\bm{r})}{T} \Bigr) \Bigr\} \nonumber \\
&& + \ln \Bigl\{ 1 + \exp\Bigl( - \frac{\varepsilon_{n} + \int d^{d} \bm{r} |\Psi_{n}(\bm{r})|^{2} {\Phi}(\bm{r})}{T} \Bigr) \Bigr\} \nonumber \\ && - \frac{1}{T} \int d^{d} \bm{r} \frac{3}{2 U} [{\Phi}(\boldsymbol{r})]^{2} \Bigr] , \label{Effective_Free_Energy} \end{eqnarray} where the magnetization order parameter $\bm{\Phi}(\bm{r}) = \Phi(\bm{r}) \bm{\hat{z}}$ is determined by the self-consistent equation \begin{eqnarray} && \frac{3}{U} {\Phi}(\bm{r}) = \sum_{n} |\Psi_{n}(\bm{r})|^{2} \Bigl\{ f\Bigl(\varepsilon_{n} - \int d^{d} \bm{r} |\Psi_{n}(\bm{r})|^{2} {\Phi}(\bm{r})\Bigr) \nonumber \\ && - f\Bigl(\varepsilon_{n} + \int d^{d} \bm{r} |\Psi_{n}(\bm{r})|^{2} {\Phi}(\bm{r})\Bigr) \Bigr\} . \label{Self_Consistent_Equation} \end{eqnarray} $f(\varepsilon_{n}) = \frac{1}{e^{\varepsilon_{n}/T} + 1}$ is the Fermi-Dirac distribution function.
It should be noted that coupling effects between $n$ and $n'$ are neglected as the zeroth-order approximation. We point out that the existence of off-diagonal terms in the energy space is a general feature of any mean-field theory with strong randomness. Indeed, this approximation has been used not only in the Stoner-Anderson problem but also in the Kondo-Anderson transition \cite{PTK2}. In order to justify the diagonal-in-energy approximation, one can consider higher-order processes such as $\langle e^{- \mathcal{S}_{int} } \rangle_{0} \approx \exp\Big\{ - \langle \mathcal{S}_{int} \rangle_{0} + \frac{1}{2} \Big( \langle \mathcal{S}_{int}^{2} \rangle_{0} - \langle \mathcal{S}_{int} \rangle_{0}^{2} \Big) \Big\}$ with $\mathcal{S}_{int} = \sum_{i \omega} \sum_{n \not= n'} \int d^{d} \bm{r} \Psi_{n}^{\dagger}(\bm{r}) \Psi_{n'}(\bm{r}) \Phi(\bm{r}) \sigma \psi_{\sigma n}^{\dagger}(i\omega) \psi_{\sigma n'}(i\omega)$, where \begin{eqnarray} && \langle \mathcal{O} \rangle_{0} = \frac{1}{Z_{0}} \int D \psi_{\sigma n}(i\omega) \mathcal{O} e^{- \mathcal{S}_{0}} , \nonumber \\ && \mathcal{S}_{0} = \sum_{i \omega} \sum_{n} \psi_{\sigma n}^{\dagger}(i \omega) \Big(- i \omega + \varepsilon_{n} \nonumber \\ && + \int d^{d} \bm{r} |\Psi_{n}(\bm{r})|^{2} \sigma \Phi(\bm{r}) \Big) \psi_{\sigma n}(i\omega) , \nonumber \\ && Z_{0} = \int D \psi_{\sigma n}(i\omega) e^{- \mathcal{S}_{0}} , \nonumber \end{eqnarray} respectively. 
While the first-order term of $\langle \mathcal{S}_{int} \rangle_{0}$ vanishes identically, the second-order term can be expressed as follows \begin{eqnarray} && \mathcal{S}^{(2)}_{eff} \equiv - \frac{1}{2} \Big( \langle \mathcal{S}_{int}^{2} \rangle_{0} - \langle \mathcal{S}_{int} \rangle_{0}^{2} \Big) \nonumber \\ && = - \frac{1}{2} \sum_{i \omega} \sum_{i \omega'} \sum_{n \not= n'} \sum_{m \not= m'} \int d^{d} \bm{r} \int d^{d} \bm{r}' \nonumber \\ && \Psi_{n}^{\dagger}(\bm{r}) \Psi_{n'}(\bm{r}) \Psi_{m}^{\dagger}(\bm{r}') \Psi_{m'}(\bm{r}') \Phi(\bm{r}) \sigma \Phi(\bm{r}') \sigma' \nonumber \\ && \psi_{\sigma n}^{\dagger}(i\omega) \langle \psi_{\sigma n'}(i\omega) \psi_{\sigma' m}^{\dagger}(i\omega') \rangle_{c} \psi_{\sigma' m'}(i\omega') \nonumber \\ && = - \frac{1}{2} \sum_{i \omega} \sum_{n \not= n'} \sum_{m'} \int d^{d} \bm{r} \int d^{d} \bm{r}' \nonumber \\ && \Psi_{n}^{\dagger}(\bm{r}) \Psi_{n'}(\bm{r}) \Psi_{n'}^{\dagger}(\bm{r}') \Psi_{m'}(\bm{r}') \Phi(\bm{r}) \Phi(\bm{r}') \nonumber \\ && \psi_{\sigma n}^{\dagger}(i\omega) \frac{1}{- i \omega + \varepsilon_{n'} + \int d^{d} \bm{r} |\Psi_{n'}(\bm{r})|^{2} \sigma \Phi(\bm{r}) } \psi_{\sigma m'}(i\omega) \nonumber \\ && \approx - \frac{1}{2} \sum_{i \omega} \sum_{n} \psi_{\sigma n}^{\dagger}(i\omega) \psi_{\sigma n}(i\omega) \nonumber \\ && \sum_{n'}\frac{\int d^{d} \bm{r} \int d^{d} \bm{r}' \Psi_{n}^{\dagger}(\bm{r}) \Psi_{n'}(\bm{r}) \Psi_{n'}^{\dagger}(\bm{r}') \Psi_{n}(\bm{r}') \Phi(\bm{r}) \Phi(\bm{r}')}{- i \omega + \varepsilon_{n'} + \int d^{d} \bm{r} |\Psi_{n'}(\bm{r})|^{2} \sigma \Phi(\bm{r}) } . \nonumber \end{eqnarray} Here, $\langle \cdot\cdot\cdot \rangle_{c}$ means to keep the connected part of diagrams. We point out that the diagonal approximation has been used again in the last step. 
Then, this diagonal approximation can be justified when the following condition is satisfied \begin{eqnarray} && \frac{\frac{1}{2} \Big| \sum_{n'} \frac{\int d^{d} \bm{r} \int d^{d} \bm{r}' \Psi_{n}^{\dagger}(\bm{r}) \Psi_{n'}(\bm{r}) \Psi_{n'}^{\dagger}(\bm{r}') \Psi_{n}(\bm{r}') \Phi(\bm{r}) \Phi(\bm{r}')}{- i \omega + \varepsilon_{n'} + \int d^{d} \bm{r} |\Psi_{n'}(\bm{r})|^{2} \sigma \Phi(\bm{r}) } \Big|}{\Big| \int d^{d} \bm{r} |\Psi_{n}(\bm{r})|^{2} \sigma \Phi(\bm{r}) \Big|} \ll 1 . \nonumber \end{eqnarray} We claim that this criterion is fulfilled when we evaluate the critical temperature, where the local magnetization order parameter of a rare region vanishes: the numerator is quadratic in $\Phi(\bm{r})$ while the denominator is linear, so the ratio vanishes as $\Phi(\bm{r}) \rightarrow 0$. This explains why the diagonal approximation also works in the Kondo-Anderson problem for the distribution function of the Kondo temperature.
\subsubsection{Eigenfunction multifractality}
Next, we replace the integral for the average over disorder configurations, $\int_{-\infty}^{\infty} d v(\bm{r}) P[v(\bm{r})]$, with $\int_{-\infty}^{\infty} \Pi_{n} d \alpha_{n}(\boldsymbol{r}) P[\{\alpha_{n}(\boldsymbol{r})\}]$ for the average over the statistics of eigenfunctions. We note that all the information on the statistics of eigenfunctions is encoded in the distribution function $P[\{\alpha_{n}(\boldsymbol{r})\}]$ with $\alpha_{n}(\bm{r}) = - \frac{\ln |\Psi_{n}(\bm{r})|^{2}}{\ln L}$ \cite{Alpha}, given by the Gaussian distribution function for all $\alpha_{n}(\boldsymbol{r})$ \cite{Multifractality}, which will be clarified below. An important point is how to perform the integration over the wave-function distribution. Recently, this procedure has been discussed intensively, where the idea is to take into account the so-called joint distribution function, which deals with pairs of eigenfunctions \cite{PTK2}, given by
\begin{eqnarray}
&& \int_{-\infty}^{\infty} \Pi_{n} d \alpha_{n}(\boldsymbol{r}) P[\{\alpha_{n}(\boldsymbol{r})\}]
\approx \int_{-\infty}^{\infty} d \alpha (\boldsymbol{r}) P^{(1)}[\alpha (\boldsymbol{r})] \nonumber \\ &&
\int_{-\infty}^{\infty} \Pi_{n} d \alpha_{n}(\boldsymbol{r}) \frac{P^{(2)}[\alpha_{n}(\boldsymbol{r})
\not= \alpha(\boldsymbol{r})]}{P^{(1)}[\alpha(\boldsymbol{r})]} .
\end{eqnarray}
Here, $P^{(2)}[\alpha_{n}(\boldsymbol{r}) \not= \alpha(\boldsymbol{r})]$ is the joint distribution function, where one eigenfunction $\alpha(\boldsymbol{r})$ is at the mobility edge $\varepsilon_{m}$ and the other wave function $\alpha_{n}(\boldsymbol{r})$ is away from the mobility edge. On the other hand, $P^{(1)}[\alpha (\boldsymbol{r})]$ is the distribution function for the single eigenfunction at the mobility edge. Both distribution functions are given by the Gaussian distribution function (the log-normal distribution function for the intensity of an eigenfunction), constructed to reproduce the wave-function multifractality of the random matrix theory or the supersymmetric nonlinear $\sigma-$model approach \cite{Multifractality}. This prescription means that one first performs the integral over $\alpha_{n}(\boldsymbol{r})$ at fixed $\alpha(\boldsymbol{r})$, based on the joint distribution function, and then over $\alpha(\boldsymbol{r})$, based on the single-eigenfunction distribution function. As a result, we reach the following expression
\begin{widetext}
\begin{eqnarray}
&& \mathcal{F} \approx - T \int_{-\infty}^{\infty} d \alpha (\boldsymbol{r}) P^{(1)}[\alpha (\boldsymbol{r})]
\sum_{n} \Bigl[ \ln \Bigl\{ 1 + \exp\Bigl( - \frac{\varepsilon_{n} - \int d^{d} \bm{r} \Bigl\langle |\Psi_{n}(\bm{r})|^{2}
\Bigr\rangle_{|\Psi_{m}(\bm{r})|^{2} = L^{-\alpha(\boldsymbol{r})}} {\Phi}(\bm{r})}{T} \Bigr) \Bigr\} \nonumber \\
&& + \ln \Bigl\{ 1 + \exp\Bigl( - \frac{\varepsilon_{n} + \int d^{d} \bm{r} \Bigl\langle |\Psi_{n}(\bm{r})|^{2}
\Bigr\rangle_{|\Psi_{m}(\bm{r})|^{2} = L^{-\alpha(\boldsymbol{r})}} {\Phi}(\bm{r})}{T} \Bigr) \Bigr\} \Bigr]
+ \int_{-\infty}^{\infty} d \alpha(\boldsymbol{r}) P^{(1)}[\alpha(\boldsymbol{r})] \int d^{d} \bm{r} \frac{3}{2 U} [{\Phi}(\boldsymbol{r})]^{2} ,
\label{Free_Energy_Mother}
\end{eqnarray}
\end{widetext}
where the average with respect to the joint distribution function gives rise to
\begin{eqnarray}
&& \Bigl\langle |\Psi_{n}(\bm{r})|^{2} \Bigr\rangle_{|\Psi_{m}(\bm{r})|^{2} = L^{-\alpha(\boldsymbol{r})}}
\nonumber \\ && \equiv \int_{-\infty}^{\infty} \Pi_{n} d \alpha_{n}(\boldsymbol{r}) \frac{P^{(2)}[\alpha_{n}(\boldsymbol{r})
\not= \alpha(\boldsymbol{r})]}{P^{(1)}[\alpha(\boldsymbol{r})]} |\Psi_{n}(\bm{r})|^{2} \nonumber \\ &&
= L^{-d} \Bigl| \frac{\varepsilon_{n} - \varepsilon_{m}}{\varepsilon_{c}} \Bigr|^{r_{\alpha(\boldsymbol{r})}} \label{Average_Multifractality}
\end{eqnarray}
with an exponent \cite{PTK2}
\begin{eqnarray}
&& r_{\alpha(\boldsymbol{r})} = \frac{\alpha(\boldsymbol{r}) - \alpha_{0}}{d} - \frac{\eta}{2d} g_{nm} ,
~~~ \eta = 2 (\alpha_{0} - d) \\ &&
g_{nm} = \frac{\ln |(\varepsilon_{n} - \varepsilon_{m})/\varepsilon_{c}|}{d \ln L} .
\end{eqnarray}
Here, $L$ is the size of the system, $d$ is the space dimension, $\varepsilon_{c}$ is a cutoff up to which eigenfunctions with different energies remain strongly correlated, and $\alpha_{0}$ is the typical value of the logarithm of an eigenfunction intensity.
Replacing the sum over discrete energies with an integral, $\sum_{n} \approx \rho_{m} \int_{-\varepsilon_{c}}^{\varepsilon_{c}} d \varepsilon_{n}$ \cite{Energy_Level_Comment}, we obtain
\begin{eqnarray}
&& \mathcal{F} \approx - T \rho_{m} \int_{-\varepsilon_{c}}^{\varepsilon_{c}} d \varepsilon_{n}
\int_{-\infty}^{\infty} d \alpha (\boldsymbol{r}) P^{(1)}[\alpha (\boldsymbol{r})] \nonumber \\ &&
\Bigl[ \ln \Bigl\{ 1 + \exp\Bigl( - \frac{\varepsilon_{n} - \Delta_{n}}{T} \Bigr) \Bigr\}
+ \ln \Bigl\{ 1 + \exp\Bigl( - \frac{\varepsilon_{n} + \Delta_{n}}{T} \Bigr) \Bigr\} \Bigr] \nonumber \\
&& + \int_{-\infty}^{\infty} d \alpha(\boldsymbol{r}) P^{(1)}[\alpha(\boldsymbol{r})]
\int d^{d} \bm{r} \frac{3}{2 U} [{\Phi}(\boldsymbol{r})]^{2}
\label{Free_Energy_Inhomogeneity}
\end{eqnarray}
with $\Delta_{n} \equiv \int d^{d} \bm{r} \Bigl\langle |\Psi_{n}(\bm{r})|^{2} \Bigr\rangle_{|\Psi_{m}(\bm{r})|^{2} = L^{-\alpha(\boldsymbol{r})}} {\Phi}(\bm{r}) \approx L^{-d} \int d^{d} \bm{r} \Bigl| \frac{\varepsilon_{n}}{\varepsilon_{c}} \Bigr|^{r_{\alpha(\boldsymbol{r})}} \Phi(\bm{r})$.
The magnetization order parameter is determined by the self-consistent equation for a given function $\alpha(\bm{r})$
\begin{eqnarray}
&& \Delta_{l} = \frac{U}{3} \rho_{m} \int \frac{d^{d} \bm{r} }{L^d} \Bigl| \frac{\varepsilon_{l}}{\varepsilon_{c}}
\Bigr|^{r_{\alpha(\bm{r})}} \nonumber \\ && \int_{-\varepsilon_{c}}^{\varepsilon_{c}} d \varepsilon_{n}
\Bigl| \frac{\varepsilon_{n}}{\varepsilon_{c}} \Bigr|^{r_{\alpha(\boldsymbol{r})}} \Bigl\{ f(\varepsilon_{n} - \Delta_n)
- f(\varepsilon_{n} + \Delta_n) \Bigr\} .
\end{eqnarray}
This self-consistent equation exhibits correlation effects in the energy space: in order to obtain $\Delta_{l}$, we need to know $\Delta_{n}$ for all values of $n$. Solving these coupled equations in the energy space, we obtain the magnetization order parameter for a given function $\alpha(\bm{r})$. Performing the average over $\alpha(\bm{r})$ with an appropriate distribution function $P[\alpha(\bm{r})]$, we take into account correlation effects in the energy space.
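To see how such coupled equations can be handled in practice, the following sketch iterates the self-consistent map on a discretized energy grid, mimicking the spatial average over $r_{\alpha(\bm{r})}$ by a few sampled exponents. All parameter values are illustrative assumptions for the sketch, not results of the paper.

```python
# Fixed-point iteration for the coupled gap equations for Delta_l, with the
# spatial average over r_alpha(r) mimicked by a few sampled exponents.
# Parameter values are illustrative, not taken from the paper.
import numpy as np

eps_c, T, U_rho = 1.0, 0.02, 1.2     # cutoff, temperature, (U/3)*rho_m
r_samples = [0.0, 0.1, 0.3]          # sampled values of r_alpha(r)
eps = np.linspace(-eps_c, eps_c, 801)
de = eps[1] - eps[0]

def fermi(e):
    return 1.0 / (np.exp(e / T) + 1.0)

Delta = 0.1 * np.ones_like(eps)      # initial guess for Delta_n
for _ in range(500):
    occ = fermi(eps - Delta) - fermi(eps + Delta)
    new = np.zeros_like(eps)
    for r in r_samples:
        w = np.abs(eps / eps_c) ** r           # weight |eps/eps_c|^r
        new += w * np.sum(w * occ) * de        # energy-space convolution
    new *= U_rho / len(r_samples)
    if np.max(np.abs(new - Delta)) < 1e-10:    # converged fixed point
        break
    Delta = new

print("Delta at band center:", Delta[len(eps) // 2])
print("Delta at band edge:  ", Delta[-1])
```

For these parameters the iteration flows to an ordered fixed point with a strongly energy-dependent $\Delta_{n}$, larger at the band edge than at the band center, illustrating the correlations in the energy space described above.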
\subsubsection{Local mean-field theory}
Unfortunately, this effective free energy is not completely local in space, since there exists an integral over the whole space, given by $\int d^{d} \bm{r} \Bigl\langle |\Psi_{n}(\bm{r})|^{2} \Bigr\rangle_{|\Psi_{m}(\bm{r})|^{2} = L^{-\alpha(\boldsymbol{r})}} {\Phi}(\bm{r})$. An essential simplification is to ``lose'' or ``overestimate'' (more precisely, see the discussion below) the information on strong spatial inhomogeneity as the zeroth-order approximation. Replacing $\alpha(\bm{r})$ with $\alpha$ and replacing the sum over discrete energies with an integral, $\sum_{n} \approx \rho_{m} \int_{-\varepsilon_{c}}^{\varepsilon_{c}} d \varepsilon_{n}$, in Eq. (\ref{Free_Energy_Mother}), we obtain a local mean-field theory for the Stoner transition at Anderson localization
\begin{eqnarray}
&& L^{-d} \mathcal{F} \approx - T \rho_{m} \int_{-\varepsilon_{c}}^{\varepsilon_{c}} d \varepsilon_{n}
\int_{-\infty}^{\infty} d \alpha P(\alpha) \nonumber \\ && \Bigl[ \ln \Bigl\{ 1 + \exp\Bigl( - \frac{\varepsilon_{n}
- \Bigl| \frac{\varepsilon_{n}}{\varepsilon_{c}} \Bigr|^{r_{\alpha}} \Phi(\alpha)}{T} \Bigr) \Bigr\} \nonumber \\ &&
+ \ln \Bigl\{ 1 + \exp\Bigl( - \frac{\varepsilon_{n}
+ \Bigl| \frac{\varepsilon_{n}}{\varepsilon_{c}} \Bigr|^{r_{\alpha}} \Phi(\alpha)}{T} \Bigr) \Bigr\} \Bigr] \nonumber \\
&& + \int_{-\infty}^{\infty} d \alpha P(\alpha) \frac{3}{2 U} \Phi^{2}(\alpha) ,
\end{eqnarray}
where $L^{-d} \int d^{d} \bm{r} {\Phi}(\bm{r})$ is replaced with $\Phi(\alpha)$, determined by the ``gap'' equation for the order parameter
\begin{eqnarray}
&& \Phi(\alpha) = \frac{U}{3} \rho_{m} \int_{-\varepsilon_{c}}^{\varepsilon_{c}} d \varepsilon_{n}
\Bigl| \frac{\varepsilon_{n}}{\varepsilon_{c}} \Bigr|^{r_{\alpha}} \Bigl\{ f\Bigl(\varepsilon_{n}
- \Bigl| \frac{\varepsilon_{n}}{\varepsilon_{c}} \Bigr|^{r_{\alpha}} \Phi(\alpha) \Bigr) \nonumber \\ &&
- f\Bigl(\varepsilon_{n} + \Bigl| \frac{\varepsilon_{n}}{\varepsilon_{c}} \Bigr|^{r_{\alpha}} \Phi(\alpha) \Bigr) \Bigr\} .
\label{Gap_Equation_FM}
\end{eqnarray}
The distribution function is given by
\begin{eqnarray}
&& P(\alpha) = \mathcal{N} L^{ - \frac{(\alpha-\alpha_{0})^{2}}{2\eta}} ,
\end{eqnarray}
where $\mathcal{N}$ is a positive numerical constant determined from $\int_{-\infty}^{\infty} d \alpha P(\alpha) = 1$.
The critical temperature for a given disorder configuration is determined by \begin{eqnarray} && 1 = \frac{U \rho_{m}}{3 T_{c}} \int_{0}^{\varepsilon_{c}} d \varepsilon \Bigl( \frac{\varepsilon}{\varepsilon_{c}} \Bigr)^{2 r_{\alpha}} \frac{1}{\cosh^{2}\Bigl(\frac{\varepsilon}{2T_{c}}\Bigr)} , \label{Local_Tc} \end{eqnarray} which results from Eq. (\ref{Gap_Equation_FM}) by performing the Taylor expansion for the order parameter up to first order on the right-hand side. Then, we obtain $T_{c} = T_{c}(r_{\alpha})$.
This relation allows us to translate $P(\alpha)$ into $P(T_c) = \left|\frac{dT_c}{d \alpha} \right|^{-1} P(\alpha)$.
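The full construction can be illustrated numerically: sample $\alpha$ from the Gaussian $P(\alpha)$, map it to an exponent $r_{\alpha} = (\alpha - \alpha_{0})/d$ (dropping the $g_{nm}$ correction for simplicity), solve Eq. (\ref{Local_Tc}) for the local $T_{c}$, and estimate the typical (geometric-mean) transition temperature. All parameter values below are illustrative assumptions of the sketch.

```python
# Sketch: sample alpha from Gaussian P(alpha) with variance eta/ln L
# (from P(alpha) ~ L^{-(alpha-alpha0)^2/(2 eta)}), map to r = (alpha-alpha0)/d
# (g_nm correction dropped), and solve the local T_c equation. Illustrative.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

d, alpha0, lnL, U_rho = 3, 4.0, np.log(50.0), 0.7
eta = 2 * (alpha0 - d)

def rhs(Tc, r):
    val, _ = quad(lambda e: e**(2*r) / np.cosh(e / (2*Tc))**2, 0.0, 1.0)
    return U_rho * val / Tc        # eps_c = 1

def Tc_of_r(r):
    # largest root of rhs(Tc, r) = 1; returns 0.0 if never satisfied
    Ts = np.linspace(0.01, 2.0, 60)
    vals = np.array([rhs(T, r) for T in Ts])
    idx = np.flatnonzero(vals >= 1.0)
    if idx.size == 0:
        return 0.0                 # no local ordering for this alpha
    i = idx[-1]
    if i == len(Ts) - 1:
        return Ts[-1]              # capped at grid edge (rare, clipped tail)
    return brentq(lambda T: rhs(T, r) - 1.0, Ts[i], Ts[i + 1])

rng = np.random.default_rng(1)
alphas = rng.normal(alpha0, np.sqrt(eta / lnL), size=100)
rs = np.clip((alphas - alpha0) / d, -0.3, None)  # clip to keep quad well-behaved
Tcs = np.array([Tc_of_r(r) for r in rs])
ordered = Tcs[Tcs > 0]
frac = ordered.size / len(Tcs)
T_typ = np.exp(np.mean(np.log(ordered))) if ordered.size else 0.0
print(f"fraction with local ordering: {frac:.2f}")
print(f"typical T_c (geometric mean over ordered regions): {T_typ:.3f}")
```

With these illustrative parameters, regions with $\alpha$ below a threshold order locally while the rest do not, and the geometric mean over ordered regions provides a simple estimate of the typical transition temperature discussed below.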
We would like to point out that the magnetization order parameter is given by a function of $\alpha$, and both the $\alpha$-dependent $\Phi(\alpha)$ and the integration over $\alpha$ with $P(\alpha)$ are expected to keep correlation effects in the energy space. However, it is true that strong spatial fluctuations in the intensity of eigenfunctions are overestimated in our mean-field theory. A physical picture for this mean-field analysis is as follows. Suppose an island at position $\bm{r}$ with a characteristic length scale, determined by both interactions and disorder, within which the intensity of an eigenfunction may be regarded as uniform, responsible for the uniform magnetization within the island. Then, we consider another island at position $\bm{r}'$ near the previous island, introducing some couplings such as electron hopping and magnetic interaction between these nearest-neighbor islands. Based on this granular picture, one may perform a weak-coupling analysis for interactions between these granules. One may envision three kinds of possibilities, which correspond to relevance, irrelevance, and marginality of granular interactions, respectively. We believe that the present mean-field analysis focuses on the case of irrelevant granular interactions, giving rise to random magnetization for each intensity of eigenfunctions beyond a certain (granular) length scale \cite{Discussion_Inhomogeneity}.
The above discussion can be stated more mathematically as follows. Suppose two competing length scales: One is the length scale referred to as the size of a granule, allowing us to replace $\alpha(\bm{r})$ with $\alpha$, and the other is the correlation length of the magnetization order parameter to guarantee uniformity within the length scale. The local mean-field theory can be justified when the first length scale is larger than the second. In order to verify whether this is possible or not, let us consider the other case that the second length scale is larger than the first. Then, we are allowed to set $\Delta_{n} \approx \Big( \int \frac{d^{d} \bm{r} }{L^d} \Bigl| \frac{\varepsilon_{n}}{\varepsilon_{c}} \Bigr|^{r_{\alpha(\boldsymbol{r})}} \Big) \Phi$. As a result, the self-consistent equation for the order parameter is given by
\begin{eqnarray}
&& \Big( \int \frac{d^{d} \bm{r} }{L^d} \Bigl| \frac{\varepsilon_{l}}{\varepsilon_{c}} \Bigr|^{r_{\alpha(\boldsymbol{r})}} \Big) \Phi = \frac{U}{3} \rho_{m} \int \frac{d^{d} \bm{r} }{L^d} \Bigl| \frac{\varepsilon_{l}}{\varepsilon_{c}} \Bigr|^{r_{\alpha(\bm{r})}} \nonumber \\ && \int_{-\varepsilon_{c}}^{\varepsilon_{c}} d \varepsilon_{n}
\Bigl| \frac{\varepsilon_{n}}{\varepsilon_{c}} \Bigr|^{r_{\alpha(\boldsymbol{r})}} \Bigl\{ f\Big(\varepsilon_{n} - \int \frac{d^{d} \bm{r} }{L^d} \Bigl| \frac{\varepsilon_{n}}{\varepsilon_{c}} \Bigr|^{r_{\alpha(\boldsymbol{r})}} \Phi\Big) \nonumber \\ && - f\Big(\varepsilon_{n} + \int \frac{d^{d} \bm{r} }{L^d} \Bigl| \frac{\varepsilon_{n}}{\varepsilon_{c}} \Bigr|^{r_{\alpha(\boldsymbol{r})}} \Phi\Big) \Bigr\} . \nonumber
\end{eqnarray}
It is not easy to see the existence of a solution. In this respect, performing the Taylor expansion for the order parameter up to first order on the right-hand side, we obtain an equation for the critical temperature, given by \begin{eqnarray} && \int \frac{d^{d} \bm{r} }{L^d} \Bigl| \frac{\varepsilon_{l}}{\varepsilon_{c}} \Bigr|^{r_{\alpha(\boldsymbol{r})}} \nonumber \\ && = \frac{U \rho_{m}}{3 T_{c}} \int \frac{d^{d} \bm{r} }{L^d} \Bigl| \frac{\varepsilon_{l}}{\varepsilon_{c}} \Bigr|^{r_{\alpha(\bm{r})}} \int_{0}^{\varepsilon_{c}} d \varepsilon \Bigl( \frac{\varepsilon}{\varepsilon_{c}} \Bigr)^{2 r_{\alpha(\boldsymbol{r})}} \frac{1}{\cosh^{2}\Bigl(\frac{\varepsilon}{2T_{c}}\Bigr)} . \nonumber \end{eqnarray} Given $\alpha(\bm{r})$, is there a solution of this equation? Although we do not know the answer in the general situation, we do know that a solution exists when $\alpha(\bm{r})$ is replaced with $\alpha$ in this equation. We stress that both length scales should be determined self-consistently, beyond the present theoretical consideration. This granular-medium picture deserves more thorough investigation in the near future.
\subsection{The Kondo effect}
\subsubsection{Formulation}
We start from an effective Kondo Hamiltonian
\begin{eqnarray}
&& \mathcal{H} = \int d^{d} \boldsymbol{r} \Big\{ c^\dagger_\sigma (\tau, \bm r) \left(- \frac{\boldsymbol{\nabla_{\boldsymbol{r}}}^{2}}{2m}
- \mu + v(\bm r)\right) c_\sigma (\tau, \bm r) \nonumber \\ && + J_K \delta^{(d)}(\bm{r}) \bm s \cdot \bm S \Big\} ,
\end{eqnarray}
where the spin of the conduction electron is given by $\bm s = c^\dagger_\sigma (\tau, \bm r) \bm \sigma_{\sigma \sigma'} c_{\sigma'} (\tau, \bm r)$ and the spin of the impurity by $\bm S = f^\dagger_\sigma (\tau) \bm \sigma_{\sigma \sigma'} f_{\sigma'} (\tau)$ in the projective fermion representation, supplemented by the single-occupancy constraint $f_{\sigma}^{\dagger} f_{\sigma} = N_{s} S$ with $N_{s} = 2$ and $S = 1/2$ \cite{Hewson_Kondo}.
Performing the Hubbard-Stratonovich transformation for the Kondo-hybridization spin-singlet channel, we obtain the following expression for the free energy
\begin{eqnarray}
&& \mathcal{F} = -\frac{1}{\beta} \int_{-\infty}^{\infty} dv(\bm r) P[v(\bm{r})] \ln \int
D c_\sigma(\tau, \bm r) D f_\sigma (\tau) \nonumber \\ && D b(\tau) D \lambda(\tau)
\exp \Bigl[-\int_0^\beta d\tau \Bigl\{ \int d^d \bm r c^\dagger_\sigma (\tau, \bm r) \Bigl(\partial_\tau - \mu
\nonumber \\ && - \frac{\boldsymbol{\nabla_{\boldsymbol{r}}}^{2}}{2m} + v(\bm r) \Bigr) c_\sigma (\tau, \bm r)
- \frac{J_K}{N_s} \Bigl(c^\dagger_\sigma (\tau) b^\dagger_\sigma (\tau) f_\sigma (\tau) \nonumber \\ && + H.c. \Bigr)
+ \frac{J_K}{N_s} b^\dagger_\sigma (\tau) b_\sigma (\tau) +
f^\dagger_\sigma (\tau) \partial_\tau f_\sigma (\tau) \nonumber \\ &&
+ i \lambda (\tau) \Bigl( f^\dagger_\sigma (\tau) f_\sigma (\tau) - N_s S \Bigr) \Bigr\} \Bigr] ,
\end{eqnarray}
where the disorder average is taken into account. $b_\sigma (\tau)$ may be identified with an order parameter for the local Fermi-liquid state, given by $b_\sigma (\tau) = \Bigl\langle c^\dagger_\sigma (\tau) f_\sigma (\tau) \Bigr\rangle$ in the saddle-point approximation \cite{Hewson_Kondo}. $\lambda(\tau)$ is a Lagrange multiplier field to impose the fermion-number constraint.
\subsubsection{Eigenfunction multifractality}
Following the same procedure as that of the previous subsection, we rewrite this effective free energy in terms of the eigenfunction for a given disorder configuration [Eqs. (\ref{Eigenfunction}) and (\ref{Fourier_Disorder})]. As a result, we obtain
\begin{eqnarray}
&& \mathcal{F} = - \frac{1}{\beta} \int_{-\infty}^{\infty} \Pi_{n} d \alpha_{n}(\boldsymbol{r}) P[\{\alpha_{n}(\boldsymbol{r})\}]
\ln \int D \psi_{\sigma n}(\tau) D f_\sigma (\tau) \nonumber \\ && D b(\tau) D \lambda(\tau) \exp \Bigl[-\int_0^\beta d\tau \Bigl\{ \sum_n \psi^\dagger_{\sigma n}(\tau) \Bigl(\partial_\tau + \varepsilon_n \Bigr) \psi_{\sigma n}(\tau)
\nonumber \\ && - \frac{J_K}{N_s} \Bigl(\sum_n \Psi_n^{\dagger} \psi^\dagger_{\sigma n}(\tau) b^\dagger_\sigma (\tau) f_\sigma (\tau) + H.c. \Bigr)
+ \frac{J_K}{N_s} b^\dagger_\sigma (\tau) b_\sigma (\tau) \nonumber \\ && + f^\dagger_\sigma (\tau) \partial_\tau f_\sigma (\tau)
+ i \lambda (\tau) \Bigl( f^\dagger_\sigma (\tau) f_\sigma (\tau) - N_s S \Bigr) \Bigr\} \Bigr].
\end{eqnarray}
Taking into account the mean-field approximation of $b(\tau) \to b$ and $\lambda(\tau) \to - i \lambda$ and performing both Gaussian integrals for conduction electrons and localized fermions, we obtain
\begin{eqnarray}
&& \mathcal{F} = -\frac{1}{\beta} \int_{-\infty}^{\infty} \Pi_{n} d \alpha_{n}(\boldsymbol{r}) P[\{\alpha_{n}(\boldsymbol{r})\}] \nonumber \\ &&
\Bigl\{N_s \sum_{i\omega} \ln \Bigl( -i \omega + \lambda + \frac{J_K^2 b^2}{N_s^2} \sum_n \frac{|\Psi_n|^2}{i \omega - \varepsilon_n} \Bigr) \nonumber \\ &&
-\beta \Bigl(\frac{J_K}{N_s} b^2 - N_s S \lambda \Bigr) + N_s \sum_n \ln(1 + e^{-\beta \varepsilon_n}) \Bigr\} .
\end{eqnarray}
Minimizing the free energy with respect to $\lambda$ and $b$, respectively, yields
\begin{eqnarray}
&& S = \frac{1}{\beta} \sum_{i\omega} \frac{1}{i \omega - \lambda - \frac{J_K^2 b^2}{N_s^2} \sum_n \frac{|\Psi_n|^2}{i \omega - \varepsilon_n}} ,
\label{dF_dlambda}\\
&& 1 = -\frac{J_K}{\beta} \sum_{i \omega} \frac{\sum_n \frac{|\Psi_n|^2}{i \omega - \varepsilon_n}}
{i \omega - \lambda - \frac{J_K^2 b^2}{N_s^2} \sum_n \frac{|\Psi_n|^2}{i \omega - \varepsilon_n}} .
\label{dF_db}
\end{eqnarray}
It is straightforward to determine the chemical potential $\lambda$ of localized fermions. Substituting
\begin{eqnarray}
&& \sum_n \frac{|\Psi_n|^2}{i \omega - \varepsilon_n} = -i\omega \int_{-\infty}^{\infty} d\varepsilon_n \rho(\varepsilon_n)
\frac{|\Psi_n|^2}{\omega^2 + \varepsilon_n^2} \nonumber \\ && \approx -i\pi \rho_\omega |\Psi_\omega|^2 \mbox{sgn}(\omega)
\end{eqnarray}
into Eq. (\ref{dF_dlambda}) gives $\lambda = 0$, where the last approximation takes the low-frequency limit. The vanishing chemical potential means that the localized fermions are at half filling, i.e., in the Kondo regime.
\subsubsection{Local mean-field theory}
Compared with the Stoner transition, the local mean-field theory is quite natural for the Kondo effect since the ``phase transition'' itself is local; the impurity position may be regarded as a dummy variable. Taking into account $|\Psi_n|^2 \longrightarrow \langle |\Psi_n|^2\rangle$ with $\alpha(\bm r) \to \alpha$ and $\sum_{n} \to \rho_{m} \int_{-\varepsilon_{c}}^{\varepsilon_{c}} d \varepsilon_{n}$, we arrive at a local mean-field theory for the Kondo effect at the Anderson transition
\begin{eqnarray}
&& \mathcal{F} = -\frac{\rho_m}{\beta} \int_{-\varepsilon_c}^{\varepsilon_c} d \varepsilon_{n} \int_{-\infty}^{\infty} d \alpha P(\alpha) \nonumber \\ &&
\Bigl\{N_s \sum_{i\omega} \ln \Bigl( -i \omega + \frac{J_K^2 b^2}{N_s^2} \sum_n \frac{\left|\frac{\varepsilon_n}{\varepsilon_c}\right|^{r_\alpha}}{i \omega - \varepsilon_n} \Bigr)
- \beta \frac{J_K}{N_s} b^2 \nonumber \\ && + N_s \sum_n \ln(1 + e^{-\beta \varepsilon_n}) \Bigr\} .
\end{eqnarray}
As a result, the Kondo temperature is determined by
\begin{eqnarray}
&& 1 = \frac{J_K \rho_m}{2} \int_{-\varepsilon_c}^{\varepsilon_c} d \varepsilon_{n} \left|\frac{\varepsilon_n}{\varepsilon_c}\right|^{r_\alpha} \frac{1}{\varepsilon_n} \tanh \left(\frac{\varepsilon_n}{2 T_{K}} \right) ,
\end{eqnarray}
essentially the same as that of Refs. \cite{PTK1,PTK2}. Approximating $\tanh x \approx x$ for $x < 1$ and $\tanh x \approx 1$ for $x >1$, we find
\begin{eqnarray}
T_K & = & \frac{\varepsilon_c}{2}\left[(r_\alpha + 1) \left(1 -\frac{r_\alpha}{J_{K} \rho} \right) \right]^{1/r_\alpha}
\end{eqnarray}
for $T_K < \frac{\varepsilon_c}{2}$ and $-1 < r_\alpha < J_{K} \rho$, and
\begin{eqnarray}
T_K & = & \frac{\varepsilon_c}{2} \frac{J_{K} \rho}{r_\alpha + 1}
\end{eqnarray}
for $T_K > \frac{\varepsilon_c}{2}$ and $r_\alpha > -1$, respectively.
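The mean-field equation for the local Kondo temperature can also be solved without the piecewise approximation of $\tanh$. The following sketch (Python) does so by bisection; the parameter values $\varepsilon_c = 1$, $J_K \rho_m = 0.5$, and $r_\alpha = 0.2$ are illustrative choices, not taken from the paper:

```python
import numpy as np

eps_c, Jrho, r = 1.0, 0.5, 0.2   # illustrative cutoff, coupling J_K*rho_m, exponent r_alpha

def F(Tk, n=20001):
    """Right-hand side of the mean-field equation (the integrand is even in eps)."""
    e = np.linspace(1e-9, eps_c, n)
    y = (e / eps_c) ** r * np.tanh(e / (2.0 * Tk)) / e
    return Jrho * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(e))

# Bisection for the Kondo temperature: F is monotonically decreasing in T_K,
# with F > 1 at small T_K and F < 1 at large T_K for these parameters.
lo, hi = 1e-3, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if F(mid) > 1.0 else (lo, mid)
Tk = 0.5 * (lo + hi)

# Closed-form estimate from the piecewise tanh (valid for -1 < r_alpha < J_K*rho)
Tk_approx = 0.5 * eps_c * ((r + 1.0) * (1.0 - r / Jrho)) ** (1.0 / r)
```

Since $\tanh x \leq \min(x, 1)$ pointwise, the exact root lies somewhat below the piecewise estimate.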
This relation determines the distribution of the Kondo temperature via $P(T_K) = \left|\frac{dT_K}{d \alpha} \right|^{-1} P(\alpha)$.
We point out that our way of obtaining the distribution function of the Kondo temperature differs from that of Ref. \cite{PTK2}, although the same mean-field equation is utilized. When the distribution function of the Kondo temperature is found from that of the disordered eigenfunctions, an essential point is how to introduce the constraint of the mean-field equation for the critical temperature into the equation for the distribution function. More precisely, the problem is how to perform the integration over the Lagrange multiplier field $t$ in the delta function imposing the constraint of the mean-field equation, given by \begin{eqnarray} && P(T_{K}) = - \int_{0}^{\infty} d \alpha P(\alpha) \frac{d F[T_{K}]}{d T_{K}} \delta(1 - F[T_{K}]) \nonumber \\ && = - \int_{0}^{\infty} d \alpha P(\alpha) \frac{d }{d T_{K}} \int_{-\infty}^{\infty} d t \frac{i e^{i t}}{2 \pi t} \exp\big(- i t F[T_{K}] \big) , \nonumber \end{eqnarray} where $F[T_{K}] = \frac{J_K \rho_m}{2} \int_{-\varepsilon_c}^{\varepsilon_c} d \varepsilon_{n} \left|\frac{\varepsilon_n}{\varepsilon_c}\right|^{r_\alpha} \frac{1}{\varepsilon_n} \tanh \left(\frac{\varepsilon_n}{2 T_{K}} \right)$ is the right-hand side of the mean-field equation for the Kondo temperature \cite{PTK2}. The previous study performs the integration over $t$ analytically up to second order. In contrast, we perform the integration over $t$ numerically to infinite order.
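The change of variables $P(T_K) = |dT_K/d\alpha|^{-1} P(\alpha)$ can be sketched numerically. The snippet below (Python) uses a Gaussian distribution directly in the exponent $r_\alpha$ as an illustrative stand-in for the actual multifractal (log-normal) distribution, together with the closed-form $T_K(r_\alpha)$ valid for $T_K < \varepsilon_c/2$:

```python
import numpy as np

eps_c, Jrho = 1.0, 0.5
r0, sigma = 0.25, 0.08                 # illustrative Gaussian parameters for r_alpha
r = np.linspace(0.05, 0.45, 2001)      # monotone branch with 0 < r_alpha < J_K*rho
P_r = np.exp(-(r - r0)**2 / (2.0*sigma**2)) / (sigma*np.sqrt(2.0*np.pi))

# Closed-form local Kondo temperature on the low-T_K branch
Tk = 0.5*eps_c*((r + 1.0)*(1.0 - r/Jrho))**(1.0/r)

# Change of variables: P(T_K) = |dT_K/dr|^{-1} P(r)
dTk_dr = np.gradient(Tk, r)
P_Tk = P_r / np.abs(dTk_dr)

def trapz(y, x):
    """Simple trapezoidal rule (kept explicit for portability)."""
    return float(np.sum(0.5*(y[1:] + y[:-1])*np.diff(x)))
```

Probability is conserved under the change of variables: the integral of `P_Tk` over $T_K$ matches that of `P_r` over $r_\alpha$ (up to the sign from the decreasing map $T_K(r_\alpha)$).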
\section{A criterion of quantum Griffiths phenomena vs. smeared phase transitions}
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{TKr}
\caption{Local Kondo temperature vs. $r_\alpha$ for various Kondo interactions. The local Kondo temperature as a function of $r_{\alpha}$ can be found from Eq. (27). A noticeable point is that the local Kondo temperature remains well defined in the pseudogap region, decreasing continuously until it vanishes. It turns out that the hybridization order parameter also vanishes continuously, exhibiting a conventional ``second-order transition'' in the phase diagram of the Kondo coupling and temperature for a given $r_{\alpha}$ (not shown here).}
\label{Kondo_TKr}
\vspace*{1cm}
\includegraphics[width=0.45\textwidth]{Tcr}
\caption{Local critical temperature of the Stoner transition vs. $r_\alpha$ for various local interactions. The local critical temperature as a function of $r_{\alpha}$ can be found from Eq. (18). The local critical temperature drops abruptly above a certain positive value of $r_{\alpha}$, i.e., in the pseudogap region, for a fixed interaction parameter, as confirmed in the inset. This implies that the local pseudogap region gives rise to an abrupt phase boundary in the phase diagram of the local interaction and temperature.}
\label{Stoner_TCr}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{PTK}
\caption{A distribution function of the Kondo temperature. The distribution function of the Kondo temperature is given by $P(T_K) = \left|\frac{dT_K}{d \alpha} \right|^{-1} P(\alpha)$ with the mean-field equation (27), where the information on the wave-function multifractality is introduced. An essential point is that it shows a power-law increase toward zero Kondo temperature for all Kondo interactions $J < 0.8 D$. This power-law behavior results from rare events in which a local magnetic moment remains unscreened, originating from the local pseudogap regions. The inset clarifies this quantum Griffiths physics.}
\label{Kondo_PTK}
\vspace*{1cm}
\includegraphics[width=0.45\textwidth]{PTc}
\caption{A distribution function of the Stoner transition temperature. The distribution function of the local critical temperature is given by $P(T_c) = \left|\frac{dT_c}{d \alpha} \right|^{-1} P(\alpha)$ with the mean-field equation (18), where the information on the wave-function multifractality is introduced. The power-law increase stops for $1.1 < U/U_{c} < 1.2$, and the distribution function vanishes for $T_{c} \leq 0.1 \varepsilon_{c}$ above $U \sim 1.2 U_{c}$, as clarified in the inset. This results from the fact that the critical temperature changes discontinuously in the pseudogap region ($r_{\alpha} > 0$) near $U \sim U_{c}$, giving rise to $\left|\frac{dT_c}{d \alpha} \right| \rightarrow \infty$ as $T_{c} \rightarrow 0$.}
\label{Stoner_PTC}
\end{figure}
The local Kondo temperature decreases and vanishes at the critical eigenfunction intensity $r_\alpha^{c} = J_{K} \rho$ as $r_{\alpha}$ increases for a given Kondo interaction, which corresponds to reducing the local density of states $\rho(\varepsilon_{n}) = \rho_{m} \left|\frac{\varepsilon_n}{\varepsilon_c}\right|^{r_\alpha}$. Larger Kondo interactions enhance the Kondo temperature for a given $r_{\alpha}$, i.e., for a given local density of states. See Fig. \ref{Kondo_TKr}. Essentially the same trend is observed in the Stoner transition. However, there is an important difference between the two cases: upon increasing the local interaction, the local critical temperature of the Stoner transition drops abruptly above a certain positive value of $r_{\alpha}$. In other words, above the critical interaction parameter, the critical temperature changes discontinuously from a finite lowest value to zero at a certain positive $r_{\alpha}$. See Fig. \ref{Stoner_TCr}. Here, positive $r_{\alpha}$ means that the local region is in a pseudogap state, where the density of states vanishes in a power-law fashion approaching zero energy. This observation leads us to conclude that the local pseudogap region ($r_{\alpha} > 0$) gives rise to an abrupt phase boundary in the Stoner phase diagram of the local interaction and temperature. On the other hand, the local Kondo temperature remains well defined and decreases continuously in the pseudogap region until it vanishes. It turns out that the hybridization order parameter also vanishes continuously, exhibiting a conventional ``second-order transition'' in the phase diagram of the Kondo coupling and temperature for a given $r_{\alpha}$.
The relation between the local critical temperature and the local eigenfunction intensity, or the local density of states, allows us to translate the log-normal distribution function of the eigenfunction intensity into a power-law distribution function of the local critical temperature. The distribution function of the local Kondo temperature shows a power-law increase toward zero Kondo temperature for all Kondo interactions $J < 0.8 D$. See Fig. \ref{Kondo_PTK}. This means that local magnetic moments remain unscreened, which originates from the local pseudogap regions. Mathematically, the high probability of this local-moment physics comes from $\left|\frac{dT_K}{d \alpha} \right| \rightarrow 0$ as $r_{\alpha} \rightarrow J_{K} \rho$. As a result, the typical value of the Kondo temperature vanishes identically, where the typical value is defined as a geometric average \begin{eqnarray} && \langle T_{K} \rangle_{typ} \equiv \exp\Bigl\{ \int_{0}^{\infty} d T_{K} P(T_{K}) \ln T_{K} \Bigr\} . \end{eqnarray} On the other hand, the power-law increase in the distribution function of the ferromagnetic critical temperature disappears for $1.1 < U/U_{c} < 1.2$, and the distribution function vanishes for $T_{c} \leq 0.1 \varepsilon_{c}$ above $U \sim 1.2 U_{c}$. See Fig. \ref{Stoner_PTC}. This results from the fact that the critical temperature changes discontinuously in the pseudogap region ($r_{\alpha} > 0$) near $U \sim U_{c}$, giving rise to $\left|\frac{dT_c}{d \alpha} \right| \rightarrow \infty$ as $T_{c} \rightarrow 0$ and thus $P(T_{c} \rightarrow 0) \rightarrow 0$. As a result, the typical value of the ferromagnetic transition temperature vanishes identically for $U < 1.2 U_{c}$, while it becomes finite above this characteristic value of the interaction parameter. The typical transition temperature is thus expected to change discontinuously.
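The geometric average above can be illustrated with a simple Monte Carlo estimate. In the sketch below (Python), the Gaussian ensemble for $r_\alpha$ and all parameter values are illustrative assumptions; sites with $r_\alpha \geq J_K \rho$ are unscreened local moments with $T_K = 0$, which drive the typical value to zero while the arithmetic average stays finite:

```python
import numpy as np

rng = np.random.default_rng(0)
eps_c, Jrho = 1.0, 0.5
r = rng.normal(0.3, 0.15, 20000)     # illustrative Gaussian ensemble of r_alpha
r = r[r > -0.9]                      # stay within the domain of the T_K formula

Tk = np.zeros_like(r)
screened = r < Jrho                  # pseudogap sites with r_alpha >= J_K*rho stay unscreened
rs = r[screened]
Tk[screened] = 0.5*eps_c*((rs + 1.0)*(1.0 - rs/Jrho))**(1.0/rs)

arith = Tk.mean()                    # arithmetic average <T_K>: finite
with np.errstate(divide="ignore"):
    typ = float(np.exp(np.mean(np.log(Tk))))   # geometric average <T_K>_typ
```

Any finite fraction of unscreened sites ($T_K = 0$) makes $\ln T_K$ diverge and hence the geometric average vanish identically, whereas the arithmetic average remains of order the clean Kondo scale.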
These typical local transition temperatures lead us to propose phase diagrams for the Kondo effect and the Stoner transition, respectively, when the electron dynamics lies at the Anderson metal-insulator transition. See Figs. \ref{Kondo_phase_diagram} and \ref{Stoner_phase_diagram}. The crossover Kondo temperature from a decoupled local-moment state to a local Fermi-liquid state is well known in the clean limit,
given by $T_{K} \sim D e^{- \frac{1}{J_{K} \rho}}$ in the weak-coupling limit and $T_{K} \sim J_{K}$ in the strong-coupling regime \cite{Hewson_Kondo}. This Kondo temperature turns out to be strongly suppressed at the Anderson transition, as measured by the arithmetic average $\langle T_{K} \rangle = \int_{0}^{\infty} d T_{K} P(T_{K}) T_{K}$. Here, we focus on the typical Kondo temperature, which is the most probable value and thus a reasonable measure for a phase transition or, more precisely, a crossover energy scale. We claim that quantum Griffiths phenomena occur when the measuring temperature lies above the typical transition temperature, dominated by the physics of rare events. As shown above, the typical Kondo temperature vanishes due to the dominant role of local pseudogap regions, so the system is governed by decoupled local-moment physics. As a result, we conclude that the finite-temperature region of the Kondo effect at the Anderson transition shows quantum Griffiths phenomena, where anomalous power-law physics is expected to appear (Fig. \ref{Kondo_phase_diagram}).
In the Stoner transition, the arithmetically averaged transition temperature is strongly suppressed compared with that of the clean case, as in the phase diagram of the Kondo effect. A noticeable feature is that the typical transition temperature shows an abrupt change as a function of the interaction parameter: it vanishes for $U < 1.2 U_{c}$ but becomes finite above this characteristic interaction parameter (Fig. \ref{Stoner_phase_diagram}). This discontinuous enhancement of the typical transition temperature around the characteristic interaction parameter results from the disappearance of the power-law tail in the distribution function of the critical temperature. Based on this phase diagram, we propose that quantum Griffiths effects would be observed above the typical transition temperature. Focusing on the low-temperature regime, quantum Griffiths phenomena occur below the characteristic interaction parameter and disappear above it, where the power-law tail of the distribution function is gone. The nature of the ferromagnetically ordered state below the typical transition temperature and above the characteristic interaction parameter has not been completely clarified. However, it is natural to suspect that the phase transition across this typical transition temperature is smeared in nature, since such local regions are already ferromagnetically ordered and the ordering temperature should be broadened around the typical transition temperature. In this respect, we propose the typical transition temperature as a criterion for the appearance of either quantum Griffiths phenomena or smeared phase transitions.
\begin{figure}[t]
\includegraphics[width=0.48\textwidth]{Kondo_Anderson_Phase_Diagram}
\caption{A schematic Kondo phase diagram at the Anderson metal-insulator transition. A characteristic feature is that the typical Kondo temperature vanishes due to the dominant role of local pseudogap regions, so the system is governed by decoupled local-moment physics. This leads us to conclude that the finite-temperature region shows quantum Griffiths phenomena, where the power-law tail in the distribution function is responsible for the quantum Griffiths effects.}
\label{Kondo_phase_diagram}
\vspace*{1cm}
\includegraphics[width=0.48\textwidth]{Stoner_Anderson_Phase_Diagram}
\caption{A schematic Stoner phase diagram at the Anderson metal-insulator transition. A noticeable feature is that the typical transition temperature increases discontinuously as a function of the interaction parameter: it vanishes for $U < 1.2 U_{c}$ but becomes finite above the characteristic interaction parameter. This discontinuous enhancement around the characteristic interaction parameter results from the disappearance of the power-law tail in the distribution function of the critical temperature. As a result, quantum Griffiths effects disappear below the typical transition temperature, and the phase transition across this temperature is expected to be smeared in nature, since such local regions are already ferromagnetically ordered and the ordering temperature should be broadened around the typical transition temperature.}
\label{Stoner_phase_diagram}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{Ximp}
\caption{Typical impurity spin susceptibility in the Kondo effect. At high temperatures, the first term in Eq. (\ref{Chi_imp}) plays a dominant role in the susceptibility, resulting in the Curie-type behavior. At low temperatures, the second term in Eq. (\ref{Chi_imp}) is the leading contribution, which turns out to be identical to the inverse of the typical Kondo temperature. As a result, the typical impurity spin susceptibility diverges more slowly than the Curie behavior due to the Kondo effect.}
\label{Ximp}
\vspace*{1cm}
\includegraphics[width=0.45\textwidth]{XStoner}
\caption{Typical spin susceptibility in the Stoner transition. At high temperatures, the first term in Eq. (\ref{Chi_FM}) results in the Curie-type behavior. At low temperatures, the second term in Eq. (\ref{Chi_FM}) plays the central role, identifying the typical local spin susceptibility with the inverse of the typical transition temperature, as in the Kondo effect. Since the typical transition temperature evolves discontinuously from zero to a finite value as a function of the interaction parameter, the divergent behavior of the typical local spin susceptibility, present above the typical transition temperature and below the characteristic interaction parameter ($\sim 1.2 U_{c}$), disappears and saturates to a finite value, regarded as the Pauli spin susceptibility, below the typical transition temperature and above the characteristic interaction parameter.}
\label{X_Stoner}
\end{figure}
In order to support the claim that the typical transition temperature is a criterion for quantum Griffiths and smeared-transition phenomena, we evaluate the spin susceptibility as a geometric average, which is expected to be quite sensitive to the typical transition temperature. We call this quantity the typical spin susceptibility. The typical impurity spin susceptibility can be calculated as follows
\begin{eqnarray}
\chi_{imp}^{typ}(T) & = & \exp\Bigl\{ \int_{0}^T dT_K P(T_K) \ln \chi(T > T_{K}) \nonumber \\ &+& \int_T^{\infty} dT_K P(T_K) \ln \chi(T < T_{K}) \Bigr\} , \label{Chi_imp}
\end{eqnarray}
where $\chi(T < T_{K}) = \frac{C}{T_K}$ and $\chi(T > T_{K}) = \frac{C}{T}$ with a positive numerical constant $C$ associated with the spin quantum number of the impurity. It is straightforward to estimate both the high- and low-temperature limits of the typical impurity spin susceptibility. At high temperatures, the first term plays a dominant role, resulting in the Curie-type behavior. At low temperatures, the second term is the leading contribution, which turns out to be identical to the inverse of the typical Kondo temperature. As a result, the typical impurity spin susceptibility diverges more slowly than the Curie behavior as temperature approaches zero, due to the Kondo effect. This estimate is indeed confirmed in Fig. \ref{Ximp}, where the typical impurity spin susceptibility shows a power-law divergent behavior.
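Equation (\ref{Chi_imp}) can be evaluated directly once a distribution is specified. The sketch below (Python) assumes an illustrative normalizable power law $P(T_K) = (1-\gamma) T_K^{-\gamma}$ on $(0, 1]$ with $\gamma = 0.9$ and $C = 1$ (not the distribution computed in the paper); within the computed temperature window, the typical susceptibility grows as $T$ decreases, but more slowly than the Curie law $C/T$:

```python
import numpy as np

g = 0.9   # exponent of the illustrative power law P(T_K) = (1-g) T_K^{-g} on (0, 1]

def chi_typ(T, n=200001):
    """Typical impurity susceptibility, Eq. (Chi_imp), with C = 1."""
    # T_K < T: Curie response ln(C/T), weighted by the closed-form CDF T^(1-g)
    low = T ** (1.0 - g) * np.log(1.0 / T)
    # T_K > T: saturated local Fermi-liquid response ln(C/T_K), trapezoidal rule
    t = np.linspace(T, 1.0, n)
    y = (1.0 - g) * t ** (-g) * np.log(1.0 / t)
    high = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return float(np.exp(low + high))

chi1, chi2 = chi_typ(0.1), chi_typ(0.01)
```

Lowering $T$ by a factor of ten increases this illustrative $\chi^{typ}$ by less than a factor of ten, i.e., the divergence is sub-Curie, in line with the behavior shown in Fig. \ref{Ximp}.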
The typical local spin susceptibility in the Stoner transition can be evaluated as follows
\begin{eqnarray}
\chi_{FM}^{typ}(T) & = & \exp\Bigl\{ \int_{0}^T d T_c P(T_c) \ln \chi(T > T_c) \nonumber \\ &+& \int_T^{\infty} d T_c P(T_c) \ln \chi(T < T_c) \Bigr\} , \label{Chi_FM}
\end{eqnarray}
where $\chi(T < T_c) = \frac{\tanh \left[\frac{m(T)}{2 T} \right]}{m(T)}$ and $\chi(T > T_c) = \frac{1}{2 T}$ with the magnetization order parameter $m(T) = m \sqrt{T_{c} - T}$. At high temperatures, the first term results in the Curie-type behavior. At low temperatures, the second term plays the central role in the spin susceptibility. Taking the approximation $\chi(T \ll T_c) = \frac{\tanh \left[\frac{m \sqrt{T_{c} - T}}{2 T} \right]}{m \sqrt{T_{c} - T}} \approx \frac{\tanh \left[\frac{m \sqrt{T_{c}}}{2 T} \right]}{m \sqrt{T_{c}}} \approx \frac{1}{m \sqrt{T_{c}}}$, the typical local spin susceptibility becomes the inverse of the typical transition temperature, as in the Kondo effect. The typical transition temperature evolves discontinuously from zero to a finite value as a function of the interaction parameter. As a result, we conclude that the divergent behavior of the typical local spin susceptibility, present above the typical transition temperature and below the characteristic interaction parameter ($\sim 1.2 U_{c}$), disappears and saturates to a finite value, regarded as the Pauli spin susceptibility, below the typical transition temperature and above the characteristic interaction parameter. This crossover from quantum Griffiths phenomena to smeared phase transitions is also seen in the typical local spin susceptibility for the Stoner transition at the Anderson metal-insulator transition. See Fig. \ref{X_Stoner}.
\section{Summary and discussion}
The argument based on the lower critical dimension of rare events provides an intuitive picture for a criterion distinguishing quantum Griffiths phenomena from smeared phase transitions at zero temperature \cite{Vojta_Review}. It is not clear how to generalize this physical picture to the finite-temperature region. In this study, we proposed a criterion expected to work at finite temperatures. It is natural to introduce a typical transition temperature as the ``critical'' temperature at the Anderson metal-insulator transition. An essential question is how to calculate the typical transition temperature. Our idea was to construct a mean-field theory for the symmetry-breaking transition of rare regions, where the wave-function multifractality is introduced to incorporate the role of Anderson localization in phase transitions. It is true that this local mean-field-theory framework does not take into account correlations between local regions (correlated inhomogeneity). However, investigating the local critical temperature as a function of the eigenfunction intensity, the local mean-field construction can be justified within a granular picture, where correlations between rare events (granules) may not be relevant. See the discussion of Sec. II A 3 and Ref. \cite{Discussion_Inhomogeneity}. As a result, we obtained the relation between the critical temperature and the eigenfunction intensity. This allowed us to translate the log-normal distribution function of the eigenfunction intensity into the power-law distribution function of the critical temperature. It was then straightforward to calculate the typical value of the transition temperature from the distribution function of the critical temperature.
We investigated two kinds of ``phase transitions'': the Kondo effect and the Stoner transition. It turns out that the typical Kondo temperature vanishes for all Kondo interactions $J_{K} < 0.8 D$, originating from the persistence of the power-law tail down to zero temperature in the distribution function of the Kondo temperature. Within the local mean-field-theory framework, it is clear that this power-law divergent distribution function results from the role of the local pseudogap regions, regarded as rare events. On the other hand, the typical ferromagnetic transition temperature evolves discontinuously as a function of the local interaction parameter: it vanishes below the characteristic interaction parameter, about $U \sim 1.2 U_{c}$, but becomes finite abruptly above it. This behavior is also rooted in the fact that the power-law divergence in the distribution function of the Stoner transition temperature disappears, the distribution dropping to zero in a certain temperature range near the typical transition temperature above the characteristic interaction parameter. These typical temperatures lead us to propose phase diagrams for the Kondo effect and the Stoner transition, respectively, at the Anderson metal-insulator transition. Since the typical Kondo temperature vanishes identically, we suggested quantum Griffiths effects at finite temperatures in this disordered Kondo system. On the other hand, since the typical Stoner transition temperature is finite above the characteristic interaction parameter, we claimed that quantum Griffiths phenomena above the typical transition temperature disappear, replaced by smeared phase transitions due to preexisting ferromagnetic order.
In order to support this physical picture, we calculated the typical spin susceptibility, which turns out to be closely connected to the typical transition temperature at low temperatures. Indeed, we observed that the typical local spin susceptibility is given by the inverse of the typical transition temperature at low temperatures. The typical impurity spin susceptibility shows a divergent behavior weaker than the Curie-type behavior due to the Kondo effect. Such a power-law divergence, observed over the whole Kondo-interaction range, is consistent with quantum Griffiths phenomena. On the other hand, the typical local spin susceptibility diverges at low temperatures below the characteristic interaction parameter, but it saturates to a finite value below the typical transition temperature and above the characteristic interaction parameter, consistent with the behavior of smeared phase transitions.
An important issue, not discussed in the present study, concerns the role of repulsive interactions between density fluctuations in both the Kondo effect and the Stoner transition. Such effective interactions may be introduced into this mean-field-theory framework via the Hartree-Fock approximation, described by additional mean-field equations and expected to cause the Altshuler-Aronov suppression of the density of states \cite{AAcorrection_inDFL}. As a result, pseudogap physics identified with rare events would promote quantum Griffiths effects even more than in the present situation without repulsive interactions. It would be important to perform a full numerical Hartree-Fock analysis of the ferromagnetic transition, taking into account both the self-consistent renormalizations of interactions and disorder and strong spatial fluctuations in the order-parameter dynamics.
\section*{Acknowledgement}
This study was supported by the Ministry of Education, Science, and Technology (No. NRF-2015R1C1A1A01051629 and No. 2011-0030046) of the National Research Foundation of Korea (NRF) and by the TJ Park Science Fellowship of the POSCO TJ Park Foundation. This work was also supported by the POSTECH Basic Science Research Institute Grant (2015). We appreciate fruitful discussions at the APCTP workshop on Delocalisation Transitions in Disordered Systems in 2015. We thank S. Kettemann for fruitful collaborations and insightful discussions at the initial stage. KS also appreciates enlightening discussions with V. Dobrosavljevic.
\section*{Acknowledgements}
I would like to thank the organizers of \textit{CKM 2012} for the invitation and a pleasant workshop. I gratefully acknowledge the support by grants of the German National Academic Foundation, the State of Bavaria, the Elite Network of Bavaria, and the German Academic Exchange Service. My work has further been supported by the DFG Research Unit SFB/TR9.
\section{\label{sec:level1}Introduction}
Atoms and molecules interacting with intense ($\gtrsim 10^{14}\,{\rm W/cm}^2$) visible-to-midinfrared laser pulses exhibit nonperturbative nonlinear response such as above-threshold ionization (ATI), tunneling ionization, high harmonic generation (HHG) and nonsequential double ionization (NSDI) \cite{Protopapas1997RPP}.
HHG, especially, forms the basis for attosecond science \cite{Agostini2004RPP,Krausz2009RMP,Gallmann2013ARPC} as highly successful means to generate attosecond coherent light pulses in the extreme-ultraviolet (XUV) and soft x-ray regions \cite{Chang,Zhao2012OL,Takahashi2013NatComm,Popmintchev2012Science} as well as to probe the electronic structure \cite{Itatani2004Nature,Smirnova2009Nature} and dynamics \cite{Calegari2014Science,Worner2015Science,Okino2015ScienceAdvanced} in atoms and molecules. In the context of the latter, it is crucial to understand how HHG spectra reflect the electronic structure. As representative examples in atomic systems, the Cooper minimum in Ar \cite{Worner2009PRL}, autoionizing resonance in ${\rm Sn}^+$ \cite{1367-2630-15-1-013051}, and the giant resonance in Xe \cite{Pabst2013PRL} have been reported to imprint themselves in HHG spectra. All of these can be understood basically as features of single-photon ionization, i.e., the inverse process of recombination, which is the last step in the semiclassical three-step model \cite{Corkum1993PRL,Kulander1993} of HHG.
In this Letter, we predict a new mechanism leading to a drastic enhancement in HHG spectra, induced by the interaction of the recolliding electron with the electrons in the parent ion. We numerically simulate HHG from a one-dimensional (1D) multielectron model atom using a recently developed first-principles method called the time-dependent complete-active-space self-consistent-field (TD-CASSCF) method \cite{Sato2013PRA,Ishikawa2015JSTQE,Sato2016PRA}. We find, in harmonic spectra from 1D Be, a prominent peak that cannot be attributed to any resonant transition in Be and ${\rm Be}^+$. Our analyses reveal that, whereas the cation (${\rm Be}^+$) plays a dominant role in the formation of the main plateau (neutral Be is immediately ionized and does not contribute to HHG), the peak originates from resonant excitation in the dication (${\rm Be}^{2+}$) induced by the recolliding electron. In addition, the action of the rescattering electron also drastically enhances harmonic generation from the dication, forming the second plateau in the HHG spectrum. Thus, HHG spectra can reflect not only the electronic structure of the species (typically a neutral atom, but a cation in this study) from which the returning electron is released \cite{Worner2009PRL,1367-2630-15-1-013051,Pabst2013PRL}, but also that of the parent ion (typically a cation, but a dication in this study) with which the returning electron collides. The enhancement of the latter signal is a clear manifestation of multielectron effects.
The Hamiltonian for a 1D $N$-electron model atom interacting with an external laser electric field $E(t)$ is taken in the length gauge as (Hartree atomic units are used throughout unless otherwise stated),
\begin{align}
\label{eq:Hamiltonian}
H =& \sum^{N}_{i=1}\left[- \frac{1}{2}\dfrac{\partial^2}{\partial x_{i}^{2}} -\frac{4}{\sqrt{x_{i}^2+1}} - E(t)x_{i}\right] \nonumber \\
&+ \sum^{N}_{i>j} \frac{1}{\sqrt{(x_{i}-x_{j})^2+1}} ,
\end{align}
where $x_i (i=1,\cdots,N)$ denotes the position of the $i$th electron, and we use soft Coulomb potentials for electron-nuclear and electron-electron Coulomb interactions. We simulate the electron dynamics governed by this Hamiltonian using the recently developed TD-CASSCF method \cite{Sato2013PRA,Ishikawa2015JSTQE,Sato2016PRA}.
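For readers who wish to experiment, the field-free one-body part of this Hamiltonian is straightforward to discretize. The following sketch (an illustrative second-order finite-difference setup on a much smaller grid than the production calculations described below, and not the paper's actual solver) diagonalizes it:

```python
import numpy as np

# Sketch (illustrative discretization, not the paper's solver): one-body
# part of Eq. (1), -(1/2) d^2/dx^2 - 4/sqrt(x^2 + 1), on a small
# equidistant grid with a second-order finite-difference Laplacian.
n, L = 401, 40.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]

lap = (np.diag(np.full(n - 1, 1.0), -1)
       - 2.0 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / dx**2
H0 = -0.5 * lap + np.diag(-4.0 / np.sqrt(x**2 + 1.0))

E = np.linalg.eigvalsh(H0)   # lowest one-electron levels of the bare soft-Coulomb core
```

Since the discrete kinetic operator is positive semidefinite and the potential is bounded below by $-4$, the lowest eigenvalue must lie between $-4$ and $0$.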
In this method, the $N$-electron wave function $\Psi (t)$ is expressed as a superposition,
\begin{equation}
\Psi (t) = \sum_{I} C_I(t) \Phi_I (t),
\end{equation}
of Slater determinants $\Phi_I (t)$ built from a given number $n$ of orthonormal orbital functions $\{ \psi_p (t)\}$. Both configuration-interaction (CI) coefficients $C_I(t)$ and orbital functions are time-dependent, which allows the use of considerably fewer orbitals than in fixed-orbital approaches. The orbitals are flexibly classified into {\it core} and {\it active} subspaces (see Fig.~1 of Ref.~\cite{Sato2013PRA}). We assume that $n_C$ core orbitals, accommodating tightly bound electrons, are doubly occupied all the time, whereas we consider all the possible distributions of $N_A (=N-2n_C)$ electrons among $n_A$ active orbitals, to take account of strong excitation and ionization.
It is also possible to further split the core space into {\it frozen core} (FC, fixed with no response to the field) and {\it dynamical core} (DC, allowed to vary in time and respond to the field).
Let us use the notation $(n_{FC},n_{DC},n_A)$ for the TD-CASSCF method with $n_{FC}$ FC orbitals, $n_{DC}$ DC orbitals, and $n_A$ active orbitals. Note that $(0, 0, n)$ is equivalent to the multiconfiguration time-dependent Hartree-Fock (MCTDHF) method \cite{PhysRevA.71.012712, Kato2004533, Sawada2016PRA} with $n$ occupied orbitals. The equations of motion for the CI coefficients and orbital functions are derived based on the time-dependent variational principle \cite{Frenkel1934, LOWDIN19721, QUA:QUA560070414}. See Ref.~\cite{Sato2013PRA} for a detailed description of the TD-CASSCF method and Ref.~\cite{Ishikawa2015JSTQE} for a broad review of ab initio methods for multielectron dynamics.
In this study, we specifically consider a 1D Be model atom ($N=4$) and a laser field with a $\sin^2$ envelope,
\begin{equation}
E(t)={E_0}\sin^2\frac{\pi t}{T}\sin \omega t \qquad (0\leq t \leq T),
\end{equation}
where $T$ denotes the foot-to-foot pulse width. For all the results presented in this work except for Fig.~\ref{fig:Graph1}(b), we use a central wavelength of 750 nm, a peak intensity of $5.2\times 10^{14}\,{\rm W/cm}^2$, and a foot-to-foot pulse width of 22 optical cycles.
Orbital functions are discretized on 2,048 equidistant grid points within a box $|x|<400$. We implement an absorbing boundary by a $\cos^{1/4}$-shaped mask function applied over the outer 15\% of the box on each side. The time propagation is performed using 10,000 time steps per optical cycle. We have also tested up to four times smaller grid spacing and six times smaller time steps, and confirmed that the results remain virtually the same. The initial ground state is obtained through imaginary-time propagation.
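For concreteness, the pulse of Eq. (3) and a $\cos^{1/4}$ absorbing mask can be sketched as follows; the unit conventions and the exact mask profile are illustrative assumptions, and the mask used in the actual calculations may differ in detail:

```python
import numpy as np

# Sketch: sin^2-envelope field of Eq. (3) and a cos^(1/4) mask that is 1
# in the interior and rolls off over the outer 15% of the box on each side.
def field(t, E0, omega, T):
    return E0 * np.sin(np.pi * t / T) ** 2 * np.sin(omega * t)

def mask(x, L, frac=0.15):
    m = np.ones_like(x)
    edge = (1.0 - frac) * L
    out = np.abs(x) > edge
    m[out] = np.cos(0.5 * np.pi * (np.abs(x[out]) - edge) / (frac * L)) ** 0.25
    return m

x = np.linspace(-400.0, 400.0, 2048)
m = mask(x, 400.0)
```

The field vanishes at both ends of the pulse, and the mask smoothly removes amplitude reaching the box edges, suppressing unphysical reflections.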
Harmonic spectra are calculated as the squared magnitude of the Fourier transform of the dipole acceleration. In order to reduce the background level of the spectra, the simulations are continued for some time after the end of the pulse, and the dipole acceleration after the pulse is multiplied by a polynomial damping function.
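This spectral step can be sketched as follows; the cubic form of the damping window is an assumption, since the text specifies only that it is polynomial:

```python
import numpy as np

# Sketch: harmonic spectrum as |FT of the dipole acceleration|^2, with a
# polynomial damping applied to the part of the signal after the pulse.
def hhg_spectrum(accel, t, dt, t_end):
    w = np.ones_like(t)
    tail = t > t_end                       # samples after the end of the pulse
    if np.any(tail):
        s = (t[tail] - t_end) / (t[-1] - t_end)
        w[tail] = (1.0 - s) ** 3           # assumed cubic damping window
    spec = np.abs(np.fft.rfft(accel * w)) ** 2
    omega = 2.0 * np.pi * np.fft.rfftfreq(len(t), dt)
    return omega, spec
```

Applied to a synthetic acceleration oscillating at a single frequency, the spectrum peaks at that frequency.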
\begin{table}[tb]
\caption{\label{tab:table1}%
Ionization potential $I_p$ (evaluated through Koopmans' theorem), cutoff energy $E_c$, and barrier suppression intensity $I_{BS}$ of each species.
}
\begin{ruledtabular}
\begin{tabular}{lccc}
& $I_p$ (eV) & $E_c$ (eV) & $I_{BS}$ ($ {\rm W}/{\rm cm}^{2} $) \\
\colrule
Be & 8.5 & 95.1 & $2.1\times 10^{13}$\\
${\rm Be}^{+}$ & 22.5 & 109.1 & $2.56\times 10^{14}$\\
${\rm Be}^{2+}$ & 65.4 & 152.0 & $8.12\times 10^{15}$\\
\end{tabular}
\end{ruledtabular}
\end{table}
Figure \ref{fig:Graph0} shows HHG spectra calculated with three different subspace decompositions schematically depicted in Fig.~\ref{fig:2}. We regard the $(0,0,10)$ result as the most accurate. The resulting spectrum (red thick curve in Fig.~\ref{fig:Graph0}) exhibits two remarkable features:
\begin{enumerate}
\item a second plateau and cutoff around 150 eV beyond the first cutoff around 110 eV, and
\item a prominent peak at 30.6 eV, ca. $10^3$ times higher than the plateau.
\end{enumerate}
Both features are also present in the result for $(0,1,9)$ (black curve) but missing in that for $(1,0,9)$ (blue curve), which unambiguously indicates an essential contribution from the core electrons.
In Table \ref{tab:table1} we list, for ${\rm Be}, {\rm Be}^+$, and ${\rm Be}^{2+}$, the ionization potential $I_p$ evaluated through Koopmans' theorem, the cutoff energy $E_c$ given by the common formula $E_c = I_p + 3.17 U_p$ with $U_p$ being the ponderomotive energy, and the barrier suppression intensity $I_{BS}$ \cite{1367-2630-15-1-013051, doi:10.1142/S0218863595000343}. From the values of $E_c$ in this table, one notices that the first and second plateaus in Fig.~\ref{fig:Graph0} are due to HHG from the cation and dication, respectively. The neutral Be is immediately ionized and, thus, does not contribute to the high-harmonic spectrum, since its barrier suppression intensity ($2.1\times 10^{13}\,{\rm W}/{\rm cm}^{2}$) is much smaller than the laser peak intensity~\cite{Ilkov92JoPB}.
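The cutoff values follow directly from the pulse parameters. A quick numerical check, with standard unit-conversion constants assumed here, reproduces the $E_c$ column of Table~\ref{tab:table1}:

```python
# Sketch: reproduce E_c = I_p + 3.17 U_p of Table I from the pulse
# parameters (750 nm, 5.2e14 W/cm^2); conversion constants are assumed.
I_WCM2 = 5.2e14                    # peak intensity (W/cm^2)
I_AU = 3.50945e16                  # atomic unit of intensity (W/cm^2)
HARTREE_EV = 27.2114               # Hartree in eV
OMEGA_AU = 45.5633 / 750.0         # photon energy in a.u. for 750 nm

E0 = (I_WCM2 / I_AU) ** 0.5                       # field amplitude (a.u.)
Up = E0**2 / (4.0 * OMEGA_AU**2) * HARTREE_EV     # ponderomotive energy (eV)

cutoffs = {s: Ip + 3.17 * Up
           for s, Ip in [("Be", 8.5), ("Be+", 22.5), ("Be2+", 65.4)]}
```

With these constants, $U_p \approx 27.3$ eV, giving cutoffs of about 95.1, 109.1, and 152.0 eV for the three species.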
In order to further confirm this, we separate the total harmonic spectrum into contributions from different charge states. For this purpose we calculate the contribution from the charge state $q+$ to dipole acceleration, or charge-state-resolved dipole acceleration $\ddot{d}_q$, conveniently defined as (cf. Appendix of Ref.~\cite{Sato2013PRA}),
\begin{align}
\label{eq:charge-state-resolved-dipole-acceleration}
\ddot{d}_q (t) &\equiv
\left(\begin{array}{c}N \\ q \end{array}\right) \int_> dx_1\cdots \int_> dx_q \int_< dx_{q+1}\cdots \int_< dx_N \nonumber\\
&\times \Psi^*(x_1,\cdots,x_N,t) \,\ddot{x}\, \Psi(x_1,\cdots,x_N,t),
\end{align}
where $\int_<$ ($\int_>$) denotes integration over a region $|x|<X_0$ ($|x|>X_0$) with $X_0 = 20$ a.u. in this study. The acceleration operator $\ddot{x}$ is evaluated as described in \cite{Sato2016PRA}. In this equation, we have omitted the summation with respect to spin variables for simplicity. Whereas the contributions from neutral Be and ${\rm Be}^{3+}$ are negligible, the first plateau is dominated by the contribution from ${\rm Be}^+$, and the second plateau is formed by the response of ${\rm Be}^{2+}$ (Fig.~\ref{fig:Graph3}).
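The inner/outer decomposition underlying Eq.~(\ref{eq:charge-state-resolved-dipole-acceleration}) can be illustrated for the simplest, uncorrelated case. This sketch assumes an independent $N$-electron density built from a single normalized one-electron density, a far cruder picture than the full correlated $N$-electron integral in the text, but it shows the role of the binomial prefactor and the $|x| \lessgtr X_0$ partition:

```python
import numpy as np
from math import comb

# Sketch: charge-state populations from the |x| <> X0 partition, for an
# assumed independent N-electron density built from one normalized
# one-electron density rho(x). The full Eq. (4) instead integrates the
# correlated N-electron wave function.
def charge_state_probs(x, rho, X0=20.0, N=2):
    dx = x[1] - x[0]
    p_out = np.sum(rho[np.abs(x) > X0]) * dx   # prob. one electron is outside
    p_in = 1.0 - p_out
    return [comb(N, q) * p_out**q * p_in**(N - q) for q in range(N + 1)]
```

For a density fully localized inside $X_0$, all weight sits in the neutral (q = 0) channel, and the probabilities sum to one by the binomial theorem.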
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{figs/subspacing.pdf}
\caption{(Color online) Pictorial explanation of orbital subspace decompositions for Be used in this study. (a) MCTDHF (0,0,10) with ten active orbitals, considered to be the most accurate. (b) CASSCF (0,1,9) with one DC and nine active orbitals. (c) CASSCF (1,0,9) with one FC and nine active orbitals.}
\label{fig:2}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=1.\linewidth]{figs/fig3}
\caption{(Color online) High-harmonic spectra from a 1D Be model atom calculated with the three different subspace decompositions depicted in Fig.~\ref{fig:2}.}
\label{fig:Graph0}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=1\linewidth]{figs/fig4}
\caption{(Color online) Charge-state-resolved harmonic spectra extracted from the simulation starting from neutral Be and calculated as the squared magnitude of the Fourier transform of Eq.~(\ref{eq:charge-state-resolved-dipole-acceleration}). The total HHG spectrum (the same as the red curve in Fig.~\ref{fig:Graph0}) is also shown.}
\label{fig:Graph3}
\end{figure}
The sharp peak in Fig.~\ref{fig:Graph0} suggests the presence of an excitation resonance. Hence, we compare the HHG spectra with
excitation spectra of Be, Be$^+$, and Be$^{2+}$ [Fig.~\ref{fig:Graph1}(b)],
obtained by Fourier transforming the dipole response to a quasi-delta-function pulse, i.e., a field that is nonzero at only a single time step.
One can clearly see that the first excitation energy (30.6 eV) of ${\rm Be}^{2+}$ coincides with the peak position in the harmonic spectrum. Also in Fig.~\ref{fig:Graph3}, ${\rm Be}^{2+}$ predominantly contributes to the peak. Therefore, both of the two above-mentioned remarkable features originate from the response of ${\rm Be}^{2+}$.
In order to further investigate the contributions from the different species separately, we show in Fig.~\ref{fig:Graph1}(a) HHG spectra calculated by use of the MCTDHF method with ten spatial orbitals, starting from Be, ${\rm Be}^+$, and ${\rm Be}^{2+}$, respectively. It should be noted that the curves for ${\rm Be}^+$ and ${\rm Be}^{2+}$ in Fig.~\ref{fig:Graph1}(a) are, unlike in Fig.~\ref{fig:Graph3}, the results of simulations whose initial states are ${\rm Be}^+$ and ${\rm Be}^{2+}$, respectively. The spectra from Be and ${\rm Be}^+$ are similar, confirming that the neutral species, immediately ionized, does not contribute. On the other hand, if we start the simulation from ${\rm Be}^{2+}$, no plateau but only harmonics up to the fifth order are observed; this is in fact reasonable if we note that the laser intensity is much lower than the barrier suppression intensity (Table \ref{tab:table1}). In addition, although a peak at 30.6 eV, related to the first excitation, can be seen, it is lower by many orders of magnitude than in the spectrum from ${\rm Be}^+$ and in Fig.~\ref{fig:Graph0}. These observations imply the existence of a mechanism that enhances the response of ${\rm Be}^{2+}$.
In order to explore the effect of a rescattering electron originating from ${\rm Be}^+$, we have simulated high-harmonic generation from ${\rm Be}^{2+}$ by adding a Coulomb field from the rescattering electron,
\begin{equation}
\label{eq:rsc-electron-field}
H_{\rm rsc} = \sum_{i=1}^{N} \int \frac{\rho (x^\prime,t)}{\sqrt{(x_{i}-x^\prime)^2+1}}\,dx^\prime,
\end{equation}
as an external field to the Hamiltonian for ${\rm Be}^{2+}$, where $N=2$, and $\rho (x,t)$ denotes the time-dependent probability density of the ${\rm Be}^+$ valence electron, which forms an oscillating dipole and is calculated in a separate simulation starting from ${\rm Be}^+$ with a frozen-core orbital $(1,0,1)$. The resulting spectrum is shown by the green dashed line in Fig.~\ref{fig:Graph1}(a). The comparison with the blue dotted line reveals that $H_{\rm rsc}$ dramatically enhances the harmonic response of ${\rm Be}^{2+}$, including the peak at 30.6 eV, and almost recovers that of ${\rm Be}^+$. Whereas this mechanism is similar to enhancement by an assisting harmonic pulse \cite{PhysRevLett.91.043002, PhysRevA.70.013412, PhysRevLett.99.053904, PhysRevA.80.011807}, here the enhancement is due to the direct Coulomb force from the oscillating dipole, rather than harmonics emitted from it. In the language of the semiclassical three-step model, the recolliding electron ejected from ${\rm Be}^+$ lifts ${\rm Be}^{2+}$ to the first excited state, which subsequently emits a photon to form the peak at 30.6 eV, and, at the same time, facilitates tunneling ionization, enhancing harmonic emission in the second plateau. The nonresonant components of the oscillating dipole also lead to virtual excitation and facilitate tunneling ionization of ${\rm Be}^{2+}$. Thus, electron-electron interaction plays an essential role in the generation of the second plateau and the peak at 30.6 eV.
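The mean-field term of Eq.~(\ref{eq:rsc-electron-field}) is, for each electron coordinate, a convolution of the ${\rm Be}^+$ valence density with the soft-Coulomb kernel. A direct-summation sketch (a production solver would exploit the convolution structure rather than forming the full kernel matrix):

```python
import numpy as np

# Sketch: potential generated by the Be+ valence density rho(x') via the
# soft-Coulomb kernel of Eq. (5), evaluated on the same grid by direct
# summation.
def rescattering_potential(x, rho, dx):
    kernel = 1.0 / np.sqrt((x[:, None] - x[None, :]) ** 2 + 1.0)
    return kernel @ rho * dx
```

For a density sharply localized at the origin, the potential reduces to the bare soft-Coulomb form $1/\sqrt{x^2+1}$.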
\begin{figure}[tb]
\centering
\includegraphics[width=1\linewidth]{figs/fig5}
\caption{(Color online) (a) High-harmonic spectra from 1D Be (thick solid red), ${\rm Be}^+$ (thick dotted black), and ${\rm Be}^{2+}$ (dotted blue), respectively, calculated with the MCTDHF method with ten orbitals. The spectrum from ${\rm Be}^{2+}$ in the field of the rescattering electron, Eq.~(\ref{eq:rsc-electron-field}), is also shown (thin dashed green). Note that the simulations have been started with Be, ${\rm Be}^+$, and ${\rm Be}^{2+}$, respectively, as the initial state. The thick solid red curve (Be) is the same as that in Fig.~\ref{fig:Graph0} and the thick solid black curve in Fig.~\ref{fig:Graph3}.
(b) Excitation spectra of 1D Be, ${\rm Be}^+$, and ${\rm Be}^{2+}$, respectively, calculated through excitation by a quasi-delta-function pulse. The highest peak is located at 30.6 eV and corresponds to the first excited state of 1D ${\rm Be}^{2+}$. The gray vertical dashed line indicates the position of 30.6 eV.}
\label{fig:Graph1}
\end{figure}
It is expected from this scenario that photons at the peak are emitted at any time within an optical cycle, while those in the first and second plateaus are emitted upon recombination of the rescattering electrons, the second plateau being delayed by a half cycle with respect to the first one. These expectations are confirmed by Fig.~\ref{fig:time-frequency-analysis}, which shows the time-frequency analysis of HHG from ${\rm Be}^+$ by a $5.2\times 10^{14}\,{\rm W/cm}^2$ flat-top pulse with a half-cycle ramp-on. We can recognize the typical arcs corresponding to the first (from ${\rm Be}^+$) and second (from ${\rm Be}^{2+}$) plateaus from the second and third half cycles, respectively. On the other hand, as expected, we see constant strong emission around 30 eV, which indicates that it is not due to recombination. A non-Born-Oppenheimer study on molecules \cite{Bandrauk2008PRL}, using the nuclear dynamics as a clock, may help disentangle these processes even more clearly.
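The time-frequency map is a Gabor transform of the dipole acceleration. A minimal (slow but direct) sketch with a Gaussian window; the 0.095-cycle window of the figure corresponds to one particular width, treated here as a free parameter:

```python
import numpy as np

# Sketch: Gabor transform |integral of a(t) g(t - t0) exp(-i w t) dt|^2
# with a Gaussian window g of width sigma, evaluated by direct summation.
def gabor(accel, t, sigma, omegas, t0s):
    dt = t[1] - t[0]
    out = np.empty((len(omegas), len(t0s)))
    for i, w in enumerate(omegas):
        phase = accel * np.exp(-1j * w * t)
        for j, t0 in enumerate(t0s):
            win = np.exp(-0.5 * ((t - t0) / sigma) ** 2)
            out[i, j] = np.abs(np.sum(phase * win) * dt) ** 2
    return out
```

A signal oscillating at a single frequency yields, at each window center, a spectral maximum at that frequency, as expected.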
The processes leading to the harmonic spectrum shown in Fig.~\ref{fig:Graph0} are summarized as follows. The laser intensity is so high that the neutral Be is completely depleted in the early stage of the pulse and hardly contributes to the spectrum. ${\rm Be}^+$ plays a dominant role in the formation of the first plateau and cutoff. Unexpectedly, the rescattering electron emitted from ${\rm Be}^+$ does not only contribute to the first plateau but also greatly enhances the response of ${\rm Be}^{2+}$, which leads to the formation of the sharp peak at 30.6 eV and the second plateau. It should be emphasized that, in contrast to resonance-induced enhancement mechanisms previously reported \cite{1367-2630-15-1-013051,Pabst2013PRL}, the peak is {\it not} related with the resonant excitation of ${\rm Be}^+$ but with that of ${\rm Be}^{2+}$.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{figs/time-frequency-analysis-new}
\caption{(Color online) Time-frequency analysis, or Gabor transform with a temporal window size of 0.095 cycle, of HHG from ${\rm Be}^+$ by a $5.2\times 10^{14}\,{\rm W/cm}^2$ flat-top pulse with a half-cycle ramp-on. Horizontal dash-dotted line: position of 30.6 eV. Dotted (dashed) curves: classically calculated kinetic energies of the returning electrons plus the ionization energy of ${\rm Be}^+$ (${\rm Be}^{2+}$). Note that the energies of the quantum trajectories are slightly higher than those of the classical ones due to the finite distance between the origin and the tunnel exit \cite{Lewenstein1994PRA,Ishikawa2010INTECH}.}
\label{fig:time-frequency-analysis}
\end{figure}
In conclusion, we have investigated high-harmonic generation from a 1D Be model atom using the all-electron TD-CASSCF method. In addition to the main plateau formed by HHG from ${\rm Be}^+$, we have found two prominent features: a second plateau and a peak that is ca. $10^3$ times higher than the main plateau. Thanks to the flexible subspace divisions characteristic of the TD-CASSCF method, we have identified both as originating from ${\rm Be}^{2+}$. This response of ${\rm Be}^{2+}$, produced via tunneling ionization of Be and ${\rm Be}^+$, is, however, totally different from that in the case where ${\rm Be}^{2+}$ is irradiated from the beginning. The recollision of the electron ejected from ${\rm Be}^{+}$ leads to a dramatic enhancement of the response of ${\rm Be}^{2+}$ by exciting it and greatly facilitating subsequent tunneling ionization. Thus, electron correlation plays an essential role in the appearance of the prominent peak and the second plateau. Although we have specifically treated a 1D Be model atom, HHG enhancement by electron correlation as revealed here is presumably common to real 3D target atoms and molecules. Whereas a resonance peak may be hidden under the main plateau, the enhancement of an extended plateau is expected to be more easily observable. The present study opens a new possibility to study multielectron effects and electron correlation using high-harmonic generation, which is usually considered a predominantly single-electron process.
We thank Armin Scrinzi for fruitful discussions.
This research is supported in part by Grant-in-Aid for Scientific
Research (No.~25286064, No.~26390076, No.~26600111, and No.~16H03881) from the Ministry of Education, Culture, Sports, Science and
Technology (MEXT) of Japan, and also by the Photon Frontier Network Program of MEXT.
This research is also partially
supported by the Center of Innovation Program from the Japan Science and
Technology Agency, JST, and by CREST, JST.
\nocite{*}
\section*{{\sc Abstract}} \addcontentsline{toc}{section}{\sc Abstract}
Modern population genetics studies typically involve genome-wide genotyping of individuals from a diverse network of ancestries. An important problem is how to formulate and estimate probabilistic models of observed genotypes that account for complex population structure. The most prominent work on this problem has focused on estimating a model of admixture proportions of ancestral populations for each individual. Here, we instead focus on modeling variation of the genotypes without requiring a higher-level admixture interpretation. We formulate two general probabilistic models, and we propose computationally efficient algorithms to estimate them. First, we show how principal component analysis (PCA) can be utilized to estimate a general model that includes the well-known Pritchard-Stephens-Donnelly admixture model as a special case. Noting some drawbacks of this approach, we introduce a new ``logistic factor analysis'' (LFA) framework that seeks to directly model the logit transformation of probabilities underlying observed genotypes in terms of latent variables that capture population structure. We demonstrate these advances on data from the Human Genome Diversity Panel and 1000 Genomes Project, where we are able to identify SNPs that are highly differentiated with respect to structure while making minimal modeling assumptions.
\section{{\sc Introduction}}
Understanding genome-wide genetic variation among individuals is one of the primary goals of modern human genetics. Genome-wide association studies aim to identify genetic variants throughout the entire genome that are associated with a complex trait \cite{mccarthy2008,Frazer2009,Consortium2007}. One of the major challenges in analyzing these studies is the problem of spurious associations due to population structure \cite{Pritchard1999}, and methods to deal with this are still in development \cite{astle2009,Price2010,Kang2010}. A related effort is underway to provide a comprehensive, genome-wide understanding of how genetic variation among humans is driven by evolutionary and demographic forces \cite{jorde2001}. A rigorous characterization of this variation will lead to a better understanding of the history of migration, expand our ability to identify signatures of natural selection, and provide important insights into the mechanisms of human disease \cite{nielsen2007,Rosenberg2002}. For example, the Human Genome Diversity Project (HGDP) is an international project that has genotyped a large collection of DNA samples from individuals distributed around the world, aiming to assess worldwide genetic diversity at the genomic level \cite{Cann2002,Rosenberg2002,Rosenberg2005}. The 1000 Genomes Project (TGP) is comprehensively cataloging human genetic variation by producing complete genome sequences of well over 1000 individuals of diverse ancestries \cite{Consortium2010}.
Systematically characterizing genome-wide patterns of genetic variation is difficult due to the numerous and complex forces driving variation. There is a fundamental need to provide probabilistic models of observed genotypes in the presence of complex population structure. A series of influential publications have proposed methods to estimate a model of admixture, where the primary focus is on the admixture proportions themselves \cite{PritchardStephens2000,Tang2005,Alexander:2009p2792}, which in turn may produce estimates of the allele frequencies of every genetic marker for each individual. Here, we instead focus directly on these individual-specific allele frequencies, which gives us potential advantages in terms of accuracy and computational efficiency.
We propose two flexible genome-wide models of individual-specific allele frequencies as well as methods to estimate them. First, we develop a model that includes as special cases the aforementioned models; specifically, the Balding-Nichols (BN) model \cite{Balding1995} and its extension to the Pritchard-Stephens-Donnelly (PSD) model \cite{PritchardStephens2000}. However, we identify some limitations of our method to estimate this model. We therefore propose an alternative model based on the log-likelihood of the data that allows for rapid estimation of allele frequencies while maintaining a valid probabilistic model of genotypes.
The estimate of the first model is based on principal component analysis (PCA), which is a tool often applied to genome-wide data of genetic variation in order to uncover structure. One of the earliest applications of PCA to population genetic data was carried out by Menozzi et al. \cite{Menozzi1978}. Exploratory analysis of complex population structure with PCA has been thoroughly studied \cite{Menozzi1978,Sokal1999a,Rendine1999,Novembre2008,Manni2010}. We show that a particular application of PCA can also be used to estimate allele frequencies in highly structured populations, although we have to deal with the fact that PCA is a real-valued operation and is not guaranteed to produce allele frequency estimates that lie in the unit interval [0,1].
The estimate of the second model is based on generalized factor analysis approaches that directly model latent structure in observed data, including categorical data \cite{BKM2011} such as genotypes. We utilize a factor model of population structure \cite{Engelhardt2010} in terms of nonparametric latent variables, and we propose a method called ``logistic factor analysis'' (LFA) that extends the PCA perspective towards likelihood-based probabilistic models and statistical inference. LFA is shown to provide accurate and interpretable estimates of individual-specific allele frequencies for a wide range of population structures. At the same time, this proposed approach provides visualizations and numerical summaries of structure similar to those of PCA, building a convenient bridge from exploratory data analysis to probabilistic modeling.
We compare our proposed methods to existing algorithms (ADMIXTURE \cite{Alexander:2009p2792} and fastStructure \cite{Raj2013}) and show that when the goal is to estimate all individual-specific allele frequencies, our proposed approaches are conclusively superior in both accuracy and computational speed. We apply the proposed methods to the HGDP and TGP data sets, which allows us to estimate allele frequencies of every SNP in an individual-specific manner. Using LFA, we are also able to rank SNPs for differentiation according to population structure based on the likelihoods of the fitted models. In both data sets, the most differentiated SNP is proximal to {\em SLC24A5}, and the second most differentiated SNP is proximal to {\em EDAR}. Variation in both of these genes has been hypothesized to be under positive selection in humans. In the TGP data set, the second most different SNP is rs3827760, which confers a missense mutation in {\em EDAR} and has been recently experimentally validated as having a functional role in determining a phenotype \cite{Kamberov:2013p2731}. We also identify several SNPs that are highly differentiated in these global human studies that have recently been associated with diseases such as cancer, obesity, and asthma.
\section{{\sc Methods}}
\subsection{Models of Allele Frequencies}
\label{allelefreqs}
It is often the case that human and other outbred populations are ``structured'' in the sense that the genotype frequencies at a particular locus are not homogeneous throughout the population \cite{astle2009}. Geographic characterizations of ancestry often explain differing genotype frequencies among subpopulations. For example, an individual of European ancestry may receive a particular genotype according to a probability different than an individual of Asian ancestry. This phenomenon has been observed not only across continents, but on very fine scales of geographic characterizations of ancestry. Recent studies have shown that population structure in human populations is quite complex, occurring more on a continuous rather than a discrete basis \cite{Rosenberg2002}. We can illustrate the spectrum of structural complexity with Figure \ref{fig:cluster}, which shows dendrograms of hierarchically clustered individuals from the HapMap (phase II), HGDP, and TGP data sets. The HapMap samples strongly indicate explicit membership of each individual to one of three discrete subpopulations (due to the intended sampling scheme). On the other hand, the clusterings of the HGDP and TGP individuals show a very complex configuration, more representative of random sampling of global human populations.
Let us introduce $\bmm{Z}$ as an unobserved variable capturing an individual's structure. Let $x_{ij}$ be the observed genotype for SNP $i$ and individual $j$ ($i=1,\ldots,m$, $j=1,\ldots,n$), and assume that $x_{ij}$ is coded to take the values $0, 1, 2$. We will call the observed $m \times n$ genotype matrix $\bm{{\rm X}}$. For SNP $i$, the allele frequency can be viewed as a function of $\bmm{Z}$, i.e., $\pi_i (\bmm{Z})$. For a sampled individual $j$ from an overall population, we have ``individual-specific allele frequencies'' \cite{Thornton:2012p2787} defined as $\pi_{ij} \equiv \pi_{i}(\bmm{z}_j)$ at SNP $i$. Each value of $\pi_{ij}$ informs us of the expectation of that particular SNP/individual pair, supposing we observed a new individual at that locus with the same structure; i.e., ${\rm E}[x_{ij}]/2 = \pi_{ij}$. If an observed SNP genotype $x_{ij}$ is treated as a random variable, then under Hardy-Weinberg equilibrium $\pi_{ij}$ serves to model $x_{ij}$ as a Binomial parameter: $x_{ij} \sim \mbox{Binomial}(2,\pi_{ij})$. The focus of this paper is on the simultaneous estimation of all $m \times n$ values $\pi_{ij}$.
The flexible, accurate, and computationally efficient estimation of individual-specific allele frequencies is important for population genetic analyses. For example, Corona et al. (2013) \cite{Corona:2013p2708} recently showed that considering the worldwide distribution of allele frequencies of SNPs known to be associated with human diseases may be a fundamental component to understanding the relationship between ancestry and disease. Testing for Hardy-Weinberg equilibrium reduces to testing whether the genotype frequencies for SNP $i$ follow probabilities $\pi_{ij}^2$, $2\pi_{ij}(1-\pi_{ij})$, and $(1-\pi_{ij})^2$ for all individuals $j=1, \ldots, n$. It can be shown that the well-known ${\rm F}_{\rm ST}$ measure can be characterized for SNP $i$ using values of $\pi_{ij}$, $j=1, 2, \ldots, n$ (Section \ref{fst}). Finally, we have recently developed a test of association that corrects for population structure and involves the estimation of $\log\left(\frac{\pi_{ij}}{1-\pi_{ij}}\right)$ \cite{Song2013}. Therefore, flexible and well-behaved estimates of the individual-specific allele frequencies $\pi_{ij}$ are needed for downstream population genetic analyses.
It is straightforward to write other models of population structure in terms of $\bmm{Z}$. For the Balding-Nichols model, each individual is assigned to a population, thus $\bmm{z}_j$ indicates individual $j$'s population assignment. For the Pritchard-Stephens-Donnelly (PSD) model, each individual is considered to be an admixture of a finite set of ancestral populations. Following the notation of \cite{PritchardStephens2000}, we can write $\bmm{z}_j$ as a vector with elements $q_{kj}$, where $k$ indexes the ancestral populations, and we constrain $q_{kj}$ to be between 0 and 1 subject to $\sum_k q_{kj} =1$. Assuming the PSD model allows us to write each $\pi_{ij} = \sum_k p_{ik} q_{kj}$ and leads to a matrix form: $\bm{{\rm F}} = \bm{{\rm P}} \bm{{\rm Q}}$, where $\bm{{\rm F}}$ is the $m \times n$ matrix of allele frequencies with $(i,j)$ entry $\pi_{ij}$, $\bm{{\rm P}}$ is the $m \times d$ matrix of ancestral population allele frequencies $p_{ik}$, and $\bm{{\rm Q}}$ is the $d \times n$ matrix of admixture proportions. The elements of $\bm{{\rm P}}$ and $\bm{{\rm Q}}$ are explicitly restricted to the range $[0,1]$.
The PSD model is focused on the matrices $\bm{{\rm P}}$ and $\bm{{\rm Q}}$, which have standalone interpretations, but we aim instead to estimate all $\pi_{ij}$ with a high level of accuracy and computational efficiency. Writing the structure of the allele frequency matrix $\bm{{\rm F}}$ as a linear basis, we have:
\begin{equation}\label{eq:pi}
\mbox{\textbf{Model 1:}\ \ \ \ \ } \bm{{\rm F}}=\bmm{\Gamma} \bm{{\rm S}}
\end{equation}
where $\bmm{\Gamma}$ is $m \times d$ and $\bm{{\rm S}}$ is $d\times n$ with $d \leq n$. The $d\times n$ matrix $\bm{{\rm S}}$ encapsulates the genetic population structure for these individuals since $\bf S$ is not SNP-specific. The $m \times d$ matrix $\bmm{\Gamma}$ maps how the structure $\bm{{\rm S}}$ is manifested in the allele frequencies. Operationally, each SNP's vector of allele frequencies is a linear combination of the rows of $\bm{{\rm S}}$, where the linear weights for SNP $i$ are contained in row $i$ of $\bmm{\Gamma}$. We define the dimension $d$ so that $d=1$ corresponds to the case of no structure: when $d=1$, $\bm{{\rm S}} = (1, 1, \ldots, 1)$ and $\bmm{\Gamma}$ is the column vector of marginal allele frequencies.
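One natural way to fit Model 1, in the spirit of the PCA-based estimator mentioned in the Introduction, is a rank-$d$ least-squares approximation of the empirical frequencies $\bm{{\rm X}}/2$ via a truncated SVD. This sketch is one reading of that construction (the paper's exact estimator may differ), and it makes visible the drawback noted earlier: the fitted entries are not guaranteed to lie in $[0,1]$.

```python
import numpy as np

# Sketch: rank-d fit of Model 1, F-hat = Gamma S, by truncated SVD of
# X/2 (a least-squares reading of the PCA approach; entries of F-hat
# are not constrained to the unit interval).
def model1_fit(X, d):
    U, s, Vt = np.linalg.svd(X / 2.0, full_matrices=False)
    Gamma = U[:, :d] * s[:d]   # m x d loadings
    S = Vt[:d, :]              # d x n structure matrix
    return Gamma, S
```

When the underlying frequency matrix is exactly rank $d$ (as under the PSD factorization $\bm{{\rm F}} = \bm{{\rm P}}\bm{{\rm Q}}$) and genotypes are replaced by their noiseless expectations, the fit recovers $\bm{{\rm F}}$ exactly.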
This model is not necessarily the most effective way to estimate $\pi_{ij}$ when working in the context of a probabilistic model or with the likelihood function given the data. Model 1 resembles linear regression, where the allele frequencies are treated as a real-valued response variable that is linearly dependent on the structure. A version of regression for the case of categorical response variables (e.g., genotypes) with underlying probability parameters is logistic regression. We developed an approach we call ``logistic factor analysis'', which is essentially an extension of nonparametric factor analysis to $\{0,1,2\}$ valued genotype data. Much of the justification for LFA is similar to that of {\em generalized linear models} \cite{GLM}.
The log-likelihood is the preferred mathematical framework for representing the information the data contain about unknown parameters \cite{TPE}. Suppose that Hardy-Weinberg equilibrium holds such that $x_{ij} \sim \mbox{Binomial}(2, \pi_{ij})$. We can write the log-likelihood of the data for SNP $i$ and individual $j$ as:
\begin{align*}
\ell(\pi_{ij} | x_{ij}) &= \log \left( {\rm Pr}(x_{ij} | \pi_{ij}) \right) \propto \log \left( \pi_{ij}^{x_{ij}} (1-\pi_{ij})^{2-x_{ij}} \right)\\
&= x_{ij} \log\left( \frac{\pi_{ij}}{1-\pi_{ij}} \right) + 2\log(1-\pi_{ij}).
\end{align*}
The log-likelihood of SNP $i$ for all unrelated individuals is the sum: $\sum_{j=1}^{n} \ell(\pi_{ij} | x_{ij})$. The term $\log\left( \frac{\pi_{ij}}{1-\pi_{ij}} \right)$ is the logit function and is written as ${\rm logit}(\pi_{ij})$. ${\rm logit}(\pi_{ij})$ is called the ``natural parameter'' or ``canonical parameter'' of the Binomial distribution and is the key component of logistic regression. An immediate benefit of working with ${\rm logit}(\pi_{ij})$ is that it is real valued, which allows us to directly model ${\rm logit}(\pi_{ij})$ with a linear basis.
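As a minimal numerical sketch of the log-likelihood above (our illustration, not part of the original analysis; the function names are assumptions):

```python
import numpy as np

def binomial_loglik(x, pi):
    """Log-likelihood (up to an additive constant) of a genotype
    x in {0, 1, 2} under x ~ Binomial(2, pi), written so that the
    logit (natural parameter) term is explicit."""
    return x * np.log(pi / (1 - pi)) + 2 * np.log(1 - pi)

def snp_loglik(x_row, pi_row):
    """The SNP-level log-likelihood: the sum over unrelated individuals."""
    return float(np.sum(binomial_loglik(np.asarray(x_row), np.asarray(pi_row))))
```

The first term, $x_{ij}\,{\rm logit}(\pi_{ij})$, is what makes a linear model on the logit scale natural.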
Let $\bm{{\rm L}}$ be the $m \times n$ matrix with $(i,j)$ entry equal to ${\rm logit}(\pi_{ij})$. We formed the following parameterization of $\bm{{\rm L}}$:
\begin{equation} \label{eq:logitmodel}
\mbox{\textbf{Model 2:}\ \ \ \ \ } \bm{{\rm L}}=\bm{{\rm A}} \bm{{\rm H}}
\end{equation}
where $\bm{{\rm A}}$ is $m\times d$ and $\bm{{\rm H}}$ is $d\times n$ with $d \leq n$. In this case we can write
$$
{\rm logit}(\pi_{ij}) = \sum_{k=1}^{d} a_{ik} h_{kj},
$$
where all parameters are free to span the real numbers $\mathbb{R}$.
We call the rows of $\bm{{\rm H}}$ ``logistic latent factors'' or just ``logistic factors'' as they represent unobserved variables that explain the inter-individual differences in allele frequencies. In other words, the ${\rm logit}$ of the vector of individual-specific allele frequencies for SNP $i$ can be written as a linear combination of the rows of $\bm{{\rm H}}$:
$$
[{\rm logit}(\pi_{i1}), \ldots, {\rm logit}(\pi_{in})] = {\rm logit}(\bm{\pi}_i) = \sum_{k=1}^{d} a_{ik} \bm{h}_k,
$$
where $\bm{h}_k$ is the $k$th row of $\bm{{\rm H}}$. Likewise, we can write:
$$
(\pi_{i1}, \ldots, \pi_{in}) = \bm{\pi}_i = \frac{\exp\left[\sum_{k=1}^{d} a_{ik} \bm{h}_k\right]}{1 + \exp\left[\sum_{k=1}^{d} a_{ik} \bm{h}_k\right]}.
$$
The relationship between our proposed LFA approach and existing approaches of estimating latent variables in categorical data is detailed in Section~\ref{existingmethods}. Specifically, it should be noted that even though we propose calling the approach ``logistic factor analysis'', we do not make any assumptions about the distribution of the factors (which are often assumed to be Normal). A technically more detailed name of the method is a ``logistic nonparametric linear latent variable model for Binomial data.''
\subsection{Estimation and Algorithms}
\label{algs}
The two models presented earlier make minimal assumptions as to the nature of the structure. For example, in Model 1, both $\bmm{\Gamma}$ and $\bm{{\rm S}}$ are real valued. This allows us to apply an efficient PCA-based algorithm directly to the genotype matrix $\bm{{\rm X}}$, obtaining estimates of $\bt{\bm{{\rm F}}}$, $\bt{\bmm{\Gamma}}$, and $\bt{\bm{{\rm S}}}$. In essence, $\bt{\bm{{\rm F}}}$ is estimated by forming the projection of $\bm{{\rm X}}/2$ onto the top $d$ principal components of $\bm{{\rm X}}$ with an explicit intercept for the $d=1$ case. One drawback of this approach is that because PCA is designed for continuous data, we have to artificially constrain $\bt{\bm{{\rm F}}}$ to be in the range $[0,1]$. However, we show below that $\bt{\bm{{\rm F}}}$ is still an extremely accurate estimate of the allele frequencies $\bm{{\rm F}}$ for all formulations of $\bm{{\rm F}}$ considered here, including the PSD model.
\vspace{\baselineskip}
\noindent
\textbf{Algorithm 1}: Estimating $\bm{{\rm F}}$ from PCA
\begin{enumerate}
\item Let $\bt{\mu}_i$ be the sample mean of row $i$ of $\bm{{\rm X}}$. Set $x^*_{ij} = x_{ij} - \bt{\mu}_i$ and let $\bm{{\rm X}}^*$ be the $m \times n$ matrix with $(i,j)$ entry $x^*_{ij}$.
\item Perform singular value decomposition (SVD) on $\bm{{\rm X}}^*$ which decomposes $\bm{{\rm X}}^* = \bm{{\rm U}} \bmm{\Delta} \bm{{\rm V}}^T$. Note that the rows of $\bmm{\Delta} \bm{{\rm V}}^T$ are the $n$ row-wise principal components of $\bm{{\rm X}}^*$ and $\bm{{\rm U}}$ are the principal component loadings.
\item Let $\bt{\bm{{\rm X}}}^*_{d-1}$ be the projection of $\bm{{\rm X}}^*$ onto the top $d-1$ singular vectors of this SVD, $\bt{\bm{{\rm X}}}_{d-1}^* = \bm{{\rm U}}_{1:(d-1)} \bmm{\Delta}_{1:(d-1)} \bm{{\rm V}}_{1:(d-1)}^T$.
\item \label{fstar} Construct $\bt{\bm{{\rm F}}}^*$ by adding $\bt{\mu}_i$ to row $i$ of $\bt{\bm{{\rm X}}}^*_{d-1}$ (for $i=1, \ldots, m$) and multiplying the resulting matrix by $1/2$. In mathematical terms, $\bt{\bm{{\rm F}}}^* = \bt{\bmm{\Gamma}} \bt{\bm{{\rm S}}}$ where
\begin{align*}
\bt{\bmm{\Gamma}} &= \begin{pmatrix}
\ & \frac{1}{2}\bt{\mu}_1 \\
\frac{1}{2} \bm{{\rm U}}_{1:(d-1)} \bmm{\Delta}_{1:(d-1)} & \vdots \\
\ & \frac{1}{2}\bt{\mu}_m
\end{pmatrix} \\
&= \begin{pmatrix}
\frac{1}{2} u_{11} \delta_{1} & \cdots & \frac{1}{2} u_{1,d-1} \delta_{d-1} & \frac{1}{2}\bt{\mu}_1 \\
\frac{1}{2} u_{21} \delta_{1} & \cdots & \frac{1}{2} u_{2,d-1} \delta_{d-1} & \frac{1}{2}\bt{\mu}_2 \\
\vdots & \ & \vdots & \vdots \\
\frac{1}{2} u_{m1} \delta_{1} & \cdots & \frac{1}{2} u_{m,d-1} \delta_{d-1} & \frac{1}{2}\bt{\mu}_m
\end{pmatrix}, \\
\bt{\bm{{\rm S}}} &= \begin{pmatrix}
\bm{{\rm V}}_{1:(d-1)}^T \\
1 \; 1\; \ldots \; 1
\end{pmatrix} \\
&= \begin{pmatrix}
v_{11} & v_{21} & \cdots & v_{n1} \\
v_{12} & v_{22} & \cdots & v_{n2} \\
\vdots & \vdots & \ & \vdots \\
v_{1,d-1} & v_{2,d-1} & \cdots & v_{n,d-1} \\
1 & 1 & \cdots & 1
\end{pmatrix},
\end{align*}
and $\delta_i$ is the $i$th diagonal entry of $\bmm{\Delta}$. Let $\bt{\pi}^*_{ij}$ be the $(i,j)$ entry of $\bt{\bm{{\rm F}}}^*$.
\item Since some $\bt{\pi}^*_{ij}$ may satisfy $\bt{\pi}^*_{ij} < 0$ or $\bt{\pi}^*_{ij} > 1$, we truncate these. The final PCA based estimate of $\bm{{\rm F}}$ is formed as $\widetilde{\bm{{\rm F}}}$ where the $(i,j)$ entry $\widetilde{\pi}_{ij}$ is defined to be
$$
\widetilde{\pi}_{ij} = \begin{cases} C & \mbox{if } \bt{\pi}^*_{ij} \leq C
\\ \bt{\pi}^*_{ij} & \mbox{if } C < \bt{\pi}^*_{ij} < 1-C
\\ 1-C & \mbox{if } \bt{\pi}^*_{ij} \geq 1-C
\end{cases}
$$
for some $C \gtrsim 0$. An estimate of $\bm{{\rm L}}$ can be formed as $\bt{\bm{{\rm L}}} = {\rm logit}(\bt{\bm{{\rm F}}})$.
\end{enumerate}
\noindent Here we used $C = \frac{1}{2n}$. In summary, $\widetilde{\bm{{\rm F}}}$ is a projection of $\bm{{\rm X}}$ into its top principal components, scaled by $1/2$, and truncated so that all values lie in the interval $(0,1)$.
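Algorithm 1 can be sketched compactly in Python/NumPy (our illustration, not the authors' implementation; `estimate_F_pca` and its defaults are assumptions):

```python
import numpy as np

def estimate_F_pca(X, d, C=None):
    """Sketch of Algorithm 1: PCA-based estimate of the m x n matrix F.
    X is the m x n genotype matrix with entries in {0,1,2}; d >= 1."""
    m, n = X.shape
    if C is None:
        C = 1.0 / (2 * n)
    mu = X.mean(axis=1, keepdims=True)            # row means (intercept term)
    Xc = X - mu                                   # centered genotypes X*
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    k = d - 1                                     # d=1 means intercept only
    proj = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :] if k > 0 else np.zeros_like(Xc)
    F_star = (proj + mu) / 2.0                    # un-truncated estimate F*
    F_tilde = np.clip(F_star, C, 1 - C)           # truncate into (0, 1)
    return F_star, F_tilde
```

With $d=1$ the projection term vanishes and each row of the estimate reduces to the SNP's marginal allele frequency, matching the no-structure case.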
For Model 2, we propose a method for estimating the latent variables $\bm{{\rm H}}$. Starting from the $\bt{\bm{{\rm F}}}$ found by Algorithm 1, we apply the ${\rm logit}$ transformation to the subset of rows where no values had to be adjusted for being $<0$ or $>1$, and then extract the right singular vectors of this transformed subset. As long as the subset is large enough to span the same space as the row space of $\bm{{\rm L}}$, this approach accurately estimates the basis of $\bm{{\rm H}}$. Next, we calculate the maximum likelihood estimate of $\bm{{\rm A}}$ given $\widehat{\bm{{\rm H}}}$ to yield $\widehat{\bm{{\rm A}}}$ and then $\widehat{\bm{{\rm L}}} = \widehat{\bm{{\rm A}}} \widehat{\bm{{\rm H}}}$. This involves performing a logistic regression of each SNP's data on $\widehat{\bm{{\rm H}}}$. In order to estimate the individual-specific allele frequency matrix $\bm{{\rm F}}$, we calculate $\widehat{\bm{{\rm F}}} = {\rm logit}^{-1}(\widehat{\bm{{\rm L}}})$. An important property to note is that all $\widehat{\pi}_{ij} \in (0,1)$ due to the fact that we are modeling the natural parameter.
\vspace{\baselineskip}
\noindent
\textbf{Algorithm 2}: Estimating Logistic Factors
\begin{enumerate}
\item Apply Algorithm 1 to obtain the estimate $\bt{\bm{{\rm F}}}^*$ from Step \ref{fstar}.
\item Recalling that $\bt{\pi}^*_{ij}$ is the $(i,j)$ entry of $\bt{\bm{{\rm F}}}^*$, we choose some $C \gtrsim 0$ and form $$\mathcal{S} = \{i: C<\bt{\pi}^*_{ij}<1-C, \forall j=1,...,n\}.$$ $\mathcal{S}$ identifies the rows of $\bt{\bm{{\rm F}}}^*$ where the ${\rm logit}$ function can be applied stably. Here we use $C = \frac{1}{2n}$.
\item Define $\bt{\bm{{\rm F}}}_\mathcal{S}$ to be the corresponding subset of rows of $\bt{\bm{{\rm F}}}^*$, and calculate $\bt{\bm{{\rm L}}}_\mathcal{S} = {\rm logit}\left(\bt{\bm{{\rm F}}}_\mathcal{S} \right)$. Let $\bt{\bm{{\rm L}}}_\mathcal{S}'$ be the matrix obtained from $\bt{\bm{{\rm L}}}_\mathcal{S}$ by row-wise mean centering and scaling each row by its standard deviation.
\item Perform SVD on $\bt{\bm{{\rm L}}}_\mathcal{S}'$ resulting in $\bt{\bm{{\rm L}}}_\mathcal{S}' = \bm{{\rm T}} \bm{\Lambda} \bm{{\rm W}}^T$. Set $\widehat{\bm{{\rm H}}}$ to be the $d \times n$ matrix composed of the top $d-1$ right singular vectors of this SVD stacked on the row $n$-vector $(1, 1, \cdots, 1)$:
\begin{align*}
\widehat{\bm{{\rm H}}} &= \begin{pmatrix}
\ & \ & \bm{{\rm W}}^{T}_{1:(d-1)} & \ & \ \\
1 & 1 & \cdots & 1 & 1
\end{pmatrix}\\
&= \begin{pmatrix}
w_{11} & w_{21} & \cdots & w_{n1} \\
w_{12} & w_{22} & \cdots & w_{n2} \\
\vdots & \vdots & \ & \vdots \\
w_{1,d-1} & w_{2,d-1} & \cdots & w_{n,d-1} \\
1 & 1 & \cdots & 1
\end{pmatrix}.
\end{align*}
\end{enumerate}
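Algorithm 2 can likewise be sketched in Python/NumPy (our illustration; `estimate_H` is an assumed name, and `F_star` is the un-truncated estimate from Algorithm 1):

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def estimate_H(F_star, d, C=None):
    """Sketch of Algorithm 2: estimate the d x n logistic-factor matrix H."""
    m, n = F_star.shape
    if C is None:
        C = 1.0 / (2 * n)
    # keep only the rows S where the logit can be applied stably
    keep = np.all((F_star > C) & (F_star < 1 - C), axis=1)
    L_S = logit(F_star[keep])
    # row-wise center and scale before the SVD
    L_S = (L_S - L_S.mean(axis=1, keepdims=True)) / L_S.std(axis=1, keepdims=True)
    _, _, Wt = np.linalg.svd(L_S, full_matrices=False)
    # top d-1 right singular vectors stacked on a row of ones (intercept)
    return np.vstack([Wt[: d - 1], np.ones(n)])
```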
\vspace{\baselineskip}
\noindent
\textbf{Algorithm 3}: Estimating $\bm{{\rm F}}$ and $\bm{{\rm L}}$ from LFA
\label{alg3}
\begin{enumerate}
\item Apply Algorithm 2 to $\bm{{\rm X}}$ to obtain $\widehat{\bm{{\rm H}}}$.
\item For each SNP $i$, perform a logistic regression of the SNP genotypes $\bm{{\rm x}}_i = (x_{i1}, x_{i2}, \ldots, x_{in})$ on the rows of $\widehat{\bm{{\rm H}}}$, specifically by maximizing the log-likelihood
$$
\ell(\bm{\pi}_i | \bm{{\rm x}}_i, \widehat{\bm{{\rm H}}}) = \sum_{j=1}^n x_{ij} \log\left( \frac{\pi_{ij}}{1-\pi_{ij}} \right) + 2\log(1-\pi_{ij})
$$
under the constraint that ${\rm logit}(\pi_{ij}) = \sum_{k=1}^{d} a_{ik} \widehat{h}_{kj}$. It should be noted that an intercept is included because $\widehat{h}_{dj}=1$ $\forall j$ by construction.
\item Set $\widehat{a}_{ik}$ ($k=1, \ldots, d$) to be equal to the maximum likelihood estimates from the above model fit, for each of $i=1,\ldots, m$. Let $\widehat{\bm{{\rm L}}} = \widehat{\bm{{\rm A}}} \widehat{\bm{{\rm H}}}$, $\widehat{\bm{{\rm F}}} = {\rm logit}^{-1}(\widehat{\bm{{\rm L}}})$, and $\widehat{\pi}_{ij}$ be the $(i,j)$ entry of $\widehat{\bm{{\rm F}}}$:
$$
\widehat{\pi}_{ij} = \frac{\exp \left\{ \sum_{k=1}^{d} \widehat{a}_{ik} \widehat{h}_{kj} \right\}}{1 + \exp \left\{ \sum_{k=1}^{d} \widehat{a}_{ik} \widehat{h}_{kj} \right\}}.
$$
\end{enumerate}
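Step 2 of Algorithm 3 fits, for each SNP, a Binomial logistic regression on the rows of $\widehat{\bm{{\rm H}}}$. The text does not specify an optimizer; the Newton/IRLS sketch below is one standard choice (Python/NumPy, our names):

```python
import numpy as np

def fit_snp_logistic(x, H, n_iter=50):
    """Newton/IRLS fit of the coefficient vector a_i for one SNP.
    x: length-n genotype vector in {0,1,2}; H: d x n factor matrix whose
    last row is all ones, so no separate intercept is needed.
    No step-size control: adequate for well-conditioned fits."""
    d, n = H.shape
    a = np.zeros(d)
    for _ in range(n_iter):
        pi = 1.0 / (1.0 + np.exp(-(a @ H)))   # individual-specific frequencies
        grad = H @ (x - 2.0 * pi)             # score of the Binomial(2, pi) model
        W = 2.0 * pi * (1.0 - pi)             # Fisher information weights
        a = a + np.linalg.solve((H * W) @ H.T, grad)
    pi = 1.0 / (1.0 + np.exp(-(a @ H)))
    return a, pi
```

With an intercept-only $H$ (i.e., $d=1$), the fitted $\pi$ reduces to the marginal allele frequency, as expected.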
PCA-based estimation of Model 1 requires one application of SVD and LFA requires two applications of SVD. We leverage the fact that $n \gg d$ to utilize Lanczos bidiagonalization, an iterative method for computing the SVD of a matrix \cite{Baglama2006}. Lanczos bidiagonalization excels at computing a few of the largest singular values and corresponding singular vectors of a sparse matrix. While genotype matrices are not especially sparse, we find that in practice this method is more effective for the above estimation algorithms than methods that compute all of the singular values and vectors. This results in a dramatic reduction of the computational time needed to implement our methods.
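As a stand-in for the Lanczos bidiagonalization method cited above (not the same algorithm), SciPy's ARPACK-based `svds` illustrates the computational point: only the top few singular triplets are computed, never the full decomposition.

```python
import numpy as np
from scipy.sparse.linalg import svds

def top_singular_vectors(X, k):
    """Compute only the top-k singular triplets of the row-centered matrix
    with an iterative solver, avoiding a full SVD when n >> k."""
    Xc = X - X.mean(axis=1, keepdims=True)
    U, s, Vt = svds(Xc, k=k)
    order = np.argsort(s)[::-1]   # svds does not guarantee descending order
    return U[:, order], s[order], Vt[order]
```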
\section{{\sc Results}}
We applied our methods to a comprehensive set of simulation studies and to the HGDP and TGP data sets.
\subsection{Simulation Studies}
To directly evaluate the performance of the estimation methods (Section~\ref{algs}), we devised a simulation study where we generated synthetic genotype data with varying levels of complexity in population structure. Genotypes were simulated based on allele frequencies subject to structure from the BN model, the PSD model, spatially structured populations, and real data sets. For the first three types of simulations, the allele frequencies were parameterized by Model 1, while for the real data simulations, the allele frequencies were taken from model fits on the data themselves.
A key property to assess is how well the estimation methods capture the overall structure. One way to evaluate this is to determine how well $\bt{\bm{{\rm S}}}$ from the PCA based method (Algorithm 1) estimates the true underlying $\bm{{\rm S}}$, and likewise how well $\widehat{\bm{{\rm H}}}$ from LFA estimates the true $\bm{{\rm H}}$. Note that even though the genotype data were generated from the $\bm{{\rm F}}$ of Model 1, we can evaluate $\widehat{\bm{{\rm H}}}$ by converting with $\bm{{\rm L}} = {\rm logit}(\bm{{\rm F}})$. To evaluate PCA, we regressed each row of $\bm{{\rm F}}$ on $\bt{\bm{{\rm S}}}$ and calculated the average $R^2$; similarly, for LFA we regressed each row of $\bm{{\rm L}}$ on $\widehat{\bm{{\rm H}}}$ and calculated the average $R^2$ value. The results are presented in Table \ref{tab:r2}. Both methods estimate the true latent structure well.
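The $R^2$ evaluation just described can be sketched as follows (our helper; it assumes the basis matrix already includes its intercept row, as $\bt{\bm{{\rm S}}}$ and $\widehat{\bm{{\rm H}}}$ do by construction):

```python
import numpy as np

def mean_r2(F, S_hat):
    """Mean R^2 when regressing each row of F (m x n) on the rows of
    S_hat (d x n), used to score how well the latent basis is recovered."""
    # least-squares fit of all rows at once: F ~ B.T @ S_hat
    B, *_ = np.linalg.lstsq(S_hat.T, F.T, rcond=None)
    resid = F - (S_hat.T @ B).T
    ss_res = (resid ** 2).sum(axis=1)
    ss_tot = ((F - F.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
    return float(np.mean(1 - ss_res / ss_tot))
```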
We specifically note that when the PSD model was utilized to simulate structure, we were able to recover the structure $\bm{{\rm S}}$ very well (Supplementary Figure \ref{fig:PSDrange}) without needing to employ the computationally intensive and assumption-heavy Bayesian model fitting techniques from ref. \cite{PritchardStephens2000}. Additionally, $\bt{\bm{{\rm S}}}$ largely captures the geometry of $\bm{{\rm S}}$, suggesting that $\bm{{\rm S}}$ could be recovered with a high degree of accuracy by transforming $\bt{\bm{{\rm S}}}$ back into the simplex. By comparing the results on the real data (Figures \ref{fig:HGDPpclf}-\ref{fig:TGPpclf}) with the simulated data (Supplementary Figure \ref{fig:PSDrange}), one is able to visually assess how closely the assumptions of the PSD model resemble real data sets. When we simulated structure that differed substantially from the assumptions of the PSD model, our estimation methods were able to capture that structure just as well (Supplementary Figure \ref{fig:Spatialrange}). This demonstrates the flexibility of the proposed approaches.
We also compared PCA and LFA to two methods of fitting the PSD model, ADMIXTURE \cite{Alexander:2009p2792} and fastStructure \cite{Raj2013}, by assessing how well the methods estimated the individual-specific allele frequencies $\pi_{ij}$ (Table \ref{tab:err}). For the real data scenarios, we generated synthetic genotypes based on estimates of $\bm{{\rm F}}$ from the four different methods, thus giving each method an opportunity to fit its own simulation. The methods were compared by computing three different error metrics with respect to the oracle $\bm{{\rm F}}$: Kullback-Leibler divergence, absolute error, and root mean squared error. PCA and LFA significantly outperformed ADMIXTURE and fastStructure, which confirms the intuitive understanding of the differences between the models: the goal of Models 1 and 2 is to estimate the allele frequencies $\pi_{ij}$, while the PSD model provides a probabilistic interpretation of the structure by modeling them as admixture proportions.
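The three error metrics can be computed per entry and summarized as below (our sketch; a per-entry Bernoulli KL is one natural reading of the KL metric, and the paper's exact aggregation may differ):

```python
import numpy as np

def freq_errors(F_true, F_hat):
    """KL divergence, mean absolute error, and RMSE of estimated
    individual-specific allele frequencies against the oracle F_true.
    Both matrices must have entries strictly inside (0, 1)."""
    p, q = F_true, F_hat
    kl = p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))
    return {
        "median_kl": float(np.median(kl)),
        "mean_abs": float(np.mean(np.abs(p - q))),
        "rmse": float(np.sqrt(np.mean((p - q) ** 2))),
    }
```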
The computational time required to perform the proposed methods was also significantly better than ADMIXTURE and fastStructure. Both proposed methods completed calculations on average over 10 times faster than ADMIXTURE and fastStructure, with some scenarios as high as 150 times faster. This is notable in that both ADMIXTURE and fastStructure are described as computationally efficient implementations of methods to estimate the PSD model \cite{Alexander:2009p2792,Raj2013}.
\subsection{Analysis of the HGDP and TGP Data}
We analyzed the HGDP and TGP data using the proposed methods. The HGDP data consisted of $n=940$ individuals and $m=431,345$ SNPs, and the TGP data consisted of $n=1500$ and $m=339,100$ (see Supplementary Section~\ref{realdata} for details). We first applied PCA and LFA to these data sets and made bi-plots of the top three PCs and top three LFs (Figures \ref{fig:HGDPpclf} and \ref{fig:TGPpclf}). It can be seen that PCA and LFA provide similar visualizations of the structure present in these data. We next chose a dimension $d$ for the LFA model (Model 2) for each data set. This was done by identifying the value of $d$ that provides the best overall goodness of fit with Hardy-Weinberg equilibrium (Supplementary Section~\ref{choosingd}). We identified $d=15$ for HGDP and $d=7$ for TGP based on this criterion.
One drawback of utilizing a PCA based approach (Algorithm 1) for estimating the individual-specific allele frequencies $\bm{{\rm F}}$ is that we are not guaranteed that all values of the estimates lie in $[0,1]$, so some form of truncation is necessary. We found that 65.4\% of the SNPs in the HGDP data set and 26.5\% in the TGP data set resulted in at least one estimated individual-specific allele frequency $< 0$ or $> 1$ before the truncation was applied. Therefore, the truncation in forming the estimate $\widetilde{\bm{{\rm F}}}$ is necessary when employing Algorithm 1 to estimate $\bm{{\rm F}}$ from Model 1. On the other hand, due to the formulation of Model 2, all estimated allele frequencies fall in the valid range when applying LFA (Algorithms 2 and 3).
The LFA framework provides a natural computational method for ranking SNPs according to how differentiated they are with respect to structure. Note that existing methods typically require one to first assign each individual to one of $K$ discrete subpopulations \cite{Coop:2009p1530}, which may make unnecessary assumptions on modern data sets such as HGDP and TGP. In order to rank SNPs for differentiation, we calculate the deviance statistic when performing a logistic regression of each SNP's genotypes on the logistic factors. Specifically, we calculated the deviance by comparing the models ${\rm logit}(\bm{\pi}_i) = a_{id} \bm{h}_d$ vs. ${\rm logit}(\bm{\pi}_i) = \sum_{k=1}^{d} a_{ik} \bm{h}_k$, where the former model is intercept only (i.e., $d=1$, no structure).
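Given the fitted frequency vectors from the two nested models, the deviance reduces to twice the log-likelihood difference (the Binomial constants cancel); a sketch, with our names:

```python
import numpy as np

def deviance(x, pi_full, pi_null):
    """Deviance statistic contrasting the full model (all d logistic
    factors) with the intercept-only (d=1) model, for one SNP's
    genotype vector x in {0,1,2}."""
    def ll(pi):
        return np.sum(x * np.log(pi) + (2 - x) * np.log(1 - pi))
    return 2.0 * (ll(np.asarray(pi_full)) - ll(np.asarray(pi_null)))
```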
Our application of LFA to identify SNPs with allele frequencies differentiated according to structure can be developed further. First, the recently proposed ``jackstraw'' approach \cite{Chung2013} provides a manner in which statistical significance can be assigned to these SNPs. Assigning statistical significance to the population differentiation of SNPs has traditionally been a difficult problem \cite{Akey2002}. Second, we found the deviance measure tends to have more extreme values for SNPs with larger minor allele frequencies (MAFs). Therefore, the ranking of SNPs may be made more informative if MAF is taken into account. Third, although this ranking is identifying differentiation and not specifically selection, it may provide a useful starting point in understanding methods that attempt to detect selection.
The most differentiated SNPs (Supplementary Tables \ref{tab:HGDPtop} and \ref{tab:TGPtop}) reveal some noteworthy results, especially considering the flexible approach to forming the ranking. SNPs located within or very close to {\em SLC24A5} were the top ranked in both HGDP and TGP. This gene is well known to be involved in determining skin pigmentation in humans \cite{Lamason:2005fj} and is hypothesized to have been subject to positive selection \cite{Sabeti2007}. The next most highly ranked SNPs in both studies are located in {\em EDAR}, which plays a major role in distinguishing phenotypes (e.g., hair follicles) among Asians. SNP rs3827760, the second most differentiated SNP in the TGP data, has also been hypothesized to be under positive selection in humans, and its causal role in the hair follicle phenotype has been verified in a mouse model \cite{Kamberov:2013p2731}. SNPs corresponding to these two genes for both studies are plotted in increasing order of $\widehat{\pi}_{ij}$ values, revealing subtle variation within each major ancestral group in addition to coarser differences in allele frequency (Figure~\ref{fig:topSNPs}). Other noteworthy genes with highly differentiated proximal SNPs include:
\begin{itemize}
\item {\em FOXP1}, which is a candidate gene for involvement in tumor progression and plays an important regulatory role with {\em FOXP2} \cite{Banham2001,Shigekawa2011};
\item {\em TBC1D1} in which genetic variation has been shown to confer risk for severe obesity in females \cite{Stone2006};
\item {\em KIF3C}, a novel kinesin-like protein, which has been hypothesized to be involved in microtubule-based transport in neuronal cells \cite{Sardella1998};
\item {\em KCNMA1}, a recently identified susceptibility locus for obesity \cite{Jiao2011};
\item {\em CTNNA3} in which genetic variation has been shown to be associated with diisocyanate-induced occupational asthma \cite{Bernstein2013};
\item {\em PTK6}, breast tumor kinase (Brk), which is known to function in cell-type and context-dependent processes governing normal differentiation \cite{Ostrander2010}.
\end{itemize}
We have provided information on the 5000 most differentiated SNPs for both TGP and HGDP in supplementary files.
\subsection{Software}
An R package called \texttt{lfa} is available at \url{https://github.com/StoreyLab/lfa}.
\section{{\sc Discussion}}
We have investigated two latent variable models of population structure to simultaneously estimate all individual-specific allele frequencies from genome-wide genotyping data. Model 1, a direct model of allele frequencies, can be estimated by using a modified PCA and Model 2, a model of the ${\rm logit}$ transformation of allele frequencies, is estimated through a new approach we called ``logistic factor analysis'' (LFA). For both models, the latent variables are estimated in a nonparametric fashion, meaning we do not make any assumptions about the underlying structure captured by the latent variables. These models are general in that they allow for each individual's genotype to be generated from an allele frequency specific to that individual, which includes discretely structured populations, admixed populations, and spatially structured populations. In LFA, we construct a model of the ${\rm logit}$ of these allele frequencies in terms of underlying factors that capture the population structure. We have proposed a computationally efficient method to estimate this model that requires only two applications of SVD. This approach builds on the success of PCA in that we are able to capture population structure in terms of a low-dimensional basis. It improves on PCA in that the latent variables we estimate can be straightforwardly incorporated into downstream statistical inference procedures that require well-behaved estimates of allele frequencies. In particular, statistical inferences of Hardy-Weinberg equilibrium, ${\rm F}_{\rm ST}$, and marker-trait associations are amenable to complex population structures within our framework.
We demonstrated our proposed approach on the HGDP and TGP data sets and several simulated data sets motivated by the HapMap, HGDP, and TGP data sets as well as the PSD model and spatially distributed structures. It was shown that our method estimates the underlying logistic factors with a high degree of accuracy. We also showed that applying PCA to genotype data estimates a row basis of population structure on the original allele frequency scale to a high degree of accuracy. However, problems occur when trying to recover estimates of individual-specific allele frequencies because PCA is a real-valued model that does not always result in allele frequency estimates lying between 0 and 1.
Although PCA has become very popular for genome-wide genotype data, it should be stressed that PCA is fundamentally a method for characterizing variance and special care should be taken when applying it to estimate latent variables. The authoritative treatment of PCA \cite{Jolliffe10} eloquently makes this point throughout the text and considers cases where factor analysis is more appropriate than PCA through examples reminiscent of the population structure problem. Here, we have shown that modeling and estimating population structure can be understood from the factor analysis perspective, leading to estimates of individual-specific allele frequencies through their natural parameter on the ${\rm logit}$ scale. At the same time, we have avoided some of the difficulties of traditional parametric factor analysis by maintaining the relevant nonparametric properties of PCA, specifically in making no assumptions about the underlying probability distributions of the logistic factors that capture population structure.
\clearpage
\section{{\sc Figures and Tables}}
\begin{figure}[!h]
\centerline{\includegraphics[width=0.7\textwidth]{./figures/lfa_hclust_sqrty.pdf}}
\caption{A hierarchical clustering of individuals from the HapMap, HGDP, and TGP data sets. A dendrogram was drawn from a hierarchical clustering using Ward distance based on SNP genotypes (MAF $> 5\%$). Whereas the HapMap project shows a definitive discrete population structure (by sampling design), the HGDP and TGP data show the complex structure of human populations.}
\label{fig:cluster}
\end{figure}
\clearpage
\begin{figure}
\centerline{\includegraphics[width=\textwidth]{./figures/hgdp_940_biplots.pdf}}
\caption{Principal components versus logistic factors for the HGDP data set. The top three principal components from the HGDP data are plotted in a pairwise fashion in the top panel. The top three logistic factors are plotted analogously in the bottom panel. It can be seen that both approaches yield similar visualizations of structure.}
\label{fig:HGDPpclf}
\end{figure}
\clearpage
\begin{figure}
\centerline{\includegraphics[width=\textwidth]{./figures/1kG_1500_biplots.pdf}}
\caption{Principal components versus logistic factors for the TGP data set. The top three principal components from the TGP data are plotted in a pairwise fashion in the top panel. The top three logistic factors are plotted analogously in the bottom panel. It can be seen that both approaches yield similar visualizations of structure.}
\label{fig:TGPpclf}
\end{figure}
\clearpage
\begin{sidewaysfigure}
\centerline{\includegraphics[width=\textwidth]{./figures/top_snps_AF.pdf}}
\caption{SNPs with highly differentiated allele frequencies with respect to structure. Two of the most highly differentiated SNPs according to LFA are shown for the HGDP and TGP data sets. For each SNP, the $\widehat{\pi}_{ij}$ values are ordered and colored according to reported ancestry. The horizontal bars on the sides of the plots denote the usual allele frequency estimates formed within each ancestral group.}
\label{fig:topSNPs}
\end{sidewaysfigure}
\clearpage
\begin{table}
\caption{Accuracy in estimating linear bases for $\bm{{\rm S}}$. Column 1 shows the scenario from which the data were simulated. Columns 2 and 3 display the estimation accuracy of the PCA based method (Column 2) and LFA (Column 3). Column 2 shows the mean $R^2$ value when regressing the true $(\pi_{i1}, \pi_{i2}, \ldots, \pi_{in})$ on $\bt{\bm{{\rm S}}}$ from PCA, averaging across all SNPs. Column 3 shows the mean $R^2$ value when regressing the true $\left({\rm logit}(\pi_{i1}), {\rm logit}(\pi_{i2}), \ldots, {\rm logit}(\pi_{in})\right)$ on $\widehat{\bm{{\rm H}}}$ from LFA, averaging across all SNPs. All estimated standard errors fell between $10^{-6}$ and $10^{-8}$ so these are not shown. Note that for each scenario, $R^2$ values are higher for the method from which the true $\bm{{\rm F}}$ matrix was generated. All but the two scenarios marked with an asterisk (*) are from Model 1, while the two marked scenarios are from Model 2, where we took $\bm{{\rm F}} = {\rm logit}^{-1}(\bm{{\rm L}})$. }
\label{tab:r2}
\
\begin{center}
\begin{tabular}{|l|cc|}\hline
&\multicolumn{2}{c|}{Mean $R^2$} \\
\hline
\rule{0pt}{3ex} Scenario & $\bm{{\rm F}} \sim \bt{\bm{{\rm S}}}$ & ${\rm logit}(\bm{{\rm F}}) \sim \widehat{\bm{{\rm H}}}$ \\
\hline\hline
TGP fit by PCA & 0.9998 & 0.9722 \\
TGP fit by LFA * & 0.9912 & 0.9990 \\
HGDP fit by PCA & 0.9996 & 0.9614 \\
HGDP fit by LFA * & 0.9835 & 0.9983 \\
BN & 0.9999 & 0.9999 \\
PSD $\alpha=0.01$ & 0.9998 & 0.9974 \\
PSD $\alpha=0.1$ & 0.9998 & 0.9879 \\
PSD $\alpha=0.5$ & 0.9996 & 0.9827 \\
PSD $\alpha=1$ & 0.9993 & 0.9844 \\
Spatial $a=0.1$ & 0.9999 & 0.9964 \\
Spatial $a=0.25$ & 0.9999 & 0.9962 \\
Spatial $a=0.5$ & 0.9999 & 0.9964 \\
Spatial $a=1$ & 0.9998 & 0.9970 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{sidewaystable}
\caption{Accuracy in estimating $\pi_{ij}$ parameters by the PCA based method and LFA. Each row is a different simulation scenario. Each column is the accuracy of a method's fits with the given metric.}
\label{tab:err}
\begin{tabular*}{\linewidth}{ @{\extracolsep{\fill}} ll |cccc|cccc|cccc @{}}
\toprule
\multicolumn{2}{c}{Scenario} & \multicolumn{4}{c}{Median KL} &
\multicolumn{4}{c}{Mean Abs. Err.} & \multicolumn{4}{c}{RMSE} \\
\midrule \midrule \addlinespace \\
& & PCA & LFA & ADX & FS & PCA & LFA & ADX & FS & PCA & LFA & ADX & FS \\
\cmidrule{3-6} \cmidrule{7-10} \cmidrule{11-14}
\addlinespace \\
\multirow{1}{*}{\rotatebox{90}{BN}}
& & 6.9{\rm E}{-5} & 6.8{\rm E}{-5} & 2.6{\rm E}{-3} & 2.6{\rm E}{-3} & 5.8{\rm E}{-3} & 5.8{\rm E}{-3} & 3.7{\rm E}{-2} & 3.7{\rm E}{-2} & 7.5{\rm E}{-3} & 7.5{\rm E}{-3} & 5.8{\rm E}{-2} & 5.8{\rm E}{-2} \\
\midrule \addlinespace \\
\multirow{4}{*}{\rotatebox{90}{PSD}}
& $\alpha=0.01$ & 7.0{\rm E}{-5} & 7.3{\rm E}{-5} & 1.6{\rm E}{-2} & 1.6{\rm E}{-2} & 5.6{\rm E}{-3} & 5.8{\rm E}{-3} & 9.7{\rm E}{-2} & 9.7{\rm E}{-2} & 7.2{\rm E}{-3} & 7.6{\rm E}{-3} & 1.7{\rm E}{-1} & 1.7{\rm E}{-1} \\
& $\alpha=0.1$ & 6.7{\rm E}{-5} & 9.2{\rm E}{-5} & 3.6{\rm E}{-2} & 3.6{\rm E}{-2} & 5.6{\rm E}{-3} & 6.9{\rm E}{-3} & 1.6{\rm E}{-1} & 1.6{\rm E}{-1} & 7.2{\rm E}{-3} & 9.3{\rm E}{-3} & 2.4{\rm E}{-1} & 2.4{\rm E}{-1} \\
& $\alpha=0.5$ & 6.3{\rm E}{-5} & 8.5{\rm E}{-5} & 5.4{\rm E}{-2} & 5.4{\rm E}{-2} & 5.6{\rm E}{-3} & 6.8{\rm E}{-3} & 1.4{\rm E}{-1} & 1.4{\rm E}{-1} & 7.3{\rm E}{-3} & 9.0{\rm E}{-3} & 1.8{\rm E}{-1} & 1.8{\rm E}{-1} \\
& $\alpha=1.0$ & 6.1{\rm E}{-5} & 7.4{\rm E}{-5} & 3.3{\rm E}{-2} & 3.3{\rm E}{-2} & 5.6{\rm E}{-3} & 6.3{\rm E}{-3} & 1.4{\rm E}{-1} & 1.4{\rm E}{-1} & 7.4{\rm E}{-3} & 8.4{\rm E}{-3} & 2.2{\rm E}{-1} & 2.2{\rm E}{-1} \\
\midrule \addlinespace \\
\multirow{4}{*}{\rotatebox{90}{Spatial}}
& $a=0.1$ & 7.3{\rm E}{-5} & 1.2{\rm E}{-4} & 8.2{\rm E}{-3} & 8.1{\rm E}{-3} & 5.5{\rm E}{-3} & 7.6{\rm E}{-3} & 7.4{\rm E}{-2} & 7.4{\rm E}{-2} & 7.0{\rm E}{-3} & 1.0{\rm E}{-2} & 1.2{\rm E}{-1} & 1.2{\rm E}{-1} \\
& $a=0.25$ & 6.9{\rm E}{-5} & 1.1{\rm E}{-4} & 8.6{\rm E}{-3} & 8.6{\rm E}{-3} & 5.6{\rm E}{-3} & 7.4{\rm E}{-3} & 9.3{\rm E}{-2} & 9.3{\rm E}{-2} & 7.2{\rm E}{-3} & 9.8{\rm E}{-3} & 1.6{\rm E}{-1} & 1.6{\rm E}{-1} \\
& $a=0.5$ & 6.6{\rm E}{-5} & 9.5{\rm E}{-5} & 1.0{\rm E}{-2} & 1.0{\rm E}{-2} & 5.6{\rm E}{-3} & 6.9{\rm E}{-3} & 6.7{\rm E}{-2} & 6.7{\rm E}{-2} & 7.2{\rm E}{-3} & 9.2{\rm E}{-3} & 1.0{\rm E}{-1} & 1.0{\rm E}{-1} \\
& $a=1.0$ & 6.3{\rm E}{-5} & 7.8{\rm E}{-5} & 1.2{\rm E}{-2} & 1.2{\rm E}{-2} & 5.7{\rm E}{-3} & 6.4{\rm E}{-3} & 1.1{\rm E}{-1} & 1.1{\rm E}{-1} & 7.4{\rm E}{-3} & 8.5{\rm E}{-3} & 1.7{\rm E}{-1} & 1.7{\rm E}{-1} \\
\midrule \addlinespace \\
\multirow{4}{*}{\rotatebox{90}{TGP fit}}
& PCA & 4.1{\rm E}{-4} & 5.2{\rm E}{-4} & 2.8{\rm E}{-3} & 3.4{\rm E}{-3} & 1.3{\rm E}{-2} & 1.5{\rm E}{-2} & 8.1{\rm E}{-2} & 8.3{\rm E}{-2} & 1.8{\rm E}{-2} & 2.1{\rm E}{-2} & 1.5{\rm E}{-1} & 1.5{\rm E}{-1} \\
& LFA & 4.3{\rm E}{-4} & 4.8{\rm E}{-4} & 2.4{\rm E}{-3} & 2.7{\rm E}{-3} & 1.3{\rm E}{-2} & 1.4{\rm E}{-2} & 7.9{\rm E}{-2} & 8.1{\rm E}{-2} & 1.8{\rm E}{-2} & 2.0{\rm E}{-2} & 1.4{\rm E}{-1} & 1.5{\rm E}{-1} \\
& ADX & 5.4{\rm E}{-4} & 4.4{\rm E}{-4} & 5.0{\rm E}{-3} & 5.5{\rm E}{-3} & 1.5{\rm E}{-2} & 1.3{\rm E}{-2} & 1.1{\rm E}{-1} & 1.1{\rm E}{-1} & 2.0{\rm E}{-2} & 1.9{\rm E}{-2} & 2.0{\rm E}{-1} & 2.0{\rm E}{-1} \\
& FS & 4.1{\rm E}{-4} & 5.5{\rm E}{-4} & 7.8{\rm E}{-4} & 9.2{\rm E}{-4} & 1.3{\rm E}{-2} & 1.5{\rm E}{-2} & 5.6{\rm E}{-2} & 5.8{\rm E}{-2} & 1.8{\rm E}{-2} & 2.1{\rm E}{-2} & 1.3{\rm E}{-1} & 1.3{\rm E}{-1} \\
\midrule \addlinespace \\
\multirow{4}{*}{\rotatebox{90}{HGDP fit}}
& PCA & 1.0{\rm E}{-3} & 1.2{\rm E}{-3} & 1.3{\rm E}{-2} & 1.4{\rm E}{-2} & 2.3{\rm E}{-2} & 2.5{\rm E}{-2} & 1.2{\rm E}{-1} & 1.2{\rm E}{-1} & 3.4{\rm E}{-2} & 3.6{\rm E}{-2} & 2.2{\rm E}{-1} & 2.2{\rm E}{-1} \\
& LFA & 9.9{\rm E}{-4} & 1.1{\rm E}{-3} & 1.3{\rm E}{-2} & 1.2{\rm E}{-2} & 2.2{\rm E}{-2} & 2.4{\rm E}{-2} & 1.2{\rm E}{-1} & 1.2{\rm E}{-1} & 3.5{\rm E}{-2} & 3.7{\rm E}{-2} & 2.2{\rm E}{-1} & 2.2{\rm E}{-1} \\
& ADX & 1.6{\rm E}{-3} & 1.4{\rm E}{-3} & 2.3{\rm E}{-3} & 2.3{\rm E}{-3} & 2.6{\rm E}{-2} & 2.6{\rm E}{-2} & 5.6{\rm E}{-2} & 5.6{\rm E}{-2} & 3.6{\rm E}{-2} & 3.7{\rm E}{-2} & 1.0{\rm E}{-1} & 1.0{\rm E}{-1} \\
& FS & 1.4{\rm E}{-3} & 1.6{\rm E}{-3} & 3.1{\rm E}{-2} & 2.9{\rm E}{-2} & 2.6{\rm E}{-2} & 2.7{\rm E}{-2} & 1.4{\rm E}{-1} & 1.3{\rm E}{-1} & 3.6{\rm E}{-2} & 3.8{\rm E}{-2} & 2.2{\rm E}{-1} & 2.1{\rm E}{-1} \\
\bottomrule
\end{tabular*}
\end{sidewaystable}
\clearpage
\section{{\sc Supplementary Material}}
\subsection{Data sets}
\label{realdata}
The HGDP data set was constructed by intersecting the data available from the HGDP web site, \url{http://www.hagsc.org/hgdp/files.html}, with the set of individuals ``H952'' identified by Rosenberg (2006) \cite{Rosenberg2006hgdp} with high confidence as containing no first- or second-degree relative pairs. This yielded complete SNP genotype data on 431,345 SNPs for 940 individuals.
To obtain data from the TGP, we downloaded the genotype data that had been measured through the Omni Platform, 2011-11-17, \url{ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/technical/working}.
We removed related individuals based on the TGP sample information.
We then sorted individuals in increasing order of the percentage of SNPs with missing data and selected the top 1500 individuals. This yielded complete SNP genotype data on 339,100 SNPs for 1500 individuals.
We utilized the HapMap data set in the simulated data described below. We obtained the HapMap data release 23a, NCBI build 36 from \url{www.hapmap.org}, consisting of unrelated individuals: 60 from the European ancestry group (CEU), 60 from Yoruba, Africa (YRI), and 90 from Japan and China (JPT+CHB). We identified all SNPs with observed minor allele frequency $\geq 5\%$ and with no missing data. The total numbers of SNPs used after filtering in each population were CEU: 1,416,940, YRI: 1,539,314, and JPT+CHB: 759,452. We then identified all SNPs common to all three populations, resulting in a total of 363,955 SNPs.
\subsection{Choosing the model dimension}
\label{choosingd}
The model dimension $d$ was determined for the HGDP and TGP data sets under the rationale that when $d$ is large enough, the great majority of SNPs should appear to be in HWE. When $d$ is too small, the structure that has not been accounted for will lead to spurious deviations from HWE. Values $d=1, 2, \ldots, 20$ were considered for each data set, and we identified $d=15$ for HGDP and $d=7$ for TGP. We note that these choices could also be interpreted as reasonable according to a scree plot when PCA was applied to the genotype data.
For a given $d$ value, we formed $\widehat{\bm{{\rm F}}}$ using the LFA method. We calculated an HWE goodness of fit statistic for each SNP $i$ as follows:
$$
\sum_{k=0}^2 \frac{\left[ \sum_{j=1}^n 1(x_{ij}=k) - \sum_{j=1}^n {2 \choose k} \widehat{\pi}_{ij}^k (1-\widehat{\pi}_{ij})^{2-k} \right]^2}{\sum_{j=1}^n {2 \choose k} \widehat{\pi}_{ij}^k (1-\widehat{\pi}_{ij})^{2-k}},
$$
where $\sum_{j=1}^n 1(x_{ij}=k)$ is the observed number of genotypes equal to $k$ and $\sum_{j=1}^n {2 \choose k} \widehat{\pi}_{ij}^k (1-\widehat{\pi}_{ij})^{2-k}$ is the expected number of genotypes equal to $k$ under HWE. We then utilized $\widehat{\bm{{\rm F}}}$ to simulate five instances of a genotype matrix $\bm{{\rm X}}^0$ under HWE, where we simulated $x^0_{ij} \sim \mbox{Binomial}(2, \widehat{\pi}_{ij})$. On each simulated genotype matrix $\bm{{\rm X}}^0$, we again applied LFA to obtain $\widehat{\bm{{\rm F}}}^0$ and calculated HWE goodness of fit statistics. These goodness of fit statistics were then pooled across all five simulated data sets and across all SNPs to form the null distribution, which allowed us to calculate an HWE p-value for each observed SNP. (It should be noted that we also formed a separate null distribution according to minor allele frequency bins of length 0.05, and we arrived at the same conclusion.) We then compared these p-values to the Uniform(0,1) distribution and also against the p-values from the $d+1$ case. This allowed us to identify a value of $d$ where the HWE p-values were both close to the Uniform(0,1) distribution and to the HWE p-values from the $d+1$ case.
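As a concrete sketch of the per-SNP statistic above (in Python with NumPy; the function name \texttt{hwe\_gof} is ours, not part of the released software):

```python
import numpy as np
from math import comb

def hwe_gof(x, pi_hat):
    """HWE goodness-of-fit statistic for one SNP.

    x      : (n,) array of genotypes in {0, 1, 2}
    pi_hat : (n,) array of estimated individual-specific allele frequencies
    """
    stat = 0.0
    for k in range(3):
        # per-individual probability of genotype k under HWE:
        # C(2,k) * pi^k * (1-pi)^(2-k)
        p_k = comb(2, k) * pi_hat**k * (1.0 - pi_hat) ** (2 - k)
        observed = np.sum(x == k)   # observed count of genotype k
        expected = p_k.sum()        # expected count under HWE
        stat += (observed - expected) ** 2 / expected
    return stat
```

The null distribution is then obtained by recomputing this statistic on genotype matrices simulated as $x^0_{ij} \sim \mbox{Binomial}(2, \widehat{\pi}_{ij})$, as described above.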
\subsection{Simulated data}
For each simulation scenario, genotypes $\bm{{\rm X}}$ were simulated such that $x_{ij} \sim \mbox{Binomial}(2, \pi_{ij})$, where the $\pi_{ij}$ are the elements of the allele frequency matrix $\bm{{\rm F}}$. The results from the simulated data are summarized in Tables 1 and 2.
\subsub{Balding-Nichols (BN).} For each SNP in the HapMap data set, we estimated its marginal allele frequency according to the observed frequency and estimated its ${\rm F}_{\rm ST}$ value using the Weir \& Cockerham estimate \cite{weir84}. We set the simulated data to have $m=100,000$ SNPs and $n=5000$ individuals with $d=3$. Using Model 1, the $\bm{{\rm S}}$ matrix was generated by sampling its columns $\mathbf{s}^j$ i.i.d. from $(1,0,0)^T$, $(0,1,0)^T$, and $(0,0,1)^T$ with respective probabilities $60/210$, $60/210$, and $90/210$ to reflect the original data's subpopulation proportions. For each row $i$ of $\bmm{\Gamma}$, we simulated i.i.d. draws from the Balding-Nichols model: $\gamma_{i1}, \gamma_{i2}, \gamma_{i3} \stackrel{i.i.d.}{\sim} \mbox{BN}(p_i, F_i)$, where the pair $(p_i, F_i)$ was randomly selected from among the marginal allele frequency and ${\rm F}_{\rm ST}$ pairs calculated on the HapMap data set.
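The BN scenario can be sketched as follows (Python/NumPy, with sizes reduced for illustration; the uniform draws of $(p_i, F_i)$ are hypothetical stand-ins for the pairs estimated from HapMap). We use the standard Beta parameterization of the Balding-Nichols model, $\mbox{BN}(p, F) = \mbox{Beta}\bigl(p(1-F)/F,\ (1-p)(1-F)/F\bigr)$, which has mean $p$ and variance $F p(1-p)$:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, d = 1000, 500, 3  # reduced from m=100,000, n=5000 for illustration

# Hypothetical stand-ins for the (p_i, F_i) pairs estimated from HapMap
p = rng.uniform(0.1, 0.9, size=m)
F = rng.uniform(0.01, 0.2, size=m)

# Gamma: each row i holds d i.i.d. BN(p_i, F_i) draws,
# via the Beta parameterization of the Balding-Nichols model
a = p * (1.0 - F) / F
b = (1.0 - p) * (1.0 - F) / F
Gamma = rng.beta(a[:, None], b[:, None], size=(m, d))

# S: one-hot columns with probabilities 60/210, 60/210, 90/210
labels = rng.choice(3, size=n, p=[60 / 210, 60 / 210, 90 / 210])
S = np.eye(d)[labels].T               # d x n

Pi = Gamma @ S                        # individual-specific allele frequencies
X = rng.binomial(2, Pi)               # m x n genotype matrix
```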
\subsub{PSD.} For each SNP in the HGDP data set, we estimated its marginal allele frequency according to the observed frequency and estimated its ${\rm F}_{\rm ST}$ value using the Weir \& Cockerham estimate \cite{weir84}. To estimate ${\rm F}_{\rm ST}$, each individual in the HGDP data set was assigned to one of $K=5$ subpopulations according to the analysis in Rosenberg et al. (2002) \cite{Rosenberg2002}. We set $m=100,000$ SNPs and $n=5000$ individuals with $d=3$. Again utilizing Model 1, each row $i$ of $\bmm{\Gamma}$ was simulated according to $\gamma_{i1}, \gamma_{i2}, \gamma_{i3} \stackrel{i.i.d.}{\sim} \mbox{BN}(p_i, F_i)$, where the pair $(p_i, F_i)$ was randomly selected from among the marginal allele frequency and ${\rm F}_{\rm ST}$ pairs calculated on the HGDP data set. To generate $\bm{{\rm S}}$, we simulated $(s_{1j}, s_{2j}, s_{3j}) \stackrel{i.i.d.}{\sim} \mbox{Dirichlet}(\bmm{\alpha})$ for $j=1,\ldots,5000$. We considered $\bmm{\alpha} = (0.01, 0.01, 0.01)$, $\bmm{\alpha} = (0.1, 0.1, 0.1)$, $\bmm{\alpha} = (0.5, 0.5, 0.5)$, and $\bmm{\alpha} = (1, 1, 1)$. It should be noted that as $\bmm{\alpha} \rightarrow \bmm{0}$, the draws from the Dirichlet distribution become increasingly close to assigning each individual to one of three discrete subpopulations with equal probability. When $\bmm{\alpha} = (1, 1, 1)$, the admixture proportions are distributed uniformly over the simplex.
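To illustrate the role of $\bmm{\alpha}$, the following sketch (Python/NumPy) draws an admixture matrix $\bm{{\rm S}}$ for each value considered; with $\alpha = 0.01$ nearly every column lies close to a vertex of the simplex, i.e., near-discrete subpopulations:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 5000, 3

# One admixture matrix S (d x n) per symmetric alpha value from the text
S_by_alpha = {alpha: rng.dirichlet([alpha] * d, size=n).T
              for alpha in (0.01, 0.1, 0.5, 1.0)}

# Small alpha concentrates columns near simplex vertices (near-discrete
# structure); alpha = 1 is uniform on the simplex.
frac_near_vertex = np.mean(S_by_alpha[0.01].max(axis=0) > 0.95)
```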
\subsub{Spatial.} This scenario is meant to create population structure that is driven by the spatial position of the individual. We set the simulated data to have $m=100,000$ SNPs and $n=5000$ individuals with $d=3$. Rows $i=1,2$ of $\bm{{\rm S}}$ were simulated as $s_{ij} \stackrel{i.i.d.}{\sim} \mbox{Beta}(a, a)$ for $j=1, \ldots, 5000$, and row 3 of $\bm{{\rm S}}$ contained the intercept term, $s_{3j} = 1$. We considered four values of $a$: 0.1, 0.25, 0.5, and 1. The first two rows of $\bm{{\rm S}}$ place each individual in a two-dimensional space (Figure \ref{fig:Spatialrange}), where the ancestry of individual $j$ is located at $(s_{1j}, s_{2j})$ in the unit square. When $a=1$, the Beta$(a,a)$ distribution is Uniform$(0,1)$, so this scenario represents a uniform distribution of individuals in the unit square. As $a \rightarrow 0$, the Beta$(a,a)$ distribution places each individual, with equal probability, at one of the four corners of the unit square. The matrix $\bmm{\Gamma}$ was created by sampling $\gamma_{ij} \stackrel{i.i.d.}{\sim} 0.9 \times \mbox{Uniform}(0, 1/2)$ for $j=1,2$ and $\gamma_{i3} = 0.05$. It should be noted that all $\pi_{ij} \in [0.05, 0.95]$ by construction.
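Since every ingredient of the Spatial scenario is explicit, the construction can be sketched directly (Python/NumPy, sizes reduced for illustration); the final check confirms the stated bound $\pi_{ij} \in [0.05, 0.95]$:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, a = 2000, 500, 0.5   # reduced from m=100,000, n=5000 for illustration

# Rows 1-2 of S: spatial position in the unit square; row 3: intercept
S = np.vstack([rng.beta(a, a, size=(2, n)),
               np.ones((1, n))])

# Gamma: first two entries per row ~ 0.9 * Uniform(0, 1/2); third entry 0.05
Gamma = np.hstack([0.9 * rng.uniform(0.0, 0.5, size=(m, 2)),
                   np.full((m, 1), 0.05)])

Pi = Gamma @ S            # individual-specific allele frequencies
X = rng.binomial(2, Pi)   # simulated genotypes

# all frequencies lie in [0.05, 0.95] by construction
# (tiny tolerance for floating-point summation order)
assert Pi.min() >= 0.05 - 1e-9 and Pi.max() <= 0.95 + 1e-9
```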
\subsub{Real Data.} For the HGDP and TGP scenarios, we estimated an allele frequency matrix $\bm{{\rm F}}$ from the real data via four different methods. For HGDP we had $m=431,345$ SNPs by $n=940$ individuals with $d=15$, and for TGP we had $m=339,100$ and $n=1,500$ with $d=7$. The four methods are:
\begin{itemize}
\item {\em PCA}: $\bm{{\rm F}}$ was taken to be the matrix $\bt{\bm{{\rm F}}}$ estimated via Algorithm 1.
\item {\em LFA}: $\bm{{\rm F}} = {\rm logit}^{-1}(\widehat{\bm{{\rm L}}})$, where $\widehat{\bm{{\rm L}}}$ was estimated via Algorithm 3.
\item {\em ADX}: $\bm{{\rm F}}$ was taken to be the matrix formed by computing the marginal allele frequencies in the Pritchard-Stephens-Donnelly model, i.e. $\bm{{\rm F}} = \bm{{\rm P}} \bm{{\rm Q}}$, and $\bm{{\rm P}}$ and $\bm{{\rm Q}}$ were estimated via the software ADMIXTURE \cite{Alexander:2009p2792}.
\item {\em FS}: Same as above except $\bm{{\rm P}}$ and $\bm{{\rm Q}}$ are estimated via the software fastStructure \cite{Raj2013}.
\end{itemize}
\subsection{Error Measures Used to Evaluate Estimates of $\bm{{\rm F}}$ and $\bm{{\rm L}}$}
Estimates of $\pi_{ij}$ were evaluated with three different metrics. Let $\widehat{\pi}_{ij}$ be the estimate for any given method.
The {\em Kullback-Leibler divergence} for the binomial distribution allows us to measure the divergence between the distributions induced by the oracle and estimated allele frequencies:
$$
\mbox{KL} = \pi_{ij} \ln\left( \frac{\pi_{ij}}{\widehat{\pi}_{ij}}\right) + (1-\pi_{ij}) \ln\left( \frac{1-\pi_{ij}}{1-\widehat{\pi}_{ij}}\right).
$$
\noindent {\em Mean absolute error} compares the allele frequencies directly:
$$
\mbox{MAE} = \frac{1}{m \times n} \sum_{i=1}^{m} \sum_{j=1}^{n} \left| \pi_{ij} - \widehat{\pi}_{ij} \right|.
$$
\noindent {\em Root mean squared error}, computed on the logit scale:
$$
\mbox{RMSE} = \sqrt{\frac{1}{m \times n} \sum_{i=1}^{m} \sum_{j=1}^{n} \left( {\rm logit}(\pi_{ij}) - {\rm logit}(\widehat{\pi}_{ij}) \right)^2}.
$$
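The three metrics can be sketched as follows (Python/NumPy; the function names are ours). The KL divergence is computed entrywise, consistent with the ``Median KL'' columns of the accuracy tables (presumably the median over all $(i,j)$ entries), while MAE and RMSE average over all entries:

```python
import numpy as np

def kl_binomial(pi, pi_hat):
    """Entrywise binomial KL divergence between oracle and estimated
    allele frequencies (same shape as the inputs)."""
    return (pi * np.log(pi / pi_hat)
            + (1.0 - pi) * np.log((1.0 - pi) / (1.0 - pi_hat)))

def mae(pi, pi_hat):
    """Mean absolute error between allele frequency matrices."""
    return np.mean(np.abs(pi - pi_hat))

def logit(p):
    return np.log(p / (1.0 - p))

def rmse_logit(pi, pi_hat):
    """Root mean squared error on the logit scale."""
    return np.sqrt(np.mean((logit(pi) - logit(pi_hat)) ** 2))
```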
\subsection{${\rm F}_{\rm ST}$ for individual-specific allele frequencies}
\label{fst}
By considering the derivation of ${\rm F}_{\rm ST}$ for $K$ discrete populations as described in Weir (1984, 1996) \cite{Weir1996,weir84}, it can be seen that a potential generalization of ${\rm F}_{\rm ST}$ to arbitrary population structure is
$$
{\rm F}_{\rm ST} = 1 - \frac{{\rm E}_{\bmm{Z}}[{\rm Var}(x | \bmm{Z})]}{{\rm Var}(x)},
$$
where, as described in Section \ref{allelefreqs}, $\bmm{Z}$ is a latent variable capturing an individual's population structure position or membership. The allele frequency of a SNP conditional on $\bmm{Z}$ can be viewed as being a function of $\bmm{Z}$, which we have denoted by $\pi(\bmm{Z})$. If $n$ individuals are sampled independently and homogeneously from the population\footnote{When the individuals are not sampled homogeneously throughout the population (e.g., in the HapMap data with 60, 60, and 90 observations from three discretely defined subpopulations), then it may be the case that the above quantity should be modified to reflect the stratified or non-homogeneous sampling.} such that $\bmm{z}_1, \ldots, \bmm{z}_n$ are i.i.d. from the distribution on $\bmm{Z}$, then for SNP $i$ in HWE, it follows that ${\rm Var}(x_{ij} | z_j) = 2 \pi_{ij} (1-\pi_{ij})$ and
$$
{\rm F}_{\rm ST} \stackrel{a.s.}{=} \lim_{n \rightarrow \infty} 1-\frac{\frac{1}{n}\sum_{j=1}^n \pi_{ij} (1-\pi_{ij})}{\overline{\pi}_i (1-\overline{\pi}_i)},
$$
where $\overline{\pi}_i = \sum_{j=1}^n \pi_{ij}/n$ is the marginal allele frequency among the $n$ individuals. Thus, good estimates of the $\pi_{ij}$ values may be useful for estimating ${\rm F}_{\rm ST}$ in this general setting. One example would be to form a plug-in estimate of ${\rm F}_{\rm ST}$ by replacing $\pi_{ij}$ with $\widehat{\pi}_{ij}$ from the proposed LFA method.
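The plug-in estimate just described can be sketched in a few lines (Python/NumPy; the function name \texttt{fst\_plugin} is ours):

```python
import numpy as np

def fst_plugin(pi_hat):
    """Plug-in estimate of the generalized F_ST for one SNP.

    pi_hat : (n,) estimated individual-specific allele frequencies,
             e.g. from the LFA method.
    """
    pbar = pi_hat.mean()  # marginal allele frequency among the n individuals
    return 1.0 - np.mean(pi_hat * (1.0 - pi_hat)) / (pbar * (1.0 - pbar))
```

For a SNP with no structure (all $\widehat{\pi}_{ij}$ equal) the estimate is zero, and it grows toward one as the individual-specific frequencies spread toward 0 and 1.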
\subsection{Relationship of LFA to existing models and methods}
\label{existingmethods}
The problem of modeling a genotype matrix $\bm{{\rm X}}$ in order to uncover latent variables that explain cryptic structure is a special case of a much more general problem that has been considered for several years in the statistics literature \cite{Bartholomew1984,Moustaki2000}. Under a latent variable model, it is assumed that the ``manifest'' (observed) variables are the result of the ``latent'' (unobserved) variables. Latent variable models can be grouped according to whether the manifest and latent variables are categorical or continuous. For example, factor analysis is a latent variable method for the case where both the manifest and latent variables are continuous. A proposed naming convention \cite{BKM2011} is summarized as follows:
\begin{center}
\begin{tabular}{|c|cc|}
\hline
\ & \multicolumn{2}{c|}{Manifest variables} \\
Latent variables & Continuous & Categorical \\
\hline
Continuous &Factor analysis & Latent trait analysis \\
Categorical & Latent profile analysis & Latent class analysis \\
\hline
\end{tabular}
\end{center}
In the problem we consider, the manifest variables (observed genotypes) are categorical, and they are driven by latent variables (population structure) that may be either categorical (discrete population structure) or continuous (complex population structure). Therefore, the LFA method may be described as a nonparametric latent variable estimation method that jointly captures latent trait analysis and latent class analysis. Another naming convention that we could apply to LFA would be to call it a nonparametric latent variable model for Binomial data. The naming conventions of latent variable models are inconsistent and often confusing \cite{BKM2011}.
Bartholomew (1980) \cite{Batholomew1980} proposed a model related to equation (2) to identify latent variables that influence the probabilities of a collection of Binomial random variables. See also Bartholomew et al. (2011) \cite{BKM2011} for a comprehensive treatment of this area, which they call ``general linear latent variable models'' (GLLVM). In particular, when the manifest variables $x_{ij} \sim$ Bernoulli($\pi_{ij}$) and the latent variables $h_{kj}$ are continuous, the GLLVM in this case is Model 2, ${\rm logit}(\pi_{ij}) = \sum_{k=1}^d a_{ik} h_{kj}$. While we begin with this model, there are some key differences. The number of manifest variables in the data considered in Bartholomew (1980) and related work is notably smaller than in genome-wide genotype data, so the assumptions and estimation approach differ substantially. Model assumptions are typically made about the probability distributions of the latent variables; we consider these assumptions too strong and also unnecessary for the genome-wide genotype data considered here, although they may be quite reasonable for the problems considered in other contexts. Existing methods typically estimate Model 2 by calculating the joint posterior distribution of the $h_{kj}$ based on an assumed prior distribution of the latent variables.
Our LFA approach for estimating the row basis of $\bf L$ is nonparametric since it does not require a prior assumption on the distribution of the latent variables, $\bm{{\rm H}}$. The model fitting methods of ref. \cite{BKM2011} are too computationally intensive for high-dimensional data, requiring many iterations and suffering from potential convergence issues. Our proposed algorithm requires performing SVD twice, which leads to a dramatic reduction in computational burden. Engelhardt and Stephens (2010) \cite{Engelhardt2010} make an interesting connection between classical factor analysis models of $\bm{{\rm F}}$ and other models of population structure, but the factor analysis model runs into the difficulty that the latent factors are assumed to be Normally distributed, and the constraint that allele frequencies lie in $[0,1]$ is not easily accommodated by this continuous, real-valued model.
Several extensions of PCA to categorical data have been proposed \cite{Schapire2002,Schein2003,Guo2008}. We found that these algorithms perform very slowly on genome-wide genotyping data, and the estimation can be quite poor when $d > 1$. Also, PCA is essentially a method for characterizing variance in data \cite{Jolliffe10}, whereas the latent variable approach is more directly aimed at uncovering latent population structure. Non-negative matrix factorization (NMF) \cite{Paatero1994} is another matrix factorization for count data (e.g., Poisson random variables), which identifies two non-negative matrices whose product approximates the original matrix. However, as with PCA, we do not find that this approach easily translates into interpretable models of population structure, and it is computationally intensive. NMF has proven to be quite useful as a numerical tool for decomposing images into parts humans recognize as distinct \cite{Lee1999}.
\clearpage
\section{{\sc Supplementary Figures and Tables}}
\begin{figure}[!h]
\centering
\includegraphics[width=0.7\textwidth]{figures/decomp.pdf}
\caption{A comparison of LFA model (2) and its estimate to model (1) and its PCA estimate. The proposed LFA approach first models the logit of the individual-specific allele frequencies in terms of the product of two matrices, the left matrix establishing how population structure is present in allele frequencies, and the right matrix giving the structure. Whereas the LFA approach preserves the scale of the model through the estimate (all real-valued numbers), the same is not true of PCA. This leads to issues in the estimation of individual-specific allele frequencies when utilizing PCA. We have shown, however, that PCA estimates a row basis for $\bm{{\rm S}}$ from Model 1 very well. This connects PCA to an explicit model of population structure.}
\label{fig:decomp}
\end{figure}
\clearpage
\begin{figure}
\centerline{\includegraphics[width=0.85\textwidth]{figures/PSD_S_1234.pdf}}
\caption{A mapping from $\bm{{\rm S}}$ to $\bt{\bm{{\rm S}}}$ for four simulated $\bm{{\rm S}}$ matrices under the PSD model. The left column shows the simulated structure $\bm{{\rm S}}$ for each of four scenarios (a--d) and the right column shows the resulting estimated row basis of $\bm{{\rm S}}$ produced from PCA. It can be seen that the scale on which $\bm{{\rm S}}$ was generated, all values in (0,1), is lost in the principal components, values in $\mathbb{R}$.}
\label{fig:PSDrange}
\end{figure}
\clearpage
\begin{figure}
\centerline{\includegraphics[width=0.85\textwidth]{figures/spatial_S_1234.pdf}}
\caption{A mapping from $\bm{{\rm S}}$ to $\bt{\bm{{\rm S}}}$ for four simulated $\bm{{\rm S}}$ matrices under the Spatial model. The left column shows the simulated structure $\bm{{\rm S}}$ for each of four scenarios (a--d) and the right column shows the resulting estimated row basis of $\bm{{\rm S}}$ produced from PCA. It can be seen that the scale on which $\bm{{\rm S}}$ was generated, all values in (0,1), is lost in the principal components, values in $\mathbb{R}$.}
\label{fig:Spatialrange}
\end{figure}
\clearpage
\begin{sidewaystable}
\caption{Accuracy in estimating $\pi_{ij}$ parameters by the PCA-based method, LFA, ADMIXTURE (ADX), and fastStructure (FS). Each row is a different simulation scenario. Each column is the accuracy of a method's fits with the given metric.}
\label{tab:err}
\begin{tabular*}{\linewidth}{ @{\extracolsep{\fill}} ll |cccc|cccc|cccc @{}}
\toprule
\multicolumn{2}{c}{Scenario} & \multicolumn{4}{c}{Median KL} &
\multicolumn{4}{c}{Mean Abs. Err.} & \multicolumn{4}{c}{RMSE} \\
\midrule \midrule \addlinespace \\
& & PCA & LFA & ADX & FS & PCA & LFA & ADX & FS & PCA & LFA & ADX & FS \\
\cmidrule{3-6} \cmidrule{7-10} \cmidrule{11-14}
\addlinespace \\
\multirow{1}{*}{\rotatebox{90}{BN}}
& & 6.9{\rm E}{-5} & 6.8{\rm E}{-5} & 2.6{\rm E}{-3} & 2.6{\rm E}{-3} & 5.8{\rm E}{-3} & 5.8{\rm E}{-3} & 3.7{\rm E}{-2} & 3.7{\rm E}{-2} & 7.5{\rm E}{-3} & 7.5{\rm E}{-3} & 5.8{\rm E}{-2} & 5.8{\rm E}{-2} \\
\midrule \addlinespace \\
\multirow{4}{*}{\rotatebox{90}{PSD}}
& $\alpha=0.01$ & 7.0{\rm E}{-5} & 7.3{\rm E}{-5} & 1.6{\rm E}{-2} & 1.6{\rm E}{-2} & 5.6{\rm E}{-3} & 5.8{\rm E}{-3} & 9.7{\rm E}{-2} & 9.7{\rm E}{-2} & 7.2{\rm E}{-3} & 7.6{\rm E}{-3} & 1.7{\rm E}{-1} & 1.7{\rm E}{-1} \\
& $\alpha=0.1$ & 6.7{\rm E}{-5} & 9.2{\rm E}{-5} & 3.6{\rm E}{-2} & 3.6{\rm E}{-2} & 5.6{\rm E}{-3} & 6.9{\rm E}{-3} & 1.6{\rm E}{-1} & 1.6{\rm E}{-1} & 7.2{\rm E}{-3} & 9.3{\rm E}{-3} & 2.4{\rm E}{-1} & 2.4{\rm E}{-1} \\
& $\alpha=0.5$ & 6.3{\rm E}{-5} & 8.5{\rm E}{-5} & 5.4{\rm E}{-2} & 5.4{\rm E}{-2} & 5.6{\rm E}{-3} & 6.8{\rm E}{-3} & 1.4{\rm E}{-1} & 1.4{\rm E}{-1} & 7.3{\rm E}{-3} & 9.0{\rm E}{-3} & 1.8{\rm E}{-1} & 1.8{\rm E}{-1} \\
& $\alpha=1.0$ & 6.1{\rm E}{-5} & 7.4{\rm E}{-5} & 3.3{\rm E}{-2} & 3.3{\rm E}{-2} & 5.6{\rm E}{-3} & 6.3{\rm E}{-3} & 1.4{\rm E}{-1} & 1.4{\rm E}{-1} & 7.4{\rm E}{-3} & 8.4{\rm E}{-3} & 2.2{\rm E}{-1} & 2.2{\rm E}{-1} \\
\midrule \addlinespace \\
\multirow{4}{*}{\rotatebox{90}{Spatial}}
& $a=0.1$ & 7.3{\rm E}{-5} & 1.2{\rm E}{-4} & 8.2{\rm E}{-3} & 8.1{\rm E}{-3} & 5.5{\rm E}{-3} & 7.6{\rm E}{-3} & 7.4{\rm E}{-2} & 7.4{\rm E}{-2} & 7.0{\rm E}{-3} & 1.0{\rm E}{-2} & 1.2{\rm E}{-1} & 1.2{\rm E}{-1} \\
& $a=0.25$ & 6.9{\rm E}{-5} & 1.1{\rm E}{-4} & 8.6{\rm E}{-3} & 8.6{\rm E}{-3} & 5.6{\rm E}{-3} & 7.4{\rm E}{-3} & 9.3{\rm E}{-2} & 9.3{\rm E}{-2} & 7.2{\rm E}{-3} & 9.8{\rm E}{-3} & 1.6{\rm E}{-1} & 1.6{\rm E}{-1} \\
& $a=0.5$ & 6.6{\rm E}{-5} & 9.5{\rm E}{-5} & 1.0{\rm E}{-2} & 1.0{\rm E}{-2} & 5.6{\rm E}{-3} & 6.9{\rm E}{-3} & 6.7{\rm E}{-2} & 6.7{\rm E}{-2} & 7.2{\rm E}{-3} & 9.2{\rm E}{-3} & 1.0{\rm E}{-1} & 1.0{\rm E}{-1} \\
& $a=1.0$ & 6.3{\rm E}{-5} & 7.8{\rm E}{-5} & 1.2{\rm E}{-2} & 1.2{\rm E}{-2} & 5.7{\rm E}{-3} & 6.4{\rm E}{-3} & 1.1{\rm E}{-1} & 1.1{\rm E}{-1} & 7.4{\rm E}{-3} & 8.5{\rm E}{-3} & 1.7{\rm E}{-1} & 1.7{\rm E}{-1} \\
\midrule \addlinespace \\
\multirow{4}{*}{\rotatebox{90}{TGP fit}}
& PCA & 4.1{\rm E}{-4} & 5.2{\rm E}{-4} & 2.8{\rm E}{-3} & 3.4{\rm E}{-3} & 1.3{\rm E}{-2} & 1.5{\rm E}{-2} & 8.1{\rm E}{-2} & 8.3{\rm E}{-2} & 1.8{\rm E}{-2} & 2.1{\rm E}{-2} & 1.5{\rm E}{-1} & 1.5{\rm E}{-1} \\
& LFA & 4.3{\rm E}{-4} & 4.8{\rm E}{-4} & 2.4{\rm E}{-3} & 2.7{\rm E}{-3} & 1.3{\rm E}{-2} & 1.4{\rm E}{-2} & 7.9{\rm E}{-2} & 8.1{\rm E}{-2} & 1.8{\rm E}{-2} & 2.0{\rm E}{-2} & 1.4{\rm E}{-1} & 1.5{\rm E}{-1} \\
& ADX & 5.4{\rm E}{-4} & 4.4{\rm E}{-4} & 5.0{\rm E}{-3} & 5.5{\rm E}{-3} & 1.5{\rm E}{-2} & 1.3{\rm E}{-2} & 1.1{\rm E}{-1} & 1.1{\rm E}{-1} & 2.0{\rm E}{-2} & 1.9{\rm E}{-2} & 2.0{\rm E}{-1} & 2.0{\rm E}{-1} \\
& FS & 4.1{\rm E}{-4} & 5.5{\rm E}{-4} & 7.8{\rm E}{-4} & 9.2{\rm E}{-4} & 1.3{\rm E}{-2} & 1.5{\rm E}{-2} & 5.6{\rm E}{-2} & 5.8{\rm E}{-2} & 1.8{\rm E}{-2} & 2.1{\rm E}{-2} & 1.3{\rm E}{-1} & 1.3{\rm E}{-1} \\
\midrule \addlinespace \\
\multirow{4}{*}{\rotatebox{90}{HGDP fit}}
& PCA & 1.0{\rm E}{-3} & 1.2{\rm E}{-3} & 1.3{\rm E}{-2} & 1.4{\rm E}{-2} & 2.3{\rm E}{-2} & 2.5{\rm E}{-2} & 1.2{\rm E}{-1} & 1.2{\rm E}{-1} & 3.4{\rm E}{-2} & 3.6{\rm E}{-2} & 2.2{\rm E}{-1} & 2.2{\rm E}{-1} \\
& LFA & 9.9{\rm E}{-4} & 1.1{\rm E}{-3} & 1.3{\rm E}{-2} & 1.2{\rm E}{-2} & 2.2{\rm E}{-2} & 2.4{\rm E}{-2} & 1.2{\rm E}{-1} & 1.2{\rm E}{-1} & 3.5{\rm E}{-2} & 3.7{\rm E}{-2} & 2.2{\rm E}{-1} & 2.2{\rm E}{-1} \\
& ADX & 1.6{\rm E}{-3} & 1.4{\rm E}{-3} & 2.3{\rm E}{-3} & 2.3{\rm E}{-3} & 2.6{\rm E}{-2} & 2.6{\rm E}{-2} & 5.6{\rm E}{-2} & 5.6{\rm E}{-2} & 3.6{\rm E}{-2} & 3.7{\rm E}{-2} & 1.0{\rm E}{-1} & 1.0{\rm E}{-1} \\
& FS & 1.4{\rm E}{-3} & 1.6{\rm E}{-3} & 3.1{\rm E}{-2} & 2.9{\rm E}{-2} & 2.6{\rm E}{-2} & 2.7{\rm E}{-2} & 1.4{\rm E}{-1} & 1.3{\rm E}{-1} & 3.6{\rm E}{-2} & 3.8{\rm E}{-2} & 2.2{\rm E}{-1} & 2.1{\rm E}{-1} \\
\bottomrule
\end{tabular*}
\end{sidewaystable}
\clearpage
\pagestyle{empty}
\begin{table}
\vspace{-0.6in}
\caption{\footnotesize The top 50 SNPs most associated with structure in the HGDP data, identified by performing a logistic regression of SNP genotypes on the logistic factors. Shown are the SNP ID and location, deviance measure of differentiation, gene closest to the SNP, distance to gene (rounded to nearest 10bp), and the variant type (if none shown, then intergenic).}
\label{tab:HGDPtop}
\scriptsize
\begin{center}
\vspace{-0.12in}
\begin{tabular}{rlllrrlrl}
\hline
& rsid & chr & position & deviance & genesymbol & locusID & distance & variant type \\
\hline
1 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs1834640}{rs1834640} & 15 & 48392165 & 1605.28 & SLC24A5 & \href{http://www.ncbi.nlm.nih.gov/gene/283652}{283652} & 21000 & \\
2 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs2250072}{rs2250072} & 15 & 48384907 & 1313.82 & SLC24A5 & \href{http://www.ncbi.nlm.nih.gov/gene/283652}{283652} & 28260 & \\
3 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs12440301}{rs12440301} & 15 & 48389924 & 1263.83 & SLC24A5 & \href{http://www.ncbi.nlm.nih.gov/gene/283652}{283652} & 23240 & \\
4 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs260690}{rs260690} & 2 & 109579738 & 1262.72 & EDAR & \href{http://www.ncbi.nlm.nih.gov/gene/10913}{10913} & 0 & intron-variant \\
5 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs9837708}{rs9837708} & 3 & 71487582 & 1189.48 & FOXP1 & \href{http://www.ncbi.nlm.nih.gov/gene/27086}{27086} & 0 & intron-variant \\
6 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs260714}{rs260714} & 2 & 109562495 & 1184.50 & EDAR & \href{http://www.ncbi.nlm.nih.gov/gene/10913}{10913} & 0 & intron-variant \\
7 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs4918664}{rs4918664} & 10 & 94921065 & 1178.40 & XRCC6P1 & \href{http://www.ncbi.nlm.nih.gov/gene/387703}{387703} & 45340 & \\
8 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs10882168}{rs10882168} & 10 & 94929434 & 1160.99 & XRCC6P1 & \href{http://www.ncbi.nlm.nih.gov/gene/387703}{387703} & 36970 & \\
9 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs300153}{rs300153} & 2 & 17986417 & 1143.48 & MSGN1 & \href{http://www.ncbi.nlm.nih.gov/gene/343930}{343930} & 11360 & \\
10 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs9809818}{rs9809818} & 3 & 71480566 & 1135.58 & FOXP1 & \href{http://www.ncbi.nlm.nih.gov/gene/27086}{27086} & 0 & intron-variant \\
11 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs6583859}{rs6583859} & 10 & 94893473 & 1119.25 & NIP7P1 & \href{http://www.ncbi.nlm.nih.gov/gene/389997}{389997} & 26290 & \\
12 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs11187300}{rs11187300} & 10 & 94920291 & 1114.22 & XRCC6P1 & \href{http://www.ncbi.nlm.nih.gov/gene/387703}{387703} & 46120 & \\
13 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs260698}{rs260698} & 2 & 109566759 & 1111.64 & EDAR & \href{http://www.ncbi.nlm.nih.gov/gene/10913}{10913} & 0 & intron-variant \\
14 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs1834619}{rs1834619} & 2 & 17901485 & 1111.40 & SMC6 & \href{http://www.ncbi.nlm.nih.gov/gene/79677}{79677} & 0 & intron-variant \\
15 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs11637235}{rs11637235} & 15 & 48633153 & 1104.45 & DUT & \href{http://www.ncbi.nlm.nih.gov/gene/1854}{1854} & 0 & intron-variant \\
16 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs4497887}{rs4497887} & 2 & 125859777 & 1097.13 & RNA5SP102 & \href{http://www.ncbi.nlm.nih.gov/gene/100873373}{100873373} & 169180 & \\
17 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs7091054}{rs7091054} & 10 & 95018444 & 1085.45 & RPL17P34 & \href{http://www.ncbi.nlm.nih.gov/gene/643863}{643863} & 25280 & \\
18 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs7090105}{rs7090105} & 10 & 75131545 & 1075.50 & ANXA7 & \href{http://www.ncbi.nlm.nih.gov/gene/310}{310} & 3640 & \\
19 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs973787}{rs973787} & 4 & 38263893 & 1074.57 & TBC1D1 & \href{http://www.ncbi.nlm.nih.gov/gene/23216}{23216} & 123090 & \\
20 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs4279220}{rs4279220} & 4 & 38254182 & 1070.43 & TBC1D1 & \href{http://www.ncbi.nlm.nih.gov/gene/23216}{23216} & 113380 & \\
21 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs7556886}{rs7556886} & 2 & 17908130 & 1062.58 & SMC6 & \href{http://www.ncbi.nlm.nih.gov/gene/79677}{79677} & 0 & intron-variant \\
22 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs12473565}{rs12473565} & 2 & 175163335 & 1056.31 & LOC644158 & \href{http://www.ncbi.nlm.nih.gov/gene/644158}{644158} & 1390 & \\
23 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs6500380}{rs6500380} & 16 & 48375777 & 1051.10 & LONP2 & \href{http://www.ncbi.nlm.nih.gov/gene/83752}{83752} & 0 & intron-variant \\
24 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs2384319}{rs2384319} & 2 & 26206255 & 1033.88 & KIF3C & \href{http://www.ncbi.nlm.nih.gov/gene/3797}{3797} & 810 & upstream-variant-2KB \\
25 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs12220128}{rs12220128} & 10 & 94975011 & 1023.79 & XRCC6P1 & \href{http://www.ncbi.nlm.nih.gov/gene/387703}{387703} & 6090 & \\
26 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs17034770}{rs17034770} & 2 & 109616376 & 1019.03 & EDAR & \href{http://www.ncbi.nlm.nih.gov/gene/10913}{10913} & 10540 & \\
27 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs3792006}{rs3792006} & 2 & 26498222 & 998.96 & HADHB & \href{http://www.ncbi.nlm.nih.gov/gene/3032}{3032} & 0 & intron-variant \\
28 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs4918924}{rs4918924} & 10 & 94976956 & 994.79 & XRCC6P1 & \href{http://www.ncbi.nlm.nih.gov/gene/387703}{387703} & 8030 & \\
29 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs1984996}{rs1984996} & 10 & 95008745 & 990.92 & RPL17P34 & \href{http://www.ncbi.nlm.nih.gov/gene/643863}{643863} & 34980 & \\
30 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs3751631}{rs3751631} & 15 & 52534344 & 987.33 & MYO5C & \href{http://www.ncbi.nlm.nih.gov/gene/55930}{55930} & 0 & reference,synonymous-codon \\
31 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs4578856}{rs4578856} & 2 & 17853388 & 987.29 & SMC6 & \href{http://www.ncbi.nlm.nih.gov/gene/79677}{79677} & 0 & intron-variant \\
32 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs13397666}{rs13397666} & 2 & 109544052 & 986.80 & EDAR & \href{http://www.ncbi.nlm.nih.gov/gene/10913}{10913} & 0 & intron-variant \\
33 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs12619554}{rs12619554} & 2 & 17352372 & 986.20 & ZFYVE9P2 & \href{http://www.ncbi.nlm.nih.gov/gene/100420972}{100420972} & 113180 & \\
34 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs3736508}{rs3736508} & 11 & 45975130 & 981.05 & PHF21A & \href{http://www.ncbi.nlm.nih.gov/gene/51317}{51317} & 0 & missense,reference \\
35 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs12472075}{rs12472075} & 2 & 177691130 & 973.02 & RPL29P8 & \href{http://www.ncbi.nlm.nih.gov/gene/100131991}{100131991} & 16650 & \\
36 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs9522149}{rs9522149} & 13 & 111827167 & 965.50 & ARHGEF7 & \href{http://www.ncbi.nlm.nih.gov/gene/8874}{8874} & 0 & intron-variant \\
37 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs2917454}{rs2917454} & 10 & 78892415 & 964.40 & KCNMA1 & \href{http://www.ncbi.nlm.nih.gov/gene/3778}{3778} & 0 & intron-variant \\
38 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs10882183}{rs10882183} & 10 & 94974083 & 961.04 & XRCC6P1 & \href{http://www.ncbi.nlm.nih.gov/gene/387703}{387703} & 5160 & \\
39 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs10079352}{rs10079352} & 5 & 117494640 & 960.33 & LOC100505811 & \href{http://www.ncbi.nlm.nih.gov/gene/100505811}{100505811} & 123620 & \\
40 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs10935320}{rs10935320} & 3 & 139056584 & 958.33 & MRPS22 & \href{http://www.ncbi.nlm.nih.gov/gene/56945}{56945} & 6270 & \\
41 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs9571407}{rs9571407} & 13 & 34886039 & 957.04 & LINC00457 & \href{http://www.ncbi.nlm.nih.gov/gene/100874179}{100874179} & 123540 & \\
42 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs6542787}{rs6542787} & 2 & 109556365 & 955.56 & EDAR & \href{http://www.ncbi.nlm.nih.gov/gene/10913}{10913} & 0 & intron-variant \\
43 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs953035}{rs953035} & 1 & 36079508 & 954.67 & PSMB2 & \href{http://www.ncbi.nlm.nih.gov/gene/5690}{5690} & 0 & intron-variant \\
44 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs4657449}{rs4657449} & 1 & 165465281 & 951.72 & LOC400794 & \href{http://www.ncbi.nlm.nih.gov/gene/400794}{400794} & 0 & intron-variant \\
45 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs9960403}{rs9960403} & 18 & 13437993 & 949.43 & LDLRAD4 & \href{http://www.ncbi.nlm.nih.gov/gene/753}{753} & 0 & intron-variant \\
46 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs203150}{rs203150} & 18 & 38037221 & 944.32 & RPL17P45 & \href{http://www.ncbi.nlm.nih.gov/gene/100271414}{100271414} & 312750 & \\
47 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs2823882}{rs2823882} & 21 & 17934419 & 942.05 & LINC00478 & \href{http://www.ncbi.nlm.nih.gov/gene/388815}{388815} & 0 & intron-variant \\
48 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs10886189}{rs10886189} & 10 & 119753963 & 937.81 & RAB11FIP2 & \href{http://www.ncbi.nlm.nih.gov/gene/22841}{22841} & 10460 & \\
49 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs2441727}{rs2441727} & 10 & 68224886 & 937.08 & CTNNA3 & \href{http://www.ncbi.nlm.nih.gov/gene/29119}{29119} & 0 & intron-variant \\
50 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs310644}{rs310644} & 20 & 62159504 & 931.90 & PTK6 & \href{http://www.ncbi.nlm.nih.gov/gene/5753}{5753} & 260 & downstream-variant-500B \\
\hline
\hline
\end{tabular}
\end{center}
\normalsize
\end{table}
\clearpage
\begin{table}
\vspace{-0.6in}
\caption{\footnotesize The top 50 SNPs most associated with structure in the TGP data, identified by performing a logistic regression of SNP genotypes on the logistic factors. Shown are the SNP ID and location, deviance measure of differentiation, gene closest to the SNP, distance to gene (rounded to nearest 10bp), and the variant type (if none shown, then intergenic).}
\label{tab:TGPtop}
\scriptsize
\begin{center}
\vspace{-0.25in}
\begin{tabular}{rlllrrlrl}
\hline
& rsid & chr & position & deviance & genesymbol & locusID & distance & variant type \\
\hline
1 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs1426654}{rs1426654} & 15 & 48426484 & 3129.76 & SLC24A5 & \href{http://www.ncbi.nlm.nih.gov/gene/283652}{283652} & 0 & missense,reference \\
2 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs3827760}{rs3827760} & 2 & 109513601 & 2395.27 & EDAR & \href{http://www.ncbi.nlm.nih.gov/gene/10913}{10913} & 0 & missense,reference \\
3 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs922452}{rs922452} & 2 & 109543883 & 2338.38 & EDAR & \href{http://www.ncbi.nlm.nih.gov/gene/10913}{10913} & 0 & intron-variant \\
4 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs372985703}{rs372985703} & 17 & 19172196 & 1975.16 & EPN2 & \href{http://www.ncbi.nlm.nih.gov/gene/22905}{22905} & 0 & intron-variant \\
5 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs4924987}{rs4924987} & 17 & 19247075 & 1949.03 & B9D1 & \href{http://www.ncbi.nlm.nih.gov/gene/27077}{27077} & 0 & intron-variant,missense,reference \\
6 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs260687}{rs260687} & 2 & 109578855 & 1925.18 & EDAR & \href{http://www.ncbi.nlm.nih.gov/gene/10913}{10913} & 0 & intron-variant \\
7 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs7209202}{rs7209202} & 17 & 58532239 & 1890.67 & APPBP2 & \href{http://www.ncbi.nlm.nih.gov/gene/10513}{10513} & 0 & \\
8 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs7211872}{rs7211872} & 17 & 58550725 & 1890.67 & APPBP2 & \href{http://www.ncbi.nlm.nih.gov/gene/10513}{10513} & 0 & \\
9 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs67929453}{rs67929453} & 3 & 139109825 & 1890.57 & LOC100507291 & \href{http://www.ncbi.nlm.nih.gov/gene/100507291}{100507291} & 0 & intron-variant,upstream-variant-2KB \\
10 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs260643}{rs260643} & 2 & 109539653 & 1850.71 & EDAR & \href{http://www.ncbi.nlm.nih.gov/gene/10913}{10913} & 0 & intron-variant \\
11 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs260707}{rs260707} & 2 & 109574150 & 1838.37 & EDAR & \href{http://www.ncbi.nlm.nih.gov/gene/10913}{10913} & 0 & intron-variant \\
12 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs1545071}{rs1545071} & 18 & 67695505 & 1821.35 & RTTN & \href{http://www.ncbi.nlm.nih.gov/gene/25914}{25914} & 0 & intron-variant \\
13 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs12729599}{rs12729599} & 1 & 1323078 & 1812.91 & CCNL2 & \href{http://www.ncbi.nlm.nih.gov/gene/81669}{81669} & 0 & intron-variant \\
14 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs12347078}{rs12347078} & 9 & 344508 & 1811.16 & DOCK8 & \href{http://www.ncbi.nlm.nih.gov/gene/81704}{81704} & 0 & intron-variant \\
15 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs12142199}{rs12142199} & 1 & 1249187 & 1779.28 & CPSF3L & \href{http://www.ncbi.nlm.nih.gov/gene/54973}{54973} & 0 & reference,synonymous-codon \\
16 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs12953952}{rs12953952} & 18 & 67737927 & 1750.15 & RTTN & \href{http://www.ncbi.nlm.nih.gov/gene/25914}{25914} & 0 & intron-variant \\
17 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs9467091}{rs9467091} & 6 & 10651772 & 1746.75 & GCNT6 & \href{http://www.ncbi.nlm.nih.gov/gene/644378}{644378} & 4270 & \\
18 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs7165971}{rs7165971} & 15 & 55921013 & 1736.83 & PRTG & \href{http://www.ncbi.nlm.nih.gov/gene/283659}{283659} & 0 & intron-variant \\
19 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs6132532}{rs6132532} & 20 & 2315543 & 1730.64 & TGM3 & \href{http://www.ncbi.nlm.nih.gov/gene/7053}{7053} & 0 & intron-variant \\
20 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs959071}{rs959071} & 17 & 19142226 & 1729.18 & EPN2 & \href{http://www.ncbi.nlm.nih.gov/gene/22905}{22905} & 0 & intron-variant \\
21 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs10962599}{rs10962599} & 9 & 16795286 & 1726.24 & BNC2 & \href{http://www.ncbi.nlm.nih.gov/gene/54796}{54796} & 0 & intron-variant \\
22 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs967377}{rs967377} & 20 & 53222217 & 1724.93 & DOK5 & \href{http://www.ncbi.nlm.nih.gov/gene/55816}{55816} & 0 & intron-variant \\
23 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs4891381}{rs4891381} & 18 & 67595449 & 1723.79 & CD226 & \href{http://www.ncbi.nlm.nih.gov/gene/10666}{10666} & 0 & intron-variant \\
24 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs377561427}{rs377561427} & 15 & 63988357 & 1713.98 & HERC1 & \href{http://www.ncbi.nlm.nih.gov/gene/8925}{8925} & 0 & frameshift-variant,reference \\
25 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs73889254}{rs73889254} & 22 & 46762214 & 1711.40 & CELSR1 & \href{http://www.ncbi.nlm.nih.gov/gene/9620}{9620} & 0 & intron-variant \\
26 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs4918664}{rs4918664} & 10 & 94921065 & 1700.64 & XRCC6P1 & \href{http://www.ncbi.nlm.nih.gov/gene/387703}{387703} & 45340 & \\
27 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs2759281}{rs2759281} & 1 & 204866365 & 1691.03 & NFASC & \href{http://www.ncbi.nlm.nih.gov/gene/23114}{23114} & 0 & intron-variant \\
28 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs12065033}{rs12065033} & 1 & 173579034 & 1682.54 & ANKRD45 & \href{http://www.ncbi.nlm.nih.gov/gene/339416}{339416} & 0 & utr-variant-3-prime \\
29 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs9796793}{rs9796793} & 16 & 30495652 & 1681.28 & ITGAL & \href{http://www.ncbi.nlm.nih.gov/gene/3683}{3683} & 0 & intron-variant \\
30 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs1240708}{rs1240708} & 1 & 1335790 & 1675.48 & LOC148413 & \href{http://www.ncbi.nlm.nih.gov/gene/148413}{148413} & 0 & intron-variant,upstream-variant-2KB \\
31 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs2615876}{rs2615876} & 10 & 117665860 & 1670.53 & ATRNL1 & \href{http://www.ncbi.nlm.nih.gov/gene/26033}{26033} & 0 & intron-variant \\
32 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs2823882}{rs2823882} & 21 & 17934419 & 1669.32 & LINC00478 & \href{http://www.ncbi.nlm.nih.gov/gene/388815}{388815} & 0 & intron-variant \\
33 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs8097206}{rs8097206} & 18 & 38024931 & 1663.29 & RPL17P45 & \href{http://www.ncbi.nlm.nih.gov/gene/100271414}{100271414} & 300460 & \\
34 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs8071181}{rs8071181} & 17 & 58508582 & 1662.44 & C17orf64 & \href{http://www.ncbi.nlm.nih.gov/gene/124773}{124773} & 0 & reference,synonymous-codon \\
35 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs1075389}{rs1075389} & 15 & 64174177 & 1661.21 & MIR422A & \href{http://www.ncbi.nlm.nih.gov/gene/494334}{494334} & 10950 & \\
36 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs6875659}{rs6875659} & 5 & 175158653 & 1657.54 & HRH2 & \href{http://www.ncbi.nlm.nih.gov/gene/3274}{3274} & 22410 & \\
37 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs7171940}{rs7171940} & 15 & 64170986 & 1654.01 & MIR422A & \href{http://www.ncbi.nlm.nih.gov/gene/494334}{494334} & 7760 & \\
38 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs2148359}{rs2148359} & 9 & 7385508 & 1652.16 & RPL4P5 & \href{http://www.ncbi.nlm.nih.gov/gene/158345}{158345} & 91440 & \\
39 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs7531501}{rs7531501} & 1 & 234338303 & 1648.15 & SLC35F3 & \href{http://www.ncbi.nlm.nih.gov/gene/148641}{148641} & 0 & intron-variant \\
40 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs57742857}{rs57742857} & 15 & 93567352 & 1645.21 & CHD2 & \href{http://www.ncbi.nlm.nih.gov/gene/1106}{1106} & 0 & intron-variant \\
41 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs931564}{rs931564} & 17 & 58631702 & 1636.86 & LOC388406 & \href{http://www.ncbi.nlm.nih.gov/gene/388406}{388406} & 10200 & \\
42 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs4738296}{rs4738296} & 8 & 73857539 & 1632.70 & LOC100288310 & \href{http://www.ncbi.nlm.nih.gov/gene/100288310}{100288310} & 0 & intron-variant \\
43 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs4402785}{rs4402785} & 2 & 104766351 & 1631.33 & LOC100287010 & \href{http://www.ncbi.nlm.nih.gov/gene/100287010}{100287010} & 228950 & \\
44 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs12988506}{rs12988506} & 2 & 33162854 & 1630.14 & LOC100271832 & \href{http://www.ncbi.nlm.nih.gov/gene/100271832}{100271832} & 0 & intron-variant \\
45 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs9410664}{rs9410664} & 9 & 91196828 & 1625.48 & NXNL2 & \href{http://www.ncbi.nlm.nih.gov/gene/158046}{158046} & 6120 & \\
46 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs2041564}{rs2041564} & 2 & 72453847 & 1623.91 & EXOC6B & \href{http://www.ncbi.nlm.nih.gov/gene/23233}{23233} & 0 & intron-variant \\
47 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs6024103}{rs6024103} & 20 & 54034601 & 1623.41 & LOC101927796 & \href{http://www.ncbi.nlm.nih.gov/gene/101927796}{101927796} & 2270 & \\
48 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs6583859}{rs6583859} & 10 & 94893473 & 1619.79 & NIP7P1 & \href{http://www.ncbi.nlm.nih.gov/gene/389997}{389997} & 26290 & \\
49 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs12913832}{rs12913832} & 15 & 28365618 & 1611.23 & HERC2 & \href{http://www.ncbi.nlm.nih.gov/gene/8924}{8924} & 0 & intron-variant \\
50 & \href{http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=rs632876}{rs632876} & 2 & 216572452 & 1610.26 & LINC00607 & \href{http://www.ncbi.nlm.nih.gov/gene/646324}{646324} & 0 & intron-variant \\
\hline
\hline
\end{tabular}
\end{center}
\normalsize
\end{table}
\clearpage
\addcontentsline{toc}{section}{ {\sc References}}
\bibliographystyle{nature}
\section{Introduction}
\label{SEC:intro}
We consider the numerical solution of the initial-boundary value problem (IBVP)
of the porous medium equation (PME) in two dimensions,
\begin{equation}
\begin{cases}
u_t = \nabla \cdot(|u|^m \nabla u) , \quad & \text{in} \quad \Omega \times (t_0,T] \\
u(\vect{x},t_0) = u_0(\vect{x}) , \quad & \text{on} \quad \Omega \\
u(\vect{x},t) = 0 , \quad & \text{on} \quad \partial \Omega \times (t_0,T]
\end{cases}
\label{PME-1}
\end{equation}
where $\Omega$ is a bounded polygonal domain, $u_0(\vect{x})$ is a given function, and $m\ge 1$ is a physical
parameter. PME is a nontrivial generalization of the heat equation. It is found in many areas of
the physical sciences, including gas flow in porous medium, incompressible fluid dynamics,
nonlinear heat transfer, and image processing; e.g., see \cite{Vazquez2007} and references therein.
In the case of gas flow in porous medium, $u$ represents the density of the gas,
$u^m$ the pressure, $u \nabla (u^m)$ the flux, $\nabla (u^m)$ the velocity, and $m$ is the isentropic coefficient.
In the case of radiation diffusion in plasmas, $m$ stands for the power of temperature appearing in the nonlinear diffusion
coefficient and can take values up to 5.5 \cite[Page 23]{Vazquez2007}.
PME itself represents a nonlinear diffusion process.
One of its many interesting features is its degeneracy, which induces the property of finite propagation:
if $u_0(\vect{x})$ has a compact support, the solution
will have a compact support for any time $t > t_0$.
This in effect creates a free boundary that separates the region where $u$ is nonzero from
the region where $u$ vanishes and propagates at a finite speed for all time.
Contrary to the heat equation, which smooths out its initial solutions, the PME solution
can become nonsmooth even when the initial solution is smooth. Moreover,
for a certain type of initial solutions, the solution can exhibit a waiting-time phenomenon
for which the free boundary will not move until a finite amount of time has elapsed.
PME has been studied extensively in theory and there is a vast literature,
including the earlier work by Ole{\u\i}nik et al. \cite{Oleinik1958},
Kala{\v{s}}nikov \cite{Kalashnikov1967}, Aronson \cite{Aronson1969},
the more recent work by Shmarev \cite{Shmarev2005,Shmarev2003},
and the monograph by V{\'a}zquez \cite{Vazquez2007} and references therein.
The numerical solution of PME has also received considerable attention from researchers.
Particularly, error estimates have been obtained for various finite element approximations.
For example, using a regularization approach
(to avoid degenerate or negative diffusion, for instance, with the diffusion
coefficient $|u|^m$ being replaced by $\max (|u|, \epsilon/2)^m$ for some regularization parameter $\epsilon > 0$)
and taking
$\epsilon = \mathcal{O}(h^{\frac{2m+4}{m^2+4m+2}})$, Rose \cite{Rose-1983} shows that
the error for a P1 finite element (for space) -- backward Euler (for time)
approximation of PME is bounded by
\begin{equation}
\left (\sum_{n} \Delta t \| u_h^n - u \|_{L^{m+2}(\Omega)}^{m+2} \right )^{\frac{1}{m+2}}
\le C \left (\Delta t^{\frac{1}{m+1}} + \left (\ln \left (\frac{1}{h}\right )\right )^{\frac{1}{(m+1)(m+2)}} h^{\frac{2}{m+1}}\right ),
\label{rose-1}
\end{equation}
where $h$ is the maximum element diameter and $u_h^n$ is the numerical approximation of $u$ at $t= t_n$.
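The regularization just described simply floors the degenerate diffusion coefficient away from zero. A minimal Python sketch of this idea (the function name and vectorized form are ours, not from \cite{Rose-1983}):

```python
import numpy as np

def regularized_diffusion(u, m, eps):
    """Regularized PME diffusion coefficient max(|u|, eps/2)^m.
    The floor (eps/2)^m keeps the coefficient positive where u vanishes,
    removing the degeneracy at the free boundary."""
    return np.maximum(np.abs(u), 0.5 * eps) ** m

# Away from the free boundary (|u| >= eps/2) the coefficient is the
# original |u|^m; the regularization only acts in a thin layer near u = 0.
```

The choice of $\epsilon$ as a power of $h$, as in \cite{Rose-1983}, balances the regularization error against the discretization error.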
Nochetto and Verdi \cite{Nochetto-1988} consider a class of degenerate PDEs which includes PME
as a special example and improve the result of \cite{Rose-1983}. They show that the error
for a P1 finite element -- 1st-order semi-implicit approximation of PME is bounded by
\begin{align}
&\| u_h - u \|_{L^{\infty}(0,T; H^{-1}(\Omega))}
+ \| (u_{h})^{m+1} - u^{m+1} \|_{L^2(0,T; L^2(\Omega))}
+ \left \| \int_0^t ((u_{h})^{m+1} - u^{m+1})\right \|_{L^{\infty}(0,T; H^{1}(\Omega))}
\notag
\\
& \qquad \qquad
\le C \left (\frac{h^2}{\epsilon} + \frac{h^4}{\epsilon^2 \Delta t} + \Delta t\right )^{\frac 1 2}
\label{nochetto-1}
\\
& \qquad \qquad = \mathcal{O} (h^{\frac{m+2}{2(m+1)}}),\quad \text{if}\quad
\Delta t = \mathcal{O}(h^{\frac{m+2}{m+1}}),\quad \epsilon = \mathcal{O}(h^{\frac{m}{m+1}}).
\notag
\end{align}
For the P1 finite element -- backward Euler approximation, error bounds in various norm are obtained,
for instance,
\begin{equation}
\| u_h - u \|_{L^\infty(0,T; H^{-1}(\Omega))} \le C \left (\Delta t + \left (\ln \left (\frac{1}{h}\right )\right )^{\frac{2m+3}{2m+2}} h\right )
\label{rulla-1}
\end{equation}
by Rulla and Walkington \cite{Rulla1996},
\begin{equation}
\| u_h - u \|_{L^2(0,T; L^2(\Omega))} \le C h^{\frac{m^2+6m+8}{6m^2+14m+8}}
\quad \text{when}\quad \Delta t = \mathcal{O}(h^{\frac{5 m+4}{2m}})
\label{ebmeyer-1}
\end{equation}
by Ebmeyer \cite{Ebmeyer-1998}, and
\begin{equation}
\| u_h - u \|_{L^{m+2}(0,T; L^{m+2}(\Omega))} \le
C \left ( \Delta t^{\frac 1 2} + h + h^{\frac{1}{m+1} \left (\frac{d m}{2m+4}+1\right )} \right )^{\frac{1}{m+2}}
\label{wei-1}
\end{equation}
by Wei and Lefton \cite{Wei1999}, where $d$ is the space dimension.
It is remarked that these estimates are obtained for quasi-uniform meshes. The convergence rate is first order at best
and decreases with $m$. Some of these estimates are shown to be optimal
in the corresponding norm in lieu of the known regularity of the solution of PME.
Moreover, it is worth mentioning that Ebmeyer and Liu \cite{Ebmeyer-2008} obtain error estimates
in quasi-norm and Emmrich and {\v{S}}i{\v{s}}ka \cite{Emmrich2012} prove that
a Galerkin finite element -- backward Euler approximation converges to the weak solution of PME
using the theory of monotone operators.
More recently, Duque et al. \cite{Duque2013} establish $L^{1+\max (\gamma/2)}$ error bounds for
the approximation of a general order continuous Galerkin in space and a general order
discontinuous Galerkin in time for PME with a variable exponent $m = \gamma(\vect{x})$.
Zhang and Wu \cite{Zhang-2009} consider the numerical simulation of the one-dimensional PME
on a uniform mesh using a high-order local discontinuous
Galerkin finite element method. The method can effectively eliminate unwanted, nonphysical
oscillations in the computed solution near the free boundary and lead to a high-order
convergence rate within the solution support and away from the free boundary.
The low regularity (steep gradient and corner shape near the moving free boundary)
and evolving nature of the solution make adaptive moving mesh methods
an attractive tool to improve accuracy in the numerical solution of PME.
A number of works exist in this direction. For example,
Budd et al. \cite{BCHR99} investigate the numerical simulation of
self-similarity solutions of one-dimensional PME using the Moving Mesh PDE (MMPDE)
moving mesh method \cite{HRR94a,HR11} and a specially designed monitor function
to preserve the scaling invariance of PME.
In a series of papers \cite{BHJ05,BHJ05a,BHJJ06} (also see the review paper \cite{Baines-2011}),
Baines and his co-workers study the numerical solution of PME in one and two dimensions using
a moving mesh finite element method that is based upon conserving a local proportion of the total mass
that is present in the projected initial data. Numerical results show that their method gives a second-order
convergence for $m=1$ but only a first-order convergence for $m=3$ when a uniform initial mesh
is used \cite{BHJ05a,Baines-2011}. For $m=3$, the second-order convergence can be recovered in one dimension
if an optimal initial mesh is used. Unfortunately, such an optimal mesh
is significantly more expensive to compute in two dimensions than in one dimension.
Recently, Duque et al. \cite{Duque2014, Duque2015} present a moving mesh finite element method
based on an MMPDE for PME with variable exponents and with/without absorption.
The method shows a first-order convergence when tested for the Barenblatt-Pattle solution of PME.
The objective of this paper is to study an adaptive moving mesh finite element method for the numerical
solution of PME. The method is also based on an MMPDE but significantly different from
the method of \cite{Duque2014, Duque2015}.
The MMPDE we use is formulated by minimizing an energy (cf. (\ref{Ih}))
based on the equidistribution and alignment conditions and the mesh adaptation is controlled
through a matrix-valued function (i.e., a metric tensor) instead of a scalar function.
The advantage of using a metric tensor is that it provides information not only to control
the size of mesh elements but also their shape and orientation.
Generally speaking, a so generated mesh has better alignment with the geometry of the physical solution
than that with a scalar mesh adaptation function.
Moreover, a newly developed compact formulation of the method (cf. (\ref{mmpde-2}) and (\ref{mmpde-3}))
makes its implementation much easier and more efficient.
Mesh adaptation based on the gradient and Hessian of the solution will be considered.
The arclength metric tensor (a gradient-based metric tensor) has been widely used
in the context of moving mesh methods because it results in more stable mesh movement than
a Hessian-based metric tensor and works well for many problems. On the other hand,
there is no theoretical guarantee that the arclength metric tensor will lead to the optimal convergence order
for piecewise linear interpolation or finite element approximation since the error is determined
by the Hessian of the solution in these situations.
As a matter of fact, there are problems, although not very common, for which we have to use
a Hessian-based metric tensor in order to achieve the optimal convergence order for piecewise linear interpolation
or finite element approximation. Interestingly, PME is one of those problems. We shall show that the linear finite element approximation
of the Barenblatt-Pattle solution of PME shows a first-order convergence for arclength-based adaptive meshes
and a second-order convergence for Hessian-based adaptive meshes.
Another feature of the method that is different from those of
Duque et al. \cite{Duque2014, Duque2015} and Baines et al. \cite{BHJ05,BHJ05a,Baines-2011,BHJJ06}
is that PME is solved on a large domain that contains the free boundary
for the whole time period under consideration. In this way, there is no need to explicitly trace
the movement of the free boundary and thus the method can readily deal with more complicated structures
in the solution and in the differential equation. Numerical examples with simple free boundary
(such as the Barenblatt-Pattle solution) and more complex and even merging free boundaries
will be presented as well as those for PME with variable exponents and absorption.
In addition to the above mentioned MMPDE method, a number of other moving mesh methods
have been developed in the past; e.g., see
Hirt et al. \cite{HAC74} (ALE -- Arbitrary Lagrangian-Eulerian),
Miller and Miller \cite{MM81} (MFE -- moving finite element),
Liao and Anderson \cite{LA92} (deformation map),
Li et al. \cite{LTZ01} (mesh rezoning),
Cao et al. \cite{CHR02} (geometric conservation law),
Baines et al. \cite{BHJ05a} (conservation of mass fractions),
and
Budd and Williams \cite{BW06} (parabolic {M}onge-{A}mp\`ere equation).
The interested reader is also referred to the books/review articles
\cite{Bai94a,Baines-2011,Shashkov-2015,BHR09,HR11,Tan05} and references therein.
The outline of the paper is as follows. Some properties of PME that are relevant to the numerical
simulation are described in Section~\ref{SEC:PME-theory}. Section~\ref{SEC:mmfem} is devoted
to the description of the moving mesh finite element method, including the linear finite element
discretization of PME on a moving mesh and the generation of an adaptive moving mesh using
the MMPDE approach. Numerical examples are presented in Sections~\ref{SEC:PME-numerics}
and \ref{SEC:PME-numerics-2} for PME and PME with variable exponents and absorption, respectively.
Finally, Section~\ref{SEC:conclusion} contains the conclusions and further comments.
\vspace{10pt}
\section{Properties of the porous medium equation}
\label{SEC:PME-theory}
Before we describe the finite element approximation of IBVP (\ref{PME-1}), it is instructive to recall some
of its properties that are relevant to numerical simulation. First of all,
it is known (e.g., see V{\'a}zquez \cite{Vazquez2007}) that when $u_0^{m+2} \in L^{1}(\Omega)$,
IBVP \eqref{PME-1} has a weak solution $u$ satisfying
$u(\cdot, t)^{m+2} \in L^{1}(\Omega)$ for any $t \in (t_0, T]$ and $u^{m+1} \in L^{2}(0,T; H^1_0(\Omega))$.
Moreover, a nonnegative weak solution exists
if $u_0^{m+2} \in L^{1}(\Omega)$, $u_0 \in L^1(\Omega)$, and $u_0 \ge 0$. The uniqueness
of the weak solution is guaranteed if it is further assumed that $u \in L^2(\Omega \times (0,T))$.
PME is degenerate whenever $u = 0$. Due to this degeneracy, the PDE has the property of finite speed
of propagation: if compactly supported initially, its solution remains compactly supported at any finite
time, with the support monotonically expanding as time evolves. The boundary of the support forms a moving interface
$\Gamma(t)$ which is commonly referred to as a free boundary. The velocity of the free boundary is
given by Darcy's law (e.g., see Shmarev \cite{Shmarev2005}), i.e.,
\begin{equation}
\Gamma'(t) = - \lim_{\vect{x} \to \Gamma(t)^-} \nabla \left( \frac{u^m(\vect{x},t)}{m} \right),
\label{Darcy-law}
\end{equation}
where the limit is taken from the interior of the support.
In addition, PME exhibits a waiting-time phenomenon: for a certain type of initial solutions
the free boundary does not move until a finite amount of time has elapsed.
Loosely speaking, from (\ref{Darcy-law}) we may expect to see this phenomenon for initial solutions having
vanishing $\nabla (u^m)$ at the initial free boundary.
A few classes of special solutions of IBVP \eqref{PME-1} are known, among which is the Barenblatt-Pattle
solution, viz.,
\begin{equation}
u(r,t) =
\begin{cases}
\frac{1}{\lambda^d(t)}
\left(
1 - \left( \frac{r}{r_0 \lambda(t)} \right)^2
\right)^{\frac{1}{m}} , \quad & \text{for} \quad r \leq r_0 \lambda(t) \\
0 , \quad & \text{for} \quad r > r_0 \lambda(t)
\end{cases}
\label{BP-soln}
\end{equation}
where
\[
r = | \vect{x} | , \quad
\lambda(t) = \left(\frac{t}{t_0} \right)^{\frac{1}{2+d m}} ,
\quad
t_0 = \frac{r_0^2 m}{2 (2 + d m)} ,
\]
and $r_0>0$ is a given parameter.
It is radially symmetric, self-similar, and compact-supported for any finite time.
Moreover, $u^m$ is Lipschitz continuous in $\vect{x}$ and $t$
and $\nabla (u^m)$ is bounded in the support of $u(\cdot, t)$, $\text{supp}(u(\cdot, t))$.
Furthermore, the solution is H\"{o}lder continuous.
The slope of the solution at the free boundary is finite for $m = 1$
and becomes infinite when $m>1$, which causes challenges for the numerical solution of PME.
These regularity properties also hold for general compactly supported solutions of (\ref{PME-1}),
and their free boundaries can be shown to be at least Lipschitz continuous in both space and time;
e.g., see \cite{Aronson1969,Caffarelli1980,Daskalopoulos1998a,Shmarev2005}.
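As a concrete sanity check, the Python sketch below evaluates the Barenblatt-Pattle solution (\ref{BP-soln}) for $d=2$ and verifies Darcy's law (\ref{Darcy-law}) numerically: the exact free-boundary speed $r_0\lambda'(t)$ agrees with $-\partial_r(u^m/m)$ computed by a finite difference just inside the support (the function names are ours):

```python
import numpy as np

def barenblatt(r, t, m, d=2, r0=0.5):
    """Barenblatt-Pattle solution of u_t = div(|u|^m grad u), with r = |x|."""
    t0 = r0**2 * m / (2.0 * (2.0 + d * m))
    lam = (t / t0) ** (1.0 / (2.0 + d * m))
    core = 1.0 - (r / (r0 * lam)) ** 2
    return lam ** (-d) * np.maximum(core, 0.0) ** (1.0 / m)

def front_radius(t, m, d=2, r0=0.5):
    """Free-boundary location r = r0 * lambda(t)."""
    t0 = r0**2 * m / (2.0 * (2.0 + d * m))
    return r0 * (t / t0) ** (1.0 / (2.0 + d * m))

def front_speed_exact(t, m, d=2, r0=0.5):
    """r0 * lambda'(t), the exact speed of the free boundary."""
    return front_radius(t, m, d, r0) / ((2.0 + d * m) * t)

def front_speed_darcy(t, m, d=2, r0=0.5, h=1e-6):
    """Darcy's law: -d/dr (u^m / m), approached from inside the support."""
    R = front_radius(t, m, d, r0)
    p = barenblatt(np.array([R - 2 * h, R - h]), t, m, d, r0) ** m / m
    return -(p[1] - p[0]) / h
```

Since $u^m/m$ is quadratic in $r$ inside the support, the one-sided difference agrees with the exact front speed to roughly five digits for $h = 10^{-6}$, illustrating (\ref{Darcy-law}) without tracking the interface explicitly.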
\section{The moving mesh finite element method}
\label{SEC:mmfem}
In this section we describe the adaptive moving mesh finite element approximation of IBVP~\eqref{PME-1}.
To begin with, we note that there are roughly two approaches for solving the IBVP.
The first, as used in \cite{Rose-1983,Zhang-2009}, is to solve PME in a large domain
containing the free boundary for the whole time period of the simulation.
With this approach, there is no need to explicitly treat the free boundary, which makes
the approach more amenable to problems with complex solution supports.
The main disadvantage of this approach is that the solution has a corner shape between
the regions of zero and nonzero solution values and thus its regularity on the whole domain
is at most $H^1$. An $H^1$ regularity often means at best a first-order convergence
in the numerical solution as the mesh is refined.
The second approach is to solve the problem only in the region of compact support; e.g., see
\cite{BHJ05a,Duque2014,Duque2015}.
One of the advantages of this approach is that a smaller spatial domain is used
and thus fewer mesh points can be used to achieve the same computational accuracy.
Moreover, the regularity of the solution is better on the support than on a larger domain
since it does not have a corner shape. As a result, the numerical solution
can have a higher convergence order than that with the first approach.
The main disadvantage is that the boundary movement has to be treated explicitly
using Darcy's law (\ref{Darcy-law}).
We use the first approach in this work. We choose this approach because it does not need to
explicitly treat the free boundary and has the potential to deal with problems
having complex solution supports. To better resolve the corner shape in the solution and
improve the computational accuracy, we employ an MMPDE-based moving mesh
method \cite{HR11} to adaptively and dynamically concentrate the mesh points around the free boundary.
Linear finite elements and the fifth-order Radau IIA method (e.g., see Hairer and Wanner \cite{HW96})
are used for the spatial and temporal discretization, respectively. As we will see in Section~\ref{SEC:PME-numerics},
a second-order convergence of the finite element approximation in space can be achieved when
a properly adapted mesh is used.
\subsection{Finite element discretization}
\label{SEC:fem}
We now describe the finite element discretization. Denote the time instants by
\begin{equation}
t_0 = 0 < t_1 < \ldots < t_{n_f} \equiv T .
\label{time-grid}
\end{equation}
For the moment, we assume that the simplicial meshes $\mathcal{T}_h^n$, $n = 0,\ldots,n_f$
for the physical domain $\Omega$ at these time instants are known and
have the same connectivity and the same numbers of vertices and elements.
(Their generation will be discussed in the next subsection.)
Denote the coordinates of the vertices of $\mathcal{T}_h^n$
by $\vect{x}_j^n$, $j = 1, ..., N_v$, where $N_v$ is the number of all vertices.
The mesh $\mathcal{T}_h(t)$ between any two time instants $t_n$ and $t_{n+1}$
is defined through linear interpolation, i.e.,
\begin{align}
& \vect{x}_j(t) =
\frac{t-t_n}{t_{n+1}-t_n} \vect{x}_j^{n+1} +
\frac{t_{n+1}-t}{t_{n+1}-t_n} \vect{x}_j^{n} , \quad \forall j = 1, ..., N_v
\\
&\dot{\vect{x}}_j (t) = \frac{\vect{x}_j^{n+1}-\vect{x}_j^{n}}{t_{n+1}-t_n},\quad
j = 1, ..., N_v.
\end{align}
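These two formulas translate directly into code. A minimal NumPy sketch (the function name {\em interpolate\_mesh} and the toy vertex arrays are ours, for illustration only):

```python
import numpy as np

def interpolate_mesh(X_n, X_np1, t_n, t_np1, t):
    """Linear-in-time mesh interpolation between t_n and t_{n+1}:
    returns the vertex positions x_j(t) and the (piecewise-constant)
    mesh velocities xdot_j(t)."""
    w = (t - t_n) / (t_np1 - t_n)
    X = (1.0 - w) * X_n + w * X_np1           # x_j(t)
    Xdot = (X_np1 - X_n) / (t_np1 - t_n)      # xdot_j(t)
    return X, Xdot

# toy example: 4 vertices in 2D moved from the origin to (1,1)
X0 = np.zeros((4, 2))
X1 = np.ones((4, 2))
X, Xdot = interpolate_mesh(X0, X1, 0.0, 0.5, 0.25)  # halfway in time
```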
Denote by $\phi_j(\vect{x}, t)$ the linear basis function associated with vertex $\vect{x}_j(t)$.
For convenience, we assume that the vertices are arranged such that the first $N_{vi}$ vertices
are the interior vertices.
Let
\[
V_h(t) = \text{span}\{\phi_1(\cdot, t), ..., \phi_{N_{vi}}(\cdot, t)\} .
\]
Then, the linear finite element approximation to the solution of IBVP \eqref{PME-1} is defined
as $u_h(\cdot, t) \in V_h(t)$, $ t \in (t_0, T]$ such that
\begin{equation}
\begin{cases}
& \int_{\Omega} \frac{\partial u_h}{\partial t} v~d\vect{x} = - \int_{\Omega} |u_h|^m \nabla u_h
\cdot \nabla v~d\vect{x} ,
\quad \forall v \in V_h(t),\quad t_0 < t \leq T \\
& \int_{\Omega} (u_h(\vect{x},0) - u^0(\vect{x})) v~d\vect{x} = 0, \quad \forall v \in V_h(t) .
\end{cases}
\label{fem0}
\end{equation}
The above equation can be cast in matrix form. Indeed, expressing
\[
u_h(\vect{x},t) = \sum_{j=1}^{N_{vi}} u_j(t) \phi_j(\vect{x},t)
\]
and differentiating it with respect to $t$, we have
\[
\frac{\partial u_h}{\partial t} = \sum_{j=1}^{N_{vi}} \frac{\partial u_j}{\partial t} \phi_j(\vect{x},t) +
\sum_{j=1}^{N_{vi}} u_j(t) \frac{\partial \phi_j}{\partial t} .
\]
It can be shown (e.g. see Jimack and Wathen \cite[Lemma 2.3]{Jimack-1991}) that
\[
\frac{\partial \phi_j}{\partial t} = - \nabla \phi_j \cdot \dot{\vect{X}},\quad \text{a.e. in } \Omega
\]
where the mesh velocity $\dot{\vect{X}}$ is defined as
\[
\dot{\vect{X}} (\vect{x},t) = \sum_{j=1}^{N_v} \dot{\vect{x}}_j(t) \phi_j (\vect{x}, t) .
\]
Then, we get
\[
\frac{\partial u_h}{\partial t} = \sum_{j=1}^{N_{vi}} \frac{\partial u_j}{\partial t} \phi_j(\vect{x},t) - \nabla u_h\cdot \dot{\vect{X}} .
\]
From this and taking $v = \phi_i$ ($i = 1, ..., N_{vi}$) in (\ref{fem0}) successively, we obtain
\[
\sum_{j=1}^{N_{vi}} \left(\int_{\Omega}
\phi_j \phi_i~d\vect{x}\right) \frac{d u_j}{dt}
= \int_{\Omega} \nabla u_h \cdot \left( \dot{\vect{X}} \phi_i
- u_h^m \nabla \phi_i \right) ~d\vect{x}, \quad i = 1, ..., N_{vi}, \quad t_0 < t \leq T
\]
which can be cast in the matrix form as
\begin{equation}
B(\vect{X}) \dot{\vect{U}} = F(\vect{U}, \vect{X}, \dot{\vect{X}}),
\label{fem-2}
\end{equation}
where $B$ is the mass matrix and $\vect{X}$ and $\vect{U}$ are the vectors representing
the mesh and solution, respectively.
This ODE system is integrated from $t_n$ to $t_{n+1}= t_n + \Delta t_n$ using the fifth-order Radau IIA method,
with $\Delta t_n$ being determined
by a standard time step size selection procedure (e.g., see Hairer et al. \cite[Section II.4]{HNW93}) and
using a two-step error estimator of Gonzalez-Pinto et al.~\cite{Montijano2004}.
The relative and absolute tolerances $rtol = 10^{-6}$ and $atol = 10^{-8}$ are taken in the computation.
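For illustration, SciPy's {\em Radau} solver also implements the fifth-order Radau IIA method with adaptive step-size control (using its own built-in error estimator rather than the two-step estimator of Gonzalez-Pinto et al.); a sketch on a stand-in scalar ODE in place of (\ref{fem-2}):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stand-in right-hand side for B(X)^{-1} F(U, X, Xdot); the real system
# would be the semi-discrete PME (fem-2) on the moving mesh.
def rhs(t, u):
    return -u

# 'Radau' is SciPy's implementation of the fifth-order Radau IIA method
# with adaptive step-size selection; same tolerances as in the text.
sol = solve_ivp(rhs, (0.0, 1.0), [1.0], method="Radau",
                rtol=1e-6, atol=1e-8)
```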
The whole computation alternates between the integration of PME and the generation of the mesh.
Starting with the current mesh $\mathcal{T}_h^n$ and a solution $u_h^n(\vect{x})\approx u(\vect{x}, t_n)$
defined thereon, a new mesh $\mathcal{T}_h^{n+1}$ is generated
using the moving mesh strategy to be described in the next subsection.
Then, the discrete PME (\ref{fem-2}) is integrated from $t_n$ to $t_{n+1}$
(as described above)
to obtain the solution
approximation $u_h^{n+1}(\vect{x})$.
\subsection{An MMPDE-based moving mesh strategy}
\label{SEC:mmpde}
We now describe the generation of $\mathcal{T}_h^n$, $ n = 1, ..., n_f$.
We assume that the mesh $\mathcal{T}_h^n$ and a computed solution
$u_h^n(\vect{x})$ are known at $t = t_n$. We also assume that a reference computational mesh
$\hat{\mathcal{T}}_{c,h} = \{ \hat{\vect{\xi}}_j, j = 1, ..., N_v\}$ having the same
connectivity and the same numbers of vertices and elements as $\mathcal{T}_h^n$ has been chosen.
In our computation, it is taken as a uniform mesh
(in the Euclidean metric)
defined on $\Omega$.
$\hat{\mathcal{T}}_{c,h}$ stays fixed for the whole computation.
\begin{figure}[t]
\centering
{\footnotesize
\begin{tikzpicture}[scale = 0.8]
\draw [thick,->] (0,0) -- (7,0);
\draw [right] (7,0) node {$\xi$};
\draw [thick,->] (0,0) -- (0,7);
\draw [above] (0,7) node {$x$};
\draw[line width=0.25ex] (0,0) -- (1.4, 1.2) -- (4.4,5) -- (6,6);
\draw (1.4,0) -- (1.4, 1.2) -- (0, 1.2);
\draw (4.4,0) -- (4.4, 5) -- (0, 5);
\draw (6,0) -- (6, 6) -- (0, 6);
\draw[fill] (0,0) circle (.5ex);
\draw[fill] (1.4,1.2) circle (.5ex);
\draw[fill] (4.4,5) circle (.5ex);
\draw[fill] (6,6) circle (.5ex);
\draw[fill] (1.4,0) circle (.5ex);
\draw[fill] (4.4,0) circle (.5ex);
\draw[fill] (6,0) circle (.5ex);
\draw[fill] (0,1.2) circle (.5ex);
\draw[fill] (0,5) circle (.5ex);
\draw[fill] (0,6) circle (.5ex);
\draw [below] (0,0) node {$\xi_1^{n+1}(\hat{\xi}_1)$};
\draw [below] (1.4,0) node {$\xi_2^{n+1}$};
\draw [below] (4.7,0) node {$\xi_3^{n+1}$};
\draw [below] (6,0) node {$\xi_4^{n+1}(\hat{\xi}_4)$};
\draw [left] (0,0.1) node {$x_1^{n}(x_1^{n+1})$};
\draw [left] (0,6) node {$x_4^{n}(x_4^{n+1})$};
\draw [left] (0,1.2) node {$x_2^{n}$};
\draw [left] (0,5) node {$x_3^{n}$};
\draw [left] (0,4.4933) node {$x_3^{n+1}$};
\draw [left] (0,1.96) node {$x_2^{n+1}$};
\draw[fill=red] (2,0) circle (.5ex);
\draw[fill=red] (4,0) circle (.5ex);
\draw [below] (2.1,0) node {$\hat{\xi}_2$};
\draw [below] (4,0) node {$\hat{\xi}_3$};
\draw[fill=red] (2,1.96) circle (.5ex);
\draw[fill=red] (0,1.96) circle (.5ex);
\draw[fill=red] (4,4.4933) circle (.5ex);
\draw[fill=red] (0,4.4933) circle (.5ex);
\draw [right] (6,6) node {$x=\Phi_h(\xi)$};
\draw[red] (2,0) -- (2, 1.96) -- (0, 1.96);
\draw[red,->] (2,0) -- (2, 1);
\draw[red,->] (2, 1.96) -- (1, 1.96);
\draw[red] (4,0) -- (4, 4.4933) -- (0, 4.4933);
\draw[red,->] (4,0) -- (4, 2.25);
\draw[red,->] (4, 4.4933) -- (2, 4.4933);
\end{tikzpicture}
}
\caption{A sketch of the relations among the meshes $\hat{\mathcal{T}}_{c,h} = \{ \hat{\vect{\xi}}_j\}$,
$\mathcal{T}_{c,h}^{n+1} = \{\vect{\xi}_j^{n+1} \}$, $\mathcal{T}_h^n=\{\vect{x}_j^{n}\}$,
and $\mathcal{T}_h^{n+1}=\{\vect{x}_j^{n+1}\}$. The function $\vect{x} = \Phi_h(\vect{\xi})$ is determined
as the correspondence between $\mathcal{T}_{c,h}^{n+1}$ and $\mathcal{T}_h^{n}$; and $\mathcal{T}_h^{n+1}$ is computed
as $\mathcal{T}_h^{n+1} = \Phi_h(\hat{\mathcal{T}}_{c,h})$ using linear interpolation.}
\label{fig:mesh-relation}
\end{figure}
The generation of $\mathcal{T}_h^{n+1}$ is through the computational mesh
$\mathcal{T}_{c,h} = \{ \vect{\xi}_j, j = 1, ..., N_v\}$ which serves as an intermediate variable.
(A sketch of the relations among the meshes $\hat{\mathcal{T}}_{c,h}$,
$\mathcal{T}_{c,h}^{n+1}$, $\mathcal{T}_h^n$, and $\mathcal{T}_h^{n+1}$ is shown in Fig.~\ref{fig:mesh-relation}.)
First, an MMPDE-based mesh equation
(to be described below) for the velocities of the computational vertices is employed.
It takes the form (cf. (\ref{mmpde-2}))
\[
\begin{cases}
\frac{d \vect{\xi}_j}{d t} = \vect{v}_j(\mathbb{M}, \mathcal{T}_h^n; \vect{\xi}_1, ..., \vect{\xi}_{N_v}),&\quad j = 1, ..., N_v, \quad t \in (t_n, t_{n+1}]
\\
\vect{\xi}_j(t_n) = \hat{\vect{\xi}}_j,& \quad j = 1, ..., N_v
\end{cases}
\]
where $\vect{v}_j$ denotes the mesh velocity for the $j$-th node which depends on $\mathcal{T}_h^n$, the metric tensor
$\mathbb{M}$ defined thereon, and $\mathcal{T}_{c,h}$. Here, the initial mesh is taken to be the reference computational mesh $\hat{\mathcal{T}}_{c,h}$.
The system is integrated and the mesh $\mathcal{T}_{c,h}^{n+1} \approx \mathcal{T}_{c,h}(t_{n+1})$ is obtained. During the integration, both $\mathbb{M}$
and $\mathcal{T}_h^n$ are kept fixed. Notice that $\mathcal{T}_{c,h}^{n+1}$ and $\mathcal{T}_h^n$
form a correspondence relation, say, $\vect{x}_j^n = \Phi_h( \vect{\xi}_j^{n+1}),\, j = 1, ..., N_v$ or $\vect{x} = \Phi_h( \vect{\xi} )$.
Then, the vertices of the new physical mesh $\mathcal{T}_h^{n+1}$ are defined as
$\vect{x}_j^{n+1} = \Phi_h(\hat{\vect{\xi}}_j), \, j = 1, ..., N_v$. Since $\Phi_h$ is defined only at
the vertices of $\mathcal{T}_{c,h}^{n+1}$, we need to compute $\Phi_h(\hat{\vect{\xi}}_j)$ using interpolation.
Linear interpolation is used since it is important to keep the nonsingularity of the mesh
while it is unnecessary to compute the mesh to high accuracy.
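In one space dimension this interpolation step is a piecewise-linear lookup. The following sketch reproduces the configuration of Fig.~\ref{fig:mesh-relation} (vertex values read off the figure) and recovers the labeled values $x_2^{n+1}=1.96$ and $x_3^{n+1}\approx 4.4933$:

```python
import numpy as np

xi_np1 = np.array([0.0, 1.4, 4.4, 6.0])  # computational mesh T_{c,h}^{n+1}
x_n    = np.array([0.0, 1.2, 5.0, 6.0])  # current physical mesh T_h^n
xi_hat = np.array([0.0, 2.0, 4.0, 6.0])  # reference mesh hat{T}_{c,h}

# Phi_h is known only at the vertices xi_j^{n+1} (where it takes the
# values x_j^n); the new physical vertices are its piecewise-linear
# interpolant evaluated at the reference points hat{xi}_j.
x_np1 = np.interp(xi_hat, xi_np1, x_n)
```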
The metric tensor is assumed to be symmetric and uniformly positive definite on $\Omega$.
It is used to control the size, shape, and orientation of the elements of the mesh to be generated.
We consider three types of meshes: a uniform mesh and arclength- and Hessian-based
adaptive meshes. The metric tensors associated with the adaptive meshes are defined as
\begin{align}
\mathbb{M} & = (\mathbb{I} + \nabla u_h^n (\nabla u_h^n)^T)^{\frac{1}{2}} ,
\label{M-arclength}
\\
\mathbb{M} & = \left[ \det \left( \mathbb{I} + |H(u_h^n)| \right)\right]^{-\frac{1}{6}}
\left( \mathbb{I} + |H(u_h^n)|\right) ,
\label{M-hessian}
\end{align}
where $\mathbb{I}$ is the $d\times d$ identity matrix, $H(u_h^n)$ is a recovered Hessian
for the piecewise linear finite element solution $u_h^n$, and
$|H(u_h^n)| = Q \text{diag} (|\lambda_1|, ..., |\lambda_d|) Q^T$ with
$Q \text{diag} (\lambda_1, ..., \lambda_d) Q^T$ being the eigen-decomposition of $H(u_h^n)$.
The tensor (\ref{M-arclength}) is the frequently used arclength monitor function which, loosely speaking,
distributes mesh points uniformly with respect to arclength. The tensor (\ref{M-hessian})
is optimal \cite{HS03} for the $L^2$ norm of linear interpolation error.
In our computation, we use a least squares fitting strategy for Hessian recovery
(e.g., see \cite{Kamens09PhD,KaHu2013}).
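The eigen-decomposition construction of $|H(u_h^n)|$ and the metric (\ref{M-hessian}) (with the two-dimensional exponent $-1/6$) can be sketched as follows; the function name {\em hessian\_metric} is ours:

```python
import numpy as np

def hessian_metric(H):
    """M = det(I + |H|)^(-1/6) (I + |H|) for a symmetric 2x2 Hessian H,
    with |H| = Q diag(|lambda_1|, |lambda_2|) Q^T."""
    lam, Q = np.linalg.eigh(H)                  # H is symmetric
    absH = Q @ np.diag(np.abs(lam)) @ Q.T
    A = np.eye(H.shape[0]) + absH
    return np.linalg.det(A) ** (-1.0 / 6.0) * A

# example: indefinite Hessian; |H| flips the sign of the -3 eigenvalue
M = hessian_metric(np.array([[2.0, 0.0], [0.0, -3.0]]))
```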
We now describe the formulation of the MMPDE-based mesh equation.
A key component of the formulation is the $\mathbb{M}$-uniform mesh concept
with which any (nonuniform) adaptive mesh is viewed as a uniform
one in some metric. It is known \cite{Hua06,HR11} that such an $\mathbb{M}$-uniform mesh $\mathcal{T}_h$
approximately satisfies the equidistribution and alignment conditions
\begin{align}
& |K| \sqrt{\det(\mathbb{M}_K)} = \frac{\sigma_h |K_c|}{|\Omega_c|} , \quad \forall K \in \mathcal{T}_h
\label{eq-1}
\\
&\frac{1}{d} \text{trace}
\left(
\left( F'_K \right)^{-1}
\mathbb{M}_K^{-1} \left (F'_K \right )^{-T}
\right)
=
\det \left(
\left( F'_K \right)^{-1}
\mathbb{M}_K^{-1} \left (F'_K \right )^{-T}
\right)^{\frac{1}{d}} , \quad \forall K \in \mathcal{T}_h
\label{ali-1}
\end{align}
where $|K|$ is the volume of $K$, $\mathbb{M}_K$ is the average of $\mathbb{M}$ over $K$, $\det(\cdot)$ and $\text{trace}(\cdot)$
denote the determinant and trace of a matrix, respectively, $|K_c|$ is the volume of the element $K_c \in \mathcal{T}_{c,h}$
corresponding to $K$, $F'_K$ is the Jacobian matrix of the affine mapping
$F_K: K_c \to K$, and
\[
\sigma_h = \sum_{K \in \mathcal{T}_h} |K| \sqrt{\det(\mathbb{M}_K)}, \quad
|\Omega_c | = \sum_{K_c \in \mathcal{T}_{c,h}} |K_c| .
\]
The equidistribution condition (\ref{eq-1}) requires that the volume of $K$ in the metric $\mathbb{M}$ be proportional
to $|K_c|$ with constant proportionality while the alignment condition (\ref{ali-1}) requires that $K$ be similar to $K_c$.
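Both conditions are easy to monitor in practice. A small sketch (function name ours) that returns the ratios of the left to right sides of (\ref{eq-1}) and (\ref{ali-1}); both equal one exactly for an $\mathbb{M}$-uniform mesh, and the alignment ratio is always $\geq 1$ by the arithmetic-geometric mean inequality:

```python
import numpy as np

def mesh_quality(FK, MK, vol_K, vol_Kc, sigma_h, vol_Omega_c):
    """Ratios of the left to right sides of the equidistribution (eq-1)
    and alignment (ali-1) conditions for one element K."""
    d = MK.shape[0]
    eq = vol_K * np.sqrt(np.linalg.det(MK)) / (sigma_h * vol_Kc / vol_Omega_c)
    Finv = np.linalg.inv(FK)
    S = Finv @ np.linalg.inv(MK) @ Finv.T
    ali = (np.trace(S) / d) / np.linalg.det(S) ** (1.0 / d)
    return eq, ali

# an exactly M-uniform configuration gives ratios (1, 1)
eq, ali = mesh_quality(np.eye(2), np.eye(2), 1.0, 1.0, 1.0, 1.0)
```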
The meshes that closely satisfy these conditions can be obtained by minimizing the energy function
\begin{align}
I_h & = \theta \sum_{K \in \mathcal{T}_h} |K| \sqrt{\det(\mathbb{M}_K)}
\left(
\text{trace}({(F_K')}^{-1} {\mathbb{M}}_{K}^{-1} {(F_K')}^{-T})
\right)^{\frac{d p}{2}} \nonumber \\
& \quad \quad + (1-2\theta) d^{\frac{dp}{2}}
\sum_{K \in \mathcal{T}_h} |K| \sqrt{\det (\mathbb{M}_K)}
\left(
\frac{|K_c|}{|K| \sqrt{\det(\mathbb{M}_K)}}
\right)^{p},
\label{Ih}
\end{align}
which is a Riemann sum of a continuous functional developed in \cite{Hua01b}
based on equidistribution and alignment for variational mesh adaptation.
Here, $\theta \in (0, \frac{1}{2}]$ and $p > 1$ are non-dimensional parameters. We choose $\theta = 1/3$
and $p = 2$ in our computation.
Notice that $I_h$ is a function of the position of the computational vertices $\{ \vect{\xi}_j\}_{j=1}^{N_v}$ and
the physical coordinates $\{ \vect{x}_j\}_{j=1}^{N_v}$. For the current situation, we choose $\mathcal{T}_h$ to be $\mathcal{T}_h^n$
(the current physical mesh). Then, $I_h$ is a function of $\vect{\xi}_j$, $j = 1, ..., N_v$ only.
Instead of directly minimizing $I_h$ with respect to these coordinates, we follow the MMPDE approach
\cite{HRR94a} and define the moving mesh equation as a gradient system of $I_h$,
\begin{equation}
\frac{d \vect{\xi}_j}{d t} = - \frac{P_j}{\tau} \left [ \frac{\partial I_h} {\partial \vect{\xi}_j} \right ]^T,
\quad j = 1, ..., N_v
\label{mmpde-1}
\end{equation}
where the derivative of $I_h$ with respect to $\vect{\xi}_j$, ${\partial I_h}/{\partial \vect{\xi}_j}$,
is considered as a row vector, $\tau > 0$ is a parameter used to control the response time of the mesh movement
to the change in the metric tensor, and $P_j = \det(\mathbb{M}(\vect{x}_j))^{\frac{p-1}{2}} $ is chosen such that
(\ref{mmpde-1}) is invariant under the scaling transformation of $\mathbb{M}$: $\mathbb{M} \to c \mathbb{M}$ for any positive
constant $c$. The derivative of $I_h$ with respect to $\vect{\xi}_j$ can be found
analytically using the notion of scalar-by-matrix differentiation; see \cite{HK2014}.
With these analytical formulas, we can rewrite (\ref{mmpde-1}) as
\begin{equation}
\label{mmpde-2}
\frac{d \vect{\xi}_j} {d t}= \frac{P_j}{\tau} \sum_{K \in \omega_j} |K| \vect{v}_{j_K}^K , \quad j = 1, \dotsc, N_v
\end{equation}
where $\omega_j$ is the element patch associated with the $j$-th vertex, $j_K$ is its local index of the vertex on $K$,
and $\vect{v}_{j_K}^K$ is the velocity contributed by the element $K$ to the vertex $j_K$.
The velocities contributed by $K$ to its vertices are given by
\begin{equation}
\label{mmpde-3}
\begin{bmatrix} {(\vect{v}_1^K)}^T \\ \vdots \\ {(\vect{v}_d^K)}^T \end{bmatrix}
= - E_K^{-1} \frac{\partial G}{\partial \mathbb{J}} - \frac{\partial G}{\partial \det(\mathbb{J})}
\frac{\det(\hat{E}_K)}{\det(E_K)} \hat{E}_K^{-1},
\quad
\vect{v}_0^K = - \sum_{i=1}^d \vect{v}_i^K ,
\end{equation}
where $E_K = [\vect{x}_1^K-\vect{x}_0^K, ..., \vect{x}_d^K-\vect{x}_0^K]$ is the edge matrix of $K$,
$\hat{E}_K$ is the edge matrix for $K_c$ which is defined similarly, the function $G$ is associated with
the energy (\ref{Ih}) and defined as
\[
G(\mathbb{J}, \det(\mathbb{J}), \mathbb{M})
= \theta \sqrt{\det(\mathbb{M})} \left ( \text{trace}(\mathbb{J} \mathbb{M}^{-1} \mathbb{J}^T) \right )^{\frac{dp}{2}}
+ (1-2\theta) d^{\frac{d p}{2}} \sqrt{\det(\mathbb{M})} \left (\frac{\det(\mathbb{J})}{\sqrt{\det(\mathbb{M})}} \right )^p,
\]
and its derivatives (evaluated at $(\mathbb{J}, \det(\mathbb{J}), \mathbb{M}) = ((F_K')^{-1}, \det(F_K')^{-1}, \mathbb{M}_K)$)
with respect to the first (the Jacobian matrix) and second arguments are given by
\begin{align*}
& \frac{\partial G}{\partial \mathbb{J}} = d p \theta \sqrt{\det(\mathbb{M})} \left ( \text{trace}(\mathbb{J} \mathbb{M}^{-1} \mathbb{J}^T)
\right )^{\frac{dp}{2}-1}\mathbb{M}^{-1} \mathbb{J}^T,
\\
& \frac{\partial G}{\partial \det(\mathbb{J})} = p (1-2\theta) d^{\frac{d p}{2}} \det(\mathbb{M})^{\frac{1-p}{2}}
\det (\mathbb{J})^{p-1} .
\end{align*}
Notice that ${\partial G}/{\partial \mathbb{J}}$ is a $d$-by-$d$ matrix.
In practical computation, we can first compute the local velocities $\vect{v}_j^K$, $j = 0, ..., d$ for all elements
using (\ref{mmpde-3}) and then obtain the velocity for any mesh point
by summing the volume weighted contributions from its neighboring elements (cf. (\ref{mmpde-2})).
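As a concreteness check, the function $G$ and its two derivatives can be evaluated directly (function name ours; $\theta = 1/3$ and $p = 2$ as in our computations):

```python
import numpy as np

def G_and_derivatives(J, detJ, M, theta=1.0/3.0, p=2.0):
    """G(J, det(J), M) from the text together with dG/dJ (a d-by-d
    matrix) and dG/d(det J)."""
    d = M.shape[0]
    sM = np.sqrt(np.linalg.det(M))
    Minv = np.linalg.inv(M)
    tr = np.trace(J @ Minv @ J.T)
    G = (theta * sM * tr ** (d * p / 2.0)
         + (1.0 - 2.0 * theta) * d ** (d * p / 2.0) * sM * (detJ / sM) ** p)
    dG_dJ = d * p * theta * sM * tr ** (d * p / 2.0 - 1.0) * (Minv @ J.T)
    dG_ddetJ = (p * (1.0 - 2.0 * theta) * d ** (d * p / 2.0)
                * np.linalg.det(M) ** ((1.0 - p) / 2.0) * detJ ** (p - 1.0))
    return G, dG_dJ, dG_ddetJ

# identity data in 2D: all three quantities evaluate to 8/3
G, dG_dJ, dG_ddetJ = G_and_derivatives(np.eye(2), 1.0, np.eye(2))
```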
The mesh equation should be modified for boundary mesh points.
For fixed points (such as corners),
we can set the velocity to be zero. For those on a boundary edge or surface, the mesh velocities
should be modified such that they do not move out of the domain.
The mesh equation (\ref{mmpde-2}) (with proper modifications for boundary mesh points) is integrated
from $t=t_n$ to $t_{n+1}$ starting with $\hat{\mathcal{T}}_{c,h}$ as the initial mesh. In our computation,
the Matlab ODE solver {\em ode15s} (an implicit scheme) is used to integrate (\ref{mmpde-2}).
Equation (\ref{mmpde-2}) is called the $\vect{\xi}$-formulation of the MMPDE moving mesh method
since it has been formulated in terms of the derivatives of $I_h$ with respect to $\vect{\xi}_j$ and
the velocities for the computational coordinates. We can obtain an $\vect{x}$-formulation by directly
differentiating $I_h$ with respect to $\vect{x}_j$ (with $\mathcal{T}_{c,h}$ being taken as $\hat{\mathcal{T}}_{c,h}$ and fixed)
and the new physical mesh $\mathcal{T}_h^{n+1}$ by directly integrating this formulation. The main disadvantage
of this formulation is that its formulas are more complicated than those of the $\vect{\xi}$-formulation
and the metric tensor, which is defined on $\mathcal{T}_h^{n}$, needs to be updated every time the physical
mesh is changed during the time integration of the mesh equation for $\mathcal{T}_h^{n+1}$.
It is analytically shown in \cite{HK2015} that the mesh governed by the $\vect{x}$-formulation will stay
nonsingular when it is nonsingular initially. Although such a theoretical result is not available for
the $\vect{\xi}$-formulation, our limited numerical experience shows that the $\vect{\xi}$-formulation
also produces nonsingular meshes.
\section{Numerical Results for PME}
\label{SEC:PME-numerics}
In this section we present numerical results obtained with the moving mesh finite element method described
in the previous section for a number of PME examples. They include the Barenblatt-Pattle
solution and the generalizations of several one-dimensional examples studied by
Zhang and Wu \cite{Zhang-2009}. These examples are selected to demonstrate the accuracy of our method
as well as its ability to deal with solutions having complex support and the waiting-time phenomenon.
For the cases having an exact solution, the error in the computed
solution will be measured in the (global) $L^2$ norm, i.e.,
\[
\|e_h\|_{L^2(t_0,T; L^2(\Omega))} = \left ( \int_{t_0}^{T} \int_{\Omega} e_h^2(\vect{x}, t) d \vect{x} d t \right )^{\frac{1}{2}}.
\]
We choose this norm because various error estimates are obtained in this norm, e.g., see (\ref{ebmeyer-1}).
(An exception is Fig.~\ref{fig:pme4-adaptivity-L1}, where the convergence history in the $L^1$ norm is plotted
for comparison purposes.)
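If the spatial $L^2$ norms of the error are available at the time instants (\ref{time-grid}), the space-time norm can be approximated, for instance, by the trapezoidal rule in time (a sketch only, not the quadrature actually used in our computations):

```python
import numpy as np

def spacetime_L2(e2, t):
    """sqrt of the trapezoidal-rule approximation of
    int_{t0}^{T} ||e_h(.,t)||_{L2(Omega)}^2 dt, where e2[k] holds the
    spatial L2 norm squared at time t[k]."""
    e2, t = np.asarray(e2, float), np.asarray(t, float)
    integral = 0.5 * np.sum((t[1:] - t[:-1]) * (e2[1:] + e2[:-1]))
    return np.sqrt(integral)

# constant-in-time error with spatial L2 norm 2 over [0, 1]
err = spacetime_L2([4.0, 4.0, 4.0], [0.0, 0.5, 1.0])
```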
In our computation, we use $\tau = 10^{-4}$ (for the mesh movement), the maximal allowed time step size
$\Delta t_{max} = 10^{-3}$ (for integrating PME), and the Hessian-based metric tensor (\ref{M-hessian}),
unless stated otherwise.
\begin{exam}[Barenblatt-Pattle solution]
\label{exam4.1}
We first consider the Barenblatt-Pattle solution (\ref{BP-soln}) with $r_0 = 0.5$ and $T = (t_0+0.1)/2$.
We use it to verify the accuracy of the numerical method and the effects of the mesh adaptivity and
the physical parameter $m$ on the computational accuracy.
Typical meshes and computed solutions at the final time obtained with the uniform mesh and two adaptive mesh
strategies are shown in Fig.~\ref{fig:mesh-soln-compare} and the convergence history is shown in
Fig.~\ref{fig:pme4-adaptivity} for the cases $m = 1$ and $2$. We can see that for both the uniform and
arclength-based adaptive meshes, the convergence order is
about 1.5 (i.e., $\mathcal{O}(N^{-\frac{1.5}{2}})$) for $m=1$ and
about 1 for $m=2$, with the arclength-based adaptive mesh producing slightly more
accurate solutions in both cases. We notice that the exact solution (\ref{BP-soln}) is
in $H^1(\Omega)$ for $m = 1$ and $W^{1,\frac{m}{m-1}-\epsilon}(\Omega)$ for $m> 1$, where $\epsilon$ is
a small positive number. The observed convergence order is higher than what we can expect from
the solution regularity. (For example, the theoretical estimate (\ref{ebmeyer-1})
shows a convergence order of $15/28$ for $m=1$ and $6/15$ for $m=2$.)
Even more surprisingly, Hessian-based adaptive meshes lead to a second-order
convergence rate for both the $m=1$ and $2$ cases.
We do not have a rigorous explanation for this but would like to point out two relevant observations.
The first is that the mesh is denser near the free boundary with the Hessian-based metric tensor than
with the arclength metric tensor (e.g., see Fig.~\ref{fig:mesh-soln-compare}).
The other is that the exact solution has higher regularity in its support than on the whole domain $\Omega$.
Indeed, it can be directly verified that
\[
\sqrt{\det(|H(u(\cdot, t))|)} \in L^{\frac{2m}{3m-2}-\epsilon}(\text{supp}(u(\cdot, t))),
\]
where $\epsilon$ is a small positive number and $H(u(\cdot, t))$ denotes the Hessian of $u$. It is known \cite{HS03} that for an $\mathbb{M}$-uniform
mesh associated with the metric tensor (\ref{M-hessian}), the linear interpolation error on a polygonal domain $D$
is bounded by
\[
\| u - \Pi_1 u \|_{L^2(D)} \leq C N^{-1} \| \sqrt{\det(|H(u)|)}\|_{L^{\frac{2}{3}}(D)} + h.o.t. ,
\]
where $h.o.t.$ stands for higher-order terms. From this we can expect a second-order convergence
if we consider linear interpolation only in the support of the solution with a Hessian-based adaptive mesh.
Although this analysis does not apply directly to our current situation with a larger domain than the support,
it may shed some light on why the scheme with Hessian-based adaptive meshes shows
a second-order convergence.
For comparison purposes, we plot the convergence history in the $L^1$ norm in Fig.~\ref{fig:pme4-adaptivity-L1}. It can be
seen that the $L^1$ norm of the error behaves similarly to the $L^2$ norm.
We have seen from Figs.~\ref{fig:pme4-adaptivity} and \ref{fig:pme4-adaptivity-L1} that mesh adaptation, especially the
Hessian-based one, can significantly improve the accuracy. But this
comes at additional cost. To examine whether mesh adaptation can also improve the efficiency,
we plot the solution error against the required CPU time (in seconds) in Fig.~\ref{fig:pme4-cpu-m2}
for the computation corresponding to Fig.~\ref{subfig:pme4-adaptivity-m2}.
We can see that a uniform mesh is more efficient when low accuracy is desired while mesh adaptation shows
advantages for high accuracy. This is consistent with our limited experience with adaptive
moving mesh computation (also see \cite[Page 17]{HR11}). The location of the break-even point depends on
specific problems and specific mesh adaptation strategies. For the current situation,
we have $(N, \|e_h\|_{L^2}) \approx (300, 3\times 10^{-4})$
for Hessian-based adaptation and $(5000, 10^{-4})$ for arclength-based adaptation.
We now examine the effects of the parameter $\tau$ on the accuracy. Recall that $\tau$ is used in
the moving mesh equation (\ref{mmpde-1}) to adjust the response time of the mesh movement to
the changes in the metric tensor.
The smaller $\tau$ is, the faster the response is. On the other hand, for smaller $\tau$,
the mesh equation (\ref{mmpde-2}) becomes stiffer and harder to integrate.
Fortunately, this only causes a slight increase in the cost when an implicit solver (Matlab solver {\em ode15s} in our computation)
is used for the mesh equation.
The convergence history is shown in Fig.~\ref{fig:pme4-tauDelta}
for Hessian-based adaptive meshes for $\tau = 10^{-2}, 10^{-3}$, and $10^{-4}$.
We can see that for both cases with $m=1$ and $m=2$, the convergence with $\tau = 10^{-2}$ and
$10^{-3}$ slows down when the mesh is becoming finer whereas that with $\tau=10^{-4}$ stays
second order at least for the same considered range of the number of mesh elements.
This indicates that the mesh concentration needs to closely follow the movement of the free boundary;
otherwise, we may lose the accuracy improvements gained with mesh adaptation.
Next, we consider the effects of the physical parameter $m$. As seen in Section~\ref{SEC:PME-theory},
the solution at the free boundary becomes steeper for larger $m$. It is not surprising that the PME also becomes
more difficult to solve numerically. Indeed, as we can see in Fig.~\ref{fig:pme4-m-vary:uniform},
the convergence rate for the uniform mesh decreases as $m$ increases.
This is qualitatively consistent with the theoretical analysis for various finite element
approximations for PME on quasi-uniform meshes
which also shows a decrease in convergence order with $m$; cf. (\ref{rose-1}), (\ref{nochetto-1}), (\ref{ebmeyer-1}),
and (\ref{wei-1}) and e.g., see \cite{Ebmeyer-1998,Ebmeyer-2008,Nochetto-1988,Rose1983}.
On the other hand, for Hessian-based adaptive meshes second-order convergence is observed
for $m=1$, $2$, and $3$, although the error is larger for larger $m$;
see Fig.~\ref{fig:pme4-m-vary:adaptive}.
The final mesh and computed solution obtained for $m=3$ with the Hessian-based
mesh adaptation are shown in Fig.~\ref{fig:pme4-m3}.
It is worth pointing out that there are small oscillations around the free boundary
in computed solutions; e.g., see Fig.~\ref{fig:oscillations-cross-section}.
This is due to the nature of the problem where the solution is steep and has a corner shape
near the free boundary and the loss of the maximum principle in the discretization.
A standard finite element discretization like the one we used here
typically leads to solutions with oscillations for this type of problem
(also see Zhang and Wu \cite{Zhang-2009} for the case with the one-dimensional PME).
The oscillations may be suppressed using, for instance, monotone schemes (e.g., see \cite{BaSo1991,NH2015,Oberman2006})
or structure-preserving schemes (e.g., see \cite{LSSV07,LiHu2013,Le05,YuSh2008,Zhang-2009,ZZS2013}).
These schemes and their combination with adaptive mesh movement for PME are worth future investigations.
\qed \end{exam}
\begin{figure}[ht]
\begin{center}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.29]{pme4_mesh_0.pdf}\caption{Uniform mesh}\end{subfigure}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.3]{pme4_soln_0.pdf}\caption{Uniform mesh}\end{subfigure}\\
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.29]{pme4_mesh_1.pdf}\caption{Arclength metric tensor}\end{subfigure}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.3]{pme4_soln_1.pdf}\caption{Arclength metric tensor}\end{subfigure}\\
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.29]{pme4_mesh_2.pdf}\caption{Hessian-based metric tensor}\end{subfigure}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.3]{pme4_soln_2.pdf}\caption{Hessian-based metric tensor}\end{subfigure}
\caption{Example~\ref{exam4.1} with $m=2$.
The meshes (closer view near (-0.35, -0.35)) and computed solutions at $t = T$ obtained with uniform and
arclength- and Hessian-based adaptive meshes ($N = 25600$).}
\label{fig:mesh-soln-compare}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\begin{subfigure}[b]{0.35\linewidth}
\includegraphics[scale=0.35]{pme4-adaptivity-m1.pdf}
\caption{$m = 1$}
\label{subfig:pme4-adaptivity-m1}
\end{subfigure}%
\hspace{3mm}
\begin{subfigure}[b]{0.35\linewidth}
\includegraphics[scale=0.35]{pme4-adaptivity-m2.pdf}
\caption{$m = 2$}
\label{subfig:pme4-adaptivity-m2}
\end{subfigure}%
\caption{Example~\ref{exam4.1}. Convergence history (in $L^2$ norm) for the three meshing strategies as $N$ (the number of the elements) increases.}
\label{fig:pme4-adaptivity}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\begin{subfigure}[b]{0.35\linewidth}
\includegraphics[scale=0.35]{L1pme4-adaptivity-m1.pdf}
\caption{$m = 1$}
\label{subfig:pme4-adaptivity-m1-L1}
\end{subfigure}%
\hspace{3mm}
\begin{subfigure}[b]{0.35\linewidth}
\includegraphics[scale=0.35]{L1pme4-adaptivity-m2.pdf}
\caption{$m = 2$}
\label{subfig:pme4-adaptivity-m2-L1}
\end{subfigure}%
\caption{Example~\ref{exam4.1}. Convergence history (in $L^1$ norm) for the three meshing strategies as $N$ increases.}
\label{fig:pme4-adaptivity-L1}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.35]{CPU-PME4-m2.pdf}
\caption{Example~\ref{exam4.1} with $m=2$. The $L^2$ norm of the error is plotted against the CPU time in seconds
for the computation corresponding to Fig.~\ref{subfig:pme4-adaptivity-m2}.}
\label{fig:pme4-cpu-m2}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\begin{subfigure}[b]{0.35\linewidth}
\includegraphics[scale=0.30]{pme4-num-tauDelta-m1.pdf}
\caption{$m = 1$}
\label{subfig:pme4-tauDelta-m1}
\end{subfigure}%
\begin{subfigure}[b]{0.35\linewidth}
\includegraphics[scale=0.29]{pme4-num-tauDelta-m2.pdf}
\caption{$m = 2$}
\label{subfig:pme4-tauDelta-m2}
\end{subfigure}%
\caption{Example~\ref{exam4.1}. Convergence history for different values of $\tau$.}
\label{fig:pme4-tauDelta}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\begin{subfigure}[b]{0.35\linewidth}
\includegraphics[scale=0.34]{pme4-num-m-uniform.pdf}
\caption{Uniform mesh}
\label{fig:pme4-m-vary:uniform}
\end{subfigure}%
\hspace{1mm}
\begin{subfigure}[b]{0.35\linewidth}
\includegraphics[scale=0.34]{pme4-num-m.pdf}
\caption{Hessian-based adaptive mesh}
\label{fig:pme4-m-vary:adaptive}
\end{subfigure}%
\caption{Example~\ref{exam4.1}. Convergence history for different values of $m$.}
\label{fig:pme4-m-vary}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\begin{subfigure}[b]{0.35\linewidth}
\includegraphics[scale=0.29]{pme4_mesh_m4.pdf}
\caption{Mesh}
\end{subfigure}%
\begin{subfigure}[b]{0.35\linewidth}
\includegraphics[scale=0.30]{pme4_soln_m4.pdf}
\caption{Computed solution}
\end{subfigure}%
\caption{Example~\ref{exam4.1}. The final mesh (close view near (-0.35, -0.35)) and computed solution for $m = 3 $
with the Hessian-based mesh adaptation ($N = 25600$).}
\label{fig:pme4-m3}
\end{center}
\end{figure}
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.25]{PME4-oscillation-m_2-uniform-N_102400.pdf}\caption{uniform mesh}\end{subfigure}%
\hspace{3mm}
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.25]{PME4-oscillation-m_2-arclength-N_102400.pdf}\caption{arclength-based mesh}\end{subfigure}%
\hspace{3mm}
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.25]{PME4-oscillation-m_2-hessian-N_102400.pdf}\caption{Hessian-based mesh}\end{subfigure}%
\caption{Example~\ref{exam4.1} with $m=2$. The cross section at $y = 0$ of the computed solutions at $t=T$ obtained
with a uniform mesh and arclength- and Hessian-based adaptive meshes ($N = 102,400$).}
\label{fig:oscillations-cross-section}
\end{figure}
\begin{exam}[Solution with complex support]
\label{exam4.2}
We now consider examples with complex solution support.
The first example models the movement and interaction of two columns of a substance that start with the same height. It has
\[
m = 5, \quad \Omega = (-5.5,5.5) \times (-5.5,5.5),
\]
\begin{equation}
u_0(x,y) =
\begin{cases}
1 , & \quad \text{for} \quad (x,y) \in (0.5,3) \times (0.5,3) \\
1 , & \quad \text{for} \quad (x,y) \in (-3,-0.5) \times (-3,-0.5) \\
0 , & \quad \text{otherwise} .
\end{cases}
\label{two-box-1}
\end{equation}
A typical adaptive mesh and the corresponding computed solution obtained
with the Hessian-based mesh adaptation are shown in Fig.~\ref{fig:Two-Box-I-soln-mesh}.
It can be seen that as time evolves, the support of the solution expands from the two boxes,
and then merges into one big region. The mesh adaptation strategy works nicely for the current
example, with the mesh points moving to concentrate around the free boundary.
Particularly, the mesh stays concentrated and nonsingular even during the merging process of
the two separated support regions.
Moreover, the numerical results show that the support of the solution becomes smoother as time evolves,
consistent with the theoretical prediction (e.g., see \cite{Shmarev2005}).
\qed \end{exam}
\begin{exam}[Solution with complex support]
\label{exam4.3}
The next example is similar to the previous one except that the initial solution has different heights in the two boxes,
\begin{equation}
u_0(x,y) =
\begin{cases}
1 , & \quad \text{for} \quad (x,y) \in (0.5,3) \times (0.5,3) \\
1.5 , & \quad \text{for} \quad (x,y) \in (-3,-0.5) \times (-3,-0.5) \\
0 , & \quad \text{otherwise} .
\end{cases}
\label{two-box-2}
\end{equation}
A typical adaptive mesh and the corresponding solution are shown in Fig.~\ref{fig:Two-Box-II-soln-mesh}.
Once again, the mesh is concentrated correctly around the free boundaries
as they evolve with time. Moreover, the region with larger initial solution values
expands faster than the region with smaller values. Overall, the support of the solution
for this example expands faster than that of the previous example. At $t=50$, the two boxes have already
merged into a single calabash-shaped region.
\qed \end{exam}
\begin{figure}[htp]
\centering
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-mesh-twoBox1_tf-0.pdf}\caption{$t = 0$}\end{subfigure}\hspace{5mm}%
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.32]{pme4-soln-twoBox1_tf-0.pdf}\caption{$t = 0$}\end{subfigure}\\%
\vspace{-1mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-mesh-twoBox1_tf-51.pdf}\caption{$t = 0.51$}\end{subfigure}\hspace{5mm}%
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.32]{pme4-soln-twoBox1_tf-51.pdf}\caption{$t = 0.51$}\end{subfigure}\\%
\vspace{-1mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-mesh-twoBox1_tf-10001.pdf}\caption{$t = 100.01$}\end{subfigure}\hspace{5mm}%
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.32]{pme4-soln-twoBox1_tf-10001.pdf}\caption{$t = 100.01$}\end{subfigure}\\%
\vspace{-1mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-mesh-twoBox1_tf-50000.pdf}\caption{$t = 500$}\end{subfigure}\hspace{5mm}%
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.32]{pme4-soln-twoBox1_tf-50000.pdf}\caption{$t = 500$}\end{subfigure}%
\caption{Example~\ref{exam4.2}. An adaptive mesh and the corresponding computed solution at various time instants ($N = 14400$).}
\label{fig:Two-Box-I-soln-mesh}
\end{figure}
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-mesh-twoBox2_tf-0.pdf}\caption{$t = 0$}\end{subfigure}\hspace{5mm}%
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.32]{pme4-soln-twoBox2_tf-0.pdf}\caption{$t = 0$}\end{subfigure}\\%
\vspace{-1mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-mesh-twoBox2_tf-50.pdf}\caption{$t = 0.5$}\end{subfigure}\hspace{5mm}%
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.32]{pme4-soln-twoBox2_tf-50.pdf}\caption{$t = 0.5$}\end{subfigure}\\%
\vspace{-1mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-mesh-twoBox2_tf-10000.pdf}\caption{$t = 100$}\end{subfigure}\hspace{5mm}%
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.32]{pme4-soln-twoBox2_tf-10000.pdf}\caption{$t = 100$}\end{subfigure}\\%
\vspace{-1mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-mesh-twoBox2_tf-50000.pdf}\caption{$t = 500$}\end{subfigure}\hspace{5mm}%
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.32]{pme4-soln-twoBox2_tf-50000.pdf}\caption{$t = 500$}\end{subfigure}%
\caption{Example~\ref{exam4.3}. An adaptive mesh and the corresponding computed solution at various time instants ($N = 14400$).}
\label{fig:Two-Box-II-soln-mesh}
\end{figure}
\begin{exam}[Waiting-time phenomenon]
\label{exam4.4}
From Section~\ref{SEC:PME-theory} we recall that PME exhibits the waiting-time phenomenon for certain types of
initial data. To see this, we consider
\[
m = 8, \quad \Omega = (-\pi, \pi) \times (-\pi, \pi) ,
\]
\begin{equation}
u_0(x,y) =
\begin{cases}
\cos(\sqrt{x^2 + y^2}) , & \quad \text{for} \quad \sqrt{x^2 + y^2} \leq \frac{\pi}{2} \\
0 , & \quad \text{otherwise} .
\end{cases}
\label{waiting-time-1}
\end{equation}
We have
\[
\nabla \cos^m(\sqrt{x^2+y^2}) = -\frac{m\, \cos^{m-1}(\sqrt{x^2+y^2})\, \sin(\sqrt{x^2+y^2})}{\sqrt{x^2+y^2}}
\begin{bmatrix} x \\ y \end{bmatrix} ,
\]
which vanishes at $\sqrt{x^2 + y^2} = \frac{\pi}{2}$. From Darcy's law (\ref{Darcy-law}),
we therefore do not expect the free boundary to move initially.
Figs.~\ref{fig:waiting-2D-cross-section} and \ref{fig:waiting-2D} show
the cross section at $y = 0$ of a computed solution and the solution itself.
The results show that the free boundary of the solution does not move until around $t = 10$.
Before this time, the solution steepens. Interestingly, the steepening does not occur
over the whole initial support at once. Instead, it first occurs on a smaller region inside the support,
and this region then expands until it fills the whole initial support. After that, the free boundary
waits until it becomes sufficiently steep and then moves.
\qed \end{exam}
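The vanishing of the initial pressure gradient at the free boundary, which underlies the waiting-time behaviour above, is easy to confirm numerically; the following snippet is only a sanity check of the formula for $\nabla \cos^m(\sqrt{x^2+y^2})$ and is not part of the computation itself.

```python
import numpy as np

m = 8  # exponent used in Example 4.4

def radial_pressure_grad(r):
    """|d/dr cos(r)^m| = m cos(r)^(m-1) sin(r), valid for 0 <= r <= pi/2."""
    return m * np.cos(r) ** (m - 1) * np.sin(r)

# The gradient is positive inside the support but vanishes at the free
# boundary r = pi/2, so by Darcy's law no initial motion is expected.
print(radial_pressure_grad(np.pi / 4) > 0.0)   # True
print(abs(radial_pressure_grad(np.pi / 2)))    # ~0
```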
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-waiting_tf-1.pdf}\caption{$t = 0$}\end{subfigure}%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-waiting_tf-6.pdf}\caption{$t = 0.05$}\end{subfigure}%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-waiting_tf-11.pdf}\caption{$t = 0.1$}\end{subfigure}%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-waiting_tf-26.pdf}\caption{$t = 0.25$}\end{subfigure}\\%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-waiting_tf-51.pdf}\caption{$t = 0.5$}\end{subfigure}%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-waiting_tf-111.pdf}\caption{$t = 1.1$}\end{subfigure}%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-waiting_tf-501.pdf}\caption{$t = 5$}\end{subfigure}%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-waiting_tf-1001.pdf}\caption{$t = 10$}\end{subfigure}\\%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-waiting_tf-1101.pdf}\caption{$t = 11$}\end{subfigure}%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-waiting_tf-1201.pdf}\caption{$t = 12$}\end{subfigure}%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-waiting_tf-1301.pdf}\caption{$t = 13$}\end{subfigure}%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-waiting_tf-1401.pdf}\caption{$t = 14$}\end{subfigure}\\%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-waiting_tf-1501.pdf}\caption{$t = 15$}\end{subfigure}%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-waiting_tf-1601.pdf}\caption{$t = 16$}\end{subfigure}%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-waiting_tf-1701.pdf}\caption{$t = 17$}\end{subfigure}%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-waiting_tf-1801.pdf}\caption{$t = 18$}\end{subfigure}\\%
\caption{Example~\ref{exam4.4}. The cross section at $y = 0$ of a computed solution is shown at various time instants ($N = 40000$).}
\label{fig:waiting-2D-cross-section}
\end{figure}
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-waiting-mesh_tf-10.pdf}\caption{$t = 0.1$}\end{subfigure}\hspace{5mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.32]{pme4-waiting-soln_tf-10.pdf}\caption{$t = 0.1$}\end{subfigure}\\%
\vspace{-1mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-waiting-mesh_tf-50.pdf}\caption{$t = 0.5$}\end{subfigure}\hspace{5mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.32]{pme4-waiting-soln_tf-50.pdf}\caption{$t = 0.5$}\end{subfigure}\\%
\vspace{-1mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-waiting-mesh_tf-500.pdf}\caption{$t = 5$}\end{subfigure}\hspace{5mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.32]{pme4-waiting-soln_tf-500.pdf}\caption{$t = 5$}\end{subfigure}\\%
\vspace{-1mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-waiting-mesh_tf-1801.pdf}\caption{$t = 18.01$}\end{subfigure}\hspace{5mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.32]{pme4-waiting-soln_tf-1801.pdf}\caption{$t = 18.01$}\end{subfigure}%
\caption{Example~\ref{exam4.4}. A computed solution is shown at various time instants ($N = 40000$).}
\label{fig:waiting-2D}
\end{figure}
\section{Numerical experiment for PME with variable exponents and absorption}
\label{SEC:PME-numerics-2}
To demonstrate the robustness of the moving mesh finite element method described in Section~\ref{SEC:mmfem},
we consider its application to PME with absorption and/or variable exponents,
\begin{equation}
u_t = \nabla \cdot (|u|^{\gamma} \nabla u) - \lambda u^{\sigma}, \quad \Omega \times (t_0, T]
\label{PME-2}
\end{equation}
subject to a homogeneous Dirichlet boundary condition and an initial condition.
Here, $\gamma = \gamma (\vect{x}, t)$ and $\sigma = \sigma (\vect{x}, t)$ are nonnegative bounded
functions and $\lambda$ is a constant.
PME in the form of (\ref{PME-2}) arises in continuum mechanics to model the motion of a barotropic gas
through a porous medium, where the pressure is considered to depend on the density and
temperature \cite{Antontsev2005}.
Like the standard PME, (\ref{PME-2}) with constant exponents (i.e., PME with absorption)
has been studied extensively; e.g., see \cite{Knerr1979,Shmarev2005}.
However, there are very few theoretical results for the case with variable exponents \cite{Antontsev2005,Lian2008}.
For example, there is no theoretical result on the movement of the free boundary (cf. (\ref{Darcy-law}))
although the solution to (\ref{PME-2}) is known to have the property of finite speed of propagation.
Neither is there much numerical work on this situation; see \cite{Duque2013,Duque2014,Duque2015}.
\begin{exam}[Constant exponents with absorption]
\label{exam5.1}
We first consider an example with an absorption term,
\begin{align*}
& \lambda = 1, \quad \gamma = 2,\quad \sigma = 0.1,
\quad \Omega = (-1.5 \pi, 1.5 \pi) \times (-1.5 \pi, 1.5 \pi) , \\
&
u_0 =
\begin{cases}
| \sin(\sqrt{x^2 + y^2}) | , & \quad \text{for} \quad \sqrt{x^2 + y^2} \in (\frac{\pi}{6}, \pi) \\
0.5 , & \quad \text{for} \quad \sqrt{x^2 + y^2} \in [0, \frac{\pi}{6}) \\
0 , & \quad \text{otherwise} .
\end{cases}
\end{align*}
This example is the two-dimensional generalization of a one-dimensional example in \cite{Zhang-2009} that shows
a splitting phenomenon in the middle after a finite time.
An adaptive mesh and the corresponding computed solution are shown in Fig.~\ref{fig:Splitting-soln-mesh}.
We can see that as time evolves, the solution becomes lower and the support
expands at the outer boundary. Meanwhile, the solution is being ``punched through'' with a hole
at the middle of the support. This additional feature is due to the absorption term.
\qed \end{exam}
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-splitting-mesh_tn-0.pdf}\caption{$t = 0$}\end{subfigure}\hspace{5mm}%
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.32]{pme4-splitting-soln_tf-0.pdf}\caption{$t = 0$}\end{subfigure}\\%
\vspace{-1mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-splitting-mesh_tn-40.pdf}\caption{$t = 0.40$}\end{subfigure}\hspace{5mm}%
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.32]{pme4-splitting-soln_tf-40.pdf}\caption{$t = 0.40$}\end{subfigure}\\%
\vspace{-1mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-splitting-mesh_tn-64.pdf}\caption{$t = 0.64$}\end{subfigure}\hspace{5mm}%
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.32]{pme4-splitting-soln_tf-64.pdf}\caption{$t = 0.64$}\end{subfigure}\\%
\vspace{-1mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-splitting-mesh_tn-80.pdf}\caption{$t = 0.80$}\end{subfigure}\hspace{5mm}%
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.32]{pme4-splitting-soln_tf-80.pdf}\caption{$t = 0.80$}\end{subfigure}%
\caption{Example~\ref{exam5.1}. An adaptive mesh and the corresponding solution at various time instants ($N = 40000$).}
\label{fig:Splitting-soln-mesh}
\end{figure}
\begin{exam}[Variable exponent without absorption]
\label{exam5.2}
For this example,
\begin{align*}
& \lambda = 0,\quad \gamma = \left (\frac{x}{2}\right )^2 + \left (\frac{y}{2}\right )^2 + 1.1,
\quad \Omega = (-2,2)\times (-2,2),
\\
& u_0 = \begin{cases} - \sin(2 \pi \sqrt{x^2 + y^2}),& \text{for} \quad 0.5 < \sqrt{x^2 + y^2} < 1 \\
0,& \text{otherwise}.
\end{cases}
\end{align*}
This example has been studied in \cite{Duque2013,Duque2015}. The support of the solution
has a hole in the middle which disappears in a finite time.
We take $t \in [0, 0.2]$ in the computation.
An adaptive mesh and the corresponding numerical solution are shown in Fig.~\ref{fig:VarExp1-example5.2}.
The result appears to have better resolution than that in \cite{Duque2013}, where a uniform mesh was used.
Moreover, our method passes through the closing of the interior hole without difficulty (cf. Fig.~\ref{fig:VarExp1-example5.2}),
whereas the method in \cite{Duque2015}, which explicitly traces the free boundary, encounters
the mesh singularity problem near the time when the hole closes.
\qed \end{exam}
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-mesh-varExp-1_tf-0.pdf}\caption{$t = 0$}\end{subfigure}\hspace{5mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.26]{pme4-varExp-1_tf-0.pdf}\caption{$t = 0$}\end{subfigure}\\%
\vspace{2mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-mesh-varExp-1_tf-2.pdf}\caption{$t = 0.02$}\end{subfigure}\hspace{5mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.26]{pme4-varExp-1_tf-2.pdf}\caption{$t = 0.02$}\end{subfigure}\\%
\vspace{2mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-mesh-varExp-1_tf-15.pdf}\caption{$t = 0.15$}\end{subfigure}\hspace{5mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.26]{pme4-varExp-1_tf-15.pdf}\caption{$t = 0.15$}\end{subfigure}\\%
\vspace{2mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-mesh-varExp-1_tf-70.pdf}\caption{$t = 0.70$}\end{subfigure}\hspace{5mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.26]{pme4-varExp-1_tf-70.pdf}\caption{$t = 0.70$}\end{subfigure}%
\caption{Example~\ref{exam5.2}. An adaptive mesh and the corresponding solution at various time instants ($N = 25600$).}
\label{fig:VarExp1-example5.2}
\end{figure}
\begin{exam}[Waiting-time for variable exponent without absorption]
\label{exam5.3}
For this example,
\begin{align*}
& \lambda = 0,\quad \gamma = 2-x-y,\quad \Omega = (-1.5,1.5)\times (-1.5,1.5),
\\
& u_0 = \begin{cases} 5 (0.25-x^2-y^2), & \text{for} \quad \sqrt{x^2 + y^2} < 0.5 \\
0,& \text{otherwise}.
\end{cases}
\end{align*}
This example has been studied in \cite{Duque2015}.
We take $t \in [0, 0.05]$. The free boundary of the solution does not move until $t \approx 0.02$.
A moving mesh and the corresponding computed solution are shown in Fig.~\ref{fig:VarExp2-example5.3}.
We can see that the variation of the exponent causes the free boundary to expand anisotropically and
the solution to have different steepness along the free boundary.
Moreover, a closer examination of the results confirms the waiting-time phenomenon: the interface
in the region $\{(x,y): x+y \leq 0 \}$ does not move until a finite time has elapsed. Fig.~\ref{fig:waiting-varExp2-2D-cross-section} shows the cross sections of the numerical solution along the plane $y = x$ at various instants of time.
In the figure, the red dashed line marks the position of the initial interface,
where the waiting-time phenomenon occurs.
\qed \end{exam}
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-mesh-varExp-2_tf-0.pdf}\caption{$t = 0$}\end{subfigure}\hspace{5mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.32]{pme4-varExp-2_tf-0.pdf}\caption{$t = 0$}\end{subfigure}\\
\vspace{-1mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-mesh-varExp-2_tf-6.pdf}\caption{$t = 0.06$}\end{subfigure}\hspace{5mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.32]{pme4-varExp-2_tf-6.pdf}\caption{$t = 0.06$}\end{subfigure}\\%
\vspace{-1mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-mesh-varExp-2_tf-28.pdf}\caption{$t = 0.28$}\end{subfigure}\hspace{5mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.32]{pme4-varExp-2_tf-28.pdf}\caption{$t = 0.28$}\end{subfigure}\\
\vspace{-1mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-mesh-varExp-2_tf-90.pdf}\caption{$t = 0.90$}\end{subfigure}\hspace{5mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.32]{pme4-varExp-2_tf-90.pdf}\caption{$t = 0.90$}\end{subfigure}%
\caption{Example~\ref{exam5.3}. An adaptive mesh and the corresponding solution at various time instants ($N = 25600$).}
\label{fig:VarExp2-example5.3}
\end{figure}
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-varExp2-waiting_tf-0.pdf}\caption{$t = 0$}\end{subfigure}%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-varExp2-waiting_tf-1.pdf}\caption{$t = 0.01$}\end{subfigure}%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-varExp2-waiting_tf-2.pdf}\caption{$t = 0.02$}\end{subfigure}%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-varExp2-waiting_tf-3.pdf}\caption{$t = 0.03$}\end{subfigure}\\%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-varExp2-waiting_tf-4.pdf}\caption{$t = 0.04$}\end{subfigure}%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-varExp2-waiting_tf-5.pdf}\caption{$t = 0.05$}\end{subfigure}%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-varExp2-waiting_tf-6.pdf}\caption{$t = 0.06$}\end{subfigure}%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-varExp2-waiting_tf-7.pdf}\caption{$t = 0.07$}\end{subfigure}\\%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-varExp2-waiting_tf-15.pdf}\caption{$t = 0.15$}\end{subfigure}%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-varExp2-waiting_tf-20.pdf}\caption{$t = 0.2$}\end{subfigure}%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-varExp2-waiting_tf-30.pdf}\caption{$t = 0.3$}\end{subfigure}%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-varExp2-waiting_tf-40.pdf}\caption{$t = 0.4$}\end{subfigure}\\%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-varExp2-waiting_tf-50.pdf}\caption{$t = 0.5$}\end{subfigure}%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-varExp2-waiting_tf-60.pdf}\caption{$t = 0.6$}\end{subfigure}%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-varExp2-waiting_tf-70.pdf}\caption{$t = 0.7$}\end{subfigure}%
\begin{subfigure}[b]{0.25\linewidth}\includegraphics[scale=0.20]{pme4-varExp2-waiting_tf-95.pdf}\caption{$t = 0.95$}\end{subfigure}\\%
\caption{Example~\ref{exam5.3}. The cross section along $y = x$ of a computed solution is shown at various time instants ($N = 40000$).}
\label{fig:waiting-varExp2-2D-cross-section}
\end{figure}
\begin{exam}[Variable exponents with absorption]
\label{exam5.4}
The last example, taken from \cite{Duque2014}, has time dependent exponents, i.e.,
\begin{align*}
& \lambda = 1,\quad \gamma = \frac{x^2+y^2}{t^2+1},\quad \sigma = x^2+y^2 + 1 + e^{-t},
\quad \Omega = (-1.5,1.5)\times (-1.5,1.5),
\\
& u_0 = \begin{cases} \cos(2 \pi (x^2+y^2)), & \text{for} \quad \sqrt{x^2 + y^2} < 0.5 \\
0,& \text{otherwise}.
\end{cases}
\end{align*}
We take $t \in [0, 0.1]$.
The numerical results are shown in Fig.~\ref{fig:VarExp3-example5.4-mesh}.
They are comparable with those in \cite{Duque2014}.
\qed \end{exam}
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-mesh-varExp-3_tf-0.pdf}\caption{$t = 0$}\end{subfigure}\hspace{5mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.32]{pme4-varExp-3_tf-0.pdf}\caption{$t = 0$}\end{subfigure}\\
\vspace{-1mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-mesh-varExp-3_tf-3.pdf}\caption{$t = 0.03$}\end{subfigure}\hspace{5mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.32]{pme4-varExp-3_tf-3.pdf}\caption{$t = 0.03$}\end{subfigure}\\%
\vspace{-1mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-mesh-varExp-3_tf-6.pdf}\caption{$t = 0.06$}\end{subfigure}\hspace{5mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.32]{pme4-varExp-3_tf-6.pdf}\caption{$t = 0.06$}\end{subfigure}\\
\vspace{-1mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.34]{pme4-mesh-varExp-3_tf-50.pdf}\caption{$t = 0.50$}\end{subfigure}\hspace{5mm}
\begin{subfigure}[b]{0.35\linewidth}\includegraphics[scale=0.32]{pme4-varExp-3_tf-50.pdf}\caption{$t = 0.50$}\end{subfigure}%
\caption{Example~\ref{exam5.4}. An adaptive mesh and the corresponding solution at various time instants ($N = 25600$).}
\label{fig:VarExp3-example5.4-mesh}
\end{figure}
\section{Conclusions and further remarks}
\label{SEC:conclusion}
In the previous sections we have studied an adaptive moving mesh finite element method for the numerical
solution of PME. The method is based on
the MMPDE moving mesh strategy and its new implementation and uses a linear finite element
method and the fifth-order Radau IIA scheme for the spatial and temporal discretization.
Numerical results show that the method is able to produce correct mesh concentration around
the free boundary and deal with problems having complex solution support.
Three types of mesh have been considered: uniform and arclength- and Hessian-based
adaptive meshes. The method shows a first-order convergence behavior as the mesh is refined
for uniform and arclength-based adaptive meshes and improves to a second-order convergence
when Hessian-based adaptive meshes are used. This indicates that mesh concentration around the free boundary
is important to the accuracy of the method. Moreover, the prompt response
of the mesh movement to the changes in the solution is also crucial, requiring that a small value
of the parameter $\tau$ in mesh movement (cf. (\ref{mmpde-1})) be used especially for the computation
with fine meshes.
We have also studied the application of the method to PME with variable exponents and absorption
for which there are very few theoretical results available. Numerical results demonstrate that the method
is robust and able to deal with PDEs having more complicated structures.
It should be pointed out that there are small oscillations around the free boundary
in computed solutions; see the discussion in Sect.~\ref{SEC:PME-numerics}.
How to suppress these oscillations using a monotone or structure-preserving scheme
(e.g., see \cite{BaSo1991,LSSV07,LiHu2013,NH2015,Oberman2006,Le05,YuSh2008,Zhang-2009,ZZS2013})
and how to combine such schemes with adaptive mesh movement for PME are worth future investigation.
\vspace{20pt}
{\bf Acknowledgment.}
Support from US Army Research Office under grant W911-NF-1510377 is gratefully acknowledged.
The authors would also like to thank the anonymous referees for their valuable comments in improving the quality of the paper.
\section{}
\label{append}
The lowest-order polarization insertion is,
\begin{widetext}
\begin{equation}
\label{polpro}
\Pi^{0}(q_0, \v{q}) = 4 \int \frac{d^{3} k}{(2 \pi)^{3}} \,
\theta(|\v{q}+\v{k}|-k_{F}) \theta(k_{F}-k) \,
\left( \frac{1}{q_0- t_{q+k}+t_{k} + \imath \eta} -
\frac{1}{q_0+ t_{q+k}-t_{k} - \imath \eta}
\right)
\end{equation}
\end{widetext}
where $t_p=p^{2}/(2 m)$, with $m$ being the nucleon mass. In this
equation $k_F$ is the Fermi momentum.
We present now our model for the residual interaction $V$,
\begin{widetext}
\begin{equation}
\label{inter}
V(q) = \frac{f_{\pi}^2} {\mu_{\pi}^2} \Gamma_{\pi}^2 (q)
(\,f \, + \, g' \,
\v{\sigma} \cdot \v{\sigma'} \,
\v{\tau} \cdot \v{\tau'} \; + \;
V_{\pi}(q) \v{\sigma} \cdot \v{\widehat{q}} \,
\v{\sigma'} \cdot \v{\widehat{q}} \,
\v{\tau} \cdot \v{\tau'}
\; + \; V_{\rho}(q)
(\v{\sigma} \times \v{\widehat{q}}) \cdot
(\v{\sigma'} \times \v{\widehat{q}}) \,
\v{\tau} \cdot \v{\tau'}),
\end{equation}
\end{widetext}
where the static limit has been taken. Therefore,
$V_{\pi}(q) = -q^2/(q^2 + \mu_{\pi}^2)$ and
$V_{\rho}(q) = -(\Gamma_{\rho}/
\Gamma_{\pi})^{2} \, C_{\rho} \,
q^2/(q^2 + \mu_{\rho}^2)$,
where $\mu_{\pi}$ ($\mu_{\rho}$)
is the pion (rho) rest mass, $f_{\pi}^2/4 \pi=0.081$
and $C_{\rho} = 2.18$.
The form factor of the $\pi NN$ ($\rho NN$) vertex
is $\Gamma_{\pi}$ ($\Gamma_{\rho}$),
where $\Gamma_{j}=
((\Lambda^{2}_{j}-m^{2}_{j})/(\Lambda^{2}_{j}+q^{2}))^{2}$.
Using the property,
\begin{equation}
\label{sigma}
\v{\sigma} \cdot \v{\sigma'} =
\v{\sigma} \cdot \v{\widehat{q}} \,
\v{\sigma'} \cdot \v{\widehat{q}} \, +
(\v{\sigma} \times \v{\widehat{q}}) \cdot
(\v{\sigma'} \times \v{\widehat{q}}),
\end{equation}
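Identity (\ref{sigma}) is a special case of the elementary vector identity $\v{a}\cdot\v{b} = (\v{a}\cdot\v{\widehat{q}})\,(\v{b}\cdot\v{\widehat{q}}) + (\v{a}\times\v{\widehat{q}})\cdot(\v{b}\times\v{\widehat{q}})$, valid for any unit vector $\v{\widehat{q}}$. A quick numerical check with randomly chosen vectors (purely illustrative) confirms it:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(size=3), rng.normal(size=3)   # arbitrary vectors
qhat = rng.normal(size=3)
qhat /= np.linalg.norm(qhat)                    # unit vector along q

longitudinal = (a @ qhat) * (b @ qhat)
transverse = np.cross(a, qhat) @ np.cross(b, qhat)
print(np.isclose(a @ b, longitudinal + transverse))  # True
```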
Eq.~(\ref{inter}) can be rewritten as,
\begin{widetext}
\begin{equation}
\label{inter2}
V(q) = \frac{f_{\pi}^2} {\mu_{\pi}^2} \Gamma_{\pi}^2 (q)
({\cal V}_C \; + \; {\cal V}_L \v{\sigma} \cdot \v{\widehat{q}} \,
\v{\sigma'} \cdot \v{\widehat{q}} \,
\v{\tau} \cdot \v{\tau'}
\; + \; {\cal V}_T
(\v{\sigma} \times \v{\widehat{q}}) \cdot
(\v{\sigma'} \times \v{\widehat{q}}) \,
\v{\tau} \cdot \v{\tau'}),
\end{equation}
\end{widetext}
with obvious definitions for ${\cal V}_{C, \, L, \, T}$.
As a final point for this Appendix, we split the solution of
Dyson's equation (Eq.~(\ref{dyson})) into its real and imaginary parts,
\begin{equation}
\label{dyson4}
\Pi^{Dys} = \frac{{\cal R}(1-V \, {\cal R})- V \, {\cal I}^{2}}
{(1-V \, {\cal R})^{2}+ (V \, {\cal I})^{2}} \; + \;
\frac{{\cal I}}
{(1-V \, {\cal R})^{2}+ (V \, {\cal I})^{2}} \, \imath .
\end{equation}
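The split in Eq.~(\ref{dyson4}) can be spot-checked numerically, assuming the Dyson solution has the resummed form $\Pi^{Dys} = \Pi^{0}/(1 - V\,\Pi^{0})$ with $\Pi^{0} = {\cal R} + \imath\,{\cal I}$; the numerical values below are arbitrary illustrative choices, not taken from the text.

```python
# Arbitrary illustrative values for R = Re(Pi0), I = Im(Pi0) and V.
R, I, V = 0.3, 0.2, 0.7
Pi0 = complex(R, I)
Pi_dys = Pi0 / (1.0 - V * Pi0)            # assumed resummed Dyson form

D = (1.0 - V * R) ** 2 + (V * I) ** 2     # common denominator of the split
re_split = (R * (1.0 - V * R) - V * I ** 2) / D
im_split = I / D

print(abs(Pi_dys.real - re_split) < 1e-14)  # True
print(abs(Pi_dys.imag - im_split) < 1e-14)  # True
```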
We now perform the same procedure as in Eq.~(\ref{ring7}), but for the
real part of the ring series,
\begin{eqnarray}
\label{ring7r}
Re(\Pi^{0}) & = & {\cal R} \nonumber \\
Re(\Pi^{0} V \Pi^{0}) & = & {\cal R}^{2} \; V \nonumber \\
Re(\Pi^{0}(V \, \Pi^{0})^{2}) & = & {\cal R}^{3} \; V^{2} \nonumber \\
& ... & \nonumber \\
Re(\Pi^{0}(V \, \Pi^{0})^{N}) & = & {\cal R}^{N+1} \; V^{N} \nonumber \\
& ... &
\end{eqnarray}
where the sum is,
\begin{equation}
\label{ring9}
Re (\Pi^{ring}) = \frac{{\cal R}}{1-V \, {\cal R}}.
\end{equation}
Finally, we can write,
\begin{equation}
\label{ring10}
\Pi^{ring} = \frac{{\cal R}(1-V \, {\cal R})}
{(1-V \, {\cal R})^{2}} \; + \;
\frac{{\cal I}}
{(1-V \, {\cal R})^{2}} \, \imath .
\end{equation}
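As a final illustrative cross-check (again with arbitrary values satisfying $|V {\cal R}| < 1$, chosen by us), partial sums of the real-part series (\ref{ring7r}) converge to the closed form (\ref{ring9}):

```python
# Check that sum_{n>=0} R^(n+1) V^n = R / (1 - V R) when |V R| < 1.
R, V = 0.3, 0.7                     # arbitrary values with |V * R| < 1
partial = sum(R ** (n + 1) * V ** n for n in range(200))
closed = R / (1.0 - V * R)
print(abs(partial - closed) < 1e-12)  # True
```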
\section{Introduction}
\label{s:intro}
Following the overly optimistic expectations of early research, after several decades of intense research on the one hand and relatively disappointing practical results on the other, face recognition technology has finally started to enjoy some success in the consumer market. An example can be found within the online photo sharing platform in Google Plus, which automatically recognizes individuals in photographs based on previous labelling by the user. It is revealing to observe that the recent success of face recognition has been in the realm of data and image retrieval \cite{Shan2010}, rather than security, in contrast to the most often stated source of practical motivation driving past research \cite{ChelWilsSiro1995}. This partial paradigm shift is only a recent phenomenon. In hindsight, it should not come as a surprise that success would first be achieved in the domain of retrieval, considering the relatively low importance of type-II errors in this context: the user is typically interested in only a few of the retrieved matches, and the consequences of false positives are benign, amounting more often to mere inconvenience.
Nevertheless, the appeal of face recognition as a biometric means that the interest in its security applications is not waning. Face recognition can be performed from a distance and without the knowledge of the person being recognized, and is more readily accepted by the general public in comparison with other biometrics which are regarded as more intrusive. In addition, the acquisition of data for face recognition can be performed readily and cheaply, using widely available devices. However, although the interest in security uses of face recognition continues, it has become increasingly the case that research has shifted from the use of face recognition as a sole biometric. Instead, the operational paradigm is that of face recognition as an element of a multi-biometric system \cite{MuraIwamMakiYagi2013}.
Of particular interest to us is the use of infrared imaging as a modality complementary to the `conventional', visible spectrum-based face recognition. Our focus is motivated by several observations. Firstly, in principle an IR image of a face can be acquired whenever a conventional image of a face can. This may not be the case with other biometrics (gait, height etc). Secondly, while certainly neither as cheap nor as widely available as conventional cameras, in recent years IR imagers have become far more viable for pervasive use. It is interesting to note the self-enforcing nature of this phenomenon: the initial technology advancement-driven drop in price has increased the use of IR cameras thereby making their production more profitable and in turn lowering their cost even further.
In a distal sense, the key challenges in IR-based face recognition remain to be pose invariance, robustness to physiological conditions affecting facial IR emissions, and occlusions due to facial hair and accessories (most notably eyeglasses). In a more proximal sense, as argued in a recent review \cite{GhiaAranBendMald2013b}, the main challenge is to formulate a framework which is capable of dealing with all of the aforementioned factors affecting IR `appearance' in a unified manner. While a large number of IR-based face recognition algorithms have been described in the literature, without exception they all constrain their attention to a few, usually only a single, extrinsic factor (e.g.\ facial expression or pose). None of them can cope well with a concurrent variability in several extrinsic factors. Yet, this is the challenge encountered in practice.
In this paper our aim is to describe what we believe to be the first IR-based face recognition method which is able to deal with all of the major practical challenges. Specifically, our method explicitly addresses (i) the variability in the user's physiological state, (ii) pose changes, (iii) facial expressions, (iv) partial occlusion due to prescription glasses, and (v) quasi-occlusion due to facial hair.
\section{Unified treatment of extrinsic variability}\label{s:main}
In this section we detail different elements of our system. We start by describing the Dual Dimension Active Appearance Model Ensemble framework and then demonstrate how it can be extended to perform model selection which allows for the handling of partial occlusion due to prescription glasses, and quasi-occlusion due to facial hair.
\subsection{Dual Dimension AAM Ensemble (DDAE)}
At the coarsest level, the Dual Dimension Active Appearance Model Ensemble algorithm comprises three distinct components. These are: (i) a method for fitting an active appearance model (AAM) \cite{GrosMattBake2006} particularly designed for fast convergence and reliable fitting in the IR spectrum, (ii) a method for selecting the most appropriate AAM from a trained ensemble, and (iii) the underlying extraction of person-specific discriminative information. The ultimate functions of these elements within the system as a whole are respectively pose normalization within a limited yaw range, invariance across the full range of yaw, and invariance to physiological changes of the user.
\vspace{-10pt}
\subsubsection{AAM fitting}
There are two crucial aspects of the design and deployment of AAMs that need to be considered in order to ensure their robustness. These are: (i) the model initialization procedure, and (ii) the subsequent iterative refinement of model parameters. In the context of the problem considered in this paper, the former is relatively simple. Given that we are using thermal imaging, background clutter is virtually non-existent, so applying simple thresholding to the input image allows the face to be localized and its spatial extent estimated. Reliable initialization of the AAM is then readily achieved by appropriately positioning its centroid and scale. A much greater challenge in the use of the AAM model for the normalization of pose and facial expression concerns the subsequent convergence -- the model is notoriously prone to convergence towards a local minimum, possibly far from the correct solution. This problem is even more pronounced when fitting is performed on face images acquired in the IR spectrum; unlike in the visible spectrum, in thermal IR human faces lack face-specific detail that is crucial in directing iterative optimization. This is the likely explanation for the absence of published work on the use of AAMs for faces in IR until the work of Ghiass \textit{et al.}~\cite{GhiaAranBendMald2013}. Their key idea was to perform fitting by learning and applying an AAM not on raw IR images themselves but rather on images automatically processed in a manner which emphasizes high-frequency detail. Specifically, the detail-enhanced image $I_e$ is computed by anisotropically diffusing the input image $I$:
{\small\begin{align}
\frac{\partial I}{\partial t} = \nabla \cdot \left( c(\| \nabla I\|)~\nabla I\right) = \nabla c \cdot \nabla I + c(\| \nabla I\|)~\Delta I,
\end{align}}
using a spatially varying and image gradient magnitude-dependent parameter $c(\| \nabla I\|) = \exp \left\{ -\|\nabla I\|/400 \right\}$, subtracting the diffused result $I_d$ from the original image, and applying histogram equalization: $I_e = \text{histeq}(I - I_d)$. Warped examples are shown in Fig.~\ref{f:aamExamp}.
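For concreteness, this preprocessing stage can be sketched in a few lines of NumPy. The conductance $c(\|\nabla I\|)=\exp(-\|\nabla I\|/400)$ follows the text; the iteration count, time step, wrap-around boundary handling and the particular histogram equalization routine are illustrative choices not specified in the paper:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, dt=0.2, k=400.0):
    """Perona-Malik diffusion with conductance exp(-|grad I| / k).
    n_iter and dt are illustrative; the text fixes only k = 400."""
    I = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours (periodic boundaries for brevity)
        dN = np.roll(I, -1, axis=0) - I
        dS = np.roll(I, 1, axis=0) - I
        dE = np.roll(I, -1, axis=1) - I
        dW = np.roll(I, 1, axis=1) - I
        # gradient-magnitude-dependent conductance in each direction
        cN, cS = np.exp(-np.abs(dN) / k), np.exp(-np.abs(dS) / k)
        cE, cW = np.exp(-np.abs(dE) / k), np.exp(-np.abs(dW) / k)
        I += dt * (cN * dN + cS * dS + cE * dE + cW * dW)
    return I

def histeq(img, n_bins=256):
    """Histogram equalization to [0, 1] via the empirical CDF."""
    hist, edges = np.histogram(img.ravel(), bins=n_bins)
    cdf = hist.cumsum() / hist.sum()
    return np.interp(img.ravel(), edges[:-1], cdf).reshape(img.shape)

def enhance_detail(img, **kw):
    """I_e = histeq(I - I_d): subtract the diffused image I_d to keep
    only high-frequency detail, then equalize the histogram."""
    return histeq(img - anisotropic_diffusion(img, **kw))
```

The diffused image $I_d$ retains the smooth, low-frequency component of the thermal image, so the residual $I - I_d$ isolates the fine detail on which the AAM is subsequently learned and fitted.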
\begin{SCfigure}
\vspace{-20pt}
\centering
\includegraphics[width=0.4\textwidth]{aam_examples.png}
\caption{Examples of automatically pre-processed thermal images warped to the canonical geometric frame using the described AAM fitting method. }
\label{f:aamExamp}
\vspace{-10pt}
\end{SCfigure}
\subsubsection{Person and pose-specific model selection}
Applied to faces, the active appearance model comprises a triangular mesh with each triangle describing a small surface patch. To use this model for the normalization of pose, it is implicitly assumed that the surface patches are approximately planar. While this approximation typically does not cause fitting problems when pose variation is small, this is not the case when fitting needs to be performed across a large range of poses (e.g.\ ranging from fully frontal to fully profile orientation relative to the camera's viewing direction). This is particularly pronounced when the applied AAM is generic (rather than person-specific) and needs to have sufficient power to describe the full scope of shape and appearance variability across faces. The DDAE method overcomes these problems by employing an ensemble of AAMs. Each AAM in the ensemble `specializes' to a particular region of IR face space. The space is effectively partitioned both by pose and by the amount of appearance variability, making each AAM specific to a particular range of poses and to individuals of relatively similar appearance (in the IR spectrum). In training, this is achieved by first dividing the training data corpus into pose-specific groups and then applying appearance clustering on the IR faces within each group. A single AAM in the ensemble is trained using a single cluster. On the other hand, when the system is queried with a novel image containing a face in an arbitrary and unknown pose, the appropriate AAM from the ensemble needs to be selected automatically. This can readily be done by fitting each AAM from the ensemble and then selecting the best AAM as the one achieving the highest likelihood after convergence.
\vspace{-15pt}
\subsubsection{Discriminative representation}
A major challenge to IR-based face recognition in practice emerges as a consequence of thermal IR appearance dependence on the physiological state of the user. Factors which affect the sympathetic or parasympathetic nervous system output (e.g.\ exercise or excitement) or peripheral blood flow (e.g.\ ambient temperature or drugs) can have a profound effect. This is illustrated in Fig.~\ref{f:phys}. With the notable exception of the work by Buddharaju \textit{et al.} \cite{BuddPavlTsiaBaza2007} there has been little research done on developing an IR-based person-specific representation invariant to these changes.
\begin{figure}[thp]
\centering
\subfigure[Usr~1, seq~1]{~~\includegraphics[width=0.14\textwidth]{user1_before.png}~~}
\subfigure[Usr~1, seq~2]{~~\includegraphics[width=0.14\textwidth]{user1_after.png}~~}~~~
\subfigure[Usr~2, seq~1]{~~\includegraphics[width=0.14\textwidth]{user2_before.png}~~}
\subfigure[Usr~2, seq~2]{~~\includegraphics[width=0.14\textwidth]{user2_after.png}~~}
\vspace{-5pt}
\caption{Thermal IR appearance before (`seq 1') and after (`seq 2') mild exercise. }
\label{f:phys}
\vspace{-10pt}
\end{figure}
We adopt the use of a representation which is dependent on the distribution of superficial blood vessels. Unlike the vessel network based representation of \cite{BuddPavlTsiaBaza2007}, our representation is not binary, i.e.\ a particular image pixel is not merely classified as a blood vessel or not. Rather, the extracted pseudo-probability map allows the level of confidence regarding the saliency of a particular pixel to be encoded. Additionally, unlike the representations extracted by various blood perfusion methods \cite{SealNasiBhatBasu2011}, our representation is based on temperature gradients rather than on absolute temperature. As such, it is less affected by physiological changes which influence the amount of blood flow, and is instead a function of the invariant distribution of underlying blood vessels.
Our representation is extracted using the so-called vesselness filter \cite{FranNiesVincVier1998}, originally developed for the extraction of tubular structures from 3D MRI data. Just as in 3D, in 2D this is achieved by considering the eigenvalues of the Hessian matrix $H(I,x,y,s)$ computed at the image locus $(x,y)$ and at the scale $s$:
{\small\begin{align}
H(I,x,y,s)=\begin{bmatrix}
L_x^2(x,y,s) & L_x L_y (x,y,s)\\
L_x L_y (x,y,s) & L_y^2(x,y,s) \\
\end{bmatrix}
\end{align}}
where $L_x^2(x,y,s)$, $L_y^2(x,y,s)$, and $L_x L_y (x,y,s)$ are the second derivatives of $L(x,y,s)$, resulting from smoothing the original image $I(x,y)$ with a Gaussian kernel, $L(x,y,s) = I(x,y) \ast G(s)$. If the two eigenvalues of $H(I,x,y,s)$ are $\lambda_1$ and $\lambda_2$ and if without loss of generality we take $|\lambda_1| \leq |\lambda_2|$, two statistics which characterize local appearance and which can be used to quantify the local vesselness of appearance are $\mathcal{R}_\mathcal{A} = |\lambda_1|/|\lambda_2|$ and $\mathcal{S} = \sqrt{\lambda_1^2 + \lambda_2^2}$. The former of these measures the degree of local `blobness', which should be low for tubular structures, while $\mathcal{S}$ rejects nearly uniform, uninformative image regions which are characterized by small eigenvalues of the Hessian. For a particular scale of image analysis $s$, the two measures, $\mathcal{R}_\mathcal{A}$ and $\mathcal{S}$, are unified into a single measure $\mathcal{V}(s) = e^{-\frac{\mathcal{R}_\mathcal{A}^2}{2\beta^2}} \times (1-e^{-\frac{\mathcal{S}^2}{2c^2}})$ for $\lambda_2 > 0$ and $\mathcal{V}(s) =0$ otherwise, where $\beta$ and $c$ are the parameters that control the sensitivity of the filter to $\mathcal{R}_\mathcal{A}$ and $\mathcal{S}$. The overall vesselness of a particular image locus can be computed as the maximal vesselness across scales, $\mathcal{V}_0 = \max_{s_{min} \leq s \leq s_{max}} \mathcal{V}(s)$.
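A minimal numerical sketch of the multiscale vesselness computation is given below. It follows the standard 2D formulation of Frangi \textit{et al.}, in which the blobness term enters as $\exp(-\mathcal{R}_\mathcal{A}^2/2\beta^2)$ and bright (warm) ridges are selected via the sign of $\lambda_2$; the values of $\beta$, $c$ and the scale range are illustrative choices, not values taken from the paper:

```python
import numpy as np

def gaussian_smooth(img, s):
    """Separable Gaussian smoothing, L = I * G(s)."""
    r = int(3 * s) + 1
    x = np.arange(-r, r + 1)
    g = np.exp(-x**2 / (2.0 * s**2))
    g /= g.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, g, mode='same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, g, mode='same'), 1, out)

def vesselness(img, scales=(1.0, 2.0), beta=0.5, c=15.0, bright=True):
    """Multiscale 2D vesselness map V_0 = max over s of V(s)."""
    V0 = np.zeros(np.asarray(img).shape, dtype=float)
    for s in scales:
        L = gaussian_smooth(np.asarray(img, dtype=float), s)
        # Hessian of the smoothed image via finite differences,
        # scale-normalized by s^2
        Ly, Lx = np.gradient(L)
        Lyy, _ = np.gradient(Ly)
        Lxy, Lxx = np.gradient(Lx)
        Hxx, Hxy, Hyy = s**2 * Lxx, s**2 * Lxy, s**2 * Lyy
        # eigenvalues of the 2x2 symmetric Hessian, ordered |l1| <= |l2|
        tmp = np.sqrt((Hxx - Hyy)**2 + 4.0 * Hxy**2)
        l1 = 0.5 * (Hxx + Hyy + tmp)
        l2 = 0.5 * (Hxx + Hyy - tmp)
        swap = np.abs(l1) > np.abs(l2)
        l1[swap], l2[swap] = l2[swap], l1[swap]
        Ra = np.abs(l1) / (np.abs(l2) + 1e-12)   # blobness, low on vessels
        S = np.sqrt(l1**2 + l2**2)               # second-order structure
        V = np.exp(-Ra**2 / (2 * beta**2)) * (1.0 - np.exp(-S**2 / (2 * c**2)))
        # warm (bright) vessels appear as ridges with l2 < 0
        V[(l2 > 0) if bright else (l2 < 0)] = 0.0
        V0 = np.maximum(V0, V)
    return V0
```

Applied to a thermal face image, the resulting map is a pseudo-probability of vessel presence rather than a hard binary segmentation.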
\subsection{Robustness to eye-wear and facial hair changes}
A significant challenge to face recognition algorithms, conventional and IR-based ones alike, is posed by occlusions \cite{GrosMattBake2006,Mart2002}. Of particular interest to us are specific commonly encountered occlusions -- these are occlusions due to prescription glasses \cite{HeoKongAbidAbid2004} (for practical purposes nearly entirely opaque to IR frequencies in the short, medium and long wave sub-bands) and facial hair. To prevent them having a dramatic effect on intra-personal matching scores, it is of paramount importance to detect and automatically exclude from comparison the affected regions of the face. The DDAE framework can be extended to achieve precisely this.
In particular, we extend the existing AAM ensemble with additional models which are now occlusion-specific too. In the training stage this can be achieved by including AAMs which correspond to the existing ones but which are geometrically truncated. Note that this means that the new AAMs do not need to be re-trained -- it is sufficient to adopt the already learnt appearance and shape modes and truncate them directly. In particular, we created two truncated models -- one to account for the growth of facial hair (beard and moustache) and one to account for the presence of eye-wear. These two can also be combined to produce a third truncated model for the handling of differential facial hair and eye-wear between two images hypothesized to belong to the same person. We created the two baseline truncated models manually; this is straightforward to do using high-level domain knowledge, as the nature of the specific occlusions in question constrains them to very specific parts of the face. An example of a fitted geometrically truncated AAM is shown in Fig.~\ref{f:mask}.\vspace{-12pt}
\begin{SCfigure}
\centering
\includegraphics[width=0.233\textwidth]{mask2.png}
\caption{A geometrically truncated AAM. The truncated portion (red) is not used for fitting -- here it is shown using the average shape after the fitting of the remainder of the mesh. }
\label{f:mask}
\end{SCfigure}
\vspace{-12pt}The application of the new ensemble in the classification of a novel image of a face, and in particular the process of model selection needed to achieve this, is somewhat more involved. In particular, the strategy whereby the highest likelihood model is selected is unsatisfactory as it favours smaller models (in our case the models which describe occluded faces). This is a well-known problem in the application of models which do not seek to describe jointly the entire image, i.e.\ both the foreground and the background, but the foreground only. Thus, we overcome this by not selecting the highest likelihood model but rather the model with the highest log-likelihood normalized by the model size \cite{AranCipo2006c}. Recall that the inverse compositional AAM fitting error is given by $e_{icaam} = \sum_{\text{All pixels } \mathbf{x}} \left[ I_e(\mathbf{W}(\mathbf{x};\mathbf{p})) - A_0(\mathbf{x}) \right]^2$, where $\mathbf{x}$ are pixel loci in the base mesh, $A_0$ the base (mean) appearance and $\mathbf{W}(\mathbf{x};\mathbf{p})$ the location of the pixel warped using the shape parameters $\mathbf{p}$. Noting that by design of the model the error contributions of different pixels are de-correlated, in our case the highest normalized log-likelihood model is the one with the lowest mean pixel error $\bar{e}_{icaam}$.
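A minimal sketch of this selection rule follows; the data structures are hypothetical, and each candidate model is assumed to supply its fitting residuals only over the pixels that the (possibly truncated) model actually covers:

```python
import numpy as np

def mean_pixel_error(residuals):
    """Total squared fitting error normalized by model size,
    i.e. the mean per-pixel error used for model selection."""
    r = np.asarray(residuals, dtype=float)
    return np.mean(r**2)

def select_model(fitted_models):
    """Pick the converged model with the lowest mean per-pixel error.
    Each entry is a (name, residuals) pair."""
    return min(fitted_models, key=lambda m: mean_pixel_error(m[1]))[0]
```

Note that the raw (unnormalized) error would systematically favour the smaller, truncated models, which is precisely the bias the normalization removes.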
\section{Evaluation and results}
In this section we describe the experiments we conducted with the aim of assessing the effectiveness of the proposed framework, and analyze the empirical results thus obtained. We start by summarizing the key features of our data set, follow that with a description of the evaluation methodology, and finally present and discuss our findings.
\vspace{-12pt}\subsubsection{Evaluation data}
In Sec.~\ref{s:intro} we argued that a major limitation of past research lies in the \textit{ad hoc} approach of addressing different extrinsic sources of IR appearance variability. Hand in hand with this goes the similar observation that at present there are few large, publicly available data sets that allow a systematic evaluation of IR-based face recognition algorithms and which contain a gradation of variability of all major extrinsic variables. We used the recently collected Laval University Thermal IR Face Motion Database \cite{GhiaAranBendMald2013a}. The database includes 200 people, aged 20 to 40, most of whom attended two data collection sessions separated by approximately 6 months. In each session a single thermal IR video sequence of the person was acquired using FLIR's Phoenix Indigo IR camera in the 2.5--5$\mu$m wavelength range. The duration of all video sequences in the database is 10~s and they were captured at 30~fps, thus resulting in 300 frames of $320 \times 240$ pixels per sequence. The imaged subjects were instructed to perform head motion that covers the yaw range from frontal ($0^\circ$) to approximately full profile ($\pm 90^\circ$) face orientation relative to the camera, without any special attention to the tempo of the motion or the time spent in each pose. The pose variability in the data is thus extreme. The subjects were also asked to display an arbitrary range of facial expressions. Lastly, a significant number of individuals were imaged in different physiological conditions in the two sessions. In the first session all individuals were imaged in a relatively inactive state, during the course of a sedentary working day and after a prolonged stay indoors; for the second session some users were asked to come straight from exposure to cold ($<0^\circ$C) outdoor temperatures, alcohol intake, and/or exercise; see Fig.~\ref{f:phys}.
In addition, individuals who wore prescription glasses in the first session were now asked to take them off. Several participants were also asked to allow for the growth of facial hair (beard, moustache) between the two sessions.
\vspace{-12pt}\subsubsection{Evaluation methodology}\label{ss:evalMethod}
We evaluated the proposed algorithm in a setting in which the algorithm is trained using only a single image in an arbitrary pose and facial expression. The querying of the algorithm using a novel face is also performed using a single image, possibly in a different pose and/or facial expression. Pose and facial expression changes, and partial occlusion due to eye-wear or facial hair, all present a major challenge to the current state of the art, and the consideration of all of the aforementioned in concurrence makes our evaluation protocol extremely challenging (indeed, more so than any attempted by previous work) and, importantly, representative of the conditions which are of interest in a wide variety of practical applications. In an attempt to perform a comprehensive comparative evaluation we contacted a number of authors of previously proposed approaches. However, none of them was able or willing to provide us with the source code or a working executable of their methods. Thus, herein we constrain ourselves to a comparison of the proposed method with the DDAE algorithm, which was itself compared with the thermal minutia points \cite{BuddPavlTsia2006} and vascular networks \cite{BuddPavl2009} methods in \cite{GhiaAranBendMald2013a}.
\subsection{Results}
The key results of our evaluation are summarized in Table~\ref{t:recognition} and Fig.~\ref{f:roc}. These show respectively the recognition rates achieved by our system in the different experiments we conducted, and the receiver operating characteristic curves corresponding to the recognition experiments in the presence of occlusion (in all subjects) due to facial hair or prescription glasses.
\begin{figure}[thp]
\centering
\vspace{-15pt}
\begin{tabular}{ccccc}
\rotatebox{90}{~~~Facial hair}~~~~&
\includegraphics[width=0.19\textwidth]{rocBeard.pdf} & \includegraphics[width=0.19\textwidth]{rocBeard_00_30.pdf} & \includegraphics[width=0.19\textwidth]{rocBeard_30_60.pdf} & \includegraphics[width=0.19\textwidth]{rocBeard_60_90.pdf}\\
& \scriptsize Overall & \scriptsize $\Delta$ 0--30$^\circ$ & \scriptsize $\Delta$ 30--60$^\circ$ & \scriptsize $\Delta$ 60--90$^\circ$
\end{tabular}
\begin{tabular}{ccccc}
\rotatebox{90}{~~~~~~Eye-wear}~~~~&
\includegraphics[width=0.19\textwidth]{rocGlasses.pdf} & \includegraphics[width=0.19\textwidth]{rocGlasses_00_30.pdf} & \includegraphics[width=0.19\textwidth]{rocGlasses_30_60.pdf} & \includegraphics[width=0.19\textwidth]{rocGlasses_60_90.pdf}\\
& \scriptsize Overall & \scriptsize $\Delta$ 0--30$^\circ$ & \scriptsize $\Delta$ 30--60$^\circ$ & \scriptsize $\Delta$ 60--90$^\circ$
\end{tabular}
\vspace{-5pt}
\caption{Performance in the presence of occlusion across different extents of pose changes. }
\label{f:roc}
\vspace{-30pt}
\end{figure}
\begin{SCtable}
\centering
\small
\vspace{-10pt}
\caption{Average recognition rate. In experiments with partial occlusion the occlusion was differential (e.g.\ if a training image was acquired with eye-wear on, the test image was acquired with it off, and \textit{vice versa}). }
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{l|ccc}
\noalign{\hrule height 4\arrayrulewidth}
& Unoccluded & Facial hair & Eye-wear\\
\hline
Rank 1 & 100\% & 87\% & 74\% \\
Rank 2 & 100\% & 94\% & 84\% \\
Rank 3 & 100\% & 95\% & 92\% \\
\hline
\noalign{\hrule height 4\arrayrulewidth}
\end{tabular}
\label{t:recognition}
\vspace{-20pt}
\end{SCtable}
\vspace{-10pt}To start with, consider the results in Table~\ref{t:recognition}. It is interesting to notice that performance deterioration is greater when occlusion is caused by eye-wear, rather than facial hair growth. This may be particularly surprising considering that in our data the area occluded by facial hair was larger in extent. One possible explanation of this finding may be rooted in different discriminative abilities of different facial components. Further work is needed to ascertain this; although the eye region appears to be highly informative in the visible spectrum \cite{CampFeriCesa2000} this may not be the case in the IR spectrum as suggested by evidence from some previous work \cite{AranHammCipo2010}. However, there are alternative possibilities which may explain or contribute to the explanation of the observed performance differential. For example, the choice to grow a beard (say) is not arbitrary but rather a conscious decision made with aesthetic considerations in mind. It is possible that individuals who choose to grow facial hair have more characteristic faces. It is also possible that the explanation is of a more practical nature -- perhaps the accuracy of AAM fitting is more affected by the absence of the information around the eyes, rather than those areas of the face typically covered by facial hair. We could not examine this quantitatively as it was prohibitively laborious to obtain the ground truth AAM parameters for the entire database. More research is certainly needed to establish the contribution of each of the aforementioned factors.
As Table~\ref{t:recognition} shows, both types of occlusions, those due to eye-wear and those due to facial hair, have a significant effect on recognition accuracy. However, what is interesting to observe is that already at rank-3 the correct recognition rate in all cases is at least 92\%. This exceeds the performance of the vascular networks based method which used thermal minutia points \cite{BuddPavlTsia2006} and is competitive with the iteratively registered networks approach \cite{BuddPavl2009}, even though the aforementioned algorithms employ several images per person for training and do not consider occlusions.
\section{Summary and conclusions}
We described what we believe to be the first attempt at addressing all major challenges in practical IR-based face recognition in a unified manner. In particular, our system explicitly handles changes in a person's pose, mild facial expression, physiological changes, partial occlusion due to eye-wear, and quasi-occlusion due to facial hair. Our future work will focus on the extension of the described framework to recognition from video and the utilization of partial information available in the regions of the face covered by facial hair.
\tiny
\bibliographystyle{unsrt}
\section{Introduction}
Ring galaxies show a pronounced ring structure surrounding an
apparently empty region in which an off-centred nucleus can often be seen. Such
objects are relatively rare and are mainly found in low density environments.
Their properties have been reviewed and compared to those of disc ga\-la\-xies with
resonant
rings (ringed galaxies) by Athanassoula \& Bosma (1985). Theys \& Spiegel
(1976) made the important remark that they have a companion which lies
preferentially near
the minor axis of the ring, and this
guided their simulations (Theys \&
Spiegel 1977) of a companion colliding with a disc galaxy, which indeed
resulted in
the formation of rings in the disc.
A clear picture of what happens during the collision is given by
Lynds \& Toomre (1976) and by Toomre (1978). As the intruder approaches the
disc, the extra inwards gravitational force it exerts on the disc particles
increases and causes
their orbits to contract. When the companion leaves there is a
strong rebound. As a result the orbits crowd together and a high amplitude,
transient density wave is formed, which propagates outwards. A second or third
rebound is possible, resulting in a second or third ring.
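The orbit-crowding picture can be illustrated with a purely kinematic sketch: stars on circular orbits receive an impulsive inward kick and subsequently perform epicyclic oscillations, and since the epicyclic frequency decreases outwards, the oscillations drift out of phase and the orbits crowd into an outward-moving ring. The flat rotation curve, the $1/r$ impulse profile and all numerical values below are illustrative choices, not taken from the simulations of this paper:

```python
import numpy as np

def ring_radii(r0, t, v0=1.0, dv_frac=0.1, r_ref=1.0):
    """Kinematic sketch of ring formation: stars on circular orbits at
    radii r0 receive an impulsive inward radial kick and then perform
    epicyclic oscillations about their original radii.  A flat rotation
    curve (kappa = sqrt(2) v0 / r) and a kick falling off as 1/r are
    illustrative choices."""
    r0 = np.asarray(r0, dtype=float)
    kappa = np.sqrt(2.0) * v0 / r0
    dv = -dv_frac * v0 * (r_ref / r0)      # inward impulse
    return r0 + (dv / kappa) * np.sin(kappa * t)

def ring_density(r0, t, bins=60):
    """Radial density relative to the unperturbed disc: since kappa
    decreases outwards, the oscillations dephase and orbits crowd into
    an outward-propagating overdense ring."""
    r = ring_radii(r0, t)
    edges = np.linspace(r0.min(), r0.max(), bins + 1)
    n0, _ = np.histogram(r0, bins=edges)
    n, _ = np.histogram(r, bins=edges)
    return edges, n / np.maximum(n0, 1)
```

Evaluating the density at successive times shows the overdense ring moving to larger radii, in line with the transient density wave described above.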
Self-consistent simulations following these precepts have been made
recently by Huang \& Stewart (1988) and Appleton \& James (1990), while
Hernquist \& Weil (1993) and
Horelou \& Combes (1993) include also gas in the
simulations (the latter,
however, with a rigid companion and halo). Mihos \&
Hernquist (1994) add star formation as well.
Relatively high mass companions, equal
to 1, 0.4 or 0.333 times the target galaxy mass, are considered by Appleton \&
James (1990), Huang \& Stewart (1988) and Horelou \& Combes (1993),
respectively. Hernquist \& Weil (1993) use a compa\-nion mass equal to
that of the target disc,
and about 0.25 of the total mass of the target,
for their fiducial simulation, but also present
simulations with the companion having double or a quarter of that mass.
In this paper we will use numerical simulations to investigate
further the formation of rings by infall of a small companion galaxy on a
nonbarred or barred target galaxy. Prompted by the observational results for
the Cartwheel galaxy (Davies \& Morton 1982, and discussion in Struck-Marcell
\& Higdon 1993 and Appleton \& Struck-Marcell 1996), we use
companions with comparatively low mass, i.e.,
0.02 to 0.2 times the target galaxy
mass. This ensures that the disc survives
the collision and that the ring can be considered as a perturbation,
comparable to that of resonant rings of ringed galaxies. In fact one of our
goals is to study the
morphology and kinematics of the rings
formed by impacts and to compare them
with the
corresponding ones for ringed galaxies. Moreover, we will consider
the evolution in both
nonbarred and barred target discs.
In section \ref{sec:simul} we present
the initial conditions and numerical simulations and give some detail
on the computing techniques used. The results for the
impacts on nonbarred discs are presented in section \ref{sec:nonbar}. In
section \ref{sec:barred} we discuss cases where the target disc is
barred. A discussion about ring {\it versus} ringed
galaxies is given in section \ref{sec:ringringed}
and a summary of our results in section \ref{sec:summary}.
\section{Numerical techniques }
\label{sec:simul}
\subsection{Initial conditions }
\indent
The simulations
we will discuss in this paper form part of a series which studies
the effects of a small companion on a disc galaxy. More information on
the setup of these simulations and their
evolution will be given elsewhere (Puerari \& Athanassoula,
in preparation). Here we will discuss only some characteristics
relevant to the problem of ring formation.
The effects on the vertical structure of
the disc will be left for a more global paper, encompassing a larger
number of simulations.
Our model galaxies are
composed of a halo, a disc and, in some cases, a bulge, and some relevant
parameters of these components are given in Table~\ref{tab:models}.
Four of
our models ({\bf m1}, {\bf m2}, {\bf m3} and {\bf s1}) have discs which were
initially setup with a truncated
Kuzmin/Toomre (hereafter KT) projected radial density profile \\
${\displaystyle \Sigma (r) = \frac {M_D}{2\pi b_D^2}(1+\frac {r^2}{b_D^2})^{-3/2}}$ \\
\noindent (Kuzmin 1956, Toomre 1963) and a $\rm{sech}^2$({$z$}/{$z_0$})
vertical distribution, while their
halo and bulge have a truncated Plummer (hereafter PL) profile \\
${\displaystyle \rho (r) = \frac {3M}{4\pi b^3} (1+\frac {r^2}{b^2})^{-5/2}}$ \\
\noindent where $M$ and $b$ are the mass and scalelength of the component (halo, bulge, or companion). The radial
velocity dispersions in the disc are chosen so that the $Q$ parameter
(Toomre 1964) \\
${\displaystyle Q = \frac {\sigma_R \kappa} {3.36 G \Sigma}}$ \\
\noindent is independent of radius and equal to 1, 1.1, 1.2 or 1.5.
In the above formula
$\sigma_R$ is the radial velocity dispersion,
$\kappa$ is the epicyclic frequency,
$G$ is the gravitational constant and
$\Sigma$ is the disc surface density.
We have calculated the
tangential velocity dispersion using the epicyclic approximation; the
vertical velocity dispersion follows from
the expression $\sigma^2_z=\pi G \Sigma z_0$ (Binney \&
Tremaine 1987). The first section of Table~\ref{tab:models} describes
the disc. Its first column gives the label of
the model, the second one the type of disc used (which for the four models we
are discussing is KT), the third gives the number of particles in the
disc, the fourth its mass, the fifth its scale length, the sixth its outer
cutoff radius, and the seventh its vertical scaleheight.
The second and third section of Table~\ref{tab:models} give the
same information,
but now for the halo and bulge respectively. For brevity we will often refer to model {\bf m1} as standard, and to models {\bf m2} and {\bf m3} as extended, or very extended.
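As an illustration of these choices, the following sketch evaluates $\Sigma(r)$, $\kappa(r)$ and the dispersions for the standard disc of model {\bf m1}, under the simplifying (and not strictly correct) assumption that the razor-thin Kuzmin disc alone sets the potential, i.e.\ ignoring the halo and bulge contributions to $\kappa$, with $G=1$ in simulation units:

```python
import numpy as np

G = 1.0  # gravitational constant in simulation units

def kt_sigma(r, M=0.4, b=1.0):
    """Kuzmin/Toomre projected surface density (defaults: model m1)."""
    return M / (2.0 * np.pi * b**2) * (1.0 + r**2 / b**2) ** -1.5

def kt_kappa(r, M=0.4, b=1.0):
    """Epicyclic frequency of a razor-thin Kuzmin disc in isolation,
    kappa^2 = G M (r^2 + 4 b^2) / (r^2 + b^2)^(5/2); the halo and
    bulge contributions are deliberately ignored in this sketch."""
    return np.sqrt(G * M * (r**2 + 4.0 * b**2) / (r**2 + b**2) ** 2.5)

def sigma_R(r, Q=1.0, M=0.4, b=1.0):
    """Radial dispersion giving a radially constant Toomre Q."""
    return 3.36 * G * kt_sigma(r, M, b) * Q / kt_kappa(r, M, b)

def sigma_z(r, z0=0.2, M=0.4, b=1.0):
    """Vertical dispersion from sigma_z^2 = pi G Sigma z0."""
    return np.sqrt(np.pi * G * kt_sigma(r, M, b) * z0)
```

In the actual simulations $\kappa$ must of course be evaluated in the total (disc plus halo plus bulge) field, as described in the setup procedure below.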
\begin{table}
\centering
\caption{Model parameters}
\label{tab:models}
\begin{tabular}{@{}llrllrl@{}}
&&&&&& \\
\multicolumn{2}{l}{\bf DISC} &&&&& \\
&&&&&& \\
Model & Type & $N_D$ & $M_D$ & $b_D$ & $R_{D}$ & $z_0$ \\
{\bf m1} & KT & 8000 & 0.4 & 1.0 & 5.0 & 0.2 \\
{\bf m2} & KT & 8000 & 0.4 & 2.0 & 8.0 & 0.2 \\
{\bf m3} & KT & 8000 & 0.4 & 5.0 & 8.0 & 0.2 \\
{\bf s1} & KT & 8000 & 0.3215 & 1.0 & 5.0 & 0.2 \\
{\bf mb} & KT & 14000 & 0.7 & 1.0 & 5.0 & 0.2 \\
& barred & 14000 & 0.7 & 0.84 & 5.0 & 0.21 \\
{\bf mh} & KT & 14000 & 0.7 & 1.0 & 5.0 & 0.2 \\
& hot & 14000 & 0.7 & 0.83 & 5.0 & 0.21 \\
&&&&&& \\
\multicolumn{2}{l}{\bf HALO} &&&&& \\
&&&&&& \\
Model & Type & $N_H$ & $M_H$ & $b_H$ & $R_{H}$ & \\
{\bf m1} & PL & 30000 & 1.5 & 5.0 & 10.0 & \\
{\bf m2} & PL & 30000 & 1.5 & 5.0 & 10.0 & \\
{\bf m3} & PL & 30000 & 1.5 & 5.0 & 10.0 & \\
{\bf s1} & PL & 30000 & 1.0733 & 5.0 & 10.0 & \\
{\bf mb} & PL & 26000 & 1.3 & 10.0 & 10.0 & \\
& PL & 26000 & 1.3 & --- & 11.8 & \\
{\bf mh} & PL & 26000 & 1.3 & 10.0 & 10.0 & \\
& PL & 26000 & 1.3 & --- & 11.8 & \\
&&&&&& \\
\multicolumn{2}{l}{\bf BULGE} &&&&& \\
&&&&&& \\
Model & Type & $N_B$ & $M_B$ & $b_B$ & $R_{B}$ & \\
{\bf m1} & PL & 2000 & 0.1 & 0.375 & 10.0 & \\
{\bf m2} & PL & 2000 & 0.1 & 0.375 & 10.0 & \\
{\bf m3} & PL & 2000 & 0.1 & 0.375 & 10.0 & \\
{\bf s1} & PL & 2000 & 0.0997 & 0.375 & 10.0 & \\
{\bf mb} & --- & --- & --- & --- & --- & \\
{\bf mh} & --- & --- & --- & --- & --- & \\
&&&&&& \\
\multicolumn{3}{l}{\bf COMPANION} &&&& \\
&&&&&& \\
Model & Type & $N_C$ & $M_C$ & $b_C$ & $R_{C}$ & \\
{\bf cs} & PL & 4000 & 0.1987 & 0.195 & 3.0 & \\
{\bf csd} & PL & 8000 & 0.3974 & 0.195 & 3.0 & \\
{\bf csh} & PL & 2000 & 0.09935 & 0.195 & 3.0 & \\
{\bf c1} & PL & 800 & 0.04 & 0.3333 & 3.0 & \\
\end{tabular}
\end{table}
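As a quick consistency check of the adopted masses, the ratio of each companion mass in Table~\ref{tab:models} to the total mass of the standard target (model {\bf m1}: disc + halo + bulge) can be evaluated directly; the ratios indeed span the 0.02--0.2 range quoted in the introduction:

```python
# Masses from Table 1: target galaxy m1 and the four companions
target_m1 = {'disc': 0.4, 'halo': 1.5, 'bulge': 0.1}
companions = {'cs': 0.1987, 'csd': 0.3974, 'csh': 0.09935, 'c1': 0.04}

m_target = sum(target_m1.values())   # total target mass: 2.0
ratios = {name: m / m_target for name, m in companions.items()}
```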
The setting up procedure follows approximately
that of Barnes (1988). We
first construct the halo and bulge components separately and then superpose
them and allow them to relax together. Then we choose positions for the disc
particles, using the prescribed density profiles, and tabulate the forces
from this distribution of
particles on a grid. This field is slowly imposed
on the spheroid particles, in
addition to their own field, to allow the halo-bulge system to relax in the
total field. After the disc field has reached its
final amplitude, we continue the evolution until the spheroid reaches
equilibrium. The remaining step
is to give the disc particles their initial velocities. For this we
use the chosen velocity
dispersions, take
into account asymmetric drift corrections, and use the information from
the relaxed halo-bulge component for the calculation of the initial tangential
velocity of the disc particles. At this point we are ready to start the
si\-mu\-lation by superposing the spheroid and disc distributions and adding
the companion.
Model {\bf mb} has no bulge and is bar unstable.
The setting up procedure was the same as described above, except that there
was no bulge component. In this case, however, before introducing the companion,
we let the galaxy
evolve, first enforcing axisymmetry, and then allowing full freedom of all the
particles, until the bar forms, and only then did we introduce the
companion. In other words, in this case the companion will be perturbing a
barred galaxy.
Thus model {\bf mb} is described in
Table~\ref{tab:models} by two
lines. The first one gives the parameters describing the individual
components before they interacted, as for the previously described models.
The second line gives information on the
target galaxy at the time the companion is introduced. As $b_D$
($z_0$) we give the radius (height) containing
the same percentage of the disc mass as $b_D$ ($z_0$) for the unperturbed
disc. No equivalent quantity can be given for $b_H$ since,
because of the adopted truncation, $b_H$ contains 100\% of the Plummer sphere
mass. As $R_{D}$ and $R_{H}$ we give
the radii containing 99\% of the disc and halo mass, respectively.
To obtain model {\bf mh} we took the barred galaxy described above and
redistributed the disc particles randomly in azimuth. For this reason both
lines describing this model in Table~\ref{tab:models}
contain identical information (except of course for the
descriptions ``hot" and ``barred") to those describing model {\bf mb}. This model
has high values of the radial velocity dispersion in the disc (a density
weighted mean $Q$ of the order of 1.8) and thus allows
us to perturb a hot disc in a bulgeless galaxy. We have also evolved this model
without a companion for a time interval $\Delta t=120$, which is considerably longer than
the time
we follow the target disc after the perturber has been put in, and find that
no noticeable
bar component formed, although Fourier analysis of the particle positions
shows a weak signal at later times
that could develop into a bar if the run had been evolved further.
However this signal is negligible during approximately 65 units of time
which allows us to consider simulations with this target as perturbations
of a nonbarred disc galaxy (in the two simulations where this model
was used, the companion hits the disc at $t=26$ and $t=8$ units of time).
The companion is also modelled as a Plummer sphere and the parameters
describing it, namely the number of particles $N_C$,
its mass $M_C$, its characteristic radius $b_C$ and its
cutoff radius $R_{C}$, are listed in the last section of
Table~\ref{tab:models}. For brevity we will often refer to companion
{\bf cs} as standard, while {\bf csd} and {\bf csh} will be referred
to as having double or half the standard mass respectively.
Runs are labelled S1 to S4, R1 to R12 and C55 to C99 and are listed in Table
\ref{tab:components}. In runs
R1 to R12 and C55 to C99 all particles in the target galaxy have the same
mass, so that the ratio of
number of particles equals the mass ratio between the different components.
In runs S1 to S4 the masses of disc, halo, and bulge particles are
$4\times 10^{-5}$, $3.57\times 10^{-5}$ and $5\times 10^{-5}$
respectively. Thus the mass ratio of the
components is \hbox{$D:H:B = 1:3.34:0.31$}. The mass of the particles
in companions {\bf cs}, {\bf csh} and {\bf csd} is $4.9657\times 10^{-5}$ and that of
the particles in companion {\bf c1} is $5\times 10^{-5}$.
\begin{table}
\centering
\caption{Components, type of the impact and code used in the runs }
\label{tab:components}
\begin{tabular}{@{}ccccc@{}}
RUN & MODEL & COMPANION & IMPACT & CODE \\
S1 & {\bf s1} & {\bf cs} & PS & treecode \\
S2 & {\bf s1} & {\bf cs} & PF & treecode \\
S3 & {\bf s1} & {\bf cs} & CS & treecode \\
S4 & {\bf s1} & {\bf cs} & CF & treecode \\
R1 & {\bf m1} & {\bf cs} & CS & treecode \\
R2 & {\bf m1} & {\bf cs} & CF & treecode \\
R3 & {\bf m2} & {\bf cs} & CS & treecode \\
R4 & {\bf m3} & {\bf cs} & CS & treecode \\
R5 & {\bf mh} & {\bf cs} & CS & treecode \\
R6 & {\bf mh} & {\bf cs} & CF & treecode \\
R7 & {\bf mb} & {\bf cs} & CS & treecode \\
R8 & {\bf mb} & {\bf cs} & CSC & treecode \\
R9 & {\bf mb} & {\bf cs} & PSB & treecode \\
R10 & {\bf m1} & {\bf c1} & CF & treecode \\
R11 & {\bf m1} & {\bf c1} & C01 & treecode \\
R12 & {\bf m1} & {\bf c1} & C02 & treecode \\
C55 & {\bf m1} & {\bf cs} & CS & grape \\
C57 & {\bf m1} & {\bf csd} & CS & grape \\
C58 & {\bf m1} & {\bf csh} & CS & grape \\
C59 & {\bf m1} & {\bf cs} & CVS & grape \\
C61 & {\bf m1} & {\bf csd} & CVS & grape \\
C63 & {\bf m1}(Q=1.1) & {\bf csd} & CS & grape \\
C64 & {\bf m1}(Q=1.2) & {\bf csd} & CS & grape \\
C85 & {\bf m2} & {\bf cs} & CS & grape \\
C86 & {\bf m2} & {\bf csd} & CS & grape \\
C88 & {\bf m3} & {\bf cs} & CS & grape \\
C89 & {\bf m3} & {\bf csd} & CS & grape \\
C91 & {\bf m1}(Q=1.5) & {\bf csd} & CS & grape \\
C99 & {\bf m1} & {\bf cs} & CS & grape \\
\end{tabular}
\end{table}
The units of length and time are 3 kpc and $10^7$ years respectively, and
$G=1$. Using this normalisation the units of mass, velocity and volume density
are \hbox{6 $\times$ 10$^{10}$ M$_{\odot}$},
\hbox{293 km/sec}, and
\hbox{2.22 M$_{\odot}$/pc$^3$}
respectively.
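As a quick consistency check on the adopted normalisation, the derived units can be recomputed from the unit length and time. The sketch below is ours, not part of the original text; the conversion constants are standard astronomical values.

```python
# Check of the derived computer units, given G = 1, unit length = 3 kpc
# and unit time = 1e7 yr (conversion constants are standard values).
KPC_KM  = 3.0857e16   # kilometres per kiloparsec
YR_S    = 3.156e7     # seconds per year
G_ASTRO = 4.301e-6    # Newton's constant in kpc (km/s)^2 / M_sun

L_kpc, T_yr = 3.0, 1e7

# Unit velocity = unit length / unit time, in km/s (~293 km/s).
v_unit = (L_kpc * KPC_KM) / (T_yr * YR_S)

# Unit mass follows from G = 1: M = v^2 L / G (~6e10 M_sun).
m_unit = v_unit ** 2 * L_kpc / G_ASTRO

# Unit volume density = M / L^3, in M_sun / pc^3 (~2.22 M_sun/pc^3).
rho_unit = m_unit / (L_kpc * 1.0e3) ** 3
```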
\subsection{Simulations }
\indent
Table~\ref{tab:components} lists all the runs discussed in this paper
(co\-lumn~1), together with the model
used for the target and the companion (columns~2
and 3) and, in
column~4, an indication of the initial conditions used for the
companion's orbit. In the last column we give the code used in
the force calculation.
The corresponding initial positions (columns~2 to 4) and
velocities (columns~5 to 7) of the companion are given in computer units in
Table~\ref{tab:initial}. The last two columns of this table give the ratio of
the amplitude of the initial velocity to the escape velocity, calculated
by assuming that all the mass of the target galaxy is concentrated at a point at its center.
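This point-mass estimate amounts to evaluating $V_{esc}=\sqrt{2GM/r}$ at the companion's initial distance. A minimal sketch (ours; the function name, mass and vectors are illustrative, not the runs' actual values):

```python
import math

def v_over_vesc(pos, vel, m_target, G=1.0):
    """|V|/V_esc with the whole target mass treated as a point at the origin,
    as done for the last two columns of the initial-conditions table.
    All arguments are in computer units; the values used are illustrative."""
    r = math.sqrt(sum(x * x for x in pos))
    v = math.sqrt(sum(u * u for u in vel))
    return v / math.sqrt(2.0 * G * m_target / r)
```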
The initial positions and velocities
have been obtained as follows: we first decide the impact point and
velocity for model {\bf s1} and companion {\bf cs}, then calculate the orbit
backwards in time for a time interval sufficiently long that the distance
between the center of the target galaxy and the companion is longer
than $1.4 R_{H}$, while keeping the
target galaxy particles fixed. The final positions and velocities are then
used as the initial positions and velocities of the companion for initial
conditions CS, CF, PS and PF, corresponding to simulations S1 to S4, R1 to
R7, C55 to C58, C63 to C99.
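The backward tracing of the companion's orbit can be sketched as follows. For brevity we stand in for the fixed target particles with a frozen Plummer potential and integrate with a leapfrog scheme run with a negative time step; the function names and all parameter values here are illustrative, not those of the actual runs.

```python
import math

def plummer_acc(pos, M=1.19, b=2.5, G=1.0):
    """Acceleration from a frozen Plummer sphere standing in for the fixed
    target particles (M and b are illustrative, not the paper's values)."""
    r2 = sum(x * x for x in pos) + b * b
    f = -G * M / r2 ** 1.5
    return [f * x for x in pos]

def integrate(pos, vel, dt, n_steps):
    """Leapfrog (kick-drift-kick); a negative dt traces the orbit backwards,
    as done to obtain the companion's starting point from the chosen impact."""
    pos, vel = list(pos), list(vel)
    for _ in range(n_steps):
        a = plummer_acc(pos)
        vel = [v + 0.5 * dt * ai for v, ai in zip(vel, a)]
        pos = [p + dt * v for p, v in zip(pos, vel)]
        a = plummer_acc(pos)
        vel = [v + 0.5 * dt * ai for v, ai in zip(vel, a)]
    return pos, vel

# Trace back from the desired impact point and velocity; the end point of the
# backward orbit serves as the companion's initial condition.
start, v_start = integrate([0.1, 0.0, 0.0], [1.0, 0.0, -1.5], -0.025, 400)
```

Leapfrog is time-reversible, so integrating forward again from the traced-back point recovers the chosen impact state.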
Of course when the full-blown simulation is run, the companion
never hits the disc exactly at the impact point and with the impact velocities
initially chosen, partly due to the evolution of the disc and partly
because of the diffe\-rences between the discs {\bf m1}, {\bf m2}, {\bf m3},
{\bf mh} and {\bf mb} on the one hand and disc {\bf s1} on the other.
Nevertheless, Table~\ref{tab:impact} shows that the differences are not
large
and that CF initial conditions correspond roughly to central and fast
encounters, CS to central and slow ones, PF to peripheral and fast, and PS to
peripheral and slow ones. The initial conditions for run R8 (CSC) are similar to
those of run R7, except for a slight spatial shift to make the impact position
of the
companion as near as possible to the density center of the disc. For run R9
the procedure is the same, except that we have used disc {\bf mb} when
calculating the orbit backwards to get a slow impact on the bar semimajor
axis. The initial conditions C01 and C02
are such that the companion starts at rest from a point 12 computer units
from the center of the target galaxy, either on the $z$ axis (C01), or
$30^\circ$ from it (C02). Finally CVS is a central and vertical passage
whose initial radius and velocity amplitude are the same as those of CS.
\begin{table*}
\centering
\begin{minipage}{140mm}
\caption{Initial conditions}
\label{tab:initial}
\begin{tabular}{@{}ccccccccc@{}}
& \multicolumn{3}{c|}{POSITION} & \multicolumn{3}{c|}{VELOCITY} &
\multicolumn{2}{c|}{$\vert V/V_{esc}\vert$} \\
IMPACT & $x$ & $y$ & $z$ & $V_x$ & $V_y$ & $V_z$ & S runs & R or C runs \\
PF & -10.02 & 10.04 & 20.30 & 1.04 & -1.05 & -2.31 & 7.88 & \\
PS & 0.0 & 10.0 & 10.0 & -0.04 & 0.0 & 0.0 & 0.08 & \\
CF & -8.98 & 9.39 & 18.08 & 1.03 & -1.08 & -2.07 & 6.96 & 5.67 \\
CS & -10.01 & 19.07 & 17.39 & 0.33 & -0.63 & -0.56 & 2.75 & 2.35 \\
CSC & -10.11 & 19.17 & 17.39 & 0.33 & -0.63 & -0.56 & & 2.38 \\
CVS & 0.0 & 0.0 & 27.68 & 0.0 & 0.0 & -0.90 & & 2.36 \\
PSB & -4.74 & 17.25 & 15.19 & 0.10 & -0.59 & -0.51 & & 1.91 \\
C01 & 0.0 & 0.0 & 12.0 & 0.0 & 0.0 & 0.0 & & 0.0 \\
C02 & 0.0 & 6.0 & 10.39 & 0.0 & 0.0 & 0.0 & & 0.0 \\
\end{tabular}
\end{minipage}
\end{table*}
\begin{table*}
\centering
\begin{minipage}{140mm}
\caption{Values at the impact}
\label{tab:impact}
\begin{tabular}{@{}ccccccccc@{}}
RUN &
RING &
$T_i$ &
$\theta_i$ &
$R_i$ &
$V_{C_{R_{i}}}$ &
$V_{C_{z_{i}}}$ &
$\vert V_{C_{R_{i}}}$/$\sigma_{R_{i}}\vert$ &
$\vert V_{C_{z_{i}}}$/$\sigma_{z_{i}}\vert$ \\
S1 & y & 42 & 45 & 0.50 & -0.5 & -0.8 & 2.3 & 4.5 \\
S2 & y & 8 & 56 & 1.20 & -1.6 & -2.4 & 11.6 & 21.8 \\
S3 & y & 26 & 43 & 0.04 & 1.1 & -1.2 & 3.8 & 4.6 \\
S4 & y & 9 & 54 & 0.03 & 1.6 & -2.5 & 5.8 & 9.2 \\
R1 & y & 26 & 43 & 0.07 & 1.2 & -1.1 & 5.9 & 4.4 \\
R2 & y & 8 & 54 & 0.05 & 1.6 & -2.3 & 10.7 & 9.6 \\
R3 & y & 26 & 42 & 0.05 & -1.1 & -1.1 & 10.0 & 7.2 \\
R4 & y & 27 & 42 & 0.17 & -1.0 & -1.1 & 3.3 & 5.4 \\
R5 & y & 26 & 42 & 0.07 & 1.1 & -1.1 & 3.6 & 4.6 \\
R6 & y & 8 & 55 & 0.07 & 1.5 & -2.4 & 4.5 & 8.5 \\
R7 & y & 26 & 45 & 0.21 & -1.1 & -1.0 & 3.3 & 4.5 \\
R8 & y & 26 & 43 & 0.24 & 0.3 & -1.2 & 0.9 & 4.7 \\
R9 & y & 25 & 42 & 0.91 & 1.0 & -1.3 & 3.5 & 6.5 \\
R10 & n & 4 & 89 & 0.02 & 0.0 & -3.2 & 0.0 & 13.2 \\
R11 & y? & 34 & 88 & 0.03 & 0.0 & -1.2 & 0.0 & 5.5 \\
R12 & y? & 34 & 58 & 0.06 & -0.6 & -1.2 & 2.8 & 4.5 \\
C55 & y & 26 & 42 & 0.04 & 1.1 & -1.1 & 5.9 & 4.4 \\
C57 & y & 26 & 42 & 0.06 & 1.1 & -1.3 & 3.7 & 5.3 \\
C58 & y & 26 & 41 & 0.08 & 1.1 & -1.0 & 7.2 & 4.1 \\
C59 & y & 26 & 89 & 0.02 & 0.0 & -1.6 & 0.0 & 6.9 \\
C61 & y & 26 & 89 & 0.01 & 0.0 & -1.8 & 0.0 & 7.3 \\
C63 & y & 26 & 41 & 0.07 & 1.1 & -1.3 & 4.6 & 5.1 \\
C64 & y & 26 & 42 & 0.07 & 1.1 & -1.3 & 4.5 & 5.1 \\
C85 & y & 26 & 41 & 0.01 & 1.1 & -1.0 & 6.3 & 4.6 \\
C86 & y & 26 & 41 & 0.08 & 1.1 & -1.1 & 4.7 & 5.5 \\
C88 & y & 26 & 42 & 0.09 & 1.0 & -0.9 & 5.2 & 6.2 \\
C89 & y & 26 & 42 & 0.07 & 1.0 & -1.0 & 5.9 & 5.3 \\
C91 & y & 26 & 45 & 0.06 & 1.0 & -1.2 & 4.0 & 7.3 \\
C99 & y & 26 & 41 & 0.08 & 1.1 & -1.1 & 5.8 & 4.6 \\
\end{tabular}
\end{minipage}
\end{table*}
Thus Table~\ref{tab:components} gives at a glance a rough description of each run. For example the line describing run C55 tells us that the target has a standard disc (second column), the companion is standard (third column) and the impact is central and slow (fourth column). Similarly for run C86 we see that the target has the more extended disc {\bf m2}, a double mass companion and a central and slow impact.
In Table~\ref{tab:impact} the first column gives the label of the run and
the second column indicates whether a ring was formed (y) or not (n).
Runs with rings which are not
clearly outlined are denoted with a y?. Columns~3 to 9 give the time
of impact measured from the time the companion was introduced,
the impact angle
(i.e., the angle between the plane of the disc and
the orbit of the companion at impact), the distance of the impact point from
the center of mass of the target galaxy, the radial and vertical
velocity
component of the companion at impact, and the ratios of the absolute values
of these components to the velocity dispersion along the same direction
and measured at the impact position. The angle is measured in degrees
and the other quantities in computer units. As impact time we list
the time step immediately preceding or
following the impact, for which the companion was nearest to the $z=0$ plane
and for which the information on the particle position and
velocities exists. Since all particle positions and velocities were stored every
two computer time units for runs C55 to C99 and every time unit for the
remaining runs (see next section), the precision for the impact time is $\pm$ 1
for runs C55 to C99 and $\pm$ 0.5 for all other runs. The intersection between the
$z=0$ plane and the line connecting the positions of the center of mass of
the companion at the time steps saved immediately before and after impact
defines the impact angle and the distance of the impact point from the
center. The velocities of the last four columns are measured
by interpolation at the time
when the companion crosses the disc.
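The construction of the impact time, angle and point from the two snapshots straddling the disc plane can be sketched as follows (a hypothetical helper; the runs' actual bookkeeping code is not given in the text):

```python
import math

def impact_from_snapshots(t1, p1, t2, p2, v1, v2):
    """Impact quantities from the companion's centre of mass at the two saved
    snapshots straddling the z = 0 plane (p1[2] > 0 > p2[2] assumed).
    Positions and velocities are linearly interpolated along p1 -> p2."""
    f = p1[2] / (p1[2] - p2[2])            # fraction of the segment at z = 0
    t_imp = t1 + f * (t2 - t1)
    point = [a + f * (b - a) for a, b in zip(p1, p2)]
    vel   = [a + f * (b - a) for a, b in zip(v1, v2)]
    v_norm = math.sqrt(sum(u * u for u in vel))
    theta = math.degrees(math.asin(abs(vel[2]) / v_norm))  # angle to the plane
    r_imp = math.hypot(point[0], point[1])  # in-plane distance from the centre
    return t_imp, theta, r_imp, vel
```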
\subsection{Numerical codes }
\indent
For all runs starting with an R or an S
the evolution has been
followed using a vectorised version (Hernquist 1990) of the
treecode (Barnes \& Hut 1986). For the R models we
use a tolerance parameter of $\theta = 0.7$, and for the S runs
$\theta=1$. In both cases we include quadrupole terms in the force
calculations to increase the accuracy.
The softening parameter has been taken to
be the same for all particles and equal to 0.066666 for
the R runs and 0.05 for the S
runs, and the time step
equal to 0.025 and 0.05 respectively, both measured
in computer units.
For the parameters of the R runs and 44000 particles one time step
took roughly 15 seconds, the precise value depending on the configuration.
These parameters also
ensure adequate energy conservation. If we
measure the energy conservation using the minimum and maximum values obtained
during the runs, we find an accuracy of order 0.1\%, the largest
deviations
often being found at times when the companion crosses the disc and
becomes very concentrated. If we consider only the initial and final values of
the energy, then we obtain an energy conservation better than 0.05\%.
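The two measures of energy conservation quoted above can be written compactly (a sketch; the energy values below are made up):

```python
def energy_conservation(E):
    """Relative energy drift from a run's sequence of total energies E:
    `span` uses the extreme values reached during the run (the ~0.1% figure),
    `ends` only the initial and final values (the < 0.05% figure)."""
    span = (max(E) - min(E)) / abs(E[0])
    ends = abs(E[-1] - E[0]) / abs(E[0])
    return span, ends
```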
All these simulations have been run for 80 computer units, or equivalently
$8\times
10^8$ years, from the time the companion is introduced. This time span is
sufficient, since rings formed by infalling small companions prove to be
short lived structures. The positions and velocities of all particles in the
simulation have been
saved every integer value of the time measured in computer
units.
\begin{figure*}
\vspace{13.00cm}
\special{psfile=fig01.ps vscale=100 hscale=100 hoffset=-60 voffset=-450}
\caption{Snapshots from simulation C59 showing the formation and evolution
of the first and second ring.
We plot the x, y positions of
all disc particles in computer units. Times are marked in the
upper left corner of each frame and are
measured from the time of impact, in computer units, i.e., with the
adopted normalisation, in units of $10^7$ years.}
\label{fig:c59xy}
\end{figure*}
After this project had been started, our group acquired a GRAPE system
(Okumura et al. 1992, Ebisuzaki et al. 1993, Okumura et al. 1993)
consisting of 5 GRAPE-3AF boards coupled via an Sbus/VMEbus converter to a
SPARC station 10/512. A detailed description of this system and of its
performance will be given elsewhere. Here we will only mention that, for
the direct summation code used in this paper,
properly parallelised over the 40 chips,
it gives a sustained speed slightly higher than 20
Gflops, so that one time step for 44000 particles takes only 3 seconds.
With this system we have calculated runs C55 to C99, using direct
summation. The time
step has been chosen equal to 0.015625 and the softening 0.0625, both
in computer units. Again we find that the highest errors
in the energy correspond
to the time when the companion crosses the disc. Nevertheless the
energy conservation, calculated from the initial and final values,
is better than or of the order of 0.2 \%.
All simulations made with GRAPE have been run for 100 computer time units, or
equivalently $10^9$ years, from the time the companion is
introduced. The positions and velocities of all particles in
the simulations have been saved every 2 computer time units, or
equivalently $2 \times 10^7$ years.
The advantages of these two approaches are numerous.
All components, including the halo and the companion,
are described fully self-consistently. They allow a good
resolution for all geometries and configurations. No extra forces need be
introduced to account for the change of reference frame due to the
companion,
as is the case for grid codes.
Finally the orbit of the companion comes naturally out of the simulation and
does not have to be assumed or pre-calculated.
Since we include in this paper simulations made with different codes
and computers --- treecode on CRAY and direct summation on GRAPE --- we
have run a few cases in both ways in order to make comparisons. Thus
run R1 has identical initial conditions with run C55. C99 is also
identical to C55 except that the initial positions of the disc particles
have been randomised in azimuthal angle. We compared the radius of the
ring as a function of angle and time and found very good agreement
between the three cases. Furthermore the differences between C55 and C99
are of the same order as those between R1 and C55, or R1 and C99.
\section{A nonbarred target galaxy }
\label{sec:nonbar}
\subsection{The fiducial case: A central vertical impact }
\indent
The rings in our simulations show very different morphologies, depending on
the mass and trajectory of the companion but also on the target disc. A good
point to start the descriptions and comparisons is run C59, which
is a central
vertical impact of the standard companion ({\bf cs}) on the standard disc
({\bf m1}), and
whose most relevant part is shown in Fig.~\ref{fig:c59xy}.
The first ring starts right after impact and remains, throughout its
evolution, rather symmetric and near-circular. It expands very rapidly at
first, but the expansion rate slows down somewhat with time. The second ring
is less circular and also expands more slowly.
One can get a clear impression of how the rings evolve with time by
using an $r = r (t)$ plot, as given in Fig.~\ref{fig:r=rtb}, where $r$ is the
radius of a particle in the disc component, measured from the center of mass
of the target galaxy and $t$ the time measured from the beginning of the
simulation. Particles in the halo and bulge are not plotted, and,
for cla\-ri\-ty, we only display 2000 of the disc particles at each time.
Strictly speaking
it is not possible to get $r=r(t)$ from numerical simulations since these
give information about the positions of the particles only at discrete times.
We have thus used the following artifact, quite similar to what is used for
greyscale plots. We saved the values of the positions of all particles in the
simulation very frequently, namely every unit of time
for S and R runs and 2 units of time for C ones (in computer units).
Plotting this information on the $(r,t)$ plane would give infinitely thin
strips of points, which
would not allow us to see any evolution. We have thus smeared out the
information by placing every particle at its radius and at a time chosen at
random between $t$ and $t+1$ or $t$ and $t+2$.
For treecode runs where the particle
coordinates are saved every unit of time and where the rings happen to
have re\-la\-tively low expansion velocities, this procedure gives a very smooth
figure, allowing us to follow closely the evolution of the ring (e.g.
Fig.~\ref{fig:r=rtc}).
For simulations with large ring expansion velocities and for which the
particle coordinates are saved every two time units, this procedure gives a
more step-wise appearance (e.g. lower panel of Fig.~\ref{fig:r=rtd}), yet
clear enough to allow us to draw conclusions.
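The smearing device described above can be sketched as follows (a hypothetical helper; the paper does not give its plotting code):

```python
import random

def smeared_rt_points(snapshots, dt_save, n_show=2000, seed=0):
    """Build (t, r) scatter points for the r = r(t) plot: each disc particle
    is placed at its radius and at a time drawn uniformly from
    [t, t + dt_save], smearing the discrete snapshots into a
    continuous-looking band.  `snapshots` maps save time -> list of
    (x, y, z) disc-particle positions."""
    rng = random.Random(seed)
    pts = []
    for t, positions in snapshots.items():
        # Display at most n_show particles per saved time, for clarity.
        sample = positions if len(positions) <= n_show else rng.sample(positions, n_show)
        for x, y, z in sample:
            r = (x * x + y * y + z * z) ** 0.5
            pts.append((t + rng.uniform(0.0, dt_save), r))
    return pts
```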
The upper panel of Fig.~\ref{fig:r=rtb} shows the data for
our fiducial simulation. One can clearly see the formation of the first two rings.
One
can also follow the expansion of the rings and see that it is faster for the
first than for the second ring, and also that it slows down with time.
Insight into the ring formation and evolution can be obtained by expressing
in equation form figure 4 of Lynds \& Toomre (1976) or figure 6 of Toomre (1978).
This is easily done by following the motion of particles initially in circular
orbits and perturbed by the infalling companion. Binney \& Tremaine (1987)
model the target galaxy and the companion as two identical isothermal spheres,
use the impulse approximation to calculate the effect of the companion, which
is assumed to have a constant velocity, and neglect collective effects in the ring\footnote{In a similar way, but taking into account that the companion is
a Plummer sphere, we obtain for the radius at time $t$ of a star initially at
radius $R_o$
${\displaystyle R(R_o,t)=R_o-\frac{2GM_c}{V\kappa}\,\frac{R_o}{R_o^2+b_c^2}\,\sin(\kappa t)}$
\noindent
where $V$ is the velocity of the companion, assumed constant, and where the
epicyclic frequency is measured at the radius $R_o$. This equation shows that
slower passages, or more massive companions create larger displacements of the
orbits. This predicts higher density in the rings for such passages, and, as we
will see in the next sections, this is indeed borne out by our simulations.}.
Using this equation to calculate the perturbed surface density we see that the
density enhancement moves outwards with a velocity which is constant if the
rotation curve is constant with radius. On the other hand if the rotation
curve decreases with radius, as in the examples of Lynds \& Toomre (1976)
and Toomre (1978), where the drop was Keplerian, this velocity decreases
with radius. Such a decrease can also be seen in our figures \ref{fig:r=rtb}, \ref{fig:r=rtc}, and \ref{fig:r=rtd}. The
reason in this case is not the form of the rotation curve, which, within
the disc, is nearly constant, but collective effects, which have been
neglected in the above analysis. This can be seen by calculating numerically
the group velocity (cf. Toomre 1969) from the $m=0$ Lin-Shu-Kalnajs dispersion
relation (Kalnajs 1965, Lin \& Shu 1966).
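The footnote formula can be evaluated directly. The sketch below assumes a flat rotation curve, for which $\kappa(R_o)=\sqrt{2}\,V_{rot}/R_o$; all parameter values are illustrative and not fitted to any of the runs.

```python
import math

def perturbed_radius(R0, t, Mc=0.04, bc=0.16, V=1.6, Vrot=0.7, G=1.0):
    """Radius at time t of a star initially on a circular orbit at R0, in the
    impulse approximation for a vertically crossing Plummer companion (the
    footnote formula).  kappa is evaluated at R0 assuming a flat rotation
    curve; all default parameter values are illustrative."""
    kappa = math.sqrt(2.0) * Vrot / R0
    return R0 - (2.0 * G * Mc / (V * kappa)) * R0 / (R0 ** 2 + bc ** 2) \
           * math.sin(kappa * t)
```

The displacement amplitude scales as $M_c/V$, so slower or more massive companions produce larger orbital displacements and hence stronger rings, as stated in the footnote.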
\subsection{Non-vertical passages }
\indent
Run C55 has the same target and companion as run C59
but the impact is oblique, at an angle of roughly $45^{\circ}$, rather than
vertical. Both the first and the second ring (the latter to a lesser extent)
are more eccentric and broader than those generated
by the vertical impact, and their axial ratio does not vary much with time.
The major axis of the ring rotates.
Fig.~\ref{fig:r=rtb}
compares the $r=r(t)$ plots of these two simulations
and shows that the differences are small.
\begin{figure}
\vspace{10.00cm}
\special{psfile=fig02.ps vscale=100 hscale=100 hoffset=-15 voffset=
-560}
\caption{Comparison of the $r=r(t)$ plots, as described in the text,
for a vertical (upper panel) and an oblique (lower panel)
impact. The label of the simulation is given in the upper
left corner of each panel. Both radii and times are in computer
units.}
\label{fig:r=rtb}
\end{figure}
\subsection{Fast and slow passages }
\indent
The evolutions of simulations with slow (R1) and fast (R2) passages do not differ much. Nevertheless the $r=r(t)$ plot (Fig.~\ref{fig:r=rtc}) shows that for the fast passage the rings are less intense, do not reach the edge of the disc and
expand more slowly. Also in the case of the faster passage there is less time
during which the two rings coexist, because the first ring fades away faster.
\begin{figure}
\vspace{10.00cm}
\special{psfile=fig03.ps vscale=100 hscale=100 hoffset=-15 voffset=-560}
\caption{Comparison of the $r=r(t)$ plots
for a slow (upper panel) and a fast passage (lower panel).
The label of the simulation is given in the upper
left corner of each panel. Both radii and times are in computer
units.}
\label{fig:r=rtc}
\end{figure}
\subsection{Varying the mass of the companion }
\indent
\begin{figure*}
\vspace{17.00cm}
\special{psfile=fig04.ps vscale=100 hscale=100 hoffset=-40 voffset=-345}
\caption{Comparison of three simulations with different perturber masses.
The left panels correspond to simulation C58, where the companion has half
the standard mass, the middle ones to simulation C55, where the companion has
the standard mass, and the right ones to C57 with a double mass perturber.
The arrows plotted in the right column show the direction of the
companion's velocity at impact and start off from the impact point.
Time increases from top to bottom and is marked in the top left corners of
the left panels, all panels in the same row corresponding to the same time.}
\label{fig:585557xy}
\end{figure*}
The effect of changing the perturber mass is quite sizeable, as can be seen
from Fig.~\ref{fig:585557xy}, where we compare runs with identical target
discs but companions of half the standard mass (C58), standard mass (C55)
and double the standard mass (C57).
\begin{figure}
\vspace{10.00cm}
\special{psfile=fig05.ps vscale=100 hscale=100 hoffset=-15 voffset=-580}
\caption{ Comparison of $r=r(t)$ plots for a companion of low mass
(upper panel) and of high mass (lower panel).
The label of the simulation is given in the upper
left corner of each panel. Both radii and times are in computer
units.}
\label{fig:r=rtd}
\end{figure}
The companion-to-disc mass ratios in these cases are 0.25, 0.5 and 1
respectively, and the companion-to-target mass ratios 0.05, 0.1 and 0.2 respectively.
Both the expansion velocity of the first ring
and its intensity increase considerably with the mass of the companion. This
is clearly seen from the three upper rows, which correspond to early times in
the evolution. The last row corresponds to a much later time. The first ring
has faded out in the low mass encounter, while being still present in the
other two, and is particularly clear in the high mass case. At this time the
second ring is still clear in all three simulations, although its structure is
rather different. Another important difference is the existence of spokes in
the high mass case, which will be discussed in section \ref{sec:spokes}, and the
sizeable expansion of the disc extent, which can be seen by comparing the top
and bottom rows for the three simulations. Rings are more symmetric and
nearer to circular in low mass encounters than in higher mass ones. There is
also a general trend that more massive companions create wider, more intense
rings. Similar results can be seen by comparing simulations C59 and C61, not
shown here.
\begin{figure*}
\vspace{17.00cm}
\special{psfile=fig06.ps vscale=100 hscale=100 hoffset=-60 voffset=-290}
\caption{Comparison of three simulations with the same standard mass
companion but
different target discs, described as
\mxum (C55), \mxdois (C85) and \mxtres (C88) in the
text. Description of the layout as for Fig.~2.}
\label{fig:558588xy}
\end{figure*}
\begin{figure*}
\vspace{17.00cm}
\special{psfile=fig07.ps vscale=100 hscale=100 hoffset=-100 voffset=-730}
\caption{As for Fig.~6 but for a
companion of double mass.}
\label{fig:578689xy}
\end{figure*}
\begin{figure}
\vspace{18.00cm}
\special{psfile=fig08.ps vscale=100 hscale=100 hoffset=-15 voffset=-360}
\caption{Comparison of $r=r(t)$ plots for three simulations with target discs of
different scalelengths, the smallest scalelength being at the top and the
biggest one at the bottom.}
\label{fig:r=rt2}
\end{figure}
Fig.~\ref{fig:r=rtd} compares the $r=r(t)$ plots for the impact of the low mass
companion (C58) to that of the high mass one (C57) and shows that there
is an important increase of the expansion velocity and of the ring
intensity with the companion mass.
If the companion has a very low mass, then rings either do not form, or are too weak to be clearly seen. This is the case for simulations R10, R11 and R12 for which the
companion has a mass of only 0.04 computer units, or $M_C/M_D=0.1$ and
$M_C/M_G=0.02$, where $M_C/M_D$ and $M_C/M_G$ are the ratios of the mass in
the companion to that in the disc and target galaxy respectively.
\subsection{Different target discs }
\indent
Striking differences can be seen when one varies the scalelength of the
target disc, i.e., when one considers in turn discs
{\bf m1}, {\bf m2} and {\bf m3}.
Our simulations allow us to construct three such sequences, two with the
standard companion mass, of which one is shown in Fig.~\ref{fig:558588xy},
and the other with double mass, shown in Fig.~\ref{fig:578689xy}. As
expected from swing amplifier theory (Toomre 1981), the more extended discs
show before the impact a spiral structure with higher arm multiplicity.
Then, when the ring forms, it is much more homogeneous in the
standard disc than in the extended ones, where it is very irregular and
patchy and, in the most extended disc case, ends up looking like a polygon
rather than an oval (cf.~the third row, third column panels of
Figs.~\ref{fig:558588xy} and \ref{fig:578689xy}). At yet later times the
first ring breaks up into many spiral arcs and segments, as can be seen in the bottom right panel of Fig.~\ref{fig:558588xy}.
Fig.~\ref{fig:r=rt2} compares the $r=r(t)$ results for the three
si\-mu\-lations shown in Fig.~\ref{fig:558588xy}. It gives the
impression that the more
intense rings are formed in the more compact disc, an impression
which will be confirmed by a measurement in section \ref{sec:ampl}.
\subsection{Asymmetries }
\indent
Since the impact in most of our simulations is not vertical we expect some
asymmetries in the resulting rings. Indeed we found that the first rings
obtained from double mass companions can look quite asymmetric,
particularly towards the end of their lifetime, but not the rings formed by
standard or half mass companions. This can be seen for example in the last
row of Fig.~\ref{fig:585557xy}, where it is clear that the asymmetry
increases with companion mass although the initial conditions for the three
simulations are the same. It can also be seen by comparing
Figs.~\ref{fig:558588xy} and \ref{fig:578689xy}.
The arrows plotted in the right hand column of Fig.~\ref{fig:585557xy} show
the direction of the companion's velocity at impact and
start off from the impact point. We note that the highest
density point stays near the impact position and the direction of
least expansion is not far from the direction to which the arrow is
pointing. At the last time plotted, the direction of the impact
roughly coincides with the major axis of the inner ring and inner oval.
These structures, however, rotate, so this will not hold for other
moments.
Asymmetries can, of course, also be obtained with peripheral impacts.
An example is given in Fig.~\ref{fig:s1xy}, which
shows four instants from the evolution of simulation S1. The ring that forms is
off-centered and non-circular, and the position of the ``nucleus'' with respect
to the ring changes with time. There is also one single spoke
or spiral
feature, which is diffuse and lasts until about $t=14$. At roughly $t=15$ the
companion crosses the disc for a second time and it is this passage that
might be responsible for the demise of the spoke. In the later stages of
the evolution the spoke, then the nucleus, and finally the ring itself, become
quite deformed.
For our second simulation with a peripheral impact (S2) the impact is too
peripheral and/or too fast
(cf.~Table~\ref{tab:impact}) to produce
a ring as clear as those of the previous case.
\subsection{Spokes }
\label{sec:spokes}
\indent
The Cartwheel galaxy (A0035--335) is one of the best studied examples of a
ring galaxy. Detailed high contrast photographs (e.g., Plate 1, Davies
\& Morton 1982; or Fig. 4, Toomre 1978)
show, in addition to the inner and outer ring, a set of short radial
arcs, or segments,
frequently referred to as spokes. These features have been reproduced
in a transient manner in some of our simulations
and we describe them in this section. They are located between the
inner and outer ring. As the outer ring expands, material which at
some point was part of it falls back towards the center of the galaxy.
Clumps and inhomogeneities in the ring are also present in this
material and trigger swing amplification (Toomre 1981). Thus spokes
can be understood in the same way as the sections of spiral arms
seen in flocculent galaxies.
\begin{figure*}
\vspace{12.00cm}
\special{psfile=fig09.ps vscale=100 hscale=100 hoffset=-60 voffset=-475}
\caption{Four characteristic times of the evolution of
run S1. Note the spoke around $t=11$.}
\label{fig:s1xy}
\end{figure*}
An example of such spokes in our simulations is clearly seen in
Fig.~\ref{fig:c61}, a frame from the vertical central slow
passage in run C61. They appear
between
the outer and inner ring, the presence of both being necessary. They
are trailing, nearly straight and last for a couple of $10^8$ years.
Run C57, which
has the same target and companion, but an impact at $42^{\circ}$, also
shows spokes, one quite massive and the others not so well defined.
A problem that needs to be further
considered is what the mass ratio between the companion and the target
should be for such features to appear. In our simulations encounters
with a companion of mass twice
the standard, i.e., a mass equal
to that of the disc or 20\% of the mass of the galaxy, produce spokes at
some stage of their evolution, whereas
this is sometimes but not always the case for encounters with standard
mass companions, as
can be seen from Figs.~\ref{fig:c59xy}, \ref{fig:585557xy},
\ref{fig:558588xy}, \ref{fig:578689xy} and \ref{fig:s1xy}. Given the
observational uncertainties in the
calculation of the mass of the Cartwheel and its companions (Appleton \&
Struck-Marcell 1996), these numbers tend to suggest that one of the companions,
and in particular G2 (following the notation of Higdon 1996), could be
responsible for the structures in the Cartwheel (cf. also Struck-Marcell
\& Higdon 1993). A confirmation, however, would necessitate more elaborate
modelling and, in particular, a target rotation curve resembling that of
the Cartwheel.
Fig.~\ref{fig:5791} compares two runs with identical initial conditions, except
for
the initial $Q$ value, which was 1 for run C57 and 1.5 for run C91. We note
that velocity dispersion does not have a large effect on spokes, except perhaps
for a slight lowering of their intensity with increasing $Q$.
To substantiate this impression we have measured at several time steps
the density in the gap between the two rings along a ``ring''
of the same shape as the outer ring but of smaller size, and find
that this trend is indeed present.
Fig.~\ref{fig:578689xy} compares three simulations with different target discs.
Spokes form earlier and fade away faster in disc {\bf m1} than in
{\bf m3}, {\bf m2} being
intermediate. As has already been noted for rings, spokes are smoother for the
{\bf m1} case than for the {\bf m3}
one, where they contain multiple large clumps.
\begin{figure}
\vspace{7.00cm}
\special{psfile=fig10.ps vscale=100 hscale=100 hoffset=-40 voffset=-600}
\caption{One example of spokes from simulation C61.}
\label{fig:c61}
\end{figure}
\begin{figure}
\vspace{6.00cm}
\special{psfile=fig11.ps vscale=100 hscale=100 hoffset=-55 voffset=-650}
\caption{Comparison of two simulations with identical initial conditions,
except for the initial $Q$ value, which is 1 for run C57
and 1.5 for run C91.}
\label{fig:5791}
\end{figure}
Simulation S1 shows one big, massive spoke, which has a spiral rather than a
straight shape. It is sufficiently well defined that we can
trace back, all through the simulation, the positions of
the particles that constitute it. At the beginning of the simulation
they form a very tightly wound one-armed leading spiral, which, however, does
not stand out when all particles in the disc are plotted. It unwinds with
time and forms the spoke. The same holds for other well defined spokes in
other simulations that we tried. As can be seen from
Fig.~\ref{fig:s1xy}, at later
times the particles in the spoke of run S1 do not form a tightly wound trailing
one-armed spiral, but rather they spread out until they fill an area like a
quadrant, whose location with respect to the disc center rotates. At even
later times they fall more uniformly towards the center of the disc.
However, the later stages of this evolution are undoubtedly influenced
by the second passage of the companion and the subsequent merging.
\subsection{Amplitude, width and eccentricity of the rings }
\label{sec:ampl}
\indent
Rings are neither always circular nor always centered on the center of the
galaxy, so, in order to find the position of the ring as a function of time, we
recenter the disc particles to the position of maximum disc density
and split the galaxy into 12 angular sectors, each of $30^\circ$.
We then plot radial density profiles separately for each
sector and each time. The maxima of these profiles give us the position(s) of
the ring(s) at that time and angle. The width of the ring has been
determined by
defining as its edge on either side of the maximum either a local minimum
(mainly for the inner edge of the ring), or the radius at which the density
drops to half the value at maximum (mainly for the outer edge). The location
and width of the ring have been
determined for all angular sectors and for all time
steps for which the program could find a clear maximum on the corresponding
radial profile. These data allow us to calculate a number of interesting
quantities.
We have counted the number of particles within the region of the ring
and used the ratio of this number to the number of particles in the same
region for the unperturbed galaxy as an estimate of the amplitude at that
position. Fig.~\ref{fig:ampliring}
gives typical examples of the evolution of this amplitude as a function of
time. It shows clearly that slower passages produce first rings of higher
amplitude than fast ones (upper row), and a similar effect is found when
comparing impacts by high mass companions with impacts by low mass ones
(bottom row), as expected. The very large amplitudes found for the
simulation with a companion of double the standard mass (C57) are due to a
large extent to the substantial expansion caused by the massive perturber.
Thus, with our definition of the
ring amplitude, we are comparing the ring region with a region in the outer
parts of the unperturbed models where the density is quite low.
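The amplitude estimate amounts to a simple count ratio. A minimal sketch follows, simplified to a full annulus where the text uses one region per angular sector; the function name is ours:

```python
import numpy as np

def ring_amplitude(r_perturbed, r_unperturbed, r_in, r_out):
    """Amplitude estimate of the text: number of particles inside the
    ring region of the perturbed disc divided by the number found in
    the same region of the unperturbed model.  Simplified to a full
    annulus; the text uses one such ratio per angular sector."""
    n_ring = np.count_nonzero((r_perturbed >= r_in) & (r_perturbed <= r_out))
    n_ref = np.count_nonzero((r_unperturbed >= r_in) & (r_unperturbed <= r_out))
    return n_ring / n_ref
```

Note that, as the text cautions, this ratio becomes very large when the ring lies in outer regions where the unperturbed density is low.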
\begin{figure}
\vspace{7.00cm}
\special{psfile=fig12.ps angle=-90 vscale=36 hscale=36
hoffset=-10 voffset=200}
\caption{Amplitude of the ring as a function of time,
measured in computer units, calculated as
described in the text. The left panels refer to the
first rings and the right ones to the second rings.
The upper panels compare a slow passage (R1, solid line) and a fast one
(R2, dashed line). The lower panels compare the effects of a low mass
companion (C58, solid line), of an intermediate mass one (C55, dashed line)
and of a high mass one (C57, dot-dashed line).}
\label{fig:ampliring}
\end{figure}
A similar comparison (not plotted here), now for a vertical (C59) and a
non-vertical (C55) impact,
suggests that the impact angle has little influence on the amplitude.
Finally
comparing rings in target discs of different scalelength we get an indication
that both the first and the second ring forming in the most
extended disc have a smaller amplitude than those forming in the less extended
one, except for the first ring in the simulations
with single mass perturbers, where it does not seem to matter.
In general the first
ring has a higher amplitude than the second one. Also the amplitude
of the second ring increases with time (except for the fast encounter),
while that of the first ring stays
roughly constant in single mass simulations and increases considerably with
time for impacts with heavy companions, for the reason described in the
previous paragraph.
With the definition given above we find for the outer ring a width of
the order of \hbox{2~--~3~kpc}, which does not vary noticeably with time.
This seems to be the opposite of what one
sees on $r=r(t)$ plots, where one gets the
impression that the rings get thicker
with time. The reason is that such plots
include all azimuthal angles and, as will be
discussed below, the expansion velocity depends somewhat on the azimuth,
which gives the impression of a thickening. In general more extended discs
have wider rings, while vertical passages create narrower ones than oblique
passages. Similarly wider rings are generally created by slower impacts, or
more massive companions.
\subsection{Density waves vs. material features }
\indent
As predicted theoretically (e.g., Toomre 1978, Lynds \& Toomre 1976) rings
formed by infall of a small companion galaxy are density waves and not material
features. We have been able to verify this by measuring what percentage of the
particles constituting the ring at the time of its formation are still part
of it at a given time later. Thus we find that, e.g., for the first ring of
run R1 only of the order of 20\% of the particles initially in the ring are
still in it 6 time units later, and hardly any 10 time units later. Similar
numbers can be found
for most other runs although larger percentages can be found for the hot and
for the extended discs, or for impacts with high mass companions, or, in most
cases, for the second ring.
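This test reduces to a set intersection on particle indices. A sketch, assuming particles are identified by their array index (the function names are ours):

```python
import numpy as np

def ring_members(r, r_in, r_out):
    # Indices of the particles currently inside the ring annulus.
    return np.flatnonzero((r >= r_in) & (r <= r_out))

def fraction_remaining(ids_formation, ids_later):
    """Fraction of the particles constituting the ring at formation
    that are still part of it at a later time: close to 1 for a
    material feature, small for a density wave."""
    initial = set(ids_formation.tolist())
    return len(initial & set(ids_later.tolist())) / len(initial)
```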
\subsection{Kinematics }
\label{sec:kinem}
\indent
\begin{figure}
\vspace{7.00cm}
\special{psfile=fig13.ps angle=-90 vscale=36 hscale=36
hoffset=-10 voffset=205}
\caption{Isocontour plot of the radial velocity of the particles in
simulation R5.
Dotted contours correspond to -10, 0 and 10 km/sec,
solid line contours to 20, 30, 40,... km/sec and dashed ones
to -20, -30, -40,... km/sec.}
\label{fig:isoradvel}
\end{figure}
Following the encounter, particle orbits first contract, then
expand, and then contract anew when
the second ring is formed. One example is seen in
Fig.~\ref{fig:isoradvel} in which we plot
the isocontours of the radial velocity of the particles in run R5, to
follow the evolution both as a function of time and of radius.
Expansion is given by solid lines,
contraction by dashed lines, and values close to 0 (positive or
negative) by dotted lines.
By comparing with the $r=r(t)$ plot
we note that the
particles in the ring show a strong
expansion when the ring is formed, which decreases considerably
with time. Thus, during the initial stages and for a short while, expansions as
high as \hbox{40 km/sec} are not rare,
but they rapidly fall to values of the order
of \hbox{20 km/sec} as the ring propagates outwards.
This should be compared to tangential velocities of the order
of \hbox{100 km/sec} during the
same time interval and in the same regions.
On either side of the ring the
particles show substantial inflow, again more important at the time of the ring
formation and decreasing with time.
In general we find larger expansion velocities in rings
caused by more massive companions or slower impacts, while
the extent of the target disc
or its velocity dispersion has little
influence on the particle velocities.
\begin{figure}
\vspace{7.00cm}
\special{psfile=fig14.ps angle=-90 vscale=36 hscale=36
hoffset=-10 voffset=205}
\caption{Isocontour of radial velocity dispersion in si\-mu\-lation
R1. The dotted contour is at level \hbox{50 km/sec} and the other
ones are separated by \hbox{10 km/sec}.}
\label{fig:isodisvel}
\end{figure}
Fig.~\ref{fig:isodisvel} shows isocontours
of the radial velocity dispersion on the $(r,t)$ plane for run R1.
The passage of the ring from any given radius produces an increase
in the local velocity dispersion, which drops after the ring has
passed and then increases again with the passage of the second ring. The
other runs show a si\-mi\-lar behaviour, with the temporary rise being much higher
for slower passages or more massive
companions, as could be expected. This heating of the disc, occurring
particularly after the passage of the second ring, should be responsible
for stopping the formation of a third ring. This argument is further
substantiated by the fact that even the formation of the second ring is
suppressed in discs which start off relatively hot, as in simulations R5
and R6.
\begin{figure}
\vspace{6.00cm}
\special{psfile=fig15.ps angle=-90 vscale=35 hscale=35
hoffset=-10 voffset=210}
\caption{Position of the ring as a function of time.
The results are from simulation R2, and show
the position of the ring for different angular sectors (open
circles)
and the means over all angles (filled circles), both as a function
of time. The solid line is a polynomial fit to the filled circles.}
\label{fig:ringexpvel}
\end{figure}
To calculate the expansion velocity of the ring we have plotted the
positions of the ring measured for every angular sector as a function of time
(an example for simulation R2 is shown in Fig.~\ref{fig:ringexpvel}). This
shows that the expansion velocity depends on time and suggests that it also
depends on angle, in good agreement with what one could infer from
$r=r(t)$ plots. In order to get rid of the angular dependence
we take the
mean of all positions corresponding to the same time and then obtain the
ring expansion velocity by fitting a second order polynomial in time
to the azimuthally averaged ring position.
We have also tried the exercise using splines, but
found that second order polynomials are more satisfactory.
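The fitting step can be sketched as follows, with NumPy's polynomial routines standing in for whatever fitting code was actually used:

```python
import numpy as np

def expansion_velocity(t, r_mean, deg=2):
    """Fit a second-order polynomial to the azimuthally averaged ring
    position r(t) and return its time derivative, i.e. the ring
    expansion velocity, evaluated at the input times."""
    coeffs = np.polyfit(t, r_mean, deg)
    return np.polyval(np.polyder(coeffs), t)
```

With a second-order polynomial the derived expansion velocity is linear in time, which is consistent with the roughly steady deceleration discussed below.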
The upper panel of
Fig.~\ref{fig:ringexpvel14} compares
the ring position of the first ring for two runs differing by
the velocity of the companion
and shows that the expansion velocity of the
slow encounter (R1) is larger than that of the fast one (R2).
The middle panel of this figure compares the results for
simulations with identical initial conditions except for the
mass of the companion, and shows that larger masses produce
much faster expanding first rings. This is further illustrated
by the bottom panel, where we plot the expansion
velocity of the first ring as a function of time, this time
for all our simulations with either a standard or a double mass
companion. It is clear that simulations
with a double mass companion (solid dots) have larger velocities
than those of simulations with standard mass companions (plus signs).
This figure also brings out clearly the
decrease of the expansion velocity with time.
\begin{figure}
\vspace{12.00cm}
\special{psfile=fig16.ps vscale=45 hscale=45 hoffset=-5 voffset=0}
\caption{The upper panel gives the position
of the ring for a simulation with a slow passage (R1, solid line) and
a simulation with a fast passage (R2, dashed line) as a function of
time, both measured
in computer units. Note that the slow encounter R1
produces a faster expanding ring than the fast
encounter R2. In the middle panel
we plot the position of the ring for C58 (half mass companion),
C55 (standard mass companion) and C57 (double mass companion).
Note that the expansion velocity of the ring depends on the
companion mass. The lower panel shows the expansion velocity
of the ring in computer units. The simulations with double
mass companion
are plotted with filled circles and those with standard mass
companion with a plus sign. All three panels refer to
first rings.}
\label{fig:ringexpvel14}
\end{figure}
As far as the first ring is concerned, the expansion velocity of the
ring is larger in those cases where the expansion velocity of the particles
is largest. This can be seen in the upper panel of
Fig.~\ref{fig:partringvel} where we plot the
expansion velocity of the ring at a given time, calculated as described above,
as a function
of the mean radial velocity of the particles in the ring
at the same time, for all times and simulations. This trend between the two
velocities does not hold for the second ring
(lower panel) where the expansion velocity of the ring remains small at all
times and does not seem to depend in any clear way on the expansion velocity
of the particles.
\begin{figure}
\vspace{12.00cm}
\special{psfile=fig17.ps vscale=45 hscale=45 hoffset=-10 voffset=0}
\caption{Expansion velocity of the ring as a function of the mean
velocity of the particles that are in the area that defines it. There is a
measurement for every time and every simulation. The upper panel corresponds
to the first ring and the lower one to the second one.}
\label{fig:partringvel}
\end{figure}
\section{A barred target galaxy }
\label{sec:barred}
\indent
We have also run
three simulations in which the companion hit an initially barred
disc. In simulations R7 and R8 we have aimed the companion at the center of
the bar and disc, while in simulation R9 we have aimed at a point on
the bar semimajor axis.
Fig.~\ref{fig:barcentxy} shows some characteristic moments of the evolution of the barred disc
around and after the impact of the companion in simulation R7. At time
$t=-2$, just before the
impact, the bar shortens somewhat and after the impact it
gets substantially offset from
the center of mass of the galaxy, so that it forms one side of the ring feature.
Thus at no stage of the evolution do we see a small ring with a diameter
smaller than the bar major axis. Instead there is a rather asymmetric ring
formed in part by the bar. This expands like the other rings discussed so far
and, when it becomes sufficiently large, it becomes detached from the bar,
while an arm emanating initially from one side of the bar continues the ring
without closing it completely. Thus the result is a pseudoring,
enclosing a bar which does not fill it completely. The evolution of
simulation R8 is similar, so we will not show it here.
\begin{figure*}
\vspace{17.00cm}
\special{psfile=fig18.ps vscale=100 hscale=100 hoffset=-70 voffset=-350}
\caption{Evolution of simulation R7, for which the target galaxy is
initially barred. The bar is offcentered by the impact and forms one side of
the expanding asymmetric ring.}
\label{fig:barcentxy}
\end{figure*}
\begin{figure*}
\vspace{13.00cm}
\special{psfile=fig19.ps vscale=100 hscale=100 hoffset=-60 voffset=-450}
\caption{Evolution of simulation R9 following the impact.
In this peripheral encounter the bar is almost destroyed,
forming, in later times of the evolution, an offcentered oval
structure.}
\label{fig:barperixy}
\end{figure*}
The evolution of simulation R9 after the impact is
shown in Fig.~\ref{fig:barperixy}. First the position of maximum density is
displaced from the center of the bar to a point near the impact position and
then
the disc develops a multiarm structure. The different parts of the arms
interact and form one very long arm, winding by nearly $270^\circ$. This
develops into a pseudo-ring, but the density along the ring is a function of
azimuth. The bar itself gets shorter and fatter and de\-ve\-lops into an oval,
whose center gradually shifts towards the center of mass of the target
galaxy. At certain stages of the evolution the result is fairly similar to
that of a peripheral impact on a nonbarred target, except that the ``nucleus"
is oval.
\begin{figure}
\vspace{11.50cm}
\special{psfile=fig20.ps vscale=40 hscale=40 hoffset=-10 voffset=0}
\caption{Bar length (upper panels) and
axial ratios (lower panels) as a function of time. These
are obtained from the isodensity contour at 0.2 (dashed line)
and 0.4 (solid line) times the maximum density in the
disc at the time in question. The simulation each panel
refers to is given in the upper left corner. The impact
time is marked by an arrow.}
\label{fig:barleneachtime}
\end{figure}
\begin{figure}
\vspace{6.50cm}
\special{psfile=fig21.ps vscale=40 hscale=40 hoffset=-10 voffset=-140}
\caption{Bar length (upper panel) and
axial ratios (lower panel) as a function of time. These
are obtained from the isodensity contour at constant
projected density levels equal to 0.2 (dashed line) and 0.4
(solid line) times the maximum density in the disc at the
beginning of the simulation. The impact time is marked by
an arrow and the simulation each panel refers to is
given in the upper left corner.}
\label{fig:barlenfirsttime}
\end{figure}
In none of the three cases was there a second ring, nor did
transient spokes
form.
We have calculated the distance from the position
of the maximum density of
the disc component, which we can define as the center of the bar, to the
center of mass of the target galaxy at each time.
The bar is already somewhat offset before the impact and this
offset is substantially increased when the companion hits the disc,
particularly so when the impact was not central. The offset lasts
for 0.2 to 0.3 Gyears and then the bar center
comes back to a central position.
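This offset measurement, i.e. the distance between the position of maximum density (taken as the bar centre) and the centre of mass, can be sketched as follows; equal-mass particles and the grid resolution are assumptions of the sketch:

```python
import numpy as np

def bar_offset(x, y, bins=50):
    """Distance between the position of maximum disc density, taken as
    the centre of the bar, and the centre of mass of the particles
    (assumed to have equal masses, so the centre of mass is the mean)."""
    h, xe, ye = np.histogram2d(x, y, bins=bins)
    i, j = np.unravel_index(np.argmax(h), h.shape)
    x_bar = 0.5 * (xe[i] + xe[i + 1])
    y_bar = 0.5 * (ye[j] + ye[j + 1])
    return np.hypot(x_bar - x.mean(), y_bar - y.mean())
```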
The pattern speed of the bar before the impact is
in all three cases \hbox{26 km/sec/kpc}.
Right after the impact it drops for a short time to values of the
order of 7 or \hbox{8 km/sec/kpc} for the central
encounters and 14 for the offcenter
one. Then it rises again to 11 or
\hbox{12 km/sec/kpc} for the central encounters and 23
for the offcenter one, values which it keeps till the end of the simulations.
We have made no detailed study of the shape of the bar, but we have
calculated the length of the major and minor axes at a density value
equal to 0.8, 0.6, 0.4 and 0.2 times the maximum density in the disc, and
thus obtained the axial ratios at these density levels at all times. Some of
this information is given in Fig.~\ref{fig:barleneachtime} for one central
impact (R8) and for the peripheral one (R9). The results for the second
central impact simulation are very similar, so they are not shown here.
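For a density map on a grid, the semi-axes at a fractional isodensity level can be sketched as follows; aligning the bar with the $x$ axis of the grid is an idealisation, and the function name is ours:

```python
import numpy as np

def bar_axes(density, x, y, frac=0.4):
    """Semimajor axis, semiminor axis and axial ratio of the bar,
    measured from the extent of the isodensity contour at frac times
    the current density maximum.  density[i, j] is the surface density
    at (x[j], y[i]); the bar major axis is assumed along x."""
    above = density >= frac * density.max()
    xs = x[np.any(above, axis=0)]    # columns reached by the contour
    ys = y[np.any(above, axis=1)]    # rows reached by the contour
    a = 0.5 * (xs.max() - xs.min())  # semimajor axis
    b = 0.5 * (ys.max() - ys.min())  # semiminor axis
    return a, b, b / a
```

Using a level tied to the current density maximum reproduces the first set of measurements; replacing \verb|density.max()| by the maximum at the start of the simulation gives the fixed-level variant discussed below.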
We have plotted
the length of the major axes of the isodensity contours at 0.4 and 0.2
times the maximum density (upper two panels), as well as the corresponding
axial ratios (lower two panels). The bar length increases considerably after
the central impact (marked by an arrow), then decreases again to a level
similar to its initial one and stays at that level till the end of the
simulation. A similar variation can be seen for the minor axis of the bar
(not plotted). Thus the bar axial ratio shows some oscillations right after
the passage of the companion and then settles to a value equal to the
initial one (at the 0.2 density level) or slightly higher than it (at the
0.4 density level).
On the other hand after the peripheral impact
(again marked by an arrow) the bar length drops drastically and then stays at
that level, while the minor axis increases slightly. Thus the axial ratio
increases considerably, i.e., the bar becomes more oval-like, in good
agreement with the visual impression obtained from Fig.~\ref{fig:barperixy}.
\begin{figure}
\vspace{10.00cm}
\special{psfile=fig22.ps vscale=100 hscale=100 hoffset=-15 voffset=-560}
\caption{$r=r(t)$ for simulations R8 and R9, for which the
target galaxy is barred. Both radii and times are measured
in computer units.}
\label{fig:r=rtbar}
\end{figure}
These diagrams are based on measurements made at density levels which
are a given fraction of the density ma\-xi\-mum and thus, since the value of the
maximum density varies with time, are not constant with time. Upon
impact, when this is central, the maximum density increases
considerably, then drops and rises again to a value slightly below this
peak, where it stays constant till the end of the simulation.
In the case of the peripheral impact the maximum
density shows a small minimum after
the impact and then comes back to a value which is, within the errors, equal to
the pre-impact one. Since this will affect the results of
Fig.~\ref{fig:barleneachtime} for the central impacts, we have repeated the
exercise, now using fixed density levels, those at 0.2 and 0.4 times the density
maximum in the disc at the moment the companion was introduced. The results
are given in Fig.~\ref{fig:barlenfirsttime}. They show that the results are
similar to the previous ones at the lowest density level, but that at the
0.4 level (and the 0.6 and 0.8 ones, not shown here) the length of the bar
decreases after impact and then comes back to a value slightly lower than the
pre-impact one, while the axial ratio increases slightly and then drops again
to a level slightly higher than the pre-impact one.
Fig.~\ref{fig:r=rtbar}
shows the $r = r (t)$ diagram for two of our three simulations
with barred galaxy targets, simulation R7 not being shown since the results
are very similar to those of R8. The bar, being a high density region, can be
easily seen as a darker area
at small radii. Apart from that, the formation and
evolution of the ring show up in a way similar to that for the simulations with
nonbarred targets discussed in the previous section. In run R9 the bar is
severely
offcentered for a short time and thus the darker area moves to larger radii
leaving a low density area near the center of mass of the target galaxy.
During the initial phases of the ring formation the bar is part of the
ring, as was already discussed earlier, and therefore the darker area of the
bar forms part of the ring area in the $r=r(t)$ plot of run R9. After
$t=10$ (measured from the time of the impact)
the ring detaches itself from the bar and continues propagating outwards, while the bar
drifts towards the center of the target galaxy, as can be clearly followed in
the lower panel of Fig.~\ref{fig:r=rtbar} and also on
Fig.~\ref{fig:barperixy}.
Small displacements of the bar can also be seen
at later times of simulation R9, and also for some time intervals in
simulation R8, as small light areas near the $r=0$ axis.
\section{Can ring galaxies be in some cases confused with ringed galaxies? }
\label{sec:ringringed}
\indent
Apart from the rings discussed so far, one can find in galactic discs
rings formed by a different mechanism.
They have been classified into three types (Buta 1986):
outer rings, which, as their name
indicates, are found in the outer parts of discs, inner rings,
which are roughly of the size of
bars or ovals, when such structures are present, and nuclear rings,
which form
around the nucleus of the galaxy. Galaxies with such rings are known as
ringed galaxies and good examples of this class of objects
are NGC 1291, 1433, 2217, and 7217.
The formation of rings in barred galaxies has been
studied with the help
of numerical simulations of the gas response in barred galaxy models using
sticky particle codes, i.e., codes where gas is modelled as an ensemble of
inelastic colliding clouds. Clouds between corotation and outer Lindblad
resonance are pushed out by the torque exerted by the bar and form a spiral,
which evolves to a pseudo-ring and then a ring. Such rings form in the
vicinity of resonances (Schwarz 1979, 1981, 1984, Combes \&
Gerin 1985); the outer ring around the outer Lindblad resonance,
the inner one around the inner ultraharmonic resonance or corotation,
and the nuclear one around the inner Lindblad resonance. Schwarz
associated these rings with the observed rings in disc galaxies and this was
further corroborated by Athanassoula et al. (1982) who, analysing the ratio of
ring diameters as given by de Vaucouleurs \& Buta (1980), found that the
histogram of the ratio of
outer to inner ring diameters in barred galaxies showed a sharp peak for
values between 2 and 2.5. This is in agreement with the result of Kormendy
(1979) that the ratio of outer ring to bar size is $2.2 \pm 0.4$, and was also
confirmed by Buta (1995) with a sample of much higher quality, namely the
CSRG (Catalogue of Southern Ringed Galaxies). For brevity we will often
refer to such rings as resonance-rings.
\begin{table}
\centering
\caption{Times and distances}
\label{tab:distances}
\begin{tabular}{@{}ccccccc@{}}
 & \multicolumn{3}{c}{FIRST RING} & \multicolumn{3}{c}{SECOND RING} \\
RUN & $T_{begin}$ & $T_{end}$ & $D_c$ &
$T_{begin}$ & $T_{end}$ & $D_c$ \\
 & (Gyr) & (Gyr) & (kpc) & (Gyr) & (Gyr) & (kpc) \\
S1 & 0.03 & 0.18 & 0.61 & --- & --- & --- \\
S2 & 0.08 & 0.29 & 230 & 0.15 & 0.70 & 561 \\
S3 & 0.02 & 0.34 & 73 & 0.12 & 0.54 & 104 \\
S4 & 0.03 & 0.33 & 246 & 0.10 & 0.67 & 501 \\
R1 & 0.02 & 0.24 & 55 & 0.10 & 0.54 & 100 \\
R2 & 0.02 & 0.11 & 81 & 0.08 & 0.43 & 320 \\
R3 & 0.02 & 0.48 & 58 & 0.14 & 0.74 & 102 \\
R4 & 0.01 & 0.25 & 65 & 0.20 & 0.52 & 117 \\
R5 & 0.03 & 0.23 & 54 & --- & --- & --- \\
R6 & 0.05 & 0.20 & 171 & --- & --- & --- \\
R7 & 0.04 & 0.19 & 47 & --- & --- & --- \\
R8 & 0.04 & 0.19 & 47 & --- & --- & --- \\
R9 & 0.09 & 0.20 & 40 & --- & --- & ---\\
C55 & 0.04 & 0.30 & 69 & 0.12 & 0.44 & 93 \\
C57 & 0.02 & 0.36 & 60 & 0.18 & 0.74 & 87 \\
C58 & 0.04 & 0.12 & 36 & 0.10 & 0.60 & 141 \\
C59 & 0.02 & 0.22 & 55 & 0.10 & 0.74 & 146 \\
C61 & 0.02 & 0.44 & 71 & 0.12 & 0.38 & 65 \\
C63 & 0.02 & 0.34 & 59 & 0.18 & 0.44 & 69 \\
C64 & 0.02 & 0.44 & 69 & 0.18 & 0.74 & 90 \\
C85 & 0.02 & 0.48 & 106 & 0.14 & 0.74 & 152 \\
C86 & 0.02 & 0.74 & 110 & 0.24 & 0.44 & 78 \\
C88 & 0.04 & 0.26 & 66 & 0.20 & 0.52 & 118 \\
C89 & 0.04 & 0.74 & 125 & --- & --- & --- \\
C91 & 0.02 & 0.40 & 66 & 0.20 & 0.52 & 77 \\
C99 & 0.02 & 0.30 & 69 & 0.12 & 0.44 & 94 \\
\end{tabular}
\end{table}
The situation is less clear for nonbarred ringed gala\-xies. The
driving in such cases could perhaps come from a relatively massive grand
design spiral, or from a hidden bar or oval. An alternative possibility could
be that the bar has decayed leaving behind the ring or rings, as
suggested for NGC~7217 (Athanassoula
1996, Athanassoula et al. 1996). Here we will consider whether any
galaxy with a ring formed by an infalling companion as discussed above, could
be mistaken for a ringed galaxy with resonance-rings.
The asymmetry of rings due to peripheral and/or oblique impacts is of course
a tell-tale sign of their origin. Also rings formed by relatively massive
companions can
be distinguished from resonance-rings since such impacts will produce
tell-tale spokes and, if not vertical, will severely
distort the disc and
form rings which, at least at their later stages of evolution, are
quite asymmetric (cf.~Fig.~\ref{fig:585557xy}).
This leaves rings due to central and
near vertical impacts with relatively less massive compa\-nions,
since their morphology cannot be easily
distinguished from that of ringed galaxies (cf.~Fig.~\ref{fig:c59xy}).
Yet even in those cases one can weed out some of the interlopers by
measuring the expansion velo\-city of the material constituting the ring,
since, as we saw in section~\ref{sec:kinem} this is relatively high (except for
the last stages of the evolution) in rings formed by impacts.
We have also compared the ratio of the two ring radii, for ring and ringed
galaxies, for the latter using the data by Buta (1995). Histograms
of the number of galaxies as a function of the ratio of the ring radii
peak roughly at the same position in the two cases, but there are more
ring galaxies with relatively large values of the ring size ratios.
Nevertheless, except for extreme cases, this cannot be used as a way of
distinguishing between the two origins since individual ring galaxies
with relatively large ring ratios could always be ascribed to the
tail end of the distribution of ringed galaxies.
Is the presence of the companion the ``smoking gun" on which we can rely to
distinguish ring galaxies from ringed ones? Table \ref{tab:distances} lists
for all simulations discussed so far (column~1),
the time at which the
first ring appears ($T_{begin}$; column~2)
and disappears ($T_{end}$; column~3),
as well as the distance from the
companion to the center of the target galaxy at the time the ring
disappears ($D_c$; column~4).
Columns~5 to 7 contain the same information, but now for the second ring.
In both cases time is measured in Gyears from the moment of impact and
distance in kpc. Of course the time a ring disappears depends not only on
personal judgement, but also on the way the data are displayed and this
introduces nonnegligible uncertainties in our estimates. As already
mentioned, simulation S1 starts with a
velocity much smaller than the escape velocity
(cf.~Tables~\ref{tab:components}
and \ref{tab:initial}) and
thus the compa\-nion merges with the target.
For the remaining simulations the initial velocities are considerably larger
than the escape velocity and the companion is at a considerable distance when
the rings disappear. The minimum distance is 36 kpc or
3 disc radii when the first ring disappears, and 65 kpc or over 4 disc radii
when the second ring disappears. The mean distance is 85 (159) kpc, or 17 (32)
disc radii when the first (second) ring disappears and thus the companion may
have gone unnoticed if the search did not extend to large distances.
We can thus summarise this section by saying that ring galaxies formed
by massive perturbers or oblique or offcentered impacts cannot be
confused with ringed galaxies. On the other hand there could perhaps be
some confusion regarding rings resulting from low mass perturbers
with perpendicular and central impact.
Measurements of the expansion velocities of the material constituting
the ring should help distinguish between the two types of rings.
\section{Summary }
\label{sec:summary}
\indent
In this paper we have used N-body simulations to study the formation and
evolution of rings in disc galaxies hit by small spherical companions.
In most simulations two transient and short-lived
rings form, although in barred or very
hot target discs no second ring is seen. These rings are indeed
density waves, as
predicted theoretically (Toomre 1978, Lynds \& Toomre 1976). The second
ring is
formed much after the first one, yet the two coexist for a
considerable time span. Following the encounter, particle orbits
first contract, then expand, and then contract again,
this second rebound corresponding to the formation of the second ring.
The amplitude of
these radial motions is a function of radius, or, equivalently, time.
In other words shortly after
the ring has formed, i.e., when it is still in the central parts of the
disc, the particles in the ring have large expansion velocities.
At later times, when the ring has reached larger
radii, they have considerably smaller radial velocities. The passage of
the ring from any given radius increases the local velocity dispersion.
The range of companion masses considered varied between 0.02 and 0.2 times
the target mass, or, equivalently, between 0.1 and 1 times the disc mass.
For the lowest va\-lues in this range either no ring was formed, or it was too
weak to be clearly seen. We did not investigate what the minimum mass
necessary for ring formation is, but our si\-mu\-lations show that 0.05 of the
target mass, or 0.25 of the disc mass, are sufficient.
The amplitude, width, lifetime and expansion velocity of the first ring increase
considerably with companion mass, and so does the radial velocity of the
particles in it. Also rings are more symmetric, narrower and
closer to circular in low mass encounters. Furthermore, high mass impacts
cause a substantial increase of the disc extent.
Rings formed by high mass companions are quite asymmetric during the last
stages of their evolution. Asymmetric rings are of course formed also
by peripheral impacts.
Impacts with a relatively massive companion also make spokes, similar to
those observed in the Cartwheel galaxy. In our simulations we find the best
spokes for $M_C/M_G~=~0.2$, although some good examples can still be found
for $M_C/M_G=0.1$. Spokes are trailing and last only a couple of $10^8$ years.
We tried both vertical and oblique impacts, as well as fast and slow ones.
Perpendicular impacts make rather symmetric and near-circular rings, while
oblique ones form more eccentric and broader rings. First rings formed by slow
impacts have higher amplitudes, larger expansion velocities and longer
lifetimes than those formed by fast impacts. Also the radial velocities of the
particles in the ring are larger.
Discs which have relatively low surface density and extend far out
develop multi-armed spiral structure and their rings, when they form, are
more irregular and patchy, evolving in time first to a polygon-like structure
and then breaking up into a multitude of clumps and spiral segments. Similarly
the spokes formed in such simulations are more clumpy.
Since small companions should hit barred as well as nonbarred galaxies, we
have also run simulations in which the target galaxy is barred. In
all cases we noticed important displacements of the bar structure lasting 0.2
to 0.3 Gyr. An asymmetric pseudoring is formed in each case, one
side of which is formed by the bar. These rings expand, like the ones
formed in nonbarred galaxies, and after becoming sufficiently large they
detach themselves from the bar.
The pattern speed of the bar is considerably decreased during the encounter,
and then increases again, albeit to a value lower than the initial one. If
we define the semimajor axis of the bar as the length at which its density
reaches a given fraction of the maximum central density, then we note
considerable temporary increases of this quantity after a central impact,
followed by a decrease roughly to its initial value. On the other hand after
a peripheral impact on the bar major axis the length of the bar decreases
and stays at a low value, the final product being a fat oval.
Finally we discuss whether ring galaxies could be mistaken for ringed ones,
and we argue that this could not be the case for oblique or off-centered
impacts, or impacts with relatively high mass perturbers, since these
leave tell-tale signs. Measurements of the expansion velocities of the
material constituting
the ring should help distinguish between the two types of rings.
\section*{Acknowledgements}
We would like to thank the referee, Ray Carlberg, for interesting comments and suggestions, Lars Hernquist for providing us with his vectorised version of the treecode, and Jean-Charles Lambert for his help with the GRAPE
simulations and analysis software. The treecode simulations described in this
paper were run at the C98 of the IDRIS (Institut du D\'eveloppement et des
Ressources en Informatique Scientifique, Orsay, France).
The direct summation simulations were run locally and we would like
to thank the INSU/CNRS and the University of Aix-Marseille I for funds
to develop the necessary computing facilities. I.P. thanks the
INAOE of Mexico (Instituto Nacional de Astrof\'{\i}sica, Optica y
Electr\'onica) for a fellowship.
\section{Introduction}
The phenomenon of \emph{uniaxial anisotropy} in permalloy Ni$_{80}$Fe$_{20}$~at.\% (Py) films and its correlation with microstructure have attracted considerable scientific and industrial interest for decades. The proposed explanations for uniaxial anisotropy include oriented defects and oxides, \cite{sugita1967,fujiwara1968} directional ordering of Fe/Ni atom pairs, \cite{chikazumi1950a} shape anisotropy of an elongated ordered phase, \cite{kaya1953} composition variation between grains \citep{kench1970} and, more recently, localized composition non-uniformity. \citep{rodrigues2018}
None of these can account for all instances of uniaxial anisotropy in the Py system, and one or more could contribute simultaneously.
\emph{Epitaxial, single crystal} Py films have a perfect lattice, while the arrangement of Ni and Fe atoms may contain varying degrees of order. Among the explanations above, only the pair ordering and the localized composition non-uniformity are applicable in such a case. Py has vanishingly small magnetocrystalline anisotropy and magnetostriction, and low coercivity, but extremely large magnetic permeability. \citep{chikazumi1961,yin2006} This makes it a unique system in which to study e.g.\ induced uniaxial anisotropy.
Single crystal Py films have been deposited epitaxially by numerous techniques, including thermal evaporation, \citep{chikazumi1961,yelon1965,lo1966} electron beam evaporation, \citep{song1994} molecular beam epitaxy (MBE), \citep{huang1997,tanaka09:2515,tanaka10:345} ion beam sputtering, \citep{hashim1994,hashim1995} rf magnetron sputtering, \citep{higuchi2011,ohtani2013} dc magnetron sputtering, \citep{michelini2002} and pulsed laser deposition. \citep{rao2014}
Most of these deposition methods have resulted in a single crystal Py (001) film on MgO (001) with biaxial anisotropy in the plane, the easy directions being [110] or [100]. Unfortunately, studies that focused on the growth of single crystal Py did not discuss the effect of ordering at all. Most of these studies used a low deposition rate, which normally results in higher order. On the other hand, studies of atomic order have been limited to annealing quenched specimens at about 500~$^\circ$C for very long times. \cite{chikazumi1950a,lutts1970,hausmann1971,wan2005}
Several groups have shown that an increase in atomic order results in a decrease of the anisotropy constant $K_1$. \cite{chikazumi1950a,tsukahara1966,hausmann1971,bozorth1993} Uniaxial anisotropy can be induced in single crystal Py by applying an \emph{in-situ} magnetic field during deposition, \citep{chikazumi1961,hashim1994,hashim1995} or by post-annealing \citep{bozorth1934b} in a magnetic field.
It has been shown that magnetic field induced anisotropy strongly depends on the crystal orientation of the Py for both deposition and annealing in a magnetic field \citep{chikazumi1961,chikazumi1956} i.e.\ it is most efficient along the $\langle111\rangle$ direction, less so along the $\langle110\rangle$ direction and least along the $\langle100\rangle$ direction.
Another method of inducing uniaxial anisotropy is deposition under an angle with respect to the substrate normal. We have shown that deposition under a 35$^\circ$ angle is more effective than applying a 70~Oe \emph{in-situ} magnetic field when depositing polycrystalline films. \citep{kateb2017,kateb2018}
We have also demonstrated growth of polycrystalline Py films under an angle using high power impulse magnetron sputtering (HiPIMS) and compared them with films deposited by conventional dc magnetron sputtering (dcMS). \citep{kateb2018hipims} During HiPIMS deposition, high power pulses of low frequency and low duty cycle are applied to a magnetron target, which results in highly ionized sputtered material. \cite{gudmundsson12:030801} The HiPIMS discharge provides a highly ionized flux of the metallic species, the average ion energy is significantly higher than in a dcMS discharge, and these energetic metallic ions are created during the active phase of the discharge pulse. \cite{bohlmark06:1522,lundin08:035021,greczynski12:4202} For both methods, deposition under an angle with respect to the substrate induces very well-defined uniaxial anisotropy in the film. \citet{schuhl1994} showed that tilt deposition breaks the symmetry between the two in-plane easy axes, appearing as a stepped easy axis magnetization loop along the flux direction. However, inducing uniaxial anisotropy by tilt deposition of a single crystal Py film has not been studied so far.

In this work we demonstrate the epitaxial growth of \emph{single crystal} Py films on MgO (001) substrates, by HiPIMS and by dcMS, both deposited under an incident angle of 35$^\circ$. We study the effect of these two sputtering methods, whose adatom energies differ by order(s) of magnitude, on the structure, order and magnetic anisotropy of the films. It might be tempting to think that the high adatom energy involved in HiPIMS would cause severe structural damage, but there appear to be only very subtle structural disparities; the ordering and magnetic anisotropy, however, are vastly different.
\section{Method}
The substrates were single-side polished single crystalline MgO (001) with surface roughness $<$5~\AA, lateral dimensions of 10$\times$10~mm$^2$ and 0.5~mm thickness (Latech Scientific Supply Pte.~Ltd.). The MgO substrates were used as received without any cleaning, but were baked for an hour at 400~$^\circ$C in vacuum for dehydration. The Py thin films were deposited in a custom-built ultra-high vacuum magnetron sputter chamber with a base pressure of $<5\times10^{-7}$~Pa. The deposition was performed with argon of 99.999~\% purity as the working gas at 0.33~Pa pressure, using a Ni$_{80}$Fe$_{20}$ target of 75~mm diameter and 1.5~mm thickness. During deposition, the substrates were rotated 360$^\circ$ back and forth at $\sim$12.8~rpm with a 300~ms stop in between. Further details on our deposition geometry can be found elsewhere. \citep{kateb2017,kateb2018,kateb2018hipims}
For dcMS deposition a dc power supply (MDX 500, Advanced Energy) was used and the power was maintained at 150~W. For HiPIMS deposition, the power was supplied by a SPIK1000A pulser unit (Melec GmbH) operating in the unipolar negative mode at constant voltage, which in turn was charged by a dc power supply (ADL GS30).
The pulse length was 250~$\mu$s and the pulse repetition frequency was 100~Hz. The average power during HiPIMS deposition was maintained at around 151~W. The HiPIMS deposition parameters were recorded by a home-made LabVIEW program communicating with the setup through high speed data acquisition (National Instruments).
X-ray diffractometry (XRD) was carried out using a PANalytical X'pert PRO diffractometer (Cu K$_\alpha$, wavelength 0.15406~nm) mounted with a hybrid monochromator/mirror on the incident side and a 0.27$^\circ$ collimator on the diffracted side. A line focus was used with a beam width of approximately 1~mm. The film thickness, density and surface roughness were determined by low-angle X-ray reflectivity (XRR) measurements with an angular resolution of 0.005$^\circ$. The data from the XRR measurements were fitted using the commercial X'pert reflectivity program.
For the (002) pole figure, the $\theta-2\theta$ angle was set to the corresponding peak obtained in the normal XRD scan. However, the (111) and (022) peaks do not appear in the normal XRD scan. To this end, a rough pole scan was first done according to the $\theta-2\theta$ values found in the literature. This roughly gives the in-plane ($\phi$) and out-of-plane ($\psi$) angles of those planes with respect to the film surface. Then we scanned $\theta-2\theta$ at the right $\phi$ and $\psi$ to find each (111) and (022) peak. Finally, a more precise pole scan was made at the new $\theta-2\theta$ values. Obviously, the $\theta-2\theta$ values reported in the literature might differ slightly from those of our samples due to strain in the film and the accuracy of the calibration. The $\psi$ angle was calibrated using the (002) peak of MgO normal to the substrate. In a similar way, the narrow MgO (200) peak in the plane of the substrate was utilized for the calibration of $\phi$.
To obtain hysteresis loops, we use a home-built high sensitivity magneto-optical Kerr effect (MOKE) looper with a HeNe laser light source. We used variable steps in the magnetic field, i.e.\ 0.1~Oe steps around transitions of the easy direction, 0.5~Oe steps for the hard axis and before transitions of the easy direction, and 1~Oe steps at higher fields near saturation.
For the anisotropic magnetoresistance (AMR) measurements we utilized the extension by \citet{Price1973} of the van der Pauw (vdP) method. \cite{Pauw1958,vdPauw1958} We have already shown that the vdP measurement is more reliable for AMR measurements since it is less geometry dependent than the conventional Hall-bar method. \cite{kateb2018} Originally, the vdP method was developed for determining the isotropic resistivity and ordinary Hall mobility of uniform and continuous thin films of arbitrary shape, and it has been used extensively for semiconductor characterization. In the vdP method, four small contacts must be placed on the sample perimeter, e.g.\ as illustrated in Fig.~\ref{fig:scheme}. The measured resistances should satisfy the vdP equation \cite{vdPauw1958}
\begin{equation}
\exp\left(-\frac{\pi d}{\rho_{\rm iso}}R_{\rm AB,CD}\right)+\exp\left(-\frac{\pi
d}{\rho_{\rm iso}}R_{\rm AD,CB}\right)=1
\label{eq:vdP}
\end{equation}
where $\rho_{\rm iso}$ is the isotropic resistivity and $d$ is the film thickness. The resistance $R_{\rm AB,CD}$ is measured by forcing current through the path ${\rm AB}$ and picking up the voltage on the opposite side between ${\rm CD}$, and $R_{\rm AD,CB}$ is defined similarly. Note that Eq.~(\ref{eq:vdP}) is independent of the sample shape and the distances between contacts. This behavior is a direct result of conformal mapping, i.e.\ the sample is mapped onto a semi-infinite half-plane with contacts along the edge, in which case the contact distances cancel out.
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{scheme.eps}
\caption{Schematic illustration of set of measurements in vdP method.}
\label{fig:scheme}
\end{figure}
It has been shown that the vdP method can be extended to the case of anisotropic films if two of the three principal resistivity axes are in the film plane while the third is perpendicular to it. In that case, $\rho_{\rm iso}$ obtained from Eq.~(\ref{eq:vdP}) stands for the geometric mean of the in-plane principal resistivities, i.e.\ $\rho_{\rm iso}=\sqrt{\rho_1\rho_2}$. \cite{hornstra1959,price1972} In principle, the vdP method is based on conformal mapping and thus it should remain valid if an anisotropic medium with lateral dimensions of $a\times b$ is mapped into an isotropic one with new dimensions, e.g.\ $a\times b'$. In practice one can make a rectangular sample with the sample sides parallel to the principal resistivities. \cite{hornstra1959,Price1973} Assuming the principal resistivities are aligned with the $x$ and $y$ directions in Fig.~\ref{fig:scheme}, the principal resistivities $\rho_x$ and $\rho_y$ can be obtained from Price's \cite{Price1973} extension, as
\begin{equation}
\sqrt{\frac{\rho_{x}}{\rho_{y}}}=-\frac{b}{\pi a}\ln\left(\tanh \left[\frac{\pi
dR_{\rm AD,CB}}{16\rho_{{\rm iso}}}\right]\right)
\label{eq:rhoratio}
\end{equation}
where $a$ and $b$ are the side lengths of a rectangular sample and $R_{\rm AD,CB}$ is the resistance along the $b$ sides as described above.
Eq.~(\ref{eq:rhoratio}) yields the ratio of the principal resistivities, and the individual values can subsequently be obtained by
\begin{equation}
\rho_{x}=\rho_{\rm iso}\sqrt{\frac{\rho_{x}}{\rho_{y}}}
\end{equation}
and
\begin{equation}
\rho_{y}=\rho_{\rm iso}\left(\!\sqrt{\frac{\rho_{x}}{\rho_{y}}}\;\right)^{-1} \quad
\end{equation}
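As an illustrative numerical sketch (with hypothetical resistance values, not our measured data), Eq.~(\ref{eq:vdP}) can be solved for $\rho_{\rm iso}$ by bisection, since its left-hand side is monotonic in $\rho_{\rm iso}$; Price's relations then give the principal resistivities:

```python
import math

def solve_rho_iso(r_abcd, r_adcb, d):
    """Solve the van der Pauw relation (Eq. 1) for rho_iso by bisection.

    r_abcd, r_adcb -- the two four-point resistances (ohm, hypothetical here)
    d              -- film thickness; rho_iso is returned in ohm * (unit of d)
    """
    def f(rho):
        return (math.exp(-math.pi * d * r_abcd / rho)
                + math.exp(-math.pi * d * r_adcb / rho) - 1.0)

    # f(rho) rises monotonically from -1 (rho -> 0+) to +1 (rho -> infinity)
    lo, hi = 1e-12, 1.0
    while f(hi) < 0.0:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def principal_resistivities(rho_iso, r_adcb, d, a, b):
    """Price's extension (Eqs. 2-4) for a rectangular a x b sample."""
    root = -(b / (math.pi * a)) * math.log(
        math.tanh(math.pi * d * r_adcb / (16.0 * rho_iso)))
    return rho_iso * root, rho_iso / root  # (rho_x, rho_y)
```

For a symmetric, isotropic square sample ($R_{\rm AB,CD}=R_{\rm AD,CB}=R$, $a=b$) the solver recovers the familiar $\rho=\pi d R/\ln 2$, and the resistivity ratio comes out close to unity (Price's formula is a truncated series, so the ratio is approximate rather than exactly 1).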
For the AMR measurement according to Bozorth's \cite{Bozorth1946} notation one must measure resistivity with saturated magnetization parallel ($\rho_{\|}$) and perpendicular ($\rho_{\bot}$) to current direction. The AMR ratio is given by \cite{McGuire1975}
\begin{equation}
\rm{AMR}=\frac{\Delta\rho}{\rho_{\rm{ave}}}=\frac{\rho_{\|}-\rho_{\bot}}{\frac{1}{3}\rho_{\|}+\frac{2}{3}\rho_{\bot}}
\label{eq:amr}
\end{equation}
Assuming the $x$-axis is the current direction, the AMR response can be presented as $\rho_{x}$, which depends only on the direction of the saturated magnetization
\begin{equation}
\rho_{x}=\rho_{\bot}+\Delta\rho \cos^2\phi
\label{eq:rhoxx}
\end{equation}
where $\phi$ stands for the angle between the current ($x$) and the saturated magnetization direction, so that $\rho_x=\rho_{\|}$ at $\phi=0$ and $\rho_x=\rho_{\bot}$ at $\phi=90^\circ$.
It is worth noting that Eq.~(\ref{eq:rhoxx}) states that the resistivity depends only on $\phi$, not on the initial magnetization direction of the films. This is because $\rho_{\|}$ and $\rho_{\bot}$ are measured at saturation, where all domains are assumed to be aligned with the external magnetic field. \cite{Bozorth1946}
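As a minimal sketch of the AMR ratio of Eq.~(\ref{eq:amr}) together with the standard angular dependence of the saturated resistivity, $\rho(\phi)=\rho_\bot+\Delta\rho\cos^2\phi$ (which equals $\rho_\|$ at $\phi=0$ and $\rho_\bot$ at $\phi=90^\circ$); the resistivity values used in the checks below are hypothetical, not our measurements:

```python
import math

def amr_ratio(rho_par, rho_perp):
    """AMR ratio of Eq. (5): delta_rho over the weighted average resistivity."""
    return (rho_par - rho_perp) / (rho_par / 3.0 + 2.0 * rho_perp / 3.0)

def rho_of_phi(rho_par, rho_perp, phi_deg):
    """Saturated resistivity vs. angle phi between current and magnetization:
    rho(phi) = rho_perp + (rho_par - rho_perp) * cos^2(phi)."""
    c = math.cos(math.radians(phi_deg))
    return rho_perp + (rho_par - rho_perp) * c * c
```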
\section{Results and discussion}
\subsection{HiPIMS discharge waveforms}
The discharge current and voltage waveforms are characteristic of the HiPIMS process and provide important information on both the instrument (the pulser unit) and the physics of the ionization. Fig.~\ref{fig:waveform} shows the current and voltage waveforms of the HiPIMS discharge recorded during deposition. It can be seen that a nearly rectangular voltage pulse of 250~$\mu$s length was applied to the cathode target. The oscillations at the beginning and end of the voltage pulse originate from the internal inductance of the power supply, which creates a resonant circuit together with the capacitance of the cathode target and the parasitic capacitance of the cables. There is also a local minimum corresponding to the current rise at 80~$\mu$s.
The discharge current is initiated about 70~$\mu$s into the voltage pulse. The discharge current peaks at 110~$\mu$s into the pulse and then decays until it is cut off. As described by \citet{lundin09:045008} the current waveform can be divided into three distinct regions: (I) a strong gas compression due to the rapid accumulation of sputtered flux as the plasma ignites, which drives the current to its peak value. This is followed by (II) rarefaction, i.e.\ collision of the sputtered flux with the working gas, which results in heating and expansion of the working gas and consequently a current decay. More recently, it has been shown that the latter mechanism dominates the rarefaction at higher pressures, while ionization loss dominates otherwise. \cite{huo12:045004} (III) A steady state plasma until the end of the voltage pulse, which gives a relatively flat current plateau.
It has been shown that increased pressure can prolong the current decay to the end of the pulse and eliminate the plateau region. \cite{lundin09:045008} We believe this is highly unlikely to be the case in Fig.~\ref{fig:waveform}. We have already shown that 0.33~Pa is in the vicinity of the minimum pressure, i.e.\ lower pressures result in a non-linear increase of the delay time for current onset and of the time from current onset to peak current, both of which are nearly constant at higher pressures. \cite{kateb2018hipims} Thus the pressure is low enough to capture the third stage of the current evolution, but the short pulse length here does not allow it to appear.
\begin{figure}
\includegraphics[width=1\linewidth]{waveform.eps}
\caption{\label{fig:waveform} The discharge current and voltage waveforms at 0.33~Pa with the pulse frequency of 100~Hz and pulse length 250~$\mu$s.}
\end{figure}
It is worth mentioning that, although we have tried to maintain the HiPIMS average power the same as the dcMS power (at 150~W), the HiPIMS pulse voltage and peak current (465~V and 25~A) are well above their dcMS counterparts (321~V and 463~mA).
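The contrast between the two discharges can be checked from the quoted numbers; the peak power below is only an upper-bound estimate, since the quoted peak voltage and peak current need not occur at exactly the same instant:

```python
# Duty cycle and power figures implied by the quoted discharge parameters.
pulse_length = 250e-6              # s
pulse_frequency = 100.0            # Hz
duty_cycle = pulse_length * pulse_frequency   # 0.025, i.e. 2.5 %

peak_power_hipims = 465.0 * 25.0   # W; roughly 11.6 kW at the current peak
power_dcms = 321.0 * 0.463         # W; close to the 150 W set point
```

So although the time-averaged powers are matched, the instantaneous power applied to the target during a HiPIMS pulse is roughly two orders of magnitude higher than in the dcMS discharge.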
\subsection{Microstructure}
\subsubsection{XRR}
Values of the film thickness, film density and surface roughness obtained from fitting of the measured XRR curves are shown in Table \ref{XRRfit}. The surface roughnesses obtained here are slightly lower than previously reported values for dcMS. \citep{michelini2002}
The mass density of the HiPIMS deposited film is slightly higher than that of its dcMS deposited counterpart. Both are somewhat lower than for polycrystalline films deposited by dcMS and HiPIMS (8.7~g/cm$^3$) \cite{kateb2018hipims} and the bulk density of 8.73~g/cm$^3$ (Ref.~\citenum{mclyman11}, p.~2-6). Accounting for epitaxial strain, the densities here are within a reasonable range.
\begin{table}[h]
\caption{\label{XRRfit} Values of film thickness, film density and surface roughness obtained by fitting the XRR measurement results.}
\begin{tabular}{ c c c c c }
\hline \hline
Growth & Thickness & Deposition & Density & Roughness\\
technique & (nm) & rate (\AA/s) & (g/cm$^3$) & (\AA) \\
\hline
HiPIMS & 45.74 & 0.97 & 8.38 & 6.75 \\
dcMS & 37.50 & 1.50 & 8.32 & 6.33 \\
\hline
\end{tabular}
\end{table}
It is worth mentioning that, during each HiPIMS pulse, the deposition rate in the active discharge phase is much higher than for dcMS, i.e.\ more than 50 times higher, accounting for the 250~$\mu$s pulse width and 100~Hz pulse repetition frequency.
\subsubsection{XRD}
Permalloy has an fcc structure, while both metastable hcp \citep{huang1998,higuchi2011,tanaka10:345} and bcc \cite{yang1999,yin2006,minakawa2016} Py phases have been reported in ultrathin films. Fig.~\ref{XRD} illustrates the symmetric $\theta-2\theta$ XRD patterns of the epitaxial films obtained by both deposition methods. In the out-of-plane XRD, the fcc (002) peak is the only detectable Py peak. This indicates that the (002) planes of Py are very well aligned with those of the MgO substrate, i.e.\ Py~(001)~$\|$~MgO~(001). Similar results were obtained by measurements in the plane of the epitaxial films ($\psi=90^\circ$) along the [100] directions of MgO, i.e.\ normal to the substrate edges. Furthermore, in-plane measurements along the $\langle$110$\rangle$ direction of MgO (the substrate diagonals) show (220) peaks from both the MgO substrate and the Py film. These indicate an orientation relationship of Py~[100]~$\|$~MgO~[100] and Py~[110]~$\|$~MgO~[110], i.e.\ the [100] and [110] directions of Py are fully aligned with those of the MgO substrate. Thus, in spite of the large lattice mismatch (15.84$\%$), a high quality single crystal Py ($a_{\rm Py}=3.548$~\AA) film can be established on a MgO ($a_{\rm MgO}=4.212$~\AA) substrate with either deposition technique. Furthermore, we compared the in-plane peaks along the $\langle100\rangle$ and $\langle010\rangle$ directions (not shown here) and detected no difference in lattice parameter even with a precise scan, i.e.\ an angular resolution of 0.0001$^\circ$ and 10~s counting time. This means we observed identical in-plane strain along the [100] directions in both films.
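A quick arithmetic check of the quoted cube-on-cube mismatch, using the lattice constants given above (the result agrees with the quoted 15.84\% to within the rounding of the lattice constants):

```python
# Cube-on-cube lattice mismatch relative to the substrate,
# from the lattice constants quoted in the text.
a_py = 3.548     # angstrom, Py
a_mgo = 4.212    # angstrom, MgO
mismatch_pct = (a_mgo - a_py) / a_mgo * 100.0   # roughly 15.8 %
```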
In the dcMS case, the Py (002) peak shows a slight shift towards higher angles in the normal XRD scan. This is accompanied by a shift of the in-plane peaks towards smaller angles. Thus, tensile strain at the film-substrate interface generates a slight compression normal to the film plane. \cite{tanaka2010} However, in the HiPIMS case, both the in-plane \emph{and} out-of-plane peaks are shifted towards smaller angles. This would indicate tensile strain in all three dimensions, which is impossible. Instead, we attribute the shift of the (002) peak in the HiPIMS case to a departure from the $L1_2$ Ni$_3$Fe superlattice. \cite{dahl1936} As pointed out by O'Handley (Ref.~\citenum{Ohandley2000}, p.~548), the Ni$_3$Fe phase exists in either a disordered or a well ordered structure. It has been shown that an ordered Ni$_3$Fe phase can be detected as a shift of the XRD peaks towards larger angles \cite{chikazumi1950a,Ohandley2000,wan2005} and as narrower peaks. \cite{lutts1970,wan2005} In addition, the intensity of the XRD peaks is expected to increase with higher order. \citep[p.~549]{Ohandley2000} All these conditions are observed in our dcMS deposited film, indicating that it is more ordered than its HiPIMS counterpart. The more disordered arrangement in the HiPIMS deposited film can be attributed to the high deposition rate during each pulse, which suppresses adatom mobility.
\begin{figure}
\includegraphics[width=1\linewidth]{XRD}
\caption{\label{XRD} The symmetric XRD pattern of the epitaxial films deposited by HiPIMS (right) and dcMS (left). The vertical dashed lines show the peak position of bulk Py and MgO. The curves are shifted manually for clarity.}
\end{figure}
\subsubsection{Pole figures}
Fig.~\ref{Pole} illustrates pole figures for the main Py planes of our epitaxial films. In the \{200\} pole figure, there is an intense spot at $\psi=0$ that verifies that the (002) plane lies parallel to the substrate, i.e.\ the epitaxial relationship Py(001)~$\|$~MgO(001) for both the dcMS and HiPIMS deposited films. There is also a weak spot with four-fold symmetry at $\psi=90^\circ$ due to in-plane diffraction of the \{200\} planes parallel to the substrate edges in both films. This indicates there are Py \{100\} planes parallel to the substrate edges, i.e.\ Py[100]~$\|$~MgO[100]. The \{220\} pole figures also depict the four-fold symmetry of the \{220\} planes at $\psi$ angles of 45 and 90$^\circ$, as expected from the symmetry of a cubic single crystal, for both films. In both \{111\} pole figures, there is a four-fold spot at $\phi=45^\circ$ and $\psi=54.74^\circ$, in agreement with the angle between the (002) and \{111\} planes. However, compared to the (002) spots, the \{111\} and \{220\} spots are slightly elongated radially, along the $\psi$ axis. This indicates a lattice constant expanded in the plane of the substrate for both films, in agreement with the shift observed in the in-plane XRD scans (cf.\ Fig.~\ref{XRD}). The FWHM of the spots is always narrower for the dcMS deposited epitaxial film, indicating higher order in this case.
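The $\psi$ angles of the pole-figure spots follow from the angle between plane normals in a cubic crystal, which can be checked with a short sketch:

```python
import math

def interplanar_angle(hkl1, hkl2):
    """Angle in degrees between (hkl) plane normals of a cubic crystal,
    cos(angle) = (h1*h2 + k1*k2 + l1*l2) / (|hkl1| * |hkl2|)."""
    dot = sum(u * v for u, v in zip(hkl1, hkl2))
    norm = (math.sqrt(sum(u * u for u in hkl1))
            * math.sqrt(sum(v * v for v in hkl2)))
    return math.degrees(math.acos(dot / norm))
```

This gives 54.74$^\circ$ between (002) and (111), and 45$^\circ$ and 90$^\circ$ between (002) and members of the \{220\} family, matching the $\psi$ positions of the spots described above.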
\begin{figure}
\includegraphics[width=1\linewidth]{Pole}
\caption{\label{Pole} The pole figures obtained for Py \{111\}, \{200\} and \{220\} planes of epitaxial films deposited by HiPIMS and dcMS. The height represents normalized log intensity in arbitrary units.}
\end{figure}
The extra dots that appear in the \{111\} pole figure of the HiPIMS deposited film belong to twin boundaries, as has also been reported for Cu deposited epitaxially by thermal evaporation \citep{chen2013} and by HiPIMS. \citep{cemin2017} The existence of twin boundaries in Py is a signature of a high deposition rate, which has been observed previously in evaporated \citep{baltz1963,yelon1965} and electro-deposited \citep{kench1970} films and studied in detail using TEM. \citep{baltz1963,thangaraj1995,ross1996} It can be seen that these dots at 23$^\circ$ also appear in the dcMS deposited film, but with very small intensity. This indicates that the fraction of twin boundaries is much lower in the dcMS deposited film. In addition, there are three spots with four-fold symmetry in the \{200\} pole figure of the HiPIMS deposited film which do not appear in its dcMS counterpart. The three dot pattern in the \{200\} pole figure has been characterized as an auxiliary sign of twin boundaries in the film. \citep{cemin2017} It is worth noting that these extra dots in both the \{200\} and \{111\} pole figures were characterized by a $\theta-2\theta$ scan (not shown here) to make sure they belong to the Py film.
\subsection{Magnetic properties}
Fig.~\ref{MOKE} compares the results of in-plane MOKE measurements along the [100] and [110] directions of both epitaxial films.
Fig.~\ref{MOKE}(a-b) indicates biaxial behaviour in the dcMS deposited film, consisting of two easy axes along the [110] directions with an $H_{\rm c}$ of $\sim$2~Oe. This is consistent with the $\langle111\rangle$ direction being the easy direction of the Py crystal and the magnetization being forced in-plane along the $\langle110\rangle$ directions by shape anisotropy. \citep{yelon1965,ohtake2011} Along the [100] directions the MOKE response is relatively hard, i.e.\ an open hysteresis loop with gradual saturation outside the hysteresis. The gradual saturation can be explained by an out-of-plane component of the magnetization. \cite{shi2016} In polycrystalline films, the out-of-plane component of the magnetization increases with film thickness \cite{romera2011,silva2017} and gives perpendicular anisotropy at trans-critical thicknesses. \cite{sugita1967,fujiwara1968,svalov2010} In single crystal films, however, an out-of-plane component of the magnetization appears to be generally present. \cite{huang1997,michelini2002,loloee2002}
Fig.~\ref{MOKE}(a) also shows that the $\langle100\rangle$ and $\langle010\rangle$ directions are not completely equivalent in our dcMS deposited film. The $\langle100\rangle$ direction presents a larger coercivity ($\sim$2~Oe) and saturates at 12~Oe, while the $\langle010\rangle$ direction gives $\sim$1~Oe coercivity and saturates at 15 -- 18~Oe. This difference arises from the fact that $\langle100\rangle$ is the direction of the sputter flux during the 300~ms stop while the rotation is reversed. Such a short time is enough to define uniaxial anisotropy in polycrystalline films deposited by both dcMS and HiPIMS. \cite{kateb2018hipims} However, it appears that for the epitaxial film deposited by dcMS, our deposition geometry is not enough to induce uniaxial anisotropy along the [100] direction, in agreement with the previous study of \citet{schuhl1994}.
\begin{figure}
\includegraphics[width=1\linewidth]{MOKE}
\caption{\label{MOKE} The average hysteresis loops of the epitaxial films obtained by MOKE measurements along the [100] and [110] directions of the epitaxial Py films.}
\end{figure}
As shown in Fig.~\ref{MOKE}(c-d), the HiPIMS deposited epitaxial film shows very well-defined uniaxial anisotropy, indicated by a linear hard axis trace without hysteresis and a slightly rounded easy axis loop along the [100] directions.
The anisotropy field ($H_{\rm k}$) of the HiPIMS epitaxial film is 3.5~Oe, i.e.\ much lower than the values observed for polycrystalline films deposited by HiPIMS on Si/SiO$_2$ (11 -- 14.5~Oe). \citep{kateb2018hipims} However, the coercivity ($H_{\rm c}$) of 1.8~Oe here is very close to that of the polycrystalline films, i.e.\ 2 -- 2.7~Oe. We have shown that in polycrystalline films $H_{\rm c}$ depends on the film density and increases as the film density drops. \cite{kateb2018hipims} In principle the $H_{\rm c}$ of a film depends on the domain boundary structure, which has been shown to depend on the film thickness. \cite{miyazaki1989} However, since the grain size changes with the film thickness, it is a common mistake to correlate $H_{\rm c}$ with the grain size. We have shown that over a range of film thicknesses (10 -- 250~nm) the grain size changes continuously while $H_{\rm c}$ only changes with the domain wall transitions, i.e.\ N{\'e}el to Bloch to cross-tie. \cite{kateb2017}
A question that might arise here is what makes the HiPIMS deposited epitaxial film exhibit uniaxial anisotropy. It has been shown by several groups that the formation of ordered Ni$_3$Fe results in a lower uniaxial anisotropy constant ($K_1$). \cite{chikazumi1950a,chikazumi1956,tsukahara1966,hausmann1971,bozorth1993} According to both the pair ordering \cite{chikazumi1950a} and the localized composition non-uniformity \cite{rodrigues2018} theories, uniaxial anisotropy is not expected for highly symmetric Ni$_3$Fe. In the HiPIMS deposited film, by contrast, the lower order allows uniaxial anisotropy to develop.
\subsection{Transport properties}
Fig.~\ref{fig:amrrot} shows the AMR response of the epitaxial films to the rotation of a 24~Oe in-plane saturating field. This field is large enough to saturate both films in any direction. The $\theta$ here stands for the angle between the applied magnetic field and the $\langle100\rangle$ direction of the films and should not be confused with the $\phi$ in Eq.~(\ref{eq:rhoxx}), i.e.\ the angle between the current direction and the magnetic field. The result of Eq.~(\ref{eq:rhoxx}) is also plotted for comparison, indicated by the black line. Even though the dcMS deposited film is thinner than its HiPIMS counterpart, the resistivities in the dcMS case are all lower than the HiPIMS ones. This behaviour is in contradiction with the Fuchs model, \citep{fuchs1938} which predicts lower resistivity for thicker films. It can be explained in terms of the higher Ni$_3$Fe order achieved in the dcMS deposited film. It has been shown previously that the resistivity depends on the order and decreases with increasing Ni$_3$Fe order. \cite{hausmann1971}
It can be seen that the AMR response of the epitaxial film deposited with HiPIMS conforms better to Eq.~(\ref{eq:rhoxx}) than that of its dcMS counterpart. In the dcMS case, the deviation from Eq.~(\ref{eq:rhoxx}) occurs at about 45 -- 85$^\circ$, 95 -- 135$^\circ$ and so on. Since the deviation is symmetric around 90$^\circ$ (the $\langle010\rangle$ orientation) it is less likely associated with a pinning mechanism of some domains. Presumably, the deviation originates from the switching of some domains towards the easy axes at 45, 135, 225 and 315$^\circ$, i.e.\ the [110] orientations. This so-called quasi-static switching in single crystal Py has been studied using torque measurements, as a characteristic of biaxial anisotropy. \citep{yelon1965}
The AMR values obtained by Eq.~(\ref{eq:amr}) along the $\langle100\rangle$ and $\langle010\rangle$ directions are summarized in Table~\ref{tab:amr}. We have recently shown that in polycrystalline films the AMR response is different along the hard and easy axes of the film. \cite{kateb2018} It appears that the AMR response is always lower along the $\langle100\rangle$ direction (the direction of flux) in the epitaxial films. It is also evident that higher order reduces resistivity and increases AMR.
\begin{table}[]
\centering
\caption{Summary of the AMR results of epitaxial films. (All resistivity values are in units of $\mu\Omega$-cm.)}
\label{tab:amr}
\begin{tabular}{c| c c c c c c}
\hline\hline
Deposition & Current & $\rho_\|$ & $\rho_\bot$ & $\Delta\rho$ & $\rho_{\rm ave}$ & AMR \\
method & direction & \multicolumn{4}{c}{($\mu\Omega$-cm)} & (\%) \\
\hline
dcMS & $\langle100\rangle$ & 22.19 & 22.80 & 0.39 & 23.06 & 1.70 \\
dcMS & $\langle010\rangle$ & 16.92 & 16.46 & 0.46 & 16.77 & 2.74 \\
HiPIMS & $\langle100\rangle$ & 37.46 & 37.91 & 0.45 & 37.76 & 1.19 \\
HiPIMS & $\langle010\rangle$ & 27.16 & 27.62 & 0.46 & 27.47 & 1.67 \\
\hline
\end{tabular}
\end{table}
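For concreteness, the AMR percentages in Table~\ref{tab:amr} can be cross-checked from the tabulated $\Delta\rho$ and $\rho_{\rm ave}$ values, assuming the standard ratio ${\rm AMR}=\Delta\rho/\rho_{\rm ave}$. The sketch below is illustrative only; the resistivity values are copied from the table and the helper name is ours.

```python
# Cross-check of the AMR percentages in Table (tab:amr), assuming the
# common definition AMR (%) = (Delta rho / rho_ave) * 100.
rows = [
    # (deposition, direction, delta_rho, rho_ave, reported_AMR_percent)
    ("dcMS",   "<010>", 0.46, 16.77, 2.74),
    ("HiPIMS", "<100>", 0.45, 37.76, 1.19),
    ("HiPIMS", "<010>", 0.46, 27.47, 1.67),
]

def amr_percent(delta_rho, rho_ave):
    """AMR ratio in percent from the resistivity anisotropy."""
    return 100.0 * delta_rho / rho_ave

for dep, direction, d_rho, r_ave, reported in rows:
    computed = amr_percent(d_rho, r_ave)
    print(f"{dep} {direction}: computed {computed:.2f}%, reported {reported:.2f}%")
```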
\begin{figure}
\includegraphics[width=1\linewidth]{AMR_M_rot.eps}
\caption{\label{fig:amrrot} The AMR obtained by resistivity measurements along the [100] directions of Py films deposited by (a -- b) dcMS and (c -- d) HiPIMS during rotation of 24~Oe magnetic field. The $\theta$ here stands for angle of in-plane magnetization with the $\langle100\rangle$ direction. The black lines indicate the result of fitting with Eq.~(\ref{eq:rhoxx}). The vertical dashed lines indicate the direction of easy axes.}
\end{figure}
\section{Summary}
In summary, we have deposited Ni$_{80}$Fe$_{20}$ (001) films by HiPIMS and dcMS. We have characterized them carefully with detailed X-ray measurements, finding only rather subtle structural differences. The pole figures display a signature of twin boundaries (stacking faults) in the HiPIMS deposited film, and it appears to be slightly more strained or disordered, with regard to the dispersion of Ni and Fe atoms, than the dcMS deposited film. However, the differences in the magnetic properties of the two films are vast. The dcMS deposited film has biaxial symmetry in the plane, with easy directions along [110], as one might expect for a bulk fcc magnetic material (the $\langle111\rangle$ direction is out of plane, and shape anisotropy forces the magnetization into the plane of the film). The HiPIMS deposited film exhibits a different magnetic symmetry, as it has uniaxial anisotropy with $\langle100\rangle$ as the easy direction. Furthermore, the film is magnetically soft and has an anisotropy field of only 3.5~Oe, which is lower than most results we have obtained for polycrystalline films. We attribute the uniaxial anisotropy to the less ordered dispersion of Ni and Fe at the atomic level in the film deposited by HiPIMS, owing to the high deposition rate during the HiPIMS discharge pulse.
\begin{acknowledgments}
The authors would like to acknowledge helpful comments and suggestions from Dr.~Fridrik Magnus and Dr.~Arni S.~Ingason on the structure characterization. This work was partially supported by the Icelandic Research Fund (Rannis) Grant Nos.~196141, 130029 and 120002023, and the Swedish Government Agency for Innovation Systems (VINNOVA) contract No.~2014-04876.
\end{acknowledgments}
\section{Introduction}
Node property prediction is a ubiquitous task involving graph data with node features and/or labels, with a wide range of instantiations across real-world scenarios such as node classification \citep{velivckovic2017graph} and link prediction \citep{zhang2018link}, while also empowering graph classification \citep{gilmer2017neural}, etc.
Unlike conventional machine learning problems, where samples typically have no explicit non-iid structure, nodes are connected by edges, and a natural assumption is that labels and features vary smoothly over the graph.
This smoothing assumption has inspired two interrelated lines of research:
First, graph neural networks (GNNs) \citep{kipf2016semi,hamilton2017inductive,li2018deeper,xu2018representation,liao2019lanczosnet,xu2018powerful} leverage a parameterized message passing strategy to convert the original node features into predictive embeddings that reflect the features of neighboring nodes. However, this approach does not directly utilize existing label information beyond their influence on model parameters through training.
And secondly, label propagation algorithms \citep{zhu2005semi,zhou2004learning,zhang2006hyperparameter,wang2007label,karasuyama2013manifold,gong2016label,liu2018learning} spread label information via graph diffusion to make predictions, but cannot exploit node features.
As GNNs follow a similar propagation mechanism as the label propagation algorithm, the principal difference being whether features or labels are smoothed across the graph, it is natural to consider combinations of the two for improving performance. Examples motivated by this intuition, at least to varying degrees, include APPNP \citep{klicpera2018predict}, Correct and Smooth (C\&S) \citep{huang2020combining}, GCN-LPA \citep{wang2020unifying}, and TPN \citep{liu2018learning}. While effective in many circumstances, these methods are not all end-to-end trainable and easily paired with arbitrary GNN architectures.
Among these various combination strategies, a recently emerged \textit{\labeltrick{}} has enjoyed widespread success in facilitating the parallel use of node features and labels via a stochastic label splitting technique. In brief, the basic idea is to use a randomly-selected portion of the training labels as GNN inputs, concatenated with the original node features for making predictions on the remaining labels during each mini-batch. The ubiquity of this simple label trick is evidenced by its adoption across numerous GNN architectures and graph benchmarks \citep{wang2021bag,sun2020adaptive,kong2020flag,li2021training,shi2020masked}. And with respect to practical performance, this technique is foundational to many of the top-ranking submissions on the Open Graph Benchmark (OGB) leaderboard \citep{hu2020open}. For example, at the time of this submission, the top 10 results spanning multiple research teams all rely on the label trick, as do the top 3 results from the recent KDDCUP 2021 Large-Scale Challenge MAG240M-LSC \citep{hu2021ogb}.
And yet despite its far-reaching adoption, thus far the label trick has been motivated primarily as a training heuristic without a strong theoretical foundation. Moreover, many aspects of its underlying operational behavior have not been explored, with non-trivial open questions remaining. For example, while originally motivated from a stochastic perspective, is the label trick reducible to a more transparent deterministic form that is amenable to interpretation and analysis? Similarly, are there any indirect regularization effects with desirable (or possibly undesirable) downstream consequences? And how do the implicit predictions applied by the model to test nodes during the stochastic training process compare with the actual deterministic predictions used during inference? If there is a discrepancy, then the generalization ability of the model could be compromised. And finally, are there natural use cases for the label trick that have so far flown under the radar and been missed?
We take a step towards answering these questions via the following two primary contributions:
\begin{itemize}
\setlength\itemsep{0pt}
\item We prove that in certain restricted settings, the original \textit{stochastic} \labeltrick{} can be reduced to an interpretable, \textit{deterministic} training objective composed of two terms: (1) a data-fitting term that naturally resolves potential label leakage issues and maintains consistent predictions during training and inference, and (2) a regularization factor conditioned on graph structure that adapts to graph size and connectivity.
Furthermore, complementary experiments applying the \labeltrick{} to a broader class of graph neural network models corroborate that similar effects exists in more practical real-world settings, consistent with our theoretical findings.
\item Although in prior work the label trick has already been integrated within a wide variety of GNN models, we introduce novel use-cases motivated by our analysis. This includes exploiting the label trick to: (i) train simple end-to-end variants of label propagation and C\&S, and (ii) replace stochastic use cases of the label trick with more stable, deterministic analogues that are applicable to GNN models with linear propagation layers such as SGC \citep{wu2019simplifying}, TWIRLS \citep{yang2021graph} and SIGN \citep{frasca2020sign}. Empirical results on node classification benchmarks verify the efficacy of these simple enhancements.
\end{itemize}
Collectively, these efforts establish a more sturdy foundation for the label trick, and in doing so, help to ensure that it is not underutilized.
\vspace{-4pt}
\section{Background}
\vspace{-4pt}
Consider a graph $G=(V,E)$ with $n=|V|$ nodes, the node feature matrix is denoted by ${\bm{X}}\in{\mathbb{R}}^{n\times d}$ and the label matrix of the nodes is denoted by ${\bm{Y}}\in{\mathbb{R}}^{n\times c}$, with $d$ and $c$ being the number of channels of features and labels, respectively.
Let ${\bm{A}}$ be the adjacency matrix, ${\bm{D}}$ the degree matrix and ${\bm{S}}={\bm{D}}^{-\frac12}{\bm{A}}{\bm{D}}^{-\frac12}$ the symmetric normalized adjacency matrix.
The symmetric normalized Laplacian ${\bm{L}}$ can then be formulated as ${\bm{L}}={\bm{I}}_n-{\bm{S}}$.
We also define a training mask matrix as
${\bm{M}}_{tr}=\left(\begin{matrix}{\bm{I}}_m & \bm0\\
\bm0 & \bm0\end{matrix}\right)_{n\times n}$,
where w.l.o.g.~we are assuming that the first $m$ nodes, denoted ${\mathcal{D}}_{tr}$, form the training dataset.
We use ${\bm{P}}$ to denote a \textit{propagation matrix}, where the specific ${\bm{P}}$ will be described in each context.
\subsection{Label Propagation Algorithm}
\label{sec:lpa}
Label propagation is a semi-supervised algorithm that predicts unlabeled nodes by propagating the observed labels across the edges of the graph, with the underlying smoothness assumption that two nodes connected by an edge are likely to share the same label.
Following \citep{zhou2004learning,yang2021graph}, the implicit energy function of label propagation is given by
\begin{equation}
\label{eqn:lp}
E({\bm{F}})=(1-\lambda)\|{\bm{F}}-{\bm{Y}}_{tr}\|_2^2+\lambda\tr[{\bm{F}}^\top {\bm{L}} {\bm{F}}],
\end{equation}
where ${\bm{Y}}_{tr}={\bm{M}}_{tr}{\bm{Y}}$ is the label matrix of training nodes, and $\lambda\in(0,1)$ is a regularization coefficient that determines the trade-off between the two terms.
The first term is a \emph{fitting constraint}, with the intuition that the predictions of a good classifier should remain close to the initial label assignments, while the second term introduces a \emph{smoothness constraint}, which favors similar predictions between neighboring nodes in the graph.
It is not hard to derive that the closed-form optimal solution of this energy function is given by ${\bm{F}}^*={\bm{P}}{\bm{Y}}$, where ${\bm{P}}=(1-\lambda)({\bm{I}}_n-\lambda {\bm{S}})^{-1}$.
However, since the stated inverse is impractical to compute for large graphs, ${\bm{P}}{\bm{Y}}$ is often approximated in practice via ${\bm{P}}\approx(1-\lambda)\sum_{i=0}^k\lambda^i{\bm{S}}^i{\bm{Y}}$. From this expression, it follows that ${\bm{F}}$ can be estimated by the more efficient iterations ${\bm{F}}^{(k+1)}=\lambda{\bm{S}}{\bm{F}}^{(k)}+(1-\lambda){\bm{F}}^{(0)}$, where ${\bm{F}}^{(0)}={\bm{Y}}_{tr}$ and for each $k$, ${\bm{S}}$ smooths the training labels across the edges of the graph.
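The iterative scheme above can be sketched in a few lines. The toy example below is our own construction (not tied to any particular implementation) and confirms that the iterates approach the closed-form solution ${\bm{F}}^*=(1-\lambda)({\bm{I}}_n-\lambda{\bm{S}})^{-1}{\bm{Y}}_{tr}$:

```python
import numpy as np

def normalized_adjacency(A):
    """S = D^{-1/2} A D^{-1/2} for a graph with adjacency matrix A."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def label_propagation(S, Y_tr, lam=0.9, num_iters=200):
    """Iterative label propagation: F <- lam * S @ F + (1 - lam) * Y_tr."""
    F = Y_tr.copy()
    for _ in range(num_iters):
        F = lam * (S @ F) + (1.0 - lam) * Y_tr
    return F

# Toy 4-node path graph; node 0 labeled class 0, node 3 labeled class 1.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
S = normalized_adjacency(A)
Y_tr = np.array([[1, 0], [0, 0], [0, 0], [0, 1]], dtype=float)

F_iter = label_propagation(S, Y_tr, lam=0.9)
# Closed-form optimum F* = (1 - lam) (I - lam S)^{-1} Y_tr
F_star = (1 - 0.9) * np.linalg.solve(np.eye(4) - 0.9 * S, Y_tr)
print(np.max(np.abs(F_iter - F_star)))  # negligible after enough iterations
```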
\vspace{-4pt}
\subsection{Graph Neural Networks for Propagating Node Features}
In contrast to the propagation of labels across the graph, GNN models transform and propagate node features using a series of feed-forward neural network layers. Popular examples include GCN \citep{kipf2016semi}, GraphSAGE \citep{hamilton2017inductive}, GAT \citep{velivckovic2017graph}, and GIN \citep{xu2018powerful}.
For instance, the layer-wise propagation rule of GCN can be formulated as ${\bm{X}}^{(k+1)}=\sigma({\bm{S}}{\bm{X}}^{(k)}{\bm{W}}^{(k)})$ where $\sigma(\cdot)$ is an activation function such as ReLU, ${\bm{X}}^{(k)}$ is the $k$-th layer node representations with ${\bm{X}}^{(0)}={\bm{X}}$, and ${\bm{W}}^{(k)}$ is a trainable weight matrix of the $k$-th layer.
Compared with the label propagation algorithm, GNNs can sometimes exhibit a more powerful generalization capability via the interaction between discriminative node features and trainable weights.
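A minimal forward pass of the GCN propagation rule above might look as follows; this is a self-contained toy sketch, with a random graph and illustrative dimensions of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(S, X, W):
    """A single GCN layer: X' = ReLU(S X W), with S the normalized adjacency."""
    return np.maximum(S @ X @ W, 0.0)

# Toy setup: 5 nodes, 3 input features, 4 hidden units, 2 output classes.
n, d, h, c = 5, 3, 4, 2
A = (rng.random((n, n)) < 0.5).astype(float)
A = np.maximum(A, A.T)                     # symmetrize
np.fill_diagonal(A, 1.0)                   # add self-loops (GCN convention)
deg = A.sum(axis=1)
S = A / np.sqrt(deg)[:, None] / np.sqrt(deg)[None, :]

X = rng.standard_normal((n, d))
W0 = rng.standard_normal((d, h))
W1 = rng.standard_normal((h, c))

H = gcn_layer(S, X, W0)      # first layer with ReLU
logits = S @ H @ W1          # final layer without nonlinearity
print(logits.shape)          # (5, 2)
```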
\vspace{-4pt}
\subsection{Combining Label and Feature Propagation}
While performing satisfactorily in many circumstances, GNNs only indirectly incorporate ground-truth training labels via their influence on the learned model weights. But these labels are not actually used during inference, which can potentially degrade performance relative to label propagation, especially when the node features are noisy or unreliable. Therefore, it is natural to consider the combination of label \textit{and} feature propagation to synergistically exploit the benefits of both as has been proposed in \citep{klicpera2018predict,liu2018learning,wang2020unifying,wang2021bag,shi2020masked,huang2020combining}.
One of the most successful among these hybrid methods is the so-called \textit{\labeltrick{}}, which can be conveniently retrofitted within most standard GNN architectures while facilitating the parallel propagation of labels and features in an end-to-end trainable fashion. As mentioned previously, a number of top-performing GNN pipelines have already adopted this trick, which serves to establish its widespread relevance \citep{sun2020adaptive,wang2021bag,kong2020flag,li2021training,shi2020masked} and motivates our investigation of its properties herein. To this end, we formally define the \labeltrick{} as follows:
\begin{definition}[\labeltrick{}]
The \labeltrick{} is based on creating random partitions of the training data as in ${\mathcal{D}}_{tr} = {\mathcal{D}}_{in} \cup {\mathcal{D}}_{out}$ and ${\mathcal{D}}_{in} \cap {\mathcal{D}}_{out}=\varnothing$, where node labels from ${\mathcal{D}}_{in}$ are concatenated with the original features ${\bm{X}}$ and provided as GNN inputs (for nodes not in ${\mathcal{D}}_{in}$ zero-padding is used), while the labels from ${\mathcal{D}}_{out}$ serve in the traditional role as supervision. The resulting training objective then becomes
\begin{equation} \label{eq:label_trick}
\E_{splits}\Big[\sum_{i\in{\mathcal{D}}_{out}} \ell\big(~{\bm{y}}_i,~f\left({\bm{X}},{\bm{Y}}_{in};\mbox{\boldmath $\calW$} \right)_i ~ \big) \Big]
\end{equation}
where ${\bm{Y}}_{in}\in{\mathbb{R}}^{n\times c}$ is defined row-wise as ${\bm{y}}_{in,i}=\begin{cases}{\bm{y}}_i & \textnormal{if}\ i\in {\mathcal{D}}_{in} \\ {\bm{0}} & \textnormal{otherwise}\end{cases}$ for all $i$, the function $f({\bm{X}},{\bm{Y}}_{in};\mbox{\boldmath $\calW$})$ represents a message-passing neural network with parameters $\mbox{\boldmath $\calW$}$ and the concatenation of ${\bm{X}}$ and ${\bm{Y}}_{in}$ as inputs, and $\ell(\cdot,\cdot)$ denotes a point-wise loss function over one sample/node. At inference time, we then use the deterministic predictor $f({\bm{X}},{\bm{Y}}_{tr};\mbox{\boldmath $\calW$})_i$ for all test nodes $i \notin {\mathcal{D}}_{tr}$.
\end{definition}
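A single random split of the kind used in the \labeltrick{} can be sketched as follows (a hypothetical helper of our own, with zero-padding of the input labels exactly as in the definition):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_label_split(Y, train_idx, alpha=0.5):
    """One random split for the label trick: labels of D_in are fed as input
    (zero-padded elsewhere); D_out keeps its usual supervision role."""
    n = Y.shape[0]
    in_mask = np.zeros(n, dtype=bool)
    # Each training node lands in D_in independently with probability alpha.
    in_mask[train_idx] = rng.random(len(train_idx)) < alpha
    out_idx = [i for i in train_idx if not in_mask[i]]
    Y_in = np.where(in_mask[:, None], Y, 0.0)  # zero-padded input labels
    return Y_in, out_idx

# Toy example: 6 nodes, first 4 are training nodes, 2-class one-hot labels.
Y = np.eye(2)[[0, 1, 0, 1, 0, 1]].astype(float)
train_idx = [0, 1, 2, 3]
Y_in, out_idx = sample_label_split(Y, train_idx, alpha=0.5)
# D_in and D_out partition the training set; non-training rows of Y_in are zero.
```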
\section{Reliable Randomness through the Label Trick} \label{sec:analysis}
Despite its widespread adoption, the label trick has thus far been motivated as merely a training heuristic without formal justification. To address this issue, we will now attempt to quantify the induced regularization effect that naturally emerges when using the label trick.
However, since the formal analysis of deep networks is challenging, we herein adopt the simplifying assumption that the function $f$ from (\ref{eq:label_trick}) is linear, analogous to the popular SGC model from \citep{wu2019simplifying}.
For simplicity of exposition, in Sections \ref{sec:label_trick_alone} and \ref{sec:selp} we will consider the case where no node features are present to isolate label-trick-specific phenomena. Later in Sections \ref{sec:label_trick_as regularization} and \ref{sec:nonlinear_extensions} we will reintroduce node features to present our general results, as well as considering nonlinear extensions.
\subsection{Label Trick without Node Features}
\label{sec:label_trick_alone}
Assuming no node features ${\bm{X}}$, we begin by considering the deterministic node label loss
\begin{equation}
\label{eqn:objective_wlpa}
{\mathcal{L}}({\bm{W}})=\sum_{i\in{\mathcal{D}}_{tr}}\ell\big({\bm{y}}_i, ~ [{\bm{P}}{\bm{Y}}_{tr}{\bm{W}}]_i\big),
\end{equation}
where $[~\cdot~]_i$ indicates the $i$-th row of a matrix and ${\bm{P}}{\bm{Y}}_{tr}{\bm{W}}$ is a linear predictor akin to SGC, but with only the zero-padded training label matrix ${\bm{Y}}_{tr}$ as an input. However, directly employing (\ref{eqn:objective_wlpa}) for training suffers from potential label leakage issues given that a simple identity mapping suffices to achieve the minimal loss at the expense of accurate generalization to test nodes. Furthermore, there exists an inherent asymmetry between the predictions computed for training nodes, where the corresponding labels are also used as inputs to the model, and the predictions for testing nodes where no labels are available.
At a conceptual level, these issues can be resolved by the \labeltrick{}, in which case we introduce random splits of ${\mathcal{D}}_{tr}$ and modify (\ref{eqn:objective_wlpa}) to
\begin{equation}
\label{eqn:objective_wlpa2}
{\mathcal{L}}({\bm{W}})=\E_{splits}\Big[\sum_{i\in {\mathcal{D}}_{out}}\ell\big({\bm{y}}_i, ~ [{\bm{P}}{\bm{Y}}_{in}{\bm{W}}]_i\big)\Big].
\end{equation}
For each random split, the resulting predictor only includes the label information of ${\mathcal{D}}_{in}$ (through ${\bm{Y}}_{in}$), and thus there is no unresolved label leakage issue when predicting the labels of ${\mathcal{D}}_{out}$.
In practice, we typically sample the splits by assigning a given node to ${\mathcal{D}}_{in}$ with some probability $\alpha\in(0,1)$; otherwise the node is set to ${\mathcal{D}}_{out}$. It then follows that $\E[|{\mathcal{D}}_{in}|]=\alpha|{\mathcal{D}}_{tr}|$ and $\E[{\bm{Y}}_{in}]=\alpha {\bm{Y}}_{tr}$.
\vspace{-4pt}
\subsection{Self-excluded Simplification of the Label Trick}
\label{sec:selp}
Because there exists an exponential number of different possible random splits, for analysis purposes (with later practical benefits as well) we first consider a simplified one-versus-all case whereby we enforce that $|{\mathcal{D}}_{out}|=1$ across all random splits, with each node landing with equal probability in ${\mathcal{D}}_{out}$. In this situation, the objective function from (\ref{eqn:objective_wlpa2}) can be re-expressed more transparently without any expectation as
\begin{equation} \label{eq:linear_simplification_loss}
\begin{aligned}
{\mathcal{L}}({\bm{W}}) &= \E_{splits}\Big[\sum_{i\in {\mathcal{D}}_{out}} \ell\big({\bm{y}}_i, ~ [{\bm{P}}{\bm{Y}}_{in}{\bm{W}}]_i\big)\Big]
= \E_{splits}\Big[\sum_{i\in{\mathcal{D}}_{out}} \ell\big({\bm{y}}_i, ~ [{\bm{P}}({\bm{Y}}_{tr}-{\bm{Y}}_i){\bm{W}}]_i\big)\Big]\\
& = \sum_{i\in{\mathcal{D}}_{tr}} \ell\big({\bm{y}}_i, ~ [({\bm{P}}-{\bm{C}}){\bm{Y}}_{tr}{\bm{W}}]_i\big),
\end{aligned}
\end{equation}
where ${\bm{Y}}_i$ represents a matrix that shares the $i$-th row of ${\bm{Y}}$ and pads the rest with zeros, and ${\bm{C}}=\diag({\bm{P}})$. This then motivates the revised predictor given by
\begin{equation}
\label{eqn:selp}
f_{\predictorSELP{}}({\bm{Y}}_{tr};{\bm{W}})=({\bm{P}}-{\bm{C}}){\bm{Y}}_{tr}{\bm{W}},
\end{equation}
where ${\bm{P}}$ here can in principle be any reasonable propagation matrix, not necessarily the one associated with the original label propagation algorithm.
\begin{remark}
From this expression we observe the intuitive role that ${\bm{C}}$ plays in blocking the direct pathway from each training node's input label to the predicted output label for that same node. In this way the predictor propagates the labels of each training node excluding itself, and for \textit{both} training and testing nodes alike, the predicted label of a node is only a function of \textit{other} node labels. This resolves the asymmetry mentioned previously with respect to the predictions from (\ref{eqn:objective_wlpa}).
\end{remark}
\begin{remark} \label{rem:test_node_consistency}
It is generally desirable that a candidate model produces the same predictions on test nodes during training and inference to better ensure proper generalization. Fortunately, this is in fact the case when applying (\ref{eqn:selp}), which on test nodes makes the same predictions as label propagation. To see this, note that ${\bm{M}}_{te}({\bm{P}}-{\bm{C}}){\bm{Y}}_{tr}={\bm{M}}_{te}{\bm{P}}{\bm{Y}}_{tr}$, where ${\bm{M}}_{te}={\bm{I}}_n-{\bm{M}}_{tr}$ is the diagonal mask matrix of test nodes and ${\bm{P}}{\bm{Y}}_{tr}$ is the original label propagation predictor.
\end{remark}
Although ultimately (\ref{eqn:selp}) will serve as a useful analysis tool below, it is also possible to adopt this predictor in certain practical settings. In this regard, ${\bm{C}}$ can be easily computed with the same computational complexity as is needed to approximate ${\bm{P}}$ as discussed in Section \ref{sec:lpa} (and for alternative propagation operators that are available explicitly, e.g., the normalized adjacency matrix, ${\bm{C}}$ is directly available).
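The equivalence underlying (\ref{eq:linear_simplification_loss}) and (\ref{eqn:selp}) is easy to confirm numerically. The following sketch (our own check, with random ${\bm{P}}$ and ${\bm{Y}}_{tr}$) verifies row by row that ${\bm{P}}({\bm{Y}}_{tr}-{\bm{Y}}_i)$ and $({\bm{P}}-{\bm{C}}){\bm{Y}}_{tr}$ agree on row $i$, so the self-excluded predictor never sees a node's own label:

```python
import numpy as np

rng = np.random.default_rng(0)

# Check that subtracting C = diag(P) reproduces the one-versus-all splits:
# for every node i, the row-i prediction of P (Y_tr - Y_i) equals that of
# (P - C) Y_tr.
n, c = 6, 3
P = rng.random((n, n))
Y_tr = rng.random((n, c))
C = np.diag(np.diag(P))

selp = (P - C) @ Y_tr
for i in range(n):
    Y_i = np.zeros_like(Y_tr)
    Y_i[i] = Y_tr[i]                      # matrix keeping only row i of Y_tr
    row_i = (P @ (Y_tr - Y_i))[i]
    assert np.allclose(row_i, selp[i])
print("self-excluded predictor matches the one-vs-all splits")
```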
\vspace{-4pt}
\subsection{Full Execution of the Label Trick as a Regularizer} \label{sec:label_trick_as regularization}
We are now positioned to extend the self-excluded simplification of the \labeltrick{} to full execution with arbitrary random sampling, as well as later, the reintroduction of node features. For this purpose, we first define ${\bm{Y}}_{out}={\bm{Y}}_{tr}-{\bm{Y}}_{in}$, and also rescale by a factor of $1/\alpha$ to produce $\widetilde{\bm{Y}}_{in}={\bm{Y}}_{in}/\alpha$. The latter allows us to maintain a consistent mean and variance of the predictor across different sampling probabilities.
Assuming a mean square error (MSE) loss as computed for each node via $\ell({\bm{y}},\widehat{{\bm{y}}})=||{\bm{y}}-\widehat{{\bm{y}}}||_2^2$ (later we consider categorical cross-entropy), our overall objective is to minimize
\begin{equation} \label{eq:quadratic_LP_loss}
{\mathcal{L}}({\bm{W}})=\E_{splits}\Big[\sum_{i\in{\mathcal{D}}_{out}}\ell({\bm{y}}_i, ~ [{\bm{P}}\widetilde {\bm{Y}}_{in}{\bm{W}}]_i)\Big] = \E_{splits}\left[\|{\bm{Y}}_{out}-{\bm{M}}_{out}{\bm{P}}\tilde{\bm{Y}}_{in}{\bm{W}}\|_F^2\right],
\end{equation}
where ${\bm{M}}_{out}$ is a diagonal mask matrix defined such that ${\bm{Y}}_{out} = {\bm{M}}_{out} {\bm{Y}} = {\bm{Y}}_{tr} - {\bm{Y}}_{in}$ and the random splits follow a node-wise Bernoulli distribution with parameter $\alpha$ as discussed previously. We then have the following:
\vspace{-4pt}
\begin{theorem}
\label{thm:regression}
The \labeltrick{} objective from (\ref{eq:quadratic_LP_loss}) satisfies
\begin{equation}
\label{eqn:regression}
\frac{1}{1-\alpha}\E_{splits}\left[\|{\bm{Y}}_{out}-{\bm{M}}_{out}{\bm{P}}\tilde{\bm{Y}}_{in}{\bm{W}}\|_F^2\right]
=\|{\bm{Y}}_{tr}-{\bm{M}}_{tr}({\bm{P}}-{\bm{C}}){\bm{Y}}_{tr}{\bm{W}}\|_F^2+\frac{1-\alpha}{\alpha}\|\mbox{\boldmath $\Gamma$} {\bm{W}}\|_F^2,
\end{equation}
where $\mbox{\boldmath $\Gamma$}=\big(\diag({\bm{P}}^T{\bm{P}})-{\bm{C}}^T{\bm{C}}\big)^{\frac12}{\bm{Y}}_{tr}$.
\end{theorem}
Note that $(\diag({\bm{P}}^T{\bm{P}})-{\bm{C}}^T{\bm{C}})$ is a positive semi-definite diagonal matrix, and hence its real square root will always exist. Furthermore, we can extend this analysis to include node features by incorporating the SGC-like linear predictor ${\bm{P}}{\bm{X}}{\bm{W}}_x$ such that Theorem~\ref{thm:regression} can then naturally be generalized as follows:
\vspace{-4pt}
\begin{corollary}
\label{cor:regression_x}
Under the same conditions as Theorem~\ref{thm:regression}, if we add the node feature term ${\bm{P}}{\bm{X}}{\bm{W}}_x$ to the label-based predictor from (\ref{eq:quadratic_LP_loss}), we have that
\begin{equation}
\label{eqn:regression_x}
\hspace{-2em}
\begin{aligned}
&\frac{1}{1-\alpha}\E_{splits}\left[\|{\bm{Y}}_{out}-{\bm{M}}_{out} {\bm{P}} {\bm{X}}{\bm{W}}_x- {\bm{M}}_{out}{\bm{P}}\tilde{\bm{Y}}_{in}{\bm{W}}_y\|_F^2\right]\\
&=\|{\bm{Y}}_{tr}-{\bm{M}}_{tr}{\bm{P}} {\bm{X}}{\bm{W}}_x-{\bm{M}}_{tr}({\bm{P}}-{\bm{C}}){\bm{Y}}_{tr}{\bm{W}}_y\|_F^2+\frac{1-\alpha}{\alpha}\|\mbox{\boldmath $\Gamma$} {\bm{W}}_y\|_F^2.
\end{aligned}
\end{equation}
\end{corollary}
The details of the proofs of Theorem~\ref{thm:regression} and Corollary~\ref{cor:regression_x} are provided in Appendix~\ref{sec:proof_regression}.
This then effectively leads to the more general, feature and label aware predictor
\begin{equation} \label{eq:full_predictor_model}
f_{\predictorLFP{}}({\bm{X}},{\bm{Y}}_{tr};\mbox{\boldmath $\calW$})={\bm{P}}{\bm{X}}{\bm{W}}_x+({\bm{P}}-{\bm{C}}){\bm{Y}}_{tr}{\bm{W}}_y,
\end{equation}
where $\mbox{\boldmath $\calW$}=\{{\bm{W}}_x,{\bm{W}}_y\}$. These theoretical results reveal a number of interesting properties regarding how the label trick behaves, which we summarize as follows:
\vspace{-4pt}
\begin{remark}
Although the original loss involves an expectation over random data splits that is somewhat difficult to interpret, based on (\ref{eqn:regression_x}), the \labeltrick{} can be interpreted as inducing a deterministic objective composed of two terms:
\begin{enumerate}
\item The error accrued when combining the original node features with the self-excluded label propagation predictor from (\ref{eqn:selp}) to mitigate label leakage, and
\item An additional graph-dependent regularization factor on the model weights associated with the labels that depends on $\alpha$ (more on this below).
\end{enumerate}
Moreover, we can easily verify from (\ref{eq:full_predictor_model}) that the model effectively applies the same prediction to test nodes during both training and inference, consistent with Remark \ref{rem:test_node_consistency}.
\end{remark}
\begin{remark}
Regarding the $\mbox{\boldmath $\Gamma$}$-dependent penalty term, if the graph has no edges, then there is no chance for overfitting to labels and $\big(\diag({\bm{P}}^T{\bm{P}})-{\bm{C}}^T{\bm{C}}\big)^{\frac12} = \mathbf 0$ shuts off the regularization. In contrast, for a fully connected graph, the value of $\mbox{\boldmath $\Gamma$}$ can be significantly larger, which can potentially provide a beneficial regularization effect. Additionally, given that $\mbox{\boldmath $\Gamma$}$ also grows larger with graph size (assuming edges grow as well), $\|\mbox{\boldmath $\Gamma$} {\bm{W}}_y\|_F^2$ scales proportionately with the data fitting term, which is generally expected to increase linearly with the number of nodes. Hence (\ref{eqn:regression_x}) is naturally balanced to problem size.
\end{remark}
\begin{remark} \label{rem:regularization_remark}
The splitting probability $\alpha$ in (\ref{eqn:regression_x}) controls the regularization strength.
Specifically, when $\alpha$ tends to zero, fewer labels are used as input to predict a large number of output labels, which may be less reliable, and corresponds with adding a larger regularization effect. Additionally, it means placing more emphasis on the original node features and downplaying the importance of the labels as input in (\ref{eqn:regression_x}), which explains the addition of a penalty on ${\bm{W}}_y$.
Conversely, when $\alpha$ tends to one, more labels are used as input to predict the output and the model approaches the deterministic self-excluded label trick. Specifically, for random splits where $|{\mathcal{D}}_{out}| = 1$, the loss mimics one random term from the self-excluded label trick summation, while for the splits when $|{\mathcal{D}}_{out}|=0$, the contribution to the expectation is zero and therefore does not influence the loss. Splits with $|{\mathcal{D}}_{out}| > 1$ will have very low probability. So this situation naturally corresponds with canceling out the regularization term. Later in Section \ref{sec:nonlinear_extensions} we will extend these observations to general nonlinear models.
\end{remark}
\begin{remark}
The regularization term is also loosely analogous to the one associated with dropout \citep{srivastava2014dropout} for the case of linear regression, where the splitting probability $\alpha$ is similar to the \textit{keep probability} of dropout. A smaller $\alpha$ means that fewer labels are used as input, implying stronger regularization. While this is an interesting association, there remain critical differences, such as the natural emergence of ${\bm{C}}$ which complicates the problem considerably; see the proof for further details.
\end{remark}
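As a sanity check, the identity in Theorem~\ref{thm:regression} can be verified exactly on a toy problem by enumerating all $2^m$ splits rather than sampling them. The sketch below is our own verification, with every node treated as a training node so that ${\bm{M}}_{tr}={\bm{I}}_n$; it reproduces the identity up to floating-point round-off:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Exact check of the label-trick identity on a toy problem where every node
# is a training node, enumerating all 2^m input/output splits.
m, c, alpha = 4, 2, 0.6
P = rng.random((m, m))
Y = rng.random((m, c))
W = rng.random((c, c))
C = np.diag(np.diag(P))

# Left-hand side: exact expectation over Bernoulli(alpha) splits.
lhs = 0.0
for bits in itertools.product([0, 1], repeat=m):
    b = np.array(bits, dtype=float)
    prob = np.prod(np.where(b == 1, alpha, 1 - alpha))
    Y_in = b[:, None] * Y
    Y_out = Y - Y_in
    M_out = np.diag(1 - b)
    resid = Y_out - M_out @ P @ (Y_in / alpha) @ W
    lhs += prob * np.sum(resid ** 2)
lhs /= (1 - alpha)

# Right-hand side: deterministic fit term plus the graph-dependent penalty.
fit = np.sum((Y - (P - C) @ Y @ W) ** 2)
D = np.diag(P.T @ P) - np.diag(C.T @ C)          # per-node penalty weights
reg = (1 - alpha) / alpha * np.sum(D[:, None] * (Y @ W) ** 2)
print(abs(lhs - (fit + reg)))                    # ~0 up to round-off
```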
We now turn to the categorical cross-entropy loss, which is more commonly applied to node classification problems. While we can no longer compute closed-form simplifications as we could with MSE, it is nonetheless possible to show that the resulting objective when using the original \labeltrick{} is an upper bound on the analogous objective from the self-excluded label trick. More specifically, we have the following (see Appendix~\ref{sec:proof_classification} for the proof):
\vspace{-4pt}
\begin{theorem}
\label{thm:classification}
Under the same conditions as in Theorem \ref{thm:regression} and Corollary \ref{cor:regression_x}, if we replace the MSE loss with categorical cross-entropy, we obtain the bound
\begin{equation}
\begin{aligned}
&\frac{1}{1-\alpha}\E_{splits}\left[\textnormal{CrossEntropy}_{{\mathcal{D}}_{out}}({\bm{Y}}_{out},{\bm{P}}{\bm{X}}{\bm{W}}_x+{\bm{P}}\tilde{\bm{Y}}_{in}{\bm{W}}_y)\right]
\\
&\ge ~~ \textnormal{CrossEntropy}_{{\mathcal{D}}_{tr}}({\bm{Y}}_{tr},{\bm{P}}{\bm{X}}{\bm{W}}_x+({\bm{P}}-{\bm{C}}){\bm{Y}}_{tr}{\bm{W}}_y),
\end{aligned}
\end{equation}
where $\textnormal{CrossEntropy}_S( \cdot , \cdot )$ denotes the sum of row-wise cross-entropy of $S$.
\end{theorem}
\subsection{Nonlinear Extensions} \label{sec:nonlinear_extensions}
When we move towards more complex GNN models with arbitrary nonlinear interactions, it is no longer feasible to establish explicit, deterministic functional equivalents of the label trick for general $\alpha$. However, we can still at least elucidate the situation at the two extremes where $\alpha \rightarrow 0$ or $\alpha \rightarrow 1$ alluded to in Remark \ref{rem:regularization_remark}. Regarding the former, clearly with probability approaching one, ${\bm{Y}}_{in}$ will always equal zero and hence the model will default to a regular GNN, effectively involving no label information as an input. In contrast, for the latter we provide the following:
\begin{theorem} \label{thm:nonlinear_limiting_case}
Let $f_{GNN}({\bm{X}},{\bm{Y}};\mbox{\boldmath $\calW$})$ denote an arbitrary GNN model with concatenated inputs ${\bm{X}}$ and ${\bm{Y}}$, and $\ell({\bm{y}},\hat{{\bm{y}}})$ a training loss such that $\sum_{i\in {\mathcal{D}}_{out}}\ell\big({\bm{y}}_i, ~ f_{GNN}[{\bm{X}},{\bm{Y}}_{in};\mbox{\boldmath $\calW$}]_i\big)$ is bounded for all ${\mathcal{D}}_{out}$. It then follows that
\begin{equation}
\label{eqn:objective_wlpa_general}
\lim_{\alpha \rightarrow 1} \left\{ \frac{1}{1-\alpha} \E_{splits}\Big[\sum_{i\in {\mathcal{D}}_{out}}\ell\big({\bm{y}}_i, ~ f_{GNN}[{\bm{X}},{\bm{Y}}_{in};\mbox{\boldmath $\calW$}]_i\big)\Big] \right\} = \sum_{i=1}^m \ell\big({\bm{y}}_i, ~f_{GNN}[{\bm{X}},{\bm{Y}}_{tr}-{\bm{Y}}_{i};\mbox{\boldmath $\calW$}]_i \big).
\end{equation}
\end{theorem}
The proof is given in Appendix~\ref{app:nonlinear}. This result can be viewed as a natural generalization of (\ref{eq:linear_simplification_loss}), with one minor caveat: we can no longer guarantee that the predictor implicitly applied to test nodes during training will exactly match the explicit function $f_{GNN}[{\bm{X}},{\bm{Y}}_{tr};\mbox{\boldmath $\calW$}]$ applied at inference time. Indeed, each $f_{GNN}[{\bm{X}},{\bm{Y}}_{tr}-{\bm{Y}}_{i};\mbox{\boldmath $\calW$}]_i$ will generally produce slightly different predictions for all test nodes depending on $i$ unless $f_{GNN}$ is linear. But in practice this is unlikely to be consequential.
\section{Broader Use Cases of the Label Trick} \label{sec:broader_use}
Although the \labeltrick{} has already been integrated within a wide variety of GNN pipelines, in this section we introduce three novel use-cases motivated by our analysis.
\vspace{-4pt}
\subsection{Trainable Label Propagation}
In Sections \ref{sec:label_trick_alone} and \ref{sec:selp} we excluded the use of node features to simplify the exposition of the \labeltrick{}; however, analytical points aside, the presented methodology can also be useful in and of itself for facilitating a simple, trainable label propagation baseline.
The original label propagation algorithm from \citep{zhou2004learning} is motivated as a parameter-free, deterministic mapping from a training label set to predictions across the entire graph. However, clearly the randomized \labeltrick{} from Section \ref{sec:label_trick_alone}, or its deterministic simplification from Section \ref{sec:selp} can be adopted to learn a label propagation weight matrix ${\bm{W}}$. The latter represents a reasonable enhancement that can potentially help to compensate for interrelated class labels that may arise in multi-label settings. In contrast, the original label propagation algorithm implicitly assumes that different classes are independent. Beyond this, other entry points for adding trainable weights are also feasible such as node-dependent weights, nonlinear weighted mappings, step-wise weights for heterophily graphs, or weights for different node types for heterogeneous graphs.
\vspace{-4pt}
\subsection{Deterministic Application to GNNs with Linear Propagation Layers}
\vspace{-4pt}
Many prevailing GNN models follow the architecture of message passing neural networks (MPNNs).
Among these are efficient variants that share node embeddings only through linear propagation layers. Representative examples include SGC \citep{wu2019simplifying}, SIGN \citep{frasca2020sign} and TWIRLS \citep{yang2021graph}.
We now show how to apply the deterministic label trick algorithm as introduced in Section~\ref{sec:selp} with the aforementioned GNN methods.
We begin with a linear SGC model.
In this case, we can compute $({\bm{P}}-{\bm{C}}){\bm{Y}}_{tr}$ beforehand as the self-excluded label information and then train the resulting features individually without graph information, while avoiding label leakage problems. And if desired, we can also concatenate with the original node features. In this way, we have an algorithm that minimizes an energy function involving both labels and input features.
Additionally, for more complex situations where the propagation layers are not at the beginning of the model, the predictor can be more complicated such as ~~ $f({\bm{X}},{\bm{Y}};\mbox{\boldmath $\calW$}) ~ =$
\vspace{-2pt}
\begin{equation}
\label{eqn:linear_propagation}
\begin{aligned}
h_1\Big(\sum_{i}\big[\mbox{\boldmath $\calP$} h_0([{\bm{X}},{\bm{Y}}-{\bm{Y}}_i])\big]_i\Big)
=h_1(\mbox{\boldmath $\calP$} h_0([{\bm{X}},{\bm{Y}}])-\mbox{\boldmath $\calC$} h_0([{\bm{X}},{\bm{Y}}])+\mbox{\boldmath $\calC$} h_0([{\bm{X}},\mbox{\boldmath $m$}{0}])),
\end{aligned}
\end{equation}
where $[\cdot,\cdot]$ denotes the concatenation operation, $\mbox{\boldmath $\calP$}=[{\bm{P}}_0,{\bm{P}}_1,\ldots,{\bm{P}}_{k-1}]^T$ is the integrated propagation matrix, $\mbox{\boldmath $\calC$}=[\diag({\bm{P}}_0),\diag({\bm{P}}_1),\ldots,\diag({\bm{P}}_{k-1})]^T$, $h_0$ and $h_1$ can be arbitrary node-independent functions, typically multi-layer perceptrons (MLPs).
\vspace{-4pt}
\subsection{Trainable Correct and Smooth}
\label{sec:cns}
Correct and Smooth (C\&S) \citep{huang2020combining} is a simple yet powerful method which consists of multiple stages.
A prediction matrix $\tilde {\bm{Y}}$ is first obtained whose rows correspond with a prediction from a shallow node-wise model. $\tilde {\bm{Y}}$ is subsequently modified via two post-processing steps, \textit{correct} and \textit{smooth}, using two propagation matrices $\{{\bm{P}}_c,{\bm{P}}_s\}$, where typically ${\bm{P}}_i=(1-\lambda_i)({\bm{I}}_n-\lambda_i {\bm{S}})^{-1},i\in\{c,s\}$.
For the former, we compute the difference between the ground truth and predictions on the training set as ${\bm{E}}={\bm{Y}}_{tr}-\tilde {\bm{Y}}_{tr}$ and then form $\tilde {\bm{E}}=\gamma({\bm{P}}_c{\bm{E}})$ as the \emph{correction matrix}, where $\gamma(\cdot)$ is some row-independent scaling function. The final \textit{smoothed} prediction is formed as $f_{C\&S}(\tilde {\bm{Y}})={\bm{P}}_s({\bm{Y}}_{tr}+{\bm{M}}_{te}(\tilde {\bm{Y}}+\tilde {\bm{E}}))$. This formulation is not directly amenable to end-to-end training because of label leakage issues introduced through ${\bm{Y}}_{tr}$.
In contrast, with the \labeltrick{}, we can equip C\&S with trainable weights to further boost performance.
To this end, we first split the training dataset into ${\mathcal{D}}_{in}$ and ${\mathcal{D}}_{out}$ as before.
Then we multiply $\tilde {\bm{E}}_{in} =\gamma({\bm{P}}_c({\bm{Y}}_{in}-\tilde {\bm{Y}}_{in}))$, the correction matrix with respect to ${\bm{Y}}_{in}$, with a weight matrix ${\bm{W}}_c$. We also multiply the result after smoothing with another weight matrix ${\bm{W}}_s$.
Thus the predictor under this split is
\begin{equation}
\label{eqn:cns}
\begin{aligned}
f_{TC\&S}({\bm{Y}}_{in},\tilde {\bm{Y}}; \mbox{\boldmath $\calW$})&={\bm{P}}_s({\bm{Y}}_{in}+({\bm{M}}_{te}+{\bm{M}}_{out})(\tilde {\bm{Y}}+\tilde {\bm{E}}_{in} {\bm{W}}_c)){\bm{W}}_s\\
&={\bm{P}}_s({\bm{Y}}_{in}+({\bm{M}}_{te}+{\bm{M}}_{out})\tilde {\bm{Y}}){\bm{W}}_s+{\bm{P}}_s({\bm{M}}_{te}+{\bm{M}}_{out})\tilde {\bm{E}}_{in} {\bm{W}}_c{\bm{W}}_s\\
&={\bm{P}}_s({\bm{Y}}_{in}+({\bm{M}}_{te}+{\bm{M}}_{out})\tilde {\bm{Y}}){\bm{W}}_s+{\bm{P}}_s({\bm{M}}_{te}+{\bm{M}}_{out})\tilde {\bm{E}}_{in} \hat {\bm{W}}_c,
\end{aligned}
\end{equation}
where $\mbox{\boldmath $\calW$}=\{\hat {\bm{W}}_c,{\bm{W}}_s\}$ and $\hat {\bm{W}}_c$ is the reparameterization of ${\bm{W}}_c{\bm{W}}_s$. The objective function for optimizing $\mbox{\boldmath $\calW$}$ is
\begin{equation}
\label{eqn:cns-loss}
\begin{aligned}
{\mathcal{L}}({\bm{W}})&=\E_{splits}\Big[\sum_{i\in {\mathcal{D}}_{out}}\ell({\bm{y}}_i,f_{TC\&S}[{\bm{Y}}_{in},\tilde{\bm{Y}};\mbox{\boldmath $\calW$}]_i)\Big].
\end{aligned}
\end{equation}
Since this objective resolves the label leakage issue, it allows end-to-end training of C\&S with gradients passing through both the neural network layers for computing $\tilde{\bm{Y}}$ and the C\&S steps.
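A forward pass of the trainable C\&S predictor in (\ref{eqn:cns}), for a given split, can be sketched as follows. This is an illustrative NumPy translation with $\gamma$ taken as the identity; the masks and weights are the symbols defined above:

```python
import numpy as np

def trainable_cs(P_s, P_c, Y_in, Y_tilde, M_in, W_c_hat, W_s):
    """Forward pass of trainable C&S for one D_in/D_out split.

    M_in is a 0/1 diagonal selecting D_in; its complement plays the role
    of (M_te + M_out), the nodes whose labels are held out.  gamma is
    taken as the identity for brevity.
    """
    M_rest = np.eye(len(Y_in)) - M_in
    E_in = P_c @ (Y_in - M_in @ Y_tilde)          # correction residual on D_in
    return (P_s @ (Y_in + M_rest @ Y_tilde)) @ W_s \
         + (P_s @ (M_rest @ E_in)) @ W_c_hat
```

With $\hat{\bm{W}}_c={\bm{W}}_s={\bm{I}}$ this reduces to one step of vanilla C\&S under the split, so the trainable version strictly generalizes the original.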
At times, however, this approach may have disadvantages, including potential overfitting problems or inefficiencies due to computationally expensive backpropagation. Consequently, an alternative option is to preserve the two-stage training. In this situation, the base prediction in the first stage is the same as the original algorithm; however, we can nonetheless still train the C\&S module as a post-processing step, with parameters as in (\ref{eqn:cns}).
\section{Experiments}
\vspace{-4pt}
As mentioned previously, the effectiveness of the \labeltrick{} in improving GNN performance has already been demonstrated in prior work, and therefore, our goal here is not to repeat these efforts. Instead, in this section we will focus on conducting experiments that complement our analysis from Section \ref{sec:analysis} and showcase the broader application scenarios from Section \ref{sec:broader_use}.
The label trick can be implemented in two ways: the first randomly splits the training nodes, while the second is the simpler version introduced herein, which uses the deterministic one-versus-all splitting strategy and requires no random sampling.
To differentiate the two versions, we denote the stochastic label trick with random splits by \textbf{label trick (S)}, and the deterministic one by \textbf{label trick (D)}.
The latter is an efficient approximation of the former at high splitting probability $\alpha$, which is advantageous in cases where a high $\alpha$ slows down training.
Accordingly, we conduct experiments with both, thus supporting our analysis with comparisons involving the two versions.
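For reference, one iteration of the stochastic splitting used by label trick (S) amounts to the following (a minimal sketch; the returned quantities would then be passed to the model and the loss):

```python
import numpy as np

def label_trick_S_step(rng, train_idx, Y_tr, alpha):
    """One random split for label trick (S): each training node lands in
    D_in with probability alpha; only D_in labels are fed to the model,
    and the loss is evaluated on D_out."""
    in_mask = rng.random(len(train_idx)) < alpha
    d_in  = [i for i, m in zip(train_idx, in_mask) if m]
    d_out = [i for i, m in zip(train_idx, in_mask) if not m]
    Y_in = np.zeros_like(Y_tr)
    Y_in[d_in] = Y_tr[d_in]
    return Y_in, d_out
```

At $\alpha\rightarrow 0$ this degenerates to a regular GNN input (all labels masked), while at $\alpha\rightarrow 1$ almost every label is fed in, the regime approximated by the deterministic one-versus-all variant.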
We use four relatively large datasets for evaluation, namely Cora-full, Pubmed \citep{DBLP:journals/aim/SenNBGGE08}, ogbn-arxiv and ogbn-products \citep{hu2020open}.
For Cora-full and Pubmed, we randomly split the nodes into training, validation, and test datasets with the ratio of 6:2:2 using different random seeds.
For ogbn-arxiv and ogbn-products, we adopt the standard split from OGB \citep{hu2020open}.
We report the average classification accuracy and standard deviation after 10 runs with different random seeds, and these are the results on the test dataset when not otherwise specified. See Appendix~\ref{app:experiments} for further implementation details.
\vspace{-4pt}
\subsection{Trainable Label Propagation}
\vspace{-4pt}
\label{subsec:trainablelp}
We first investigate the performance of applying the \labeltrick{} to label propagation as introduced in (\ref{eq:label_trick}) in the absence of features, and compare it with the original label propagation algorithm.
Table~\ref{table:trainable_label_propagation} shows that the trainable weights applied to the label propagation algorithm can boost the performance consistently. Given the notable simplicity of label propagation, this represents a useful enhancement.
\vspace{-4pt}
\begin{table*}[htbp]
\begin{minipage}[ht]{0.58\linewidth}
\begin{center}
\caption{Accuracy results (\%) of label propagation and trainable label propagation.}
\vspace{-4pt}
\label{table:trainable_label_propagation}
\resizebox{\linewidth}{!}{
\begin{tabular}{ccc}
\toprule
\textbf{Method} & \textbf{Label Propagation} & \textbf{Trainable Label Propagation} \\
\midrule
Cora-full & 66.44 ± 0.93 & \textbf{67.23 ± 0.66}\\
Pubmed & 83.45 ± 0.63 & \textbf{83.52 ± 0.59} \\
Arxiv & 67.11 ± 0.00 & \textbf{68.42 ± 0.01} \\
Products & 74.24 ± 0.00 & \textbf{75.61 ± 0.21} \\
\bottomrule
\end{tabular}
}
\end{center}
\end{minipage}
\hfill
\begin{minipage}[ht]{0.40\linewidth}
\begin{center}
\caption{MSE on node regression tasks with GBDT base model and C\&S.}
\vspace{-4pt}
\label{table:boosting}
\resizebox{\linewidth}{!}{
\begin{tabular}{cccc}
\toprule
\textbf{Method} & \textbf{C\&S} & \textbf{Trainable C\&S} \\
\midrule
House & 0.51 ± 0.01 & \textbf{0.45 ± 0.01} \\
County & 1.42 ± 0.14 & \textbf{1.13 ± 0.09} \\
VK & 7.02 ± 0.20 & \textbf{6.95 ± 0.22} \\
Avazu & \textbf{0.106 ± 0.014} & \textbf{0.106 ± 0.014} \\
\bottomrule
\end{tabular}
}
\end{center}
\end{minipage}
\vspace{-8pt}
\end{table*}
\subsection{Deterministic Label Trick applied to GNNs with Linear Layers}
\vspace{-4pt}
We also test the deterministic \labeltrick{} by applying it to different GNN architectures involving linear propagation layers along with (\ref{eqn:linear_propagation}).
Due to the considerable computational effort required to produce ${\bm{P}}$ and ${\bm{C}}$ with large propagation steps for larger graphs, we only conduct tests on the Cora-full, Pubmed and ogbn-arxiv datasets, where the results are presented in Table~\ref{tab:feat-prop}.
\begin{table}[htbp]
\begin{center}
\caption{Accuracy results (\%) with/without \labeltrick{} (D).}
\vspace{-4pt}
\label{tab:feat-prop}
\resizebox{1.00\linewidth}{!}{
\begin{tabular}{ccccccc}
\toprule
\textbf{Method} & \multicolumn{2}{c}{\textbf{SGC}} & \multicolumn{2}{c}{\textbf{SIGN}} & \multicolumn{2}{c}{\textbf{TWIRLS}}\\
label trick (D) &\xmark&\cmark&\xmark&\cmark&\xmark&\cmark\\
\midrule
Cora-full & \textbf{65.87 ± 0.61} & 65.81 ± 0.69 & \textbf{68.54 ± 0.76} & 68.44 ± 0.88 & 70.36 ± 0.51 & \textbf{70.40 ± 0.71} \\
Pubmed & 85.02 ± 0.43 & \textbf{85.23 ± 0.57} & 87.94 ± 0.52 & \textbf{88.09 ± 0.59} & 89.81 ± 0.56 & \textbf{90.08 ± 0.52} \\
Arxiv & 69.07 ± 0.01 & \textbf{70.22 ± 0.03} & 69.97 ± 0.16 & \textbf{70.98 ± 0.21} & 72.93 ± 0.19 & \textbf{73.22 ± 0.10} \\
\bottomrule
\end{tabular}
}
\end{center}
\vspace{-4pt}
\end{table}
From these results we observe that on Pubmed and ogbn-arxiv, the deterministic label trick boosts the performance consistently on different models, while on Cora-full, it performs comparably. This is reasonable given that the training accuracy on Cora-full (not shown) is close to 100\%, in which case the model does not benefit significantly from the ground-truth training labels, as the label information is already adequately embedded in the model.
\vspace{-4pt}
\subsection{Effect of Splitting Probability}
In terms of the effect of the splitting probability $\alpha$, we compare the accuracy results of a linear model and a three-layer GNN on ogbn-arxiv as shown in Figure~\ref{fig:varying_alpha}.
For the linear model, $\alpha$ serves as a regularization coefficient. More specifically, when $\alpha$ tends to zero, the model converges to one without the label trick, while when $\alpha$ tends to one, it converges to the case with the self-excluded label trick. For the nonlinear GNN, $\alpha$ has a similar effect as predicted by theory for the linear models. As $\alpha$ decreases, the model converges to that without the label trick.
Moreover, a larger $\alpha$ is preferable for linear models in contrast to GNNs, probably because a simple model does not require strong regularization.
\vspace{-4pt}
\begin{figure}
\begin{tabular}{cc}
\includegraphics[scale=0.45]{lp.eps} &
\includegraphics[scale=0.45]{gnn.eps}
\end{tabular}
\caption{\label{fig:varying_alpha}Accuracy results on validation dataset of linear propagation of features and labels (left) and GNNs (right) varying $\alpha$.}
\vspace{-8pt}
\end{figure}
\subsection{Trainable Correct and Smooth}
\vspace{-4pt}
We also verify the effectiveness of our approach when applied to Correct and Smooth (C\&S), as described in Section~\ref{sec:cns}.
Because end-to-end training substantially alters the model, a direct comparison with vanilla C\&S would not constitute a clean ablation. Therefore, in this experiment, we train C\&S as a post-processing step.
In Table~\ref{table:correct_and_smooth}, we report the test and validation accuracy, showing that our method outperforms the vanilla C\&S on Cora-full and Pubmed.
For ogbn-arxiv and ogbn-products, trainable C\&S performs better in terms of validation accuracy while being comparable to vanilla C\&S in terms of test accuracy.
\begin{table}[htbp]
\begin{center}
\caption{Test and validation accuracy (\%) of C\&S and trainable C\&S with MLP as base predictor. Validation accuracy reported in parentheses.}
\label{table:correct_and_smooth}
\resizebox{1.00\linewidth}{!}{
\begin{tabular}{cccc}
\toprule
\textbf{Method} & \textbf{MLP} & \textbf{MLP+C\&S} & \textbf{MLP+Trainable C\&S}\\
\midrule
Cora-full & 60.12 ± 0.29 (61.09 ± 0.39) & 66.95 ± 1.46 (68.26 ± 1.24) & \textbf{67.89 ± 1.37} (\textbf{69.09 ± 1.25}) \\
Pubmed & 88.72 ± 0.34 (89.25 ± 0.26) & 89.12 ± 0.27 (89.45 ± 0.17) & \textbf{89.76 ± 0.17} (\textbf{89.62 ± 0.18}) \\
Arxiv & 71.48 ± 0.15 (72.95 ± 0.05) & \textbf{73.05 ± 0.35} (74.01 ± 0.17) & 73.03 ± 0.18 (\textbf{74.44 ± 0.08}) \\
Products & 67.60 ± 0.15 (87.07 ± 0.05) & \textbf{83.16 ± 0.13} (91.70 ± 0.06) & 83.10 ± 0.15 (\textbf{91.99 ± 0.07}) \\
\bottomrule
\end{tabular}
}
\end{center}
\vspace{-4pt}
\end{table}
In principle, C\&S can be applied to any base predictor model. To accommodate tabular node features, we choose to use gradient boosted decision trees (GBDT), which can be trained end-to-end using methods such as \citep{Anon2021,ivanov2021boost} combined with the \labeltrick{} to avoid data leakage issues as we have discussed. For evaluation, we adopt the four tabular node regression data sets from \citep{ivanov2021boost} and train using the approach from \citep{Anon2021}. Results are shown in Table \ref{table:boosting}, where it is clear that the \labeltrick{} can significantly improve performance.
\vspace{-8pt}
\section{Conclusion}
\vspace{-4pt}
In this work we closely examine, from a theoretical perspective, the recently emerged label trick, which enables the parallel propagation of labels and features and benefits various SOTA GNN architectures, and yet has thus far not been subject to rigorous analysis.
In filling this gap, we first introduce a deterministic self-excluded simplification of the label trick, and then prove that the full stochastic version can be regarded as introducing a regularization effect on the self-excluded label weights. Beyond this, we also discuss broader applications of the label trick with respect to:
\begin{enumerate}
\item Facilitating the introduction of trainable weights within graph-based methods that were previously either parameter-free (e.g., label propagation) or not end-to-end (e.g., C\&S), and
\item Eliminating the effects of randomness by incorporating self-excluded propagation within GNNs composed of linear propagation layers.
\end{enumerate}
We verify these applications and evaluate the performance gains against existing approaches over multiple benchmark datasets.
\vspace{-4pt}
\vskip 0.2in
This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0003856.
M.F.K. and S.M.V. acknowledge support from the UK EPSRC grant EP/P015794/1 and the Royal Society. S.M.V. is a Royal Society University Research Fellow.
\section{\label{sec:conclusions}Conclusions}
In summary, spatially-integrated XRTS spectra for 1-D LILAC simulated conditions of low- and high-adiabat, DT cryogenic implosions have been calculated at two-thirds convergence.
Markov-Chain Monte Carlo analysis was performed for two different scattering setups.
Information on the compressed shell conditions was obtained as it has been shown to be possible to use the spectral resolution in a spatially integrated measurement to discriminate between different regions in the plasma.
Fielding two detectors, one in the collective and one in the noncollective scattering regime, produced the best agreement with the compressed shell mass-averaged parameters from the simulation.
This technique can be used to resolve both the low- and high-adiabat implosions.
In the future, similar analysis will be performed on the conditions at stagnation as well as investigations into 2-D and 3-D simulations using DRACO \cite{Radha05} and ASTER \cite{Igumenshchev16}.
\section{\label{sec:statement}AIP Publishing Data Sharing Policy}
The data that supports the findings of this study are available from the corresponding author, H.P., upon reasonable request.
\section{\label{sec:discussion}Discussion}
There is good agreement between the mass-averaged simulation parameter values and the MCMC distributions.
The forward scattering fits tend to converge around lower densities, higher temperatures and broader ionisations. This results in either broader or slightly skewed distributions on the DT parameters.
This differing convergence occurs because the ratio between the source FWHM and the width of the inelastic scattering feature in the forward scattering case is very small, particularly for the $2\,\si{keV}$ probe. It would therefore be possible to obtain information on the compressed DT conditions solely using a backward scattering detector. In order to improve the fit in the forward scattering regime, either a narrower bandwidth or a higher energy source should be used.
The forward scattering spectra have not been used to contribute to the MCMC distributions in Figure \ref{SINGLE_MCMC_matrix}. As the widths of the inelastic scattering features in the forward scattering regime are small relative to the source FWHM, convergence around values representative of the compressed DT shell under these conditions are highly improbable. This exclusion of the forward scattering results in much narrower parameter distributions, particularly for the electron temperature and ionisation.
One reason for the tighter fits is that the low weighting on the CH plasma in the full scattering fits leads MCMC to assume the conditions from the uniform DT plasma region generate both the inelastic and the elastic scattering features. However, from Figures \ref{2.8_conditions} and \ref{8.0_conditions} we can see the Rayleigh scattering is dominated by the conditions in the CH coronal plasma. Therefore, for the electron temperature and ionisation, better convergence on the compressed-shell DT conditions is seen when the fitting to the elastic scattering feature is ignored.
In contrast, the full spectrum MCMC analysis produces more statistically accurate results on the electron density. This is due to the inclusion of the predominately collective scattering forward detector.
Inelastic scattering in the collective regime is very sensitive to the electron plasma frequency (which determines its overall shift),
meaning the collective scattering case is best used to determine the electron density. However, it should also be noted that removal of the elastic scattering feature leads to uncertainty as to the magnitude of the inelastic shift. This operation would therefore be difficult to justify under experimental conditions.
Overall the best agreement between the MCMC analysis on the synthetic scattering data and the simulations is obtained with the full spectrum analysis using a $3.5\,\si{keV}$ probe.
Better agreement may be achieved when focusing on the inelastic scattering if a narrower bandwidth probe beam could be used, meaning the forward scattering does not need to be omitted from the analysis.
In fact, this would be feasible with a free-electron laser \cite{Fletcher2015}.
\section{\label{sec:setup}Proposed Experimental Setup}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{./Figures/scat_fig_with_detectors}
\caption{The 3D inferred temperature profile from Spect3D using the 1D simulation data produced by the LILAC code. Schematic of the scattering events, recorded on the detector by SPECT3D, from different zones throughout the implosion are shown. The scattering geometry is demonstrative and not drawn to scale.}
\label{3d_implosion}
\end{figure}
XRTS is a powerful diagnostic tool for determining the conditions in plasmas where the critical density, $n_c=\epsilon_0m_e\omega^2/e^2$ (where $m_e$ is the mass of an electron, $e$ is the electron charge, $\epsilon_0$ is the electric constant and $\omega$ is the frequency of the laser drive),
exceeds what can be probed by any optical source. The first consideration for an experimental setup, is the power required for the X-ray probe in order to produce a scattering signal that can be observed above background noise. The total number of photons scattered into a detector, $N_d$, can be estimated as \cite{Glenzer09}
\begin{eqnarray}
\label{detected_photons}
N_{d}
= &&\,
\left(\frac{E_L}{h\nu}\eta_x\right)\left(\frac{\Omega_{\mathrm{plasma}}}{4\pi}\eta_{\mathrm{att}}\right)\left[\frac{n_e\sigma_{\mathrm{Th}}\ell}{\left(1 + \alpha\right)^2}\right]\nonumber\\
&&\times
\left(\frac{\Omega_{\mathrm{det}}}{4\pi}\eta_d\right)\,,
\end{eqnarray}
where $E_L$ is the probe laser energy, $\eta_x$ is the conversion efficiency from the laser energy into the probe X-rays, $\eta_{\mathrm{att}}$ is the attenuation of the probe X-rays through the dense plasma, $\Omega_{\mathrm{plasma}}$ and $\Omega_{\mathrm{det}}$ are the solid angles subtended by the plasma and the detector, respectively, $n_e$ is the electron density, $\alpha$ is the scattering parameter, $\sigma_{\mathrm{Th}}$ is the Thomson scattering cross-section, $\ell$ is the path length of the photons through the plasma, and $\eta_d$ is the detector efficiency.
For the plasma conditions investigated here, the scattering fraction, $n_e\sigma_{\mathrm{Th}}\ell$, is approximately equal to $10^{-4}$, where we have taken representative values for the compressed shell to be $n_e\sim 10^{23}\,\,\si{cm^{-3}}$ and $\ell=50\,\,\si{\mu m}$. This small scattering fraction makes fielding XRTS challenging since the signal can easily be
swamped by significant self-emission from the plasma.
In this feasibility study we show that a probe laser energy of $1\,\si{kJ}$ is required, as will be discussed in detail below.
A key benefit of fielding XRTS as a plasma diagnostic, is that XRTS can be split into two scattering regimes, the collective and the noncollective, as determined by the scattering parameter,
\begin{equation}
\label{scattering_parameter}
\alpha
=
\frac{1}{k\lambda_S}\,,
\end{equation}
where $k$ is the scattering vector, and $\lambda_S$ is the screening length.
In the noncollective regime, the incoming wave `probes' through the screening sphere and the scattering spectrum therefore reflects the electron velocity distribution.
In contrast, the collective scattering regime reflects the collective motion of the electrons.
Designing an experiment where both regimes can be recorded can reduce the error on the inferred plasma parameters.
To model the X-ray emissivity, a $1\,\si{kJ}$ laser with a $1\,\si{ps}$ pulse length and a source diameter of $100\,\si{\mu m}$ was used to produce a Gaussian X-ray source, with a FWHM of $10\,\si{eV}$, $4.5\,\si{cm}$ away from the imploding target, taking a conservative estimate of $\eta_x=0.01\%$.
Two $12.25\,\si{cm^2}$ charge-coupled device (CCD) detectors were used to collect spectrally resolved radiation. The scattering geometry is shown in Figure \ref{3d_implosion}. The detectors were placed at a distance of $2.4\,\si{m}$ away from the plasma and at scattering angles of $\theta_F = 40\si{^{\circ}}$ and $\theta_B = 120\si{^{\circ}}$.
The two targets chosen for this investigation are shown in Figures \ref{2.8_conditions} and \ref{8.0_conditions} with adiabats of $2.8$ and $8.0$ respectively.
\begin{figure*}[t]
\centering
\includegraphics[width=0.995\textwidth]{./Figures/98532_full_profile}
\caption{\textbf{(a)} Simulated target design, with an adiabat of $2.8$, fired with laser profile shown in \textbf{(b)}.
\textbf{(c)} Density and electron temperature conditions in the ICF implosion across the shock wave at two-thirds compression, $t=2215\,\si{ps}$, as determined by the LILAC code for the target.
The scattering contributions from the DT in the unshocked fuel, compressed shell and coronal plasma has been isolated and compared to the fully integrated spectrum.
For a $2\,\si{keV}$ probe, the contribution from each region of the plasma to the overall scattering spectrum is shown for both the forward ($40\si{^{\circ}}$), \textbf{(d)}, and backward ($120\si{^{\circ}}$) scattering regime, \textbf{(e)}. The same breakdown of the plasma has been performed with a $3.5\,\si{keV}$ energy probe in \textbf{(f)} and \textbf{(g)}.}
\label{2.8_conditions}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=0.995\textwidth]{./Figures/80802_full_profile}
\caption{As with figure \ref{2.8_conditions} but with an ICF capsule with an adiabat of $8.0$ and at $t=1901\,\si{ps}$.
}
\label{8.0_conditions}
\end{figure*}
Two experimental setups are considered for this paper, one with an X-ray probe energy of $2\,\si{keV}$ and the other using a $3.5\,\si{keV}$ probe.
The scattering regime recorded by each detector in each setup is shown in Figure \ref{fig_params}. It should be noted that the values for the $\alpha$ parameter shown in the figure are calculated for the densest region in the compressed DT shell, and therefore not representative of the scattering from the ICF capsule as a whole. To determine the scattering signals from each region of the implosion, the fully integrated scattering spectra must be determined.
\begin{figure}[t]
\centering
\includegraphics[width=0.455\textwidth]{./Figures/scat_params}
\caption{Scattering parameters, $\alpha$, as calculated for the densest zone in the compressed DT shell for each adiabat, scattering angle and probe energy. A dashed line is shown at $\alpha=1$ which is the approximate separation of collective, $\alpha>1$, and noncollective, $\alpha\leq1$, scattering.}
\label{fig_params}
\end{figure}
The plasmon frequency shift for the high adiabat target is $\sim27\,\si{eV}$, which increases to $\sim30\,\si{eV}$ for the low adiabat target. In order to distinguish this plasmon scattering, a narrow band X-ray probe must be used. To achieve this in an experimental setup, the source must be chosen carefully.
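As a rough consistency check on the quoted shifts, the free-electron plasmon energy implied by a given density can be estimated as follows. This neglects the thermal and quantum dispersion corrections that contribute to the measured shift, so it is a guide only:

```python
import numpy as np

def plasmon_shift_eV(n_e_cm3):
    """Free-electron plasmon energy h_bar * omega_p in eV.

    Thermal and quantum dispersion corrections to the measured plasmon
    shift are neglected here.
    """
    eps0, e, m_e, hbar = 8.854e-12, 1.602e-19, 9.109e-31, 1.055e-34
    omega_p = np.sqrt(n_e_cm3 * 1e6 * e**2 / (eps0 * m_e))  # rad/s
    return hbar * omega_p / e
```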
Previous experiments have successfully used a crystal imaging system with a Si He$_\alpha$ line at $\sim1.865\,\si{keV}$ \cite{Stoeckl14} to radiograph OMEGA cryogenic implosions \cite{Stoeckl17}, but the required x-ray fluence may not be sufficient.
Alternatively, Cl K$_\alpha$ at $\sim2.62\,\si{keV}$ or Cl Ly-$\alpha$ at $\sim2.96\,\si{keV}$ could be used, which would require lasers of energy $650\,\si{J}$ and $300\,\si{J}$, respectively \cite{Urry05}.
An important consideration to make before extrapolating this work to an experimental campaign, is predicting the level of noise on the scattering signal. Many factors can contribute to the noise level such as the self-emission, the time-gating of the detector, the detector efficiency \textit{etc.}. For the sake of simplicity in this paper, these parameters will be collected into one function, $G$, which we approximate as $10^{-5}$ \cite{Pak04}.
\section{\label{sec:intro}Introduction}
The design of inertial confinement fusion (ICF) targets is a challenging task that requires, among others, hydrodynamic simulations with knowledge of the shocked materials' equation of state (EOS) if ignition conditions are to be achieved \cite{Lindl04, Hurricane14, Regan18, Goncharov17, Campbell17}.
The theoretical modelling of the extreme matter properties reached during the capsule implosion is difficult due to the need of a quantum mechanical treatment of the degenerate electrons, moderate strongly-coupled ions and many-particle correlations \cite{Gaffney18, Hu10, Hu18, Hu17}.
Uncertainty in the EOS of matter under this regime results in unconfirmed calculations for transport properties, ionization balance, and energy and temperature equilibration \cite{Wang13, Vinko13, Chapman15, White14,Grabowski2020}.
Therefore, experimental validation is vital for benchmarking and developing reduced models that can be implemented in radiation hydrodynamic codes.
At present, the diagnosis of the physical properties of dense plasmas produced in ICF implosions is limited due to the difficulty in achieving the required accuracy and spatial resolutions \cite{Hurricane19, Glebov10, Regan12, Glenzer09} for different model predictions to be tested. Over the past couple of decades there has been a push to develop new diagnostics that may be able to resolve different regions of the imploding capsule, particularly the hot spot, the compressed shell and the coronal plasma.
Multi-keV spectrally resolved x-ray Thomson scattering (XRTS) is one of these techniques \cite{Glenzer09, Gregori08, Chapman14}.
The first experimental observation of noncollective, inelastic x-ray scattering from shocked liquid deuterium is discussed in Ref. \cite{Regan12}. This demonstrated the capabilities of inferring the electron temperature, ionisation and electron density from the shapes and intensities of the elastic (Rayleigh) and inelastic (Compton) components in the scattering spectra in ICF dense matter.
However, the scattering data had no spatial information, nor did the analysis performed provide the capability to separate the contribution from different regions.
Spatial temperature and ionization profiles were determined from a near-solid density foam using a collimated X-ray beam in Ref. \cite{Gamboa14}. This data, produced using the Imaging X-ray Thomson Spectrometer (IXTS) at the Omega laser facility \cite{Boehly1997, Gamboa12}, determined the temperature and ionization state of the carbon foam at multiple positions along the axis of the flow. Good agreement was found between the experiment and theoretical predictions with the exception of the high-temperature, low-density rarefaction region of the blast wave.
Simultaneous collective and non-collective scattering data for dynamically compressed deuterium was collected in Ref. \cite{Davis16} using the $2\,\si{keV}$ Si Ly-$\mathrm{\alpha}$ line. This focused on compression states of $\rho/\rho_0 \sim 2.8-4.05$. The mass density was determined from the VISAR shock velocity together with current EOS data. This allowed for a restriction on the parameter space when determining the ionization from the XRTS data.
However, to date, there has been no attempt to field an XRTS diagnostic on a full laser direct-drive ICF implosion. In this report the feasibility of utilising spatially integrated XRTS measurements to determine the in-flight conditions of the compressed DT shell will be investigated.
The study involved analysing the X-ray scattering data produced by targets with very different adiabats.
The adiabat is defined as the ratio of the plasma pressure to the Fermi-degenerate pressure \cite{SpringerB} and for DT fuel is given by \cite{Craxton15}
\begin{equation}
\label{adiabat}
\Gamma_{DT}
\simeq
\frac{P_{\mathrm{Shell}}[\mathrm{Mbar}]}{2.2\left[\rho[\mathrm{g/cm^{3}}]\right]^{5/3}}
\,.
\end{equation}
Confinement properties of an ICF capsule depend on the areal density of the compressed shell and hot-spot, $\rho R$. The areal density is controlled by varying the entropy of the fuel, which is determined by the fuel adiabat.
For ignition to occur, a large enough areal density (low adiabat), $>\numrange[range-phrase=-]{0.2}{0.5}\,\si{g/cm^2}$, and hot enough core, $\sim5-12\,\si{keV}$, are required \cite{Betti16, AtzeniB}. However, targets imploded on a low adiabat are susceptible to hydrodynamic instabilities \cite{Landen20, Edwards13} that drive the rapid growth of nonuniformities. Therefore, an important part of ICF research involves optimisation of the adiabat \cite{Anderson04, Melvin15, Dittrich14}.
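Equation \ref{adiabat} is a simple point-wise evaluation. As an illustrative sketch (not part of the original analysis), the adiabat of a DT shell can be computed directly from the shell pressure in Mbar and the mass density in g/cm$^3$; the function name below is our own:

```python
def dt_adiabat(pressure_mbar, density_g_cc):
    """DT fuel adiabat (Eq. adiabat): ratio of the shell pressure to the
    Fermi-degenerate pressure, approximated as 2.2 * rho^(5/3) Mbar."""
    return pressure_mbar / (2.2 * density_g_cc ** (5.0 / 3.0))

# For example, a shell at 2.2 Mbar and 1 g/cm^3 sits exactly on adiabat 1,
# and doubling the pressure at fixed density doubles the adiabat.
```

This makes explicit that, at fixed density, the adiabat scales linearly with the shell pressure.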
In experiments, however, direct measurements of the in-flight fuel adiabat and densities are not yet achievable; instead, they are inferred from the neutron yield and x-ray self-emission \cite{Cerjan13}.
This paper presents dual-channel XRTS as a possible diagnostic to retrieve spatial information on the in-flight conditions of an ICF implosion.
The analysis is performed by constructing synthetic, spatially integrated, spectra using the collision-radiative code SPECT3D \cite{MacFarlane07}, including the x-ray scattering simulator \cite{Golovkin13}, which is a post-processor of the 1-D radiation hydrodynamic code LILAC \cite{Delettrez87}.
\section{\label{sec:results}Results}
\begin{figure*}[t]
\centering
\includegraphics[width=0.995\textwidth]{./Figures/FULL_ION_MATRIX}
\caption{MCMC parameter convergence fitting the entire spectrum using equation \ref{cost_ion} as a cost function. Variation in DT plasma parameters from; \textbf{(a)} $2.8$ adiabat and $2\,\si{keV}$ probe, \textbf{(b)} $8.0$ adiabat and $2\,\si{keV}$ probe, \textbf{(c)} $2.8$ adiabat and $3.5\,\si{keV}$ probe, and \textbf{(d)} $8.0$ adiabat and $3.5\,\si{keV}$ probe. The scatter plots show the correlation between each DT parameter. The lower quadrant scatter plots are taken from the $40\si{^{\circ}}$ scattering data, whilst the upper quadrant shows the $120\si{^{\circ}}$ scattering data. The scatter plots have been coloured to represent the spatial density of points. The diagonal plots show the combined histograms for each parameter from both the scattering regimes. Superimposed on each histogram is a normal distribution of the fits. The mass-averaged parameter values from the LILAC 1-D simulation are highlighted as a green dashed line or cross.}
\label{ION_MCMC_matrix}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=0.995\textwidth]{./Figures/FULL_SINGLE_MATRIX}
\caption{As with Figure \ref{ION_MCMC_matrix} but for MCMC analysis focused solely on the inelastic scattering using equation \ref{cost_single} as a cost function. As discussed in Section \ref{sec:discussion}, the forward scattering analysis has been omitted as the widths of the inelastic scattering features in the forward scattering regime are small relative to the source FWHM. This means, without the elastic scattering feature, MCMC converges around values not representative of the entire spectra.}
\label{SINGLE_MCMC_matrix}
\end{figure*}
\begin{table}[h]
\centering
\renewcommand{\arraystretch}{1.5}
\caption{\label{ION_MCMC_results}The full spectral analysis MCMC DT fitting parameters compared to the mass-weighted parameters from the LILAC 1D simulations, focused on the compressed DT shell, for each adiabat and each probe.}
\begin{ruledtabular}
\begin{tabular}{l|ccc}
\textbf{DT Parameter} & $\mathbf{T_e\,(\si{eV})}$ & $\mathbf{n_e\,(\si{cm^{-3}})}$ & $\mathbf{Z}$ \\ \hline
\multicolumn{4}{c}{\textbf{Adiabat} $\mathbf{2.8}$} \\ \hline
Simulation & $25$ & $5.5\times10^{23}$ & $0.97$ \\
MCMC $2\,\si{keV}$ & $26\pm2$ & $(5.4\pm0.3)\times10^{23}$ & $0.94\pm0.03$ \\
MCMC $3.5\,\si{keV}$ & $25\pm2$ & $(5.4\pm0.4)\times10^{23}$ & $0.94\pm0.03$ \\ \hline
\multicolumn{4}{c}{\textbf{Adiabat} $\mathbf{8.0}$} \\ \hline
Simulation & $38$ & $3.7\times10^{23}$ & $0.97$ \\
MCMC $2\,\si{keV}$ & $48\pm9$ & $(3.1\pm0.4)\times10^{23}$ & $0.94\pm0.04$ \\
MCMC $3.5\,\si{keV}$ & $35\pm3$ & $(3.1\pm0.5)\times10^{23}$ & $0.95\pm0.03$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}[h]
\centering
\renewcommand{\arraystretch}{1.5}
\caption{\label{SINGLE_MCMC_results} The inelastic scattering analysis MCMC DT fitting parameters compared to the mass-weighted parameters from the LILAC 1D simulations, focused on the compressed DT shell, for each adiabat and each probe.}
\begin{ruledtabular}
\begin{tabular}{l|ccc}
\textbf{DT Parameter} & $\mathbf{T_e\,(\si{eV})}$ & $\mathbf{n_e\,(\si{cm^{-3}})}$ & $\mathbf{Z}$ \\ \hline
\multicolumn{4}{c}{\textbf{Adiabat} $\mathbf{2.8}$} \\ \hline
Simulation & $25$ & $5.5\times10^{23}$ & $0.97$ \\
MCMC $2\,\si{keV}$ & $28\pm1$ & $(4.7\pm0.3)\times10^{23}$ & $0.97\pm0.02$ \\
MCMC $3.5\,\si{keV}$ & $21\pm1$ & $(5\pm1)\times10^{23}$ & $0.95\pm0.03$ \\ \hline
\multicolumn{4}{c}{\textbf{Adiabat} $\mathbf{8.0}$} \\ \hline
Simulation & $38$ & $3.7\times10^{23}$ & $0.97$ \\
MCMC $2\,\si{keV}$ & $44\pm3$ & $(3.0\pm0.6)\times10^{23}$ & $0.97\pm0.01$ \\
MCMC $3.5\,\si{keV}$ & $36\pm4$ & $(4\pm2)\times10^{23}$ & $0.94\pm0.04$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
Before extracting the plasma parameters from the spatially integrated simulated spectra, the inverse problem instability must first be addressed, which implies that the same measured spectra could be fitted equally well by very different plasma parameters \cite{Kasim19}. Bayesian inference, using Markov-Chain Monte Carlo (MCMC) to sample the multidimensional space, is a more robust approach to explore the behaviour of the complex multiparameter simulations \cite{Andrieu03}.
This paper presents two MCMC explorations that walk through defined parameter spaces to find the ionization, temperature, and density that best fit the forward and backward scattering spectra individually.
The parameter space assumed a uniform distribution with linear sampling for the electron temperature, $1\leq T_e(\mathrm{eV})\leq 10^{3}$, and ionization, $0\leq Z\leq 1$, whilst taking a logarithmic sampling for the electron density, $10^{20}\leq n_e(\mathrm{cm^{-3}})\leq 5\times10^{24}$. A large sampling space was used so no bias was placed on the resultant parameters.
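As a minimal sketch of these priors (our own illustrative helper, not the actual MCMC implementation), a single walker sample drawn linearly in $T_e$ and $Z$ and log-uniformly in $n_e$ could look like:

```python
import numpy as np

def draw_prior(rng):
    """One draw from the broad priors described in the text:
    linear-uniform in T_e (eV) and Z, log-uniform in n_e (cm^-3)."""
    t_e = rng.uniform(1.0, 1.0e3)                       # 1 <= T_e <= 1e3 eV
    z = rng.uniform(0.0, 1.0)                           # 0 <= Z <= 1
    n_e = 10.0 ** rng.uniform(20.0, np.log10(5.0e24))   # 1e20 <= n_e <= 5e24
    return t_e, n_e, z
```

The logarithmic sampling in $n_e$ ensures the walkers explore the four decades of density without biasing toward the upper bound.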
One exploration fit the entire spectra, assuming two weighted uniform plasma regions, one containing DT and the other CH.
The cost function used to determine the appropriateness of each MCMC scattering spectrum, $I_\mathrm{fit}$, in this case is
\begin{equation}
\label{cost_ion}
\beta_{\mathrm{cost}} = \max{\left(\frac{I_{\mathrm{fit}} - I_{\mathrm{raw}}}{I_{\mathrm{raw}}}\frac{1}{\sqrt{2}\sigma}\right)^{2}}\,,
\end{equation}
where $I_{\mathrm{raw}}$ is the synthetic scattering spectrum shown in Figure \ref{Adiabats} and $\sigma$ is the standard deviation representative of the noise of the synthetic scattering spectrum.
In an actual experiment this quantity is not known \textit{a priori} and it must be chosen for MCMC to be able to explore a sufficiently wide parameter space.
For the full spectrum analysis we use $\sigma = 0.075$.
This cost function allowed for equal weighting of the fit to the elastic and inelastic peaks.
The alternate MCMC analysis focused solely on the inelastic scattering features. As previously discussed, the inelastic scattering for all detectors was dominated by the contribution from the compressed DT shell. Therefore, this MCMC analysis assumed a single uniform plasma condition containing only DT.
A soft boundary cost function was used in this case as there was no need to apply weighting to the inelastic scattering,
\begin{equation}
\label{cost_single}
\beta_{\mathrm{cost}}
=
\max{\left(\frac{I_{\mathrm{fit}} - I_{\mathrm{raw}}}{\sqrt{2}\sigma}\right)^{2}}\,.
\end{equation}
For this analysis, a value of $0.0005$ was used for $\sigma$ as this is representative of the noise when focused on the inelastic scattering region.
The forward likelihood of each fit, $P(I_{\mathrm{fit}}|I_{\mathrm{raw}})$, was determined as
\begin{equation}
\label{logprob}
P(I_{\mathrm{fit}}|I_{\mathrm{raw}})
=
\mathrm{e}^{-\beta_{\mathrm{cost}}}\,.
\end{equation}
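The two cost functions (Equations \ref{cost_ion} and \ref{cost_single}) and the likelihood (Equation \ref{logprob}) differ only in whether the residual is normalized by $I_{\mathrm{raw}}$. A minimal numpy sketch of both, with our own function names, is:

```python
import numpy as np

def beta_cost(i_fit, i_raw, sigma, relative=True):
    """Maximum squared, noise-scaled residual over the spectrum.
    relative=True divides by I_raw (full-spectrum fit, Eq. cost_ion);
    relative=False is the inelastic-only form (Eq. cost_single)."""
    resid = i_fit - i_raw
    if relative:
        resid = resid / i_raw
    return np.max((resid / (np.sqrt(2.0) * sigma)) ** 2)

def likelihood(i_fit, i_raw, sigma, relative=True):
    """Forward likelihood of a trial spectrum, P = exp(-beta_cost)."""
    return np.exp(-beta_cost(i_fit, i_raw, sigma, relative))
```

A perfect fit gives $\beta_{\mathrm{cost}}=0$ and likelihood 1; any residual above the noise scale suppresses the likelihood exponentially.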
To analyse the MCMC data, the DT parameters were plotted on a combined matrix shown in Figures \ref{ION_MCMC_matrix} and \ref{SINGLE_MCMC_matrix}. The scatter plots for each scattering angle are shown separately and have been coloured to represent the spatial density of points.
In Figure \ref{ION_MCMC_matrix}, the histograms along the diagonal are the combined histograms for both the forward and backward scattering parameters. The mean and standard deviation on each parameter was calculated by fitting a normal distribution to the histograms.
The MCMC parameters were compared to the mass-weighted parameters from the 1-D LILAC simulations. The mass-weighted simulation values were calculated using
\begin{equation}
\label{mass-weighted}
\left\langle F\right\rangle
=
\frac{\sum F_i \rho_i 4\pi r_i^2\mathrm{d}r_i}{\sum\rho_i 4\pi r_i^2\mathrm{d}r_i}\,,
\end{equation}
where $F_i$ is the desired parameter in zone $i$.
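Equation \ref{mass-weighted} is a discrete weighted sum over the 1-D spherical zones. A short sketch of its evaluation (assuming arrays of zone values, densities, radii and widths from the LILAC output) might be:

```python
import numpy as np

def mass_weighted(f, rho, r, dr):
    """Mass-weighted average of a zone-wise parameter F_i over spherical
    shells of density rho_i, radius r_i and width dr_i (Eq. mass-weighted)."""
    weight = rho * 4.0 * np.pi * r ** 2 * dr   # zone mass rho_i 4 pi r_i^2 dr_i
    return np.sum(f * weight) / np.sum(weight)
```

Because the weights are shell masses, outer zones of equal density contribute more than inner ones.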
The mass-weighted parameters were determined for each region of the implosion. It can be seen in Tables \ref{ION_MCMC_results} and \ref{SINGLE_MCMC_results}, that the MCMC values were in close agreement with the mass-weighted parameters from the compressed DT shell. As discussed previously, this was an expected result, as the high density in the compressed shell meant it dominated the inelastic scattering features.
\section{\label{sec:spect3d}Obtaining simulated spatially integrated spectra}
\begin{figure}[b]
\centering
\includegraphics[width=0.495\textwidth]{Figures/self_emission_fit.png}
\caption{Logarithmic fit to background self-emission for 2.8 adiabat target and a $3.5\,\si{keV}$ X-ray probe. The background self-emission is assumed to fit $A\mathrm{log}^2(I) + B\mathrm{log}(I) + C$, where the constants $A$, $B$ and $C$ are found for each scattering setup.}
\label{self_emission}
\end{figure}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.75\textwidth]{./Figures/Both_2keV_3.5keV_adiabats}
\caption{X-ray scattering data produced by Spect3D for LILAC simulations with adiabats of $2.8$ and $8.0$. \textbf{(a)} and \textbf{(b)} Forward scattering spectra for a $2\,\si{keV}$ probe and a $3.5\,\si{keV}$ probe, respectively. \textbf{(c)} and \textbf{(d)} Backward scattering spectra for a $2\,\si{keV}$ probe and a $3.5\,\si{keV}$ probe, respectively.}
\label{Adiabats}
\end{figure*}
The cryogenic DT implosion plasma conditions were calculated using LILAC, a 1-D spherical Lagrangian radiation-hydrodynamics code \cite{Delettrez87} that simulates symmetric, laser direct-drive implosions. It includes laser ray-tracing with an inverse bremsstrahlung model that can also account for cross-beam energy transfer \cite{LILAC_CBET}. LILAC also includes a nonlocal thermal transport model that uses a simplified Boltzmann equation with a Krook collision term \cite{LILAC_nonlocal_TT}, multi-group radiation diffusion, and a first-principles equation-of-state
(FPEOS) model \cite{FPEOS_DT,FPEOS_CD} and opacity (FPOT) model \cite{FPOT_DT} derived from molecular dynamics methods.
In this work, the focus is on the time when the capsule is at two-thirds compression,
$R_{\mathrm{Ablation\,surface}} / R_{\mathrm{Vapor,\,initial}} = 2/3$. The inhomogeneity of the plasma results in different scattering signals from different regions of the plasma. It is paramount that we are able to simulate fully spatially integrated spectra, accounting for opacity and self-emission of the plasma, in order to determine, for a given scattering geometry, the dominant scattering features. This provides insightful information for designing the experiments.
SPECT3D is a spectroscopy code produced by Prism Computational Sciences which post-processes hydrodynamics code output and simulates high-resolution spectra and images for LTE and non-LTE plasmas in 1-D, 2-D, and 3-D geometries \cite{MacFarlane07}. It computes a variety of diagnostic signatures that can be compared with experimental measurements including: time-resolved and time-integrated spectra, space-resolved spectra and streaked spectra, filtered and monochromatic images, X-ray diode signals. In a SPECT3D simulation, the radiation incident at a detector is computed by solving the radiative transfer equation along a series of lines-of-sight (LOSs) through the plasma grid. At each plasma volume element along a LOS, the frequency-dependent absorption and emissivity of the plasma is calculated. The scattering cross-section is computed using local values of the plasma conditions based on the formalism originally developed in Refs. \cite{Gregori03, Crowley13}. Scattered X-ray photons are added to the local source function, allowing SPECT3D to utilize the same algorithms as it uses for plasma self-emission. It is assumed that the radiation from a non-monochromatic, isotropically emitting point-like X-ray source is scattered within each volume element of the SPECT3D spatial grid. The source is specified by its photon-energy-dependent intensity and location in 3-D space. The intensity of the radiation from the source is adjusted for each volume element based on the distance to the source. It includes attenuation due to plasma absorption and the change in the solid angle. The radiation flux at each pixel in the detector plane is calculated by integrating the scattered radiation along each LOS. The scattering angle is computed for each volume element based on the LOS and the line that connects the volume element center and the source \cite{Golovkin13}.
For this paper, an additional feature was added to the original implementation which allows for certain plasma cells to be excluded from contributing to the scattered signal. This allows for studying the contribution of particular plasma regions to the total scattered spectrum. Models for computing self-emission and absorption coefficients remain the same in each zone regardless of whether the flag for excluding scattered signal is set or not.
The addition of this feature allows spectra from isolated regions of the plasma to be compared to the fully integrated spectra in Figures \ref{2.8_conditions} and \ref{8.0_conditions}. The inelastic scattering in each detector is dominated by the scattering from the compressed DT shell. This gives us confidence that an experiment designed to retrieve scattering spectra at this time during the implosion will be representative of the conditions in the compressed shell.
Using the output from SPECT3D, simulated experimental data was produced by first removing the background noise due to the self-emission of the plasma. A logarithmic fit was assumed for the background, as shown in Figure \ref{self_emission}. Then random Gaussian noise with a standard deviation of $7.5\%$ was added to the signal. The resultant spectrum is shown in Figure \ref{Adiabats}. Note that we have assumed the noise to be independent of the number of photons per pixel arriving at the detector. In cases where the signal is weak, the signal will be limited by the pixel's photon counts, and in such conditions we would expect the noise to be larger in the wings of the spectrum.
Future work will focus more closely on the noise error analysis for photon-limited signals.
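The synthetic-data pipeline (quadratic-in-log background fit, subtraction, and fractional Gaussian noise) can be sketched as follows. This is our own illustrative reconstruction: we take the logarithm of the photon energy as the fit variable and scale the noise to the peak signal, both assumptions on our part, and the background fit is restricted to a user-supplied set of signal-free pixels:

```python
import numpy as np

def make_synthetic(energy, spectrum, bg_mask, noise_frac=0.075, seed=0):
    """Fit a quadratic in log(energy) to the background pixels (bg_mask),
    subtract it, and add Gaussian noise with std = noise_frac * peak signal."""
    x = np.log(energy)
    coeffs = np.polyfit(x[bg_mask], spectrum[bg_mask], 2)  # A log^2 + B log + C
    background = np.polyval(coeffs, x)
    signal = spectrum - background
    rng = np.random.default_rng(seed)
    scale = noise_frac * np.max(np.abs(signal))
    return signal + rng.normal(0.0, scale, size=signal.shape)
```

With `noise_frac=0` and a purely quadratic background, the routine returns a spectrum consistent with zero, confirming the subtraction step.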
Utilising XRTS to determine the adiabat of an ICF capsule would be a valuable diagnostic development. Figure \ref{Adiabats} demonstrates that for experimental conditions with identical scattering setups, the two extreme adiabat conditions considered here produce notably differing scattering spectra.
In both the $2\,\si{keV}$ and $3.5\,\si{keV}$ cases, the plasmon scattering seen in the forward scattering detector can be used to determine the difference in electron density between the two adiabats.
The difference between the inelastic scattering features from the two adiabats seen in Figure \ref{Adiabats}\textbf{(c)} is a result of only the low adiabat remaining in the collective scattering regime. The high adiabat's inelastic scattering feature has become dominated by Compton scattering. This is evidenced by the broadening of the inelastic peak and the loss of a distinct plasmon-shifted peak. This change in the scattering features is evidence of its higher electron temperature.
\section{Introduction}
Ly$\alpha$ emission from reprocessed ionizing photons in star-forming galaxies is
a prominent feature that has been used to detect high redshift galaxies
\citep[e.g.,][]{Partridge67,Rhoads03,Gawiser07,Ouchi08,Hill08,Guaita10,
Ciardullo12}.
Such Ly$\alpha$ emitting galaxies (or Ly$\alpha$
emitters; LAEs) have become an important laboratory to study galaxy
formation, large-scale structure, and cosmic reionization. The resonant
scattering of Ly$\alpha$ photons with neutral hydrogen atoms in the circumgalactic
and intergalactic media (CGM and IGM) also potentially opens a window to probe
the spatial and kinematic environments of CGM and IGM from Ly$\alpha$ emission.
In this paper,
we perform a theoretical study of the effect of anisotropy in the environment
on Ly$\alpha$ emission properties and study the implications of such an effect in
using Ly$\alpha$ emission to probe galaxy environment.
Anisotropic gas distributions are common in galaxies and their surrounding
environments. The anisotropy can show up both spatially and kinematically.
The gas distribution around star-forming regions (likely clumps on
a galaxy disk) is already anisotropic. The galactic winds from star formation
are ubiquitous and typically show a bipolar outflow pattern
\citep[e.g.,][]{Bland88,Shopbell98,Veilleux02,Shapley03,Weiner09,Rubin13}.
The cold gas being
supplied for star formation could be accreted from streams and filaments
\citep[e.g.,][]{Keres05,Dekel09}.
Even on the IGM scale, the density field and the velocity field still show
appreciable fluctuations \citep[e.g.,][]{Zheng10}. That is, spatial and
kinematic anisotropies can exist in galactic environments on all scales.
The resonant scatterings of
Ly$\alpha$ photons would enable them to explore such anisotropies and encode
information in the Ly$\alpha$ emission properties.
The anisotropic gas density and velocity distributions are naturally produced
in hydrodynamic simulations of galaxy formation. Monte Carlo Ly$\alpha$ radiative
transfer calculation has been performed for individual simulated galaxies.
For example,
\citet{Laursen09} notice the anisotropic escape of Ly$\alpha$ emission in nine
simulated galaxies.
\citet{Barnes11} show that emerging Ly$\alpha$ spectra from three simulated galaxies
vary strongly with the viewing angle.
The escape fraction of Ly$\alpha$ emission in the simulated galaxies
in \citet{Yajima12} exhibits strong dependence on galaxy morphology and
orientation, and for a disk galaxy the escaping Ly$\alpha$ photons are confined
to a direction perpendicular to the disk.
\citet{Verhamme12} also find that Ly$\alpha$ properties strongly depend on the
disk orientation in one simulated galaxy with an outflowing velocity field.
As a statistical study, \citet{Zheng10} perform Ly$\alpha$ radiative transfer
calculation for about $2\times 10^5$ sources in a radiative hydrodynamic
cosmological reionization simulation. Given the resolution of the simulation,
the density and velocity anisotropies come from gas in CGM and IGM.
The Ly$\alpha$ emission properties (luminosity, surface brightness, and spectra)
are found to depend on the viewing
angle, as a result of the density and velocity distributions (environment).
Because of the environment dependent radiative transfer, the Ly$\alpha$ emission
properties show correlations among themselves, and new effects in the spatial
clustering of LAEs are induced (\citealt{Zheng11a}; also see
\citealt{Wyithe11,Behrens13}).
On the analytic side, solutions are usually found
for simple configurations of uniform media, e.g., static plane-parallel
uniform slabs
\citep[][]{Harrington73,Neufeld90}, static uniform spheres
\citep[][]{Dijkstra06}, and uniform media experiencing Hubble expansion
(in the diffusion regime;
\citealt{Loeb99}). Numerical solutions or simulations for analytical setups
are also usually focused on uniform slabs \citep[e.g.,][]{Auer68,Avery68,
Adams72,Ahn00,Ahn01,Ahn02}, or uniform spheres or isotropic systems
\citep[e.g.,][]{Loeb99,Ahn02b,Zheng02,Dijkstra06,Verhamme06,Roy09,Roy10}.
Given the potential importance of the anisotropy in the observed
properties of LAEs, it is useful to investigate
Ly$\alpha$ emission with systems of analytic setups incorporating prescriptions
of anisotropy. Such an investigation can guide our analyses of Ly$\alpha$ emission
from simulated galaxies and help our understanding of the role of
anisotropy in shaping the Ly$\alpha$ emission from LAEs. In this paper, we
perform such a study.
The structure of the paper is as follows.
In \S~\ref{sec:model}, we describe how we build models of neutral
hydrogen clouds with spatial or kinematic anisotropy for Ly$\alpha$ radiative
transfer calculation. Then we present the results of anisotropies in the
escaping Ly$\alpha$ emission from these models in \S~\ref{sec:results}.
Finally, in \S~\ref{sec:summary} we summarize our investigation and
discuss the implications on studying Ly$\alpha$ emission from star-forming galaxies.
\section{Models of Anisotropic Neutral Hydrogen Clouds}
\label{sec:model}
To investigate the effect of anisotropic density and velocity distribution
on the Ly$\alpha$ emission, we construct three simple models of spherical neutral
hydrogen gas clouds. The first two models intend to
investigate the effects of anisotropy induced by density and velocity
separately. The third one is motivated by galactic wind and mimics an outflow
confined in a cone.
For all the three models, the temperature of the neutral hydrogen atoms in
each cloud is fixed at $2\times 10^4$~K. The Ly$\alpha$ emitting source is assumed
to be a point source located at the cloud center.
We emphasize that while our models may capture different aspects of the
gas distribution around galaxies, they are by no means realistic. The purpose
of the study is to investigate the effects from the anisotropy in the two main
quantities that affect Ly$\alpha$ radiative transfer, i.e., density and velocity.
Instead of more sophisticated models with various couplings between density
and velocity, we intentionally separate them and build simple models to see
the effect from each component.
The first model we consider is a ``density gradient'' case. In this model,
there is no bulk motion of the gas in the cloud and the anisotropic optical
depth distribution is purely a result of the anisotropy in the density
distribution. Specifically, we introduce a density gradient along the $z$
direction on top of an otherwise uniform cloud. The neutral hydrogen number
density $n(z)$ of the cloud is parameterized as
\begin{equation}
\label{eqn:dengrad}
n(z)=\bar{n}\left(1-2A\frac{z}{R}\right),
\end{equation}
where $\bar{n}$ is the mean number density in the cloud, $R$ is the cloud
radius, and $A$ is a parameter denoting the magnitude of the density
gradient. The column density from the cloud center follows a dipole
distribution $N_{\rm HI}=\bar{n}R(1-A\cos\theta)$, where $\theta$ is the angle
with respect to the $+z$ direction (i.e., the polar angle).
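The dipole column density quoted above follows from integrating Equation~(\ref{eqn:dengrad}) radially outward from the center, with $z=r\cos\theta$. A quick numerical check (our own sketch, using a midpoint rule that is exact for this linear integrand) confirms $N_{\rm HI}=\bar{n}R(1-A\cos\theta)$:

```python
import numpy as np

def column_density(theta, a, nbar=1.0, big_r=1.0, n_steps=10000):
    """Integrate n(z) = nbar * (1 - 2 A z / R) from r = 0 to R along a
    direction at polar angle theta, where z = r cos(theta)."""
    dr = big_r / n_steps
    r = (np.arange(n_steps) + 0.5) * dr           # midpoint radii
    n = nbar * (1.0 - 2.0 * a * r * np.cos(theta) / big_r)
    return np.sum(n) * dr

# For A = 0.5: N(theta=0) = 0.5, N(pi/2) = 1.0, N(pi) = 1.5 (in units nbar*R).
```

The factor of 2 in Equation~(\ref{eqn:dengrad}) is what makes the direction-averaged column density equal $\bar{n}R$ while the pole-to-pole contrast is $(1-A)/(1+A)$.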
The optical depth of Ly$\alpha$ photons depends not only on the density distribution
but also on the velocity distribution. Photons of the same frequency
appear to have different frequency shifts in the restframe of atoms for atoms
with different velocities in the lab frame (i.e., the Doppler effect), which
leads to different
probability of interacting with the atoms (i.e., different scattering
cross-section). Therefore, for the second model, we consider a ``velocity
gradient'' case, where the difference in optical depth along different
directions is purely a result of the anisotropy in the velocity distribution.
With a uniform density cloud that can undergo uniform Hubble-like
expansion/contraction, we introduce a velocity gradient along the $z$
direction,
\begin{equation}
\label{eqn:vgrad}
{\mathbf v}({\mathbf r})=\frac{r}{R} V \hat{\mathbf r} + \frac{z}{R} \Delta V \hat{z}_+.
\end{equation}
The first term on the right-hand side is the uniform Hubble-like motion, and
the second term represents the modification from the velocity gradient, with
$\hat{\mathbf r}$ and $\hat{z}_+$ being the unit vectors along ${\mathbf r}$
and the $+z$ directions, respectively. The quantity $V$ is the Hubble velocity
at the edge of the
cloud and $\Delta V$ denotes the magnitude of the velocity gradient. A positive
(negative) value of $\Delta V$ means that an additional outflow (inflow) field
is added on both the $+z$ and $-z$ sides of the cloud.
Finally, motivated by galactic wind, we consider a case named ``bipolar wind''.
For a cloud of uniform density, we set up a Hubble-like velocity field within a
limited solid angle,
\begin{equation}
{\mathbf v}({\mathbf r}) = \left\{
\begin{array}{ll}
\frac{r}{R} V \hat{\mathbf r} & \quad \text{if $|z|/r>\mu_0$},\\
& \\
0 & \quad \text{otherwise}.\\
\end{array}\right.
\end{equation}
That is, the ``wind'' has a half open angle of $\theta_0=\cos^{-1} \mu_0$
around the $z$ axis. The model shares some similarity with the ``velocity
gradient'' model. The differences are that here the velocity gradient is
imposed along the radial direction (not $z$ direction) and the velocity field
is confined to a cone. The geometry is similar to the model in
\citet{Noterdaeme12}.
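The bipolar-wind velocity field above amounts to a Hubble-like outflow restricted to two cones about the $z$ axis. A compact sketch of the field (our own helper, with the position given as a Cartesian triple) is:

```python
import numpy as np

def wind_velocity(pos, v_edge, mu0, big_r=1.0):
    """Bipolar wind: v = (r/R) V r_hat inside the cones |z|/r > mu0
    (half opening angle theta_0 = arccos(mu0)); zero elsewhere."""
    pos = np.asarray(pos, dtype=float)
    r = np.linalg.norm(pos)
    if r == 0.0 or abs(pos[2]) / r <= mu0:
        return np.zeros(3)
    # (r/R) V r_hat simplifies to (V/R) * pos
    return (v_edge / big_r) * pos
```

Points on the polar axis move radially at up to $V$ at the cloud edge, while the equatorial region is static, which is the key difference from the ``velocity gradient'' model.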
Our models can be regarded as simplistic representations of star-forming galaxies
emitting Ly$\alpha$ photons, with the cloud representing the CGM and IGM environments.
With the above models, we consider three cases of mean Ly$\alpha$ optical depth.
This can be put in terms of a characteristic column density, $N_{\rm HI}
=\bar{n}R$. The three cases have $N_{\rm HI}=10^{18}$, $10^{19}$, and $10^{20}
{\rm cm}^{-2}$, ranging from Ly$\alpha$ limit systems to damped Ly$\alpha$ systems.
For each setup, we perform the radiative transfer calculation of Ly$\alpha$ photons
using a Monte Carlo code \citep{Zheng02}. Ly$\alpha$ photons are initially launched
isotropically from the cloud center, with frequency following a Gaussian
profile
centered at the rest-frame Ly$\alpha$ line-center frequency with the width determined
by the temperature of the gas. For each run, we use $5\times 10^5$--$10^6$
photons to obtain good statistics on fluxes and spectra.
Ly$\alpha$ photons are collected once they reach the surface of the cloud.
We define the angle $\Theta$ as the polar angle of the escaped photon
with respect to $+z$ axis as seen by a distant observer. Since this angle
is the most observationally relevant one (and since all three models are
axial symmetric about the $z$ axis), we focus our study on the Ly$\alpha$
emission properties as a function of $\Theta$. For describing the cloud
configuration (e.g., density and velocity distribution), we use $\theta$
for the polar angle in the cloud's local frame, i.e., the angle between
a given radial direction and the $+z$ axis.
\section{Results of the Radiative Transfer Calculation}
\label{sec:results}
Given the three types of models, different magnitudes of the density/velocity
gradient, and different column densities, there are a large number of radiative
transfer runs we perform. To obtain a basic picture of the anisotropic Ly$\alpha$
emission, we start with the ``density gradient'' case with $N_{\rm HI}=10^{19}
{\rm cm^{-2}}$ and introduce our analyses in Ly$\alpha$ flux and spectra. We then
present the ``velocity gradient'' case with $N_{\rm HI}=10^{19}
{\rm cm^{-2}}$ and the ``bipolar wind'' case. Finally we discuss the general
features in the anisotropic Ly$\alpha$ emission based on all the runs.
\subsection{``Density Gradient'' Case with $N_{\rm HI}=10^{19} {\rm cm^{-2}}$ }
\begin{figure*}
\plotone{f01.ps}
\caption[]{
\label{fig:surfaceflux}
Distribution of Ly$\alpha$ flux observed by distant observers for a static spherical
cloud with
anisotropic density distribution (the ``density gradient'' case with column
density of $10^{19}{\rm cm}^{-2}$).
{\it Left panel}: Ly$\alpha$ flux as a function of
the polar angle $\Theta$ of the escaping photons. Parameter
$A$ denotes the magnitude of the density gradient imposed along the $z$
direction, and the flux is normalized with respect to the isotropic flux
of a uniform cloud (i.e., the $A=0$ case). {\it Right panel}: the multipole
expansion coefficients of the anisotropic distribution of flux. See the text
for more details.
}
\end{figure*}
We first study the anisotropic Ly$\alpha$ flux seen by distant observers.
The flux at a polar angle $\Theta$ is
proportional to the number of photons $\Delta N$ in a narrow angular bin
$\Delta\Theta$ divided by the corresponding area
$2\pi D^2\sin\Theta\Delta\Theta$ for observers at distance $D$.
In what follows, we will normalize this
flux to the isotropic flux
$N/(4\pi D^2)$. The normalized flux is then
\begin{equation}
\label{eqn:F}
F(\mu)=\frac{2\Delta N}{N\Delta\mu},
\end{equation}
where $\mu=\cos\Theta$.
It can also be put as the fractional photon count
$\Delta N/N$ divided by the fractional area $\Delta\mu/2$ around the given
polar angle. It satisfies the normalization condition
$\int_{-1}^1 F(\mu) d\mu/2 =1$.
The left panel of Figure~\ref{fig:surfaceflux} shows the angular dependence
of the flux and the dependence on the magnitude of the density gradient (with
$A=0$ being the uniform sphere case).
Resonant scatterings of Ly$\alpha$ photons off neutral hydrogen atoms enable them
to probe the optical depths along all directions, and they tend to
preferentially make their way out along the path of least resistance. From
our setup of the ``density gradient'' case (Equation~\ref{eqn:dengrad}),
the density decreases toward the $+z$ direction. For Ly$\alpha$ photons at the
cloud center, the scattering optical depth is lowest (highest) along
the $\Theta=0$ ($\Theta=\pi$) direction. It can be clearly seen from the
figure that Ly$\alpha$ photons preferentially escape along the $\Theta=0$ direction
and avoid escaping along the $\Theta=\pi$ direction. The ratio of fluxes at
$\Theta=0$ and at $\Theta=\pi$ increases as we increase the density gradient,
reaching a factor of about 2.5 for $A=0.5$.
To quantify the anisotropy in the full range of the polar angle $\Theta$, we
decompose the flux $F(\mu)$ into its multipole components,
\begin{equation}
F(\mu)=\sum_{l=0}^{\infty} C_l P_l(\mu),
\end{equation}
where $P_l$ is the $l$-th order Legendre polynomial.
The coefficient $C_l$ is solved from the orthogonality of the Legendre
polynomials,
\begin{equation}
C_l=\frac{2l+1}{2}\int_{-1}^1 F(\mu)P_l(\mu) d\mu
=\frac{2l+1}{N} \sum_{i=1}^{N} P_l(\mu_i),
\end{equation}
where $\mu_i$ corresponds to the direction of the $i$-th photon
($i=$1, 2, ..., $N$). The rightmost expression reduces the integral to a
simple sum over all photons. It is derived by making use of
Equation~\ref{eqn:F} in the limit of infinitesimally small $\Delta \mu$ bin,
and in such a limit, $\Delta N$ is either 1 or 0. The monopole coefficient
$C_0$ of $F(\mu)$ is unity by definition.
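The sum in the rightmost expression is straightforward to evaluate; a sketch of the decomposition (ours, using NumPy's Legendre utilities) is:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def multipole_coeffs(mu, lmax=4):
    """C_l = (2l+1)/N * sum_i P_l(mu_i): Legendre coefficients of F(mu).

    mu : array of cos(Theta) for the N escaped photons.
    C_0 equals 1 exactly, reflecting the normalization of F(mu).
    """
    mu = np.asarray(mu)
    C = np.empty(lmax + 1)
    for l in range(lmax + 1):
        # legval with a unit coefficient in slot l evaluates P_l(mu)
        C[l] = (2 * l + 1) / mu.size * legval(mu, [0.0] * l + [1.0]).sum()
    return C
```

For an isotropic photon distribution all $C_{l>0}$ vanish within the Monte Carlo noise; a significantly nonzero $C_1$ or $C_2$ signals the dipole and quadrupole patterns discussed here.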
The density gradient introduces a dipole component in the density
distribution (and thus the initial optical depth distribution seen from the
center). A corresponding dipole component in the Ly$\alpha$ flux shows up
(right panel of Figure~\ref{fig:surfaceflux}). However, since
Ly$\alpha$ photons encounter a large number of scatterings and continually
change their
travel directions and frequencies before emerging at the
surface, their final distribution does not follow the initial optical depth
distribution. As a result, higher multipole components emerge, and
their relative contribution increases as the density gradient increases.
\begin{figure*}
\plotone{f02.ps}
\caption[]{
\label{fig:EW}
Distribution of apparent (observed) Ly$\alpha$ luminosity from observations along
random directions of a static spherical cloud with
anisotropic density distribution
(the ``density gradient'' case with column density
$10^{19}{\rm cm}^{-2}$). The luminosity $L$ is in
units of the intrinsic luminosity ($L_0$). Equivalently, it can be expressed in
terms of the Ly$\alpha$ EW. In the top axis of each panel, the values of EW are marked
by assuming the intrinsic EW to be 100\AA\ (corresponding to a stellar
population with Salpeter IMF and 1/20 solar metallicity with age above 100Myr;
see the text). The left panel shows the original distributions, while the
right panel shows the ones smoothed with a Gaussian kernel with standard
deviation of 10\AA\ to mimic the effect of measurement errors.
}
\end{figure*}
\begin{figure*}
\plotone{f03.ps}
\caption[]{
\label{fig:spec}
Normalized Ly$\alpha$ spectra from a static spherical cloud with anisotropic density
distribution (the ``density gradient'' case with column density
$10^{19}{\rm cm}^{-2}$). {\it Left panel}: comparison of spectra observed
along the two pole directions ($\mu=\cos\Theta=\pm 1$) for the $A=0.50$
model. The directions with the lowest and highest column density are
$\mu=+1$ and $\mu=-1$, respectively (see Equation~\ref{eqn:dengrad}).
{\it Right panel}:
comparison of spectra observed along one pole direction for clouds with
different anisotropic parameter $A$. In both panels, the spectra from
the uniform case ($A=0$) are shown for reference. The frequency offset in
units of the Doppler parameter $\Delta\nu_D$ and in velocity units are
shown in the bottom and top axes, respectively.
}
\end{figure*}
\begin{figure*}
\plotone{f04.ps}
\caption[]{
\label{fig:Lspec}
Peak offset $v_{\rm peak}$ and FWHM $\Delta v_{\rm FWHM}$ of Ly$\alpha$ emission
from a static spherical cloud with anisotropic density distribution (for the
``density gradient'' case with column density $10^{19}{\rm cm}^{-2}$).
{\it Left panel}: the correlation between $v_{\rm peak}$ and
$\Delta v_{\rm FWHM}$. {\it Right panel}: their correlation with the
apparent Ly$\alpha$ luminosity or Ly$\alpha$ EW. Only the red peak is analyzed here.
}
\end{figure*}
Because of the anisotropy, the observed flux depends on the viewing angle
(or observed direction). Observers along different directions would infer
different luminosities of the source (assuming isotropic emission at the
source).
Equivalently, for a given observer, observations of similar clouds at
random orientations would give the distribution of the observed (apparent)
luminosity. In Figure~\ref{fig:EW}, we show the distribution of the ratio
of apparent luminosity $L$ to intrinsic luminosity $L_0$ as a function of
density gradient magnitude. The ratio $L/L_0$ is calculated in the same way
as $F(\mu)$ in Equation~\ref{eqn:F}. The apparent luminosity is connected to
the equivalent width (EW) of Ly$\alpha$ emission. If we neglect other contributions
to EW, like dust effect on the continuum and Ly$\alpha$ emission
\citep[e.g.,][]{Verhamme12} and observed Ly$\alpha$ emission being only a fraction
of the total Ly$\alpha$ emission \citep[e.g.,][]{Zheng11b}, $L/L_0$ is proportional
to the Ly$\alpha$ EW. Therefore, the anisotropic emission
provides a mechanism for the distribution of Ly$\alpha$ EW.
To be specific, and for ease of comparison, we neglect
other contributions and assume that the intrinsic Ly$\alpha$ EW is 100\AA,
corresponding to a stellar population with Salpeter initial stellar mass
function (IMF) and 1/20 solar metallicity with age above 100Myr
\citep[][]{Malhotra02}. The
top axis in each panel marks the resultant EW=$100{\rm \AA}(L/L_0)$. Note
that the $L/L_0$ (EW) distribution is computed based on the Ly$\alpha$ flux
distribution shown in the left panel of Figure~\ref{fig:surfaceflux}. To
reduce the effect of noise in the flux distribution, we use the multipole
expansion of the distribution up to $l=4$ to derive the $L/L_0$ (EW)
distribution shown in Figure~\ref{fig:EW}.
In the left panel of Figure~\ref{fig:EW}, the original EW distributions from
different runs are shown. If the cloud is uniform ($A=0$), the EW does not
depend on viewing angle and we always have the intrinsic one (i.e., the
distribution is just a Dirac $\delta$ function). As the cloud becomes
anisotropic,
the viewing angle dependent Ly$\alpha$ emission makes the EW distribution extended.
The distribution is skewed, with a higher amplitude at lower EW. The skewness
increases as the anisotropy becomes stronger (as a result of the larger density
gradient), and at $A=0.5$ the distribution shows a prominent tail towards high
EW values. Interestingly, such a shape of the EW distribution is similar to
the observed ones for LAEs \citep[e.g.,][]{Ouchi08,Nilsson09,
Ciardullo12}. The
model EW distribution has sharp edges (corresponding to the boundary of
$\Theta=0$ and $\pi$), which are not seen in real observations. To make a
better
comparison, we need to take into account the uncertainties in measuring
the Ly$\alpha$ EW in observation. In the right panel of Figure~\ref{fig:EW}, each
distribution is smoothed with a Gaussian kernel with standard deviation of
10\AA, roughly the size of the measurement errors in Ly$\alpha$ EW from observation
\citep[e.g.,][]{Ouchi08}. The skewness seen in the left panel remains after
smoothing, and for strong anisotropic cases, the EW distribution mimics those
seen in observation.
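The smoothing applied in the right panel is a plain convolution of the model EW distribution with a normalized Gaussian kernel; a sketch (ours, assuming a uniformly spaced EW grid) is:

```python
import numpy as np

def smooth_ew_distribution(ew_grid, pdf, sigma=10.0):
    """Convolve an EW distribution with a Gaussian kernel of standard
    deviation sigma (in Angstrom) to mimic measurement errors.

    ew_grid must be uniformly spaced; the kernel is truncated at
    4 sigma and normalized, so the total probability is conserved
    away from the grid edges.
    """
    dx = ew_grid[1] - ew_grid[0]
    half = int(np.ceil(4.0 * sigma / dx))
    x = np.arange(-half, half + 1) * dx
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(pdf, kernel, mode="same")
```

Applied to the sharp-edged model distributions in the left panel, this rounds off the edges corresponding to $\Theta=0$ and $\pi$ while preserving the skewness.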
Besides the viewing angle dependent flux or apparent luminosity, the Ly$\alpha$
spectra are also affected by the system anisotropy. In the left panel of
Figure~\ref{fig:spec}, the normalized spectra observed from observers
located on the $+z$ ($\Theta=0$ or $\mu=1$) and $-z$ ($\Theta=\pi$ or
$\mu=-1$) directions for the $A=0.50$ case are
compared. The spectrum for the isotropic case ($A=0$) is shown for reference.
The bottom axis shows the frequency offset in units of the Doppler frequency
$\Delta\nu_D\equiv (v_p/c)\nu_0$, which corresponds to the frequency offset
of a line-center photon (with frequency $\nu_0$) seen by an atom moving with
the most probable thermal velocity $v_p=\sqrt{2kT/m_H}$. The top axis marks
the offset in velocity units.
To escape a static medium, Ly$\alpha$ photons need to shift either to the blue
or red wing, where the optical depth is small. This results in a characteristic
double-peak profile \citep[e.g.,][]{Neufeld90,Zheng02}, which applies to
the spectra plotted here.
The two opposite directions in the left panel of Figure~\ref{fig:spec}
correspond to the directions of minimum and maximum column density.
In general, photons that escape from a direction with a lower column density
diffuse less in frequency space. As a consequence, the separation of the
two peaks is smaller and the width of each peak is narrower. In the right
panel, the spectra in the $\Theta=0$ direction are compared for the $A=0.25$
and $A=0.50$ cases. The $A=0.50$ case has the lower line-of-sight column
density and correspondingly narrower peaks with a smaller separation. The spectra do
not differ substantially from the uniform case (dotted curve). The reason is
that the column density modulation on top of the uniform case is only
a factor of $1-A\cos\theta$. Even for the $A=0.5$ case, the maximum-to-minimum
ratio is only a factor of three. Because of the resonant scattering, Ly$\alpha$
photons can be thought of as probing the optical depths in all directions
before escaping along the final direction. This also reduces the difference
in the effective column
densities experienced by photons escaping along different directions, leading
to only small differences in the spectra.
The left panel of Figure~\ref{fig:Lspec} shows the viewing angle dependence
of two spectral features, the peak offset $v_{\rm peak}$ with respect to the
restframe Ly$\alpha$ line center and the line width characterized by the
full width at half maximum (FWHM) $\Delta v_{\rm FWHM}$. In order to reduce
the effect of noise, a Gaussian fit around the peak in
each spectrum is made to determine the peak offset.
Both $v_{\rm peak}$ and $\Delta v_{\rm FWHM}$
are computed in ten bins of the viewing angle, with the bottom-left
and upper-right points corresponding to $\cos\Theta\sim +1$ and $-1$,
respectively. We only show the case from the peak in the red side of the
spectra -- in reality, the blue peak likely becomes insignificant because of
the scattering in the IGM with Hubble flow \citep[e.g.,][]{Dijkstra07,Zheng10,Laursen11}. The $A=0$ case is shown to give a
sense of the noise in our calculation of these quantities from the spectra.
There is a clear correlation between $v_{\rm peak}$ and $\Delta v_{\rm FWHM}$
-- as Ly$\alpha$ photons diffuse more in the frequency space, the spectrum as a whole
shifts further away from the initial line center and at the same time it
becomes more broadened.
Since both the apparent Ly$\alpha$ luminosity and spectral features depend on viewing
angle, there must exist correlations between them. In the right panel of
Figure~\ref{fig:Lspec}, we see that both the peak offset and width are
anti-correlated with the Ly$\alpha$ luminosity
(or EW). This makes perfect sense -- along a direction of easier escape, Ly$\alpha$
emission has a higher flux (thus larger $L/L_0$ or EW), while Ly$\alpha$ photons
diffuse less in frequency space (thus smaller peak offset and width).
As a whole, the resonant scatterings experienced by Ly$\alpha$ photons constantly
change their propagation directions. This enables the photons to probe optical
depths along
all directions, and they tend to escape from the directions with low
optical depth. For the ``density gradient'' case, the anisotropic distribution
in the column density of the cloud translates to anisotropic
Ly$\alpha$ emission, i.e.,
viewing angle dependent Ly$\alpha$ emission properties, such as the apparent Ly$\alpha$
luminosity, spectral peak offset, and peak width. The viewing angle dependent
Ly$\alpha$ flux leads to a spread in Ly$\alpha$ EW, suggesting an interesting
mechanism to produce the observed EW distribution. The emission properties are
correlated as a result of the viewing angle dependence. In directions of lower
resistance, Ly$\alpha$ photons tend to have higher flux and less diffusion in
frequency space.
\subsection{``Velocity Gradient'' Case with $N_{\rm HI}=10^{19}{\rm cm}^{-2}$}
\begin{figure*}[h]
\plotone{f05.ps}
\caption[]{
\label{fig:Vel_obsflux}
Similar to Figure~\ref{fig:surfaceflux}, but for a uniform density cloud with
velocity anisotropy (the
``velocity gradient'' case with column density $10^{19}{\rm cm}^{-2}$).
The polar angle $\Theta$ is the angle between direction of the distant
observer and
the $z$ axis. The parameter $\Delta V$, the expansion velocity at the cloud
poles, denotes the magnitude of the velocity anisotropy.
See text for more details.
}
\end{figure*}
\begin{figure*}[h]
\plotone{f06.ps}
\caption[]{
\label{fig:Vel_EW}
Similar to Figure~\ref{fig:EW}, but for a uniform density cloud with
velocity anisotropy (the ``velocity gradient'' case with column density
$10^{19}{\rm cm}^{-2}$).
The parameter $\Delta V$, the expansion velocity at the poles,
denotes the magnitude of the velocity anisotropy.
}
\end{figure*}
\begin{figure*}[h]
\plotone{f07.ps}
\caption[]{
\label{fig:Vel_spec}
Similar to Figure~\ref{fig:spec}, but for a uniform density cloud with
velocity anisotropy (the ``velocity gradient'' case with column density
$10^{19}{\rm cm}^{-2}$).
}
\end{figure*}
\begin{figure*}[h]
\plotone{f08.ps}
\caption[]{
\label{fig:Vel_Lspec}
Similar to Figure~\ref{fig:Lspec}, but for a uniform density cloud with
velocity anisotropy (the ``velocity gradient'' case with column density
$10^{19}{\rm cm}^{-2}$).
}
\end{figure*}
Following the detailed discussions of the ``density gradient'' case,
we now turn to another anisotropic case by adding a velocity gradient to an
otherwise static uniform sphere ($V=0$ in Equation~\ref{eqn:vgrad}). The
velocity gradient leads to anisotropy in the initial Ly$\alpha$ optical depth, and
we expect the Ly$\alpha$ emission to depend
on viewing angle as well. As an example, we present the results for the
$N_{\rm HI}=10^{19}{\rm cm}^{-2}$ case.
Figure~\ref{fig:Vel_obsflux} shows the viewing angle dependence of the
flux measured by distant observers (left) and the multipole decomposition
(right). The way we add the
velocity gradient can be thought of as Hubble-like expansion in the $\pm z$
directions and no expansion in the $\pm x$ and $\pm y$ directions. In the plot,
$\Delta V$ (taking the values of 50, 100, and 200 ${\rm km\, s^{-1}}$) is the
radial velocity at the edge of the cloud along $\pm z$
directions. For such expansion velocities, the initial Ly$\alpha$ photons launched
at the center are effectively at a single frequency (the restframe
line-center frequency). In the restframe of the atoms in the expanding cloud,
these initial photons appear to be redshifted off the line center, with
a frequency shift proportional to the radial velocity. With our setup,
the optical depth seen by the initial Ly$\alpha$ photons appears low towards the
$\Theta=0$ and $\pi$ directions (poles) and high towards the $\Theta=\pi/2$
directions (equator). Therefore, unlike the ``density gradient'' case, here we
have a quadrupole-like distribution of the initial optical depth.
Given such a distribution of optical depth, it is expected that photons prefer
to escape towards the poles rather than the equator, as seen in the left
panel of
Figure~\ref{fig:Vel_obsflux}. The ratio of the maximum to minimum fluxes
increases from about two for $\Delta V=100{\rm km\, s^{-1}}$ to about four for
$\Delta V=200{\rm km\, s^{-1}}$. A strong quadrupole component in the angular distribution
of flux emerges (right panel of Figure~\ref{fig:Vel_obsflux}), which increases
as $\Delta V$ increases. Unlike the ``density gradient'' case, the symmetry of
the system forbids a dipole component. Nevertheless, the density and
velocity gradient cases share the same effect on Ly$\alpha$ emission -- anisotropy
in the initial Ly$\alpha$ optical depth leads to anisotropy in the Ly$\alpha$ flux.
Similar to the ``density gradient'' case, the anisotropic Ly$\alpha$ emission
leads to a skewed distribution of apparent Ly$\alpha$ luminosity ($L/L_0$ or EW)
with a tail of high values (see Figure~\ref{fig:Vel_EW}, left for the original
distributions and right for the ones smoothed with a Gaussian kernel with
a standard deviation of 10\AA). For $\Delta V$=100${\rm km\, s^{-1}}$ and 200${\rm km\, s^{-1}}$, the
shape of the smoothed distribution looks similar to the observed ones
\citep[e.g.,][]{Ouchi08,Nilsson09,Ciardullo12}.
Figure~\ref{fig:Vel_spec} shows the spectra of photons
escaping from the directions of the pole and equator as a function of the
velocity gradient (parameterized by the velocity $\Delta V$ at the edge of
the cloud). While the spectra in the ``velocity gradient'' case still have
two peaks, they are no longer symmetric about the initial line center
(Figure~\ref{fig:Vel_spec}). Blue Ly$\alpha$ photons would be redshifted close to
the line center in the restframe of hydrogen atoms in an expanding cloud and
be strongly scattered. That is, Ly$\alpha$ photons tend to
shift redward for an easy escape from the cloud. Overall, the blue peak is
suppressed compared to the red peak.
The generic features of the spectra can be understood
following the interpretations in e.g., \citet{Loeb99}, \citet{Zheng02},
\citet{Dijkstra06}, and \citet{Verhamme06}, by accounting for the anisotropy.
Since both the density and velocity affect the frequency diffusion in the
``velocity gradient'' case, the relatively tight correlation between the peak
position and peak FWHM seen in the ``density gradient'' case becomes weaker
(left panel of Figure~\ref{fig:Vel_Lspec}). So does the correlation between
apparent luminosity $L/L_0$ (or EW) and FWHM (lower points in the right panel).
However, the tight anti-correlation between $L/L_0$ (EW) and peak position
$v_{\rm peak}$ persists (big symbols in the right panel). This anti-correlation
appears to be much stronger than that in the ``density gradient'' case
(Figure~\ref{fig:Lspec}).
To summarize, for ``velocity gradient'' induced optical depth anisotropy, Ly$\alpha$
emission again displays the corresponding anisotropy in flux and spectral
features. The apparent luminosity or EW distribution closely resembles the
observed ones. As an extension, we also study systems with ``velocity
gradient'' in one direction imposed on top of an isotropic Hubble-like
expansion, and the relation
between EW and $v_{\rm peak}$ is similar to the ``velocity gradient'' cases
shown here. The results for the extended cases will be presented in
\S~\ref{sec:general}.
\subsection{``Bipolar Wind'' Case}
\begin{figure*}
\plotone{f09.ps}
\caption[]{
\label{fig:Bi_obsflux}
Similar to Figure~\ref{fig:Vel_obsflux},
but for a uniform density cloud with
bipolar outflows (the ``bipolar wind'' case).
}
\end{figure*}
Before discussing general properties of anisotropic Ly$\alpha$ emission, we briefly
present the results on the flux anisotropy for the ``Bipolar Wind'' case.
In this case, the uniform
cloud has a Hubble-like expansion within the double cone defined by
$\theta<60\degr$ or $\theta>120\degr$ ($|\mu|>0.5$),
and remains static outside of it. That is, half of the cloud gas is
expanding. The expansion velocity at the edge of the cloud is set to be
200${\rm km\, s^{-1}}$. This specific setup is motivated by the bipolar outflow from
galaxies. We perform runs for three column densities: $nR=10^{18}$, $10^{19}$,
and $10^{20}{\rm cm}^{-2}$.
The Ly$\alpha$ optical depth is determined by both the column density and velocity.
For initial photons with frequency $\nu_i$, the radial optical depth is
\begin{eqnarray}
\tau_{\rm ini}
& = & \int_0^R
n\sigma\left[\nu_i\left(1-\frac{\Delta V}{c}\frac{r}{R}\right)\right] dr
\nonumber\\
& = & nR \int_0^1 \sigma\left[\nu_i(1-s\Delta V/c)\right] ds,
\end{eqnarray}
where $s=r/R$ and $\sigma$ is the Ly$\alpha$ scattering cross-section. The equation
applies to the regions both with and without outflow ($\Delta V\neq0$
and $\Delta V\neq0$). Since all three runs share the same velocity field, the
above equation shows that they have the same fractional anisotropy in the
initial Ly$\alpha$ optical depth and differ only in the overall optical depth
scale (which is set by the column density).
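The integral above is cheap to evaluate numerically, and doing so makes the scaling explicit. A sketch (ours; for illustration it uses a Doppler-core Gaussian in place of the full Voigt cross-section, with $v_p\simeq 12.85\,{\rm km\,s^{-1}}$ for $T=10^4\,$K, and the cross-section in arbitrary units):

```python
import numpy as np

C_KMS = 2.998e5                    # speed of light in km/s
NU0 = 2.466e15                     # Lya line-center frequency in Hz
DNU_D = (12.85 / C_KMS) * NU0      # Doppler width for T = 1e4 K

def sigma_doppler(nu):
    """Toy cross-section profile: Doppler core only, arbitrary units.
    The real calculation uses the full Voigt profile."""
    return np.exp(-(((nu - NU0) / DNU_D)) ** 2)

def tau_init(nR, dV, nu_i=NU0, ns=2000):
    """tau_ini = nR * int_0^1 sigma[nu_i (1 - s dV/c)] ds,
    evaluated with the trapezoidal rule; dV in km/s."""
    s = np.linspace(0.0, 1.0, ns)
    vals = sigma_doppler(nu_i * (1.0 - s * dV / C_KMS))
    ds = s[1] - s[0]
    return nR * ds * (vals.sum() - 0.5 * (vals[0] + vals[-1]))
```

Since $nR$ enters only as an overall prefactor, the ratio $\tau_{\rm ini}(\Delta V)/\tau_{\rm ini}(0)$ -- the fractional anisotropy -- is identical for all three column densities, as stated above.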
The ``bipolar wind'' case shares some similarities with the
``velocity gradient'' case; the main differences are that its velocity
field is radially oriented (rather than only in $\pm z$ direction)
and is only within a limited region (rather than the whole cloud).
As a consequence, the results on the flux anisotropy, apparent luminosity
distribution, and anisotropic spectral properties are also similar to the
``velocity gradient'' case. For brevity, we only present the flux
anisotropy here and have other properties
incorporated into the summary of the general results (\S~\ref{sec:general}).
Given the system setup, the anisotropy in
the Ly$\alpha$ emission (Figure~\ref{fig:Bi_obsflux})
mainly has a quadrupole component. As the column density
increases, Ly$\alpha$ photons experience more scatterings and more changes in
travel directions. The anisotropy in the initial Ly$\alpha$ optical depth
therefore becomes less important, and as a consequence the anisotropy in
Ly$\alpha$ flux becomes weaker.
\subsection{General Results}
\label{sec:general}
With the three cases discussed in the previous subsections, we now present
the general results and attempt a closer connection to observations.
A few more model runs are included here. We extend the
``velocity gradient'' case
by imposing a velocity gradient along the $z$ direction on top of a spherical
cloud with Hubble-like expansion (see Equation~\ref{eqn:vgrad}). The
Hubble-like expansion has a velocity $V=100{\rm km\, s^{-1}}$ at the edge of the cloud
and we impose different velocity gradients with the parameter $\Delta V$
ranging from -100 to 100${\rm km\, s^{-1}}$ with a step size 50${\rm km\, s^{-1}}$. We also perform
runs with different column densities ($nR=10^{18}$, $10^{19}$, and $10^{20}
{\rm cm^{-2}}$) for the ``density gradient'' case and the ``velocity
gradient'' case and its extension. In total, the results of 32 runs from
all the cases are presented here.
\begin{figure}
\plotone{f10.ps}
\caption[]{
\label{fig:Ltau}
The relation between the apparent Ly$\alpha$ luminosity and the relative initial
line-center optical depth. The value $\langle\tau_{\rm Ly\alpha}\rangle$
is the line-center optical depth for initial Ly$\alpha$ photons, averaged over
all directions.
The curves show $\exp(-\delta_\tau)$,
$\exp(-\delta_\tau/2)$, and $\exp(-\delta_\tau/4)$ to guide the eye,
where $\delta_\tau$ is the optical depth excess defined as
$\delta_\tau\equiv \tau_{\rm Ly\alpha}/\langle\tau_{\rm Ly\alpha}\rangle-1$.
The anisotropy models include those caused by density anisotropy
(``density gradient'' case; black points),
by velocity anisotropy (``velocity gradient'' case; magenta points) and its
extension with an additional isotropic expansion component (``expansion +
velocity gradient''; red points), and by bipolar outflow (``bipolar wind'' case;
blue points). Systems with three different column densities are studied for
each case, $10^{18}$ (triangles), $10^{19}$ (squares), and
$10^{20} {\rm cm^{-2}}$ (circles).
Open and filled symbols denote setups with negative and positive velocity
gradient imposed, respectively.
}
\end{figure}
Overall, we find system anisotropy leads to anisotropies in Ly$\alpha$ emission.
The observed Ly$\alpha$ luminosity depends on the viewing angle. Since higher
luminosities are likely observed along directions of easier escape,
we expect the
luminosity to correlate with the optical depth along the same direction.
However, we do not expect the correlation to be tight. Ly$\alpha$ photons probe
the optical depth in all directions in a convoluted way, since their scattering
with the neutral hydrogen atoms causes their positions, directions, and
frequencies to continually change. Furthermore, for observed/apparent
Ly$\alpha$ luminosity, the absolute
value of optical depth is not a good indicator. As an example, consider the
uniform sphere case. For the same intrinsic luminosity, spheres of different
column density (optical depth) have the same observed luminosity, since all
Ly$\alpha$ photons escape.\footnote{
This particular example shows that simply applying a $\exp(-\tau_\nu)$
correction to the initial Ly$\alpha$ spectra is not the right way to do the Ly$\alpha$
radiative transfer, where $\tau_\nu$ is the frequency dependent Ly$\alpha$ optical
depth along the line-of-sight direction. This is demonstrated and emphasized
in \citet{Zheng10}.
}
A better quantity that connects to luminosity could be
the relative optical depth among all directions.
\begin{figure}
\plotone{f11.ps}
\caption[]{
\label{fig:EWs}
Distribution of apparent (observed) Ly$\alpha$ luminosity (or EW) from a few
selected models. The curves have been smoothed with a Gaussian kernel with
standard deviation of 10\AA\ to mimic the effect of measurement errors.
}
\end{figure}
In Figure~\ref{fig:Ltau},
we plot the observed luminosity $L$ in units of the intrinsic luminosity
$L_0$ as a function of the relative optical depth
$\tau_{\rm Ly\alpha}/\langle\tau_{\rm Ly\alpha}\rangle$ for all the models.
Here $\tau_{\rm Ly\alpha}$ is the initial line-center optical depth for Ly$\alpha$
photons and $\langle\tau_{\rm Ly\alpha}\rangle$ is the average over all
directions. Equivalently, we can define the fractional excess
$\delta_\tau=\tau_{\rm Ly\alpha}/\langle\tau_{\rm Ly\alpha}\rangle -1$
in the initial line-center optical depth.
The three curves are shown in the form of $\exp(-\delta_\tau)$,
$\exp(-\delta_\tau/2)$, and $\exp(-\delta_\tau/4)$ to guide the eye.
Clearly there exists an overall correlation between apparent luminosity
(or EW) and the relative line-center optical depth. The plot serves as a
nice summary of the anisotropic Ly$\alpha$ emission from all the models considered
in this paper. It depicts the idea that Ly$\alpha$ photons prefer to escape along
the direction of least resistance. The resonant scatterings allow Ly$\alpha$ photons
to probe optical depth along different directions, and the key quantity that
determines the final anisotropic distribution of Ly$\alpha$ emission is the relative
optical depth, not the absolute one.
As a result of the anisotropy, the observed Ly$\alpha$ luminosity from a set of
randomly oriented clouds will spread out around the intrinsic luminosity.
Figure~\ref{fig:EWs} summarizes the distributions of the apparent Ly$\alpha$
luminosity (EW) by showing the result from a few selected models.
The apparent luminosity/EW distribution is largely determined by the
anisotropy (in the initial optical depth) of the system. For a fixed
anisotropy factor (e.g., density or velocity gradient), a cloud with higher
column density has a lower degree of anisotropy in the initial optical
depth distribution. This translates to smaller spread in the apparent
luminosity/EW distribution (e.g., comparing the $10^{19} {\rm cm^{-2}}$
cases with those of $10^{20} {\rm cm^{-2}}$). The anisotropy leads to
a skewed EW distribution with a tail toward higher values.
It becomes more skewed with higher degrees of flux anisotropy
(i.e., lower column density). For the high column density ``bipolar wind''
case, a low amplitude bump is seen at higher EW values. For this model,
the superposition of the EW distributions from systems with different column
densities (more abundant for lower column density) can produce a distribution
with an extended tail. The skewed EW distribution looks similar to the
observed ones \citep[e.g.,][]{Ouchi08,Nilsson09,Ciardullo12}, which implies
that anisotropic Ly$\alpha$ emission provides one mechanism to contribute to the
Ly$\alpha$ EW distribution.
\begin{figure*}
\plotone{f12.ps}
\caption[]{
\label{fig:EW_Vpeak}
The relation between the Ly$\alpha$ EW and Ly$\alpha$ line peak offset $v_{\rm peak}$.
{\it Left panel}: the relation from our models of anisotropic clouds, including
those caused by density anisotropy (``density gradient'' case; black points),
by velocity anisotropy (``velocity gradient'' case; magenta points) and its
extension with an additional isotropic expansion component (``expansion +
velocity gradient''; red points), and by bipolar outflow (``bipolar wind'' case;
blue points). Systems with three different column densities are studied for
each case, $10^{18}$ (triangles), $10^{19}$ (squares), and
$10^{20} {\rm cm^{-2}}$ (circles). At a given column density of each case,
the symbol size indicates the degree of anisotropy of the system (larger
for stronger anisotropy). Open and filled symbols denote cases with
negative and positive velocity gradient imposed, respectively.
{\it Right panel}: the observed relation. The data are taken from those
compiled and analyzed in \citet{Hashimoto13}.
}
\end{figure*}
The left panel of Figure~\ref{fig:EW_Vpeak} summarizes the relation between
Ly$\alpha$ EW (or the apparent luminosity $L/L_0$) and the shift in the (red)
peak of the
Ly$\alpha$ line. The points are clearly grouped according to the column density
(triangles, squares, and circles for $nR=10^{18}$, $10^{19}$, and
$10^{20}{\rm cm}^{-2}$, respectively), with larger peak shift at
higher column density. For the sequence at a fixed column density,
the spread in the EW distribution results from the dependence on
the viewing angle. The size of the symbols indicates the magnitude of
the anisotropy of the system, with larger symbols for higher anisotropy.
For the ``density gradient'' case (black points), the anti-correlation
between EW and $v_{\rm peak}$ is stronger at higher column density, but
overall the anti-correlation is weak. The offset in the peak position is
largely determined by the column density. At fixed column density, a cloud with
a low anisotropy in the initial optical depth (indicated by smaller symbols)
has a smaller spread in the apparent luminosity/EW distribution viewed from
different directions, which is expected.
For the ``velocity gradient'' case (magenta points), a similar trend of
increasing strength of EW-$v_{\rm peak}$ anti-correlation with increasing
column density is found, but the anti-correlation is much stronger than
the ``density gradient'' case. At fixed velocity gradient, a higher column
density means a lower degree of system anisotropy, and therefore we see a
smaller spread. At fixed column density, a smaller velocity gradient (indicated
by smaller symbols) also means a lower degree of system anisotropy, leading
to smaller spread in EW.
The extended ``velocity gradient'' case (red points) closely follows the above
results, with filled (open) symbols for imposing positive (negative) velocity
gradients. Similar results are also found for the ``bipolar wind'' case
(blue symbols).
As a whole, for the runs we have performed, the Ly$\alpha$ line peak offset is mainly
driven by the column density, with larger offset from systems of higher column
density.
The overall trend seen in the left panel
of Figure~\ref{fig:EW_Vpeak} from all the cases we consider is that the spread
in EW distribution decreases with increasing Ly$\alpha$ line peak offset.
Interestingly the above trend appears to be consistent with recent
observations. In the right panel of Figure~\ref{fig:EW_Vpeak}, we reproduce
the data points compiled and analyzed by \citet{Hashimoto13} for LAEs,
Ly$\alpha$ blobs (LABs), and Lyman break galaxies (LBGs). The EW can be measured
from the
Ly$\alpha$ line luminosity and continuum with narrow-band and broad-band photometry.
To determine the peak offset, the systemic redshifts of the galaxies are
measured from nebular emission lines such as H$\alpha$.
Even though
\citet{Hashimoto13} cast their result as an anti-correlation between EW and
$v_{\rm peak}$, the plot suggests a smaller EW spread towards larger peak
offset. We emphasize that we do not intend to fit the observation here, since
our models are simplistic. Nevertheless, it
is encouraging to see that both the ranges of EW and $v_{\rm peak}$ and their
relation fall into the ballpark of the observational results. It suggests that
the key element in our simple models (i.e., Ly$\alpha$ emission from anisotropic
systems) could play an important role in shaping the Ly$\alpha$ emission in real
systems. More realistic models (e.g., those based on galaxy formation
simulations) are necessary for a better understanding and comparison with
the observation.
\begin{figure}
\plotone{f13.ps}
\caption[]{
\label{fig:VV}
The relation between the Ly$\alpha$ line peak offset $v_{\rm peak}$ and
the FWHM of the line $\Delta v_{\rm FWHM}$ modified by the line
asymmetry parameter $f_{\rm asym}$. The asymmetry parameter
$f_{\rm asym}$ is defined as three times the ratio of the widths at
half maximum on the blue and red sides of the line peak.
The dotted line denotes the equality of
the two quantities, while the solid line has a 30~${\rm km\, s^{-1}}$ downward offset
to approximate the mean relation. The symbols have the same meanings as
in Figure~\ref{fig:EW_Vpeak}.
}
\end{figure}
To study the EW--$v_{\rm peak}$ relation observationally, the quantity
$v_{\rm peak}$ is the more difficult one to measure. The reason is that
a line indicating the systemic redshift of the galaxy is needed. Examples
of such lines are H$\alpha$ and [\ion{O}{3}]
\citep[e.g.,][]{Steidel10,McLinden11,Hashimoto13}, which shift to infrared
for high redshift galaxies. It would be
interesting to see whether there are other possible ways that can inform us
about
$v_{\rm peak}$, without the knowledge of the systemic redshift. Based on our
results, we see a general correlation between $v_{\rm peak}$ and the FWHM
$\Delta v_{\rm FWHM}$, mainly driven by the column density. The degree of
correlation varies from case to case, which leads to a large scatter among
cases with different anisotropy setups. We investigate whether other spectral
features of the Ly$\alpha$ line can help to reduce the scatter. Besides line peak
offset and the FWHM of the line, another obvious property of the Ly$\alpha$ line
is its asymmetry. Both from observation and from our model, the red peak
of the Ly$\alpha$ line is usually asymmetric, having a red tail. We therefore can
introduce an asymmetry parameter. It can be defined by comparing the line fluxes
or widths blueward and redward of the line peak (see, e.g.,
\citealt{Rhoads03}). Here
we define it as three times the ratio of the widths at half maximum at the
blue and red sides of the peak, $f_{\rm asym}=3W_{\rm blue}/W_{\rm red}$.
By introducing
this parameter, we find a relatively tight correlation between $v_{\rm peak}$
and $\Delta v_{\rm FWHM} f_{\rm asym}$ (see Figure~\ref{fig:VV}), probably with
the low $v_{\rm peak}$ part of the ``bipolar wind'' model causing the largest
scatter. Note that both $\Delta v_{\rm FWHM}$ and $f_{\rm asym}$ can be
inferred from the Ly$\alpha$ spectra. Observationally, \citet{McLinden13} show
a possible trend of increasing line asymmetry with increasing peak offset.
It would be interesting to see whether the correlation seen in our study
holds for realistic models and whether observations support it. Note that
for a fair comparison with observations, the spectral resolution needs to
be taken into account. Its effect is to smooth the Ly$\alpha$ line and increase both
the FWHM and the asymmetry parameter, which would cause a slope change
for the correlation in Figure~\ref{fig:VV}.
In any case,
a correlation of a similar type to ours, or one in a similar spirit, is worth
pursuing. It can not only provide a way to determine
the peak offset without knowing the systemic redshift, but also serve as a
relation to test theoretical models of environments of star-forming
galaxies.
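The spectral diagnostics discussed above are straightforward to extract from a discretized line profile. As a minimal illustration (not the analysis pipeline of this paper; the helper name and the assumption of a wavelength-calibrated velocity grid are ours), the peak offset $v_{\rm peak}$, the FWHM $\Delta v_{\rm FWHM}$, and the asymmetry parameter $f_{\rm asym}=3W_{\rm blue}/W_{\rm red}$ can be computed as:

```python
import numpy as np

def line_diagnostics(v, flux):
    """Peak offset, FWHM, and asymmetry f_asym = 3 * W_blue / W_red
    of a (red-peak) Lya line sampled on a velocity grid v [km/s]."""
    i_peak = np.argmax(flux)
    v_peak = v[i_peak]
    half = 0.5 * flux[i_peak]
    # indices where the line is above half maximum (single-peak profile assumed)
    above = np.where(flux >= half)[0]
    w_blue = v_peak - v[above[0]]   # width at half maximum, blue side
    w_red = v[above[-1]] - v_peak   # width at half maximum, red side
    f_asym = 3.0 * w_blue / w_red
    return v_peak, w_blue + w_red, f_asym
```

A symmetric line gives $f_{\rm asym}=3$, while a red tail ($W_{\rm red}>W_{\rm blue}$) pushes $f_{\rm asym}$ below 3, as in the observed profiles.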
\section{Summary and Discussion}
\label{sec:summary}
We perform a theoretical investigation of the effect of anisotropy in neutral
hydrogen systems on the observed properties of Ly$\alpha$ emission. The motivation
of the work is to help understand the relation between the gas environment
around star-forming galaxies and Ly$\alpha$ emission properties and yield insights
on using the latter to probe the former. We find that the anisotropy in the
spatial and kinematic distributions of neutral hydrogen can play an important
role in shaping the observed Ly$\alpha$ emission properties from star-forming
galaxies.
We consider simple configurations of neutral hydrogen systems of spherical
clouds with a central point source of Ly$\alpha$ emission. We set up system
anisotropy induced by different factors. The term ``system anisotropy''
refers to the anisotropy in the initial optical depth of Ly$\alpha$ photons.
Since the scattering optical depth of Ly$\alpha$ photons depends on both density
and velocity, we basically explore anisotropies induced by two types of
causes. The first type is density-caused and a ``density gradient'' is applied
to the cloud along one direction. The second type is velocity-caused. For the
``velocity gradient'' case, we apply a velocity gradient along one direction
to the otherwise static cloud. Its extension, the ``expansion plus velocity
gradient'' case, has an additional, isotropic Hubble-like expansion. The
``bipolar wind'' case, which has a spatially separated outflowing region,
also belongs to this velocity-caused category. For each case, we set up
systems of different
column densities and of different degrees of anisotropy. We perform Monte
Carlo Ly$\alpha$ radiative transfer calculations to obtain the Ly$\alpha$ emission
escaping from
the clouds and study the anisotropy in Ly$\alpha$ flux and spectral features.
Owing to resonant scattering with neutral hydrogen atoms, a Ly$\alpha$ photon
takes a random walk in a cloud, with its traveling direction and frequency
constantly changing. Such a random walk enables the photon to explore the
optical depths along different directions, and it tends to escape along the
directions of lowest resistance. An initial anisotropy in Ly$\alpha$ scattering
optical depth therefore translates to the anisotropy in the escaping Ly$\alpha$
emission. It is not surprising that a dipole
(quadrupole) component in Ly$\alpha$ flux (or apparent Ly$\alpha$ luminosity) is found
for a dipole (quadrupole) component in the system optical depth anisotropy.
Roughly speaking, the Ly$\alpha$ flux in a direction is determined by the optical
depth along that direction {\it relative} to those along other directions,
or by the fractional excess of the initial optical depth.
For an ensemble of the same systems with random orientations, the
anisotropic Ly$\alpha$ emission gives rise to a non-trivial distribution in the
observed/apparent Ly$\alpha$ luminosity or the Ly$\alpha$ EW. The general EW
distributions from our models are found to be skewed with a tail towards high
EW values. Such distributions, especially the ones from the ``density
gradient'' and ``velocity gradient'' cases, resemble the observed ones for
LAEs \citep[e.g.,][]{Ouchi08,Nilsson09,Ciardullo12}.
The observed Ly$\alpha$ EW distribution is likely a superposition of systems of a
variety of environments. Given the generic features seen in our models, a
superposition of EW distributions from models with different types of
anisotropy, different degrees of anisotropy, and different column densities
will still keep the skewed shape and resemble the observed distributions.
Even though our models are idealized and hardly realistic, our
results suggest that the viewing angle dependent Ly$\alpha$ emission caused by
system anisotropy can be an important factor in contributing to the EW
distribution of LAEs.
Ly$\alpha$ spectra are also subject to the anisotropic effect. Notably, for
a given system, the offset of the line peak depends on the viewing angle.
Ly$\alpha$ photons escaping from the directions of low optical depths typically
have a spectrum with a smaller peak offset from the line center. The overall
scale of the offset depends strongly on the mean column density of the cloud.
The viewing angle dependence of both the apparent luminosity (EW) and peak
offset causes an anti-correlation between the two quantities for a given
cloud --- along the direction that is easy to escape, we have a higher Ly$\alpha$
flux (apparent luminosity) and Ly$\alpha$ photons diffuse less in frequency space.
We note that the peak offset is not necessarily a feature specific
to the anisotropic systems we study, and it can also occur in isotropic
systems. In such systems,
the peak offset increases with increasing system column density or optical
depth. However, for isotropic
clouds with a given intrinsic Ly$\alpha$ luminosity $L_0$, the apparent
luminosity $L=L_0$, which is independent of the column density.
Therefore, the anti-correlation between peak offset and luminosity (EW)
we find in the anisotropic systems cannot be naturally produced by the
isotropic systems, unless the effect of dust is invoked or a contrived
scenario is adopted in which more luminous sources are placed in systems of
lower column densities.
For a fixed magnitude of the anisotropy factor (e.g., density or velocity
gradient), the system anisotropy becomes weaker at higher column density.
From the viewing angle dependence of Ly$\alpha$ luminosity or EW at fixed
column density and the dependence of the peak offset on column density,
we find that at larger peak offset the spread in the apparent luminosity or
EW is reduced. Interestingly, this generic feature of the EW--peak-offset
relation in
the model anisotropic Ly$\alpha$ emission is seen in the observational data in
\citet{Hashimoto13}, suggesting that anisotropic Ly$\alpha$ emission can be
at work in real Ly$\alpha$ emitting systems.
Based on our simple models, we also find a correlation between the Ly$\alpha$ line
peak offset and line shape (i.e., some combination of the FWHM and asymmetry
of the line). An
observational test of this correlation could help us understand Ly$\alpha$
radiative transfer in real systems by comparing model predictions and
observations. Conversely, a similar kind of relation
between Ly$\alpha$ peak offset and spectral shape established from
observation, if possible, would provide opportunities for us to
learn about the density/velocity structures or, more generally, the environment
around Ly$\alpha$ emitting systems that shapes the Ly$\alpha$ emission properties.
Additionally, this type of correlation has the potential use of determining
the systemic redshift of galaxies with the Ly$\alpha$ emission line alone.
Our investigation based on analytic models suggests that anisotropies in
the spatial and kinematic distributions of neutral hydrogen in the CGM and
IGM can be an important ingredient in determining the properties of observed
Ly$\alpha$ emission. While we try to build models that capture some features in
the CGM and IGM around star-forming galaxies (e.g., density inhomogeneity and
outflow), they are simplistic and by no means realistic. A further
study along this path is to apply similar analyses to galaxies in high
resolution cosmological galaxy formation simulations. If the relations
found in this paper turn out to exist for simulated galaxies,
we expect the scatter to
be large given the more complex environment around galaxies.
In fact, some of the features found in our study are broadly similar to
those from
studies of individual simulated galaxies. Examples include the viewing-angle-dependent
Ly$\alpha$ flux \citep[e.g.,][]{Laursen09,Zheng10,Barnes11,Yajima12,
Verhamme12}, the correlation of flux with the initial Ly$\alpha$ optical depth
(e.g., Figure 11 in \citealt{Zheng10}), and the relation between peak shift
and Ly$\alpha$ flux (e.g., Figure 9 in \citealt{Zheng10}).
Compared to the results from our simple models, the viewing angle dependent
Ly$\alpha$ flux in the above study does show much larger spread, an expected
consequence of the more complex density and velocity distribution of the gas
around galaxies.
One can certainly extend our simple models to more complicated models, with
more realistic geometry and coupling between density and velocity, which is
beyond the scope of this paper. Indeed, models with various setups have begun
to be studied \citep[e.g.,][]{Laursen13,Behrens14,Gronke14}.
Given what we have learned from the simple models, a complementary route is
to continue the investigation with
high-resolution simulated galaxies. A large ensemble of high-resolution
simulated galaxies is necessary for a statistical study, and
we reserve such an investigation for future work. From the results and
analyses we present in this paper, we expect that a more detailed study
with simulated galaxies will greatly advance our understanding of the
interactions between Ly$\alpha$ emission and CGM/IGM and of galaxy formation
through the CGM/IGM environment probed by Ly$\alpha$ emission.
\acknowledgments
We thank Renyue Cen and Jordi Miralda-Escud{\'e} for useful comments.
This work was supported by NSF grant AST-1208891. J.W. was also
supported by the Undergraduate Research Opportunities Program (UROP) at the
University of Utah.
The support and resources from the Center for High Performance Computing at the University of Utah are gratefully acknowledged.
\section{Hyperparameters}\label{app:hyperparams}
Our method inherits the hyperparameters of text-to-image diffusion models and SDEdit \cite{SDEdit}. In addition, we introduce several other hyperparameters in this work that control the diversity of the synthetic images. Specific values for these hyperparameters are given in Table~\ref{tab:hparams}.
\section{Leafy Spurge Dataset Acquisition and Pre-processing}\label{app:spurge-details}
In June 2022 botanists visited areas in western Montana, United States known to harbor leafy spurge and verified the presence or absence of the target plant at 39 sites. We selected sites that represented a range of elevation and solar input values as influenced by terrain. These environmental axes strongly drive variation in the structure and composition of vegetation \cite{Amatulli18,Doherty21}. Thus, stratifying by these aspects of the environment allowed us to test the performance of classifiers when presented with a diversity of plants which could be confused with our target.
During surveys, each site was divided into a 3 x 3 grid of plots that were 10 m on a side (\textbf{Fig.}\textbf{~\ref{fig:full-site}}), and then botanists confirmed the presence or absence of leafy spurge within each grid cell. After surveying we flew a DJI Phantom 4 Pro at 50 m above the center of each site and gathered still RGB images. All images were gathered on the same day in the afternoon under sunny lighting conditions.
We then cropped the raw images to match the bounds of plots, using visual markers installed during surveys as guides (\textbf{Fig.}\textbf{~\ref{fig:site-tiled}}). Resulting crops varied in size because of the complexity of terrain; e.g., ridges were closer to the drone sensor than valleys. Thus, image side lengths ranged from 533 to 1059 pixels. The mean side length was 717 pixels, and the mean spatial resolution, or ground sampling distance, was 1.4 cm per pixel.
In our initial hyperparameter search we found that the classification accuracy of plot-scale images was less than that of a classifier trained on smaller crops of the plots. Therefore, we generated four 250x250 pixel crops sharing a corner at plot centers for further experimentation (\textbf{Fig.}\textbf{~\ref{fig:tile-qtr}}). Because spurge plants were patchily distributed within a plot, a botanist reviewed each crop in the present class and removed cases in which cropping resulted in samples where target plants were not visually apparent.
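The four corner-sharing crops at each plot center amount to a few lines of array slicing. A hypothetical helper (the actual preprocessing code is not shown in this paper) might look like:

```python
def center_quarter_crops(img, size=250):
    """Four size x size crops sharing a corner at the image center.
    Assumes img is an (H, W, C) array with H, W >= 2 * size."""
    h, w = img.shape[:2]
    cy, cx = h // 2, w // 2
    return [img[cy - size:cy, cx - size:cx],   # top-left
            img[cy - size:cy, cx:cx + size],   # top-right
            img[cy:cy + size, cx - size:cx],   # bottom-left
            img[cy:cy + size, cx:cx + size]]   # bottom-right
```

Since the smallest plot image is 533 pixels on a side, the assumption $H, W \geq 500$ holds for all plots.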
\begin{figure}
\centering
\includegraphics[width=\linewidth]{assets/full_site_cells_numbered.jpg}
\caption{A drone image of surveyed areas containing leafy spurge. At each site botanists verified spurge presence or absence in a grid of nine spatially distinct plots. Note that cell five is rich in leafy spurge.}
\label{fig:full-site}
\vspace{-0.5cm}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{assets/site_tiled.png}
\caption{Markers installed at the corners of plots were used to crop plots from source images.}
\label{fig:site-tiled}
\vspace{-0.5cm}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{assets/tile_quarter.png}
\caption{At each plot image center we cropped four 250x250 pixel sub-plots. We did this to amplify our data and improve classifier performance. The crops of plots with spurge present labels were inspected by a botanist to filter out examples where cropping excluded the target plant or the plants were not apparent.}
\label{fig:tile-qtr}
\vspace{-0.5cm}
\end{figure}
\section{Benchmarking the Leafy Spurge Dataset}\label{app:spurge-benchmark}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{assets/pooled_specific_examps.png}
\caption{Here we show examples of synthetic images generated from the leafy spurge dataset with DA-Fusion methods. The top row shows output where images are pooled to fine-tune a single token per class. The bottom row shows examples where tokens are generated specifically for each image. Source images, inference hyperparameters, and seed are otherwise identical in each column.}
\label{fig:spurge-examples}
\end{figure*}
We benchmark classifier performance here on the full leafy spurge dataset, comparing a baseline approach incorporating legacy augmentations with our novel DA-fusion method. For 15 trials we generated random validation sets with 20 percent of the data, and fine-tuned a pretrained ResNet50 on the remaining 80 percent using the training hyperparameters reported in Section~\ref{sec:results} for 500 epochs. From these trials we compute cross-validated mean accuracy and 68 percent confidence intervals.
In the case of baseline experiments, we augment data by flipping vertically and horizontally, as well as randomly rotating by as much as 45 degrees with a probability of 0.5. For DA-Fusion augmentations we take two approaches (\textbf{Fig.}\textbf{~\ref{fig:spurge-examples}}). In the first, which we refer to as DA-Fusion Pooled, we apply the methods of Textual Inversion \cite{TextualInversion} but include all instances of a class in a single session of fine-tuning, generating one token per class. In the second, which we refer to as DA-Fusion Specific, we fine-tune and generate unique tokens for each image in the training set. In the specific case, we generated 90, 180, and 270 degree rotations as well as horizontal and vertical flips, and contribute these along with the original image for Stable Diffusion fine-tuning to achieve the target number of images suggested to maximize performance \cite{TextualInversion}. In both DA-Fusion approaches we generated ten synthetic images per real image for model training. We maintain $\alpha=0.5$, evenly mixing real and synthetic data during training. We also maximize synthetic diversity by randomly selecting $t_0$ values of 0.25, 0.5, 0.75, and 1.0.
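The image-specific expansion described above produces six fixed geometric views per training image. A sketch with NumPy (the helper name and array layout are ours):

```python
import numpy as np

def expand_for_inversion(img):
    """Return the original image plus its 90/180/270-degree rotations
    and horizontal/vertical flips, as contributed alongside the
    original for image-specific token fine-tuning."""
    views = [img]
    views += [np.rot90(img, k=k) for k in (1, 2, 3)]
    views += [np.fliplr(img), np.flipud(img)]
    return views
```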
\begin{figure}
\centering
\includegraphics[width=\linewidth]{assets/full_spurge_performance_v3.png}
\caption{Cross-validated accuracy of leafy spurge classifiers when trained with baseline augmentations versus DA-Fusion methods on the full dataset. In addition to the benefits of DA-Fusion in few-shot contexts, we also find our method improves performance on larger datasets. Generating image-specific tokens (green line and bar) offers the most gains over baseline, though at the cost of greater compute.}
\label{fig:spurge-full}
\end{figure}
Both approaches to DA-Fusion offer slight performance enhancements over baseline augmentation methods for the full leafy spurge dataset. We observe a 1.0\% gain when applying DA-Fusion Pooled and a 1.2\% gain when applying DA-Fusion Specific (\textbf{Fig.}\textbf{~\ref{fig:spurge-full}}). It is important to note that, as implemented currently, compute time for DA-Fusion Specific scales linearly with the amount of data, whereas DA-Fusion Pooled compute is the same regardless of dataset size.
While pooling was not the most beneficial approach in this experiment, we believe it merits further investigation. Fine-tuning a leafy spurge token in a pooled approach might help to orient our target in the embedding space where plants with similar diagnostic properties, such as flower shape and color from the same genus, may be well represented. However, the leafy-spurge negative cases do not correspond to a single semantic concept but to a plurality, such as green fields, brown fields, and wooded areas. It is unclear whether fine-tuning a single token for negative cases by a pooled method would remove diversity from synthetic samples of spurge-free background landscapes, relative to an image-specific approach. For this reason, we suspect a hybrid approach, with a pooled token for the positive case and specific tokens for the negative cases, could offer further gains and would support the application of detecting weed invasions into new areas.
\subsection{Data Preparation}
\paragraph{Leafy Spurge}\label{sec:spurge} We contribute a dataset of top-down drone images of semi-natural areas in the western United States. These data were gathered in an effort to better map the extent of a problematic invasive plant, leafy spurge (\textit{Euphorbia esula}), that is a detriment to natural and agricultural ecosystems in temperate regions of North America. Prior drone-based work to detect leafy spurge achieved an accuracy of 0.75 \cite{Yang20}. To our knowledge, top-down aerial imagery of leafy spurge was not present in the Stable Diffusion training data. Results of CLIP-retrieval \cite{ClipRetrieval} returned close-up, side-on images of members of the same genus (Figure~\ref{fig:clip-retrieval}) in the top 20 results. We observed the first instance of our target species, \textit{Euphorbia esula}, as the 35th result. Thus, the Spurge dataset represents a unique opportunity to explore a few-shot learning setting, and state-of-the-art classification outcomes would directly benefit efforts to restore natural ecosystems. Additional details about the Spurge dataset are in Appendix~\ref{app:spurge-details}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{assets/clip-retrieval.png}
\vspace{-0.5cm}
\caption{A sample from the Spurge dataset (the first on the left), compared with top results of CLIP-retrieval queried on the prompt: "a drone image of leafy spurge". We note closeup images from members of the same genus (second, and third) in the top 20 results and a closeup of the same species for the 35th result (fourth).}
\label{fig:clip-retrieval}
\vspace{-0.5cm}
\end{figure}
\paragraph{PASCAL} We leverage the 2012 version of the PASCAL Visual Object Classes challenge \cite{PASCAL}. This dataset contains 11,530 images and 6,929 object segmentation masks. We adapt this dataset into an object classification task by filtering images that have at least one object segmentation mask. We assign these images labels corresponding to the class of object with largest area in the image, as measured by the pixels contained in the mask. There are~20 classes in total using this methodology. We utilize the official training and validation sets for the 2012 challenge, and randomly select $q$ images per class from the training set, which are used to measure few-shot classification accuracy.
\paragraph{COCO} We process the 2017 version of the COCO dataset \cite{COCO} in a similar manner to PASCAL. This dataset contains 330K images with 1.5M object segmentation masks. As before, we filter images that have at least one object segmentation mask. We assign these images labels corresponding to the class of the largest object, measured by segmentation mask area. This dataset has 80 classes. We use the official training and validation sets for the 2017 dataset, and measure few-shot classification accuracy using the same methodology described for the PASCAL dataset.
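The label-assignment rule used for both PASCAL and COCO (the class of the object with the largest segmentation mask wins) reduces to a one-liner. A sketch with a hypothetical data layout of `(class_id, binary_mask)` pairs:

```python
def label_from_masks(masks):
    """masks: list of (class_id, binary_mask) pairs for one image.
    Return the class whose mask covers the most pixels."""
    return max(masks, key=lambda pair: int(pair[1].sum()))[0]
```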
\subsection{Modelling Novel Visual Concepts}\label{sec:modelling}
Standard data augmentations apply to all images regardless of class and content \cite{Perez17}. We aim to capture this flexibility with our diffusion-based augmentation. This is challenging because real images may contain elements the diffusion model is not able to generate out-of-the-box. How do we generate plausible augmentations for such images? We propose to address this shortcoming by adapting the diffusion model to new concepts and fine-tuning new tokens in the text encoder for each concept.
\paragraph{Adapting The Generative Model} When generating synthetic images, previous work uses a prompt with the specified class name \cite{RealGuidance}. However, this is not possible for novel visual concepts that lie outside the vocabulary of the generative model. This is especially true in applied domains. We discuss this in Section~\ref{sec:spurge} with our contributed weed-recognition task, for which our pretrained diffusion model is initially unable to generate a plausible image even when the class name is provided. We propose to address this shortcoming by inserting $c$ new tokens into the vocabulary of the model---one for each class. We then perform Textual Inversion \cite{TextualInversion} for each new token, initializing their embeddings $\vec{w}_i$ to a class-agnostic value (see Appendix~\ref{app:hyperparams}), and fine-tuning only these embeddings using the standard diffusion loss from \citet{DDPM}.
\begin{equation}
\begin{split}
L_{\text{simple}} & (\vec{w}_0, \vec{w}_1, \hdots, \vec{w}_c) = \\
& \mathbb{E} \left[ \| \epsilon - \epsilon_{\theta} ( \sqrt{\tilde{\alpha}_t} x_0 + \sqrt{1 - \tilde{\alpha}_t} \epsilon, t ) \|^2 \right]
\end{split}
\end{equation}
\paragraph{Generating Synthetic Images} Many of the existing approaches generate synthetic images from scratch \cite{DAGAN,Tanaka19,Besnier20,DatasetGAN,StyleGANRender}. However, novel concepts can be particularly challenging to generate from scratch when only a handful of labelled examples are observed \cite{TextualInversion}. Rather than generate synthetic images from scratch, we use image-to-image transformations that splice real images into the reverse diffusion process following prior work in SDEdit \cite{SDEdit}. Given a reverse diffusion process with $S$ steps, we insert a real image $x_{0}^{\text{ref}}$ with noise $\epsilon \sim \mathcal{N}(0, I)$ at timestep $\lfloor S t_0 \rfloor$, where $t_0 \in [0, 1]$ is a hyperparameter controlling the insertion position of the image.
\begin{gather}\label{eqn:sdedit}
x_{\lfloor S t_0 \rfloor} = \sqrt{\tilde{\alpha}_{\lfloor S t_0 \rfloor}} x_{0}^{\text{ref}} + \sqrt{1 - \tilde{\alpha}_{\lfloor S t_0 \rfloor}} \epsilon
\end{gather}
We proceed with reverse diffusion starting from the spliced image at timestep $\lfloor S t_0 \rfloor$ and iterating Equation~\ref{eqn:reverse-step} until a sample is generated at timestep $0$. We discuss the interpretation and selection of the new hyperparameter $t_0$ in Section~\ref{sec:stackable}. Generation is guided with a prompt that includes the fine-tuned token corresponding to the class of the spliced image (see Appendix~\ref{app:hyperparams} for details of the prompts used in this work).
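In NumPy-like pseudocode, the splice-and-denoise procedure of Equation~\ref{eqn:sdedit} looks as follows. The `denoise_step` callable stands in for one text-guided reverse-diffusion step of the model and is an assumption, not the API of any particular library:

```python
import numpy as np

def sdedit_augment(x_ref, alphas_bar, denoise_step, t0=0.5, seed=None):
    """Insert a noised real image at timestep floor(S * t0), then run
    the remaining reverse-diffusion steps down to timestep 0.

    x_ref: real image, scaled as the model expects
    alphas_bar: cumulative schedule; alphas_bar[t - 1] is for timestep t
    denoise_step(x, t): one reverse step (hypothetical callable)
    """
    rng = np.random.default_rng(seed)
    S = len(alphas_bar)
    t_start = int(S * t0)
    if t_start == 0:  # t0 = 0 returns the real image unchanged
        return np.array(x_ref, copy=True)
    a = alphas_bar[t_start - 1]
    eps = rng.standard_normal(x_ref.shape)
    # splice: x_{floor(S t0)} = sqrt(a) x_ref + sqrt(1 - a) eps
    x = np.sqrt(a) * x_ref + np.sqrt(1.0 - a) * eps
    for t in range(t_start, 0, -1):
        x = denoise_step(x, t)
    return x
```

Small $t_0$ preserves most of the reference image; $t_0 = 1$ recovers generation from pure noise.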
\subsection{Balancing Real \& Synthetic Data}\label{sec:balancing}
Training models on images from generative models often presents a trade-off between the diversity and size of the synthetic dataset, and the risk of over-emphasizing spurious qualities present in the synthetic data \cite{DAGAN}. This is especially important considering that several recent papers have observed the benefit of curating orders of magnitude more synthetic data compared to the real data \cite{Tanaka19,Besnier20,DatasetGAN,StyleGANRender,RealGuidance}.
The common solution assigns different sampling probabilities to real and synthetic images to manage this imbalance \citep{RealGuidance}. We adopt a similar method for balancing real and synthetic data in Equation~\ref{eqn:balance}, where $\alpha$ denotes the probability that a synthetic image is present at the $l$-th location in the minibatch of images $B$.
\begin{gather}\label{eqn:balance}
i \sim \mathcal{U} (\{ 1, \hdots, N \}) , j \sim \mathcal{U} (\{ 1, \hdots, M \}) \\
B_{l + 1} \leftarrow B_{l} \cup \big\{ X_i \;\; \text{w.p.} \;\; (1 - \alpha) \;\; \text{else} \;\; \tilde{X}_{ij} \big\}
\end{gather}
Here $X \in \mathcal{R}^{N \times H\times W \times 3}$ denotes a dataset of $N$ real images, and $i \in \mathbb{Z}$ specifies the index of a particular image $X_i$. For each image, we generate $M$ augmentations, resulting in a synthetic dataset $\tilde{X} \in \mathcal{R}^{N \times M \times H \times W \times 3}$ with $N\times M$ image augmentations, where $\tilde{X}_{ij} \in \mathcal{R}^{H \times W \times 3}$ enumerates the $j$th augmentation for the $i$th image in the dataset. Indices $i$ and $j$ are sampled uniformly from the available $N$ real images and their $M$ augmented versions, respectively. Given indices $ij$, with probability $(1 - \alpha)$ a real image $X_i$ is added to the batch $B$, otherwise its augmented image $\tilde{X}_{ij}$ is added. Hyperparameter details are presented in Appendix~\ref{app:hyperparams}, and we find $\alpha=0.5$ to work effectively in all domains tested, which equally balances real and synthetic images.
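Equation~\ref{eqn:balance} corresponds to the following sampling loop (a sketch; array shapes follow the text, and the function name is ours):

```python
import numpy as np

def sample_batch(X, X_syn, batch_size, alpha=0.5, seed=None):
    """Fill a minibatch where each slot holds a synthetic augmentation
    with probability alpha, otherwise a real image.

    X: real images, shape (N, H, W, 3)
    X_syn: synthetic augmentations, shape (N, M, H, W, 3)
    """
    rng = np.random.default_rng(seed)
    N, M = X_syn.shape[:2]
    batch = []
    for _ in range(batch_size):
        i = rng.integers(N)           # uniform over the N real images
        if rng.random() < alpha:      # w.p. alpha take an augmentation
            batch.append(X_syn[i, rng.integers(M)])
        else:                         # w.p. (1 - alpha) take the real image
            batch.append(X[i])
    return np.stack(batch)
```

Setting $\alpha = 0.5$, as in the experiments, yields an even mix of real and synthetic images in expectation.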
\begin{figure}
\centering
\includegraphics[width=\linewidth]{assets/icml2023_dafusion_masking.pdf}
\vspace{-0.5cm}
\caption{Foreground and background augmentations. Our method can be applied independently to each region of the image and maintains an improvement over prior work. In this example of the boat class from the PASCAL dataset, we augment the boat (foreground) separately from the water and hills (background).}
\label{fig:inpaint}
\vspace{-0.1cm}
\end{figure}
\begin{figure*}[th]
\centering
\includegraphics[width=\linewidth]{assets/main-results.pdf}
\vspace{-0.5cm}
\caption{Few-shot classification performance. We evaluate DA-Fusion on three classification datasets and unilaterally outperform classic data augmentation and the current state-of-the-art method. We report validation accuracy as a function of the number of images in the observed training set in the top row, and the area under the curve (AUC) in the bottom row.}
\label{fig:main-results}
\end{figure*}
\subsection{Stackable Augmentations}\label{sec:stackable}
Having appropriately balanced the real and synthetic images, our goal becomes to maximize augmentation diversity. This goal is shared with standard data augmentation \cite{Perez17,Shorten19}, where simple transformations are typically composed, yielding more sophisticated and diverse data.
Despite the prevalence of composition in standard data augmentation, methods using generative models treat their ``augmentations'' like a secondary dataset rather than stackable transformations \cite{DAGAN,Tanaka19,Yamaguchi20,DatasetGAN,StyleGANRender,RealGuidance}. Inspired by the success of standard data augmentation, we propose stackable data augmentations based on generative models.
We define a sequence of augmentations $\mathcal{A}$ that consists of tuples of image transformations $D_i$ and respective activation probabilities $p_i \in [0, 1]$ that collectively sum to one. This definition supports multiple options for selecting which augmentations to use, and a complete empirical study of them is an exciting future direction. In this work we opt for a simple approach: randomly sampling one augmentation $D_{a}$ from $\mathcal{A}$ weighted by the probability $p_a$. Refer to Appendix \ref{app:hyperparams} for hyperparameters associated with this stacking approach.
\begin{gather}\label{eqn:stackable}
D_i : \mathcal{R}^{H \times W \times 3} \rightarrow \mathcal{R}^{H \times W \times 3} \\
\mathcal{A} = \big[ (D_1, p_1), (D_2, p_2), \hdots, (D_k, p_k) \big]
\end{gather}
To ensure stacking augmentations creates diversity in the samples, the image transformations $D_i$ should be sufficiently unique. Several options are possible here, and we consider a first-principles heuristic that transforms increasingly higher-level features of the image as $i$ increases. This heuristic ensures each $D_i$ operates with a different granularity on the image. We accomplish this with SDEdit \cite{SDEdit}, and insert a noisy source image at a fraction $t_0 = i/k$ through the diffusion process to guide generation. Source images are taken from $X$, observed training images. Previous work in \citet{RealGuidance} shows the effectiveness of SDEdit for creating synthetic images, and our work turns this methodology into a stackable data augmentation.
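Sampling one transformation $D_a$ from $\mathcal{A}$ weighted by $p_a$, with $D_i$ applying SDEdit at strength $t_0 = i/k$, can be sketched as follows. The `apply_sdedit` callable is a placeholder for the image-to-image procedure; uniform weights $p_i = 1/k$ are an assumption for illustration:

```python
import random

def make_stackable(apply_sdedit, k=4, seed=None):
    """Build A = [(D_1, p_1), ..., (D_k, p_k)], where D_i runs SDEdit
    at strength t0 = i / k with weight p_i = 1 / k, and return a
    sampler that draws one D_a per call."""
    rnd = random.Random(seed)
    A = [(lambda x, t0=i / k: apply_sdedit(x, t0), 1.0 / k)
         for i in range(1, k + 1)]

    def augment(x):
        ops, probs = zip(*A)
        D = rnd.choices(ops, weights=probs, k=1)[0]
        return D(x)

    return augment
```

With $k = 4$ this draws strengths from $\{0.25, 0.5, 0.75, 1.0\}$, matching the $t_0$ values used in the experiments.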
\subsection{Object-Centric Augmentations}\label{sec:object}
Our approach thus far
applies to all images, regardless of their class and content. However, real images often contain multiple classes with different visual invariances. Considering these visual differences, applying transformations at the object level has an appealing property.
While traditional data augmentations violate this principle \cite{Perez17,Shorten19}, ours permits independent transformations that separate objects and backgrounds. To accomplish this, we leverage inpainting \cite{RePaint,Palette}. Given a pixelwise mask $v \in [0, 1]^{H \times W}$ specifying which image content to modify, we insert content into the diffusion process at locations where $v_{ij}$ is close to one. In particular, after every timestep $t$ in the reverse diffusion process, we reassign the current sample $x_t$ as follows, where $\eta \sim \mathcal{N}(0, I)$ is unit Gaussian noise with the same shape as $x_t$.
\begin{gather}\label{eqn:inpaint}
x_{t} \leftarrow \big(1 - v\big) \circ x_{t} + v \circ \big( \sqrt{\tilde{\alpha}_t} x_0^{\text{ref}} + \sqrt{1 - \tilde{\alpha}_t} \eta \big)
\end{gather}
The source image to inpaint is given by $x_0^{\text{ref}}$, and the multiplication $\circ$ represents the elementwise product of tensors. Before splicing, $x_{t}$ is taken from Equation~\ref{eqn:reverse-step} and represents standard reverse diffusion applied to the entire image. Figure~\ref{fig:inpaint} illustrates how inpainting facilitates independent transformations per object. Equipped with our flexible data augmentation strategy that respects objects, stacks with other augmentations, and applies to all images regardless of class and content, we proceed with evaluation.
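A minimal sketch of the splicing step in Equation~\ref{eqn:inpaint}; the schedule value $\tilde{\alpha}_t$ and the reverse-diffusion update producing $x_t$ are assumed to be given, and the function name is ours.

```python
import numpy as np

def inpaint_splice(x_t, x0_ref, v, alpha_bar_t, rng):
    """Reassign x_t after a reverse-diffusion step (sketch of Eq. inpaint).

    Where the mask v is near one, x_t is replaced by a noised copy of the
    reference image x0_ref; elsewhere the generated content is kept.
    """
    eta = rng.standard_normal(x_t.shape)       # unit Gaussian, same shape as x_t
    v3 = v[..., None]                          # broadcast (H, W) mask over channels
    noisy_ref = np.sqrt(alpha_bar_t) * x0_ref + np.sqrt(1.0 - alpha_bar_t) * eta
    return (1.0 - v3) * x_t + v3 * noisy_ref

rng = np.random.default_rng(0)
x_t = rng.standard_normal((8, 8, 3))           # current reverse-diffusion sample
x0_ref = rng.standard_normal((8, 8, 3))        # source image to splice in
v = np.zeros((8, 8)); v[2:6, 2:6] = 1.0        # splice only the central patch
spliced = inpaint_splice(x_t, x0_ref, v, alpha_bar_t=0.5, rng=rng)
```

Pixels outside the mask are exactly the reverse-diffusion output, so the splice only touches the region selected by $v$.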
\subsection{Evaluating Few-Shot Learning}\label{sec:results-main}
While leafy spurge is confirmed to lie outside the vocabulary of the pretrained diffusion model, this is not true for Pascal \cite{PASCAL} and COCO \cite{COCO}, which have common objects like boats and airplanes. To properly evaluate few-shot classification performance with these datasets, we emphasize the need to \textit{delete} knowledge of these objects from the weights of the generative model. This remains an active area of research~\cite{ROME}, so we hold concept deletion out-of-scope for this paper. Instead, we require that all generative models use a class agnostic prompt that does not contain the class name. This is important because generative models pre-trained at scale may have seen thousands of examples of common objects, and using a class name with an embedding trained on this large pool will not result in a proper few-shot setting.
This evaluation protocol treats all classes as novel visual concepts that lie outside the vocabulary of the generative model, and we follow Section~\ref{sec:modelling} for adapting the model to each class.
\paragraph{Experimental Details} In this experiment, we test few-shot classification with three data augmentation strategies. The first, referred to as "Baseline" in the remainder of this paper, employs no synthetic images. This baseline implements a standard data augmentation strategy that uses random horizontal flips for COCO and Pascal, with additional random vertical flips for Spurge, each with flip probability $0.5$. The Real Guidance baseline is based on \citet{RealGuidance}, and uses SDEdit on real images with $t_0 = 0.5$. Hyper-parameters shared between Real Guidance and our method have equal values to ensure fairness. Real Guidance is given the class-agnostic prompt "a photo" whereas our method is prompted with "a photo of a ClassX" where ClassX represents a new token fine-tuned according to Section~\ref{sec:modelling}.
Each real image is augmented $M$ times, and a ResNet50 classifier pre-trained on ImageNet is fine-tuned on a mixture of real and synthetic images sampled as discussed in Section~\ref{sec:balancing}. We fine-tune the final linear layer of the classifier for $10,000$ steps with a batch size of $32$ and the Adam optimizer with learning rate $0.0001$. We calculate validation metrics every 200 steps and report the epoch with highest accuracy. Solid lines in plots represent the mean, and error bars denote the 68\% confidence interval over 8 independent random trials. An overall score is calculated that aggregates performance on all datasets after normalizing performance using $y_{i}^{d} \leftarrow (y_{i}^{d} - y_{\text{min}}^{d}) / ( y_{\text{max}}^{d} - y_{\text{min}}^{d} )$, where $d$ represents the dataset, $y_{\text{max}}^{d}$ is the maximum performance for any trial of any method on that dataset, and $y_{\text{min}}^{d}$ is defined similarly.
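The per-dataset normalization used for the overall score is a standard min-max rescaling; a minimal version:

```python
def normalize_per_dataset(scores):
    """Min-max normalize the scores for one dataset:
    y <- (y - y_min) / (y_max - y_min), where the min and max are taken
    over all trials of all methods on that dataset."""
    y_min, y_max = min(scores), max(scores)
    return [(y - y_min) / (y_max - y_min) for y in scores]

# Toy example: accuracies of several method/trial combinations on one dataset.
normed = normalize_per_dataset([0.42, 0.58, 0.50])
```

After this rescaling every dataset contributes on the same $[0, 1]$ scale, so averaging across datasets gives the aggregate score.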
\paragraph{Interpreting The Results}
Figure~\ref{fig:main-results} shows the results. We observe a consistent improvement in validation accuracy, by as much as +10 percentage points on the Pascal and COCO datasets, compared to the standard data augmentation baseline. DA-Fusion exceeds the performance of Real Guidance~\cite{RealGuidance} in all domains while using the same hyperparameters, without being given any prior information about class names. In this setting, Real Guidance performs comparably to the baseline, which suggests that the gains of Real Guidance may stem from information provided by the class name. This experiment shows that DA-Fusion improves few-shot learning and generalizes to out-of-vocabulary concepts. In the following sections, we ablate these results to understand how important each part of the method is to these gains in performance.
\subsection{How Important Is Stacking?}\label{sec:results-stacking}
Our goal in this section is to understand what fraction of gains are due to the stacking methodology of Section~\ref{sec:stackable}. We employ the same experimental settings
as in Section~\ref{sec:results-main}, and run an additional version of our method without stacking $(k = 1)$ and with $t_0 = 0.5$, following the settings previously used with Real Guidance. In Figure~\ref{fig:stacking-results} we report the improvement in area under the curve (AUC) versus standard data augmentation for our method. These results show that both versions of our method outperform the baseline, and stacking improves our method in all domains, leading to an overall improvement of 51\%.
\subsection{Robustness To Object Masks}\label{sec:results-mask}
Our previous results show DA-Fusion is an effective data augmentation strategy when applied to whole images. However, real images often contain several objects with different visual invariances. For example, in a photo of a cat sitting on a table, we may aim to change only the breed of the cat or the brand of the table, which are defined independently. Augmentations based on generative models can achieve this behavior using inpainting \cite{Palette,RePaint}, and stable performance across different masks is a desirable trait for any such method. In this section we test the robustness of DA-Fusion when various masks are used to constrain where image changes occur.
\begin{figure*}[tbhp]
\centering
\includegraphics[width=\linewidth]{assets/masking-results.pdf}
\vspace{-0.5cm}
\caption{Ablation for masking. We evaluate our method when applied to foregrounds and backgrounds separately. We report the improvement in few-shot classification accuracy on a validation set for our method compared to a Real Guidance baseline using object segmentation masks from the Pascal and COCO datasets. Results show a consistent improvement over the baseline when masks are used.}
\label{fig:masking-results}
\end{figure*}
\paragraph{Experimental Details} We test robustness with two mask types: foreground object masks, and background masks. In the foreground type, a mask around the focal object in each image is used for inpainting with our pre-trained diffusion model. The mask is dilated by $16$ pixels before use for inpainting to ensure the focal object is fully contained within. Background type masks are generated by inverting the foreground masks for each image. We test three versions of our method that augment the whole image, only the foreground, or just the background. Performance represents the gain in few-shot classification accuracy over Real Guidance when training a classifier on samples from our masked augmentation. To ensure fairness, we omit stacking in this experiment and share all hyper-parameters with Real Guidance, including the mask used for inpainting, and the strength value $t_0 = 0.5$ from Section~\ref{sec:results-main}. As before, solid lines in plots represent the mean, and error bars denote the 68\% confidence interval over 8 independent random trials.
\paragraph{Interpreting The Results}
Figure~\ref{fig:masking-results} shows that our method consistently outperforms prior work given masks for objects and backgrounds. Plots are shifted so that Real Guidance performance corresponds to a gained accuracy of~$0$, and positive gains represent an improvement over the corresponding masked Real Guidance baseline. Interestingly, performance gains are lower for masked augmentations than for whole images, suggesting there is room for improving DA-Fusion in the masked case. Additionally, the common ordering in the plots suggests that DA-Fusion is more effective at modifying backgrounds than objects. These results suggest our method has greater flexibility than prior work and produces effective synthetic images when directed with masks.
\subsection{Robustness To The Mixing Ratio}\label{sec:results-ablation}
We next conduct an ablation
to understand the sensitivity of our method to the mixing parameter $\alpha$ that controls the sampling ratio of real and synthetic images. We chose $\alpha = 0.5$ throughout this paper for simplicity, as this value is at the center of the hyper-parameter's range ($\alpha \in [0, 1]$). Insensitivity to the particular value of $\alpha$ is a desirable trait for any data augmentation method using generative models, as it simplifies hyper-parameter tuning. We test sensitivity to $\alpha$ by comparing runs of DA-Fusion with different $\alpha$ values. We report the gained accuracy over Real Guidance with the same $\alpha$ in Figure~\ref{fig:alpha-results}. These results show stability as $\alpha$ varies, and that $\alpha = 0.7$ performs marginally better than $\alpha = 0.5$, which suggests our method improves synthetic image quality, since sampling synthetic images more often improves accuracy.
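A one-line sampler captures the role of the mixing parameter. This is a sketch of the behavior described above, with $\alpha$ taken as the probability of drawing a synthetic image; the exact sampler of Section~\ref{sec:balancing} may differ in detail.

```python
import random

def draw_batch(real_pool, synthetic_pool, alpha, batch_size, rng):
    """Sample a training batch in which each image is synthetic with
    probability alpha and real otherwise (sketch)."""
    return [rng.choice(synthetic_pool if rng.random() < alpha else real_pool)
            for _ in range(batch_size)]

rng = random.Random(0)
batch = draw_batch(["real"], ["synthetic"], alpha=0.5, batch_size=4000, rng=rng)
frac_synthetic = batch.count("synthetic") / len(batch)   # ~ alpha for large batches
```

Over many draws the synthetic fraction concentrates around $\alpha$, which is why sweeping $\alpha$ directly varies the real/synthetic exposure of the classifier.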
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{assets/alpha-results.pdf}
\vspace{-0.5cm}
\caption{Ablation for mixing ratio. We run our method with three different mixing ratios $\alpha \in \{0.3, 0.5, 0.7\}$ and report the improvement in few-shot classification accuracy over a Real Guidance baseline using the same $\alpha$. The plot shows our method is robust to this hyper-parameter and outperforms prior work for every $\alpha$.}
\label{fig:alpha-results}
\vspace{-0.3cm}
\end{figure}
\section{Introduction}\label{sec:introduction}
\input{contents/introduction.tex}
\section{Related Work}\label{sec:related}
\input{contents/related.tex}
\section{Background}\label{sec:background}
\input{contents/background.tex}
\section{Data Augmentation With Diffusion}\label{sec:method}
\input{contents/method.tex}
\section{Results}\label{sec:results}
\input{contents/results.tex}
\section{Discussion}\label{sec:discussion}
\input{contents/discussion.tex}
\section{Acknowledgements}\label{sec:acknowledgements}
\input{contents/acknowledgements.tex}
\newcommand{\sectiono}[1]{\section{#1}\setcounter{equation}{0}}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\newcommand{\rangle}{\rangle}
\newcommand{\langle}{\langle}
\newcommand{\partial}{\partial}
\newcommand{{\Phi}}{{\Phi}}
\newcommand{{Q_B}}{{Q_B}}
\newcommand{{\eta_0}}{{\eta_0}}
\newcommand{{{A}}}{{{A}}}
\newcommand{\big\rangle\big\rangle}{\big\rangle\big\rangle}
\newcommand{\Bigl\langle\Bigl\langle}{\Bigl\langle\Bigl\langle}
\newcommand{\Bigr\rangle\Bigr\rangle}{\Bigr\rangle\Bigr\rangle}
\begin{document}
{}~
\hfill\vbox{\hbox{hep-th/0506077}\hbox{MIT-CTP-3588}
}\break
\vskip 3.0cm
\centerline{\Large \bf A Closed String Tachyon Vacuum ?}
\vspace*{10.0ex}
\centerline{\large Haitang Yang
and Barton Zwiebach}
\vspace*{7.0ex}
\vspace*{4.0ex}
\centerline{\large \it Center for Theoretical Physics}
\centerline{\large \it
Massachusetts Institute of Technology}
\centerline{\large \it Cambridge,
MA 02139, USA}
\vspace*{1.0ex}
\centerline{hyanga@mit.edu, zwiebach@lns.mit.edu}
\vspace*{10.0ex}
\centerline{\bf Abstract}
\bigskip
\smallskip
In bosonic closed string
field theory the ``tachyon potential" is a potential
for the tachyon, the dilaton, and an infinite set of
massive fields.
Earlier computations of the potential did not include the dilaton
and the critical point formed by the quadratic and cubic interactions
was destroyed by the quartic tachyon term.
We include the dilaton contributions to the potential
and find that a
critical point survives and appears to become more shallow.
We are led to consider the
existence of a closed string tachyon vacuum, a critical point
with zero action that represents
a state where space-time ceases to be dynamical.
Some evidence for this interpretation is found
from the study of the coupled metric-dilaton-tachyon
effective field equations, which exhibit
rolling solutions in which the dilaton
runs to strong coupling and the Einstein metric undergoes
collapse.
\vfill \eject
\baselineskip=16pt
\vspace*{10.0ex}
\tableofcontents
\sectiono{Introduction and summary}
In the last few years the instabilities associated with open
string tachyons have been studied extensively and have become
reasonably well understood~\cite{reviews}. The instabilities
associated with closed string tachyons have proven to be harder to
understand. For the case of localized closed string tachyons --
tachyons that live on subspaces of spacetime -- there are now
plausible conjectures for the associated instabilities and a fair
amount of circumstantial evidence for
them~\cite{localized,Okawa:2004rh,Bergman:2004st,Adams:2005rb,
Suyama:2005hw}.
The bulk tachyon of the closed bosonic string is the
oldest known closed string tachyon. It remains
the most mysterious one and
there is no convincing analysis of the
associated instability.
The analogy with open strings, however, suggests a fairly
dramatic possibility. In open bosonic string theory, in the
background of a space-filling D-brane, the tachyon potential
has a critical point that represents spacetime without
the D-brane and thus without physical
open string excitations. In an analogous closed string
tachyon vacuum one would expect no closed string excitations.
Without gravity excitations spacetime ceases to be dynamical
and it would seem that, for all intents and purposes,
it has disappeared.
There has been no consensus that such a closed string
tachyon vacuum exists. In fact, no analysis of the closed
string tachyon potential (either in the CFT approach or in the SFT approach)
has provided concrete evidence of a vacuum with non-dynamical spacetime.
Since the analogous open string tachyon vacuum shows
up quite clearly in the open string field theory computation
of the potential it is natural to consider the corresponding
calculation in closed string field
theory (CSFT)~\cite{Zwiebach:1992ie,Saadi:tb}.
The quadratic and cubic terms
in the closed string tachyon potential are well
known~\cite{Kostelecky:1990mi,Belopolsky:1994sk}:
\begin{equation}
\label{cubpot}
\kappa^2\mathbb{V}^{(3)}_0 = - t^2 +
\frac{6561}{4096}\,t^3 \,, \quad (\alpha' = 2)\,.
\end{equation}
These terms define a critical point analogous to
the one that turns out to represent the tachyon vacuum in the open
string field theory. In open string field theory higher
level computations make the vacuum about 46\% deeper. Since CSFT
is nonpolynomial, it is natural to investigate the effect of
the quartic term in the potential. This term was
found to be~\cite{Belopolsky:1994bj,Moeller:2004yy}
\begin{equation}
\label{v4efft}
\kappa^2V^{(4)}_0 =-3.0172\, t^4 \,.
\end{equation}
This term is so large and negative that $\mathbb{V}^{(3)}_0+ V_0^{(4)}$ has no
critical point. In fact, the quartic term in the {\em effective}
tachyon potential (obtained by integrating out massive fields)
is even a bit larger~\cite{Belopolsky:1994bj}.
The hopes of identifying a reliable critical
point in the closed string tachyon potential were
dashed\footnote{In the effective open string tachyon potential
a negative quartic term
also destroys the cubic critical point.
Nevertheless,
the critical point can be gleaned using Pad\'e approximants~\cite{Taylor:2002fy}.
For closed strings, however, the quartic term is too large: for a potential
$v(t) = v_2 t^2 + v_3 t^3 + v_4 t^4$, with $v_2, v_4<0$, the approximant
formed by the ratio of a cubic and a linear
polynomial fails to give a critical point when $v_2 v_4 \geq v_3^2$.}.
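A quick numerical check (ours, not from the cited analyses) makes this concrete: for $\kappa^2 V = v_2 t^2 + v_3 t^3 + v_4 t^4$ with the coefficients quoted above, the cubic truncation has a nonzero critical point, while with the quartic term the nonzero critical points would solve $2v_2 + 3v_3 t + 4v_4 t^2 = 0$, whose discriminant is negative.

```python
# Coefficients of kappa^2 V = v2 t^2 + v3 t^3 + v4 t^4 from the text
v2, v3, v4 = -1.0, 6561.0 / 4096.0, -3.0172

# Cubic truncation: dV/dt = 2 v2 t + 3 v3 t^2 = 0 has the nonzero root
t_star = -2.0 * v2 / (3.0 * v3)                       # ~ 0.4162

# With the quartic term, nonzero roots solve 2 v2 + 3 v3 t + 4 v4 t^2 = 0;
# a negative discriminant means no real root, hence no critical point.
disc = (3.0 * v3) ** 2 - 4.0 * (4.0 * v4) * (2.0 * v2)
```

The discriminant comes out around $-73$, confirming that the quartic tachyon term destroys the cubic critical point.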
Recent developments inform our present analysis. The tachyon potential
must include all fields that are sourced by the zero-momentum
tachyon. As discussed in~\cite{Sen:1999xm}, this includes massless
closed string states that are built from ghost oscillators, in particular,
the zero-momentum
ghost-dilaton state $(c_1 c_{-1} - \bar{c}_1 \bar{c}_{-1})|0\rangle$.
The search for a critical point
cannot be carried out consistently
without including the ghost dilaton. Computations
of quartic vertices coupling dilatons, tachyons, and other massive
fields are now possible due to the work of Moeller~\cite{Moeller:2004yy}
and have been done to test the marginality of matter and dilaton
operators~\cite{Yang:2005iu,Yang:2005ep}.
As we explain now, ghost-dilaton couplings to the tachyon restore
the critical point in the potential.
The key effect can be understood from the
cubic and quartic couplings
\begin{equation}
\kappa^2 V (t, d) = - {27\over 32}\, \,t\, d^2\,+\,3.8721 \,t^3 d + \ldots \,\,.
\end{equation}
The cubic coupling plays no role as long as we only consider cubic
interactions: $d$ can be set consistently to zero. The quartic
coupling is linear in $d$. Once included,
the equation of motion for the dilaton can
only be satisfied if the dilaton acquires an expectation value.
Solving for the dilaton one finds $d = 2.2944\, t^2$ and
substituting back,
\begin{equation}
\kappa^2V (t, d) = 4.4422\, t^5 + \ldots
\end{equation}
This positive quintic term suffices to compensate the effects
of (\ref{v4efft}) and restores the critical point. Our computations
include additional couplings and the effect of massive fields as well.
The critical point persists and may be reliable, although more work is
needed to establish this convincingly.
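The elimination of the dilaton sketched above is easy to verify; this short computation (ours) reproduces the quoted coefficients.

```python
# kappa^2 V(t, d) = a t d^2 + b t^3 d, with the couplings from the text
a, b = -27.0 / 32.0, 3.8721

# dV/dd = 2 a t d + b t^3 = 0  =>  d = -(b / (2 a)) t^2
d_coef = -b / (2.0 * a)                  # coefficient of t^2, expect ~ 2.2944

# Substituting back: kappa^2 V = (a d_coef^2 + b d_coef) t^5
quintic = a * d_coef ** 2 + b * d_coef   # coefficient of t^5, expect ~ 4.4422
```

The positive $t^5$ coefficient is what compensates the large negative quartic term and restores the critical point.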
In order to interpret the critical point
we raise and answer a pair of questions.
The ghost-dilaton has a
positive expectation value at the critical point.
Does this correspond to stronger or weaker
string coupling ? We do a detailed comparison of quadratic and cubic
terms in the closed string field theory action and in the low-energy
effective field
theory action. The conclusion is that the positive
dilaton expectation value corresponds
to {\em stronger} coupling. In our solution the ghost-dilaton is excited
but the scalar operator $c\bar c\, \partial X \cdot \bar\partial X $, sometimes
included in the dilaton vertex operator, is not. We ask: Is the
string metric excited? Is the Einstein metric excited? These questions
are only well-defined at the linearized level, but the answers are clear:
the string metric does {\em not} change, but the Einstein metric
does. We take the opportunity to explain the relations between
the four kinds of ``dilatons" that are used in the literature: the
ghost-dilaton,
the matter-dilaton, the dilaton, and the dilaton of the older literature.
It is noted that one cannot define unambiguously
a dilaton vertex operator unless one specifies which
metric is left invariant; conversely,
the metric vertex operator is only determined once one specifies which dilaton
is left invariant.
In a companion paper~\cite{HZR} we attempted to gain insight
into the tachyon vacuum by considering the
rolling solutions\footnote{Rolling solutions have long been considered using
Liouville field theory to provide conformal invariant
sigma model with spacetime background
fields that typically include a linear dilaton and a constant string
metric~\cite{Tseytlin:1990mz,Strominger:2003fn,Kluson:2003xn,DaCunha:2003fm}.
} of a low-energy effective action for
the string metric $g_{\mu\nu}$, the tachyon $T$, and the dilaton
$\Phi$:
\begin{equation}
\label{sigma_action}
S_\sigma=\frac{1}{2\kappa^2 }\int \, d^D x
\sqrt{- g}\, e^{-2\Phi}\Bigl(R+4 (\partial_\mu\Phi)^2 -
(\partial_\mu T)^2 -2 V(T)\,\Bigr)\,.
\end{equation}
This action, suggested by the beta functions of
sigma models with background fields~\cite{sdas},
is expected to capture at least
some of the features of string theory solutions.
The potential
is tachyonic: $V(T) = -{1\over 2} m^2 T^2 + \mathcal{O}(T^3)$,
but is otherwise left undetermined. We found that solutions in
which the tachyon begins the rolling process always have
{\em constant} string metric for all times -- consistent with the type of
the SFT critical point. The dilaton, moreover, grows
in time throughout the evolution -- consistent with the larger
dilaton vev in the SFT critical point.
Rather generally, the solution becomes
singular in finite time: the dilaton runs to infinity and the string
coupling becomes infinite. Alternatively, the Einstein metric crunches
up and familiar spacetime no longer exists. This
seems roughly consistent with the idea that the tachyon vacuum
does not have a fluctuating spacetime.
Perhaps the most subtle point concerns the value
of the on-shell action. In the open string field theory computation
of the tachyon potential, the value of the action (per unit spacetime
volume) is energy density. The tachyon
conjectures are in fact formulated in terms of energy
densities at the perturbative and the non-perturbative vacuum~\cite{reviews}.
Since the tree-level cosmological constant in closed
string theory is zero, the value of the action at the perturbative
closed string vacuum is zero. We ask: What is the value of the
potential, or action (per unit volume) at the critical point ?
The low-energy action (\ref{sigma_action}) suggests a surprising
answer. Consider the associated equations of motion:
\begin{equation}
\label{smeqmotion}
\begin{split}
R_{\mu\nu} +
2 \nabla_\mu\nabla_\nu \Phi - (\partial_\mu T) (\partial_\nu T) &= 0\,, \\[0.5ex]
\nabla^2 T - 2 (\partial_\mu \Phi) (\partial^\mu T) - V' (T) &= 0\,, \\[0.5ex]
\nabla^2 \Phi - 2 (\partial_\mu \Phi)^2 - V(T) &= 0\,.
\end{split}
\end{equation}
If the fields acquire {\em constant} expectation values we can satisfy
the tachyon equation if the
expectation value $T_*$ is a critical point of the potential:
$V'(T_*) =0$. The dilaton equation imposes an
additional constraint: $V(T_*)=0$, the potential must itself
vanish. This is a reliable constraint that follows from a simple fact:
in the action the dilaton appears without derivatives only
as a multiplicative factor. This fact remains true after
addition of $\alpha'$ corrections of all orders. It may
be that $V(T)$ has a critical point $T_0$ with $V(T_0) <0$,
but this cannot be the tachyon vacuum. The effective field
equations imply that a vacuum with spacetime
independent expectation values has {\em zero} action.
The
action (\ref{sigma_action}) can be evaluated on-shell
using the equations of motion. One finds
\begin{equation}
\label{sigma_action_os}
S_{on-shell}=\frac{1}{2\kappa^2 }\int \, d^{D} x
\sqrt{- g}\, e^{-2\Phi}\bigl( -4 V(T)\,\bigr)\,.
\end{equation}
In rolling solutions the action density changes
in time but, as $\Phi\to\infty$
at late times the action density
goes to zero~\cite{HZR}. This also
suggests that the tachyon vacuum is a critical point with zero
action.
In Figure~\ref{tpot_conj} we present the likely features of the
tachyon potential. The unstable perturbative vacuum $T=0$
has zero cosmological constant, and so does the tachyon vacuum
$T=\infty$. The infinite value of $T$ is suggested
by the analogous result in the effective open string theory
tachyon potential (see conclusions). In SFT the tachyon vacuum
appears for finite values of the fields, but
the qualitative features would persist.
The potential is qualitatively in the class used
in cyclic universe models~\cite{Steinhardt:2004gk}.
\begin{figure}[!ht]
\leavevmode
\begin{center}
\epsfysize=5.8cm
\epsfbox{tpot.eps}
\end{center}
\caption{\small A sketch of a closed string tachyon potential consistent
with present evidence. The perturbative
vacuum is at $T=0$. The closed string tachyon vacuum would be the critical
point with zero cosmological term, shown here at $T\to \infty$ (in CSFT
this point corresponds to finite tachyon vev).
A critical point with negative cosmological constant cannot
provide a spacetime independent tachyon vacuum.}
\label{tpot_conj}
\end{figure}
In our calculations we find some evidence that the action
density, which is negative, may go to zero as we increase
the accuracy of the calculation. To begin with, the value $\Lambda_0$
of the action density at the critical point of the cubic
tachyon potential (\ref{cubpot})
may be argued to be rather small. It is a cosmological term about
seventy times smaller than the ``canonical" one associated with
$D=2$ non-critical string theory (see \cite{Okawa:2004rh}, footnote 5).
Alternatively, $\Lambda_0$ is only about 4\% of the value that would
be obtained using the on-shell coupling of three tachyons
to calculate the cubic term. The inclusion of
cubic interactions of massive fields makes the action density
about 10\% more negative. This shift, smaller than the
corresponding one in open string field theory,
is reversed once we include the dilaton quartic terms. In the most
accurate computation we have done, the action density is down to
60\% of~$\Lambda_0$. Additional computations are clearly in order.
\smallskip
As a by-product of our work, we investigate
large dilaton deformations in CSFT. For
ordinary marginal deformations the description reaches
an obstruction for some finite critical value of the string field marginal
parameter~\cite{Sen:2000hx,Sen:2004cq}. The critical value is stable
under level expansion, and
the potential for the marginal field (which should vanish for infinite level)
is
small.
For the dilaton, however, the lowest-order obstruction
is not present~\cite{Yang:2005ep}. We carry this analysis to higher
order and no reliable obstructions are found:
critical values of the dilaton jump wildly with
level and appear where the dilaton potential
is large and cannot be trusted. This result strengthens
the evidence that CSFT can describe backgrounds with arbitrarily large
variations
in the string coupling. If the infinite string coupling limit
is also contained in the configuration space it may be possible
to define M-theory using type IIA superstring field theory.
\medskip
Let us briefly describe the contents of this paper.
In section 2 we
reconsider the universality arguments~\cite{Sen:1999xm} that require the
inclusion of the ghost-dilaton, exhibit a world-sheet parity symmetry
that allows a sizable truncation of the universal space,
and note that universality
may apply in circumstances significantly more general than originally
envisioned~\cite{SenZwiebach}. Our computational strategy
for the tachyon potential,
motivated by the results of~\cite{Yang:2005iu,Yang:2005ep},
goes as follows. We compute {\em all}
quadratic and cubic terms in the
potential including fields up to level four. We then begin
the inclusion of quartic terms and obtain complete results up
to quartic interactions of total level four.
The results make it plausible that a critical point exists and that the
value of the action density decreases in magnitude as the
accuracy improves. In section 3 we find the linearized
relations between the metric, dilaton, and tachyon closed string fields
and the corresponding fields in the sigma-model approach to
string theory. These relations allow us to establish that
the dilaton vev at the critical point represents an increased string
coupling and that the string field at the critical point does not
have a component along the vertex operator for the
string metric. We discuss the vertex operators
associated with the various definitions of the dilaton,
determine the nonlinear field relations between
the string field theory and effective field theory dilatons and tachyons
to quadratic order and at zero-momentum, and examine large dilaton
deformations.
In the concluding section we discuss additional considerations
that suggest the existence of the tachyon vacuum. These come from
non-critical string theory, p-adic strings, and sigma model
arguments. Finally, the
details of the nontrivial computations of quartic couplings
are given in the Appendix.
\sectiono{Computation of the tachyon potential}
In this section we present the main computations of this paper.
We begin by introducing the string field relevant for the
calculation of the tachyon potential, giving
a detailed discussion of universality. This string field contains
the tachyon, at level zero, the ghost-dilaton, at level two, and
massive fields at higher even levels. We then give the quadratic
and cubic couplings for the string field restricted to level four
and calculate the critical point.
Finally, we give the quartic couplings at level zero, two, and
four. The critical point survives the inclusion of quartic
interactions and becomes more shallow -- consistent with the
conjecture that the tachyon vacuum has zero action.
The computations use the closed string field
action~\cite{Zwiebach:1992ie,Saadi:tb,Okawa:2004rh}, which takes the form
\begin{equation}
\label{csft_action}
S=-\frac{2}{\alpha'}\Big(\frac{1}{2} \langle \Psi|c_0^-\,
Q|\Psi\rangle +\frac{\kappa}{3!} \{ \Psi,
\Psi\,,\Psi\} +\frac{\kappa^2}{4!} \{
\Psi,\,\Psi,\Psi,\Psi\} +\cdots\Big).
\end{equation}
The string field $\Psi$
lives on $\mathcal{H}$,
the ghost number two state space
of the full CFT restricted to the
subspace of states that satisfy
\begin{equation}
\label{subcond}
(L_0 - \bar L_0) |\Psi\rangle = 0 \, \quad \hbox{and}\quad
(b_0 - \bar b_0) |\Psi\rangle = 0 \,.
\end{equation}
The BRST operator is $Q= c_0L_0 + \bar c_0 \bar L_0 + \dots$, where
the dots denote terms independent of $c_0$ and of~$\bar c_0$.
Moreover, $c_0^\pm = {1\over 2} (c_0 \pm \bar c_0)$,
and we normalize correlators using $ \langle 0| c_{-1}\bar c_{-1} c_0^- c_0^+ c_1
\bar c_1|0\rangle=1$. All spacetime coordinates are imagined
compactified with the
volume of spacetime set equal to one.
\subsection{Tachyon potential universality and the ghost-dilaton}
The universality of the closed string tachyon potential was
briefly discussed in~\cite{Sen:1999xm}, where it was also
noted that the ghost number
two universal string field that contains the tachyon
should include the zero-momentum ghost-dilaton
state $(c_{1} c_{-1} - \bar c_1 \bar c_{-1} ) |0\rangle$. Here
we review the universality argument and extend it slightly, offering
the following observations:
\begin{itemize}
\item The ghost-dilaton must be included because closed string
field theory is not cubic.
\item A world-sheet parity symmetry of closed
string field theory can be used to restrict
the universal subspace.
\item The arguments of~\cite{Sen:1999xm} do not apply directly
to general CFT's, linear dilaton backgrounds, for example.
If the closed string background is defined by a general matter
CFT, solutions on the universal subspace may still be solutions,
but there is no tachyon potential~\cite{SenZwiebach}.
\end{itemize}
The original idea in universality is to produce a subdivision
of all the component fields of the string field theory
into two disjoint sets, a set $\{ t_i\}$
that contains the zero-momentum tachyon
and a set $\{u_a\}$ such that the string field action
$S(t_i, u_a)$ contains no term with a single $u$-type field.
It is then consistent to search
for a solution of the equations of motion that assumes $u_a=0$
for all $a$.
To produce the desired set $\{t_i\}$ we assume that the matter
CFT is such that $X^0$ is the usual negative-metric
field with associated conserved momentum $k_0$ and the rest
of the matter CFT is unitary.
The state space $\mathcal{H}$ (see (\ref{subcond})) is then divided
into three disjoint vector subspaces
$\mathcal{H}_1, \mathcal{H}_2,$ and $\mathcal{H}_3$. One has
$\mathcal{H}_i = \mathcal{M}_i \otimes |\mathcal{G}\rangle$,
where $|\mathcal{G}\rangle$ denotes a state built with ghost
and antighost oscillators only and
$\mathcal{M}_1, \mathcal{M}_2,$ and $\mathcal{M}_3$ are disjoint subspaces
of the matter CFT whose union gives the total matter CFT
state space:
\begin{eqnarray}
\label{sthesplitvematt}
\mathcal{M}_1 : && \hbox{the~} SL(2,C) \hbox{~vacuum} ~
|0\rangle ~\hbox{and descendents} ,\nonumber\\
\mathcal{M}_2 : && \hbox{states with~} k_0 \not=0, \\
\mathcal{M}_3 : && \hbox{primaries with}~ k_0=0
~\hbox{but different from}~|0\rangle ~\hbox{and descendents} \nonumber \,.
\end{eqnarray}
In the above, primary and descendent refers to the matter Virasoro operators.
Note that the primaries in $\mathcal{M}_3$
have positive conformal dimension. The BRST operator preserves the
conditions (\ref{subcond}), and since it
is composed of ghost oscillators
and matter Virasoro operators, it maps each $\mathcal{H}_i$
into itself. Finally, the spaces $\mathcal{H}_i$ are orthogonal
under the BPZ inner product; they only couple to themselves.
The claim is that the set $\{t_i\}$
is in fact $\mathcal{H}_1$, the states built upon the zero momentum vacuum.
The ``tachyon potential'' is the string action evaluated for $\mathcal{H}_1$.
We first note
that because of momentum conservation
fields in $\mathcal{H}_2$ cannot couple linearly to
fields in ${\mathcal H}_1$.
The
fields in $\mathcal{H}_3$ cannot couple linearly to the fields in
$\mathcal{H}_1$ either. They cannot do so through the kinetic term because
the BRST operator preserves each $\mathcal{H}_i$ and $\mathcal{H}_1$ and $\mathcal{H}_3$
are BPZ orthogonal.
We also note that the matter correlator in
the $n$-string vertex does not couple $n-1$ vacua $|0\rangle$ from
$\mathcal{H}_1$ to a matter primary from $\mathcal{H}_3$: this
is just the one-point
function of the primary in $\mathcal{H}_3$, which vanishes because the state
has non-zero dimension. The (matter) Virasoro conservation laws on the vertex
then
imply that the coupling of any $(n-1)$ states in $\mathcal{H}_1$ to a state in
$\mathcal{H}_3$ must vanish. This completes the proof that $\mathcal{H}_1$
is the subspace for tachyon condensation.
The space $\mathcal{H}_1$ can be written
as
\begin{equation}
\label{biganswer}
\hbox{Span} \Bigl\{ L_{-j_1}^m \ldots L_{-j_p}^m\,
\bar L_{-\bar j_1}^m \ldots \bar L_{-\bar j_{\bar p}}^m
b_{-k_1} \ldots b_{-k_q}\, \bar b_{-\bar k_1} \ldots \bar b_{-\bar k_{\bar q}}
\, c_{-l_1} \ldots c_{-l_r} \,
\, \bar c_{-\bar l_1} \ldots \bar c_{-\bar l_{\bar r}} \, |0\rangle\Bigr\}\,,
\end{equation}
where
\begin{equation}
j_1 \geq j_2 \geq \ldots \geq j_p \,, ~~j_i \geq 2 \,, ~~
\bar j_1 \geq \bar j_2 \geq \ldots \geq \bar j_{\bar p }\,, ~~\bar j_i \geq
2\,,
\end{equation}
as well as
\begin{equation}
k_i, \bar k_i \geq 2
\,, ~~ l_i, \bar l_i \geq
-1\,,
\quad
\hbox{and}\quad r+ \bar r - q - \bar q = 2 \,.
\end{equation}
Finally, the states above must also be annihilated by $L_0- \bar
L_0$ as well as $b_0 - \bar b_0$.
There is a reality condition on
the string field~\cite{Zwiebach:1992ie}: its BPZ and hermitian conjugates must
differ by a sign. We show now
that this condition
is satisfied by all the states in (\ref{biganswer}), so
the coefficients by which they are multiplied in the
universal string field (the zero-momentum spacetime fields)
must be real.
Suppose a state is built with $p$
ghost oscillators and $p-2$ antighost oscillators. The BPZ
and hermitian conjugates differ by the product of two
factors: a $(-1)^p$ from
the BPZ conjugation of the ghost oscillators and a
$(-1)^{(2 p-2)(2p-1)/2}= (-1)^{p-1}$ from the reordering of oscillators
in the hermitian conjugate. The product of these two factors
is minus one, as we wanted to show.
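For illustration, consider the zero-momentum tachyon state
$c_1 \bar c_1 |0\rangle$, for which $p=2$:
\begin{equation}
(-1)^p \,(-1)^{(2p-2)(2p-1)/2}\,\Big|_{p=2} = (+1)\,(-1)^3 = -1\,,
\end{equation}
so the spacetime field multiplying this state must be real.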
\medskip
In open string theory twist symmetry, which arises from
world-sheet parity, can be used to further
restrict the universal subspace constructed from matter Virasoro
and ghost oscillators. In the case of closed string theory
the world-sheet parity transformation that exchanges holomorphic
and antiholomorphic sectors is the relevant symmetry.\footnote{We thank
A.~Sen for discussions that led us to construct the arguments
presented below.} World-sheet
parity is not necessarily a symmetry of arbitrary matter CFT's, but
it is a symmetry in the universal subspace:
correlators are complex
conjugated when we exchange holomorphic and antiholomorphic
Virasoro operators
as $T(z) \leftrightarrow \bar T(\bar z)$.
More precisely, we introduce
a $\star$-conjugation, a map of $\mathcal{H}_1$ to $\mathcal{H}_1$
that is an involution.
In a basis of Virasoro modes $\star$ can be written explicitly as the map
of states
\begin{equation}
\label{rule_matter}
\star \, : \quad A\, L_{-i_1} \cdots L_{-i_n}\, \bar L_{-j_1}
\cdots \bar L_{-j_m} |0\rangle \quad \to
\quad A^*\,\bar L_{-i_1} \cdots \bar L_{-i_n} \,L_{-j_1}
\cdots L_{-j_m} |0\rangle\,,
\end{equation}
where $A$ is a constant and $A^*$ denotes its complex conjugate. Given
the operator/state correspondence, the above defines completely the
star operation $\star \,: \mathcal{O} \to \mathcal{O}^\star$
on vertex operators for vacuum descendents. It
results in the following property for the correlator of $n$ such operators
placed at $n$ points on a Riemann surface:
\begin{equation}
\langle \mathcal{O}_1 \ldots \mathcal{O}_n\rangle =
\langle \mathcal{O}_1^\star \ldots \mathcal{O}_n^\star\rangle^* \,.
\end{equation}
In the ghost sector of the CFT a small complication with
signs arises because the basic correlator is odd under the
exchange of holomorphic and anti-holomorphic sectors:
\begin{equation}
\label{faxsorr}
\langle\, c(z_1)c(z_2)c(z_3) \, \bar c (\bar w_1)\bar c (\bar
w_2)\bar c (\bar w_3)
\rangle = -\langle \, \bar c(\bar z_1)\bar c(\bar z_2) \bar c(\bar z_3) \,
c (w_1) c (w_2) c ( w_3)\,
\rangle^* \,.
\end{equation}
Since two-point functions
of the ghost fields are complex conjugated by the exchanges
$c(z) \leftrightarrow \bar c (\bar z)$
and $b(z) \leftrightarrow \bar b (\bar z)$, it
follows from (\ref{faxsorr}) that performing these exchanges
on an {\em arbitrary} correlator of ghost and antighost fields will give minus the
complex conjugate of the original correlator.
We will define $\star$-conjugation in the ghost sector by:
\begin{equation}
\label{rule_ghost}
\star \, : \quad A\, c_{i_1}\hskip-2pt\cdot\cdot c_{i_n}\,
b_{j_1} \hskip-2pt\cdot\cdot b_{j_m}\, \bar c_{k_1}
\hskip-2pt\cdot\cdot {\bar c}_{k_r}\,
{\bar b}_{l_1}\hskip-2pt\cdot\cdot {\bar b}_{l_s} |0\rangle
\,\quad \to\quad
A^*\, {\bar c}_{i_1}\hskip-2pt\cdot\cdot {\bar c}_{i_n}\,
\bar b_{j_1} \hskip-2pt\cdot\cdot \bar b_{j_m}\, c_{k_1}
\hskip-2pt\cdot\cdot c_{k_r}\,
b_{l_1}\hskip-2pt\cdot\cdot b_{l_s} |0\rangle \,.
\end{equation}
For a general state $\Psi$ of the universal subspace
we define $\Psi^\star$ to be the state obtained by
the simultaneous application of
(\ref{rule_matter}) and (\ref{rule_ghost}).
It is clear from the above discussion that the correlators satisfy
\begin{equation}
\label{n_point}
\langle \Psi_1 \, \Psi_2 \ldots \Psi_n \rangle = -
\langle \Psi_1^\star \, \Psi_2^\star \ldots \Psi_n^\star \rangle^* \,, \qquad
\Psi_i \in \mathcal{H}_1 \,.
\end{equation}
We now define the action of the world-sheet parity operation $\mathcal{P}$
on arbitrary states of the universal subspace:
\begin{equation}
\label{parity_def}
\mathcal{P} \Psi\equiv - \Psi^\star, \quad \Psi \in \mathcal{H}_1\,.
\end{equation}
We claim that the
string field theory action, restricted to $\mathcal{H}_1$,
is $\mathcal{P}$ invariant:
\begin{equation}
\label{lkkekjj}
S (\Psi) = S (\mathcal{P} \Psi )\, , \quad \hbox{for} \quad \Psi \in \mathcal{H}_1\,.
\end{equation}
First consider the invariance of the cubic term. Using (\ref{parity_def})
and (\ref{n_point}) we have
\begin{equation}
\langle \mathcal{P}\Psi \,, \mathcal{P}\Psi\,, \mathcal{P}\Psi\rangle
= - \langle \Psi^\star \,, \Psi^\star\,, \Psi^\star\rangle
= \langle \Psi \,, \Psi\,, \Psi\rangle^*
=\langle \Psi \,, \Psi\,, \Psi\rangle\,,
\end{equation}
where in the last step we used the reality of the string field action.
The kinetic term of the action is also invariant. First note that
$ (c_0^- Q \Psi)^\star
= -c_0^- Q \Psi^\star \,.$
It then follows that
\begin{equation}
\langle \mathcal{P}\Psi\,, c_0^- Q \mathcal{P}\Psi\rangle
= \langle \Psi^\star\,, c_0^- Q \Psi^\star\rangle =
- \langle \Psi^\star\,, (c_0^- Q \Psi)^\star\rangle=
\langle \Psi\,, c_0^- Q \Psi\rangle^* =
\langle \Psi\,, c_0^- Q \Psi\rangle\,.
\end{equation}
For higher point interactions, the
invariance follows because the antighost insertions have the
appropriate structure. Each time we add a new string field we must
add two antighost insertions. For the case of quartic interactions
they take the form of two factors $\mathcal{B} \mathcal{B}^\star$ (see
eqn.~(\ref{exptwoforms})). Since $(\mathcal{B} \mathcal{B}^\star)^\star
= -\mathcal{B} \mathcal{B}^\star$, the extra minus sign cancels against
the minus sign from the extra string field. This can be seen to
generalize to higher
order interactions using the forms of the off-shell amplitudes discussed
in section 6 of~\cite{Belopolsky:1994sk}. This completes our proof
of (\ref{lkkekjj}).
Since $\mathcal{P}^2=1$ the space $\mathcal H_1$ can be divided into two disjoint
subspaces: the space $\mathcal H_1^+$ of states with $\mathcal{P}=1$
and the space $\mathcal H_1^-$
of states with $\mathcal{P}=-1$:
\begin{eqnarray}
\mathcal{P} (\Psi_+)&=&+\Psi_+, \hspace{7mm} \Psi_+\in \mathcal
H_1^+\nonumber\,,\\[0.5ex]
\mathcal{P} (\Psi_-)&=&- \Psi_-, \hspace{7mm} \Psi_-\in \mathcal
H_1^-\,.
\end{eqnarray}
It follows from the invariance of the action that
no term in the action can contain just one state in $\mathcal H_1^-$.
We can therefore restrict
ourselves to the subspace $\mathcal H_1^+$ with positive parity.
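For illustration, the zero-momentum tachyon state is $\mathcal{P}$ even:
applying (\ref{rule_ghost}) and anticommuting the oscillators gives
\begin{equation}
\bigl( c_1 \bar c_1 |0\rangle \bigr)^\star = \bar c_1\, c_1 |0\rangle
= -\, c_1 \bar c_1 |0\rangle
\quad \Longrightarrow \quad
\mathcal{P}\, \bigl( c_1 \bar c_1 |0\rangle \bigr) = +\, c_1 \bar c_1 |0\rangle\,,
\end{equation}
so the tachyon survives in $\mathcal{H}_1^+$. The same computation shows that
$(c_{1} c_{-1} \mp \bar c_1 \bar c_{-1})|0\rangle$ are $\mathcal{P}$ even
and $\mathcal{P}$ odd, respectively.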
The string field is further restricted
by using a gauge fixing condition. The
computation of the potential is done in the Siegel gauge, which
requires states to be annihilated by $b_0+\bar b_0$. To
restrict ourselves to the Siegel gauge we take the states in
(\ref{biganswer}) that have neither a $c_0$ nor a $\bar c_0$.
The Siegel gauge fixes the gauge symmetry completely for the massive levels,
but does not quite do the job at the massless level.
There are two states with $L_0 = \bar L_0=0$ in
$\mathcal{H}_1$ that are in the Siegel gauge:
\begin{equation}
(c_{1} c_{-1} - \bar c_1 \bar c_{-1} )|0\rangle \quad \hbox{and}
\quad (c_{1} c_{-1} + \bar c_1 \bar c_{-1} )|0\rangle \,.
\end{equation}
The first state is the ghost dilaton and it is proportional to $Q
(c_0- \bar c_0)|0\rangle$. Since $(c_0- \bar c_0)|0\rangle$ is not
annihilated by $b_0-\bar b_0$ the gauge parameter is illegal and
the ghost dilaton is not trivial. The second state is proportional
to $Q (c_0+ \bar c_0)|0\rangle$ and is thus trivial at the linearized
level. One may wonder whether this triviality persists
for large fields. Happily, we need not worry:
the state is $\mathcal{P}$
odd, so it need not be included in the calculation. The ghost-dilaton,
because of the relative minus sign between the two terms, is $\mathcal{P}$
even and it is included.
Had the closed string field theory been cubic we could have
discarded the ghost-dilaton state and all other states
with asymmetric left and right ghost
numbers. We could
restrict $\mathcal{H}_1^+$ to fields
of ghost number $(G, \bar G) = (1,1)$. Indeed, the cubic vertex
cannot couple two $(1,1)$ fields to anything except another $(1,1)$
field.
Moreover, in the Siegel gauge $c_0^-Q$ acts as an operator of ghost
number $(1,1)$, so again, no field with asymmetric ghost numbers
can couple linearly.
The quartic and higher order interactions in CSFT have
antighost insertions that do not have equal left and right ghost
numbers. It follows that these higher order vertices can couple
the ghost-dilaton to $(1,1)$ fields. Indeed, the coupling of a
dilaton to three tachyons does not vanish. We {\em cannot} remove
from~$\mathcal{H}_1^+$ the
dilaton, nor other states with asymmetric left and right ghost
numbers.
\medskip
The construction of the universal string field
and action presented here does not work fully if the
matter CFT contains a linear dilaton background.
Momentum conservation along the corresponding
coordinate is anomalous and one cannot build
an action with states of zero momentum
only: the action restricted to $\mathcal{H}_1$ is identically zero.
There would be no universal ``potential'' in $\mathcal{H}_1$.
It appears rather likely, however, that any {\em solution} in
the universal subspace would still be a solution in a
linear dilaton background. In fact, any solution in
the universal subspace may be a solution
for string field theory formulated with a general
matter CFT~\cite{SenZwiebach}.
\medskip
We conclude this section by writing out the string field for the first few levels.
The level $\ell$ of a state is defined by
$\ell =L_0+\bar L_0+2\,.$
The level zero part of the string field is
\begin{equation}
|\Psi_0\rangle =t\, c_{1}\bar c_1 |0\rangle\,.
\end{equation}
Here $t$ is the zero-momentum tachyon. The level two part of the string field
is
\begin{equation}
|\Psi_2\rangle =d\, (c_{1} c_{-1} - \bar c_1 \bar c_{-1} ) |0\rangle\,.
\end{equation}
Here $d$ is the zero-momentum ghost-dilaton. It multiplies the only state with
$\mathcal{P} = +1$ at this level.
At level four there are four component fields:
\begin{eqnarray}
|\Psi_4\rangle &=&\Bigl( f_1\, c_{-1}\bar c_{-1}+\,f_2\, L_{-2}
c_1 \,\bar L_{-2}\bar c_1 +\,f_3\, (L_{-2} c_1\bar c_{-1}+
c_{-1}\,\bar L_{-2}\bar c_1) \nonumber \\[0.5ex]
&& \,+\,g_1\,(b_{-2} c_1\, \bar c_{-2}\bar c_1 \,-\, c_{-2}c_1\,
\bar b_{-2} \bar c_1)\Bigr) |0\rangle\,.
\end{eqnarray}
Note that the states coupling to the component fields all have
$\mathcal{P}=+1$ and that $g_1$ couples to a state with
asymmetric left and
right ghost numbers. In this paper we will not use higher level
terms in the string field.
With $\alpha'=2$ the closed string field potential $V$ associated
with the action in (\ref{csft_action}) is
\begin{equation}
\label{csftpot}
\kappa^2 V=\frac{1}{2}\langle \Psi|c_0^-\, Q|\Psi\rangle
+\frac{1}{3!} \{ \Psi,\Psi,\Psi\} +\frac{1}{4!} \{
\Psi,\Psi,\Psi,\Psi\} +\cdots\,\,.
\end{equation}
Here $|\Psi\rangle = |\Psi_0\rangle+ |\Psi_2\rangle+ |\Psi_4\rangle+ \ldots$.
Our computations will not include quintic and higher order interactions in the
string action.
\subsection{The quadratic and cubic terms in the potential}
Let us now consider the potential including only the kinetic and cubic
terms in (\ref{csftpot}). To level zero:
\begin{equation}
\kappa^2 V^{(2)}_0 = - t^2 \,, \qquad
\kappa^2 V^{(3)}_0 = \frac{6561}{4096}\,t^3 \,.
\end{equation}
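The level-zero critical point that follows from this potential can be
reproduced in a few lines of Python. This is a standalone numerical check,
not part of the level-truncation computations of this paper:

```python
from fractions import Fraction

# Level-zero potential: kappa^2 V = -t^2 + (6561/4096) t^3
a = Fraction(6561, 4096)

# Stationarity: dV/dt = -2 t + 3 a t^2 = 0, so the nonzero root is t = 2/(3a)
t = float(2 / (3 * a))               # = 8192/19683
V = -t**2 + float(a) * t**3          # action density at the critical point

print(f"t = {t:.5f}, V = {V:.5f}")   # t = 0.41620, V = -0.05774
```

These are the values quoted in the first line of Table~\ref{w/oenergy}.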
All potentials introduced in this subsection have a superscript
that gives the order of the interaction (two for quadratic, three
for cubic, and so on), and a subscript that gives the level
(defined by the sum of levels of fields in the interaction).
The next terms arise at level four, where we have couplings of the
tachyon to the square of the dilaton and couplings of the level four
fields to the tachyon squared:
\begin{equation}
\label{fglekf4}
\kappa^2\,V^{(3)}_4 = -
\frac{27}{32}\,d^2\,t + \Bigl(\, \frac{3267}{4096}\,{f_1}
+ \frac{114075}{4096} \,{f_2} - \frac{19305}{2048}
{f_3} \, \Bigr)\, t^2 \,.
\end{equation}
At level six we can couple a level four field, a dilaton,
and a tachyon. Only level four fields with $G\not=\overline G$ can have
such coupling, so we find:
\begin{equation}
\label{cgdt} \kappa^2\,V^{(3)}_6 = - \frac{25}{8}\, {g_1}
\,t\,d\,.
\end{equation}
At level eight there are two kinds of terms. First, we have the kinetic
terms for the level four fields:
\begin{equation}
\label{gknfeh}
\kappa^2\,V^{(2)}_8 = ~ {f_1}^2
\, + 169\,{{f_2}}^2
- 26\,{{f_3}}^2 - 2\,{g_1}^2 .
\end{equation}
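The numerical factors here have a simple CFT origin: for the critical matter
CFT, with central charge $c_m = 26$,
\begin{equation}
\langle 0 |\, L_2^m\, L_{-2}^m\, |0\rangle = \frac{c_m}{2} = 13\,,
\end{equation}
so the $f_2$ state, with one matter Virasoro operator in each sector, carries
a factor $13 \times 13 = 169$, while the $f_3$ states, each with a single
matter Virasoro operator, account for the factor $26 = 2 \times 13$.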
Second, we have the cubic interactions:
\begin{equation}
\label{cftcoudk}
\begin{split}
\kappa^2\,V^{(3)}_8 &= -\frac{ 1 }{96}\, f_1\,d^2
-\frac{4225}{864} \,f_2 \,d^2
+ \frac{65 }{144} \,{f_3} \,d^2\\[1.0ex] &
~~+ \frac{361}{12288}\,{{f_1}}^2\,t +
\frac{511225}{55296}\,{f_1}\,{f_2}\,t +
\frac{57047809}{110592}\,{{f_2}}^2\,t + \frac{470873}{27648}
\,{f_3}^2\,t -\frac{49}{24}\,{g_1}^2\,t
\\[1.0ex] &
~~- \frac{13585} {9216}\,{f_1}\,{f_3} \,t
-\,\frac{5400395}{27648}\,{f_2}\, {f_3} \,t \,.
\end{split}
\end{equation}
As we can see, these are of two types: couplings of a level four
field to two dilatons
(first line)
and couplings of two level four fields to a tachyon (second and third lines).
The terms at level 10 couple two level four fields and
a dilaton. Because of ghost number conservation, one of the
level four fields must have $G\not= \overline G$:
\begin{equation}
\kappa^2\,V^{(3)}_{10}=-\frac{25}{5832}\big(361\, f_1+4225\,
f_2-2470\, f_3\big)\, d\, g_1 \,.
\end{equation}
Finally, at level 12 we have the cubic couplings of three level-four fields:
\begin{equation}
\begin{split}
\kappa^2\,V^{(3)}_{12}~ &= \frac{1}{4096}
\,f_1^3+\frac{1525225}{8957952} \,f_1^2 f_2-\frac{1235}{55296}
f_1^2\,f_3+\frac{6902784889}{80621568}f_1 f_2^2 \\[1.3ex]
&~-\frac{102607505}{6718464}f_1f_2f_3
+\frac{1884233}{2239488}f_1f_3^2 \\[1.1ex]
&~ +\frac{74181603769}{26873856}
f_2^3-\frac{22628735129}{13436928} \,f_2^2f_3
+\frac{4965049817}{20155392} f_2 f_3^2 \\[1.1ex]
& ~-\frac{31167227}{3359232}
f_3^3-\frac{961}{157464}\,f_1 g_1^2
-\frac{207025}{17496}\, f_2 g_1^2 +\frac{14105}{26244}\,f_3
g_1^2 \,.
\end{split}
\end{equation}
\medskip
\subsection{Tachyon vacuum with cubic vertices only}
With cubic vertices only the dilaton expectation value is zero.
In fact, only fields with $G= \overline G = 1$ can acquire
nonvanishing expectation values.
To examine the tachyon vacuum we define a series of potentials:
\begin{equation}
\label{serpot}
\begin{split}
\mathbb{V}^{(3)}_0 \equiv & ~ V^{(2)}_0 + V^{(3)}_0 \,, \\
\mathbb{V}^{(3)}_8 \equiv & ~ \mathbb{V}^{(3)}_0 + V^{(3)}_4+ V^{(3)}_6 +
V^{(2)}_8 + V^{(3)}_8 \,, \\
\mathbb{V}^{(3)}_{12} \equiv & ~ \mathbb{V}^{(3)}_8 + V^{(3)}_{10} +
V^{(3)}_{12} \,. \\
\end{split}
\end{equation}
A few observations are in order. In all of the above potentials
we can set $d=g_1=0$. As a consequence, $V^{(3)}_6$ and
$V^{(3)}_{10}$ do not contribute. Since the level-two dilaton
plays no role, once we go beyond the tachyon we must include level
four fields. The kinetic terms for these fields are of level
eight, so $\mathbb{V}^{(3)}_8$ is the simplest potential beyond
level zero. With level-four fields the next potential
is~$\mathbb{V}^{(3)}_{12}$.
The critical points obtained with the potentials $\mathbb{V}^{(3)}_0$,
$\mathbb{V}^{(3)}_8$, and $\mathbb{V}^{(3)}_{12}$ are given in
Table~\ref{w/oenergy}. We call the value of the potential
$\kappa^2\mathbb{V}$ at
the critical point the action density. The values of the action
density follow the pattern of open string theory. The original cubic
critical point becomes deeper. It does so by about 10\%, a value
significantly smaller than the corresponding one in open string field theory.
\begin{table}[ht]
\begin{center}
\renewcommand\arraystretch{1.5}
\vskip 0.1in
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\hbox{Potential}&$t$ & $f_1$& $f_2$ & $f_3$
& \hbox{Action density} \\
\hline
$\mathbb{V}^{(3)}_0$& $0.41620$ & $--$ & $--$ & $--$ &
$-0.05774$ \\
\hline
$\mathbb{V}^{(3)}_8$ &$0.43678$ &$-0.06502$ &
$-0.00923$ & $-0.02611$ & $-0.06329$ \\
\hline
$\mathbb{V}^{(3)}_{12}$&$0.43709$
&$-0.06709$ & $-0.00950$ & $-0.02693$ & $-0.06338$ \\
\hline
\end{tabular}
\caption{Vacuum solution with cubic
vertices only}\label{w/oenergy}
\end{center}
\end{table}
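The level-eight entry in Table~\ref{w/oenergy} can be reproduced from the
potentials listed above. The following standalone Python sketch (assuming
\texttt{scipy} is available; it is not part of the original computation)
solves the stationarity conditions of $\mathbb{V}^{(3)}_8$ with $d=g_1=0$:

```python
from scipy.optimize import fsolve

# Coefficients of kappa^2 V transcribed from the text, with d = g1 = 0
a = 6561 / 4096                                              # t^3
b1, b2, b3 = 3267 / 4096, 114075 / 4096, -19305 / 2048       # f_i t^2
c11, c12, c22 = 361 / 12288, 511225 / 55296, 57047809 / 110592
c33, c13, c23 = 470873 / 27648, -13585 / 9216, -5400395 / 27648

def V(x):
    t, f1, f2, f3 = x
    return (-t**2 + a * t**3 + (b1 * f1 + b2 * f2 + b3 * f3) * t**2
            + f1**2 + 169 * f2**2 - 26 * f3**2
            + (c11 * f1**2 + c12 * f1 * f2 + c22 * f2**2
               + c33 * f3**2 + c13 * f1 * f3 + c23 * f2 * f3) * t)

def grad(x):
    t, f1, f2, f3 = x
    quad = (c11 * f1**2 + c12 * f1 * f2 + c22 * f2**2
            + c33 * f3**2 + c13 * f1 * f3 + c23 * f2 * f3)
    return [-2 * t + 3 * a * t**2 + 2 * (b1 * f1 + b2 * f2 + b3 * f3) * t + quad,
            b1 * t**2 + 2 * f1 + (2 * c11 * f1 + c12 * f2 + c13 * f3) * t,
            b2 * t**2 + 338 * f2 + (c12 * f1 + 2 * c22 * f2 + c23 * f3) * t,
            b3 * t**2 - 52 * f3 + (2 * c33 * f3 + c13 * f1 + c23 * f2) * t]

sol = fsolve(grad, [0.44, -0.07, -0.01, -0.03])   # seed near the expected vacuum
print(sol, V(sol))
```

The solution reproduces $t=0.43678$, $f_1=-0.06502$, $f_2=-0.00923$,
$f_3=-0.02611$ and the action density $-0.06329$ to the accuracy quoted in
the table.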
\subsection{Tachyon vacuum with cubic and quartic vertices}
We can now examine the quartic terms in the potential. The associated
potentials are denoted with a superscript $(4)$ for quartic and
a subscript that gives the sum of levels of the fields that enter
the term. The quartic self-coupling of tachyons has been calculated
in~\cite{Belopolsky:1994bj,Moeller:2004yy}:
\begin{equation}
\kappa^2{V}^{(4)}_0=-3.0172\, t^4 \,.
\end{equation}
With total level two we have a coupling of three tachyons
and one dilaton. This is calculated in Appendix A.2 and
the result is
\begin{equation}
\kappa^2{V}^{(4)}_2=\,3.8721 \,t^3 d \,.
\end{equation}
With total level four there is the coupling of two tachyons
to two dilatons (Appendix A.2) and the coupling of three tachyons
to any of the level-four fields (Appendix A.3):
\begin{equation}
\begin{split}
\kappa^2{V}^{(4)}_4 = 1.3682\, t^2 d^2
+ t^3 \bigl( -0.4377 \, f_1-56.262 \, f_2+ 13.024\, f_3
+\,0.2725\, g_1\bigr)\,.
\end{split}
\end{equation}
With total level six there are three types of interactions:
a tachyon coupled to three dilatons, two tachyons coupled to a
dilaton and a level-four field, and three tachyons coupled to
a level-six field. We have only computed the first one (Appendix A.2):
\begin{equation}
\label{td^3val}
\kappa^2{V}^{(4)}_6= -\,0.9528\, t d^3 + \ldots \,.
\end{equation}
The terms that have not been computed are indicated by the dots.
Finally, the quartic self-coupling of dilatons was computed
in~\cite{Yang:2005ep},
where it played a central role in the demonstration that the
effective dilaton
potential has no quartic term:
\begin{equation}
\label{d^4val}
\kappa^2{V}^{(4)}_8= -\,0.1056\, d^4 + \ldots\,.
\end{equation}
We use the dots to indicate the additional level eight
interactions that should be computed.
Let us now consider the potentials that can be assembled using
the above contributions. We use the following strategy: we include
cubic vertices to the highest possible level and then begin
to introduce the quartic couplings level by level. The most accurate
potential with quadratic and cubic terms that we have is
$\mathbb{V}^{(3)}_{12}$ and the tachyon vacuum it contains appears
in the last line of Table~\ref{w/oenergy}. The lowest order quartic
potential that we use is therefore:
\begin{equation}
\mathbb{V}^{(4)}_0 \equiv ~ \mathbb{V}^{(3)}_{12} + V^{(4)}_0 \,.
\end{equation}
This potential has a familiar difficulty: the quartic self-coupling
of the tachyon is so strong that the critical point in the potential
disappears. As we have argued, once additional terms are included
the critical point in the potential reappears. The higher level
potentials are defined by including progressively higher level quartic
interactions:
\begin{equation}
\label{serpot4}
\begin{split}
\mathbb{V}^{(4)}_2 \equiv & ~ \mathbb{V}^{(4)}_0 + V^{(4)}_2 \,, \\[0.5ex]
\mathbb{V}^{(4)}_4 \equiv & ~ \mathbb{V}^{(4)}_2 + V^{(4)}_4 \,.
\end{split}
\end{equation}
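The disappearance of the critical point in $\mathbb{V}^{(4)}_0$ is already
visible in the pure tachyon sector, where the stationarity condition, after
removing the root $t=0$, is a quadratic with negative discriminant. A
standalone check:

```python
import numpy as np

# Pure tachyon sector: kappa^2 V = -t^2 + (6561/4096) t^3 - 3.0172 t^4
a, q = 6561 / 4096, 3.0172

# dV/dt = t(-2 + 3a t - 4q t^2); nonzero critical points solve -4q t^2 + 3a t - 2 = 0
roots = np.roots([-4 * q, 3 * a, -2.0])
disc = (3 * a)**2 - 32 * q      # discriminant of the quadratic
print(disc)                     # negative: no real nonzero critical point
```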
Since our computations of $V^{(4)}_6$ and $V^{(4)}_8$ are
incomplete, the results that follow from
$\mathbb{V}^{(4)}_6\equiv \mathbb{V}^{(4)}_4 + V^{(4)}_6$ and
$\mathbb{V}^{(4)}_8\equiv ~ \mathbb{V}^{(4)}_6 + V^{(4)}_8$ cannot be trusted.
We are now in a position to calculate the critical points of the
potentials $\mathbb{V}^{(4)}$. In our numerical work we input the
cubic coefficients as fractions and the quartic coefficients as
the exact decimals given above (so the $t^4$ coefficient is
treated as exactly equal to $3.0172$). Our results are given in
Table~\ref{wqu/oenergy}. For ease of comparison, we have included
the cubic results for $\mathbb{V}^{(3)}_{12}$ as the first line.
Furthermore, we include a line for $\mathbb{V}^{(4)}_0$ even
though there is no critical point. The next potential is
$\mathbb{V}^{(4)}_2$ which contains only the additional coupling
$t^3d$. The significant result is that the critical point
reappears and can be considered to be a (moderate) deformation of
the critical point obtained with $\mathbb{V}^{(3)}_{12}$. Indeed,
while there is a new expectation value for the dilaton (and for
$g_1$), the expectation value of the tachyon does not change
dramatically, nor do the expectation values for $f_1$, $f_2$, and
$f_3$. The critical point becomes somewhat shallower, despite the
destabilizing effects of the tachyon quartic self-couplings.
\begin{table}[ht]
\begin{center}
{\renewcommand\arraystretch{1.6}
\vskip 0.1in
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\hbox{Potential}&$t$ & $d$& $f_1$& $f_2$ & $f_3$&
$g_1$ & \hbox{Action density}
\\ \hline
$\mathbb{V}^{(3)}_{12}$&$0.43709$ & $0$
&$-0.06709$ & $-0.00950$ & $-0.02693$ & $-- $ & $-0.06338$ \\
\hline
$\mathbb{V}^{(4)}_0$ & $--$ & $--$
& $--$ & $--$ & $--$ & $--$ & $-- $ \\
\hline
$\mathbb{V}^{(4)}_2$&$0.33783$ & $0.49243$ & $-0.08007$ &
$-0.00619$ & $-0.02607$ & $-0.10258$ &$-0.05806$ \\
\hline
$\mathbb{V}^{(4)}_4$& $0.24225$ & $0.45960$ &
$-0.04528$ & $-0.00140$ &
$-0.01233$ & $-0.07249$ & $-0.03382$ \\
\hline
\end{tabular}}
\caption{\small Vacuum solution with cubic
and quartic vertices. We see that the magnitude of the action
density becomes smaller as we begin to include the effects of
quartic couplings. }\label{wqu/oenergy}
\end{center}
\end{table}
At the next level, where $t^2 d^2$ and $t^3M_4$ ($M_4$ denotes a level-four
field)
terms appear, the critical point experiences some significant change.
First of all, it becomes about 40\% shallower; the change is large
and probably significant, given the expectation that the action density
should eventually reach zero. The tachyon expectation value changes
considerably, but the dilaton expectation value
changes little. Due to the $t^3M_4$ terms
the expectation values of some of the level four fields change dramatically.
Glancing at Table~\ref{wqu/oenergy}, one notices that
the tachyon expectation value is becoming smaller, so one
might worry that the
critical point is approaching the perturbative vacuum.
This is, of course, a possibility. If realized, it would imply that
the critical point we have encountered is an artifact of level
expansion. We think this is unlikely. Since the dilaton seems to be relatively
stable, a trivial critical point would have to be a dilaton deformation
of the perturbative vacuum, but such deformations have negative
tachyon expectation values (see Figure~\ref{fig marginal directions}).
\medskip
At this moment we do not have full results for higher levels. The
computation of $\mathbb{V}^{(4)}_6$ would require the evaluation
of couplings of the form $t^2dM_4$ and, in principle, couplings
$t^3M_6$ of level-six fields, which we have not even introduced in
this paper. The only additional couplings we know at present are
$t d^3$, which enters in $\mathbb{V}^{(4)}_6$ and $d^4$, which
enters in $\mathbb{V}^{(4)}_8$ (see eqns.~(\ref{td^3val}) and
(\ref{d^4val})). Despite lacking terms, we calculated the
resulting vacua to test that no wild effects take place. The
incomplete $\mathbb{V}^{(4)}_6$ leads to $t =0.35426,~ d=0.40763$
and an action density of $-0.05553$. The incomplete
$\mathbb{V}^{(4)}_8$ leads to $t =0.36853,~ d=0.40222$ and an
action density of $-0.05836$. In these results the action density
has become more negative. Given the conjectured value of the
action, it would be encouraging if the full results at those
levels showed an action density whose
magnitude does not become larger.
One may also wonder what happens if terms of order higher
than quartic are included in the potential. Since the tachyon
terms in the CSFT potential alternate signs~\cite{Belopolsky:1994sk},
the quintic term is positive and will help reduce the value of
the action at the critical point. The coefficient of this coupling
will be eventually needed as computations become more accurate.
The sextic term will have a destabilizing effect. Having survived
the destabilizing effects of the quartic term, we can hope that
those of the sextic term will prove harmless. If, in general,
even power terms do not have catastrophic effects, it may be
better to work always with truncations of odd power.
\sectiono{The sigma model and the string field theory pictures}
In this section we study the
relations between
the string field metric $h_{\mu\nu}$ and ghost-dilaton $d$, and the
corresponding sigma model fields: the string
metric $\tilde h_{\mu\nu}$ and dilaton $\Phi$. These relations are
needed to interpret
the tachyon vacuum solution and to discuss its possible
relation to the rolling solutions.
We begin by finding the precise linearized
relations between the string field dilaton and the sigma model
dilaton. The linearized relations confirm
that the CSFT metric $h_{\mu\nu}$, which does not acquire an expectation
value in the tachyon vacuum, coincides with the string metric
of the sigma model, which does not change in the rolling solutions.
Moreover, the relation (\ref{dilaton relation}), together
with $h_{\mu\nu}=0$,
implies that our $d>0$ in the tachyon vacuum
corresponds to $\Phi >0$, and thus to a larger
string coupling. This is also consistent with what we obtained
in the rolling solutions.
Our discussion of the linearized relations also allows us to
examine the vertex operators associated with the various
dilaton fields used in the literature (section~3.2).
In section 3.3 we examine the nonlinear relations between
the CSFT tachyon and dilaton and the effective field theory ones.
We work at zero momentum and up to quadratic order. Finally,
in section 3.4, we present evidence that CSFT can describe
arbitrarily large dilaton deformations.
\subsection{Relating sigma model fields and string fields }
Consider first the effective action (\ref{sigma_action}),
suggested
by the conditions of conformal invariance of a sigma model
with gravity, dilaton and tachyon background fields.
If we set the tachyon to zero, this action reduces to the effective
action for massless fields, in the conventions of~\cite{Polchinski:1998rq}.
In this action $g_{\mu\nu}$ is the string
metric, $\Phi$ is the diffeomorphism invariant
dilaton, and $T$, with potential
$V(T)=-\frac{2}{\alpha'} T^2+\cdots$, is the tachyon.
In order to compare with the
string field action we
expand the effective action in powers of small fluctuations using
\begin{equation}
g_{\mu\nu}=\eta_{\mu\nu}+\tilde h_{\mu\nu}\,,
\end{equation}
where we use a tilde in the fluctuation to distinguish it from the
metric fluctuation in the string field. The result is
\begin{equation}
\label{sigma_limit}
\begin{split}
S_\sigma &=\frac{1}{2\kappa^2}\int d^D
x\,\Bigl(~ \frac{1}{4}\tilde h_{\mu\nu}\partial^2 \tilde h^{\mu\nu}
-\frac{1}{4} \tilde h\partial^2 \tilde h + \frac{1}{2}(\partial^\nu \tilde
h_{\mu\nu})^2 + \frac{1}{2} \tilde h\partial_\mu\partial_\nu \tilde h^{\mu\nu}
\\[0.5ex]
&\qquad\qquad\qquad + 2\tilde h\,\partial^2\Phi - 2\Phi\, \partial_\mu\partial_\nu \tilde
h^{\mu\nu} -4 \Phi\,\partial^2 \Phi \\[0.6ex]
&\qquad\qquad\qquad - (\partial T)^2 + {4\over \alpha'} T^2
+\tilde h^{\mu\nu} \partial_\mu
T\partial_\nu T+ \bigl(\,{\tilde h\over 2} - 2\Phi\bigr)
(\partial T)^2 + \cdots \Bigr)\,,
\end{split}
\end{equation}
where we have kept cubic terms coupling the dilaton and metric to
the tachyon. Such terms are needed to fix signs in the relations
between the fields in the sigma model and the string fields.
Let us now consider the string field action. The
string field needed to describe the tachyon, the metric fluctuations,
and the dilaton is
\begin{equation}
\label{scffc}
\begin{split}
|\Psi \rangle &=\int \frac{d^D k}{(2\pi)^D} \Big( \,t(k) \, c_1 \bar c_1
-\frac{1}{2}
h_{\mu\nu}(k) \alpha_{-1}^\mu\bar \alpha_{-1} ^\nu c_1\bar c_1 +
d(k) (c_1 c_{-1}-\bar c_1\bar c_{-1})\\[0.5ex]
&\quad\quad\quad \quad+i \sqrt\frac{\alpha'}{2} B_\mu (k)
c_0^+(c_1\alpha_{-1}^\mu -\bar c_1\bar \alpha_{-1}^\mu) \Big)
|k\rangle\,.
\end{split}
\end{equation}
Here $t(k)$ is the tachyon, $h_{\mu\nu}(k) = h_{\nu\mu}(k)$
is a metric fluctuation, $d(k)$ is the
ghost-dilaton, and $B_\mu(k)$ is an auxiliary field.
The sign and coefficient of $h_{\mu\nu}$ have been chosen
for future convenience. The linearized gauge
transformations of the component fields can be obtained from
$\delta |\Psi\rangle =Q_B |\Lambda\rangle$ with
\begin{equation}
|\Lambda\rangle={i\over \sqrt{2\alpha'}} \, \epsilon_\mu
(c_1\alpha_{-1}^\mu- \bar c_1 \bar\alpha_{-1}^\mu) |p\rangle\, .
\end{equation}
The resulting coordinate-space gauge
transformations are:
\begin{equation}
\label{gtcsf}
\delta h_{\mu\nu} = \partial_\nu
\epsilon_\mu+ \partial_\mu \epsilon_\nu , \quad
\delta d = -{1\over 2} \,\partial\cdot \epsilon,
\quad
\delta B_\mu =- {1\over 2} \partial^2\epsilon_\mu\,, \quad
\delta\, t = 0 \, .
\end{equation}
We now calculate the quadratic part of the closed string field action,
finding
\begin{equation}
\label{messyaction}
\begin{split}
S^{(2)}&=-\frac{1}{\kappa^2\alpha'}~\langle \Psi|c_0^- Q_B|\Psi\rangle, \\
&=~\frac{1}{2\kappa^2}\int d^D x\,\Bigl(\,
\frac{1}{4}h_{\mu\nu}\partial^2 h^{\mu\nu}- 2d\,\partial^2 d\,-2 B_\mu (\partial_\nu
h^{\mu\nu}+2\partial^\mu d\,)-2B^2-
(\partial t)^2 + {4\over \alpha'} t^2 \Bigr),\\[0.5ex]
&=\frac{1}{2\kappa^2}\int d^D x\, \Bigl(
\,\frac{1}{4}h_{\mu\nu}\partial^2 h^{\mu\nu}
+\frac{1}{2}(\partial^\nu h_{\mu\nu})^2
-4 d\,\partial^2 d
- 2d\,\partial_\mu\partial_\nu h^{\mu\nu} \,
- (\partial t)^2 + {4\over \alpha'} t^2
\Bigr)\,.
\end{split}
\end{equation}
In the last step we
eliminated the auxiliary field $B_\mu$ using its algebraic
equation of motion.
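Explicitly, varying the second line of (\ref{messyaction}) with respect to $B_\mu$ gives
\begin{equation}
B^\mu = -\frac{1}{2}\bigl(\partial_\nu h^{\mu\nu} + 2\,\partial^\mu d\bigr)\,,
\quad\hbox{so that}\quad
-2 B_\mu (\partial_\nu h^{\mu\nu}+2\partial^\mu d)-2B^2
= \frac{1}{2}\bigl(\partial_\nu h^{\mu\nu} + 2\,\partial^\mu d\bigr)^2\,.
\end{equation}
Expanding the square and integrating by parts, the cross term yields
$-2d\,\partial_\mu\partial_\nu h^{\mu\nu}$, while the $2(\partial d)^2$ piece
combines with the $-2d\,\partial^2 d$ already present to give $-4d\,\partial^2 d$.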
The gauge transformations (\ref{gtcsf}) imply that the
linear combination $d + {h\over 4}$ is gauge
invariant. It follows that the sigma model dilaton must take the form
\begin{equation}
\label{dilaton relation 1} \lambda \, \Phi = d+ \frac{h}{4}\,,
\end{equation}
where $\lambda$ is a number to be determined.
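Indeed, tracing the first of (\ref{gtcsf}) gives $\delta h = 2\,\partial\cdot\epsilon$, so that
\begin{equation}
\delta\Bigl(d + \frac{h}{4}\Bigr) = -\frac{1}{2}\,\partial\cdot\epsilon
+ \frac{1}{4}\,\bigl(2\,\partial\cdot\epsilon\bigr) = 0\,.
\end{equation}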
Using (\ref{dilaton relation 1}) to eliminate the ghost-dilaton $d$
from the action (\ref{messyaction}) we find
\begin{equation}
\label{quadcsft}
\begin{split}
S^{(2)}&=\frac{1}{2\kappa^2}\int d^D x\, \Bigl(
~ \frac{1}{4}h_{\mu\nu}\partial^2 h^{\mu\nu}
-\frac{1}{4} h\partial^2 h
+ \frac{1}{2}(\partial^\nu h_{\mu\nu})^2
+\frac{1}{2} h\partial_\mu\partial_\nu h^{\mu\nu}\\[0.5ex]
&\qquad\qquad\qquad +\,2\lambda\, h\partial^2\Phi -2\lambda\,
\Phi\,\partial_\mu\partial_\nu h^{\mu\nu} -4\lambda^2\, \Phi\partial^2 \Phi
- (\partial t)^2 + {4\over \alpha'} t^2 \Bigr).
\end{split}
\end{equation}
We also use the string field theory to
calculate the on-shell coupling of $h_{\mu\nu}$ to two tachyons.
This coupling arises from the term
\begin{equation}
S^{(3)}=-\frac{1}{\alpha' \kappa^2}\langle \,\mathcal{T}\,,\mathcal{H} \,,
\mathcal{T}\rangle \,,
\end{equation}
where $\mathcal{T}$ and $\mathcal{H}$ denote the parts of the
string field (\ref{scffc}) that contain $t(k)$ and $h_{\mu\nu}(k)$,
respectively. We thus have
\begin{equation}
S^{(3)}
=\frac{1}{2\alpha' \kappa^2} \Bigl( \prod_{i=1}^3 \int
\frac{d^D k_i}{(2\pi)^D}\Bigr) \bigl\langle c_1\bar c_1 e^{i k_1\cdot
X},c_1\bar c_1 \alpha_{-1}^\mu\bar\alpha_{-1}^\nu e^{i k_2\cdot
X}, c_1\bar c_1 e^{i k_3\cdot X} \bigr\rangle \, t(k_1)t(k_3) h_{\mu\nu}(k_2)\,.
\end{equation}
The on-shell evaluation is readily carried out using $k^\mu h_{\mu\nu} (k)=0$.
We obtain
\begin{equation}
S^{(3)}=-\frac{1}{2\kappa^2} \int {d^D k_1\over (2\pi)^D}
{d^D k_3\over (2\pi)^D}
\, k_1^\mu k_3^\nu \, t(k_1)t(k_3)
h_{\mu\nu}(-k_1-k_3)
= \frac{1}{2\kappa^2} \int d^D x\,h^{\mu\nu} \partial_\mu t \partial_\nu t\,.
\end{equation}
Combining this result with (\ref{quadcsft}) we obtain the
closed string field theory action
\begin{equation}
\label{quad+cubic_csft}
\begin{split}
S_{csft}&=\frac{1}{2\kappa^2}\int d^D x\, \Bigl(
~ \frac{1}{4}h_{\mu\nu}\partial^2 h^{\mu\nu}
-\frac{1}{4} h\partial^2 h
+ \frac{1}{2}(\partial^\nu h_{\mu\nu})^2
+\frac{1}{2} h\partial_\mu\partial_\nu h^{\mu\nu}\\[0.6ex]
&\qquad\qquad\qquad +\,2\lambda\, h\partial^2\Phi -2\lambda\,
\Phi\partial_\mu\partial_\nu h^{\mu\nu} -4\lambda^2\, \Phi\partial^2 \Phi \\[0.6ex]
&\qquad\qquad\qquad
- (\partial t)^2 + {4\over \alpha'} t^2 +h^{\mu\nu} \partial_\mu t \partial_\nu t
+\ldots \Bigr).
\end{split}
\end{equation}
We are finally in a position to identify the sigma model action
(\ref{sigma_limit}) and the string field action (\ref{quad+cubic_csft}).
Comparing the quadratic terms in $\tilde h_{\mu\nu}$ and those in
$h_{\mu\nu}$ we see that $\tilde h_{\mu\nu} = \pm h_{\mu\nu}$.
We also note that $T= \pm t$.
The coupling $\tilde h^{\mu\nu} \partial_\mu T \partial_\nu T$ in (\ref{sigma_limit})
coincides with the corresponding coupling in (\ref{quad+cubic_csft})
if and only if
\begin{equation}
\tilde h_{\mu\nu}=h_{\mu\nu} \,.
\end{equation}
This simple equality justifies the multiplicative factor of $(-1/2)$
introduced for $h_{\mu\nu}$ in the string field (\ref{scffc}). The
string field $h_{\mu\nu}$ so normalized is the fluctuation of the string
metric.
Comparing the couplings of metric and dilaton in both actions
we also conclude that $\lambda = +1$ and, therefore,
equation (\ref{dilaton relation 1}) gives
\begin{equation}
\label{dilaton relation}
\Phi =
d+ \frac{h}{4}\,\,.
\end{equation}
This expresses the sigma model dilaton $\Phi$ in terms of the string
field metric trace and the ghost dilaton $d$. It is important to
note that when we give a positive expectation value to $d$
(and no expectation value to $h$) we are increasing the value of $\Phi$
and therefore increasing the value of the string coupling.
\subsection {The many faces of the dilaton}
Equipped with the precise relations between string fields
and sigma-model fields we digress on the various
dilaton fields used in the literature. Of particular
interest are the
corresponding vertex operators, which are determined by the
CFT states that multiply the component fields in the closed
string field.
We introduce the states
\begin{equation}
|\mathcal{O}^{\mu\nu}(p)\rangle = -{1\over 4} ( \alpha_{-1}^\mu \bar\alpha_{-1}^\nu
+\alpha_{-1}^\nu \bar\alpha_{-1}^\mu) |p\rangle \,, \quad
|\mathcal{O}^d(p)\rangle = (c_1c_{-1}- \bar c_1 \bar c_{-1}) |p\rangle\,.
\end{equation}
The corresponding vertex operators are
\begin{equation}
\mathcal{O}^{\mu\nu}(p) = {1\over 2\alpha'} ( \partial X^\mu \bar\partial X^\nu
+\partial X^\nu \bar\partial X^\mu)\, e^{ip X}, \quad
\mathcal{O}^d (p) = {1\over 2} (c \partial^2 c- \bar c \bar\partial^2 \bar c) \, e^{ipX}\,.
\end{equation}
Working at fixed momentum, the string field (\ref{scffc}) restricted
to metric and dilaton
fluctuations~is
\begin{equation}
\label{mdfjhkj}
|\Psi\rangle = h_{\mu\nu}\, |\mathcal{O}^{\mu\nu}\rangle + d \, |\mathcal{O}^d\rangle\,.
\end{equation}
This equation states that $\mathcal{O}^d$ is the vertex operator associated
with the ghost-dilaton field $d$. An excitation by this vertex operator does
not change the metric $h_{\mu\nu}$.
Our transformation to a gauge invariant dilaton gives
\begin{equation}
\label{firstred}
\Phi~ = d + {1\over 4} \, h\,, \quad
\tilde h_{\mu\nu} = h_{\mu\nu} \,.
\end{equation}
Here $\tilde h_{\mu\nu}$ is the fluctuation of the
string metric. Inverting these relations
\begin{equation}
\label{invfirstred}
d = \Phi - {1\over 4}\, \tilde h \,, \quad
h_{\mu\nu} = \tilde h_{\mu\nu} \,.
\end{equation}
Substituting into the string field (\ref{mdfjhkj}) we obtain
\begin{equation}
\label{dflkkflk}
|\Psi\rangle = \tilde h_{\mu\nu} \Bigl(|\mathcal{O}^{\mu\nu}\rangle - {1\over 4}\,
\eta^{\mu\nu} |\mathcal{O}^d\rangle \Bigr) + \Phi\, |\mathcal{O}^{d}\rangle\,.
\end{equation}
It is interesting to note that $\mathcal{O}^d$ is the vertex
operator associated with a variation of the gauge-invariant dilaton $\Phi$
and no variation of the string metric. On the other hand,
$\mathcal{O}^{\mu\nu} - {1\over 4}\,
\eta^{\mu\nu} \,\mathcal{O}^d$ varies the
string metric and does not vary the gauge-invariant dilaton
(although it varies the ghost-dilaton).
Finally, we consider the formulation that uses the Einstein metric
$g_{\mu\nu}^E$
and the dilaton $\Phi$. The field redefinition is
\begin{equation}
g_{\mu\nu}^E = \exp (2\omega) \, g_{\mu\nu}\,, \quad \hbox{with}
\quad \omega = -{2\over D-2}\, \Phi\,.
\end{equation}
Expanding in fluctuation fields we obtain
\begin{equation}
h_{\mu\nu}^E = \tilde h_{\mu\nu} - {4\over D-2} \, \eta_{\mu\nu} \, \Phi \,.
\end{equation}
Solving for $d$ and $h_{\mu\nu}$ in terms of $\Phi$ and $h_{\mu\nu}^E$ we get
\begin{equation}
\label{invfirstredd}
d = -{2\over D-2} \,\Phi - {1\over 4} h^E \,, \quad
h_{\mu\nu} = h_{\mu\nu}^E + {4\over D-2} \, \eta_{\mu\nu} \,
\Phi \,.
\end{equation}
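The first of these follows by tracing the previous relation, which gives
$\tilde h = h^E + \frac{4D}{D-2}\,\Phi$, and using (\ref{invfirstred}):
\begin{equation}
d = \Phi - \frac{1}{4}\,\tilde h
= \Bigl(1-\frac{D}{D-2}\Bigr)\Phi - \frac{1}{4}\,h^E
= -\frac{2}{D-2}\,\Phi - \frac{1}{4}\,h^E\,.
\end{equation}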
Substituting into the string field (\ref{mdfjhkj}) we obtain
\begin{equation}
|\Psi\rangle = h_{\mu\nu}^E \Bigl( |\mathcal{O}^{\mu\nu}\rangle - {1\over 4}\,
\eta^{\mu\nu} |\mathcal{O}^d\rangle \Bigr) + {2\over D-2}\,\, \Phi\,
\Bigl( \, 2 \eta_{\mu\nu} |\mathcal{O}^{\mu\nu}\rangle -|\mathcal{O}^d\rangle \Bigr)
\,.
\end{equation}
Interestingly, the vertex operator that varies the Einstein metric (without
variation of the dilaton) is the same as that for the
string metric~(see (\ref{dflkkflk})).
It is the dilaton operator that changes this time. The vertex operator
\begin{equation}
\mathcal{D} = 2\eta_{\mu\nu}\mathcal{O}^{\mu\nu} - \mathcal{O}^d
=\Bigl( {2\over \alpha'}\, \partial X \cdot \bar\partial X
- {1\over 2} (c \partial^2 c- \bar c \bar\partial^2 \bar c) \,\Bigr) e^{ipX}\,,
\end{equation}
varies the dilaton without varying the Einstein metric.
This is the dilaton vertex operator used almost exclusively in
the early literature -- it is naturally associated with the Einstein
metric. The corresponding state $|\mathcal{D}(p) \rangle$ has a particularly
nice property: it is annihilated by the BRST operator when $p^2=0$. Indeed,
\begin{equation}
Q_B\,|\mathcal{D}(p)\rangle = \frac{\alpha'}{2}p^2c_0^+ |\mathcal{D}(p)\rangle \,.
\end{equation}
The dilaton $\mathcal{D}$ is in fact the unique linear
combination of the matter
and ghost dilatons that has this property. For other combinations,
terms linear in the momentum $p$ (such as $ (p\cdot \alpha_{-1})c_1\,\bar c_1 \bar
c_{-1} |p\rangle$) survive.
\subsection{Relating the sigma model and string field dilaton and tachyon}
The closed string theory potential $V$, as read from
the effective action (\ref{sigma_action}) is
\begin{equation}
\label{Sigmamodelpot}
\kappa^2 V=e^{-2\Phi} \big( \, V (T) +\cdots
\big)\,, \quad \hbox{with} \quad V (T) = -T^2 + \cdots\,.
\end{equation}
Here $\Phi$ and $T$ are the zero momentum
dilaton and tachyon fields in the effective field theory.
The purpose of this section is to discuss the relation between
$\Phi$ and $T$ and the corresponding string fields
$d$ and $t$, both sets at zero momentum. To do this we
must consider the effective potential for $d$ and $t$ calculated
in string field theory; only the potential
itself is available to us. Collecting our previous results, we write
\begin{eqnarray}
\label{stringfieldform}
\kappa^2 V&=&-t^2+1.6018\, t^3-3.0172 \,t^4\nonumber\\[1.0ex]
&&+\,3.8721\, t^3 d
+(-0.8438\, t+1.3682\,t^2)\, d^2
-0.9528\, t\, d^3
-0.1056 \, d^4\, .
\end{eqnarray}
The contributions from massive fields affect quartic and higher
order terms.
In our setup, the relevant terms arise when we eliminate
the level-four massive fields using their kinetic terms in
(\ref{gknfeh}) and their linear
couplings to $t^2$ in (\ref{fglekf4}), to
$td$ in (\ref{cgdt}), and to $d^2$ in (\ref{cftcoudk}). We find
\begin{equation}
\Delta V = -{6241\over 186624}\, d^4 + {25329\over 16384}\, d^2\, t^2
- {1896129\over 4194304}\, t^4 \simeq
-0.0334\, d^4 + 1.5460\, d^2 t^2 - 0.4521\, t^4\,.
\end{equation}
It follows that the effective potential for the tachyon
and the dilaton, calculated up to terms quartic in the fields and including
massive fields of level four only, is given by:
\begin{eqnarray}
\label{string field form}
\kappa^2 V_{eff}&=&-t^2+1.6018\, t^3-3.4693 \,t^4\nonumber\\[1.0ex]
&&+\,3.8721\, t^3 d
+(-0.8438\, t+2.9142\,t^2)\, d^2
-0.9528\, t\, d^3
-0.1390 \, d^4\, + \ldots\, .
\end{eqnarray}
The dots represent quintic and higher terms, which receive
contributions both from
elementary interactions and from the integration of massive fields.
More generically, we write
\begin{eqnarray}
\label{stringxfield form}
\kappa^2 V_{eff} &=& - t^2+ a_{3,0} t^3+ a_{4,0} \,t^4\nonumber\\[1.0ex]
&&+a_{3,1}\, t^3 d
+(a_{1,2}\, t+a_{2,2}\,t^2)\, d^2
+ a_{1,3}\, t\, d^3
+ a_{0,4}\, d^4\, + \ldots\, .
\end{eqnarray}
The values of the coefficients $a_{i,j}$ can be read off by comparing this
equation with (\ref{string field form}).
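As a simple arithmetic cross-check of these coefficients (a sketch in Python, not part of the derivation; the inputs are the decimal coefficients quoted in (\ref{stringfieldform}) and the exact fractions appearing in $\Delta V$):

```python
from fractions import Fraction

# Exact massive-field contributions quoted in Delta V
dV_d4 = -Fraction(6241, 186624)      # d^4 coefficient
dV_d2t2 = Fraction(25329, 16384)     # d^2 t^2 coefficient
dV_t4 = -Fraction(1896129, 4194304)  # t^4 coefficient

# Their decimal values as quoted in the text
assert abs(float(dV_d4) + 0.0334) < 5e-5
assert abs(float(dV_d2t2) - 1.5460) < 5e-5
assert abs(float(dV_t4) + 0.4521) < 5e-5

# Adding Delta V to the elementary potential reproduces V_eff
print(round(-3.0172 + float(dV_t4), 4))    # t^4 coefficient of V_eff
print(round(1.3682 + float(dV_d2t2), 4))   # t^2 d^2 coefficient of V_eff
print(round(-0.1056 + float(dV_d4), 4))    # d^4 coefficient of V_eff
```

The three printed values reproduce the coefficients $-3.4693$, $2.9142$ and $-0.1390$ appearing in (\ref{string field form}).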
Two facts about $V_{eff}$ make it
clear that it is not of the form of
a ghost-dilaton exponential times a tachyon
potential. First, it does not have a term
of the form $t^2 d$ that would arise from the tachyon mass
term and the expansion of the exponential.
Second, it contains a term linear in the tachyon; such a term
should be absent since the tachyon potential does not
have a linear term.
Nontrivial field redefinitions are necessary to relate
string fields and sigma model fields.
To linearized order the
fields are the same, so we write relations of the form
\begin{eqnarray}
\label{frsfsmf}
t&=&T+\alpha_1 T\Phi+\alpha_2 \Phi^2+\cdots, \nonumber\\[1.0ex]
d&=&\Phi+\beta_0 T^2 + \beta_1 T\Phi+\beta_2 \Phi^2 +\cdots\,,
\end{eqnarray}
where the dots indicate
terms of higher order in the sigma model fields.
We found no {\em need} for a $T^2$ term in the redefinition of the tachyon
field; such a term would change the cubic and quartic self-couplings
of the tachyon in $V(T)$.
Since $d$ gives rise to pure tachyon terms that are quadratic or higher,
only at quintic and higher order in $T$ will $V(T)$ differ from the
potential obtained by replacing $t\to T$ in the first line of
(\ref{string field form}). We thus expect that after the field redefinition
(\ref{string field form}) becomes
\begin{equation}
\label{Sigmamodelpotx} \kappa^2 V=e^{-2\Phi} \big( -T^2 +1.6018\,
T^3-3.4693\,T^4 + \dots \big)\,,
\end{equation}
at least to quartic order in the fields.
We now plug the substitutions (\ref{frsfsmf})
into the potential (\ref{string field form}) and compare with
(\ref{Sigmamodelpotx}). A number of conditions emerge.
\begin{itemize}
\item In order to get the requisite $T^2\Phi$ term we need $\alpha_1 = -1$.
\item In order to have a vanishing $T\Phi^2$ term $\alpha_2 = {1\over
2}a_{1,2}$ must be
half the coefficient of $td^2$ in (\ref{string field form}).
\item Getting the correct
$T^3\Phi$ coupling then fixes $\beta_0 = (a_{3,0} - a_{3,1})/(2 a_{1,2})$.
\item Getting the correct value of $T^2 \Phi^2$
fixes $\beta_1 = -(1+ {3\over 2} a_{3,0} a_{1,2} + a_{2,2})/ (2a_{1,2})$.
The vanishing of $T\Phi^3$
fixes $\beta_2= - a_{1,3}/(2a_{1,2})$. All coefficients in
(\ref{frsfsmf}) are now fixed.
\item The coefficient
of $\Phi^4$, which should be zero, turns out to be $(a_{0,4} + {1\over
4} a_{1,2}^2)\simeq
0.0389$,
which is small, but does not vanish.
\end{itemize}
Our inability to adjust the coefficient of $\Phi^4$ was to be expected.
The potential (\ref{string field form})
contains the terms $- t^2 + a_{1,2} \, t d^2 + a_{0,4} d^4$
and, to this order, integrating out the tachyon gives an effective
dilaton quartic term of $(a_{0,4} + {1\over 4} a_{1,2}^2)$. With the
contribution
of the massive fields beyond level four
this coefficient in the dilaton effective potential would vanish.
This is, in fact, the statement that was verified in~\cite{Yang:2005ep}.
It follows that we need not worry that the
quartic term in $\Phi$ does not vanish exactly.
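Explicitly, keeping only $-t^2 + a_{1,2}\, t\, d^2 + a_{0,4}\, d^4$ and eliminating the tachyon at this order,
\begin{equation}
t = \frac{a_{1,2}}{2}\, d^2 \quad\Longrightarrow\quad
-t^2 + a_{1,2}\, t\, d^2 = \frac{1}{4}\, a_{1,2}^2\, d^4\,,
\end{equation}
so the quartic term of the dilaton effective potential is
$\bigl(a_{0,4} + \tfrac{1}{4}\,a_{1,2}^2\bigr) d^4$, as claimed.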
Following the steps detailed before we
find
\begin{eqnarray}
\label{td to TD}
t&=&T- T\, \Phi-\, 0.4219 \, \Phi^2+\cdots, \nonumber\\[1.0ex]
d&=&\Phi + 1.3453 \, T^2+ 1.1180 \,T\, \Phi-\, 0.5646 \,
\Phi^2+\cdots.
\end{eqnarray}
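The numerical coefficients in (\ref{td to TD}) can be cross-checked against the formulas in the itemized list above (a Python sketch; the $a_{i,j}$ are read from (\ref{string field form})):

```python
# Coefficients a_{i,j} of the quartic effective potential
a30, a31 = 1.6018, 3.8721
a12, a22 = -0.8438, 2.9142
a13, a04 = -0.9528, -0.1390

alpha1 = -1.0                                       # fixes the T^2 Phi term
alpha2 = 0.5 * a12                                  # kills the T Phi^2 term
beta0 = (a30 - a31) / (2 * a12)                     # fixes the T^3 Phi term
beta1 = -(1 + 1.5 * a30 * a12 + a22) / (2 * a12)    # fixes the T^2 Phi^2 term
beta2 = -a13 / (2 * a12)                            # kills the T Phi^3 term

print(round(alpha2, 4), round(beta0, 4), round(beta1, 4), round(beta2, 4))

# Residual Phi^4 coefficient: small but nonvanishing at this level
print(round(a04 + 0.25 * a12 ** 2, 4))
```

The first line prints $-0.4219$, $1.3453$, $1.118$ and $-0.5646$, matching (\ref{td to TD}); the residual $\Phi^4$ coefficient comes out $\simeq 0.039$, consistent with the value $0.0389$ quoted above up to rounding of the input coefficients.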
\begin{figure}[!ht]
\leavevmode
\begin{center}
\epsfysize=6.0cm
\epsfbox{MarginalDirections.eps}
\end{center}
\caption{\small The solid line is the dilaton marginal
direction defined by the set of points $(d, t(d))$ where $t(d)$ is the
expectation
value of $t$ obtained solving the tachyon equation of motion for
the given $d$. The dashed line represents the direction along
the sigma model dilaton $\Phi$ (thus $T=0$). It is obtained by setting
$T=0$ in equation (\ref{td to TD}).
The two lines agree well even reasonably far from the origin.}
\label{fig marginal directions}
\end{figure}
In string field theory the dilaton deformation is represented
in the $(d,t)$ plane
by the curve $(d, t(d))$, where $t(d)$ is the expectation
value of the tachyon when the dilaton is set equal to $d$.
This curve, calculated using the action (\ref{string field form}),
is shown as a solid line in Figure~\ref{fig marginal directions}.
On the other hand, it is clear that $\Phi$ (with $T=0$) defines the
marginal direction in the effective field theory. Setting $T=0$ in
(\ref{td to TD}) we find the pair $(d(\Phi), t(\Phi))$, which must
be a parameterization of the flat direction in terms of $\Phi$.
This curve is shown as a dashed line in Figure~\ref{fig marginal
directions}. It is a good consistency check that these two
curves agree well with each other over a significant fraction of
the plot.
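This comparison can be reproduced numerically. The sketch below (in Python; a minimal illustration, not the calculation used for the figure) finds the marginal branch by solving $\partial_t V_{eff}=0$ with Newton's method and compares it with the $T=0$ parameterization from (\ref{td to TD}):

```python
# Coefficients of kappa^2 V_eff from the quartic effective potential
a30, a40, a31 = 1.6018, -3.4693, 3.8721
a12, a22, a13 = -0.8438, 2.9142, -0.9528

def dV_dt(t, d):
    """Derivative of kappa^2 V_eff with respect to the tachyon t."""
    return (-2 * t + 3 * a30 * t ** 2 + 4 * a40 * t ** 3
            + 3 * a31 * t ** 2 * d
            + (a12 + 2 * a22 * t) * d ** 2 + a13 * d ** 3)

def d2V_dt2(t, d):
    return (-2 + 6 * a30 * t + 12 * a40 * t ** 2
            + 6 * a31 * t * d + 2 * a22 * d ** 2)

def t_marginal(d, steps=40):
    """Marginal branch t(d), seeded by its leading-order value."""
    t = 0.5 * a12 * d ** 2  # t ~ -0.4219 d^2 near the origin
    for _ in range(steps):
        t -= dV_dt(t, d) / d2V_dt2(t, d)
    return t

# Dashed line of the figure: set T = 0 in the field redefinition
for phi in (0.05, 0.1, 0.2):
    d = phi - 0.5646 * phi ** 2
    print(d, t_marginal(d), -0.4219 * phi ** 2)
```

Over this range the two determinations of the tachyon expectation value agree at the few-percent level, in line with the agreement of the two curves in Figure~\ref{fig marginal directions} near the origin.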
\subsection{Dilaton deformations}
In Ref.~\cite{Yang:2005ep} we computed the effective dilaton potential
that arises when we integrate out the tachyon from a potential
that includes only quadratic and cubic terms. We found that the domain
of definition of this potential is the full real $d$ line. This
happens because the (marginal) branch $t(d)$ that gives the
expectation value of $t$ for a given value of $d$ is well defined
for all values of $d$. In this section we extend this computation
by including higher level fields and higher order interactions.
As we will demonstrate, it appears plausible that the domain
of definition for the effective dilaton potential remains $d\in (-\infty,
\infty)$.
The marginal branch is easily identified for small values
of the dilaton: as the dilaton expectation value goes to zero
all expectation values go to zero. For large enough values of
the dilaton the marginal branch may cease to exist, or it may
meet another solution branch. If so, we obtain limits on the
value of $d$. Since the dilaton effective potential is supposed
to be flat in the limit of high level, we propose the following
criterion.
If we encounter a limit value of $d$, this value is deemed
reliable only if the dilaton potential at this point is not
very large. A large value for the potential indicates that the calculation
is not reliable because the same terms that are needed to make the potential
small could well affect the limit value. In open string field theory
a reliable limit value was obtained for the Wilson line parameter:
at the limit point the potential energy density was a relatively small fraction
of the D-brane energy density.
The purely cubic potential for $t$ gives a critical point
with $\kappa^2 V \sim - 0.05774$.
We define $\mathcal{R}(d) \equiv\frac{|\kappa^2 V(d)|}{0.05774}$, where $V(d)$
is the
effective dilaton potential. A critical value of $d$ for which $\mathcal{R} >
1$
will be considered unreliable.
We start with cubic potentials and then include the elementary
quartic interactions level by level. With cubic potentials, the
effective dilaton potential
is invariant under $d\to -d$. With
$\mathbb{V}^{(3)}_4$, dilaton deformations can be arbitrarily
large~\cite{Yang:2005ep}. We then find
\begin{itemize}
\item The dilaton potential derived
from $\mathbb{V}^{(3)}_8$ is defined
for $|d|\leq 624$. This is plausible since, at this level,
the equations of motion for the level-four fields are linear.
\item The dilaton potential derived
from $\mathbb{V}^{(3)}_{12}$ is defined
for $|d|\leq 1.71$. Since $\mathcal{R}(\pm 1.71)=42.4$,
there is no reliable limit value.
\item The dilaton potential derived from $\mathbb{V}^{(4)}_{0}$
is defined
for $|d|\leq 4.67$, where $\mathcal{R}(\pm
4.67)=49.5$. The large value of $\mathcal{R}$ indicates
that there is no evidence of a limit value.
\item The dilaton potential derived from $\mathbb{V}^{(4)}_{2}$ is
not invariant under $d\to -d$. We find a
range $d \in (-\infty\,, 3.124)$. Although
$\mathcal{R}(3.124)=0.387$, the potential has a maximum with
$\mathcal{R}=3.325$ at $d=1.92$. This fact makes the limit point
$d= 3.124$ unreliable.
\item The dilaton potential derived from $\mathbb{V}^{(4)}_{4}$, the highest
level potential we have computed fully, is regular
for $d\in (-2.643, \,6.415)$. Since $\mathcal{R}(6.415)=1502.4$
and $\mathcal{R}(-2.643)=89.2$, there is no
branch cut in the reliable region.
\end{itemize}
We have also computed the higher level
quartic interactions $t d^3$ and
$d^4$. We have checked that
$\mathbb{V}^{(4)}_{4}$, supplemented by those interactions,
does not lead to branch cuts in the potential for the dilaton.
This result, however, is not conclusive. Additional
interactions must be included at level six (the level of $t d^3$)
and at level eight (the level of $d^4$).
\begin{figure}[!ht]
\leavevmode
\begin{center}
\epsfysize=6.5cm
\epsfbox{resultsfig03.eps}
\end{center}
\caption{\small Dilaton effective potential. The dashed line
arises from $\mathbb{V}^{(3)}_{4}$, the solid line arises from
$\mathbb{V}^{(3)}_{8}$, and the thick line arises from
$\mathbb{V}^{(4)}_{8}$.} \label{dilaton potentials}
\end{figure}
We tested in~\cite{Yang:2005ep} that cubic and quartic
interactions combine to give a vanishing quartic term in the
dilaton effective potential. We can ask if the potential for the
dilaton becomes flatter as the level of the calculation is
increased. We find that it roughly does, but the major changes in
the potential are due to the elementary quartic term in the
dilaton. For the cubic vertex, the interactions of the type $d^2
M$, with $M$ massive, give rise to terms quartic in the dilaton.
Other cubic couplings that do not involve the dilaton typically
induce $d^6$ (and higher order) terms, which play a secondary role
in flattening the potential if the quartic terms have not
cancelled completely. Therefore, the potentials that arise from
$\mathbb{V}^{(3)}_{8}$, $\mathbb{V}^{(3)}_{10}$ and
$\mathbb{V}^{(3)}_{12}$ (without the contribution from level six
massive fields) show no obvious differences. The potentials
obtained at various levels are shown in Figure~\ref{dilaton
potentials}. The dashed line arises from $\mathbb{V}^{(3)}_{4}$,
the solid line arises from $\mathbb{V}^{(3)}_{8}$, and the thick
line arises from $\mathbb{V}^{(4)}_{8}$.
\sectiono{Conclusions}
In this paper we have presented some calculations that
suggest the existence of a tachyon vacuum for the bulk
closed string tachyon of bosonic string theory. We have
discussed the physical interpretation using the effective
field theory both to suggest the value of the action
density at the critical point (zero!) and to obtain
rolling solutions~\cite{HZR} that seem consistent with the interpretation
of the tachyon vacuum as a state in which there are no
closed string states.
The numerical evidence presented is still far from conclusive.
A critical point seems to exist and appears to be robust, but it
is not all that clear what will happen when the accuracy of
the computation is increased. If the action density at the
critical point goes to zero it may indeed define a
new and nontrivial tachyon vacuum.
Conceivably, however, the critical point could approach the
perturbative vacuum, in which case there would be no evidence
for a new vacuum. Alternatively, if the action density at the
critical point remains finite, we would have no interpretation
for the result.
\begin{figure}[!ht]
\leavevmode
\begin{center}
\epsfysize=9.1cm
\epsfbox{ncstring.eps}
\end{center}
\caption{\small A non-critical $(p+1)$-dimensional
string theory would
correspond to a solitonic solution of critical string
theory in which, far away from the reduced space, the fields
approach the values of the closed string tachyon vacuum.}
\label{ncrit_conj}
\end{figure}
Let us consider some additional indirect arguments that support the
existence of a closed string tachyon vacuum. The first one arises
from the existence of sub-critical bosonic string theories. The
evidence in string theory is that most string theories are related
by compactifications and/or deformations.
It seems very likely that non-critical string theories are
also related to critical string theory. It should then
be possible to obtain a non-critical string theory as a solution
of critical string theory. Certainly the view that $D=2$ bosonic
string theory is a ground state of the bosonic string has been
held as likely~\cite{2dconj}. In non-critical string theory the number of
space dimensions is reduced (at the expense of a linear dilaton
background). The analogy with lower-dimensional D-branes in open
string theory seems apt: the branes are solitons of the open string
field theory tachyon in which far away from the branes the tachyon
sits at the vacuum. It seems plausible that non-critical string
theories are solitonic solutions of the {\em closed} string theory
tachyon. As sketched in Figure~\ref{ncrit_conj}, far away
along the coordinates transverse to the non-critical
world-volume, the background would approach the
closed string tachyon vacuum. The universality of the tachyon vacuum
would imply that a noncritical string theory could be further
reduced using the same background configuration used to reduce the original
critical theory.
In fact, in the $p$-adic open/closed string theory lump solutions
of the closed string sector appear to describe spacetimes of lower
dimensionality, as explained by
Moeller and Schnabl~\cite{Moeller:2003gg}.
Indeed, far away from the lump the open string tachyon must be at its
vacuum and therefore there are no D-brane solutions with more space dimensions
than
those of the lump. Away from the lump the closed string
tachyon is at its vacuum, and no linearized solutions of the equations
of motion exist.
A suggestive argument for zero action at the tachyon vacuum follows
from the sigma model approach. As discussed by
Tseytlin~\cite{Tseytlin:2000mt}, it seems likely that
the closed string effective action for the spacetime background fields
may be written in terms of the partition function $Z$ of the two-dimensional
sigma model as well as derivatives thereof (this does work for open
strings~\cite{Witten:1992cr}). The conventional coupling of the
world-sheet area to the tachyon $T$ results
in a partition function and an effective action with a prefactor of $e^{-T}$.
Thus one expects a tachyon potential of the form $e^{-T} g(T)$ where $g$ is a
polynomial that begins with a negative quadratic term\footnote{In
\cite{Tseytlin:2000mt},
a tachyon potential of the form $-T^2 e^{-T}$ is considered. Complications
in fixing the kinetic terms made it unclear if $T=\infty$
was a point in the configuration space (see the discussion below eqn.~(4.13))
of \cite{Tseytlin:2000mt}. For additional comments on the possible
form of the tachyon potential, see Andreev~\cite{Andreev:2003hn}.}. In
this case, for a tachyon vacuum at $T\to\infty$ the action goes to zero.
The computations and the discussion presented in this paper
have led to a set of testable conjectures concerning the vacuum of
the bulk closed string tachyon of bosonic string theory. It seems
likely that additional computations, using string field theory,
effective field theory, and conformal field theory, will help test these ideas
in the near future.
\bigskip
\noindent
{\bf Acknowledgements} We are grateful to M.~Headrick
and A.~Sen for many
instructive discussions. We would also
like to acknowledge useful conversations with K.~Hashimoto,
H. Liu, N. Moeller, Y.~Okawa, M. Schnabl, and A. Tseytlin.
\section{Introduction}\label{intro}
Our present understanding of the universe and its evolution implies the existence of black holes, bodies whose masses are packed in such small volumes that not even light can escape.
We have experimental evidence that such objects do exist from observations of X-ray binaries, which suggest the existence of stellar mass black holes in binary systems, and from the spectral properties of quasars and active galactic nuclei, which suggest that super-massive black holes dwell at the center of most galaxies.
From a theoretical point of view, black holes are a direct consequence of the fact that we must use General Relativity (GR) to describe the late stages of gravitational collapse. For collapsing matter sources satisfying standard energy conditions, Einstein's field equations imply that eventually collapse must lead to the formation of trapped surfaces and a singularity
\cite{sing1}, \cite{sing2}, \cite{HE}.
The generic existence of singularities in solutions of Einstein's field equations is a troublesome issue for classical GR as their presence signals a regime where predictability breaks down
\cite{hawk}
and the theory ceases to hold.
The most conservative view on singularities is that they are a consequence of the application of the theory in a regime where quantum effects become important and thus they should not appear in a full theory of quantum gravity. This point of view is usually traced back to Wheeler (see for example
\cite{wheeler}
for an historical overview).
Our theoretical understanding of how black holes form is rooted in the simplest toy model for spherical collapse that was developed by Oppenheimer and Snyder
\cite{OS}
and independently by Datt \cite{datt}
in 1939 (from now on referred to as the OSD model).
On the one hand, we know that GR works very well in the weak field, and we are confident that models of collapse such as the OSD model accurately describe the fundamental features of collapse scenarios far away from the singularity.
On the other hand, the behavior of gravity in the strong field is not well understood and even though we do know that GR requires modifications in the regime where the gravitational field becomes strong over very small scales, we still do not know what kind of form such modifications should take.
Black hole horizons somehow stand at the crossroad between these two situations. For example, the event horizon for a Schwarzschild black hole sits comfortably far from the strong gravity region but its description is strongly linked to the existence of the singularity and modifications to our classical models in the vicinity of the singularity bear important consequences for the behavior of the horizon itself.
The question of how gravity behaves in the strong field is also tightly connected to the question `What happens to matter when it is compressed to volumes small enough that the classical description fails?'
Therefore one could expect that both sides of Einstein's equations (the geometrical side containing Einstein's tensor $G_{\mu\nu}$ and the matter side containing the energy-momentum tensor $T_{\mu\nu}$) will need to be modified in the strong field regime.
The investigation of simple analytical toy models in quantum gravity is motivated by the idea that these scenarios could provide valuable information about the general features that a full theory of quantum gravity should possess.
The general view that has taken shape over the last couple of decades is that quantum effects will generate repulsive pressures sufficient to counteract the gravitational attraction, thus avoiding the singularity at the end of collapse. The first, immediate consequence is that it should be impossible to form a Schwarzschild (or Kerr) black hole within a full theory of quantum gravity.
Then different possibilities may arise depending on the assumptions made in the model. For example, collapse may stop before the formation of the horizon, leaving behind an exotic compact object. Alternatively, collapse may lead to the formation of the horizon and eventually halt at much smaller scales. In this last case, collapsing matter would eventually bounce, thus leading to a phase of re-expansion following collapse. The re-expanding phase may not affect the geometry in the exterior, thus leaving an object that looks like a black hole for distant observers, or it may trigger a transition of the black hole horizon to a white hole horizon.
In the present article we focus on modifications to dynamical scenarios where the black hole forms from regular initial data. However, it should be noted that the issue of how the static Schwarzschild black hole solution can be altered by the introduction of quantum effects has been addressed by several authors in different frameworks.
For example, solutions within classical GR describe what are now called `regular black holes'
(see for example \cite{bardeen} and \cite{hayward} for the earliest results, or \cite{fro} and \cite{boj4} for more recent discussions). These are modifications of the Schwarzschild solution that imply a minimum length scale and a non vanishing energy-momentum tensor. Rotating regular black holes were considered for example in
\cite{BM},
their properties as candidates for astrophysical objects were studied for example in
\cite{bobo1} and \cite{bobo2},
while the extension to the case with non vanishing cosmological constant was studied in
\cite{saa}.
Other solutions that modify the Schwarzschild black hole can be obtained via the introduction of quantum effects. These are vacuum solutions within a quantum gravitational description of the black hole space-time. For example, approaches based on Loop Quantum Gravity (LQG) have been studied in \cite{ash3}, \cite{boj2}, \cite{pullin} and \cite{modesto}.
Other approaches have been also considered.
For example, an improved Schwarzschild solution based on renormalization group and running gravitational constant was described in
\cite{Bonanno},
while a quantization approach based on canonical formalism was presented in
\cite{kuns1}
and a discussion of the meaning of the gravitational Schwarzschild radius within a quantum theory was presented in
\cite{casadio2}.
To this aim a quantum mechanical operator acting on the `horizon wave-function' was introduced.
All these models show similar features, like, for example, the appearance of an inner horizon inside the event horizon (see for example
\cite{torres} for a study on the properties of the inner horizon of the solution presented in \cite{Bonanno} and \cite{torres-new} for a more recent study with cosmological constant) or the possible existence of a massive, dense compact object of finite size smaller than the horizon. However, the most important insight coming from the study of quantum corrected black holes is the realization that quantum effects may modify the geometry at large scales, thus having implications for the description of the horizon of the black hole.
The focus of the present article is on existing dynamical models of collapse, and on how the introduction of `quantum' effects may alter the standard picture of black hole formation.
We will review the current status of our understanding of gravitational collapse when some kind of repulsive effects (that can be interpreted as quantum corrections) are incorporated as the gravitational field becomes strong.
In dynamical models the main mechanism that leads to the avoidance of the singularity is when collapse halts at a finite radius, possibly producing a bounce for the infalling matter.
As mentioned before, the cloud may halt at a radius larger than the Schwarzschild radius, thus never forming a black hole. In this case it may produce a compact object or it may completely evaporate (see for example, \cite{grava1} for an example of compact object, \cite{rad2}, \cite{rad3} and \cite{mersini} for the effects of black hole evaporation on gravitational collapse and \cite{yuki1}, \cite{yuki2} for examples of evaporation including 4D Weyl anomaly).
Alternatively the cloud may reach a minimum scale and re-expand. If the repulsive effects close to the bounce influence the geometry in the vacuum exterior then the black hole effectively turns into a white hole (see for example \cite{garay2}, \cite{rov1} and \cite{bar3}).
On the other hand, if repulsive effects are confined to a small neighborhood of the center then external observers would effectively see a black hole (see for example \cite{frolov}).
Our aim is to point out the main outstanding unresolved issues in theoretical models and how such models might be tested against observations of energetic phenomena in the universe in the near future.
The first proposed model of this kind involved the collapse of a thin light-like shell in a quantum gravity scenario in which the effective Lagrangian was described by Einstein's Lagrangian plus the leading higher order terms that describe gravity over short distances
\cite{frolov}, \cite{frolov2}.
There it was shown for the first time that quantum gravitational effects can avoid the occurrence of singularities and, as a consequence, shorten the life of the black hole horizon.
In more recent times the renewed interest for quantum corrections to classical singularities in collapse models was sparked by Loop Quantum Cosmology (LQC). Several authors, based on considerations coming from LQG, showed how quantum corrections near the big bang singularity can remove the initial singularity and produce a bouncing universe
(see for example
\cite{ash}, \cite{ash2} and \cite{boj}).
Since the simplest relativistic toy models for collapse are essentially the time reversal of big bang models, the same formalism derived from LQC can in principle be applied to collapse
(see for example \cite{boj3}).
However, the collapse of a massive star is very different from the evolution of the universe. Above all, the most notable difference that one has to deal with when considering collapse is the matching to an exterior manifold describing the space-time around the collapsing object. In the OSD model, a horizon develops in the exterior once the infalling matter crosses the Schwarzschild radius.
How the bounce will affect the horizon in the vacuum exterior is presently still not entirely clear. If quantum effects are confined inside the collapsing matter then the light-cone structure of the space-time must undergo a discontinuous transition from the interior to the exterior. On the other hand if quantum effects propagate in the exterior then the black hole geometry must be altered.
As mentioned above, most dynamical models with `quantum' corrections studied to date result in a bouncing scenario. These models suggest the possibility of the formation of exotic compact remnants as leftovers from collapse and recently there has been a lot of interest in the possible phenomenology of such remnants.
One could ask whether a regular black hole can form through a dynamical process
(see for example \cite{bam-mod}),
or what kind of properties such remnants would have
(see for example \cite{vidotto}).
The idea that compact objects other than neutron stars can form from collapse has been around for a long time. For example, exotic compact objects were proposed by Hawking already in 1971
\cite{hawking}.
Just like the electron degeneracy pressure can stabilize collapse leading to a white dwarf and the neutron degeneracy pressure can stabilize it to produce a neutron star, it seems reasonable to suppose that a further island of stability may exist, for a yet unknown state of matter, at densities higher than those of neutron star cores.
Along these lines several kinds of exotic compact objects have been proposed through the years. For example objects like gravastars (`gravitational vacuum stars', obtained from a phase transition of quantum vacuum near the location of the horizon)
(see \cite{grava1} and \cite{grava2})
and black stars
(see \cite{bar1})
have a radius slightly larger than the Schwarzschild radius.
On the other hand theoretical objects like
quark stars
(see \cite{quark1} and \cite{quark2})
and boson stars
(see \cite{boson1} and \cite{boson2})
have a larger boundary, lying outside the photon sphere of the corresponding black hole.
The properties of these proposed objects have been studied in detail.
Also, arguments for the existence of compact remnants left over after black hole evaporation were proposed in connection with the information loss problem
(see for example \cite{gid1}, \cite{chen} and \cite{lochan} and references therein). In these scenarios, a compact object of Planck scale may be the residue of the complete evaporation of the black hole via Hawking radiation.
The question of which kind of dynamical process may lead to the formation of such objects remains open and it is closely connected to how matter and gravity behave at extremely high densities.
The paper is organized as follows: In section \ref{class} we review the OSD model for dust collapse and Einstein's equations for the collapse of homogeneous dust and perfect fluids (the reader familiar with such topics can jump directly to the next section).
Section \ref{semiclass} is devoted to a review of semi-classical corrections to collapse and their consequences for the OSD model and black hole formation in general.
In section \ref{open} we outline the main open questions related to such modified collapse models.
In section \ref{pheno} we explore the phenomenological consequences that semi-classical collapse bears for astrophysical black holes and
we investigate the possibility that exotic compact objects and remnants may occur as leftovers from collapse. In section \ref{pheno} we also introduce a new dynamical toy model leading to one such hypothetical remnant (which we call a dark energy star).
Finally section \ref{discussion} is devoted to a brief discussion of the present and future status of the field.
Bullet points are used throughout the sections in order to highlight the separation of each topic from the next.
Finally, throughout the paper we will make use of geometrical units by setting $G=c=1$ and for simplicity, we will absorb the factor $8\pi$ in Einstein's equations into the definition of the energy momentum tensor.
\section{Classical collapse...}\label{class}
In order to set the stage we will briefly review here the formalism for gravitational collapse of matter fluids in GR.
For the interior of the collapsing cloud we consider a spherically symmetric space-time described in co-moving coordinates by the metric
\begin{equation} \label{interior}
ds^2=-e^{2\nu(r,t)}dt^2+e^{2\Phi(r,t)}dr^2+R(r,t)^2d\Omega^2 \; ,
\end{equation}
where $d\Omega^2$ is the line element on the unit sphere and $\nu(r,t)$, $\Phi(r,t)$ and $R(r,t)$ are the metric functions to be determined from Einstein's equations.
The energy momentum tensor is that of a gravitating fluid given by $T_{\mu\nu}={\rm diag}(\rho(r,t),p_r(r,t),p_\theta(r,t),p_\theta(r,t))$ for an anisotropic source. In the following we will concentrate on perfect fluids with $p_r=p_\theta=p$.
The metric \eqref{interior} is continuously matched to a known exterior (such as Schwarzschild or Vaidya) through the collapsing boundary $R_b(t)=R(r_b(t),t)$
(see \cite{matching1}-\cite{matching4}
for details).
Then Einstein's equations can be written in the form
\begin{eqnarray} \label{rho}
\rho&=& \frac{F'}{R^2R'} \; , \\ \label{p}
p_r&=&-\frac{\dot{F}}{R^2\dot{R}} \; , \\ \label{nu}
\nu'&=&2\frac{p_\theta-p_r}{\rho+p_r}\frac{R'}{R}-\frac{p_r'}{\rho+p_r} \; , \\ \label{G}
\dot{R}'&=&\frac{1}{2}\left(R'\frac{\dot{G}}{G}+\dot{R}\frac{H'}{H}\right) \; ,
\end{eqnarray}
where primed quantities denote derivatives with respect to the co-moving radius $r$ and dotted quantities denote derivatives with respect to the co-moving time $t$. The function $G(r,t)$ is related to the initial velocity of the particles in the cloud; $G$ and $H$ are defined as
\begin{equation}
G=e^{-2\Phi}R'^2, \; \; H=e^{-2\nu}\dot{R}^2 \; .
\end{equation}
The function $F(r,t)$ is called the Misner-Sharp mass
of the system
\cite{misner},
it describes the amount of matter contained within the co-moving radius $r$ at the co-moving time $t$ and it takes the form
\begin{equation} \label{misner}
F=R(1-G+H) \; .
\end{equation}
The system consists of five equations in seven unknown functions; therefore, in order to close it, one has to specify two equations of state, for the radial and tangential pressures. For example, setting $p_r=p_\theta=0$ gives a cloud of non interacting particles (i.e. `dust'), while homogeneous perfect fluids are given by $p_r=p_\theta=p(t)$; in this case a linear equation of state $p=\lambda\rho$ is often considered. More realistic matter fluids can be described by a polytropic equation of state of the kind $p_r=p_\theta=K\rho^\gamma$, like the one used to describe equilibrium configurations
\cite{tooper}.
Other, more exotic, possibilities may be considered as well.
Equation \eqref{misner} can be rewritten as
\begin{equation} \label{motion}
\dot{R}=\pm e^\nu\sqrt{\frac{F}{R}+G-1} \; ,
\end{equation}
and treated as the equation of motion describing the trajectory of each collapsing shell of matter. Note that in order to describe collapse one has to take the minus sign in the equation above.
Trapped surfaces develop in the interior when
\begin{equation} \label{horizon}
1-\frac{F}{R}=G-e^{-2\nu}\dot{R}^2=0 \; ,
\end{equation}
which implicitly defines the curve $t_{\rm ah}(r)$ describing the co-moving time at which the shell $r$ becomes trapped.
It is generally useful to make use of a gauge degree of freedom and rewrite the equation of motion in terms of a dimensionless scale factor $a(r,t)$ (for details see for example
\cite{review}).
This in turn allows for the introduction of a similar scaling for the Misner-Sharp mass $F$ and the velocity profile $G$, as
\begin{itemize}
\item Scale factor $a(r,t)$ given by: $R=ra$.
\item Mass function $M(r,t)$ given by: $F=r^3M$.
\item Velocity profile $b(r,t)$ given by: $G=1+r^2b$.
\end{itemize}
Then, the equation of motion \eqref{motion} can be rewritten as
\begin{equation}\label{motion2}
\dot{a}=-\sqrt{\frac{M}{a}+b} \; .
\end{equation}
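As a side illustration (not part of the original treatment), equation \eqref{motion2} is straightforward to integrate numerically. The sketch below, in Python with a textbook RK4 stepper, integrates the marginally bound homogeneous dust case ($M=M_0$ constant, $b=0$) and compares the result with the closed-form solution $a(t)=(1-\tfrac{3}{2}\sqrt{M_0}\,t)^{2/3}$ derived in the next subsection; the function names are of course illustrative.

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def scale_factor(M0, b, t_end, steps=20000):
    """Integrate adot = -sqrt(M0/a + b) with a(0) = 1 (homogeneous cloud)."""
    f = lambda t, a: -math.sqrt(M0 / a + b)
    t, a, h = 0.0, 1.0, t_end / steps
    for _ in range(steps):
        a = rk4_step(f, t, a, h)
        t += h
    return a

# Marginally bound dust (b = 0): compare with a(t) = (1 - (3/2) sqrt(M0) t)^(2/3).
M0, t = 1.0, 0.5   # singularity time t_s = 2/(3 sqrt(M0)) ~ 0.667, so t = 0.5 is pre-singularity
a_num = scale_factor(M0, 0.0, t)
a_exact = (1.0 - 1.5 * math.sqrt(M0) * t) ** (2.0 / 3.0)
```

The same loop can be reused for the inhomogeneous case by integrating shell by shell, i.e. one such equation for each co-moving radius $r$.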
For collapse of realistic fluids one usually considers a matter model satisfying the weak, strong or dominant energy conditions. The least stringent of such requirements is given by the weak energy conditions that translate to $\rho>0$, $\rho+p_r>0$ and $\rho+p_\theta>0$.
Of course, all types of matter observed in the universe today satisfy the classical energy conditions mentioned above. `Exotic' matter sources have been conjectured but not observed so far. Nevertheless it is reasonable to suppose that in the ultra-dense regime where quantum gravitational effects become important matter will satisfy some quantum version of the energy conditions.
For example in \cite{visser} it was shown that non linear energy conditions are more suitable to describe matter in a regime transitioning from a classical to a quantum state.
At a classical level it is well known that in a globally hyperbolic space-time, any matter field that satisfies energy conditions and is compact enough to form a trapped surface must inevitably form a singularity as well (for a complete historical review of the singularity theorems see \cite{senovilla-garfinkle}).
Therefore, within the framework of classical GR, the removal of the singularities at the end of collapse is closely related to the violation of at least one of the assumptions of the singularity theorems.
From the above considerations then it is easy to conclude that the most natural way to avoid such singularities is to violate the energy conditions at some stage during collapse. This approach was first followed in
\cite{berg}.
Violation of energy conditions implies repulsive effects and these effects are responsible for halting collapse and avoidance of the singularity.
There are essentially two ways in which the classical energy conditions may be violated in the strong field regime: either we assume that GR still describes the evolution of the collapsing cloud but the matter fields, within some quantum theory of gravity, are such that they violate the energy conditions; or we assume that matter satisfies the energy conditions and repulsive effects arise from quantum modifications to GR. As we will see below, such modifications to GR may be treated as an effective matter source to be put on the right-hand side of Einstein's equations. In this way the problem is reduced to that of determining the evolution of an effective, non physical, source within classical GR.
\subsection{Dust, homogeneous fluids and null shells}
\textbullet\ The simplest matter model that one can consider to study collapse is the OSD model describing homogeneous dust (i.e. non interacting particles). In this case $\rho=\rho(t)$ and $p_r=p_\theta=0$. From equation \eqref{p} it follows that the Misner-Sharp mass must be $F(r)=r^3M_0$, with $M_0$ an arbitrary constant. Then Einstein's equations simplify dramatically and it is easy to see that we can set $\nu=0$ and $G=1+r^2b_0$ with $b_0$ an integration constant. In the marginally bound case given by $b_0=0$ (where ideally particles have zero initial velocity at spatial infinity) solving the set of Einstein's equations reduces to solving the equation of motion \eqref{motion2}, which
is readily integrated with the initial condition $a(0)=1$ to give
\begin{equation}
a(t)=\left(1-\frac{3}{2}\sqrt{M_0}t\right)^\frac{2}{3} \; .
\end{equation}
Then it is immediately seen that the singularity forms at the time $t_s=2/(3\sqrt{M_0})$, when $a=0$. From equation \eqref{rho} we see that the energy density is $\rho(t)=3M_0/a^3$, which diverges as $t\to t_s$. The event horizon in the exterior region develops at the time $t=t_s-2r_b^3M_0/3<t_s$; at the same time the apparent horizon develops in the interior. The apparent horizon then moves inwards from the boundary to reach the center at the time of formation of the singularity.
The whole scenario is summarized in the well known Penrose diagram in the left panel of figure \ref{fig1}.
\textbullet\ A similar situation occurs when one considers collapse of a homogeneous perfect fluid with a linear equation of state of the type $p=\lambda\rho$. Then the dominant energy condition implies that $\lambda\in[-1,1]$. Einstein's equations simplify in a similar way to the dust case and we now obtain a variable Misner-Sharp mass $M(t)=M_0/a^{3\lambda}$, which implies an inflow or outflow of matter through the boundary (matching should then be performed with a Vaidya exterior).
The equation of motion for marginally bound collapse is also easily integrated to give
\begin{equation}
a(t)=\left(1-\frac{3(\lambda+1)}{2}\sqrt{M_0}t\right)^{\frac{2}{3(\lambda+1)}} \; .
\end{equation}
We can see that collapse proceeds in a way qualitatively similar to the dust case (note that $\ddot{a}$ changes sign when $\lambda$ becomes smaller than $-1/3$).
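For completeness, this solution follows by separating variables in \eqref{motion2} with $M=M_0/a^{3\lambda}$, $b=0$ and $\lambda\neq-1$:
\begin{equation}
\dot{a}=-\sqrt{M_0}\,a^{-\frac{3\lambda+1}{2}}
\quad\Longrightarrow\quad
\frac{2}{3(\lambda+1)}\left(a^{\frac{3(\lambda+1)}{2}}-1\right)=-\sqrt{M_0}\,t \; ,
\end{equation}
which, solved for $a$ with the initial condition $a(0)=1$, gives the expression above; setting $\lambda=0$ recovers the dust case.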
For dust and homogeneous fluids the condition for the formation of trapped surfaces can be rewritten as
\begin{equation} \label{ah}
1-\frac{r^2M}{a}=0 \; ,
\end{equation}
which gives implicitly the curve $t_{\rm ah}(r)$ describing the co-moving time at which the co-moving shell $r$ becomes trapped.
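As a concrete check, for marginally bound dust ($M=M_0$ constant) condition \eqref{ah} reads $a(t_{\rm ah})=r^2M_0$, which together with the dust solution for $a(t)$ gives
\begin{equation}
t_{\rm ah}(r)=\frac{2}{3\sqrt{M_0}}\left(1-r^3M_0^{3/2}\right)=t_s-\frac{2}{3}\,r^3M_0 \; ,
\end{equation}
consistent with the horizon formation time quoted above for the OSD model: the apparent horizon forms first at the boundary $r_b$ and reaches $r=0$ exactly at $t_s$.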
\textbullet\ The situation for collapse of a null shell is slightly different as one has to use the Vaidya space-time
\cite{vaidya}.
In the case of collapse of a thin shell the space-time can then be divided into three regions: A Minkowski interior that is separated from a Schwarzschild exterior by the collapsing thin null shell
(see for example \cite{joshi-book}).
The metric can then be written in advanced Eddington-Finkelstein coordinates as
\begin{equation}
ds^2=-\left(1-\frac{2M}{r}\Theta(v)\right)dv^2+2dvdr+r^2d\Omega^2 \; ,
\end{equation}
where $\Theta(v)$ is the Heaviside function separating the interior from the exterior. Again it is easy to see that the event horizon in the exterior forms as the null shell passes the radius $r=2M$ and eventually the singularity forms when the collapsing shell reaches the center. The whole scenario can be summarized in the Penrose diagram in the right panel of figure \ref{fig1}.
Collapse of null fluids can also be considered. In this case the Vaidya null dust space-time is not confined to a shell. Einstein's equations for collapse can be written in a similar manner as above and collapse may result in the formation of a black hole or a naked singularity depending on the behaviour of the mass function $M(v)$
(see \cite{joshi-book} for details).
\begin{figure}[t]
\centering
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[scale=0.3]{Dust-Penrose.eps}
\put(-120,90){I}
\put(-65,120){II}
\end{minipage}
\hfill
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[scale=0.3]{Vaidya-Penrose.eps}
\put(-100,94){I}
\put(-50,140){II}
\end{minipage}
\caption{Penrose diagrams for collapse with the formation of a singularity (double solid line).
Left panel: Collapse of a spherical dust cloud. The solid curved line $r_b$ represents the boundary of the cloud. The interior region I, is described by pressureless particles (grey area), the exterior region II, is described by the Schwarzschild vacuum space-time. As the boundary passes the Schwarzschild radius the trapped region develops. In the exterior the event horizon (solid diagonal line) forms at $r=2M$, while in the interior the apparent horizon (dashed line) moves inwards from the boundary towards $r=0$.
Right panel: Collapse of a thin null shell (thick line) separating a Minkowski interior, region I, from a Schwarzschild exterior, region II. The event horizon (dashed line) meets the collapsing shell at the Schwarzschild radius. The null fluid focuses at the center forming a singularity at $r=0$.}
\label{fig1}
\end{figure}
These are simple toy models, which are not very realistic when we think about the collapse of a massive star. However, it is generally accepted that they encode all the relevant features to describe the formation of a black hole. Then, if we agree that more realistic models would be qualitatively similar, the main drawback of these models is the occurrence of the singularity, which signals their inability to capture what happens when the gravitational field becomes very strong over very short distances.
On the other hand one could question how the general picture of collapse given above would change if more realistic assumptions were considered.
\subsection{Toy models vs realistic models}
The collapse models described above have the obvious advantage of being simple. Equations can often be solved analytically and the global structure of the space-time can be described. However, it is also obvious that marginally bound collapse of a non rotating dust ball is hardly a realistic model to describe the collapse of a star.
Before moving on to quantum corrections in the strong field, in this section we shall outline the main features that one should address in order to describe a more realistic collapse scenario within classical GR. Typically toy models consider marginally bound collapse of homogeneous, isotropic, spherically symmetric fluids, with simple equations of state.
All of the above assumptions are considered in order to simplify the equations. However one could argue on the physical validity of the resulting models and whether the dynamical evolution of collapse would remain unchanged (at least qualitatively) if more realistic assumptions were made.
\textbullet\ Initial velocity: Marginally bound collapse means that the initial velocity profile corresponds to a configuration where particles have zero initial velocity at spatial infinity. This corresponds to taking the integration function $b$ coming from equation \eqref{G} to be zero.
From a geometrical point of view, for homogeneous models, this is equivalent to requiring that the interior geometry of the collapsing sphere be flat. However, realistic collapse is expected to start with zero initial velocity from a finite radius. Therefore assuming marginally bound collapse is an oversimplification that may hinder the physical validity of the models. For example, in
\cite{barcelo-uni}
the authors considered semi-classical models for collapse which starts from rest from a finite radius by using Painlev\'e-Gullstrand coordinates\footnote{The choice of Painlev\'e-Gullstrand coordinates is particularly well suited for describing collapse that starts from rest at a finite radius (see \cite{zip1} for a detailed discussion).}.
\textbullet\ Homogeneity and isotropy: The choice of density and isotropic pressures depending only on $t$ helps to simplify the equations. However realistic fluids are expected to have density and pressure gradients. The main consequence of considering inhomogeneous clouds is that the boundary radius of the object can not be specified at will, like in the dust case, but must be determined by the condition of vanishing of the pressure. Also considering anisotropic fluids implies that the regime where the classical description fails would depend on the direction, thus complicating considerably the model.
\textbullet\ Equation of state: Most of the collapse models studied in the literature consider oversimplified models for matter. The OSD model is described by homogeneous dust, other scenarios typically neglect pressure gradients, heat conduction, viscosity and so on. Furthermore local energy conservation for such adiabatic perfect fluids implies that the entropy is constant in any given co-moving volume of the fluid.
If one wishes to give a more realistic description of collapse then a better matter model should be prescribed, where dissipative effects are present. Once a non trivial equation of state is introduced one would need to consider the fluid's hydrodynamics and its thermodynamical properties. In this case the boundary surface $r_b$ needs to be described by a surface layer that transports energy and momentum between the interior and the exterior, thus implying that the exterior can not be given by the Schwarzschild solution (typically a radiating Vaidya metric can be used
\cite{matching3}). In this case one needs to specify a varying boundary radius $r_b(t)$.
A first step towards an equation of state for stars that better describes realistic matter could be that of a polytropic fluid of the kind $p=K\rho^\gamma$, with $\gamma=4/3$
\cite{tooper}.
Then, once the pressure and density at the center are given, the boundary of the object would be defined by the imposition of vanishing of the pressure, $p(r_b,t)=0$.
It must be noted that any equation of state that holds in the weak field is unlikely to remain unchanged during the last stages of collapse. Therefore the transition of matter towards the high density phase should be modelled as well.
Unfortunately, as the equations describing the fluid in the interior become more complicated (involving hydrodynamics, turbulence, shock waves and so on), the hope of finding an analytical solution must be abandoned and one has to resort to numerical simulations.
\textbullet\ Spherical symmetry: Given the lack of a viable interior for the Kerr metric it is not surprising that analytical dynamical models of collapse with rotation have not been explored in much detail. Of course one could study collapse scenarios of rotating matter fields without worrying about the matching with an exterior solution. However in order to be physically valid such models would still need to describe well behaved matter sources (typically sources that satisfy energy conditions, have well behaved density profile and are non singular at the initial time). In this respect models with slow rotation have been considered in the past
(see for example \cite{rotation1} and \cite{rotation2}).
The exterior field is usually taken as a vacuum solution with slow rotation, such as the Hartle-Thorne metric
\cite{HT}.
This is a solution of Einstein's field equations in vacuum that describes the exterior of a slowly and rigidly rotating, stationary and axially symmetric body (up to an expansion in the angular momentum).
For years scientists wondered whether the final singularity appearing in OSD collapse was an artifact due to the choice of spherical symmetry. Many argued that once angular momentum was included the singularity would not form. We now know that this is not the case and that, if one wishes to describe a somewhat realistic `star' undergoing collapse, the role of angular momentum, especially for rapidly rotating bodies, can not be neglected, even though it is still poorly understood.
Despite its simplicity, the OSD model has become the foundation of black hole physics and the main reason why astrophysicists believe that black holes do form from the collapse of very massive stars. Therefore, when looking at quantum gravity induced modifications to collapse scenarios, it is natural to start from homogeneous collapse. Most models dealing with quantum corrections to black hole formation deal in fact with modifications of the above homogeneous models.
\subsection{Numerical simulations}
In order to solve Einstein's equations for collapse in more realistic scenarios that include departures from spherical symmetry and more realistic matter models one needs to resort to computer simulations. Over the years numerical relativity has grown to become one of the major players in our understanding of energetic astrophysical phenomena such as supernova explosions and binary mergers (see \cite{shapiro-book} for details). A detailed review of numerical collapse is beyond the scope of this article, therefore here we will only briefly mention some results that are important in connection with the previous discussion.
The formation of trapping horizons in collapse of polytropic fluids was first considered in 1966 by May and White
\cite{may}.
Here we would like to mention also the early works by Stark and Piran
\cite{piran}
which considered gravitational waves emitted by the collapse of a rotating configuration in two dimensions and the work by Eardley and Smarr
\cite{smarr}
which raised the issue of the possible formation of naked singularities, a result that was later extended to axially symmetric collapse by Shapiro and Teukolsky
\cite{shapiro}.
More recently, models of three dimensional collapse of a neutron star to a Kerr black hole were obtained in
\cite{rezzolla1} using uniform rotation and in
\cite{rezzolla2} using differential rotation for the parent star. It is also worth mentioning how the previous result has been recently extended to the case of a Kerr-Newmann black hole in
\cite{rezzolla3}.
Numerical models for gravitational collapse within GR have mostly been developed in order to produce the gravitational wave templates that are used by interferometers such as LIGO and VIRGO to detect core collapse supernovae. It is therefore obvious that all the models considered are entirely classical.
Given the importance that quantum corrected scenarios are gaining in our theoretical understanding of the final stages of collapse it is reasonable to assume that the implementation of such models in numerical simulations will play an important role in the future of black hole physics.
\section{...And Quantum bounces}\label{semiclass}
As can easily be seen from the hypotheses of the singularity theorems, if we are to preserve the global hyperbolicity of the space-time, then in order to halt collapse and avoid the formation of the singularity one has to either violate the energy conditions or modify GR.
Therefore `quantum improvements' to collapse scenarios should be understood as the idea of incorporating strong gravity modifications into Einstein's equations by either modifying the geometrical side of the equations to account for effects coming from some kind of ultra-violet (UV) completion of GR or modifying the energy momentum tensor to account for the supposed behaviour of matter at high densities.
Either way, given the lack of a theory of quantum gravity, one assumes that it is possible to write these modifications within the framework of GR as additional (averaged) terms to be added to Einstein's equations. Then such terms can be brought to the right-hand side of Einstein's equations and practically treated like an effective energy momentum tensor within classical GR.
If we start from a theory of quantum gravity which reduces to GR in the weak field then we can write the field equations in the semi-classical approximation as
\begin{equation}
G_{\mu\nu}+<G_{\mu\nu}^{\rm corr}>=T_{\mu\nu} \; ,
\end{equation}
where the term $<G_{\mu\nu}^{\rm corr}>$ is the semi-classical correction to Einstein's tensor coming from the ultra-violet corrections to the classical theory (remember that we have absorbed the factor $8\pi$ into the definition of $T_{\mu\nu}$).
Then by following the procedure described above we define the effective energy momentum tensor as
\begin{equation}
T_{\mu\nu}^{\rm eff}=T_{\mu\nu}-<G_{\mu\nu}^{\rm corr}> \; ,
\end{equation}
and proceed to study the classical general relativistic dynamics produced by such an effective matter source. A full quantum treatment of collapse models can be given for very simple cases (such as the one presented in \cite{haj1}), but its validity is subject to the assumptions that are made in order to fully quantize the system and may not translate to more general cases.
\subsection{A brief history of collapse models with quantum corrected interiors}
We will focus now on the interior collapsing geometry and how it can be modified in order to avoid the formation of the singularity at the end of collapse.
There are several ways that can be pursued in order to obtain the resolution of the singularity at the end of collapse via modifications of GR. For the semi-classical models considered here, each procedure corresponds to the construction of a different $<G_{\mu\nu}^{\rm corr}>$ based on a different approach towards the UV completion of gravity. Different prescriptions imply different modifications that in turn may give different collapse models. Therefore it is somewhat surprising that after several years of research it seems that a unique qualitative picture is emerging from all these different approaches. We shall briefly review them here. Although not exhaustive, we hope that this survey covers most of the approaches that have been used and most of the groups that have been working on the topic.
\textbullet\ The earliest attempts were made by writing the theory for a higher order Lagrangian and considering only the leading (GR) and next to leading terms.
This is the approach that was followed by Frolov and Vilkovisky
\cite{frolov}.
In their seminal papers they studied the gravitational collapse of a null shell with one-loop corrections to the gravitational Lagrangian due to quantum effects, in the context of asymptotically free gravity, and found that the shell bounces at a minimum radius and re-expands towards infinity.
\textbullet\ Another early attempt to remove the singularities was investigated by Bergmann and Roman
\cite{berg}.
Noting that energy conditions may not hold for matter under the extreme conditions that develop towards the end of collapse, they used standard GR to investigate what kind of violations of the energy conditions would allow for a resolution of the singularity.
\textbullet\ A later approach developed by Hajicek considered again the collapse of a null shell within the Hamiltonian formulation of GR
(see \cite{haj1} and \cite{haj2}).
The use of the canonical formalism allowed the author to quantize the motion of the null shell, now treated as a wave packet, to show that the quantum shell crosses the horizon in both directions. The classical ingoing shell develops into a superposition of ingoing and outgoing shells as it approaches the strong field region where classically the singularity would form. The horizon also becomes a superposition of a black hole horizon with a white hole horizon (a `grey' horizon), with the black hole prevailing at early times and the white hole prevailing at late times.
\textbullet\ Quantization of collapse of a cloud made of null fluid was considered in \cite{vaz}. The authors quantized a null dust cloud described by the Vaidya metric with an arbitrary mass distribution. At a classical level null collapse may lead to the formation of a black hole or a naked singularity depending on the specific form of the mass function.
Unfortunately the techniques developed in \cite{haj1} cannot be applied to a problem with many degrees of freedom, such as the collapse of a spherical cloud, and a different approach leads to a somewhat different picture of the final stages of collapse.
The main difference from the previous work lies in the fact that when considering a thick matter distribution instead of a shell, the singularity can be removed only in special cases (such as, for example, when the wave functional vanishes at the center). This raises some questions about the genericity of the results obtained for the collapse of thin shells.
A similar approach was used in
\cite{vaz2}
to quantize a cloud of time-like dust particles with the surprising result that the collapsing matter may condense just outside the horizon radius to form a quasi-static object (that could be interpreted as a gravastar).
An attempt to extend the procedure to the inhomogeneous case was developed in \cite{vaz3}.
\textbullet\ The Hamiltonian formulation of gravitational collapse of a scalar field and its quantization was developed in
\cite{hus2}.
The authors found an upper bound for the curvature as a kinematical consequence of the construction of the quantum operators. This upper bound is maintained in the dynamical scenario and thus the classical singularity is avoided. As a consequence a Planck scale remnant is left from collapse.
Similarly in
\cite{hus3}-\cite{hus3c} a program was developed to construct the fully quantum gravitational collapse of a scalar field.
The kinematics for a spherically symmetric
quantum gravitational system was outlined in
\cite{hus3}.
The definition of the `quantum black hole' was developed in
\cite{hus3a},
while the Hamiltonian formulation of the system was considered in
\cite{hus3b}.
The full dynamics of the quantum gravitational spherically symmetric scalar field was considered in
\cite{hus3c}
where the quantum Hamiltonian operator was constructed.
Finally in \cite{hus3d}
the semi-classical states for the above construction were studied in a cosmological context.
Also, semi-classical states for scalar field collapse and quantum entanglement of matter and gravity in collapse were studied in
\cite{hus3e}.
\textbullet\ In \cite{kuns3} numerical simulations of gravitational collapse of a scalar field with LQG corrections showed that as the bounce occurs it is possible for the outer horizon to depart from the Schwarzschild horizon and shrink. This is related to the equations of motion for the effective source being non-conservative.
Similar results were obtained in
\cite{kuns2}
where a massless scalar field was considered. In this case two horizons form: an inner horizon, where the matter content `piles up', and an outer horizon, which is equivalent to that of a regular black hole solution.
\textbullet\
The trace anomaly corrections of a scalar field theory on a given background were considered in
\cite{arfaei1} and \cite{arfaei2}.
The corrections, treated as a backreaction, were incorporated in the collapse scenario of a thin shell in \cite{arfaei1}, while a spherical cloud of homogeneous dust was taken as the background space-time in
\cite{arfaei2}.
Similarly to previous models, collapse halts before the formation of the singularity. In this case the space-time settles to a quasi-static configuration with an outer horizon (like the black hole horizon) and an inner horizon.
\textbullet\ The idea that repulsive effects may arise in the strong field regime is connected with the idea of asymptotic safety, which implies that gravity should somehow `switch off' at high densities.
There is now substantial evidence for the existence of a non-Gaussian fixed point for the renormalization group flow that would provide a UV completion of gravity. This is associated with an effective gravitational action with a scale-dependent Newton's constant
(see for example \cite{sauer1} and \cite{sauer2}) and implies a modification of the Schwarzschild geometry similar to that appearing in regular black hole models.
From this point of view, regular black holes in classical GR, such as the ones described in \cite{hayward}, may be seen as the UV completion equivalent of the Schwarzschild black hole and may thus originate from gravitational collapse in a semi-classical framework that incorporates asymptotically free gravitational corrections.
Another way to understand the idea is in terms of the gravitational degrees of freedom. If there is a maximum allowed density $\rho_{\rm cr}$ at the Planck scale then as matter approaches this threshold the gravitational degrees of freedom become diluted, thus leaving a Minkowski background in the limit $\rho\rightarrow \rho_{\rm cr}$. In this context then, the bounce is understood as a scattering process of the collapsing particles that suddenly find themselves in a gravity-free Minkowski space-time.
To model asymptotic safety one can use several approaches.
For example, as mentioned before, one can consider a running gravitational constant $G$ obtained from the renormalization group for gravity in order to describe a scale dependent effective action. Then $G$ goes to zero in the Planckian limit.
This idea was explored by Bonanno and Reuter in the context of the Schwarzschild solution
\cite{Bonanno}
and in the context of an `evaporating' Schwarzschild solution
\cite{Bonanno2}.
The same idea was later applied to collapse in
\cite{torres1}.
Dust collapse with a varying gravitational constant $G$ depending on the density was also considered by Casadio et al.
\cite{casadio}.
Other approaches to asymptotic free gravity are based on alternative theories of gravitation, like non-local super-renormalizable higher derivative theories. The consequences for black holes and collapse of these approaches were considered for example in
\cite{modesto}
and
\cite{us2}.
A similar model was considered in
\cite{garay1}
where the bounce occurs everywhere at the same co-moving time and the expanding (white hole) solution is described by the time reversal of the collapse model.
\textbullet\ In recent years bouncing models have gained more and more attention. The main player in the field has been Loop Quantum Cosmology (LQC). LQC applies the techniques of LQG to cosmology and big bang models. In this context it was shown that strong field corrections to the energy density inspired by a LQG treatment of the dynamics close to the big bang singularity could resolve the singularity and replace it with a bounce
(see for example \cite{lqc}).
These models are appealing because the effective energy momentum tensor takes a particularly simple form, with the energy density given by the classical energy density plus a quadratic correction that becomes important at Planck densities, and can be written as
\begin{equation} \label{rhoeff}
\rho_{\rm eff}=\rho\left(1-\frac{\rho}{\rho_{\rm cr}}\right) \; .
\end{equation}
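As a quick consistency check of \eqref{rhoeff}, note that the effective density is bounded: since
\begin{equation}
\frac{d\rho_{\rm eff}}{d\rho}=1-\frac{2\rho}{\rho_{\rm cr}} \; ,
\end{equation}
$\rho_{\rm eff}$ grows with $\rho$ only up to $\rho=\rho_{\rm cr}/2$, where it reaches its maximum value $\rho_{\rm cr}/4$, and then decreases, vanishing at $\rho=\rho_{\rm cr}$. The vanishing of the effective source at the critical density is what halts collapse and produces the bounce.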
Such a framework was used in the description of collapse of a homogeneous scalar field in
\cite{boj3},
while in
\cite{gos2}
a similar model that classically leads to the formation of a naked singularity was considered. In \cite{gos2}, it was shown that LQG effects close to the formation of the singularity generate an outward flux of energy that dissipates away all the matter before the singularity forms. The effective energy momentum tensor is not conserved and the system exhibits a mass loss that leads to the shrinkage of the outer horizon, with the geometry becoming effectively Minkowski at the onset of the bounce.
Modifications of classical dust collapse were analyzed in
\cite{us1}, where it was shown that as the effective density goes to zero collapse halts and then bounces.
\textbullet\ The effective energy-momentum tensor derived from quantum corrections changes the metric in the interior space-time. It is then possible that these quantum corrections induce a change in the exterior metric as well. These modifications may mimic Hawking radiation carrying away part of the mass-energy of the bouncing interior and they may effectively be described by a quantum-corrected Vaidya space-time. Collapse scenarios with a matter outflow in the exterior were considered in
\cite{torres3}
and
\cite{torres4}
in the context of the collapse extension of the asymptotic safety inspired solutions of Bonanno et al.
\cite{Bonanno}. The qualitative picture that emerges in this case is similar to many of the models mentioned above.
\textbullet\ Finally it is worth mentioning that the resolution of singularities in gravitational collapse has also been studied in the context of alternative theories of gravity. For example singularity avoidance has been found in higher derivative, super-renormalizable theories (see for example
\cite{us3}). Also, in the context of gravity with non-vanishing torsion, it has been shown that spin effects may lead to avoidance of the singularity at the end of collapse
\cite{weyssenhoff}.
Within Palatini gravity the collapse of charged fluids may lead to the formation of a wormhole in place of the central singularity (see \cite{garcia2} and \cite{garcia1}).
Finally, in \cite{hus5} it was shown how modified GR affects Choptuik's mass scaling law (see \cite{chop}) observed in the final stages of collapse of a scalar field.
From the above it can be seen that several approaches towards a treatment of quantum effects or UV completion of gravity in the strong field have been proposed and studied. It is perhaps curious to notice that most of these approaches entail a somewhat similar qualitative picture of collapse: As matter reaches very high densities, repulsive effects arise which halt collapse and produce a bounce. The region with repulsive behaviour is the portion of the space-time where quantum effects cannot be neglected. This region may be confined within the collapsing cloud, extend in the exterior (to reach the horizon and possibly beyond), or it could even extend to spatial infinity for a certain amount of time.
A good example is given by marginally bound homogeneous dust with LQG modifications given by the effective energy momentum tensor in equation \eqref{rhoeff}. The equation of motion \eqref{motion2} becomes
\begin{equation}
\dot{a}=-\sqrt{\frac{M_0}{a}\left(1-\frac{3M_0}{\rho_{\rm cr}a^3}\right)}=-\sqrt{\frac{M_{\rm eff}}{a}} \; ,
\end{equation}
which, with the initial condition $a(0)=1$, has the simple solution
\begin{equation}
a=\left(a_{\rm cr}^3+\left(\sqrt{1-a_{\rm cr}^3}-\frac{3}{2}\sqrt{M_0}t\right)^2\right)^\frac{1}{3} \; ,
\end{equation}
where $a_{\rm cr}^3=3M_0/\rho_{\rm cr}$ is the minimum volume scale that the star reaches before bouncing. Note that the scale factor reduces to the classical solution in the limit $a_{\rm cr}\rightarrow 0$ (which corresponds to $\rho_{\rm cr}$ going to infinity).
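As a quick numerical sanity check (a sketch in arbitrary units, with illustrative parameter values), one can verify that this closed form obeys $a(0)=1$, satisfies the effective equation of motion on the collapsing branch, and reaches the minimum scale $a_{\rm cr}$ at the bounce time $t_B=2\sqrt{1-a_{\rm cr}^3}/(3\sqrt{M_0})$:

```python
import numpy as np

# Illustrative parameters (geometrized units); recall a_cr**3 = 3*M0/rho_cr
M0 = 1.0
a_cr = 0.2

def a(t):
    """Closed-form scale factor for marginally bound dust with LQC corrections."""
    u = np.sqrt(1 - a_cr**3) - 1.5*np.sqrt(M0)*t
    return (a_cr**3 + u**2)**(1.0/3.0)

def adot_eom(t):
    """Collapsing branch of the effective equation of motion."""
    av = a(t)
    return -np.sqrt((M0/av)*(1 - a_cr**3/av**3))

t_B = 2*np.sqrt(1 - a_cr**3)/(3*np.sqrt(M0))  # bounce time

assert abs(a(0.0) - 1.0) < 1e-12              # initial condition a(0) = 1
assert abs(a(t_B) - a_cr) < 1e-12             # minimum scale a_cr at the bounce
for t in (0.1, 0.3, 0.5):                     # solution satisfies the EOM before t_B
    dt = 1e-6
    adot_num = (a(t + dt) - a(t - dt))/(2*dt)
    assert abs(adot_num - adot_eom(t)) < 1e-6
```

After $t_B$ the same closed form describes the time-symmetric re-expanding branch.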
This example was discussed in
\cite{us1}
without a detailed analysis of the matching and the exterior solution. Given the mass loss due to the decrease of the effective mass, it was argued that quantum effects must propagate non locally affecting the exterior horizon instantaneously (similarly to what was discussed in \cite{kuns3}). Then the quantum gravity region extends until spatial infinity and lasts for a finite interval of time centered at the time of the bounce $t_B$. Homogeneous dust is a simple non dissipative system and after the bounce the solution is readily described by the time reversal of the collapsing solution, with the black hole effectively turning into a white hole.
A few words should be spent regarding the interpretation of the parameter $\rho_{\rm cr}$. In LQG-inspired models $\rho_{\rm cr}$ can be interpreted as the Planck density. Obviously, fixing a limiting density is not the same as fixing a limiting size, like, for example, the Planck length $l_{\rm pl}$. The minimum scale implied by setting a maximum value for the density may be much larger than the Planck scale. In other words, an object that can achieve a maximum density of $\rho_{\rm cr}$ will have a minimum size much larger than the Planck length $l_{\rm pl}$. Modifications of the Schwarzschild black hole via the introduction of a minimal length have been considered in the context of the quantum description of gravity at small scales
(see for example \cite{nicolini} and references therein).
However one need not limit the analysis to the Planckian regime, as there may be other factors that halt collapse before the cloud reaches the Planck density.
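To put a rough number on this, an order-of-magnitude estimate (using standard values for the solar mass, the Planck density and the Planck length, quoted here only for illustration) gives the radius at which a solar-mass cloud would reach the Planck density:

```python
import math

# Illustrative order-of-magnitude estimate (SI units):
# radius at which a solar-mass cloud reaches the Planck density.
M_sun  = 1.989e30        # kg
rho_pl = 5.16e96         # Planck density, kg/m^3
l_pl   = 1.616e-35       # Planck length, m

r_min = (3*M_sun/(4*math.pi*rho_pl))**(1/3)
print(f"r_min = {r_min:.2e} m  ~ {r_min/l_pl:.1e} Planck lengths")
# r_min is ~4.5e-23 m, about 12 orders of magnitude above l_pl
```

So even at Planck density the minimum size of a solar-mass object sits far above the Planck length, as stated above.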
In this respect, $\rho_{\rm cr}$ is just a parameter that sets the scale of the bounce. However its actual value has great importance for the phenomenological consequences of the models. Therefore, in principle, one could hope to constrain the value of $\rho_{\rm cr}$ via observations while at the same time ruling out models that do not fit with the data. In this context, for example, a bouncing scenario originating from four-fermion interaction was investigated in
\cite{us4}
with a critical density much lower (and a critical scale much larger) than the quantum induced Planck scale. Similarly, in the context of emergent gravity, in
\cite{garay2}
it was argued that the energy scale of quantum effects that produces the bounce may differ from the energy scale at which Lorentz violations arise and the emergent gravity picture fails. In this case the range of energies in between could be described simply by quantum field theory on Minkowski space-time.
Finally, it is worth mentioning that there exist other ways to implement the avoidance of singularities. For example by requiring a limiting value for curvature invariants. Such limiting curvature proposal was explored in cosmological context for example in
\cite{markov} and \cite{markov84}.
A detailed study of the condition for the formation of trapped surfaces in the interior, following from equation \eqref{horizon}, shows that the apparent horizon also reaches a minimum value and then expands to infinity, thus reaching the boundary of the star in a finite time. What happens at this point depends on whether the quantum effects are able to propagate in the exterior. However, the horizon in the exterior exists only until the time when the outgoing matter cloud re-emerges.
The idea of horizons with finite lifespan supports the general claim that is emerging in recent years that astrophysical black holes do not possess an event horizon but `only' a trapping horizon that exists for a finite (possibly long) time
(see \cite{hawk-new}).
Most of the attempts to resolve the singularities that arise in classical collapse make use of toy models such as this one or thin shells. It is important to notice that although these are simple models allowing for the equations to be solved explicitly, it is generally believed that they retain all the crucial properties necessary to understand the fundamental aspects of the problem and the behaviour of gravity in the strong field.
However these models are not devoid of drawbacks and understanding what they can teach us about realistic collapse of massive stars requires a careful analysis.
\subsection{The exterior geometry}
We will now briefly discuss how the modifications to the interior geometry of the collapsing cloud may affect the exterior. The dust interior of the OSD model can easily be matched to a Schwarzschild vacuum exterior. As all the matter falls into the singularity the solution tends to become globally Schwarzschild and the horizon settles to the usual event horizon. When repulsive effects are introduced the matter bounces and re-expands and the question of how the horizon in the exterior should be treated is far from trivial.
If the exterior geometry is not affected by modifications in the interior then some non-trivial matching must occur at the boundary in order for the light cones to smoothly transition from an almost Minkowskian behaviour where matter is present to an inside-black-hole-horizon behaviour in the vacuum exterior.
However, it seems possible that quantum gravity corrections are not confined to the core of the collapsing cloud but can reach the weak field regions of the space-time. After the bounce the matter is outgoing and will eventually cross the horizon outwards thus destroying it. The process may be entirely symmetric in time or it may have a preferred direction, depending, among other things, on the specifics of the fluid model employed. In turn, the process may be accompanied by a transition of the black hole to a white hole solution, the time scales of which vary from model to model.
In \cite{frolov} it was assumed that the horizon remains the usual Schwarzschild black hole horizon until the expanding matter crosses it in the outgoing direction. However this interpretation is not entirely satisfactory as the light-cone structure close to the point where expanding boundary and horizon cross is not well defined.
\begin{figure}[tt]
\centering
\includegraphics[scale=0.3]{fink1.eps}
\caption{Finkelstein diagram for the black hole to white hole transition. The grey area enclosed within dashed lines represents the region where quantum effects are important (QG). The grey area within solid lines represents the trapped region in the exterior space-time. The solid thick line $r_b$ represents the boundary of the cloud. The solid thin vertical line represents the horizon in the exterior region.
In this case the transition is completely symmetric in time. The bounce occurs at the same time $t_B$ for all shells (as in the homogeneous case). A horizon-grazing photon (thin curved line) stays in the vicinity of the horizon until right after $t_B$. The lifetime of the white hole is the same as the lifetime of the black hole.}
\label{fink1}
\end{figure}
In more recent times several researchers have proposed the idea that the exterior region may undergo a transition from black hole to white hole. In
\cite{haj3}
the black hole to white hole transition was proposed within a model of collapsing null shells in quantum gravity without discussing the geometry of the transition. More recently, the exterior geometry induced by the transition was analyzed in
\cite{garay2} and \cite{bar3}, where it was suggested that the time scales of the transition must be short.
At the same time the effective geometry of the quantum-gravity region was studied in \cite{rov1}, where it was suggested that quantum effects may accumulate over long time scales.
In any case, models with transition from a black hole geometry to a white hole geometry can explain the change in the causal structure of the space-time by allowing for quantum effects to `tilt' the direction of the light-cones, effectively turning the black hole horizon into a white hole horizon (see figure \ref{fink1}).
As we shall see later, the lifespan of the horizon is model dependent but, for distant observers, is usually long enough to ensure compatibility with astrophysical observation of black hole candidates. At early times the models are well described by a classical GR collapse solution. Quantum corrections exist for a finite time, thus restoring classicality at late times.
\section{Open issues}\label{open}
The general scenario presented above is extremely intriguing from an astrophysical point of view as it gets rid of space-time singularities while at the same time opening a possible observational window on quantum gravity phenomena. It is only natural then that the attention will shift from mathematical toy models to the possible features of realistic astrophysical models.
Even though some scenarios can already be constrained by present observations, in order to truly explore the implications of these models for astrophysics one would need to have some observational data to compare against the theory.
At present, in the absence of such observational data, a way to address the physical validity of a model is to look for its internal consistency.
Most of these models present some unresolved problems or open issues that we will try to briefly discuss here.
\subsection{The horizon in the exterior}
Collapse models are constructed by tailoring a collapsing interior metric, which describes the `star', to a suitable exterior (typically Schwarzschild or Vaidya). For simplicity let us consider the case of a Schwarzschild exterior. This is the case of collapse for OSD, Lema\^{\i}tre-Tolman-Bondi (i.e. inhomogeneous dust), anisotropic fluids with only tangential pressure and fluids whose pressures vanish at the boundary. In other words, in order to have a Schwarzschild exterior there must be no inflow or outflow of matter through the boundary of the star.
In the usual OSD model when the collapsing matter passes the Schwarzschild radius a trapped surface forms at the boundary of the star. In the interior this trapped region is described by the apparent horizon, which propagates inwards to reach the center at the time of formation of the singularity. In the exterior the boundary of the trapped region corresponds to the Schwarzschild radius.
This situation may be complicated by considering inhomogeneities and more sophisticated equations of state.
For example, inhomogeneous dust collapse may result in the formation of a naked singularity, which is the consequence of a significantly different behaviour for the apparent horizon
(see \cite{joshi}).
A polytropic equation of state may also alter the structure of the horizon in the interior, delaying the time of formation of the event horizon and creating two apparent horizons (one moving inwards and one moving outwards) from a finite radius in the interior
(see \cite{musco}).
However in all these scenarios the event horizon eventually develops in the exterior and is not affected by the final fate of the collapsing matter. By looking at equation \eqref{ah} that defines the location of the apparent horizon we see that as the fluid is collapsing to the central singularity the apparent horizon also reaches the singularity. After the formation of the singularity we are left with a Schwarzschild black hole.
When repulsive effects are introduced the singularity no longer forms and the matter bounces after reaching the critical scale. This affects the apparent horizon, which also reaches a minimum value $r_{\rm min}$ and then starts moving outwards.
This can be easily seen by solving equation \eqref{ah} with the effective mass in place of the classical mass function. The existence of a minimum value for the apparent horizon indicates that an object with boundary $r_b<r_{\rm min}$ may not become trapped at all. This seems to suggest the possibility that extremely small and extremely dense compact remnants may form from gravitational collapse.
On the other hand when $r_b>r_{\rm min}$ the apparent horizon goes through an expanding phase (which may occur before the bounce) eventually reaching the boundary of the star.
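For the homogeneous model this is easy to verify explicitly. Assuming the apparent horizon condition takes the form $R=M_{\rm eff}\,r^3$ for the marginally bound interior (so that the horizon area radius is $R_{\rm ah}=a^3/\sqrt{M_0(a^3-a_{\rm cr}^3)}$), a short numerical scan (illustrative units) recovers a minimum at $a^3=2a_{\rm cr}^3$ with the parameter-independent value $r_{\rm min}=2\sqrt{3/\rho_{\rm cr}}$:

```python
import numpy as np

# Illustrative parameters (geometrized units)
M0 = 1.0
rho_cr = 100.0
a_cr = (3*M0/rho_cr)**(1/3)

def R_ah(a):
    """Area radius of the apparent horizon, R_ah = a^3/sqrt(M0*(a^3 - a_cr^3)),
    obtained from R = M_eff r^3 with R = r a and M_eff = M0*(1 - a_cr^3/a^3)."""
    return a**3/np.sqrt(M0*(a**3 - a_cr**3))

# scan scale factors just above the bounce value
a_vals = np.linspace(1.001*a_cr, 1.0, 200000)
R_vals = R_ah(a_vals)
R_min_numeric = R_vals.min()
a_at_min = a_vals[np.argmin(R_vals)]

# analytic minimum: a^3 = 2*a_cr^3, R_min = 2*sqrt(3/rho_cr)
assert abs(a_at_min**3 - 2*a_cr**3) < 1e-3
assert abs(R_min_numeric - 2*np.sqrt(3/rho_cr)) < 1e-6
```

Note that $r_{\rm min}$ depends only on $\rho_{\rm cr}$, not on the mass of the cloud, consistent with the remark that sufficiently small objects may never become trapped.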
What happens there? How is the exterior affected by the bounce in the interior? Can the quantum effects reach the Schwarzschild radius and beyond?
\textbullet\ One possibility is that the horizon in the exterior `feels' the repulsive effects of the inner region and shrinks to meet the outgoing apparent horizon at the boundary
(see left panel in figure \ref{bounce1}).
This scenario is easily illustrated in the simple example of homogeneous dust. The effective mass becomes zero at the time of the bounce, thus making the space-time almost Minkowski. In fact, at the time of the bounce $\rho_{\rm eff}=0$ and $p_{\rm eff}=-\rho_{\rm eff}$ and from equation \eqref{ah} with $M_{\rm eff}$ in place of $M$ we see that there are no trapped surfaces in the interior at $t=t_B$. One can see that there may not be trapped surfaces in the exterior as well if this is described by a Vaidya radiating solution with $M(v)$ obtained from $M_{\rm eff}$. Then the horizon in the exterior is affected instantaneously by the bounce in the interior. This is reminiscent of the Newtonian problem of action at a distance and could be understood if the horizon was in some way `entangled' with the infalling matter. In this case, the changes in the effective energy-momentum tensor are felt immediately everywhere in the space-time and so the horizon in the exterior shrinks due to the decrease of the active gravitational mass in the interior. The two horizons eventually `meet' again at the boundary at a time before the bounce. In this case the physics of the bounce is not covered by a horizon and the quantum gravity region could in principle have an observable signature visible to far away observers (see for example
\cite{us1}).
\begin{figure}[tt]
\centering
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[scale=0.3]{Penrose-bounce1.eps}
\put(-121,93){ah}
\put(-89,93){oh}
\end{minipage}
\hfill
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[scale=0.3]{Penrose-bounce4.eps}
\end{minipage}
\caption{Penrose diagrams for homogeneous collapse with semi-classical corrections. The thick solid line $r_b$ represents the boundary of the cloud. The grey areas represent the trapped regions enclosed by apparent horizons. The dashed lines represent the apparent horizon (curved) and the event horizon (straight) in the classical case. Collapse follows the classical behaviour until a certain time before the bounce. All shells bounce at the same co-moving time $t_B$.
Left panel: When quantum effects become important the apparent horizon in the interior (ah) starts moving outwards. At the same time the outer horizon (oh) in the exterior moves inward. The two horizons meet and annihilate before $t_B$. At the time of the bounce the cloud is not trapped. After the bounce the solution is described by a time reversal of the collapsing solution, therefore a new trapped region (this time with a white hole horizon) develops for a finite time. In this scenario the bounce affects the whole space-time instantaneously.
Right panel: The darker grey area represents the region where quantum effects are non negligible. Outside the quantum gravity region (QG) the space-time is given by a classical collapse solution for $t<t_B$ and its time reversal for $t>t_B$. Quantum gravity effects reach a portion of the space-time outside the horizon. Dotted lines represent lines of constant $t$ (note that due to homogeneity, in the interior, the quantum gravity region occurs at the same $t$).}
\label{bounce1}
\end{figure}
\textbullet\ Another possibility is that the exterior region is not described by the classical Schwarzschild solution but by a modified (`quantum corrected') solution in the form of a regular black hole space-time. Regular black holes can be constructed under several prescriptions and share some qualitative properties. Given the absence of the singularity, regular black holes may present multiple horizons (when the mass exceeds a critical mass), the simplest situation being that of two horizons: an outer horizon close to the classical Schwarzschild horizon and an inner horizon. The outer horizon is in the weak field region and quantum effects have little impact on its structure. On the other hand the inner horizon is a distinctive feature of the `quantum' corrections.
For example in
\cite{hayward}
a very simple metric for a regular black hole, inspired by \cite{bardeen}, was considered,
and it was shown that the horizons of the modified space-time are located at the zeros of
\begin{equation}
f(r)=1-\frac{2Mr^2}{r^3+2l^2M} \; ,
\end{equation}
where $l$ is a characteristic length that can be thought to be of the order of the Planck length. Then for $M>(3\sqrt{3}/4)l$ the solution has two horizons, as discussed above. Note that for $l=0$ one retrieves the Schwarzschild solution.
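As a quick numerical illustration (in units where $l=1$), the zeros of $f(r)$ can be found as the roots of the equivalent cubic $r^3-2Mr^2+2l^2M=0$, confirming the two-horizon structure above the critical mass:

```python
import numpy as np

def horizons(M, l=1.0):
    """Positive real zeros of f(r) = 1 - 2*M*r**2/(r**3 + 2*l**2*M),
    i.e. roots of the cubic r**3 - 2*M*r**2 + 2*l**2*M = 0."""
    roots = np.roots([1.0, -2.0*M, 0.0, 2.0*l**2*M])
    real = roots[np.abs(roots.imag) < 1e-8].real
    return np.sort(real[real > 0.0])

M_crit = 3*np.sqrt(3)/4  # critical mass (3*sqrt(3)/4)*l for l = 1

assert len(horizons(2.0)) == 2   # above M_crit: inner and outer horizon
assert len(horizons(1.0)) == 0   # below M_crit: no horizon, regular space-time
# for l -> 0 the outer horizon approaches the Schwarzschild radius r = 2M
assert abs(horizons(2.0, l=1e-4)[-1] - 4.0) < 1e-6
```

The last check also illustrates the statement that for $l=0$ one retrieves the Schwarzschild solution.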
In the dynamical case, if the exterior is described by a regular black hole solution with two horizons, the outgoing apparent horizon within the star may match with the inner horizon of the exterior metric once it crosses the boundary, while the outer regions of the black hole remain mostly unaffected until the time of the bounce.
\textbullet\ The bounce of the apparent horizon is also connected to the role of the effective mass $M_{\rm eff}$.
Consider again for simplicity the homogeneous dust case. In evaluating the physical density we must use the physical mass function $M_0$ and thus we obtain $\rho=3M_0/a^3$, which, with the `quantum corrected' scale factor, reaches the maximum value $\rho_{\rm cr}$ at the time of the bounce. If we use the physical mass function instead of the effective mass function in the equation for the apparent horizon \eqref{ah} we obtain the intriguing situation where the apparent horizon reaches the smallest radius at the time of the bounce and then re-expands.
This allows for the exterior to have only one horizon that behaves like the event horizon in the Schwarzschild space-time (plus possible small corrections) until the bounce (or until it is reached by the effects of the strong gravity region). After the bounce the model is again described by the time reversal of the collapsing scenario.
Unfortunately this situation does not seem justifiable if we consider the geometrical definition of the apparent horizon. In fact, the condition for the formation of trapped surfaces in the interior is given by
\begin{equation}
g^{\mu\nu}(\partial_\mu R)(\partial_\nu R)=0 \; ,
\end{equation}
which is equivalent to requiring that the surface $R(r,t)={\rm const.}$
is null. In the case of the semi-classical model for collapse this implies equation \eqref{ah} with $M$ replaced by the effective mass $M_{\rm eff}$.
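As an illustration of the difference between using $M_{\rm eff}$ and $M_0$, consider a toy effective dynamics of the LQC type (a sketch of our own, not one of the cited models, with assumed units $G=c=1$ and $\rho_{\rm cr}=1$), for which the dust scale factor is $a(t)=(1+6\pi\rho_{\rm cr}t^2)^{1/3}$ and the effective Friedmann equation reads $H^2=(8\pi/3)\rho(1-\rho/\rho_{\rm cr})$:

```python
# Toy bouncing dust model: compare the apparent-horizon radius obtained
# from the effective mass, R_eff = 1/|H|, with the naive one obtained
# from the physical mass, R_phys = sqrt(3/(8*pi*rho)).
import numpy as np

RHO_CR = 1.0
GAMMA = 6.0 * np.pi * RHO_CR   # from a(t) = (1 + GAMMA t^2)^(1/3)

def hubble(t):
    """H = a'/a for the bouncing scale factor (bounce at t = 0)."""
    return (2.0 * GAMMA * t) / (3.0 * (1.0 + GAMMA * t**2))

def rho(t):
    """Energy density; maximal (= RHO_CR) at the bounce."""
    return RHO_CR / (1.0 + GAMMA * t**2)

def R_eff(t):
    return 1.0 / abs(hubble(t))                    # diverges at the bounce

def R_phys(t):
    return np.sqrt(3.0 / (8.0 * np.pi * rho(t)))   # minimal at the bounce

print(R_phys(0.0), R_phys(1.0))
print(R_eff(1e-3), R_eff(1.0))
```

The horizon radius computed from the effective mass diverges at the bounce (the trapped region disappears), while the one computed from the physical mass reaches its minimum there, matching the two behaviours described above.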
\textbullet\ In any case it would seem that the exterior geometry cannot remain unaffected by the bounce and thus the quantum effects must somehow reach the horizon in the exterior and possibly beyond.
We can suppose that reaching a certain threshold density may signal when the cloud is entering the quantum gravity regime. In homogeneous collapse every shell has the same density and therefore quantum gravity effects will start throughout the cloud at the same time.
After that time the effects may reach the boundary and propagate in the exterior to modify the geometry near the horizon (see right panel in figure \ref{bounce1}).
In the model discussed in
\cite{us1}
quantum effects reach arbitrarily large distances at the time of the bounce.
On the other hand it is possible to construct models, such as the one discussed in
\cite{rov1},
where quantum effects reach a finite distance beyond the horizon and the space-time is always classical at large distances.
The most striking consequence is that quantum effects may alter the geometry of the space-time in a region where classical physics should dominate. This shows, better than any other example, that black hole horizons are intrinsically quantum gravitational concepts.
Another thing that stands out from the previous considerations is that the evolution after the bounce can be described as the time reversal of the collapse scenario. If this is true both in the interior and the exterior, then the black hole turns into a white hole, as the horizon turns from a membrane that does not let anything escape into a membrane that does not let anything enter. The above considerations lead one to ask what mechanism allows the physics of the strong gravity region to have effects at large distances, how the quantum effects propagate to reach the horizon, and how such effects can turn the black hole into a white hole.
\subsection{The black hole to white hole transition}\label{trans}
In this section we shall investigate the question of how quantum effects may reach the horizon. The simplest mathematical model where the evolution after the bounce is described by the time reversal of the collapse model does not address how the black hole event horizon turns into a white hole horizon.
The event horizon in the exterior region is within the regime where gravity is well described by GR and relatively far from the central region where matter is confined just before the bounce. Therefore, if the bouncing scenario induces a transition of the black hole to a white hole solution, it is important to understand how such a process can happen, i.e. how effects that occur in the strong gravity region affect the geometry near the horizon.
Three main possibilities can be suggested: The horizon could change nature as a consequence of some signal propagating from the strong gravity region at a finite speed. Or the transition could be a statistical process with the black hole horizon being the limit at early times and the white hole horizon the limit at late times, with a `grey' hole given by a superposition of the two in between. Finally it could be that the horizon is quantum entangled with the ingoing matter and therefore reacts instantaneously to what happens close to the center.
\textbullet\ If some signal propagates from the quantum gravity region outwards to reach the horizon in a finite time then the horizon in the exterior is a black hole horizon until it is affected by the signal. Then it is worth wondering about the nature of the signal and what kind of carrier would propagate it towards the exterior.
For example, in
\cite{bar2}
one such mechanism was suggested in the form of a shock wave originating near the strong curvature region.
In the time symmetric case, the quantum gravity region reaches the horizon just before the time of the bounce and the black hole turns quickly (for co-moving observers) into a white hole. For example in the models described in \cite{haj1} the superposition state lasts for a very short time. The geometry before the transition is well described by the classical black hole space-time, while after the transition it is a classical white hole solution (the case of a fast transition from black hole to white hole is illustrated in figure \ref{fink1}).
\textbullet\ On the other hand it is possible that quantum effects accumulate over time in the outer regions making the transition a statistical phenomenon.
This is equivalent to saying that at large scales (near the horizon) quantum effects are negligible at any given time, but they pile up, eventually leading the solution away from the classical black hole solution
\cite{rov1}.
In this case there are a few questions that are worth asking. What is the nature of the horizon during the transition phase? How long does the transition last before the horizon settles to a white hole horizon? And when does the transition occur? Close to the time of the bounce (thus making the scenario symmetric in time) or close to the disappearance of the horizon (thus making the white hole phase short lived)?
In this case it is possible that the transition during which the horizon is a superposition of black hole and white hole lasts for a long time even for local observers near the horizon thus having different observable effects with respect to the previous scenario. Quantum effects would start to accumulate at the horizon right after its formation and the transition would affect the whole lifetime of the black hole.
Note that the two situations described above offer the possibility to break the time reversal symmetry if the quantum gravity signal reaches the horizon after the bounce, as we shall see later.
\textbullet\ It is also possible that the horizon is in some way `entangled' with the infalling matter that produced it and therefore responds instantaneously to quantum effects from the vicinity of the center.
This third possibility corresponds to asking the question `Is it possible that faster than light communication occurs between the quantum gravity region in the dense core and the horizon?'
In order to address this point let us consider a simple thought experiment based again on the collapse of a Vaidya null shell. Following \cite{haj1} we assume that the null dust shell can be quantized and described by a wave function. Imagine first positioning a massless spherical mirror at some fixed radius $r_{\rm m}>2M$. Without the mirror and without a quantum treatment, the shell would collapse and form a singularity (as shown in the right panel of figure \ref{fig1}). With the mirror, the shell would bounce before crossing the horizon radius and re-expand to infinity (see left panel of figure \ref{vaidya}). With the quantum treatment and without the mirror, the shell would bounce close to the center following the dynamics described in \cite{haj1} and create a state of superposition between black hole and white hole. Now imagine instead positioning a semi-reflective mirror at a fixed radius $r_{\rm m}>2M$. With the semi-reflective mirror, we can in principle follow the usual framework of quantum interference and conclude that the wave function should split into two parts: one transmitted, which propagates towards the center, and one reflected, which propagates towards radial infinity. An observer at a radius $r>r_{\rm m}$ would have a probability of $1/2$ of detecting the expanding shell. Then, even before the white hole transition due to the bounce occurs, the geometry would be described by a superposition of two geometries (one without the horizon and one with the usual black hole horizon), showing that quantization of the null shell entails quantization of the horizon even in the weak field (see right panel in figure \ref{vaidya}).
In this kind of scenario the transition from black hole to white hole would be instantaneous, as the horizon, being entangled with the collapsing matter, would change its nature at the time of the bounce.
\begin{figure}[h]
\centering
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[scale=0.3]{vaidya-penrose-mirror.eps}
\end{minipage}
\hfill
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[scale=0.3]{vaidya-penrose-entang.eps}
\end{minipage}
\caption{Penrose diagrams of a collapsing Vaidya null shell with a massless mirror (left panel) and a semi-reflective mirror (right panel). The trajectory of the static mirror is given by the fixed value of the radius $r=r_{\rm m}>2M$ (solid thin line). The dotted line represents a line of constant $t$.
Left panel: The ingoing wavepacket (thick solid line) reaches the mirror and bounces back to infinity.
Right panel: The ingoing wavepacket reaches the semi-reflective mirror. The wave function is split into two entangled parts, one ingoing and one outgoing (dashed lines). As a consequence, when the ingoing part crosses the Schwarzschild radius, an `entangled' horizon (dot-dashed line) and eventually an `entangled' singularity (double dotted line) form.
If a detector is placed on the trajectory of the outgoing part of the wavepacket the whole system will collapse to either the diagram in the left panel of this figure (if there is detection) or to the diagram in the right panel of figure \ref{fig1} (if there is no detection).}
\label{vaidya}
\end{figure}
In all of the above cases it seems unavoidable that the transition from black hole to white hole must have a non-causal nature. In fact, the light cones in the exterior region, but inside the Schwarzschild radius, are tilted towards the center, while the quantum gravity effects propagate towards outer radii. This can be understood in terms of the effective energy-momentum tensor, as violations of the dominant energy condition allow for signals to propagate outside the light cone.
It is worth noting that in \cite{bar2} it was argued that the time-scales of the transition must be short for the white hole to be stable against perturbations.
The model for collapse of quantum null shell developed in
\cite{haj3}
supports the idea that the transition must be short. Later work by Barcel\'o et al.
also supports this claim
(see \cite{bar3}).
At the same time, if the transition is a statistical effect, then the timescales might become relatively long.
This naturally leads to the question of the lifetime of the horizon in these bouncing scenarios. The time scale of the transition may be short, while the observed lifetime of the black hole for distant observers may still be long (millions of years for a stellar mass object). Can distant observers see the state of superposition of black hole and white hole? Or do they see a sudden transition? And how long does the black hole horizon live for distant observers?
\subsection{Lifespan of the black hole}
The Schwarzschild black hole exists forever. In the OS model, after all matter has reached the singularity, the final product of collapse is a Schwarzschild black hole. The event horizon in the Schwarzschild geometry `knows' the future development of the space-time for `the entire history of the universe'. It has been argued several times that event horizons cannot be detected and are not relevant for real physical phenomena happening in the universe
(see for example \cite{vis-hor} and references therein).
The event horizon is not a physically observable feature of the space-time. On the other hand, from the point of view of local experiments, apparent horizons, which have local significance, can be observed. Therefore when talking about black holes in these bouncing models one has to think about a trapped region, enclosed within apparent horizons, that exists for a finite, albeit possibly very long, time.
In bouncing models, the black hole horizon has a finite lifespan that ends when either the black hole turns into a white hole or when the expanding matter crosses the horizon outwards. Then for how long does the black hole `live'? Is the lifespan of the black hole horizon as measured by distant observers compatible with current observations of astrophysical black hole candidates?
If the lifetime of these objects is short enough then there is a possibility that the observational signature of the disappearance of the horizon can be experimentally tested by modern telescopes.
If, on the other hand, the time scales for the disappearance of the horizon are long (say comparable with the age of the universe), then there is little hope to impose observational constraints on the theoretical models.
In some cases the lifespan of the black hole can be calculated using the usual arguments found for example in
\cite{MTW}.
On the other hand the same calculation in a fully quantum framework becomes much more complicated.
\textbullet\ For example, in \cite{frolov} it was found that the lifetime of the black hole is of the order of $T\simeq Me^{M/m_{\rm pl}}$, where $M$ is the mass of the collapsing shell and $m_{\rm pl}$ is the Planck mass (obviously for $m_{\rm pl}\rightarrow 0$ the lifetime goes to infinity). The fact that the lifespan is exponential in the Schwarzschild mass makes it longer than the age of the universe for a stellar mass black hole.
\textbullet\ However, the tunnelling process suggested in \cite{haj3} and the transition process suggested in \cite{bar2} indicate a time scale of the order of $T\simeq t_{\rm pl}(M/m_{\rm pl})$, where $t_{\rm pl}$ is a model dependent characteristic time scale that can be thought of as related to the Planck time. Arguments for a short lived horizon were suggested for example in
\cite{bar3}
where a short transition from black hole to white hole implied also a short lifespan for the black hole horizon (of the order of milliseconds for distant observers).
Using a different framework, in
\cite{bar4}
the authors evaluated the mean lifetime of the black hole by computing the probability of tunneling to a white hole solution. They found an exponential decay law of black holes to white holes, with a mean lifetime that scales like $1/(4M)$, which is small for stellar mass objects. When this factor is added to the time required for the star to bounce as seen by distant observers, the striking result is that quantum tunneling has little effect: the decay does not significantly modify the lifetime of the black hole for far away observers, which scales like the Schwarzschild mass $M$.
\textbullet\ On the other hand, the tunneling model described in
\cite{rov1}
has a time scale of the order of $T\simeq M^2/l_{\rm pl}$, where again $l_{\rm pl}$ is a model dependent characteristic length that can be thought of as related to the Planck length (note that in geometrized units we would have $m_{\rm pl}=t_{\rm pl}=l_{\rm pl}=\sqrt{\hbar}$). The fact that the lifespan of the black hole is quadratic in the Schwarzschild mass puts this model somewhere in between the two presented above: shorter than Hawking evaporation scales (which are of the order of $M^3$) but longer than the time scales discussed above.
The same estimate was obtained within LQG in
\cite{rov-new},
where the authors suggested the possible existence of new astrophysical phenomena, given the solid theoretical support for black holes tunneling to white holes in sufficiently short time scales. The idea put forward is that Fast Radio Bursts may be the result of moon-sized primordial black holes exploding today.
In general there are three phases that the outer horizon goes through before the outgoing matter re-emerges (black hole phase, transition phase and white hole phase). The black hole horizon lasts until the transition to white hole begins. The transition may be instantaneous, short or long and produces what Hajicek called a `grey horizon', which can be thought of as a superposition of the other two states. After the transition the system settles to a white hole horizon.
The lifespan of the black hole, strictly speaking, is the time, as measured by distant observers, during which the black hole horizon exists.
It should be noted that in time symmetric models a long lived black hole horizon implies a long lived white hole horizon which may be problematic due to the known instabilities of white hole solutions
(see for example \cite{white1} and \cite{white2}).
One easy way to understand this instability is to think of the white hole as an attractive body with repulsive effects. Matter is still attracted towards the center due to the positive mass trapped within the white hole horizon; however, it cannot pass the horizon due to the nature of the white hole horizon itself, and therefore it is forced to accumulate at a finite distance from the horizon, eventually forcing a new collapse.
As we shall see, dissipative effects and inhomogeneities may help solving this problem.
\subsection{Hawking radiation and time symmetry}
It is interesting to compare the previous results with the Hawking evaporation time, which is of the order of
\begin{equation}
T\simeq t_{\rm pl}\left(\frac{M}{m_{\rm pl}}\right)^3 \; ,
\end{equation}
thus considerably longer than any quantum-corrected bouncing scenario. This suggests the possibility that the quantum processes that govern the black hole to white hole transition may supersede the effects of Hawking radiation on astrophysical black holes.
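For concreteness, the hierarchy between these scalings can be checked numerically for a solar-mass object (an illustration of ours, assuming standard SI values for the Planck units; the exponential scaling of \cite{frolov} overflows floating point arithmetic and is omitted):

```python
# Order-of-magnitude comparison of the lifetime scalings discussed in
# the text, for a solar-mass object, in SI units.
T_PL = 5.39e-44        # Planck time [s]
M_PL = 2.18e-8         # Planck mass [kg]
M_SUN = 1.99e30        # solar mass [kg]
T_UNIVERSE = 4.35e17   # age of the universe [s]

x = M_SUN / M_PL       # dimensionless mass, ~ 9e37

T_linear = T_PL * x        # linear scaling: of order microseconds
T_quadratic = T_PL * x**2  # quadratic (`fireworks'-type) scaling
T_hawking = T_PL * x**3    # Hawking evaporation scaling

print(f"linear:    {T_linear:.1e} s")
print(f"quadratic: {T_quadratic:.1e} s")
print(f"Hawking:   {T_hawking:.1e} s  (age of universe: {T_UNIVERSE:.1e} s)")
```

The linear scaling gives sub-second lifetimes, the quadratic one exceeds the age of the universe, and the Hawking scaling is longer still, which is the hierarchy invoked in the text.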
\textbullet\ Hawking radiation: When considering semi-classical effects in the vicinity of the event horizon one finds that particle pairs created from vacuum fluctuations may cause the black hole to gradually lose its mass through what is now called the Hawking evaporation process \cite{hawking-radiation}. Hawking radiation is believed to be a real physical process that happens in the vicinity of the black hole horizon and must be taken into consideration if one aims at understanding the physics of black holes. Also it has been noted by several authors that the regularization of the Schwarzschild singularity due to UV effects in the strong field has implications for the evaporation process and the information loss problem
(see for example \cite{paradigm}).
Therefore, in order to improve the physical validity of collapse models, several authors have included Hawking radiation from the horizon after collapse passes the threshold of the Schwarzschild radius
(see for example
\cite{rad1} and \cite{rad2}).
Furthermore, it has been shown that back reaction from Hawking radiation in gravitational collapse may in fact halt collapse without the need for other repulsive effects
\cite{mersini}.
Therefore it would seem that Hawking radiation is an important effect that must be considered in any discussion of the dynamical processes that lead to black hole formation.
For example in
\cite{torres2}
a generalized Vaidya space-time was used to model Hawking radiation in the exterior of the collapsing sphere. This approach has some useful advantages, as it allows one to include radiation in the geometry of the exterior space-time.
However, modeling Hawking radiation with an outgoing Vaidya metric might not accurately capture the main features of the process.
The outgoing Vaidya solution describes what can be regarded as a null fluid which is somehow different from the particle flux of Hawking radiation.
Also the outgoing radiation would be coming from the boundary surface of the collapsing object and not from the vicinity of the horizon as required by semi-classical vacuum fluctuations.
\textbullet\ The black hole `atmosphere': Most models that consider Hawking evaporation treat the radiation as coming from a region very close to the horizon and having a measurable temperature. These two points, together with the time scales for evaporation, are in fact very important when considering Hawking radiation from astrophysical black holes.
Does Hawking radiation come from the vicinity of the horizon?
In
\cite{gid2}
it was shown, using the Stefan-Boltzmann law, that despite being a semi-classical effect, Hawking radiation appears quite far from the horizon, at an effective radius of the order of $3\sqrt{3}M$, considerably larger than $2M$. Therefore one has to understand Hawking radiation as coming from a `quantum atmosphere' that extends well beyond the horizon.
The flux of particles coming from this atmosphere peaks around $4M$
\cite{liberati},
thus confirming that treating Hawking radiation as a near-horizon effect is a mistake.
If one considers the thermal wavelength $\lambda_T$ of the emitted radiation one gets a value of the order $\lambda_T\approx 158M$, much larger than the horizon. The peak wavelength for Hawking radiation from a stellar mass black hole would then be of the order of $10^5 {\rm m}$, much longer than the cosmic microwave background (CMB). This also suggests that astrophysical black holes, being immersed in a bath of photons from the CMB, may not be radiating at all.
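A quick back-of-the-envelope evaluation (ours, using the numerical factors quoted above in geometrized units) of these length scales for a solar-mass black hole:

```python
# Length scales of the `quantum atmosphere' for a solar-mass black hole.
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8        # speed of light [m/s]
M_SUN = 1.99e30    # solar mass [kg]

M_geo = G * M_SUN / C**2          # geometrized mass, ~ 1.48 km

r_horizon = 2.0 * M_geo                       # Schwarzschild radius
r_atmosphere = 3.0 * math.sqrt(3.0) * M_geo   # effective emission radius
r_peak_flux = 4.0 * M_geo                     # peak of the particle flux
lambda_thermal = 158.0 * M_geo                # thermal wavelength ~ 158 M

lambda_cmb = 1.06e-3              # CMB peak wavelength [m]

print(f"horizon        : {r_horizon:.3e} m")
print(f"atmosphere     : {r_atmosphere:.3e} m")
print(f"lambda_thermal : {lambda_thermal:.3e} m")
print(f"lambda_CMB     : {lambda_cmb:.3e} m")
```

The thermal wavelength indeed comes out of order $10^5\,{\rm m}$, roughly eight orders of magnitude longer than the CMB peak wavelength, which is the basis of the remark that astrophysical black holes immersed in the CMB may not be radiating at all.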
So, is Hawking radiation a necessary ingredient for a realistic collapse model? These considerations, together with the fact that Hawking evaporation takes time scales longer than the age of the universe in order to completely evaporate a stellar mass black hole, suggest that the evaporation process might be entirely overpowered by the black hole to white hole transition mechanism.
\textbullet\ Time symmetry:
It should be noted that Hawking radiation does not turn the black hole into white hole and so the process may `afford' to have such a long timescale as it does not clash with white hole instabilities. The Hawking evaporation process, being an intrinsically dissipative process, possesses a clear direction in time and its implications for the connection between thermodynamics and black holes are well known
\cite{thermodynamics}.
On the other hand, the toy models of black hole to white hole transition are entirely time symmetric, as they neglect any dissipative effects, leaving open the question whether more realistic evolution models may show some asymmetry in time and how this asymmetry would manifest.
There are several indications that the black hole to white hole transition may not be time symmetric. The symmetry may be a consequence of the initial assumptions (in some cases the bounce is obtained through a `cut and paste' technique of the metric with its time reversal across the curve describing the time of the bounce).
For example, consider the hypothetical scenario in which the strong-field effects reach the horizon after the bounce. In this case the black hole horizon may be long lived while the white hole horizon may last only for a short time. A situation like this may be triggered by the presence of inhomogeneities in the collapsing cloud.
In this kind of scenario the transition should again occur over a short time thus possibly leaving the white hole solution as a short lived unstable state that quickly `explodes' releasing the matter that was previously trapped (see figure \ref{fink2}).
The asymmetry has the added benefit to cure the problem of white hole instability, since the white hole state does not last for long enough to become unstable. For example in
\cite{perez}
a similar trick was employed to resolve the white hole instability of the fireworks model presented in
\cite{rov1}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.3]{fink2.eps}
\caption{Finkelstein diagrams for the black hole to white hole transition with asymmetry in time. The grey area enclosed within dashed lines represents the region where quantum effects are important (QG). The grey area within solid lines represents the trapped region in the exterior space-time. The solid thick line $r_b$ represents the boundary of the cloud. The solid thin vertical line represents the horizon in the exterior region.
In the time asymmetric transition, the bounce curve $t_B(r)$ (dotted line) is not constant (see figure \ref{fink1} for comparison with the symmetric case). A horizon-grazing photon (thin curved line) remains in the vicinity of the horizon for a longer time with respect to the symmetric case. The lifetime of the white hole is short compared with the lifetime of the black hole.}
\label{fink2}
\end{figure}
Another, obvious, indication that the bounce should not be symmetric in time comes from the second law of thermodynamics. Hawking radiation is a dissipative effect, but it is not the only one. In a realistic fluid, entropy is expected to increase in time regardless of whether matter is collapsing or expanding. On the other hand in the time symmetric models described above after the bounce the matter cloud comes back to the exact initial configuration, thus with its entropy unchanged (in fact most models consider fluids with constant entropy). In these models if the entropy increased during collapse it must decrease after the bounce thus contradicting the second law of thermodynamics. This is just a consequence of the too simplistic matter profiles that have been chosen.
Therefore, the issue of the thermodynamics of the bouncing models is closely related to the specific matter models used to describe them. Obviously, properties of matter in the strong field, such as equation of state and energy conditions, play an important role in determining the final fate of collapse.
\subsection{Matter models}
In this section we will discuss the features that matter models are expected to exhibit in the strong field.
Non dissipative, homogeneous and isotropic matter fields are not realistic. How does the bounce scenario change once we introduce, for example, inhomogeneities?
Also, the usual weak field equations of state are not expected to hold in the strong field. What can we say about the properties of matter in the strong field?
\textbullet\ Inhomogeneities: We mentioned that in homogeneous dust collapse every shell has the same density and therefore quantum gravity effects will appear throughout the cloud at the same co-moving time.
It is reasonable to assume that inhomogeneities will affect the collapse scenario just as much as they do in the classical case. For example, in classical collapse introducing inhomogeneities in the dust model changes the location of initial formation of the horizon from the boundary to the interior of the cloud (see \cite{booth} or \cite{may} for earlier numerical work on polytropic fluids). What happens when semi-classical corrections are considered?
Since the critical density is now reached by different shells at different times, it seems natural to suppose that the bounce will also occur at a different time for each shell. The time of the bounce can then be described by a curve $t_B(r)$
(see figure \ref{fink2}).
Then some shells will be collapsing as some others expand, thus breaking the time symmetry of the homogeneous model.
Note that in collapse scenarios, as opposed to cosmological models, it is possible to introduce inhomogeneities in a natural way since we begin with a classical configuration where the behaviour of the matter fields is well understood. On the contrary, in cosmological models one has to introduce the fluctuations initially at a quantum level. It is worth noting that some scenarios considering the effective dynamics of inhomogeneous cosmological models within LQC have been studied (see for example
\cite{Mena1} and \cite{Mena2}).
Considering a decreasing density profile the critical density is reached first at the center of the cloud, thus causing the inner shells to bounce first and collide with the ingoing outer ones
\cite{yue}.
In this case it is natural to ask what happens when expanding shells and collapsing shells collide. It is possible that shell crossing singularities develop, causing caustics and shockwaves that may disrupt the entire collapse. This process is not time reversible. Considerations along these lines were discussed in
\cite{yue}
where the authors introduced inhomogeneities at a perturbative level, close to the center of the collapsing sphere. In
\cite{harada}
the authors used a non perturbative approach to the introduction of inhomogeneities and found numerical indications that the collapse halts in a shell-crossing singularity before the formation of the central singularity.
It is clear that the exact form of the density profile plays a crucial role in the future evolution of the cloud. Also, if shell crossings do arise at some stage, then the evolution after the bounce will not be the time reversal of the pre-bounce scenario.
All the models discussed here are extremely simple (dust, homogeneous perfect fluids and null dust shells). More realistic models such as polytropic fluids could be considered in the weak field and this would naturally lead to complicated sets of equations that need numerical tools to be solved.
\textbullet\ Energy conditions:
Every known classical matter field describing macroscopic objects obeys energy conditions, such as the `dominant energy condition' which roughly states that energy momentum cannot travel faster than light. The least stringent of the energy conditions however is the `weak energy condition'. The weak energy condition essentially states that the energy density, as measured by an observer moving on a time-like curve, must be non-negative.
However, when dealing with matter fields over very short distances, one has to take into account effects coming from quantum field theory. Then it is possible that the expectation value of the energy density becomes negative. Furthermore, it is possible to make such expectation value arbitrarily negative thus having a matter field that violates the weak energy condition.
Therefore it is clear that classical energy conditions must be modified in situations where one has to include quantum effects. After all, energy conditions are constructed to serve a purpose: to describe the behaviour of realistic matter fields under some assumptions. Therefore if realistic matter fields are found to violate energy conditions, no harm comes from dropping them or replacing them with more suitable versions
(see for example the case of the `trace energy condition' discussed in
\cite{visser}).
Then it is legitimate to ask how we should modify such energy conditions to account for ultra-violet effects in the strong gravity regime.
Quantum inequalities (see
\cite{ford})
were introduced to constrain the negative densities in quantum matter fields. Energy conditions are conditions imposed on the energy momentum tensor in order to constrain its physical validity. In the same way, quantum inequalities, by imposing constraints on the semi-classical energy-momentum tensor, allow one to check the physical validity of effective matter models.
Similarly, the flux energy conditions
\cite{FEC}
and its quantum equivalent
\cite{FEC2}
were introduced to impose local semi-classical constraints on the energy momentum tensor.
In \cite{visser2}
some properties and applications of the flux energy condition and its modifications were studied, and their possible ranges of applications were discussed.
At present, what kind of energy conditions will hold for matter fields in the strong gravity regime is still unknown. This uncertainty is deeply related to our present lack of understanding of the behaviour of matter at ultra-high densities.
As said, it is reasonable to assume that the usual polytropic and barotropic equations of state will not hold at the densities found at the core of neutron stars or in collapse close to the formation of black holes. But then what kind of equation of state can suitably describe matter fields in these regimes?
\textbullet\ Equations of state: Equations of state for high density matter have been a topic of study for decades. It is indeed possible that some kind of exotic matter is already present at the core of neutron stars. Quark matter seems to be an ideal candidate and it is indeed possible that new islands of stability may appear below the neutron star degeneracy pressure once matter reaches such a state
\cite{weber}.
Historically there have been two kinds of equations of state considered to describe matter at high densities, and they are on the opposite sides of the spectrum when it comes to their physical properties.
On one hand, there are hard equations of state, like the stiff fluid model with $p\rightarrow \rho$ as densities increase. This possibility was advocated for example by Zel'dovich
\cite{zeld}
and entails, among other things, a sound speed within the cloud that approaches the speed of light.
On the other hand, it has been argued that asymptotic safety and the decreased effects of gravitation at high densities might imply the opposite behaviour for matter. Namely matter fields could tend towards a soft equation of state, where the speed of sound in the cloud decreases as densities increase.
This idea was already suggested by Sakharov in
\cite{sakh}
where it was argued that as the baryon density increases the energy density may in fact become lower.
A similar idea was advocated by Hagedorn, who suggested that as densities increase the number of allowed states saturates
and the system tends to a limit phase with a maximum temperature
\cite{hagedorn}.
As matter approaches the Hagedorn limit, adding energy
to the system does not increase the temperature but instead creates more and more particle pairs. One can think of a system near the Hagedorn phase as allowed to store any arbitrary amount of energy without increasing its temperature. Consequences of such a state of matter for cosmology and collapse were explored in
\cite{bahcall}.
More recently, semi-classical models for collapse of Hagedorn fluids were studied in
\cite{me}
and
\cite{harko}.
In this context, for example, it was shown that if there is a limiting density threshold in nature that cannot be exceeded by any kind of matter, then it is possible to rewrite the equations of GR in a form that reduces to a DeSitter (read, cosmological constant) space-time in the limit
\cite{markov}.
\subsection{Other possibilities}
\textbullet\ Baby Universes: Another intriguing possibility is obtained if one assumes that the exterior horizon is well described by the classical event horizon and that the bounce of the matter is confined within the black hole. In such a case far away observers would see collapse evolving as in the standard relativistic picture, with the formation of the horizon and the system settling down to a Schwarzschild black hole. However the infalling matter, instead of being condemned to fall into the singularity, would bounce and re-expand within the confined environment of the newly formed black hole, thus giving rise to a baby universe.
This is related to an idea for the resolution of the Schwarzschild singularity that was developed in
\cite{baby}.
In \cite{baby} it was shown that a matching through a transition layer could be performed between Schwarzschild and DeSitter in such a way that the singularity in the former is removed and a closed universe described by the latter develops inside the black hole.
The same idea can be extended to the case of black holes forming from collapse
(see left panel in figure \ref{baby}).
Since then the idea of universes evolving inside a black hole has been considered in a variety of contexts
(see for example \cite{pop} or \cite{hsu}).
In
\cite{smolin},
it was argued that the creation of universes inside black holes may provide an evolutionary mechanism that favours universes like the one we observe.
It is not surprising then that they appear also in solutions describing bouncing models from collapse.
For example, baby universes as emerging from the quantum bounce of a null shell in LQG have been suggested in
\cite{pul}.
Also, baby universes have been observed to appear for certain choices of parameters in Friedmann-Robertson-Walker models with quantum corrections coming from LQC
\cite{vid1}.
In the above models Hawking radiation is not considered.
However, universes inside a black hole may also provide clues towards the resolution of the information loss problem (see
\cite{smolin2}).
In fact, by allowing the world-lines of particles to extend into the baby universe one avoids the loss of information that would occur at the singularity. In this case the Hawking radiation particles in the main universe would be entangled with the particles in the baby universe, although they would exist in causally disconnected regions. The situation would resemble the information loss scenario for external observers up until the time at which the black hole horizon reaches Planckian size (which for astrophysical black holes is still longer than the current age of the universe). What would happen at that point is highly speculative and depends on whether predictability is preserved at that stage or not (see for example, \cite{HBU1} and \cite{HBU2}).
\begin{figure}[ht]
\centering
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[scale=0.3]{baby-universe-penrose3.eps}
\put(-45,110){I}
\put(-120,90){II}
\put(-89,200){III}
\end{minipage}
\hfill
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[scale=0.3]{baby-universe-penrose.eps}
\put(-45,88){I}
\put(-100,88){II}
\put(-100,170){III}
\end{minipage}
\caption{Two possible Penrose diagrams for the formation of a baby universe inside a black hole. Collapse of a matter cloud (grey region II within boundary $r_{\rm b}$) proceeds classically until after the formation of the event horizon (thick dashed line). The event horizon is not affected by quantum corrections close to the singularity. Far away observers (in region I) see the formation of a Schwarzschild black hole.
Left panel: The singularity at the end of collapse is removed and replaced by a junction surface (double solid line $r_{\rm s}$) to a DeSitter universe (region III). The thin dashed lines represent the Cauchy horizons (see \cite{baby}).
Right panel: When reaching the quantum gravity region (darker grey region) matter undergoes a phase transition and re-expands after the bounce (region III). However, as opposed to the white hole scenario, the expanding matter remains confined within the horizon and generates an expanding universe causally disconnected from the original one. The dynamics of horizons in the baby universe would depend on the properties of the expanding matter.}
\label{baby}
\end{figure}
In order for the process that leads to the formation of a baby universe to happen, a phase transition is necessary for the collapsing matter at some point close to the bounce. A phase transition in the quantum regime could then be related to a topology change in the space-time thus allowing for the creation of a new asymptotically flat region for observers inside the black hole (see right panel in figure \ref{baby}).
The scenario can be made even more fascinating with the addition of a few speculations. Given that the interior is causally disconnected from the exterior space-time, it is entirely possible that the mass $M$ as measured by observers at infinity does not correspond to the mass measured by observers inside the baby universe. This could be a consequence of the failure of classical conservation laws across some surface (the horizon or phase transition surface, for example)
\cite{hell}.
In fact quantum fluctuations and pair production near the bounce could effectively act as a `big bang' thus creating an entire universe like our own within the black hole.
\textbullet\ Periodic solutions: Finally, we shall briefly mention here another possibility that has been suggested in \cite{garay1} and \cite{garay2} but has not been thoroughly investigated so far. This is the case of a repeating cycle of collapses and bounces leading to a periodic solution.
In the non-dissipative case, with collapse starting with zero velocity from a finite radius, it would seem natural that periodic solutions may exist. The collapsing cloud would reach a minimum size and bounce, returning to its original size with zero velocity, after which the process would repeat unaltered. The black hole horizon would form and disappear at regular intervals, thus leading to a new kind of astrophysical phenomenon. Unfortunately, due to the time delay of signals coming from the deep gravitational well of the pulsating object, the intervals between two explosions may be too long to be experimentally observed.
In a more realistic scenario the oscillating object would lose part of its mass at every cycle and dissipative effects would dampen the oscillations making each subsequent `explosion' less powerful than the previous one. In this case, over long time scales, the object would slowly settle to a final static configuration described by a low mass, high density, exotic compact remnant (possibly without horizon).
In
\cite{garay2}, \cite{bar3} and \cite{bar4} the possible existence of oscillating models was suggested. In \cite{garay1} a model with multiple bounces was proposed in which the object would lose energy after every bounce, via dissipation and possibly the emission of gravitational waves, to eventually settle to a `black star' final configuration. The geometry of these kinds of oscillating solutions was discussed in \cite{barcelo-uni}.
However, it is important to stress that full analytical models describing oscillating scenarios are at present almost entirely unexplored.
\section{Remnants and phenomenology}\label{pheno}
When considering the modified collapse scenarios, the most important question for astrophysics is whether quantum gravity corrections to the black hole formation models have any observable effect in the real universe.
Astrophysical compact sources and cosmology are the only regions in the universe where one can hope to test such proposals. The issue has been debated for a long time (see for example,
\cite{liberati1} for proposed tests of quantum-gravity via cosmic rays or \cite{gupt} and references therein for possible observations of quantum-gravity effects in the early universe).
We have seen that it is possible that quantum corrected models for collapse induce modifications of the geometry by quantum effects in the near horizon region. This indicates that the consequences of a UV completion of gravity may be observable, at least in principle, also in a regime where classicality dominates.
Furthermore, this is not a far-fetched speculation with no hope of any real tests being carried out. Precision measurements of astrophysical phenomena are improving every year and it is conceivable that soon we will have enough data from the regions of space-time surrounding the locations where the horizons of black hole candidates should be. The presence or absence of non-negligible quantum effects may then allow us to test the theories. In fact, it is possible that some energetic phenomena already observed in the universe may have a quantum gravitational origin, but their theoretical explanation has not yet been clarified.
Here we ask the question: what kind of phenomenology do these quantum corrected models entail?
Can the outgoing matter coming out of the white hole horizon produce explosive phenomena (as suggested for example in
\cite{dadhich})?
Or can the matter be radiated away over long times at low energy emission rates?
Is it possible that such phenomena have a distinctive gravitational wave signature?
Can a small, dense compact object of finite size survive the process?
Or must all the matter be radiated away?
If a long lived horizon forms, can it be distinguished from the black hole event horizon?
\subsection{Compact objects}
Compact objects in GR have a long history, starting from the static and spherically symmetric constant density interior solution found by Karl Schwarzschild in 1916,
together with the more famous vacuum solution. Later Tolman considered several classes of interior solutions
\cite{tolman},
one of which was used by Oppenheimer and Volkoff to study an equilibrium state for matter that describes a cold Fermi gas and could be used to model a neutron star
\cite{OV}.
The Oppenheimer-Volkoff model is the general relativistic equivalent of the model that was developed by Chandrasekhar
using special relativistic effects and electron degeneracy pressure to show that there exists an upper mass limit for white dwarfs
\cite{chandra}.
A key ingredient of the relativistic theory of stellar structure is the Tolman-Oppenheimer-Volkoff equation, which constrains the structure of the compact object so that it remains in equilibrium. The specific form of the Tolman-Oppenheimer-Volkoff equation depends on the equation of state describing the matter model. Studying the mass-radius relation coming from this constraint provides the upper bound on the mass of the compact object. For a given equation of state, objects above a certain limit cannot stay in equilibrium and must collapse.
Within classical GR it follows that there is no way to build a stable object above the neutron degeneracy pressure and therefore black holes are the only possible outcome once that threshold is passed.
One intriguing alternative that is suggested by the bouncing scenarios described above is that exotic compact objects as leftovers from collapse may exist in nature.
This can already be seen from the behaviour of the apparent horizon, which exhibits a threshold (with scale determined by the parameter $\rho_{\rm cr}$) below which no trapped surfaces form. This means that the collapse and bounce of a sufficiently small object would never form any horizon. One then naturally wonders whether a stable object may exist below this threshold.
In order to understand whether such possibility could be realized in the universe one has to understand the phenomenology of the collapse scenarios. What happens to matter once it undergoes a bounce in a regime where quantum gravity prevails? Is it possible that an island of stability exists for matter at densities above the neutron degeneracy threshold?
Indeed from all of the discussion above it seems that asking the question whether the observed astrophysical black hole candidates are indeed black holes in the usual sense
is not a mere theoretical exercise
\cite{compact-visser}.
As we have seen, in some models, the region where quantum gravity effects are significant may extend up to the horizon and beyond. Also we have seen that it is possible for quantum effects to destroy the horizon. Modified Schwarzschild solutions, such as the ones describing regular black holes, and solutions describing horizonless exotic objects seem to be perfectly acceptable. Remember that, depending on the value of the critical mass, a solution for a regular static black hole like the one in \cite{hayward} can have multiple horizons or none, which implies that in the horizonless case the compact remnant would be visible to far away observers.
In recent times different kinds of models describing `exotic' interiors have been proposed. These may be in the form of remnants that are left after Hawking evaporation or some other process destroys the black hole horizon, or compact objects of finite size larger than the Schwarzschild radius whose formation prevents the formation of the horizon. These objects can be roughly divided into four categories: (i) Planck size remnants left after evaporation, (ii) Planck size remnants left after some other mechanism destroys the horizon (such as Planck stars), (iii) finite size horizonless objects with radius greater than the horizon (such as quark stars and boson stars) and (iv) finite size horizonless objects with radius slightly larger than the horizon (such as gravastars and black stars). It is important to stress that the dynamical evolutions that lead to the formation of each of these types of objects vary greatly from model to model, and this implies that their observational signatures are going to be very different.
\textbullet\
When considering horizonless compact objects, gravastars (for `{\em gra}vitational {\em va}cuum {\em stars}') constitute an alternative to black holes as the final product of gravitational collapse. They were first suggested in
\cite{grava1}.
The main idea is that just outside the Schwarzschild radius, matter undergoes a phase transition and `condenses' at a finite radius slightly larger than the horizon radius, thus leaving a horizonless compact object composed of a shell with stiff equation of state $p=\rho$ that separates the Schwarzschild exterior from a DeSitter interior.
The existence of a physical surface outside the Schwarzschild radius, as opposed to the event horizon, is the main indication that such models could be tested by observations
\cite{grava1b}.
In this respect gravastars constitute a valid alternative to black holes: they are not plagued by many of the shortcomings of classical black holes and they naturally bridge the classical regime with the quantum regime.
In order to establish whether such proposed models can exist in the universe one needs to understand their physical and observational properties.
Stability of gravastars was studied in
\cite{grava2}, while in
\cite{grava3}
it was shown that `realistic' gravastars must necessarily
have anisotropic pressures.
Also the spectrum of quasinormal modes of such objects is considerably different
from that of a black hole thus suggesting that it may be possible to distinguish them observationally
\cite{grava4}.
However, in
\cite{grava5}
it was suggested that, despite the difference in quasinormal modes, gravastar mergers could produce a gravitational wave signature (in particular a ringdown phase, which is usually associated with the existence of the horizon) similar to that of black hole mergers. Therefore great caution should be used when claiming that the ringdown of detected gravitational wave signals is a direct proof of the existence of the black hole horizon.
Finally we should note that in
\cite{grava6}
it was shown that gravastars could be distinguished from black holes also via direct imaging of their shadow.
\textbullet\ More recently, compact remnants resulting from semi-classical collapse were suggested as an alternative to black holes, under the name of `black stars', by Barcel\'o et al. in
\cite{bar1}. These are compact objects filled with matter, and with an actual surface, that are denser than neutron stars. Black stars are supported in equilibrium by the pressure provided by quantum vacuum polarization and can be understood as the limiting case of an isothermal sphere, where every shell of the black star is close to the horizon limit. Due to their compactness, black stars have a large redshift that can observationally mimic black holes. Furthermore, in \cite{blackstar} it was shown that such objects can emit Hawking-like radiation similarly to black holes.
\textbullet\
Remnants have also been considered as possible leftovers from complete evaporation of black holes
(see \cite{hayward}). Such objects arise naturally when considering Planck scale cutoffs to Hawking evaporation and have sizes (of the order of Planck scales) that depend on the information they contain. These models may offer a solution to the information loss problem (see for example
\cite{hus4} and \cite{gid1}).
Also, quantum effects on inhomogeneous dust collapse were considered in order to construct a solution describing a compact remnant in place of a naked singularity in
\cite{vaz}.
\textbullet\ As said, remnants are Planck size objects that are left after the disappearance of the horizon. Among the various kinds of exotic remnants that have been suggested in the literature, one that is directly related to the bouncing collapse models is the so called `Planck star'. It was proposed in
\cite{ps1} as the ultra-compact remnant of a massive star whose collapse has been halted by repulsive quantum effects. As in the models seen above, the limit at which quantum effects become important is given by the Planck density, which implies a final size for the object much larger than the Planck length. For example the model discussed in \cite{ps2} has a size of the order of $10^{-10}$ cm which, while smaller than the size of a hydrogen atom, is still much larger than the Planck scale. In this scenario the quantum matter distribution is confined behind an inner and an outer horizon (see right panel of figure \ref{remnant}) and the Planck star has a lifetime, as seen by far away observers, of the order of $M^3$, comparable to the Hawking evaporation time. In
\cite{ps2}
the effective metric describing a Planck star was modelled as a modification of a regular black hole metric (such as the one discussed in \cite{hayward}).
In \cite{ps3}
the authors considered the possibility of detecting the final explosion of a primordial Planck star. The main idea is that if such objects formed in the early universe, then, given the average lifetime of the horizon it is possible that the disappearance of their horizon is occurring today. If this process is explosive and accompanied by the emission of gamma rays, then there is some non zero chance of detecting it with modern telescopes (see also \cite{rov-new}).
\textbullet\ Given the existence of several proposals for exotic compact objects, it is interesting to investigate under what conditions gravitational collapse halts at a finite or zero radius, without the formation of a singularity and without a bounce. Is it possible to construct a dynamical evolution that does not bounce and for which collapse stops as $t$ goes to infinity? To answer this question, consider for simplicity the metric \eqref{interior} for a homogeneous fluid, whose dynamics is described by the evolution of the scale factor $a(t)$. Then the condition for the formation of a compact remnant is
\begin{equation} \label{eq}
\dot{a}=\ddot{a}=0 \; .
\end{equation}
In the classical case we get
\begin{eqnarray}
\dot{a}&=&-\sqrt{\frac{M}{a}+b_0} \; , \\ \label{addot-cl}
\ddot{a}&=&\frac{1}{2}\left(\frac{M_{,a}}{a}-\frac{M}{a^2}\right)=-\frac{a}{2}\left(p+\frac{\rho}{3}\right) \; .
\end{eqnarray}
From the above we see that if $M\neq 0$ collapse can halt only in the bound case where $b_0<0$. If $M$ goes to zero then all matter is radiated away and collapse stops, leaving behind Minkowski space-time.
Also from equation \eqref{addot-cl} we see that for collapse to stop the equation of state must tend to $p=-\rho/3$.
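For completeness, the equivalence of the two expressions for $\ddot{a}$ in equation \eqref{addot-cl} can be checked directly using the identifications $\rho=3M/a^3$ and $p=-M_{,a}/a^2$ from equations \eqref{rho} and \eqref{p}:
\begin{equation*}
-\frac{a}{2}\left(p+\frac{\rho}{3}\right)=-\frac{a}{2}\left(-\frac{M_{,a}}{a^2}+\frac{M}{a^3}\right)=\frac{1}{2}\left(\frac{M_{,a}}{a}-\frac{M}{a^2}\right) \; ,
\end{equation*}
so that $\ddot{a}=0$ indeed requires the equation of state to approach $p=-\rho/3$.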
In the semi-classical case however we have
\begin{eqnarray}\label{adot}
\dot{a}&=&-\sqrt{\frac{M}{a}\left(1-\frac{3M}{\rho_{\rm cr}a^3}\right)+b_0} \; , \\ \label{addot}
\ddot{a}&=&\frac{1}{2}\left[\frac{M_{,a}}{a}\left(1-\frac{6M}{\rho_{\rm cr}a^3}\right)-\frac{M}{a^2}\left(1-\frac{12M}{\rho_{\rm cr}a^3}\right)\right] \; ,
\end{eqnarray}
from which we see that in the marginally bound case ($b_0=0$) as $a\rightarrow a_{\rm cr}$ (with $a_{\rm cr}^3=3M/\rho_{\rm cr}$) we get $\dot{a}\rightarrow 0$, which is the condition for occurrence of the bounce. Also in this case
\begin{equation} \label{eq1}
\ddot{a}\rightarrow \frac{a_{\rm cr}}{2}\left(\rho_{\rm cr}-\frac{M_{,a}}{a_{\rm cr}^2}\right) \; .
\end{equation}
Now equation \eqref{p} tells us that the second term in equation \eqref{eq1} is nothing but the pressure. Therefore a fluid that tends to an equation of state of the type $p=-\rho$ (cosmological constant) in the limit of $\rho$ going to $\rho_{\rm cr}$ will halt collapse asymptotically at a finite or zero value for the scale factor.
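For the record, equation \eqref{eq1} follows from \eqref{addot} by noting that at $a=a_{\rm cr}$ one has $3M/(\rho_{\rm cr}a_{\rm cr}^3)=1$, so that the two round brackets in \eqref{addot} reduce to $-1$ and $-3$ respectively:
\begin{equation*}
\ddot{a}\rightarrow \frac{1}{2}\left(\frac{3M}{a_{\rm cr}^2}-\frac{M_{,a}}{a_{\rm cr}}\right)=\frac{a_{\rm cr}}{2}\left(\rho_{\rm cr}-\frac{M_{,a}}{a_{\rm cr}^2}\right) \; ,
\end{equation*}
where in the last step $3M/a_{\rm cr}^2=\rho_{\rm cr}a_{\rm cr}$ was used.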
In the case of $a_{\rm cr}=0$ either the whole mass of the initial configuration is radiated away or the singularity occurs as $t$ goes to infinity.
In the case of $a_{\rm cr}>0$
the apparent horizon curve $r_{\rm ah}(t)$ in the interior follows again equation \eqref{ah}, with the effective mass in place of the physical mass, and the trapped region develops at the boundary in the regime where classical behaviour holds. In the exterior a horizon that follows the event horizon appears at the same time. As the density approaches the characteristic values of the critical scale, the system enters the quantum corrected regime and the apparent horizon curve deviates from the classical trajectory after reaching a minimum value. Then $r_{\rm ah}(t)$ moves outwards to cross the boundary at a certain time, where it connects with the second (inner) horizon of the exterior metric. If the quantum effects reach the exterior and cause the outer horizon to shrink, then the two horizons in the exterior meet and annihilate, leaving the compact remnant visible to far away observers
(see left panel of figure \ref{remnant}).
If the exterior is described by a regular black hole solution then the two horizons live for a long time (of the order of the Hawking evaporation time) and the space-time is described by a Planck star solution with dark energy equation of state
(see right panel in figure \ref{remnant}).
\begin{figure}[ht]
\centering
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[scale=0.3]{Penrose-remnant1.eps}
\put(-73,140){P}
\put(-70,130){oh}
\put(-92,148){ih}
\put(-130,119){ah}
\end{minipage}
\hfill
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[scale=0.3]{Penrose-remnant2.eps}
\put(-64,110){oh}
\put(-100,166){ih}
\put(-131,120){ah}
\end{minipage}
\caption{Penrose diagrams of the formation of exotic compact objects from collapse with semi-classical corrections. The thick solid line describes the boundary of the collapsing body $r_b$. Dashed lines describe apparent horizon (curved) and event horizon (straight) in the classical case.
Left panel: Quantum effects modify the geometry in the exterior and the outer horizon (oh) shrinks. The outer horizon eventually annihilates with the expanding inner horizon (ih) at the point $P$, the trapped region (grey area) lives for a finite time. The geometry tends towards the classical case at large distances, while for large $t$ near the compact object (up to a radius greater than the Schwarzschild radius) there are no trapped regions.
Right panel: The outer horizon (oh) is well described by the black hole event horizon. Quantum effects result in the formation of an inner horizon (ih) and the system settles to a regular black hole geometry. The trapped region (grey area) is enclosed within the two horizons in the exterior and the apparent horizon (ah) in the interior (solid lines).}
\label{remnant}
\end{figure}
\textbullet\ We have seen that if we wish for the collapse of a homogeneous perfect fluid to halt asymptotically we need to impose an equation of state that tends to a dark energy fluid ($p=\lambda\rho$ with $\lambda<-1/3$) in the limit of densities reaching the maximum density. In this case collapse can stop to produce a final `dark energy like' equilibrium configuration. This is closely related to the gravastar model since the dark energy equation of state for the fluid's interior is reminiscent of the DeSitter core of gravastars.
Compact objects with a dark energy equation of state have been considered in
\cite{ds1}
as possible extensions of gravastar models.
In
\cite{ds2}
a varying dark energy equation of state $p=\lambda\rho$ with $\lambda<-1/3$ was considered. If the parameter $\lambda$ is allowed to cross the `phantom' threshold $\lambda=-1$ then a topology change may occur and the repulsive effects can be used to construct stable wormhole solutions.
Similarly to the case of gravastars, anisotropies may turn out to be important for the construction of stable dark energy stars. For example, in
\cite{ds4}
one such anisotropic model was studied, in which the star's interior structure is comprised of two fluids: a baryonic fluid and a repulsive dark energy fluid.
\subsection{A toy model of collapse to a dark energy star}
In this final part of the section we wish to discuss a new dynamical toy model for collapse that leads to the formation of one such object.
The easiest way to construct a toy model that settles to a finite sized compact remnant is to make an `ad hoc' choice for the equation of state in such a manner that the conditions for equilibrium \eqref{eq} are met. This means, for example, considering an equation of state that is linear in the weak field and tends to a dark energy fluid in the strong field (as the density approaches the critical value). The physical interpretation of such a choice will be left aside for the moment.
The simplest way of implementing this is to just assume a dark energy fluid equation of state in the form $p=-\rho$ to begin with. Then the equation of motion for both the classical and semi-classical scenarios gives the scale factor as $a(t)=e^{-t/T_{\rm cr}}$, where the time-scale parameter $T_{\rm cr}$ is given by $T_{\rm cr}^{-1}=\sqrt{M_0}$ in the classical case and $T_{\rm cr}^{-1}=\sqrt{M_0(1-3M_0/\rho_{\rm cr})}$ in the semi-classical case.
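This can be checked explicitly: for $p=-\rho$, equations \eqref{p} and \eqref{rho} (i.e. $p=-M_{,a}/a^2$ and $\rho=3M/a^3$) give $M_{,a}=3M/a$, so that $M=M_0a^3$ and the density $\rho=3M_0$ is constant. Substituting into the equation of motion \eqref{adot} with $b_0=0$ then yields
\begin{equation*}
\dot{a}=-\sqrt{M_0a^2\left(1-\frac{3M_0}{\rho_{\rm cr}}\right)}=-\frac{a}{T_{\rm cr}} \; ,
\end{equation*}
which integrates to the exponential quoted above, with the classical expression for $T_{\rm cr}$ recovered in the limit $\rho_{\rm cr}\rightarrow+\infty$.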
From this we see that $a$ goes to zero as $t$ goes to infinity and therefore no compact remnant of finite size is left over from collapse asymptotically. However, if the exterior horizon were to disappear due to quantum effects such as the ones described in section \ref{open}, this would imply the existence of a long-lived evaporating object. We can expect that if the dark energy equation of state dominates in the strong field a similar behaviour will appear in general.
To check this we consider a toy model where we add a suitable quadratic correction in $\rho$ to the linear equation of state.
Therefore we consider
\begin{equation}\label{eos}
p=\lambda\rho-\frac{1+\lambda}{\rho_{\rm cr}}\rho^2 \; .
\end{equation}
Note that for positive $\lambda$ the pressure reaches the maximum value $p_{\rm max}=\lambda^2\rho_{\rm cr}/(4+4\lambda)$ at densities below the critical scale, while for $\rho>\rho_{\rm cr}(3\lambda+1)/(3\lambda+3)$ the equation of state crosses into the dark energy regime.
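Both properties can be verified with a few lines of code; the parameter values below are illustrative choices, not values fixed by the text:

```python
# Hedged numerical check of the two stated properties of the toy equation
# of state p = lam*rho - (1 + lam)*rho**2/rho_cr. Parameters illustrative.
lam, rho_cr = 1.0 / 3.0, 1000.0

def p(rho):
    """Toy equation of state, equation (eos)."""
    return lam * rho - (1.0 + lam) * rho**2 / rho_cr

# (i) maximum pressure: dp/drho = 0 at rho = lam*rho_cr/(2*(1 + lam)),
#     where p equals lam**2 * rho_cr / (4 + 4*lam), below the critical scale
rho_star = lam * rho_cr / (2.0 * (1.0 + lam))
p_max = lam**2 * rho_cr / (4.0 + 4.0 * lam)
assert abs(p(rho_star) - p_max) < 1e-9
assert max(p(rho_cr * k / 10000.0) for k in range(1, 10001)) <= p_max + 1e-9

# (ii) the dark energy regime (p < -rho/3) is entered at
#      rho = rho_cr*(3*lam + 1)/(3*lam + 3)
rho_de = rho_cr * (3.0 * lam + 1.0) / (3.0 * lam + 3.0)
assert abs(p(rho_de) + rho_de / 3.0) < 1e-9
print("equation-of-state checks passed")
```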
Then by using equation \eqref{eos} and substituting $p$ and $\rho$ from equations \eqref{p} and \eqref{rho} we get
\begin{equation}
\frac{dM}{da}=\frac{3M}{a}\left(\frac{1+\lambda}{\rho_{\rm cr}}\frac{3M}{a^3}-\lambda\right) \; ,
\end{equation}
which can be easily solved to give
\begin{equation}
M(a)= \frac{M_0}{a^{3\lambda}+\frac{3M_0}{\rho_{\rm cr}a^3}} \; .
\end{equation}
Note that for $\rho_{\rm cr}\rightarrow +\infty$ we recover the usual perfect fluid with linear equation of state. Also note that as $a\rightarrow a_{\rm cr}$ we get $M_{,a}\rightarrow a_{\rm cr}^2\rho_{\rm cr}$, which, once plugged into \eqref{eq}, gives $\ddot{a}\rightarrow 0$.
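That this $M(a)$ solves the differential equation can also be confirmed numerically with a central finite difference; the parameter values are the illustrative ones used later for figure \ref{toy}:

```python
# Hedged numerical check that M(a) = M0/(a**(3*lam) + 3*M0/(rho_cr*a**3))
# satisfies dM/da = (3*M/a)*((1 + lam)*3*M/(rho_cr*a**3) - lam).
M0, lam, rho_cr = 10.0, 1.0 / 3.0, 1000.0   # illustrative values

def M(a):
    return M0 / (a**(3.0 * lam) + 3.0 * M0 / (rho_cr * a**3))

def dMda(a):
    """Right-hand side of the differential equation for M(a)."""
    return (3.0 * M(a) / a) * ((1.0 + lam) * 3.0 * M(a) / (rho_cr * a**3) - lam)

h = 1e-6
for a in (0.5, 0.8, 1.0):
    numerical = (M(a + h) - M(a - h)) / (2.0 * h)   # central difference
    assert abs(numerical - dMda(a)) < 1e-5 * max(1.0, abs(dMda(a)))
print("M(a) solves the equation at the sampled points")
```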
By plugging $M$ into the inverse of the equation of motion \eqref{adot} with initial condition $t(1)=0$ we can integrate to get $t(a)$ (note that to be able to invert we must verify that $a$ is monotonic, which is true until the time at which $\dot{a}=0$) and therefore the scale factor $a(t)$ after inverting. The evolution of the scale factor for this model is shown in the left panel of figure \ref{toy}.
The evolution of the apparent horizon follows the classical behaviour at early times while as $t$ grows $r_{\rm ah}$ goes to infinity. This suggests that the apparent horizon will cross the boundary at a finite co-moving time, thus developing a situation similar to the one discussed in the previous section (see right panel in figure \ref{toy}).
In the case where the exterior metric is modified by quantum effects outside the event horizon we retrieve a Planck density compact remnant that slowly `evaporates'.
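A minimal numerical sketch of this integration, using the marginally bound equation of motion \eqref{adot} and the illustrative parameter values quoted in the caption of figure \ref{toy} ($M_0=10$, $\lambda=1/3$, $\rho_{\rm cr}=1000$), reproduces the qualitative behaviour of the scale factor in the left panel: the semi-classical $a(t)$ keeps decreasing but never reaches zero, whereas classical dust with the same $M_0$ collapses completely at $t=2/(3\sqrt{M_0})\approx 0.21$.

```python
# Hedged sketch: fourth-order Runge-Kutta integration of the semi-classical
# equation of motion adot = -sqrt((M/a)*(1 - 3*M/(rho_cr*a**3))) for the
# toy model, with a(0) = 1 and the illustrative parameters of figure 'toy'.
import math

M0, lam, rho_cr = 10.0, 1.0 / 3.0, 1000.0

def M(a):
    return M0 / (a**(3.0 * lam) + 3.0 * M0 / (rho_cr * a**3))

def adot(a):
    return -math.sqrt((M(a) / a) * (1.0 - 3.0 * M(a) / (rho_cr * a**3)))

def evolve(t_end, dt=1e-4):
    """Integrate a(t) from a(0) = 1 and return the sampled history."""
    a, t, history = 1.0, 0.0, [1.0]
    while t < t_end:
        k1 = adot(a)
        k2 = adot(a + 0.5 * dt * k1)
        k3 = adot(a + 0.5 * dt * k2)
        k4 = adot(a + dt * k3)
        a += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        t += dt
        history.append(a)
    return history

hist = evolve(0.5)   # well past the classical collapse time ~0.21
assert hist[-1] > 0.0                                # no finite-time singularity
assert all(b < a for a, b in zip(hist, hist[1:]))    # monotonic collapse
print("a(0.5) =", hist[-1])
```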
\begin{figure}[h]
\centering
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[scale=0.4]{a.eps}
\put(-207,200){$a$}
\put(-25,15){$t$}
\end{minipage}
\hfill
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[scale=0.4]{ah.eps}
\put(-207,200){$r$}
\put(-25,15){$t$}
\put(-212,117){$r_{b}$}
\end{minipage}
\caption{Comparison between classical homogeneous dust collapse (dashed lines) and semi-classical collapse leading to an almost static compact remnant (solid lines) for a fluid approaching a `dark energy' equation of state.
Left panel: Comparison of the scale factors $a(t)$. In the classical case $a$ goes to zero in a finite time. In the semi-classical case $a$ goes to zero as $t$ goes to infinity.
Right panel: Comparison of the apparent horizon curves $r_{\rm ah}(t)$. In the classical case $r_{\rm ah}$ goes to zero in a finite time. In the semi-classical case the apparent horizon reaches a minimum value and then tends to infinity as $t$ grows, thus indicating that eventually it crosses the boundary of the collapsing object (indicated here by the dotted line of the co-moving radius $r_b$).
The plots have values of the parameters chosen as follows: $M_0=10$, $\lambda=1/3$, $\rho_{\rm cr}=1000$, $r_b=0.25$. Note that in order to have the initial boundary surface not trapped one has to consider $r_b<r_{\rm ah}(0)$ so that if $r_b$ is smaller than the minimum value of $r_{\rm ah}$ no trapped surfaces form.}
\label{toy}
\end{figure}
\subsection{Future observations}
The bouncing scenarios described in the previous sections present some interesting theoretical challenges; however, the most important question is whether they are related to any phenomenon occurring in the real universe. If the UV corrections in the strong field were confined to a close neighborhood of the center, with a classical event horizon developing in the exterior, then we would have no hope of testing these proposals experimentally. However, as we have seen, the classical black hole geometry can be significantly affected by quantum corrections even in the weak gravity regime and outside the horizon. Therefore it is legitimate to consider the observational signatures that these models may imprint on astrophysical phenomena.
To do so, we need to understand the phenomenological features of such models. Then we can hope to check their validity against astrophysical observations.
Astrophysical data on black hole candidates today come essentially from four types of observations: spectra from accretion disks, direct imaging of the black hole shadow (soon to be possible with very long baseline interferometry such as the Event Horizon Telescope), gravitational lensing, and gravitational waves from inspirals of compact objects.
In fact, some of these models may already be constrained by observations, at least in principle. For example, short-lived black holes that end their lives in powerful explosions should already be detectable, either via optical or gravitational wave signals, and the probability of observing one is related to their statistical occurrence in the universe.
To summarize, there are two possible classes of deviations from the classical scenario that can in principle be tested: (i) static and quasi-static final configurations, like regular black holes or horizonless exotic compact objects, and (ii) dynamical scenarios related to the disappearance of the horizon, which could imply gamma-ray explosions or the emission of matter in the white hole phase.
In the first case one can hope to observe and measure the optical properties of accretion disks in the vicinity of the source. For example, soon we might be able to observe whether deviations from the classical Kerr geometry occur in the vicinity of the super-massive black hole candidate at the center of our galaxy.
In the second case one can hope to observe optical phenomena like gamma rays coming from the black hole to white hole transition or the emission of gravitational waves from the bouncing phase. Gravitational waves may also provide indications of deviations from the classical black hole geometry in mergers of binary systems composed of alternative objects.
\textbullet\ Geodesics and lensing:
In
\cite{gid3}
it was argued that modifications to the near-horizon geometry of the black hole due to the effective quantum corrections could appreciably alter geodesics in the space-time, thus offering a concrete possibility to test quantitatively the form of the modifications.
Accretion disks around regular black holes and horizonless space-times were considered for example in
\cite{stu}.
The authors showed that when a horizon is present the differences between the modified geometry and the classical black hole geometry may be impossible to determine efficiently from observations. On the other hand, regular space-times without a horizon are clearly distinguishable, even qualitatively, from black holes.
Gravitational lensing by a regular black hole was studied in
\cite{eiroa},
while in
\cite{jap}
the authors considered geodesic motion for massive particles and photons in the Hayward metric given in
\cite{hayward}
and showed that by observing the shadow of a black hole candidate it may be possible to distinguish whether the space-time is described by this metric or the Schwarzschild metric.
Further in
\cite{me-cosimo}
the K$\alpha$ iron line from accretion disks around exotic compact objects was simulated to show that such disks may produce a spectrum considerably different from the black hole case. In particular, if one allows for circular geodesics to extend all the way to the center and no horizon to be present, then the drop at high frequencies of the spectral luminosity distribution that signals the existence of the event horizon disappears, replaced by a more gradual power-law decrease.
\textbullet\
Direct imaging: In the above category, while at the same time standing on its own, is the case of the super-massive black hole at the center of the Milky Way, Sagittarius A* (Sgr A*). The main reason is that, due to its relative vicinity, the near-horizon region may be directly observable in the case of Sgr A*. The black hole candidate (as well as the one at the center of the galaxy M87) is massive enough and close enough to have an angular diameter in the sky that may be resolved by Very Long Baseline Interferometry observatories, thus offering the chance to obtain direct imaging of the black hole shadow \cite{vlbi}.
For a review of possible observations of Sgr A* see
\cite{Goddi:2016jrs}.
Possible deviations from the Kerr geometry in the vicinity of Sgr A* have been considered by many authors
(see for example \cite{johannsen} and \cite{bambi} and references therein).
In
\cite{haggard}
it was suggested that while quantum corrections outside the horizon may be small, they may have cumulative effects over time and thus the classical behaviour near the horizon may be lost after a sufficiently long time.
In this respect we should add here that, while it is often stated that quantum effects become important for scales of the order of the Planck length (of the order of $10^{-33}$ cm), it is more plausible that it is the Planck density that sets the threshold. Therefore for a stellar mass object the critical length scale at which quantum effects become important would be of the order of $10^{-20}$ cm, thus much larger than the Planck length, even though still much smaller than the horizon scale.
Also, this argument is based on the conservative assumption that no other repulsive effects occur between the Planck density scale and the classical energy scale.
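The order of magnitude quoted above follows from asking at what radius a stellar mass, taken here as one solar mass, reaches the Planck density $\rho_P=c^5/\hbar G^2$. A rough numerical sketch (SI constants rounded):

```python
import math

hbar = 1.055e-34     # J s
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_star = 1.989e30    # kg; one solar mass as the reference stellar mass

rho_planck = c**5 / (hbar * G**2)                  # ~5e96 kg/m^3
r_crit = (3*M_star / (4*math.pi*rho_planck))**(1.0/3.0)

print(r_crit * 100)  # in cm: a few times 1e-21, i.e. of order 1e-20 cm
```

The result, a few times $10^{-21}$ cm, is indeed of the order of the $10^{-20}$ cm quoted in the text, and far larger than the Planck length $\sim 10^{-33}$ cm.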
\textbullet\ Observing white holes:
If the bounce turns the black hole into a white hole, then how does the expanding matter emerge from the horizon?
Roughly speaking there are two main possibilities: An explosive event, where most of the energy is released over a very short time, or an `evaporation' type of process in which the energy is released slowly over a long time. Either way the event should be associated with some kind of electromagnetic emission.
Electromagnetic signals from primordial black holes exploding into white holes were considered in
\cite{vidotto} and \cite{rov-new},
while in
\cite{frb}
a possible connection of such models with observed Fast Radio Bursts was suggested.
It should be noted that if such phenomena occur in the universe then, in order to be detected observationally, there must be nothing else in the neighborhood of the bouncing object. If the white hole is surrounded by a gas cloud or by the remnant of the exploding progenitor star, then the radiation coming from the white hole region will undergo several processes before reaching the observer, thus possibly destroying the quantum signature of the signal.
Models for Gamma Ray Burst explosions, for example, consider a region for the emission of gamma rays that is far from the core of the object. This is a general feature also in stellar collapse. The light coming from supernova explosions is also produced far from the core of the collapsing star. Therefore the connection between the processes happening at the core, such as the bouncing scenarios described here, and observable effects is far from trivial. In a similar way, if matter is present outside the bouncing core, it could be that several processes occur after the outgoing matter passes the white hole horizon, so that the light reaching far away observers would bear almost no information regarding what happens in the vicinity of the white hole.
\textbullet\ Gravitational wave signatures: Another method that may soon be available to test these hypotheses is the observation of gravitational waves.
The waveforms, from inspiral of black hole candidates observed by LIGO
\cite{ligo},
are in perfect agreement with the predictions of GR. However it is possible that modifications to the black hole geometry outside the horizon may cause some yet unseen deviations of the gravitational wave signals from the classical predictions of GR.
These deviations would be related to the length scale at which modifications extend outside the horizon, so that, in principle, current observations could already exclude models for which large scale deviations would produce a significantly different waveform
(see for example \cite{gid4}).
However in
\cite{Konoplya:2016pmh}
it was shown that, due to the large uncertainty in the determination of mass and angular momentum of the merging black holes, the present observed gravitational wave signals do not rule out alternative theories of gravity. The possibility that such signals may be produced by merging wormholes was investigated in
\cite{Konoplya:2016hmd}
while in
\cite{Konoplya:2011qq}
the authors reviewed quasinormal modes in merger of black holes in different theories.
Another possibility is given by gravitational waves emitted by matter emerging from the horizon during the white hole phase. This is, in some sense, the white hole equivalent of gravitational waves produced during collapse to a black hole.
Then, the discovery of a distinctive gravitational wave signature from such exploding phenomena could help to determine what kind of scenario (if any) occurs in nature.
In this context, there are two kinds of instabilities that may play important roles in the observational outcome of bouncing scenarios: the white hole instability, described in section \ref{trans}, and the ergoregion instability, namely the fact that rapidly rotating objects with an ergoregion but no horizon are unstable. The latter instability results from scattered waves having larger amplitudes than incident waves, thus leading to an energy loss by the scattering body that may grow exponentially.
The instability occurs in any rotating object without a horizon but becomes non-negligible only in very compact objects with high angular momentum
(see \cite{zeld1}, \cite{staro} and \cite{beck}).
The white hole instability suggests that the white hole phase must be short compared to the black hole phase, and therefore most of the matter from the outgoing cloud must be released in a short time.
The ergoregion instability suggests that, as gravitational waves are produced by a rotating cloud coming out from the horizon, the emission must also happen over short timescales, and therefore the amount of energy liberated per unit time must be large. Both arguments point to an explosive phenomenon for the phenomenological description of the outgoing matter emerging from the white hole horizon.
\section{Discussion}\label{discussion}
The final fate of gravitational collapse has been a very fruitful area of research in GR for decades. Its importance spans from fundamental questions about the nature of gravity to astrophysics.
\textbullet\
From the theoretical standpoint, black holes sit at the crossroad of several different disciplines. It is clear today that the black hole horizon is not a purely classical entity and it requires the addition of quantum physics to be properly understood. The description of the horizon that comes from our present understanding of GR and quantum mechanics is incomplete and presents some puzzles that, once solved, may open the way to a theory of quantum gravity.
The final fate of collapse and the true nature of black holes have important consequences for long standing open questions related to information loss, unitarity and black hole firewalls
(for recent reviews see \cite{info}, \cite{info2}, \cite{info3} and references therein).
In the present article we have chosen to avoid a detailed discussion of such problems in order to concentrate on the astrophysical aspects. Nevertheless it is important to remark that both issues are two sides of the same coin and that a change in our understanding of one side will necessarily lead to a change on the other side.
For example, we have seen how a well defined UV completion of GR may lead to the resolution of the classical black hole singularity and this may eventually lead to the solution of the problems mentioned above.
\textbullet\
From the point of view of astrophysics, the quantum resolution of classical singularities that arise in dynamical models at the end of collapse means that black holes, as described by the Schwarzschild and Kerr solutions, may not exist in the universe. Astrophysical black holes may possess horizons with a finite lifetime and may turn into white holes towards the end of their lives, possibly leaving behind some remnant of intrinsically quantum nature.
This leads naturally to the question whether such predictions can be tested against observations. We have seen that some modifications to the dynamical behaviour of the collapsing cloud in the strong field may bear implications for the space-time geometry also at distances where one would expect only classical solutions to hold. Such models predict a change in the near horizon geometry that may have observable consequences. The effects of such change vary greatly from model to model and there is hope that present or future observations will be able to put constraints on the proposed scenarios.
\textbullet\ All the models studied in the literature are extremely simplified.
In particular, marginally bound collapse of adiabatic, non-dissipative fluids leads to the complete time reversal of the solution after the bounce.
The physical validity of these solutions relies on the assumption that more realistic models will not change the qualitative picture. Therefore one future direction to explore involves considering improved theoretical models, and it is safe to assume that numerical simulations will be necessary in order to solve more realistic scenarios.
Another question that future research will address is the possibility of the existence of periodic solutions. Such `oscillating' black holes, periodically drifting inside and outside the horizon, could hint at new astrophysical phenomena.
\textbullet\ As already mentioned, future observational tests of gravity in the strong field will put constraints on these proposals and hopefully (if there is a quantum signature in the near-horizon geometry) indicate the way forward. This is not a far-fetched speculation. Measurements of the properties of the gravitational field near the horizon of black hole candidates are already possible with gravitational wave detectors such as LIGO and VIRGO, and at present it cannot be excluded that they possess a signature of quantum gravitational nature.
Gravitational wave signals from exotic compact objects have been discussed in
\cite{gw1}, while in
\cite{gw2} `echoes' in the signals arising from deviations from classical GR have been discussed.
Macroscopic effects in the signals due to corrections in the near-horizon geometry have been used in
\cite{gw3} in order to classify different proposals that present echoes and distinguish them from models that do not present echoes.
Finally in
\cite{gw4}
the general features of such echoes have been investigated in relation to quasi-normal modes.
\textbullet\ Also, imaging of the shadow of the black hole candidate at the center of the Milky Way will soon be possible with the Event Horizon Telescope, a tool that may already provide hints on the quantum nature of the near-horizon geometry
\cite{eht}.
Further to this, unexplained explosive phenomena have been observed in the Universe; think, for example, of Fast Radio Bursts or ultra-luminous X-ray sources. At present there is no conclusive connection between any observed phenomenon and the quantum gravity signature of exotic compact objects. However it is indeed possible that the experimental counterparts of these theoretical models have already been observed and not yet recognized.
Therefore these are exciting times for black hole physics: a better understanding of gravity in the strong field is emerging and will soon be tested experimentally, and it is indeed possible that, when enough data are available, nature will surprise us with some unforeseen effect.
\section{Introduction}
In most cosmological models, the matter is dealt
with as a continuous medium. The aim of the present paper is to point
out how relevant are the gravitational effects which are due to the
discreteness of matter, thought of as constituted of galaxies
described as point particles, if one takes into account both the role
of retardation of the forces (as required by general relativity) and
the correlated nature of the positions of the galaxies. Concerning
the role of discreteness, the key point is that the gravitational
force on a test particle due to continuous matter with a spherically
symmetric density vanishes. Instead, for a matter constituted of point
particles whose positions are dealt with as random variables with a
spherically symmetric probability distribution, it is only the mean
gravitational force that vanishes, while the fluctuations can be very
large. This actually is the point of view that was taken by
Chandrasekhar and von Neumann in connection with the motions of stars
(see the review \cite{chandra}). They showed that, if the positions
of the stars about a test particle are considered as (independent)
random variables, then the force on the test particle may be very
large; actually, this happens with so huge a probability that the
variance of the force is even divergent. It will be shown here that
very large fluctuations of the force on a test particle occur also in
the case of galaxies. However, while in the case of stars this is due
to the occurrence of close encounters, in the case of galaxies the
largeness of the fluctuations is instead due to the gravitational
contribution of the far galaxies, when one takes into account both the
retarded character of their action and the correlated nature of the
positions of the galaxies.
A probabilistic approach in a cosmological context, with galaxies
described as point particles, whose positions are dealt with as random
variables presenting correlations, is a familiar one since a rather
long time; see for example the book of Mandelbrot \cite{mandel}, the
book \cite{peebles} by Peebles and the work \cite{dp} by Davis and
Peebles. Particular emphasis on the possible fractal nature of matter
distribution was given, in addition to Mandelbrot, by several
authors. See for example the reviews \cite{sylos} by Sylos Labini et
al., \cite{coleman} by Coleman and Pietronero and \cite{combes} by
Combes, and the works \cite{remo} by Ruffini et al., \cite{roma2} and
\cite{roma3} by Gabrielli et al., and finally the work \cite{roma} by
Joyce et al. Now, in all such papers the nonrelativistic
approximation for gravitation was used, and retardation was altogether
neglected, so that one is actually dealing with purely static
Newtonian gravitational forces.
The main original contribution of the present paper consists in
showing that, if retardation is taken into account (together with
Hubble's law and the correlated nature of the positions of the
galaxies), then the gravitational action of far away matter enters the
game and may, in some cases, be the dominant one.
This will be shown by considering an extremely simplified model of the
Universe, with the Hubble constant hold fixed at its present value
$H_0$. Two results will be obtained. First we show that the influence
of the far away galaxies can be described as corresponding to the
existence of an effective density which is about five times larger
than the present baryonic one, i.e., about
estimated density of dark matter. Then we look at the force on a test
particle. We show that, if the correlated nature of the positions of
the galaxies is taken into account, the force (per unit mass) can be
estimated as given by $0.2\, c H_0$ ($c$ being the speed of light),
which is about the value of the acceleration at which the influence of
dark matter starts to be felt. Such results thus appear to indicate
that far away matter may produce gravitational effects comparable to
those usually attributed to local dark matter.
We finally give a preliminary discussion of the problem whether such
an estimate of the gravitational action of far away matter may
account, through the virial theorem, for the observed velocity
dispersion in clusters of galaxies. We show that this is possible,
provided the gravitational force of far away matter has a suitable
property concerning its dependence on position. Namely, the force
should not be smooth, and its values at two separated points should
rather be uncorrelated. We point out how the extremely simplified
model here considered may not suffice to settle the question whether
such a decorrelation property should hold, because the answer may
require the introduction of a more realistic model, in which the time
variation of Hubble's constant is taken into account. We leave the
discussion of this interesting point for future work, and in the
present paper we limit ourselves to exhibit, through the simplest
conceivable model, how relevant the role of far away matter may be for
cosmology, if retardation of the forces (in addition to the correlated
nature of the positions of the galaxies) is taken into account.
\section{Definition of the model}
In order to fully take the discrete character of matter into account,
one should in principle deal with an $N$--body problem, in which each
particle is coupled to the gravitational field through the Einstein
equation having all the other particles as sources. This is however a
formidable task. So we introduce first of all the approximation in
which one looks at the motion of a test particle, when the motion of
the sources is assigned, as given by observational cosmology. This
will naturally lead to a compatibility problem, because the test
particle too will have to move according to the same law. It will be
shown how this compatibility condition is solved through the
introduction of a suitable effective density.
As the simplest model for the motion of the sources, we take a
velocity field which corresponds to Hubble's law, i.e., we neglect
altogether the peculiar velocities (a further comment on this point
will be given later). Taking a locally Minkowskian coordinate system
centered about an arbitrary point, a particle with position vector
$\vett q$ will then have a velocity
\begin{equation}\label{hubble}
\dot{\vett q}=H_0 \, \vett q\ .
\end{equation}
For the sake of simplicity of the model, the Hubble constant $H_0$
will be assumed to be independent of time. On this point we will come
back later on. It is easily established that the chart has a local
Hubble horizon $R_0=c/H_0$, where the galaxies have the speed of
light. Notice furthermore that the form (\ref{hubble}) of Hubble's
law is the one appropriate to our choice of Minkowskian coordinates.
For example, one could choose, as Davis and Peebles (but not Joyce et
al.) do, coordinates \emph{``expanding with the background
cosmological model''}, with respect to which the galaxies have zero
velocity (the peculiar velocities having been neglected); see formula
(1), page 426, of \cite{dp}. Our choice of coordinates is perhaps
more convenient in the present case, but obviously, just by
definition, the results do not depend on the choice of the coordinates
at all.
So we investigate the gravitational action due to a system of $N$
galaxies whose motions $\vett q_j(t)$, $j=1,\ldots ,N$, are assigned.
The energy--momentum tensor $T^{\mu\nu}$ then is
\begin{equation}\label{timunu}
T^{\mu\nu}=\sum_{j=1}^N \frac 1 {\sqrt g}\ \frac {M_j}{\gamma_j}\
\delta (\vett x-\vett q_j) \dot{\vett q}^\mu_j \dot{\vett q}^\nu_j
\end{equation}
where $M_j$ and $\gamma_j$ are the mass and the Lorentz factor of the
$j$--th particle, $g$ is the determinant of the metric tensor (which
is considered as an unknown of the problem), $\delta$ the Dirac delta
function, and the dot denotes derivative with respect to proper time
along the worldline of the source. The velocities of the galaxies are
assumed to satisfy Hubble's law (\ref{hubble}), while their position
vectors $\vett q_j$ are considered as random variables, whose
statistical properties will be discussed later.
\section{The perturbation approach}
The study of the solutions of Einstein's equations with the
energy--momentum tensor (\ref{timunu}) as a source still is a
formidable task, and so we limit ourselves to a perturbation approach,
in which the energy--momentum tensor $T^{\mu\nu}$ (\ref{timunu}) is
considered as a perturbation of the vacuum. Following the standard
procedure (see \cite{einstein} or \cite{wein}), we have to determine a
zero--th order solution (the vacuum solution), and solve the Einstein
equations, linearized about it. The simplest consistent zero--th
order solution is the flat metric, because it will be shown that,
coherently, the perturbation turns out to be small (at least if the
free parameters are chosen in accordance with the observations). We
did not investigate whether there exist other \emph{ansatzes} for
the vacuum which give better results. Some further comments on the
perturbation procedure will be given later.
Thus the metric tensor $g_{\mu \nu}$ is written as a perturbation of
the Minkowskian background $\eta_{\mu \nu}$, namely, as $g_{\mu
\nu}=\eta_{\mu \nu} +h_{\mu \nu}$, and it is well known that in the
linear approximation the perturbation $h_{\mu\nu}$ has to satisfy
essentially the wave equation with $T^{\mu\nu}$ as a source. More
precisely, one gets
\begin{equation}\label{aaaa}
\square \big[ h_{\mu\nu}-\frac 12 \eta_{\mu\nu}h\big]= -\frac{16\pi
G}{c^4} T_{\mu\nu}\ ,
\end{equation}
where $G$ is the gravitational constant, $h$ the trace of
$h_{\mu\nu}$, and $\square=(1/c^2)\partial^2_t-\Delta_2$. The
solutions are the well known retarded potentials
\begin{equation}
\label{campo}
h_{\mu\nu}=\frac {-2 G}{c^4 }\, \, \sum_{j=1}^N \frac {
M_j}{\gamma_j}\, \left. \frac {2 {\dot q}^{(j)} _\mu {\dot q}^{(j)} _\nu -c^2\eta_{\mu\nu}}
{|\vett x-{\vett q}_j|}\right|_{t=t_{\mathrm{ret}}} \
\end{equation}
(with ${\vett q}^{(j)}\equiv {\vett q}_j$).
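To make the retardation explicit: for a source comoving with the Hubble flow \eqref{hubble}, the radial trajectory is $q(s)=q_0 e^{H_0 s}$, and the retarded time solves the light-cone condition $|\vett q(t_{\rm ret})|=c\,(t-t_{\rm ret})$. A small root-finding sketch (units $c=H_0=1$; the function name is ours):

```python
import math

def retarded_time(q0, t, H0=1.0, c=1.0, tol=1e-12):
    """Retarded time s solving c*(t - s) = q0*exp(H0*s).

    The source recedes radially with the Hubble flow, q(s) = q0*exp(H0*s)
    (the trajectory implied by eq. (hubble)); the observer sits at the
    origin.  Bisection works because f is positive far in the past and
    negative at s = t (for q0 > 0).
    """
    f = lambda s: c*(t - s) - q0*math.exp(H0*s)
    lo, hi = t - 1e3, t
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5*(lo + hi)

s = retarded_time(0.5, 0.0)                   # source at q0 = 0.5 Hubble radii
print(s)                                      # ~ -0.3517
print(abs(-s - 0.5*math.exp(s)) < 1e-9)       # light-cone condition satisfied
```

Note that for $q_0\rightarrow 1$ (the Hubble horizon in these units) the retarded time recedes without bound, which is why the far away sources are evaluated at epochs approaching the big bang, as discussed below.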
\section{The mean metric, the compatibility condition and the
effective density}
In order to implement in a suitable sense the compatibility condition
previously mentioned,
we now make reference to the mean metric,
which is obtained by averaging over the position vectors of the
galaxies, considered as random variables.
For a spherically symmetric probability distribution it is immediately seen
that the mean of each of the off--diagonal terms vanishes, and that
the means of the spatial diagonal components are all equal.
Denoting the mean by
$\langle\, .\, \rangle$, the mean metric at the origin is then
$$
ds^2= \langle\ {g}_{\mu\nu}\ \rangle\ dx^\mu dx^\nu=
(1-\alpha-3\beta)\,
c^2 dt^2 -
(1+\alpha+\beta) dl^2
$$
where $dl^2=dx^2+dy^2+dz^2$ and
\begin{equation}\label{coef}
\alpha=\frac {2G}{c^2}\ \langle\ \sum_j \frac {M_j}{|\vett q_j|}\
\rangle \ ,\quad
\beta \lesssim \frac {4GH_0^2}{3c^4}\
\langle\ \sum_j M_j {|\vett q_j|}\ \rangle\ .
\end{equation}
This actually is a spatially flat Friedmann--Robertson--Walker
metric. We can now formulate the compatibility
condition as the requirement that the expansion rate corresponding
to such a metric coincide with the one ($H_0$) that was assumed for the
sources. The condition then takes the form
\begin{equation}\label{consistenza}
\frac 12\frac {d}{dt} \log \frac {1+\alpha+\beta}{1-\alpha-3\beta}=H_0\ .
\end{equation}
The sums (\ref{coef}) defining the coefficients $\alpha$ and $\beta$
might be estimated through integrals involving a suitable effective
matter density. There arises however the problem that, due to the
retarded character of the time entering the expressions for $\alpha$
and $\beta$, the galaxies lying near the border of the chart are to be
taken at times near that of the big bang, at which the density
diverges. This by the way shows that only the galaxies at the border
are the relevant ones. This very fact, however, also allows one to
solve the problem just mentioned, because one can then introduce an
effective density $\rho_{\mathrm{eff}}$ having the property that both
relations
$$
\langle\ \sum \frac {M_j}{|\vett q_j|}\ \rangle
\simeq 4\pi
\rho_{\mathrm{eff}} \ \frac{{R_0}^2}2 \ , \quad
\langle\ \sum M_j {|\vett q_j|}\ \rangle\ \simeq 4\pi \rho_{\mathrm{eff}}
\frac{{R_0}^4}4 \
$$
hold, with the same effective density.
This gives
\begin{equation}\label{aaa}
\alpha\simeq {4\pi G} \rho_{\mathrm{eff}} {{R_0}^2}/{c^2}\ ,\quad
\beta\ < (2/3) \alpha\ .
\end{equation}
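The two replacements above can be checked in the simplest case of $N$ unit masses distributed uniformly and independently in a ball of radius $R_0$, for which $\langle\sum 1/|\vett q_j|\rangle=3N/(2R_0)$ and $\langle\sum |\vett q_j|\rangle=3NR_0/4$, matching the expressions with $\rho_{\mathrm{eff}}=3N/(4\pi R_0^3)$. A quick Monte Carlo sketch (retardation and correlations ignored here):

```python
import math, random

rng = random.Random(1)
N, R0 = 200_000, 1.0

# Radii of points uniform in a ball: P(r) ~ r^2, so r = R0 * u^(1/3)
# (1 - random() lies in (0, 1], avoiding r = 0)
radii = [R0 * (1.0 - rng.random())**(1.0/3.0) for _ in range(N)]

rho = 3*N / (4*math.pi*R0**3)          # number density of unit masses

ratio_inv = sum(1.0/r for r in radii) / (4*math.pi*rho*R0**2/2)
ratio_lin = sum(radii) / (4*math.pi*rho*R0**4/4)
print(ratio_inv, ratio_lin)             # both ~ 1
```

Both ratios come out equal to one within the Monte Carlo error, confirming that a single effective density can serve in both sums.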
Using ${\dot R}_0=c$, one then gets
$$
\dot \alpha\simeq \frac {8\pi G}{c^2} \ \rho_{\mathrm{eff}}\ R_0c\ , \quad
\dot \beta \simeq \frac 23 \dot\alpha \ .
$$
With these expressions for $\dot \alpha$ and $\dot \beta$, the consistency
condition (\ref{consistenza}) then becomes an algebraic one, which gives
for $\rho_{\mathrm{eff}}$ the value
\begin{equation}\label{rhoeff}
\rho_{\mathrm{eff}}\simeq \frac 14\, \frac {3H_0^2}{8\pi G}\simeq 5 \rho_0 \ ,
\end{equation}
where $\rho_0=\Omega_0\, \big({3H_0^2})/\big({8\pi G})$, with
$\Omega_0\simeq 0.05$, is the observed baryonic density at the present
time. Notice by the way that the perturbation procedure appears to be
qualitatively consistent, because the first--order perturbation turns
out to be small, of the order of one tenth the unperturbed one.
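The arithmetic behind the estimate \eqref{rhoeff} is straightforward (a back-of-the-envelope sketch in SI units, taking $H_0\simeq 70$ km/s/Mpc):

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
H0 = 2.27e-18        # s^-1, i.e. ~70 km/s/Mpc

rho_crit = 3*H0**2 / (8*math.pi*G)   # critical density, ~9e-27 kg/m^3
rho_eff = rho_crit / 4               # eq. (rhoeff)
rho0 = 0.05 * rho_crit               # observed baryonic density, Omega_0 ~ 0.05

print(rho_eff / rho0)   # 5.0
```

With $\Omega_0\simeq 0.05$ the ratio $\rho_{\mathrm{eff}}/\rho_0=(1/4)/\Omega_0$ is indeed five, independently of the precise value of $H_0$.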
This is the first result of the present paper. Due to the retarded
nature of the potentials, it turns out that the far away galaxies are
the ones that give the dominant contribution to the mean metric of the
Universe. Moreover, the consistency condition that the expansion rate
obtained with such a mean metric be equal to $H_0$, determines the
value of a corresponding effective density, which is about five times
the observed baryonic one, i.e., about equal to the estimated density
of the dark matter.
\section{Form of the force due to the far away galaxies}
So far for what concerns the mean metric. We now come to the problem
of estimating the effects of the fluctuations on the dynamics of a
test particle. The force per unit mass on a test particle is obtained
in the familiar way through the equation for the geodesics. Notice
that the Hubble relation (\ref{hubble}) has here an essential
impact. Indeed, the force contains both a term decreasing as $1/r^2$,
which is proportional to the velocity of the source, and a term
decreasing as $1/r$, which is proportional to the acceleration of the
source. Thus, estimating the acceleration too through Hubble's law,
the latter term actually turns out not to depend on distance at all,
and thus it is again the far away matter that is found to give the
dominant contribution. Compare this with the way in which Mach's
principle was dealt with in \cite{einstein} (see page 102). There,
lacking Hubble's law, the velocities of the sources were
neglected. Thus, only the Newtonian, fast decaying, potential was
considered, and so only the near matter, and not the far one, appeared
to play a role.
So we address our attention to the dominant term of the gravitational
force per unit mass, namely, the one proportional to the acceleration
of the source. Such a term, which we denote by $\vett f$, has, at the
origin, the form
\begin{equation}\label{vettore}
\vett f= \frac {4G H_0^2\, M}{c^2}\ \vett u\ ,\quad \quad \vett u(N)=
\sum_{j=1}^N \frac{\vett q_j}{|\vett q_j|}\
\end{equation}
where the positions ${\vett q}_j$ of the $N$ galaxies are evaluated at the
corresponding retarded times. Here, the masses of the galaxies were
all put equal to a common value $M$, and the Lorentz factors
$\gamma_j$ were put equal to 1, for the reasons to be illustrated
later. So, apart from a multiplicative factor, such a force just
reduces to the sum of the unit vectors pointing to each of the
galaxies at the corresponding retarded time. Actually, our attention
was addressed to the component of such a force $\vett f$ along a given
direction. Such a component will be simply denoted by $f$, and the
corresponding component of $\vett u$ by $u$.
\section{Estimate of the force. Role of the probabilistic assumptions
for the distribution of galaxies}
Having determined the quantity $f$ of interest (or equivalently $u$),
we now come to the problem of how to describe the distribution of the
galaxies. It is immediately seen that $f$ exactly vanishes (at any
point) if the matter is described as a continuous medium with a
spherically symmetric density. From the probabilistic point of view
considered here, such a result (the vanishing of $f$ for a spherically
symmetric matter density) now reads as the vanishing of the mean value
of $f$ for a spherically symmetric probability density of the position
of a galaxy.
We thus come to an estimate of the variance of the force $f$ (or of
$u$). It will be seen that the result depends on the further
assumptions one introduces concerning the spatial distribution of the
galaxies. Assume first that the positions $\vett q_j$ of the $N$
galaxies are independent random variables, uniformly distributed with
respect to the Lebesgue measure. Then the sum defining $u$ is found to
grow as $\sqrt N$, just in virtue of the central limit theorem. For
what concerns the estimate of the force on a test particle, one easily
sees that with the present assumption it is completely negligible,
just because the considered sum behaves as $\sqrt N$ rather than as
$N$ (see later).
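The $\sqrt N$ growth claimed here for independently distributed positions can be checked with a short Monte Carlo experiment (an illustration only; for the sum $u$ only the directions of the $\vett q_j$ matter, and the $x$-component of an isotropic unit vector is uniform on $[-1,1]$):

```python
import random

def u_component(N, rng):
    """One realization of the x-component of u = sum of unit vectors toward
    N independently, isotropically placed galaxies."""
    return sum(rng.uniform(-1.0, 1.0) for _ in range(N))

rng = random.Random(0)
trials = 2000
for N in (100, 400):
    var = sum(u_component(N, rng) ** 2 for _ in range(trials)) / trials
    print(N, var / N)  # var/N stays near 1/3, i.e. sigma_u grows as sqrt(N)
```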
So we modify such an assumption and, following all the previously
mentioned authors, we consider the case in which the position vectors
of the galaxies present a correlation, i.e., are no more
independently distributed. Thus, the sum defining $u$ is no more
constrained to grow as $\sqrt N$, and can have a faster growth. Just
for the sake of concreteness, we fix our model by requiring that the
probability density corresponds to a fractal of dimension $2$. In
such a way, however, the analytical computation of the probability
distribution of the force becomes a quite nontrivial task with
respect to the Poissonian case considered by Chandrasekhar and von
Neumann, and also with respect to the fractal, but purely Newtonian,
case considered by Gabrielli et al. in the papers \cite{roma2} and
\cite{roma3}. So we are forced, at least provisionally, to
investigate the problem by numerical methods.
We proceeded as follows. In order to estimate the sum defining $u$,
the positions of the $N$ galaxies were extracted (with the method
described in \cite{mandel}) in such a way that the mass distribution
has fractal dimension $2$. We then studied the dependence of $u$ on
the number $N$ of galaxies, which was made to vary in the range
$1000\le N\le 512,000$, the density being kept constant. This means
that the positions of the $N$ points were taken to lie inside a cutoff
sphere whose volume was made to increase as $N$. For the values of $N$
investigated, the corresponding radius turns out to be so small with
respect to the present horizon, that the Lorentz factors $\gamma$
could altogether be put equal to $1$ (as was previously assumed), and
more in general the special relativistic character of our model was
actually justified.
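A standard way to produce a point set of prescribed fractal dimension $D$ is a Rayleigh--L\'evy flight (in the spirit of \cite{mandel}): step lengths distributed as $P(\ell'>\ell)=(\ell_0/\ell)^D$ with isotropic directions. The sketch below illustrates only the sampling step; reproducing the quoted $\sigma_u^2\simeq 0.2\,N^2$ requires averaging over many realizations at the $N$ values listed in the text.

```python
import math, random

def levy_flight(N, D=2.0, ell0=1.0, seed=1):
    """Rayleigh-Levy flight: N points whose step lengths obey
    P(length > ell) = (ell0/ell)**D, ell >= ell0, which produces a point
    set of fractal (correlation) dimension D."""
    rng = random.Random(seed)
    x = y = z = 0.0
    pts = []
    for _ in range(N):
        ell = ell0 * (1.0 - rng.random()) ** (-1.0 / D)  # inverse-CDF sampling
        c = rng.uniform(-1.0, 1.0)                       # cos(theta), isotropic
        s = math.sqrt(1.0 - c * c)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        x += ell * s * math.cos(phi)
        y += ell * s * math.sin(phi)
        z += ell * c
        pts.append((x, y, z))
    return pts

pts = levy_flight(1000)
# the sum defining u, Eq. (vettore), for one realization:
u = sum(z / math.sqrt(x*x + y*y + z*z) for x, y, z in pts)
```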
The mean of $u$ turns out to practically vanish for all $N$, while its
variance $\sigma^2_u$ is found to grow as $N^2$ (actually, as $0.2\,
N^2$), rather than as $N$, as occurs in the uniform case. This is
shown in Fig.~\ref{fig2}. The standard deviation $\sigma _f$ is thus
proportional to $N$, being given by
\begin{equation}\label{sigmaf}
\sigma_f\simeq \sqrt{0.2}\ \frac {4GH_0^2}{c^2}\ MN\ =\sqrt{0.2}\
\frac {4G}{R_0^2}\ MN \ .
\end{equation}
\begin{figure}
\begin{center}
\includegraphics[width= \textwidth]{figura1.pdf}
\end{center}
\caption{\label{fig2} The variance $\sigma^2_u$ of $u$ versus the
number $N$ of galaxies in log--log scale. The dashed line is the
curve $\sigma_u^2=0.2\ N^2$.}
\end{figure}
We now take such a result, which was obtained for extremely small
values of $N$, and extrapolate it up to the present horizon
$R_0=c/H_0$, i.e., we insert in formula (\ref{sigmaf}) the actual
value of $N$, so that the quantity $MN$ can be identified with the
total visible mass of the Universe. The latter can be written as $MN=
(4/3)\, \pi\ \rho_{\mathrm{eff}}\ R_0^3$, in terms of the effective density
$\rho_{\mathrm{eff}}$ previously discussed. This gives $\sigma_f\simeq 0.2\,
cH_0$.
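The extrapolated estimate follows by combining (\ref{rhoeff}) and (\ref{sigmaf}); in units $G=c=H_0=1$ (so that $R_0=1$) the check is immediate:

```python
import math
# Units G = c = H_0 = 1, so R_0 = 1 and c*H_0 = 1.
G = c = H0 = 1.0
R0 = c / H0
rho_eff = 0.25 * 3 * H0**2 / (8 * math.pi * G)   # Eq. (rhoeff)
MN = (4.0 / 3.0) * math.pi * rho_eff * R0**3     # total mass within the horizon
sigma_f = math.sqrt(0.2) * 4 * G / R0**2 * MN    # Eq. (sigmaf)
print(sigma_f / (c * H0))  # -> 0.2236..., i.e. ~0.2
```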
On the other hand, if a random variable $f$ has zero mean and a finite
variance $\sigma^2_f$, with great probability its modulus will take on
values very near to its standard deviation $\sigma_f$. In this
sense we may say that we have found
\begin{equation}\label{forza}
|f|\simeq 0.2 \, cH_0\ ,
\end{equation}
which constitutes the second result of the present work. Namely, in
our oversimplified model the force per unit mass, i.e., the
acceleration, exerted by the far matter on a test particle, is found
to have a value of the order of $cH_0$, which is the one that is met
in most cases in which the presence of a dark matter is
advocated. Notice that the assumption of a uniform, rather than
correlated, distribution of matter would lead instead to $|f|\simeq
cH_0 /{\sqrt N}$, i.e., essentially to $f\simeq 0$. Namely, as
previously mentioned, without the correlation hypothesis for the
positions of the galaxies, the usual procedure of altogether neglecting
the gravitational contribution of the far away matter would be
justified. We expect that the coefficient $\sqrt{0.2}$ in
(\ref{sigmaf}) may depend on the degree of correlation chosen for the
positions of the galaxies. This point will be investigated elsewhere.
Notice that this force, acting on each test particle, also acts on
each galaxy itself, thus producing an acceleration which should be
added to the one given by Hubble's law. On the other hand, such an
acceleration was neglected in our model, because the peculiar
velocities were assumed to vanish, so that we have here a consistency
problem. In this connection we notice that this acceleration is small
with respect to the Hubble acceleration $H_0c$ of the far away galaxies
(which are the relevant ones), so that our procedure seems to be
consistent. A more accurate discussion of this point is left for
future work.
\section{Possible application to the virial of the forces for
a cluster of galaxies}
We now address the problem whether the previous result may be applied
to estimating the virial of the external forces for a cluster of
galaxies. We have in mind the work of Zwicky \cite{zwi} for the Coma
cluster, in which the contribution of the internal matter was found to
be negligible, that of the external galaxies was not even mentioned
(perhaps, in the spirit of the continuum approximation), and the
presence of a dark matter was proposed. Let us recall that, according
to the virial theorem, for a confined system constituted by $n$
particles (think of a cluster of galaxies) one has
$\overline{\sigma^2_v} = - \overline{\mathcal{V}}\,/n$. Here, $\sigma^2_v=(1/n)\,
\sum_i v_i^2$ is the variance of the velocity distribution of the
particles (the galaxies of the cluster), whereas $\mathcal{V}=\sum_i \vett
f_i\cdot \vett x_i$ is called the virial of the forces (per unit
mass), $\vett x_i$ denoting the position vector of the $i$--th
internal particle with respect to the center of mass of the cluster,
while the overline denotes time--average. It was shown by Zwicky that the
contribution of the internal forces is negligible, so that in
estimating the virial we can just consider the force due to the
external galaxies.
It is well known that the virial of the external forces (per unit
mass) equals the virial of the tidal force (per unit mass) $f-f^*$
where $f^*$ is the force (per unit mass) at the center of mass,
because the contribution of $f^*$ vanishes. The key point now is that
the contribution of the tidal forces depends on the smoothness
properties of the field of force $f$. Indeed it turns out that, if the
field is smooth, so that the tidal force can be estimated through a
Taylor expansion, then one finds $\overline{\mathcal{V}}\,/n\simeq H_0^2L^2$
where $L$ is the linear dimension of the cluster. For the Coma cluster
this contribution turns out to be negligible.
If one instead assumes that the forces at different points be
uncorrelated, then it turns out that the contribution may be of the
correct order of magnitude.
Indeed this assumption has two deep consequences. The first one is
that it makes conceivable that locally, in some regions, the random
field of force may form patterns of a central--like type, which are
attractive towards a center, with a nonvanishing force at the
center. By the way, this is equivalent to the fact that locally, in
such special regions, the external far away matter produces a
pressure. The second consequence is that in such a case the standard
deviation of the tidal force $f-f^*$ just equals $\sqrt{2}$ times that
of the force $f$, the estimate of which was given in formula (\ref{forza}).
So, having assumed that the tidal force be of central--like type, the
terms of the sum $\sum_{i=1}^n \overline {(\vett f_i-\vett f^*)\cdot
\vett x_i}$ can be estimated as $\overline {(\vett f_i-\vett f^*)\cdot
\vett x_i}\simeq - \sqrt{2}|f|\ \overline {|\vett x_i|} $, with $f$
given by (\ref{forza}), and with $\overline {|\vett x_i|}\simeq L/4$,
where $L$ is the diameter of the cluster. So, for the velocity
variance one gets
\begin{equation}
\label{varianza}
\overline{\sigma_v^2}\simeq \sqrt{2}\times 0.05\ cH_0 L\simeq 0.07\
cH_0 L \ .
\end{equation}
In the case of Coma one thus finds a value $\simeq 8 \cdot 10^5
\mathrm{km}^2/\mathrm{sec}^2$, which is very near to the value $5
\cdot 10^5 \, \mathrm{km}^2/\mathrm{sec}^2$ reported by Zwicky.
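The numerical coefficient in (\ref{varianza}) is just arithmetic from $|f|\simeq 0.2\,cH_0$, the factor $\sqrt 2$ for the tidal force, and $\overline{|\vett x_i|}\simeq L/4$:

```python
import math
# coefficient of c H_0 L in Eq. (varianza): sqrt(2) * 0.2 * (1/4)
coeff = math.sqrt(2) * 0.2 / 4
print(coeff)  # -> 0.0707..., the "0.07" of Eq. (varianza)
```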
The prediction that the velocity variance depends linearly on $L$,
according to (\ref{varianza}), may be of interest, and apparently is
in agreement with the observations (see \cite{kazanas}, Fig. 2, page
539, and \cite{combes}). Notice that, with the parameters entering
the problem, the square of a velocity can be formed only as $c^2$, or
as $cH_0L$ or as $(H_0L)^2$. But the first term is by far too large,
the last term (as previously pointed out) by far too small, while the
term linear in $L$ is indeed about of the correct order of magnitude.
Thus, the previous considerations appear to indicate that the
decorrelation assumption for the forces at different points is
necessary in order that the observed velocity dispersion in a cluster
may be ascribed to the gravitational action of the far away galaxies.
We now briefly address the question of understanding which mechanisms
might be responsible for such a decorrelation. We have in mind two
mechanisms. The first one is suggested by the analysis made in the
paper \cite{roma} of Joyce et al., in which the Newtonian contribution
to the tidal force is estimated, albeit in a different context. Indeed
in such a paper it is shown (see page 418) that the Newtonian
contribution to the tidal force is finite, whereas the purely
Newtonian non tidal contribution would be divergent, at least for
certain values of the fractal dimension. On the other hand, the latter
quantity is just of the same order of magnitude as the tidal force
corresponding to our far fields, and so such a result suggests that
the tidal force due to the far fields may be divergent. This in turn
may be considered as an indication of decorrelation. The second
mechanism has instead a cosmological character, and is related on the
one hand to the fact that the cosmological horizons relative to
different galaxies do not coincide, and on the other hand to the fact
that the main contribution to the force comes from the matter near the
horizon. Remarking in addition that the distributions of matter about
two different horizons should be considered as independent ones (the
horizons being non causally connected), one is led to conceive that
also the corresponding forces might be independent.
A consistent discussion of this point would require the consideration
of a more realistic model, in which the time dependence of Hubble's
constant be taken into account. So we leave a discussion of this point
for future work.
\section{A comment on the perturbation approach}
In the present paper we have chosen a perturbation approach in which
the zero--th order solution is the flat metric, and consistently the
first--order correction was found to be small with respect to the
unperturbed one.
One might imagine that a better approximation be obtained if the mean
metric, i.e., the Friedmann--Robertson--Walker one, is taken directly
as zero--th order solution. One easily sees, however, that such an
approximation scheme meets with two difficulties. The first one is
that in such a case the source entering the equation for the first
order solution is of the same order as the zero--th order
source. Indeed, the source is proportional to the quantity
$$ T^{\mu\nu}-\langle T^{\mu\nu}\rangle
$$ which is not small, as its modulus is almost everywhere equal to
that of $\langle T^{\mu\nu}\rangle$. A more serious difficulty is the fact that the
first--order perturbation has to satisfy essentially d'Alembert's
equation with the source just mentioned, while such an equation cannot
be solved by elementary methods, and it is not even known whether it
admits bounded solutions at all. So, in paper \cite{dp}, Davis and
Peebles, who use such a perturbation procedure for the analogous
nonrelativistic case, have to introduce a suitable resummation
procedure. Now it is not clear whether a similar resummation procedure
can be introduced also in our relativistic case, and furthermore in
our case a discussion of the boundedness of the solution would be
required because, at variance with Davis and Peebles, we are not
restricting ourselves to the case of short distances.
Now, neither of the mentioned difficulties comes in with our
perturbation procedure. In addition, it seems to us that the
procedure of Davis and Peebles eventually is equivalent to the
nonrelativistic version of our procedure. Indeed, their formula (12)
is equivalent to our formula (\ref{aaaa}), taken in the
nonrelativistic approximation, while their formula (14) just gives the
contribution to the force due to the near galaxies. This contribution
occurs also in our case, and does not appear explicitly in our formula
(\ref{vettore}), only because in the latter we just retained the
dominant contribution due to the far away galaxies.
It is worth mentioning that a perturbation about the FRW metric is
performed also by Joyce et al. in the paper \cite{roma}. But in their
case they take into account the fact that the zero--th order solution
is due to the radiation energy density, so that the energy--momentum
tensor due to matter is not a perturbation of the vacuum. Thus they do
not meet with the previously mentioned problems.
\section{Conclusions}
In conclusion, we have studied the retarded gravitational action of
the far away galaxies. Such an action vanishes if the matter in the
Universe is described as a continuous, spherically symmetric
medium. We have pointed out that such an action is instead quite
relevant if the discrete character of matter, as constituted of
galaxies with correlated positions, is taken into account. Some
gravitational effects were estimated, and were found to have the same
order of magnitude as the corresponding local ones of dark matter.
It is sometimes stated \cite{nature} that the fractal picture of the
Universe may be incompatible with the framework of the standard
cosmological theories, and in the paper \cite{roma} by Joyce et al. a
solution was proposed, based on the idea that the contribution of
matter to Einstein's equations should be considered as a perturbation
to the contribution of radiation. Perhaps the present approach, in
which a perturbation to the vacuum is performed, and the FRW metric is
obtained in the mean (even if radiation is altogether neglected), may
be considered as providing an alternative complementary solution to
the problem.
\vskip .5truecm
\noindent
\textbf{Acknowledgments}. We thank George Contopoulos, Christos
Efthy\-mio\-pou\-los,
Francesco Sylos Labini and Rudolf Thun for useful discussions. We
thank an anonymous referee for pointing out to us that what matters in
the virial theorem for a non--isolated system is the tidal force, and
for drawing our attention to the paper \cite{roma} that had escaped
us, where the Newtonian contribution to the tidal force is estimated
for a fractal distribution of matter. This paper is dedicated to the
memory of Nikos Voglis.
\section{Introduction}
Despite numerous experimental and theoretical investigations of metal-oxide interfaces, very little is known about the effect of the interfacial geometry on electron transport. The issue is of paramount importance particularly for metal-insulator-metal (MIM) tunnel junctions where a thin oxide layer creates for electrons a potential barrier between two metals \cite{Ville2011,Martinis2014}. In particular, despite the vast popularity of Al/Al$_2$O$_3$/Al junctions in modern nanoelectronics and continuous interest in their novel applications, for example, in qubits and Superconducting Quantum Interference Devices (SQUIDs) \cite{Al_AlOX_Al_QUBIT_2012,Al_AlOX_Al_QUBIT_2006,Al_AlOX_Al_QUBIT_2004}, single-electron transistors (SETs) \cite{Nakamura_SET_1996,Al_AlOX_Al_SET_1998}, energy storage \cite{Al_AlOX_Al_capacitor_2014}, and infrared sensors \cite{IRsensor2015}, to the best of our knowledge, not a single theoretical study about the influence of the interface geometry on electron tunnelling has been performed for these systems. The existing theoretical studies, which are based on fitting the calculated data to the measured current-voltage characteristics ($I$-$V$), usually employ the standard Simmons model for calculating current densities \cite{Simmons1963}. The model suggests an analytic expression for the current density applicable to any shape of the barrier, given that the average barrier height and width are known or represent fit parameters. Following the general model, current-voltage relationships are also deduced for the rectangular barrier, including image force effects and considering various voltage ranges.
Usually, the thickness of the realistic insulating layer varies throughout the junction. Therefore, the width of the tunnel barrier (which may differ from the physical thickness of the insulator) also varies. This is of substantial importance because the conductance of the tunnel junction is known to depend exponentially on the barrier width \cite{Fisher1961}. Non-uniformity of the order of even one atomic layer may cause significant changes in the electron transport properties of the device \cite{Ville2011}. To address the non-uniformity in the thickness (and in the oxidation level) of the insulating layer, a double-layer barrier model has been introduced in \cite{Arakawa2006}. In this model, the barrier heights and thicknesses are evaluated by fitting the numerically calculated tunnel probabilities to experimental data assuming a rectangular potential and employing Simmons model for the current density.
The second primary parameter defining the tunnelling properties, \textit{i.e.}, the height of the potential barrier in the junction, may be affected by the metal states penetrating into the oxide gap. Thereby, additional energy levels are introduced for the tunnelling electrons, similarly to the metal-semiconductor junctions \cite{Tersoff1984}. To study the effect of such evanescent states on tunnel transport, the transfer matrix method \cite{TsuEsaki1973} was applied in \cite{Jung2009} together with the Simmons model to calculate $I$-$V$ curves for rectangular and side-modified rectangular barriers. A similar method was employed in \cite{JungDFT} with an image-force-modified tunnel barrier profile to study the impact of the substrate on electron transport in Al/Al$_2$O$_3$/metal junctions. Based on the Simmons model, extensive analyses of experimental data were presented in \cite{Gloos2003} and it was concluded that there is a strong correlation between the barrier height and the thickness. Additionally, a discrete distribution of barrier thicknesses with the peak-to-peak distances corresponding to a single oxygen layer was observed.
In all the above-listed studies, the main approach has been the assumption of a certain shape of the barrier, mainly rectangular with possible modifications, and the fitting of the calculated $I$-$V$ data to the experimentally measured one in order to find the height and the width of the barrier. Although these studies provide valuable insight into the average barrier or transport properties, they lack the atomic-scale characterization. On the other hand, theoretical studies which are based on atomic modelling involve a single model geometry, leaving aside all the possible interface effects \cite{JungDFT,Bokes_conductance}. The goal of this work is to establish a link between the structural properties of the interface and the corresponding barrier parameters (\textit{i.e.}, the height and the width). Since in theory, there is an uncountable number of options to interface the two materials, it is impossible to examine all the potential configurations. Therefore, we study by first-principles electronic structure calculations six representative geometries of the Al/Al$_2$O$_3$ interface to extract the barrier parameters for electron tunnelling. The chosen structures differ in stacking sequences and oxide terminations. On the basis of our first-principles results and the experimental data, we demonstrate in this paper that both the barrier height as well as the thickness are highly dependent on the atomic arrangement and the stoichiometry at the interface. In addition, by fitting the semiclassically calculated conductance to the experimental $G$-$V$ curve and comparing the results to those obtained from the density-functional theory (DFT), we predict the average barrier parameters and the most expected geometry of the junction measured in our experiment.
\section{Experimental methods}
In the following, we describe the fabrication and measurement techniques. Single tunnel junctions are made by electron beam lithography and the shadow evaporation technique \cite{Fulton1987}. Several Al-Al$_x$O$_y$-Al junctions were formed between the fingers and the common electrode. The thickness of all the fingers and the common electrode is 25 nm. The tunnel barrier (Al$_x$O$_y$) of the junction was formed in situ by thermal oxidation before the deposition of the second layer of Al. A close-up of two tunnel junctions is shown in the scanning-electron micrograph (SEM) together with the experimental setup in figure~\ref{fig:1}.
\begin{figure}[h!t]
\begin{center}
\includegraphics[width=0.45\textwidth]{Figure1bb}
\caption{SEM of two tunnel junctions together with a schematic of the experimental setup.}
\label{fig:1}
\end{center}
\end{figure}
We measure the differential conductance $G$ in a liquid helium dipstick at $T_{bath}$ = 4.2~K with a standard lock-in technique. The measurement setup shown in figure~\ref{fig:1} consists of dc and ac voltage sources, low noise current preamplifier (DL instruments 1211), and a lock-in amplifier (SR 830). We obtain the differential conductance as $G = I_{ac}/ V_{ac}$, where $ I_{ac}$ is the measured ac current through the junction and $V_{ac} = V^{source}_{ac}R_{3}/(R_{1} + R_{3})$ the applied ac bias voltage calculated according to the voltage division. The dc voltage is $V_{dc} = V^{source}_{dc}R_{3}/(R_{2} + R_{3})$.
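The conversion from the source voltages and the measured current to the plotted conductance follows the divider relations above; a minimal sketch (the resistor and signal values below are placeholders, not the ones used in the experiment):

```python
def divided_voltages(V_src_ac, V_src_dc, R1, R2, R3):
    """Voltage-divider relations of the setup in figure 1:
    V_ac = V_src_ac * R3 / (R1 + R3), V_dc = V_src_dc * R3 / (R2 + R3)."""
    return V_src_ac * R3 / (R1 + R3), V_src_dc * R3 / (R2 + R3)

def differential_conductance(I_ac, V_ac):
    """G = I_ac / V_ac, with I_ac the measured ac current through the junction."""
    return I_ac / V_ac

# Illustrative numbers only -- these are NOT the resistor values of the setup:
V_ac, V_dc = divided_voltages(1.0, 0.5, R1=1e6, R2=1e6, R3=1e3)
G = differential_conductance(I_ac=1.8e-6 * V_ac, V_ac=V_ac)  # an ohmic test signal
```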
\begin{figure}[h!t]
\begin{center}
\includegraphics[width=0.4\textwidth]{Figure2bb}
\caption{(color online) Measured differential conductance vs voltage for three single junctions.}
\label{fig:2}
\end{center}
\end{figure}
The measured differential conductance curves of three junctions are shown in figure~\ref{fig:2} as solid blue (sample D), dashed red (sample H), and dash-dotted green (sample J) lines. The differential conductance has a parabolic shape and a Coulomb blockade dip at zero bias voltage. A parabolic dependence is also the lowest-order result from the Simmons model for tunnelling through a
rectangular barrier \cite{Simmons1963}. The zero-bias conductance is determined through a parabolic fit to the measured curves and is found to be 1802.17 $\mu S/\mu m^2$, 1794.5 $\mu S/\mu m^2$ and 1765.5 $\mu S/\mu m^2$ for the three junctions. The Coulomb blockade dip is ignored in the fitting.
\section{Theoretical methods}
\subsection{Modelling Al/Al$_2$O$_3$ interfaces}
In our first-principles calculations, the model junction consists of a 5-layer substrate for the Al(111) surface and either a single Al layer- or O layer-terminated oxide (figure \ref{geometry}). The oxide is represented as the crystalline Al$_2$O$_3$ having a hexagonal unit cell and the (0001) surface parallel to the interface. For each termination we examine interface structures with three different stacking sequences: face-centred cubic (FCC), hexagonal close packed (HCP) and octahedral (OT) (figure \ref{geometry}(b)). Thus, in total, we investigate six different structures. Throughout the paper we label the interfaces as "1Al FCC", etc., which reads as "single Al layer-terminated oxide with FCC stacking sequence", etc. For example, the structures presented in figure \ref{geometry}(a) correspond to the O FCC and 1Al FCC interfaces. To extract barrier parameters, we first relax the chosen structures using DFT with the GPAW code \cite{GPAWrev,GPAW}. We carry out calculations with the $4\times4\times1$ Monkhorst--Pack k-point grid and the PBE exchange-correlation functional \cite{PBE}. The simulation box includes 5~\AA\ of vacuum on each side of the slab along the $z$ axis. To avoid an artificial electric field due to asymmetry, we apply the dipole correction to the electrostatic potential \cite{Dipole_correction_1,Dipole_correction_2} perpendicularly to the interface, as implemented in GPAW. Further details of the calculation setup and the procedure for selecting the interfaces, as well as a thorough discussion of their mechanical and electronic properties, will be provided in a separate paper.
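For orientation, a setup of the kind just described would look roughly as follows in GPAW/ASE. This is a sketch only: the slab-builder call, the relaxation threshold and the way the oxide is attached are illustrative assumptions, not our production input.

```python
# Sketch of the relaxation setup (GPAW + ASE); parameters are illustrative.
from ase.build import fcc111
from ase.optimize import BFGS
from gpaw import GPAW

slab = fcc111('Al', size=(1, 1, 5), vacuum=5.0)   # 5-layer Al(111) substrate
# ... stack the Al- or O-terminated Al2O3(0001) layers on top of `slab` here ...

calc = GPAW(xc='PBE',                              # PBE exchange-correlation
            kpts=(4, 4, 1),                        # 4x4x1 Monkhorst-Pack grid
            poissonsolver={'dipolelayer': 'xy'})   # dipole correction along z
slab.calc = calc
BFGS(slab).run(fmax=0.05)                          # structural relaxation
```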
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth, trim={1.0cm 2.8cm 0 0.1cm}]{termination}
\caption*{(a)}
\label{term}
\end{center}
\begin{center}
\includegraphics[width=0.5\textwidth, trim={0.5cm 4.8cm 0 1.5cm}]{stacking}
\caption*{(b)}
\label{stack}
\caption{Illustration of the modelled structures. (a) The two possibilities of the oxide termination. O and Al layer-terminated Al$_2$O$_3$ are interfaced with the Al(111) substrate. (b) The stacking sequences, an example of the Al-terminated interface. FCC: Al surface atoms of the metal and the oxide sit on top of each other, HCP: Al atoms are placed along the second O layer of the oxide, OT: Al atoms from the metal sit on top of the first O layer of the oxide. The dashed lines connect the corresponding layers.}
\label{geometry}
\end{center}
\end{figure}
\subsection{Extraction of barrier parameters}
The simplest model for the profile of the tunnel barrier is a one-dimensional rectangular potential wall corresponding to an abrupt transition between the metal and the oxide. However, the importance of accounting for a transition region (along with an electron effective mass for the oxide) has been pointed out in \cite{Bokes_conductance}. Hence, we start by assuming a trapezoidal shape for the potential. To construct barrier profiles for our systems, we need to know the barrier heights $\phi$, transition region widths $\Delta d$ and the widths $d$ of the barriers for each geometry (figure \ref{profile}). In the following section, we describe the details for determining the three parameters.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth, trim={0.0cm 0.0cm 0.0cm 0.0cm}]{profile7_1_2}
\caption{Trapezoidal barrier model. $E_F$ is the Fermi level of the metal, $\Delta d$ is the width of the transition region at the metal-oxide interface, $d$ is the barrier width corresponding to the smaller base of the trapezoid, and $\phi$ is the barrier height.}
\label{profile}
\end{center}
\end{figure}
\subsubsection{Barrier height $\phi$}
The tunnel barrier height is estimated according to the Schottky model \cite{Schottky}:
\begin{equation}
\phi=W-X,
\label{schottky_height}
\end{equation}
where $W$ is the work function of the metal, \textit{i.e.}, the difference between the Fermi level ($E_F$) of the metal and the vacuum level, and $X$ is the electron affinity of the oxide, \textit{i.e.}, the difference between the conduction band minimum (CBM) of the oxide and the vacuum level. Since the definition of $\phi$ is based solely on the properties of the two isolated bulk materials, it neglects the effects of the interface formed between them. In particular, due to the charge transfer across the interface, a dipole barrier forms in this region, which may differ for different interface geometries and chemical compositions. On the other hand, because of the long-range character of the Coulomb interaction, the band alignment between the adjacent metal and the oxide is determined, in addition to the bulk properties, by the electronic distribution at the interface. As a consequence, to apply (\ref{schottky_height}) to the interface, one needs to know the position of the oxide CBM inside the junction. Since PBE is known to underestimate band gaps, instead of using the theoretical value for the CBM, it is more convenient to first find the valence band maximum (VBM) of the oxide. Next, the CBM can be obtained by adding the experimental value of the oxide band gap ($E_{g(exp.)}$) to the VBM found from DFT. For the experimental band gap we use a value of 8.8 eV \cite{French1990}. Thus (\ref{schottky_height}) is equivalent to:
\begin{equation}
\phi=(VBM+E_{g(exp.)})-E_F,
\label{phi1}
\end{equation}
or,
\begin{equation}
\phi=VBO+E_{g(exp.)},
\label{phi2}
\end{equation}
where VBO is the valence band offset relative to the Al Fermi level. Thus, the problem maps to finding the VBO for the interface. For this purpose we use the \textit{macroscopic average} technique which was first developed for finding valence band offsets in lattice-matched metal-semiconductor heterojunctions and was further extended to the lattice-mismatched case \cite{Lattice_matched,Lattice_mismatched,band_method_review}. According to the method the valence band offset reads as:
\begin{equation}
VBO=E_V+\Delta V,
\label{vbo}
\end{equation}
where $E_V$ is the band-structure term and $\Delta V$ is the potential line-up. $E_V$ is defined as the difference between the VBM of the oxide and the Fermi level of the metal, both measured with respect to the macroscopic average electrostatic potentials in the corresponding bulk materials. Therefore, $E_V$ is independent of the interface and is determined solely by the bulk band structure. In contrast, $\Delta V$ is an interface-dependent term and represents the difference between the macroscopic average electrostatic potentials in the bulk-like regions of the interfaced materials. Figure \ref{alignment} illustrates the calculation procedure for the barrier heights, using the 1Al FCC interface as an example.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth, trim={0.5cm 0.0cm 0.0cm 0.0cm}]{alignment}
\caption{(color online) Schematics of the band alignment. The red and blue circles represent Al and Al$_2$O$_3$ layers, respectively. The thick continuous blue line is the plane-averaged effective potential along the direction perpendicular to the interface. The thin continuous black line shows the macroscopically averaged plane-averaged effective potential. $\phi$ is the tunnel barrier height. The fine-dashed arrow on the left-hand side shows the difference between the Fermi level of the \textit{bulk} Al and its average effective potential. Similarly, the fine-dashed arrow on the right-hand side shows the difference between the VBM of the \textit{bulk} Al$_2$O$_3$ and its average effective potential. E$_\textrm{{c(Al$_2$O$_3$)}}$ is the CBM of the oxide. $\Delta V$ is the potential shift. VBO is equivalent to the sum of two quantities, the band-structure term ($E_V$) and the potential line-up ($\Delta V$) (see (\ref{vbo})).}
\label{alignment}
\end{center}
\end{figure}
\begin{table*}
\caption{\label{offsets} Valence band offsets (VBO) and barrier heights $\phi$ for the different interface geometries.}
\lineup
\noindent\begin{tabular}{@{}*{7}{l}}
\br
\textbf{Geometry} & 1Al FCC & 1Al HCP & 1Al OT & O FCC & O HCP & O OT \cr
\mr
\textbf{VBO(eV)} & $-7.65$ & $-5.65$ & $-6.15$ & $-7.55$ & $-7.55$ & $-7.05$ \cr
\boldmath{$\phi$}\textbf{(eV)} & \m1.15 & \m3.15 & \m2.65 & \m1.25 & \m1.25 & \m1.75 \cr
\br
\end{tabular}
\end{table*}
Since the junction is lattice-mismatched and, correspondingly, the periodicity of $V(z)$ is different in the metal and in the oxide bulk regions, the macroscopic averaging of the potential is done twice successively: first with the window size matched to the periodicity of the bulk Al, and then with the periodicity of the bulk Al$_2$O$_3$.
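The double averaging amounts to two successive running means with windows matched to the two bulk periodicities. The following Python sketch (not the code used in this work; the grid spacing and layer periods are illustrative) shows how lattice-periodic oscillations with both periods are filtered out of a plane-averaged potential while the smooth background survives:

```python
import numpy as np

def macroscopic_average(v, period, dz):
    """Running average of v(z) over one lattice period (window given in grid points)."""
    w = max(1, int(round(period / dz)))
    kernel = np.ones(w) / w
    # 'same' keeps the grid size; values near the boundaries are contaminated by edges
    return np.convolve(v, kernel, mode="same")

def double_average(v, period_metal, period_oxide, dz):
    """Successive averaging with the two bulk periodicities (lattice-mismatched case)."""
    return macroscopic_average(macroscopic_average(v, period_metal, dz), period_oxide, dz)

# Illustration: a potential oscillating with two different bulk periods plus an offset
dz = 0.01
z = np.arange(0.0, 40.0, dz)
p_metal, p_oxide = 2.3, 3.5   # hypothetical layer spacings, chosen commensurate with dz
v = -5.0 + np.sin(2 * np.pi * z / p_metal) + np.sin(2 * np.pi * z / p_oxide)
vbar = double_average(v, p_metal, p_oxide, dz)
# away from the edges, vbar is flat at the offset value -5
```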
The Schottky barrier heights obtained for various interfaces are summarized in table \ref{offsets}. Our results show that the geometry of the interface strongly influences the tunnelling properties, as the difference in the barrier heights can be as large as 2 eV, compared to the average value of 1.87 eV. On the other hand, it is gratifying to note that the barrier heights derived based on experimental data are reported to be approximately 2 eV \cite{JungDFT}. The barrier heights for the Al-terminated interfaces are, on average, higher than those of the O-terminated interfaces, with the mean heights being 2.32 eV and 1.42 eV, respectively. This gives rise to a difference of about 0.9 eV between the average heights of the two terminations.
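The entries of table \ref{offsets} are linked by (\ref{phi2}); as a quick consistency check, the barrier heights and the averages quoted in the text can be recomputed directly from the VBO values (a minimal Python sketch, not part of the DFT workflow):

```python
# Barrier heights via phi = VBO + E_g(exp.), using the VBO values of table "offsets"
E_GAP_EXP = 8.8  # eV, experimental Al2O3 band gap

vbo = {"1Al FCC": -7.65, "1Al HCP": -5.65, "1Al OT": -6.15,
       "O FCC": -7.55, "O HCP": -7.55, "O OT": -7.05}
phi = {k: v + E_GAP_EXP for k, v in vbo.items()}

mean_al = sum(phi[k] for k in phi if k.startswith("1Al")) / 3   # ~2.32 eV
mean_o = sum(phi[k] for k in phi if k.startswith("O")) / 3      # ~1.42 eV
mean_all = sum(phi.values()) / 6                                # ~1.87 eV
```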
\subsubsection{Transition region width $\Delta d$}
Assuming that the local density of states follows the shape of the barrier along the coordinate perpendicular to the interface \cite{JungDFT, Bokes_conductance}, we extract the width of the transition region between the metal and the oxide from the projected density of states (PDOS) for atomic layers at the interface. To this end, we perform separate linear fits for the base and the leg of the trapezoid (figure \ref{pdosz}). The layer-projected densities of states are plotted as a function of the $z$ coordinate, and averaged over an energy window centred at the Fermi level (inset in the upper right corner).
The obtained results are summarized in table \ref{deld}. Interestingly, the transition widths of both the Al- and O-terminated interfaces follow the same trend: FCC, HCP, and OT in the order of decreasing width. The significant magnitude of $\Delta d$ indicates a pronounced deviation from the square barrier assumption, where an abrupt transition between the metal and the oxide is assumed. It must be noted that the presence of the transition region lowers the average height of the zero-bias barrier by a factor of $2/3$ compared to the rectangular barrier, since $\left\langle \phi_{rect.} \right\rangle=\phi_{rect.}$, and $\left\langle \phi_{trapz.} \right\rangle=(2/3) \phi_{trapz.}$ at $V=0$. This suggests that the barrier heights determined by fitting the rectangular barrier models (such as the widely used Simmons model \cite{Simmons1963}) with the experimental $I$-$V$ curves should be expected, in fact, to be higher by a factor of $3/2$ for symmetrical barriers.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth, trim={0.0cm 0.0 0.0cm 0.0}]{PDOSz}
\caption{(color online) Layer-projected density of states as a function of the $z$ coordinate. The fit of the trapezoid is shown with the thick dashed green lines. $\Delta d$ is the estimated width of the transition region. The inset illustrates the averaging window for the PDOS.}
\label{pdosz}
\end{center}
\end{figure}
\begin{table*}
\caption{Transition region widths ($\Delta d$) for the different geometries extracted from the layer-PDOS.} \label{deld}
\lineup
\noindent\begin{tabular}{@{}*{7}{l}}
\br
\textbf{Geometry} & 1Al FCC & 1Al HCP & 1Al OT & O FCC & O HCP & O OT \cr
\mr
\boldmath{$\Delta d$}\textbf{(\AA)} & 5.83 & 2.78 & 2.62 & 3.34 & 3.15 & 2.05 \cr
\br
\end{tabular}
\end{table*}
\subsubsection{Barrier width $d$}
The last parameter of our tunnel barrier model is the width $d$. To analyse the experimental structures, we assume that the interface studied in the experiment is mainly composed of one of the modelled structures. For each case we fit the width to the measured zero-bias conductance. According to reference \cite{Bokes_conductance} the conductance through our model barrier is:
\begin{equation}
G_{0(theor.)} \approx - \frac{e^{-F(E_F)}}{2\pi^2F^\prime(E_F)},
\end{equation}
where
\begin{equation}
F(E_F)=2\sqrt{2m^*\phi}\left(d+\frac{2}{3}\Delta d\right)
\end{equation}
and
\begin{equation}
F^\prime (E_F)=- \frac{2}{\sqrt{2m^*\phi}}\left(d+2\Delta d\right).
\end{equation}
The above expression has been derived following the Simmons model adapted to the trapezoidal barrier profile, taking into account only the contribution to tunnelling from electrons in states close to the Fermi energy. We calculate the effective electron mass $m^*$ from the DFT band structure of the bulk oxide. Our obtained value, 0.38 $m_e$, is in perfect agreement with earlier experimental and theoretical works \cite{Perevalov2007,Medvedeva2007}. The obtained widths are displayed in table \ref{d}.
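Since both the exponential and the prefactor decrease monotonically with $d$, the width can be recovered from a measured zero-bias conductance by simple bisection. The sketch below implements the expressions above; Hartree atomic units are our assumption, and the numerical inputs are illustrative conversions of the 1Al FCC values rather than the actual fitting pipeline:

```python
import math

def zero_bias_conductance(d, phi, delta_d, m_eff):
    """G0 ~ -exp(-F)/(2 pi^2 F'), with F and F' as in the text (atomic units assumed)."""
    k = 2.0 * math.sqrt(2.0 * m_eff * phi)
    F = k * (d + 2.0 * delta_d / 3.0)
    Fp = -(2.0 / math.sqrt(2.0 * m_eff * phi)) * (d + 2.0 * delta_d)
    return -math.exp(-F) / (2.0 * math.pi ** 2 * Fp)

def fit_width(g_target, phi, delta_d, m_eff, lo=1.0, hi=60.0):
    """Bisection on d: the conductance decreases monotonically with the width."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if zero_bias_conductance(mid, phi, delta_d, m_eff) > g_target:
            lo = mid   # conductance too large -> barrier too thin
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round-trip check with 1Al FCC-like parameters converted to atomic units
phi_au = 1.15 / 27.211    # eV -> hartree
dd_au = 5.83 / 0.5292     # angstrom -> bohr
d_true = 8.89 / 0.5292
g0 = zero_bias_conductance(d_true, phi_au, dd_au, 0.38)
d_recovered = fit_width(g0, phi_au, dd_au, 0.38)
```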
\begin{table*}
\caption{Barrier widths $d$ found by fitting the theoretical zero bias conductance to the experimentally measured one.} \label{d}
\begin{tabular}{@{}*{7}{l}}
\br
\textbf{Geometry} & 1Al FCC & 1Al HCP & 1Al OT & O FCC & O HCP & O OT \cr
\mr
\boldmath{$d$}\textbf{(\AA)} & 8.89 & 6.77 & 7.54 & 10.34 & 10.48 & 9.70 \cr
\br
\end{tabular}
\end{table*}
Our results demonstrate that the O-terminated interfaces exhibit wider barriers than their Al-terminated counterparts, with average values of 10.17 \AA~ and 7.73 \AA, respectively. Based on the data in tables \ref{offsets}-\ref{d}, the resulting barrier profiles are visualized in figure \ref{barriers}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.45\textwidth]{barriers}
\caption{(color online) Predicted barrier profiles.\label{barriers}}
\end{center}
\end{figure}
\section{Analysis of the experimental data}
Since the widths were extracted by fitting to a single point, the zero-bias conductance, this procedure only yields the relative widths of the different barriers with respect to each other. In addition, the electron effective mass of the oxide, which was fixed at the bulk value, might differ for each interface. Furthermore, we assumed that the interface measured in the experiment was composed of only one of the discussed geometries, which might not be the case in the real junction. Therefore, to explain the experimental data, we extend our analysis beyond zero bias. We make an average evaluation of the barrier parameters, and of the effective mass, using the WKB approximation by fitting the conductance to the experimental $G$-$V$ curve. $\phi$, $\Delta d$, $d$ and $m^*$ are left as free parameters. The tunnel probability along the $z$ direction within the WKB approximation is calculated with the following expression:
\begin{equation}
P(E_z)=\exp\left\{ -\frac{4\pi}{h} \int_{z_1}^{z_2} \sqrt{2m^*\left[\phi(z,V)-E_z\right]} dz \right\}.
\end{equation}
The ensuing current density is:
\begin{eqnarray}
\fl
j =\frac{4\pi m^*q}{h^3}kT\int_0^\infty{P(E_z)} \\
\nonumber\times \ln\left\{ \frac{1+\exp\left[(E_F-E_z)/kT\right]}{1+\exp\left[(E_F-E_z-qV)/kT\right]}\right\}dE_z,
\end{eqnarray}
where $m^*$ is the effective electron mass, $E_z$ is the energy of the incident electron, and $z_1$ and
$z_2$ are the classical turning points at the given energy $E_z$ (figure \ref{tilt}).
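A minimal numerical implementation of the transmission factor for the tilted trapezoid is sketched below. Hartree atomic units with $\hbar=1$ are assumed, so the prefactor $4\pi/h$ reduces to 2; the piecewise-linear barrier parametrization is our own illustrative choice, not the exact fitting code:

```python
import math
import numpy as np

def barrier(z, phi, delta_d, d, v_bias=0.0):
    """Symmetric trapezoid: linear rise over delta_d, flat top of width d, linear fall.
    A linear potential -v_bias*z/L models the tilt due to the applied voltage."""
    L = d + 2.0 * delta_d
    up = np.clip(z / delta_d, 0.0, 1.0)
    down = np.clip((L - z) / delta_d, 0.0, 1.0)
    return phi * np.minimum(up, down) - v_bias * z / L

def wkb_probability(e_z, phi, delta_d, d, m_eff=1.0, v_bias=0.0, n=4000):
    """P(E_z) = exp(-2 * integral sqrt(2 m (phi(z)-E_z)) dz) between the turning points.
    Clamping the integrand at zero restricts the integral to the classically forbidden
    region, i.e. between z1 and z2."""
    L = d + 2.0 * delta_d
    z = np.linspace(0.0, L, n)
    dz = z[1] - z[0]
    f = np.sqrt(np.maximum(barrier(z, phi, delta_d, d, v_bias) - e_z, 0.0))
    integral = dz * (f.sum() - 0.5 * (f[0] + f[-1]))  # trapezoidal rule
    return math.exp(-2.0 * math.sqrt(2.0 * m_eff) * integral)
```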
As shown in figure \ref{fig:2}, the experimental curve exhibits a kink at about 0.17 V. The behaviour above this point has not been explained consistently in the existing studies, although it has been suggested to be the result of electron-phonon scattering \cite{Lau1981}. Therefore, we consider only the bias range below 0.17 V, as the WKB approximation does not account for inelastic effects. We apply a tilt in the form of a linear potential to the trapezoid as the modification caused by the bias voltage (figure \ref{tilt}). Unlike for a rectangle, the width of the trapezoid is now reduced by the tilt and depends on $E_z$. The fitting, which essentially reproduces the parabolic $G$-$V$ dependence,
results in a barrier height of 2.56 eV, a transition region width of 1.70 \AA, and a barrier width of 8.43 \AA. The obtained barrier height is significantly higher than that of any O-terminated interface found from the DFT calculations. In contrast, it is closest to that of the 1Al OT configuration, with an error of only 3$\%$. The second best agreement between the \textit{ab-initio} and fitted values is found for the 1Al HCP configuration, with an error of 18$\%$. This indicates that the measured junction (mainly) consists of Al-terminated interfaces. However, this reasoning cannot totally exclude the presence of O-terminated interfaces. The transition region width is underestimated compared to the DFT values, with the biggest errors for the 1Al FCC, O FCC and O HCP geometries. Concentrating on the Al-terminated cases, the best agreement is again found for the 1Al OT and 1Al HCP interfaces, which further supports our previous prediction. The fitted and the DFT values of $\Delta d$ agree to within 35$\%$ and 39$\%$ for 1Al OT and 1Al HCP, respectively. Taking into account that our model crystalline geometries are idealized approximations to the real junction geometry, which is in fact amorphous, we consider the agreement between the two methods rewarding.
\begin{figure}[]
\begin{center}
\includegraphics[width=0.4\textwidth]{tilt}
\caption{Schematic representation of the trapezoidal barrier tilt under bias. The solid line represents the barrier profile at $V=0$. The dashed line shows the distorted barrier due to the applied voltage $V$. $E_z$ is the energy of the incident electron, and $z_1$ and $z_2$ are the classical turning points at the energy $E_z$.}
\label{tilt}
\end{center}
\end{figure}
The electron effective mass derived from fitting, 0.29 $m_e$, is smaller than the calculated bulk value.
\section{Summary}
In conclusion, by combining first-principles density-functional and classical methods, we have extracted the tunnel barrier profiles for six possible geometries of the Al/Al$_2$O$_3$ interface. Variations in the obtained barrier heights, the barrier widths and the widths of the transition regions demonstrate that the electron transport properties of the junctions are highly sensitive to the specific geometry of the interface. We found that the O-terminated interfaces exhibit on average 0.9 eV lower and 2.44 \AA~ wider barriers compared to the Al-terminated ones. The resulting transition region widths suggest that the real barriers are better represented by a trapezoid. In addition, by fitting the semiclassical model to the experimental $G$-$V$ curve, we have obtained information about the average barrier properties in a real junction. When compared with our DFT-based results, the fitting predicts that the interfaces are Al-terminated and are best described by the octahedral stacking. The gained information is important for understanding electron transport through tunnel barriers on the atomistic level.
\ack
We acknowledge the availability of the facilities and technical support by Otaniemi research infrastructure for Micro and Nanotechnologies (Otanano). We acknowledge financial support from the European Community FP7 Marie Curie Initial Training Networks Action (ITN) Q-NET 264034 and the Academy of Finland through its CoE programme (projects no. 251748, 284621, 250280 and 284594). We acknowledge the computational resources provided by the Aalto Science-IT project. We have benefitted from fruitful discussions with D. Averin.
\section*{References}
\bibliographystyle{unsrt}
\section{Introduction}
Reaching the diffraction limit in the visible wavelengths is one of the main reasons to place optical telescopes on-board satellites such as the Hubble Space Telescope (HST), thereby avoiding the distortions and blurring that the atmosphere introduces on the otherwise unaltered wavefronts. With its life-cycle coming to an end and without any upcoming substitute in the visible bands, it is crucial to provide the scientific community with tools capable of providing similar resolutions from the ground. In the era of the extremely large telescopes, and due to the increase of the atmospheric distortion as the diameter of the aperture grows, this has become a top engineering challenge worldwide.
There are two main techniques which lead to diffraction-limited imaging. On the one side, Lucky Imaging (LI) \citep{1964JOSA...54...52H,1978JOSA...68.1651F, Brandner2016} offers an excellent and cheap method for reaching diffraction-limited spatial resolution in the visible band on small and mid-size ground-based telescopes \citep{2008SPIE.7014E..47O}. However, this technique suffers from two important limitations. First, resolutions similar to those of the HST can only be achieved on telescopes with sizes below 2.5m \citep{2011MNRAS.413.1524F,2011A&A...526A.144L}. Second, most of the images are discarded, meaning that only relatively bright targets can be observed.
The other technique, Adaptive Optics (AO), has been the main procedure to improve the quality of the largest ground-based telescopes during the last twenty years \citep{1990A&A...230L..29R,1993ARA&A..31...13B,Milli2016}. The use of AO systems in infrared observations provides an adequate performance due to the reduced effects of turbulence in this wavelength range, thus achieving excellent results \citep{2010SPIE.7736E..24G, 2014PNAS..11112661M, 2014ApJ...791...35L}. Unfortunately, the scarce number of AO systems developed for the visible bands \citep{2008SPIE.7014E..18B, 2012SPIE.8447E..0XC} have not yet achieved the versatility and image quality already achieved in the NIR \citep{2013ApJ...774...94C,2005ApJ...629..592G}, except for solar telescopes \citep{2012AN....333..863B}. The difficulty of performing AO in the visible wavelengths is explained by the fact that the correlation time of the atmospheric turbulence scales with $\lambda^{6/5}$ \citep{1977JOSA...67..390G,1990JOSAA...7.1224F}, which means that an AO control loop operating in visible bands needs to be faster than in the NIR bands in order to provide the same degree of correction.
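The practical consequence of the $\lambda^{6/5}$ scaling can be made explicit with a two-line estimate (the band centres below are illustrative choices, not AOLI specifications):

```python
# Ratio of AO loop rates implied by tau ~ lambda^(6/5)
lam_vis, lam_nir = 0.55, 1.65  # microns: roughly V band vs H band (illustrative)
speedup = (lam_nir / lam_vis) ** (6.0 / 5.0)
# A visible-band loop must run ~(1.65/0.55)^1.2 ~ 3.7x faster than a NIR loop
# to provide the same degree of correction under the same turbulence
```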
The Adaptive Optics Lucky Imager (AOLI) is a state-of-the-art instrument which was conceived to obtain extremely high resolution at optical wavelengths on large telescopes by combining the two techniques presented in the paragraphs above (AO + LI) \citep{2015hsa8.conf..850V, 2016MNRAS.460.3519V}. Initially targeted for the 4.2m William Herschel Telescope (WHT, Observatorio del Roque de los Muchachos, Spain), the instrument is designed as a double system that encompasses an adaptive optics control system before the science part of the instrument, the latter implementing LI \citep{2016arXiv160803230M}. Each of the two parts of AOLI can behave as a standalone system, meaning that the AO subsystem might be used with any other science instrument, performing imaging, spectroscopy or even coronography or polarimetry.
Besides the fact of combining AO and LI for the first time in an astronomical instrument, the other key innovation of AOLI
is the development and use of a new type of wavefront sensor (WFS) in its AO subsystem: the Two Pupil Plane Positions Wavefront Sensor (TP3-WFS). The implementation of the TP3-WFS was motivated by the will of being able to use fainter AO reference stars than the ones that classical wavefront sensors such as the Shack-Hartmann WFS (SH-WFS) can use, all with the aim of increasing the sky coverage of the instrument without the need of laser guide stars.
The present work gives a comprehensive description of the AO subsystem of AOLI, focusing on the TP3-WFS and all the related algorithms and procedures that were developed for its characterization and testing. The first control-related results obtained with AOLI at the WHT, also included in the paper, confirm the viability of the TP3-WFS as part of a fully-functional adaptive optics system.
The paper is organized as follows. Section~\ref{sec:aoli_ao} introduces the AO subsystem of AOLI. Section~\ref{sec:arch} describes the AO real-time processing pipeline. Section~\ref{sec:sys_char} describes the procedures that were used to obtain information about the AO subsystem, thus enabling the configuration of the control algorithm. Section~\ref{sec:results} presents the results obtained both in laboratory and telescope tests. Finally, Section~\ref{sec:conclusions} draws the main conclusions.
\section{The Adaptive Optics System of AOLI}
\label{sec:aoli_ao}
AOLI has been built putting together the expertise of several institutions, each group specialized in a different subject. To face the challenge that AOLI represents we have implemented a new philosophy of instrumental prototyping by modularizing all its components \citep{2016arXiv160804806L}: simulator/calibrator \citep{2014SPIE.9147E..7VP}, science module (performing LI) and AO module. Figure~\ref{fig:aoli_zemax} depicts the optical layout of AOLI, along with a description of the setup. On the other hand, Figure~\ref{fig:aoli_photo} shows a photograph of the instrument as mounted on the WHT.
The AO subsystem of AOLI comprises a 241-actuators deformable mirror (DM) by ALPAO, a pick-off guide-star subsystem and the Two Pupil Plane Positions Wavefront Sensor (TP3-WFS). The TP3-WFS operates with the images provided by an Andor Ixon DU-897 camera, which is based on a sub-photon noise 512x512 e2v EMCCD (Electron Multiplying Charge-Coupled Device) detector. The heart of the AO system is its Real-Time Control Software (RTC), which allows the control of 153 Zernike modes with a delay under 40$\mu$s. The delay of the calculations performed in the TP3-WFS itself is around 1 ms for the same number of reconstructed modes.
AOLI was required to perform wavefront sensing using faint reference stars up to magnitude 16 in the \textit{I} band with a seeing of 1 arcsec and at a wind speed of 8 m/s, which corresponds to the median value of the wind speed at the Roque de los Muchachos observatory. For the WHT this means sensing with up to four magnitudes fainter stars than the limit reached with a classical Shack-Hartmann WFS (SH-WFS). On the other hand, it was estimated that performing low-order AO corrections at a rate of 100 Hz in combination with LI would provide the desired level of correction at the science camera.
\begin{figure*}
\centering
\includegraphics[scale=0.5]{AOLI-AO3-ALL}
\caption {AOLI optical layout. The system is divided in three modules: Deformable Mirror (DM), WaveFront Sensing (WFS) and SCIence (SCI). The common vertex is a pick-off mirror that selects the reference star for wavefront sensing through a pin-hole in the mirror. This pin-hole can be selected as a real hole or with a splitting ratio $R/T=30/70$ and several sizes depending on the current seeing. The science arm can select two possible collimators to obtain two different Fields Of View (FOV). The WFS arm uses a lateral prism to introduce a differential delay between the two optical paths, using a common lens system to defocus the pupil image over the detector. The DM module is a typical 1:1 system with an Amici-biprism Atmospheric Dispersion Corrector (ADC).
}
\label{fig:aoli_zemax}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=0.5]{aoli_photo}
\caption {AOLI mounted at one of the two Nasmyth foci of the WHT, inside the GRACE (GRound based Adaptive optics Controlled Environment) structure. The main components have been identified with labels.
}
\label{fig:aoli_photo}
\end{figure*}
\subsection{The TP3-WFS}
The leading position for sensing the wavefront in AO systems has been occupied so far by the Shack-Hartmann WFS (SH-WFS) \citep{hartmann1900,shack1971production,platt2001history}. One important disadvantage of the SH-WFS is that the incoming photons are distributed among all the illuminated lenslets. This sets a limitation on the magnitude of the reference stars whose wavefronts can be reconstructed, and consequently on the sky coverage of an instrument based on this type of sensor. We have developed the TP3-WFS with the aim of overcoming this disadvantage.
The TP3-WFS bases its calculations on the intensity of two defocused pupil images taken at two different planes. Computer simulations predict that this way of measuring wavefronts allows attaining good reconstructions with down to 100 photons falling within each pupil image \citep{vanDam:02}. Although this statement is yet to be thoroughly tested under a real environment (not only by simulations), if confirmed it would mean a considerable improvement in sensitivity when compared to the SH-WFS. Another advantage of the TP3-WFS with respect to previous wavefront sensors such as the SH-WFS is that it is capable of working on extended targets, as demonstrated later in Section~\ref{sec:results}.
The TP3-WFS is composed of the wavefront reconstruction software (WFR), the WFS camera and the surrounding optics (see Figure~\ref{fig:aoli_zemax}). The main function of the TP3-WFS optics is to obtain the two defocused images near the pupil plane, which are later acquired by the WFS camera and finally processed by the WFR software. The WFR software is founded on the algorithm proposed by \cite{vanDam:02}, and it operates in real-time thanks to its GPU-accelerated implementation \citep{2013OptEn..52e6601F}. More detail about the internals of the WFR software will be given later in Section~\ref{sec:wfr}.
\section{AO processing pipeline}
\label{sec:arch}
Had the AO system of AOLI been based on well-known technologies such as the Shack-Hartmann WFS, the team could have benefited from the developments and knowledge of previous projects, or even from commercially available solutions. However, the use of a type of WFS that had never been implemented before for astronomical applications forced the development of not only the WFS itself, but also of the surrounding methods and software. The new system had to be designed for the specific needs of the TP3-WFS, with a focus on flexibility so as to counter the uncertainties raised by this new method.
In the end, the AO part of AOLI was implemented with three new pieces of software: the frame grabbing software (FG), the WFR and the RTC. The three pieces of software are interconnected in a processing pipeline as shown in Figure~\ref{fig:ao_computer}. This processing pipeline is triggered every time a new frame is produced by the WFS camera (which sets the AO sampling rate), and it re-enters the idle state once a new actuation vector is sent to the DM.
The target environment for all the AO-related software was a single computer operating with GPUs under Windows. We selected an Intel Core i7-4790K CPU and an nVidia GeForce GTX Titan Z GPU running the Windows 7 operating system. The following subsections will provide details about each of the three components of the AO processing pipeline.
\begin{figure*}
\centering
\includegraphics[scale=0.8]{ao_computer}
\caption {AO processing pipeline, consisting of three pieces of software running on the AO computer. The pipeline is triggered on the reception of a frame from the WFS camera, and re-enters an idle state after a new vector of actuations has been sent to the DM. All the elements are required to have a low latency so as to ensure that the actuation vectors sent to the DM correspond to wavefronts that still exist in the real world.}
\label{fig:ao_computer}
\end{figure*}
\subsection {Frame grabbing software}
\label{sec:fg_sw}
The main objective of the FG software is to continuously acquire images from the WFS camera and send them to the WFR software as soon as they are being received, with no further processing in between. Additionally, this software allows configuring the camera parameters and provides real-time information once the acquisition process has started.
Reference AO stars with bright apparent magnitudes may produce images in the EMCCD sensor with unnecessarily large signal-to-noise ratios (S/N), with no actual improvement in the accuracy of the reconstructed wavefronts. In those cases, the operator of the instrument would normally decrease the exposure time of the WFS camera so as to get a faster sampling rate, which would enable the RTC software to produce DM actuations more frequently and thus improve the quality of the AO corrections even with bad atmospheric conditions. The upper limit of the sampling rate is set by the camera hardware itself and by the element of the AO processing pipeline whose execution time is longest.
On the other hand, the S/N of the pupil images of faint reference AO stars can indeed be improved by lowering the sampling rate of the camera, but this may have a negative impact on the quality of the AO corrections because the RTC software may not be able to keep up with the speed of variation of the atmospheric turbulence at that specific period of time. In those cases, the recommended solution is to activate the binning function of the EMCCD, which will increase the S/N of the image by combining the charges of adjacent pixels, at the expense of a lower spatial resolution in the acquired pupil images. This would improve the accuracy of the wavefront reconstructions in the cases of low light, although it will not be possible to reconstruct the higher-order modes because of the loss of image resolution.
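On-chip binning can be emulated in software to explore this trade-off (a hedged sketch; the actual S/N gain of hardware binning also depends on where the EM register sits in the readout chain and on the dominant noise source):

```python
import numpy as np

def bin_frame(frame, n):
    """Emulate n x n binning by summing adjacent pixels.
    In the read-noise-limited regime, true on-chip binning reads the summed charge
    once, so its S/N advantage over this software emulation can approach a factor n."""
    h, w = frame.shape
    assert h % n == 0 and w % n == 0, "frame dimensions must be divisible by n"
    return frame.reshape(h // n, n, w // n, n).sum(axis=(1, 3))
```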
For an optimal configuration of the camera, it is important to know that its sampling rate does not only depend on the exposure time and the size of the readout region, but also on other parameters such as the frequency of the EMCCD clocks, especially the clock that drives the ADC (analog-to-digital converter). This type of clock fine-tuning has to be done carefully in order not to degrade the S/N.
\subsection{Wavefront reconstruction software}
\label{sec:wfr}
The WFR software takes the two defocused images formed by the TP3-WFS optics and calculates the photon displacements between those two planes by applying the Radon transform \citep{Radon17} over a set of projection angles. The photon displacements are then used to produce an estimation of the slopes of the incident wavefront. Finally, the algorithm outputs a Zernike representation of the reconstructed wavefront \citep{von1934beugungstheorie}, calculated as the least-squares fit between the calculated slopes and the ones that each individual Zernike mode would produce.
The WFR algorithm is clearly the one with the highest computational cost in the whole AO processing pipeline, so a huge effort was put into optimizing its implementation in order to achieve real-time performance. Among all the possible acceleration methods, it was decided that the WFR would take advantage of GPUs in order to achieve real-time operation, more specifically, by means of the CUDA language. When compared to other FPGA or CPU-based approaches, the GPU implementation was considered a good trade-off from the point of view of development costs, flexibility and re-usability \citep{ao4elt4_31563}.
Figure~\ref{fig:wfr_arch} provides a graphical summary of the steps of the WFR algorithm, whose equations are further described in \cite{vanDam:02}. Even though the description of the algorithm itself is out of the scope of this paper, the following subsections will give practical considerations about some of the blocks in Figure~\ref{fig:wfr_arch}, which can be helpful for both the development and use of future wavefront sensors using this reconstruction technique, besides the TP3-WFS itself.
\begin{figure*}
\centering
\includegraphics[scale=0.9]{wfr_arch}
\caption {WFR software architecture. For each input image, containing two defocused pupil images, a vector of Zernike modes is calculated. The whole algorithm is implemented on GPU, with the exception of the \textit{Extract pupils} block. This is justified because sending full images to the GPU would create a bottleneck in the CPU-GPU communication channel.}
\label{fig:wfr_arch}
\end{figure*}
\subsubsection{Extract pupils}
\label{sec:pupil_reg}
The first step of the algorithm is to extract the two regions of the input image that correspond to each pupil image. In the case of AOLI, this separation has to be done by software because the optics of the TP3-WFS were designed in such a way that both defocused pupil images fall within the area of a single WFS detector.
In earlier versions of AOLI, each pupil image was sent to a different camera, but in the end this proved to be a bad idea for two reasons. First, it was very hard to synchronize the camera hardware and the related software in order to acquire from the two cameras with a high level of synchronism. Synchronism is obviously a requirement because it only makes sense to process pupil images that come from the same instant of time. Second, even small differences between the two cameras (quantum efficiency, dynamic range, bias level, noise, etc.) have a considerable impact on the reconstruction quality. Using a single camera is thus an effective way to solve both problems.
\subsubsection{Bias and flat correction}
\label{sec:bias_flat}
The simulations made with different configurations showed that small deviations between the mean bias levels of the two pupil regions have a considerable impact on the accuracy of the reconstructed wavefronts. Being an algorithm that bases its calculations on the differences between the two input images, this behaviour was actually expected. Therefore, it is considered mandatory to apply bias corrections as a pre-processing step.
In the same way, it is highly recommended to apply flat-field corrections to the input images, especially to reduce the effect of dust grains on the WFS sensor and optics. During a laboratory test, a large dust particle falling on one pupil image caused the control loop to enter a peculiar oscillatory regime due to the non-linearity introduced by the presence of that particle.
From the point of view of a real-time implementation, it is very convenient to apply the bias and flat-field corrections to each individual pupil image after it has been extracted from the full image. The rest of the pixels of the original image are not processed anyway, so correcting them as well would be a waste of processing time.
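A minimal sketch of the per-pupil correction, assuming master bias and flat frames already cropped to the same region (the normalisation to unit flat mean is a common convention, not necessarily the one used in AOLI):

```python
import numpy as np

def correct_pupil(pupil, bias, flat):
    """Bias-subtract and flat-field one extracted pupil region.

    `bias` and `flat` are master calibration frames cropped to the same
    region as `pupil`. The flat is normalised to unit mean so the
    correction does not change the overall signal level.
    """
    flat_norm = flat / flat.mean()
    return (pupil - bias) / flat_norm
```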
\subsubsection{Radon transform}
One important parameter to configure in the WFR algorithm is the number of projection angles for which the Radon transform will be calculated. This parameter is closely related to the number of Zernike modes to be reconstructed, as the last step of the algorithm is a least-squares adjustment between the measured wavefront slopes and the ones that the different Zernike modes would produce. This means that, for a given number of reconstructed Zernike modes, there is a minimum number of angles that need to be calculated so as to ensure that the least-squares fit is being executed correctly. A larger number of angles means a better ability to represent high-order aberrations in the Radon transforms themselves.
Of course, even though a large number of projection angles increases the probability of performing a correct fit, in practice setting a very high number of angles is a bad idea because the execution time of the algorithm depends linearly on the number of angles. For a given number of reconstructed Zernike modes, a trade-off can be found by performing a sweep over the number of angles and determining by inspection whether the result of the static characterization is correct. Section~\ref{sec:sta_res} explains how to analyze the goodness of the static characterization results.
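For reference, a crude discrete Radon transform can be sketched by binning each pixel according to its signed distance from the line through the pupil centre, one profile per projection angle in $[0, \pi)$. This nearest-bin scheme is only an illustration of the operation and its linear cost in the number of angles; the actual WFR software uses a GPU-accelerated implementation:

```python
import numpy as np

def radon_projections(image, n_angles):
    """Naive nearest-bin Radon transform: one line-integral profile per
    projection angle theta in [0, pi). Pixels outside the +-R range are
    clipped into the end bins, which is acceptable for a sketch."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    xc, yc = xs - cx, ys - cy
    thetas = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    sinogram = np.zeros((h, n_angles))
    for k, th in enumerate(thetas):
        # signed projected coordinate of each pixel for this angle
        r = xc * np.cos(th) + yc * np.sin(th)
        bins = np.clip(np.round(r + cy).astype(int), 0, h - 1)
        np.add.at(sinogram[:, k], bins.ravel(), image.ravel())
    return sinogram
```

Because each pixel lands in exactly one bin, every projection conserves the total flux of the image, which is a convenient sanity check for any Radon implementation.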
\subsubsection{Least-squares fit}
The final step of the algorithm requires pre-calculating the mean wavefront slopes that each Zernike mode would produce along directions perpendicular to each projection angle. This operation is particularly expensive from a computational perspective, so it was also GPU-accelerated despite not being part of the real-time AO processing pipeline. This enabled us to quickly experiment with the effect of selecting different combinations of the number of Radon projection angles and Zernike modes.
It is interesting to note that the input to the least-squares fit block in Figure~\ref{fig:wfr_arch} already contains all the information of the reconstructed wavefront, more specifically, the slopes along a set of projection angles. The least-squares fit is just a way of representing that wavefront in a more familiar way, which additionally enables separating some modes of special interest such as tip, tilt and defocus. It remains to be evaluated whether applying the control algorithm to the estimated slopes directly or to their 2-D integral results in a better level of correction.
At the time of writing the present work, the Zernike modes outputted by the TP3-WFS are not expressed in any particular units. While this could be a problem for other applications, it does not have any impact on the RTC control algorithm of AOLI, as this algorithm does not need to know the physical units of the measured wavefronts. Similarly, the results presented in this paper do not lose their validity because they are always intended to be analysed in a differential way (e.g., open loop vs. closed loop).
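The fit itself reduces to an ordinary least-squares problem between the measured slope map and the precalculated per-mode slope maps. A minimal NumPy sketch (array shapes and names are illustrative):

```python
import numpy as np

def fit_zernike_modes(slope_map, mode_slope_maps):
    """Least-squares fit of a measured slope map (Radon coordinate x
    projection angle) against precalculated per-mode slope maps.
    Returns one coefficient per Zernike mode, in the same (arbitrary)
    units as the input slopes."""
    s = slope_map.ravel()
    # design matrix: one flattened slope map per column
    A = np.stack([m.ravel() for m in mode_slope_maps], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, s, rcond=None)
    return coeffs
```

Note that if the number of projection angles is too low, the columns of the design matrix become linearly dependent and the fit is no longer well determined, which is the trade-off discussed above.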
As a reference for future wavefront sensors willing to implement this reconstruction algorithm, Figure~\ref{fig:wfr_precalc} shows the appearance of the slope maps of the 10 lowest-order Zernike modes (excluding piston, as it cannot be reconstructed).
\begin{figure*}
\centering
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[scale=0.8]{H_001}
\caption{Vertical tilt}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[scale=0.8]{H_002}
\caption{Horizontal tilt}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[scale=0.8]{H_003}
\caption{Oblique astigmatism}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[scale=0.8]{H_004}
\caption{Defocus}
\end{subfigure}
\par\bigskip
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[scale=0.8]{H_005}
\caption{Vertical astigmatism}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[scale=0.8]{H_006}
\caption{Vertical trefoil}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[scale=0.8]{H_007}
\caption{Vertical coma}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[scale=0.8]{H_008}
\caption{Horizontal coma}
\end{subfigure}
\par\smallskip
\caption{Precalculated Zernike slope maps, used during the conversion of the calculated wavefront slopes to vectors of Zernike modes. The \textit{X} axis corresponds to the projection angle $\theta$ ($0 \leq \theta < \pi$), while the \textit{Y} axis corresponds to the Radon coordinate $r$ ($-R \leq r \leq R$, where $R$ is the radius of the aperture).}
\label{fig:wfr_precalc}
\end{figure*}
\subsection{Real-time control software}
\label{sec:rtc_sw}
The RTC software is responsible for actuating the DM in such a way that the effect of atmospheric turbulence on the science camera images is minimized. The control algorithm is based on an array of proportional-integral-derivative (PID) controllers. Besides the real-time functionality itself, the RTC software implements the AO characterization procedures explained later in Section~\ref{sec:sys_char}.
The computations in the RTC software were accelerated by means of the Blaze library \citep{6266939}. The use of this library made it easy to exploit the AVX2 (Advanced Vector Extensions 2) instruction set available on the target CPU, which is especially useful for accelerating vector and matrix operations like the ones executed in the RTC algorithm. The proper use of these instructions ultimately yielded better performance than highly-optimized GPU libraries like cuBLAS, as the latency of the CPU-GPU link constitutes a tight bottleneck for the RTC algorithm.
Figure~\ref{fig:rtc_arch} depicts the architecture of the real-time part of the RTC software. It consists of a processing pipeline that is triggered on reception of a new reconstructed wavefront and finishes when the new set of actuations is sent to the DM. Below is a brief description of the blocks appearing in Figure~\ref{fig:rtc_arch}:
\begin{figure*}
\centering
\includegraphics[scale=0.9]{rtc-sw_arch}
\caption {RTC software architecture, showing all the configurable blocks and the nature of the signals traveling through them.}
\label{fig:rtc_arch}
\end{figure*}
\begin{itemize}
\item{\textbf{Mask}: Extracts the elements of interest from the input wavefront. In AOLI it is used to ignore the tip and tilt modes in the AO control loop, as the LI algorithm would remove them anyway in its shift-and-add stage, where the input images are re-centered using a predefined criterion such as the position of the peak pixel.}
\item{\textbf{W2C} (``\textit{wavefront to control}''): Takes the input wavefront and transforms it to a different format just before entering the actual control stage. In AOLI, the W2C block is configured as pass-through.}
\item{\textbf{Target}: Allows setting a control target different from zero. This is useful for compensating non-common path aberrations (NCPA) between the WFS camera and the science camera.}
\item{\textbf{PID array}: As its name suggests, this block is composed of a set of independent PID controllers, one for each element present at its input vector (actually, an error vector).}
\item{\textbf{C2A} (``\textit{control to actuations}''): Transforms the output of the PID array into the specific set of actuations that produce the desired correction. In AOLI, this block is configured to transform Zernike modes into actuation vectors.}
\end{itemize}
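The PID array block can be sketched as a textbook discrete PID controller vectorized over all controlled modes. This is an illustration of the concept, not the actual RTC implementation (which is built on Blaze and AVX2):

```python
import numpy as np

class PIDArray:
    """One independent discrete PID controller per element of the
    error vector. Gains may be scalars or per-mode vectors; the
    integral term uses plain accumulation and the derivative term a
    first difference."""

    def __init__(self, kp, ki, kd, n, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = np.zeros(n)
        self.prev_err = np.zeros(n)

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err.copy()
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

In this sketch `dt` would be the WFS sampling period, and the output vector would be fed to the C2A block to obtain DM actuations.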
\section{System analysis}
\label{sec:sys_char}
In order to have a fully working AO instrument one first needs to obtain information from the control plant (i.e., the elements that are part of the control loop) to configure the parameters of the RTC processing pipeline that were explained in Section~\ref{sec:rtc_sw}. For that purpose, we developed two different procedures, each of them characterizing the AO system under different conditions.
On one hand, the \textit{static characterization} procedure is used to obtain the C2A matrix by inverting the obtained influence matrix. On the other hand, the timing information given by the \textit{dynamic characterization} procedure allows setting proper values to the \textit{P}, \textit{I} and \textit{D} parameters of the PID array. The following subsections will explain these two procedures in detail.
\subsection{Static characterization}
\label{sec:sta}
The static characterization procedure obtains the influence function of the AO system. This function represents the effect of actuating over the DM as seen by the WFS once the DM has reached its steady state.
The influence function is usually represented as the so-called \textit{influence matrix} $\mathbf{I_{M \times A}} = [\mathbf{i}_1,... , \mathbf{i}_A]$, where $M$ is the number of elements at the output of each WFS sample (normally a set of Zernike modes) and $A$ is the number of actuators of the DM. In this matrix, each column $\mathbf{i}_a$ contains the response of the WFS when a single actuator is pushed with an actuation value equal to 1, whatever the actuation units are.
From the static point of view, the influence matrix contains all the information that is needed to correct an aberrated input wavefront by actuating over the DM. But that would only be possible if the control algorithm knew the vector of actuations that compensates a given input wavefront, which is precisely the inverse of the information that the influence matrix provides. Fortunately, it is possible to invert the influence matrix so that it converts from wavefronts to actuation vectors; when the matrix is not square, the Moore-Penrose pseudoinverse \citep{penrose2008} is applied instead of a regular matrix inversion.
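With NumPy the inversion is a one-liner; a sketch of the conversion from a measured wavefront to the compensating actuation vector:

```python
import numpy as np

def c2a_matrix(influence):
    """Invert the M x A influence matrix with the Moore-Penrose
    pseudoinverse, yielding the A x M matrix that converts a measured
    wavefront (a vector of Zernike modes) into the actuation vector
    that best reproduces it in the least-squares sense."""
    return np.linalg.pinv(influence)
```

Provided the influence matrix has full column rank (each actuator produces a linearly independent response), applying the pseudoinverse to a wavefront generated by some actuation vector recovers that vector exactly.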
The acquisition of the influence matrix was performed using the algorithm described in Appendix~\ref{sec:sta_app}. There is a set of parameters that is used to configure the algorithm, namely: $n_{sta}$ (stabilize samples), $n_{pp}$ (push-pull iterations), $n_{avg}$ (average samples) and $a_{val}$ (actuation value).
The parameter $n_{sta}$ sets the amount of time (measured in WFS samples) to wait for the AO system to stabilize after a DM actuation. This depends not only on the response time of the mirror, but also on the rest of the elements in the AO chain. The best way to set a proper value for $n_{sta}$ is to measure the dynamic response of the system (Section~\ref{sec:dyn}) so as to discover the number of WFS samples it takes for the system to reach a steady state after a step in the input.
One could expect higher values of $n_{pp}$ and $n_{avg}$ to yield measurements with a lower level of noise. However, setting $n_{avg} > 1$ can have a negative effect on the measurement if there is a non-zero mean drift in the input wavefront (either from a star in the sky or from a calibration source). Therefore, it is recommended to set $n_{avg} = 1$ and increase the value of $n_{pp}$ in order to obtain a good signal-to-noise ratio.
Regarding $a_{val}$, it should be set with a value such that the signal that it produces on the WFS is bigger than the noise, and at the same time ensuring that the AO system (DM, optical path and WFS) does not exit its linear zone during the whole characterization process. The initial position of the DM is not relevant as long as it enables complying with this latter restriction.
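The core idea of the acquisition, stripped of the bookkeeping detailed in the appendix, is push-pull averaging: each actuator is pushed and pulled by $\pm a_{val}$ and the influence column is the averaged, normalised difference. In the sketch below, `measure` is a hypothetical callable standing in for one stabilized WFS read-out:

```python
import numpy as np

def influence_column(measure, actuator, a_val, n_pp, n_avg):
    """Estimate one column of the influence matrix by push-pull
    averaging. `measure(actuator, value)` is assumed to return one WFS
    sample (a Zernike vector) after the system has stabilized; the
    full appendix algorithm handles stabilization explicitly."""
    acc = None
    for _ in range(n_pp):
        push = np.mean([measure(actuator, +a_val) for _ in range(n_avg)], axis=0)
        pull = np.mean([measure(actuator, -a_val) for _ in range(n_avg)], axis=0)
        delta = (push - pull) / (2.0 * a_val)
        acc = delta if acc is None else acc + delta
    return acc / n_pp
```

Dividing by $2 a_{val}$ normalises the response to a unit actuation, which is the convention used in the influence-matrix definition above.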
The amount of time required to execute the static characterization procedure ($t_{sta}$) depends on the sampling rate of the WFS ($f_{WFS}$), the number of actuators ($A$) and the following algorithm parameters: $n_{sta}$, $n_{pp}$ and $n_{avg}$. By analyzing the steps of the algorithm, it can be shown that the duration of the static characterization process follows the equation below:
\begin{equation}
\label{eq:sta_time}
t_{sta} = \frac{A \left(n_{sta} + 2 n_{pp} (n_{sta} + n_{avg})\right)}{f_{WFS}}
\end{equation}
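As a quick check, equation (\ref{eq:sta_time}) can be evaluated with the AOLI parameters reported in Section~\ref{sec:sta_res}:

```python
def static_char_time(A, n_sta, n_pp, n_avg, f_wfs):
    """Duration (seconds) of the static characterization procedure,
    from equation t_sta = A * (n_sta + 2 * n_pp * (n_sta + n_avg)) / f_WFS."""
    return A * (n_sta + 2 * n_pp * (n_sta + n_avg)) / f_wfs

# AOLI values: 241 actuators, n_sta=2, n_pp=25, n_avg=1,
# WFS sampling period 16.243 ms -> about 595 s, i.e. almost 10 minutes
t = static_char_time(A=241, n_sta=2, n_pp=25, n_avg=1, f_wfs=1 / 0.016243)
```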
\subsection{Dynamic characterization}
\label{sec:dyn}
The dynamic characterization procedure is used to obtain the impulse response of the AO system. Unlike the influence function obtained during the static characterization, the impulse response also characterizes the system in the time domain.
The information provided by the impulse response can be useful for designing a proper PID control loop, that is, setting adequate values for the $K_p$, $K_i$ and $K_d$ parameters. Another application of the measured impulse responses is to determine the number of WFS samples that the AO system needs to reach a stable state after an actuation vector is sent to the DM, which sets a lower limit on the $n_{sta}$ parameter of the static characterization procedure.
The dynamic characterization process consists of two stages. In the first stage, a pre-defined sequence of actuations $d_{in}[j]$ (with $1 \le j \le n_{in}$) is introduced into one actuator of the DM, and the resulting WFS output $\mathbf{d}_{out}[j] = \left [ d_{out,1}[j],...,d_{out,M}[j] \right ]$ (where $M$ is the number of elements at the output of each WFS sample) is recorded as each element of the input sequence is introduced. After that, a post-processing stage is executed in order to obtain the impulse response of the system $\mathbf{h}[j] = \left [h_1[j],...,h_M[j] \right ]$. The specific post-processing that needs to be applied depends on the nature of the selected input sequence.
\subsubsection{On-line stage}
The steps required to perform the on-line stage of the dynamic characterization over one DM actuator are explained in Appendix~\ref{sec:dyn_app}. There are three configurable parameters in this algorithm: $n_{sta}$ (stabilize repetitions), $n_{avg}$ (average repetitions) and $t_{RTC}$ (duration of each iteration of the RTC algorithm).
It is required that $n_{sta} > 0$ because otherwise the system will not have entered a stationary state by the time the first output sample is read. Setting $n_{avg} > 1$ improves the signal-to-noise ratio, though the same effect can be achieved just by increasing the length of the input signal (i.e., increasing $n_{in}$), provided that the selected input signal is not a plain impulse. The parameter $t_{RTC}$ must represent the amount of time that each iteration of the RTC algorithm takes, otherwise there would be a discrepancy between the complete AO system and the system being measured. The output signal $\mathbf{d}_{out}$ is the one that will be processed during the off-line stage so as to obtain the impulse response $\mathbf{h}[j]$.
As happened in the static characterization, the input parameters of the dynamic characterization have an effect on its execution time $t_{dyn}$, which may be calculated as follows:
\begin{equation}
\label{eq:dyn_time}
t_{dyn} = \frac{n_{in} (n_{sta} + n_{avg})}{f_{WFS}}
\end{equation}
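Evaluating equation (\ref{eq:dyn_time}) with the AOLI parameters given in Section~\ref{sec:dyn_res}:

```python
def dynamic_char_time(n_in, n_sta, n_avg, f_wfs):
    """Duration (seconds) of the on-line stage of the dynamic
    characterization, t_dyn = n_in * (n_sta + n_avg) / f_WFS."""
    return n_in * (n_sta + n_avg) / f_wfs

# AOLI values: MLS of length 511, n_sta=2, n_avg=10,
# WFS sampling period 16.243 ms -> almost 100 seconds
t = dynamic_char_time(n_in=511, n_sta=2, n_avg=10, f_wfs=1 / 0.016243)
```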
\subsubsection{Off-line stage}
The calculations to be done in the off-line stage depend on the nature of the chosen input sequence. Among the different types of input sequences, we tested three of the best-known ones: an impulse, white noise and maximum length sequences (MLS) \citep{borish1983efficient, rife1989transfer}. The tests were executed both in computer simulations \citep{2015inac.conf...51C} and with the actual AOLI instrument. The authors finally chose the MLS method because it performed better than the others, producing more accurate, less noisy impulse responses with shorter measurement times.
The MLSs were generated with LFSRs (Linear Feedback Shift Registers) using the taps proposed by \cite{ward2012table}. These LFSRs work with the values $-1$ and $1$ instead of $0$ and $1$, thus producing a pseudo-random sequence containing positive and negative pulses. The input sequence is multiplied by a constant $a_{val}$ in order to control the magnitude of the actuation, in a similar way as was done during the static characterization. The length of the MLS sequence $n_{in}$ depends on the MLS order $m$ as $n_{in} = 2^m - 1$.
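A sketch of the generation with a Fibonacci LFSR. The default taps $(9, 5)$ correspond to the primitive polynomial $x^9 + x^5 + 1$, one of the maximal-length taps tabulated by \cite{ward2012table}; whether AOLI used these exact taps for $m = 9$ is not stated, so take them as an example:

```python
def mls_sequence(m=9, taps=(9, 5)):
    """Generate a +-1 maximum length sequence of length 2**m - 1 with a
    Fibonacci LFSR. `taps` are 1-indexed bit positions of a primitive
    feedback polynomial; (9, 5) is maximal for m = 9."""
    n = 2 ** m - 1
    state = 1  # any nonzero seed works
    seq = []
    for _ in range(n):
        out = (state >> (m - 1)) & 1  # output the MSB
        seq.append(1 if out else -1)
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & n  # n == 2**m - 1 is also the bit mask
    return seq
```

A true MLS in $\pm1$ form is nearly balanced ($|\sum_j d[j]| = 1$) and has a circular autocorrelation of exactly $-1$ at every nonzero lag, which is precisely the property exploited by the cross-correlation recovery below.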
When the input sequence is an MLS, one can apply the cross-correlation between the MLS input and output in order to get the impulse response of the $i$th element of the WFS output vector (normally a Zernike mode):
\begin{equation}
\label{eq:dyn_mls}
h_i[j] = \frac { \mathcal{F}^{-1} \left ( \mathcal{F} \left ( d_{out,i}[j] \right ) \mathcal{F} \left ( d_{in}[j] \right )^\ast \right ) }{n_{in}(a_{val})^2}
\end{equation}
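Equation (\ref{eq:dyn_mls}) maps directly onto FFT calls; a NumPy sketch of the off-line recovery:

```python
import numpy as np

def impulse_response_mls(d_out, d_in, a_val):
    """Recover the impulse response from an MLS measurement via
    circular cross-correlation in the frequency domain, as in
    equation (eq:dyn_mls). `d_in` is the +-a_val MLS actually sent and
    `d_out` the recorded output of one WFS element."""
    n = len(d_in)
    H = np.fft.fft(d_out) * np.conj(np.fft.fft(d_in))
    return np.real(np.fft.ifft(H)) / (n * a_val ** 2)
```

Because the circular autocorrelation of a $\pm1$ MLS is $n_{in}$ at lag zero and $-1$ elsewhere, the recovered response equals $((n_{in}+1)\,h[j] - \sum_l h[l])/n_{in}$ exactly, i.e. $h[j]$ up to a small constant offset that vanishes as $n_{in}$ grows.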
\section{Results}
\label{sec:results}
The AO system described in the previous sections was extensively tested under different conditions before going to the telescope. Finally, in May 2016 the full AOLI instrument was moved to the William Herschel Telescope, where it saw first light on 22nd May 2016. On that night, we managed to close the AO control loop on a natural sky star with the new TP3-WFS, though with limited performance due to unexpected alignment issues. With the lessons learned during that night, further AO-related results were gathered in another commissioning run in October 2016, closing the loop once again but this time on several targets. In this section we will present the AO-related results, obtained both during laboratory and telescope tests.
\subsection{Static characterization}
\label{sec:sta_res}
During laboratory tests, the influence matrix itself proved to be an invaluable tool to learn how to configure the TP3-WFS, which had never been used on an AO system before. Given the fact that the WFR software was designed to also output the measured wavefronts as 2-D surfaces (calculated from the 1-D Zernike vectors), it was easy to determine whether the measurements were being done correctly just by representing each actuator response obtained during the static characterization (Section~\ref{sec:sta}) and comparing it with the expected response.
In a system that works correctly, the 2-D representation of each actuator response should contain a peak that represents the position of the actuator as seen by the WFS. In the case of the TP3-WFS, one just has to ensure that the number of reconstructed Zernike modes is large enough to achieve a resolution that allows identifying such peaks. The specific number of modes that produce such resolution can be obtained either by computer simulations or by performing a sweep over the number of reconstructed modes with the test equipment. In the case of AOLI, the second option was used.
The TP3-WFS was configured as shown in Figure~\ref{fig:wfs_conf}. The processed pupil regions measure $80\times80$ pixels each, the radius of the Zernike functions used during the precalculations was set to 25 pixels, the number of reconstructed Zernike modes was 153 and the number of Radon angles was 31. Given that the area of interest is only that of the pupils, the frame grabbing software was configured to read only the 90 scan lines where the pupils were located, in an attempt to maximize the sampling rate.
\begin{figure*}
\centering
\includegraphics[scale=1.0]{wfs_conf}
\caption {Configuration of the TP3-WFS, represented over an actual image read from the WFS camera while pointing to a natural star. The red squares correspond to the extracted pupil regions, while the green circles represent the size of the Zernike functions used for precalculating the slope maps. The background image does not correspond to a full frame but to a limited number of scan lines, in an attempt to maximize the camera sampling rate by reading only the region of interest.}
\label{fig:wfs_conf}
\end{figure*}
The rationale for configuring the pupil regions, the Zernike radius and the Radon angles in the specified way can be found in Section~\ref{sec:wfr}. Regarding the 153 reconstructed modes, we verified that closing the loop in the laboratory with a lower number of modes led to a worse PSF (Point Spread Function), while a higher number of modes did not result in a noticeable improvement. Ignoring the fact that LI would eliminate the need of controlling high-order modes, we decided to keep the specified number of modes with the objective of getting the best out of the AO loop alone.
On the other hand, the static characterization parameters were established as follows: $n_{sta} = 2$, $n_{pp} = 25$, $n_{avg} = 1$ and $a_{val} = 0.1$. This combination of parameters produced a good signal-to-noise ratio even on sky tests, with a reasonable execution time. The sampling period of the WFS camera was set to 16.243 milliseconds, and the number of actuators of the DM was $241$. As a result, static characterizations in AOLI took almost 10 minutes, which is consistent with equation (\ref{eq:sta_time}).
Figure~\ref{fig:sta_res} shows the results of performing the static characterization both in the laboratory (using the instrument calibration source, which simulates the WHT telescope) and with a natural star in the WHT. Instead of representing all the Zernike modes for all the actuators, this figure shows the 2-D surface representation of the modes for a few actuators, in order to ease the interpretation of the result. In the case of the laboratory tests (first row of Figure~\ref{fig:sta_res}) there was no simulated turbulence, while during the measurements with a natural star (second row of Figure~\ref{fig:sta_res}) the atmosphere created a natural turbulence giving a seeing of about 0.9 arcsec.
\begin{figure*}
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[scale=0.7]{sta_calib_101}
\caption{Normal actuator,\\ laboratory}
\label{fig:sta_calib_normal}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[scale=0.7]{sta_calib_183}
\caption{Border actuator,\\ laboratory}
\label{fig:sta_calib_border}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[scale=0.7]{sta_calib_121}
\caption{Center actuator,\\ laboratory}
\label{fig:sta_calib_center}
\end{subfigure}
\par\medskip
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[scale=0.7]{sta_star_101_v2}
\caption{Normal actuator,\\ sky star}
\label{fig:sta_star_normal}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[scale=0.7]{sta_star_183_v2}
\caption{Border actuator,\\ sky star}
\label{fig:sta_star_border}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[scale=0.7]{sta_star_121_v2}
\caption{Center actuator,\\ sky star}
\label{fig:sta_star_center}
\end{subfigure}
\caption{Static characterization results for a selection of actuators, using either a laboratory calibration source or a natural star as AO reference. The actuators are detected at the expected positions. The center actuator is a special case because it is hidden by the central obscuration of the aperture, making it impossible to detect.}
\label{fig:sta_res}
\end{figure*}
The results show that the influence functions obtained in the laboratory and on the sky were approximately equal. Of course, the ones obtained in the laboratory were more accurate because there was no turbulence, so they could be used as a reference to estimate the quality of the measurements performed with natural stars.
The 2-D surface representations in Figure~\ref{fig:sta_res} show the peak caused by each actuator in the cases where one would expect a peak. For example, one would expect a peak in Figures~\ref{fig:sta_calib_normal} and \ref{fig:sta_star_normal} because these actuators correspond to the visible area of the WFS (that is, the areas in grey in Figure~\ref{fig:wfs_conf}). In Figures~\ref{fig:sta_calib_border} and \ref{fig:sta_star_border} one can see the response of an actuator located just at the outer border of the Zernike area marked in green in Figure~\ref{fig:wfs_conf}, which can still be detected because it falls within the pupil region (red square in Figure~\ref{fig:wfs_conf}). Lastly, Figures~\ref{fig:sta_calib_center} and \ref{fig:sta_star_center} show no peaks because the location of this actuator corresponds to the obscured area at the center of the pupils.
The main conclusion of the static characterization tests is that the TP3-WFS is able to reconstruct the high-resolution wavefronts generated by the movement of each individual actuator, even with pupils that have a central obscuration area as in the WHT. Knowing that the pupils in AOLI would have a central obscuration, we prepared an alternative version of the software which calculates annular Zernike modes \citep{Mahajan:81} instead of the regular ones. However, it was not necessary to use it during the commissioning at the telescope because the main software behaved perfectly.
After verifying that the TP3-WFS apparently produced correct measurements with natural reference stars, it remained to be checked whether the same thing could be said about extended objects such as planets. For this purpose, the telescope was pointed to Neptune (apparent diameter = 2.3 arcsec) and the characterization procedure was executed again. The pupil images acquired by the WFS camera (Figure~\ref{fig:nep_pupils}) were a good initial sign because they looked very similar to the ones that had been obtained while pointing to stars of similar apparent magnitudes. Figure~\ref{fig:sta_neptune} shows the result for a single actuator that is neither in the border nor in the center of the pupil region. The fact that the actuator was correctly detected suggests that the TP3-WFS may allow closing the AO loop with extended objects. This hypothesis will be confirmed later in Section~\ref{sec:closed_loop}.
\begin{figure}
\centering
\includegraphics[scale=0.75]{neptune_pupil_2}
\quad
\includegraphics[scale=0.75]{neptune_pupil_1}
\caption{A pair of pupil images while pointing at Neptune, extracted from a single camera frame. These images resemble those obtained while pointing to a natural star.}
\label{fig:nep_pupils}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=1.4]{sta_neptune_116}
\caption{Static characterization result of a single visible actuator, using Neptune as the AO reference object. The actuator is detected at the expected position, just as happened when referencing with stars.}
\label{fig:sta_neptune}
\end{figure}
\subsection{Dynamic characterization}
\label{sec:dyn_res}
The dynamic characterization procedure described in Section~\ref{sec:dyn} was performed in the laboratory. The same results can be expected under all conditions, including sky observations.
The parameters of the dynamic characterization were established as follows: $n_{sta} = 2$, $n_{avg} = 10$, $a_{val} = 0.05$ and $t_{RTC} = 0$ seconds. These parameters were set this way because of the reasons already outlined in Section~\ref{sec:dyn}. The simulated delay of the RTC algorithm ($t_{RTC}$) was set to 0 because, in AOLI, this delay is several orders of magnitude lower than the rest of the delays of the system (just a few tens of microseconds). The sampling period of the WFS camera was set to 16.243 milliseconds, and only one actuator was measured: the one at the centre of the DM. The MLS order was set to $m = 9$, so the length of the input sequence was $n_{in} = 511$. As a result of the chosen parameters, dynamic characterizations in AOLI took almost 100 seconds, which is consistent with equation (\ref{eq:dyn_time}).
Figure~\ref{fig:impulse} shows the impulse response of the actuator selected for this test. The impulse response of an actuator is actually a vector of Zernike modes, but in this figure only the defocus mode was represented. This choice is justified by the fact that, for an actuator located at the center of the pupil, only the circular modes (defocus, primary spherical, secondary spherical, etc.) have enough signal level over the noise. From the point of view of the timing analysis, the specific mode that is chosen does not matter as long as it provides enough S/N ratio. For other purposes different than this analysis, one could measure the response of any actuator over any mode by executing the dynamic characterization over a long enough time, but the shape of the resulting impulse response would be the same.
\begin{figure}
\centering
\includegraphics[scale=0.7]{impulse}
\caption {Impulse response of the center actuator, defocus mode. The response is distributed over the first two samples. This behaviour can be explained by carefully analyzing the timing of each element in the AO processing pipeline.}
\label{fig:impulse}
\end{figure}
In Figure~\ref{fig:impulse}, the first feature that attracts attention is the fact that the impulse response only has signal in two of the samples; the rest contain only noise. The rationale behind this behaviour is hard to grasp unless one has some extra information about the timing of the system, which is exactly what Figure~\ref{fig:timing_diag} illustrates. This figure represents a timeline of the tasks that occur during the dynamic characterization of the system, using actual timing data from AOLI.
\begin{figure*}
\centering
\includegraphics[scale=0.8]{timing_diag}
\caption {Timing diagram of the insertion of an impulse into the AO system, which explains the obtained impulse response. Only the top four delaying elements (mirror response time, integration period, CCD readout time and wavefront reconstruction latency) have been depicted in this diagram. Other elements such as the communication latency between the programs of the AO processing pipeline have not been drawn because they are not significant when compared to the rest of the delays. In the diagram, each color in the intermediate lines corresponds to the events that are triggered with the arrival of each sample (i.e., an image from the WFS camera).}
\label{fig:timing_diag}
\end{figure*}
The first conclusion that can be extracted from Figure~\ref{fig:timing_diag} is that the transients of the DM cannot be observed by the WFS camera. Given that the integration period is much longer than the settling time of the DM, transients are averaged out during the integration process, and the camera sees approximately the same signal it would see with a perfectly steep input. That is, from the point of view of the WFS, the DM responds instantly.
Further inspection of Figure~\ref{fig:timing_diag} reveals that the reason why the response to an impulse is divided into two samples is related to the instant within the integration period at which the actuation occurs, which in turn depends on the rest of the delays of the system. For a system with timings similar to those of AOLI, it makes sense that the impulse response has at most two samples different from zero. The actual position and relative amplitude of these samples only depend on the accumulated delay of the tasks that are part of the plant of the system under control.
\subsection{Closed loop operation on the telescope}
\label{sec:closed_loop}
After gathering the required information and expertise to close the control loop with the TP3-WFS in laboratory tests, we managed to successfully close the control loop on a natural star during the first commissioning night of the complete AOLI instrument in May 2016, in spite of the bad atmospheric conditions (seeing between 1 and 2 arcsec) and an unexpected alignment problem that was discovered just as the telescope was pointed to the first star. The effects of this alignment problem, located at the instrument-telescope interface, accumulated during the night and made it impossible to close the loop reliably on other targets. In addition, the conclusions that could be extracted from the first target were limited because the acquisition parameters of the science camera had been accidentally set in such a way that the detector saturated in closed-loop operation, causing the so-called blooming effect.
The following commissioning run took place in October 2016. This time considerable effort was put into ensuring a good alignment between the optical axes of the instrument and the telescope. One of the methods used to align the AO subsystem with the telescope during daytime was to open its petals, turn on the interior dome lights, and then perform an iterative re-adjustment of the instrument so that both the telescope and the calibration unit produced the defocused pupil images at the same positions of the WFS detector, regardless of the defocus distances.
Despite the improved alignment, it was determined that static characterizations performed with the instrument calibration unit can degrade the control performance when used with natural stars. The reason is that the slightest discrepancies between the optical axis at the output of the calibration unit and at the output of the telescope make the incoming light fall within different areas of the DM surface and the WFS detector, thus changing the static response of the AO system. Therefore, it is highly recommended to be cautious about this and spend some minutes calibrating with a natural star at least once at the beginning of the night. The calibration process can be accelerated by choosing a bright reference star, which allows setting a fast AO sampling rate while still obtaining a good S/N.
During the telescope tests, the TP3-WFS was configured as explained in Section~\ref{sec:sta_res}. Depending on the magnitude of the AO reference object, the exposure time of the WFS camera and the number of scan lines were tuned to set a proper AO sampling rate, following the guidelines given in Section~\ref{sec:fg_sw}. Regarding the configuration of the control loop, most of the tests were performed discarding the tip-tilt out of the 153 measured modes. The rationale behind this decision can be found in Section~\ref{sec:rtc_sw}. Even though the PID parameters had already been tuned in the laboratory using the information obtained during the dynamic characterization, they were further tuned using the real turbulence from the sky, seeking a fast and stable response. The PID parameters were finally set as follows: $0.4 \leq K_p \leq 0.8$ (depending on the atmospheric conditions), $K_i = 0.4 \cdot f_{WFS}$ and $K_d = 0$ for all the 151 controlled modes.
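As an illustration, the per-mode control law with the reported gains can be sketched as a discrete PID update. The snippet below is a minimal sketch with our own naming, assuming a standard parallel PID form; the actual RTC implementation may differ.

```python
import numpy as np

def pid_step(error, integral, f_wfs, kp=0.6, ki_factor=0.4):
    """One discrete PID update per controlled mode, using the on-sky gains
    K_p in [0.4, 0.8], K_i = 0.4 * f_WFS and K_d = 0 (no derivative term).
    `error` is the modal error vector and `integral` its running time
    integral; both are updated once per WFS sample."""
    dt = 1.0 / f_wfs                  # AO sampling period
    integral = integral + error * dt  # accumulate the integral term
    command = kp * error + (ki_factor * f_wfs) * integral
    return command, integral
```

Note that with $K_i = 0.4 \cdot f_{WFS}$ the integral contribution reduces to $0.4$ times the plain sum of the modal errors, independently of the sampling rate.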
Figure~\ref{fig:rms_HD207470} shows the root-mean-square (RMS) plots of the error of the reconstructed wavefronts when using HD207470 (I magnitude = 7.5) as AO reference star. The error is calculated with respect to the control target, always set to a $0$-vector of Zernike modes with the exception of a manually-tuned defocus term that compensates the NCPA between the WFS camera and the science camera. When comparing open loop and closed loop operation, Figure~\ref{fig:rms_HD207470} shows an improvement factor of 5.23 in the average RMS, as well as an improvement factor of 2.03 in the standard deviation, which constitutes clear evidence that the control loop is operating correctly.
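The quoted improvement factors are simply ratios of open-loop to closed-loop statistics of the RMS error series; a minimal sketch of the computation (our naming):

```python
import numpy as np

def rms_improvement(rms_open, rms_closed):
    """Improvement factors between open-loop and closed-loop RMS error
    series: the ratio of their averages and the ratio of their
    standard deviations."""
    rms_open, rms_closed = np.asarray(rms_open), np.asarray(rms_closed)
    return (np.mean(rms_open) / np.mean(rms_closed),
            np.std(rms_open) / np.std(rms_closed))
```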
\begin{figure}
\centering
\includegraphics[scale=0.7]{rms_all_legend_HD207470}
\caption{RMS error of measured wavefront along time, using HD207470 as AO reference. Closing the control loop reduces both the average RMS error and its standard deviation.}
\label{fig:rms_HD207470}
\end{figure}
A more in-depth analysis was performed over the same AO reference star, obtaining statistics for each measured Zernike mode. The results are shown in Figures~\ref{fig:modes_ol} and~\ref{fig:modes_cl}. These figures represent the average values and the standard deviation of the error in the first 50 modes (excluding tip and tilt) in open loop and closed loop operation, respectively. The modes have been ordered using the Noll notation \citep{noll1976zernike}. Once again, these plots confirm the correct operation of the control loop, as the average value and standard deviation of each mode are reduced when the loop is closed. It was also checked that modifying a mode in the control target vector indeed changed the average value measured for that mode by the specified amount.
\begin{figure}
\centering
\includegraphics[scale=0.7]{HD207470_modes_ol}
\caption{Mode-by-mode analysis of HD207470, open loop.}
\label{fig:modes_ol}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.7]{HD207470_modes_cl}
\caption{Mode-by-mode analysis of HD207470, closed loop.}
\label{fig:modes_cl}
\end{figure}
Similar results were obtained from AO reference targets other than HD207470. Such results are summarized in Table~\ref{tab:rtc_perf}. The RMS error plots for each target are analogous to the ones presented in Figure~\ref{fig:rms_HD207470}, but modulated by the different RMS averages and standard deviations. For example, the closed-loop error plot for HIP10644 would be shifted down with respect to the plot for HD207470, and it would be less noisy. In turn, the lower noise would result in shorter error bars in the mode-by-mode analysis.
\begin{table}
\small
\centering
\begin{tabular}{lcccc}
\toprule
AO ref. & Mag. & $t_{WFS}$ & RMS avg. & RMS std. \\
\midrule
HIP10644 & $4.2$ (\textit{I}) & $15.2$ ms & $1.35 \times 10 ^{-3}$ & $2.16 \times 10 ^{-4}$\\
HDS389AB & $\approx$ $5.5$ (\textit{I}) & $15.2$ ms & $1.71 \times 10 ^{-3}$ & $3.84 \times 10 ^{-4}$\\
HD207470 & $\approx$ $7.5$ (\textit{I}) & $21.5$ ms & $3.57 \times 10 ^{-3}$ & $8.46 \times 10 ^{-4}$\\
Neptune & 7.7 (int.) & $15.2$ ms & $2.93 \times 10 ^{-3}$ & $7.09 \times 10 ^{-4}$\\
\bottomrule
\end{tabular}
\caption{Control performances for some AO reference objects. The table shows how the control performance depends on the magnitude of the reference object.}
\label{tab:rtc_perf}
\end{table}
One conclusion of Table~\ref{tab:rtc_perf} is that the performance of the control loop depends on the magnitude of the reference object, just as expected from any AO system. Unfortunately, due to a hardware failure that was discovered after the last commissioning run, it was not possible to perform an empirical study of the faintest magnitude of the reference star with which the AO subsystem can operate reliably. One of the EMI (electromagnetic interference) filters of the acquisition card of the WFS camera was physically damaged, resulting in a considerable degradation of the sensitivity of the camera. The field study of the performance of the control loop under extremely low light levels is therefore postponed to future commissioning nights in which this problem will hopefully be fixed.
Even though AOLI was not originally intended to do science with extended objects, Table~\ref{tab:rtc_perf} also presents the control performances obtained while pointing to Neptune, which confirm the ability of the TP3-WFS to work with extended objects. Further confirmation is presented in Figure~\ref{fig:nep_coords}, which plots the coordinates of Neptune on the science camera when the tip and tilt modes are controlled along with the rest of the modes. This figure is clear evidence that the tip and tilt modes are being controlled correctly: the standard deviation of the euclidean distance to the mean position is 5.02 pixels in open loop operation, while in closed loop it is only 0.66 pixels. Regarding the rest of the modes, due to the lack of features on the surface of Neptune, the only possible test was to adjust the control target of the defocus mode and visually check that it indeed had a proportional effect on the blurriness of the science image.
\begin{figure}
\centering
\includegraphics[scale=0.7]{neptune_coords}
\caption{Coordinates of Neptune over the science camera. The dispersion of such coordinates is clearly reduced when applying AO with tip-tilt control activated.}
\label{fig:nep_coords}
\end{figure}
The results presented so far demonstrate the suitability of the TP3-WFS as the input of an adaptive optics control loop, which was the main purpose of this paper. Nevertheless, further preliminary results will be presented in the following paragraphs regarding the improvement obtained in the images acquired by the science camera. A more comprehensive analysis of the results at the science segment of AOLI, including the application of the lucky imaging algorithm, is scheduled as future work.
Figure~\ref{fig:control_sci} shows a set of science images of HIP10644, in open and closed loop, all acquired with 30 ms exposure time. By looking at the speckles, it is clear that the RTC algorithm is effectively reducing the effect of atmospheric aberrations on the science image. The average value of the maximum pixel of each image is 846 in open loop operation, while in closed loop the average value is 1577. Taking the peak pixel value as a proxy for the Strehl ratio, this corresponds to an average improvement factor of 1.864.
\begin{figure*}
\centering
\begin{subfigure}{0.24\textwidth}
\centering
\includegraphics[scale=0.8]{HIP_10644_worst_ol}
\caption{Open loop (worst)}
\label{fig:worst_ol}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}
\centering
\includegraphics[scale=0.8]{HIP_10644_typ_ol}
\caption{Open loop (typical)}
\label{fig:typ_ol}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}
\centering
\includegraphics[scale=0.8]{HIP_10644_typ_cl}
\caption{Closed loop (typical)}
\label{fig:typ_cl}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}
\centering
\includegraphics[scale=0.8]{HIP_10644_best_cl}
\caption{Closed loop (best)}
\label{fig:best_cl}
\end{subfigure}
\caption{Images of HIP10644 with and without AO corrections, showing worst, typical and best images according to the maximum pixel value. It seems clear that the closure of the control loop has a positive impact on the quality of the acquired images.}
\label{fig:control_sci}
\end{figure*}
Finally, Figure~\ref{fig:cdf} plots the probability of obtaining an image whose peak pixel value is higher than a given value. The probabilities have been estimated from the images of HIP10644. This figure is especially interesting from the point of view of lucky imaging because it represents the odds of getting a lucky image. For example, the percentage of images whose maximum pixel value is over 2000 would be around 1\% in open loop, while in closed loop it would be as high as 46.5\%. This could mean an improvement from the point of view of the LI algorithm, which may be able to produce similar results with a significantly lower number of processed images. This hypothesis, among others, will be thoroughly tested in future works that focus on the science segment of AOLI.
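The probabilities plotted in Figure~\ref{fig:cdf} are empirical exceedance frequencies of the per-image peak pixel value; a minimal sketch of the estimator (our naming):

```python
import numpy as np

def exceedance_prob(peak_values, threshold):
    """Empirical probability that the peak pixel value of a short
    exposure exceeds `threshold`, estimated from a stack of images."""
    peak_values = np.asarray(peak_values)
    return np.count_nonzero(peak_values > threshold) / peak_values.size
```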
\begin{figure}
\centering
\includegraphics[scale=0.7]{cdf_interp}
\caption{Probability of obtaining an image whose maximum pixel value is higher than a given value, calculated from HIP10644. According to this plot, closing the AO loop gives a higher probability of getting an image of a given quality.}
\label{fig:cdf}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
This paper has presented the first use of the TP3-WFS in a real-time adaptive optics system, with successful results. The analyses, practical considerations and results presented in this paper thus pave the way for new developments beyond AOLI that wish to use this new type of wavefront sensor, which theoretically provides better sky coverage than previously existing wavefront sensors. The empirical study needed to confirm this last claim is scheduled for future commissioning runs.
The closure of the control loop improved the average RMS of the 151 controlled modes by a factor of 5.23, which resulted in a clear improvement in the quality of the acquired speckle images, whose average Strehl ratio was increased by a factor of 1.864. In addition, it was confirmed that the reconstruction technique used by the TP3-WFS is able to work with extended objects such as planets, even though such targets fall outside the initial purpose of AOLI.
The presented AO-only results are encouraging because they represent just a fraction of what can be achieved by the instrument with the combination of the AO and LI parts. In this regard, future works will focus on thoroughly analyzing the LI performance (open loop vs. closed loop). The preliminary analyses presented in this paper point to a significant improvement in the percentage of lucky images, anticipating a positive outcome also on the science segment.
Regarding the future tests that will be executed to measure the faintest magnitude with which the AO control loop can be closed reliably, two main strategies will be used to maximize the sensitivity of the TP3-WFS. First, the radius of the pupil images over the detector will be reduced so as to better concentrate the incoming flux. As explained previously in the article, this would improve the S/N ratio at the expense of a reduction in the spatial resolution of the sensed wavefront. The loss of resolution will not pose a problem for AOLI because it was calculated that compensating just the lowest-order modes would enable the LI algorithm to work as expected. The second strategy for maximizing the sensitivity would be to use the EMCCDs in photon counting mode instead of the regular imaging mode that was used during the tests described in this article, thus matching the worst case scenario that was assumed in computer simulations. We expect to reach the required limiting magnitude for AO (mag. 16 in the \textit{I} band) by combining these two ideas.
\section*{Acknowledgements}
This work was supported by the Spanish Ministry of Economy under the projects AYA2011-29024, ESP2014-56869-C2-2-P, ESP2015-69020-C2-2-R and DPI2015-66458-C2-2-R, by project 15345/PI/10 from the Fundaci\'on S\'eneca, by the Spanish Ministry of Education under the grant FPU12/05573, by project ST/K002368/1 from the Science and Technology Facilities Council and by ERDF funds from the European Commission. The results presented in this paper are based on observations made with the William Herschel Telescope operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrof\'isica de Canarias. Special thanks go to Lara Monteagudo and Marcos Pellejero for their timely contributions.
\bibliographystyle{mnras}
\section{Introduction}
\IEEEPARstart{M}{atrix} recovery considers the problem of reconstructing a data matrix from a small number of measurements.
This problem has many applications, such as
recommendation systems \cite{Goldberg92usingcollaborative}, X-ray crystallography \cite{Harrison93crystallography}, quantum tomography \cite{article17kliesch} and blind deconvolution \cite{zhang2018multichannel}.
Though it is an ill-posed problem, tractable recovery is possible when the matrix admits certain low dimensional structures. A typical example is low rank matrix recovery, where the target data matrix is assumed to be low rank.
Many computationally efficient approaches for low rank matrix recovery have been developed in the past decade, including convex as well as non-convex methods; see \cite{davenport2016overview,Cai2018ExploitingTS, chen2018harnessing,chi2019nonconvex} and references therein.
Motivated by blind super-resolution of point sources, we study a different low rank structured matrix recovery problem, where the target matrix can be modeled as $\mX^\natural = \sum_{k=1}^r d_k \bfm h} \def\mH{\bfm H} \def\H{\mathbb{H}_k \bfm a} \def\mA{\bfm A} \def\A{\mathbb{A}_{\tau_k}^\mathsf{T}\in\C^{s\times n}$. Here $\bfm h} \def\mH{\bfm H} \def\H{\mathbb{H}_k$ is a normalized vector such that $\twonorm{\bfm h} \def\mH{\bfm H} \def\H{\mathbb{H}_k}=1$ and $\bfm a} \def\mA{\bfm A} \def\A{\mathbb{A}_{\tau}$ is a vector defined as
$$\bfm a} \def\mA{\bfm A} \def\A{\mathbb{A}_\tau := \begin{bmatrix}
1 & e^{-2\pi i\tau\cdot 1} &\cdots & e^{-2\pi i\tau\cdot (n-1)}
\end{bmatrix}^\mathsf{T}.
$$
The linear measurements of $\mX^\natural$ are given by
\begin{align}
\label{measurements}
	\bfm y} \def\mY{\bfm Y} \def\Y{\mathbb{Y} = {\cal A}} \def\cA{{\cal A}(\mX^\natural),
\end{align}
where ${\cal A}} \def\cA{{\cal A}:\C^{s\times n} \rightarrow \C^n$ is a linear operator that performs the linear observation $\bfm y} \def\mY{\bfm Y} \def\Y{\mathbb{Y}[j] = \left\langle \bfm b} \def\mB{\bfm B} \def\B{\mathbb{B}_j\ve_j^\mathsf{T}, \mX^\natural\right\rangle
$ and $\bfm b} \def\mB{\bfm B} \def\B{\mathbb{B}_j\in\C^s$ is a random vector.
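Each measurement thus reduces to an inner product between the known random vector and the corresponding column of the data matrix. A minimal numpy sketch of the measurement operator and its adjoint (our naming; the matrix B stacks the random vectors as columns):

```python
import numpy as np

def measure(X, B):
    """y[j] = <b_j e_j^T, X> = b_j^H x_j, where b_j and x_j are the j-th
    columns of B and X respectively."""
    return np.einsum('ij,ij->j', B.conj(), X)

def measure_adjoint(y, B):
    """A*(y) = sum_j y[j] b_j e_j^T: scale the j-th column of B by y[j]."""
    return B * y[np.newaxis, :]
```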
Let ${\cal H}} \def\cH{{\cal H}$ be the vectorized Hankel lift operator \cite{chen2020vectorized} which maps a matrix $\C^{s\times n}$ into an $sn_1\times n_2$ vectorized Hankel matrix,
\begin{align*}
{\cal H}} \def\cH{{\cal H}(\mX)= \begin{bmatrix}
\bfm x} \def\mX{\bfm X} \def\X{\mathbb{X}_1&\cdots &\bfm x} \def\mX{\bfm X} \def\X{\mathbb{X}_{n_2}\\
\vdots &\ddots &\vdots\\
	\bfm x} \def\mX{\bfm X} \def\X{\mathbb{X}_{n_1} &\cdots &\bfm x} \def\mX{\bfm X} \def\X{\mathbb{X}_{n}
	\end{bmatrix},
\end{align*}
where $\bfm x} \def\mX{\bfm X} \def\X{\mathbb{X}_i\in\C^s$ is the $i$-th column of $\mX$ and $n_1+n_2 = n+1$.
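A minimal numpy sketch of the vectorized Hankel lift (our naming), which stacks the columns of $\mX$ into overlapping blocks:

```python
import numpy as np

def vec_hankel(X, n1):
    """Vectorized Hankel lift: map an s-by-n matrix X to the s*n1-by-n2
    block-Hankel matrix whose (i, j) block is the column x_{i+j}."""
    s, n = X.shape
    n2 = n + 1 - n1
    Z = np.empty((s * n1, n2), dtype=X.dtype)
    for i in range(n1):
        for j in range(n2):
            Z[i * s:(i + 1) * s, j] = X[:, i + j]
    return Z
```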
We will investigate the matrix recovery problem based on the low rank structure of the vectorized Hankel matrix ${\cal H}} \def\cH{{\cal H}(\mX^\natural)$. Specifically, we assume that
\begin{align}
\label{assumption: low rank}
\rank\left( {\cal H}} \def\cH{{\cal H}(\mX^\natural)\right) = r.
\end{align}
We are particularly interested in the conditions under which the data matrix can be recovered from a small number of linear measurements via the following non-convex
recovery problem
\begin{align}
\label{eq: nonconvex recovery procedure}
	\min_{\mX\in\C^{s\times n}} \frac{1}{2} \twonorm{\bfm y} \def\mY{\bfm Y} \def\Y{\mathbb{Y} - {\cal A}} \def\cA{{\cal A}(\mX)}^2\text{ subject to }\rank({\cal H}} \def\cH{{\cal H}(\mX)) = r.
\end{align}
We aim to solve this problem in a fast and provable way, which leads us to the fast iterative hard thresholding algorithm.
\subsection{Related work}
The blind super-resolution problem has been explored in many works \cite{chen2020vectorized,chi2016guaranteed, li2019atomic, yang2016super}. More specifically, an atomic norm minimization method has been proposed in \cite{li2019atomic,yang2016super} to solve the blind super-resolution problem. Recently, Chen et al.
\cite{chen2020vectorized} proposed a nuclear norm minimization method called Vectorized Hankel Lift to recover $\boldsymbol{X}^\natural$. All of these recovery methods solve convex optimization problems, which can be handled by many sophisticated algorithms such as interior point methods, gradient descent, etc. However, a common drawback of the convex optimization approach is the high computational complexity of solving the equivalent semi-definite program.
\subsection{Our contribution}
In this paper, we consider a non-convex recovery approach for the blind super-resolution problem \eqref{eq: nonconvex recovery procedure}.
Here, the objective function is a standard least squares function which minimizes the distance from the observed point sources to the recovered ones, while the constraint enforces a low rank structure in the transform domain. We propose a non-convex algorithm called Vectorized Hankel Lifted Fast Iterative Hard Thresholding (VHL-FIHT for short) to solve the low rank structured matrix recovery problem \eqref{eq: nonconvex recovery procedure}. The algorithm is presented in Algorithm~\ref{algo:fiht}.
We also establish a linear convergence rate for this method, provided that the number of measurements is of order ${\cal O}} \def\cO{{\cal O}(s^2 r^2\log^2(sn))$ and a suitable initialization is given.
\subsection{Notations and preliminaries}
The vectorized Hankel matrix ${\cal H}} \def\cH{{\cal H}(\mX^\natural)$ admits the following low rank decomposition \cite{chen2020vectorized}
\begin{align}
\label{def:hankel}
{\cal H}} \def\cH{{\cal H}(\mX^\natural) = \left(\mE_L \odot \mH \right) \diag\left(d_1,\ldots, d_r\right) \mE_R^\mathsf{T},
\end{align}
where $\odot$ is the Khatri-Rao product of matrix and the matrices $\mE_L\in \C^{n_1\times r}$ as well as $\mE_R\in \C^{n_2\times r}$ are full column rank matrices.
The adjoint of $\mathcal{H}$, denoted $\mathcal{H}^{*}$, is a linear mapping from $s n_{1} \times n_{2}$ matrices to matrices of size $s \times n$. In particular, for any matrix $\boldsymbol{Z} \in \mathbb{C}^{s n_{1} \times n_{2}}$, the $i$-th column of $\mathcal{H}^{*}(\boldsymbol{Z})$ is given by
\begin{align*}
{\cal H}} \def\cH{{\cal H}^\ast(\mZ) \ve_{i}=
\sum_{(j,k)\in{\cal W}} \def\cW{{\cal W}_i }\bfm z} \def\mZ{\bfm Z} \def\Z{\mathbb{Z}_{j, k}
\end{align*} where $\bfm z} \def\mZ{\bfm Z} \def\Z{\mathbb{Z}_{j,k}= \mZ[js:(j+1)s-1, k]$ and ${\cal W}} \def\cW{{\cal W}_i$ is the set \begin{align*}
\left\{(j,k)\mid j+k=i, 0 \leq j \leq n_{1}-1,0 \leq k \leq n_{2}-1\right\}.\end{align*}
Let ${\cal D}} \def\cD{{\cal D}:\C^{s\times n}\rightarrow \C^{s\times n}$ be an operator such that $${\cal D}} \def\cD{{\cal D}(\mX)= \mX \diag\left(\sqrt{w_{0} },\ldots, \sqrt{w_{n-1}}\right)$$ for any $\mX$, where the scalar $w_{i}$ is defined as $w_{i}=\#{\cal W}} \def\cW{{\cal W}_i$ for $i=0, \ldots, n-1$. The Moore-Penrose pseudoinverse of $\mathcal{H}$ is given by $\mathcal{H}^{\dagger}=\mathcal{D}^{-2} \mathcal{H}^{*}$ which satisfies $\mathcal{H}^{\dagger} \mathcal{H}=\mathcal{I}$.
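A minimal numpy sketch of $\mathcal{H}^{*}$ and $\mathcal{H}^{\dagger}=\mathcal{D}^{-2}\mathcal{H}^{*}$ (our naming), where $w_i$ counts the blocks summed on the $i$-th antidiagonal:

```python
import numpy as np

def vec_hankel_adjoint(Z, s, n1):
    """Adjoint of the vectorized Hankel lift: the i-th column of H*(Z)
    sums the s-dimensional blocks z_{j,k} with j + k = i."""
    sn1, n2 = Z.shape
    n = n1 + n2 - 1
    X = np.zeros((s, n), dtype=Z.dtype)
    for j in range(n1):
        for k in range(n2):
            X[:, j + k] += Z[j * s:(j + 1) * s, k]
    return X

def vec_hankel_pinv(Z, s, n1):
    """Moore-Penrose pseudoinverse H^dagger = D^{-2} H*: divide the i-th
    column of H*(Z) by w_i, the number of pairs (j, k) with j + k = i."""
    n2 = Z.shape[1]
    n = n1 + n2 - 1
    w = np.array([min(i, n1 - 1, n2 - 1, n - 1 - i) + 1 for i in range(n)])
    return vec_hankel_adjoint(Z, s, n1) / w
```

With this weighting, applying `vec_hankel_pinv` after the lift recovers the original matrix, i.e. $\mathcal{H}^{\dagger}\mathcal{H}=\mathcal{I}$.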
The adjoint of the operator
$\mathcal{A}(\cdot)$, denoted $\mathcal{A}^{*}(\cdot)$, is defined as $\mathcal{A}^{*}(\boldsymbol{y})=\sum_{j=0}^{n-1} \boldsymbol{y}[j] \boldsymbol{b}_{j} \boldsymbol{e}_{j}^{\top}$. Denote $\boldsymbol{Z} = {\cal H}} \def\cH{{\cal H}{(\boldsymbol{X})}$ and $\mathcal{G}=\mathcal{H} \mathcal{D}^{-1}$. The adjoint of $\mathcal{G}$, denoted $\mathcal{G}^{*}$, is given by $\mathcal{G}^{*}=\mathcal{D}^{-1} \mathcal{H}^{*}$.
\subsection{Organization}
The remainder of this letter is organized as follows. In Section 2, we will introduce VHL-FIHT algorithm. In Section 3, we will introduce two assumptions and establish our main results. The performance of VHL-FIHT is evaluated from numerical experiments in Section 4. In Section 5, we give the detailed proofs of main result. We close with a conclusion in Section 6.
\section{Vectorized Hankel Lifted Fast Iterative Hard Thresholding}
Let $\mZ^t = \mU^t \bSigma^t {\mV^t}^H\in\C^{sn_1\times n_2}$ be the compact singular value decomposition of a rank-$r$ matrix, where $\mU^t\in\C^{sn_1\times r}$, $\mV^t \in\C^{n_2\times r}$ and $\bSigma^t\in\R^{r\times r}$. It is known that the tangent space of the fixed rank-$r$ matrix manifold at $\mZ^t$ is given by
\begin{align*}
	T_t = \left\{ \mU^t\mN^H + \mM{\mV^t}^H: \mM\in\C^{sn_1\times r}, \mN\in\C^{n_2\times r}\right \}.
\end{align*}
Given any matrix $\mW\in\C^{sn_1\times n_2}$, the projection of $\mW$ onto $T_t$ can be computed using the formula
\begin{align*}
	{\cal P}} \def\cP{{\cal P}_{T_t}(\mW) = \mU^t{\mU^t}^H\mW + \mW\mV^t{\mV^t}^H - \mU^t{\mU^t}^H\mW \mV^t{\mV^t}^H.
\end{align*}
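A minimal numpy sketch of the tangent space projection above (our naming):

```python
import numpy as np

def proj_tangent(W, U, V):
    """Project W onto the tangent space of the rank-r manifold at
    Z = U S V^H: P_T(W) = U U^H W + W V V^H - U U^H W V V^H."""
    UUW = U @ (U.conj().T @ W)
    WVV = (W @ V) @ V.conj().T
    return UUW + WVV - U @ (U.conj().T @ W @ V) @ V.conj().T
```

Being an orthogonal projection, this map is idempotent, which gives a convenient sanity check.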
The VHL-FIHT method for solving \eqref{eq: nonconvex recovery procedure} is shown in Algorithm~\ref{algo:fiht}. The initial guess is obtained via a spectral method.
In each iteration of VHL-FIHT, the current estimate $\boldsymbol{X}^t$ is first updated along the gradient descent direction of the objective in \eqref{eq: nonconvex recovery procedure}. Then, the vectorized Hankel matrix corresponding to the update is formed via the application of the vectorized Hankel lift operator ${\cal H}} \def\cH{{\cal H}$, followed by the projection ${\cal P}} \def\cP{{\cal P}_{T_t}$ onto the tangent space $T_t$. Finally, the hard thresholding operator $\mathcal{T}_{r}$ is applied to $\boldsymbol{W}^t$ via a truncated SVD, and ${\cal H}} \def\cH{{\cal H}^\dagger$ is applied to the resulting low rank matrix $\boldsymbol{Z}^{t+1}$.
\begin{algorithm}[htbp]\label{algo:fiht}
\caption{VHL-FIHT}
\hspace*{0.02in} {\bf Input:} Initialization $\mX^0 = {\cal H}} \def\cH{{\cal H}^\dagger{\cal T}} \def\cT{{\cal T}_r{\cal H}} \def\cH{{\cal H}{\cal A}} \def\cA{{\cal A}^\ast(\bfm y} \def\mY{\bfm Y} \def\Y{\mathbb{Y})$.\\
\hspace*{0.02in} {\bf Output:} $\mX^T$
\begin{algorithmic}[1]
\For{$t = 0,1,\dots,T-1$}
\State $\widetilde{\boldsymbol{X}}^t= \boldsymbol{X}^t- {\cal A}} \def\cA{{\cal A}^*({\cal A}} \def\cA{{\cal A}(\boldsymbol{X}^t)-\boldsymbol{y})\label{eq:Xtilde}$
\State $\boldsymbol{W}^t ={\cal P}} \def\cP{{\cal P}_{T_t} {\cal H}} \def\cH{{\cal H}(\widetilde{\boldsymbol{X}}^t)$
\State $\boldsymbol{Z}^{t+1}=\mathcal{T}_{r}(\boldsymbol{W}^t)$
\State $\boldsymbol{X}^{t+1}= {\cal H}} \def\cH{{\cal H}^{\dagger}(\boldsymbol{Z}^{t+1})$
\EndFor
\end{algorithmic}
\end{algorithm}
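The hard thresholding operator $\mathcal{T}_{r}$ used in Algorithm~\ref{algo:fiht} returns the best rank-$r$ approximation via a truncated SVD; a minimal numpy sketch (our naming):

```python
import numpy as np

def hard_threshold(W, r):
    """T_r(W): best rank-r approximation of W via truncated SVD
    (Eckart-Young theorem)."""
    U, s, Vh = np.linalg.svd(W, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vh[:r, :]
```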
\section{Main Result}\label{section:main result}
In this section, we establish our main results. To this end, we make two assumptions.
\begin{assumption}\label{assump:1}
The column vectors $\left\{\boldsymbol{b}_{j}\right\}_{j=0}^{n-1}\subset \C^s$ of the subspace matrix $\boldsymbol{B}^{H}$ are i.i.d random vectors which obey
\begin{align*}
\E{\bfm b} \def\mB{\bfm B} \def\B{\mathbb{B}_j\bfm b} \def\mB{\bfm B} \def\B{\mathbb{B}_j^H} = \mI_s \text{ and }\max _{0 \leq \ell \leq s-1}|\bfm b} \def\mB{\bfm B} \def\B{\mathbb{B}_j[\ell]|^{2} \leq \mu_{0},
\end{align*}
for some constant $\mu_0$. Here, $\bfm b} \def\mB{\bfm B} \def\B{\mathbb{B}_j[\ell]$ denotes the $\ell$-th entry of $\bfm b} \def\mB{\bfm B} \def\B{\mathbb{B}_j$.
\end{assumption}
\begin{assumption}\label{assump:2}
There exists a constant $\mu_{1}>0$ such that
\begin{align*}
\max_{0\leq i \leq n_1-1} \fronorm{\mU_i}^2\leq \frac{\mu_1 r}{n} \text{ and } \max_{0\leq j \leq n_2-1} \twonorm{\ve_j^\mathsf{T} \mV}^2\leq \frac{\mu_1 r}{n},
\end{align*}
where the columns of $\mU\in\C^{sn_1\times r}$ and $\mV\in\C^{n_2\times r}$ are the left and right singular vectors of $\mZ^\natural={\cal H}} \def\cH{{\cal H}(\mX^\natural)$ separately, and $\mU_i = \mU[is:(i+1)s-1]$ is the $i$-th block of $\mU$.
\end{assumption}
Assumption \ref{assump:1} is introduced in \cite{candes2011probabilistic} and has been used in blind super-resolution \cite{chen2020vectorized,chi2016guaranteed,li2019atomic,yang2016super}. Assumption \ref{assump:2} is commonly used in spectrally sparse signal recovery \cite{cai2015fast,cai2018spectral,chen2014robust} and blind super-resolution \cite{chen2020vectorized}.
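For a given matrix, the smallest $\mu_1$ satisfying Assumption~\ref{assump:2} can be computed directly from the singular vectors; a minimal numpy sketch (our naming):

```python
import numpy as np

def incoherence_mu1(U, V, s):
    """Smallest mu_1 satisfying Assumption 2: the larger of the maximum
    squared Frobenius norm of the s-row blocks of U and the maximum
    squared row norm of V, scaled by n / r."""
    r = U.shape[1]
    n1 = U.shape[0] // s
    n2 = V.shape[0]
    n = n1 + n2 - 1
    mu_U = max(np.linalg.norm(U[i * s:(i + 1) * s, :]) ** 2 for i in range(n1))
    mu_V = max(np.linalg.norm(V[j, :]) ** 2 for j in range(n2))
    return max(mu_U, mu_V) * n / r
```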
Now, we are in the position to state our main result.
\begin{theorem}
\label{thm 1}
Under Assumptions \ref{assump:1} and \ref{assump:2}, with probability at least $1- c_1n^{-c_2}$, the iterates generated by VHL-FIHT (Algorithm~\ref{algo:fiht}) with the initial guess
$\boldsymbol{X}^0 = {\cal H}} \def\cH{{\cal H}^\dagger\mathcal{T}_{r} {\cal H}} \def\cH{{\cal H} ({\cal A}} \def\cA{{\cal A}^*(\boldsymbol{y}))$ satisfies
\begin{align}
\fronorm{\mX^{t} - \mX^\natural} \leq \left( \frac{1}{2} \right)^t \fronorm{\mX^0 - \mX^\natural}
\end{align}
provided
\begin{align*}
n\geq C\kappa^2\mu_0^2\mu_1s^2r^2\log^2(sn)/\varepsilon^2,
\end{align*}
where $c_1,c_2$ and $C$ are absolute constants and $\kappa = \sigma_{\max}({\cal H}} \def\cH{{\cal H}{(\mX^\natural)}) / \sigma_{\min}({\cal H}} \def\cH{{\cal H}{(\mX^\natural)})$.
\end{theorem}
\begin{remark}
The sample complexity established in \cite{chen2020vectorized} for the Vectorized Hankel Lift is $n\geq c\mu_0 \mu_1\cdot sr\log^4(sn)$. While our sample complexity has a sub-optimal dependence on $s$ and $r$, our recovery method has a low per-iteration computational cost. Similar to \cite{CAI2019Fast}, the per-iteration computational cost of VHL-FIHT is about ${\cal O}} \def\cO{{\cal O}(r^2sn+rn\log(sn)+r)$. To improve the dependence on $s$, we will investigate other initialization procedures in future work.
\end{remark}
\section{Numerical results}
Numerical experiments are conducted to evaluate the performance of VHL-FIHT. In the experiments, the target matrix $\mX^\natural$ is generated by $\mX^\natural = \sum_{k=1}^r d_k \bfm h} \def\mH{\bfm H} \def\H{\mathbb{H}_k \bfm a} \def\mA{\bfm A} \def\A{\mathbb{A}_{\tau_k}^\mathsf{T}$ and the measurements are obtained by \eqref{measurements}, where the locations $\{\tau_{k}\}_{k=1}^r$ are generated from a standard uniform distribution (i.e., U$(0,1)$), and the amplitudes $\{d_k\}_{k=1}^r$ are generated via $d_k = (1+10^{c_k})e^{-i\psi_k}$, where $\psi_k$ follows U$(0,2\pi)$ and $c_k$ follows U$(0,1)$. Each entry of the subspace matrix $\mB$ is independently and identically generated from a standard normal distribution (i.e., $N(0,1)$). The coefficient vectors $\{\boldsymbol{h}_k\}_{k=1}^r$ are generated from a standardized multivariate Gaussian distribution (i.e., $MVN_s(0,I_{s\times s})$, where $I_{s\times s}$ is the identity matrix). We set the signal dimension $n=256$ and the number of point sources $r=5$. Fig.~\ref{fig:efficiency} presents the logarithmic recovery error
$\log_{10}\left(\fronorm{\mX^t - \mX^\natural}/\fronorm{\mX^\natural}\right)$ with respect to the number of iterations. The numerical experiment shows that VHL-FIHT converges linearly.
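The generation of the target matrix described above can be sketched as follows (our naming; the value of $s$ is not stated in the text, so the default here is our own choice):

```python
import numpy as np

def make_target(n=256, r=5, s=4, seed=0):
    """Generate X = sum_k d_k h_k a_{tau_k}^T following the experimental
    setup: tau_k ~ U(0,1), d_k = (1 + 10^{c_k}) e^{-i psi_k} with
    c_k ~ U(0,1) and psi_k ~ U(0, 2*pi), and h_k ~ N(0, I_s)."""
    rng = np.random.default_rng(seed)
    X = np.zeros((s, n), dtype=complex)
    for _ in range(r):
        tau = rng.uniform()
        d = (1 + 10 ** rng.uniform()) * np.exp(-1j * rng.uniform(0, 2 * np.pi))
        h = rng.standard_normal(s)
        a = np.exp(-2j * np.pi * tau * np.arange(n))  # the vector a_tau
        X += d * np.outer(h, a)
    return X
```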
\begin{figure}
\begin{center}
\includegraphics[width=0.4\textwidth]{eps0818.eps}
\caption{$\log_{10}\left(\fronorm{\mX^t - \mX^\natural}/\fronorm{\mX^\natural}\right)$ with respect to the number of iterations.}
\label{fig:efficiency}
\end{center}
\end{figure}
\section{Proof of main result}\label{section:proof}
We first introduce three auxiliary lemmas that will be used in our proof.
\begin{lemma}[{\cite[Corollary III.9]{chen2020vectorized}}]
Suppose $n\geq C\varepsilon^{-2} \mu_0 \mu_1 sr\log(sn)$. The event
\begin{align}
\label{ineq: local rip}
\opnorm{ {\cal P}} \def\cP{{\cal P}_{T} \left( {\cal G}} \def\cG{{\cal G}\calG^{\ast} - {\cal G}} \def\cG{{\cal G}\calA^{\ast} {\cal A}} \def\cA{{\cal A} \calG^{\ast} \right) {\cal P}} \def\cP{{\cal P}_{T} } \leq \varepsilon
\end{align}
occurs with probability exceeding $1-c_1n^{-c_2}$.
\end{lemma}
\begin{lemma}[{\cite[Lemma 5.2]{mao20projected}}]
\label{lemma initial}
Suppose that $n\geq C\varepsilon^{-2}\kappa^2 \mu_0^2\mu_1 s^2 r^2\log^2(sn)$. Then with probability at least $1-c_1n^{-c_2}$, the initialization $\mZ^0 = {\cal H}} \def\cH{{\cal H}(\mX^0)$ obeys
\begin{align*}
\opnorm{\mZ^0 - \mZ^\natural}\leq \frac{\sigma_{r}(\mZ^\natural)\varepsilon}{16\sqrt{(1+\varepsilon) \mu_0s}}.
\end{align*}
\end{lemma}
\begin{lemma}\label{lemma:1}
Suppose
\begin{align}
\label{condition 1}
\fronorm{\mZ^t - \mZ^\natural} \leq \frac{\sigma_{r}(\mZ^\natural)\varepsilon}{16\sqrt{(1+\varepsilon)\cdot \mu_0 s}}.
\end{align}
Conditioned on \eqref{ineq: local rip}, one has
\begin{align}
\label{ineq: one side rip}
\opnorm{ {\cal A}} \def\cA{{\cal A} \calG^{\ast}{\cal P}} \def\cP{{\cal P}_{T_t}} \leq 3\sqrt{ 1+\varepsilon},\\
\label{ineq: local rip at t step}
\opnorm{{\cal P}} \def\cP{{\cal P}_{T_t}{\cal G}} \def\cG{{\cal G}\left( {\cal I}} \def\cI{{\cal I} - {\cal A}} \def\cA{{\cal A}^\ast{\cal A}} \def\cA{{\cal A}\right)\calG^{\ast}{\cal P}} \def\cP{{\cal P}_{T_t} } \leq 2\varepsilon.
\end{align}
\end{lemma}
We can rewrite the iteration as
\begin{align}
\mZ^{t+1}= {\cal T}} \def\cT{{\cal T}_r \left( \mZ^t -{\cal G}} \def\cG{{\cal G}\calA^{\ast} {\cal A}} \def\cA{{\cal A} \calG^{\ast}\left( \mZ^t - \mZ^\natural \right)\rb .
\end{align}
Notice that
$\fronorm{\mX^{t}-\mX^{\natural}} = \fronorm{{\cal H}} \def\cH{{\cal H}^\dagger(\mZ^{t}-\mZ^{\natural})}\leq \fronorm{\mZ^{t}-\mZ^{\natural}}$.
Our proof follows the line of \cite{cai2015fast}.
We first assume that in the $t$-th iteration $\mZ^t$ obeys
\begin{align}
\label{induction hypothesis}
\fronorm{\mZ^t - \mZ^\natural} \leq \frac{\sigma_r(\mZ^\natural)\varepsilon}{16\sqrt{(1+\varepsilon)\mu_0s}}.
\end{align}
Denote $\mW^t = {\cal P}} \def\cP{{\cal P}_{T_t} \left( \mZ^t - {\cal G}} \def\cG{{\cal G}\calA^{\ast} {\cal A}} \def\cA{{\cal A} \calG^{\ast}\left( \mZ^t - \mZ^\natural \right) \right)$.
We have that $\mZ^{t+1} = {\cal T}} \def\cT{{\cal T}_r(\mW^t)$ and $\fronorm{\mZ^{t+1}- \mZ^\natural} $ can be bounded as follows:
{\small
\begin{align*}
&\fronorm{\mZ^{t+1}- \mZ^\natural} \\
&\quad \leq \fronorm{\mZ^{t+1} - \mW^t } +\fronorm{\mW^t - \mZ^\natural}\notag\\
&\quad \leq 2\fronorm{ {\cal P}} \def\cP{{\cal P}_{T_t} \left( \mZ^t -\mZ^\natural - {\cal G}} \def\cG{{\cal G}\calA^{\ast} {\cal A}} \def\cA{{\cal A} \calG^{\ast}\left( \mZ^t - \mZ^\natural \right) \right) }\\
&\quad \quad + 2\fronorm{ \left( {\cal I}} \def\cI{{\cal I} - {\cal P}} \def\cP{{\cal P}_{T_t} \right) \left( \mZ^\natural \right) } \\
&\quad \leq2\fronorm{ \left( {\cal I}} \def\cI{{\cal I} - {\cal P}} \def\cP{{\cal P}_{T_t} \right) \left( \mZ^\natural \right) }\\
&\quad \quad +2 \fronorm{ {\cal P}} \def\cP{{\cal P}_{T_t} \left( {\cal G}} \def\cG{{\cal G}\calG^{\ast} - {\cal G}} \def\cG{{\cal G}\calA^{\ast} {\cal A}} \def\cA{{\cal A} \calG^{\ast} \right) {\cal P}} \def\cP{{\cal P}_{T_t} \left( \mZ^t - \mZ^\natural \right) } \\
& \quad \quad+ 2 \fronorm{ {\cal P}} \def\cP{{\cal P}_{T_t} \left( {\cal G}} \def\cG{{\cal G}\calG^{\ast} - {\cal G}} \def\cG{{\cal G}\calA^{\ast} {\cal A}} \def\cA{{\cal A} \calG^{\ast} \right) \left( {\cal I}} \def\cI{{\cal I} - {\cal P}} \def\cP{{\cal P}_{T_t} \right) \left( \mZ^t - \mZ^\natural \right) } \\
&\quad \leq 2\fronorm{ \left( {\cal I}} \def\cI{{\cal I} - {\cal P}} \def\cP{{\cal P}_{T_t} \right) \left( \mZ^t-\mZ^\natural \right) }\\
&\quad \quad +2 \fronorm{ {\cal P}} \def\cP{{\cal P}_{T_t} \left( {\cal G}} \def\cG{{\cal G}\calG^{\ast} - {\cal G}} \def\cG{{\cal G}\calA^{\ast} {\cal A}} \def\cA{{\cal A} \calG^{\ast} \right) {\cal P}} \def\cP{{\cal P}_{T_t} \left( \mZ^t - \mZ^\natural \right)}\\
&\quad \quad+2 \fronorm{ {\cal P}} \def\cP{{\cal P}_{T_t} {\cal G}} \def\cG{{\cal G}\calG^{\ast} \left( {\cal I}} \def\cI{{\cal I} - {\cal P}} \def\cP{{\cal P}_{T_t} \right) \left( \mZ^t - \mZ^\natural \right) }\\
&\quad \quad+2 \fronorm{{\cal P}} \def\cP{{\cal P}_{T_t} {\cal G}} \def\cG{{\cal G}\calA^{\ast}{\cal A}} \def\cA{{\cal A}\calG^{\ast} \left( {\cal I}} \def\cI{{\cal I} - {\cal P}} \def\cP{{\cal P}_{T_t} \right)\left( \mZ^t - \mZ^\natural \right) }\\
&\quad \triangleq I_1+I_2+I_3+I_4.
\end{align*}
}
Applying \cite[Lemma 4.1]{wei2016guarantees} yields that
\begin{align*}
I_1 + I_3\leq 4\fronorm{\mZ^t - \mZ^\natural} ^2/ \sigma_{r}(\mZ^\natural).
\end{align*}
A simple computation shows that $I_2 \leq 2 \varepsilon \fronorm{\mZ^t - \mZ^\natural}$. Finally, Lemma \ref{lemma:1} implies that
\begin{align*}
I_4 \leq 3 \sqrt{1+\varepsilon}\cdot \fronorm{\mZ^t - \mZ^\natural} ^2/ \sigma_{r}(\mZ^\natural).
\end{align*}
Combining these terms together, we have
\begin{align}
&\fronorm{\mZ^{t+1} -\mZ^\natural} \notag \\
&\quad {\leq} \left( 2 \varepsilon + \frac{4+ 3\sqrt{1+\varepsilon}}{\sigma_r(\mZ^\natural)} \fronorm{\mZ^t - \mZ^\natural}\right)\fronorm{\mZ^t - \mZ^\natural} \notag \\
&\quad {\leq} \left( 2\varepsilon + \frac{4+3\sqrt{1+\varepsilon}}{16\sqrt{(1+\varepsilon)\mu_0s}} \cdot \varepsilon \right) \cdot \fronorm{ \mZ^t -\mZ^\natural } \label{label:(a)} \\
&\quad{\leq} 3\varepsilon\fronorm{ \mZ^t -\mZ^\natural }\notag \\
&\quad {\leq} \frac{1}{2}\fronorm{ \mZ^t -\mZ^\natural }\label{label:(b)},
\end{align}
where \eqref{label:(a)} uses \eqref{condition 1} and \eqref{label:(b)} is due to $\varepsilon \leq 1/6$. Finally, it remains to verify \eqref{induction hypothesis}. By Lemma \ref{lemma initial}, the inequality \eqref{induction hypothesis} is valid for $t=0$. Since the sequence $\fronorm{\mZ^t - \mZ^\natural}$ is contractive by \eqref{label:(b)}, the inequality \eqref{induction hypothesis} holds for all $t\geq 0$ by induction.
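The contraction \eqref{label:(b)} can also be observed empirically. The following is a minimal Python sketch of iterative hard thresholding for low-rank recovery; it is only an illustration and replaces the structured operators ${\cal G}, {\cal A}$ of VHL-FIHT by a generic (hypothetical) Gaussian measurement map:

```python
import numpy as np

def hard_threshold(Z, r):
    """T_r: best rank-r approximation via a truncated SVD (Eckart--Young)."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(0)
n1, n2, r, m = 20, 20, 2, 20000

# Rank-r ground truth and a generic Gaussian measurement map; the latter is
# an illustrative stand-in for the structured operator of the paper.
Z_nat = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
A = rng.standard_normal((m, n1 * n2))
y = A @ Z_nat.ravel()

eta = 1.0 / np.linalg.norm(A, 2) ** 2        # step size 1 / ||A||_op^2
Z = np.zeros((n1, n2))
errs = []
for _ in range(300):
    grad = (A.T @ (A @ Z.ravel() - y)).reshape(n1, n2)
    Z = hard_threshold(Z - eta * grad, r)
    errs.append(np.linalg.norm(Z - Z_nat, "fro"))

rel_err = errs[-1] / np.linalg.norm(Z_nat, "fro")
print(f"relative error after 300 iterations: {rel_err:.2e}")
```

With this heavily oversampled map, each iteration contracts the error linearly, mirroring \eqref{label:(b)}; the theorem above concerns the much harder structured setting under the local condition \eqref{condition 1}.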
\begin{comment}
\section{numerical performance}\label{section:num}
In this section, we empirically evaluate the performance of Vectorized Hankel Lifted Fast Iterative Hard Thresholding for Blind Sparse Spikes Deconvolution. The location $\{\tau_{k}\}_{k=1}^r$ of the point source signals are generated randomly from $[0,1)$ while the amplitudes $\{d_k\}_{k=1}^r$ are generated via $d_k = (1+10^{c_k})e^{-i\psi_k}$ with $\psi_k$ being uniformly sampled from $[0,2\pi)$ and $c_k$ being uniformly sampled from $[0,1]$. The subspace matrix $\boldsymbol{B}$ for the point spread functions is generated randomly with i.i.d standard Gaussian entries. The coefficient $\{\boldsymbol{h}_k\}_{k=1}^r$ are i.i.d. standard Gaussian random vectors followed by normalization. For a given triple $(n,r,m) = (256,3,2)$, the Fig \ref{fig:efficiency} shows the convergences rate of FIHT under step size $0.05,0.1,0.2$.
\begin{figure}
\includegraphics[width=0.4\textwidth]{iter3.eps}
\caption{$\log_{10} \bigg( \frac{\|\boldsymbol{X}-\boldsymbol{X}^{\natural}\|}{\|\boldsymbol{X}^{\natural}\|} \bigg)$ with respect to the number of iterations when step size is 0.05, 0.1 and 0.2}
\label{fig:efficiency}
\end{figure}
\end{comment}
\begin{comment}
\section{Appendix}\label{section:app}
\begin{lemma}\cite[Lemma 4.1]{wei2016guarantees}
\label{appendix Lemma 1}
Let $\mZ_t = \mU_t \bSigma_t \mV_t^H$ be another rank $r$ matrix and $T_t$ be the tangent space of the rank $r$ matrix manifold at $\mZ_t$. Then
\begin{align*}
& \fronorm{\left( {\cal I}} \def\cI{{\cal I} - {\cal P}} \def\cP{{\cal P}_{T_t} \right) \left( \mZ^t - \mZ^\natural \right)} \leq \frac{ \fronorm{\mZ^t - \mZ^\natural}^2 }{ \sigma_{\min}(\mZ^\natural) },\\ & \opnorm{{\cal P}} \def\cP{{\cal P}_{T_t} - {\cal P}} \def\cP{{\cal P}_{T}} \leq 2\frac{\fronorm{\mZ^t - \mZ^\natural} }{ \sigma_{\min}(\mZ^\natural) }
\end{align*}
\end{lemma}
\begin{lemma}
\cite[Lemma 3.2]{mao20projected} Assume $\boldsymbol{Z}^{\natural}$ is $\mu_{1}$ -incoherent and let $\boldsymbol{Z}_{0}=\mathcal{T}_{r}\left(\mathcal{G D} \mathcal{A}^{*}(\boldsymbol{y})\right)=\mathcal{T}_{r}\left(\mathcal{G} \mathcal{A}^{*} \mathcal{A} \mathcal{G}^{*}\left(\boldsymbol{Z}^{\natural}\right)\right)$. Then
$$
\left\|\boldsymbol{Z}_{0}-\boldsymbol{Z}^{\natural}\right\| \lesssim \sqrt{\frac{\mu_{0} \mu_{1} s r \log ^{2}(s n)}{n}}\left\|\boldsymbol{Z}^{\natural}\right\|
$$
holds with high probability.
\end{lemma}
\end{comment}
\begin{comment}
\begin{lemma}
Under Assumpution 1, $\|{\cal A}} \def\cA{{\cal A}^*\|\leq \sqrt{s\mu_0}$
\end{lemma}
\begin{proof}
\begin{equation}
\begin{split}
\|{\cal A}} \def\cA{{\cal A}^*\| & =\sup_{\boldsymbol{y} \in \mathbb{C}^n :\|\boldsymbol{y}\| =1} \left\|{\cal A}} \def\cA{{\cal A}^*(\boldsymbol{y})\right\|_2\\
&= \sup_{\boldsymbol{y} \in \mathbb{C}^n :\|\boldsymbol{y}\| =1} \left\|\sum_{i=0}^{n-1}\boldsymbol{y}[i]\boldsymbol{b}_ie_i^T\right\|_2\\
&= \sup_{\boldsymbol{y} \in \mathbb{C}^n :\|\boldsymbol{y}\| =1} \left\|\sum_{i=0}^{n-1}\boldsymbol{b}_i\boldsymbol{y}[i]e_i^T\right\|_2\\
& = \sup_{\boldsymbol{y} \in \mathbb{C}^n :\|\boldsymbol{y}\| =1} \sqrt{\sum_{i=0}^{n-1}\left\|\boldsymbol{b}_i\boldsymbol{y}[i]e_i^T\right\|_2^2}\\
& = \sup_{\boldsymbol{y} \in \mathbb{C}^n :\|\boldsymbol{y}\| =1} \sqrt{\sum_{i=0}^{n-1}\left\|\boldsymbol{b}_i\boldsymbol{y}[i]\right\|_2^2}\\
& \leq\sup_{\boldsymbol{y} \in \mathbb{C}^n :\|\boldsymbol{y}\| =1}\sqrt{\sum_{i=0}^{n-1}\|b_i\|_2^2\sum_{i=0}^{n-1}\|\boldsymbol{y}_i\|_2^2}\\
&\leq \sqrt{s\mu_0}
\end{split}
\end{equation}
\end{proof}
\end{comment}
\subsection{Proof of Lemma \ref{lemma:1}}
\begin{proof}
For any $\mZ\in\C^{sn_1\times n_2}$, we have
\begin{align*}
\fronorm{{\cal A}} \def\cA{{\cal A} \calG^{\ast}{\cal P}} \def\cP{{\cal P}_{T}(\mZ)}^2 &= \left\langle {\cal A}} \def\cA{{\cal A} \calG^{\ast}{\cal P}} \def\cP{{\cal P}_{T}(\mZ), {\cal A}} \def\cA{{\cal A} \calG^{\ast}{\cal P}} \def\cP{{\cal P}_{T}(\mZ) \right\rangle \\
&= \left\langle \mZ, {\cal P}} \def\cP{{\cal P}_{T}{\cal G}} \def\cG{{\cal G}\left( \calA^{\ast}{\cal A}} \def\cA{{\cal A} - {\cal I}} \def\cI{{\cal I} \right)\calG^{\ast}{\cal P}} \def\cP{{\cal P}_{T}(\mZ) \right\rangle\\
&\quad + \left\langle \mZ, {\cal P}} \def\cP{{\cal P}_{T} {\cal G}} \def\cG{{\cal G}\calG^{\ast}{\cal P}} \def\cP{{\cal P}_{T}(\mZ) \right\rangle\\
&\leq (1+\varepsilon)\fronorm{\mZ}^2,
\end{align*}
where the last inequality is due to Lemma III in \cite{chen2020vectorized}. So it follows that $\opnorm{ {\cal A}} \def\cA{{\cal A} \calG^{\ast}{\cal P}} \def\cP{{\cal P}_{T}} \leq \sqrt{ 1+\varepsilon }$
and
\begin{align*}
\opnorm{ {\cal A}} \def\cA{{\cal A} \calG^{\ast}{\cal P}} \def\cP{{\cal P}_{T_t}} &\leq \opnorm{ {\cal A}} \def\cA{{\cal A} \calG^{\ast}{\cal P}} \def\cP{{\cal P}_{T}} + \opnorm{ {\cal A}} \def\cA{{\cal A} \calG^{\ast}\left( {\cal P}} \def\cP{{\cal P}_{T_t} - {\cal P}} \def\cP{{\cal P}_{T}\right) }\\
&\leq \sqrt{ 1+\varepsilon} + \opnorm{{\cal A}} \def\cA{{\cal A}}\cdot \opnorm{ {\cal P}} \def\cP{{\cal P}_{T_t} - {\cal P}} \def\cP{{\cal P}_{T} } \\
&\leq \sqrt{ 1+\varepsilon} + \frac{2\sqrt{\mu_0s} \fronorm{\mZ^t - \mZ^\natural} }{ \sigma_{\min}(\mZ^\natural) }\\
&\leq 3\sqrt{ 1+\varepsilon}.
\end{align*}
Finally, a straightforward computation yields that
\begin{align}
&\opnorm{{\cal P}} \def\cP{{\cal P}_{T_t}{\cal G}} \def\cG{{\cal G}\left( {\cal I}} \def\cI{{\cal I} - {\cal A}} \def\cA{{\cal A}^\ast{\cal A}} \def\cA{{\cal A}\right)\calG^{\ast}{\cal P}} \def\cP{{\cal P}_{T_t} }\notag\\
&\quad \leq \opnorm{ \left({\cal P}} \def\cP{{\cal P}_{T_t} - {\cal P}} \def\cP{{\cal P}_{T}\right) {\cal G}} \def\cG{{\cal G}\calG^{\ast}{\cal P}} \def\cP{{\cal P}_{T_t} }+ \opnorm{ {\cal P}} \def\cP{{\cal P}_{T} {\cal G}} \def\cG{{\cal G}\calG^{\ast}\left( {\cal P}} \def\cP{{\cal P}_{T_t} - {\cal P}} \def\cP{{\cal P}_{T} \right)} \notag\\
&\quad \quad+\opnorm{ \left({\cal P}} \def\cP{{\cal P}_{T_t} - {\cal P}} \def\cP{{\cal P}_{T}\right) {\cal G}} \def\cG{{\cal G} {\cal A}} \def\cA{{\cal A}^\ast{\cal A}} \def\cA{{\cal A} \calG^{\ast}{\cal P}} \def\cP{{\cal P}_{T_t} }\notag\\
&\quad \quad+ \opnorm{ {\cal P}} \def\cP{{\cal P}_{T} {\cal G}} \def\cG{{\cal G} {\cal A}} \def\cA{{\cal A}^\ast{\cal A}} \def\cA{{\cal A} \calG^{\ast}\left( {\cal P}} \def\cP{{\cal P}_{T_t} - {\cal P}} \def\cP{{\cal P}_{T} \right)}\notag\\
&\quad \quad+\opnorm{ {\cal P}} \def\cP{{\cal P}_{T} {\cal G}} \def\cG{{\cal G}\left( {\cal I}} \def\cI{{\cal I} - {\cal A}} \def\cA{{\cal A}^\ast{\cal A}} \def\cA{{\cal A}\right)\calG^{\ast} {\cal P}} \def\cP{{\cal P}_{T} }\notag\\
&\quad{\leq}\frac{4\fronorm{\mZ^t - \mZ^\natural} }{ \sigma_{r}(\mZ^\natural) } + \frac{2\fronorm{\mZ^t - \mZ^\natural} }{ \sigma_{r}(\mZ^\natural) }\cdot \sqrt{\mu_0 s}\notag\\
&\quad \quad \cdot \big( \opnorm{{\cal P}} \def\cP{{\cal P}_{T_t}{\cal G}} \def\cG{{\cal G}\calA^{\ast}}+ \opnorm{ {\cal P}} \def\cP{{\cal P}_{T}{\cal G}} \def\cG{{\cal G}\calA^{\ast}} \big) + \varepsilon \label{label:(c)}\\
&\quad{\leq} \frac{4\varepsilon}{16\sqrt{(1+\varepsilon)\cdot \mu_0 s}} + \frac{8\varepsilon \sqrt{\mu_0 s(1+\varepsilon)}}{16\sqrt{(1+\varepsilon)\cdot \mu_0 s}} + \varepsilon\label{label:(d)}\\
& \quad\leq 2\varepsilon,\notag
\end{align}
where \eqref{label:(c)} is due to \cite[Lemma 4.1]{wei2016guarantees} and the fact that $\opnorm{\calA^{\ast}} = \opnorm{{\cal A}} \def\cA{{\cal A}} \leq \sqrt{\mu_0 s}$ and $\opnorm{{\cal P}} \def\cP{{\cal P}_{T_t}{\cal G}} \def\cG{{\cal G}\calA^{\ast}} = \opnorm{{\cal A}} \def\cA{{\cal A} \calG^{\ast}{\cal P}} \def\cP{{\cal P}_{T_t}} $, and step \eqref{label:(d)} follows from \eqref{condition 1}.
\end{proof}
\section{Conclusion}
We propose the VHL-FIHT method to solve the blind super-resolution problem in a non-convex scheme. A convergence analysis has been established for VHL-FIHT, showing that the algorithm converges linearly to the target matrix given a suitable initialization, provided the number of samples is large enough. The numerical experiments validate our theoretical results.
\section{Acknowledgment}
We would like to thank Professor Ke Wei for useful discussions.
\bibliographystyle{plain}
\section{Conclusions and Open Problems}
We have largely resolved the problem of feasibility in the Tensor-Train format through the connection to
eigenvalue problems, honeycombs and systems of linear inequalities.
From the topological perspective, the fact that sets of squared feasible values form finitely generated, closed, convex cones
is most relevant. This implies that there are unique, minimal lists of linear inequalities equivalent to
feasibility. We have identified significant ones, but how to derive a complete list, or to determine the corresponding vertices, remains an open problem that may require a deeper understanding of the underlying theory. Only when all mode sizes are large enough does the (trivial) trace property remain the sole necessary and sufficient condition. Hence, in all other cases,
the singular values of the different matricizations appearing in the Tensor-Train format cannot be treated independently.
The reliability of the algorithms provided to check for feasibility and to construct tensors with prescribed singular spectrum
can certainly be improved, yet they are fine for small problems and demonstrative purposes as well as for visualization of hives.
\section{The Cone of Squared Feasible Singular Values}\label{sec:coneoffea}
We return to the tensor setting and collate previous results. Note that
we switch back to the initial tensor notation.
\begin{theorem}[Lemma \ref{prop1}, Lemma \ref{prop2}, Lemma \ref{kyfan} and implications of Corollary \ref{maincor}]\label{summary}
The following statements hold true:
\begin{itemize}
\item (Lemma \ref{prop1}) If $r_1, r_2 \leq n$, then
\[ \mathcal{F}_{n,r_1,r_2} = \mathcal{D}^{r_1}_{\geq 0} \times \mathcal{D}^{r_2}_{\geq 0} \cap \{(\widetilde{\gamma},\widetilde{\theta}) \mid \|\widetilde{\gamma}\|_2 = \|\widetilde{\theta}\|_2\},
\]
that is, any pair $(\gamma,\theta)$ with $\degree(\gamma), \degree(\theta) \leq n$ that satisfies the trace property is feasible for $n$.
\item (Lemma \ref{kyfan}) If $(\gamma,\theta) \in \mathcal{D}^{\infty}_{\geq 0} \times \mathcal{D}^{\infty}_{\geq 0}$ is feasible for $n$, then for any $k$
\[ \sum_{i = 1}^k \gamma_i^2 \leq \sum_{i = 1}^{nk} \theta_i^2 \]
must hold.
\item (Lemma \ref{prop2}) If $(\gamma,\theta) \in \mathcal{D}^{\infty}_{\geq 0} \times \mathcal{D}^{\infty}_{\geq 0}$ is feasible for $n$, then for any $k$,
\[ \gamma_{kn+1}^2 \leq \sum_{i = k+1}^{k+n} \theta_i^2 \]
must hold.
\item If $r_1 \leq n r_2$ and $r_2 \leq n r_1$, then $\mathcal{F}^2_{n,r_1,r_2}$ is a closed, convex, polyhedral cone of dimension $r_1 + r_2 - 1$, embedded into $\ensuremath{\mathbb{R}}^{r_1 + r_2}$.
Otherwise, $\mathcal{F}^2_{n,r_1,r_2} \cap \ensuremath{\mathcal{D}}^{r_1}_{>0} \times \ensuremath{\mathcal{D}}^{r_2}_{>0}$ is empty.
\item If $(\gamma^{(1)},\theta^{(1)}), (\gamma^{(2)},\theta^{(2)})$ are feasible for $n$, then
\[ (\gamma^{(1,2)},\theta^{(1,2)}), \ ((\gamma^{(1,2)})^2,(\theta^{(1,2)})^2) := ((\gamma^{(1)})^2 + (\gamma^{(2)})^2,(\theta^{(1)})^2 + (\theta^{(2)})^2) \]
is feasible for $n$ as well.
\item If $r_1 \leq n r_2$ and $r_2 \leq n r_1$, then the set of feasible pairs in $\mathcal{F}_{n,r_1,r_2}$ that cannot be perturbed in all directions
is closed and has $(r_1 + r_2 - 1)$-dimensional Hausdorff measure equal to zero (i.e., these are the ones corresponding to the faces of $\mathcal{F}^2_{n,r_1,r_2}$).
\end{itemize}
\end{theorem}
\begin{proof}
For the fourth statement, consider Lemma \ref{prop2} and Corollary \ref{maincor}. Then the only remaining part is to show that
the cone has dimension $r_1 + r_2 - 1$, or equivalently, that it contains as many linearly independent vectors. These are however already
given by the examples carried out in Lemma \ref{trivial}. The fifth statement follows
from the fourth one, since for any two elements $x,y$ in a convex cone, the sum $x + y$ also
lies in the cone. The sixth statement is immediately given due to the geometrical nature of the cone $\mathcal{F}^2_{n,r_1,r_2}$.
\end{proof}
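The necessary conditions collected in Theorem \ref{summary} can be sanity-checked numerically: the singular spectra of the two matricizations of a random tensor adjacent to a middle mode of size $n$ form a feasible pair and hence must satisfy them. A small Python sketch (the shapes are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n, n3 = 5, 3, 4                      # middle mode size n = 3
X = rng.standard_normal((n1, n, n3))

# singular values of the two matricizations adjacent to the middle mode
gamma = np.linalg.svd(X.reshape(n1, n * n3), compute_uv=False)
theta = np.linalg.svd(X.reshape(n1 * n, n3), compute_uv=False)
g2 = np.concatenate([gamma ** 2, np.zeros(50)])   # squared, zero-padded
t2 = np.concatenate([theta ** 2, np.zeros(50)])

# Ky Fan analogue: sum_{i<=k} gamma_i^2 <= sum_{i<=nk} theta_i^2
kyfan_ok = all(g2[:k].sum() <= t2[:n * k].sum() + 1e-9 for k in range(1, 10))
# single-value condition: gamma_{kn+1}^2 <= theta_{k+1}^2 + ... + theta_{k+n}^2
single_ok = all(g2[k * n] <= t2[k:k + n].sum() + 1e-9 for k in range(10))
# trace property: both matricizations carry the full Frobenius energy
trace_ok = abs(g2.sum() - t2.sum()) < 1e-9
print(kyfan_ok, single_ok, trace_ok)  # True True True
```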
The feasibility of $(\gamma,\theta)$ in the example \eqref{not_triv_pair},
$\widetilde{\gamma}^2 = (7.5,5,0,0)$, $\widetilde{\theta}^2 = (6,3.5,2,1)$, can not only be proven by a corresponding hive (cf. Figure \ref{plot_not_triv_feasible}),
but, at least in this case, also by a decomposition into diagonally feasible pairs:
\begin{equation}
\scalebox{0.8}{$
\begin{aligned}
\begin{pmatrix} \theta^2, \gamma^2
\end{pmatrix} =
\begin{pmatrix} 1 & 2 \\
1 & 2 \\
1 & 0 \\
1 & 0
\end{pmatrix}
+
\begin{pmatrix} 1 & 2 \\
1 & 1 \\
1 & 0 \\
0 & 0
\end{pmatrix}
+
\begin{pmatrix} 1.5 & 1.5 \\
1.5 & 1.5 \\
0 & 0 \\
0 & 0
\end{pmatrix}
+
\begin{pmatrix} 1 & 0.5 \\
0 & 0.5 \\
0 & 0 \\
0 & 0
\end{pmatrix}
+
\begin{pmatrix} 1.5 & 1.5 \\
0 & 0 \\
0 & 0 \\
0 & 0
\end{pmatrix}
\end{aligned}$} \label{decompositionex}
\end{equation}
Each two columns of these matrices are the squared values of a diagonally feasible pair. The feasibility of $(\gamma,\theta)$ then follows from Theorem \ref{summary}, in particular from the cone property. From the decomposition \eqref{decompositionex}
one may wonder whether every vertex defining $\mathcal{F}^2_{n,r_1,r_2}$ is diagonally feasible. Despite some effort, we can so far neither disprove nor verify this. \\\\
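The decomposition \eqref{decompositionex} can be verified mechanically. The following Python snippet sums the five matrices as printed (their column sums reproduce $\widetilde{\theta}^2 = (6,3.5,2,1)$ and $\widetilde{\gamma}^2 = (7.5,5,0,0)$, respectively) and checks the trace property of each summand:

```python
import numpy as np

# the five summands of the decomposition, columns exactly as printed
# (first column: theta^2 entries, second column: gamma^2 entries)
summands = [
    np.array([[1.0, 2.0], [1.0, 2.0], [1.0, 0.0], [1.0, 0.0]]),
    np.array([[1.0, 2.0], [1.0, 1.0], [1.0, 0.0], [0.0, 0.0]]),
    np.array([[1.5, 1.5], [1.5, 1.5], [0.0, 0.0], [0.0, 0.0]]),
    np.array([[1.0, 0.5], [0.0, 0.5], [0.0, 0.0], [0.0, 0.0]]),
    np.array([[1.5, 1.5], [0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]),
]
total = sum(summands)
theta2, gamma2 = total[:, 0], total[:, 1]
print(gamma2, theta2)   # [7.5 5. 0. 0.] and [6. 3.5 2. 1.]

# every summand (and hence the sum) satisfies the trace property
traces_ok = all(abs(M[:, 0].sum() - M[:, 1].sum()) < 1e-12 for M in summands)
```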
Theorem \ref{summary} further shows that there is a unique, minimal list of inequalities defining the faces of $\mathcal{F}^2_{n,r_1,r_2}$,
in analogy to the verified Horn Conjecture \ref{hornconj}.
The right sum in Lemma \ref{setofineq} always has $(n=m)$-times as many summands as the left sum.
For these inequalities, it further holds that $\sum_{i = k+1}^{nr} i = \frac{k(n-1) (nr+k+1)}{2} = \sum_{i \notin A^{(2)}_n} i - \sum_{i \in A^{(1)}_n} i$,
where $k$ is the length of $a^{(1)}$. While the former is easily shown to be true for every inequality defining $\mathcal{F}^2_{n,r_1,n r_1}$ with $k \leq r_1$ summands of $\theta^2_i$ on
the left-hand side, an analogue of the latter relation can, likewise, so far only be conjectured to hold in general.\\\\
Sets of squared, feasible tuples form cones as well:
\begin{corollary}[Cone property for tuples]\label{Pyt}
For $0 \leq \nu < \mu \leq d+1$, let both \\ $(\sigma^{(\nu)},\ldots,\sigma^{(\mu)}) \in (\ensuremath{\mathcal{D}}^{\infty}_{\geq 0})^{\mu-\nu+1}$ and $(\tau^{(\nu)},\ldots,\tau^{(\mu)}) \in (\ensuremath{\mathcal{D}}^{\infty}_{\geq 0})^{\mu-\nu+1}$ be feasible
for $n = (n_{\nu+1},\ldots,n_\mu)$. Then
\[ (\upsilon^{(\nu)},\ldots,\upsilon^{(\mu)}), \quad (\upsilon^{(s)})^2 := (\sigma^{(s)})^2 + (\tau^{(s)})^2, \quad s = \nu,\ldots,\mu, \]
is feasible for $n$ as well.
\end{corollary}
The following
result is an exception to all others in this work, since it originates from tensor theory rather than the other way around.
We at least do not see an easy way to derive it using only eigenvalue theory.
\begin{lemma}[Intermediate feasibility]\label{interm}
Let $(\sigma^{(\nu)},\sigma^{(\mu)}) \in \ensuremath{\mathcal{D}}^{\infty}_{\geq 0} \times \ensuremath{\mathcal{D}}^{\infty}_{\geq 0}$ be a pair of sequences and
$r_{\nu} = \degree(\sigma^{(\nu)})$, $r_\mu = \degree(\sigma^{(\mu)})$.
Then $(\sigma^{(\nu)},\sigma^{(\mu)})$ is feasible for $N = \prod_{s = \nu+1}^\mu n_s$, $n_s \in \ensuremath{\mathbb{N}}$, if and only if there exist
$(\sigma^{(\nu+1)},\ldots,\sigma^{(\mu-1)}) \in (\ensuremath{\mathcal{D}}^{\infty}_{\geq 0})^{\mu-\nu-1}$
such that $(\sigma^{(\nu)},\ldots,\sigma^{(\mu)})$ is feasible for $(n_{\nu+1},\ldots,n_\mu)$.
These sequences require at most
\[(\ \min(r_{\nu} n_{\nu+1} , r_\mu N/n_{\nu+1} ),\ \min(r_{\nu} n_{\nu+2} n_{\nu+1} ,r_\mu N/(n_{\nu+2} n_{\nu+1})), \ \ldots, \ \min(r_{\nu} N/n_\mu, r_\mu n_\mu )\ )\]
positive entries, respectively.
\end{lemma}
\begin{proof}
Assume $(\sigma^{(\nu)},\sigma^{(\mu)})$ is feasible for $N$. Then there exists a core $\ensuremath{\mathcal{G}}_{\nu+1,\ldots,\mu}$
of length $N$ and size $(r_{\nu},r_\mu)$ such that $\Sigma_+^{(\nu)} \ensuremath{\mathcal{G}}_{\nu+1,\ldots,\mu}$ is left-orthogonal and
$\ensuremath{\mathcal{G}}_{\nu+1,\ldots,\mu} \Sigma_+^{(\mu)}$ is right-orthogonal.
For simplicity, we extend this core by artificial cores $\ensuremath{\mathcal{G}}_{\nu}$ and $\ensuremath{\mathcal{G}}_{\mu+1}$ of length
$r_{\nu}$ and $r_\mu$ as well as size $(1,r_{\nu})$ and $(r_\mu,1)$, respectively,
to a tensor
$A = (\|A\|_F \ensuremath{\mathcal{G}}_{\nu}) \boxtimes (\Sigma^{(\nu)} \ensuremath{\mathcal{G}}_{\nu+1,\ldots,\mu} \Sigma^{(\mu)}) \boxtimes (\ensuremath{\mathcal{G}}_{\mu+1} \|A\|_F)$, $\|A\|_F = \|\sigma^{(\nu)}\|_2 = \|\sigma^{(\mu)}\|_2$,
of dimension $\mu-\nu+2$
such that $A$ has singular values $\sigma^{(\nu-1)},\ldots,\sigma^{(\mu+1)}$. This is possible, since
$((\|A\|_F,0,\ldots),\sigma^{(\nu)})$ and $(\sigma^{(\mu)},(\|A\|_F,0,\ldots))$ are diagonally feasible for $r_{\nu}$ and $r_\mu$, respectively.
This tensor has mode sizes $(r_{\nu},N,r_\mu)$, but we reshape it to a tensor $\widetilde{A}$ with mode sizes $(r_{\nu},n_{\nu+1},\ldots,n_\mu,r_\mu)$.
By definition, $\widetilde{\Sigma_i} = \Sigma_i$ for $i = \nu,\mu$. We can now decompose $\widetilde{A}$ by applying Lemma $\ref{stre}$
and use part $1$ of Theorem \ref{fundthm} to conclude the feasibility of $(\sigma^{(\nu)},\ldots,\sigma^{(\mu)})$
for $(n_{\nu+1},\ldots,n_\mu)$. The maximal degrees are given by Lemma \ref{prop2} (or just by checking
sizes involved in each SVD that is performed).
\end{proof}
Lemma \ref{interm} is sharp in the following sense: Let $\sigma^{(\nu+s)}_+ = (\sqrt{a_s},\ldots,\sqrt{a_s}) \in \ensuremath{\mathbb{R}}^{N/a_s}$, $a_s = \prod_{i = 1}^s n_{\nu+i}$,
$s = 1,\ldots,\mu-\nu$, $a_0 = N$.
Then the associated tuple of sequences is diagonally feasible for $(n_{\nu+1},\ldots,n_\mu)$ and
$(\sigma^{(\nu)},\sigma^{(\mu)}) = ((1,\ldots,1,0,\ldots,),(\sqrt{N},0,\ldots))$ is feasible for no less than $N$ considering the Ky Fan analogue
Corollary \ref{kyfan}.
\subsection{Algorithms}
The description in Lemma \ref{L1L2} yields a straightforward algorithm to determine the minimal
value $n$ for which some pair $(\gamma,\theta) \in \ensuremath{\mathcal{D}}^{\infty}_{\geq 0} \times \ensuremath{\mathcal{D}}^{\infty}_{\geq 0}$
is feasible.
\begin{algorithm}[!htb]
\caption{Linear programming check for feasibility \label{linprog}}
\begin{algorithmic}
\REQUIRE $(\widetilde{\gamma},\widetilde{\theta}) \in \ensuremath{\mathcal{D}}^{r}_{\geq 0} \times \ensuremath{\mathcal{D}}^{r}_{\geq 0}$ for some $r \in \mathbb{N}$;
\STATE establish matrices $Y_1, Y_2$ for which $\ensuremath{\mathrm{edge}}(\ensuremath{\mathrm{HONEY}}_r) = \{ x \mid Y_1 x \leq 0, \ Y_2 x = 0\}$;
\FOR{$n = 1,2\ldots$}
\STATE stack together copies of $Y_1, Y_2$ and include boundary coupling
to form $L_1,\ldots,L_3$ and set boundary vector $b = (\widetilde{\gamma}^2 \mid \widetilde{\theta}^2) \in \ensuremath{\mathbb{R}}^{2r}$ such that
\[\ensuremath{\mathrm{edge}}_S(\delta_P^{-1}(f_P)) = \{ x \mid L_1 x \leq 0, \ L_2 x = 0, \ L_3 x = b \}\]
for the hive $H$ as in Corollary \ref{relaxed};
\STATE use a linear programming algorithm to
\[ \mbox{minimize } Fx \mbox{ subject to } x \in \ensuremath{\mathrm{edge}}_S(\delta_P^{-1}(f_P)) \]
where $F$ is the vector for which $Fx \in \ensuremath{\mathbb{R}}_{\geq 0}$ is the total length of all edges in $H$;
\IF{no solution exists}
\STATE {continue with $n+1$};
\ENDIF
\IF{solution exists}
\RETURN minimal number $n \in \mathbb{N}$ for which $(\gamma,\theta)$ is feasible and a corresponding
$(r,2n)$-hive $H$ with minimal total edge length;
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
The algorithm always terminates, at the latest for $n = \max(\degree(\gamma),\degree(\theta))$, due to Lemma \ref{prop1}.
In practice, a slightly different coupling of boundaries is used (cf. Figure \ref{plot_triv_feasible_single}), since then all of $H$ can be
visualized in $\ensuremath{\mathbb{R}}^2$. For that, it is required to rotate and mirror some of the honeycombs.
Depending on the linear programming algorithm, the input may be too badly conditioned to
allow a verification with satisfactory residual. The simple but heuristic Algorithm \ref{numfea} can
be more reliable.
\begin{algorithm}[!htb]
\caption{Heuristic check for numerical feasibility \label{numfea}}
\begin{algorithmic}
\REQUIRE $(\gamma_+,\theta_+) \in \ensuremath{\mathcal{D}}^{r_1}_{> 0} \times \ensuremath{\mathcal{D}}^{r_2}_{> 0}$ for some $r_1,r_2 \in \mathbb{N}$
and a natural number $n$ \\ (as well as $\mathrm{tol} > 0$, $\mathrm{iter_{max}} > 0$);
\STATE initialize a core $H^{(1)}_1$ of length $n$ and size $(r_1,r_2)$ randomly;
\STATE set $\gamma^{(H)}, \theta^{(H)} = 0$; $k = 1$;
\WHILE{$\|\gamma^{(H)}-\gamma_+\| + \|\theta^{(H)}-\theta_+\| > \mathrm{tol}$ and $k \leq \mathrm{iter_{max}}$}
\STATE $k = k + 1$;
\STATE calculate the SVD and set $U_1 \Theta^{(k)} V_1^T = \lhb{H^{(k-1)}_1}$;
\STATE set $H^{(k)}_2$ via $\lhb{H^{(k)}_2} = U_1 \Theta$;
\STATE calculate the SVD and set $U_2 \Gamma^{(k)} V_2^T = \rhb{H^{(k)}_2}$;
\STATE set $H^{(k)}_1$ via $\rhb{H^{(k)}_1} = \Gamma V_2^T$;
\ENDWHILE
\IF{$\|\gamma^{(H)}-\gamma_+\| + \|\theta^{(H)}-\theta_+\| \leq \mathrm{tol}$}
\RETURN $H^{\ast} = \Gamma^{-1} \ H_1^{(k)} \ \Theta^{-1}$;
\STATE $(\gamma,\theta)$ is (numerically) feasible for $n$;
\ELSE
\STATE $(\gamma,\theta)$ is \textit{likely} to not be feasible for $n$;
\ENDIF
\end{algorithmic}
\end{algorithm}
The fixpoints of the iteration are given by the cores for which
$H \Theta^{-1}$ is left-orthogonal and $\Gamma^{-1} H$ is right-orthogonal.
Hence $H^{\ast} = \Gamma^{-1} \ H \ \Theta^{-1}$ is a core for which
$\Gamma H^{\ast}$ is left-orthogonal and $H^{\ast} \Theta$ is right-orthogonal, as
required by Definition \ref{fe}. Furthermore, the iterates cannot
diverge:
\begin{lemma}[Behavior of Algorithm \ref{numfea}]
For every $k > 1$ it holds $\|\gamma^{(k)}-\gamma_+\|_2 \leq \|\theta^{(k)}-\theta_+\|_2$ as well as
$\|\theta^{(k)}-\theta_+\|_2 \leq \|\gamma^{(k-1)}-\gamma_+\|_2$.
\end{lemma}
\begin{proof}
We only treat the first case, since the other one is analogous. Let $k > 1$ be arbitrary, but fixed.
Then
\[\|\underbrace{U_1 \Theta^{(k)} V_1^T}_{=:A} - \underbrace{U_1 \Theta V_1^T}_{=:B}\|_F = \|\Theta^{(k)} - \Theta\|_F.\]
$\rhb{A}$ has singular values $\gamma_+$, inherited from the previous iteration, and
$\rhb{B}$ has the same singular values as $\rhb{B} \diag(V_1,\ldots,V_1) = \rhb{B V_1} = \rhb{H_2^{(k)}}$, which are given by $\gamma^{(k)}$.
It follows from Mirsky's inequality for singular values \cite{Mi1960_Sym} that $\|\gamma^{(k)}-\gamma_+\|_2 \leq \|A - B\|_F = \|\theta^{(k)}-\theta_+\|_2$.
\end{proof}
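Mirsky's inequality used above, $\|\sigma(A) - \sigma(B)\|_2 \leq \|A - B\|_F$ for the decreasingly sorted vectors of singular values, is easy to confirm numerically:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
B = A + 0.1 * rng.standard_normal((6, 4))

# numpy returns singular values sorted in decreasing order
lhs = np.linalg.norm(np.linalg.svd(A, compute_uv=False)
                     - np.linalg.svd(B, compute_uv=False))
rhs = np.linalg.norm(A - B, "fro")
print(lhs <= rhs)  # True (Mirsky)
```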
Convergence is hence not assured, but likely, in the sense that a perturbation of a matrix statistically
changes its singular values by only a fraction of the perturbation's norm. To construct an entire tensor,
the algorithm may be run in parallel for each single core.
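A minimal Python sketch of the alternating step of Algorithm \ref{numfea} follows; the variable names are illustrative, and the final rescaling to $H^{\ast}$ as well as the stopping test are omitted. Targets are drawn from the unfoldings of an actual random core, so they automatically satisfy the trace property:

```python
import numpy as np

def alternate(H, gamma_t, theta_t, iters=200):
    """Alternately impose the target singular values theta_t / gamma_t on the
    left / right unfoldings of the core H (shape (r1, n, r2))."""
    r1, n, r2 = H.shape
    errs = []
    for _ in range(iters):
        # left unfolding: keep U, replace singular values by theta_t
        U, _, _ = np.linalg.svd(H.reshape(r1 * n, r2), full_matrices=False)
        H = (U * theta_t).reshape(r1, n, r2)
        # right unfolding: keep V^T, replace singular values by gamma_t
        _, s2, Vt = np.linalg.svd(H.reshape(r1, n * r2), full_matrices=False)
        H = (gamma_t[:, None] * Vt).reshape(r1, n, r2)
        errs.append(np.linalg.norm(s2 - gamma_t))   # ||gamma^(k) - gamma_+||
    return H, errs

rng = np.random.default_rng(3)
r, n = 2, 3
# targets taken from the two unfoldings of an actual core, hence feasible
X = rng.standard_normal((r, n, r))
gamma_t = np.linalg.svd(X.reshape(r, n * r), compute_uv=False)
theta_t = np.linalg.svd(X.reshape(r * n, r), compute_uv=False)

H, errs = alternate(rng.standard_normal((r, n, r)), gamma_t, theta_t)
print(errs[0], errs[-1])
```

By the lemma above, the recorded error sequence is monotonically non-increasing, whatever the targets are.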
\section{Weyl's Problem and Horn's Conjecture}\label{sec:weylhorn}
In 1912, H. Weyl posed a problem \cite{We1912_Das} that asks for an analysis of the following relation.
\begin{definition}[Eigenvalues of a sum of two hermitian matrices \cite{KnTa01_Hon}]\label{sumsofhermi}
Let $\lambda, \mu, \nu \in \mathcal{D}^n_{\geq 0}$. Then the relation
\[ \lambda \boxplus \mu \sim_c \nu \]
is defined to hold if there exist hermitian matrices $A,B \in \ensuremath{\mathbb{C}}^{n \times n}$ and $C := A + B$ with
eigenvalues $\lambda, \mu$ and $\nu$, respectively.
This definition is straightforwardly extended to more than two summands.
\end{definition}
Weyl and Ky Fan \cite{Fa1949_Ona} were among the first to give necessary linear inequalities for this relation.
We refer to the excellent (survey) article
\textit{Honeycombs and Sums of Hermitian Matrices}\footnote[2]{
To the best of our knowledge, in Conjecture 1 (Horn Conjecture) on page 176 of the official AMS notice,
the relation $\geq$ needs to be replaced by $\leq$. This is a mere typo without any consequences and the authors are most
likely aware of it by now.}
\cite{KnTa01_Hon} by Knutson and Tao, which has been
the main point of reference for the remaining part and serves as historical survey as well.
There is unfortunately a \textbf{conflict of notation}: in the tensor community, $n$ usually refers to the
mode size and $r$ to the sizes of the matrices in the representation cores,
whereas in the paper of Knutson and Tao, $n$ denotes the size of the hermitian matrices
and $r$ is used as an index.
We switch to their notation in order to avoid confusion with references, and use $m$ instead for the number of matrices
($m=2$ in Definition \ref{sumsofhermi}) as long as we remain within this topic. \\ \\
A. Horn introduced the famous \textit{Horn Conjecture} in 1962.
\begin{theorem}[(Verified) Horn Conjecture \cite{Ho1962_Eig}]\label{hornconj}
The relation $\lambda \boxplus \mu \sim_c \nu$ is satisfied if and only
if for each $(i,j,k) \in T_{r,n}, \ r = 1,\ldots,n-1$ the inequality
\begin{align} \nu_{i_1} + \ldots + \nu_{i_r} \leq \lambda_{i_1} + \ldots + \lambda_{i_r} + \mu_{j_1} + \ldots + \mu_{j_r} \label{hornineq} \end{align}
holds, as well as the trace property
\[ \sum_{i=1}^n \lambda_i + \sum_{i=1}^n \mu_i = \sum_{i=1}^n \nu_i. \]
The sets $T_{r,n}$ are given in Definition \ref{Trn}.
\end{theorem}
\begin{definition}[The set $T_{r,n}$ \cite{Ho1962_Eig}]\label{Trn}
The set $T_{r,n}$ is defined as the set of all triplets of indices
$1 \leq i_1 < \ldots < i_r \leq n, \ 1 \leq j_1 < \ldots < j_r \leq n, \ 1 \leq k_1 < \ldots < k_r \leq n$ which
obey
\[ i_1 + \ldots + i_r + j_1 + \ldots + j_r = k_1 + \ldots + k_r + r(r+1)/2 \]
and further
\[ i_{a_1} + \ldots + i_{a_s} + j_{b_1} + \ldots + j_{b_s} \leq k_{c_1} + \ldots + k_{c_s} + s(s+1)/2 \]
for all $1 \leq s < r$ and all triplets of indices $1 \leq a_1 < \ldots < a_s \leq r, \ 1 \leq b_1 < \ldots < b_s \leq r,
\ 1 \leq c_1 < \ldots < c_s \leq r$ in $T_{s,r}$.
\end{definition}
As already indicated, the conjecture is correct, as proven by Knutson and Tao.
Fascinatingly, the inaccessibly appearing triplets in $T_{r,n}$ (Definition \ref{Trn}) can in turn be described
by eigenvalues relations themselves, as also stated by W. Fulton \cite{Fu00_Eig} (with slightly different notation),
which gives a very good overview as well.
\begin{theorem}[Description of $T_{r,n}$ \cite{KnTa01_Hon, Ho1962_Eig, Fu00_Eig}]\label{desTrn}
The triplet $(i,j,k)$ of increasing natural numbers in $\{1,\ldots,n\}$ is in $T_{r,n}$ if and only if for the
corresponding triplet it holds $\bigtriangleup i \boxplus \bigtriangleup j \sim_c \bigtriangleup k$,
where $\bigtriangleup \ell:=(\ell_r - r,\ell_{r-1}-(r-1),\ldots,\ell_2-2,\ell_1-1)$.
\end{theorem}
Even with just diagonal matrices, one can thereby derive various (possibly all) triplets. For example,
Ky Fan's inequality \cite{Fa1949_Ona}
\begin{align}
\sum_{i=1}^k \nu_i \leq \sum_{i=1}^k \lambda_i + \sum_{i=1}^k \mu_i, & \quad k = 1,\ldots,n,
\end{align}
follows immediately from the trivial fact that $0 \boxplus 0 \sim_c 0 \in \ensuremath{\mathbb{R}}^n$. A further
interesting property, as already shown by Horn, is given if \eqref{hornineq} holds as equality:
\begin{lemma}[Reducibility \cite{KnTa01_Hon, Ho1962_Eig}]\label{reduc}
Let $(i,j,k) \in T_{r,n}$ and $\lambda \boxplus \mu \sim_c \nu$. Further, let $\complement i, \complement j, \complement k$ be
their complementary indices with respect to $\{1,\ldots,n\}$. Then the following statements are equivalent:
\begin{itemize}
\item $\nu_{i_1} + \ldots + \nu_{i_r} = \lambda_{i_1} + \ldots + \lambda_{i_r} + \mu_{j_1} + \ldots + \mu_{j_r}$
\item Any associated triplet of hermitian matrices $(A,B,C)$ is block diagonalizable into two parts, which
contain eigenvalues indexed by $(i,j,k)$ and $(\complement i, \complement j, \complement k)$, respectively.
\item $\lambda|_i \boxplus \mu|_j \sim_c \nu|_k$
\item $\lambda|_{\complement i} \boxplus \mu|_{\complement j} \sim_c \nu|_{\complement k}$
\end{itemize}
The relation is in that sense split into two with respect to the triplet $(i,j,k)$.
\end{lemma}
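Ky Fan's inequalities and the trace property are easy to confirm on random hermitian matrices; a quick Python sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6

def random_hermitian(n):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

A, B = random_hermitian(n), random_hermitian(n)
lam = np.linalg.eigvalsh(A)[::-1]        # eigenvalues, decreasing order
mu = np.linalg.eigvalsh(B)[::-1]
nu = np.linalg.eigvalsh(A + B)[::-1]

# Ky Fan: the sum of the k largest eigenvalues is subadditive.
kyfan_ok = all(nu[:k].sum() <= lam[:k].sum() + mu[:k].sum() + 1e-9
               for k in range(1, n + 1))
# trace property
trace_ok = abs(nu.sum() - lam.sum() - mu.sum()) < 1e-9
print(kyfan_ok, trace_ok)  # True True
```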
\section{Honeycombs and Hives}\label{sec:honeycomb}
The following result provides the complete resolution of Weyl's problem.
It is strongly advised to read the complete
article \cite{KnTa01_Hon} about honeycombs for a better understanding.
Honeycombs are designed to exactly reflect the mathematics behind the relation $\sim_c$
and allow graph theory as well as linear programming to be applied to Weyl's problem.
Honeycombs (cf. Figure \ref{archetype}) can be described as two dimensional objects, contained in
\[ \ensuremath{\mathbb{R}}^3_{\sum = 0} := \{ x \in \ensuremath{\mathbb{R}}^3 \mid x_1 + x_2 + x_3 = 0\}, \]
consisting of line segments (or edges) and rays, each parallel to one of the cardinal directions $(0,1,-1)$ (north west), $(-1,0,1)$ (north east) or $(1,-1,0)$ (south),
as well as vertices, where those join.
As proven in the article,
nondegenerate $n$-honeycombs follow one identical topological structure. These, in order to be such honeycombs, obey linear constraints.
The constant coordinates of three edges meeting at a vertex add up to zero, and every edge has strictly positive length.
The involved eigenvalues appear as \textit{boundary values}. This leads
to one archetype, as displayed in Figure \ref{archetype} (for $n = 3$).
\begin{figure}[htb]
\centering
\ifuseprecompiled
\includegraphics[width=0.98\textwidth]{pdf_files/manifolds.pdf}
\else
\setlength\figureheight{4cm}
\setlength\figurewidth{12cm}
\input{tikz_base_files/plot_archetype/plot_archetype.tikz}
\fi
\caption{\label{archetype} Left: The archetype of nondegenerate $(n=3)$-honeycombs as decribed in Section \ref{sec:honeycomb}.
The rays pointing in directions north west, north east and south have constant coordinates $\lambda_i(h)$, $\mu_i(h)$ and $\nu_i(h)$, respectively.
The remaining line segments contribute to the total \textit{edge length} of the honeycomb. Right: A degenerate honeycomb,
where the line segment at the top has been contracted. Here, only eight line segments remain to contribute to the total edge length.
}
\end{figure}
Using the correspondance to an appropriate vectorspace,
the set of all $n$-honeycombs is identified by the closure of the set of nondegenerate ones (cf. Section \ref{sec:cones}), allowing edges of length zero as well.
\begin{theorem}[Honeycombs (?)]
The relation $\lambda \boxplus \mu \sim_c \nu$ is satisfied if and only if
there exists a honeycomb $h$ with boundary values $\delta(h) = (\lambda, \mu, -\nu)$.
\end{theorem}
There is also a related statement equivalent to the ones in Lemma \ref{reduc}.
If a triplet $(i,j,k) \in T_{r,n}$ yields an equality as in \eqref{hornineq}, then for the associated honeycomb $h$, $\delta(h) = (\lambda,\mu,-\nu)$,
it holds
\begin{align}
h = h_1 \otimes h_2, \quad \delta(h_1) = (\lambda|_i, \mu|_j, -\nu|_k), \ \delta(h_2) = (\lambda|_{\complement i}, \mu|_{\complement j}, -\nu|_{\complement k}) \label{overlay},
\end{align}
which means that $h$ is an (literal) overlay of two smaller honeycombs. \\\\
Without restrictios to the boundary, $\sim_c$
underlies an $S_3$ symmetry. Since we require all matrices
to be positive semi-definite, the symmetry is reduced again to $S_2$.
The boundary value $\nu$ hence takes a particular role towards $\lambda$ and $\mu$.
\begin{definition}[Nonnegative honeycomb]
We define a nonnegative honeycomb as a honeycomb with boundary balues $\lambda(h), \mu(h) \geq 0$ and $\nu(h) \leq 0$.
\end{definition}
A honeycomb can connect three matrices. In order to connect $m$ matrices,
one can use chains or systems of honeycombs put in relation to each other through their
boundary values. Although the phrase \textit{hive} has appeared before as similar object to honeycombs,
to which we do not intend to refer here, we use it
(in the absence of further \textit{bee related vocabulary}) to emphasize that a collection of honeycombs is given.
\begin{definition}[Hives]\label{hive}
Let $M \in \ensuremath{\mathbb{N}}$ and $B := \{ (i,\alpha) \mid i = 1,\ldots,M, \ \alpha \in \{\mlq{\lambda}\mrq,\mlq{\mu}\mrq,\mlq{\nu}\mrq\} \}$.
Further, let $\sim_S\ \in B \times B $ be an equivalence relation and $f_P: P \subset \bigslant{B}{\sim_S} \rightarrow \mathcal{D}^n_{\geq 0}$
a function. We define a (nonnegative) $(n,M)$-hive $H$ as a collection of (nonnegative) $n-$honeycombs $h^{(1)},\ldots,h^{(M)}$.
We further say it has structure $\sim_S$ and boundary function $f_P$ if the following holds:\\\\
Provided $(i,\alpha) \sim_S (j,\beta)$, then if both $\alpha$ and $\beta$ or neither of them equal $\nu$, it holds $\alpha(h^{(i)}) = \beta(h^{(j)})$ or
otherwise, $\alpha(h^{(i)}) = -\beta(h^{(j)})$. Likewise, provided $[(i,\alpha)] \in P$,
$\alpha(h^{(i)}) = f_P([(i,\alpha)])$ if $\alpha$ does not equal $\nu$ and $\alpha(h^{(i)}) = -f_P(i,\alpha)$ otherwise.
For $\sim_S$, we will only write minimal, generating sets of equivalences. \\\\
Furthermore, we define the hive set $\ensuremath{\mathrm{HIVE}}_{n,M}(\sim_S)$ as set of all $(n,M)$-hives $H$ with structure $\sim_S$ as
well as the boundary map $\delta_P$ to map any hive $H \in \ensuremath{\mathrm{HIVE}}_{n,M}(\sim_S)$ to
its boundary function $f_P$.
\end{definition}
Although formally $P$ is a subset of a quotient structure, we will not explicilty use further notations to
emphasize this fact, but write its elements ordinarily.
A honeycomb with boundary values $(a,b,-c)$ hence can be identified through the structure generated by $\emptyset$ and
boundary function $\{(1,\lambda)\mapsto a,(1,\mu)\mapsto b,(1,\nu)\mapsto c\}$. Further, $\ensuremath{\mathrm{HONEY}}_n = \ensuremath{\mathrm{HIVE}}_{n,1}(\emptyset)$
and $\mathrm{image}(\delta_P(h)) = \delta(h)$.
We will in that sense regard honeycombs as hives.
\begin{lemma}[Eigenvalues of a sum of hermitian matrices]\label{sevherm}
The relation \\ $a^{(1)} \boxplus \ldots \boxplus a^{(m)} \sim_c c$ is satisfied if and only if
there exists a hive $H$ of size $M = m-1$ (cf. Figure \ref{hive_example1}) with structure $\sim_S$, generated by $(i,\nu) \sim_S (i+1,\lambda)$, $i = 1,\ldots,M-1$
and \\ $f_P = \{(1,\lambda)\mapsto a^{(1)},(1,\mu) \mapsto a^{(2)} ,(2,\mu) \mapsto a^{(3)},\ldots,(M,\mu) \mapsto a^{(m)},(M,\nu) \mapsto c\}$.
\end{lemma}
\begin{proof}
The relation is equivalent to the existence of hermitian matrices $A^{(1)},\ldots,A^{(m)},\ C = A^{(1)} + \ldots + A^{(m)}$ with eigenvalues $a^{(1)},\ldots,a^{(m)},c$, respectively.
For $A^{(1,\ldots,k+1)} := A^{(1,\ldots,k)} + A^{(k+1)}, \ k = 1,\ldots,m-1$ with accordant eigenvalues $a^{(1,\ldots,k)}$, the relation can equivalently be
restated as $a^{(1,\ldots,k)} \boxplus a^{(k+1)} \sim_c a^{(1,\ldots,k+1)}, \ k = 1,\ldots,m-1$. This in turn is equivalent to the existence of honeycombs $h^{(1)},\ldots,h^{(m-1)}$ with
boundary values $\delta(h^{(1)}) = (a^{(1)},a^{(2)},-a^{(1,2)}), \delta(h^{(2)}) = (a^{(1,2)},a^{(3)},-a^{(1,2,3)}), \ldots$, $\delta(h^{(m-1)}) = (a^{(1,\ldots,m-1)},a^{(m)},-c)$.
This however depicts the structure of the hive $H$, hence all involved matrices can also be constructed in reverse.
\end{proof}
\begin{figure}[htb]
\centering
\ifuseprecompiled
\includegraphics[width=0.98\textwidth]{pdf_files/manifolds.pdf}
\else
\setlength\figureheight{2cm}
\setlength\figurewidth{14cm}
\input{tikz_base_files/hive_example1/hive_example1.tikz}
\fi
\caption{\label{hive_example1} The schematic display of an $(n,3)$-hive $H$ with structure $\sim_S$ as in Lemma \ref{sevherm}.
North west, north east and south rays correspond to $\lambda(h_i)$, $\mu(h_i)$ and $\nu(h_i)$, respectively.
Coupled boundaries are in grey and connected by a dashed line.}
\end{figure}
The idea behind honeycomb overlays \eqref{overlay} can be extended to hives as well:
\begin{corollary}[Zero eigenvalues]\label{zeroeig}
If the relation \\ $a^{(1)} \boxplus \ldots \boxplus a^{(m)} \sim_c c$ is satisfied for $a^{(i)} \in \mathcal{D}_{\geq 0}^n$, $i = 1,\ldots,m$
and $c_n = 0$, then already $a^{(1)}_n = \ldots = a^{(m)}_n = 0$ and further $a^{(1)}|_{\{1,\ldots,n-1\}} \boxplus \ldots \boxplus a^{(m)}|_{\{1,\ldots,n-1\}} \sim_c c|_{\{1,\ldots,n-1\}}$.
\end{corollary}
\begin{proof}
The first statement follows by basic linear algebra, since $a^{(1)},\ldots,a^{(n)}$ are nonnegative. For the second part,
Lemma \ref{sevherm} and \eqref{overlay} are used. Inductively, in each
honeycomb of the corresponding hive $H$, a seperate $1$-honeycomb with boundary values $(0,0,0)$ can be found. Hence, each honeycomb is an overlay
of such an $1$-honeycomb and an $(n-1)$-honeycomb. All remaining $(n-1)$-honeycombs then form a new hive with identical structure $\sim_S$.
\end{proof}
The second requirement in Theorem \ref{fundthm} can now be advanced.
\begin{corollary}[Advanced Fundamental Theorem]\label{relaxed}
Let $(\gamma,\theta) \in \mathcal{D}^{\infty}_{\geq 0} \times \mathcal{D}^{\infty}_{\geq 0}$ and $n \geq \ell(\gamma), \ell(\theta)$.
Further, let $\widetilde{\theta} = (\theta_+,0,\ldots,0)$, $\widetilde{\gamma} = (\gamma_+,0,\ldots,0)$ be $n$-tuples.
The following statements are equivalent:
\begin{itemize}
\item The pair $(\gamma,\theta)$ is feasible for $m \in \ensuremath{\mathbb{N}}$.
\item There exist $m$ pairs of hermitian, positiv semi-definite matrices $(A^{(i)},B^{(i)}) \in \ensuremath{\mathbb{C}}^{n \times n} \times \ensuremath{\mathbb{C}}^{n \times n}$,
each with identical (multiplicities of) eigenvalues, such that $A := \sum_{i=1}^{m} A^{(i)}$ has eigenvalues $\widetilde{\theta}^2$ and $B := \sum_{i=1}^{m} B^{(i)}$
has eigenvalues $\widetilde{\gamma}^2$, respectively. (The irrelevancy of zeros hence also translates into this setting).
\item There exist $a^{(1)},\ldots,a^{(m)} \in \ensuremath{\mathcal{D}}_{\geq 0}^n$ such that $a^{(1)} \boxplus \ldots \boxplus a^{(m)} \sim_c \widetilde{\gamma}^2$
as well as $a^{(1)} \boxplus \ldots \boxplus a^{(m)} \sim_c \widetilde{\theta}^2$.
\item There exists an $(n,M)$-hive $H$ of size $M = 2(m-1)$ (cf. Figure \ref{hive_example2}) with structure $\sim_S$ and boundary function $f_P$,
where $(i + u,\nu) \sim_S (i+1 + u,\lambda)$, $i = 1,\ldots,M/2-1$, $u \in \{0,M/2\}$, as well as
$(1,\lambda) \sim_S (1+M/2,\lambda)$ and $(i,\mu) \sim_S (i+M/2,\mu)$, $i = 1,\ldots,M$. Further
$f_P = \{(M/2,\nu) \mapsto \widetilde{\gamma},(M,\nu) \mapsto \widetilde{\theta}\}$.
\end{itemize}
\end{corollary}
\begin{proof}
The existence of matrices with actual size $\ell(\gamma)$, $\ell(\theta)$, respectively, follows by repeated application of Lemma \ref{zeroeig}.
The hive essentially consists of two \textit{parallel} hives as in Lemma \ref{sevherm}.
Therefor, the same argumentation holds, but instead of prescribed boundary values $a^{(i)}$, these values
are coupled between the two hive parts.
\end{proof}
\begin{figure}[htb]
\centering
\ifuseprecompiled
\includegraphics[width=0.98\textwidth]{pdf_files/manifolds.pdf}
\else
\setlength\figureheight{2cm}
\setlength\figurewidth{14cm}
\input{tikz_base_files/hive_example2/hive_example2.tikz}
\fi
\caption{\label{hive_example2} The schematic display of an $(n,6)$-hive $H$ (upper part in blue, lower part in magenta)
with structure $\sim_S$ as in Lemma \ref{sevherm}.
North west, north east and south rays correspond to $\lambda(h_i)$, $\mu(h_i)$ and $\nu(h_i)$, respectively.
Coupled boundaries are in grey and connected by a dashed line.}
\end{figure}
It is now easy to verify the feasibility of $(\gamma,\theta)$ as in the example \eqref{not_triv_pair},
as provided in Figure \ref{plot_not_triv_feasible}. Even though it is not trivially feasible,
the pair can be disassembled, in respect of Theorem \ref{summary}, into multiple, trivially
feasible pairs, which then as well prove its feasibilty.
\begin{figure}[htb]
\centering
\ifuseprecompiled
\includegraphics[width=0.98\textwidth]{pdf_files/...pdf}
\else
\setlength\figureheight{7cm}
\setlength\figurewidth{8cm}
\input{tikz_base_files/plot_non_triv_feasible/plot_non_triv_feasible.tikz}
\fi
\caption{\label{plot_not_triv_feasible} A $(4,2)$-hive consisting of two coupled honeycombs (blue for $\gamma$, magenta for $\theta$),
which are slightly shifted for better visibilty, generated by Algorithm \ref{linprog}. Note that some lines have multiplicity $2$.
The coupled boundary values are given by $a = (4,1.5,0,0)$ and $b = (3.5,3.5,0,0)$.
It proves the feasibilty of the pair $(\gamma,\theta)$, $\widetilde{\gamma}^2 = (7.5,5,0,0)$, $\widetilde{\theta}^2 = (6,3.5,2,1)$ for $m = 2$,
since $\widetilde{\gamma}^2, \widetilde{\theta}^2 \sim_c a \boxplus b$.
Only due to the short, vertical line segment in the middle, the hive does not provide trivial feasibility.}
\end{figure}
\begin{lemma}[A set of inequalities for feasible pairs]\label{setofineq}
Let $a^{(1)} \ \dot{\cup} \ a^{(2)} = \mathbb{N}$ be disjoint and both strictly increasing, with $a^{(1)}$ finite of length $r$.
If $(\gamma,\theta) \in \mathcal{D}^{\infty}_{\geq 0} \times \mathcal{D}^{\infty}_{\geq 0}$ is feasible for $m$, then it holds
\begin{align*}
\sum_{i \in A_m^{(1)}} \gamma^2_{i} \leq \sum_{i \notin A_m^{(2)}} \theta^2_i, \quad A_m^{(u)} = \{ m(a^{(u)}_i - i) + i \mid i = 1,2,\ldots \}, \ u = 1,2
\end{align*}
\end{lemma}
\begin{proof}
Let $n = \max(A^{(1)},\ell(\gamma),\ell(\theta)))$. Let further $\widetilde{A}_j^{(2)}$ contain the
$k = |A_m^{(2)} \cap \{1,\ldots,n\}|$ smallest elements of $A_j^{(2)}$, $j = 1,\ldots,m$, and let $\widetilde{A}_j^{(1)} = A_j^{(1)}$.
For fixed $u \in \{1,2\}$ and
\begin{align}\label{zeta_eig}
\zeta^{(u)} \sim_c \lambda^{(1)} \boxplus \ldots \boxplus \lambda^{(m)}, \quad \zeta^{(u)},\lambda^{(j)} \in \ensuremath{\mathcal{D}}_{\geq 0}^{n},
\end{align}
we first show that
\begin{align}\label{leftineq}
\sum_{i \in \widetilde{A}_m^{(u)}} \zeta^{(u)}_{i} \leq \sum_{j = 1}^m \sum_{i \in \widetilde{A}_1^{(u)}} \lambda^{(j)}_{i}
\end{align}
This follows inductively, if
\begin{align}\label{induc}
\sum_{i \in \widetilde{A}_j^{(u)}} \nu \leq \sum_{i \in \widetilde{A}_{j-1}^{(u)}} \lambda_i + \sum_{i \in \widetilde{A}_{1}^{(u)}} \mu_i
\end{align}
is true whenever $\nu \sim_c \lambda \boxplus \mu$. By Theorem \ref{desTrn} [?], this holds if each of the corresponding triples
$(\alpha,\beta,\omega) = ( \bigtriangleup \widetilde{A}_{j-1}^{(u)}, \bigtriangleup \widetilde{A}_1^{(u)}, \bigtriangleup \widetilde{A}_j^{(u)})$
fulfills $\alpha \boxplus \beta \sim_c \omega$. Then again, this already follows from the (diagonal) matrix identity
\begin{align*}
\diag(\widetilde{A}_j^{(u)}) - \diag(1,\ldots,n) & = \diag(\widetilde{A}_{j-1}^{(u)}) + \diag(\widetilde{A}_{1}^{(u)}) - 2\diag(1,\ldots,n) \\
\Leftrightarrow j(a^{(u)}_i - i) & = (j-1)(a^{(u)}_i - i) + (a^{(u)}_i - i), \quad i = 1,\ldots,n,
\end{align*}
proving \eqref{induc} and hence \eqref{leftineq}.
As $(\gamma,\theta)$ is feasible, we can as well set $\zeta^{(1)} = (\gamma_1^2,\ldots,\gamma_n^2), \ \zeta^{(2)} = (\theta_1^2,\ldots,\theta_n^2)$
in \eqref{zeta_eig}.
Since the trace property is valid, we have
$\sum_{i = 1}^{n} \theta_i^2 = \sum_{i = 1}^{n} \lambda^{(1)}_i + \ldots + \sum_{i = 1}^{n} \lambda^{(m)}_i$
By subtracting \eqref{leftineq} for $u = 2$ from this equality,
we receive
\begin{align}
\sum_{i \notin A_m^{(2)}} \theta^2_i \geq
\sum_{j = 1}^m \sum_{i \in \{1,\ldots,n\} \setminus \widetilde{A}_1^{(2)}} \lambda^{(j)}_{i}
\overset{\lambda_i^{(j)} \geq 0}{\geq} \sum_{j = 1}^m \sum_{i \in \widetilde{A}_1^{(1)}} \lambda^{(j)}_{i}
\overset{\eqref{leftineq} \mbox{ for } u = 1}{\geq} \sum_{i \in A_m^{(u)}} \gamma^2_{i}
\end{align}
\end{proof}
Among the various inequalities one might derive from Lemma \ref{setofineq},
the following two might be considered most characterizing.
Both inequalities are sharp as it can be proven by quite simple trivially feasible pairs.
\begin{corollary}[Ky Fan analogous for feasible pairs]\label{kyfan}
The choice $a^{(1)} = (1,\ldots,r)$ in Lemma \ref{setofineq} yields the inequality
\[ \sum_{i = 1}^r \gamma_i^2 \leq \sum_{i = 1}^{mr} \theta_i^2. \]
\end{corollary}
\begin{corollary}[Bottom inequality for feasible pairs]\label{prop2}
The choice $a^{(1)} = (r+1)$ in Lemma \ref{setofineq} yields the inequality
\[ \gamma_{rm+1}^2 \leq \sum_{i = r+1}^{r+m} \theta_i^2. \]
\end{corollary}
Let $\widetilde{\gamma}^2 = (10,2,0.5,0,0)$ and $\widetilde{\theta} = (4,3,2.5,2,1)$. According to
Corollary \ref{kyfan}, the pair $(\gamma,\theta)$ is not feasible for $m = 2,3$, but we know
that it must either be feasible for $m = 4$ or trivially feasible for $m = 5$ (cf. Lemma \ref{prop1}).
In fact, the hive in Figure \ref{plot_triv_feasible_single} and
\ref{plot_triv_feasible} provides that it is feasible for $m = 4$.
Algorithm \ref{linprog} reveals near trivial feasibility
for most pairs as well as that the coupled boundaries contain many zeros.
This results from
the minimization of the total edge length.
However, it does not disprove whether the pair might be trivially feasible.
In this case for example, one can quite easily verify
that the pair is even trivially feasible for $m=4$.
\begin{figure}[htb]
\centering
\ifuseprecompiled
\includegraphics[width=0.98\textwidth]{pdf_files/manifolds.pdf}
\else
\setlength\figureheight{3cm}
\setlength\figurewidth{12cm}
\input{tikz_base_files/plot_triv_feasible/plot_triv_feasible_n5m4_single.tikz}
\fi
\caption{\label{plot_triv_feasible_single} A $(5,4)$-hive consisting of six coupled honeycombs (blue for $\gamma$, magenta for $\theta$),
which are slightly shifted for better visibilty, generated by Algorithm \ref{linprog}. Note that some lines have multiplicity larger than $1$.
Also, in each second pair of honeycombs, the roles of boundaries $\lambda$ and $\mu$ are switched (which we can do due to the symmetry regarding $\boxplus$), such that the honeycombs can be
combined to a single diagram as in Figure \ref{plot_triv_feasible}. This means that the south rays of an odd numbered pair are always connected
to the north-east (instead of north-west) rays of the consecutive pair.
The coupled boundary values are given by $a = (2,0,0,0,0)$, $b = (1,1,0,0,0)$, $c = (4,0,0,0,0)$ and $d = (3,1,1,0,0)$.
It proves the feasibilty of the pair $(\gamma,\theta)$, $\widetilde{\gamma}^2 = (10,2,0.5,0,0)$, $\widetilde{\theta}^2 = (4,3,2.5,2,1)$ for $m = 4$,
since both $\widetilde{\gamma}^2, \widetilde{\theta}^2 \sim_c a \boxplus b \boxplus c \boxplus d$.}
\end{figure}
\begin{figure}[htb]
\centering
\ifuseprecompiled
\includegraphics[width=0.98\textwidth]{pdf_files/manifolds.pdf}
\else
\setlength\figureheight{10cm}
\setlength\figurewidth{11cm}
\input{tikz_base_files/plot_triv_feasible/plot_triv_feasible_n5m4.tikz}
\fi
\caption{\label{plot_triv_feasible} The three overlayed honeycomb pairs in Figure \ref{plot_triv_feasible_single} put together
with respect to their coupling.}
\end{figure}
\section{Honeycombs and Hives}\label{sec:honeycomb}
The following result by Knutson and Tao poses the complete resolution to Weyl's problem
and is based on preceding breakthroughs \cite{Kl98_Sta, HeRo95_Eig, KnTa99_The, KnTaWo04_The}.
It is strongly advised to read their complete
article \cite{KnTa01_Hon} for a better understanding of honeycombs.
These are designed to exactly reflect the mathematics behind the relation $\sim_c$
and allow graph theory as well as linear programming to be applied to Weyl's problem.
Honeycombs (cf. Figure \ref{archetype}) can be described as two dimensional objects, contained in the plane
\[ \ensuremath{\mathbb{R}}^3_{\sum = 0} := \{ x \in \ensuremath{\mathbb{R}}^3 \mid x_1 + x_2 + x_3 = 0\}, \]
consisting of line segments (or edges) and rays, each parallel to one of the cardinal directions $(0,1,-1)$ (north west), $(-1,0,1)$ (north east) or $(1,-1,0)$ (south),
as well as vertices, where those join.
As proven in the article,
all nondegenerate $n$-honeycombs share one identical topological structure and, in order to be honeycombs, obey linear constraints:
The constant coordinates of three edges meeting at a vertex add up to zero, and every edge has strictly positive length.
The involved eigenvalues appear as \textit{boundary values}. This leads
to one archetype, as displayed in Figure \ref{archetype} (for $n = 3$).
\begin{figure}[htb]
\centering
\ifuseprecompiled
\includegraphics{pdf_files/plot_archetype.pdf}
\else
\setlength\figureheight{4cm}
\setlength\figurewidth{12cm}
\input{tikz_base_files/plot_archetype/plot_archetype.tikz}
\fi
\caption{\label{archetype} Left: The archetype of nondegenerate $(n=3)$-honeycombs as described in Section \ref{sec:honeycomb}.
The rays pointing in directions north west, north east and south have constant coordinates $\lambda_i(h)$, $\mu_i(h)$ and $\nu_i(h)$, respectively.
The remaining line segments contribute to the total \textit{edge length} of the honeycomb. Right: A degenerate honeycomb,
where the line segment at the top has been completely contracted. Here, only eight line segments remain to contribute to the total edge length.
}
\end{figure}
Using the correspondence to an appropriate vector space,
the set of all $n$-honeycombs is identified with the closure of the set of nondegenerate ones (cf. Section \ref{sec:cones}), allowing edges of length zero as well.
\begin{theorem}[Honeycombs \cite{KnTa01_Hon}]
The relation $\lambda \boxplus \mu \sim_c \nu$ is satisfied if and only if
there exists a honeycomb $h$ with boundary values $\delta(h) = (\lambda, \mu, -\nu)$.
\end{theorem}
There is also a related statement implied by the ones in Lemma \ref{reduc}.
If a triplet $(i,j,k) \in T_{r,n}$ yields an equality as in \eqref{hornineq}, then for the associated honeycomb $h$, $\delta(h) = (\lambda,\mu,-\nu)$,
it holds
\begin{align}
h = h_1 \otimes h_2, \quad \delta(h_1) = (\lambda|_i, \mu|_j, -\nu|_k), \ \delta(h_2) = (\lambda|_{\complement i}, \mu|_{\complement j}, -\nu|_{\complement k}) \label{overlay},
\end{align}
which means that $h$ is a (literal) overlay of two smaller honeycombs. Vice versa, if a
honeycomb is an overlay of two smaller ones, then it yields two separate eigenvalue relations, however
the splitting does not necessarily correspond to a triplet in $T_{r,n}$ \cite{KnTa01_Hon}.\\\\
Without restrictions on the boundary, $\sim_c$
exhibits an $S_3$ symmetry. Since we require all three matrices
to be positive semi-definite, the symmetry is reduced again to $S_2$.
The boundary value $\nu$ hence takes a particular role towards $\lambda$ and $\mu$.
\begin{definition}[Nonnegative honeycomb]
We define a nonnegative honeycomb as a honeycomb with boundary values $\lambda(h), \mu(h) \geq 0$ and $\nu(h) \leq 0$.
\end{definition}
A honeycomb can connect three matrices. In order to connect $m$ matrices,
one can use chains or systems of honeycombs put in relation to each other through their
boundary values. Although the phrase \textit{hive} has appeared before as a similar object to honeycombs,
to which we do not intend to refer here, we use it
(in the absence of further \textit{bee related vocabulary}) to emphasize that a collection of honeycombs is given.
\begin{definition}[Hives]\label{hive}
Let $M \in \ensuremath{\mathbb{N}}$ and $B := \{ (i,\alpha) \mid i = 1,\ldots,M, \ \alpha \in \{\mlq{\lambda}\mrq,\mlq{\mu}\mrq,\mlq{\nu}\mrq\} \}$.
Further, let $\sim_S\ \subset B \times B$ be an equivalence relation, $P := \{ (i,\alpha) \mid |[(i,\alpha)]| = 1 \}$ the set of unrelated
elements and $f_P: P \rightarrow \mathcal{D}^n_{\geq 0}$ a function. We define a (nonnegative) $(n,M)$-hive $H$ as a collection of (nonnegative) $n$-honeycombs $h^{(1)},\ldots,h^{(M)}$.
We further say it has structure $\sim_S$ and boundary function $f_P$ if the following holds:\\\\
Provided $(i,\alpha) \sim_S (j,\beta)$, then if both $\alpha$ and $\beta$ or neither of them equal $\nu$, it holds $\alpha(h^{(i)}) = \beta(h^{(j)})$ or
otherwise, $\alpha(h^{(i)}) = -\beta(h^{(j)})$. Likewise, provided $(i,\alpha) \in P$,
$\alpha(h^{(i)}) = f_P(i,\alpha)$ if $\alpha$ does not equal $\nu$ and $\alpha(h^{(i)}) = -f_P(i,\alpha)$ otherwise.
For $\sim_S$, we will only write minimal, generating sets of equivalences. \\\\
Furthermore, we define the hive set $\ensuremath{\mathrm{HIVE}}_{n,M}(\sim_S)$ as set of all $(n,M)$-hives $H$ with structure $\sim_S$ as
well as the boundary map $\delta_P$ to map any hive $H \in \ensuremath{\mathrm{HIVE}}_{n,M}(\sim_S)$ to
its boundary function $f_P$.
\end{definition}
A honeycomb with boundary values $(a,b,-c)$ hence can be identified through the structure generated by $\emptyset$ and
boundary function $\{(1,\lambda)\mapsto a,(1,\mu)\mapsto b,(1,\nu)\mapsto c\}$. Further, $\ensuremath{\mathrm{HONEY}}_n = \ensuremath{\mathrm{HIVE}}_{n,1}(\emptyset)$.
We will in that sense regard honeycombs as hives.
\begin{lemma}[Eigenvalues of a sum of hermitian matrices]\label{sevherm}
The relation \\ $a^{(1)} \boxplus \ldots \boxplus a^{(m)} \sim_c c$ is satisfied if and only if
there exists a hive $H$ of size $M = m-1$ (cf. Figure \ref{hive_example1}) with structure $\sim_S$, generated by $(i,\nu) \sim_S (i+1,\lambda)$, $i = 1,\ldots,M-1$
and \\ $f_P = \{(1,\lambda)\mapsto a^{(1)},(1,\mu) \mapsto a^{(2)} ,(2,\mu) \mapsto a^{(3)},\ldots,(M,\mu) \mapsto a^{(m)},(M,\nu) \mapsto c\}$.
\end{lemma}
\begin{proof}
The relation is equivalent to the existence of hermitian matrices $A^{(1)},\ldots,A^{(m)}$, $C = A^{(1)} + \ldots + A^{(m)}$ with eigenvalues $a^{(1)},\ldots,a^{(m)},c$, respectively.
For $A^{(1,\ldots,k+1)} := A^{(1,\ldots,k)} + A^{(k+1)}, \ k = 1,\ldots,m-1$ with accordant eigenvalues $a^{(1,\ldots,k)}$, the relation can equivalently be
restated as $a^{(1,\ldots,k)} \boxplus a^{(k+1)} \sim_c a^{(1,\ldots,k+1)}, \ k = 1,\ldots,m-1$. This in turn is equivalent to the existence of honeycombs $h^{(1)},\ldots,h^{(m-1)}$ with
boundary values $\delta(h^{(1)}) = (a^{(1)},a^{(2)},-a^{(1,2)}), \delta(h^{(2)}) = (a^{(1,2)},a^{(3)},-a^{(1,2,3)}), \ldots$, $\delta(h^{(m-1)}) = (a^{(1,\ldots,m-1)},a^{(m)},-c)$.
This depicts exactly the structure $\sim_S$ and boundary function $f_P$. Hence, if the hive $H$ is assumed to exist, all involved matrices can also be constructed in reverse.
\end{proof}
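The telescoping construction in the proof can be sketched numerically; the following Python/NumPy snippet is a consistency check on the chain of partial sums, not an implementation of honeycombs:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 3

def random_hermitian(n):
    X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (X + X.conj().T) / 2

def spectrum(M):
    # Decreasingly sorted eigenvalues.
    return np.sort(np.linalg.eigvalsh(M))[::-1]

A = [random_hermitian(n) for _ in range(m)]

# Partial sums A^(1,...,k+1) = A^(1,...,k) + A^(k+1) as in the proof; each
# consecutive triple of spectra realizes one pairwise relation ~_c, i.e. one honeycomb.
partial = A[0]
chain = [spectrum(partial)]
for k in range(1, m):
    partial = partial + A[k]
    chain.append(spectrum(partial))

c = chain[-1]  # eigenvalues of the total sum C
# Trace consistency of the boundary values along the chain.
assert np.isclose(c.sum(), sum(spectrum(Ai).sum() for Ai in A))
```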
\begin{figure}[htb]
\centering
\ifuseprecompiled
\includegraphics{pdf_files/hive_example1.pdf}
\else
\setlength\figureheight{2cm}
\setlength\figurewidth{14cm}
\input{tikz_base_files/hive_example1/hive_example1.tikz}
\fi
\caption{\label{hive_example1} The schematic display of an $(n,3)$-hive $H$ with structure $\sim_S$ as in Lemma \ref{sevherm}.
North west, north east and south rays correspond to $\lambda(h^{(i)})$, $\mu(h^{(i)})$ and $\nu(h^{(i)})$, respectively.
Coupled boundaries are in gray and connected by dashed lines.}
\end{figure}
The idea behind honeycomb overlays \eqref{overlay} can be extended to hives as well:
\begin{corollary}[Zero eigenvalues]\label{zeroeig}
If the relation \\ $a^{(1)} \boxplus \ldots \boxplus a^{(m)} \sim_c c$ is satisfied for $a^{(i)} \in \mathcal{D}_{\geq 0}^n$, $i = 1,\ldots,m$
and $c_n = 0$, then $a^{(1)}_n = \ldots = a^{(m)}_n = 0$ and already $a^{(1)}|_{\{1,\ldots,n-1\}} \boxplus \ldots \boxplus a^{(m)}|_{\{1,\ldots,n-1\}} \sim_c c|_{\{1,\ldots,n-1\}}$.
\end{corollary}
\begin{proof}
The first statement follows by basic linear algebra, since $a^{(1)},\ldots,a^{(m)}$ are nonnegative. For the second part,
Lemma \ref{sevherm} and \eqref{overlay} are used. Inductively, in each
honeycomb of the corresponding hive $H$, a separate $1$-honeycomb with boundary values $(0,0,0)$ can be found. Hence, each honeycomb is an overlay
of such a $1$-honeycomb and an $(n-1)$-honeycomb. All remaining $(n-1)$-honeycombs then form a new hive with identical structure $\sim_S$.
\end{proof}
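A quick numerical illustration of the first statement (Python/NumPy, with a construction chosen here for illustration): positive semi-definite summands annihilating a common vector $v$ produce $c_n = 0$, and conversely $Cv = 0$ forces $A^{(i)}v = 0$ for every summand.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 3

v = rng.normal(size=(n, 1))
v /= np.linalg.norm(v)
P = np.eye(n) - v @ v.T  # orthogonal projector onto span(v)^perp

def psd_annihilating_v():
    # A = P G G^T P is positive semi-definite with A v = 0.
    G = rng.normal(size=(n, n))
    return P @ G @ G.T @ P

A = [psd_annihilating_v() for _ in range(m)]
C = sum(A)

# The smallest eigenvalue c_n vanishes, and so does a_n^(i) for every summand.
assert abs(np.linalg.eigvalsh(C)[0]) < 1e-8
for Ai in A:
    assert abs(np.linalg.eigvalsh(Ai)[0]) < 1e-8
```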
The second requirement in Theorem \ref{fundthm} can now be extended.
\begin{corollary}[Extended Fundamental Theorem]\label{relaxed}
Let $(\gamma,\theta) \in \mathcal{D}^{\infty}_{\geq 0} \times \mathcal{D}^{\infty}_{\geq 0}$ and $n \geq \degree(\gamma), \degree(\theta)$.
Further, let $\widetilde{\theta} = (\theta_+,0,\ldots,0)$, $\widetilde{\gamma} = (\gamma_+,0,\ldots,0)$ be $n$-tuples.
The following statements are equivalent:
\begin{itemize}
\item The pair $(\gamma,\theta)$ is feasible for $m \in \ensuremath{\mathbb{N}}$.
\item There exist $m$ pairs of hermitian, positive semi-definite matrices $(A^{(i)},B^{(i)}) \in \ensuremath{\mathbb{C}}^{n \times n} \times \ensuremath{\mathbb{C}}^{n \times n}$,
each with identical (multiplicities of) eigenvalues, such that $A := \sum_{i=1}^{m} A^{(i)}$ has eigenvalues $\widetilde{\theta}^2$ and $B := \sum_{i=1}^{m} B^{(i)}$
has eigenvalues $\widetilde{\gamma}^2$, respectively. (The irrelevance of zeros hence also translates into this setting.)
\item There exist $a^{(1)},\ldots,a^{(m)} \in \ensuremath{\mathcal{D}}_{\geq 0}^n$ such that $a^{(1)} \boxplus \ldots \boxplus a^{(m)} \sim_c \widetilde{\gamma}^2$
as well as $a^{(1)} \boxplus \ldots \boxplus a^{(m)} \sim_c \widetilde{\theta}^2$.
\item There exists a nonnegative $(n,M)$-hive $H$ of size $M = 2(m-1)$ (cf. Figure \ref{hive_example2}) with structure $\sim_S$ and boundary function $f_P$,
where $(i + u,\nu) \sim_S (i+1 + u,\lambda)$, $i = 1,\ldots,M/2-1$, $u \in \{0,M/2\}$, as well as
$(1,\lambda) \sim_S (1+M/2,\lambda)$ and $(i,\mu) \sim_S (i+M/2,\mu)$, $i = 1,\ldots,M$. Further
$f_P = \{(M/2,\nu) \mapsto \widetilde{\gamma}^2,(M,\nu) \mapsto \widetilde{\theta}^2\}$.
\end{itemize}
\end{corollary}
\begin{proof}
The existence of matrices with actual size $\degree(\gamma)$, $\degree(\theta)$, respectively, follows by repeated application of Lemma \ref{zeroeig}.
The hive essentially consists of two \textit{parallel} hives as in Lemma \ref{sevherm}.
Therefore, the same argumentation applies, but instead of prescribed boundary values $a^{(i)}$, these values
are coupled between the two hive parts.
\end{proof}
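The second bullet can be instantiated directly; the following Python/NumPy sketch uses real symmetric matrices as a special case of hermitian ones and builds pairs $(A^{(i)},B^{(i)})$ with identical eigenvalues by orthogonal conjugation:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 2

def random_psd(n):
    G = rng.normal(size=(n, n))
    return G @ G.T  # positive semi-definite (almost surely definite)

def random_orthogonal(n):
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    return Q

# Pairs (A_i, B_i) with identical eigenvalues: B_i = Q_i A_i Q_i^T.
A = [random_psd(n) for _ in range(m)]
B = [(Q := random_orthogonal(n)) @ Ai @ Q.T for Ai in A]

theta2 = np.linalg.eigvalsh(sum(A))  # spectrum of A = sum_i A_i
gamma2 = np.linalg.eigvalsh(sum(B))  # spectrum of B = sum_i B_i

# Identical summand spectra force identical traces of the two sums.
assert np.isclose(theta2.sum(), gamma2.sum())
```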
\begin{figure}[htb]
\centering
\ifuseprecompiled
\includegraphics{pdf_files/hive_example2.pdf}
\else
\setlength\figureheight{2cm}
\setlength\figurewidth{14cm}
\input{tikz_base_files/hive_example2/hive_example2.tikz}
\fi
\caption{\label{hive_example2} The schematic display of an $(n,6)$-hive $H$ (upper part in blue, lower part in magenta)
with structure $\sim_S$ as in Lemma \ref{sevherm}.
North west, north east and south rays correspond to $\lambda(h^{(i)})$, $\mu(h^{(i)})$ and $\nu(h^{(i)})$, respectively.
Coupled boundaries are in gray and connected by dashed lines.}
\end{figure}
It is now easy to verify the feasibility of $(\gamma,\theta)$ as in the example \eqref{not_triv_pair},
as provided by Figure \ref{plot_not_triv_feasible}. Even though the pair is not diagonally feasible,
it can be disassembled, in accordance with Theorem \ref{summary}, into multiple diagonally
feasible pairs, which then likewise prove its feasibility.
\subsection{Application of Hives}
\begin{figure}[htb]
\centering
\ifuseprecompiled
\includegraphics{pdf_files/plot_non_triv_feasible.pdf}
\else
\setlength\figureheight{5cm}
\setlength\figurewidth{8cm}
\input{tikz_base_files/plot_non_triv_feasible/plot_non_triv_feasible.tikz}
\fi
\caption{\label{plot_not_triv_feasible} A $(4,2)$-hive consisting of two coupled honeycombs (blue for $\gamma$, magenta for $\theta$),
which are slightly shifted for better visibility, generated by Algorithm \ref{linprog}. Note that some lines have multiplicity $2$.
The coupled boundary values are given by $a = (4,1.5,0,0)$ and $b = (3.5,3.5,0,0)$.
It proves the feasibility of the pair $(\gamma,\theta)$, $\widetilde{\gamma}^2 = (7.5,5,0,0)$, $\widetilde{\theta}^2 = (6,3.5,2,1)$ for $m = 2$,
since $\widetilde{\gamma}^2, \widetilde{\theta}^2 \sim_c a \boxplus b$.
Only due to the short, vertical line segment in the middle, the hive does not provide diagonal feasibility.}
\end{figure}
\begin{lemma}[A set of inequalities for feasible pairs]\label{setofineq}
Let $a^{(1)}$ and $a^{(2)}$ be disjoint, strictly increasing sequences with $a^{(1)} \ \dot{\cup} \ a^{(2)} = \mathbb{N}$, where $a^{(1)}$ is finite of length $r$.
If $(\gamma,\theta) \in \mathcal{D}^{\infty}_{\geq 0} \times \mathcal{D}^{\infty}_{\geq 0}$ is feasible for $m$, then it holds
\begin{align*}
\sum_{i \in A_m^{(1)}} \gamma^2_{i} \leq \sum_{i \notin A_m^{(2)}} \theta^2_i, \quad A_m^{(u)} = \{ m(a^{(u)}_i - i) + i \mid i = 1,2,\ldots \}, \ u = 1,2.
\end{align*}
\end{lemma}
\begin{proof}
Let $n = \max(\max A_m^{(1)},\degree(\gamma),\degree(\theta))$. Let further $\widetilde{A}_j^{(2)}$ contain the
$k = |A_m^{(2)} \cap \{1,\ldots,n\}|$ smallest elements of $A_j^{(2)}$ and let $\widetilde{A}_j^{(1)} = A_j^{(1)}$, $j = 1,\ldots,m$.
For fixed $u \in \{1,2\}$ and
\begin{align}\label{zeta_eig}
\zeta^{(u)} \sim_c \lambda^{(1)} \boxplus \ldots \boxplus \lambda^{(m)}, \quad \zeta^{(u)},\lambda^{(j)} \in \ensuremath{\mathcal{D}}_{\geq 0}^{n},
\end{align}
we first show that
\begin{align}\label{leftineq}
\sum_{i \in \widetilde{A}_m^{(u)}} \zeta^{(u)}_{i} \leq \sum_{j = 1}^m \sum_{i \in \widetilde{A}_1^{(u)}} \lambda^{(j)}_{i}.
\end{align}
This follows inductively, if
\begin{align}\label{induc}
\sum_{i \in \widetilde{A}_j^{(u)}} \nu_i \leq \sum_{i \in \widetilde{A}_{j-1}^{(u)}} \lambda_i + \sum_{i \in \widetilde{A}_{1}^{(u)}} \mu_i
\end{align}
is true whenever $\nu \sim_c \lambda \boxplus \mu$. By Theorem \ref{desTrn} \cite{KnTa01_Hon, Ho1962_Eig, Fu00_Eig}, this holds if each of the corresponding triplets
$(\alpha,\beta,\omega) = ( \bigtriangleup \widetilde{A}_{j-1}^{(u)}, \bigtriangleup \widetilde{A}_1^{(u)}, \bigtriangleup \widetilde{A}_j^{(u)})$
fulfills $\alpha \boxplus \beta \sim_c \omega$. Then again, this already follows from the (diagonal) matrix identity
\begin{align*}
\diag(\widetilde{A}_j^{(u)}) - \diag(1,\ldots,n) & = \diag(\widetilde{A}_{j-1}^{(u)}) + \diag(\widetilde{A}_{1}^{(u)}) - 2\diag(1,\ldots,n) \\
\Leftrightarrow j(a^{(u)}_i - i) & = (j-1)(a^{(u)}_i - i) + (a^{(u)}_i - i), \quad i = 1,\ldots,n,
\end{align*}
proving \eqref{induc} and hence \eqref{leftineq}.
As $(\gamma,\theta)$ is feasible, we can as well set $\zeta^{(1)} = (\gamma_1^2,\ldots,\gamma_n^2)$, $\zeta^{(2)} = (\theta_1^2,\ldots,\theta_n^2)$
in \eqref{zeta_eig}.
Since the trace property is valid, we have
$\sum_{i = 1}^{n} \theta_i^2 = \sum_{i = 1}^{n} \lambda^{(1)}_i + \ldots + \sum_{i = 1}^{n} \lambda^{(m)}_i$.
By subtracting \eqref{leftineq} for $u = 2$ from this equality, we obtain
\begin{equation}
\begin{aligned}
\sum_{i \notin A_m^{(2)}} \theta^2_i \geq \sum_{i \in \{1,\ldots,n\} \setminus \widetilde{A}_m^{(2)}} \theta^2_i & \geq
\sum_{j = 1}^m \sum_{i \in \{1,\ldots,n\} \setminus \widetilde{A}_1^{(2)}} \lambda^{(j)}_{i} \\
& \overset{\lambda_i^{(j)} \geq 0}{\geq} \sum_{j = 1}^m \sum_{i \in \widetilde{A}_1^{(1)}} \lambda^{(j)}_{i}
\overset{\eqref{leftineq} \mbox{ for } u = 1}{\geq} \sum_{i \in A_m^{(1)}} \gamma^2_{i}
\end{aligned}
\label{lasteqinsetineq}
\end{equation}
\end{proof}
Note that the right sum in Lemma \ref{setofineq} always has $m$ times as many summands as the left sum.
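As a small illustration (our own, not part of the original text), the index sets $A_m^{(u)}$ and the inequalities of Lemma \ref{setofineq} can be checked numerically, here on the pair of Figure \ref{plot_not_triv_feasible}, which is feasible for $m = 2$:

```python
import numpy as np

def A_set(a, m):
    """A_m = { m*(a_i - i) + i } for a finite strictly increasing tuple a (1-based)."""
    return {m * (ai - i) + i for i, ai in enumerate(a, start=1)}

def rhs_indices(a1, m, N):
    """Indices i <= N with i not in A_m^{(2)}, where a^{(2)} complements a^{(1)}."""
    a2 = [k for k in range(1, (m + 1) * N + 1) if k not in set(a1)]
    return [i for i in range(1, N + 1) if i not in A_set(a2, m)]

gamma2 = [7.5, 5, 0, 0]   # squared values of the pair, feasible for m = 2
theta2 = [6, 3.5, 2, 1]
m, N = 2, 4

for a1 in [(1,), (2,), (1, 2)]:   # a few choices of a^{(1)}
    lhs = sum(gamma2[i - 1] for i in A_set(a1, m) if i <= N)
    rhs = sum(theta2[i - 1] for i in rhs_indices(a1, m, N))
    assert lhs <= rhs + 1e-12
```

For $a^{(1)} = (1,2)$ the inequality even holds with equality, consistent with the trace property.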
\begin{corollary}[Analogous to Lemma \ref{reduc}]
Let $b^{(1)}_1 < \ldots < b^{(1)}_r$ be the indices appearing
in $A^{(1)}_m$ and $b^{(2)}_1 < \ldots < b^{(2)}_{mr}$ those in $\mathbb{N} \setminus A^{(2)}_m$, and let
$c^{(u)}_1 < c^{(u)}_2 < \ldots$ be complementary to $b^{(u)}$, $u = 1,2$.
If the relation in Lemma \ref{setofineq} holds with equality,
then
\begin{align}
\left( (\gamma_{b^{(1)}_1},\ldots,\gamma_{b^{(1)}_r},0,\ldots), (\theta_{b^{(2)}_1},\ldots,\theta_{b^{(2)}_{mr}},0,\ldots)\right) \label{seprel1}
\end{align}
as well as
\begin{align}
\left( (\gamma_{c^{(1)}_1},\gamma_{c^{(1)}_2},\ldots), (\theta_{c^{(2)}_1},\theta_{c^{(2)}_2},\ldots)\right) \label{seprel2}
\end{align}
are already feasible for $m$.
\end{corollary}
\begin{proof}
All inequalities in the proof of Lemma \ref{setofineq}
must hold as equalities.
From \eqref{leftineq}, \eqref{induc} and by the thereby implied reducibility (Lemma \ref{reduc} \cite{KnTa01_Hon, Ho1962_Eig}), we already have
\begin{align}
\gamma^2|_{\widetilde{A}^{(1)}_m} & \sim_c \lambda^{(1)}|_{\widetilde{A}^{(1)}_1} \boxplus \ldots \boxplus \lambda^{(m)}|_{\widetilde{A}^{(1)}_1} \label{relA} \\
\theta^2|_{\widetilde{A}^{(2)}_m} & \sim_c \lambda^{(1)}|_{\widetilde{A}^{(2)}_1} \boxplus \ldots \boxplus \lambda^{(m)}|_{\widetilde{A}^{(2)}_1} \label{relB}
\end{align}
Hence also
\begin{align}
\gamma^2|_{\{1,\ldots,n\} \setminus \widetilde{A}^{(1)}_m} & \sim_c \lambda^{(1)}|_{\{1,\ldots,n\} \setminus \widetilde{A}^{(1)}_1} \boxplus \ldots \boxplus \lambda^{(m)}|_{\{1,\ldots,n\} \setminus \widetilde{A}^{(1)}_1} \label{relC} \\
\theta^2|_{\{1,\ldots,n\} \setminus \widetilde{A}^{(2)}_m} & \sim_c \lambda^{(1)}|_{\{1,\ldots,n\} \setminus \widetilde{A}^{(2)}_1} \boxplus \ldots \boxplus \lambda^{(m)}|_{\{1,\ldots,n\} \setminus \widetilde{A}^{(2)}_1}. \label{relD}
\end{align}
All nonzero entries of $\gamma,\theta$ have indices within $\{1,\ldots,n\}$.
Further, from \eqref{lasteqinsetineq}, it follows that $\lambda^{(j)}_i = 0$ for all $i \in \{1,\ldots,n\} \setminus (\widetilde{A}^{(1)}_1 \cup \widetilde{A}^{(2)}_1)$.
Hence $\lambda^{(j)}|_{\{1,\ldots,n\} \setminus \widetilde{A}^{(2)}_1}$ only differs from $\lambda^{(j)}|_{\widetilde{A}^{(1)}_1}$ by additional zeros.
By adding these in \eqref{relA} and \eqref{relD}, the first part of the proof is finished with regard to Theorem \ref{relaxed}.
Analogously, the second part follows from \eqref{relB} and \eqref{relC}.
\end{proof}
Together with \eqref{overlay} this also implies that the corresponding hive
is an overlay of two smaller hives modulo zero boundaries.
Among the various inequalities one might derive from Lemma \ref{setofineq},
the following two might be considered the most characteristic.
Both inequalities are sharp, as can be shown by quite simple diagonally feasible pairs.
\begin{corollary}[Ky Fan analogue for feasible pairs]\label{kyfan}
The choice $a^{(1)} = (1,\ldots,r)$ in Lemma \ref{setofineq} yields the inequality
\begin{align}
\sum_{i = 1}^r \gamma_i^2 \leq \sum_{i = 1}^{mr} \theta_i^2. \label{kyfanineq}
\end{align}
\end{corollary}
\begin{corollary}[Weyl analogue for feasible pairs]\label{prop2}
The choice $a^{(1)} = (r+1)$ in Lemma \ref{setofineq} yields the inequality
\[ \gamma_{rm+1}^2 \leq \sum_{i = r+1}^{r+m} \theta_i^2. \]
\end{corollary}
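Both corollaries are easy to check numerically; the following sketch (ours) does so for the pair of Figure \ref{plot_not_triv_feasible}, $\widetilde{\gamma}^2 = (7.5,5,0,0)$, $\widetilde{\theta}^2 = (6,3.5,2,1)$, which is feasible for $m = 2$:

```python
gamma2 = [7.5, 5, 0, 0]   # squared values of a pair feasible for m = 2
theta2 = [6, 3.5, 2, 1]
m = 2

# Ky Fan analogue: the r largest gamma^2 against the m*r largest theta^2
for r in range(1, len(gamma2) + 1):
    assert sum(gamma2[:r]) <= sum(theta2[:m * r]) + 1e-12

# Weyl analogue: gamma_{rm+1}^2 <= theta_{r+1}^2 + ... + theta_{r+m}^2
g = gamma2 + [0] * 10   # pad with zeros (the sequences are infinite)
t = theta2 + [0] * 10
for r in range(4):
    assert g[r * m] <= sum(t[r:r + m]) + 1e-12
```

For $r = 2$ the Ky Fan analogue holds with equality here, which is just the trace property.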
It is easy to derive further inequalities or to slightly generalize Lemma \ref{setofineq}.
For example,
\begin{align}
\gamma^2_2 \leq \sum_{i = 1}^{m} \theta_{1 + 2(i-1)}^2 \label{reduineq}
\end{align}
is necessary and sharp in the sense that
no single index on the right-hand side can be increased without decreasing another.
This is easily shown: assume that \eqref{kyfanineq} for $r = 2$ holds, but \eqref{reduineq} does not. Then
$\gamma^2_1 \leq \sum_{i = 1}^{m} \theta_{2 + 2(i-1)}^2 \leq \sum_{i = 1}^{m} \theta_{1 + 2(i-1)}^2 < \gamma^2_2$,
which is impossible since $\gamma$ is nonincreasing by definition. Hence the inequality \eqref{reduineq} is necessary.
The converse implication does not hold, however, so the inequality is strictly redundant to \eqref{kyfanineq}. It therefore does not
appear in any minimal list of sufficient conditions for feasibility (cf. Theorem \ref{summary}).\\\\
Let $\widetilde{\gamma}^2 = (10,2,0.5,0,0)$ and $\widetilde{\theta}^2 = (4,3,2.5,2,1)$. According to
Corollary \ref{kyfan}, the pair $(\gamma,\theta)$ is not feasible for $m = 2,3$, but we know
that it must either be feasible for $m = 4$ or diagonally feasible for $m = 5$ (cf. Lemma \ref{prop1}).
In fact, the hive in Figures \ref{plot_triv_feasible_single} and
\ref{plot_triv_feasible} provides that it is feasible for $m = 4$
and has been constructed with Algorithm \ref{linprog}.
This algorithm reveals near diagonal feasibility
for most pairs, if not even diagonal feasibility in case the coupled boundaries do not contain zeros.
This results from the minimization of the total edge length.
However, it neither proves nor disproves that a pair is diagonally feasible.
In this case for example, one can quite easily verify
that the pair is even diagonally feasible for $m=4$.
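The exclusion of $m = 2,3$ via Corollary \ref{kyfan} can be reproduced in a few lines (our own sketch):

```python
gamma2 = [10, 2, 0.5, 0, 0]
theta2 = [4, 3, 2.5, 2, 1]

def kyfan_ok(m):
    """Check the Ky Fan analogue of the corollary for all r."""
    t = theta2 + [0] * (m * len(gamma2))   # pad with zeros
    return all(sum(gamma2[:r]) <= sum(t[:m * r]) + 1e-12
               for r in range(1, len(gamma2) + 1))

assert not kyfan_ok(2)   # 10 > 4 + 3          -> not feasible for m = 2
assert not kyfan_ok(3)   # 10 > 4 + 3 + 2.5    -> not feasible for m = 3
assert kyfan_ok(4)       # consistent with the feasibility shown by the hive
```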
\begin{figure}[htb]
\centering
\ifuseprecompiled
\includegraphics[]{pdf_files/plot_triv_feasible_n5m4_single.pdf}
\else
\setlength\figureheight{2.6cm}
\setlength\figurewidth{12cm}
\input{tikz_base_files/plot_triv_feasible/plot_triv_feasible_n5m4_single.tikz}
\fi
\caption{\label{plot_triv_feasible_single} A $(5,4)$-hive consisting of six coupled honeycombs (blue for $\gamma$, magenta for $\theta$),
which are slightly shifted for better visibility, generated by Algorithm \ref{linprog}. Note that some lines have multiplicity larger than $1$.
Also, in each second pair of honeycombs, the roles of boundaries $\lambda$ and $\mu$ have been switched (which we can do due to the symmetry regarding $\boxplus$), such that the honeycombs can be
combined to a single diagram as in Figure \ref{plot_triv_feasible}. This means that the south rays of an odd numbered pair are always connected
to the north-east (instead of north-west) rays of the consecutive pair.
The coupled boundary values are given by $a = (2,0,0,0,0)$, $b = (1,1,0,0,0)$, $c = (4,0,0,0,0)$ and $d = (3,1,1,0,0)$.
It proves the feasibility of the pair $(\gamma,\theta)$, $\widetilde{\gamma}^2 = (10,2,0.5,0,0)$, $\widetilde{\theta}^2 = (4,3,2.5,2,1)$ for $m = 4$,
since both $\widetilde{\gamma}^2, \widetilde{\theta}^2 \sim_c a \boxplus b \boxplus c \boxplus d$.}
\end{figure}
\begin{figure}[htb]
\centering
\ifuseprecompiled
\includegraphics[]{pdf_files/plot_triv_feasible_n5m4.pdf}
\else
\setlength\figureheight{10cm}
\setlength\figurewidth{11cm}
\input{tikz_base_files/plot_triv_feasible/plot_triv_feasible_n5m4.tikz}
\fi
\caption{\label{plot_triv_feasible} The three overlaid honeycomb pairs in Figure \ref{plot_triv_feasible_single} put together
with respect to their coupling.}
\end{figure}
\section{Introduction}
Let $A\in\mathbb{R}^{{\mathcal I}}$ be a $d$-dimensional tensor,
\[ {\mathcal I} := {\mathcal I}_1\times\cdots\times{\mathcal I}_d,\quad {\mathcal I}_{\mu}:=\{1,\ldots,n_{\mu}\},\quad
\mu\in D:=\{1,\ldots,d\}.\]
Then the $d-1$ tuples of positive TT-singular values $\sigma_+^{(1)},\ldots,\sigma_+^{(d-1)}$
and accordant TT-ranks $r_1,\ldots,r_{d-1} \in \mathbb{N}_0$ of $A$
are given through SVDs of different matricizations (simple reshapings) of this tensor \cite{Os11_Ten, Gr10_Hie},
\begin{align*}
\sigma_+^{(\mu)} = \sigma_+(A^{(\{1,\ldots,\mu\})}) \in \ensuremath{\mathbb{R}}_{>0}^{r_\mu}, \ r_\mu = \mathrm{rank}(A^{(\{1,\ldots,\mu\})}), \quad \mu = 1,\ldots,d-1,
\end{align*}
such as displayed in Figure \ref{reshaping}.
\FloatBarrier%
\begin{figure}[h!]
\centering
\ifuseprecompiled
\includegraphics[]{pdf_files/reshaping.pdf}
\else
\setlength\figureheight{3cm}
\setlength\figurewidth{12cm}
\input{tikz_base_files/reshaping/reshaping.tikz}
\fi
\caption{\label{reshaping} Matricization of a $4$-dimensional tensor with respect to $\{1,2\}$ to obtain $\sigma_+^{(2)} \in \ensuremath{\mathbb{R}}_{>0}^{r_2}$}
\end{figure}%
\FloatBarrier%
\noindent
These matrices $A^{(\{1,\ldots,\mu\})}\in\mathbb{R}^{(\mathcal{I}_1\times\cdots\times \mathcal{I}_{\mu}) \times (\mathcal{I}_{\mu+1}\times\cdots\times \mathcal{I}_{d})}$ are defined via
\begin{align*}
A^{(\{1,\ldots,\mu\})}&((i_1,\ldots,i_\mu),\, (i_{\mu+1},\ldots,i_d)) := A(i),
\end{align*}
where $(i_\nu,\ldots,i_\mu) \in \{1,\ldots,n_\nu \cdot \ldots \cdot n_\mu\} \subset \ensuremath{\mathbb{N}}$ is to be read as a multi-index.
We further define $\sigma = (\sigma^{(1)},\ldots,\sigma^{(d-1)})$, $\sigma^{(\mu)} = (\sigma^{(\mu)}_+,0,\ldots)$, as singular spectrum.
Since its entries are based on the same object, the question about the \textit{feasibility}
of a prescribed singular spectrum arises immediately:
\begin{definition}[Feasibility]\label{deffeasibility}
Let $\sigma_+=(\sigma_+^{(1)},\ldots,\sigma_+^{(d-1)})$,
where each $\sigma_+^{(\mu)}$ is a positive, weakly decreasing $r_{\mu}$-tuple. Then $\sigma$ is called feasible
for $n = (n_1,\ldots,n_d)$ if there exists a tensor $A \in \ensuremath{\mathbb{R}}^{\mathcal{I}}$ giving rise to it in form of a singular spectrum.
\end{definition}
The number of additional zeros is of course irrelevant. One necessary condition,
\begin{align} \|A\|_F = \|\sigma^{(\nu)}\|_2 = \|\sigma^{(\mu)}\|_2, \quad \nu,\mu = 1,\ldots,d-1, \label{traceprop} \end{align}
arises directly and is denoted as trace property.
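For illustration, the TT-singular values and the trace property \eqref{traceprop} can be computed with a few lines of NumPy (a sketch of ours; the row-major multi-index convention of \texttt{reshape} is an assumption, chosen for convenience):

```python
import numpy as np

def tt_singular_values(A):
    """The d-1 tuples sigma_+^{(mu)} via SVDs of the matricizations A^({1,...,mu})."""
    n, d = A.shape, A.ndim
    sigmas = []
    for mu in range(1, d):
        M = A.reshape(int(np.prod(n[:mu])), int(np.prod(n[mu:])))
        s = np.linalg.svd(M, compute_uv=False)
        sigmas.append(s[s > 1e-12])   # positive part sigma_+^{(mu)}
    return sigmas

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3, 4, 2))
sig = tt_singular_values(A)

# trace property: ||A||_F = ||sigma^{(mu)}||_2 for every mu
for s in sig:
    assert abs(np.linalg.norm(s) - np.linalg.norm(A)) < 1e-10
```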
\subsection{Overview over Results in this Work}
Perhaps the most characteristic result we show (Corollary \ref{Pyt}), which is by no means trivial, can be simplified
to the following: if $\sigma$ and $\tau$ are feasible for $n$, then $\upsilon := \sqrt{\sigma^2+\tau^2}$ (evaluated
entrywise) is feasible for $n$ as well.
This immediately implies that the set of squared feasible spectra is a
convex cone, and as it turns out it is closed and finitely generated. We roughly specify its faces by
providing significant, necessary linear inequalities (Lemma \ref{setofineq}, Theorem \ref{summary}).
If $r_{\mu-1},r_{\mu} \leq n_{\mu}$ holds for each $\mu = 1,\ldots,d-1$, then feasibility is equivalent
to the trace property \eqref{traceprop} alone (cf. Lemma \ref{prop1}). This is a generalization
of the case $d=2$, where the singular spectrum coincides with the usual singular values of a matrix and hence
the only restriction to feasibility is $r \leq n$.
\subsection{Other Sets of Matricizations}
In higher dimensions, several notions of rank exist,
each exhibiting quite different behavior, and the Tensor Train
format captures a particular one. For the Tucker-type matricizations,
first steps have been made by \cite{HaKrUs16_Per, HaUs16_Ont} as well as \cite{DoStLa16_Ont}.
The authors of the first two papers, from which some of our notation stems, follow completely different approaches,
and only in the latter article do a couple of eigenvalue relations from the Horn Conjecture appear.
To the best of our knowledge, no approaches or results regarding feasibility have so far been transferred between these two formats,
and this work has its own, separate roots as well.
It still seems that different approaches are required for different formats, although one should expect to somehow find a
universal concept.
\subsection{Organization of Article}
In Section \ref{sec:reduction}, we use a so-called standard representation
to reduce the problem of feasibility to pairs of singular values only.
In Section \ref{sec:weylhorn}, we give a short
overview of the Horn Conjecture and related results, which we
apply to our problem with the help of \textit{honeycombs} in Section \ref{sec:honeycomb}.
In Section \ref{sec:cones}, we thereby identify the topological structure of sets
of squared feasible singular values as cones and provide further results as well as algorithms in Section \ref{sec:coneoffea}.
\section{Hives are Polyhedral Cones}\label{sec:cones}
As described by Knutson and Tao \cite{KnTa01_Hon}, nondegenerate $n$-honeycombs follow one identical topological structure (cf. Figure \ref{archetype}).
In that sense, the set of nondegenerate honeycombs lies in a real vector space $V_H = \ensuremath{\mathbb{R}}^N$, $N = \frac{3}{2} n(n+1)$.
Its coordinates are given by the constant coordinates of the edges of each honeycomb $h$, which we formally denote with $\ensuremath{\mathrm{edge}}(h) \in V_H$,
and honeycombs obey certain linear constraints therein (cf. Section \ref{sec:honeycomb}). The right-inverse $\ensuremath{\mathrm{edge}}^{-1}$ of this one-to-one map can then be extended to the closure of its domain
such that its image identifies all of $\ensuremath{\mathrm{HONEY}}_n$. In other words, all edges are only required to have nonnegative length.
It then follows that $\ensuremath{\mathrm{edge}}(\ensuremath{\mathrm{HONEY}}_n)$ forms a closed, convex, polyhedral cone and hence $\ensuremath{\mathrm{HONEY}}_n$ can be
regarded as such. For prescribed boundary values, one obtains a closed, convex polyhedron.
\begin{notation}[Hive sets and edge image]
Let
\[\ensuremath{\mathrm{edge}}(H) = (\ensuremath{\mathrm{edge}}(h^{(1)}),\ldots,\ensuremath{\mathrm{edge}}(h^{(M)})) \in V_H^{M} \]
be the collection of constant coordinates of all edges appearing in some hive $H$.
Although defined via the abstract set $B$ (in Definition \ref{hive}), we let
$\sim_S$ act on the related edge coordinates as well.
For $H \in \ensuremath{\mathrm{HIVE}}_{n,M}(\sim_S)$, we then define the edge image as $\ensuremath{\mathrm{edge}}_S(H) \in \bigslant{V_H^{M}}{\sim_S}$, in which coupled boundary
edges are assigned the same coordinate.
\end{notation}
\begin{lemma}[Hive sets are described by polyhedral cones]\label{L1L2}
\hspace*{0cm}
\begin{itemize}
\item The hive set $\ensuremath{\mathrm{HIVE}}_{n,M}(\sim_S)$ is a closed, convex, polyhedral cone, i.e. there exist matrices $L_1,L_2$ s.t.
$\ensuremath{\mathrm{edge}}_S(\ensuremath{\mathrm{HIVE}}_{n,M}(\sim_S)) = \{ x \mid L_1 x \leq 0, \ L_2 x = 0 \}$.
\item Each fiber of $\delta_P$ (i.e. a set of hives with structure $\sim_S$ and boundary function $f_P$), forms a closed, convex polyhedron,
i.e. there exist matrices $L_1,L_2,L_3$ and a vector $b$ s.t.
$\ensuremath{\mathrm{edge}}_S(\delta_P^{-1}(f_P)) = \{ x \mid L_1 x \leq 0, \ L_2 x = 0, \ L_3 x = b \}$.
\end{itemize}
\end{lemma}
\begin{proof}
Each honeycomb of a hive follows its linear constraints. The hive structure and identification of coordinates as one and the same by $\sim_S$
only imposes additional linear constraints. This implies the existence of $L_1$ and $L_2$ and hence the first statement
by elementary theory of polyhedra and systems of inequalities. Choosing $b$ as the image of $f_P$, $L_3$ only needs to
select the edges mentioned in $P$. It then follows that the edge image of each fiber is a closed, convex polyhedron.
\end{proof}
\begin{corollary}[Boundary set]\label{maincor}
The boundary set $\delta_P(\ensuremath{\mathrm{HIVE}}_{n,M}(\sim_S))$ (via the related coordinates) forms a closed, convex, polyhedral cone.
This hence also holds for any intersection with a
lower dimensional subspace.
\end{corollary}
\begin{proof}
The boundary set is identified through the projection of $\ensuremath{\mathrm{edge}}_S(\ensuremath{\mathrm{HIVE}}_{n,M}(\sim_S))$ to the subset of coordinates mentioned by $P$.
The proof is finished, since projections to fewer coordinates of closed, convex, polyhedral cones are again such cones.
The same holds for intersections with subspaces.
\end{proof}
\section{Reduction to Mode-Wise Eigenvalues Problems}\label{sec:reduction}
The set of all tensors with (TT-)rank $r$ is denoted by $TT(r) \subset \ensuremath{\mathbb{R}}^{\mathcal{I}}$. This set
is closely related to so-called \textit{representations} \cite{Os11_Ten}:
\begin{definition}[TT format and representations]\label{TTformat}
Let $r_0 = r_d = 1$. For each $\mu = 1,\ldots,d$ and $i_{\mu} = 1,\ldots,n_\mu$, let $G_\mu(i_\mu)\in\mathbb{R}^{r_{\mu-1}\times r_\mu}$
be a matrix. We define the map $\tau_r$ via
\[ A = \tau_r(G) \in \ensuremath{\mathbb{R}}^{\mathcal{I}}, \quad A(i_1,\ldots,i_d) = G_1(i_1)\cdots G_d(i_d),\quad i\in\mathcal{I} \]
and for that matter call $G = (G_1,\ldots,G_d)$ representation
of rank $r$, each $G_\mu$ a core of length $n_\mu$ and size $(r_{\mu-1},r_\mu)$ as well as $\tau_r$ the representation map.
We further define $\boxtimes$ for two cores
$H_1, H_2$ as $(H_1 \boxtimes H_2)(i,j) := H_1(i) \ H_2(j)$ (interpreting TT-cores as vectors of matrices), such
that $A = G_1 \boxtimes \ldots \boxtimes G_d$. We skip the symbol $\boxtimes$ in products of a core and matrix.
\end{definition}
For any tensor $A \in TT(r) \subset \ensuremath{\mathbb{R}}^{\mathcal{I}}$, $r$ is the (unique) lowest valued tuple
for which there exists a representation $G$ of rank $r$ ($r_0 = r_d = 1$) such that
$A = \tau_r(G)$. This relation is established via the TT-SVD \cite{Os11_Ten}, an analog of the matrix SVD.
Usually, these representations are merely a tool to provide an efficient handling of low rank tensors.
In this context however, they allow us to change the perspective on feasibility and reduce
the problem from $d-1$ tuples to local, pairwise problems.
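The TT-SVD itself can be sketched in a few lines (our illustration; not the reference implementation of \cite{Os11_Ten}):

```python
import numpy as np

def tt_svd(A, tol=1e-12):
    """TT-SVD sketch: successive SVDs yield cores G_1, ..., G_d with A = G_1 x ... x G_d."""
    n = A.shape
    cores, r_prev, M = [], 1, A
    for mu in range(len(n) - 1):
        M = M.reshape(r_prev * n[mu], -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = int(np.sum(s > tol))                      # TT-rank r_mu
        cores.append(U[:, :r].reshape(r_prev, n[mu], r))
        M = np.diag(s[:r]) @ Vt[:r]                   # carry the rest to the right
        r_prev = r
    cores.append(M.reshape(r_prev, n[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the cores back into a full tensor."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([T.ndim - 1], [0]))
    return T.reshape([G.shape[1] for G in cores])

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3, 2, 3))
assert np.allclose(tt_to_full(tt_svd(A)), A)   # exact reconstruction
```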
\begin{notation}[Unfoldings]\label{unfoldings}
For a core $H$ (possibly a product of smaller cores in the TT representation) with $H(i) \in \ensuremath{\mathbb{R}}^{k_1 \times k_2}$, $i = 1,\ldots,n$,
we denote the left and right unfolding
$\lhb{H}\in\mathbb{R}^{k_1 \cdot n \times k_2}$,
$\rhb{H}\in\mathbb{R}^{k_1 \times k_2 \cdot n}$
by
\begin{align*}
\left(\lhb{H}\right)_{(\ell,j),\, q} := \left(H(j)\right)_{\ell,q}, \qquad
\left(\rhb{H}\right)_{\ell,\, (q,j)} := \left(H(j)\right)_{\ell,q},
\end{align*}
for $1\le j\le n$, $1\le\ell\le k_1$ and $1\le q\le k_2$. $H$ is called left-orthogonal
if $\lhb{H}$ is column-orthogonal, and right-orthogonal if $\rhb{H}$ is row-orthogonal.
For a representation $G$, we
correspondingly define the interface matrices
\begin{align*}
G^{\leq \mu} & = \lhb{G_1 \boxtimes \ldots \boxtimes G_{\mu}} \in \ensuremath{\mathbb{R}}^{n_1\ldots n_{\mu} \times r_{\mu}},\\
G^{\geq \mu} & = \rhb{G_{\mu} \boxtimes \ldots \boxtimes G_d} \in \ensuremath{\mathbb{R}}^{r_{\mu-1} \times n_{\mu}\ldots n_d} \quad (\mbox{cf. Definition \ref{TTformat}}).
\end{align*}
We also use $G^{<\mu} := G^{\leq \mu-1}$ and $G^{>\mu} := G^{\geq \mu+1}$.
\end{notation}
The map $\tau_r$ is not injective.
However, there is an essentially unique standard representation (in terms of uniqueness of the truncated matrix SVD\footnote[1]{Both $U \Sigma V^T$ and $\widetilde{U} \Sigma \widetilde{V}^T$ are truncated SVDs of $A$ if and only if there exists
an orthogonal matrix $w$ that commutes with $\Sigma$ and for which $\widetilde{U} = U w$ and $\widetilde{V} = V w$.
For any subset of pairwise distinct nonzero singular values, the corresponding submatrix of $w$ needs to be diagonal with entries in $\{-1,1\}$.})
following certain orthogonality constraints. For that matter, it is easy to verify that if both $H_1$ and $H_2$ are left- or right-orthogonal, then $H_1 \boxtimes H_2$ is
left- or right-orthogonal, respectively.
\begin{lemma}[Standard representation]\label{stre}
Let $A \in \ensuremath{\mathbb{R}}^{{\mathcal I}}$ be a tensor and \\ $\Sigma^{(0)} = \|A\|_F, \ \Sigma^{(1)} = \diag(\sigma_+^{(1)}),\ldots,\ \Sigma^{(d-1)} = \diag(\sigma_+^{(d-1)}), \ \Sigma^{(d)} = \|A\|_F$ be square diagonal matrices which contain the TT-singular values of $A$. \\
1. There exists an essentially
unique representation (with minimal ranks)
\begin{align*} \label{sr} \mathcal{G} = (\Sigma^{(0)},\mathcal{G}_1,\Sigma^{(1)},\mathcal{G}_2,\Sigma^{(2)},\ldots,\Sigma^{(d-1)},\mathcal{G}_d,\Sigma^{(d)}) \end{align*}
for which $A = \tau_r(\mathcal{G}) := (\Sigma^{(0)} \mathcal{G}_1) \boxtimes (\Sigma^{(1)} \mathcal{G}_2) \boxtimes \ldots \boxtimes (\Sigma^{(d-1)} \mathcal{G}_d \Sigma^{(d)})$ such that
\[ \mathcal{G}^{\leq{\mu}} := \lhb{\Sigma^{(0)} \mathcal{G}_1 \boxtimes \ldots \boxtimes \Sigma^{({\mu}-1)} \mathcal{G}_{{\mu}}} \]
is column orthogonal for any ${\mu} = 1,\ldots,d$ and
\[ \mathcal{G}^{\geq{\mu}} := \rhb{\mathcal{G}_{{\mu}} \Sigma^{({\mu})} \boxtimes \ldots \boxtimes \mathcal{G}_d \Sigma^{(d)}} \]
is row orthogonal for any ${\mu} = 1,\ldots,d$ and hence
\begin{align} \label{sSVD} \mathcal{G}^{\leq{\mu}} \ \Sigma^{(\mu)} \ \mathcal{G}^{>{\mu}} \end{align}
is a (truncated) SVD of $A^{(\{1,\ldots,{\mu}\})}$ for any ${\mu} = 1,\ldots,d-1$. \\
2. This in turn implies that $\Sigma^{({\mu}-1)} \mathcal{G}_{\mu}$ is left-orthogonal and $\mathcal{G}_{\mu} \Sigma^{(\mu)}$ is right-orthogonal, for all ${\mu} = 1,\ldots,d$.
\end{lemma}
\begin{proof}
\textit{1. uniqueness:} \\
In the following, with $w_\mu$, we denote some orthogonal matrices that commute with $\Sigma^{(\mu)}$.
Let there
be two such representations $\widetilde{\mathcal{G}}$ and $\mathcal{G}$.
First, by definition, both $\mathcal{G}^{\leq 1} = \lhb{\Sigma^{(0)} \mathcal{G}_1}$ and $\widetilde{\mathcal{G}}^{\leq 1} = \lhb{\Sigma^{(0)} \widetilde{\mathcal{G}}_1}$ contain the left-singular vectors of $A^{(\{1\})}$.
Hence, $\widetilde{\mathcal{G}}^{\leq 1} = \mathcal{G}^{\leq 1} w_1$.
By induction, let $\widetilde{\mathcal{G}}_s = w_{s-1}^T \mathcal{G}_s w_s$ for $s < {\mu}$, with $w_0 = w_d = 1$.
Analogously, we have $\widetilde{\mathcal{G}}^{\leq {\mu}} = \mathcal{G}^{\leq \mu} w_\mu$, which is
equivalent to
\begin{align*}
\Sigma^{(0)} \mathcal{G}_1 \boxtimes \ldots \boxtimes \Sigma^{({\mu}-1)} \mathcal{G}_{{\mu}} & = \Sigma^{(0)} \widetilde{\mathcal{G}}_1 \boxtimes \ldots \boxtimes \Sigma^{({\mu}-1)} \widetilde{\mathcal{G}}_{{\mu}} w^T_\mu \\
& = \Sigma^{(0)} \mathcal{G}_1 w_1 \boxtimes \Sigma^{(1)} w_1^T \mathcal{G}_2 w_2 \boxtimes \ldots \boxtimes \Sigma^{({\mu}-1)} \widetilde{\mathcal{G}}_{{\mu}} w^T_\mu \\
& = \Sigma^{(0)} \mathcal{G}_1 \boxtimes \Sigma^{(1)} \mathcal{G}_2 \boxtimes \ldots \boxtimes \Sigma^{({\mu}-1)} w_{\mu-1} \widetilde{\mathcal{G}}_{{\mu}} w^T_\mu.
\end{align*}
Since all ranks are minimal and therefore $H \mapsto \Sigma^{(0)} \mathcal{G}_1 \boxtimes \ldots \boxtimes \Sigma^{({\mu}-1)} H$ is injective,
it follows $\widetilde{\mathcal{G}}_\mu = w_{\mu-1}^T \mathcal{G}_\mu w_\mu$. This completes the inductive argument.\newpage \noindent
\textit{1. existence (constructive):} \\
Let $G$ be a representation of $A$ where $G_\mu$, $\mu = 2,\ldots,d$ are right-orthogonal (this can always be achieved
using the degrees of freedom within a representation).
An SVD of $\lhb{G_1}$ yields $G_1 = (\Sigma^{(0)} \mathcal{G}_1) \Sigma^{(1)} V_1^T$,
where $\Sigma^{(0)} \mathcal{G}_1$ is left-orthogonal, such that $\lhb{\Sigma^{(0)} \mathcal{G}_1} \Sigma^{(1)} (V_1^T G^{>1})$ is a truncated SVD of $A^{(\{1\})}$.
Note that treating $V_1^T$ as part of $G_2$ does not change this. \\
Now, an SVD of $\lhb{\Sigma^{(1)} V_1^T G_2}$ yields
$\Sigma^{(1)} V_1^T G_2 = U_2 \Sigma^{(2)} V_2^T$, such that $\lhb{\Sigma^{(0)} \mathcal{G}_1 \boxtimes U_2} \Sigma^{(2)} (V_2^T G^{>2})$ is a truncated SVD of $A^{(\{1,2\})}$.
Again, multiplying $V_2^T$ into $G_3$ does not change constraints. One can hence define
$\mathcal{G}_2 = (\Sigma^{(1)})^{-1} U_2$. The latter part of this process is repeated for the remaining modes by treating $\Sigma^{(0)} \mathcal{G}_1 \boxtimes \Sigma^{(1)} \mathcal{G}_2$ as one core of a representation
whose dimension is lower by one, starting with an SVD of $\lhb{\Sigma^{(2)} V_2^T G_3}$. In the last step, without loss of generality, it simply holds that $V_d = 1$. \\
\textit{2. orthogonality:} \\
With the previous, it follows that $\Sigma^{({\mu}-1)} \mathcal{G}_{\mu} = \Sigma^{({\mu}-1)} (\Sigma^{({\mu}-1)})^{-1} U_\mu = U_\mu$ is left-orthogonal
and that $\mathcal{G}_{\mu} \Sigma^{(\mu)} = V_{\mu-1}^T G_\mu V_\mu$, which is right-orthogonal. Since the standard representation is (essentially) unique,
this holds independently of the construction.
\end{proof}
\begin{corollary}[Inverse statement]\label{inst}
Let $(\Sigma^{(0)},\mathcal{G}_1,\Sigma^{(1)},\ldots,\mathcal{G}_d,\Sigma^{(d)})$ be such that each $\sigma_+^{(\mu)}$ is a positive, weakly
decreasing $r_\mu$-tuple and property $2$ of Lemma \ref{stre} is fulfilled.
Then $\tau_r(\mathcal{G})$ is a tensor in $TT(r)$ with singular spectrum $\sigma$.
\end{corollary}
We need to slightly generalize the notion of feasibility. In order to avoid
confusion, $\gamma$ will always be an infinite sequence (the object we call feasible), $\gamma_+$ its positive part (cf. Notation \ref{weakdectuple}),
$\Gamma = \diag(\gamma_+)$ the corresponding diagonal matrix and
$\widetilde{\gamma}$ will be finite, but may include zeros.
\begin{notation}[Set of weakly decreasing tuples/sequences]\label{weakdectuple}
Let $\ensuremath{\mathcal{D}}^n$, $n \in \ensuremath{\mathbb{N}} \cup \{\infty\}$, be the set of all weakly decreasing tuples (or sequences) of real numbers with finitely many nonzero elements ($n$ is to be read as index).
For $n \neq \infty$, the negation $-v \in \ensuremath{\mathcal{D}}^n$ of $v \in \ensuremath{\mathcal{D}}^n$ is defined via $-v := (-v_n, \ldots, -v_1)$.
The positive part $v_{+} \in \ensuremath{\mathcal{D}}_{> 0}^{\degree(v)}$ is defined as the nonzero elements of $v$, where $\degree(v) := |\{i \mid v_i > 0\}|$ is its degree.
\end{notation}
\begin{definition}[Feasibility]\label{fe}
For $\nu < \mu \in \ensuremath{\mathbb{N}}$, let $\sigma = (\sigma^{(\nu)},\ldots,\sigma^{(\mu)}) \in \ensuremath{\mathcal{D}}^{\infty}_{\geq 0} \times \ldots \times \ensuremath{\mathcal{D}}^{\infty}_{\geq 0}$ be a list of weakly decreasing sequences and $\widetilde{n} = (n_{\nu+1},\ldots,n_{\mu}) \in \ensuremath{\mathbb{N}}^{\mu-\nu}$.
Then $\sigma$ is \textit{feasible} for $\widetilde{n}$ if there exist cores $\mathcal{G}_{\nu+1},\ldots,\mathcal{G}_{\mu}$, that is $\mathcal{G}_{s} \in (\ensuremath{\mathbb{R}}^{r_{{s}-1} \times r_{{s}}})^{n_s}$, $r_s := \degree(\sigma^{(s)})$, such that
$\Sigma^{({s}-1)} \mathcal{G}_{s}$ is left-orthogonal and $\mathcal{G}_{s} \Sigma^{(s)}$ is right-orthogonal, for all ${s} = \nu+1,\ldots,\mu$. Due to Corollary \ref{inst}, if $\nu = 0, \mu = d$, $r_0 = r_d = 1$
and $\sigma^{(0)} = \sigma^{(d)} = \|A\|_F$, this coincides with the feasibility of $\sigma$ for a tensor (cf. Definition \ref{deffeasibility}).
\end{definition}
Using the standard representation, global feasibility can be decoupled into smaller and much simpler problems.
\begin{theorem}[Fundamental reduction to mode-wise eigenvalue problems]\label{fundthm}
\hspace*{0cm}
\begin{enumerate}
\item Let $\mu - \nu > 1$: The list $\sigma$, as in Definition \ref{fe}, is feasible for $(n_{\nu+1},\ldots,n_{\mu})$ if and only if
$(\sigma^{(\nu)},\ldots,\sigma^{(h)})$ is feasible for $(n_{\nu+1},\ldots,n_{h})$ and $(\sigma^{(h)},\ldots,\sigma^{(\mu)})$
is feasible for $(n_{h+1},\ldots,n_{\mu})$ for some $\nu < h < \mu$.
\item Let $\mu = \nu + 1$: A pair $(\gamma,\theta) \in \ensuremath{\mathcal{D}}^{\infty}_{\geq 0} \times \ensuremath{\mathcal{D}}^{\infty}_{\geq 0}$ of weakly decreasing sequences is feasible for $n \in \ensuremath{\mathbb{N}}$ if and only if the following holds: \\
There exist $n$ pairs of hermitian, positive semi-definite matrices $(A^{(i)},B^{(i)}) \in \ensuremath{\mathbb{C}}^{\degree(\theta) \times \degree(\theta)} \times \ensuremath{\mathbb{C}}^{\degree(\gamma) \times \degree(\gamma)}$,
each with identical (multiplicities of) eigenvalues up to zeros, such that $A := \sum_{i=1}^{n} A^{(i)}$ has eigenvalues $\theta_+^2$ and $B := \sum_{i=1}^{n} B^{(i)}$ has eigenvalues $\gamma_+^2$ .
\end{enumerate}\newpage \noindent
\begin{proof} \textit{(constructive)}
The first statement is merely transitivity. For $\mu = \nu + 1$, we show both directions separately.\\
``$\Rightarrow$'': Let $(\gamma,\theta)$ be feasible for $n$. Then by definition, for $\Gamma = \diag(\gamma_+), \ \Theta = \diag(\theta_+)$,
\[ \sum_{i = 1}^n \mathcal{G}(i)^T \ \Gamma^2 \ \mathcal{G}(i) = I, \quad \sum_{i = 1}^n \mathcal{G}(i) \ \Theta^2 \ \mathcal{G}(i)^T = I. \]
By substitution of $ \mathcal{G} \rightarrow \Gamma^{-1} \ H \ \Theta^{-1}$, this is equivalent to
\begin{align} \nonumber \sum_{i = 1}^n \Theta^{-1} \ H(i)^T \ H(i) \ \Theta^{-1} = I, & \quad \sum_{i = 1}^n \Gamma^{-1} \ H(i) \ H(i)^T \ \Gamma^{-1} = I \\
\Leftrightarrow \sum_{i = 1}^n H(i)^T \ H(i) = \Theta^2, & \quad \sum_{i = 1}^n \ H(i) \ H(i)^T = \Gamma^2. \label{red}
\end{align}
Now, for $A^{(i)} := H(i)^T \ H(i)$ and $B^{(i)} := H(i) \ H(i)^T$, we have found matrices as desired, since the eigenvalues of $A^{(i)}$ and $B^{(i)}$ are
each the same (up to zeros). \\
``$\Leftarrow$'': Let $A^{(i)}$ and $B^{(i)}$ be matrices as required. Then, by complex eigenvalue decompositions, $A = U_A \ \Theta^2 \ U_A^H$, $B = U_B \ \Gamma^2 \ U_B^H$ for unitary $U_A$, $U_B$ and
thereby $\sum_{i=1}^{n} U_A^H \ A^{(i)} \ U_A = \Theta^2$ and $\sum_{i=1}^{n} U_B^H \ B^{(i)} \ U_B = \Gamma^2$. Then again, by truncated, complex eigenvalue decompositions of
these summands, we obtain
\[ U_A^H \ A^{(i)} \ U_A = V_i \ S_i \ V_i^H, \quad U_B^H \ B^{(i)} \ U_B = U_i \ S_i \ U_i^H \]
for unitary (eigenvectors) $V_i, U_i$ and shared (eigenvalues) $S_i$. We can hence define $C_i := U_i \ S_i^{1/2} \ V_i^H$. Projection $\Re$ to the real numbers consequently gives
\[ \Re\left(\sum_{i = 1}^n C_i^H \ C_i \right) = \sum_{i = 1}^n \Re(C_i^H) \ \Re(C_i) = \sum_{i = 1}^n \Re(C_i)^T \ \Re(C_i) = \Re(\Theta^2) = \Theta^2, \]
which holds analogously for $C_i \ C_i^H$ and $\Gamma^2$. With the choice $H(i) := \Re(C_i)$, we arrive at \eqref{red}, which is equivalent to the desired statement.
\end{proof}
\end{theorem}
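The orthogonality constraints behind Theorem \ref{fundthm}, i.e. the identities $\sum_i \mathcal{G}(i)^T \Gamma^2 \mathcal{G}(i) = I$ and $\sum_i \mathcal{G}(i) \Theta^2 \mathcal{G}(i)^T = I$ from the proof, can be verified numerically. The following sketch (ours) extracts the middle core of a random $3$-dimensional tensor along the lines of the constructive proof of Lemma \ref{stre}, assuming the row-major reshape convention:

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2, n3 = 3, 4, 5
A = rng.standard_normal((n1, n2, n3))

# SVD at the first cut and singular values at the second cut
U1, s1, _ = np.linalg.svd(A.reshape(n1, n2 * n3), full_matrices=False)
s2 = np.linalg.svd(A.reshape(n1 * n2, n3), compute_uv=False)
r1, r2 = len(s1), len(s2)

# left-orthogonal middle part: SVD of the (r1*n2) x n3 unfolding of U1^T A^({1})
B = (U1.T @ A.reshape(n1, -1)).reshape(r1 * n2, n3)
U2 = np.linalg.svd(B, full_matrices=False)[0][:, :r2]

# middle core G(i2) = Sigma1^{-1} times the i2-th row block of U2
G = (np.diag(1 / s1) @ U2.reshape(r1, n2 * r2)).reshape(r1, n2, r2)

Gam2, The2 = np.diag(s1**2), np.diag(s2**2)
left = sum(G[:, i, :].T @ Gam2 @ G[:, i, :] for i in range(n2))
right = sum(G[:, i, :] @ The2 @ G[:, i, :].T for i in range(n2))
assert np.allclose(left, np.eye(r2)) and np.allclose(right, np.eye(r1))
```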
\subsection{Feasibility of pairs}
The feasibility of pairs is a reflexive and symmetric relation, yet it is generally not transitive.
In some cases, verification can be easier:
\begin{lemma}[Diagonally feasible pairs]\label{trivial}
Let $(\gamma,\theta) \in \mathcal{D}^{\infty}_{\geq 0} \times \mathcal{D}^{\infty}_{\geq 0}$ as well as
$a^{(1)},\ldots,a^{(n)} \in \ensuremath{\mathbb{R}}^r_{\geq 0}$, $r = \mathrm{max}(\degree(\gamma),\degree(\theta))$, and permutations $\pi_1,\ldots,\pi_n \in S_r$ such that
\[ a^{(1)}_i + \ldots + a^{(n)}_i = \gamma_i^2, \quad a^{(1)}_{\pi_1(i)} + \ldots + a^{(n)}_{\pi_n(i)} = \theta_i^2, \quad i = 1,\ldots,r. \]
Then $(\gamma,\theta)$ is feasible for $n$ (we write \textit{diagonally feasible} in that case).
For any $n$ and $\gamma_+^2 = (1,\ldots,1)$ of length $r_1$ and $\theta_+^2 = (k_1,\ldots,k_{r_2}) \in \ensuremath{\mathcal{D}}^{r_2} \cap \{1,\ldots,n\}^{r_2}$, with $\|k\|_1 = r_1$,
$(\gamma,\theta)$ is diagonally feasible for $n$.
\end{lemma}
\begin{proof}
The given criterion is just the restriction to diagonal matrices in Theorem \ref{fundthm}.
All sums of zero-eigenvalues can be ignored, i.e. we also find
diagonal matrices of actual sizes $\degree(\gamma) \times \degree(\gamma)$ and $\degree(\theta) \times \degree(\theta)$.
The subsequent explicit set of feasible pairs follows immediately by restricting $a^{(\ell)}_i \in \{0,1\}$ and by using appropriate permutations.
\end{proof}
Although for $n = 2$, $r \leq 3$, each feasible pair happens to be diagonally feasible, this does not hold in general.
For example, the pair $(\gamma,\theta)$,
\begin{align} \gamma^2 = (7.5,5,0,0) \quad \mbox{and} \quad \theta^2 = (6,3.5,2,1), \label{not_triv_pair} \end{align}
is feasible (cf. Figure \ref{plot_not_triv_feasible}) for $n = 2$, but it is quite easy to verify that it
is not diagonally feasible.
\begin{definition}[Set of feasible pairs]
Let $\mathcal{F}_{n,r_1,r_2}$ be the set of pairs $(\widetilde{\gamma},\widetilde{\theta}) \in \mathcal{D}^{r_1}_{\geq 0} \times \mathcal{D}^{r_2}_{\geq 0}$,
for which $(\gamma,\theta) = ((\widetilde{\gamma},0,\ldots),(\widetilde{\theta},0,\ldots))$ is feasible for $n$, and
\[ \mathcal{F}^2_{n,r_1,r_2} := \{ (\gamma_1^2, \ldots, \gamma_{r_1}^2, \theta_1^2, \ldots, \theta_{r_2}^2) \mid (\widetilde{\gamma},\widetilde{\theta}) \in \mathcal{F}_{n,r_1,r_2} \}. \]
\end{definition}
The set $\mathcal{F}^2_{n,r_1,r_2}$ lends itself to a simpler, geometrical description, since $\mathrm{tr}(\gamma_+^2) = \mathrm{tr}(\theta_+^2)$ has to hold (cf. \eqref{traceprop}).
\begin{lemma}[Properties of feasible pairs]\label{prop1}
\hspace{0cm}
If $r_1, r_2 \leq n$, then
\[ \mathcal{F}_{n,r_1,r_2} = \mathcal{D}^{r_1}_{\geq 0} \times \mathcal{D}^{r_2}_{\geq 0} \cap \{(\widetilde{\gamma},\widetilde{\theta}) \mid \|\widetilde{\gamma}\|_2 = \|\widetilde{\theta}\|_2\},
\]
that is, any pair $(\gamma,\theta) \in \ensuremath{\mathcal{D}}^{\infty}_{\geq 0} \times \ensuremath{\mathcal{D}}^{\infty}_{\geq 0}$ with $\degree(\gamma), \degree(\theta) \leq n$ that satisfies the trace property is (diagonally) feasible for $n$.
\end{lemma}
\begin{proof}
We give a proof by contradiction. We set $\widetilde{\gamma} = (\gamma_+,0,\ldots,0)$ and
$\widetilde{\theta} = (\theta_+,0,\ldots,0)$ such that both have length $n$.
Let the permutation $\widetilde{\pi}$ be given by the cycle $(1,\ldots,n)$ and $\pi_\ell := \widetilde{\pi}^{\ell-1}$.
For each $k$, let $R_k := \{ (i,\ell) \mid \pi_\ell(k) = i \}$.
Now, let the nonnegative (eigen-) values $a^{(\ell)}_i$, \ $\ell,i = 1,\ldots,n$, form a minimizer
of $w := \|A (1,\ldots,1)^T - \widetilde{\gamma}^2\|_1$, subject to
\[ \sum_{(i,\ell) \in R_k} a^{(\ell)}_i = a^{(1)}_{\pi_1(k)} + \ldots + a^{(n)}_{\pi_n(k)} = \theta_k^2, \ k = 1,\ldots,n, \]
where $A = \{a^{(\ell)}_i\}_{(i,\ell)}$ (the minimizer exists since the allowed values form a compact set).
Let further
\[ \#_{\gtrless} := \{ i \mid a^{(1)}_i + \ldots + a^{(n)}_i \gtrless \gamma_i^2, \ i = 1,\ldots,n \}. \]
As $\|\gamma\|_2 = \|\theta\|_2$ by assumption, either $|\#_>| = |\#_<| = 0$ or $|\#_>|,|\#_<| > 0$. In the first case, we are finished.
Assume therefore that there is an $(i,j) \in \#_> \times \#_<$. Then there is an index $\ell_1$ such that
$a^{(\ell_1)}_i > 0$ as well as indices $k$ and $\ell_2$ such that $(i,\ell_1), (j,\ell_2) \in R_k$.
This is however a contradiction, since replacing $a^{(\ell_1)}_i \leftarrow a^{(\ell_1)}_i - \varepsilon$
and $a^{(\ell_2)}_j \leftarrow a^{(\ell_2)}_j + \varepsilon$ for some small enough $\varepsilon > 0$
remains admissible but yields a smaller value of $w$, contradicting minimality. Hence we already have
\[ a^{(1)}_i + \ldots + a^{(n)}_i = \gamma_i^2, \quad i = 1,\ldots,n. \]
Due to Lemma \ref{trivial}, the pair $(\gamma,\theta)$ is feasible.
\end{proof}
An algorithm can be constructed from the process in the proof by contradiction of Lemma \ref{prop1}.
If each $\varepsilon$ is chosen as large as possible and $a^{(\ell)}_k = \delta_{\ell,1} \theta_k^2$ as \textit{starting value},
then the minimizer is found after at most $\mathcal{O}(n^2)$ replacements. A corresponding core can easily be calculated subsequently,
as the proof of Theorem \ref{fundthm} is constructive.\\
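For illustration, the replacement scheme can be sketched as follows. The function name, the tolerance handling, and the $0$-indexed convention $\pi_\ell(k) = (k+\ell) \bmod n$ for the cyclic permutations are our own illustrative choices, not part of the text:

```python
import numpy as np

def diagonally_feasible_core(gamma_sq, theta_sq, tol=1e-12):
    # Balance mass between the n "layers" a^(l) so that the row sums match
    # gamma_sq, while each cyclic constraint sum_l a[l, (k + l) % n] =
    # theta_sq[k] is preserved (0-indexed version of pi_l = cycle^(l-1)).
    gamma_sq = np.asarray(gamma_sq, float)
    theta_sq = np.asarray(theta_sq, float)
    n = len(gamma_sq)
    assert abs(gamma_sq.sum() - theta_sq.sum()) < tol  # trace property
    a = np.zeros((n, n))
    a[0, :] = theta_sq           # starting value a^(l)_k = delta_{l,1} theta_k^2
    for _ in range(n * n + 2 * n + 1):   # at most O(n^2) replacements
        row = a.sum(axis=0)
        surplus = np.where(row > gamma_sq + tol)[0]
        deficit = np.where(row < gamma_sq - tol)[0]
        if surplus.size == 0:
            return a             # all row constraints met: diagonally feasible
        i, j = surplus[0], deficit[0]
        l1 = int(np.argmax(a[:, i]))     # a layer carrying mass at index i
        k = (i - l1) % n                 # the constraint this entry serves
        l2 = (j - k) % n                 # layer serving constraint k at index j
        eps = min(a[l1, i], row[i] - gamma_sq[i], gamma_sq[j] - row[j])
        a[l1, i] -= eps                  # the replacement step from the proof
        a[l2, j] += eps
    raise RuntimeError("replacement scheme did not terminate")
```

Each step either closes a surplus index, closes a deficit index, or zeroes one entry, which yields the quadratic bound on the number of replacements.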
The previous results have been rather elementary. In the following section, we turn to a theory that has undergone nearly a century of intricate development. Fortunately,
many results in this area can be transferred even while only scratching its surface,
not least thanks to the work of A. Knutson and T. Tao
and their theory of \textit{honeycombs} \cite{KnTa01_Hon}. It should be noted that, through the connection
made in Theorem \ref{fundthm}, a complete resolution of the feasibility problem
would amount to establishing a vast part of the theory of eigenvalues of Hermitian matrices.
Hence, at this point, we can rather expect to find answers in the latter area than
the other way around.
\subsection{Dynamic Neural Radiance Fields}
We represent the dynamic radiance field of a talking human head using a multi-layer perceptron (MLP) $\mathcal{D}_{\theta}$ that is embedded in a canonical space.
As the dynamic radiance field is a function of position $\mathbf{p}$, viewing direction $\vec{v}$, and dynamics in terms of facial expressions $\mathbf{\delta}$, we provide these inputs to the MLP, which outputs color as well as density values for volumetric rendering:
\begin{equation}
\mathcal{D}_{\theta}(\mathbf{p}, \vec{v}, \mathbf{\delta}, \mathbf{\gamma}) = (RGB, \sigma)
\end{equation}
Note that, to compensate for errors in the facial expression and pose estimation, we also provide a per-frame learnable latent code $\mathbf{\gamma}$ to the MLP.
Instead of directly inputting the canonical position $\mathbf{p}$ and viewing direction $\vec{v}$, we use positional encoding as introduced by Mildenhall \etal.~\cite{mildenhall2020nerf}.
In our experiments, we use $10$ frequencies for the position $\mathbf{p}$ and $4$ frequencies for the viewing direction $\vec{v}$.
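For illustration, the positional encoding can be sketched as follows; the frequency convention with the factor $\pi$ follows the original NeRF formulation and is an assumption here (some implementations omit it):

```python
import numpy as np

def positional_encoding(x, num_freqs):
    # Map each coordinate p to (sin(2^k * pi * p), cos(2^k * pi * p))
    # for k = 0, ..., num_freqs - 1, as in the original NeRF formulation.
    x = np.asarray(x, float)
    feats = []
    for k in range(num_freqs):
        feats.append(np.sin(2.0 ** k * np.pi * x))
        feats.append(np.cos(2.0 ** k * np.pi * x))
    return np.concatenate(feats, axis=-1)

# 10 frequencies for the 3D position, 4 for the viewing direction
p_enc = positional_encoding(np.zeros(3), 10)   # 3 * 2 * 10 = 60 features
v_enc = positional_encoding(np.zeros(3), 4)    # 3 * 2 * 4  = 24 features
```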
\paragraph{Dynamics Conditioning}
A key component of the dynamic neural radiance fields is the conditioning on the dynamically changing facial expressions.
The facial expressions $\mathbf{\delta}$ are represented by coefficients of a low dimensional delta-blendshape basis of a morphable model ($\mathbf{\delta} \in \mathbb{R}^{76}$).
To estimate the per-frame expressions $\mathbf{\delta}_i$, we use an optimization-based facial reconstruction and tracking pipeline~\cite{thies2016face}.
Note that these expression vectors only model the coarse geometric surface changes and do not model changes of, for example, the eye orientation.
Besides expression parameters, we also store the rigid pose $P_i \in \mathbb{R}^{4\times4}$ of the face which allows us to transform camera space points to points in the canonical space of the head.
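A homogeneous-coordinate sketch of this transformation follows. We assume, as one possible convention, that $P_i$ maps canonical (model) space to camera space, so its inverse is applied; the tracker's actual convention may differ:

```python
import numpy as np

def to_canonical(points_cam, P):
    # Map camera-space points into the canonical head space by applying
    # the inverse of the tracked rigid head pose P (a 4x4 matrix).
    pts = np.asarray(points_cam, float)
    pts_h = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)
    return (np.linalg.inv(P) @ pts_h.T).T[:, :3]

# example: a rigid pose that is a pure translation by (1, 2, 3)
P = np.eye(4)
P[:3, 3] = [1.0, 2.0, 3.0]
```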
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\linewidth]{latex/figures/method/justus_geometry.jpg}
\caption{
Our dynamic radiance field allows for manual editing via the expression vector $\delta$.
In the middle we show the reconstruction of the original expression. On the left and right we show the results of modifying the blendshape coefficient of the mouth opening (left $-0.4$, right $+0.4$).
The bottom row shows the corresponding normal maps computed via the predicted depth.
As can be seen, the dynamic radiance field adapts not only the appearance, but also the geometry according to the input expression.
}
\label{fig:geometry}
\end{figure}
To compensate for missing information of the expression vectors, we introduce learnable latent codes $\mathbf{\gamma}_i$ (one for each frame).
In our experiments, we use $\mathbf{\gamma}_i \in \mathbb{R}^{32}$ and regularize them via an $\ell_2$ penalty, implemented as weight decay ($\lambda=0.05$).
In Fig.~\ref{fig:latent_codes}, we show that the latent code improves the overall sharpness of the reconstruction.
Evaluating the Learned Perceptual Image Patch Similarity (LPIPS)~\cite{zhang2018perceptual} metric for the generated images with and without latent codes results in 0.059 and 0.068, respectively.
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.9\linewidth]{latex/figures/method/ablation.jpg}
\caption{
The background image enables us to faithfully reproduce the entire image. While the dynamics are mainly conditioned on the facial expressions, the learnable latent codes improve the sharpness of the image significantly.
Our method also implicitly gives access to a foreground segmentation.
Note that the shown images are from the test set (latent code is taken from the first frame of training set).
}
\label{fig:latent_codes}
\end{figure*}
\subsection{Volumetric Rendering of Portrait Videos}
In our experiments, we assume a static camera and a static background.
The moving and talking human in the training portrait video is represented with a dynamic neural radiance field as introduced in the previous section.
To render images of this implicit geometry and appearance representation, we use volumetric rendering.
We cast rays through each individual pixel of a frame, and accumulate the sampled density and RGB values along the rays to compute the final output color.
Using the tracking information $P$ of the morphable model, we transform the ray sample points to the canonical space of the head model and evaluate the dynamic neural radiance field at these locations.
Note that this transformation matrix $P$ gives us the control over the head pose during test time.
We use a two-stage volumetric integration approach similar to that of Mildenhall \etal.~\cite{mildenhall2020nerf}.
Specifically, we have two instances of the dynamic neural radiance field network, a coarse and a fine one.
The densities predicted by the coarse network are used for importance sampling of the query points for the fine network, such that areas of high density are sampled more.
The expected color $\mathcal{C}$ of a camera ray $\mathbf{r}(t)=\mathbf{c}+t\vec{d}$ with camera center $\mathbf{c}$, viewing direction $\vec{d}$ and near $z_\text{near}$ and far bounds $z_\text{far}$ is evaluated as:
\begin{equation}
\label{eq:integration}
\mathcal{C}(\mathbf{r};\theta,P,\mathbf{\delta},\mathbf{\gamma})=\int_{z_\text{near}}^{z_\text{far}}{\sigma_{\theta}\left(\mathbf{r}\left(t\right)\right)\cdot\text{RGB}_{\theta}\bigl(\mathbf{r}\left(t\right),\vec{d}\bigr)}\cdot T(t)dt,
\end{equation}
where $\text{RGB}_{\theta}(\cdot)$ and $\sigma_{\theta}(\cdot)$ are computed via the neural scene representation network $\mathcal{D}_{\theta}$ at a certain point on the ray with head pose $P$, expressions $\mathbf{\delta}$ and learnable latent code $\mathbf{\gamma}$. $T(t)$ is the accumulated transmittance along the ray from $z_\text{near}$ to $t$:
\begin{equation}
T(t) = \exp{ \left( -\int_{ z_\text{near} }^{t}{\sigma_{\theta}\left(\mathbf{r}\left(s\right)\right)ds}\right) . }
\end{equation}
Note that the expected color is evaluated for both the coarse and the fine networks (with learnable weights $\theta_{coarse}$ and $\theta_{fine}$, respectively) to compute corresponding reconstruction losses at train time (see Eq.~\ref{eq:train_loss}).
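In the discrete setting, the integral in Eq.~\ref{eq:integration} is approximated per ray by the standard quadrature of Mildenhall \etal. The sketch below also composites the leftover transmittance onto a fixed background color, anticipating the background handling described next; this compositing detail is our illustrative assumption, not the paper's exact implementation:

```python
import numpy as np

def render_ray(sigmas, rgbs, deltas, bg_rgb=None):
    # Quadrature of the volume-rendering integral: alpha_i = 1 - exp(-sigma_i
    # * delta_i), transmittance T_i = prod_{j<i} (1 - alpha_j), and the
    # expected color is sum_i T_i * alpha_i * rgb_i.
    sigmas = np.asarray(sigmas, float)
    rgbs = np.asarray(rgbs, float)
    deltas = np.asarray(deltas, float)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas              # per-sample contribution
    color = (weights[:, None] * rgbs).sum(axis=0)
    if bg_rgb is not None:                # leftover light hits the background
        color = color + (1.0 - weights.sum()) * np.asarray(bg_rgb, float)
    return color
```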
We decouple the static background and the dynamically changing foreground by leveraging a single capture of the background $\mathcal{B}$ (i.e., without the person).
The last sample on the ray $\mathbf{r}$ is assumed to lie on the background with a fixed color, namely, the color of the pixel corresponding to the ray, from the background image.
Since the volumetric rendering is fully differentiable, the network picks up on this signal,
and learns to predict low density values for the foreground samples if the ray is passing through a background pixel, and vice versa: for foreground pixels, i.e., pixels that correspond to torso and head geometry, the networks predict higher densities, effectively ignoring the background image.
This way, the network learns a foreground-background decomposition in a self-supervised manner (see Fig.~\ref{fig:latent_codes}).
\subsection{Network Architecture and Training}
As mentioned above, the dynamic neural radiance field is represented as an MLP.
Specifically, we use a backbone of $8$ fully-connected layers, each $256$ neurons-wide, followed by ReLU activation functions.
Past the backbone, the activations are fed through a single layer to predict the density value, as well as a $4$-layer, $128$ neuron-wide branch to predict the final color value of the query point.
We optimize the network weights of both the coarse and the fine networks based on a photometric reconstruction error over the training images $I_i$ ($i \in [1,M]$):
\begin{equation}
\label{eq:train_loss}
L_{total} = \sum_{i=1}^{M} L_i(\theta_{coarse}) + L_i(\theta_{fine})
\end{equation}
with
\begin{equation}
\label{eq:train_loss2}
L_{i}(\theta) = \sum_{j \in \text{pixels}}{ \norm{ \mathcal{C}\bigl(\textbf{r}_j;\theta,P_i,\mathbf{\delta}_i,\mathbf{\gamma}_i\bigr) - I_i[j]}^2 } .
\end{equation}
For each training image $I_i$ and training iteration, we sample a batch of $2048$ viewing rays through the image pixels.
We use a bounding box of the head (given by the morphable model) to sample the rays such that 95\% of them correspond to pixels within the bounding box, thus allowing us to reconstruct the face with high fidelity.
Stratified sampling is used to sample $64$ points along each ray, which are fed into the coarse network $\mathcal{D}_{\theta_{coarse}}$.
Based on the density distribution along the ray, we re-sample $64$ points and evaluate the color integration (see Eq.~\ref{eq:integration}) using the fine network $\mathcal{D}_{\theta_{fine}}$.
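The importance re-sampling for the fine network can be sketched by inverting the piecewise-constant CDF defined by the coarse network's per-bin weights, in the spirit of Mildenhall \etal's hierarchical sampling; the function name and the small stabilizing constant are illustrative assumptions:

```python
import numpy as np

def sample_fine(bin_edges, weights, n_samples, rng=None):
    # Invert the piecewise-constant CDF defined by the coarse network's
    # per-bin weights, so high-density regions receive more fine samples.
    rng = rng or np.random.default_rng(0)
    edges = np.asarray(bin_edges, float)
    w = np.asarray(weights, float) + 1e-5          # avoid empty bins
    pdf = w / w.sum()
    cdf = np.concatenate(([0.0], np.cumsum(pdf)))
    u = rng.random(n_samples)
    idx = np.clip(np.searchsorted(cdf, u, side="right") - 1, 0, len(w) - 1)
    frac = (u - cdf[idx]) / (cdf[idx + 1] - cdf[idx])  # position inside bin
    return edges[idx] + frac * (edges[idx + 1] - edges[idx])
```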
Our method is implemented in PyTorch~\cite{pytorch}.
Both networks and the learnable codes $\mathbf{\gamma}_i$ are optimized using Adam~\cite{adam} ($lr=0.0005$).
In our experiments, we use $512\times512$ images and train each model for $400k$ iterations.
\begin{figure*}[]
\includegraphics[width=1.0\linewidth]{latex/figures/results/compare.jpg}
\caption{Comparison to state-of-the-art facial reenactment methods based on self-reenactment. From left to right: Deferred Neural Rendering (DNR)~\cite{thies2019}, First Order Motion Models (FOMM)~\cite{Siarohin_2019_NeurIPS}, Deep Video Portraits (DVP)~\cite{kim2018deep}, Ours and the ground truth image.
Note that DNR does not provide control over the pose parameters and only changes the facial expressions.
As can be seen, our approach faithfully reconstructs the expression and appearance of the faces, and can also represent the geometry of the glasses including the view-dependent effects (see last row).
}%
\label{fig:results_compare}%
\end{figure*}
\subsection{Monocular Training Data}
\label{sec:training_data}
Our method uses short monocular RGB video sequences.
We captured various human subjects with a Nikon D5300 DSLR camera at a resolution of $1920 \times 1080$ pixels with a framerate of $50$ frames per second.
The images are cropped to $1080 \times 1080$ and scaled to $512 \times 512$.
The sequences have a length of about $2$ min ($6000$ frames).
We hold out the last 20 seconds ($1000$ frames) to serve as a test sequence for each reconstruction.
The subjects were asked to engage in normal conversation, including expressions like smiling as well as head rotations.
\begin{figure*}[h!]
\centering
\includegraphics[width=0.9\textwidth]{latex/figures/results/results.jpg}
\caption{We demonstrate the manual controllability of pose and expression using our 4D facial avatars reconstructed from monocular video inputs.
Specifically, we demonstrate 3D consistent novel head pose synthesis and expression changes (by changing the `open mouth' blendshape coefficient).
}
\label{fig:main_results}
\end{figure*}
\subsection{Comparison to the State of the Art}
\label{sec:comp}
From an application standpoint, our method competes with state-of-the-art facial reenactment methods that allow pose and expression changes to be applied.
Specifically, we compare our method with Deep Video Portraits of Kim \etal.~\cite{kim2018deep}, Deferred Neural Rendering of Thies \etal.~\cite{thies2019}, and First Order Motion Models of Siarohin \etal.~\cite{Siarohin_2019_NeurIPS}.
In Fig.~\ref{fig:results_compare} we show qualitative results of the above-mentioned and our own method in a self-reenactment scenario.
As can be seen, our method is able to reproduce the photo-realistic appearance of the subjects.
In contrast to the other methods, our approach generates 3D consistent results including view-dependent effects like the reflections on the glasses.
In particular, synthesizing novel head rotations is challenging for the baseline methods.
Note that the approach of Thies \etal.~\cite{thies2019} only controls the facial expressions and not the pose.
To quantitatively evaluate our method and the other two approaches, we compute the mean $L_1$-distance, Peak Signal-to-Noise Ratio (PSNR), and Structure Similarity Index (SSIM)~\cite{ssim}, as well as the Learned Perceptual Image Patch Similarity (LPIPS)~\cite{zhang2018perceptual} metric.
The results are listed in Tab.~\ref{table:comparison_other_methods}.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|}
\hline
Method & $L_1 \downarrow$ & PSNR $\uparrow$ & SSIM $\uparrow$ & LPIPS $\downarrow$ \\
\hline\hline
FOMM~\cite{Siarohin_2019_NeurIPS} & $0.036$ & $23.77$ & $0.91$ & $0.16$ \\
DVP~\cite{kim2018deep} & $0.021$ & $25.67$ & $0.93$ & $0.10$ \\
\hline
Ours (no BG) & $0.035$ & $23.52$ & $0.90$ & $0.18$ \\
Ours (no dyn.) & $0.024$ & $26.65$ & $0.93$ & $0.11$ \\
Ours (full) & $\mathbf{0.019}$ & $ \mathbf{26.85}$ & $\mathbf{0.95}$ & $\mathbf{0.06}$ \\
\hline
\end{tabular}
\end{center}
\caption{Quantitative evaluation of our method in comparison to state-of-the-art facial reenactment methods based on self-reenactment (see Fig.~\ref{fig:results_compare}). Ours (no dyn.) refers to our method without conditioning on dynamics. Ours (no BG) is our method without background image input.}
\label{table:comparison_other_methods}
\end{table}
\subsection{Novel Pose and Expression Synthesis}
\label{sec:main_results}
The goal of our method is the reconstruction of a 4D facial avatar with explicit control over pose and expressions.
We show several reconstructed avatars in Fig.~\ref{fig:main_results} including synthesized images with modified facial expressions and rigid pose.
The results are best seen in the supplemental video, which shows that our dynamic neural scene representation can effectively store the appearance and geometry of a talking head.
In addition to the manual expression and pose edits, we demonstrate facial reenactment where we transfer the facial expressions of one person to another (see Fig.~\ref{fig:reenactment}).
\begin{figure*}
\centering
\includegraphics[width=0.95\linewidth]{latex/figures/results/reenactment.jpg}
\caption{Our 4D facial avatars allow for facial reenactment, where the expressions of a source person are transferred to a target actor which we represent with our dynamic neural radiance field. Note that for facial reenactment we only need to train a model for the target actor; the expressions and pose changes from the source actor can be obtained in real-time~\cite{thies2016face}.}
\label{fig:reenactment}
\end{figure*}
\subsection{Ablation Studies}
\label{sec:ablation}
Our method assumes a static background and receives a background image as input.
This background image helps to disentangle the foreground (4D facial avatar) and the background (see Fig.~\ref{fig:latent_codes}).
The conditioning on the facial dynamics in the form of the per-frame facial expression coefficients and learnable latent codes is one of the key components of our approach.
Note that during test time we always employ the latent code from the first frame of the training set.
Besides qualitative results, we also list a quantitative evaluation in Tab.~\ref{table:comparison_other_methods}.
As can be seen, all components of our approach improve the quality of the results.
While static neural radiance fields can achieve satisfactory quality with as few as 100 posed images~\cite{mildenhall2020nerf}, our method requires more training data.
In our setting the dynamic radiance field is required to generalize over the space of expression vectors.
To quantify the need for a large training corpus, we conducted experiments training only on the first half and the first quarter of each training sequence, such that a lower variety of expressions and poses is seen during training.
The measured degradation in quality as we train with less data is shown in Tab. \ref{table:ablation_data}.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|}
\hline
Method & $L_1 \downarrow$ & PSNR $\uparrow$ & SSIM $\uparrow$ & LPIPS $\downarrow$ \\
\hline\hline
Ours ($25\%$) & $0.029$ & $24.22$ & $0.93$ & $0.09$ \\
Ours ($50\%$)& $0.024$ & $25.47$ & $0.94$ & $0.07$ \\ \hline
Ours (full) & $\mathbf{0.019}$ & $ \mathbf{26.85}$ & $\mathbf{0.95}$ & $\mathbf{0.06}$ \\
\hline
\end{tabular}
\end{center}
\caption{Ablation study w.r.t. training corpus size. All metrics significantly benefit from a larger training corpus.}
\label{table:ablation_data}
\end{table}
\section{Network Architecture}
We provide additional details of the proposed dynamic neural radiance fields architecture. As mentioned in the main paper, the dynamic neural radiance field is represented as a multi-layer perceptron (MLP).
In Fig.~\ref{fig:arch}, we depict the underlying architecture.
The dynamic neural radiance field is controlled by the expression coefficients that correspond to the blendshape basis of the used face tracker~\cite{thies2016face}.
To compensate for missing information, we also feed in the learned latent codes $\gamma$.
For a given sample location $(x,y,z)$ and the corresponding viewing direction $\vec{d}$, the MLP outputs the color and density which is used for the volumetric rendering, explained in the main document.
The MLP is based on a backbone of $8$ fully-connected layers, each $256$ neurons-wide, followed by ReLU activation functions.
These activations are fed through a single layer to predict the density value, as well as a $4$-layer, $128$ neuron-wide branch to predict the final color value of the query point.
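To make the layer dimensions concrete, the architecture can be sketched at the shape level as follows. The random weights, the scaling constant, and the helper name are purely illustrative; this is not the trained model:

```python
import numpy as np

def dynamic_nerf_forward(p_enc, d_enc, delta, gamma, rng=None):
    # Shape-level sketch: 8-layer, 256-wide ReLU backbone over the encoded
    # position plus expression and latent codes; a single linear layer for
    # the density; a 4-layer, 128-wide branch (3 hidden + output) that also
    # receives the encoded viewing direction and predicts the color.
    rng = rng or np.random.default_rng(0)
    relu = lambda z: np.maximum(z, 0.0)
    x = np.concatenate([p_enc, delta, gamma])
    for _ in range(8):                                 # backbone
        x = relu(0.05 * rng.standard_normal((256, x.size)) @ x)
    sigma = float(rng.standard_normal(256) @ x)        # density head
    h = np.concatenate([x, d_enc])                     # view enters here
    for _ in range(3):                                 # color branch
        h = relu(0.05 * rng.standard_normal((128, h.size)) @ h)
    rgb = rng.standard_normal((3, 128)) @ h            # final color layer
    return rgb, sigma

rgb, sigma = dynamic_nerf_forward(
    np.zeros(60), np.zeros(24), np.zeros(76), np.zeros(32))
```

The input sizes correspond to the encoded position ($60$), encoded view ($24$), expression coefficients ($76$), and latent code ($32$) used in the paper.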
\section{Introduction}
\input{latex/chapters/01_introduction}
\section{Related Work}
\input{latex/chapters/02_related}
\section{Method}
\input{latex/chapters/03_method}
\section{Results}
\input{latex/chapters/04_results}
\section{Limitations}
\input{latex/chapters/05_limitations}
\section{Conclusion}
\input{latex/chapters/06_discussion}
\section*{Acknowledgments}
\input{latex/chapters/08_acknowledgements}
{\small
\bibliographystyle{ieee_fullname}
\subsection{Dynamic Neural Radiance Fields}
We represent the dynamic radiance field of a talking human head using a multi-layer perceptron (MLP) $\mathcal{D}_{\theta}$ that is embedded in a canonical space.
As the dynamic radiance field is a function of position $\mathbf{p}$, view $\vec{v}$ and dynamics in terms of facial expressions $\mathbf{\delta}$, we provide these inputs to the MLP which outputs color as well as density values for volumetric rendering:
\begin{equation}
\mathcal{D}_{\theta}(\mathbf{p}, \vec{v}, \mathbf{\delta}, \mathbf{\gamma}) = (RGB, \sigma)
\end{equation}
Note, to compensate for errors in the facial expression and pose estimation, we also provide a per-frame learnable latent code $\mathbf{\gamma}$ to the MLP.
Instead of directly inputting the canonical position $\mathbf{p}$ and viewing direction $\vec{v}$, we use positional encoding as introduced by Mildenhall \etal.~\cite{mildenhall2020nerf}.
In our experiments, we use $10$ frequencies for the position $\mathbf{p}$ and $4$ frequencies for the viewing direction $\vec{v}$.
\paragraph{Dynamics Conditioning}
A key component of the dynamic neural radiance fields is the conditioning on the dynamically changing facial expressions.
The facial expressions $\mathbf{\delta}$ are represented by coefficients of a low dimensional delta-blendshape basis of a morphable model ($\mathbf{\delta} \in \mathbb{R}^{76}$).
To estimate the per-frame expressions $\mathbf{\delta}_i$, we use an optimization-based facial reconstruction and tracking pipeline~\cite{thies2016face}.
Note that these expression vectors only model the coarse geometric surface changes and do not model changes of for example the eye orientation.
Besides expression parameters, we also store the rigid pose $P_i \in \mathbb{R}^{4\times4}$ of the face which allows us to transform camera space points to points in the canonical space of the head.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\linewidth]{latex/figures/method/justus_geometry.jpg}
\caption{
Our dynamic radiance field allows for manual editing via the expression vector $\delta$.
In the middle we show the reconstruction of the original expression. On the left and right we show the results of modifying the blendshape coefficient of the mouth opening (left $-0.4$, right $+0.4$).
The bottom row shows the corresponding normal maps computed via the predicted depth.
As can be seen, the dynamic radiance field adapts not only the appearance, but also the geometry according to the input expression.
}
\label{fig:geometry}
\end{figure}
To compensate for missing information of the expression vectors, we introduce learnable latent codes $\mathbf{\gamma}_i$ (one for each frame).
In the experiments, we are using $\mathbf{\gamma}_i \in \mathbb{R}^{32}$ and regularize them via an $\ell_2$ loss using weight decay ($\lambda=0.05$).
In Fig.~\ref{fig:latent_codes}, we show that the latent code improves the overall sharpness of the reconstruction.
Evaluating the Learned Perceptual Image Patch Similarity (LPIPS)~\cite{zhang2018perceptual} metric for the generated images with and without latent codes results in 0.059 and 0.068, respectively.
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.9\linewidth]{latex/figures/method/ablation.jpg}
\caption{
The background image enables us to faithfully reproduce the entire image. While the dynamics are mainly conditioned on the facial expressions, the learnable latent codes improve the sharpness of the image significantly.
Our method also implicitly gives access to a foreground segmentation.
Note that the shown images are from the test set (latent code is taken from the first frame of training set).
}
\label{fig:latent_codes}
\end{figure*}
\subsection{Volumetric Rendering of Portrait Videos}
In our experiments, we assume a static camera, and a static background.
The moving and talking human in the training portrait video is represented with a dynamic neural radiance field as introduced in the previous section.
To render images of this implicit geometry and appearance representation, we use volumetric rendering.
We cast rays through each individual pixel of a frame, and accumulate the sampled density and RGB values along the rays to compute the final output color.
Using the tracking information $P$ of the morphable model, we transform the ray sample points to the canonical space of the head model and evaluate the dynamic neural radiance field at these locations.
Note that this transformation matrix $P$ gives us the control over the head pose during test time.
We use a similar two-stage volumetric integration approach to Mildenhall \etal.~\cite{mildenhall2020nerf}.
Specifically, we have two instances of the dynamic neural radiance field network, a coarse and a fine one.
The densities predicted by the coarse network are used for importance sampling of the query points for the fine network, such that areas of high density are sampled more.
The expected color $\mathcal{C}$ of a camera ray $\mathbf{r}(t)=\mathbf{c}+t\vec{d}$ with camera center $\mathbf{c}$, viewing direction $\vec{d}$ and near $z_\text{near}$ and far bounds $z_\text{far}$ is evaluated as:
\begin{equation}
\label{eq:integration}
\mathcal{C}(\mathbf{r};\theta,P,\mathbf{\delta},\mathbf{\gamma})=\int_{z_\text{near}}^{z_\text{far}}{\sigma_{\theta}\left(\mathbf{r}\left(t\right)\right)\cdot\text{RGB}_{\theta}\bigl(\mathbf{r}\left(t\right),\vec{d}\bigr)}\cdot T(t)dt,
\end{equation}
where $RGB_{\theta}(\cdot)$ and $\sigma_{\theta}(\cdot)$ are computed via the neural scene representation network $\mathcal{D}_{\theta}$ at a certain point on the ray with head pose $P$, expressions $\mathbf{\delta}$ and learnable latent code $\mathbf{\gamma}$. $T(t)$ is the accumulated transmittance along the ray from $z_\text{near}$ to $t$:
\begin{equation}
T(t) = \exp{ \left( -\int_{ z_\text{near} }^{t}{\sigma_{\theta}\left(\mathbf{r}\left(s\right)\right)ds}\right) . }
\end{equation}
Note that the expected color is evaluated for both the coarse and the fine networks (with learnable weights $\theta_{coarse}$ and $\theta_{fine}$, respectively) to compute corresponding reconstruction losses at train time (see Eq.~\ref{eq:train_loss}).
We decouple the static background and the dynamically changing foreground by leveraging a single capture of the background $\mathcal{B}$ (i.e., without the person).
The last sample on the ray $\mathbf{r}$ is assumed to lie on the background with a fixed color, namely, the color of the pixel corresponding to the ray, from the background image.
Since the volumetric rendering is fully differentiable, the network picks up on this signal,
and learns to predict low density values for the foreground samples if the ray is passing through a background pixel, and vice versa - for foreground pixels, i.e., pixels that correspond to torso and head geometry, the networks predict higher densities, effectively ignoring the background image.
This way, the network learns a foreground-background decomposition in a self-supervised manner (see Fig.~\ref{fig:latent_codes}).
\subsection{Network Architecture and Training}
As mentioned above, the dynamic neural radiance field is represented as an MLP.
Specifically, we use a backbone of $8$ fully-connected layers, each $256$ neurons-wide, followed by ReLu activation functions.
Past the backbone, the activations are fed through a single layer to predict the density value, as well as a $4$-layer, $128$ neuron-wide branch to predict the final color value of the query point.
We optimize the network weights of both the coarse and the fine networks based on a photo-metric reconstruction error metric over the training images $I_i$ ($i \in [1,M]$):
\begin{equation}
\label{eq:train_loss}
L_{total} = \sum_{i=1}^{M} L_i(\theta_{coarse}) + L_i(\theta_{fine})
\end{equation}
with
\begin{equation}
\label{eq:train_loss2}
L_{i}(\theta) = \sum_{j \in \text{pixels}}{ \norm{ \mathcal{C}\bigl(\textbf{r}_j;\theta,P_i,\mathbf{\delta}_i,\mathbf{\gamma}_i) - I_i[j]}^2 } .
\end{equation}
For each training image $I_i$ and training iteration, we sample a batch of $2048$ viewing rays through the image pixels.
We use a bounding box of the head (given by the morphable model) to sample the rays such that 95\% of them correspond to pixels within the bounding box and, thus allowing us to reconstruct the face with a high fidelity.
Stratified sampling is used to sample $64$ points along each ray, which are fed into the coarse network $\mathcal{D}_{\theta_{coarse}}$.
Based on the density distribution along the ray, we re-sample $64$ points and evaluate the color integration (see Eq.~\ref{eq:integration}) using the fine network $\mathcal{D}_{\theta_{fine}}$.
Our method is implemented in PyTorch~\cite{pytorch}.
Both networks and the learnable codes $\mathbf{\gamma}_i$ are optimized using Adam~\cite{adam} ($lr=0.0005$).
In our experiments, we use $512\times512$ images and train each model for $400k$ iterations.
\begin{figure*}[]
\includegraphics[width=1.0\linewidth]{latex/figures/results/compare.jpg}
\caption{Comparison to state-of-the-art facial reenactment methods based on self-reenactment. From left to right: Deferred Neural Rendering (DNR)~\cite{thies2019}, First Order Motion Models (FOMM)~\cite{Siarohin_2019_NeurIPS}, Deep Video Portraits (DVP)~\cite{kim2018deep}, Ours and the ground truth image.
Note that DNR does not provide control over the pose parameters and only changes the facial expressions.
As can be seen, our approach faithfully reconstructs the expression and appearance of the faces, and can also represent the geometry of the glasses including the view-dependent effects (see last row).
}%
\label{fig:results_compare}%
\end{figure*}
\subsection{Monocular Training Data}
\label{sec:training_data}
Our method uses short monocular RGB video sequences.
We captured various human subjects with a Nikon D5300 DSLR camera at a resolution of $1920 \times 1080$ pixels with a framerate of $50$ frames per second.
The images are cropped to $1080 \times 1080$ and scaled to $512 \times 512$.
The sequences have a length of about $2$ min ($6000$ frames).
We hold out the last 20 seconds ($1000$ frames) to serve as a test sequence for each reconstruction.
The subjects were asked to engage in normal conversation, including expressions like smiling as well as head rotations.
\begin{figure*}[h!]
\centering
\includegraphics[width=0.9\textwidth]{latex/figures/results/results.jpg}
\caption{We demonstrate the manual controllability of pose and expression using our 4D facial avatars reconstructed from monocular video inputs.
Specifically, we demonstrate 3D consistent novel head pose synthesis and expression changes (by changing the `open mouth' blendshape coefficient).
}
\label{fig:main_results}
\end{figure*}
\subsection{Comparison to the State of the Art}
\label{sec:comp}
From the application standpoint, our method competes with state-of-the-art facial reenactment methods that allow applying pose and expression changes.
Specifically, we compare our method with Deep Video Portrait of Kim \etal.~\cite{kim2018deep}, Deferred Neural Rendering of Thies \etal.~\cite{thies2019} and First-Order Motion Models of Siarohin \etal.~\cite{Siarohin_2019_NeurIPS}.
In Fig.~\ref{fig:results_compare} we show qualitative results of the above-mentioned and our own method in a self-reenactment scenario.
As can be seen, our method is able to reproduce the photo-realistic appearance of the subjects.
In contrast to the other methods, our approach generates 3D consistent results including view-dependent effects like the reflections on the glasses.
In particular, synthesizing new head rotations is challenging for the baseline methods.
Note that the approach of Thies \etal.~\cite{thies2019} only controls the facial expressions and not the pose.
To quantitatively evaluate our method and the other two approaches, we compute the mean $L_1$-distance, Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index (SSIM)~\cite{ssim}, as well as the Learned Perceptual Image Patch Similarity (LPIPS)~\cite{zhang2018perceptual} metric.
The results are listed in Tab.~\ref{table:comparison_other_methods}.
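For reference, the pixel-wise metrics are straightforward to compute (a minimal NumPy sketch assuming images normalized to $[0,1]$; SSIM and LPIPS rely on their respective reference implementations and are omitted here):

```python
import numpy as np

def l1_distance(a, b):
    """Mean absolute per-pixel error."""
    return np.mean(np.abs(a - b))

def psnr(a, b, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB."""
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.zeros((8, 8, 3))
b = np.full((8, 8, 3), 0.1)
print(round(float(l1_distance(a, b)), 6))  # 0.1
print(round(float(psnr(a, b)), 6))         # 20.0 dB, since MSE = 0.01
```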
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|}
\hline
Method & $L_1 \downarrow$ & PSNR $\uparrow$ & SSIM $\uparrow$ & LPIPS $\downarrow$ \\
\hline\hline
FOMM~\cite{Siarohin_2019_NeurIPS} & $0.036$ & $23.77$ & $0.91$ & $0.16$ \\
DVP~\cite{kim2018deep} & $0.021$ & $25.67$ & $0.93$ & $0.10$ \\
\hline
Ours (no BG) & $0.035$ & $23.52$ & $0.90$ & $0.18$ \\
Ours (no dyn.) & $0.024$ & $26.65$ & $0.93$ & $0.11$ \\
Ours (full) & $\mathbf{0.019}$ & $ \mathbf{26.85}$ & $\mathbf{0.95}$ & $\mathbf{0.06}$ \\
\hline
\end{tabular}
\end{center}
\caption{Quantitative evaluation of our method in comparison to state-of-the-art facial reenactment methods based on self-reenactment (see Fig.~\ref{fig:results_compare}). Ours (no dyn.) refers to our method without conditioning on dynamics. Ours (no BG) is our method without background image input.}
\label{table:comparison_other_methods}
\end{table}
\subsection{Novel Pose and Expression Synthesis}
\label{sec:main_results}
The goal of our method is the reconstruction of a 4D facial avatar with explicit control over pose and expressions.
We show several reconstructed avatars in Fig.~\ref{fig:main_results} including synthesized images with modified facial expressions and rigid pose.
The results are best seen in the supplemental video, which shows that our dynamic neural scene representation can effectively store the appearance and geometry of a talking head.
In addition to the manual expression and pose edits, we demonstrate facial reenactment where we transfer the facial expressions of one person to another (see Fig.~\ref{fig:reenactment}).
\begin{figure*}
\centering
\includegraphics[width=0.95\linewidth]{latex/figures/results/reenactment.jpg}
\caption{Our 4D facial avatars allow for facial reenactment, where the expressions of a source person are transferred to a target actor which we represent with our dynamic neural radiance field. Note that for facial reenactment we only need to train a model for the target actor; the expressions and pose changes from the source actor can be obtained in real-time~\cite{thies2016face}.}
\label{fig:reenactment}
\end{figure*}
\subsection{Ablation Studies}
\label{sec:ablation}
Our method assumes a static background and receives a background image as input.
This background image helps to disentangle the foreground (4D facial avatar) and the background (see Fig.~\ref{fig:latent_codes}).
The conditioning on the facial dynamics in the form of the per-frame facial expression coefficients and learnable latent codes is one of the key components of our approach.
Note that during test time we always employ the latent code from the first frame of the training set.
Besides qualitative results, we also list a quantitative evaluation in Tab.~\ref{table:comparison_other_methods}.
As can be seen, all components of our approach improve the quality of the results.
While static neural radiance fields can achieve satisfactory quality with as few as 100 posed images~\cite{mildenhall2020nerf}, our method requires more training data.
In our setting the dynamic radiance field is required to generalize over the space of expression vectors.
To quantify the need for a large training corpus, we conducted experiments training only on the first half and the first quarter of each training sequence, such that a lower variety of expressions and poses is seen during training.
The measured degradation in quality as we train with less data is shown in Tab. \ref{table:ablation_data}.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|}
\hline
Method & $L_1 \downarrow$ & PSNR $\uparrow$ & SSIM $\uparrow$ & LPIPS $\downarrow$ \\
\hline\hline
Ours ($25\%$) & $0.029$ & $24.22$ & $0.93$ & $0.09$ \\
Ours ($50\%$)& $0.024$ & $25.47$ & $0.94$ & $0.07$ \\ \hline
Ours (full) & $\mathbf{0.019}$ & $ \mathbf{26.85}$ & $\mathbf{0.95}$ & $\mathbf{0.06}$ \\
\hline
\end{tabular}
\end{center}
\caption{Ablation study w.r.t. training corpus size. All metrics significantly benefit from a larger training corpus.}
\label{table:ablation_data}
\end{table}
\section{Network Architecture}
We provide additional details of the proposed dynamic neural radiance fields architecture. As mentioned in the main paper, the dynamic neural radiance field is represented as a multi-layer perceptron (MLP).
In Fig.~\ref{fig:arch}, we depict the underlying architecture.
The dynamic neural radiance field is controlled by the expression coefficients that correspond to the blendshape basis of the used face tracker~\cite{thies2016face}.
To compensate for missing information, we also feed in the learned latent codes $\gamma$.
For a given sample location $(x,y,z)$ and the corresponding viewing direction $\vec{d}$, the MLP outputs the color and density which is used for the volumetric rendering, explained in the main document.
The MLP is based on a backbone of $8$ fully-connected layers, each $256$ neurons wide, with ReLU activation functions.
These activations are fed through a single layer to predict the density value, as well as through a $4$-layer, $128$ neuron-wide branch to predict the final color value of the query point.
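The layer sizes above can be summarized by a shape-checking forward pass (NumPy with random weights; the encoded input and view-direction widths are illustrative assumptions, since they depend on the positional encoding and conditioning dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

def dense(d_in, d_out):
    """Random fully-connected layer (weights, bias), for shape checking only."""
    return rng.normal(0.0, 0.1, (d_in, d_out)), np.zeros(d_out)

def apply(x, layer):
    w, b = layer
    return x @ w + b

D_IN, D_VIEW = 63, 27  # assumed widths of the encoded input and view direction
backbone = [dense(D_IN, 256)] + [dense(256, 256) for _ in range(7)]  # 8 layers
sigma_head = dense(256, 1)                                           # density
color_branch = ([dense(256 + D_VIEW, 128)]
                + [dense(128, 128) for _ in range(2)]
                + [dense(128, 3)])                                   # 4 layers

def forward(x, d):
    h = x
    for layer in backbone:
        h = relu(apply(h, layer))
    sigma = apply(h, sigma_head)
    c = np.concatenate([h, d], axis=-1)
    for layer in color_branch[:-1]:
        c = relu(apply(c, layer))
    rgb = apply(c, color_branch[-1])
    return sigma, rgb

sigma, rgb = forward(rng.normal(size=(64, D_IN)), rng.normal(size=(64, D_VIEW)))
print(sigma.shape, rgb.shape)  # (64, 1) (64, 3)
```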
\section{Introduction}
\input{latex/chapters/01_introduction}
\section{Related Work}
\input{latex/chapters/02_related}
\section{Method}
\input{latex/chapters/03_method}
\section{Results}
\input{latex/chapters/04_results}
\section{Limitations}
\input{latex/chapters/05_limitations}
\section{Conclusion}
\input{latex/chapters/06_discussion}
\section*{Acknowledgments}
\input{latex/chapters/08_acknowledgements}
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
Magnetic materials with topological spin textures have attracted considerable attention for use in spintronic devices \cite{tokura}. Materials with magnetic skyrmions are considered useful for nonvolatile \cite{nonvolatile}, low power computing \cite{lowpower1, lowpower2, lowpower3} as topological magnetic materials are robust to perturbation due to the topological protection of the magnetic states. Magnetic phases with nontrivial topology have attracted attention for their ability to be manipulated by low current densities \cite{lowcurrent1, lowcurrent2, lowcurrent3, lowcurrent4} and with low dissipation \cite{dissipationless}. Nontrivial topological magnetic states most commonly arise from geometric frustration in hexagonal lattices \cite{frustration1, frustration2, frustration3, frustration4}, and spin-orbit coupling through the Dzyaloshinskii-Moriya interaction \cite{DMI1, DMI2} such as chiral noncentrosymmetric B20 materials MnSi \cite{mnsi}, FeGe \cite{fege}, and Fe$_{1-x}$Co$_x$Si \cite{fecosi}. In contrast, in inversion symmetric materials with no geometric frustration, topological spin textures are less common and normally require additional mechanisms, such as centrosymmetric tetragonal GdRu$_2$Si$_2$ which displays a skyrmion lattice owing to the interactions between itinerant electrons and magnetic order\cite{gdrusi}.
Materials with layered crystal structure are of particular interest. These materials are ideal candidates for device applications as their structure may allow for exfoliation and easy integration into layered heterostructures, which opens a new route in exploring intrinsic magnetism in the 2D limit. To date, 2D magnetism has been observed in many hexagonal and trigonal materials, but remains elusive in other lattice types \cite{2d1, 2d2, 2d3}. Therefore, the discovery of tetragonal layered magnetic materials with tunable spin textures is highly desired for better understanding of low dimensional magnetism. One promising class of materials is Mn$_{2-x}$\textit{T}$_x$Sb, where \textit{T} is a transition metal such as Zn \cite{valkov, ZnDoped, GMR}, Cr \cite{CrDoped, CrDoped2}, Co \cite{CoDoped}, or Fe \cite{FeDoped}. As the parent compound of Mn$_{2-x}$\textit{T}$_x$Sb, Mn$_2$Sb crystallizes in a Cu$_2$Sb-type centrosymmetric tetragonal lattice with \textit{P}4/\textit{nmm} space group. It displays a ferrimagnetic (FI) order with spins collinear along the \textit{c}-axis below 550 K, and undergoes another magnetic transition with spins re-oriented within the \textit{ab} plane below 240 K \cite{Mn2Sb}. The substitution of Mn by other transition metals \textit{T} tunes magnetic phases, which consequently modifies other properties such as cell volume \cite{valkov}, magnetostriction \cite{magnetostriction}, electronic transport \cite{valkov}, and the magnetocaloric effect \cite{MCE}. Such coupling of magnetism and physical properties, together with the layered structure and high electrical conductivity of these materials, makes Mn$_{2-x}$\textit{T}$_x$Sb a good material platform to study and tune the competing exchange interactions, as well as to develop new magnetic devices.
Among transition metals, the non-magnetic Zn substitution to create Mn$_{2-x}$Zn$_{x}$Sb is particularly interesting. It has been reported that light Zn-substitution leads to a weak ferrimagnetic state at low temperatures \cite{valkov}. Further increasing Zn content to $x = 1$ can fully suppress the anti-parallel magnetic sublattice, leading to ferromagnetic (FM) order with the Curie temperature of 320 K in MnZnSb \cite{MnZnSb_Structure, MnZnSb}. Such room temperature ferromagnetism is of great interest for technological applications. As Mn$_{2-x}$Zn$_{x}$Sb has been ascribed to itinerant electron magnetism \cite{valkov} and displays tunable magnetic phases, it is an interesting candidate for nontrivial magnetic topology in a centrosymmetric tetragonal lattice. Under this motivation, we studied the magnetic and electronic properties of Mn$_{2-x}$Zn$_{x}$Sb compounds. We focused on Zn-rich samples (${x} > 0.5$), which have not been previously studied (except for ${x} = 1$). We observed rich magnetic phases in this material system and a very large topological Hall effect (THE) with a maximum $\rho_{xy}^T \sim 2\ \mu\Omega\,\mathrm{cm}$ occurring at temperatures up to 250 K. These observations, together with the magnetoentropic analysis, imply Mn$_{2-x}$Zn$_{x}$Sb can be a rare material platform for tunable topological magnetism in centrosymmetric tetragonal lattices.
\section{Experimental Methods}
Mn$_{2-x}$Zn$_{x}$Sb single crystals were grown by a self-flux method with Zn flux. Mn powder (99.6\%, Alpha Aesar), Zn powder (99.9\%, Alpha Aesar) and Sb powder (99.5\%, Beantown Chemical) with ratio 1+\textit{x}:6:1 were loaded in an alumina crucible inside evacuated fused quartz tubes. The tubes were heated to the maximum growth temperature over 30 hours, held for 3 days, and cooled down to 600$^\circ$C, followed by centrifuging to remove the excess flux. The maximum growth temperature varied with sample composition: for example, 800$^\circ$C for stoichiometric MnZnSb and 900$^\circ$C for other compositions. Millimeter-size crystals with metallic luster were obtained, as shown in the insets of Fig. 1b. Their compositions and crystallinity were checked by energy dispersive x-ray spectroscopy (EDS) and x-ray diffraction (XRD) (Fig. 1b), respectively. Throughout this work, the compositions shown below are measured compositions determined by EDS. The crystal structure was determined by single crystal XRD in a Bruker Apex I diffractometer with Mo \textit{K}$_\alpha$ radiation ($\lambda$ = 0.71073 Å). Resistivity and most of the magnetization measurements were carried out using a Quantum Design Physical Properties Measurement System. Some magnetization measurements were performed with a Quantum Design Superconducting Quantum Interference Device.
\section{Results}
As shown in Fig. 1a, the parent compound Mn$_{2}$Sb crystallizes in a layered tetragonal structure with two inequivalent Mn sites Mn(I) and Mn(II), whereas the Zn substitution has been found to occur at the Mn(II) site \cite{valkov, MnZnSb_Structure, GMR}. The systematic shift of the (00\textit{l}) XRD peaks with varying the Zn content indicates successful substitution in our samples, as shown in Fig. 1b. We have further performed crystal structure refinement using single crystal XRD on Mn$_{2-x}$Zn$_{x}$Sb, which revealed that Zn-substitution maintains the crystal structure with the same space group \textit{P}4/\textit{nmm}, and the Zn substitution indeed occurs at the Mn(II) site. In the Supplemental Materials \cite{SM} we provide the structure parameters resolved using JANA2006 \cite{JANA2006}.
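The systematic (00\textit{l}) peak shift maps directly onto a change of the $c$ lattice parameter through Bragg's law, $\lambda = 2d\sin\theta$ with $d = c/l$ for (00\textit{l}) reflections. A small sketch (the $c$ value is a hypothetical illustration; the Mo $K_\alpha$ wavelength is taken from the text):

```python
import numpy as np

LAMBDA = 0.71073  # Mo K-alpha wavelength (angstrom), as used for the XRD data

def two_theta_00l(c, l):
    """2*theta (degrees) of a (00l) reflection via Bragg's law, d = c/l."""
    return 2.0 * np.degrees(np.arcsin(l * LAMBDA / (2.0 * c)))

def c_from_two_theta(two_theta, l):
    """Invert Bragg's law to recover the c lattice parameter (angstrom)."""
    return l * LAMBDA / (2.0 * np.sin(np.radians(two_theta / 2.0)))

c_assumed = 6.5  # hypothetical c-axis length, for illustration only
tt = two_theta_00l(c_assumed, 3)  # position of the (003) peak
print(round(c_from_two_theta(tt, 3), 6))  # 6.5
```

A larger $c$ shifts each (00\textit{l}) peak to lower angle, which is the trend tracked in the right panel of Fig. 1b.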
\begin{figure}[t!]
\includegraphics[width=\textwidth]{Fig1.pdf}
\caption{\label{Figure 1} (a) Crystal structure for Mn$_{2-x}$Zn$_{x}$Sb. (b) Left: Images of a few representative Mn$_{2-x}$Zn$_{x}$Sb single crystals used in this work and their (00\textit{l}) series x-ray diffraction patterns. Right: zoom of the XRD pattern to show the systematic shift of the (003) diffraction peak.}
\end{figure}
In Mn$_{2}$Sb, Mn(I) and Mn(II) sublattices carry opposite magnetic moments which leads to two FI phases. The high temperature FI phase (denoted as HT-FI) occurs between 240 and 550 K, which is made up of inequivalent Mn(I) and Mn(II) magnetic sublattices with moments oriented out-of-plane. Below 240 K, the moments of both magnetic sublattices are re-oriented to the in-plane direction which forms the low temperature FI phase (denoted as LT-FI) \cite{Mn2Sb}. The schematic magnetic structures are shown in the insets of Fig. 2c. Therefore, the dilution of Mn(II) magnetism by non-magnetic Zn substitution can significantly affect the magnetism. The magnetic transition temperatures have been reported to be suppressed by light Zn-substitution ($x < 0.3$) \cite{valkov, ZnDoped, GMR}. In this work, we clarified the magnetism of the heavily Zn-substituted compounds. Upon increasing the Zn content, the out-of-plane (\textbf{H}//\textit{c}, Fig. 2a) and in-plane (\textbf{H}//\textit{ab}, Fig. 2b) magnetic susceptibility measurements indicate that the two transition temperatures systematically decrease. The HT-FI ordering temperature saturates to 300 - 310 K. For the LT-FI phase, the order is completely suppressed when Mn(II) is fully replaced by Zn (${x} = 1$), leaving a room temperature ferromagnetism with $T_c\approx$ 310 K that corresponds to the magnetic ordering of the Mn(I) sublattice. These observations indicate that the magnetic phases in Mn$_{2-x}$Zn$_{x}$Sb are determined by the coupling of Mn(I) and Mn(II) magnetic lattices.
\begin{figure}[t!]
\includegraphics[width=\textwidth]{Fig2.pdf}
\caption{\label{Figure 2} (a-b) Temperature dependence of magnetic susceptibility for Mn$_{2-x}$Zn$_{x}$Sb measured with magnetic field of 0.1 T applied (a) along the \textit{c}-axis ($H\parallel c$) and (b) within the \textit{ab}-plane ($H\parallel ab$). Data for each composition are shifted for clarity. The same color code is used for (a) and (b). (c) Magnetic phase diagram constructed from the susceptibility measurements presented in (a) and (b). Inset: the schematic magnetic structures for the HT-FI and LT-FI phases.}
\end{figure}
The evolution of magnetism in heavily Zn-substituted ($x > 0.5$) samples is summarized by a magnetic phase diagram in Fig. 2c, together with the possible magnetic structures for HT-FI and LT-FI phases adopted from the earlier studies \cite{Mn2Sb, valkov}. For MnZnSb which is FM, the magnetic structure is similar to that for the HT-FI phase, except that the Mn(II) sublattice is replaced by non-magnetic Zn \cite{MnZnSb}. Though the reported magnetic structures for HT-FI and LT-FI phases were originally determined for lightly Zn-substituted samples \cite{valkov}, they agree well with the observed magnetic properties of our Zn-rich samples. As shown in Fig. 2a, when \textbf{H}//\textit{c}, susceptibility $\chi_c$ is the greatest in the HT-FI phase but drops steeply upon entering the LT-FI phase. In contrast, under in-plane field \textbf{H}//\textit{ab}, the LT-FI phase susceptibility $\chi_{ab}$ is comparable to or greater than that of the HT-FI phase. These results are consistent with the proposed magnetic structures with the easy axis being out-of-plane for the HT-FI phase but in-plane for the LT-FI phase (Fig. 2c, insets), which has also been observed in Co-doped Mn$_{2}$Sb \cite{CoDoped}.
\begin{figure}[t!]
\includegraphics[width=\textwidth]{Fig3.pdf}
\caption{\label{Figure 3} Magnetic field dependence of (a-c) isothermal magnetization $M$, (d-f) Hall resistivity $\rho_{xy}$, and (g-i) normalized magnetoresistivity, MR = [$\rho_{xx}(H)$ - $\rho_{xx}(H=0)$]/$\rho_{xx}(H=0)$ for $x = 0.77$, $x = 0.85$, and $x = 1$ samples, measured with \textbf{H}//\textit{c}. Metamagnetic transitions below the LT-FI transition temperatures are manifested by the jump at higher fields at 150 K in the $x = 0.77$ and 100 K in the $x = 0.85$ sample. For each of those three compositions, identical samples were used for Hall effect and magnetoresistivity measurements. Lower insets in (a-c): zoom-in of the low field magnetization. Upper inset in (b): out-of-plane (\textbf{H}//\textit{c}) and in-plane (\textbf{H}//\textit{ab}) magnetization for the $x = 0.85$ sample at $T = 2$ K. The same color code is used for all panels.}
\end{figure}
Magnetic properties of Zn-substituted samples have also been characterized by isothermal field-dependent magnetization measurements. In Figs. 3a-c we present the field dependent magnetization $M(H)$ for a few representative samples $x = 0.77, 0.85$, and $1$, measured with \textbf{H}//\textit{c}. The low field hysteresis loop and the high field moment saturation behavior are consistent with the nature of ferrimagnetism. In the $x = 0.77$ and $0.85$ samples with a mixed Mn(II)/Zn plane, a strong metamagnetic transition can be observed below the LT-FI transition temperature (e.g., 150 K for the $x = 0.77$ sample and 100 K for the $x = 0.85$ sample), which shifts to higher fields with lowering temperature and is absent in measurements under in-plane magnetic field (\textbf{H}//\textit{ab}, see Fig. 3b, inset). Those observations imply a field-driven spin canting and reorientation from the LT-FI magnetic structure to the HT-FI structure (see magnetic structures in Fig. 2c). In MnZnSb ($x=1$), the full replacement of the Mn(II) lattice by the Zn plane leads to a typical FM behavior with a saturation moment around 1.2 $\mu_B$/Mn, consistent with earlier studies \cite{MnZnSb}. In addition, as shown in the inset of Fig. 3c, the width of the hysteresis loop is negligible in MnZnSb. The small magnetic coercivity further indicates soft room temperature ferromagnetism.
The highly tunable magnetic phases in Mn$_{2-x}$Zn$_{x}$Sb provide a good opportunity to study the evolution of electronic transport properties with fine-tuning of magnetism. The strong coupling of magnetism and electron transport in this material system has been revealed by the observation of a strong anomalous Hall effect (AHE) in all Mn$_{2-x}$Zn$_{x}$Sb samples we have measured. Figures 3d-f present the field dependence of Hall resistivity $\rho_{xy}(H)$ for a few representative samples with $x = 0.77, 0.85$, and 1. The jumps in Hall resistivity $\rho_{xy}$ match well with the magnetization jumps, both near the zero field and at the metamagnetic transition (Figs. 3a-3c). In addition to the low field hysteresis loop that was also seen in the magnetization measurements, the $x = 0.77$ sample also displays a hysteresis loop in the field scan at the metamagnetic transition around 4 T at 150 K (Fig. 3d), which is not observed in the magnetization measurement (Fig. 3a). In addition, as shown in Figs. 3g-3i, the magnetotransport measurements reveal negative magnetoresistivity (MR) which is consistent with the nature of ferrimagnetism, as well as clear kinks at the magnetization transitions.
One interesting feature in our transport study is the presence of THE in Mn$_{2-x}$Zn$_{x}$Sb in $\rho_{xy}$ data. In materials with nontrivial spin texture, conduction electrons acquire a Berry phase that gives rise to a fictitious magnetic field, leading to additional THE contribution to Hall resistivity, as given by \cite{THEformula}:
\begin{equation}
\rho_{xy} = R_0B + S_H \rho_{xx}(B,T)^2M(B,T) + \rho_{xy}^T
\end{equation}
where $R_0 = \frac{1}{nq}$ is the ordinary Hall coefficient, $S_H$ is the anomalous Hall coefficient, $\rho_{xx}$ is the longitudinal resistivity, and $M$ is the magnetization. The first, second, and third terms in Eq. 1 correspond to the ordinary Hall effect (OHE), anomalous Hall effect (AHE), and topological Hall effect (THE), respectively. For the AHE component, we choose the widely used quadratic dependence on longitudinal resistivity (i.e., AHE $\propto\rho_{xx}(B,T)^2$), which is applicable for extrinsic AHE in good metals ($\sigma \sim 10^4 - 10^6~(\Omega\,\mathrm{cm})^{-1}$), or AHE with an intrinsic Berry phase mechanism \cite{AHEreview,THEformula}. Although extrinsic AHE originating from a skew scattering mechanism can show a linear dependence (i.e., AHE $\propto\rho_{xx}$) \cite{AHEreview,THEformula}, it cannot fit our measured data. For the THE component $\rho_{xy}^T$, it can arise due to topological spin texture in real space, or the Berry phase in momentum space through a topological band structure.
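The decomposition of Eq. 1 amounts to a linear least-squares fit of $R_0$ and $S_H$ in a field range where the topological term is negligible, with the low-field residual identified as $\rho_{xy}^T$. A schematic NumPy implementation on synthetic data (all parameter values are invented for illustration, not fitted values from the paper):

```python
import numpy as np

def extract_the(B, rho_xx, M, rho_xy, fit_mask):
    """Fit rho_xy = R0*B + S_H*rho_xx^2*M on fit_mask; the residual is rho_T."""
    X = np.column_stack([B[fit_mask], (rho_xx ** 2 * M)[fit_mask]])
    (R0, S_H), *_ = np.linalg.lstsq(X, rho_xy[fit_mask], rcond=None)
    rho_T = rho_xy - R0 * B - S_H * rho_xx ** 2 * M
    return R0, S_H, rho_T

# synthetic data with known coefficients plus a low-field topological bump
B = np.linspace(0.0, 9.0, 200)
rho_xx = 50.0 + 0.5 * B                 # illustrative longitudinal resistivity
M = np.tanh(2.0 * B)                    # saturating magnetization
bump = 2.0 * np.exp(-(B ** 2))          # THE-like peak around zero field
rho_xy = 0.05 * B + 1e-4 * rho_xx ** 2 * M + bump

R0, S_H, rho_T = extract_the(B, rho_xx, M, rho_xy, fit_mask=B > 5.0)
print(round(float(R0), 4), round(float(S_H), 6))  # 0.05 0.0001
```

Restricting the fit to high fields (here $B > 5$) is what isolates the ordinary and anomalous contributions, so that the recovered residual reproduces the injected low-field peak.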
For FM MnZnSb in which the Mn(II) plane is fully replaced by Zn, $\rho_{xy}$ can be well reproduced by ordinary and anomalous Hall components with a negative anomalous Hall coefficient $S_H$. However, Mn$_{2-x}$Zn$_{x}$Sb samples with a mixed Mn(II)/Zn lattice plane display distinct features. First, $S_H$ reverses its sign from positive to negative with lowering the temperature (Figs. 3d and 3e). The mechanism for such a sign change is unclear, but appears to be associated with the change of the magnetic easy axis from out-of-plane in the HT-FI phase to in-plane in the LT-FI phase (see magnetic structures in Fig. 2c). Second, $\rho_{xy}$ at low fields does not scale with magnetization, indicating the presence of a THE component $\rho_{xy}^T$, which can be extracted by subtracting the ordinary and anomalous Hall contributions from the total measured $\rho_{xy}$, using fits to Eq. 1 above. A typical example showing the extraction of $\rho_{xy}^T$ at $T=2$ K for the $x = 0.85$ sample is shown in Fig. 4a, in which the measured magnetic field dependence of the Hall resistivity has been decomposed into the OHE, AHE, and the remaining THE components. The extracted $\rho_{xy}^T(H)$ exhibits a peak near zero field, which is gradually suppressed upon increasing magnetic field. Such a field dependence of $\rho_{xy}^T(H)$ is reminiscent of the THE in skyrmion systems. In addition to magnetic field, increasing temperature also suppresses $\rho_{xy}^T$, which can be directly seen from the absence of the low field feature in the measured Hall effect data (Figs. 3d and 3e). Similar behavior has also been observed in all other Mn$_{2-x}$Zn$_{x}$Sb samples with varied $\rho_{xy}^T$ amplitude, except for the $x = 1$ sample. In Fig. 4b we summarize the extracted maximum $\rho_{xy}^T$ in a color map to present the temperature and composition dependence of the THE in Mn$_{2-x}$Zn$_{x}$Sb.
It can be clearly seen that THE occurs with Zn substitution for Mn(II), becomes the strongest in the $x = 0.85$ sample with a large $\rho_{xy}^T \sim 2\ \mu\Omega\,\mathrm{cm}$, and diminishes when Mn(II) is fully replaced by Zn. Interestingly, the sizable THE persists even at high temperatures (250 K), which is remarkable compared to other materials showing THE close to room temperature \cite{HT_THE1, HT_THE2, HT_THE3, HT_THE4, HT_THE5, HT_THE6, HT_THE7}.
\begin{figure}[t!]
\includegraphics[width=\textwidth]{Fig4.pdf}
\caption{\label{Figure 4} (a) Separation of the ordinary (OHE), anomalous (AHE), and the topological (THE) Hall components for the $x = 0.85$ sample at $T = 2$ K according to Eq. 1. (b) Color map for the maximum topological Hall resistivity for various Mn$_{2-x}$Zn$_{x}$Sb samples and at various temperatures. (c) Low field ($< 0.1$ T) susceptibility for the $x = 0.85$ sample, measured with both cooling and heating as indicated by the arrows. Data collected at various fields are shifted for clarity. (d) $(dS/dH)_T=(dM/dT)_H$ for the $x = 0.85$ sample, obtained from (c). Data collected at various fields are shifted for clarity, with the dashed lines indicating the zero value.}
\end{figure}
In magnetic materials, large THE is expected \cite{largeTHE} and observed \cite{THEnoncollinear, THEnoncollinear2} with noncollinear antiferromagnetism, which has been attributed to the Berry curvature-driven intrinsic anomalous Hall effect \cite{largeTHE}. As a specific scenario, the formation of a skyrmion lattice with topological spin texture is known to cause large THE \cite{frustration4}. In addition, there is also a proposal of artificial THE caused by signals from multiple magnetic phases in inhomogeneous or polycrystalline samples \cite{CrTe,CoGd}, which can be excluded here because no secondary phases have been detected in structural and magnetic property characterizations on our single crystalline samples. To clarify the mechanism of the large THE, we have performed temperature dependent magnetization measurements for the $x = 0.85$ sample under several perpendicularly applied fields (\textbf{H}//\textit{c}), as shown in Fig. 4c. We focus on low field ($< 0.1$ T) measurements because THE diminishes at high fields (Fig. 4a). The data were taken during both cooling and heating, in contrast to conventional magnetization measurements in which the data are usually collected with increasing temperature. Our measurements reveal a field-dependent ordering temperature which shifts to lower values with increasing field. Furthermore, there is a thermal hysteresis in the HT-FI to LT-FI transition, and an anomaly peak immediately above the transition. The anomaly peak only occurs upon heating from the LT-FI phase, and is suppressed at higher fields ($> 0.07$ T). This magnetization anomaly, which occurs as a precursor to the magnetic transition, implies that there is an intermediate phase depending on the temperature history of the sample. Such an anomaly peak in magnetization has also been observed in other materials with non-trivial spin textures \cite{FeGeSkyrmion, CoZnMn}.
The transitions between topologically distinct spin states in real space are characterized by entropy changes which can be conveniently revealed by temperature dependent magnetization measurements via the Maxwell relation $(dS/dH)_T=(dM/dT)_H$ \cite{FeGeSkyrmion}. The derivative of the warming magnetization data is presented in Fig. 4d, which displays a positive peak corresponding to the magnetic transition in $M(T)$. At low fields, a negative peak (the blue shaded region) in $dS/dH$ occurs immediately above the magnetic transition temperature and extends to $\sim$220 K, indicating a reduced magnetic entropy. This negative peak disappears at higher fields ($> 0.07$ T). The temperature and field dependence of the negative peak agrees well with the emergence of THE in the $x = 0.85$ sample (Figs. 4a and 4b). Similar magnetoentropy signatures have also been observed in FeGe \cite{FeGeSkyrmion} and Co$_x$Zn$_y$Mn$_z$ \cite{CoZnMn}, which are attributed to the formation of a skyrmion lattice. Though direct probes such as Lorentz transmission electron microscopy or small-angle neutron scattering are needed to clarify the existence of a skyrmion phase in Mn$_{2-x}$Zn$_{x}$Sb, our observations indeed imply the development of a distinct spin state prior to the HT-FI to LT-FI phase transition.
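Numerically, the magnetoentropic map $(dS/dH)_T=(dM/dT)_H$ is just a temperature derivative of the $M(T)$ curves measured at fixed fields; a minimal sketch (the $M(T,H)$ model below is synthetic, for illustration only):

```python
import numpy as np

def dS_dH(T, M_grid):
    """Maxwell relation: (dS/dH)_T = (dM/dT)_H, column-wise at each fixed H."""
    return np.gradient(M_grid, T, axis=0)

# synthetic M(T, H): a smooth transition centered at 200 K, field-dependent scale
T = np.linspace(150.0, 250.0, 101)      # temperature grid (K)
H = np.array([0.01, 0.05, 0.1])         # applied fields (T)
M_grid = np.tanh((200.0 - T)[:, None] / 10.0) * (1.0 + H[None, :])
dSdH = dS_dH(T, M_grid)
print(dSdH.shape)  # (101, 3)
```

In the experiment each column of such a map corresponds to one warming curve of Fig. 4c, and sign changes of the derivative directly trace the entropy anomalies shown in Fig. 4d.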
\section{Discussion and Conclusion}
The magnetoentropic analysis indicates the presence of an intermediate phase that occurs only at low field, which further suggests that the THE is due to a real space Berry phase caused by a topological magnetic spin texture. One possibility is a skyrmion lattice phase stabilized by magnetic fluctuations near the phase boundary between the HT-FI and LT-FI phases. Centrosymmetric materials possessing magnetic skyrmions or skyrmion-like features have been discovered, such as SrFe$_{1-x}$Co$_x$O$_3$ and MnNiGa \cite{SFCO,MnNiGa}. Large THE in a tetragonal lattice has also been reported in Mn$_2$PtSn, which is non-centrosymmetric \cite{Mn2PtSn}. However, a skyrmion lattice phase is rare in a centrosymmetric tetragonal lattice. In centrosymmetric tetragonal Mn$_{2-x}$Zn$_{x}$Sb, magnetic fluctuations and frustrations may be introduced by partial replacement of the magnetic sublattice. Furthermore, it has been reported that coupling between itinerant electron states and localized magnetic moments can lead to a skyrmion lattice without geometric frustration. These exchange interactions have been shown to lead to a square skyrmion lattice when stabilized by magnetic anisotropy in materials such as GdRu$_2$Si$_2$, through both experimental observations \cite{gdrusi, nanometric} and Monte Carlo simulations \cite{gdrusi, squareskyrmion}.
More direct probes are needed to clarify the spin texture in Mn$_{2-x}$Zn$_{x}$Sb. Given that a large THE in a centrosymmetric tetragonal lattice is rare \cite{EuAl4}, the large THE occurring at very high temperatures and its high tunability with composition indicate that magnetically disordered Mn$_{2-x}$Zn$_{x}$Sb is a good candidate material for tunable topological magnetism.
\textbf{Acknowledgement}
This work is primarily supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences program under Award No. DE-SC0019467 (support for personnel, sample synthesis, major transport and magnetization measurements, and data analysis). H.C. acknowledges NSF award DMR-1848281 (support for part of transport measurements). R.B. acknowledges the support from the Chancellor’s Innovation and Collaboration Fund at the University of Arkansas (support for personnel). Z.Q.M. acknowledges the support by the US National Science Foundation under grants DMR 1917579 and 1832031.
$^{*}$ jinhu@uark.edu
$^{\dagger}$ wegner@uark.edu
\section{Introduction}
The concept of quantum weak measurement was introduced by
Aharonov, Albert, and Vaidman (AAV) in 1988~\cite{AAV}. Their theory
is based on the von Neumann measurement with a very weak coupling
between two quantum systems~\cite{Jozsa}, for example, the weak
spin-orbit coupling of electrons in the Stern-Gerlach (SG) device. A
key feature of the weak measurement is that the observable quantity
(acting as the pointer) is measured in a certain subensemble, for
example, measuring the expectation value of the electrons' position
with the post-selected spin state $|f\rangle$. This measurement leads
to the interesting result that the pointer has a shift proportional
to the value
\begin{equation}
A_w=\frac{\langle f|\hat{A}|i\rangle}{\langle f|i\rangle}\,,
\end{equation}
where $|i\rangle$ and $\hat{A}$ are, respectively, the initial
state and the observable operator of the spin system. $A_w$ is the
so-called weak value. Compared to the expectation value $\langle
i|\hat{A}|i\rangle$ obtained in a strong measurement, the weak value
provides an improved approach to detecting $\hat{A}$, and some
interesting phenomena result.
Recently, the weak value has attracted much interest because it
can be arranged to amplify weak signals~\cite{Amplification1,
Amplification2,Amplification3,Amplification4,Amplification5,Hall
Effect}. It is also used to study foundational questions of
quantum
mechanics~\cite{three-box,C-Cat,Nature-wavefuction,Science-trajectory,Superluminal},
such as Hardy's paradox~\cite{HD}, the Leggett-Garg
inequality~\cite{LG}, Heisenberg's uncertainty
relation~\cite{Uncertain}, and the wave-particle
correlation~\cite{correlation}. Regarding physical
implementations, most previous studies used light as both
the pointer and the measured system~\cite{RMP}. There are several
interesting works implementing weak measurement in
condensed-matter systems, e.g., the quantum dot~\cite{Dot}, the
superconducting phase qubit~\cite{Superconducting}, and the
semiconducting Aharonov-Bohm interferometer~\cite{AB}. Recently,
Ref.~\cite{Atom} studied the weak measurement of a cold-atom system
based on the dynamics of spontaneous emission.
In this article, the weak measurement is applied to a system of
atom-cavity interaction. In such a system, cavity quantum
electrodynamics (cavity QED) has predicted many nonclassical
phenomena, such as the famous vacuum Rabi
oscillation~\cite{HarocheRabi,HarocheCat,HarocheRMP} and the vacuum
Rabi splitting~\cite{splitting1983,splittingkimble,splittingNature}.
These effects concern the cavity-induced changes in the internal
electronic states of atoms. Remarkably, it has been shown that the
light in a cavity can also significantly affect the atom's center-of-mass
(c.m.) motion, as in Kapitza-Dirac
scattering~\cite{Kapitza-Dirac-vacuum,Kapitza-Dirac-0,Kapitza-Dirac-1,Kapitza-Dirac-2,Kapitza-Dirac-RMP}.
This effect is due to the atom undergoing stimulated emission and
absorption of photons in the cavity (resulting in a momentum change of the atom).
It can be found that a vacuum cavity can also generate a similar
transverse effect on a neutral atom via the virtual excitation of a
photon. Here, we propose a weak value amplification (WVA) setup to
observe this interesting nonclassical effect of the vacuum. After the
atom-cavity interaction, we perform a single-qubit operation on the
two internal states of the atoms and post-select an internal state.
We then obtain a weak value whose real and imaginary parts
determine, respectively, the shifts of the average momentum and
position of the atoms' external motion. Consequently, the
controllable weak value can be used to amplify the vacuum-induced
transverse shifts of the atoms. It is shown that the present WVA could
offer certain advantages for experimentally detecting the weak
transverse effects of atoms.
Our paper is organized as follows. In Sec.~II, we present the
vacuum-induced weak coupling between the internal and external
motions of free atoms. This coupling acts as a force pushing the
neutral atoms transversely. In Sec.~III, we obtain the desired
weak value using a single-qubit operation and post-selection, and
use it to amplify the transverse shifts of the atoms. In Sec.~IV, we
discuss the physical meaning of the WVA. Our conclusions are summarized
in Sec.~V.
\section{The vacuum-induced coupling between the internal and external motions of free atoms}
Following the original work of AAV, we consider the weak measurement
experiment shown in Fig.~1. Spatially coherent atoms, e.g.,
a released BEC~\cite{Kapitza-Dirac-RMP}, are injected into the
equipment through a pinhole located at the point $(0,0,0)$.
This pinhole selects a part of the matter wave, and thus the
positional uncertainty of the selected atoms is on the order of the
size of the pinhole. Hence, one can use the typical Gaussian wave packet
to describe the spatially coherent atoms (after the pinhole). In
the $x$ direction, the Gaussian state reads
\begin{equation}
|G\rangle=\int_{-\infty}^{\infty}dx \phi(x) |x\rangle\,.
\end{equation}
Here, $\phi(x)=\langle
x|G\rangle=(2\pi\Delta^2)^{-1/4}\exp[-x^2/(4\Delta^2)]$ is the
probability amplitude of the position eigenstate $|x\rangle$, and
$\Delta$ describes the root-mean-square (rms) width of the
wave packet. Of course, the state (2) can also be written as
$|G\rangle=\int_{-\infty}^{\infty}dp\, \phi(p) |p\rangle$ with the
momentum eigenstate $|p\rangle$ and the Gaussian function
$\phi(p)=\langle
p|G\rangle=[2\Delta^2/(\pi\hbar^2)]^{1/4}\exp(-\Delta^2p^2/\hbar^2)$.
For this Gaussian state, the expectation value of position is
$\langle x\rangle=0$ and its uncertainty reads $\Delta=\sqrt{\langle
x^2\rangle-\langle x\rangle^2}$. The average momentum along the $x$
direction is $\langle p\rangle=0$ and its uncertainty reads
$\Delta_{p}=\sqrt{\langle p^2\rangle-\langle
p\rangle^2}=\hbar/(2\Delta)$. Physically, the uncertainty $\Delta$
(or $\Delta_p$) determines the main distribution range of the particles'
positions (or momenta); outside this range, the probability of finding
the particles is negligible. Below, we study the change induced by
the vacuum field (in cavity 1) on the initial wave packet $\phi(x)$
within a very short duration (i.e., the free diffraction of the atom is
negligible).
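As a numerical aside (not part of the derivation), the two Gaussian representations above can be checked to be normalized and to satisfy $\Delta\,\Delta_p=\hbar/2$. A short Python sketch, with the illustrative choices $\hbar=1$ and $\Delta=0.7$, reads:

```python
import numpy as np

hbar = 1.0      # work in units where hbar = 1
Delta = 0.7     # rms position width (an arbitrary illustrative value)

# position-space Gaussian phi(x) = (2 pi Delta^2)^(-1/4) exp[-x^2/(4 Delta^2)]
x = np.linspace(-15, 15, 60001)
dx = x[1] - x[0]
phi_x = (2 * np.pi * Delta**2) ** -0.25 * np.exp(-x**2 / (4 * Delta**2))

# momentum-space Gaussian phi(p) = [2 Delta^2/(pi hbar^2)]^(1/4) exp(-Delta^2 p^2/hbar^2)
p = np.linspace(-15, 15, 60001)
dp = p[1] - p[0]
phi_p = (2 * Delta**2 / (np.pi * hbar**2)) ** 0.25 * np.exp(-(Delta * p / hbar) ** 2)

# both probability densities are normalized
assert np.isclose(np.sum(phi_x**2) * dx, 1.0)
assert np.isclose(np.sum(phi_p**2) * dp, 1.0)

# rms widths: Delta in position and Delta_p = hbar/(2 Delta) in momentum,
# saturating the Heisenberg bound Delta * Delta_p = hbar/2
Dx = np.sqrt(np.sum(x**2 * phi_x**2) * dx)
Dp = np.sqrt(np.sum(p**2 * phi_p**2) * dp)
assert np.isclose(Dx, Delta)
assert np.isclose(Dp, hbar / (2 * Delta))
print(Dx * Dp)  # -> 0.5 (= hbar/2)
```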
\begin{figure}[tbp]
\includegraphics[width=14cm]{F1.eps}
\caption{(Color online) Sketch of the weak measurement process. The
two-level atoms are prepared in a certain internal state
$|S_i\rangle$ and pass through a pinhole with momentum along the
$z$ direction. The vacuum field (with an $x$-directional gradient)
in cavity $1$ generates a weak coupling between the atoms' internal
states and the external $x$-directional motion. Cavity $2$, with
classical light, resonantly excites the atoms and generates the
desired single-qubit operation $\hat{U}$. The applied voltage $\pm
V$ ionizes the atoms in the excited state (similar to the procedure in
the experiments of the Haroche
group~\cite{HarocheRabi,HarocheCat,HarocheRMP}) and leaves the
ground-state atoms to be detected. In the selected ground-state
ensemble, the atoms exhibit a shift (along the $x$ direction) of the
average position on the deposition plate. This shift can be described by the
so-called weak value, which depends on the pre-selection
$|S_i\rangle$ and the single-qubit operation $\hat{U}$. }
\end{figure}
In cavity $1$, the quantized field of a mode takes the
form~\cite{Scully}
\begin{equation}
\vec{E}=\vec{\tau}E_{0}\sin (kx+kx_0)(\hat{a}^\dagger+\hat{a})
\end{equation}
which excites the incoming atoms. Here, $\vec{\tau}$, $E_{0}$, and
$k$ are, respectively, the polarization vector, amplitude, and
wave number of the standing wave (such as the first excited mode).
$\hat{a}^\dagger$ and $\hat{a}$ are, respectively, the creation and
annihilation operators of the corresponding cavity mode (with
frequency $\omega_c$).
We consider the microwave excitation of two-level Rydberg atoms.
Although the orbital radii of Rydberg states are very large (about
$10^3$ atomic units~\cite{HarocheRabi,HarocheCat,HarocheRMP}), they
are far smaller than the wavelength of the microwave cavity (on the
order of a centimeter). Therefore, within the atomic internal region the
driving field (3) can be regarded as uniform. Performing the dipole
approximation, the interaction between the atom and the cavity field
reads
\begin{equation}
\hat{H}_{\rm int}=\hbar\Omega_{0}\sin
(kx+kx_0)(\hat{a}^\dagger+\hat{a})\hat{\sigma}_x
\end{equation}
with the so-called Rabi frequency
$\Omega_{0}=E_{0}\mu/\hbar$~\cite{Scully}. Here, $\hbar$ is the
Planck constant divided by $2\pi$, $\hat{\sigma}_x=|e\rangle\langle
g|+|g\rangle\langle e|$ is the transition operator of the two-level
atom with the ground state $|g\rangle$ and the excited state
$|e\rangle$, and $\mu$ is the transition matrix element of the
two-level atom.
Considering $k\Delta\ll1$ and $0\ll kx_0\ll\pi/2$, the Hamiltonian
(4) can be approximately written as
\begin{equation}
\hat{H}_{\rm
int}=\hbar\Omega\left(x+x_c\right)(\hat{a}^\dagger+\hat{a})\hat{\sigma}_x
\end{equation}
with the constants $\Omega=k\cos(kx_0)\Omega_{0}$ and
$x_c=\tan(kx_0)/k$. Here, we have used the trigonometric identity
$\sin(kx+kx_0)=\cos(kx_0)\sin(kx)+\cos(kx)\sin(kx_0)$ and
neglected higher orders of $kx$. Note that $k\Delta\ll 1$ means
that the range of atomic motion in the $x$ direction is much smaller
than the wavelength of the cavity mode. The range of $x$ depends on
the initial uncertainty $\Delta$ and the wave-packet spread (i.e.,
the diffraction). As mentioned earlier, the diffraction of the atom
is negligible if the duration of the cavity-atom interaction is very
short, i.e., $t\ll m\Delta^2/\hbar$ ($m$ is the mass of the atom). Thus,
the value of $x$ is on the order of its initial uncertainty $\Delta$
(e.g., $10~\mu$m), which can be much smaller than the wavelength of the
cavity mode (about $1$~cm~\cite{HarocheRMP}).
With the interaction (5), the total Hamiltonian of the system can be
written as
\begin{equation}
\hat{H}_p=\frac{p^2}{2m}+\frac{\hbar\omega_{\rm
a}}{2}\hat{\sigma}_z+\hbar\omega_c(\hat{a}^\dagger\hat{a}+\frac{1}{2})+\hbar
\Omega\left(\hat{x}+x_c\right)(\hat{a}^\dagger+\hat{a})\hat{\sigma}_x
\end{equation}
in the Hilbert space of momentum eigenstates. In this space, the
position operator is given by $\hat{x}=i\hbar\partial/\partial p$.
Physically, the first term on the right-hand side of Eq.~(6) describes
the c.m. motion of the free atom. The second term describes the atom's
two internal levels (via the Pauli operator
$\hat{\sigma}_z=|e\rangle\langle e|-|g\rangle\langle g|$ and the
transition frequency $\omega_a$). The third term is the free
Hamiltonian of the cavity mode. The last term describes the
coupling between the three degrees of freedom considered, i.e., a
position-dependent Jaynes-Cummings interaction. In the rotating
frame defined by $\hat{U}_1=\exp[-ip^2t/(2m\hbar )]$, the
Hamiltonian (6) can be written as
\begin{equation}
\hat{H}_{p}=\frac{\hbar\omega_{\rm
a}}{2}\hat{\sigma}_z+\hbar\omega_c(\hat{a}^\dagger\hat{a}+\frac{1}{2})+\hbar
\Omega(\hat{x}+x_c+\frac{pt}{m})(\hat{a}^\dagger+\hat{a})\hat{\sigma}_x\,
\end{equation}
With this transformation, the free term $p^2/(2m)$ is eliminated.
Since the atom crosses the cavity rapidly (i.e., the effective
interaction duration $t$ is sufficiently short), the atom-cavity
interaction is impulsive, corresponding to the von Neumann
measurement~\cite{AAV,Jozsa}. Thus, $pt/m\rightarrow0$, and the
Hamiltonian (7) reduces to
\begin{equation}
\hat{H}_{p}=\frac{\hbar\omega_{\rm
a}}{2}\hat{\sigma}_z+\hbar\omega_c(\hat{a}^\dagger\hat{a}+\frac{1}{2})+\hbar
\Omega\left(\hat{x}+x_c\right)(\hat{a}^\dagger+\hat{a})\hat{\sigma}_x\,.
\end{equation}
Performing a unitary transformation $\hat{U}_2=\exp[-i\omega_c
t(\hat{a}^\dagger\hat{a}+1/2)-it\omega_a\hat{\sigma}_z/2]$, the
Hamiltonian (8) further reduces to
\begin{equation}
\hat{H}_{p}=\hbar
\Omega(\hat{x}+x_c)\left(\hat{a}^\dagger\hat{\sigma}_-e^{-i\delta
t}+\hat{a}\hat{\sigma}_+e^{i\delta t}\right)\,
\end{equation}
with the detuning $\delta=\omega_{\rm a}-\omega_c$ and the
operators $\hat{\sigma}_-=|g\rangle\langle e|$ and
$\hat{\sigma}_+=|e\rangle\langle g|$. Here, the usual rotating-wave
approximation has been performed, i.e., the terms oscillating at the
sum frequency $\omega_{\rm a}+\omega_c$ have been neglected.
The time-evolution operator for the Hamiltonian (9) is given by
the Dyson series:
\begin{equation}
\begin{array}{l}
\hat{U}_{\rm evol}=1+\left(\frac{-i}{\hbar}\right)\int_0^t\hat{H}_p(t_1)dt_1\\
\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
+\left(\frac{-i}{\hbar}\right)^2\int_0^t\hat{H}_p(t_1)
\int_0^{t_1} \hat{H}_p(t_2)dt_2dt_1\\
\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,+\cdots\,.
\end{array}
\end{equation}
Under the large-detuning condition $\Omega\ll\delta$, the above
time-evolution operator can be approximately written as
\begin{equation}
\hat{U}_{\rm evol}\approx e^{-\frac{i}{\hbar}\hat{H}_{\rm eff} t}
\end{equation}
with the effective Hamiltonian $\hat{H}_{\rm
eff}=(\hbar\Omega^2/\delta)(\hat{x}+x_c)^2(\hat{a}^\dagger\hat{a}\hat{\sigma}_z+|e\rangle\langle
e|)$. Considering that the cavity is in the vacuum state $|0\rangle$,
i.e., $\hat{a}^\dagger\hat{a}|0\rangle=0$, the effective Hamiltonian
reduces to
\begin{equation}
\hat{H}_{\rm eff}=\hbar
g_0\left(\frac{1}{x_c}\hat{x}+1\right)^2|e\rangle\langle e|\,
\end{equation}
with $g_0=(\Omega x_c)^2/\delta$. This Hamiltonian describes a
position-dependent vacuum Rabi splitting~\cite{Harochevacuum}, and
the parameter $g_0$ describes the coupling strength between the
internal and external motions of the atom. Numerically, considering a
cavity-mode wavelength $\lambda=1$~cm and a Rabi frequency
$\Omega_0/2\pi=10~$kHz~\cite{HarocheRabi}, we have $\Omega
x_c=\Omega_0\sin(k x_0)\approx2\pi\times7$~kHz with $kx_0=\pi/4$,
and consequently $g_0\approx2\pi\times0.7$~kHz with $\Omega
x_c/\delta=0.1$. Of course, as the detuning $\delta$ increases, the
coupling strength $g_0$ decreases significantly.
\section{The weak value amplification}
In the following, we show that the vacuum-induced interaction
(12) generates a small shift of the initial wave packet
$\phi(p)$, and that this displacement can be amplified by using the
weak-value technique. In momentum space, the evolved state of the atom
can be written as $|\psi\rangle=\hat{U}_{\rm
evol}|G\rangle|S_i\rangle=\int_{-\infty}^{\infty} dp\,\psi |p\rangle$,
with $|S_i\rangle$ being the initial state of the atomic qubit. We
rewrite the initial Gaussian wave function as
$\phi(p)=\phi(\tilde{p})=(2\pi)^{-1/4}\Delta_p^{-1/2}\exp(-\tilde{p}^2)$
with the dimensionless number $\tilde{p}=p\Delta/\hbar$. Then, we
have
\begin{equation}
\psi=e^{g_c|e\rangle\langle e|\frac{\partial}{\partial
\tilde{p}}}e^{ig_c'|e\rangle\langle e|\frac{\partial^2}{\partial
\tilde{p}^2}}\phi(\tilde{p})|i\rangle\,
\end{equation}
by using the relation $\hat{x}=i\hbar\partial/\partial
p=i\Delta\partial/\partial \tilde{p}$. Here, $g_c=2g_0t(\Delta/x_c)$
and $g_c'=g_0t(\Delta/x_c)^2$ are dimensionless coupling
parameters, and $|i\rangle=\exp\left(-ig_0t|e\rangle\langle
e|\right)|S_i\rangle$. Considering $\Delta\ll x_c$, i.e., $g_c'\ll
g_c$, the state (13) can be approximately written as
\begin{equation}
\psi=e^{g_c|e\rangle\langle e|\frac{\partial}{\partial
\tilde{p}}}\phi(\tilde{p})|i\rangle\,.
\end{equation}
For simplicity, we redefine
$|i\rangle=\alpha|g\rangle+\beta\exp(i\theta)|e\rangle$ as the
initial internal state of the atoms (which can be prepared by
well-known single-qubit operations). Here, $\theta$ is the phase of
the superposition state, and $\alpha$ and $\beta$ are the
superposition coefficients (real numbers) satisfying the normalization
condition $\alpha^2+\beta^2=1$. Immediately, we have the state
evolution
$\phi(p)|i\rangle\longrightarrow\alpha\phi(p)|g\rangle+\beta
e^{i\theta}\phi(p+\hbar g_c/\Delta)|e\rangle$, and consequently the
expectation value of the atomic momentum reads
\begin{equation}
\langle p\rangle=-\beta^2\frac{\hbar
g_c}{\Delta}=-2\beta^2g_c\Delta_p\,.
\end{equation}
This equation means that the vacuum in cavity $1$ generates a
transverse shift $\langle p\rangle-0=\langle p\rangle$ in the
average momentum of the atoms. Because $\beta^2\leq1$, the shift
$\langle p\rangle\rightarrow0$ for a very weak coupling
$g_c\rightarrow0$. Furthermore, one can easily verify that the
expectation value of the atomic position is $\langle x\rangle=0$. These
results indicate that the weak coupling $g_c$ cannot generate
significant changes in either observable, $\langle p\rangle$ or
$\langle x\rangle$.
We now use the weak-value technique to amplify the shifts $\langle
p\rangle$ and $\langle x\rangle$. First, we perform a single-qubit
operation $\hat{U}=\exp\left(-i\eta\hat{\sigma}_x\right)$ on the
state (14), with the controllable parameter $\eta$. This
single-qubit operation can be realized by classical
resonant light, as shown in Fig.~1. Consequently, we have the
final state
\begin{equation}
\begin{array}{l}
\psi'=\hat{U}\psi=\hat{U}e^{g_c|e\rangle\langle
e|\frac{\partial}{\partial
\tilde{p}}}\phi(\tilde{p})|i\rangle\\
\\
\,\,\,\,\,\,\,=\hat{U} \left[1+g_c(|e\rangle\langle
e|)\frac{\partial}{\partial
\tilde{p}}+\frac{g_c^2}{2}(|e\rangle\langle
e|)^2\frac{\partial^2}{\partial
\tilde{p}^2}+\cdots\right]\phi(\tilde{p})|i\rangle\,.
\end{array}
\end{equation}
Second, we post-select an eigenstate of the atomic qubit, e.g.,
$|g\rangle$, and the external motion of the atoms immediately
collapses onto the wave function:
\begin{equation}
\psi'_{w}=\langle g|\psi'\rangle=\langle
g|\hat{U}|i\rangle\left(1+g_cA_w\frac{\partial}{\partial
\tilde{p}}+\frac{g_c^2A_w}{2}\frac{\partial^2}{\partial
\tilde{p}^2}+\cdots\right)\phi(\tilde{p})
\end{equation}
with
\begin{equation}
A_w=\frac{\langle g|(\hat{U}|e\rangle\langle e|)|i\rangle}{\langle
g|\hat{U}|i\rangle}\,.
\end{equation}
Here, we have used the relation $(|e\rangle\langle
e|)^n=|e\rangle\langle e|$ with $n=1,2,3,\cdots$. $A_w$ is our weak
value, although it does not satisfy the standard definition of
Eq.~(1); this will be explained in Sec.~IV. Physically, the
post-selection of $|g\rangle$ could be realized by
field ionization~\cite{HarocheRabi,HarocheCat,HarocheRMP}. Since
$|e\rangle$ and $|g\rangle$ have different ionization energies,
the ionization is state selective. Suppose that only the atoms in the
excited state $|e\rangle$ are effectively ionized by the applied
moderate electric field; the ionized atoms are then accelerated in
the $y$ direction and discarded. The ground-state atoms, however,
arrive at the plate and are finally detected, as shown in Fig.~1.
Considering the weak interaction, i.e., $g_c\ll1$ and
$g_c^2|A_w|\ll1$, the wave function (17) can be approximately
written as
\begin{equation}
\begin{array}{l}
\psi_{w}=\frac{\psi'_{w}}{\langle
g|\hat{U}|i\rangle}=\left(1+g_cA_w\frac{\partial}{\partial
\tilde{p}}\right)\phi(\tilde{p})
\\
\\
\,\,\,\,\,\,\,\,=\phi(p)-
\frac{2g_c\Delta}{\hbar}\text{Re}(A_w)p\phi(p)-i\frac{2g_c\Delta}{\hbar}\text{Im}(A_w)p\phi(p)\,.
\end{array}
\end{equation}
Here, the higher orders of $g_c$ have been neglected, and
$\text{Re}(A_w)$ and $\text{Im}(A_w)$ are, respectively, the real and
imaginary parts of $A_w$. With this approximation, the probability
of successfully post-selecting $|g\rangle$ reads $P\approx|\langle
g|\hat{U}|i\rangle|^2$. According to Eq.~(19), we have the
expectation value of momentum:
expectation value of momentum:
\begin{equation}
\langle\hat{p}\rangle_w=\int_{-\infty}^{\infty}\psi^*_{w}p\psi_{w}dp\approx
-\hbar \frac{g_c}{\Delta}
\text{Re}(A_w)=-2g_c\Delta_p\text{Re}(A_w)\,.
\end{equation}
This means that, within the post-selected sub-ensemble, the shift of
the average momentum $\langle p\rangle_w-0=\langle p\rangle_w$ is
proportional to the real part of the weak value. On the other hand,
in the position representation, the wave function (19) reads
\begin{equation}
\begin{array}{l}
\phi_w=\int_{-\infty}^{\infty}\psi_{w}\langle x|p\rangle
dp\\
\\
\,\,\,\,\,\,\,\, =\frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty}
\phi(p)e^{ipx/\hbar}dp+\frac{1}{\sqrt{2\pi\hbar}}\frac{\hbar
g_cA_w}{\Delta}\int_{-\infty}^{\infty}
e^{ipx/\hbar}\frac{\partial\phi(p)}{\partial p}dp\\
\\
\,\,\,\,\,\,\,\, =(1-i\frac{g_cA_w}{\Delta}x)\phi(x)
\end{array}
\end{equation}
and consequently the expectation value of position reads
\begin{equation}
\langle x\rangle_w=\int_{-\infty}^{\infty}\phi_w^*x\phi_wdx\approx
\frac{2g_c}{\Delta}\text{Im}(A_w)\int_{-\infty}^{\infty}\phi(x)x^2\phi(x)dx=2g_c\Delta\text{Im}(A_w)\,.
\end{equation}
This indicates that, within the post-selected sub-ensemble, the shift
of the average position $\langle x\rangle_w-0=\langle x\rangle_w$ is
proportional to the imaginary part of the weak value.
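Equations (20) and (22) can also be checked numerically from the first-order wave functions (19) and (21). The Python sketch below (with $\hbar=\Delta=1$ and an arbitrarily chosen complex weak value) is only an illustrative aid:

```python
import numpy as np

hbar, Delta, gc = 1.0, 1.0, 1e-3   # units and a weak coupling (illustrative)
Aw = 2.0 - 3.0j                    # an arbitrarily chosen complex weak value

# momentum space, Eq. (19): psi_w = [1 - (2 gc Delta/hbar) Aw p] phi(p)
p = np.linspace(-10, 10, 200001)
phi_p = (2 * Delta**2 / (np.pi * hbar**2)) ** 0.25 * np.exp(-(Delta * p / hbar) ** 2)
psi_w = (1 - 2 * gc * Delta / hbar * Aw * p) * phi_p
rho_p = np.abs(psi_w) ** 2
p_mean = np.sum(p * rho_p) / np.sum(rho_p)
Delta_p = hbar / (2 * Delta)
# Eq. (20): <p>_w = -2 gc Delta_p Re(A_w)
assert np.isclose(p_mean, -2 * gc * Delta_p * Aw.real, rtol=1e-2)

# position space, Eq. (21): phi_w = (1 - i gc Aw x/Delta) phi(x)
x = np.linspace(-10, 10, 200001)
phi_x = (2 * np.pi * Delta**2) ** -0.25 * np.exp(-x**2 / (4 * Delta**2))
phi_w = (1 - 1j * gc * Aw * x / Delta) * phi_x
rho_x = np.abs(phi_w) ** 2
x_mean = np.sum(x * rho_x) / np.sum(rho_x)
# Eq. (22): <x>_w = 2 gc Delta Im(A_w)
assert np.isclose(x_mean, 2 * gc * Delta * Aw.imag, rtol=1e-2)
print(p_mean, x_mean)
```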
With the single-qubit operations
$\hat{U}|g\rangle=\cos(\eta)|g\rangle-i\sin(\eta)|e\rangle$ and
$\hat{U}|e\rangle=\cos(\eta)|e\rangle-i\sin(\eta)|g\rangle$, our
weak value reads
\begin{equation}
A_w=\frac{\langle g|(\hat{U}|e\rangle\langle e|)|i\rangle}{\langle
g|\hat{U}|i\rangle}=\frac{1}{Ae^{i\vartheta}+1}
\end{equation}
with $A=\alpha\cos(\eta)/[\beta\sin(\eta)]$ and
$\vartheta=(\pi/2)-\theta$. Consequently, we have
\begin{equation}
\text{Re}(A_w)=\frac{1+A\cos(\vartheta)}{A^2+2A\cos(\vartheta)+1}\,
\end{equation}
\begin{equation}
\,\,\text{Im}(A_w)=\frac{-A\sin(\vartheta)}{A^2+2A\cos(\vartheta)+1}\,.
\end{equation}
These values can be made as large as desired by properly adjusting the
parameters $A$ and $\vartheta$. For example, if $\cos(\vartheta)=1$
and $A\rightarrow-1$, then
$\text{Re}(A_w)=1/(1+A)\rightarrow\infty$. If $A=-\cos(\vartheta)$
and $\vartheta\rightarrow0$, then
$\text{Im}(A_w)=\cot(\vartheta)\rightarrow\infty$.
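The closed forms (24) and (25) and the two divergent limits quoted above follow directly from Eq.~(23); a small Python check (with illustrative parameter values) reads:

```python
import numpy as np

def weak_value(A, theta):
    """A_w = 1 / (A e^{i vartheta} + 1), Eq. (23)."""
    return 1.0 / (A * np.exp(1j * theta) + 1.0)

# the closed forms (24) and (25) agree with the complex expression
A, th = 0.6, 0.8   # illustrative values
den = A**2 + 2 * A * np.cos(th) + 1
assert np.isclose(weak_value(A, th).real, (1 + A * np.cos(th)) / den)
assert np.isclose(weak_value(A, th).imag, -A * np.sin(th) / den)

# divergence of the real part: cos(vartheta) = 1, A -> -1 gives Re = 1/(1+A)
print(weak_value(-0.999, 0.0).real)          # ~ 1000
# divergence of the imaginary part: A = -cos(vartheta), vartheta -> 0
# gives Im = cot(vartheta)
print(weak_value(-np.cos(0.01), 0.01).imag)  # ~ 1/0.01 = 100
```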
With these enlarged weak values, the weak interaction $g_c$ can
significantly change the transverse c.m. motion of the atoms via the
basic equations:
\begin{equation}
\,\,\,\,\frac{\langle p\rangle_w}{2\Delta_p}\approx
-g_c\text{Re}(A_w)\,,
\end{equation}
\begin{equation}
\frac{\langle x\rangle_w}{2\Delta}\approx g_c\text{Im}(A_w)\,.
\end{equation}
We emphasize that the shifts $\langle p\rangle_w$ and
$\langle x\rangle_w$ cannot be infinitely amplified, as the weak
values were obtained under the weak-interaction condition
$g_c^2|A_w|\ll1$. That is, the amplified displacements of the average
position and momentum are limited to the regimes $g_c\langle
p\rangle_w/(2\Delta_p)\ll1$ and $g_c\langle
x\rangle_w/(2\Delta)\ll1$, respectively. Hence, the present
amplification effects are significant only for a weak interaction
$g_c\rightarrow0$.
There is a cost to WVA. The probability $P\approx|\langle
g|\hat{U}|i\rangle|^2$ of successfully post-selecting $|g\rangle$
decreases rapidly with increasing $\text{Re}(A_w)$ or
$\text{Im}(A_w)$, so that a more significant amplification requires
more atoms. In terms of metrology, the WVA may be suboptimal for
parameter estimation since many atoms (and thus information) are
discarded~\cite{SNR,Fisher,Knee2}. However, in practical
experimental systems the discarded atoms may also bring noise into
the final detection. As pointed out in previous
works~\cite{merit0,merit1,merit2,merit3,SNRZ,JordanX}, the WVA can
offer certain technical advantages, for example, suppressing
systematic errors~\cite{SNRZ} or avoiding detector
saturation~\cite{JordanX}.
In the present system, it would be very difficult to precisely scan
the position or momentum distribution of the final atoms. Instead, one
can place two atom detectors (such as hot-wire
ionizers~\cite{Kapitza-Dirac-RMP}) at the symmetric positions $x$
and $-x$ to estimate the transverse effects of the atoms. Per unit
time, the expected atom counts in the detectors are given by
$\bar{n}_1=NP\int_{x-l/2}^{x+l/2}|\phi_w(x)|^2dx$ and $\bar{n}_2=
NP\int_{-x-l/2}^{-x+l/2}|\phi_w(x)|^2dx$, respectively. Here, $N$ is the
total number of input atoms per unit time, and $l<x$ is the
atom-collecting region of the detectors. According to $\bar{n}_1$ and
$\bar{n}_2$, we have
\begin{equation}
\bar{s}=\frac{\bar{n}_1}{\bar{n}_2}-1=\frac{1+2g_c\text{Im}(A_w)\frac{\bar{x}_l}{\Delta}}{1-2g_c\text{Im}(A_w)\frac{\bar{x}_l}{\Delta}}-1
\approx 4g_c\text{Im}(A_w)\frac{\bar{x}_l}{\Delta}
\end{equation}
with
$\bar{x}_l=\int_{x-l/2}^{x+l/2}x\phi^2(x)dx\big/\int_{x-l/2}^{x+l/2}\phi^2(x)dx$.
Above, the higher orders of $g_c$ have been neglected, and $\bar{s}$
can be regarded as the signal of the atoms' transverse shift. We note
that $\bar{n}_1$, $\bar{n}_2$, and consequently $\bar{s}$ are
expectation values. In practice, the experimental results may take the form
$n_{i}=\chi \bar{n}_{i}+\delta_{i}^s+\delta_{i}^r$ (with the index
$i=1,2$), and consequently Eq.~(28) is replaced by
$s=(n_1/n_2)-1$. Here, $\chi$ is the detection efficiency of the
atom detectors. There are two kinds of measurement errors,
namely the systematic error $\delta_{i}^s$ and the random error
$\delta_{i}^r$. Certainly, the WVA does not offer an advantage for
suppressing the random error, since the number of input atoms is reduced
by the post-selection~\cite{SNRZ}. However, the
WVA is very useful for suppressing a systematic error that is
proportional to the number of atoms, i.e.,
$\delta_{i}^s=\delta_0\bar{n}_i$ with $\delta_0$ a small
uncertainty coefficient. Such a systematic error may arise from
the unsteady detection efficiency of the atom detectors,
the uncertain location of the detectors, etc.
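The first-order relation (28) between the two detector counts and $\text{Im}(A_w)$ can be illustrated numerically. In the Python sketch below, the values $\Delta=1$, $g_c=0.01$, $\text{Im}(A_w)=2$, detector centre $x=1$, and width $l=0.5$ are arbitrary illustrative choices:

```python
import numpy as np

Delta, gc, ImAw = 1.0, 0.01, 2.0   # weak coupling and Im(A_w): illustrative values
xd, l = 1.0, 0.5                   # detector centres +/- xd and collecting width l

x = np.linspace(-8, 8, 160001)
dx = x[1] - x[0]
phi2 = (2 * np.pi * Delta**2) ** -0.5 * np.exp(-x**2 / (2 * Delta**2))  # phi(x)^2
# first-order detection density: |phi_w|^2 ~ [1 + 2 gc Im(A_w) x/Delta] phi(x)^2
rho = (1 + 2 * gc * ImAw * x / Delta) * phi2

win1 = (x > xd - l / 2) & (x < xd + l / 2)      # detector at +xd
win2 = (x > -xd - l / 2) & (x < -xd + l / 2)    # detector at -xd
n1 = np.sum(rho[win1]) * dx
n2 = np.sum(rho[win2]) * dx
s = n1 / n2 - 1

# compare with Eq. (28): s ~ 4 gc Im(A_w) xbar_l / Delta
xbar_l = np.sum(x[win1] * phi2[win1]) / np.sum(phi2[win1])
assert abs(s - 4 * gc * ImAw * xbar_l / Delta) < 0.1 * s
print(s)
```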
\section{Discussion}
Here, we give a brief discussion of the physical meaning of the
WVA. In the original work of AAV~\cite{AAV}, there are two SG
devices. The first is used to generate a weak coupling between
the spin and the orbital motion of the electron, and the second is
arranged to perform the post-selection of the electron's spin states. The
present weak-measurement process is similar to that of AAV. Cavity
$1$ plays the role of an atomic SG device, implementing the coupling
between the internal qubit and the external c.m. orbital motion of the
atom. Cavity $2$ acts as the second SG device of AAV, coherently
manipulating the atoms. After cavity $1$, the atom is
in the state (14), which can be written in the standard form
$\psi\approx\phi(p)|i\rangle-ig_c\hat{A}\hat{P}\phi(p)|i\rangle $,
with $\hat{A}=|e\rangle\langle e|$ and
$\hat{P}=i\hbar\partial/\partial p$. Using the orthonormal
eigenstates $|g\rangle$ and $|e\rangle$ of the two-level atom,
$\psi$ can be further written as:
\begin{equation}
\psi=(|g\rangle\langle g|+|e\rangle\langle e|)\psi=\langle
g|i\rangle\phi(p,A_g)|g\rangle+\langle
e|i\rangle\phi(p,A_e)|e\rangle\,.
\end{equation}
Here, $A_g=\langle g|\hat{A}|i\rangle/\langle g|i\rangle$,
$A_e=\langle e|\hat{A}|i\rangle/\langle e|i\rangle$,
$\phi(p,A_g)=(1-ig_cA_g\hat{P})\phi(p)$, and
$\phi(p,A_e)=(1-ig_cA_e\hat{P})\phi(p)$.
Obviously, Eq.~(29) represents an entangled state. If the internal
state $|g\rangle$ is measured, the external motion of the atom
collapses onto the wave function $\phi(p,A_g)$; whereas if the state
$|e\rangle$ is measured, the atom collapses onto $\phi(p,A_e)$. These
measurements performed on the qubit are just the well-known
projective measurements $\hat{P}_g=|g\rangle\langle g|$ and
$\hat{P}_e=|e\rangle\langle e|$, and the outcomes $A_g$ and $A_e$
can be regarded as weak values since they take the same form as
Eq.~(1). However, one finds that $A_g=0$ and $A_e=1$ because
$\hat{A}=|e\rangle\langle e|$, so they cannot realize the
desired amplification, whatever the initial state
$|i\rangle$ is. We note that $A_g$ and $A_e$ are both real. Hence,
directly applying the projective measurements to the state (29) cannot
yield the positional shifts of the atoms mentioned
earlier.
Compared to the projective measurement, the weak measurement with
the post-selection $\hat{P}_f=|f\rangle\langle f|$ is a more general
concept, because the state $|f\rangle$ goes beyond the eigenstates
of the system. How can such a coherent superposition of the eigenstates
be realized? In AAV's proposal, the desired post-selection is
implemented by the second SG device. It couples the spin to the
$y$-directional orbital motion of the electron (the third degree of
freedom of the electron), and consequently one can select the
$y$-directional motions (via a strong measurement) to realize a
post-selection of the superposition state of the spin (see, e.g.,
Ref.~\cite{PRD}, which discusses AAV's idea in detail). In the
recent optics experiments~\cite{RMP}, the post-selection is realized
by a polarizer oriented at a certain angle, which selects
the desired superposition state of the light's polarization.
Here, cavity $2$ together with the ionization electrodes
realizes the operation $\hat{P}'_f=|g\rangle\langle
g|\hat{U}=|g\rangle\langle f|$ on the state (29), and the weak value
(18) can be written in the standard form
(18) can be written as the standard form of
\begin{equation}
A_w=\frac{\langle g|\hat{U}\hat{A}|i\rangle}{\langle
g|\hat{U}|i\rangle}=\frac{\langle f|\hat{A}|i\rangle}{\langle
f|i\rangle}
\end{equation}
with $\langle f|=\langle g|\hat{U}$. This weak value can be made as
large as desired and, in particular, can have $\text{Im}(A_w)\neq0$.
Physically, the present weak value can be regarded as an outcome of
the coherent operation $\hat{U}$. One finds that the standard
post-selection also implies coherent operations, by writing
$\hat{P}_f=|f\rangle\langle f|=\hat{R}|g\rangle\langle
g|\hat{R}^\dagger$ with a unitary evolution operator $\hat{R}$ and
an eigenstate $|g\rangle$ of the system.
\section{Conclusion}
In this theoretical work, we have shown that a vacuum microwave
cavity can shift neutral atoms transversely. This
nonclassical effect is due to the vacuum-induced coupling between
the internal and external motions of free atoms, i.e., a
position-dependent vacuum Rabi splitting. We further showed that this
effect can be amplified by the weak-value technique. After
the atom-cavity coupling, we performed a single-qubit rotation on
the atomic internal states and then post-selected an
internal eigenstate (strong measurement). We thus obtained a weak
value, which was used to amplify the vacuum-induced shift of the
average position or momentum of the atoms. Technically, the present WVA
could offer advantages in practical experimental systems for
observing the weak transverse effect of atoms, such as suppressing
the systematic error of the detectors. Physically, our WVA is a
quantum-mechanical effect due to the necessary single-qubit
operation. Finally, we hope the present study encourages
further research on weak measurements and cavity QED.
{\bf Acknowledgements}: This work was partly supported by the
National Natural Science Foundation of China under Grant No. 11204249.
\\
\vspace{-1cm}
\section{Introduction}
Since the first large-scale simulations of self-gravitating systems,
the direct $N$-body method has gained a solid footing in the research
community. At the moment, $N$-body techniques are used in astronomical
studies of planetary systems, debris discs, stellar clusters, and galaxies,
all the way to simulations of the entire universe
\citep{2006astro.ph..1232H}. Outside astronomy, the main areas of
research that utilise the same techniques are molecular dynamics,
elementary-particle scattering simulations, plate tectonics, traffic
simulations, and chemical reaction network studies. In the latter
non-astronomical applications, the main force-evaluation routine is
not as computationally demanding as in gravitational $N$-body simulations, but the
backbone simulation environments are not very different.
The main difficulty in simulating self-gravitating systems is the lack
of antigravity, which results in the requirement of global
communication; each object feels the gravitational attraction of
every other object.
The first astronomical simulation of a self-gravitating $N$-body
system was carried out by \cite{1941ApJ....94..385H} with the use of
37 light bulbs and photoelectric cells to evaluate the forces on the
individual objects. Holmberg spent weeks performing this
quite moderate 37-particle simulation. Over the last 60 or so years,
many different techniques have been introduced to speed up the kernel
calculation. Today, such a calculation requires about 50\,000
integration steps for one dynamical time unit. At a speed of $\sim
10$\,Gflop/s, the calculation would be performed in a few seconds.
The gravitational $N$-body problem has made enormous advances in the
last decade due to algorithmic design. The introduction of digital
computers in the arena
\citep{1963ZA.....57...47V,1964ApNr....9..313A,1968BAN....19..479V} led
to a relatively quick evaluation of mutual particle forces. Advanced
integration techniques introduced to turn the particle forces in a
predicted space-time trajectory, opened the way to predictable
theoretical results \citep{1975ARA&A..13....1A,1999PASP..111.1333A}.
One of the major developments in the speed-up and improved accuracy of
the direct $N$-body problem was the introduction of the block-time
step algorithm \citep{1991ApJ...369..200M,1993ApJ...414..200M}.
In the late 1980s it became quite clear that the advances of modern
computer technology via Moore's law \citep{Moore} were insufficient to
simulate large star clusters by the new decade
\citep{1988ApJS...68..833M,1990ApJ...365..208M}. This realization
brought forward the initiatives employed around the development of
special hardware for evaluating the forces between the particles
\citep{1986LNP...267...86A,1996IAUS..174..141T,1998sssp.book.....M,2001ASPC..228...87M,2003PASJ...55.1163M},
and of the efficient use of assembler code on general purpose hardware
\citep{2006NewA...12..169N,2006astro.ph..6105N}.
One method to improve performance is by parallelising the force
evaluation (Eq.\,\ref{Eq:Force}) for use on a Beowulf or cluster
computer (with or without dedicated hardware)
\citep{2006astro.ph..8125H}, a large
parallel supercomputer \citep{2002NewA....7..373M,2003JCoPh.185..484D}
or for grid operations \citep{2004astro.ph.12206G}. In particular for
distributed hardware it is crucial to implement an algorithm that
limits communication as much as possible, otherwise the bottleneck
simply shifts from the force evaluation to interprocessor
communication.
A breakthrough in direct-summation $N$-body simulations came in the
late 1990s with the development of the GRAPE series of
special-purpose computers \citep{1998sssp.book.....M}, which achieve
spectacular speedups by implementing the entire force calculation in
hardware and placing many force pipelines on a single chip. The
latest special purpose computer for gravitational $N$-body
simulations, GRAPE-6, performs at a peak speed of about 64\,Tflop/s
\citep{2001ASPC..228...87M}.
In our standard setup, one GRAPE-6Af processor board is attached to a
host workstation, in much the same way that a floating-point or
graphics accelerator card is used. We use a smaller version of the
GRAPE-6: the GRAPE-6Af, which has four chips connected to a personal
workstation via the PCI bus, delivering a theoretical peak performance
of $\sim 131$\,Gflop/s for systems of up to 128k particles at a cost
of $\sim\$6$K
\citep{2005PASJ...57.1009F}. Advancement of particle positions
[$\mathcal{O}(N)$] is carried out on the host computer, while
interparticle forces [$\mathcal{O}(N^2)$] are computed on the GRAPE.
The latest development in this endeavour is the design and
construction of the GRAPE-DR, the special purpose computer which will
break the Pflop/s barrier by the summer of 2008
\citep{2005astro.ph..9278M}\footnote{See {\tt
http://grape.astron.s.u-tokyo.ac.jp/grape/computer/grape-dr.html}}.
One of the main arguments to develop such a high powered and
relatively diverse computer is to perform simulations of entire
galaxies \citep{2005JKAS...38..165M,Hoekstra}.
The main disadvantages of these special purpose computers, however, are
the relatively short mean time between failure, the limited
availability, the limited applicability, the limited on-board memory
to store particles, the simple fact that they are basically built by
a single research team, led by Prof.\ J. Makino, and the lack of
competing architectures.
The gaming industry, though not deliberately supportive of scientific
research, has been developing high power parallel vector processors
for performing specific rendering applications, which are in
particular suitable for boosting the frame-rate of games. Over the
last 7 years graphics processing units (GPUs) have evolved from fixed
function hardware for the support of primitive graphical operations to
programmable processors that outperform conventional CPUs, in
particular for vectorizable parallel operations. Regretfully, the
precision of these processors is still 32-bit IEEE, which is below
that of the average general purpose processor, but for many
applications it turns out that the higher (double) precision is not
crucial or can be emulated at some cost. It is because of these
developments that more
and more people use the GPU for wider purposes than just for graphics
\citep{GPUGems1,GPUGems2,buck04brook}. This type of programming is also
called general purpose computing on graphics processing units
(GPGPU)\footnote{see {\tt http://www.gpgpu.org}}. Earlier attempts to
use a GPU for gravitational $N$-body simulations were carried out
with approximate force evaluation methods using shared time steps
\cite{Nyland04}, but provided little improvement in performance.
A 25-fold speed increase compared to an Intel Pentium IV processor
was reported by \cite{Elsenetal06}, but details of their
implementation of the force evaluation algorithm are as yet unclear.
Using the GPU as a general purpose vector processor works as follows.
Colours in a computer are represented by one or more numbers. The
luminance can be represented by just a single number, whereas a
coloured pixel may contain separate values indicating the amount of
red, green and blue. A fourth value alpha may be included to indicate
the amount of transparency. Using this information, a pixel may be
drawn. There are many pixels in a frame, and ideally, these should be
updated all at the same time and at a rate exceeding the response time
of the human eye. This requires fast computations for updating the
pixels, for example when a camera moves or a new object comes into
view. Such operations usually have an impact on many or even all
pixels, so fast computations are required. But since the majority of
pixels do not require information from other pixels, processing can be
done efficiently in parallel. All information required to build a
pixel should go through a series of similar operations, a technique
which is better known as single instruction, multiple data
(SIMD). There are many different kinds of operations this information
needs to go through. The stream programming model has been designed to
make the information go through these operations efficiently, while
exposing as much parallelism as possible. The stream programming model
views all information as ``streams'' of ordered data of the same data
type. The streams pass through ``kernels'' that operate on the streams
and produce one or more streams as output.
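As an illustration, the stream model can be mimicked on a CPU: a kernel is mapped independently over every element of an input stream, and this mutual independence of the elements is exactly what the GPU exploits for parallelism. The following C sketch uses hypothetical names of our own choosing:

```c
#include <stddef.h>

/* A "kernel" in the stream model is a pure function applied
   independently to every element of an input stream; because no
   element depends on any other, all elements can be processed in
   parallel (SIMD). */
typedef float (*Kernel)(float);

void map_stream(Kernel k, const float *in, float *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = k(in[i]);   /* on a GPU, every i runs concurrently */
}

/* Example kernel: scale a luminance value. */
float brighten(float luminance)
{
    return 1.1f * luminance;
}
```

On the GPU the loop disappears: the hardware schedules one kernel invocation per stream element across the available pipelines.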
In this paper we report on our endeavour to convert a high precision
production quality $N$-body code to operate with graphics processor
units. In \S\,\ref{Sect:Nbody} we explain the adopted $N$-body
integration algorithm, in \S\,\ref{Sect:Cg} we address the programming
environment we used to program the GPU. In
\S\,\ref{Sect:Results} and \S\,\ref{Sect:performance} we present the
results on two GPUs and compare them with GRAPE-6Af and we discuss a
model to explain the GPUs performance. In
\S\,\ref{Sect:Discussion} we summarise our findings, and in the Appendix we present a snippet of the source code in Cg.
\section{Calculating the force and integrating the particles}
\label{Sect:Nbody}
The gravitational evolution of a system consisting of $N$ stars with
masses $m_j$ and at position ${\bf r}_j$ is computed by the direct
summation of the Newtonian force between each of the $N$ stars. The
force ${\bf F}_i$ acting on particle $i$ is then obtained by summation
of all other $N-1$ particles
\begin{equation}
{\bf F}_i \equiv m_i {\bf a}_i = G m_i \sum^{N}
_{j=1, j \ne i}
m_j
{{\bf r}_j-{\bf r}_i \over |{\bf r}_j-{\bf r}_i|^3}.
\label{Eq:Force}\end{equation}
Here $G$ is the Newton constant.
A cluster consisting of $N$ stars evolves dynamically due to the
mutual gravity of the individual stars. For an accurate force
calculation on each star a total of ${1 \over 2} N(N-1)$ partial
forces have to be computed. This $\mathcal{O}(N^2)$ operation is the
bottleneck for the gravitational $N$-body problem.
The GPU scheme described in this paper is implemented in the
$N$-body integrator. Here particle motion is calculated using a
fourth-order, individual-time step ``Hermite'' predictor-corrector
scheme (Makino and Aarseth 1992).\nocite{1992PASJ...44..141M} This
scheme works as follows. During a time step the positions (${\bf x}$)
and velocities (${\bf v} \equiv \dot{{\bf x}}$) are first predicted to
fourth order using the acceleration (${\bf a} \equiv \ddot{{\bf x}}$)
and the ``jerk'' ($\mbox{${\bf k}$} \equiv \dot{{\bf a}}$, the time derivative
of the acceleration) which are known from the previous step.
The predicted position (${\bf x}_p$) and velocity (${\bf v}_p$) are
\begin{eqnarray}
{\bf x}_p &=& {\bf x} + ({\bf v} + (dt/2) ({\bf a} + (dt/3) \mbox{${\bf k}$})) dt, \\
{\bf v}_p &=& {\bf v} + ({\bf a} + (dt/2) \mbox{${\bf k}$}) dt.
\end{eqnarray}
The acceleration and jerk are then recalculated at the predicted time,
using ${\bf x}_p$ and ${\bf v}_p$. Finally, a correction is applied based on the estimated
higher-order derivatives:
\begin{eqnarray}
{\bf a3} &=& 2 ({\bf a} - {\bf a}_p) + (\mbox{${\bf k}$} + \mbox{${\bf k}$}_p) dt, \\
{\bf a2} &=& -3 ({\bf a} - {\bf a}_p) - (2 \mbox{${\bf k}$} + \mbox{${\bf k}$}_p) dt.
\end{eqnarray}
where
\begin{eqnarray}
{\bf a2} &=& \dot{\mbox{${\bf k}$}} dt^2 / 2, \\
{\bf a3} &=& \ddot{\mbox{${\bf k}$}} dt^3 / 6.
\end{eqnarray}
This then leads to the new position and velocity at time $t+dt$:
\begin{eqnarray}
{\bf x} &=& {\bf x}_p + ({\bf a2}/12 + {\bf a3}/20) dt^2, \\
{\bf v} &=& {\bf v}_p + ({\bf a2}/3 + {\bf a3}/4) dt.
\end{eqnarray}
The new ${\bf a}$ and $\mbox{${\bf k}$}$ are computed by direct summation, and
the motion is subsequently corrected using the additional derivative
information thereby obtained.
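The predictor and corrector formulas above translate directly into code; the following C sketch shows one coordinate (the implementation applies the same expressions per vector component, with function and type names of our own choosing):

```c
/* One coordinate of the fourth-order Hermite step, following the
   formulas above.  a_p and k_p are the acceleration and jerk
   re-evaluated at the predicted position and velocity. */
typedef struct { double x, v; } State;

State hermite_predict(double x, double v, double a, double k, double dt)
{
    State s;
    s.x = x + (v + (dt / 2.0) * (a + (dt / 3.0) * k)) * dt;
    s.v = v + (a + (dt / 2.0) * k) * dt;
    return s;
}

State hermite_correct(State p, double a, double k,
                      double a_p, double k_p, double dt)
{
    /* estimated higher-order derivatives: a2 = k'dt^2/2, a3 = k''dt^3/6 */
    double a3 =  2.0 * (a - a_p) + (k + k_p) * dt;
    double a2 = -3.0 * (a - a_p) - (2.0 * k + k_p) * dt;
    State s;
    s.x = p.x + (a2 / 12.0 + a3 / 20.0) * dt * dt;
    s.v = p.v + (a2 / 3.0  + a3 / 4.0 ) * dt;
    return s;
}
```

A useful sanity check: on a trajectory with constant jerk (a cubic in time) the predictor is already exact, so the corrector terms $a2$ and $a3$ vanish identically.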
A single integration step in the integrator proceeds as follows:
\begin{itemize}
\item[$\bullet$] Determine which stars are to be updated. Each star
has an individual time ($t_i$) associated with it at which it
was last advanced, and an individual time step ($dt_i$). The
list of stars to be integrated consists of those with the
smallest $t_i+dt_i$. Time steps are constrained to be powers of
2, allowing ``blocks'' of many stars to be advanced
simultaneously \citep{1993ApJ...414..200M}.
\item[$\bullet$] Before the step is taken, check for system
reinitialization, diagnostic output, termination
of the run, storing data.
\item[$\bullet$] Perform low-order prediction of all particles to the
new time $t_i+dt_i$. This operation may be performed on the
GPU, if available.
\item[$\bullet$] Recompute the acceleration and jerk on all stars in
the current block (using the GPU, if available), and correct
their positions and velocities to fourth-order.
\end{itemize}
Note that this scheme is rather simple as it does not include
treatment for close encounters, binaries or higher order (hierarchical
or democratic) stable multiple systems.
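The first step, selecting the block of particles with the smallest $t_i+dt_i$, can be sketched as follows. Because the time steps are quantised to powers of two, the scheduled times are exact binary fractions and may safely be compared for equality (the function name is ours, for illustration):

```c
#include <stddef.h>

/* Collect the indices of the particles due for integration: those
   whose t_i + dt_i equals the global minimum.  With time steps
   constrained to powers of two, many particles share this minimum
   and are advanced together as one block.  Returns the block size
   and the new system time via *t_next. */
size_t next_block(size_t n, const double t[], const double dt[],
                  size_t block[], double *t_next)
{
    double tmin = t[0] + dt[0];
    for (size_t i = 1; i < n; i++)
        if (t[i] + dt[i] < tmin)
            tmin = t[i] + dt[i];

    size_t nb = 0;
    for (size_t i = 0; i < n; i++)
        if (t[i] + dt[i] == tmin)   /* exact: times are binary fractions */
            block[nb++] = i;

    *t_next = tmin;
    return nb;
}
```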
\begin{table}
\caption[]{ Detailed information on the hardware used in our
experiments. The first column gives the parameter followed by the
four different hardware setups (GRAPE-6Af, GeForce 8800GTX, Quadro
FX1400) and information about the host computer. The information for
the GRAPE is taken from \cite{2003PASJ...55.1163M}, the GPU
information is from {\tt http://www.nvidia.com}.
The hardware details are the number of processor pipelines (\mbox{$n_{\rm pipe}$}),
the processor's clock frequency ($\mbox{$f_{\rm GPU}$} \equiv 1/\mbox{$t_{\rm GPU}$}$), the memory
bandwidth for communication between host and attached processor
($1/\mbox{$t_{\rm bus}$}$), the amount of memory (in number of particles, one particle
requires 84\,bytes, here we adopt $1{\rm k} \equiv 1024$). For measured
hardware parameters, see Tab.\,\ref{Tab:Hardware}.
\label{Tab:GPU}
}
\begin{tabular}{lrrrrr}
data & GRAPE-6Af & 8800GTX & FX 1400& Xeon & unit \\
\hline
\mbox{$n_{\rm pipe}$} & 48 & 128 & 12 & 1 & \\
\mbox{$f_{\rm GPU}$} & 90 & 575 & 350 & 3400 & MHz \\
$1/\mbox{$t_{\rm bus}$}$ & 33.8 & 86.4 & 19.2 & NA & GB/s\\
Memory & 128k & 9362k & 1562k & -- & particles \\
\hline
\end{tabular}
\end{table}
\section{The programming environment}\label{Sect:Cg}
The part of the algorithm that executes on the GPU (the force
evaluation) is implemented in the Cg computer language (C for
graphics, \cite{Cg}, see Appendix A), which has a syntax quite similar
to C. The Cg programming environment includes a compiler and run-time
libraries for use with the open graphics library (OpenGL)\footnote{see
{\tt http://www.opengl.org}} and DirectX\footnote{see {\tt
http://www.microsoft.com/directx}} graphics application programming
interfaces. Though originally developed for the creation of real-time
special effects without the need to program directly to the graphics
hardware assembly language, researchers soon recognised the potential
of Cg and started to apply it not only to high-performance graphics
but also to a wide variety of ``general-purpose computing'' problems
\citep{GPUGems1,GPUGems2}.
\subsection{Mapping the $N$-body problem to a GPU}\label{Sect:Mapping}
The challenge in the implementation of an efficient $N$-body code on a
GPU lies in the mapping of the algorithm and the data to
graphical entities supported by the Cg language. Particle data arrays
are represented as ``textures''. Normally, textures are used to
represent pixel colour attributes with a single component
(luminance, red, green, blue or alpha), three components (red, green
and blue) or four components (red, green, blue and alpha). In our
implementation we use multiple textures to represent the input and
output data of $N$ particles, as follows:
\begin{itemize}
\item{} Input: mass ($N$), position ($3N$) and velocity ($3N$)
\item{} Output: acceleration ($3N$), jerk ($3N$) and potential ($N$)
\end{itemize}
All values are represented as single precision (32-bit) floating point
numbers for a total of 21 floats or 84\,bytes per particle. In Appendix
A we present a snippet of the source code in Cg, showing the
implemented force evaluation routine. With the 768\,Mbyte on-board
memory of the GeForce 8800GTX it can store about 9 million particles,
whereas the GRAPE-6Af can store only 128k (see Tab.\,\ref{Tab:GPU}).
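The packing of the input data into textures can be illustrated as below. This is a sketch under the assumption that mass and position share one four-component texel while velocity occupies a second; the byte counts of the actual layout in our implementation differ (see \S\,\ref{Sect:Communication}), and all names are ours:

```c
#include <stddef.h>

/* One RGBA texel: four 32-bit floats, the native element of a
   floating-point texture. */
typedef struct { float r, g, b, a; } Texel;

/* Pack mass+position into one texel stream and velocity into a
   second, n texels each.  The fourth component of the velocity
   texel is unused padding. */
void pack_particles(size_t n, const float m[], const float pos[][3],
                    const float vel[][3],
                    Texel mpos_tex[], Texel vel_tex[])
{
    for (size_t i = 0; i < n; i++) {
        mpos_tex[i] = (Texel){ pos[i][0], pos[i][1], pos[i][2], m[i] };
        vel_tex[i]  = (Texel){ vel[i][0], vel[i][1], vel[i][2], 0.0f };
    }
}
```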
Transferring data from CPU to GPU is accomplished through the
definition of textures, which can either be read-only or write-only,
but not both at the same time. The data structures in the CPU are then
copied onto appropriately defined textures in the graphics card's
memory. Obtaining the results from GPU to CPU is done by reading back
the pixels from the appropriate rendering targets into data structures
on the host CPU. Therefore the output textures (acceleration, jerk
and potential) are represented by a double-buffered scheme, where
after each GPU computation the textures are swapped between reading
and writing. There is some additional overhead (of order $N$) for this operation which has to be performed every block time
step.
Conventionally, graphics cards render into a ``frame buffer'', a
special memory area that represents the image seen on a
display. However, a frame buffer is unsuitable for our purposes as the
data elements in this buffer are ``clamped'' to value ranges that map
the capabilities of the display. Invariably this means that 32-bit
real vectors are reduced in resolution and therefore in accuracy
too. This is perfectly fine for visual displays where the number of
colours after clamping are still $2^{24}$ ($\approx 16$ million),
sufficient to make two neighbouring colours indiscernible to the human
eye. However, this is unacceptable for scientific production
calculations. The workaround is to create an off-screen frame buffer
object and instruct GPU programs to render into these rather than to
the screen. Off-screen frame buffers support 32-bit floating point
values and are not clamped and therefore preserve their precision.
The GPU has two main kernel operations available in programmable
graphics pipelines: a ``vertex shader'' and a ``fragment
shader''. Our implementation only makes use of the fragment shader
pipeline as it is better suited for the kind of calculations in the
N-body problem and because the fragment pipeline in general provides
more processing power\footnote{Before the 8800GTX family of GPUs,
vertex programs and fragment programs had to execute on distinct
processing units. The 8800GTX is the first generation of GPUs where
this distinction no longer exists and the two are unified.}. The host
CPU is responsible for allocating the input textures and frame buffer
objects, copying the data between CPU and GPU, and binding textures that
are to be processed by kernels. The lower order prediction and
correction of the particle positions is done on the host CPU. In
Tab.\,\ref{Tab:GPU} we summarise the hardware properties of the two
adopted GPU's and the GRAPE.
\section{Results}\label{Sect:Results}
\begin{table}
\caption[]{Results of the performance measurements for a Plummer
sphere with $N$ equal mass particles initially in virial
equilibrium for 0.25$N$-body time units (from $t = 0.25$ to
$t=0.5$) using a softening of 1/256. In the first column
we list the number of particles, followed by the timing
results of the GRAPE in seconds. In the last column we give
the timing results for the calculation without an attached
processor. The GRAPE (second column) was measured up to
128k particles, because the on-board memory did not allow
for larger simulations. Simulations on the FX1400 and the
host computer were limited for practical reasons.}
\label{Tab:Results}
\begin{tabular}{lccccc}
\hline
$N$ & GRAPE-6Af & 8800GTX & FX1400 & Xeon \\
\hline
256 & 0.07098 & 2.708& 3.423 & 0.1325 \\
512 & 0.1410 & 8.777& 10.59 & 0.5941 \\
1024 & 0.3327 & 17.46 & 20.20 & 2.584 \\
2048 & 0.7652 & 45.27 & 54.16 & 10.59 \\
4096 & 1.991 & 128.3 & 157.8 & 50.40 \\
8192 & 5.552 & 342.7 & 617.3 & 224.7 \\
16384 & 16.32 & 924.4 & 3398 & 994.0 \\
32768 & 51.68 & 1907 & 13180 & 4328 \\
65536 & 178.2 & 3973 & 40560 & 19290 \\
131072 & - & 8844 & - & - \\
262144 & - & 22330 & - & - \\
524288 & - & 63960 & - & - \\
\hline
\end{tabular}
\end{table}
To test the various implementations of the force evaluator we perform
several tests on different hardware. For clarity we perform each test
with the same realization of the initial conditions. For this we vary
the number of stars from $N=256$ in factors of two up to half a
million stars (see Tab.\,\ref{Tab:Results}). Not every set of initial
conditions is run on every processor, as the Intel Xeons, for example,
would take a long time and the scaling with $N$ is unlikely to change
as we clearly have reached the CPU-limited calculation regime (see
\S\,\ref{Sect:performance}).
The initial conditions for each set of simulations were generated by
randomly selecting the stellar positions and velocities according to
the \cite{1911MNRAS..71..460P} distribution using the method described
by \cite{1974A&A....37..183A}. Each of the stars was given the same
mass. The initial particle distributions were scaled to virial
equilibrium before starting the calculation.
Each set of initial conditions is run from $t=0$ to $t=0.50$ time
units \citep{1986LNP...267..233H}\footnote{see also {\tt
http://en.wikipedia.org/wiki/Natural\_units\#N-body\_units}}, but the
performance is measured only over the last quarter of an $N$-body time
unit to reduce the overhead for reading the snapshot and the
initialisation of the integrator. The maximum time step for the
particles was 0.125, to guarantee that each particle was evaluated at
least twice during the course of the simulation. The force
calculations were performed by adopting a softening of $1/256$ for all
simulations.
For our performance measurements we have used four nodes of a
Hewlett-Packard xw8200 workstation with a dual Intel Xeon CPU running
at 3.4 GHz and either the GRAPE, a Quadro FX1400 or GeForce 8800GTX
graphics card in the PCI Express ($16\times$) bus. The cluster nodes
were running a Linux SMP kernel version 2.6.16, Cg version 1.4,
graphics card driver version 1.0-9746 and the OpenGL 2.0 bindings.
In Figure\,\ref{fig:GPU} we show the timing results of the $N$-body
simulations. The FX1400 is slower than the general purpose computer
over the entire range of $N$ in our experiments. The bad performance
of the FX1400 is mainly attributed to the additional overhead in
communication and memory allocation. For $N \aplt 10^4$ the GeForce
8800GTX GPU is slower than the host computer but continues to
have a relatively flat scaling, comparable to the GRAPE-6, whereas the
host has a much worse ($\propto N^2$) scaling. The scaling of the
compute time of the GPU is proportional to that of the GRAPE ($\propto
N^{3/2}$), but the latter has a smaller offset by about an order of
magnitude. This is mainly caused by the efficient use of the GRAPE
pipeline, which requires fewer clock cycles per force evaluation
compared to the GPU (see
\S\,\ref{Sect:Discussion}).
\begin{figure}
\psfig{figure=./fig_GPU_timing.eps,width=\columnwidth}
\caption[]{ Timing of several implementations of the gravitational
$N$-body simulations for $N=256$ particles to $N=512$k particles (only
for the 8800GTX, the others up to 64k) over one $N$-body time unit.
The 8800GTX are represented with open circles connected with a solid
curve, the GRAPE is given by bullets with dashed line. The thin
dashed (triangles) line and thin dotted (squares) lines give the
results of the calculations with the FX1400 and with only the host
computer. Note that the timings in Tab.\,\ref{Tab:Results} were
multiplied by a factor of four to estimate the compute time for one
dynamical time unit, rather than the quarter of a time unit over which
the timing calculations were performed.
\label{fig:GPU} }
\end{figure}
\section{Performance modelling of the GPU}\label{Sect:performance}
In modelling the performance of the GPU we adopt the model proposed by
\cite{2002NewA....7..373M,2006astro.ph..8125H} but tailored to the host plus
GPU and to the GRAPE architecture.
The wall clock time required for advancing the \mbox{$n_{\rm block}$}\ particles in a
single block time step in the $N$-body system is
\begin{equation}
\mbox{$t_{\rm step}$} = \mbox{$t_{\rm host}$} + \mbox{$t_{\rm force}$} + \mbox{$t_{\rm comm}$}.
\end{equation}
Here $\mbox{$t_{\rm host}$} = \mbox{$t_{\rm pred}$} + \mbox{$t_{\rm corr}$}$ is the time spent on the host computer
for predicting and correcting the particles in the block, \mbox{$t_{\rm force}$}\, is
the time spent on the attached processor and \mbox{$t_{\rm comm}$}\, is the time
spent communicating between the host and the attached processor. We
now discuss the characteristics of each of the elements in the
calculation for \mbox{$t_{\rm step}$}.
\paragraph{Host operation.}
The predictions and corrections of the particles are calculated on the
host computer, and the time for this operation is directly related to
the speed of the host processor $t_{\rm cpu}$, the number of
operations in the prediction step $n_{\rm pred}$ and in the correction
step $n_{\rm corr}$. The total time spent per block step then yields
\begin{equation}
\mbox{$t_{\rm pred}$} \simeq n_{\rm pred} t_{\rm cpu} N,
\end{equation}
for the prediction and
\begin{equation}
\mbox{$t_{\rm corr}$} \simeq n_{\rm corr} t_{\rm cpu} N.
\end{equation}
for the correction. The number of operations per prediction step is
$n_{\rm pred} \simeq 300$ and per correction $n_{\rm corr} \simeq
1000$. This operation could be performed on the GPU, though the GRAPE
is not designed for the predictor and corrector calculation. For a
fair comparison between the GRAPE and the GPU and in order to preserve
high accuracy we performed these calculations on the host.
\paragraph{Communication.}\label{Sect:Communication}
The time spent communicating between the host and the attached
processor is expressed by the sum of the time needed to send \mbox{$n_{\rm send}$}\,
particles to the acceleration hardware and the time needed to receive
\mbox{$n_{\rm rec}$}\, particles from the acceleration hardware:
\begin{equation}
\mbox{$t_{\rm comm}$} = \eta_{\rm send} \mbox{$t_{\rm send}$} \mbox{$n_{\rm send}$} + \eta_{\rm rec} \mbox{$t_{\rm rec}$} \mbox{$n_{\rm rec}$}.
\end{equation}
Here $\eta_{\rm send} \mbox{$t_{\rm send}$}$\, and $\eta_{\rm rec}\mbox{$t_{\rm rec}$}$\, are the
time needed to send and receive one particle, respectively. For the
computation without the hardware acceleration $\mbox{$t_{\rm comm}$} = 0$, since the
forces between all particles are calculated locally. For the GRAPE
and the GPUs, however, a considerable amount of time is spent in
communication. For the GRAPE sending data is as fast as receiving
data, i.e.\ $\mbox{$t_{\rm send}$} = \mbox{$t_{\rm rec}$} = \mbox{$t_{\rm bus}$}$. Sending data to the GPU is
considerably slower than receiving data (see Tab.\,\ref{Tab:Hardware}).
The two send and receive efficiency factors $\eta_{\rm send}$ and
$\eta_{\rm rec}$, are the product of the overhead $\eta_o$ and the
number of bytes per particle that have to be sent or received. The
overhead is $\eta_o = 188$ \citep{2005PASJ...57.1009F} for each of the
attached processors. Since for the GRAPE the send and receive
operations are equally expensive we can simply count the number of bytes
that have to be transported per particle, which for the GRAPE hardware
is 72\,bytes \citep{2005PASJ...57.1009F,2006astro.ph..8125H}. For the
GRAPE we then write $\eta_{\rm send}\mbox{$t_{\rm send}$} + \eta_{\rm rec}\mbox{$t_{\rm rec}$} = 72
\times 188 \mbox{$t_{\rm bus}$}$.
For the GPU $\eta_{\rm send} > \eta_{\rm rec}$ (see
Table\,\ref{Tab:Hardware} for the measured values). The additional
overhead $\eta_o$ is the same as for the GRAPE, but per particle the
number of bytes to send is different from the number to receive.
As we discussed in \S\,\ref{Sect:Mapping}, a total of 56\,bytes has to be
sent from the host to the GPU, whereas only 28\,bytes are received. For
the GPU we then write $\eta_{\rm send} = 56 \times 188$, whereas
$\eta_{\rm rec} = 28 \times 188$.
In addition to the difference in the speeds for sending and receiving
data, the GPU suffers from an additional penalty. The GRAPE sends the
particles in the block ($\mbox{$n_{\rm send}$} = \mbox{$n_{\rm block}$}$), whereas due to internal
memory management the GPU has to send and receive all particles
$\mbox{$n_{\rm send}$} = N$ (see \S\,\ref{Sect:Mapping}). This efficiency loss is
quite substantial, and will probably be reduced when we use CUDA as
programming environment (see \S\,\ref{Sect:Discussion}).
For the adopted (Hermite predictor-corrector block time-step)
integration scheme the number of particles in a single block \mbox{$n_{\rm block}$}\,
cannot be determined a priori, though theoretical arguments suggest
$\mbox{$n_{\rm block}$} \propto N^{2/3}$. Instead of using this estimate we fitted
the average number of particles in a block time step. This fit was
done with the equal mass Plummer sphere initial conditions, run on the
GRAPE over one dynamical ($N$-body) time unit. The average
number of particles in a single block is then
\begin{equation}
\mbox{$n_{\rm block}$} \simeq 0.20 N^{0.81}.
\label{Eq:Nblock}
\end{equation}
\paragraph{Calculation.}
The time spent by the hardware acceleration (\mbox{$t_{\rm force}$}) is directly related
to the speed of the dedicated processor (\mbox{$t_{\rm GPU}$}), the number of
pipelines per processor (\mbox{$n_{\rm pipe}$}) and the number of operations for one
force evaluation ($\eta_{\rm fe} \simeq 60$).
\begin{equation}
\mbox{$t_{\rm force}$} = \eta_{\rm fe} N \mbox{$n_{\rm block}$} \mbox{$t_{\rm GPU}$} / \mbox{$n_{\rm pipe}$}.
\end{equation}
The details of the different hardware are presented in
Tab.\,\ref{Tab:GPU} and the measured values are in
Tab.\,\ref{Tab:Hardware}. The GRAPE has a vector pipeline for each
processor, which allows a more efficient force evaluation than the
GPUs; the number of operations per force evaluation for the GRAPE is
therefore $\eta_{\rm fe} \simeq \mathcal{O}(1)$.
In order to enable hardware acceleration on our $N$-body code we had
to introduce a number of additional operations, like reallocating
arrays, which give rise to an extra computation overhead. For the
calculations with the host computer without hardware acceleration we
adopt $\eta_{\rm fe} \simeq 180$, a factor of three larger than for the
GPUs.
\paragraph{Total performance.}
The total wall-clock time spent per dynamical (N-body) time unit is
then
\begin{equation}
t = \mbox{$n_{\rm steps}$} \mbox{$t_{\rm step}$}.
\end{equation}
Here we fitted the number of block steps per dynamical ($N$-body) time
unit. According to \cite{1988ApJS...68..833M,1990ApJ...365..208M}
$\mbox{$n_{\rm steps}$}
\propto N^{1/3}$. We measured the number of block time steps using the
equal mass Plummer distributions as initial conditions, using the
GRAPE enabled code and fitted the result:
\begin{equation}
\mbox{$n_{\rm steps}$} \simeq 247 N^{0.35}.
\end{equation}
In Figure\,\ref{fig:PerformanceModel} we compare the results of the
performance model with the measurements on the workstation without
additional hardware (squares) and with three attached processors; a
single GRAPE-6Af processor board (bullets), an FX 1400 (triangles) and
the newer GeForce 8800GTX (circles). Note that the measurements in
Tab.\,\ref{Tab:Results} were multiplied by a factor of four to compensate
for the fact that we performed our timings only over a quarter
$N$-body time unit. Though these curves are not fitted, they give a
satisfactory comparison.
The largest discrepancy between the performance model and the
measurements can be seen for the FX1400 GPU, which for $N\apgt
10^4$ performs considerably less efficiently than expected from
the performance model. Part of this discrepancy, though not
explicitly mentioned in \S\,\ref{Sect:Communication}, is the result
of a hysteresis effect in the communication of both GPUs. For the
8800GTX this effect is less evident, but still present. Both GPUs
tend to have a maximum communication speed for blocks of 0.5\,Mbyte
(about 6000 particles). An additional effect causing performance loss
on the FX\,1400 is the increase in the number of particles per block
time step. This number continued to increase beyond
our measurements performed with the GRAPE (see Eq.\,\ref{Eq:Nblock}).
The numbers listed in Tab.\,\ref{Tab:Hardware}, and used in our
performance model, are the optimum values. The communication speed
drops by about a factor of two for much larger amounts of data
transfer to and from the GPU. For the FX1400, this drop in
communication is considerable, whereas for the 8800GTX it results in a
smaller performance loss (mainly due to the larger number of processor
pipelines). The discrepancy for the GRAPE calculation with low $N$ is
the result of neglecting the limited size of the processor pipeline in
the performance model and due to the irregular behavior of the number
of particles in each block time step.
\begin{figure}
\psfig{figure=fig_GPU_model.eps,width=\columnwidth}
\caption[]{The results of the above described performance model
(thick lines) over-plotted with the results of the measurements for
the three attached processors (symbols). The bullets represents the
results from a single GRAPE-6Af processor board, the squares give
the host workstation, the circles are for the GeForce 8800GTX and
the triangles give the FX 1400 graphics processor.
\label{fig:PerformanceModel} }
\end{figure}
\begin{table}
\caption[]{
Measurements for the various hardware used in this paper. The first
row gives the time spent by the host computer for predicting and
correcting a single particle. The second row is for calculating the
force between two particles. The last two rows give the time to
send a single particle to, and to receive a single particle from, the
attached hardware. For the calculations with only the host computer
this operation is not available. In particular, the communication with
the GPUs turns out to be relatively slow.
\label{Tab:Hardware}
}
\begin{tabular}{lcccc}
param & GRAPE-6Af & 8800GTX & FX 1400 & Xeon \\
\hline
\mbox{$t_{\rm host}$}&$3.82 \times 10^{-7}$&$3.82 \times 10^{-7}$&$3.82 \times 10^{-7}$&$3.82 \times 10^{-7}$ \\
$\eta_{\rm fe}\mbox{$t_{\rm force}$}$&$1.11 \times 10^{-8}$&$1.04 \times 10^{-7}$&$1.72 \times 10^{-7}$&$5.29 \times 10^{-8}$ \\
$\eta_{\rm send}\mbox{$t_{\rm send}$}$&$2.00 \times 10^{-7}$ &$1.76 \times 10^{-5}$ &$1.89 \times 10^{-5}$&NA\\
$\eta_{\rm rec}\mbox{$t_{\rm rec}$}$ &$2.00 \times 10^{-7}$ &$5.97 \times 10^{-6}$ &$5.98 \times 10^{-6}$&NA\\
\hline
\end{tabular}
\end{table}
In fig.\,\ref{fig:Speedup} we present the speed-up for the various
hardware configurations, compared to running on the host workstation.
Here it is quite clear that for low $N$ the GPUs do not give an
appreciable speedup, but for a large number of particles, the GeForce
8800GTX gives a speedup of at least an order of magnitude, but not as
much as the GRAPE. The latter, however, will not be able to perform
simulations of more than 128k particles\footnote{Due to a defective
chip on our GRAPE-6 the on-board memory was reduced from 128k particles
to 64k particles. The latest GRAPE-6Af boards are equipped with memory
for 256k particles.}.
\begin{figure}
\psfig{figure=./fig_GPU_Speedup.eps,width=\columnwidth}
\caption[]{The speedup of the two GPUs and the GRAPE with respect
to the host workstation as a function of the number of particles.
The (lower) dotted curve is for the Quadro FX1400, the solid curve
(middle) gives the timing for the GeForce 8800GTX and the top line
(dashes) represents the GRAPE.
\label{fig:Speedup} }
\end{figure}
\section{Discussion}\label{Sect:Discussion}
We have successfully implemented the direct gravitational force
evaluation calculation using Cg on two graphics cards, the NVIDIA
Quadro FX1400 and the NVIDIA GeForce 8800GTX, and compared their
performance with the host workstation and the GRAPE-6Af special
purpose computer.
For $N \aplt 10^4$ objects the workstation outperforms the GPUs. This
is mainly due to additional overhead introduced by the communication
to the GPU and memory allocation on the GPU. For a larger number of
particles the more modern GPU (8800GTX) outperforms the workstation by
up to about a factor of 50 (for 9 million particles). Such a large
number of particles cannot be simulated on the GRAPE-6Af, due to
memory limitations. For up to 256k, the maximum number of particles
that can be stored on the GRAPE, the 8800GTX is slower than the GRAPE
by about an order of magnitude. Still, at this particle number the GPU
is faster than the workstation by an order of magnitude.
The GPU has lower accuracy than the GRAPE or the host
workstation. The GRAPE-6 uses a 24-bit mantissa for calculating the
differential position, and a 64-bit fixed-point format for accumulation.
The pipeline for the time derivative is designed with a 20-bit mantissa
and 32-bit fixed-point notation for the final accumulation
\citep{2003PASJ...55.1163M}. In principle the NVIDIA architecture
cannot achieve similar precision, and should fall short by about an
order of magnitude compared to the GRAPE-6.
The average mean error in the energy is $(1.7 \pm 1.6)
\times 10^{-6}$ for the 8800GTX and $(5.1 \pm 0.56) \times 10^{-6}$
for the FX1400 (averaged over the simulations for $N=256$ to $N=64$k),
whereas for the GRAPE we measured $(1.9 \pm 1.2) \times 10^{-7}$,
which is comparable to the mean error on the host. Both the host and
GRAPE produce an energy error which is about an order of magnitude
smaller than that of the GPUs.
The precision of the GPU is regrettably unlikely to increase anytime
soon, as higher precision is not required for graphics
applications and it would imply a considerable redesign of the
hardware. We could, however, improve the accuracy of the GPU
by sorting the forces by size before adding them, summing the smallest
forces first.
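The suggested fix can be illustrated in isolation (a generic Python sketch of sorted accumulation, not the actual GPU code):

```python
import math

def sorted_sum(values):
    """Accumulate from smallest to largest magnitude, so that many small
    contributions are combined before they meet a large partial sum."""
    total = 0.0
    for v in sorted(values, key=abs):
        total += v
    return total

# One dominant term plus many tiny ones: adding large-first loses every
# small term to rounding, while the sorted order retains all of them.
forces = [1e16] + [1.0] * 1000
naive = 0.0
for v in sorted(forces, key=abs, reverse=True):
    naive += v
exact = math.fsum(forces)  # correctly rounded reference sum
```

Here the naive large-first accumulation returns $10^{16}$ exactly, discarding all thousand unit-sized contributions, whereas the sorted order recovers the correct total.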
The fixed point notation in the GRAPE-6 allows for a much more
efficient use of clock cycles, allowing effectively one operation per
clock cycle, whereas the NVIDIA architecture requires more
cycles. This turns out to be an important reason why the 8800GTX is
slower than the GRAPE-6.
The main advantages of the GPU over the dedicated GRAPE
hardware are its much larger memory, wider applicability and
much lower cost. The large memory on the GPU allows
simulations of up to about 9 million particles, though one has to wait
for about two years for one dynamical time scale.
In theory the 8800GTX should be able to outperform the GRAPE-6Af, but
due to relatively inefficient memory access and additional overhead
cost, which is not present in the GRAPE hardware, many clock cycles
seem to be lost. With a more efficient use of the hardware the GPU
could, in principle, improve performance by about two orders of
magnitude. For the next generation of GPUs we hope that this
efficiency bottleneck will be lifted. In that case, the GPU would
outperform GRAPE by almost an order of magnitude. Note, however, that
the GRAPE-6 is based on 5 year old technology, and the next generation
GRAPE is likely to outperform modern GPUs by a sizable margin.
These current bottlenecks in the GPU may be reduced using the compute
unified device architecture (CUDA)\footnote{see {\tt
http://developer.nvidia.com/cuda}} programming environment, which is
supposed to provide an improved environment for general purpose
programming on the GPU. In fig.\,\ref{fig:Future} we present the
possible future performance assuming that the additional communication
overhead on the GPU is lifted and the clock cycles are used more
efficiently, without any assumption of improved hardware speed. In
the first step we simply reduce the communication to blocks rather than
having to transport all particles each block time step (solid
curve). This relatively simple improvement can already be carried out
using CUDA. The second optimization (dashed curve in
fig.\,\ref{fig:Future}) is achieved when, in addition to reducing the
communication, we also move the predictor and corrector steps to the
GPU. This improvement, however, may come with a quite severe
accuracy penalty. For both improvements we used the performance data
for the current design of the 8800GTX. Further improvement can be achieved
when, in addition to more efficient communication, the force pipeline
is mapped more efficiently onto the shader pipeline, as is done on
the GRAPE. This hypothetical case would improve
performance by more than a factor of 100 compared to the workstation over
the entire range of $N$.
\begin{figure}
\psfig{figure=./fig_GPU_Future.eps,width=\columnwidth}
\caption[]{Prospects for future CPU and GPU performance, based on
the model from \S\,\ref{Sect:performance}. The two thin curves
with squares and circles give the measured performance of the CPU
and 8800GTX GPU, respectively.
The thick solid curve gives a prediction of the performance for the
8800GTX in which only blocks of particles are communicated with the
GPU. The dashed curve gives in addition the effect of moving the
predictor/corrector calculation to the GPU. The dotted curve gives
the performance of a hypothetical 8800GTX-like architecture for
which in addition the processor pipeline would be used more
efficiently ($\eta_{\rm fe} = 1$).
\label{fig:Future} }
\end{figure}
\section*{Acknowledgments}
We are grateful to Mark Harris and David Luebke of NVIDIA for
supplying us with the two NVIDIA GeForce 8800GTX graphics cards on
which part of the simulations were performed. We would also like to thank
Alessia Gualandris and Jun Makino for numerous discussions. This work
was supported by NWO (via grant \#635.000.303 and
\#643.200.503) and the Netherlands Advanced School for Astrophysics (NOVA).
The calculations for this work were done on the Hewlett-Packard xw8200
workstation cluster and the MoDeStA computer in Amsterdam, both
hosted by SARA computing and networking services, Amsterdam.
\section{Introduction}\label{sec:intro}
Measurement of the solar photospheric magnetic field is well-established and long-standing: routine observations have now been made for over 75 years \citep[see][]{Babcock53, HowardBabcock60, HowardEtal83}. A variety of instruments currently make regular measurements of the photospheric magnetic field; this work concentrates on the `synoptic' measurements made by the GONG instruments, which are designed to make regularly recurring observations of the entire solar hemisphere visible from Earth. A primary use of these `synoptic magnetograms' is as the `boundary condition' for models of the coronal magnetic field. These in turn are the primary inputs to models that attempt to forecast the solar wind at Earth. Solar wind forecast models such as WSA/Enlil \citep[][used in operations by NOAA, the US Air Force (USAF), and the UK MetOffice]{ArgePizzo_JGR2000} provide the primary current predictions of geomagnetic storms due to the solar wind and Coronal Mass Ejections (CMEs).
One major issue with these measurements is that no magnetograms currently in use have been calibrated in any absolute sense, and a variety of apparent disagreements have been found in studies comparing measurements made by different instruments. Further reinforcing the need for an absolute calibration of magnetograms, {\em in situ} measurements of magnetic flux in the solar wind at 1 AU (e.g., from the ACE and WIND spacecraft, \cite{1998StoneEtal_ACE_SSRv86_1S, 1995LeppingEtal_WindMFI_SSRv71_207L}) are consistently at least two times higher than model predictions based on photospheric magnetic field extrapolations into the solar wind, for several different magnetogram inputs \citep{LinkerOpenFlux2017}.
We tackle this `absolute' calibration issue by developing an `end-to-end' simulation of the GONG measurement process: A MURaM 3D MHD simulation \citep[MURaM;][]{Rempel2015} provides a numerical `ground truth', which is used to produce a spectrum using the Rybicki and Hummer (RH) radiative transfer code \citep{Uitenbroek_ApJ2001}. A GONG optical, polarimetric, and magnetogram processing model then simulates all of the major physical and numerical processes that comprise a full-disk magnetogram observation. By comparing these `synthetic magnetograms' with a `ground truth magnetogram' produced directly from the MURaM magnetic field, we produce a set of calibration curves for GONG magnetograms that can be used to correct both full-disk magnetograms and the synoptic maps that are created from them.
This is the last in a series of three papers on this work. In the first \citep{PlowmanEtal_2019I}, we gave an updated description of the GONG instrument and its magnetic field measurement process. In the second \citep{PlowmanEtal_2019II}, we considered the theory and goals of the calibration, and of magnetogram comparison in general. One major result of that paper is that, for each pixel, we calibrate to the entire spatial region sampled by that pixel rather than only within the `boundaries' implied by the physical pixel grid on the GONG CCDs. Thus, `absolute' calibration means that we compare the measurements to {\em all} of the ground truth values, combined, that have contributed to the measurement at that pixel, including those contributions due to the instrument PSF. We found that omission of the PSF from that calibration process led to serious problems with the resulting calibration, in particular that it did not preserve flux relative to the ground truth.
In this last paper \citep{PlowmanEtal_2019III}, we begin by addressing two remaining issues in the theory of calibration: the line-of-sight integration aspect of the `ground truth reduction' introduced in \cite{PlowmanEtal_2019II}, in Section \ref{sec:loseffects} and the actual fitting of the calibration curves, in Section \ref{sec:curvefitting}. We then proceed to unify these last two pieces of the analysis with those described in \cite{PlowmanEtal_2019I,PlowmanEtal_2019II}, producing the calibration curves for GONG. We then discuss the implications of these curves, make a preliminary check of their effects on the open flux, and review all of the conclusions of this work.
The calibration problem can be summarized as follows: Given the MURaM `ground truth', the simulated measurements made from them, and the real measurements, how do we produce a set of calibrated measurements corresponding to the real measurements? The ground truth consists of a very high resolution, {\em three}-dimensional magnetic field (or flux) datacube, whereas the measurements are a low-resolution, {\em two}-dimensional image. In our case with GONG and MURaM, of order $10^5$ ground truth values are integrated over to form each pixel's single magnetic field measurement. The calibrated measurements will also be a two-dimensional image with the same number of pixels as the measurements.
In \cite{PlowmanEtal_2019II}, we specialized to the case where the calibrated flux measurement at a given pixel depends on the (uncalibrated) measurements only via their value {\em at} that pixel, not on the values at any of the {\em surrounding} pixels: for a given uncalibrated measurement value, the calibrated measurement will always have the same value (some random scatter could be added as well, but this is no different than adding noise). This is necessary due to the limitations of potential multipixel methods and the small size of our calibration ground truth. The calibration must therefore be a one-dimensional function -- a calibration curve -- of the (uncalibrated) measured values.
We determine these calibration curves by fitting them to `pixel-to-pixel' scatter plots obtained from the ground truth and corresponding synthetic measurements: The scatterplot has one point for each pixel in the synthetic measurement (i.e., the magnetogram image). The measured values are on the x axis, while the y axis shows what would be obtained if the measurements were perfectly calibrated, as produced from the ground truth. The overall calibration process is shown in Figure \ref{fig:calibration_boxdiagram}. We call the process of producing the `perfectly calibrated' values `ground truth reduction'. The ground truth reduction has two components: 3-D to 2-D (i.e., along the line of sight), and high resolution to low resolution (i.e., in the plane of the sky).
\cite{PlowmanEtal_2019II} concentrated on the plane-of-sky component of the ground truth reduction. We demonstrated that flux conservation is an essential metric for any such calibration, both because the measurements (being based on area-integrated quantities) are themselves akin to fluxes, and because a calibration that does not conserve flux will give incorrect 3D magnetic field extrapolations even in the simplest case (i.e., a potential field). We then showed that a calibration that conserved flux was obtained from a scatter plot (or point cloud) of the synthetic ground truth against the synthetic measurements, but only if the ground truth reduction resampled the ground truth to the pixel scale of the measurements {\em and} applied the instrument PSF. Omitting the PSF resulted in a calibration that did not conserve flux. We concluded that the `perfectly calibrated' measurements must include the instrument PSF. We argued that this is not only necessary, but fitting: the measurements are area-integrated quantities, like fluxes, and much of the `weight' of the area integration comes from outside the pixel boundary. For GONG, only about $10\%$ of the weight comes from inside the pixel boundary, so it is measuring more flux `outside' the pixel than `inside' it. It is therefore unsurprising that attempting to construct a calibration while ignoring most of the flux contributing to the measurement would be unsuccessful.
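The plane-of-sky part of the ground truth reduction can be sketched as follows (Python/numpy; the Gaussian PSF and the periodic-boundary convolution are illustrative assumptions of ours -- the real GONG PSF is not a Gaussian). The point of the sketch is that both the PSF blur and the downsampling preserve the mean flux:

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Normalized 2-D Gaussian standing in for the instrument PSF (the
    real GONG PSF is not Gaussian; this is purely illustrative)."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    kern = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return kern / kern.sum()

def reduce_ground_truth(b_hires, psf, factor):
    """Plane-of-sky ground-truth reduction: blur the high-resolution
    line-of-sight field with the PSF, then block-average down to the
    measurement pixel grid.  Both steps preserve the mean flux."""
    ny, nx = b_hires.shape
    kpad = np.zeros_like(b_hires)
    ks = psf.shape[0]
    kpad[:ks, :ks] = psf
    kpad = np.roll(kpad, (-(ks // 2), -(ks // 2)), axis=(0, 1))
    # circular convolution via FFT; adequate for a periodic MHD box
    blurred = np.real(np.fft.ifft2(np.fft.fft2(b_hires) * np.fft.fft2(kpad)))
    return blurred.reshape(ny // factor, factor,
                           nx // factor, factor).mean(axis=(1, 3))
```

Omitting the blur step gives the reduction without the PSF, which, as discussed above, leads to a calibration that does not conserve flux relative to the measurements.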
\begin{figure}
\begin{center}\includegraphics[width=\textwidth]{Calibration_process_boxchart.pdf}\end{center}
\caption{Flow chart of the calibration process. On the right is the end-to-end simulation of the measurement process, while on the left is the `ground truth reduction' which places the ground truth in a form that can be directly compared with the measurements. This is necessary because some parts of the measurement process are irreversible.}\label{fig:calibration_boxdiagram}
\end{figure}
\section{Line of sight integration effects: From 3D ground truth to 2D}\label{sec:loseffects}
We now turn to the line-of-sight aspects of the ground truth reduction. Line of sight integration effects are considerably more difficult to treat systematically than the spatial (plane-of-sky) resolution effects. Rather than having a PSF that applies in a consistent way across all pixels, there is instead a contribution function of height for each wavelength which is different for every line of sight. Each of these contribution functions depends on the solar plasma parameters, including the magnetic field. The dependence is such that two wavelengths along the same line of sight may be weighted toward different regions of the plasma, with different magnetic fields, even if both are near the line center. These differences become more dramatic as the inclination angle moves away from the vertical (i.e., as latitude and/or longitude move away from disk center).
As a result, there is no specific height at which the magnetic field along the line-of-sight direction exactly corresponds to the field value inferred from the Zeeman signal, nor is there a single weighting function which represents the effects of line-of-sight integration on the magnetic field. For the vertical viewing angle, the field values can be fairly close (Figure \ref{fig:vertical_field_correspondence_images}), but the correspondence gets progressively worse as the inclination angle increases (Figure \ref{fig:70degree_field_correspondence_images} shows the 70 degree case).
\begin{figure}
\begin{center}
\includegraphics[width=0.19\textwidth]{simulator_example_hires_z100_0_deg.pdf}
\includegraphics[width=0.19\textwidth]{simulator_example_hires_z150_0_deg.pdf}
\includegraphics[width=0.19\textwidth]{simulator_example_hires_0_deg.pdf}
\includegraphics[width=0.19\textwidth]{simulator_example_hires_z200_0_deg.pdf}
\includegraphics[width=0.19\textwidth]{simulator_example_hires_z250_0_deg.pdf}
\end{center}
\caption{Comparison of line-of-sight field strengths from the MURaM `ground truth' field (left and right images) with the magnetogram from the GONG simulator (center image), at 8 times GONG's native resolution and with vertical viewing angle. The MURaM ground truth images are taken at heights of 100 to 250 km above the height of $\tau=1$ in the continuum (also with vertical viewing angle). Variation of the vertical MURaM field with height is small, and the GONG simulator magnetogram is most similar to the 150-200 km height range.}\label{fig:vertical_field_correspondence_images}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.19\textwidth]{simulator_example_hires_z100_70_deg.pdf}
\includegraphics[width=0.19\textwidth]{simulator_example_hires_z150_70_deg.pdf}
\includegraphics[width=0.19\textwidth]{simulator_example_hires_70_deg.pdf}
\includegraphics[width=0.19\textwidth]{simulator_example_hires_z200_70_deg.pdf}
\includegraphics[width=0.19\textwidth]{simulator_example_hires_z250_70_deg.pdf}
\end{center}
\caption{Comparison of line-of-sight field strengths from the MURaM `ground truth' field (left and right images) with the magnetogram from the GONG simulator (center image) at 8 times GONG's native resolution and with 70 degree viewing angle. The MURaM ground truth images are taken at heights of 100 to 250 km above the height of $\tau=1$ in the continuum (also with 70 degree viewing angle). Variation of this inclined MURaM field with height is significant (in part because the variation in the continuum $\tau=1$ height is more significant at 70 degrees), but the GONG simulator magnetogram is still most similar to the 150-200 km height range.}\label{fig:70degree_field_correspondence_images}
\end{figure}
Because there is no such specific height, and the contribution functions vary considerably in height and wavelength, we have instead chosen to average over a range of heights: from 100 to 250 km above the height of continuum $\tau=1$. This reflects the range of typical contribution function heights, as evidenced by Figures \ref{fig:vertical_field_correspondence_images} and \ref{fig:70degree_field_correspondence_images}, which suggest that the ground truth is most similar to the measurement at around 175 km above $\tau=1$. We have checked a variety of such choices, and the dependence on the exact choice of height and averaging range is not strong. We defer showing examples of the scatter plots produced by this ground truth reduction until the results in Section \ref{sec:results}.
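In code, this height-averaging reduction might look like the following sketch (pure numpy; the array shapes, the per-column loop, and the function name are illustrative assumptions, not the production implementation):

```python
import numpy as np

def average_over_heights(b_los, z_tau1, dz_km, z0_km=100.0, z1_km=250.0):
    """Average the line-of-sight field over heights z0_km..z1_km above
    the local continuum tau=1 surface.  b_los has shape (nz, ny, nx);
    z_tau1 is the (ny, nx) map of tau=1 grid indices; dz_km is the
    vertical grid spacing in km."""
    nz, ny, nx = b_los.shape
    out = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            lo = int(round(z_tau1[j, i] + z0_km / dz_km))
            hi = int(round(z_tau1[j, i] + z1_km / dz_km))
            out[j, i] = b_los[lo:hi + 1, j, i].mean()
    return out
```

Because the $\tau=1$ surface is corrugated, the averaging window follows a different set of grid cells along each line of sight, which is the reason a simple fixed-height slice does not suffice.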
\subsection{Calibration curve fitting}\label{sec:curvefitting}
In the appendices of \cite{PlowmanEtal_2019II}, we mentioned two `per-pixel' curve fitting methods. One (histogram equating) does not technically use a scatter plot at all, but simply matches the one-dimensional distributions of per-pixel fluxes on the two axes. We showed that it still requires the same ground truth reduction (including the PSF), even though it does not use the scatterplots directly and makes no use of an explicit correspondence between the two data sets being compared. The other method fits a curve to the per-pixel scatter plot, and therefore makes use of the direct correspondence between pixels of the measurement and of the `reduced' ground truth. We prefer to base our calibration on the more direct fitting method, where the correspondence is made explicit. The method described here is equivalent to the bin-wise `flux-conserving' method described in \cite{PlowmanEtal_2019II} in the limit of small bin size and large numbers of points in each bin, but also ensures bin-wise flux conservation for large bins and smaller numbers of points in each bin.
We begin, as before, by dividing the point clouds into bins (using $x_i$ to denote the bin boundaries) in the {\em measured} values.
The calibration curve is taken to be a linearly interpolated function, with the nodes of the interpolation at the centers of each bin. In terms of the node values $y_i$ and bin {\em centers} $x_i'$ (located at $(x_i+x_{i+1})/2$), the calibration for a given measured field $m_{kl}$ is given by the usual linear interpolation formula,
\begin{equation}
c_{kl} = \frac{y_{i+1}(m_{kl}-x_i')+y_i(x_{i+1}'-m_{kl})}{x_{i+1}'-x_i'},
\end{equation}
where the index $i$ is chosen so that $m_{kl}$ falls between $x_i'$ and $x_{i+1}'$, or chosen to extrapolate if $m_{kl}$ falls outside the bin center range: if $m_{kl} < x_0'$ (the measurement to be calibrated is less than the lowest bin center), we use $i=0$, and if $m_{kl}>x_n'$ (the measurement to be calibrated is greater than the highest bin center), we use $i=n-1$. To determine the $y_i$ (the calibration values at the node points) we require that, for the ground truth and GONG simulator data used to produce the calibration, the calibrated net flux in each bin must equal the net flux from the ground truth in the same bin:
\begin{equation}\label{eq:linear_interp_binfluxconservation}
\sum_{x_i < m_{kl} < x_{i+1}} c_{kl} = \sum_{x_i < m_{kl} < x_{i+1}} a_{kl}
\end{equation}
Here the summation is over all points whose $m_{kl}$ falls in the $i$th bin (i.e., between $x_i$ and $x_{i+1}$). The bins cover the entire range of the $m_{kl}$, with the first and last bin boundaries chosen so that they each contain 1600 points to reduce errors. The remaining, inner, bins are uniformly distributed between the upper bound of the lower bin and the lower bound of the upper bin. With the node points located at the bin centers and the edge treatment described above, this results in a linear tridiagonal system which can be solved for the $y_i$ by the usual matrix methods. The calibration curve fit can then be applied to a magnetogram (measured values $m_{ij}$) by linear interpolation of the curve $y_i(x_{i}')$ at each $m_{ij}$ -- i.e., the `$x$' axis values at which the interpolated values of the curve are to be computed are the $m_{ij}$.
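A minimal numerical sketch of this fitting procedure follows (numpy; a dense solve stands in for the tridiagonal solver, uniform bins replace the 1600-point edge bins, and `np.interp` clamps at the endpoints rather than extrapolating -- all simplifications of ours):

```python
import numpy as np

def fit_calibration_curve(m, a, nbins=20):
    """Fit a piecewise-linear calibration curve with nodes at the bin
    centers, such that in every bin of measured values the summed
    calibrated flux equals the summed ground-truth flux."""
    edges = np.linspace(m.min(), m.max(), nbins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    which_bin = np.clip(np.searchsorted(edges, m, side="right") - 1,
                        0, nbins - 1)
    # node-grid segment used to interpolate (or extrapolate) each point
    seg = np.clip(np.searchsorted(centers, m) - 1, 0, nbins - 2)
    t = (m - centers[seg]) / (centers[seg + 1] - centers[seg])
    W = np.zeros((nbins, nbins))
    b = np.zeros(nbins)
    for k in range(m.size):
        i, j = which_bin[k], seg[k]
        W[i, j] += 1.0 - t[k]   # weight of node j on this point
        W[i, j + 1] += t[k]     # weight of node j+1 on this point
        b[i] += a[k]            # ground-truth flux collected in bin i
    y = np.linalg.solve(W, b)   # banded in practice; dense here for brevity
    return centers, y

def apply_calibration(m, centers, y):
    """Calibrate a magnetogram by evaluating the fitted curve per pixel."""
    return np.interp(m, centers, y)
```

For a perfectly linear relationship between the reduced ground truth and the measurements, this fit recovers the corresponding linear calibration curve exactly.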
Because the MHD simulation has a significant flux bias, especially when the sunspot periphery is carefully cropped out, we also `mirror' the point cloud to fill in the underrepresented negative polarity. That is, given a set of `measured' and `ground truth' values $m_{ij}$ and $a_{ij}$, we append $-m_{ij}$ and $-a_{ij}$ to the set. We make separate scatter plots for the `quiet sun' (non-sunspot) pixels and for the sunspot pixels. Quiet sun pixels are identified as those which are over 31 Mm from the sunspot center (a conservative cut to avoid contamination from the sunspot); the remaining pixels are treated as sunspot pixels.
\section{GONG Simulator Calibration and Results}\label{sec:results}
Descriptions of the MURaM simulation and our simulation of the GONG instrument can be found in \cite{PlowmanEtal_2019I}, and we will not attempt to reiterate them here. However, we will quote the following paragraph for reference:
\begin{quotation}
These spectral cubes are produced from MURaM simulations by the RH radiative transfer code. The MURaM simulations are much smaller than the whole sun due to computational limitations, so we tile them at several scales to cover the GONG detector plane. The intention is to increase the number of points in the calibration (via the tiling), and have some ability to investigate the effect of instrument resolution built into the simulator results. The effect of different viewing angles are investigated with an independent run of the simulator (resulting in a separate image), rather than tackling the thorny problem of stitching the simulation over a sphere. The tiling, and multiple resolutions, and separate images for each viewing angle are illustrated in Figure 11.
\end{quotation}
We will first consider the `native GONG' resolution quadrant (lower left) of the results, and the `quiet sun' pixels, which we identify as those which are over 31 Mm from the center of the sunspot (a conservative cut to avoid contamination). This is the most interesting resolution for GONG calibration purposes, and the non-sunspot regions are more important for space weather. Figures \ref{fig:example_scatterplot}, \ref{fig:cal_curve_25deg}, and \ref{fig:cal_curve_75deg} show the results of our calibration procedure for 0, 25, and 75 degree latitude. Figure \ref{fig:cal_curves_combined} shows curves for all 6 latitudes (0, 25, 45, 60, 70, and 75 degrees) together on the same graph.
\begin{figure}
\begin{center}\includegraphics[width=\textwidth]{gongcal_plot_0.pdf}\end{center}
\caption{Calibration and scatter plot for 0 degree (vertical, disk center) inclination. Left shows ground truth magnetogram, center shows GONG simulator magnetogram, right shows point cloud and calibration curve.}\label{fig:example_scatterplot}
\end{figure}
\begin{figure}
\begin{center}\includegraphics[width=\textwidth]{gongcal_plot_25.pdf}\end{center}
\caption{Calibration and scatter plot for 25 degree inclination. Left shows ground truth magnetogram, center shows GONG simulator magnetogram, right shows point cloud and calibration curve.}\label{fig:cal_curve_25deg}
\end{figure}
\begin{figure}
\begin{center}\includegraphics[width=\textwidth]{gongcal_plot_75.pdf}\end{center}
\caption{Calibration and scatter plot for 75 degree inclination. Left shows ground truth magnetogram, center shows GONG simulator magnetogram, right shows point cloud and calibration curve.}\label{fig:cal_curve_75deg}
\end{figure}
\begin{figure}
\begin{center}\includegraphics[height=0.4\textheight]{calibration_curve_plots_combined.pdf}\end{center}
\caption{Calibration curves for all six inclinations/latitudes considered.}\label{fig:cal_curves_combined}
\end{figure}
We find a `quiet sun' (i.e., non-sunspot) calibration factor of $\sim 2.2$ at disk center, which drops to near one at high latitudes. This is very similar to the behavior seen for the nonlinear `convective blueshift' effects alone in \cite{PlowmanEtal_2019II}, so those effects are likely to dominate the calibration factors at disk center. At higher latitudes, the quiet sun calibration factor does not drop as quickly: higher-latitude calibration factors for the full simulation are slightly higher than for the nonlinearity effects in \cite{PlowmanEtal_2019II}. Evidently, other effects become more significant at higher latitudes.
Particularly interesting is the upturn in the calibration factor for strong flux/field pixels at higher latitudes -- although there is not a significant miscalibration of the {\em weakest} high-latitude fields, {\em stronger} ($\sim 30$ Gauss) high-latitude fields are underestimated. Moreover, the factor by which they are underestimated increases with latitude. This result should be considered preliminary, and extrapolation to even higher latitude with our relatively small simulation volume is fraught. However, this effect may provide an additional boost to the calibration factor for polar fields. Comparison of magnetograms as regions rotate onto the limb may prove informative for this question, albeit complicated by the time variations of those observations and the physical differences between polar and near-equatorial solar regions. Simultaneous observations from two widely separated perspectives (e.g., one from Earth and one significantly away from the Earth-Sun line) are likely to shed considerable light onto this issue.
The other three quadrants of the synthetic GONG images allow us to investigate the effect of spatial resolution on the calibration -- i.e., how does it change if GONG were 2, 4, or 8 times its present resolution? In the interests of brevity, the results are summarized in tabular form in Table \ref{tab:calfactors}. The point cloud relationships are similar to those at GONG's native resolution, except there is more scatter at high resolution, and the calibration factors gradually become smaller as the resolution increases. An important point of comparison is between GONG and HMI, whose resolution is nearly eight times that of GONG, including atmospheric seeing. Table \ref{tab:calfactors} therefore indicates that the resolution difference results in a relative calibration factor of $\sim 1.2$ between GONG and HMI for weak-field non-sunspot regions at disk center, before any other instrument differences are taken into account. However, this relative difference drops as the viewing angle increases, resulting in very similar factors at all resolutions near the limb. Instrumental differences other than resolution alone (different spectral lines, spectral resolution, etc) could easily change the comparison, of course.
\begin{table}[h]
\begin{center}\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
Resolution & $0^\circ $ & $25^\circ $ & $45^\circ $ & $60^\circ $ & $70^\circ $ & $75^\circ $ \\
\hline
\hline
1x GONG resolution & 2.17 & 1.89 & 1.59 & 1.34 & 1.15 & 0.98 \\
2x GONG resolution & 2.10 & 1.81 & 1.44 & 1.24 & 1.12 & 1.01 \\
4x GONG resolution & 1.96 & 1.66 & 1.24 & 1.08 & 1.02 & 0.98 \\
8x GONG resolution & 1.59 & 1.49 & 1.13 & 1.02 & 0.96 & 0.89 \\
\hline
\end{tabular}\end{center}
\caption{Weak-field non-sunspot calibration factors as a function of resolution and viewing angle.}\label{tab:calfactors}
\end{table}
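For reference, the tabulated factors can be evaluated at intermediate viewing angles by interpolation. The following is a minimal sketch (values transcribed from Table \ref{tab:calfactors}; linear interpolation in viewing angle is our own simplifying assumption here, not the procedure used to generate the table):

```python
import numpy as np

# Viewing angles (degrees) and weak-field non-sunspot calibration
# factors transcribed from Table 1, one entry per resolution multiple.
angles = np.array([0.0, 25.0, 45.0, 60.0, 70.0, 75.0])
factors = {
    1: np.array([2.17, 1.89, 1.59, 1.34, 1.15, 0.98]),
    2: np.array([2.10, 1.81, 1.44, 1.24, 1.12, 1.01]),
    4: np.array([1.96, 1.66, 1.24, 1.08, 1.02, 0.98]),
    8: np.array([1.59, 1.49, 1.13, 1.02, 0.96, 0.89]),
}

def calibration_factor(angle_deg, resolution=1):
    """Linearly interpolate the tabulated factor at a viewing angle."""
    return float(np.interp(angle_deg, angles, factors[resolution]))
```

A corrected weak-field pixel value would then simply be the measured value multiplied by `calibration_factor(angle, resolution)`.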
To check this behavior, we have compared HMI and GONG, being careful to match HMI's resolution with GONG's by convolving with the GONG PSF (the HMI PSF is much smaller than GONG's, so its residual presence has a negligible effect), coaligning the images, and matching their pixel scales. This is presented in Figure \ref{fig:hmi_gong_comparison_center} for disk center and in Figure \ref{fig:hmi_gong_comparison_limb} for the limb. Remarkably, very similar behavior to Table \ref{tab:calfactors} is observed: near disk center, GONG slightly underestimates relative to HMI, while at the limb the two are very close, with GONG slightly overestimating relative to HMI. There is some spread in the scatterplots, it is true, but the correspondence with Table \ref{tab:calfactors} is rather striking.
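The resolution-matching step itself can be sketched in a few lines. The following is a simplified stand-in (a Gaussian kernel replaces the measured GONG PSF, and plain block averaging replaces the actual coalignment and pixel-scale matching), intended only to illustrate the order of operations -- blur first, then rebin:

```python
import numpy as np

def gaussian_psf(size, fwhm):
    """Normalized 2-D Gaussian kernel, a stand-in for the GONG PSF."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    kern = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return kern / kern.sum()

def convolve2d_same(img, kernel):
    """Direct 'same'-size 2-D convolution (written as correlation,
    which is identical for symmetric kernels; small kernels only)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0],
                                         j:j + img.shape[1]]
    return out

def degrade(hires, psf, block):
    """Blur with the PSF, then rebin by block averaging so that the
    pixel scale matches that of the lower-resolution instrument."""
    blurred = convolve2d_same(hires, psf)
    ny, nx = blurred.shape
    trimmed = blurred[: ny - ny % block, : nx - nx % block]
    return trimmed.reshape(trimmed.shape[0] // block, block,
                           trimmed.shape[1] // block, block).mean(axis=(1, 3))
```

Block averaging preserves the mean flux per pixel area, which is the property the comparison relies upon.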
\begin{figure}
\begin{center}\includegraphics[width=\textwidth]{HMI_gong_comparison_diskcenter.pdf}\end{center}
\caption{Comparison of HMI (coaligned and resolution-matched) with GONG near disk center (highlighted area). There, GONG underestimates relative to HMI, whereas at the limb (Figure \ref{fig:hmi_gong_comparison_limb}), the reverse is true -- just as suggested by Table \ref{tab:calfactors}.}\label{fig:hmi_gong_comparison_center}
\end{figure}
\begin{figure}
\begin{center}\includegraphics[width=\textwidth]{HMI_gong_comparison_limb.pdf}\end{center}
\caption{Comparison of HMI (coaligned and resolution-matched) with GONG near the limb (highlighted area). There, GONG overestimates relative to HMI, whereas at disk center (Figure \ref{fig:hmi_gong_comparison_center}), the reverse is true -- just as suggested by Table \ref{tab:calfactors}.}\label{fig:hmi_gong_comparison_limb}
\end{figure}
The sub-unity factors for 8x GONG resolution at 70 and 75 degrees are surprising and require some comment. In the case of the convective blueshift, that effect can reverse at high inclinations, becoming a convective redshift. It may be that the same thing is happening here, but then the question is why it does not happen at lower resolutions, especially since the effect should become less significant at higher resolution. A detailed investigation of this question is beyond the scope of the present work, but the following speculative explanations can be ventured:
\begin{itemize}
\item Although the other factors are not significantly sub-unity, they are still surprisingly small, and are likely also being affected by the transition from a blueshift-like effect to a redshift-like one.
\item The improvement with resolution of a convective blueshift-like effect exhibits a thresholding behavior: if the resolution suffices to resolve the bright/dark blueshifting/redshifting (or vice versa) structures, the effect vanishes, whereas observations far from that resolution show little change in behavior. Due to projection, the $75^\circ$ case has one quarter the resolution (in the direction perpendicular to the limb) of the disk center case, so the resolution of the 8x case may fall below the threshold at the limb but above it at disk center. The factor at disk center changes significantly between 4x and 8x GONG resolution, which suggests that 8x is close to that threshold but 1x and 2x are not.
\item As mentioned above, it appears that additional effects are present which tend to cause underestimation of the field. These are likely to improve with resolution. If the 8x resolution case is not much affected by these effects but only by the (near limb) convective redshift-like effect, this would explain its overestimation of the field.
\end{itemize}
The preceding discussion has been of the `quiet sun', non-sunspot point clouds and calibration curves. These are the most important for space weather applications since they dominate the extrapolations \citep[e.g.,][]{Petrie2013}, but we also consider briefly the point clouds and corresponding curve for the sunspot. These are shown in Figure \ref{fig:sunspot_scatterplots}. Sunspot pixels are identified as areas which are dark in the pseudo-continuum intensity compared to their surroundings, and a fairly tight constraint is applied to ensure that quiet sun pixels do not contribute to the sunspot point clouds. At disk center, a very tight relationship with a slope of 1.77 is obtained. It appears that the sunspot correction factors are roughly constant between 0 and 45 degrees and then {\em increase} with latitude (opposite the behavior of the quiet sun case).
However, the quality of the point clouds quickly deteriorates with increasing inclination: even the one at $25^\circ$ is marginal, and all show pronounced features which are specific to this sunspot (the `tracks' seen in many of the point clouds correspond to specific features of the ground truth seen at differing subpixel offsets due to our tiling of the simulation). In the non-sunspot case, the point cloud represents a wide variety of small structures for which there is a typical ground truth field for any given measured field strength. That is not true for the sunspot: it is a monolithic structure, and the higher-inclination scatter plots make clear that there is no `typical' ground truth field for a given measured field strength. Therefore, calibration by a one-dimensional curve from a single sunspot is not sufficient at higher inclination. Multiple sunspot `ground truth' simulations and a more involved calibration method (e.g., neural network based) are likely to be necessary. Note also that we omit molecular lines from our radiative transfer, which is a further limitation of the sunspot calibration.
This presents a dilemma for the present discussion. We are applying a significant correction factor to the non-sunspot fields, so if the magnetograms are to remain self-consistent, the sunspots should have some correction as well. Fortunately, the low-inclination curves are the ones that are most important: sunspots are usually at low latitudes, and the synoptic maps upon which most field extrapolations are based typically use only field values near the central meridian. As already mentioned, the sunspots are a secondary contributor to large-scale field extrapolations in any case \citep[][]{Petrie2013}. So the high-inclination sunspot calibration will not have a significant effect on the data's primary use case. Since it appears that there are no issues with the disk center sunspot calibration curve, and those with the 25 degree sunspot curve are minor, we use only those sunspot curves in Section \ref{sec:calapplication}; higher inclination fields detected as sunspots have the 25 degree sunspot curve applied to them for visual continuity, but should be taken with a grain of salt. In particular, the trustworthiness of GONG sunspot measurements above 25 degrees inclination is unclear, especially for strong fields (see Figure \ref{fig:sunspot_scatterplots}), with or without a calibration curve. Our treatment of these more inclined sunspot fields is for illustrative purposes and not meant to be prescriptive: a dedicated investigation of that question is indicated.
The primary issue with higher inclination sunspot fields may be due to the presence of the polarity inversion line. The abrupt change in slope of the curve at high field strengths suggests a difference in the weighting of polarities in the measurement vs.\ that in the reduced ground truth. This can produce a drastic effect when the polarity reverses near where the field is strongest, which is exactly what happens at higher inclination, because the polarity inversion is near the middle of the sunspot. The synthetic magnetogram images (e.g., in Figures \ref{fig:example_scatterplot}, \ref{fig:cal_curve_25deg}, and \ref{fig:cal_curve_75deg}) illustrate this. The effect is exacerbated by GONG's low resolution: since the sunspot is only a few resolution elements tall at high inclination, the polarity inversion line contributes to a large fraction of the sunspot pixels. At high resolution the effect is reduced, since the fraction of pixels near enough to the polarity inversion line to be affected is much smaller. The sunspot point clouds at 2, 4, and 8 times GONG's resolution (not shown) are consistent with this, although their quality remains otherwise poor.
\begin{figure}
\begin{center}\includegraphics[width=0.49\textwidth]{gongcal_plot_sunspot_0.pdf}\includegraphics[width=0.49\textwidth]{gongcal_plot_sunspot_25.pdf}\end{center}
\begin{center}\includegraphics[width=0.49\textwidth]{gongcal_plot_sunspot_45.pdf}\includegraphics[width=0.49\textwidth]{gongcal_plot_sunspot_75.pdf}\end{center}
\caption{Point clouds and calibration curves for the sunspot at inclinations of 0 degrees (top left), 25 degrees (top right), 45 degrees (bottom left), and 75 degrees (bottom right). Small-field correction factors at low inclinations are $\sim 1.7$. Above 25 degrees inclination, these point clouds are not usable for calibration purposes (see discussion in text).}\label{fig:sunspot_scatterplots}
\end{figure}
\subsection{Application of calibration curves}\label{sec:calapplication}
To check the effects of these calibration curves, we have applied them to the period from June 8 to July 19, 2010. This is the same period investigated by \cite{LinkerOpenFlux2017}. In that paper, they find an open flux at 1 AU corresponding to $\sim 0.64$ nT average radial field strength based on GONG extrapolations, whereas the in situ observations (from OMNI) were $\sim 2$ nT. As a preliminary test of our calibration, we performed a similar experiment for this time period using a PFSS extrapolation \citep[described in][]{Petrie2013}. As previously mentioned, the 0 and 25 degree sunspot curves are applied to sunspot regions (identified as those which are dark compared to their surroundings in the pseudo-continuum, with the same criteria as in the curve fitting); any sunspot regions at inclinations over 25 degrees use the 25 degree curve. Non-sunspot regions use all six non-sunspot calibration curves. Figure \ref{fig:example_calibrated_gong} shows the calibrated magnetogram from one of these days (July 7) as an example, and compares it with the uncalibrated magnetogram.
\begin{figure}
\begin{center}\includegraphics[width=\textwidth]{bbzqa100716t1954_2panel.png}\end{center}
\caption{Application of combined calibration curves to GONG magnetogram taken at 1954 UT on July 7, 2010. Left: original GONG magnetogram; Right: curve-corrected GONG magnetogram. Plotted magnetic field range is -100 to 100 Gauss.}\label{fig:example_calibrated_gong}
\end{figure}
Our open flux comparison finds an average radial field strength at 1 AU of 0.42 nT without the calibration and 0.52 nT with it. The calibration therefore produces an increase in the open flux of $\sim 25\%$ with this fairly simple PFSS extrapolation. Our smaller values overall are likely due to the less sophisticated extrapolation, and the $25\%$ increase suggests the calibrated data might result in a GONG open flux of $\sim 0.8$ nT if the \cite{LinkerOpenFlux2017} analysis were applied to it. Figure \ref{fig:openflux_extrapolations} shows the flux distribution at the source surface resulting from our extrapolations. In addition to the overall stronger field, the calibrated extrapolation shows a more `distorted' field configuration, which would be relevant to space weather forecasting.
\begin{figure}
\begin{center}
\includegraphics[width=0.49\textwidth]{newcal_brss.png}\includegraphics[width=0.49\textwidth]{brss_nocal.png}
\end{center}
\caption{Extrapolated flux distribution at the source surface resulting from preliminary application of our calibration curves to the June 8 to July 19, 2010 time period. Left shows the flux distribution with calibration, while right shows that without it. Heavy dark line in each shows location of the current sheet.}\label{fig:openflux_extrapolations}
\end{figure}
\section{Conclusions, Discussion, and Summary}\label{sec:conclusions}
These papers \citep{PlowmanEtal_2019I,PlowmanEtal_2019II,PlowmanEtal_2019III} have a number of significant findings, which we summarize here. \cite{PlowmanEtal_2019I} provides background, an overview of our analysis, and a description of GONG and our simulation of it, and demonstrates that GONG is not subject to classical magnetograph saturation. Then, \cite{PlowmanEtal_2019II} examined the theory of magnetograph comparison and calibration, finding an issue that can arise with either one:
\begin{itemize}
\item Per-pixel comparison of magnetograms (e.g., scatter plots and histogram equating) will show apparent calibration differences unless their resolutions (including PSF) exactly match, typically appearing to show that a lower-resolution magnetograph's measurements are systematically lower than those of a higher-resolution magnetograph. The reason for this is that if the observations are dominated by unresolved and uncorrelated spatial variations (`salt \& pepper'), as they are on much of the sun, these fluctuations will cancel to a larger degree in the lower-resolution instrument due to root-$n$ averaging.
\item Per-pixel comparison of synthetic measurement and `reduced' ground truth will likewise show the same differences unless their resolutions (including PSF) exactly match. A calibration curve made from such a comparison will therefore tend to inflate the measurements, since the resolution of the ground truth is always at least as high as the synthetic measurements (usually it is much higher).
\end{itemize}
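The root-$n$ cancellation described in the first item is easy to reproduce numerically. A toy sketch with an idealized, purely uncorrelated `salt \& pepper' flux pattern (real magnetograms contain resolved structure as well, so the cancellation is less complete in practice):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_abs_flux(field, block):
    """Mean |flux density| per pixel after rebinning by block averaging."""
    n = field.shape[0] - field.shape[0] % block
    blocks = field[:n, :n].reshape(n // block, block, n // block, block)
    return float(np.abs(blocks.mean(axis=(1, 3))).mean())

# Uncorrelated +-1 'salt & pepper' flux pattern at high resolution.
hires = rng.choice([-1.0, 1.0], size=(512, 512))

fine = mean_abs_flux(hires, 1)    # full resolution: exactly 1
coarse = mean_abs_flux(hires, 8)  # 8x coarser pixels: ~1/8 of that
```

Each coarse pixel averages $n = 64$ uncorrelated values, so its expected unsigned flux is reduced by a factor of $\sqrt{n} = 8$ relative to the fine pixels.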
This difference is {\em not} indicative of a {\em real} calibration difference, however: it occurs even in the ideal case where the resolution differences do not add or remove flux from the magnetograms, only rearrange it. Calibration curves resulting in this case will therefore tend to inflate the flux. We demonstrated all of this for the ideal case by adding a PSF difference using a linear convolution -- the `linear' case.
This effect is therefore a `red herring' where magnetograph calibration or comparison is concerned: there is no need to account for it when using the magnetograms (e.g., for space weather extrapolation), only when making comparisons or (inter-)calibrations. To account for it in those cases, we advocated the following solution and demonstrated that it is effective (in the sense that the resulting calibration curves restore the ground truth fluxes to the synthetic measurements) in both the linear case and the more general case:
\begin{itemize}
\item Carefully match resolutions before making per pixel comparisons: For example, when comparing magnetograms, the higher resolution magnetogram should be reduced to the resolution (including PSF) of the lower resolution one. If the resolutions are not well characterized, both must be degraded to a significantly lower resolution than either PSF \citep[e.g.,][]{2010LambEtal_ApJ720_140,2013PietarilaEtal_SoPh282_91}, or they should be compared with a resolution-aware method \citep[e.g.,][]{VirtanenMursula2017}.
\item Similarly, the resolution of the ground truth should be reduced to that of the synthetic measurements prior to making comparisons and calibration curves as we do in this work.
\end{itemize}
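The need for this matching can be illustrated with synthetic data: regressing a simulated low-resolution measurement per pixel against unmatched full-resolution samples of the ground truth yields an apparent slope well below unity, while regressing against the properly reduced ground truth yields a slope of exactly one. The following toy sketch uses block averaging as a stand-in for a real PSF:

```python
import numpy as np

rng = np.random.default_rng(2)
b = 8   # resolution ratio between the two 'instruments'

# Ground truth: a resolved large-scale field plus unresolved,
# uncorrelated unit-amplitude 'salt & pepper' fluctuations.
coarse_pattern = rng.normal(0.0, 1.0, (64, 64))
largescale = np.kron(coarse_pattern, np.ones((b, b)))
truth = largescale + rng.choice([-1.0, 1.0], size=largescale.shape)

# Synthetic low-resolution measurement: block average of the truth.
meas = truth.reshape(64, b, 64, b).mean(axis=(1, 3))

def fit_slope(x, y):
    """Least-squares slope of y against x (both mean-subtracted)."""
    dx = x - x.mean()
    dy = y - y.mean()
    return float((dx * dy).sum() / (dx * dx).sum())

# Unmatched: per-pixel comparison against full-resolution samples.
s_unmatched = fit_slope(truth[::b, ::b], meas)
# Matched: reduce the ground truth to the measurement's resolution first.
s_matched = fit_slope(truth.reshape(64, b, 64, b).mean(axis=(1, 3)), meas)
```

No flux has been added or removed anywhere in this experiment, yet the unmatched comparison makes the low-resolution `instrument' appear to underestimate the field by roughly a factor of two.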
These differences vanish when the magnetograms are compared at large spatial scales and have no effect on the large-scale fluxes which are most important for space weather. This likely explains why, for example, \cite{Riley_comparison2014} finds significant differences in per-pixel comparisons between GONG and HMI magnetograms, yet when \cite{LinkerOpenFlux2017} compare extrapolations made with these instruments, the open fluxes are almost identical: the per-pixel comparisons of \cite{Riley_comparison2014} differ because the resolutions are not exactly matched, but the fluxes measured by both instruments are in fact very similar, leading to very similar extrapolated open fluxes. In this paper, we made a preliminary comparison of fluxes between HMI and GONG with resolution matching and find that the fluxes are indeed very similar. Thus it appears likely that magnetograph measurements are in better agreement than the existing literature would indicate, and that a significant fraction of the reported apparent disagreement between magnetograms is caused by this effect.
\noindent Some works in the literature \citep[e.g.,][]{2012LiuEtal_SoPh279_295L} have already employed a similar resolution-matching method, demonstrating that the need has already been recognized and that the solution has a peer reviewed track record. In this paper, we have employed this approach when constructing our calibration curves from the full GONG simulator results. We now turn to those results:
As with the preliminary results in \cite{PlowmanEtal_2019II}, we find that there is also a {\em real} effect which causes synthetic GONG measurements to underestimate their fluxes by a factor of over 2 compared to the flux {\em over the same area} in the MURaM ground truth. In \cite{PlowmanEtal_2019II}, we explained that this effect is likely similar to the `convective blueshift': the unresolved granulation pattern biases the measurements toward the brighter regions (the granule centers), which have weak fields. This holds for the `quiet sun' (non-sunspot) regions at disk center; the factor drops to near 1 at the limb, which is also consistent with the convective blueshift. We investigated the effects of resolution and found that the effect persists as long as the resolution is too low to resolve the granulation pattern. The situation with the sunspot calibration is more complex; those results are summarized as follows:
\begin{itemize}
\item A clear calibration curve relationship is found at disk center. For weak fields, the slope is $\sim 1.73$, and drops to $\sim 1.5$ for stronger fields. Thus the miscalibration at disk center is somewhat less for sunspots than for quiet sun (non-sunspots).
\item Unlike the non-sunspot case, the degree of underestimation appears to {\em increase} at large inclinations.
\item The quality of the scatter plots deteriorates rapidly with inclination, such that at 45 degrees or above there is no clear `typical' ground truth field for a given measured field strength. Thus, the point cloud and curve approach is unsuitable for calibration of sunspot fields above $\sim 25$ degrees.
\item For the same reason, sunspot magnetograms at high inclination may not be trustworthy: there is not a clear monotonic relationship between the measured field strength and the ground truth field that produced it. Caution should be exercised in the use and interpretation of these measurements.
\item Because this effect is largely absent at zero inclination, concurrent observations from Earth and a vantage point away from the Earth-sun line are likely to prove invaluable in understanding and correcting for these effects.
\end{itemize}
These results suggest that all synoptic magnetograms are, to some extent, underestimating the real solar magnetic fluxes, except at high inclination in the quiet sun. They will therefore all tend to underestimate the extrapolated fields, as has been found in the literature; a preliminary check using this new calibration finds an overall increase of 25\% in the open flux. We have therefore accounted for a significant fraction of the factor of $\sim 2$ needed to make extrapolations match the {\em in situ} flux observations.
To summarize, our results indicate that the large apparent disagreements found in some previous comparisons of magnetographs do not reflect a real difference in their relative flux calibrations and are therefore a red herring. The open flux discrepancies, on the other hand, are due in part to a real effect that causes magnetographs to underestimate their fluxes, by more than a factor of two in some cases. As expected, magnetograph calibration does not appear to be able to account for all of the missing flux alone, and the other usual suspects are still in play: longitudinal-to-radial conversion, the source surface assumption, linear vs. nonlinear field extrapolations in general, etc.
Finally, we point out that observations from NSO's new Daniel K. Inouye Solar Telescope (DKIST) will resolve the granulation pattern, and should be unaffected by this convective blueshift-like effect. As a result, we predict that DKIST Zeeman effect measurements (e.g., from ViSP) of the magnetic flux at disk center will be a factor of $\sim 2$ higher than HMI (or GONG) measurements of the same fluxes (making sure to take the much larger integration area of HMI or GONG into account by integrating the DKIST fluxes over those areas). DKIST measurements will also be very interesting for investigating these effects at high latitude.
These papers have been the first phase of a research project funded under a NASA space weather `operations to research' (O2R) grant whose specific goal is to improve the accuracy of solar wind prediction models. In subsequent phases we address polar field measurements in more detail, as well as apply the `re-calibrated' GONG magnetograms to WSA/Enlil model runs for known solar wind and CME events that were particularly poorly forecast.
\subsection*{Acknowledgements}
This work was funded in part by the NASA Heliophysics Space Weather Operations-to-Research program, grant number 80NSSC19K0005, and by a University of Colorado at Boulder Chancellor’s Office Grand Challenge grant for the Space Weather Technology, Research, and Education Center (SWx TREC).
We acknowledge contributions, discussion, information, and insight from a variety of sources: Gordon Petrie, Jack Harvey, Valentin Martinez Pillet, Sanjay Gosain, and Frank Hill, among others.
\section*{Disclosure of Potential Conflicts of Interest}
The authors declare that they have no conflicts of interest.
\bibliographystyle{apj}
\section{Introduction}\label{sec:intro}
Wavelet techniques have a long-standing history in the field
of data science. Applications comprise
signal processing, image analysis and machine learning,
see for instance \cite{Chui,Dahmen,Daubechies,Mallat,Mallat2016}
and the references therein. Assuming a signal generated
by some function, the pivotal idea of wavelet
techniques is the splitting of this function into its
respective contributions with respect to a hierarchy of scales.
Such a multiscale ansatz starts from an approximation
on a relatively coarse scale and successively resolves details
at the finer scales. Hence, compression and adaptive
representation are inherently built into this ansatz.
The transformation of a given signal into its wavelet
representation and the inverse transformation can be performed
with linear cost in terms of the degrees of freedom.
Classically, wavelets are constructed by refinement relations
and therefore require a sequence of nested approximation
spaces which are copies of each other, except for a different scaling.
This restricts the concept of wavelets to structured data.
Some adaption of the general principle is possible in
order to treat intervals, bounded domains and surfaces,
compare \cite{DKU,Quak,HS,STE} for example. The seminal
work \cite{TW03} by Tausch and White overcomes this obstruction
by constructing wavelets as suitable linear combinations
of functions at a given fine scale. In particular,
the stability of the resulting basis,
which is essential for numerical algorithms,
is guaranteed by orthonormality.
In this article, we take the concept of wavelets to the next
level and consider discrete data. To this end, we modify the
construction of Tausch and White accordingly and construct
a multiscale basis which consists of localized and discrete
signed measures. Inspired by the term wavelet, we call such
signed measures \emph{samplets}.
Samplets can be constructed
such that their associated measure integrals vanish
for polynomial integrands.
If this is the case for all polynomials of total degree less
than or equal to \(q\), we say that the samplets
have \emph{vanishing moments} of order $q+1$.
We remark that lowest order samplets, i.e.\ \(q=0\), have
been considered earlier for data compression in \cite{RE11}.
\textcolor{red}{Another concept for constructing multiscale
bases on data sets is that of \emph{diffusion wavelets}, which employ
a diffusion operator to construct the multiscale hierarchy,
see \cite{CM06}.
In contrast to diffusion wavelets, however,
the construction of samplets is solely based on discrete concepts
and can always be performed with linear cost
for a balanced cluster tree, even for non-uniformly distributed
data.}
When representing discrete data by samplets, the vanishing
moments result in a fast decay of the corresponding samplet
coefficients with respect to the support size wherever the data
are smooth. This straightforwardly enables data compression.
In contrast, non-smooth regions in the data are indicated by
large samplet coefficients, which in turn enables singularity
detection and extraction. However,
of polynomials. Indeed, it would easily be possible to adapt
the construction of samplets to other primitives with other
desired properties.
As a further application of samplets, we consider the
compression of kernel matrices as they arise in kernel
based machine learning and scattered data approximation,
compare \cite{Fasshauer2007,HSS08,Rasmussen2006,
Schaback2006,Wendland2004,Williams1998}. Kernel
matrices are typically densely populated since the underlying
kernels are nonlocal. Nonetheless, these kernels
are usually \emph{asymptotically smooth}, meaning
that they behave like smooth functions apart
from the diagonal. A discretization of an asymptotically
smooth kernel with respect to a samplet basis with
vanishing moments results in quasi-sparse kernel
matrices, which means that they can be compressed such
that only a sparse matrix remains, compare \cite{BCR,DHS,
DPS,PS,SCHN}. In particular, it has been demonstrated in
\cite{HM} that nested dissection, see \cite{Geo73,LRT79},
is applicable in order to obtain a fill-in reducing reordering
of the matrix. This reordering in turn allows for the
rapid factorization of the system matrix by the Cholesky
factorization without introducing additional errors.
\textcolor{red}{The asymptotic smoothness of the kernels
is also exploited by cluster methods like the fast multipole
method, see \cite{GR,RO,YBZ04} and particularly \cite{MXTY+2015}
for high-dimensional data. Nonetheless, these methods do
not allow for direct inversion, which is advantageous for example
when simulating Gaussian random fields. Another approach, which
is more in line with the work presented here, is the gamblet
based approach for the compression of the kernel matrix
in \cite{SSO21}. However, the construction of \emph{gamblets}
is more involved in comparison to ours, as
they need to be adapted to the specific operator at
hand, compare \cite{Owh17}.}
The rest of this article is organized as follows.
In Section~\ref{section:Samplets}, the novel concept of
samplets is introduced. The subsequent Section~\ref{sct:construction}
is devoted to the actual construction of samplets and to their
properties. The change of basis by means of the discrete samplet
transform is the topic of Section~\ref{sec:FST}. In Section
\ref{sec:Num1}, we demonstrate the capabilities of samplets
for data compression and smoothing for data in one, two and
three dimensions. Section~\ref{sec:kernelCompression} deals
with the samplet compression of kernel matrices.
In particular, we also develop an interpolation-based \(\Hcal^2\)-matrix
approach in order to efficiently assemble the compressed
kernel matrix. Corresponding numerical results are then
presented in Section \ref{sec:Num2}. Finally, in
Section~\ref{sec:Conclusion}, we state
concluding remarks.
\section{Samplets}
\label{section:Samplets}
Let \(X\mathrel{\mathrel{\mathop:}=}\{{\boldsymbol x}_1,\ldots,{\boldsymbol x}_N\}\subset\Omega\)
denote a set of points within some region \(\Omega\subset\Rbb^d\).
Associated to each point \({\boldsymbol x}_i\), we introduce
the Dirac measure
\[
\delta_{{\boldsymbol x}_i}({\boldsymbol x})\mathrel{\mathrel{\mathop:}=}
\begin{cases}
1,&\text{if }{\boldsymbol x}={\boldsymbol x}_i\\
0,&\text{otherwise}.
\end{cases}
\]
With a slight abuse of notation, we also introduce the
point evaluation functional
\[
(f,\delta_{{\boldsymbol x}_i})_\Omega=\int_\Omega
f({\boldsymbol x})\delta_{{\boldsymbol x}_i}({\boldsymbol x})\d{\boldsymbol x}\mathrel{\mathrel{\mathop:}=}
\int_{\Omega}f({\boldsymbol x})\delta_{{\boldsymbol x}_i}(\d{\boldsymbol x})
=f({\boldsymbol x}_i),
\]
where $f\in C(\Omega)$ is a continuous function.
Next, we define the space
\(V\mathrel{\mathrel{\mathop:}=}\spn\{\delta_{{\boldsymbol x}_1},\ldots,\delta_{{\boldsymbol x}_N}\}\)
as the \(N\)-dimensional vector space
of all discrete and finite signed measures supported at the
points in \(X\).
An inner product on \(V\) is defined by
\[
\langle u,v\rangle_V\mathrel{\mathrel{\mathop:}=}\sum_{i=1}^N u_iv_i,\quad\text{where }
u=\sum_{i=1}^Nu_i\delta_{{\boldsymbol x}_i},\ v=\sum_{i=1}^Nv_i\delta_{{\boldsymbol x}_i}.
\]
Indeed, the space \(V\) is isometrically isomorphic to \(\Rbb^N\)
endowed with the canonical inner product.
Similar to the
idea of a multiresolution analysis in the construction of
wavelets, we introduce the spaces \(V_j\mathrel{\mathrel{\mathop:}=}\spn{\boldsymbol \Phi_j}\), where
\[
{\boldsymbol\Phi_j}\mathrel{\mathrel{\mathop:}=}\{\varphi_{j,k}:k\in\Delta_j\}.
\]
Here, $\Delta_j$ denotes a suitable
index set with cardinality $|\Delta_j|=\dim V_j$ and
\(j\in\Nbb\) is referred to as \emph{level}.
Moreover, each basis function \(\varphi_{j,k}\) is a linear
combination of Dirac measures
\(\delta_{{\boldsymbol x}_{i_1}},\ldots,\delta_{{\boldsymbol x}_{i_p}}\)
such that
\[
\langle \varphi_{j,k},\varphi_{j,k'}\rangle_V=0\quad\text{for }k\neq k'
\]
and
\[
\diam(\supp\varphi_{j,k})\mathrel{\mathrel{\mathop:}=}
\diam(\{{\boldsymbol x}_{i_1},\ldots,{\boldsymbol x}_{i_p}\})\sim 2^{-j/d}.
\]
For the sake of notational convenience, we shall identify bases
by row vectors,
such that, for ${\boldsymbol v}_j
= [v_{j,k}]_{k\in\Delta_j}$, the corresponding measure
can simply be written as a dot product according to
\[
v_j = \mathbf\Phi_j{\boldsymbol v}_j=\sum_{k\in\Delta_j} v_{j,k}\varphi_{j,k}.
\]
Rather than using the multiresolution
analysis corresponding to the hierarchy
\[
V_0\subset V_1\subset\cdots\subset V,
\]
the idea of samplets is
to keep track of the increment of information
between two consecutive levels $j$ and $j+1$. Since we have
$V_{j}\subset V_{j+1}$, we may decompose
\begin{equation}\label{eq:decomposition}
V_{j+1} = V_j\overset{\perp}{\oplus} S_j
\end{equation}
by using the \emph{detail space} $S_j$. Of practical interest
is the particular choice of the basis of the detail space $S_j$ in $V_{j+1}$.
This basis is assumed to be orthonormal as well and will be denoted by
\[
{\boldsymbol\Sigma}_j = \{\sigma_{j,k}:k\in\nabla_j\mathrel{\mathrel{\mathop:}=}\Delta_{j+1}
\setminus \Delta_j\}.
\]
Recursively applying the decomposition \eqref{eq:decomposition},
we see that the set
\[
\mathbf\Sigma_J = {\boldsymbol\Phi}_0 \bigcup_{j=0}^{J-1}{\boldsymbol\Sigma}_j
\]
forms a basis of \(V_J\mathrel{\mathrel{\mathop:}=} V\), which we call a \emph{samplet basis}.
In order to employ samplets for the compression of data and
kernel matrices, \textcolor{red}{it is favourable}
that the measures $\sigma_{j,k}$
are localized with respect to the corresponding level $j$, i.e.\
\begin{equation}\label{eq:localized}
\diam(\supp\sigma_{j,k})\sim 2^{-j/d},
\end{equation}
\textcolor{red}{(although this is not a requirement in our construction),}
and that they are stable, i.e.\
\[
\langle \sigma_{j,k},\sigma_{j,k'}\rangle_V=0\quad\text{for }k\neq k'.
\]
Moreover, an essential ingredient is the vanishing moment
condition, meaning that
\begin{equation}\label{eq:vanishingMoments}
(p,\sigma_{j,k})_\Omega
= 0\quad \text{for all}\ p\in\Pcal_q(\Omega),
\end{equation}
where \(\Pcal_q(\Omega)\) denotes the space of all polynomials
with total degree at most \(q\).
We then say that the samplets have $q+1$ \emph{vanishing
moments}.
\begin{remark}
The concept of samplets has a very natural interpretation
in the context of reproducing kernel Hilbert spaces, compare
\cite{Aronszajn50}. If \((\Hcal,\langle\cdot,\cdot\rangle_{\Hcal})\)
is a reproducing kernel Hilbert space with reproducing kernel
\(\mathcal{K}\), then there holds
\((f,\delta_{{\boldsymbol x}_i})_\Omega
=\langle \mathcal{K}({\boldsymbol x}_i,\cdot),f\rangle_{\Hcal}\). Hence,
the samplet
\(\sigma_{j,k}=\sum_{\ell=1}^p\beta_\ell\delta_{{\boldsymbol x}_{i_\ell}}\)
can directly be identified with the function
\[
\hat{\sigma}_{j,k}\mathrel{\mathrel{\mathop:}=}
\sum_{\ell=1}^p\beta_\ell \mathcal{K}({\boldsymbol x}_{i_\ell},\cdot)\in\mathcal{H}.
\]
In particular, it holds
\[
\langle\hat{\sigma}_{j,k},h\rangle_\Hcal=0
\]
for any \(h\in\Hcal\) which satisfies
\(h|_{\supp\sigma_{j,k}}\in\Pcal_q(\supp\sigma_{j,k})\).
\end{remark}
\section{Construction of samplets}\label{sct:construction}
\subsection{Cluster tree}
In order to construct samplets with the desired properties,
especially vanishing moments, cf.\ \eqref{eq:vanishingMoments},
we shall transfer the wavelet construction of Tausch and
White from \cite{TW03} into our setting. The first step is to
construct subspaces of signed measures with localized
supports. To this end, we perform a hierarchical
clustering on the set \(X\).
\begin{definition}\label{def:cluster-tree}
Let $\mathcal{T}=(P,E)$ be a tree with vertices $P$ and edges $E$.
We define its set of leaves as
\[
\mathcal{L}(\mathcal{T})\mathrel{\mathrel{\mathop:}=}\{\nu\in P\colon\nu~\text{has no sons}\}.
\]
The tree $\mathcal{T}$ is a \emph{cluster tree} for
the set $X=\{{\boldsymbol x}_1,\ldots,{\boldsymbol x}_N\}$, iff
the set $X$ is the \emph{root} of $\mathcal{T}$ and
all $\nu\in P\setminus\mathcal{L}(\mathcal{T})$
are disjoint unions of their sons.
The \emph{level} \(j_\nu\) of $\nu\in\mathcal{T}$ is its distance from the root,
i.e.\ the number of son relations that are required for traveling from
$X$ to $\nu$. The \emph{depth} \(J\) of \(\Tcal\) is the maximum level
of all clusters. We define the set of clusters
on level $j$ as
\[
\mathcal{T}_j\mathrel{\mathrel{\mathop:}=}\{\nu\in\mathcal{T}\colon \nu~\text{has level}~j\}.
\]
Finally, the \emph{bounding box} $B_{\nu}$ of \(\nu\)
is defined as the smallest axis-parallel cuboid that
contains all its points.
\end{definition}
There exist several possibilities for the choice of a
cluster tree for the set \(X\). However, within this article,
we will exclusively consider binary trees and remark that it is of course
possible to consider other options, such as
\(2^d\)-trees, with the obvious modifications.
Definition~\ref{def:cluster-tree} provides a hierarchical cluster
structure on the set \(X\). Even so, it does not provide guarantees
for the sizes and cardinalities of the clusters.
Therefore, we introduce the concept
of a balanced binary tree.
\begin{definition}
Let $\Tcal$ be a cluster tree on $X$ with depth $J$.
$\Tcal$ is called a \emph{balanced binary tree}, if all
clusters $\nu$ satisfy the following conditions:
\begin{enumerate}
\item
The cluster $\nu$ has exactly two sons
if $j_{\nu} < J$. It has no sons if $j_{\nu} = J$.
\item
It holds $|\nu|\sim 2^{J-j_{\nu}}$.
\end{enumerate}
\end{definition}
A balanced binary tree can be constructed by \emph{cardinality
balanced clustering}. This means that the root cluster
is split into two son clusters of identical (or similar)
cardinality. This process is repeated recursively for the
resulting son clusters until their cardinality falls below a
certain threshold.
For the subdivision, the cluster's bounding box
is split along its longest edge such that the
resulting two boxes both contain an equal number of points.
Thus, as the cluster cardinality halves with each level,
we obtain $\mathcal{O}(\log N)$ levels in total.
The total cost for constructing the cluster tree
is therefore $\mathcal{O}(N \log N)$. Finally, we remark that a
balanced tree is only required to guarantee the cost bounds
for the presented algorithms. The error and compression estimates
we shall present later on are robust in the sense that they
are formulated directly in terms of the actual cluster sizes
rather than the introduced cluster level.
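To illustrate the cardinality balanced clustering described above, the following minimal Python sketch (our own illustration, not the authors' implementation) builds such a tree for a point cloud, splitting each cluster's bounding box along its longest edge into two halves of equal cardinality.

```python
import numpy as np

def build_cluster_tree(points, indices=None, leaf_size=32, level=0):
    """Cardinality balanced binary cluster tree (illustrative sketch).

    Every node stores its point indices, its level, and its two sons;
    a node becomes a leaf once its cardinality drops to leaf_size."""
    if indices is None:
        indices = np.arange(len(points))
    node = {"indices": indices, "level": level, "sons": None}
    if len(indices) <= leaf_size:
        return node
    # split along the longest edge of the cluster's bounding box
    box = points[indices]
    axis = int(np.argmax(box.max(axis=0) - box.min(axis=0)))
    order = indices[np.argsort(points[indices, axis])]
    mid = len(order) // 2
    node["sons"] = [
        build_cluster_tree(points, order[:mid], leaf_size, level + 1),
        build_cluster_tree(points, order[mid:], leaf_size, level + 1),
    ]
    return node

def leaves(node):
    """Collect the leaf clusters of the tree."""
    if node["sons"] is None:
        return [node]
    return [leaf for son in node["sons"] for leaf in leaves(son)]
```

Since the cluster cardinality halves with each level, the depth is $\mathcal{O}(\log N)$ and the leaves partition $X$ into clusters of comparable size.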
\subsection{Multiscale hierarchy}
Having a cluster tree at hand, we
shall now construct a samplet basis on the resulting
hierarchical structure. We begin by introducing a \emph{two-scale}
transform between basis elements on a cluster $\nu$ of level $j$.
To this end, we create \emph{scaling functions} $\mathbf{\Phi}_{j}^{\nu}
= \{ \varphi_{j,k}^{\nu} \}$ and \emph{samplets} $\mathbf{\Sigma}_{j}^{\nu}
= \{ \sigma_{j,k}^{\nu} \}$ as linear combinations of the scaling
functions $\mathbf{\Phi}_{j+1}^{\nu}$ of $\nu$'s son clusters.
This results in the \emph{refinement relation}
\begin{equation}\label{eq:refinementRelation}
[ \mathbf{\Phi}_{j}^{\nu}, \mathbf{\Sigma}_{j}^{\nu} ]
\mathrel{\mathrel{\mathop:}=}
\mathbf{\Phi}_{j+1}^{\nu}
{\boldsymbol Q}_j^{\nu}=
\mathbf{\Phi}_{j+1}^{\nu}
\big[ {\boldsymbol Q}_{j,\Phi}^{\nu},{\boldsymbol Q}_{j,\Sigma}^{\nu}\big].
\end{equation}
In order to provide both vanishing moments and orthonormality,
the transformation \({\boldsymbol Q}_{j}^{\nu}\) has to be
appropriately constructed. For this purpose, we consider an orthogonal
decomposition of the \emph{moment matrix}
\[
{\boldsymbol M}_{j+1}^{\nu}\mathrel{\mathrel{\mathop:}=}
\begin{bmatrix}({\boldsymbol x}^{\boldsymbol 0},\varphi_{j+1,1})_\Omega&\cdots&
({\boldsymbol x}^{\boldsymbol 0},\varphi_{j+1,|\nu|})_\Omega\\
\vdots & & \vdots\\
({\boldsymbol x}^{\boldsymbol\alpha},\varphi_{j+1,1})_\Omega&\cdots&
({\boldsymbol x}^{\boldsymbol\alpha},\varphi_{j+1,|\nu|})_\Omega
\end{bmatrix}=
[({\boldsymbol x}^{\boldsymbol\alpha},\mathbf{\Phi}_{j+1}^{\nu})_\Omega]_{|\boldsymbol\alpha|\le q}
\in\Rbb^{m_q\times|\nu|},
\]
where
\begin{equation}\label{eq:mq}
m_q\mathrel{\mathrel{\mathop:}=}\sum_{\ell=0}^q{\ell+d-1\choose d-1}={q+d\choose d}\leq(q+1)^d
\end{equation}
denotes the dimension of \(\Pcal_q(\Omega)\).
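The dimension count \eqref{eq:mq} is easy to verify numerically; the following snippet (a sanity check only) confirms the identity and the crude upper bound for a range of dimensions and degrees.

```python
from math import comb

def m(q, d):
    """Number of monomials of total degree at most q in d variables."""
    return sum(comb(ell + d - 1, d - 1) for ell in range(q + 1))

for d in range(1, 6):
    for q in range(7):
        assert m(q, d) == comb(q + d, d)   # hockey-stick identity
        assert m(q, d) <= (q + 1) ** d     # crude upper bound

print(m(2, 2))  # dimension of P_2 in two variables: 6
```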
In the original construction by
Tausch and White, the matrix \({\boldsymbol Q}_{j}^{\nu}\) is obtained
by a singular value decomposition of \({\boldsymbol M}_{j+1}^{\nu}\).
For the construction of samplets, we follow the idea
from \cite{AHK14} and rather
employ the QR decomposition, which has the advantage of generating
samplets with an increasing number of vanishing moments.
It holds
\begin{equation}\label{eq:QR}
({\boldsymbol M}_{j+1}^{\nu})^\intercal = {\boldsymbol Q}_j^\nu{\boldsymbol R}
\mathrel{=\mathrel{\mathop:}}\big[{\boldsymbol Q}_{j,\Phi}^{\nu} , {\boldsymbol Q}_{j,\Sigma}^{\nu}\big]{\boldsymbol R}.
\end{equation}
Consequently, the moment matrix
for the cluster's own scaling functions and samplets is then
given by
\begin{equation}\label{eq:vanishingMomentsQR}
\begin{aligned}
\big[{\boldsymbol M}_{j,\Phi}^{\nu}, {\boldsymbol M}_{j,\Sigma}^{\nu}\big]
&= \left[({\boldsymbol x}^{\boldsymbol\alpha},[\mathbf{\Phi}_{j}^{\nu},
\mathbf{\Sigma}_{j}^{\nu}])_\Omega\right]_{|\boldsymbol\alpha|\le q}
= \left[({\boldsymbol x}^{\boldsymbol\alpha},\mathbf{\Phi}_{j+1}^{\nu}[{\boldsymbol Q}_{j,\Phi}^{\nu}
, {\boldsymbol Q}_{j,\Sigma}^{\nu}])_\Omega
\right]_{|\boldsymbol\alpha|\le q} \\
&= {\boldsymbol M}_{j+1}^{\nu} [{\boldsymbol Q}_{j,\Phi}^{\nu} , {\boldsymbol Q}_{j,\Sigma}^{\nu} ]
= {\boldsymbol R}^\intercal.
\end{aligned}
\end{equation}
As ${\boldsymbol R}^\intercal$ is a lower triangular matrix, the first $k-1$
entries in its $k$-th column are zero. This corresponds to
$k-1$ vanishing moments for the $k$-th function generated
by the transformation
${\boldsymbol Q}_{j}^{\nu}=[{\boldsymbol Q}_{j,\Phi}^{\nu} , {\boldsymbol Q}_{j,\Sigma}^{\nu} ]$.
By defining the first $m_{q}$ functions as scaling functions and
the remaining as samplets, we obtain samplets with vanishing
moments at least up to order $q+1$. By increasing
the polynomial degree to \(\hat{q}\geq q\) at the leaf clusters
such that \(m_{\hat{q}}\geq 2m_q\), we can even construct
samplets with an increasing number of vanishing moments up to order \(\hat{q}+1\)
without any additional cost.
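To make the QR construction concrete, here is a small numerical sketch (our own illustration, for a single leaf cluster with Dirac scaling functions, $d=2$ and $q=1$). The moment matrix is decomposed as in \eqref{eq:QR}; the trailing columns of ${\boldsymbol Q}$ then carry the samplet coefficients and annihilate all monomials of total degree at most $q$.

```python
import numpy as np
from itertools import combinations_with_replacement

def moment_matrix(points, q):
    """Moment matrix [(x^alpha, delta_{x_i})]_{|alpha| <= q, i} for the
    Dirac scaling functions of a leaf cluster."""
    d = points.shape[1]
    rows = []
    for total in range(q + 1):
        for alpha in combinations_with_replacement(range(d), total):
            mono = np.ones(len(points))
            for ax in alpha:
                mono = mono * points[:, ax]
            rows.append(mono)
    return np.vstack(rows)                    # shape m_q x |nu|

rng = np.random.default_rng(1)
pts = rng.random((20, 2))                     # a leaf cluster with 20 points
M = moment_matrix(pts, q=1)                   # m_q = 3 for d = 2, q = 1
Q, R = np.linalg.qr(M.T, mode="complete")     # M^T = Q R as in the text
Q_phi, Q_sigma = Q[:, :3], Q[:, 3:]           # scaling functions / samplets
```

Indeed, `M @ Q` equals ${\boldsymbol R}^\intercal$, a lower triangular matrix, so the $k$-th generated function has $k-1$ vanishing moments; in particular the samplet block `M @ Q_sigma` vanishes identically.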
\begin{remark}
We remark that the samplet construction using vanishing moments
is inspired by the classical wavelet theory. However, it is easily
possible to adapt the construction to other primitives of interest.
\end{remark}
\begin{remark}
\label{remark:introCQ}
Each cluster has at most a constant number of scaling
functions and samplets: For a particular cluster $\nu$, their number
is identical to the cardinality of $\mathbf{\Phi}_{j+1}^{\nu}$. For leaf
clusters, this number is bounded by the leaf size.
For non-leaf clusters, it is bounded by the number of scaling functions
provided from all its son clusters. As there are at most two
son clusters with a maximum of $m_q$ scaling functions each,
we obtain the bound $2 m_q$ for non-leaf clusters. Note that,
if $\mathbf{\Phi}_{j+1}^{\nu}$ has at most $m_q$ elements, a
cluster will not provide any samplets at all and all functions
will be considered as scaling functions.
\end{remark}
For leaf clusters, we define the scaling functions by
the Dirac measures supported at the points \({\boldsymbol x}_i\), i.e.\
$\mathbf{\Phi}_J^{\nu}\mathrel{\mathrel{\mathop:}=}\{ \delta_{{\boldsymbol x}_i} : {\boldsymbol x}_i\in\nu \}$,
to account for the lack of son clusters that could provide scaling functions.
The scaling functions of all clusters on a specific level $j$
then generate the spaces
\begin{equation}\label{eq:Vspaces}
V_{j}\mathrel{\mathrel{\mathop:}=} \spn\{ \varphi_{j,k}^{\nu} : k\in \Delta_j^\nu,\ \nu \in\Tcal_{j} \},
\end{equation}
while the samplets span the detail spaces
\begin{equation}\label{eq:Wspaces}
S_{j}\mathrel{\mathrel{\mathop:}=}
\spn\{ \sigma_{j,k}^{\nu} : k\in \nabla_j^\nu,\ \nu \in \Tcal_{j} \} =
V_{j+1}\overset{\perp}{\ominus} V_j.
\end{equation}
Combining the scaling functions of the root cluster with all
clusters' samplets gives rise to the samplet basis
\begin{equation}\label{eq:Wbasis}
\mathbf{\Sigma}_{N}\mathrel{\mathrel{\mathop:}=}\mathbf{\Phi}_{0}^{X}
\cup \bigcup_{\nu \in \Tcal} \mathbf{\Sigma}_{j}^{\nu}.
\end{equation}
Writing $\mathbf{\Sigma}_{N}
= \{ \sigma_{k} : 1 \leq k \leq N \}$, where
$\sigma_{k}$ is either a samplet or a scaling function
at the root cluster, we can establish a unique indexing of
all the signed measures comprising the samplet
basis. The indexing induces an order on the
basis set $\mathbf{\Sigma}_{N}$, which we choose
to be level-dependent: Samplets belonging to a particular
cluster are grouped together, with those on finer levels
having larger indices.
\begin{remark}\label{remark:waveletLeafSize}
We remark that the samplet basis on a balanced
cluster tree can be computed in cost $\mathcal{O}(N)$,
we refer to \cite{AHK14} for a proof.
\end{remark}
\subsection{Properties of the samplets}
By construction, the samplets satisfy the following
properties, which can directly be inferred from
the corresponding results in \cite{HKS05,TW03}.
\begin{theorem}\label{theo:waveletProperties}
The spaces $V_{j}$ defined in equation \eqref{eq:Vspaces}
exhibit the desired multiscale hierarchy
\[
V_0\subset V_1\subset\cdots\subset V_J = V,
\]
where the corresponding complement spaces $S_{j}$ from \eqref{eq:Wspaces}
satisfy $V_{j+1}=V_j\overset{\perp}{\oplus} S_{j}$ for all $j=0,1,\ldots,
J-1$. The associated samplet basis $\mathbf{\Sigma}_{N}$ defined in
\eqref{eq:Wbasis} constitutes an orthonormal basis of $V$.
In particular:
\begin{enumerate}
\item The number of all samplets on level $j$ behaves like $2^j$.
\item The samplets have $q+1$ vanishing moments.
\item
Each samplet is supported in a specific cluster $\nu$.
If the points in $X$ are uniformly distributed, then the
diameter of the cluster satisfies $\diam(\nu)\sim
2^{-j_\nu/d}$ and it holds \eqref{eq:localized}.
\end{enumerate}
\end{theorem}
\begin{remark}
Due to $S_j\subset V$ and $V_0\subset V$,
we conclude that each samplet is a linear combination of the
Dirac measures supported at the points in $X$. Especially, the
related coefficient vectors ${\boldsymbol\omega}_{j,k}$ in
\begin{equation}\label{eq:coefficientVectorsOfWavelets}
\sigma_{j,k} = \sum_{i=1}^{N}
\omega_{j,k,i} \delta_{{\boldsymbol x}_i} \quad
\text{and} \quad \varphi_{0,k} = \sum_{i=1}^{N} \omega_{0,k,i} \delta_{{\boldsymbol x}_i}
\end{equation}
are pairwise orthonormal with respect to the inner
product on \(\Rbb^N\).
\end{remark}
Later on, the following bound on the $\|\cdot\|_1$-norm of the
samplets' coefficient vectors will
be essential:
\begin{lemma}\label{lemma:waveletL1Norm}
The coefficient vector ${\boldsymbol\omega}_{j,k}=\big[\omega_{j,k,i}\big]_i$ of
the samplet $\sigma_{j,k}$ on the cluster $\nu$ fulfills
\begin{equation}\label{eq:ell1-norm}
\|{\boldsymbol\omega}_{j,k}\|_{1}\le\sqrt{|\nu|}.
\end{equation}
The same holds for the scaling functions $\varphi_{j,k}$.
\end{lemma}
\begin{proof}
It holds $\|{\boldsymbol\omega}_{j,k}\|_{2}=1$. Hence,
the assertion follows immediately from the Cauchy-Schwarz
inequality
\[
\|{\boldsymbol\omega}_{j,k}\|_{1}\le\sqrt{|\nu|}\|{\boldsymbol\omega}_{j,k}\|_{2}
=\sqrt{|\nu|}.
\]
\end{proof}
The key for data compression and singularity detection
is the following estimate which shows that the samplet
coefficients decay with respect to the samplet's level
provided that the data result from the evaluation of a smooth function.
Therefore, in case of smooth data, the samplet
coefficients are small and can be set to zero without
compromising the accuracy. Vice versa, a large samplet
coefficient reflects that the data are singular in the
region of the samplet's support.
\begin{lemma}\label{lemma:decay}
Let $f\in C^{q+1}(\Omega)$. Then, it holds for
a samplet $\sigma_{j,k}$ supported
on the cluster $\nu$ that
\begin{equation}\label{eq:decay}
|(f,\sigma_{j,k})_\Omega|\le
\diam(\nu)^{q+1}\|f\|_{C^{q+1}(\Omega)}\|{\boldsymbol\omega}_{j,k}\|_{1}.
\end{equation}
\end{lemma}
\begin{proof}
For ${\boldsymbol x}_0\in\nu$, the Taylor expansion of $f$ yields
\[
f({\boldsymbol x}) = \sum_{|\boldsymbol\alpha|\le q}
\frac{\partial^{|\boldsymbol\alpha|}}{\partial{\boldsymbol x}^{\boldsymbol\alpha}}f({\boldsymbol x}_0)
\frac{({\boldsymbol x}-{\boldsymbol x}_0)^{\boldsymbol\alpha}}{\boldsymbol\alpha!}
+ R_{{\boldsymbol x}_0}({\boldsymbol x}).
\]
Herein, the remainder $R_{{\boldsymbol x}_0}({\boldsymbol x})$ reads
\begin{align*}
R_{{\boldsymbol x}_0}({\boldsymbol x}) &= (q+1)\sum_{|\boldsymbol\alpha|=q+1}
\frac{({\boldsymbol x}-{\boldsymbol x}_0)^{\boldsymbol\alpha}}{\boldsymbol\alpha!}
\int_0^1\frac{\partial^{q+1}}{\partial{\boldsymbol x}^{\boldsymbol\alpha}}
f\big({\boldsymbol x}_0+s({\boldsymbol x}-{\boldsymbol x}_0)\big)(1-s)^q\d s.
\end{align*}
In view of the vanishing moments, we conclude
\begin{align*}
|(f,\sigma_{j,k})_\Omega|
&= |(R_{{\boldsymbol x}_0},\sigma_{j,k})_\Omega|
\le\sum_{|\boldsymbol\alpha|=q+1}\frac{\max_{{\boldsymbol x}\in\nu}\|{\boldsymbol x}-{\boldsymbol x}_0\|_2^{|\boldsymbol\alpha|}}{\boldsymbol\alpha!}
\max_{{\boldsymbol x}\in\nu}\bigg|\frac{\partial^{q+1}}
{\partial{\boldsymbol x}^{\boldsymbol\alpha}}f(\boldsymbol x)\bigg|\|{\boldsymbol\omega}_{j,k}\|_{1}\\
&\le\diam(\nu)^{q+1}\|f\|_{C^{q+1}(\Omega)}\|{\boldsymbol\omega}_{j,k}\|_{1}.
\end{align*}
Here, we used the estimate
\[
\sum_{|\boldsymbol\alpha|=q+1}\frac{2^{-(q+1)}}{\boldsymbol\alpha!}\le 1,
\]
which is obtained by choosing \({\boldsymbol x}_0\) as the
cluster's midpoint.
\end{proof}
\section{Discrete samplet transform}\label{sec:FST}
In order to transform between the samplet basis
and the basis of Dirac measures, we introduce
the \emph{discrete samplet transform} and its inverse.
To this end, we assume that the data
\(({\boldsymbol x}_1,y_1),\ldots,({\boldsymbol x}_N,y_N)\)
result from the evaluation of some (unknown) function
\(f\colon\Omega\to\Rbb\),
i.e.\
\[y_i=f_i^{\Delta}=(f,\delta_{{\boldsymbol x}_i})_\Omega.
\]
Hence, we may represent the function \(f\) on \(X\)
according to
\[f = \sum_{i = 1}^{N} f_i^{\Delta} \delta_{{\boldsymbol x}_i}.
\]
Our goal is now to compute the representation
\[f =
\sum_{k = 1}^{N} f_{k}^{\Sigma} \sigma_{k}
\]
with respect to the samplet basis.
For
the sake of simpler notation, let
${\boldsymbol f}^{\Delta}\mathrel{\mathrel{\mathop:}=} [f_i^{\Delta}]_{i=1}^N$
and ${\boldsymbol f}^{\Sigma}\mathrel{\mathrel{\mathop:}=} [f_i^\Sigma]_{i=1}^N$ denote
the associated coefficient vectors.
\begin{figure}[htb]
\begin{center}
\scalebox{0.75}{
\begin{tikzpicture}[x=0.4cm,y=0.4cm]
\tikzstyle{every node}=[circle,draw=black,fill=shadecolor, minimum size=1.2cm]%
\tikzstyle{ptr}=[draw=none,fill=none,above]%
\node at (0,5) (1) {${\boldsymbol f}^{\Delta}$};
\node at (8,5) (2) {${\boldsymbol f}_{J-1}^{\Phi}$};
\node at (8,1) (3) {${\boldsymbol f}_{J-1}^{\Sigma}$};
\node at (16,5) (4) {${\boldsymbol f}_{J-2}^{\Phi}$};
\node at (16,1) (5) {${\boldsymbol f}_{J-2}^{\Sigma}$};
\node at (24,5) (6) {${\boldsymbol f}_{J-3}^{\Phi}$};
\node at (24,1) (7) {${\boldsymbol f}_{J-3}^{\Sigma}$};
\node at (30,5) (8) {${\boldsymbol f}_{1}^{\Phi}$};
\node at (38,5) (9) {${\boldsymbol f}_{0}^{\Phi}$};
\node at (38,1) (10) {${\boldsymbol f}_{0}^{\Sigma}$};
\tikzstyle{forward}=[draw,-stealth]%
\tikzstyle{every node}=[style=ptr]
\draw
(1) edge[forward] node[above,sloped]{${\boldsymbol Q}_{J-1,\Phi}^\intercal$} (2)
(1) edge[forward] node[above,sloped]{${\boldsymbol Q}_{J-1,\Sigma}^\intercal$} (3)
(2) edge[forward] node[above,sloped]{${\boldsymbol Q}_{J-2,\Phi}^\intercal$} (4)
(2) edge[forward] node[above,sloped]{${\boldsymbol Q}_{J-2,\Sigma}^\intercal$} (5)
(4) edge[forward] node[above,sloped]{${\boldsymbol Q}_{J-3,\Phi}^\intercal$} (6)
(4) edge[forward] node[above,sloped]{${\boldsymbol Q}_{J-3,\Sigma}^\intercal$} (7)
(8) edge[forward] node[above,sloped]{${\boldsymbol Q}_{0,\Phi}^\intercal$} (9)
(8) edge[forward] node[above,sloped]{${\boldsymbol Q}_{0,\Sigma}^\intercal$} (10);
\tikzstyle{every node}=[style=ptr]%
\tikzstyle{ptr}=[draw=none,fill=none]%
\node at (27,5) (16) {$\hdots$};
\end{tikzpicture}}
\caption{\label{fig:haar}Visualization of the discrete samplet transform.}
\end{center}
\end{figure}
The discrete samplet transform is based on
recursively applying the refinement relation
\eqref{eq:refinementRelation} to the point evaluations
\begin{equation}\label{eq:refinementRelationInnerProducts}
(f, [ \mathbf{\Phi}_{j}^{\nu}, \mathbf{\Sigma}_{j}^{\nu} ])_\Omega
=(f, \mathbf{\Phi}_{j+1}^{\nu} [{\boldsymbol Q}_{j,\Phi}^{\nu} ,{\boldsymbol Q}_{j,\Sigma}^{\nu} ])_\Omega
=(f, \mathbf{\Phi}_{j+1}^{\nu})_\Omega [{\boldsymbol Q}_{j,\Phi}^{\nu} , {\boldsymbol Q}_{j,\Sigma}^{\nu} ].
\end{equation}
On the finest level, the entries of the vector
$(f, \mathbf{\Phi}_{J}^{\nu})_\Omega$
are exactly those of ${\boldsymbol f}^{\Delta}$. Recursively
applying equation \eqref{eq:refinementRelationInnerProducts} therefore
yields all the coefficients $(f, \mathbf{\Sigma}_{j}^{\nu})_\Omega$,
including $(f, \mathbf{\Phi}_{0}^{X})_\Omega$,
required for the representation of $f$ in the samplet basis,
see Figure~\ref{fig:haar} for a visualization. The
complete procedure is
formulated in Algorithm~\ref{algo:DWT}.\bigskip
\begin{algorithm}[H]
\caption{Discrete samplet transform}
\label{algo:DWT}
\KwData{Data ${\boldsymbol f}^\Delta$,
cluster tree $\Tcal$ and transformations
$[{\boldsymbol Q}_{j,\Phi}^{\nu},{\boldsymbol Q}_{j,\Sigma}^{\nu}]$.}
\KwResult{Coefficients ${\boldsymbol f}^{\Sigma}$
stored as
$[(f,\mathbf{\Phi}_{0}^{X})_\Omega]^\intercal$ and
$[(f,\mathbf{\Sigma}_{j}^{\nu})_\Omega]^\intercal$.}
\Begin{
store $[(f,\mathbf{\Phi}_{0}^{X})_\Omega]^\intercal\mathrel{\mathrel{\mathop:}=}$
\FuncSty{transformForCluster}($X$)
}
\end{algorithm}
\begin{function}[H]
\caption{transformForCluster($\nu$)}
\Begin{
\uIf{$\nu=\{{\boldsymbol x}_{i_{1}}, \dots,{\boldsymbol x}_{i_{|\nu|}}\}$
is a leaf of \(\Tcal\)}{
set ${\boldsymbol f}_{j+1}^{\nu}\mathrel{\mathrel{\mathop:}=}
\big[f_{i_{k}}^\Delta\big]_{k=1}^{|\nu|}$
}
\Else{
\For{all sons $\nu'$ of $\nu$}{
execute $\transformForCluster(\nu')$\\
append the result to ${\boldsymbol f}_{j+1}^{\nu}$
}
}
set $[(f,\mathbf{\Sigma}_{j}^{\nu})_\Omega]^\intercal
\mathrel{\mathrel{\mathop:}=}({\boldsymbol Q}_{j,\Sigma}^{\nu})^\intercal {\boldsymbol f}_{j+1}^{\nu}$
\Return{$({\boldsymbol Q}_{j,\Phi}^{\nu})^\intercal{\boldsymbol f}_{j+1}^{\nu}$}
}
\end{function}\bigskip
\begin{remark}
Algorithm \ref{algo:DWT} employs the transposed version of
\eqref{eq:refinementRelationInnerProducts} to preserve
the column vector structure of ${\boldsymbol f}^\Delta$ and ${\boldsymbol f}^{\Sigma}$.
\end{remark}
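To complement the pseudocode, the following self-contained Python sketch (our simplified re-implementation for one-dimensional data, not the authors' code) performs the recursion of Algorithm~\ref{algo:DWT}, computing the moment matrices and the QR factorizations on the fly; for brevity, the cluster tree is rebuilt inside the recursion by median splits.

```python
import numpy as np

def samplet_transform(indices, points, f_delta, q=1, leaf_size=4, out=None):
    """Recursive sketch of transformForCluster (1D points).

    Returns the scaling coefficients (f, Phi_j^nu) together with the moment
    matrix of the cluster's scaling functions; the samplet coefficients
    (f, Sigma_j^nu) of every visited cluster are appended to `out`."""
    if out is None:
        out = []
    n = len(indices)
    if n <= leaf_size:                           # leaf: Dirac scaling functions
        xs = points[indices]
        f_next = f_delta[indices]
        M = np.vstack([xs ** a for a in range(q + 1)])
    else:                                        # merge the sons' contributions
        order = indices[np.argsort(points[indices])]
        mid = n // 2
        parts = [samplet_transform(half, points, f_delta, q, leaf_size, out)
                 for half in (order[:mid], order[mid:])]
        f_next = np.concatenate([p[0] for p in parts])
        M = np.hstack([p[1] for p in parts])
    Qf, _ = np.linalg.qr(M.T, mode="complete")   # refinement relation via QR
    m = min(q + 1, len(f_next))                  # number of scaling functions
    out.append(Qf[:, m:].T @ f_next)             # samplet coefficients
    return Qf[:, :m].T @ f_next, M @ Qf[:, :m]   # pass scaling data upwards
```

Since every factor `Qf` is orthogonal, the map from ${\boldsymbol f}^\Delta$ to the stacked coefficients is orthogonal as well, and data sampled from a polynomial of degree at most $q$ produce vanishing samplet coefficients on every level.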
The inverse transformation is obtained by reversing
the steps of the discrete samplet transform:
For each cluster, we compute
\[
(f, \mathbf{\Phi}_{j+1}^{\nu})_\Omega
= (f, [ \mathbf{\Phi}_{j}^{\nu}, \mathbf{\Sigma}_{j}^{\nu} ]
)_\Omega[{\boldsymbol Q}_{j,\Phi}^{\nu} ,{\boldsymbol Q}_{j,\Sigma}^{\nu} ]^\intercal
\]
to either obtain the
coefficients of the
son clusters' scaling functions
or, for leaf clusters, the coefficients ${\boldsymbol f}^{\Delta}$.
The procedure is summarized in Algorithm~\ref{algo:iDWT}.\bigskip
\begin{algorithm}[H]
\caption{Inverse samplet transform}
\label{algo:iDWT}
\KwData{Coefficients ${\boldsymbol f}^\Sigma$,
cluster tree $\Tcal$ and transformations
$[{\boldsymbol Q}_{j,\Phi}^{\nu},{\boldsymbol Q}_{j,\Sigma}^{\nu}]$.}
\KwResult{Coefficients ${\boldsymbol f}^{\Delta}$
stored as
$[(f,\mathbf{\Phi}_{j}^{\nu})_\Omega]^\intercal$.}
\Begin{
\FuncSty{inverseTransformForCluster}($X$,
$[(f,\mathbf{\Phi}_{0}^{X})_\Omega]^\intercal$)
}
\end{algorithm}
\begin{function}[H]
\caption{inverseTransformForCluster($\nu$,
\unexpanded{$[(f,{\boldsymbol\Phi}_{j}^\nu)_\Omega]^\intercal$})}
\Begin{
$[(f,{\boldsymbol\Phi}_{j+1}^\nu)_\Omega]^\intercal
\mathrel{\mathrel{\mathop:}=} [{\boldsymbol Q}_{j,\Phi}^{\nu} ,{\boldsymbol Q}_{j,\Sigma}^{\nu} ]
\begin{bmatrix}
[(f,{\boldsymbol\Phi}_{j}^\nu)_\Omega]^\intercal\\
[(f,{\boldsymbol\Sigma}_{j}^\nu)_\Omega]^\intercal
\end{bmatrix}$
\uIf{$\nu=\{{\boldsymbol x}_{i_{1}}, \dots,{\boldsymbol x}_{i_{|\nu|}}\}$
is a leaf of \(\Tcal\)}{set $\big[f_{i_{k}}^\Delta\big]_{k=1}^{|\nu|}
\mathrel{\mathrel{\mathop:}=}[(f,{\boldsymbol\Phi}_{j+1}^\nu)_\Omega]^\intercal$
}
\Else{
\For{all sons $\nu'$ of $\nu$}{
assign the part of $[(f,{\boldsymbol\Phi}_{j+1}^\nu)_\Omega]^\intercal$
belonging to \(\nu'\) to $[(f,{\boldsymbol\Phi}_{j'}^{\nu'})_\Omega]^\intercal$\\
execute \FuncSty{inverseTransformForCluster}($\nu'$,
$[(f,{\boldsymbol\Phi}_{j'}^{\nu'})_\Omega]^\intercal$) }
}
}
\end{function}\bigskip
The discrete samplet transform and its inverse
can be performed with linear cost. This
result is well known in the case of wavelets and was
crucial for their rapid development.
\begin{theorem}
The runtimes of the discrete samplet transform and the inverse
samplet transform are \(\mathcal{O}(N)\) each.
\end{theorem}
\begin{proof}
As the samplet construction follows the construction
of Tausch and White, we refer to \cite{TW03} for the
details of the proof.
\end{proof}
\section{Numerical results I}\label{sec:Num1}
To demonstrate the efficacy of the samplet analysis,
we compress different sample data in one, two and three
spatial dimensions. For each example, we use samplets
with \(q+1=3\) vanishing moments.
\subsection*{One dimension}
We start with two one-dimensional
examples. On the one hand, we consider the test function
\[
f(x)=\frac 3 2 e^{-40|x-\frac 1 4|}
+ 2e^{-40|x|}-e^{-40|x+\frac 1 2|},
\]
sampled at $8192$ uniformly distributed points on \([-1,1]\).
On the other hand, we consider a path of a Brownian motion
sampled at the same points. The coefficients of the samplet
transformed data are thresholded with \(10^{-i}\|{\boldsymbol f}^{\Sigma}\|_\infty\),
\(i=1,2,3\), respectively.
The resulting compression ratios and the reconstructions
can be found in Figure~\ref{fig:Expcomp} and Figure~\ref{fig:BMcomp},
respectively. One readily infers that in both cases high compression
rates are achieved at high accuracy. In case of the Brownian motion,
the smoothing of the sample data can be realized by increasing the
compression rate, corresponding to throwing away more and
more detail information. Indeed, due to the orthonormality of the samplet
basis, this procedure amounts to a least squares fit of the data.
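The thresholding rule used in these experiments is easily expressed in code; the sketch below (our own) zeroes every coefficient whose modulus falls below $\delta\|{\boldsymbol f}^{\Sigma}\|_\infty$ and reports the resulting compression ratio. Because the samplet basis is orthonormal, the $\ell^2$-error of the reconstruction equals the $\ell^2$-norm of the discarded coefficients.

```python
import numpy as np

def threshold(f_sigma, delta):
    """Zero all samplet coefficients below delta * ||f_sigma||_inf and
    return the compressed vector and the achieved compression ratio."""
    cutoff = delta * np.max(np.abs(f_sigma))
    compressed = np.where(np.abs(f_sigma) >= cutoff, f_sigma, 0.0)
    ratio = 1.0 - np.count_nonzero(compressed) / f_sigma.size
    return compressed, ratio

# coefficients that decay geometrically, mimicking smooth data
f_sigma = 2.0 ** (-np.arange(20.0))
compressed, ratio = threshold(f_sigma, delta=1e-3)
```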
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\begin{axis}[width=\textwidth, height=0.4\textwidth, xmin = -1, xmax=1, ymin=-1.1,
ymax=2.1, ylabel={$y$}, xlabel ={$x$},legend style={mark options={scale=2}},
legend pos = north east]
\addplot[line width=0.7pt,color=black]
table[each nth point=3,x index={0},y index = {1}]{./Results/ExpCompress1D.txt};
\addlegendentry{data};
\addplot[line width=0.7pt,color=blue, only marks, mark size=0.2pt]
table[each nth point=3,x index={0},y index = {5}]{./Results/ExpCompress1D.txt};
\addlegendentry{$98.55\%$ compr.};
\addplot[line width=0.7pt,color=red, only marks, mark size=0.2pt]
table[each nth point=3,x index={0},y index = {4}]{./Results/ExpCompress1D.txt};
\addlegendentry{$99.17\%$ compr.};
\addplot[line width=0.7pt,color=darkgreen, only marks, mark size=0.2pt]
table[each nth point=3,x index={0},y index = {3}]{./Results/ExpCompress1D.txt};
\addlegendentry{$99.63\%$ compr.};
\end{axis}
\end{tikzpicture}
\end{center}
\caption{\label{fig:Expcomp}Sampled test function approximated with
different compression ratios.}
\end{figure}
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}[ ausschnitt/.style={black!80}]
\begin{axis}[width=\textwidth, height=0.4\textwidth, xmin = -1, xmax=1, ymin=-1,
ymax=2.4,
ylabel={$y$}, xlabel ={$x$},legend style={mark options={scale=2}},
legend pos = south east]
\draw[ausschnitt]
(axis cs:-0.5,-0.5)coordinate(ul)--
(axis cs:0.005,-0.5)coordinate(ur)--
(axis cs:0.005,0.4)coordinate(or)--
(axis cs:-0.5,0.4) -- cycle;
\addplot[line width=0.7pt,color=black]
table[each nth point=4,x index={0},y index = {1}]{./Results/BMCompress1D.txt};
\addlegendentry{data};
\addplot[line width=0.7pt,color=blue, only marks, mark size=0.2pt]
table[each nth point=5,x index={0},y index = {5}]{./Results/BMCompress1D.txt};
\addlegendentry{$92.69\%$ compr.};
\addplot[line width=0.7pt,color=red, only marks, mark size=0.2pt]
table[each nth point=5,x index={0},y index = {4}]{./Results/BMCompress1D.txt};
\addlegendentry{$99.24\%$ compr.};
\addplot[line width=0.7pt,color=darkgreen, only marks, mark size=0.2pt]
table[each nth point=5,x index={0},y index = {3}]{./Results/BMCompress1D.txt};
\addlegendentry{$99.88\%$ compr.};
\end{axis}
\begin{axis}[yshift=-.37\textwidth,xshift=0.25\textwidth,
width=0.5\textwidth, height=0.4\textwidth, xmin = -0.5,
xmax=0.005, ymin=-0.5,
ymax=0.4,axis line style=ausschnitt]
\addplot[line width=0.7pt,color=black]
table[each nth point=2,x index={0},y index = {1}]{./Results/BMCompress1D.txt};
\addplot[line width=0.7pt,color=blue, only marks, mark size=0.2pt]
table[each nth point=2,x index={0},y index = {5}]{./Results/BMCompress1D.txt};
\addplot[line width=0.7pt,color=red, only marks, mark size=0.2pt]
table[each nth point=2,x index={0},y index = {4}]{./Results/BMCompress1D.txt};
\addplot[line width=0.7pt,color=darkgreen, only marks, mark size=0.2pt]
table[each nth point=2,x index={0},y index = {3}]{./Results/BMCompress1D.txt};
\end{axis}
\draw[ausschnitt]
(current axis.north west)--(ul)
(current axis.north east)--(ur);
\end{tikzpicture}
\caption{\label{fig:BMcomp}Sampled Brownian motion approximated with
different compression ratios.}
\end{center}
\end{figure}
\subsection*{Two dimensions}
As a second application for samplets, we consider image compression.
To this end, we use a \(2000\times 2000\) pixel grayscale landscape
image. The coefficients of the samplet transformed image are thresholded
with \(10^{-i}\|{\boldsymbol f}^{\Sigma}\|_\infty\), \(i=2,3,4\), respectively.
The corresponding
results and compression rates can be found in Figure~\ref{fig:compImage}.
A visualization of the samplet coefficients in case of the respective
low compression can be found in Figure~\ref{fig:coeffImage}.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\draw(0,0)node{\includegraphics[scale = 0.12,trim=65 47 65 24,clip]{%
./Results/OriginalLugano.png}};
\draw(0,2.4)node{Original image};
\draw(5,2.4)node{\(95.23\%\) compression};
\draw(5,0)node{\includegraphics[scale = 0.12,trim=65 47 65 24,clip]{%
./Results/CompressedLowLugano.png}};
\draw(0,-2.6)node{\(99.89\%\) compression};
\draw(0,-5)node{\includegraphics[scale = 0.12,trim=65 47 65 24,clip]{%
./Results/CompressedIntermedLugano.png}};
\draw(5,-2.6)node{\(99.99\%\) compression};
\draw(5,-5)node{\includegraphics[scale = 0.12,trim=65 47 65 24,clip]{%
./Results/CompressedHighLugano.png}};
\end{tikzpicture}
\caption{\label{fig:compImage}Different compression rates of the test image.}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\draw(0,0)node{\includegraphics[scale = 0.12,trim=1000 195 1000 195,clip]{%
./Results/LuganoCoeffs.png}};
\draw(3.3,0)node{\includegraphics[scale = 0.14,trim=2100 400 460 400,clip]{%
./Results/LuganoCoeffs.png}};
\end{tikzpicture}
\caption{\label{fig:coeffImage}Visualization of the samplet coefficients for the
test image.}
\end{center}
\end{figure}
\subsection*{Three dimensions}
Finally, we show a result in three dimensions.
Here, the points are given by a uniform subsample of
a triangulation of the Stanford bunny. We consider data on the
Stanford bunny generated by the function
\[
f({\boldsymbol x})=e^{-20\|{\boldsymbol x}-{\boldsymbol p}_0\|_2}+e^{-20\|{\boldsymbol x}-{\boldsymbol p}_1\|_2},
\]
where the points \({\boldsymbol p}_0\) and \({\boldsymbol p}_1\) are located at the tips
of the bunny's ears. Moreover, the geometry has been rescaled to a
diameter of 2. The plot on the left-hand side of Figure~\ref{fig:coeffStanford}
visualizes the sample data, while the plot on the right-hand side
shows the dominant coefficients in case of a threshold parameter
of \(10^{-2}\|{\boldsymbol f}^{\Sigma}\|_\infty\).
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\draw(0,0)node{\includegraphics[scale = 0.12,trim=1040 230 1050 280,clip]{%
./Results/StanfordBunnySignal.png}};
\draw(5,0)node{\includegraphics[scale = 0.12,trim=1000 200 1000 200,clip]{%
./Results/StanfordBunny1e-2Coeff.png}};
\draw(8,0)node{\includegraphics[scale = 0.14,trim=2130 400 600 400,clip]{%
./Results/StanfordBunny1e-2Coeff.png}};
\end{tikzpicture}
\caption{\label{fig:coeffStanford}Data on the Stanford bunny (left) and
dominant samplet coefficients (right).}
\end{center}
\end{figure}
\section{Compression of kernel matrices}\label{sec:kernelCompression}
\subsection{Kernel matrices}
The second application of samplets we consider
is the compression of matrices arising from positive
(semi-)definite kernels, as they emerge in kernel
methods, such as scattered data analysis, kernel
based learning or Gaussian process regression,
see for example \cite{HSS08,Schaback2006,
Wendland2004,Williams1998} and the references
therein.
We start by recalling the concept of a positive kernel.
\begin{definition}\label{def:poskernel}
A symmetric kernel
$\mathcal{K}\colon\Omega\times\Omega\rightarrow\Rbb$ is
called \textit{positive (semi-)definite} on $\Omega\subset\mathbb{R}^d$,
iff \([\mathcal{K}({\boldsymbol x}_i,{\boldsymbol x}_j)]_{i,j=1}^N\)
is a symmetric and positive (semi-)definite matrix
for all
$\{{\boldsymbol x}_1, \ldots,{\boldsymbol x}_N\}\subset\Omega$
and all $N\in\mathbb{N}$.
\end{definition}
As a particular class of positive definite
kernels, we consider the \emph{Mat\'ern kernels} given by
\begin{equation}\label{eq:matkern}
k_\nu(r)\mathrel{\mathrel{\mathop:}=}\frac{2^{1-\nu}}{\Gamma(\nu)}\bigg(\frac {\sqrt{2\nu}r}{\ell}\bigg)^\nu
K_\nu\bigg(\frac {\sqrt{2\nu}r}{\ell}\bigg),\quad r\geq 0,\ \ell >0 .
\end{equation}
Herein, $K_{\nu}$ is the modified Bessel function of the second
kind of order $\nu$ and $\Gamma$ is the gamma function.
The parameter $\nu$ steers the smoothness of the
kernel function. In particular,
the analytic squared-exponential kernel is
retrieved for $\nu\to\infty$. For example, we have
\begin{equation}
\begin{aligned}
k_{1/2}(r)=\exp\bigg(-\frac{r}{\ell}\bigg),
\quad k_{\infty}(r)=\exp\bigg(-\frac{r^2}{2\ell^2}\bigg).
\end{aligned}
\end{equation}
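For the half-integer orders \(\nu\in\{1/2,3/2,5/2\}\) and the limit \(\nu\to\infty\), the Mat\'ern kernel admits well-known closed forms that avoid the modified Bessel function. A minimal, purely illustrative Python sketch:

```python
import math

def matern(r, nu, ell=1.0):
    """Matern kernel k_nu(r) for the half-integer orders
    nu = 1/2, 3/2, 5/2 and the limit nu = inf, where closed
    forms are available; general nu would require the
    modified Bessel function K_nu."""
    if nu == 0.5:
        return math.exp(-r / ell)
    if nu == 1.5:
        s = math.sqrt(3.0) * r / ell
        return (1.0 + s) * math.exp(-s)
    if nu == 2.5:
        s = math.sqrt(5.0) * r / ell
        return (1.0 + s + s * s / 3.0) * math.exp(-s)
    if nu == math.inf:
        return math.exp(-r * r / (2.0 * ell * ell))
    raise NotImplementedError("only nu in {1/2, 3/2, 5/2, inf}")
```

For \(\nu=1/2\) this reproduces the exponential kernel and for \(\nu=\infty\) the squared-exponential kernel from the display above.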
A positive definite kernel in the sense of Definition~\ref{def:poskernel}
is obtained by considering
\[
\mathcal{K}({\boldsymbol x},{\boldsymbol x}^\prime)\mathrel{\mathrel{\mathop:}=} k_\nu(\|{\boldsymbol x}-{\boldsymbol x}^\prime\|_2).
\]
Given the set of points \(X=\{{\boldsymbol x}_1,\ldots,{\boldsymbol x}_N\}\), many
applications require the assembly and the inversion of the
\emph{kernel matrix}
\[
{\boldsymbol K}\mathrel{\mathrel{\mathop:}=}[\mathcal{K}({\boldsymbol x}_i,{\boldsymbol x}_j)]_{i,j=1}^N\in\Rbb^{N\times N}
\]
or an appropriately regularized version
\[
{\boldsymbol K}+\rho{\boldsymbol I},\quad \rho>0,
\]
thereof. If \(N\) is large, already
the assembly and storage of
\({\boldsymbol K}\)
can easily become prohibitive. For the solution of an associated
linear system, the situation is even worse.
Fortunately, the kernel matrix can be compressed
by employing samplets. To this end, the evaluation of
the kernel function at the points ${\boldsymbol x}_i$ and ${\boldsymbol x}_j$
will be denoted by
\[
(\mathcal{K},\delta_{{\boldsymbol x}_i}\otimes\delta_{{\boldsymbol x}_j}
)_{\Omega\times\Omega}\mathrel{\mathrel{\mathop:}=}\mathcal{K}({\boldsymbol x}_i,{\boldsymbol x}_j).
\]
Hence, in view of $V = \{\delta_{{\boldsymbol x}_1},\ldots,\delta_{{\boldsymbol x}_N}\}$,
we may write the kernel matrix as
\[
{\boldsymbol K} = \big[(\mathcal{K},\delta_{{\boldsymbol x}_i}
\otimes\delta_{{\boldsymbol x}_j})_{\Omega\times\Omega}\big]_{i,j=1}^N.
\]
\subsection{Asymptotically smooth kernels}
The essential ingredient for the samplet compression of
kernel matrices is the \emph{asymptotical smoothness}
property of the kernel
\begin{equation}\label{eq:kernel_estimate}
\bigg|\frac{\partial^{|\boldsymbol\alpha|+|\boldsymbol\beta|}}
{\partial{\boldsymbol x}^{\boldsymbol\alpha}
\partial{\boldsymbol y}^{\boldsymbol\beta}} \mathcal{K}({\boldsymbol x},{\boldsymbol y})\bigg|
\le c_\mathcal{K} \frac{(|\boldsymbol\alpha|+|\boldsymbol\beta|)!}
{r^{|\boldsymbol\alpha|+|\boldsymbol\beta|}
\|{\boldsymbol x}-{\boldsymbol y}\|_2^{|\boldsymbol\alpha|+|\boldsymbol\beta|}},\quad
c_\mathcal{K},r>0,
\end{equation}
which is for example satisfied by the Mat\'ern kernels.
Using this estimate, we obtain the following result,
which is the basis for the matrix compression introduced
thereafter.
\begin{lemma}\label{lem:kernel_decay}
Consider two samplets $\sigma_{j,k}$ and $\sigma_{j',k'}$,
exhibiting $q+1$ vanishing moments with supporting
clusters \(\nu\) and \(\nu'\), respectively.
Assume that $\dist(\nu,\nu') > 0$. Then, for kernels
satisfying \eqref{eq:kernel_estimate}, it holds that
\begin{equation}\label{eq:kernel_decay}
|(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}|\le
c_\mathcal{K} \frac{\diam(\nu)^{q+1}\diam(\nu')^{q+1}}
{(dr\dist(\nu,\nu'))^{2(q+1)}}
\|{\boldsymbol\omega}_{j,k}\|_{1}\|{\boldsymbol\omega}_{j',k'}\|_{1}.
\end{equation}
\end{lemma}
\begin{proof}
Let ${\boldsymbol x}_0\in\nu$ and ${\boldsymbol y}_0\in\nu'$.
A Taylor expansion of the kernel with respect to
${\boldsymbol x}$ yields
\[
\mathcal{K}({\boldsymbol x},{\boldsymbol y}) = \sum_{|\boldsymbol\alpha|\le q}
\frac{\partial^{|\boldsymbol\alpha|}}
{\partial{\boldsymbol x}^{\boldsymbol\alpha}}\mathcal{K}({\boldsymbol x}_0,{\boldsymbol y})
\frac{({\boldsymbol x}-{\boldsymbol x}_0)^{\boldsymbol\alpha}}{\boldsymbol\alpha!}
+ R_{{\boldsymbol x}_0}({\boldsymbol x},{\boldsymbol y}),
\]
where the remainder $R_{{\boldsymbol x}_0}({\boldsymbol x},{\boldsymbol y})$ is given by
\begin{align*}
R_{{\boldsymbol x}_0}({\boldsymbol x},{\boldsymbol y}) &= (q+1)\sum_{|\boldsymbol\alpha|=q+1}
\frac{({\boldsymbol x}-{\boldsymbol x}_0)^{\boldsymbol\alpha}}{\boldsymbol\alpha!}
\int_0^1\frac{\partial^{q+1}}{\partial{\boldsymbol x}^{\boldsymbol\alpha}}
\mathcal{K}\big({\boldsymbol x}_0+s({\boldsymbol x}-{\boldsymbol x}_0),{\boldsymbol y}\big)(1-s)^q\d s.
\end{align*}
Next, we expand the remainder $R_{{\boldsymbol x}_0}({\boldsymbol x},{\boldsymbol y})$ with
respect to ${\boldsymbol y}$ and derive
\begin{align*}
R_{{\boldsymbol x}_0}({\boldsymbol x},{\boldsymbol y}) &= (q+1)\sum_{|\boldsymbol\alpha|=q+1}
\frac{({\boldsymbol x}-{\boldsymbol x}_0)^{\boldsymbol\alpha}}{\boldsymbol\alpha!}
\sum_{|\boldsymbol\beta|\le q}\frac{({\boldsymbol y}-{\boldsymbol y}_0)^{\boldsymbol\beta}}{\boldsymbol\beta!}\\
&\times\int_0^1\frac{\partial^{q+1}}{\partial{\boldsymbol x}^{\boldsymbol\alpha}}
\frac{\partial^{|\boldsymbol\beta|}}{\partial{\boldsymbol y}^{\boldsymbol\beta}}
\mathcal{K}\big({\boldsymbol x}_0+s({\boldsymbol x}-{\boldsymbol x}_0),{\boldsymbol y}_0\big)(1-s)^q\d s
+ R_{{\boldsymbol x}_0,{\boldsymbol y}_0}({\boldsymbol x},{\boldsymbol y}).
\end{align*}
Here, the remainder $R_{{\boldsymbol x}_0,{\boldsymbol y}_0}({\boldsymbol x},{\boldsymbol y})$
is given by
\begin{align*}
&R_{{\boldsymbol x}_0,{\boldsymbol y}_0}({\boldsymbol x},{\boldsymbol y}) = (q+1)^2
\sum_{|\boldsymbol\alpha|,|\boldsymbol\beta|=q+1}
\frac{({\boldsymbol x}-{\boldsymbol x}_0)^{\boldsymbol\alpha}}{\boldsymbol\alpha!}
\frac{({\boldsymbol y}-{\boldsymbol y}_0)^{\boldsymbol\beta}}{\boldsymbol\beta!}\\
&\qquad\times\int_0^1\int_0^1\frac{\partial^{2(q+1)}}
{\partial{\boldsymbol x}^{\boldsymbol\alpha}\partial{\boldsymbol y}^{\boldsymbol\beta}}
\mathcal{K}\big({\boldsymbol x}_0+s({\boldsymbol x}-{\boldsymbol x}_0),{\boldsymbol y}_0+t({\boldsymbol y}-{\boldsymbol y}_0)\big)(1-s)^q(1-t)^q\d t\d s.
\end{align*}
We thus arrive at the decomposition
\[
\mathcal{K}({\boldsymbol x},{\boldsymbol y}) = p_{{\boldsymbol y}}({\boldsymbol x}) + p_{{\boldsymbol x}}({\boldsymbol y})
+ R_{{\boldsymbol x}_0,{\boldsymbol y}_0}({\boldsymbol x},{\boldsymbol y}),
\]
where $p_{{\boldsymbol y}}({\boldsymbol x})$ is a polynomial of degree $q$ in ${\boldsymbol x}$,
with coefficients depending on ${\boldsymbol y}$, while $p_{{\boldsymbol x}}({\boldsymbol y})$
is a polynomial of degree $q$ in ${\boldsymbol y}$, with coefficients depending
on ${\boldsymbol x}$. Due to the vanishing moments, we obtain
\[
(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}
=(R_{{\boldsymbol x}_0,{\boldsymbol y}_0},
\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}.
\]
In view of \eqref{eq:kernel_estimate}, we thus find
\begin{align*}
&|(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}|
= |(R_{{\boldsymbol x}_0,{\boldsymbol y}_0},
\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}|\\
&\qquad\le c_\mathcal{K} \Bigg(\sum_{|\boldsymbol\alpha|,|\boldsymbol\beta|=q+1}
\frac{(|\boldsymbol\alpha|+|\boldsymbol\beta|)!}{\boldsymbol\alpha!\boldsymbol\beta!}\Bigg)
\frac{(\|\cdot-{\boldsymbol x}_0\|^{q+1}_2,|\sigma_{j,k}|)_\Omega
(\|\cdot-{\boldsymbol y}_0\|^{q+1}_2,|\sigma_{j',k'}|)_\Omega}{r^{2(q+1)}\dist(\nu,\nu')^{2(q+1)}}.
\end{align*}
Next, we have by means of multinomial coefficients that
\[
(|\boldsymbol\alpha|+|\boldsymbol\beta|)!
={|\boldsymbol\alpha|+|\boldsymbol\beta|\choose |\boldsymbol\beta|}
{|\boldsymbol\alpha|\choose\boldsymbol\alpha}
{|\boldsymbol\beta|\choose\boldsymbol\beta}
\boldsymbol\alpha!\boldsymbol\beta!,
\]
which in turn implies that
\begin{align*}
\sum_{|\boldsymbol\alpha|,|\boldsymbol\beta|=q+1}
\frac{(|\boldsymbol\alpha|+|\boldsymbol\beta|)!}{\boldsymbol\alpha!\boldsymbol\beta!}
&= {2(q+1)\choose q+1} \sum_{|\boldsymbol\alpha|,|\boldsymbol\beta|=q+1}
{|\boldsymbol\alpha|\choose\boldsymbol\alpha}
{|\boldsymbol\beta|\choose\boldsymbol\beta}\\
&= {2(q+1)\choose q+1} d^{2(q+1)}
\le d^{2(q+1)} 2^{2(q+1)}.
\end{align*}
Moreover, we use
\[
(\|\cdot-{\boldsymbol x}_0\|_2^{q+1},|\sigma_{j,k}|)_\Omega
\le\bigg(\frac{\diam(\nu)}{2}\bigg)^{q+1}\|{\boldsymbol\omega}_{j,k}\|_{1},
\]
and likewise
\[
(\|\cdot-{\boldsymbol y}_0\|_2^{q+1},|\sigma_{j',k'}|)_\Omega
\le\bigg(\frac{\diam(\nu')}{2}\bigg)^{q+1}\|{\boldsymbol\omega}_{j',k'}\|_{1}.
\]
Combining all the estimates, we arrive at the desired
result \eqref{eq:kernel_decay}.
\end{proof}
\subsection{Matrix compression}
Lemma~\ref{lem:kernel_decay} immediately suggests
a compression strategy for kernel matrices in
samplet representation. We mention that this compression
differs from the wavelet matrix compression introduced
in \cite{DHS}, since we do not exploit the decay of the
samplet coefficients with respect to the level in case of
smooth data. This enables us to also consider a non-uniform
distribution of the points in $V$. Consequently, we use
the same accuracy on all levels, which is more similar
to the setting in \cite{BCR}.
\begin{theorem}
Let ${\boldsymbol K}^\Sigma_\varepsilon$ denote the matrix obtained from
the kernel matrix
\[
{\boldsymbol K}^\Sigma\mathrel{\mathrel{\mathop:}=}\big[(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}
\big]_{j,j',k,k'}
\]
by setting all coefficients to zero which satisfy
\begin{equation}\label{eq:cutoff}
\dist(\nu,\nu')\ge\eta\max\{\diam(\nu),\diam(\nu')\},\quad\eta>0,
\end{equation}
where \(\nu\) is the cluster supporting \(\sigma_{j,k}\) and
\(\nu'\) is the cluster supporting \(\sigma_{j',k'}\), respectively.
Then, it holds
\[
\big\|{\boldsymbol K}^\Sigma-{\boldsymbol K}^\Sigma_\varepsilon\big\|_F
\le c_\mathcal{K} \sqrt{c_{\operatorname{sum}}} {(\eta dr)^{-2(q+1)}}
m_q N\sqrt{\log N}
\]
for some constant \(c_{\operatorname{sum}}>0\),
where \(m_q\) is given by \eqref{eq:mq}.
\end{theorem}
\begin{proof}
We first fix the levels $j$ and $j'$. In view of
\eqref{eq:kernel_decay}, we can estimate any coefficient
which satisfies \eqref{eq:cutoff} by
\begin{align*}
&|(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}|\\
&\qquad\le
c_\mathcal{K} \bigg(\frac{\min\{\diam(\nu),\diam(\nu')\}}
{\max\{\diam(\nu),\diam(\nu')\}}\bigg)^{q+1}
{(\eta dr)^{-2(q+1)}}\|{\boldsymbol\omega}_{j,k}\|_{1}\|{\boldsymbol\omega}_{j',k'}\|_{1}.
\end{align*}
If we next set
\[
\theta_{j,j'}\mathrel{\mathrel{\mathop:}=} \max_{\nu\in\Tcal_j,\nu'\in\Tcal_{j'}}\bigg\{\frac{\min\{\diam(\nu),\diam(\nu')\}}
{\max\{\diam(\nu),\diam(\nu')\}}\bigg\},
\]
then we obtain
\[
|(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}|
\le c_\mathcal{K}\theta_{j,j'}^{q+1}{(\eta dr)^{-2(q+1)}}
\|{\boldsymbol\omega}_{j,k}\|_{1}\|{\boldsymbol\omega}_{j',k'}\|_{1}
\]
for all coefficients such that \eqref{eq:cutoff} holds.
In view of \eqref{eq:ell1-norm} and the fact that there are
at most $m_q$ samplets
per cluster, we arrive at
\[
\sum_{k,k'} \|{\boldsymbol\omega}_{j,k}\|_{1}^2\|{\boldsymbol\omega}_{j',k'}\|_{1}^2
\leq\sum_{k,k'}|\nu|\cdot|\nu'| \le m_q^2 N^2.
\]
Thus, for a fixed level-level block, we arrive at the estimate
\begin{align*}
\big\|{\boldsymbol K}^\Sigma_{j,j'}-{\boldsymbol K}^\Sigma_{\varepsilon,j,j'}\big\|_F^2
&\le\sum_{\begin{smallmatrix}k,k':\ \dist(\nu,\nu')\\
\ge\eta\max\{\diam(\nu),\diam(\nu')\}\end{smallmatrix}}
|(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}|^2\\
&\le c_\mathcal{K}^2 \theta_{j,j'}^{2(q+1)} {(\eta dr)^{-4(q+1)}} m_q^2 N^2.
\end{align*}
Finally, summation over all levels yields
\begin{align*}
\big\|{\boldsymbol K}^\Sigma-{\boldsymbol K}^\Sigma_{\varepsilon}\big\|_F^2
&= \sum_{j,j'}\big\|{\boldsymbol K}^\Sigma_{j,j'}-{\boldsymbol K}^\Sigma_{\varepsilon,j,j'}\big\|_F^2\\
&\le c_\mathcal{K}^2 {(\eta dr)^{-4(q+1)}}m_q^2 N^2\sum_{j,j'} \theta_{j,j'}^{2(q+1)}\\
&\le c_\mathcal{K}^2 c_{\operatorname{sum}} {(\eta dr)^{-4(q+1)}} m_q^2 N^2\log N,
\end{align*}
which is the desired claim.
\end{proof}
\begin{remark}
In case of uniformly distributed points ${\boldsymbol x}_i\in X$,
we have $\big\|{\boldsymbol K}^\Sigma\big\|_F\sim N$. Thus,
in this case we immediately obtain
\[
\frac{\big\|{\boldsymbol K}^\Sigma-{\boldsymbol K}^\Sigma_\varepsilon\big\|_F}
{\big\|{\boldsymbol K}^\Sigma\big\|_F} \le c_\mathcal{K}
\sqrt{c_{\operatorname{sum}}} {(\eta dr)^{-2(q+1)}} m_q \log N.
\]
\end{remark}
\begin{theorem}
The compressed matrix contains only $\mathcal{O}(m_q^2
N\log N)$ relevant matrix coefficients provided
that the points in $V$ are uniformly
distributed in $\Omega$.
\end{theorem}
\begin{proof}
We fix $j,j'$ and assume $j\ge j'$. In case of uniformly
distributed points, it holds $\diam(\nu)\sim 2^{-j_\nu/d}$.
Hence, for the cluster $\nu_{j',k'}$, there exist only
$\mathcal{O}([2^{j-j'}]^d)$ clusters $\nu_{j,k}$ from
level $j$, which do not satisfy the cut-off criterion
\eqref{eq:cutoff}. Since each cluster contains at most
$m_q$ samplets, we hence arrive at
\[
\sum_{j=0}^J \sum_{j'\le j}m_q^2( 2^{j'} 2^{(j-j')})^d
\sim m_q^2 \sum_{j=0}^J j 2^{jd} \sim m_q^2 N\log N,
\]
which implies the assertion.
\end{proof}
\begin{remark}
The chosen cut-off criterion \eqref{eq:cutoff} coincides
with the so-called \emph{admissibility condition} used
for hierarchical matrices. We particularly refer here to
\cite{Boe10}, as we will later on rely on the \(\mathcal{H}^2\)-matrix
method presented there for the fast assembly of the
compressed kernel matrix.
\end{remark}
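In an implementation, the cut-off criterion \eqref{eq:cutoff} is typically evaluated on axis-aligned bounding boxes enclosing the clusters. The following sketch is illustrative; the box representation as a pair of corner tuples is an assumption, not the paper's data structure:

```python
import math

def diam(box):
    """Diameter of an axis-aligned bounding box, given as a
    pair (lower_corner, upper_corner) of coordinate tuples."""
    lo, hi = box
    return math.dist(lo, hi)

def dist(box_a, box_b):
    """Euclidean distance between two axis-aligned boxes."""
    (alo, ahi), (blo, bhi) = box_a, box_b
    gaps = [max(blo[k] - ahi[k], alo[k] - bhi[k], 0.0)
            for k in range(len(alo))]
    return math.hypot(*gaps)

def is_admissible(box_a, box_b, eta=1.0):
    """Cut-off (admissibility) criterion:
    dist(nu, nu') >= eta * max(diam(nu), diam(nu'))."""
    return dist(box_a, box_b) >= eta * max(diam(box_a), diam(box_b))

near = ((0.0, 0.0), (1.0, 1.0)), ((1.5, 0.0), (2.5, 1.0))
far = ((0.0, 0.0), (1.0, 1.0)), ((5.0, 0.0), (6.0, 1.0))
```

Entries for the pair `near` must be computed, while the block for `far` is dropped.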
\subsection{Compressed matrix assembly}
For a given pair of clusters, we can now determine whether the
corresponding entries need to be calculated. However, as there are
$\mathcal{O}(N)$ clusters, naively checking the cut-off criterion for
all pairs would still take $\mathcal{O}(N^{2})$ operations. Hence, we
require a smarter way of determining the non-negligible cluster pairs.
For this purpose, we first state the transferability of the cut-off criterion
to son clusters, compare \cite{DHS} for a proof.
\begin{lemma}
Let $\nu$ and $\nu'$ be clusters satisfying the cut-off criterion
\eqref{eq:cutoff}. Then, for the son clusters $\nu_{\mathrm{son}}$
of $\nu$ and $\nu_{\mathrm{son}}'$ of $\nu'$, we have
\begin{align*}
\dist(\nu,\nu_{\mathrm{son}}')
&\ge\eta\max\{\diam(\nu),\diam(\nu_{\mathrm{son}}')\},\\
\dist(\nu_{\mathrm{son}},\nu')
&\ge\eta\max\{\diam(\nu_{\mathrm{son}}),\diam(\nu')\},\\
\dist(\nu_{\mathrm{son}},\nu_{\mathrm{son}}')
&\ge\eta\max\{\diam(\nu_{\mathrm{son}}),\diam(\nu_{\mathrm{son}}')\}.
\end{align*}
\end{lemma}
The lemma tells us that we may omit cluster pairs whose father
clusters already satisfy the cut-off criterion. This will be essential for
the assembly of the compressed matrix.
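The pruning implied by the lemma can be sketched as a recursion over pairs of clusters: once a pair is admissible, none of its descendant pairs needs to be visited. The one-dimensional intervals and the dictionary-based cluster representation below are purely illustrative assumptions:

```python
def diam(box):
    return box[1] - box[0]

def dist(a, b):
    return max(b[0] - a[1], a[0] - b[1], 0.0)

def admissible(a, b, eta=1.0):
    return dist(a, b) >= eta * max(diam(a), diam(b))

def leaf(name, box):
    return {'name': name, 'box': box, 'sons': []}

def node(name, box, sons):
    return {'name': name, 'box': box, 'sons': sons}

def nonneg_pairs(nu, nup, pairs):
    """Collect the cluster pairs whose blocks must be computed.
    By the transfer lemma, an admissible pair prunes all of its
    descendant pairs, so the recursion stops there."""
    if admissible(nu['box'], nup['box']):
        return  # negligible block: skip the whole subtree pair
    pairs.append((nu['name'], nup['name']))
    for s in nu['sons']:
        for sp in nup['sons']:
            nonneg_pairs(s, sp, pairs)

# a one-level binary tree over [0, 1] paired with itself ...
root = node('root', (0.0, 1.0),
            [leaf('left', (0.0, 0.5)), leaf('right', (0.5, 1.0))])
pairs = []
nonneg_pairs(root, root, pairs)

# ... and with a well-separated copy, which is pruned at the roots
far_root = node('far', (3.0, 4.0),
                [leaf('far_l', (3.0, 3.5)), leaf('far_r', (3.5, 4.0))])
pruned = []
nonneg_pairs(root, far_root, pruned)
```

In the first call all five pairs are non-negligible; in the second the recursion terminates immediately at the root pair.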
The computation of the compressed kernel matrix
can be sped up further by using
\(\Hcal^2\)-matrix techniques, see
\cite{HB02,Gie01}. Similarly to \cite{AHK14,HKS05}, we shall
rely here on \(\Hcal^2\)-matrices for this purpose.
The idea of \(\Hcal^2\)-matrices is to approximate the kernel interaction
for sufficiently distant clusters \(\nu\) and \(\nu'\) in the sense
of the admissibility condition \eqref{eq:cutoff} by means
of the interpolation based \(\Hcal^2\)-matrix approach.
More precisely, given a suitable set of interpolation
points \(\{{\boldsymbol\xi}_t^\nu\}_t\) for each cluster \(\nu\) with
associated Lagrange polynomials \(\{\mathcal{L}_{t}^{\nu}
({\boldsymbol x})\}_t\), we introduce the interpolation operator
\[
\mathcal{I}^{\nu,\nu'}[\mathcal{K}]({\boldsymbol x}, {\boldsymbol y})
= \sum_{s,t} \mathcal{K}({\boldsymbol\xi}_{s}^{\nu}, {\boldsymbol\xi}_{t}^{\nu'})
\mathcal{L}_{s}^{\nu}({\boldsymbol x}) \mathcal{L}_{t}^{\nu'}({\boldsymbol y})
\]
and approximate an admissible matrix block via
\begin{align*}
{\boldsymbol K}^\Delta_{\nu,\nu'}
&=[(\mathcal{K},\delta_{\boldsymbol x}\otimes\delta_{\boldsymbol y})_{\Omega\times\Omega}]_{{\boldsymbol x}\in\nu,{\boldsymbol y}\in\nu'}\\
&\approx\sum_{s,t} \mathcal{K}({\boldsymbol\xi}_{s}^{\nu}, {\boldsymbol\xi}_{t}^{\nu'})
[(\mathcal{L}_{s}^{\nu},\delta_{\boldsymbol x})_\Omega]_{{\boldsymbol x}\in\nu}
[(\mathcal{L}_{t}^{\nu'},\delta_{\boldsymbol y})_\Omega]_{{\boldsymbol y}\in\nu'}
\mathrel{=\mathrel{\mathop:}}{\boldsymbol V}^{\nu}_\Delta{\boldsymbol S}^{\nu,\nu'}({\boldsymbol V}^{\nu'}_\Delta)^\intercal.
\end{align*}
Herein, the \emph{cluster bases} are given according to
\begin{equation}\label{eq:cluster bases}
{\boldsymbol V}^{\nu}_\Delta\mathrel{\mathrel{\mathop:}=} [(\mathcal{L}_{s}^{\nu},\delta_{\boldsymbol x})_\Omega]_{{\boldsymbol x}\in\nu},\quad
{\boldsymbol V}^{\nu'}_\Delta\mathrel{\mathrel{\mathop:}=}[(\mathcal{L}_{t}^{\nu'},\delta_{\boldsymbol y})_\Omega]_{{\boldsymbol y}\in\nu'},
\end{equation}
while the \emph{coupling matrix} is given by
\(
{\boldsymbol S}^{\nu,\nu'}\mathrel{\mathrel{\mathop:}=}[\mathcal{K}({\boldsymbol\xi}_{s}^{\nu}, {\boldsymbol\xi}_{t}^{\nu'})]_{s,t}.
\)
Directly transforming the cluster bases into their corresponding
samplet representation results in a log-linear cost. This can be
avoided by the use of nested cluster bases, as they have been
introduced for \(\Hcal^2\)-matrices. For the sake of simplicity, we
assume from now on that tensor product polynomials of degree
\(p\) are used for the kernel interpolation at all different cluster
combinations. As a consequence, the Lagrange polynomials
of a father cluster can exactly be represented by those of the
son clusters. Introducing the \emph{transfer matrices}
\(
{\boldsymbol T}^{\nu_{\mathrm{son}}}
\mathrel{\mathrel{\mathop:}=}[\mathcal{L}_s^\nu({\boldsymbol\xi}_t^{\nu_{\mathrm{son}}})]_{s,t},
\)
there holds
\[
\mathcal{L}_s^\nu({\boldsymbol x})=\sum_t{\boldsymbol T}^{\nu_{\mathrm{son}}}_{s,t}
\mathcal{L}_t^{\nu_{\mathrm{son}}}({\boldsymbol x}),\quad{\boldsymbol x}\in B_{\nu_{\mathrm{son}}}.
\]
Exploiting this relation in the construction of the cluster bases
\eqref{eq:cluster bases} finally leads to
\[
{\boldsymbol V}^{\nu}_\Delta=\begin{bmatrix}
{\boldsymbol V}^{\nu_{\mathrm{son}_1}}_\Delta{\boldsymbol T}^{\nu_{\mathrm{son}_1}}\\
{\boldsymbol V}^{\nu_{\mathrm{son}_2}}_\Delta{\boldsymbol T}^{\nu_{\mathrm{son}_2}}
\end{bmatrix}.
\]
Combining this refinement relation with the recursive nature of the
samplet basis results
in the variant of the discrete samplet transform summarized in
Algorithm~\ref{algo:multiscaleClusterBasis}.\bigskip
\begin{algorithm}[H]
\caption{Recursive computation of the multiscale cluster basis}
\label{algo:multiscaleClusterBasis}
\KwData{Cluster tree $\Tcal$, transformations
$[{\boldsymbol Q}_{j,\Phi}^{\nu}$, ${\boldsymbol Q}_{j,\Sigma}^{\nu}]$,
nested cluster bases ${\boldsymbol V}_{\Delta}^{\nu}$ for leaf clusters and
transformation matrices ${\boldsymbol T}^{\nu_{\mathrm{son}_1}}$,
${\boldsymbol T}^{\nu_{\mathrm{son}_2}}$ for non-leaf clusters.
}
\KwResult{Multiscale cluster basis matrices ${\boldsymbol V}_{\Phi}^{\nu}$,
${\boldsymbol V}_{\Sigma}^{\nu}$ for all clusters $\nu \in\Tcal$.}
\Begin{
\FuncSty{computeMultiscaleClusterBasis}($X$)\;
}
\end{algorithm}
\begin{function}[H]
\caption{computeMultiscaleClusterBasis($\nu$)}
\Begin{
\uIf{$\nu$ is a leaf cluster}{
store $\begin{bmatrix}
{\boldsymbol V}_{\Phi}^{\nu} \\
{\boldsymbol V}_{\Sigma}^{\nu}
\end{bmatrix}\mathrel{\mathrel{\mathop:}=}\big[{\boldsymbol Q}_{j,\Phi}^{\nu},
{\boldsymbol Q}_{j,\Sigma}^{\nu}\big]^{\intercal} {\boldsymbol V}_{\Delta}^{\nu}$
}
\Else{
\For{all sons $\nu'$ of $\nu$}{
$\computeMultiscaleClusterBasis(\nu')$
}
store $\begin{bmatrix}
{\boldsymbol V}_{\Phi}^{\nu} \\
{\boldsymbol V}_{\Sigma}^{\nu}
\end{bmatrix}\mathrel{\mathrel{\mathop:}=}\big[{\boldsymbol Q}_{j,\Phi}^{\nu},
{\boldsymbol Q}_{j,\Sigma}^{\nu}\big]^{\intercal} \begin{bmatrix}
{\boldsymbol V}^{\nu_{\mathrm{son}_1}}_\Phi{\boldsymbol T}^{\nu_{\mathrm{son}_1}}\\
{\boldsymbol V}^{\nu_{\mathrm{son}_2}}_\Phi{\boldsymbol T}^{\nu_{\mathrm{son}_2}}
\end{bmatrix}$
}
}
\end{function}\medskip
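The exactness of the transfer relation underlying Algorithm~\ref{algo:multiscaleClusterBasis}, namely that a father's Lagrange polynomials are reproduced exactly by the son basis, can be verified numerically on a one-dimensional toy example. The Chebyshev nodes are an assumed choice of interpolation points:

```python
import math

def cheb_points(a, b, p):
    """p + 1 Chebyshev points on the interval [a, b]."""
    return [0.5 * (a + b) + 0.5 * (b - a) * math.cos(math.pi * t / p)
            for t in range(p + 1)]

def lagrange(xi, s, x):
    """Lagrange polynomial L_s for the nodes xi, evaluated at x."""
    out = 1.0
    for t, xt in enumerate(xi):
        if t != s:
            out *= (x - xt) / (xi[s] - xt)
    return out

p = 3
father = cheb_points(0.0, 1.0, p)
son = cheb_points(0.0, 0.5, p)      # left son of [0, 1]

# transfer matrix T[s][t] = L_s^father(xi_t^son)
T = [[lagrange(father, s, son[t]) for t in range(p + 1)]
     for s in range(p + 1)]

# on the son interval, the father's Lagrange polynomials are
# polynomials of degree p, so the son basis reproduces them:
# L_s^father = sum_t T[s][t] L_t^son, up to rounding
x = 0.123
err = max(abs(lagrange(father, s, x)
              - sum(T[s][t] * lagrange(son, t, x)
                    for t in range(p + 1)))
          for s in range(p + 1))
```

Since tensor-product polynomials of a fixed degree are used on all clusters, this reproduction is exact and not merely approximate.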
Having the multiscale cluster bases at our disposal, the next step is
the assembly of the compressed kernel matrix. The computation of the
required matrix blocks is exclusively
based on the two refinement relations
\begin{align*}
\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}
&=
\begin{bmatrix}
(\mathcal{K},{\boldsymbol \Phi}^\nu\otimes{\boldsymbol\Phi}^{\nu'})_{\Omega\times\Omega} &
(\mathcal{K},{\boldsymbol \Phi}^\nu\otimes{\boldsymbol\Sigma}^{\nu'})_{\Omega\times\Omega} \\
(\mathcal{K},{\boldsymbol \Sigma}^\nu\otimes{\boldsymbol\Phi}^{\nu'})_{\Omega\times\Omega} &
(\mathcal{K},{\boldsymbol \Sigma}^\nu\otimes{\boldsymbol\Sigma}^{\nu'})_{\Omega\times\Omega}
\end{bmatrix}\\
&=\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_1}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_2}'}^{\Phi,\Phi}\\
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_1}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_2}'}^{\Sigma,\Phi}
\end{bmatrix}\big[{\boldsymbol Q}_{j,\Phi}^{\nu'},
{\boldsymbol Q}_{j,\Sigma}^{\nu'}\big]
\end{align*}
and
\begin{align*}
\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}
&=\begin{bmatrix}
(\mathcal{K},{\boldsymbol \Phi}^\nu\otimes{\boldsymbol\Phi}^{\nu'})_{\Omega\times\Omega} &
(\mathcal{K},{\boldsymbol \Phi}^\nu\otimes{\boldsymbol\Sigma}^{\nu'})_{\Omega\times\Omega} \\
(\mathcal{K},{\boldsymbol \Sigma}^\nu\otimes{\boldsymbol\Phi}^{\nu'})_{\Omega\times\Omega} &
(\mathcal{K},{\boldsymbol \Sigma}^\nu\otimes{\boldsymbol\Sigma}^{\nu'})_{\Omega\times\Omega}
\end{bmatrix}\\
&=
\big[{\boldsymbol Q}_{j,\Phi}^{\nu},
{\boldsymbol Q}_{j,\Sigma}^{\nu}\big]^\intercal
\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu'}^{\Phi,\Sigma}
\end{bmatrix}.
\end{align*}
We obtain the following function, which is the key ingredient for the computation
of the compressed kernel matrix.\bigskip
\begin{function}[H]
\caption{recursivelyDetermineBlock($\nu$, $\nu'$)}
\KwResult{Approximation of the block \scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}$}.}
\Begin{
\uIf{$(\nu, \nu')$ is admissible}{
\Return{\scalebox{1}{$\begin{bmatrix}
{\boldsymbol V}_{\Phi}^{\nu} \\
{\boldsymbol V}_{\Sigma}^{\nu}
\end{bmatrix}
{\boldsymbol S}^{\nu,\nu'} \big[
({\boldsymbol V}_{\Phi}^{\nu'})^\intercal,
({\boldsymbol V}_{\Sigma}^{\nu'})^\intercal
\big]$}}
}
\uElseIf{$\nu$ and $\nu'$ are leaf clusters}{
\Return{\scalebox{1}{$\big[{\boldsymbol Q}_{j,\Phi}^{\nu},
{\boldsymbol Q}_{j,\Sigma}^{\nu}\big]^{\intercal}{\boldsymbol K}_{\nu,\nu'}^{\Delta}
\big[{\boldsymbol Q}_{j,\Phi}^{\nu'},
{\boldsymbol Q}_{j,\Sigma}^{\nu'}\big]$}}
}
\uElseIf{$\nu'$ is not a leaf cluster and $\nu$ is a leaf cluster}{
\For{all sons $\nu_{\mathrm{son}}'$ of $\nu'$}{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Sigma,\Sigma}
\end{bmatrix}$} $
\mathrel{\mathrel{\mathop:}=}\recursivelyDetermineBlock(\nu, \nu_{\mathrm{son}}')$
}
\Return{\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu_{\mathrm{son},1}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son},2}'}^{\Phi,\Phi}\\
{\boldsymbol K}_{\nu,\nu_{\mathrm{son},1}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son},2}'}^{\Sigma,\Phi}
\end{bmatrix}\big[{\boldsymbol Q}_{j,\Phi}^{\nu'},
{\boldsymbol Q}_{j,\Sigma}^{\nu'}\big]$}}
}
\uElseIf{$\nu$ is not a leaf cluster and $\nu'$ is a leaf cluster}{
\For{all sons \(\nu_{\mathrm{son}}\) of \(\nu\)}{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Sigma,\Sigma}
\end{bmatrix}$} $\mathrel{\mathrel{\mathop:}=} \recursivelyDetermineBlock(\nu_{\mathrm{son}}, \nu')$
}
\Return{\scalebox{1}{$\big[{\boldsymbol Q}_{j,\Phi}^{\nu},
{\boldsymbol Q}_{j,\Sigma}^{\nu}\big]^\intercal\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu'}^{\Phi,\Sigma}
\end{bmatrix}$}.
}
}
\Else(){
\For{all sons $\nu_{\mathrm{son}}$ of $\nu$ {\bf and}
all sons $\nu_{\mathrm{son}}'$ of $\nu'$}{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu_{\mathrm{son}}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu_{\mathrm{son}}'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu_{\mathrm{son}}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu_{\mathrm{son}}'}^{\Sigma,\Sigma}
\end{bmatrix}$}$\mathrel{\mathrel{\mathop:}=}\recursivelyDetermineBlock(\nu_{\mathrm{son}},
\nu_{\mathrm{son}}')$
}
\Return{\scalebox{1}{$\big[{\boldsymbol Q}_{\Phi}^{\nu},
{\boldsymbol Q}_{\Sigma}^{\nu}\big]^\intercal\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu_{\mathrm{son}_1}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu_{\mathrm{son}_2}'}^{\Phi,\Phi}\\
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu_{\mathrm{son}_1}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu_{\mathrm{son}_2}'}^{\Phi,\Phi}
\end{bmatrix} \big[{\boldsymbol Q}_{\Phi}^{\nu'},
{\boldsymbol Q}_{\Sigma}^{\nu'}\big]$}}
}
}
\end{function}\bigskip
Now, in order to assemble the compressed kernel matrix, we require
two nested recursions over the cluster tree, which is traversed in
a depth-first manner. Algorithm~\ref{algo:h2Wavelet}
first computes the lower right matrix block and advances from bottom
to top and from right to left. To this end, the two recursive
functions \texttt{setupColumn} and \texttt{setupRow} are introduced.\bigskip
\begin{algorithm}[H]
\caption{Computation of the compressed kernel matrix}
\label{algo:h2Wavelet}
\KwData{Cluster tree $\Tcal$, multiscale cluster bases ${\boldsymbol V}_{\Phi}^{\nu}$, ${\boldsymbol V}_{\Sigma}^{\nu}$
and transformations $[{\boldsymbol Q}_{j,\Phi}^{\nu},{\boldsymbol Q}_{j,\Sigma}^{\nu}]$.}
\KwResult{Sparse matrix ${\boldsymbol K}^\Sigma_\varepsilon$}
\Begin{
\FuncSty{setupColumn}($X$)\;
store the remaining blocks
${\boldsymbol K}^\Sigma_{\varepsilon,\nu,X}$ for \(\nu\in\Tcal\setminus\{X\}\)
in ${\boldsymbol K}^\Sigma_\varepsilon$ (they have already been computed by
earlier calls to \FuncSty{recursivelyDetermineBlock})
}
\end{algorithm}\bigskip
The purpose of the function \texttt{setupColumn} is to
recursively traverse the column cluster tree, i.e.\ the
cluster tree associated to the columns of the matrix.
Before returning, each instance of \texttt{setupColumn}
calls the function \texttt{setupRow}, which performs the
actual assembly of the compressed matrix.\bigskip
\begin{function}[H]
\caption{setupColumn($\nu'$)}
\Begin{
\For{all sons $\nu_{\mathrm{son}}'$ of $\nu'$}{
$\setupColumn(\nu_{\mathrm{son}}')$
}
store ${\boldsymbol K}^\Sigma_{\varepsilon,X,\nu'}\mathrel{\mathrel{\mathop:}=}
\FuncSty{setupRow}(X, \nu')$
in ${\boldsymbol K}^\Sigma_{\varepsilon}$
}
\end{function}\bigskip
For a given column cluster \(\nu'\), the function \texttt{setupRow}
recursively traverses the row cluster tree, i.e.\
the cluster tree associated to the rows of the matrix, and
assembles the corresponding column of the compressed matrix.
The function reuses the already computed blocks to the right of the column
under consideration and blocks at the bottom of the very same
column.\bigskip
\begin{function}[H]
\caption{setupRow($\nu$, $\nu'$)}
\Begin{
\uIf{$\nu$ is not a leaf}{
\For{all sons \(\nu_{\mathrm{son}}\) of \(\nu\)}{
\uIf{\(\nu_{\mathrm{son}}\) and \(\nu'\) are not admissible}{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Sigma,\Sigma}
\end{bmatrix}$}
$\mathrel{\mathrel{\mathop:}=} \setupRow(\nu_{\mathrm{son}}, \nu')$
}
\Else{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Sigma,\Sigma}
\end{bmatrix}$}
$\mathrel{\mathrel{\mathop:}=}\recursivelyDetermineBlock(\nu_{\mathrm{son}}, \nu')$}
}
\scalebox{1}{$
\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}\mathrel{\mathrel{\mathop:}=}\big[{\boldsymbol Q}_{\Phi}^{\nu},
{\boldsymbol Q}_{\Sigma}^{\nu}\big]^\intercal\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu'}^{\Phi,\Sigma}
\end{bmatrix}$
}
}
\Else{
\uIf{$\nu'$ is a leaf cluster}{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}$}
$\mathrel{\mathrel{\mathop:}=}\recursivelyDetermineBlock(\nu, \nu')$
}
\Else{
\For{all sons \(\nu_{\mathrm{son}}'\) of \(\nu'\)}{
\uIf{\(\nu\) and \(\nu_{\mathrm{son}}'\) are not admissible}{
load already computed block \scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Sigma,\Sigma}
\end{bmatrix}$}
}
\Else
{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Sigma,\Sigma}
\end{bmatrix}$}
$\mathrel{\mathrel{\mathop:}=} \recursivelyDetermineBlock(\nu, \nu_{\mathrm{son}}')$
}
}
}
\scalebox{1}{
$\begin{bmatrix}{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}\mathrel{\mathrel{\mathop:}=}\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_1}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_2}'}^{\Phi,\Phi}\\
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_1}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_2}'}^{\Sigma,\Phi}
\end{bmatrix}\big[{\boldsymbol Q}_{\Phi}^{\nu'},
{\boldsymbol Q}_{\Sigma}^{\nu'}\big]$}
}
store ${\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}$ as part of ${\boldsymbol K}^\Sigma_\varepsilon$
\Return{\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}$}}
}
\end{function}
\begin{remark}
Algorithm~\ref{algo:h2Wavelet} has a cost of \(\mathcal{O}(N\log N)\)
and requires an additional storage of \(\mathcal{O}(N\log N)\)
if all stored blocks are directly released when they are not required
anymore. We refer to \cite{AHK14} for all the details.
\end{remark}
\section{Numerical results I\!I}\label{sec:Num2}
All computations in this section have been performed on a single node
with two Intel Xeon E5-2650 v3 @2.30GHz CPUs and up to 512GB
of main memory\footnote{The full specifications can be found at
\texttt{https://www.euler.usi.ch/en/research/resources}.}. In order to obtain
consistent timings, only a single core was used for all computations.
\subsection*{Benchmark problem}
To benchmark the compression of kernel matrices, we consider
the exponential kernel
\[
k({\boldsymbol x},{\boldsymbol y})=e^{-10\frac{\|{\boldsymbol x}-{\boldsymbol y}\|_2}{\sqrt{d}}}
\]
evaluated at an increasing number of uniformly distributed
random sample points
in the hypercube \([-1,1]^d\) for \(d=1,2,3\). As a measure of sparsity,
we introduce the \emph{average number of nonzeros per row}
\[
\operatorname{anz}({\boldsymbol A})\mathrel{\mathrel{\mathop:}=}\frac{\operatorname{nnz}({\boldsymbol A})}{N},\quad
{\boldsymbol A}\in\Rbb^{N\times N},
\]
where \(\operatorname{nnz}({\boldsymbol A})\) is the number of nonzero entries of
\({\boldsymbol A}\). Besides the compression, we also report the fill-in generated
by the Cholesky factorization in combination with the nested dissection
reordering from \cite{KK98}. For the reordering and the Cholesky
factorization, we rely on \textsc{Matlab} R2020a\footnote{Version 9.8.0.1396136,
The MathWorks Inc., Natick, Massachusetts, 2020.} while the
samplet compression is implemented in \texttt{C++11} using the
\texttt{Eigen} template library\footnote{\texttt{https://eigen.tuxfamily.org/}}
for linear algebra operations. For the computations, we consider
a polynomial degree of 3 for the \(\Hcal^2\)-matrix representation
and \(q+1=3\) vanishing moments for the samplets. In addition,
we have performed a thresholding of the computed matrix
coefficients that were smaller than \(\varepsilon=10^{-3}\).
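Before turning to the results, the quantities involved can be made concrete with a small Python/NumPy sketch. This is illustrative only: it evaluates the kernel densely, whereas the actual experiments use the samplet compression implemented in C++.

```python
import numpy as np

def exp_kernel_matrix(X):
    """Exponential kernel k(x,y) = exp(-10 ||x-y||_2 / sqrt(d))."""
    d = X.shape[1]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.exp(-10.0 * D / np.sqrt(d))

def anz(A, eps=0.0):
    """Average number of entries per row with magnitude above eps."""
    return np.count_nonzero(np.abs(A) > eps) / A.shape[0]

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(256, 2))   # uniform points in [-1,1]^2
K = exp_kernel_matrix(X)
print(anz(K))            # every entry is nonzero, so anz equals N = 256
print(anz(K, eps=1e-3))
```

Thresholding the dense matrix alone still leaves a number of entries per row that grows with $N$; the essentially constant $\operatorname{anz}$ reported below only emerges after the change to the samplet basis.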
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\begin{loglogaxis}[width=0.42\textwidth,grid=both, ymin= 0, ymax = 1e5, xmin = 256, xmax =1.2e6,
legend style={legend pos=south east,font=\small}, ylabel={\small wall time}, xlabel ={\small $N$}]
\addplot[line width=0.7pt,color=blue,mark=triangle]
table[x=npts,y=ctim]{./Results/matlabLogger1.txt};
\label{pgfplots:plot1D}
\addplot[line width=0.7pt,color=darkgreen,mark=square]
table[x=npts,y=ctim]{./Results/matlabLogger2.txt};
\label{pgfplots:plot2D}
\addplot[line width=0.7pt,color=red,mark=o]
table[x=npts,y=ctim]{./Results/matlabLogger3.txt};
\label{pgfplots:plot3D}
\addplot[line width=0.7pt,color=black, dashed]
table[x=npts,y expr={1e-5 * x}]{./Results/matlabLogger1.txt};
\addplot[line width=0.7pt,color=black, dashed]
table[x=npts,y expr={1e-5 * x * ln(x)}]{./Results/matlabLogger1.txt};
\addplot[line width=0.7pt,color=black, dashed]
table[x=npts,y expr={1e-5 * x * ln(x)^2}]{./Results/matlabLogger1.txt};
\addplot[line width=0.7pt,color=black, dashed]
table[x=npts,y expr={1e-5 * x * ln(x)^3}]{./Results/matlabLogger1.txt};
\label{pgfplots:asymps}
\end{loglogaxis}
\begin{loglogaxis}[xshift=0.405\textwidth,width=0.42\textwidth,grid=both, ymin= 0, ymax = 2e3, xmin = 256, xmax =1.2e6, ytick={1e1, 1e2, 1e3, 1e4},
legend style={legend pos=south east,font=\small}, ylabel={\small $\operatorname{anz}({\boldsymbol K}^\Sigma_\varepsilon)$}, xlabel ={\small $N$}]
\addplot[line width=0.7pt,color=blue,mark=triangle] table[x=npts,
y expr = {\thisrow{nzS}}]{./Results/matlabLogger1.txt};
\addplot[line width=0.7pt,color=darkgreen,mark=square] table[x=npts,
y expr = {\thisrow{nzS}}]{./Results/matlabLogger2.txt};
\addplot[line width=0.7pt,color=red,mark=o] table[x=npts,
y expr = {\thisrow{nzS}}]{./Results/matlabLogger3.txt};
\end{loglogaxis}
\matrix[
matrix of nodes,
anchor=north west,
draw,
inner sep=0.1em,
column 1/.style={nodes={anchor=center}},
column 2/.style={nodes={anchor=west},font=\strut}
]
at([xshift=0.02\textwidth]current axis.north east){
\ref{pgfplots:plot1D}& \(d=1\)\\
\ref{pgfplots:plot2D}& \(d=2\)\\
\ref{pgfplots:plot3D}& \(d=3\)\\
\ref{pgfplots:asymps}& \(N\!\log^\alpha\!\! N\)\\};
\end{tikzpicture}
\caption{\label{fig:compTimesNNZ}Assembly times (left) and
average numbers of nonzeros per row (right) versus the number
of sample points $N$ in case of the exponential kernel matrix.}
\end{center}
\end{figure}
The left-hand side of Figure~\ref{fig:compTimesNNZ} shows
the wall time for the assembly of the compressed kernel matrices.
The different dashed lines indicate the asymptotics \(N\log^\alpha N\)
for \(\alpha=0,1,2,3\). It can be seen that, for increasing number
\(N\) of points and the dimensions \(d=1,2,3\) under
consideration, all computation times approach the expected rate
of \(N\log N\). The right-hand side of Figure~\ref{fig:compTimesNNZ}
shows the average number of nonzeros per row for an increasing number
\(N\) of points. Except for the case of \(d=1\), where this number even
decreases, it becomes constant as expected.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\begin{loglogaxis}[width=0.42\textwidth,grid=both, ymin= 0, ymax = 3e4, xmin = 500, xmax =1.2e6,
legend style={legend pos=south east,font=\small}, ylabel={\small wall time}, xlabel ={\small $N$}]
\addplot[line width=0.7pt,color=blue,mark=triangle]
table[x=npts,y=Ltim]{./Results/matlabLogger1.txt};
\label{pgfplots:plot1D1}
\addplot[line width=0.7pt,color=darkgreen,mark=square]
table[x=npts,y=Ltim]{./Results/matlabLogger2.txt};
\label{pgfplots:plot2D1}
\addplot[line width=0.7pt,color=red,mark=o]
table[x=npts,y=Ltim]{./Results/matlabLogger3.txt};
\label{pgfplots:plot3D1}
\addplot[line width=0.7pt,color=black,dashed]
table[x=npts,y expr={0.7e-6 * x^1.5}]{./Results/matlabLogger1.txt};
\addplot[line width=0.7pt,color=black,dashed]
table[x=npts,y expr={0.1e-6 * x^2}]{./Results/matlabLogger1.txt};
\label{pgfplots:asymps1}
\end{loglogaxis}
\begin{loglogaxis}[xshift=0.405\textwidth,width=0.42\textwidth,grid=both, ymin= 0, ymax = 4e4, xmin = 256, xmax =1.2e6,ytick={1e1, 1e2, 1e3, 1e4},
legend style={legend pos=south east,font=\small}, ylabel={\small $\operatorname{anz}({\boldsymbol L})$}, xlabel ={\small $N$}]
\addplot[line width=0.7pt,color=blue,mark=triangle] table[x=npts,
y = nzL]{./Results/matlabLogger1.txt};
\addplot[line width=0.7pt,color=darkgreen,mark=square] table[x=npts,
y = nzL]{./Results/matlabLogger2.txt};
\addplot[line width=0.7pt,color=red,mark=o] table[x=npts,
y = nzL]{./Results/matlabLogger3.txt};
\end{loglogaxis}
\matrix[
matrix of nodes,
anchor=north west,
draw,
inner sep=0.1em,
column 1/.style={nodes={anchor=center}},
column 2/.style={nodes={anchor=west},font=\strut}
]
at([xshift=0.02\textwidth]current axis.north east){
\ref{pgfplots:plot1D1}& \(d=1\)\\
\ref{pgfplots:plot2D1}& \(d=2\)\\
\ref{pgfplots:plot3D1}& \(d=3\)\\
\ref{pgfplots:asymps1}& \(N^{\frac{3}{2}}\), \(N^2\)\\};
\end{tikzpicture}
\caption{\label{fig:cholTimesNNZ}Computation times for the
Cholesky factorization (left) and average numbers of nonzeros
per row for the Cholesky factor (right) versus the number
of sample points $N$ in case of the exponential kernel matrix.}
\end{center}
\end{figure}
Next, we examine the Cholesky factorization of the compressed
kernel matrix. As the largest eigenvalue of the kernel matrix
grows proportionally to the number \(N\) of points,
while the smallest eigenvalue is
given by the ridge parameter, the condition number grows with \(N\) as well.
Hence, to obtain a constant condition number for increasing \(N\),
the ridge parameter needs to be adjusted accordingly.
However, as we are only interested in the generated fill-in and
the computation times,
we neglect this fact and just fix the ridge parameter to
\(\rho=1\) for all considered \(N\) and \(d=1,2,3\).
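The growth of the largest eigenvalue can be observed directly on small dense instances. The following sketch (Python/NumPy for illustration; sizes far below the actual benchmark) computes the largest eigenvalue of the kernel matrix plus ridge for a doubling number of points:

```python
import numpy as np

def exp_kernel(X):
    """Benchmark kernel k(x,y) = exp(-10 ||x-y||_2 / sqrt(d))."""
    d = X.shape[1]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.exp(-10.0 * D / np.sqrt(d))

rng = np.random.default_rng(1)
rho, sizes = 1.0, (128, 256, 512)
lam_max = []
for N in sizes:
    X = rng.uniform(-1.0, 1.0, size=(N, 2))
    # eigvalsh returns the spectrum in ascending order
    lam_max.append(np.linalg.eigvalsh(exp_kernel(X) + rho * np.eye(N))[-1])

# the largest eigenvalue grows roughly linearly in N
print([lam / N for lam, N in zip(lam_max, sizes)])
```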
The obtained results are shown in
Figure~\ref{fig:cholTimesNNZ}. On the left-hand
side, the wall times for the Cholesky factorization of
the reordered matrix are displayed. For \(d=1\), the behavior is somewhat peculiar, as
the average number of nonzeros per row decreases when
the number \(N\) of points increases. This indicates that the kernel
function is already fully resolved up to the threshold parameter on
the coarser levels. For \(d=2\), the observed rate is slightly better than
the expected one of \(N^{\frac{3}{2}}\) for the Cholesky factorization, while the scaling
is approximately like \(N^2\) for \(d=3\). On the right-hand side of the
same figure, it can be seen that the fill-in remains rather moderate.
A visualization of the matrix patterns for the matrix
\({\boldsymbol K}^\Sigma_\varepsilon+\rho{\boldsymbol I}\),
the reordered matrix and the Cholesky factor for \(N=131\,072\) points is
shown in Figure~\ref{fig:patterns}. Each dot corresponds to a block of
\(256\times 256\) matrix entries and its intensity indicates the number
of nonzero entries, where darker blocks contain more entries than lighter blocks.
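On a small scale, the effect of a fill-in reducing reordering can be reproduced as follows. Since the nested dissection reordering of \cite{KK98} is not shipped with SciPy, this sketch uses reverse Cuthill-McKee as a stand-in; it also uses a faster decaying kernel in $d=1$ so that the thresholded matrix is genuinely sparse.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

rng = np.random.default_rng(2)
N, rho, eps = 300, 1.0, 1e-3
X = rng.uniform(-1.0, 1.0, size=(N, 1))        # random points in random order
K = np.exp(-25.0 * np.abs(X - X.T))            # fast decaying kernel, d = 1
K[np.abs(K) < eps] = 0.0                       # discard small entries
A = K + rho * np.eye(N)                        # thresholded matrix plus ridge

# fill-in reducing reordering (RCM as a stand-in for nested dissection)
perm = reverse_cuthill_mckee(csr_matrix(A), symmetric_mode=True)
Ap = A[np.ix_(perm, perm)]

def nnz_factor(B):
    """Number of entries of the Cholesky factor above rounding level."""
    return np.count_nonzero(np.abs(np.linalg.cholesky(B)) > 1e-14)

print(nnz_factor(Ap), "<=", nnz_factor(A))     # reordering reduces the fill-in
```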
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\draw(0,4.5) node {\includegraphics[scale=0.31,frame,trim= 0 0 0 13.4,clip]{./Results/Kmat_1D.eps}};
\draw(4,4.5) node {\includegraphics[scale=0.31,frame,trim= 0 0 0 13.4,clip]{./Results/KmatND_1D.eps}};
\draw(8,4.5) node {\includegraphics[scale=0.31,frame,trim= 0 0 0 13.4,clip]{./Results/Lmat_1D.eps}};
\draw(4,6.6) node {$d=1$};
\draw(0,0) node {\includegraphics[scale=0.31,frame,trim= 0 0 0 13.4,clip]{./Results/Kmat_2D.eps}};
\draw(4,0) node {\includegraphics[scale=0.31,frame,trim= 0 0 0 13.4,clip]{./Results/KmatND_2D.eps}};
\draw(8,0) node {\includegraphics[scale=0.31,frame,trim= 0 0 0 13.4,clip]{./Results/Lmat_2D.eps}};
\draw(4,2.1) node {$d=2$};
\draw(0,-4.5) node {\includegraphics[scale=0.31,frame,trim= 0 0 0 13.4,clip]{./Results/Kmat_3D.eps}};
\draw(4,-4.5) node {\includegraphics[scale=0.31,frame,trim= 0 0 0 13.4,clip]{./Results/KmatND_3D.eps}};
\draw(8,-4.5) node {\includegraphics[scale=0.31,frame,trim= 0 0 0 13.4,clip]{./Results/Lmat_3D.eps}};
\draw(4,-2.4) node {$d=3$};
\end{tikzpicture}
\caption{\label{fig:patterns}Sparsity pattern of \({\boldsymbol K}^\Sigma_\varepsilon+\rho{\boldsymbol I}\) (left),
the reordered matrices (middle) and the Cholesky factors \({\boldsymbol L}\) (right)
for \(d=1,2,3\) and \(N=131\,072\).}
\end{center}
\end{figure}
\subsection*{Simulation of a Gaussian random field}
As our last example, we consider a Gaussian random field evaluated at
100\,000 randomly chosen points on the surface of the Stanford bunny.
As before, the Stanford bunny has been rescaled to have a diameter of 2.
In order to demonstrate that our approach works also for higher dimensions,
the Stanford bunny has been embedded into \(\mathbb{R}^4\) and randomly
rotated to prevent axis-aligned bounding boxes. The polynomial degree for
the \(\Hcal^2\)-matrix representation is set to 3 as before and likewise
we consider \(q+1=3\) vanishing moments. The covariance
function is given by the exponential kernel
\[
k({\boldsymbol x},{\boldsymbol y})=e^{-25\|{\boldsymbol x}-{\boldsymbol y}\|_2}.
\]
Moreover, we discard all computed matrix entries which are
below the threshold of \(\varepsilon=10^{-6}\).
The ridge
parameter is set to \(\rho=10^{-2}\).
The compressed covariance matrix exhibits
\(\operatorname{anz}({\boldsymbol K}^\Sigma_\varepsilon)=5985\)
nonzero
matrix entries per row on average, while the corresponding Cholesky
factor exhibits \(\operatorname{anz}({\boldsymbol L})=12\,010\) nonzero
matrix entries per row on average. This is comparable to the
benchmark case on the hypercube for \(d=3\).
Having the Cholesky factor \({\boldsymbol L}\) at hand,
the computation of a realization of the
Gaussian random field is extremely fast, as it only requires
a simple sparse matrix-vector multiplication of \({\boldsymbol L}\)
by a Gaussian random vector and an inverse samplet transform.
Four different realizations of the random field projected
to \(\mathbb{R}^3\) are shown in Figure~\ref{fig:GRF}.
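A dense stand-in for this sampling step reads as follows (Python/NumPy; small $N$ and random points instead of the bunny geometry, and without the samplet compression and inverse samplet transform of the actual computation):

```python
import numpy as np

rng = np.random.default_rng(3)
N, rho = 500, 1e-2
X = rng.uniform(-1.0, 1.0, size=(N, 3))      # placeholder point cloud
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
C = np.exp(-25.0 * D) + rho * np.eye(N)      # exponential covariance + ridge

L = np.linalg.cholesky(C)                    # C = L L^T
field = L @ rng.standard_normal(N)           # one realization of the field
# E[field field^T] = L E[z z^T] L^T = L L^T = C, as desired
```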
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\draw (0,5) node {\includegraphics[scale=0.12,clip, trim= 1000 250 1000 300]{./Results/bunnyField1.png}};
\draw (5,5) node {\includegraphics[scale=0.12,clip, trim= 1000 250 1000 300]{./Results/bunnyField2.png}};
\draw (0,0) node {\includegraphics[scale=0.12,clip, trim= 1000 250 1000 300]{./Results/bunnyField3.png}};
\draw (5,0) node {\includegraphics[scale=0.12,clip, trim= 1000 250 1000 300]{./Results/bunnyField4.png}};
\draw (8.5,2.5) node {\includegraphics[scale=0.2,clip, trim= 2430 400 300 500]{./Results/bunnyField4.png}};
\end{tikzpicture}
\caption{\label{fig:GRF}Four different realizations of a Gaussian
random field based on an exponential covariance kernel.}
\end{center}
\end{figure}
\section{Conclusion}\label{sec:Conclusion}
Samplets provide a new methodology for the analysis
of large data sets. They are easy to construct and discrete
data can be transformed into the samplet basis in linear cost.
In our construction, we deliberately left out the discussion
of a level dependent compression of the given data, as it
is known from wavelet analysis, in favor of a robust error
analysis. We emphasize however that, under the assumption
of uniformly distributed points, different norms can be
incorporated, allowing for the construction of band-pass
filters and level dependent thresholding. In this situation,
also an improved samplet matrix compression is possible
such that a fixed number of vanishing moments is sufficient
to achieve a precision proportional to the fill distance with
log-linear cost.
Besides data compression, detection of singularities
and adaptivity, we have demonstrated how
samplets can be employed for the compression of kernel
matrices to obtain an essentially sparse matrix.
Having a sparse representation of the kernel matrix,
algebraic operations such as matrix-vector multiplications
can be sped up considerably. Moreover, in combination
with a fill-in reducing reordering, the factorization of
the compressed kernel matrices becomes computationally
feasible, which allows for the fast application of the
inverse kernel matrix on the one hand and the efficient
solution of linear systems involving the kernel matrix
on the other hand. The numerical results, featuring about
\(10^6\) data points in up to four dimensions, impressively
demonstrate the capabilities of samplets.
A straightforward future research direction is
the incorporation of different clustering strategies, such as
manifold aware clustering, to optimally resolve lower dimensional
manifolds in high dimensional data.
\bibliographystyle{plain}
\section{Introduction}\label{sec:intro}
Wavelet techniques have a long standing history in the field of data
science. Applications comprise signal processing, image analysis and
machine learning, see for instance
\cite{Chui,Dahmen,Daubechies,Mallat,Mallat2016}
and the references therein. Assuming a signal generated by some
function, the pivotal idea of wavelet techniques is the splitting of
this function into its respective contributions with respect to a
hierarchy of scales. Such a multiscale ansatz starts from an
approximation on a relatively coarse scale and successively resolves
details at the finer scales. Hence, compression and adaptive
representation are inherently built into this ansatz. The transformation
of a given signal into its wavelet representation and the inverse
transformation can be performed with linear cost in terms of the degrees
of freedom.
Classically, wavelets are constructed by refinement relations and
therefore require a sequence of nested approximation spaces which are
copies of each other, except for a different scaling. This restricts the
concept of wavelets to structured data. Some adaption of the general
principle is possible in order to treat intervals, bounded domains and
surfaces, compare \cite{DKU,Quak,HS,STE} for example. The seminal work
\cite{TW03} by Tausch and White overcomes this obstruction by
constructing wavelets as suitable linear combinations of functions at a
given fine scale. In particular, the stability of the resulting basis,
which is essential for numerical algorithms is guaranteed by
orthonormality.
In this article, we take the concept of wavelets to the next level and
consider discrete data. To this end, we modify the construction of
Tausch and White accordingly and construct a multiscale basis which
consists of localized and discrete signed measures. Inspired by the term
wavelet, we call such signed measures \emph{samplets}. Samplets can be
constructed such that their associated measure integrals vanish for
polynomial integrands. If this is the case for all polynomials of total
degree less or equal than \(q\), we say that the samplets have
\emph{vanishing moments} of order $q+1$. We remark that lowest order
samplets, i.e.\ \(q=0\), have been considered earlier for data
compression in \cite{RE11}. Another concept for constructing multiscale
bases on data sets are \emph{diffusion wavelets}, which employ a
diffusion operator to construct the multiscale hierarchy, see
\cite{CM06}. In contrast to diffusion wavelets, however, the
construction of samplets is solely based on discrete structures and can
always be performed with linear cost for a balanced cluster tree, even
for non-uniformly distributed data.
When representing discrete data by samplets, the vanishing moments
entail a fast decay of the corresponding samplet coefficients
with respect to the support size if the data are smooth. This
straightforwardly enables data compression. In contrast, non-smooth
regions in the data are indicated by large samplet coefficients. This,
in turn, enables singularity detection and extraction. Furthermore, the
construction of samplets is not limited to the use of polynomials.
Indeed, it is easily possible to adapt the construction to other
primitives with different desired properties.
The second application of samplets we consider is compression of kernel
matrices, as they arise in kernel based machine learning and scattered
data approximation, compare \cite{Fasshauer2007,HSS08,Rasmussen2006,%
Schaback2006,Wendland2004,Williams1998}. Kernel matrices are typically
densely populated, since the underlying kernels are nonlocal.
Nonetheless, these kernels are usually \emph{asymptotically smooth},
meaning that they behave like smooth functions apart from the diagonal.
A discretization of an asymptotically smooth kernel with respect to a
samplet basis with vanishing moments results in quasi-sparse kernel
matrices, which means that they can be compressed such that only a
sparse matrix remains, compare \cite{BCR,DHS,DPS,PS,SCHN}. Especially,
it has been demonstrated in \cite{HM} that nested dissection, see
\cite{Geo73,LRT79}, is applicable in order to obtain a fill-in reducing
reordering of the matrix. This reordering in turn allows for the rapid
factorization of the system matrix by the Cholesky factorization without
introducing additional errors.
The asymptotic smoothness of the kernels is also exploited by cluster
methods, like the fast multipole method, see \cite{GR,RO,YBZ04} and
particularly \cite{MXTY+2015} for high-dimensional data. However,
these methods do not allow for the direct and exact factorization, which
is, for example, advantageous for the simulation of Gaussian
random fields. A further approach, which is more in line of the present
work, is the use of \emph{gamblets}, see \cite{Owh17}, for the
compression of the kernel matrix, cp.\ \cite{SSO21}. Different from the
discrete construction of samplets with vanishing
moments, the construction of gamblets is adapted to some underlying
pseudo-differential operator in order to obtain basis functions with
localized supports, while localized supports are automatically obtained
by the samplet construction.
As samplets are directly constructed with respect to a discrete data
set, their applications are manifold. Within this article, we
particularly consider time-series data, image data, kernel matrix
representation and the simulation of Gaussian random fields as examples.
We remark, however, that we do not claim to have invented a new method for
high-dimensional data approximation. The current construction is based
on total degree polynomials and is hence not dimension robust, thus
limited to data of moderate dimension. Even so, we believe that samplets
provide most of the advantages of other approaches for scattered data,
while being easy to implement. Especially, most of the algorithms
available for wavelets with vanishing moments are
transferable.
The rest of this article is organized as follows. In
Section~\ref{section:Samplets}, the concept of samplets is introduced.
The subsequent Section~\ref{sct:construction} is devoted to the actual
construction of samplets and to their properties. The change of basis
by means of the discrete samplet transform is the topic of
Section~\ref{sec:FST}. In Section \ref{sec:Num1}, we demonstrate the
capabilities of samplets for data compression and smoothing for data in
one, two and three dimensions. Section~\ref{sec:kernelCompression} deals
with the samplet compression of kernel matrices. Especially, we also
develop an interpolation based \(\Hcal^2\)-matrix approach in order to
efficiently assemble the compressed kernel matrix. Corresponding
numerical results are then presented in Section \ref{sec:Num2} for up
to four dimensions. Finally, in Section~\ref{sec:Conclusion}, we state
concluding remarks.
\section{Samplets}
\label{section:Samplets}
Let \(X\mathrel{\mathrel{\mathop:}=}\{{\boldsymbol x}_1,\ldots,{\boldsymbol x}_N\}\subset\Omega\)
denote a set of points within some region \(\Omega\subset\Rbb^d\).
Associated to each point \({\boldsymbol x}_i\), we introduce
the Dirac measure
\[
\delta_{{\boldsymbol x}_i}({\boldsymbol x})\mathrel{\mathrel{\mathop:}=}
\begin{cases}
1,&\text{if }{\boldsymbol x}={\boldsymbol x}_i,\\
0,&\text{otherwise}.
\end{cases}
\]
With a slight abuse of notation, we also introduce the
point evaluation functional
\[
(f,\delta_{{\boldsymbol x}_i})_\Omega=\int_\Omega
f({\boldsymbol x})\delta_{{\boldsymbol x}_i}({\boldsymbol x})\d{\boldsymbol x}\mathrel{\mathrel{\mathop:}=}
\int_{\Omega}f({\boldsymbol x})\delta_{{\boldsymbol x}_i}(\d{\boldsymbol x})
=f({\boldsymbol x}_i),
\]
where $f\in C(\Omega)$ is a continuous function.
Next, we define the space
\(V\mathrel{\mathrel{\mathop:}=}\spn\{\delta_{{\boldsymbol x}_1},\ldots,\delta_{{\boldsymbol x}_N}\}\)
as the \(N\)-dimensional vector space
of all discrete and finite signed measures supported at the
points in \(X\).
An inner product on \(V\) is defined by
\[
\langle u,v\rangle_V\mathrel{\mathrel{\mathop:}=}\sum_{i=1}^N u_iv_i,\quad\text{where }
u=\sum_{i=1}^Nu_i\delta_{{\boldsymbol x}_i},\ v=\sum_{i=1}^Nv_i\delta_{{\boldsymbol x}_i}.
\]
Indeed, the space \(V\) is isometrically isomorphic to \(\Rbb^N\)
endowed with the canonical inner product.
Similar to the
idea of a multiresolution analysis in the construction of
wavelets, we introduce the spaces \(V_j\mathrel{\mathrel{\mathop:}=}\spn{\boldsymbol \Phi_j}\), where
\[
{\boldsymbol\Phi_j}\mathrel{\mathrel{\mathop:}=}\{\varphi_{j,k}:k\in\Delta_j\}.
\]
Here, $\Delta_j$ denotes a suitable
index set with cardinality $|\Delta_j|=\dim V_j$ and
\(j\in\Nbb\) is referred to as \emph{level}.
Moreover, each basis function \(\varphi_{j,k}\) is a linear
combination of Dirac measures
such that
\[
\langle \varphi_{j,k},\varphi_{j,k'}\rangle_V=0\quad\text{for }k\neq k'
\]
and, in case of uniformly distributed points,
it holds
\[
\diam(\supp\varphi_{j,k})\mathrel{\mathrel{\mathop:}=}
\diam(\{{\boldsymbol x}_{i_1},\ldots,{\boldsymbol x}_{i_p}\})\sim 2^{-j/d}.
\]
For the sake of notational convenience, we shall identify bases
by row vectors,
such that, for ${\boldsymbol v}_j
= [v_{j,k}]_{k\in\Delta_j}$, the corresponding measure
can simply be written as a dot product according to
\[
v_j = \mathbf\Phi_j{\boldsymbol v}_j=\sum_{k\in\Delta_j} v_{j,k}\varphi_{j,k}.
\]
Rather than using the multiresolution
analysis corresponding to the hierarchy
\[
V_0\subset V_1\subset\cdots\subset V,
\]
the idea of samplets is
to keep track of the increment of information
between two consecutive levels $j$ and $j+1$. Since we have
$V_{j}\subset V_{j+1}$, we may decompose
\begin{equation}\label{eq:decomposition}
V_{j+1} = V_j\overset{\perp}{\oplus} S_j
\end{equation}
by using the \emph{detail space} $S_j$. Of practical interest
is the particular choice of the basis of the detail space $S_j$ in $V_{j+1}$.
This basis is assumed to be orthonormal as well and will be denoted by
\[
{\boldsymbol\Sigma}_j = \{\sigma_{j,k}:k\in\nabla_j\mathrel{\mathrel{\mathop:}=}\Delta_{j+1}
\setminus \Delta_j\}.
\]
Recursively applying the decomposition \eqref{eq:decomposition},
we see that the set
\[
\mathbf\Sigma_J = {\boldsymbol\Phi}_0 \bigcup_{j=0}^{J-1}{\boldsymbol\Sigma}_j
\]
forms a basis of \(V_J\mathrel{\mathrel{\mathop:}=} V\), which we call a \emph{samplet basis}.
In order to employ samplets for the compression of data and
kernel matrices, it is favorable
that the measures $\sigma_{j,k}$
are localized with respect to the corresponding level $j$, i.e.\
\begin{equation}\label{eq:localized}
\diam(\supp\sigma_{j,k})\sim 2^{-j/d}
\end{equation}
(this is, however, not a requirement in our construction),
and that they are stable, i.e.\
\[
\langle \sigma_{j,k},\sigma_{j,k'}\rangle_V=0\quad\text{for }k\neq k'.
\]
Moreover, an essential ingredient is the vanishing moment
condition, meaning that
\begin{equation}\label{eq:vanishingMoments}
(p,\sigma_{j,k})_\Omega
= 0\quad \text{for all}\ p\in\Pcal_q(\Omega),
\end{equation}
where \(\Pcal_q(\Omega)\) denotes the space of all polynomials
with total degree at most \(q\).
We say then that the samplets have $q+1$ \emph{vanishing
moments}.
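For the lowest order case $q=0$, the condition simply states that the coefficients of the signed measure sum to zero. A minimal sketch (the points and coefficients below are hypothetical, chosen only for illustration):

```python
import numpy as np

x = np.array([-0.8, -0.3, 0.1, 0.4, 0.9])      # five sample points
beta = np.array([1.0, -0.5, 2.0, -1.5, -1.0])  # coefficients summing to zero
assert abs(beta.sum()) < 1e-15

# (p, sigma)_Omega = sum_i beta_i p(x_i) vanishes for every constant p
for c in (1.0, -3.0, 7.5):
    print(abs(beta @ np.full(5, c)))           # zero up to rounding
```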
\begin{remark}
The concept of samplets has a very natural interpretation
in the context of reproducing kernel Hilbert spaces, compare
\cite{Aronszajn50}. If \((\Hcal,\langle\cdot,\cdot\rangle_{\Hcal})\)
is a reproducing kernel Hilbert space with reproducing kernel
\(\mathcal{K}\), then there holds
\((f,\delta_{{\boldsymbol x}_i})_\Omega
=\langle \mathcal{K}({\boldsymbol x}_i,\cdot),f\rangle_{\Hcal}\). Hence,
the samplet
\(\sigma_{j,k}=\sum_{\ell=1}^p\beta_\ell\delta_{{\boldsymbol x}_{i_\ell}}\)
can directly be identified with the function
\[
\hat{\sigma}_{j,k}\mathrel{\mathrel{\mathop:}=}
\sum_{\ell=1}^p\beta_\ell \mathcal{K}({\boldsymbol x}_{i_\ell},\cdot)\in\mathcal{H}.
\]
In particular, it holds
\[
\langle\hat{\sigma}_{j,k},h\rangle_\Hcal=0
\]
for any \(h\in\Hcal\) which satisfies
\(h|_{\supp\sigma_{j,k}}\in\Pcal_q(\supp\sigma_{j,k})\).
\end{remark}
\section{Construction of samplets}\label{sct:construction}
\subsection{Cluster tree}
In order to construct samplets with the desired properties,
especially vanishing moments, cf.\ \eqref{eq:vanishingMoments},
we shall transfer the wavelet construction of Tausch and
White from \cite{TW03} into our setting. The first step is to
construct subspaces of signed measures with localized
supports. To this end, we perform a hierarchical
clustering on the set \(X\).
\begin{definition}\label{def:cluster-tree}
Let $\mathcal{T}=(P,E)$ be a tree with vertices $P$ and edges $E$.
We define its set of leaves as
\[
\mathcal{L}(\mathcal{T})\mathrel{\mathrel{\mathop:}=}\{\nu\in P\colon\nu~\text{has no sons}\}.
\]
The tree $\mathcal{T}$ is a \emph{cluster tree} for
the set $X=\{{\boldsymbol x}_1,\ldots,{\boldsymbol x}_N\}$, iff
the set $X$ is the \emph{root} of $\mathcal{T}$ and
all $\nu\in P\setminus\mathcal{L}(\mathcal{T})$
are disjoint unions of their sons.
The \emph{level} \(j_\nu\) of $\nu\in\mathcal{T}$ is its distance from the root,
i.e.\ the number of son relations that are required for traveling from
$X$ to $\nu$. The \emph{depth} \(J\) of \(\Tcal\) is the maximum level
of all clusters. We define the set of clusters
on level $j$ as
\[
\mathcal{T}_j\mathrel{\mathrel{\mathop:}=}\{\nu\in\mathcal{T}\colon \nu~\text{has level}~j\}.
\]
Finally, the \emph{bounding box} $B_{\nu}$ of \(\nu\)
is defined as the smallest axis-parallel cuboid that
contains all its points.
\end{definition}
There exist several possibilities for the choice of a
cluster tree for the set \(X\). However, within this article,
we will exclusively consider binary trees and remark that it is of course
possible to consider other options, such as
\(2^d\)-trees, with the obvious modifications.
Definition~\ref{def:cluster-tree} provides a hierarchical cluster
structure on the set \(X\). Even so, it does not provide guarantees
for the sizes and cardinalities of the clusters.
Therefore, we introduce the concept
of a balanced binary tree.
\begin{definition}
Let $\Tcal$ be a cluster tree on $X$ with depth $J$.
$\Tcal$ is called a \emph{balanced binary tree}, if all
clusters $\nu$ satisfy the following conditions:
\begin{enumerate}
\item
The cluster $\nu$ has exactly two sons
if $j_{\nu} < J$. It has no sons if $j_{\nu} = J$.
\item
It holds $|\nu|\sim 2^{J-j_{\nu}}$.
\end{enumerate}
\end{definition}
A balanced binary tree can be constructed by \emph{cardinality
balanced clustering}. This means that the root cluster
is split into two son clusters of identical (or similar)
cardinality. This process is repeated recursively for the
resulting son clusters until their cardinality falls below a
certain threshold.
For the subdivision, the cluster's bounding box
is split along its longest edge such that the
resulting two boxes both contain an equal number of points.
Thus, as the cluster cardinality halves with each level,
we obtain $\mathcal{O}(\log N)$ levels in total.
The total cost for constructing the cluster tree
is therefore $\mathcal{O}(N \log N)$. Finally, we remark that a
balanced tree is only required to guarantee the cost bounds
for the presented algorithms. The error and compression estimates
we shall present later on are robust in the sense that they
are formulated directly in terms of the actual cluster sizes
rather than the introduced cluster level.
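A minimal sketch of cardinality balanced clustering (Python/NumPy; the index-array representation of clusters and the leaf size are illustrative choices, not the paper's implementation):

```python
import numpy as np

def cluster_tree(idx, X, leaf_size):
    """Cardinality balanced binary tree; each node is (indices, sons)."""
    if len(idx) <= leaf_size:
        return (idx, [])
    box = X[idx].max(axis=0) - X[idx].min(axis=0)
    axis = int(np.argmax(box))            # longest edge of the bounding box
    order = idx[np.argsort(X[idx, axis])]
    half = len(order) // 2                # median split -> equal cardinality
    return (idx, [cluster_tree(order[:half], X, leaf_size),
                  cluster_tree(order[half:], X, leaf_size)])

rng = np.random.default_rng(5)
X = rng.uniform(-1.0, 1.0, size=(1000, 2))
root = cluster_tree(np.arange(1000), X, leaf_size=16)

def leaves(node):
    return [node[0]] if not node[1] else sum((leaves(s) for s in node[1]), [])

sizes = [len(l) for l in leaves(root)]
print(min(sizes), max(sizes))  # leaves of comparable cardinality
```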
\subsection{Multiscale hierarchy}
Having a cluster tree at hand, we
shall now construct a samplet basis on the resulting
hierarchical structure. We begin by introducing a \emph{two-scale}
transform between basis elements on a cluster $\nu$ of level $j$.
To this end, we create \emph{scaling functions} $\mathbf{\Phi}_{j}^{\nu}
= \{ \varphi_{j,k}^{\nu} \}$ and \emph{samplets} $\mathbf{\Sigma}_{j}^{\nu}
= \{ \sigma_{j,k}^{\nu} \}$ as linear combinations of the scaling
functions $\mathbf{\Phi}_{j+1}^{\nu}$ of $\nu$'s son clusters.
This results in the \emph{refinement relation}
\begin{equation}\label{eq:refinementRelation}
[ \mathbf{\Phi}_{j}^{\nu}, \mathbf{\Sigma}_{j}^{\nu} ]
\mathrel{\mathrel{\mathop:}=}
\mathbf{\Phi}_{j+1}^{\nu}
{\boldsymbol Q}_j^{\nu}=
\mathbf{\Phi}_{j+1}^{\nu}
\big[ {\boldsymbol Q}_{j,\Phi}^{\nu},{\boldsymbol Q}_{j,\Sigma}^{\nu}\big].
\end{equation}
In order to provide both vanishing moments and orthonormality,
the transformation \({\boldsymbol Q}_{j}^{\nu}\) has to be
appropriately constructed. For this purpose, we consider an orthogonal
decomposition of the \emph{moment matrix}
\[
{\boldsymbol M}_{j+1}^{\nu}\mathrel{\mathrel{\mathop:}=}
\begin{bmatrix}({\boldsymbol x}^{\boldsymbol 0},\varphi_{j+1,1})_\Omega&\cdots&
({\boldsymbol x}^{\boldsymbol 0},\varphi_{j+1,|\nu|})_\Omega\\
\vdots & & \vdots\\
({\boldsymbol x}^{\boldsymbol\alpha},\varphi_{j+1,1})_\Omega&\cdots&
({\boldsymbol x}^{\boldsymbol\alpha},\varphi_{j+1,|\nu|})_\Omega
\end{bmatrix}=
[({\boldsymbol x}^{\boldsymbol\alpha},\mathbf{\Phi}_{j+1}^{\nu})_\Omega]_{|\boldsymbol\alpha|\le q}
\in\Rbb^{m_q\times|\nu|},
\]
where
\begin{equation}\label{eq:mq}
m_q\mathrel{\mathrel{\mathop:}=}\sum_{\ell=0}^q{\ell+d-1\choose d-1}={q+d\choose d}\leq(q+1)^d
\end{equation}
denotes the dimension of \(\Pcal_q(\Omega)\).
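The identity in \eqref{eq:mq} is easily verified numerically:

```python
from math import comb

def m(q, d):
    """Dimension of the space of d-variate polynomials of total degree <= q."""
    return sum(comb(l + d - 1, d - 1) for l in range(q + 1))

# hockey-stick identity and the upper bound (q+1)^d from the text
for d in range(1, 5):
    for q in range(0, 6):
        assert m(q, d) == comb(q + d, d) <= (q + 1) ** d

print(m(2, 3))  # q + 1 = 3 vanishing moments in d = 3: comb(5, 3) = 10
```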
In the original construction by
Tausch and White, the matrix \({\boldsymbol Q}_{j}^{\nu}\) is obtained
by a singular value decomposition of \({\boldsymbol M}_{j+1}^{\nu}\).
For the construction of samplets, we follow the idea
from \cite{AHK14} and rather
employ the QR decomposition, which has the advantage of generating
samplets with an increasing number of vanishing moments.
It holds
\begin{equation}\label{eq:QR}
({\boldsymbol M}_{j+1}^{\nu})^\intercal = {\boldsymbol Q}_j^\nu{\boldsymbol R}
\mathrel{=\mathrel{\mathop:}}\big[{\boldsymbol Q}_{j,\Phi}^{\nu} ,
{\boldsymbol Q}_{j,\Sigma}^{\nu}\big]{\boldsymbol R}.
\end{equation}
Consequently, the moment matrix
for the cluster's own scaling functions and samplets is then
given by
\begin{equation}\label{eq:vanishingMomentsQR}
\begin{aligned}
\big[{\boldsymbol M}_{j,\Phi}^{\nu}, {\boldsymbol M}_{j,\Sigma}^{\nu}\big]
&= \left[({\boldsymbol x}^{\boldsymbol\alpha},[\mathbf{\Phi}_{j}^{\nu},
\mathbf{\Sigma}_{j}^{\nu}])_\Omega\right]_{|\boldsymbol\alpha|\le q}
= \left[({\boldsymbol x}^{\boldsymbol\alpha},\mathbf{\Phi}_{j+1}^{\nu}[{\boldsymbol Q}_{j,\Phi}^{\nu}
, {\boldsymbol Q}_{j,\Sigma}^{\nu}])_\Omega
\right]_{|\boldsymbol\alpha|\le q} \\
&= {\boldsymbol M}_{j+1}^{\nu} [{\boldsymbol Q}_{j,\Phi}^{\nu} , {\boldsymbol Q}_{j,\Sigma}^{\nu} ]
= {\boldsymbol R}^\intercal.
\end{aligned}
\end{equation}
As ${\boldsymbol R}^\intercal$ is a lower triangular matrix, the first $k-1$
entries in its $k$-th column are zero. This corresponds to
$k-1$ vanishing moments for the $k$-th function generated
by the transformation
${\boldsymbol Q}_{j}^{\nu}=[{\boldsymbol Q}_{j,\Phi}^{\nu} , {\boldsymbol Q}_{j,\Sigma}^{\nu} ]$.
By defining the first $m_{q}$ functions as scaling functions and
the remaining as samplets, we obtain samplets with vanishing
moments at least up to order $q+1$. By increasing
the polynomial degree to \(\hat{q}\geq q\) at the leaf clusters
such that \(m_{\hat{q}}\geq 2m_q\), we can even construct
samplets with an increasing number of vanishing moments up to order \(\hat{q}+1\)
without any additional cost.
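The QR-based construction above is easily reproduced numerically. The following sketch (an illustration in NumPy under the assumptions $d=1$ and Dirac scaling functions; it is not part of the original construction) assembles the moment matrix of a single cluster, computes the transformation via the QR decomposition of the transposed moment matrix, and verifies both orthonormality and the vanishing moments.

```python
import numpy as np

# Illustration (assumption: d = 1, Dirac scaling functions at n points).
rng = np.random.default_rng(0)
q, n = 2, 10                 # q + 1 = 3 vanishing moments, cluster size n
x = np.sort(rng.uniform(-1.0, 1.0, n))

# Moment matrix M = [(x^a, phi_k)]_{a <= q}: for Dirac measures the
# entry is x_k^a, so M is a (q + 1) x n Vandermonde matrix (m_q = q + 1).
M = np.vander(x, q + 1, increasing=True).T

# QR decomposition of M^T yields Q = [Q_Phi, Q_Sigma].
Q, R = np.linalg.qr(M.T, mode="complete")
m_q = q + 1
Q_Sigma = Q[:, m_q:]         # coefficient vectors of the samplets

# Orthonormality and vanishing moments: M Q_Sigma = (R^T)[:, m_q:] = 0.
print(np.max(np.abs(Q.T @ Q - np.eye(n))))   # ~ machine precision
print(np.max(np.abs(M @ Q_Sigma)))           # ~ machine precision
```

Since $\boldsymbol R^\intercal$ is lower triangular, inspecting `R.T` also shows the increasing number of vanishing moments of the successively generated functions.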
\begin{remark}
We remark that the samplet construction using vanishing moments
is inspired by classical wavelet theory. However, it is easily
possible to adapt the construction to other primitives of interest.
\end{remark}
\begin{remark}
\label{remark:introCQ}
Each cluster has at most a constant number of scaling
functions and samplets: For a particular cluster $\nu$, their number
is identical to the cardinality of $\mathbf{\Phi}_{j+1}^{\nu}$. For leaf
clusters, this number is bounded by the leaf size.
For non-leaf clusters, it is bounded by the number of scaling functions
provided by all its son clusters. As there are at most two
son clusters with a maximum of $m_q$ scaling functions each,
we obtain the bound $2 m_q$ for non-leaf clusters. Note that,
if $\mathbf{\Phi}_{j+1}^{\nu}$ has at most $m_q$ elements, a
cluster will not provide any samplets at all and all functions
will be considered as scaling functions.
\end{remark}
For leaf clusters, we define the scaling functions by
the Dirac measures supported at the points \({\boldsymbol x}_i\), i.e.\
$\mathbf{\Phi}_J^{\nu}\mathrel{\mathrel{\mathop:}=}\{ \delta_{{\boldsymbol x}_i} : {\boldsymbol x}_i\in\nu \}$,
to account for the lack of son clusters that could provide scaling
functions.
The scaling functions of all clusters on a specific level $j$
then generate the spaces
\begin{equation}\label{eq:Vspaces}
V_{j}\mathrel{\mathrel{\mathop:}=} \spn\{ \varphi_{j,k}^{\nu} : k\in \Delta_j^\nu,\ \nu \in\Tcal_{j} \},
\end{equation}
while the samplets span the detail spaces
\begin{equation}\label{eq:Wspaces}
S_{j}\mathrel{\mathrel{\mathop:}=}
\spn\{ \sigma_{j,k}^{\nu} : k\in \nabla_j^\nu,\
\nu \in \Tcal_{j} \} =
V_{j+1}\overset{\perp}{\ominus} V_j.
\end{equation}
Combining the scaling functions of the root cluster with all
clusters' samplets gives rise to the samplet basis
\begin{equation}\label{eq:Wbasis}
\mathbf{\Sigma}_{N}\mathrel{\mathrel{\mathop:}=}\mathbf{\Phi}_{0}^{X}
\cup \bigcup_{\nu \in \Tcal} \mathbf{\Sigma}_{j}^{\nu}.
\end{equation}
Writing $\mathbf{\Sigma}_{N}
= \{ \sigma_{k} : 1 \leq k \leq N \}$, where
$\sigma_{k}$ is either a samplet or a scaling function
at the root cluster, we can establish a unique indexing of
all the signed measures comprising the samplet
basis. The indexing induces an order on the
basis set $\mathbf{\Sigma}_{N}$, which we choose
to be level-dependent: Samplets belonging to a particular
cluster are grouped together, with those on finer levels
having larger indices.
\begin{remark}\label{remark:waveletLeafSize}
We remark that the samplet basis on a balanced
cluster tree can be computed in cost $\mathcal{O}(N)$;
we refer to \cite{AHK14} for a proof.
\end{remark}
\subsection{Properties of the samplets}
By construction, the samplets satisfy the following
properties, which can directly be inferred from
the corresponding results in \cite{HKS05,TW03}.
\begin{theorem}\label{theo:waveletProperties}
The spaces $V_{j}$ defined in equation \eqref{eq:Vspaces}
exhibit the desired multiscale hierarchy
\[
V_0\subset V_1\subset\cdots\subset V_J = V,
\]
where the corresponding complement spaces $S_{j}$ from \eqref{eq:Wspaces}
satisfy $V_{j+1}=V_j\overset{\perp}{\oplus} S_{j}$ for all $j=0,1,\ldots,
J-1$. The associated samplet basis $\mathbf{\Sigma}_{N}$ defined in
\eqref{eq:Wbasis} constitutes an orthonormal basis of $V$.
In particular:
\begin{enumerate}
\item The number of all samplets on level $j$ behaves like $2^j$.
\item The samplets have $q+1$ vanishing moments.
\item
Each samplet is supported in a specific cluster $\nu$.
If the points in $X$ are uniformly distributed, then the
diameter of the cluster satisfies $\diam(\nu)\sim
2^{-j_\nu/d}$ and it holds \eqref{eq:localized}.
\end{enumerate}
\end{theorem}
\begin{remark}
Due to $S_j\subset V$ and $V_0\subset V$,
we conclude that each samplet is a linear combination of the
Dirac measures supported at the points in $X$. Especially, the
related coefficient vectors ${\boldsymbol\omega}_{j,k}$ in
\begin{equation}\label{eq:coefficientVectorsOfWavelets}
\sigma_{j,k} = \sum_{i=1}^{N}
\omega_{j,k,i} \delta_{{\boldsymbol x}_i} \quad
\text{and} \quad \varphi_{0,k} = \sum_{i=1}^{N}
\omega_{0,k,i} \delta_{{\boldsymbol x}_i}
\end{equation}
are pairwise orthonormal with respect to the inner
product on \(\Rbb^N\).
\end{remark}
Later on, the following bound on the $\|\cdot\|_1$-norm of the
samplets' coefficient vectors will be essential:
\begin{lemma}\label{lemma:waveletL1Norm}
The coefficient vector ${\boldsymbol\omega}_{j,k}=\big[\omega_{j,k,i}\big]_i$ of
the samplet $\sigma_{j,k}$ on the cluster $\nu$ fulfills
\begin{equation}\label{eq:ell1-norm}
\|{\boldsymbol\omega}_{j,k}\|_{1}\le\sqrt{|\nu|}.
\end{equation}
The same holds for the scaling functions $\varphi_{j,k}$.
\end{lemma}
\begin{proof}
It holds $\|{\boldsymbol\omega}_{j,k}\|_{2}=1$. Hence,
the assertion follows immediately from the Cauchy-Schwarz
inequality
\[
\|{\boldsymbol\omega}_{j,k}\|_{1}\le\sqrt{|\nu|}\|{\boldsymbol\omega}_{j,k}\|_{2}
=\sqrt{|\nu|}.
\]
\end{proof}
The key for data compression and singularity detection
is the following estimate which shows that the samplet
coefficients decay with respect to the samplet's level
provided that the data result from the evaluation of a smooth function.
Therefore, in case of smooth data, the samplet
coefficients are small and can be set to zero without
compromising the accuracy. Vice versa, a large samplet
coefficient indicates that the data are singular in the
region of the samplet's support.
\begin{lemma}\label{lemma:decay}
Let $f\in C^{q+1}(\Omega)$. Then, it holds for
a samplet $\sigma_{j,k}$ supported
on the cluster $\nu$ that
\begin{equation}\label{eq:decay}
|(f,\sigma_{j,k})_\Omega|\le
\diam(\nu)^{q+1}\|f\|_{C^{q+1}(\Omega)}\|{\boldsymbol\omega}_{j,k}\|_{1}.
\end{equation}
\end{lemma}
\begin{proof}
For ${\boldsymbol x}_0\in\nu$, the Taylor expansion of $f$ yields
\[
f({\boldsymbol x}) = \sum_{|\boldsymbol\alpha|\le q}
\frac{\partial^{|\boldsymbol\alpha|}}{\partial{\boldsymbol x}^{\boldsymbol\alpha}}f({\boldsymbol x}_0)
\frac{({\boldsymbol x}-{\boldsymbol x}_0)^{\boldsymbol\alpha}}{\boldsymbol\alpha!}
+ R_{{\boldsymbol x}_0}({\boldsymbol x}).
\]
Herein, the remainder $R_{{\boldsymbol x}_0}({\boldsymbol x})$ reads
\begin{align*}
R_{{\boldsymbol x}_0}({\boldsymbol x}) &= (q+1)\sum_{|\boldsymbol\alpha|=q+1}
\frac{({\boldsymbol x}-{\boldsymbol x}_0)^{\boldsymbol\alpha}}{\boldsymbol\alpha!}
\int_0^1\frac{\partial^{q+1}}{\partial{\boldsymbol x}^{\boldsymbol\alpha}}
f\big({\boldsymbol x}_0+s({\boldsymbol x}-{\boldsymbol x}_0)\big)(1-s)^q\d s.
\end{align*}
In view of the vanishing moments, we conclude
\begin{align*}
|(f,\sigma_{j,k})_\Omega|
&= |(R_{{\boldsymbol x}_0},\sigma_{j,k})_\Omega|
\le\sum_{|\boldsymbol\alpha|=q+1}
\frac{\|{\boldsymbol x}-{\boldsymbol x}_0\|_2^{|\boldsymbol\alpha|}}{\boldsymbol\alpha!}
\max_{{\boldsymbol x}\in\nu}\bigg|\frac{\partial^{q+1}}
{\partial{\boldsymbol x}^{\boldsymbol\alpha}}f(\boldsymbol x)\bigg|
\|{\boldsymbol\omega}_{j,k}\|_{1}\\
&\le\diam(\nu)^{q+1}\|f\|_{C^{q+1}(\Omega)}\|{\boldsymbol\omega}_{j,k}\|_{1}.
\end{align*}
Here, we used the estimate
\[
\sum_{|\boldsymbol\alpha|=q+1}\frac{2^{-(q+1)}}{\boldsymbol\alpha!}\le 1,
\]
which is obtained by choosing \({\boldsymbol x}_0\) as the
cluster's midpoint.
\end{proof}
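Both lemmata are straightforward to verify numerically. The sketch below (NumPy, $d=1$; the cluster geometry and the test function $f(x)=\sin(2\pi x)$ are illustrative choices of ours) constructs one samplet on a small cluster and checks the $\ell^1$-bound as well as the decay estimate, using $\|f\|_{C^{3}(\Omega)}=(2\pi)^3$.

```python
import numpy as np

# Check of the decay estimate for one samplet on a small cluster (d = 1).
rng = np.random.default_rng(1)
q, n, diam = 2, 16, 0.1
x = 0.5 + diam * np.sort(rng.uniform(0.0, 1.0, n))   # cluster, diam <= 0.1

M = np.vander(x, q + 1, increasing=True).T           # moment matrix
Q, _ = np.linalg.qr(M.T, mode="complete")
omega = Q[:, -1]                                     # a samplet coefficient vector

# l1-norm lemma: ||omega||_1 <= sqrt(|nu|).
ell1 = np.sum(np.abs(omega))

# Decay lemma with f(x) = sin(2 pi x), ||f||_{C^3} = (2 pi)^3.
coeff = abs(omega @ np.sin(2.0 * np.pi * x))
bound = diam ** (q + 1) * (2.0 * np.pi) ** (q + 1) * ell1
print(ell1 <= np.sqrt(n), coeff <= bound)
```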
\section{Discrete samplet transform}\label{sec:FST}
In order to transform between the samplet basis
and the basis of Dirac measures, we introduce
the \emph{discrete samplet transform} and its inverse.
To this end, we assume that the data
\(({\boldsymbol x}_1,y_1),\ldots,({\boldsymbol x}_N,y_N)\)
result from the evaluation of some (unknown) function
\(f\colon\Omega\to\Rbb\),
i.e.\
\[y_i=f_i^{\Delta}=(f,\delta_{{\boldsymbol x}_i})_\Omega.
\]
Hence, we may represent the function \(f\) on \(X\)
according to
\[f = \sum_{i = 1}^{N} f_i^{\Delta} \delta_{{\boldsymbol x}_i}.
\]
Our goal is now to compute the representation
\[f =
\sum_{k = 1}^{N} f_{k}^{\Sigma} \sigma_{k}
\]
with respect to the samplet basis.
For
the sake of simpler notation, let
${\boldsymbol f}^{\Delta}\mathrel{\mathrel{\mathop:}=} [f_i^{\Delta}]_{i=1}^N$
and ${\boldsymbol f}^{\Sigma}\mathrel{\mathrel{\mathop:}=} [f_i^\Sigma]_{i=1}^N$ denote
the associated coefficient vectors.
\begin{figure}[htb]
\begin{center}
\scalebox{0.75}{
\begin{tikzpicture}[x=0.4cm,y=0.4cm]
\tikzstyle{every node}=[circle,draw=black,fill=shadecolor,
minimum size=1.2cm]%
\tikzstyle{ptr}=[draw=none,fill=none,above]%
\node at (0,5) (1) {${\boldsymbol f}^{\Delta}$};
\node at (8,5) (2) {${\boldsymbol f}_{J-1}^{\Phi}$};
\node at (8,1) (3) {${\boldsymbol f}_{J-1}^{\Sigma}$};
\node at (16,5) (4) {${\boldsymbol f}_{J-2}^{\Phi}$};
\node at (16,1) (5) {${\boldsymbol f}_{J-2}^{\Sigma}$};
\node at (24,5) (6) {${\boldsymbol f}_{J-3}^{\Phi}$};
\node at (24,1) (7) {${\boldsymbol f}_{J-3}^{\Sigma}$};
\node at (30,5) (8) {${\boldsymbol f}_{1}^{\Phi}$};
\node at (38,5) (9) {${\boldsymbol f}_{0}^{\Phi}$};
\node at (38,1) (10) {${\boldsymbol f}_{0}^{\Sigma}$};
\tikzstyle{forward}=[draw,-stealth]%
\tikzstyle{every node}=[style=ptr]
\draw
(1) edge[forward] node[above,sloped]{${\boldsymbol Q}_{J-1,\Phi}^\intercal$} (2)
(1) edge[forward] node[above,sloped]{${\boldsymbol Q}_{J-1,\Sigma}^\intercal$}%
(3)
(2) edge[forward] node[above,sloped]{${\boldsymbol Q}_{J-2,\Phi}^\intercal$} (4)
(2) edge[forward] node[above,sloped]{${\boldsymbol Q}_{J-2,\Sigma}^\intercal$}%
(5)
(4) edge[forward] node[above,sloped]{${\boldsymbol Q}_{J-3,\Phi}^\intercal$} (6)
(4) edge[forward] node[above,sloped]{${\boldsymbol Q}_{J-3,\Sigma}^\intercal$}%
(7)
(8) edge[forward] node[above,sloped]{${\boldsymbol Q}_{0,\Phi}^\intercal$} (9)
(8) edge[forward] node[above,sloped]{${\boldsymbol Q}_{0,\Sigma}^\intercal$}%
(10);
\tikzstyle{every node}=[style=ptr]%
\tikzstyle{ptr}=[draw=none,fill=none]%
\node at (27,5) (16) {$\hdots$};
\end{tikzpicture}}
\caption{\label{fig:haar}Visualization of the discrete samplet transform.}
\end{center}
\end{figure}
The discrete samplet transform is based on
recursively applying the refinement relation
\eqref{eq:refinementRelation} to the point evaluations
\begin{equation}\label{eq:refinementRelationInnerProducts}
(f, [ \mathbf{\Phi}_{j}^{\nu}, \mathbf{\Sigma}_{j}^{\nu} ])_\Omega
=(f, \mathbf{\Phi}_{j+1}^{\nu} [{\boldsymbol Q}_{j,\Phi}^{\nu} ,{\boldsymbol Q}_{j,\Sigma}^{\nu} ])_\Omega
=(f, \mathbf{\Phi}_{j+1}^{\nu})_\Omega [{\boldsymbol Q}_{j,\Phi}^{\nu} , {\boldsymbol Q}_{j,\Sigma}^{\nu} ].
\end{equation}
On the finest level, the entries of the vector
$(f, \mathbf{\Phi}_{J}^{\nu})_\Omega$
are exactly those of ${\boldsymbol f}^{\Delta}$. Recursively
applying equation \eqref{eq:refinementRelationInnerProducts} therefore
yields all the coefficients $(f, \mathbf{\Sigma}_{j}^{\nu})_\Omega$,
including $(f, \mathbf{\Phi}_{0}^{X})_\Omega$,
required for the representation of $f$ in the samplet basis,
see Figure~\ref{fig:haar} for a visualization. The
complete procedure is
formulated in Algorithm~\ref{algo:DWT}.\bigskip
\begin{algorithm}[H]
\caption{Discrete samplet transform}
\label{algo:DWT}
\KwData{Data ${\boldsymbol f}^\Delta$,
cluster tree $\Tcal$ and transformations
$[{\boldsymbol Q}_{j,\Phi}^{\nu},{\boldsymbol Q}_{j,\Sigma}^{\nu}]$.}
\KwResult{Coefficients ${\boldsymbol f}^{\Sigma}$
stored as
$[(f,\mathbf{\Phi}_{0}^{X})_\Omega]^\intercal$ and
$[(f,\mathbf{\Sigma}_{j}^{\nu})_\Omega]^\intercal$.}
\Begin{
store $[(f,\mathbf{\Phi}_{0}^{X})_\Omega]^\intercal\mathrel{\mathrel{\mathop:}=}$
\FuncSty{transformForCluster}($X$)
}
\end{algorithm}
\begin{function}[H]
\caption{transformForCluster($\nu$)}
\Begin{
\uIf{$\nu=\{{\boldsymbol x}_{i_{1}}, \dots,{\boldsymbol x}_{i_{|\nu|}}\}$
is a leaf of \(\Tcal\)}{
set ${\boldsymbol f}_{j+1}^{\nu}\mathrel{\mathrel{\mathop:}=}
\big[f_{i_{k}}^\Delta\big]_{k=1}^{|\nu|}$
}
\Else{
\For{all sons $\nu'$ of $\nu$}{
execute \FuncSty{transformForCluster}($\nu'$)\\
append the result to ${\boldsymbol f}_{j+1}^{\nu}$
}
}
set $[(f,\mathbf{\Sigma}_{j}^{\nu})_\Omega]^\intercal
\mathrel{\mathrel{\mathop:}=}({\boldsymbol Q}_{j,\Sigma}^{\nu})^\intercal {\boldsymbol f}_{j+1}^{\nu}$
\Return{$({\boldsymbol Q}_{j,\Phi}^{\nu})^\intercal{\boldsymbol f}_{j+1}^{\nu}$}
}
\end{function}\bigskip
\begin{remark}
Algorithm \ref{algo:DWT} employs the transposed version of
\eqref{eq:refinementRelationInnerProducts} to preserve
the column vector structure of ${\boldsymbol f}^\Delta$ and ${\boldsymbol f}^{\Sigma}$.
\end{remark}
The inverse transformation is obtained by reversing
the steps of the discrete samplet transform:
For each cluster, we compute
\[
(f, \mathbf{\Phi}_{j+1}^{\nu})_\Omega
= (f, [ \mathbf{\Phi}_{j}^{\nu}, \mathbf{\Sigma}_{j}^{\nu} ]
)_\Omega[{\boldsymbol Q}_{j,\Phi}^{\nu} ,{\boldsymbol Q}_{j,\Sigma}^{\nu} ]^\intercal
\]
to either obtain the
coefficients of the
son clusters' scaling functions
or, for leaf clusters, the coefficients ${\boldsymbol f}^{\Delta}$.
The procedure is summarized in Algorithm~\ref{algo:iDWT}.\bigskip
\begin{algorithm}[H]
\caption{Inverse samplet transform}
\label{algo:iDWT}
\KwData{Coefficients ${\boldsymbol f}^\Sigma$,
cluster tree $\Tcal$ and transformations
$[{\boldsymbol Q}_{j,\Phi}^{\nu},{\boldsymbol Q}_{j,\Sigma}^{\nu}]$.}
\KwResult{Coefficients ${\boldsymbol f}^{\Delta}$
stored as
$[(f,\mathbf{\Phi}_{j}^{\nu})_\Omega]^\intercal$.}
\Begin{
\FuncSty{inverseTransformForCluster}($X$,
$[(f,\mathbf{\Phi}_{0}^{X})_\Omega]^\intercal$)
}
\end{algorithm}
\begin{function}[H]
\caption{inverseTransformForCluster($\nu$,
\unexpanded{$[(f,{\boldsymbol\Phi}_{j}^\nu)_\Omega]^\intercal$})}
\Begin{
$[(f,{\boldsymbol\Phi}_{j+1}^\nu)_\Omega]^\intercal
\mathrel{\mathrel{\mathop:}=} [{\boldsymbol Q}_{j,\Phi}^{\nu} ,{\boldsymbol Q}_{j,\Sigma}^{\nu} ]
\begin{bmatrix}
[(f,{\boldsymbol\Phi}_{j}^\nu)_\Omega]^\intercal\\
[(f,{\boldsymbol\Sigma}_{j}^\nu)_\Omega]^\intercal
\end{bmatrix}$
\uIf{$\nu=\{{\boldsymbol x}_{i_{1}}, \dots,{\boldsymbol x}_{i_{|\nu|}}\}$
is a leaf of \(\Tcal\)}{set $\big[f_{i_{k}}^\Delta\big]_{k=1}^{|\nu|}
\mathrel{\mathrel{\mathop:}=}[(f,{\boldsymbol\Phi}_{j_\nu+1}^\nu)_\Omega]^\intercal$
}
\Else{
\For{all sons $\nu'$ of $\nu$}{
assign the part of $[(f,{\boldsymbol\Phi}_{j+1}^\nu)_\Omega]^\intercal$
belonging to \(\nu'\) to $[(f,{\boldsymbol\Phi}_{j'}^{\nu'})_\Omega]^\intercal$\\
execute \FuncSty{inverseTransformForCluster}($\nu'$,
$[(f,{\boldsymbol\Phi}_{j'}^{\nu'})_\Omega]^\intercal$) }
}
}
\end{function}\bigskip
The discrete samplet transform and its inverse
can be performed in linear cost. This
result is well known in case of wavelets and was
crucial for their rapid development.
\begin{theorem}
The runtimes of the discrete samplet transform and of the inverse
samplet transform are \(\mathcal{O}(N)\) each.
\end{theorem}
\begin{proof}
As the samplet construction follows the construction
of Tausch and White, we refer to \cite{TW03} for the
details of the proof.
\end{proof}
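To make the two algorithms concrete, the following self-contained sketch implements the forward and inverse transform on a balanced binary cluster tree (an illustration in NumPy under the assumptions $d=1$, Dirac scaling functions and $q+1=3$ vanishing moments; the helper names are ours, not from the original algorithms). The moment matrices of non-leaf clusters are propagated upward together with the scaling coefficients, exactly as in the construction above.

```python
import numpy as np

def build_tree(lo, hi, leafsize=4):
    # Balanced binary cluster tree over the index range [lo, hi).
    if hi - lo <= leafsize:
        return {"lo": lo, "hi": hi, "sons": []}
    mid = (lo + hi) // 2
    return {"lo": lo, "hi": hi, "sons": [build_tree(lo, mid, leafsize),
                                         build_tree(mid, hi, leafsize)]}

def forward(node, x, f, q):
    # Returns the cluster's scaling coefficients and the moment matrix
    # of its scaling functions; samplet coefficients are stored per node.
    if not node["sons"]:
        fj = f[node["lo"]:node["hi"]]
        M = np.vander(x[node["lo"]:node["hi"]], q + 1, increasing=True).T
    else:
        parts = [forward(s, x, f, q) for s in node["sons"]]
        fj = np.concatenate([p[0] for p in parts])
        M = np.hstack([p[1] for p in parts])
    Q, _ = np.linalg.qr(M.T, mode="complete")
    m = min(q + 1, len(fj))                  # m_q = q + 1 for d = 1
    node["Q"], node["m"] = Q, m
    node["sig"] = Q[:, m:].T @ fj            # samplet coefficients
    return Q[:, :m].T @ fj, M @ Q[:, :m]     # scaling part, new moments

def inverse(node, scal, out):
    # Reverses the transform by reapplying Q = [Q_Phi, Q_Sigma].
    fj = node["Q"] @ np.concatenate([scal, node["sig"]])
    if not node["sons"]:
        out[node["lo"]:node["hi"]] = fj
    else:
        off = 0
        for s in node["sons"]:
            inverse(s, fj[off:off + s["m"]], out)
            off += s["m"]

q, N = 2, 32
x = np.linspace(-1.0, 1.0, N)
f = np.exp(x)

root = build_tree(0, N)
scal, _ = forward(root, x, f, q)             # discrete samplet transform
out = np.empty(N)
inverse(root, scal, out)                     # inverse samplet transform
print(np.max(np.abs(out - f)))               # round trip ~ machine precision
```

Since every step applies an orthogonal matrix, the round trip is exact up to rounding; moreover, data sampled from a polynomial of degree at most $q$ produce vanishing samplet coefficients on every cluster.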
\section{Numerical results I}\label{sec:Num1}
To demonstrate the efficacy of the samplet analysis,
we compress different sample data in one, two and three
spatial dimensions. For each example, we use samplets
with \(q+1=3\) vanishing moments.
\subsection*{One dimension}
We start with two one-dimensional
examples. On the one hand, we consider the test function
\[
f(x)=\frac 3 2 e^{-40|x-\frac 1 4|}
+ 2e^{-40|x|}-e^{-40|x+\frac 1 2|},
\]
sampled at $8192$ uniformly distributed points on \([-1,1]\).
On the other hand, we consider a path of a Brownian motion
sampled at the same points. The coefficients of the samplet
transformed data are thresholded with \(10^{-i}\|{\boldsymbol f}^{\Sigma}\|_\infty\),
\(i=1,2,3\), respectively.
The resulting compression ratios and the reconstructions
can be found in Figure~\ref{fig:Expcomp} and Figure~\ref{fig:BMcomp},
respectively. One readily infers that in both cases high compression
rates are achieved at high accuracy. In case of the Brownian motion,
the smoothing of the sample data can be realized by increasing the
compression rate, corresponding to throwing away more and
more detail information. Indeed, due to the orthonormality of the samplet
basis, this procedure amounts to a least squares fit of the data.
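The least-squares interpretation of thresholding is a direct consequence of Parseval's identity in an orthonormal basis. A minimal sketch (NumPy; a random orthogonal matrix stands in for an actual samplet basis, and the threshold is an illustrative choice):

```python
import numpy as np

# Hard thresholding in an orthonormal basis is a least-squares fit:
# the l2-error equals the l2-norm of the discarded coefficients.
rng = np.random.default_rng(2)
N = 64
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))   # orthonormal "basis"
f = rng.standard_normal(N)

c = Q.T @ f                                        # analysis
keep = np.abs(c) >= 1e-1 * np.max(np.abs(c))       # thresholding
f_rec = Q @ (c * keep)                             # synthesis

err = np.linalg.norm(f - f_rec)
print(np.isclose(err, np.linalg.norm(c[~keep])))   # Parseval
```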
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\begin{axis}[width=\textwidth, height=0.4\textwidth, xmin = -1, xmax=1,
ymin=-1.1,
ymax=2.1, ylabel={$y$}, xlabel ={$x$},legend style={mark
options={scale=2}},
legend pos = north east]
\addplot[line width=0.7pt,color=black]
table[each nth point=3,x index={0},y index = {1}]{%
./Results/ExpCompress1D.txt};
\addlegendentry{data};
\addplot[line width=0.7pt,color=blue, only marks, mark size=0.2pt]
table[each nth point=3,x index={0},y index = {5}]{%
./Results/ExpCompress1D.txt};
\addlegendentry{$98.55\%$ compr.};
\addplot[line width=0.7pt,color=red, only marks, mark size=0.2pt]
table[each nth point=3,x index={0},y index = {4}]{%
./Results/ExpCompress1D.txt};
\addlegendentry{$99.17\%$ compr.};
\addplot[line width=0.7pt,color=darkgreen, only marks, mark size=0.2pt]
table[each nth point=3,x index={0},y index = {3}]{%
./Results/ExpCompress1D.txt};
\addlegendentry{$99.63\%$ compr.};
\end{axis}
\end{tikzpicture}
\end{center}
\caption{\label{fig:Expcomp}Sampled test function approximated with
different compression ratios.}
\end{figure}
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}[ ausschnitt/.style={black!80}]
\begin{axis}[width=\textwidth, height=0.4\textwidth, xmin = -1, xmax=1,
ymin=-1,
ymax=2.4,
ylabel={$y$}, xlabel ={$x$},legend style={mark options={scale=2}},
legend pos = south east]
\draw[ausschnitt]
(axis cs:-0.5,-0.5)coordinate(ul)--
(axis cs:0.005,-0.5)coordinate(ur)--
(axis cs:0.005,0.4)coordinate(or)--
(axis cs:-0.5,0.4) -- cycle;
\addplot[line width=0.7pt,color=black]
table[each nth point=4,x index={0},y index = {1}]{%
./Results/BMCompress1D.txt};
\addlegendentry{data};
\addplot[line width=0.7pt,color=blue, only marks, mark size=0.2pt]
table[each nth point=5,x index={0},y index = {5}]{%
./Results/BMCompress1D.txt};
\addlegendentry{$92.69\%$ compr.};
\addplot[line width=0.7pt,color=red, only marks, mark size=0.2pt]
table[each nth point=5,x index={0},y index = {4}]{%
./Results/BMCompress1D.txt};
\addlegendentry{$99.24\%$ compr.};
\addplot[line width=0.7pt,color=darkgreen, only marks, mark size=0.2pt]
table[each nth point=5,x index={0},y index = {3}]{%
./Results/BMCompress1D.txt};
\addlegendentry{$99.88\%$ compr.};
\end{axis}
\begin{axis}[yshift=-.37\textwidth,xshift=0.25\textwidth,
width=0.5\textwidth, height=0.4\textwidth, xmin = -0.5,
xmax=0.005, ymin=-0.5,
ymax=0.4,axis line style=ausschnitt]
\addplot[line width=0.7pt,color=black]
table[each nth point=2,x index={0},y index = {1}]{%
./Results/BMCompress1D.txt};
\addplot[line width=0.7pt,color=blue, only marks, mark size=0.2pt]
table[each nth point=2,x index={0},y index = {5}]{%
./Results/BMCompress1D.txt};
\addplot[line width=0.7pt,color=red, only marks, mark size=0.2pt]
table[each nth point=2,x index={0},y index = {4}]{%
./Results/BMCompress1D.txt};
\addplot[line width=0.7pt,color=darkgreen, only marks, mark size=0.2pt]
table[each nth point=2,x index={0},y index = {3}]{%
./Results/BMCompress1D.txt};
\end{axis}
\draw[ausschnitt]
(current axis.north west)--(ul)
(current axis.north east)--(ur);
\end{tikzpicture}
\caption{\label{fig:BMcomp}Sampled Brownian motion approximated with
different compression ratios.}
\end{center}
\end{figure}
\subsection*{Two dimensions}
As a second application for samplets, we consider image compression.
To this end, we use a \(2000\times 2000\) pixel grayscale landscape
image. The coefficients of the samplet transformed image are thresholded
with \(10^{-i}\|{\boldsymbol f}^{\Sigma}\|_\infty\), \(i=2,3,4\), respectively.
The corresponding
results and compression rates can be found in Figure~\ref{fig:compImage}.
A visualization of the samplet coefficients in case of the respective
low compression can be found in Figure~\ref{fig:coeffImage}.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\draw(0,0)node{\includegraphics[scale = 0.12,trim=65 47 65 24,clip]{%
./Results/OriginalLugano.png}};
\draw(0,2.4)node{Original image};
\draw(5,2.4)node{\(95.23\%\) compression};
\draw(5,0)node{\includegraphics[scale = 0.12,trim=65 47 65 24,clip]{%
./Results/CompressedLowLugano.png}};
\draw(0,-2.6)node{\(99.89\%\) compression};
\draw(0,-5)node{\includegraphics[scale = 0.12,trim=65 47 65 24,clip]{%
./Results/CompressedIntermedLugano.png}};
\draw(5,-2.6)node{\(99.99\%\) compression};
\draw(5,-5)node{\includegraphics[scale = 0.12,trim=65 47 65 24,clip]{%
./Results/CompressedHighLugano.png}};
\end{tikzpicture}
\caption{\label{fig:compImage}Different compression rates of the
test image.}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\draw(0,0)node{\includegraphics[scale = 0.12,trim=1000 195 1000 195,clip]{%
./Results/LuganoCoeffs.png}};
\draw(3.3,0)node{\includegraphics[scale = 0.14,trim=2100 400 460 400,clip]{%
./Results/LuganoCoeffs.png}};
\end{tikzpicture}
\caption{\label{fig:coeffImage}Visualization of the samplet coefficients
for the test image.}
\end{center}
\end{figure}
\subsection*{Three dimensions}
Finally, we show a result in three dimensions.
Here, the points are given by a uniform subsample of
a triangulation of the Stanford bunny. We consider data on the
Stanford bunny generated by the function
\[
f({\boldsymbol x})=e^{-20\|{\boldsymbol x}-{\boldsymbol p}_0\|_2}
+e^{-20\|{\boldsymbol x}-{\boldsymbol p}_1\|_2},
\]
where the points \({\boldsymbol p}_0\) and \({\boldsymbol p}_1\) are located at the tips
of the bunny's ears. Moreover, the geometry has been rescaled to a
diameter of 2. The plot on the left-hand side of
Figure~\ref{fig:coeffStanford}
visualizes the sample data, while the plot on the right-hand side
shows the dominant coefficients in case of a threshold parameter
of \(10^{-2}\|{\boldsymbol f}^{\Sigma}\|_\infty\).
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\draw(0,0)node{\includegraphics[%
scale = 0.12,trim=1040 230 1050 280,clip]{%
./Results/StanfordBunnySignal.png}};
\draw(5,0)node{\includegraphics[%
scale = 0.12,trim=1000 200 1000 200,clip]{%
./Results/StanfordBunny1e-2Coeff.png}};
\draw(8,0)node{\includegraphics[%
scale = 0.14,trim=2130 400 600 400,clip]{%
./Results/StanfordBunny1e-2Coeff.png}};
\end{tikzpicture}
\caption{\label{fig:coeffStanford}Data on the Stanford bunny (left) and
dominant samplet coefficients (right).}
\end{center}
\end{figure}
\section{Compression of kernel matrices}\label{sec:kernelCompression}
\subsection{Kernel matrices}
The second application of samplets we consider
is the compression of matrices arising from positive
(semi-)definite kernels, as they emerge in kernel
methods, such as scattered data analysis, kernel
based learning or Gaussian process regression,
see for example \cite{HSS08,Schaback2006,
Wendland2004,Williams1998} and the references
therein.
We start by recalling the concept of a positive (semi-)definite kernel.
\begin{definition}\label{def:poskernel}
A symmetric kernel
$\mathcal{K}\colon\Omega\times\Omega\rightarrow\Rbb$ is
called \textit{positive (semi-)definite} on $\Omega\subset\mathbb{R}^d$,
iff \([\mathcal{K}({\boldsymbol x}_i,{\boldsymbol x}_j)]_{i,j=1}^N\)
is a symmetric and positive (semi-)definite matrix
for all
$\{{\boldsymbol x}_1, \ldots,{\boldsymbol x}_N\}\subset\Omega$
and all $N\in\mathbb{N}$.
\end{definition}
As a particular class of positive definite
kernels, we consider the \emph{Mat\'ern kernels} given by
\begin{equation}\label{eq:matkern}
k_\nu(r)\mathrel{\mathrel{\mathop:}=}\frac{2^{1-\nu}}{\Gamma(\nu)}\bigg(\frac {\sqrt{2\nu}r}
{\ell}\bigg)^\nu
K_\nu\bigg(\frac {\sqrt{2\nu}r}{\ell}\bigg),\quad r\geq 0,\ \ell >0 .
\end{equation}
Herein, $K_{\nu}$ is the modified Bessel function of the second
kind of order $\nu$ and $\Gamma$ is the gamma function.
The parameter $\nu$ steers the smoothness of the
kernel function. Especially, the analytic squared-exponential kernel is
retrieved in the limit $\nu\to\infty$. In particular, we have
\begin{equation}
\begin{aligned}
k_{1/2}(r)=\exp\bigg(-\frac{r}{\ell}\bigg),
\quad k_{\infty}(r)=\exp\bigg(-\frac{r^2}{2\ell^2}\bigg).
\end{aligned}
\end{equation}
A positive definite kernel in the sense of
Definition~\ref{def:poskernel}
is obtained by considering
\[
\mathcal{K}({\boldsymbol x},{\boldsymbol x}^\prime)\mathrel{\mathrel{\mathop:}=}
k_\nu(\|{\boldsymbol x}-{\boldsymbol x}^\prime\|_2).
\]
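The Mat\'ern kernels are straightforward to evaluate in terms of the modified Bessel function. The sketch below (using SciPy's `kv`; the function name `matern` and the parameter choices are ours) implements the family and checks it against the closed-form special case $k_{1/2}$ stated above.

```python
import numpy as np
from scipy.special import gamma, kv

def matern(r, nu, ell):
    # Matern kernel k_nu(r) for r > 0 via the modified Bessel
    # function of the second kind K_nu; the limit r -> 0 gives 1.
    z = np.sqrt(2.0 * nu) * r / ell
    return 2.0 ** (1.0 - nu) / gamma(nu) * z ** nu * kv(nu, z)

r = np.linspace(0.01, 3.0, 100)
# Sanity check: nu = 1/2 reproduces the exponential kernel.
print(np.max(np.abs(matern(r, 0.5, 1.0) - np.exp(-r))))   # ~ machine precision
```

For $\nu=3/2$ the evaluation likewise matches the known closed form $(1+\sqrt{3}\,r/\ell)\exp(-\sqrt{3}\,r/\ell)$.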
Given the set of points \(X=\{{\boldsymbol x}_1,\ldots,{\boldsymbol x}_N\}\), many
applications require the assembly and the inversion of the
\emph{kernel matrix}
\[
{\boldsymbol K}\mathrel{\mathrel{\mathop:}=}[\mathcal{K}({\boldsymbol x}_i,{\boldsymbol x}_j)]_{i,j=1}^N\in\Rbb^{N\times N}
\]
or an appropriately regularized version
\[
{\boldsymbol K}+\rho{\boldsymbol I},\quad \rho>0,
\]
thereof. If
\(N\) is large, already the assembly and storage of
\({\boldsymbol K}\)
can easily become prohibitive. For the solution of an associated
linear system, the situation is even worse.
Fortunately, the kernel matrix can be compressed
by employing samplets. To this end, the evaluation of
the kernel function at the points ${\boldsymbol x}_i$ and ${\boldsymbol x}_j$
will be denoted by
\[
(\mathcal{K},\delta_{{\boldsymbol x}_i}\otimes\delta_{{\boldsymbol x}_j}
)_{\Omega\times\Omega}\mathrel{\mathrel{\mathop:}=}\mathcal{K}({\boldsymbol x}_i,{\boldsymbol x}_j).
\]
Hence, in view of $V = \spn\{\delta_{{\boldsymbol x}_1},\ldots,\delta_{{\boldsymbol x}_N}\}$,
we may write the kernel matrix as
\[
{\boldsymbol K} = \big[(\mathcal{K},\delta_{{\boldsymbol x}_i}
\otimes\delta_{{\boldsymbol x}_j})_{\Omega\times\Omega}\big]_{i,j=1}^N.
\]
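The assembly of the kernel matrix and its regularized variant can be sketched as follows (NumPy; the exponential kernel $k_{1/2}$ and the values of $N$, $\ell$ and $\rho$ are illustrative choices). A successful Cholesky factorization confirms positive definiteness.

```python
import numpy as np

# Assemble the kernel matrix K for the exponential kernel k_{1/2} and
# verify positive definiteness of K + rho*I via a Cholesky factorization.
rng = np.random.default_rng(3)
N, ell, rho = 200, 1.0, 1e-8
X = rng.uniform(0.0, 1.0, (N, 2))

D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
K = np.exp(-D / ell)                               # K = [k(||x_i - x_j||_2)]
L = np.linalg.cholesky(K + rho * np.eye(N))        # raises an error if not SPD
print(np.allclose(L @ L.T, K + rho * np.eye(N)))
```

Note that this dense assembly already requires $\mathcal{O}(N^2)$ memory, which is precisely the cost the samplet compression below avoids.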
\subsection{Asymptotically smooth kernels}
The essential ingredient for the samplet compression of
kernel matrices is the \emph{asymptotic smoothness}
property of the kernel, i.e.,
\begin{equation}\label{eq:kernel_estimate}
\bigg|\frac{\partial^{|\boldsymbol\alpha|+|\boldsymbol\beta|}}
{\partial{\boldsymbol x}^{\boldsymbol\alpha}
\partial{\boldsymbol y}^{\boldsymbol\beta}} \mathcal{K}({\boldsymbol x},{\boldsymbol y})\bigg|
\le c_\mathcal{K} \frac{(|\boldsymbol\alpha|+|\boldsymbol\beta|)!}
{r^{|\boldsymbol\alpha|+|\boldsymbol\beta|}
\|{\boldsymbol x}-{\boldsymbol y}\|_2^{|\boldsymbol\alpha|+|\boldsymbol\beta|}},\quad
c_\mathcal{K},r>0,
\end{equation}
which is for example satisfied by the Mat\'ern kernels.
Using this estimate, we obtain the following result,
which is the basis for the matrix compression introduced
thereafter.
\begin{lemma}\label{lem:kernel_decay}
Consider two samplets $\sigma_{j,k}$ and $\sigma_{j',k'}$,
exhibiting $q+1$ vanishing moments with supporting
clusters \(\nu\) and \(\nu'\), respectively.
Assume that $\dist(\nu,\nu') > 0$. Then, for kernels
satisfying \eqref{eq:kernel_estimate}, it holds that
\begin{equation}\label{eq:kernel_decay}
|(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}|\le
c_\mathcal{K} \frac{d^{2(q+1)}\diam(\nu)^{q+1}\diam(\nu')^{q+1}}
{\big(r\dist(\nu,\nu')\big)^{2(q+1)}}
\|{\boldsymbol\omega}_{j,k}\|_{1}\|{\boldsymbol\omega}_{j',k'}\|_{1}.
\end{equation}
\end{lemma}
\begin{proof}
Let ${\boldsymbol x}_0\in\nu$ and ${\boldsymbol y}_0\in\nu'$.
A Taylor expansion of the kernel with respect to
${\boldsymbol x}$ yields
\[
\mathcal{K}({\boldsymbol x},{\boldsymbol y}) = \sum_{|\boldsymbol\alpha|\le q}
\frac{\partial^{|\boldsymbol\alpha|}}
{\partial{\boldsymbol x}^{\boldsymbol\alpha}}\mathcal{K}({\boldsymbol x}_0,{\boldsymbol y})
\frac{({\boldsymbol x}-{\boldsymbol x}_0)^{\boldsymbol\alpha}}{\boldsymbol\alpha!}
+ R_{{\boldsymbol x}_0}({\boldsymbol x},{\boldsymbol y}),
\]
where the remainder $R_{{\boldsymbol x}_0}({\boldsymbol x},{\boldsymbol y})$ is given by
\begin{align*}
R_{{\boldsymbol x}_0}({\boldsymbol x},{\boldsymbol y}) &= (q+1)\sum_{|\boldsymbol\alpha|=q+1}
\frac{({\boldsymbol x}-{\boldsymbol x}_0)^{\boldsymbol\alpha}}{\boldsymbol\alpha!}
\int_0^1\frac{\partial^{q+1}}{\partial{\boldsymbol x}^{\boldsymbol\alpha}}
\mathcal{K}\big({\boldsymbol x}_0+s({\boldsymbol x}-{\boldsymbol x}_0),{\boldsymbol y}\big)(1-s)^q\d s.
\end{align*}
Next, we expand the remainder $R_{{\boldsymbol x}_0}({\boldsymbol x},{\boldsymbol y})$ with
respect to ${\boldsymbol y}$ and derive
\begin{align*}
R_{{\boldsymbol x}_0}({\boldsymbol x},{\boldsymbol y}) &= (q+1)\sum_{|\boldsymbol\alpha|=q+1}
\frac{({\boldsymbol x}-{\boldsymbol x}_0)^{\boldsymbol\alpha}}{\boldsymbol\alpha!}
\sum_{|\boldsymbol\beta|\le q
}\frac{({\boldsymbol y}-{\boldsymbol y}_0)^{\boldsymbol\beta}}{\boldsymbol\beta!}\\
&\times\int_0^1\frac{\partial^{q+1}}{\partial{\boldsymbol x}^{\boldsymbol\alpha}}
\frac{\partial^{|\boldsymbol\beta|}}{\partial{\boldsymbol y}^{\boldsymbol\beta}}
\mathcal{K}\big({\boldsymbol x}_0+s({\boldsymbol x}-{\boldsymbol x}_0),{\boldsymbol y}_0\big)(1-s)^q\d s
+ R_{{\boldsymbol x}_0,{\boldsymbol y}_0}({\boldsymbol x},{\boldsymbol y}).
\end{align*}
Here, the remainder $R_{{\boldsymbol x}_0,{\boldsymbol y}_0}({\boldsymbol x},{\boldsymbol y})$
is given by
\begin{align*}
&R_{{\boldsymbol x}_0,{\boldsymbol y}_0}({\boldsymbol x},{\boldsymbol y}) = (q+1)^2
\sum_{|\boldsymbol\alpha|,|\boldsymbol\beta|=q+1}
\frac{({\boldsymbol x}-{\boldsymbol x}_0)^{\boldsymbol\alpha}}{\boldsymbol\alpha!}
\frac{({\boldsymbol y}-{\boldsymbol y}_0)^{\boldsymbol\beta}}{\boldsymbol\beta!}\\
&\qquad\times\int_0^1\int_0^1\frac{\partial^{2(q+1)}}
{\partial{\boldsymbol x}^{\boldsymbol\alpha}\partial{\boldsymbol y}^{\boldsymbol\beta}}
\mathcal{K}\big({\boldsymbol x}_0+s({\boldsymbol x}-{\boldsymbol x}_0),{\boldsymbol y}_0
+t({\boldsymbol y}-{\boldsymbol y}_0)\big)(1-s)^q(1-t)^q\d t\d s.
\end{align*}
We thus arrive at the decomposition
\[
\mathcal{K}({\boldsymbol x},{\boldsymbol y}) = p_{{\boldsymbol y}}({\boldsymbol x}) + p_{{\boldsymbol x}}({\boldsymbol y})
+ R_{{\boldsymbol x}_0,{\boldsymbol y}_0}({\boldsymbol x},{\boldsymbol y}),
\]
where $p_{{\boldsymbol y}}({\boldsymbol x})$ is a polynomial of degree $q$ in ${\boldsymbol x}$,
with coefficients depending on ${\boldsymbol y}$, while $p_{{\boldsymbol x}}({\boldsymbol y})$
is a polynomial of degree $q$ in ${\boldsymbol y}$, with coefficients depending
on ${\boldsymbol x}$. Due to the vanishing moments, we obtain
\[
(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}
=(R_{{\boldsymbol x}_0,{\boldsymbol y}_0},
\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}.
\]
In view of \eqref{eq:kernel_estimate}, we thus find
\begin{align*}
&|(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}|
= |(R_{{\boldsymbol x}_0,{\boldsymbol y}_0},
\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}|\\
&\qquad\le c_\mathcal{K} \Bigg(\sum_{|\boldsymbol\alpha|,|\boldsymbol\beta|=q+1}
\frac{(|\boldsymbol\alpha|+|\boldsymbol\beta|)!}{\boldsymbol\alpha!\boldsymbol\beta!}\Bigg)
\frac{(\|\cdot-{\boldsymbol x}_0\|^{q+1}_2,|\sigma_{j,k}|)_\Omega
(\|\cdot-{\boldsymbol y}_0\|^{q+1}_2,|\sigma_{j',k'}|)_\Omega}{
r^{2(q+1)}\dist(\nu,\nu')^{2(q+1)}}.
\end{align*}
Next, we have by means of multinomial coefficients that
\[
(|\boldsymbol\alpha|+|\boldsymbol\beta|)!
={|\boldsymbol\alpha|+|\boldsymbol\beta|\choose |\boldsymbol\beta|}
{|\boldsymbol\alpha|\choose\boldsymbol\alpha}
{|\boldsymbol\beta|\choose\boldsymbol\beta}
\boldsymbol\alpha!\boldsymbol\beta!,
\]
which in turn implies that
\begin{align*}
\sum_{|\boldsymbol\alpha|,|\boldsymbol\beta|=q+1}
\frac{(|\boldsymbol\alpha|+|\boldsymbol\beta|)!}{\boldsymbol\alpha!\boldsymbol\beta!}
&= {2(q+1)\choose q+1} \sum_{|\boldsymbol\alpha|,|\boldsymbol\beta|=q+1}
{|\boldsymbol\alpha|\choose\boldsymbol\alpha}
{|\boldsymbol\beta|\choose\boldsymbol\beta}\\
&= {2(q+1)\choose q+1} d^{2(q+1)}
\le d^{2(q+1)} 2^{2(q+1)}.
\end{align*}
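The two combinatorial facts above, namely the multinomial identity and the bound \({2(q+1)\choose q+1}\le 2^{2(q+1)}\), can be verified numerically. The following stdlib-only sketch (our own helper names, not part of the implementation discussed later) enumerates all multi-indices with \(|\boldsymbol\alpha|=q+1\) in \(d\) variables:

```python
import math
from itertools import product

def multi_indices(d, total):
    """All multi-indices alpha in N^d with |alpha| = total."""
    return [a for a in product(range(total + 1), repeat=d) if sum(a) == total]

def multinomial(alpha):
    """Multinomial coefficient (|alpha| choose alpha) = |alpha|! / alpha!."""
    res = math.factorial(sum(alpha))
    for a in alpha:
        res //= math.factorial(a)
    return res

d, q = 3, 2
# left-hand side: sum over |alpha| = |beta| = q+1 of (|alpha|+|beta|)!/(alpha! beta!)
lhs = sum(
    math.factorial(2 * (q + 1))
    // (math.prod(math.factorial(a) for a in alpha)
        * math.prod(math.factorial(b) for b in beta))
    for alpha in multi_indices(d, q + 1)
    for beta in multi_indices(d, q + 1)
)
# right-hand side: binom(2(q+1), q+1) * d^{2(q+1)}
rhs = math.comb(2 * (q + 1), q + 1) * d ** (2 * (q + 1))
assert lhs == rhs
assert math.comb(2 * (q + 1), q + 1) <= 2 ** (2 * (q + 1))
```

The identity holds since \(\sum_{|\boldsymbol\alpha|=n}{n\choose\boldsymbol\alpha}=d^n\) by the multinomial theorem applied to \((1+\dots+1)^n\).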
Moreover, we use
\[
(\|\cdot-{\boldsymbol x}_0\|_2^{q+1},|\sigma_{j,k}|)_\Omega
\le\bigg(\frac{\diam(\nu)}{2}\bigg)^{q+1}\|{\boldsymbol\omega}_{j,k}\|_{1},
\]
and likewise
\[
(\|\cdot-{\boldsymbol y}_0\|_2^{q+1},|\sigma_{j',k'}|)_\Omega
\le\bigg(\frac{\diam(\nu')}{2}\bigg)^{q+1}\|{\boldsymbol\omega}_{j',k'}\|_{1}.
\]
Combining all the estimates, we arrive at the desired
result \eqref{eq:kernel_decay}.
\end{proof}
\subsection{Matrix compression}
Lemma~\ref{lem:kernel_decay} immediately suggests
a compression strategy for kernel matrices in
samplet representation. We mention that this compression
differs from the wavelet matrix compression introduced
in \cite{DHS}, since we do not exploit the decay of the
samplet coefficients with respect to the level in case of
smooth data. This enables us to also consider a non-uniform
distribution of the points in $V$. Consequently, we use
the same accuracy on all levels, which is more similar
to the setting in \cite{BCR}.
\begin{theorem}
Set to zero all coefficients of the kernel matrix
\[
{\boldsymbol K}^\Sigma\mathrel{\mathrel{\mathop:}=}\big[(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}
\big]_{j,j',k,k'}
\]
which satisfy
\begin{equation}\label{eq:cutoff}
\dist(\nu,\nu')\ge\eta\max\{\diam(\nu),\diam(\nu')\},\quad\eta>0,
\end{equation}
where \(\nu\) is the cluster supporting \(\sigma_{j,k}\) and
\(\nu'\) is the cluster supporting \(\sigma_{j',k'}\).
Then, there holds
\[
\big\|{\boldsymbol K}^\Sigma-{\boldsymbol K}^\Sigma_\varepsilon\big\|_F
\le c_\mathcal{K} \sqrt{c_{\operatorname{sum}}} {(\eta dr)^{-2(q+1)}}
m_q N\sqrt{\log(N)}
\]
for some constant \(c_{\operatorname{sum}}>0\),
where \(m_q\) is given by \eqref{eq:mq}.
\end{theorem}
\begin{proof}
We first fix the levels $j$ and $j'$. In view of
\eqref{eq:kernel_decay}, we can estimate any coefficient
which satisfies \eqref{eq:cutoff} by
\begin{align*}
&|(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}|\\
&\qquad\le
c_\mathcal{K} \bigg(\frac{\min\{\diam(\nu),\diam(\nu')\}}
{\max\{\diam(\nu),\diam(\nu')\}}\bigg)^{q+1}
{(\eta dr)^{-2(q+1)}}\|{\boldsymbol\omega}_{j,k}\|_{1}\|
{\boldsymbol\omega}_{j',k'}\|_{1}.
\end{align*}
If we next set
\[
\theta_{j,j'}\mathrel{\mathrel{\mathop:}=} \max_{\nu\in\Tcal_j,\nu'\in\Tcal_{j'}}
\bigg\{\frac{\min\{\diam(\nu),\diam(\nu')\}}
{\max\{\diam(\nu),\diam(\nu')\}}\bigg\},
\]
then we obtain
\[
|(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}|
\le c_\mathcal{K}\theta_{j,j'}^{q+1}{(\eta dr)^{-2(q+1)}}
\|{\boldsymbol\omega}_{j,k}\|_{1}\|{\boldsymbol\omega}_{j',k'}\|_{1}
\]
for all coefficients such that \eqref{eq:cutoff} holds.
In view of \eqref{eq:ell1-norm} and the fact that there are
at most $m_q$ samplets
per cluster, we arrive at
\[
\sum_{k,k'} \|{\boldsymbol\omega}_{j,k}\|_{1}^2\|{\boldsymbol\omega}_{j',k'}\|_{1}^2
\leq\sum_{k,k'}|\nu|\cdot|\nu'| \le m_q^2 N^2.
\]
Thus, for a fixed level-level block, we arrive at the estimate
\begin{align*}
\big\|{\boldsymbol K}^\Sigma_{j,j'}-{\boldsymbol K}^\Sigma_{\varepsilon,j,j'}\big\|_F^2
&\le\sum_{\begin{smallmatrix}k,k':\ \dist(\nu,\nu')\\
\ge\eta\max\{\diam(\nu),\diam(\nu')\}\end{smallmatrix}}
|(\mathcal{K},\sigma_{j,k}\otimes\sigma_{j',k'})_{\Omega\times\Omega}|^2\\
&\le c_\mathcal{K}^2 \theta_{j,j'}^{2(q+1)} {(\eta dr)^{-4(q+1)}}
m_q^2 N^2.
\end{align*}
Finally, summation over all levels yields
\begin{align*}
\big\|{\boldsymbol K}^\Sigma-{\boldsymbol K}^\Sigma_{\varepsilon}\big\|_F^2
&= \sum_{j,j'}\big\|{\boldsymbol K}^\Sigma_{j,j'}
-{\boldsymbol K}^\Sigma_{\varepsilon,j,j'}\big\|_F^2\\
&\le c_\mathcal{K}^2 {(\eta dr)^{-4(q+1)}}m_q^2 N^2
\sum_{j,j'} \theta_{j,j'}^{2(q+1)}\\
&\le c_\mathcal{K}^2 c_{\operatorname{sum}} {(\eta dr)^{-4(q+1)}}
m_q^2 N^2\log N,
\end{align*}
which is the desired claim.
\end{proof}
\begin{remark}
In case of uniformly distributed points ${\boldsymbol x}_i\in X$,
we have $\big\|{\boldsymbol K}^\Sigma\big\|_F\sim N$. Thus,
in this case we immediately obtain
\[
\frac{\big\|{\boldsymbol K}^\Sigma-{\boldsymbol K}^\Sigma_\varepsilon\big\|_F}
{\big\|{\boldsymbol K}^\Sigma\big\|_F} \le c_\mathcal{K}
\sqrt{c_{\operatorname{sum}}} {(\eta dr)^{-2(q+1)}} m_q \sqrt{\log(N)}.
\]
\end{remark}
\begin{theorem}
The compressed kernel matrix ${\boldsymbol K}^\Sigma_\varepsilon$
contains only $\mathcal{O}(m_q^2
N\log N)$ relevant matrix coefficients, provided
that the points in $V$ are uniformly
distributed in $\Omega$.
\end{theorem}
\begin{proof}
We fix $j,j'$ and assume $j\ge j'$. In case of uniformly
distributed points, it holds $\diam(\nu)\sim 2^{-j/d}$
for a cluster $\nu$ on level $j$.
Hence, for the cluster $\nu_{j',k'}$, there exist only
$\mathcal{O}(2^{j-j'})$ clusters $\nu_{j,k}$ from
level $j$, which do not satisfy the cut-off criterion
\eqref{eq:cutoff}. Since each cluster contains at most
$m_q$ samplets, we hence arrive at
\[
\sum_{j=0}^J \sum_{j'\le j}m_q^2\, 2^{j'} 2^{j-j'}
= m_q^2 \sum_{j=0}^J (j+1) 2^{j} \sim m_q^2 N\log N,
\]
which implies the assertion.
\end{proof}
\begin{remark}
The chosen cut-off criterion \eqref{eq:cutoff} coincides
with the so-called \emph{admissibility condition} used
for hierarchical matrices. We particularly refer here to
\cite{Boe10}, as we will later on rely on the \(\mathcal{H}^2\)-matrix
method presented there for the fast assembly of the
compressed kernel matrix.
\end{remark}
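In terms of axis-parallel bounding boxes, the cut-off criterion \eqref{eq:cutoff} amounts to a simple geometric test. A minimal sketch (the function and variable names are our own, and \(\eta=1\) is an arbitrary illustrative choice):

```python
import math

def diam(box):
    """Diameter of an axis-parallel bounding box [(lo, hi), ...]."""
    return math.sqrt(sum((hi - lo) ** 2 for lo, hi in box))

def dist(box_a, box_b):
    """Euclidean distance between two axis-parallel bounding boxes."""
    return math.sqrt(sum(
        max(lo_b - hi_a, lo_a - hi_b, 0.0) ** 2
        for (lo_a, hi_a), (lo_b, hi_b) in zip(box_a, box_b)))

def is_admissible(box_a, box_b, eta=1.0):
    """Cut-off criterion: dist(nu, nu') >= eta * max(diam(nu), diam(nu'))."""
    return dist(box_a, box_b) >= eta * max(diam(box_a), diam(box_b))

nu = [(0.0, 1.0), (0.0, 1.0)]       # unit square at the origin
nu_far = [(3.0, 4.0), (0.0, 1.0)]   # well separated: distance 2 > sqrt(2)
nu_near = [(1.1, 2.1), (0.0, 1.0)]  # too close: distance 0.1 < sqrt(2)
assert is_admissible(nu, nu_far)
assert not is_admissible(nu, nu_near)
```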
\subsection{Compressed matrix assembly}
For a given pair of clusters, we can now determine whether the
corresponding entries need to be calculated. As there are
$\mathcal{O}(N)$ clusters, naively checking the cut-off criterion for
all pairs would still take $\mathcal{O}(N^{2})$ operations.
Hence, we require a smarter way to determine the non-negligible cluster
pairs. For this purpose, we first state the transferability of the
cut-off criterion to son clusters, see \cite{DHS} for a proof.
\begin{lemma}
Let $\nu$ and $\nu'$ be clusters satisfying the cut-off criterion
\eqref{eq:cutoff}. Then, for the son clusters $\nu_{\mathrm{son}}$
of $\nu$ and $\nu_{\mathrm{son}}'$ of $\nu'$, we have
\begin{align*}
\dist(\nu,\nu_{\mathrm{son}}')
&\ge\eta\max\{\diam(\nu),\diam(\nu_{\mathrm{son}}')\},\\
\dist(\nu_{\mathrm{son}},\nu')
&\ge\eta\max\{\diam(\nu_{\mathrm{son}}),\diam(\nu')\},\\
\dist(\nu_{\mathrm{son}},\nu_{\mathrm{son}}')
&\ge\eta\max\{\diam(\nu_{\mathrm{son}}),\diam(\nu_{\mathrm{son}}')\}.
\end{align*}
\end{lemma}
The lemma tells us that we may omit cluster pairs whose father
clusters already satisfy the cut-off criterion. This will be essential
for the assembly of the compressed matrix.
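The lemma justifies a pruning strategy: once a pair of father clusters satisfies the cut-off criterion, all descendant pairs satisfy it as well and can be skipped without further checks. A toy sketch of this pruning recursion with interval clusters (our own naming; \(\eta=1\) for illustration):

```python
def diam(iv):
    """Length of an interval cluster iv = (a, b)."""
    return iv[1] - iv[0]

def dist(a, b):
    """Distance between two intervals (zero if they overlap)."""
    return max(b[0] - a[1], a[0] - b[1], 0.0)

def admissible(a, b, eta=1.0):
    """Cut-off criterion for interval clusters."""
    return dist(a, b) >= eta * max(diam(a), diam(b))

def split(iv):
    """The two son intervals of an interval cluster."""
    mid = 0.5 * (iv[0] + iv[1])
    return (iv[0], mid), (mid, iv[1])

def near_field(a, b, depth):
    """Collect all non-admissible leaf-level cluster pairs; admissible
    pairs are pruned together with their entire subtree of son pairs."""
    if admissible(a, b):
        return []            # lemma: all descendant pairs are admissible, too
    if depth == 0:
        return [(a, b)]
    return [p for sa in split(a) for sb in split(b)
            for p in near_field(sa, sb, depth - 1)]

pairs = near_field((0.0, 1.0), (0.0, 1.0), depth=4)
# only the near-diagonal pairs survive; all of them fail the criterion
assert all(not admissible(a, b) for a, b in pairs)
```

Since only a bounded number of neighbors per cluster fails the criterion on each level, the recursion visits far fewer pairs than the naive quadratic check.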
The computation of the compressed kernel matrix
can be sped up further by using
\(\Hcal^2\)-matrix techniques, see
\cite{HB02,Gie01}. Similarly to \cite{AHK14,HKS05}, we shall
rely here on \(\Hcal^2\)-matrices for this purpose.
The idea of \(\Hcal^2\)-matrices is to approximate the kernel
interaction
of sufficiently distant clusters \(\nu\) and \(\nu'\), in the sense
of the admissibility condition \eqref{eq:cutoff}, by means
of polynomial interpolation.
More precisely, given a suitable set of interpolation
points \(\{{\boldsymbol\xi}_t^\nu\}_t\) for each cluster \(\nu\) with
associated Lagrange polynomials \(\{\mathcal{L}_{t}^{\nu}
({\boldsymbol x})\}_t\), we introduce the interpolation operator
\[
\mathcal{I}^{\nu,\nu'}[\mathcal{K}]({\boldsymbol x}, {\boldsymbol y})
= \sum_{s,t} \mathcal{K}({\boldsymbol\xi}_{s}^{\nu}, {\boldsymbol\xi}_{t}^{\nu'})
\mathcal{L}_{s}^{\nu}({\boldsymbol x}) \mathcal{L}_{t}^{\nu'}({\boldsymbol y})
\]
and approximate an admissible matrix block via
\begin{align*}
{\boldsymbol K}^\Delta_{\nu,\nu'}
&=[(\mathcal{K},\delta_{\boldsymbol x}\otimes
\delta_{\boldsymbol y})_{\Omega\times\Omega}]_{{\boldsymbol x}\in\nu,{\boldsymbol y}\in\nu'}\\
&\approx\sum_{s,t} \mathcal{K}({\boldsymbol\xi}_{s}^{\nu}, {\boldsymbol\xi}_{t}^{\nu'})
[(\mathcal{L}_{s}^{\nu},\delta_{\boldsymbol x})_\Omega]_{{\boldsymbol x}\in\nu}
[(\mathcal{L}_{t}^{\nu'},\delta_{\boldsymbol y})_\Omega]_{{\boldsymbol y}\in\nu'}
\mathrel{=\mathrel{\mathop:}}{\boldsymbol V}^{\nu}_\Delta{\boldsymbol S}^{\nu,\nu'}
({\boldsymbol V}^{\nu'}_\Delta)^\intercal.
\end{align*}
Herein, the \emph{cluster bases} are given according to
\begin{equation}\label{eq:cluster bases}
{\boldsymbol V}^{\nu}_\Delta\mathrel{\mathrel{\mathop:}=} [(\mathcal{L}_{s}^{\nu},
\delta_{\boldsymbol x})_\Omega]_{{\boldsymbol x}\in\nu},\quad
{\boldsymbol V}^{\nu'}_\Delta\mathrel{\mathrel{\mathop:}=}[(\mathcal{L}_{t}^{\nu'},
\delta_{\boldsymbol y})_\Omega]_{{\boldsymbol y}\in\nu'},
\end{equation}
while the \emph{coupling matrix} is given by
\(
{\boldsymbol S}^{\nu,\nu'}\mathrel{\mathrel{\mathop:}=}[\mathcal{K}({\boldsymbol\xi}_{s}^{\nu},
{\boldsymbol\xi}_{t}^{\nu'})]_{s,t}.
\)
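For a concrete 1D illustration of the factorization \({\boldsymbol V}^{\nu}_\Delta{\boldsymbol S}^{\nu,\nu'}({\boldsymbol V}^{\nu'}_\Delta)^\intercal\), the following stdlib-only sketch (our own naming; the experiments reported below use a C++/Eigen implementation) builds the cluster bases and the coupling matrix from Chebyshev interpolation points for a well-separated pair of interval clusters and compares the low-rank product with the dense block:

```python
import math

def cheb_nodes(a, b, p):
    """p+1 Chebyshev(-Lobatto) points on the interval [a, b]."""
    return [0.5 * (a + b) + 0.5 * (b - a) * math.cos(math.pi * i / p)
            for i in range(p + 1)]

def lagrange(xi, s, x):
    """Lagrange polynomial L_s for the nodes xi, evaluated at x."""
    res = 1.0
    for t, node in enumerate(xi):
        if t != s:
            res *= (x - node) / (xi[s] - node)
    return res

kernel = lambda x, y: math.exp(-abs(x - y))

# two well-separated clusters of points
pts_nu  = [0.0 + 0.01 * i for i in range(100)]   # cluster nu  in [0, 1]
pts_nup = [5.0 + 0.01 * i for i in range(100)]   # cluster nu' in [5, 6]
p = 5
xi_nu, xi_nup = cheb_nodes(0.0, 1.0, p), cheb_nodes(5.0, 6.0, p)

# cluster bases V, V' and coupling matrix S = [K(xi_s, xi_t)]
V  = [[lagrange(xi_nu, s, x) for s in range(p + 1)] for x in pts_nu]
Vp = [[lagrange(xi_nup, t, y) for t in range(p + 1)] for y in pts_nup]
S  = [[kernel(a, b) for b in xi_nup] for a in xi_nu]

# maximum error of the low-rank product V S V'^T over the dense block
err = max(
    abs(kernel(x, y)
        - sum(V[i][s] * S[s][t] * Vp[j][t]
              for s in range(p + 1) for t in range(p + 1)))
    for i, x in enumerate(pts_nu) for j, y in enumerate(pts_nup))
assert err < 1e-4
```

The rank of the block is \((p+1)^2\) entries in \({\boldsymbol S}^{\nu,\nu'}\) instead of \(100\times 100\) kernel evaluations, at an interpolation error that decays rapidly with \(p\) for asymptotically smooth kernels.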
Directly transforming the cluster bases into their corresponding
samplet representation results in a log-linear cost. This can be
avoided by the use of nested cluster bases, as they have been
introduced for \(\Hcal^2\)-matrices. For the sake of simplicity, we
assume from now on that tensor product polynomials of degree
\(p\) are used for the kernel interpolation on all clusters.
As a consequence, the Lagrange polynomials
of a father cluster can be represented exactly by those of the
son clusters. Introducing the \emph{transfer matrices}
\(
{\boldsymbol T}^{\nu_{\mathrm{son}}}
\mathrel{\mathrel{\mathop:}=}[\mathcal{L}_s^\nu({\boldsymbol\xi}_t^{\nu_{\mathrm{son}}})]_{s,t},
\)
there holds
\[
\mathcal{L}_s^\nu({\boldsymbol x})=\sum_t{\boldsymbol T}^{\nu_{\mathrm{son}}}_{s,t}
\mathcal{L}_t^{\nu_{\mathrm{son}}}({\boldsymbol x}),\quad{\boldsymbol x}
\in B_{\nu_{\mathrm{son}}}.
\]
Exploiting this relation in the construction of the cluster bases
\eqref{eq:cluster bases} finally leads to
\[
{\boldsymbol V}^{\nu}_\Delta=\begin{bmatrix}
{\boldsymbol V}^{\nu_{\mathrm{son}_1}}_\Delta{\boldsymbol T}^{\nu_{\mathrm{son}_1}}\\
{\boldsymbol V}^{\nu_{\mathrm{son}_2}}_\Delta{\boldsymbol T}^{\nu_{\mathrm{son}_2}}
\end{bmatrix}.
\]
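In 1D, the exactness of the transfer matrices can be checked directly: tabulating the father's Lagrange polynomials at the son's interpolation points reproduces the father's basis on the son interval. An illustrative sketch (node choice and names are ours):

```python
import math

def cheb_nodes(a, b, p):
    """p+1 Chebyshev(-Lobatto) points on [a, b]."""
    return [0.5 * (a + b) + 0.5 * (b - a) * math.cos(math.pi * i / p)
            for i in range(p + 1)]

def lagrange(xi, s, x):
    """Lagrange polynomial L_s for the nodes xi, evaluated at x."""
    res = 1.0
    for t, node in enumerate(xi):
        if t != s:
            res *= (x - node) / (xi[s] - node)
    return res

p = 4
xi_father = cheb_nodes(0.0, 1.0, p)   # father cluster [0, 1]
xi_son = cheb_nodes(0.0, 0.5, p)      # first son cluster [0, 1/2]

# transfer matrix T[s][t] = L_s^father(xi_t^son)
T = [[lagrange(xi_father, s, xi_son[t]) for t in range(p + 1)]
     for s in range(p + 1)]

# on the son interval, L_s^father(x) = sum_t T[s][t] * L_t^son(x),
# since the son basis reproduces all polynomials of degree p exactly
for x in [0.05 * i for i in range(11)]:          # samples in [0, 0.5]
    for s in range(p + 1):
        direct = lagrange(xi_father, s, x)
        via_son = sum(T[s][t] * lagrange(xi_son, t, x) for t in range(p + 1))
        assert abs(direct - via_son) < 1e-10
```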
Combining this refinement relation with the recursive nature of the
samplet basis results
in the variant of the discrete samplet transform summarized in
Algorithm~\ref{algo:multiscaleClusterBasis}.\bigskip
\begin{algorithm}[H]
\caption{Recursive computation of the multiscale cluster basis}
\label{algo:multiscaleClusterBasis}
\KwData{Cluster tree $\Tcal$, transformations
$[{\boldsymbol Q}_{j,\Phi}^{\nu}$, ${\boldsymbol Q}_{j,\Sigma}^{\nu}]$,
nested cluster bases ${\boldsymbol V}_{\Delta}^{\nu}$ for leaf clusters and
transformation matrices ${\boldsymbol T}^{\nu_{\mathrm{son}_1}}$,
${\boldsymbol T}^{\nu_{\mathrm{son}_2}}$ for non-leaf clusters.
}
\KwResult{Multiscale cluster basis matrices ${\boldsymbol V}_{\Phi}^{\nu}$,
${\boldsymbol V}_{\Sigma}^{\nu}$ for all clusters $\nu \in\Tcal$.}
\Begin{
\FuncSty{computeMultiscaleClusterBasis}($X$)\;
}
\end{algorithm}
\begin{function}[H]
\caption{computeMultiscaleClusterBasis($\nu$)}
\Begin{
\uIf{$\nu$ is a leaf cluster}{
store $\begin{bmatrix}
{\boldsymbol V}_{\Phi}^{\nu} \\
{\boldsymbol V}_{\Sigma}^{\nu}
\end{bmatrix}\mathrel{\mathrel{\mathop:}=}\big[{\boldsymbol Q}_{j,\Phi}^{\nu},
{\boldsymbol Q}_{j,\Sigma}^{\nu}\big]^{\intercal} {\boldsymbol V}_{\Delta}^{\nu}$
}
\Else{
\For{all sons $\nu'$ of $\nu$}{
$\computeMultiscaleClusterBasis(\nu')$
}
store $\begin{bmatrix}
{\boldsymbol V}_{\Phi}^{\nu} \\
{\boldsymbol V}_{\Sigma}^{\nu}
\end{bmatrix}\mathrel{\mathrel{\mathop:}=}\big[{\boldsymbol Q}_{j,\Phi}^{\nu},
{\boldsymbol Q}_{j,\Sigma}^{\nu}\big]^{\intercal} \begin{bmatrix}
{\boldsymbol V}^{\nu_{\mathrm{son}_1}}_\Phi{\boldsymbol T}^{\nu_{\mathrm{son}_1}}\\
{\boldsymbol V}^{\nu_{\mathrm{son}_2}}_\Phi{\boldsymbol T}^{\nu_{\mathrm{son}_2}}
\end{bmatrix}$
}
}
\end{function}\medskip
Having the multiscale cluster bases at our disposal, the next step is
the assembly of the compressed kernel matrix. The computation of the
required matrix blocks is exclusively
based on the two refinement relations
\begin{align*}
\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}
&=
\begin{bmatrix}
(\mathcal{K},{\boldsymbol \Phi}^\nu\otimes{\boldsymbol\Phi}^{\nu'})_{\Omega\times\Omega} &
(\mathcal{K},{\boldsymbol \Phi}^\nu\otimes{\boldsymbol\Sigma}^{\nu'})_{\Omega\times\Omega}
\\
(\mathcal{K},{\boldsymbol \Sigma}^\nu\otimes{\boldsymbol\Phi}^{\nu'})_{\Omega\times\Omega}
&
(\mathcal{K},{\boldsymbol \Sigma}^\nu\otimes{\boldsymbol\Sigma}^{\nu'})_{\Omega\times\Omega}
\end{bmatrix}\\
&=\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_1}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_2}'}^{\Phi,\Phi}\\
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_1}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_2}'}^{\Sigma,\Phi}
\end{bmatrix}\big[{\boldsymbol Q}_{j,\Phi}^{\nu'},
{\boldsymbol Q}_{j,\Sigma}^{\nu'}\big]
\end{align*}
and
\begin{align*}
\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}
&=\begin{bmatrix}
(\mathcal{K},{\boldsymbol \Phi}^\nu\otimes{\boldsymbol\Phi}^{\nu'})_{\Omega\times\Omega} &
(\mathcal{K},{\boldsymbol \Phi}^\nu\otimes{\boldsymbol\Sigma}^{\nu'})_{\Omega\times\Omega}
\\
(\mathcal{K},{\boldsymbol \Sigma}^\nu\otimes{\boldsymbol\Phi}^{\nu'})_{\Omega\times\Omega}
&
(\mathcal{K},{\boldsymbol \Sigma}^\nu\otimes{\boldsymbol\Sigma}^{\nu'})_{\Omega\times\Omega}
\end{bmatrix}\\
&=
\big[{\boldsymbol Q}_{j,\Phi}^{\nu},
{\boldsymbol Q}_{j,\Sigma}^{\nu}\big]^\intercal
\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu'}^{\Phi,\Sigma}
\end{bmatrix}.
\end{align*}
We obtain the following function, which is the key ingredient for the
computation of the compressed kernel matrix.\bigskip
\begin{function}[H]
\caption{recursivelyDetermineBlock($\nu$, $\nu'$)}
\KwResult{Approximation of the block \scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} & {\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}$}.}
\Begin{
\uIf{$(\nu, \nu')$ is admissible}{
\Return{\scalebox{1}{$\begin{bmatrix}
{\boldsymbol V}_{\Phi}^{\nu} \\
{\boldsymbol V}_{\Sigma}^{\nu}
\end{bmatrix}
{\boldsymbol S}^{\nu,\nu'} \big[
({\boldsymbol V}_{\Phi}^{\nu'})^\intercal,
({\boldsymbol V}_{\Sigma}^{\nu'})^\intercal
\big]$}}
}
\uElseIf{$\nu$ and $\nu'$ are leaf clusters}{
\Return{\scalebox{1}{$\big[{\boldsymbol Q}_{j,\Phi}^{\nu},
{\boldsymbol Q}_{j,\Sigma}^{\nu}\big]^{\intercal}{\boldsymbol K}_{\nu,\nu'}^{\Delta}
\big[{\boldsymbol Q}_{j,\Phi}^{\nu'},
{\boldsymbol Q}_{j,\Sigma}^{\nu'}\big]$}}
}
\uElseIf{$\nu'$ is not a leaf cluster and $\nu$ is a leaf cluster}{
\For{all sons $\nu_{\mathrm{son}}'$ of $\nu'$}{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Sigma,\Sigma}
\end{bmatrix}$} $
\mathrel{\mathrel{\mathop:}=}\recursivelyDetermineBlock(\nu, \nu_{\mathrm{son}}')$
}
\Return{\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_1}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_2}'}^{\Phi,\Phi}\\
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_1}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_2}'}^{\Sigma,\Phi}
\end{bmatrix}\big[{\boldsymbol Q}_{j,\Phi}^{\nu'},
{\boldsymbol Q}_{j,\Sigma}^{\nu'}\big]$}}
}
\uElseIf{$\nu$ is not a leaf cluster and $\nu'$ is a leaf cluster}{
\For{all sons \(\nu_{\mathrm{son}}\) of \(\nu\)}{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Sigma,\Sigma}
\end{bmatrix}$} $\mathrel{\mathrel{\mathop:}=}
\recursivelyDetermineBlock(\nu_{\mathrm{son}}, \nu')$
}
\Return{\scalebox{1}{$\big[{\boldsymbol Q}_{j,\Phi}^{\nu},
{\boldsymbol Q}_{j,\Sigma}^{\nu}\big]^\intercal\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu'}^{\Phi,\Sigma}
\end{bmatrix}$}.
}
}
\Else(){
\For{all sons $\nu_{\mathrm{son}}$ of $\nu$ {\bf and}
all sons $\nu_{\mathrm{son}}'$ of $\nu'$}{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu_{\mathrm{son}}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu_{\mathrm{son}}'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu_{\mathrm{son}}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu_{\mathrm{son}}'}^{\Sigma,\Sigma}
\end{bmatrix}$}$\mathrel{\mathrel{\mathop:}=}\recursivelyDetermineBlock(\nu_{\mathrm{son}},
\nu_{\mathrm{son}}')$
}
\Return{\scalebox{1}{$\big[{\boldsymbol Q}_{\Phi}^{\nu},
{\boldsymbol Q}_{\Sigma}^{\nu}\big]^\intercal\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu_{\mathrm{son}_1}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu_{\mathrm{son}_2}'}^{\Phi,\Phi}\\
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu_{\mathrm{son}_1}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu_{\mathrm{son}_2}'}^{\Phi,\Phi}
\end{bmatrix} \big[{\boldsymbol Q}_{\Phi}^{\nu'},
{\boldsymbol Q}_{\Sigma}^{\nu'}\big]$}}
}
}
\end{function}\bigskip
Now, in order to assemble the compressed kernel matrix, we require
two nested recursions over the cluster tree, which is traversed in
a depth-first fashion. Algorithm~\ref{algo:h2Wavelet}
first computes the lower right matrix block and advances from bottom
to top and from right to left. To this end, the two recursive
functions \texttt{setupColumn} and \texttt{setupRow} are introduced.%
\bigskip
\begin{algorithm}[H]
\caption{Computation of the compressed kernel matrix}
\label{algo:h2Wavelet}
\KwData{Cluster tree $\Tcal$, multiscale cluster bases
${\boldsymbol V}_{\Phi}^{\nu}$, ${\boldsymbol V}_{\Sigma}^{\nu}$
and transformations $[{\boldsymbol Q}_{j,\Phi}^{\nu},
{\boldsymbol Q}_{j,\Sigma}^{\nu}]$.}
\KwResult{Sparse matrix ${\boldsymbol K}^\Sigma_\varepsilon$}
\Begin{
\FuncSty{setupColumn}($X$)\;
store the remaining blocks
${\boldsymbol K}^\Sigma_{\varepsilon,\nu,X}$ for \(%
\nu\in\Tcal\setminus\{X\}\)
in ${\boldsymbol K}^\Sigma_\varepsilon$ (they have already
been computed by earlier calls to %
\FuncSty{recursivelyDetermineBlock})
}
\end{algorithm}\bigskip
The purpose of the function \texttt{setupColumn} is to
recursively traverse the column cluster tree, i.e.\ the
cluster tree associated to the columns of the matrix.
Before returning, each instance of \texttt{setupColumn}
calls the function \texttt{setupRow}, which performs
the actual assembly of the compressed matrix.\bigskip
\begin{function}[H]
\caption{setupColumn($\nu'$)}
\Begin{
\For{all sons $\nu_{\mathrm{son}}'$ of $\nu'$}{
$\setupColumn(\nu_{\mathrm{son}}')$
}
store ${\boldsymbol K}^\Sigma_{\varepsilon,X,\nu'}\mathrel{\mathrel{\mathop:}=}
\FuncSty{setupRow}(X, \nu')$
in ${\boldsymbol K}^\Sigma_{\varepsilon}$
}
\end{function}\bigskip
For a given column cluster \(\nu'\), the function
\texttt{setupRow}
recursively traverses the row cluster tree, i.e.\
the cluster tree associated to the rows of the matrix,
and
assembles the corresponding column of the compressed
matrix.
The function reuses the already computed blocks to the
right of the column
under consideration and blocks at the bottom of the
very same
column.\bigskip
\begin{function}[H]
\caption{setupRow($\nu$, $\nu'$)}
\Begin{
\uIf{$\nu$ is not a leaf}{
\For{all sons \(\nu_{\mathrm{son}}\) of \(\nu\)}{
\uIf{\(\nu_{\mathrm{son}}\) and \(\nu'\) are not %
admissible}{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Sigma,\Sigma}
\end{bmatrix}$}
$\mathrel{\mathrel{\mathop:}=} \setupRow(\nu_{\mathrm{son}}, \nu')$
}
\Else{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}},\nu'}^{\Sigma,\Sigma}
\end{bmatrix}$}
$\mathrel{\mathrel{\mathop:}=}\recursivelyDetermineBlock(\nu_{\mathrm{son}},%
\nu')$}
}
\scalebox{1}{$
\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}\mathrel{\mathrel{\mathop:}=}\big[{\boldsymbol Q}_{\Phi}^{\nu},
{\boldsymbol Q}_{\Sigma}^{\nu}
\big]^\intercal\begin{bmatrix}
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_1},\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu_{\mathrm{son}_2},\nu'}^{\Phi,\Sigma}
\end{bmatrix}$
}
}
\Else{
\uIf{$\nu'$ is a leaf cluster}{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}$}
$\mathrel{\mathrel{\mathop:}=}\recursivelyDetermineBlock(%
\nu, \nu')$
}
\Else{
\For{all sons \(\nu_{\mathrm{son}}'\) of \(\nu'\)}{
\uIf{\(\nu\) and \(\nu_{\mathrm{son}}'\) %
are not admissible}{
load already computed block \scalebox{1}{%
$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Sigma,\Sigma}
\end{bmatrix}$}
}
\Else
{
\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}}'}^{\Sigma,\Sigma}
\end{bmatrix}$}
$\mathrel{\mathrel{\mathop:}=} \recursivelyDetermineBlock(%
\nu, \nu_{\mathrm{son}}')$
}
}
\scalebox{1}{
$\begin{bmatrix}{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}\mathrel{\mathrel{\mathop:}=}\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_1}'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_2}'}^{\Phi,\Phi}\\
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_1}'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu_{\mathrm{son}_2}'}^{\Sigma,\Phi}
\end{bmatrix}\big[{\boldsymbol Q}_{\Phi}^{\nu'},
{\boldsymbol Q}_{\Sigma}^{\nu'}\big]$}
}
}
store ${\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}$ %
as part of ${\boldsymbol K}^\Sigma_\varepsilon$
\Return{\scalebox{1}{$\begin{bmatrix}
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Phi} &
{\boldsymbol K}_{\nu,\nu'}^{\Phi,\Sigma}\\
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Phi} &
{\boldsymbol K}_{\nu,\nu'}^{\Sigma,\Sigma}
\end{bmatrix}$}}
}
\end{function}
\begin{remark}
Algorithm~\ref{algo:h2Wavelet} has a cost of
\(\mathcal{O}(N\log N)\) and requires
\(\mathcal{O}(N\log N)\) additional storage if all stored
blocks are released as soon as they are no longer
required. We refer to \cite{AHK14} for all the details.
\end{remark}
\section{Numerical results I\!I}\label{sec:Num2}
All computations in this section have been performed on
a single node with two Intel Xeon E5-2650 v3 @2.30GHz
CPUs and up to 512GB of main memory\footnote{The full
specifications can be found at
\texttt{https://www.euler.usi.ch/en/research/resources}.}.
In order to obtain consistent timings, only a single
core was used for all computations.
\subsection*{Benchmark problem}
To benchmark the compression of kernel matrices,
we consider the exponential kernel
\[
k({\boldsymbol x},{\boldsymbol y})=e^{-10
\frac{\|{\boldsymbol x}-{\boldsymbol y}\|_2}{\sqrt{d}}}
\]
evaluated at an increasing number of uniformly
distributed random sample points
in the hypercube \([-1,1]^d\) for \(d=1,2,3\). As a
measure of sparsity, we introduce the
\emph{average number of nonzeros per row}
\[
\operatorname{anz}({\boldsymbol A})
\mathrel{\mathrel{\mathop:}=}\frac{\operatorname{nnz}({\boldsymbol A})}{N},\quad
{\boldsymbol A}\in\Rbb^{N\times N},
\]
where \(\operatorname{nnz}({\boldsymbol A})\) is the number of
nonzero entries of
\({\boldsymbol A}\). Besides the compression, we also report
the fill-in generated
by the Cholesky factorization in combination with the
nested dissection reordering from \cite{KK98}.
For the reordering and the Cholesky
factorization, we rely on \textsc{Matlab}%
R2020a\footnote{Version 9.8.0.1396136,
The MathWorks Inc., Natick, Massachusetts, 2020.},
while the
samplet compression is implemented in \texttt{C++11}
using the
\texttt{Eigen} template
library\footnote{\texttt{https://eigen.tuxfamily.org/}}
for linear algebra operations. For the computations,
we consider
a polynomial degree of 3 for the \(\Hcal^2\)-matrix
representation
and \(q+1=3\) vanishing moments for the samplets.
In addition, we have thresholded all computed matrix
coefficients with absolute value smaller than
\(\varepsilon=10^{-3}\).
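To make the sparsity measure \(\operatorname{anz}\) concrete, the following sketch (our own stdlib-only code, not the C++/Eigen implementation used for the reported experiments) assembles a small exponential kernel matrix, zeroes out all entries below a threshold, and evaluates \(\operatorname{anz}\). Note that this simple entrywise thresholding in the original basis merely mimics the effect; the actual compression thresholds the samplet coefficients.

```python
import math
import random

random.seed(0)
d, N, eps = 2, 200, 1e-3
pts = [[random.uniform(-1.0, 1.0) for _ in range(d)] for _ in range(N)]

def kernel(x, y):
    """Exponential kernel exp(-10 ||x - y||_2 / sqrt(d))."""
    r = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return math.exp(-10.0 * r / math.sqrt(d))

# dense kernel matrix and a simple entrywise thresholding
K = [[kernel(x, y) for y in pts] for x in pts]
K_eps = [[v if abs(v) >= eps else 0.0 for v in row] for row in K]

def anz(A):
    """Average number of nonzeros per row: nnz(A) / N."""
    return sum(1 for row in A for v in row if v != 0.0) / len(A)

assert anz(K) == N            # the dense matrix has N nonzeros per row
assert anz(K_eps) < anz(K)    # thresholding introduces sparsity
```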
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\begin{loglogaxis}[width=0.42\textwidth,grid=both,
ymin= 0, ymax = 1e5, xmin = 256, xmax =1.2e6,
legend style={legend pos=south east,font=\small},
ylabel={\small wall time}, xlabel ={\small $N$}]
\addplot[line width=0.7pt,color=blue,mark=triangle]
table[x=npts,y=ctim]{./Results/matlabLogger1.txt};
\label{pgfplots:plot1D}
\addplot[line width=0.7pt,color=darkgreen,mark=square]
table[x=npts,y=ctim]{./Results/matlabLogger2.txt};
\label{pgfplots:plot2D}
\addplot[line width=0.7pt,color=red,mark=o]
table[x=npts,y=ctim]{./Results/matlabLogger3.txt};
\label{pgfplots:plot3D}
\addplot[line width=0.7pt,color=black, dashed]
table[x=npts,y expr={1e-5 * x}]{%
./Results/matlabLogger1.txt};
\addplot[line width=0.7pt,color=black, dashed]
table[x=npts,y expr={1e-5 * x * ln(x)}]{%
./Results/matlabLogger1.txt};
\addplot[line width=0.7pt,color=black, dashed]
table[x=npts,y expr={1e-5 * x * ln(x)^2}]{%
./Results/matlabLogger1.txt};
\addplot[line width=0.7pt,color=black, dashed]
table[x=npts,y expr={1e-5 * x * ln(x)^3}]{%
./Results/matlabLogger1.txt};
\label{pgfplots:asymps}
\end{loglogaxis}
\begin{loglogaxis}[%
xshift=0.405\textwidth,width=0.42\textwidth,grid=both,
ymin= 0, ymax = 2e3, xmin = 256, xmax =1.2e6,
ytick={1e1, 1e2, 1e3, 1e4},
legend style={legend pos=south east,font=\small},
ylabel={\small $\operatorname{anz}({\boldsymbol K}%
^\Sigma_\varepsilon)$}, xlabel ={\small $N$}]
\addplot[line width=0.7pt,color=blue,%
mark=triangle] table[x=npts,
y expr = {\thisrow{nzS}}]{./Results/matlabLogger1.txt};
\addplot[line width=0.7pt,color=darkgreen,mark=square]%
table[x=npts,
y expr = {\thisrow{nzS}}]{./Results/matlabLogger2.txt};
\addplot[line width=0.7pt,color=red,mark=o]%
table[x=npts,
y expr = {\thisrow{nzS}}]{./Results/matlabLogger3.txt};
\end{loglogaxis}
\matrix[
matrix of nodes,
anchor=north west,
draw,
inner sep=0.1em,
column 1/.style={nodes={anchor=center}},
column 2/.style={nodes={anchor=west},font=\strut},
]
at([xshift=0.02\textwidth]current axis.north east){
\ref{pgfplots:plot1D}& \(d=1\)\\
\ref{pgfplots:plot2D}& \(d=2\)\\
\ref{pgfplots:plot3D}& \(d=3\)\\
\ref{pgfplots:asymps}& \(N\!\log^\alpha\!\! N\)\\};
\end{tikzpicture}
\caption{\label{fig:compTimesNNZ}Assembly times
(left) and average numbers of nonzeros per row (right)
versus the number of sample points $N$ in case of the
exponential kernel matrix.}
\end{center}
\end{figure}
The left-hand side of Figure~\ref{fig:compTimesNNZ}
shows the wall time for the assembly of the compressed
kernel matrices. The different dashed lines indicate
the asymptotics \(N\log^\alpha N\)
for \(\alpha=0,1,2,3\). It can be seen that, for
an increasing number
\(N\) of points and the dimensions \(d=1,2,3\) under
consideration, all computation times approach the
expected rate
of \(N\log N\). The right-hand side of
Figure~\ref{fig:compTimesNNZ} shows the average number
of nonzeros per row for an increasing number
\(N\) of points. This number becomes constant or even
decreases, as expected.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\begin{loglogaxis}[width=0.42\textwidth,grid=both,
ymin= 0, ymax = 6e4, xmin = 500, xmax =1.2e6,
legend style={legend pos=south east,font=\small},
ylabel={\small wall time}, xlabel ={\small $N$}]
\addplot[line width=0.7pt,color=blue,mark=triangle]
table[x=npts,y=Ltim]{./Results/matlabLogger1.txt};
\label{pgfplots:plot1D1}
\addplot[line width=0.7pt,color=darkgreen,mark=square]
table[x=npts,y=Ltim]{./Results/matlabLogger2.txt};
\label{pgfplots:plot2D1}
\addplot[line width=0.7pt,color=red,mark=o]
table[x=npts,y=Ltim]{./Results/matlabLogger3.txt};
\label{pgfplots:plot3D1}
\addplot[line width=0.7pt,color=black,dashed]
table[x=npts,y expr={0.7e-6 * x^1.5}]{%
./Results/matlabLogger1.txt};
\addplot[line width=0.7pt,color=black,dashed]
table[x=npts,y expr={0.1e-6 * x^2}]{%
./Results/matlabLogger1.txt};
\label{pgfplots:asymps1}
\end{loglogaxis}
\begin{loglogaxis}[%
xshift=0.405\textwidth,width=0.42\textwidth,grid=both,
ymin= 0, ymax = 4e4, xmin = 256,
xmax =1.2e6,ytick={1e1, 1e2, 1e3, 1e4},
legend style={legend pos=south east,font=\small},
ylabel={\small $\operatorname{anz}({\boldsymbol L})$},
xlabel ={\small $N$}]
\addplot[line width=0.7pt,color=blue,mark=triangle]%
table[x=npts,
y = nzL]{./Results/matlabLogger1.txt};
\addplot[line width=0.7pt,color=darkgreen,mark=square]%
table[x=npts,
y = nzL]{./Results/matlabLogger2.txt};
\addplot[line width=0.7pt,color=red,mark=o]%
table[x=npts,
y = nzL]{./Results/matlabLogger3.txt};
\end{loglogaxis}
\matrix[
matrix of nodes,
anchor=north west,
draw,
inner sep=0.1em,
column 1/.style={nodes={anchor=center}},
column 2/.style={nodes={anchor=west},font=\strut},
]
at([xshift=0.02\textwidth]current axis.north east){
\ref{pgfplots:plot1D1}& \(d=1\)\\
\ref{pgfplots:plot2D1}& \(d=2\)\\
\ref{pgfplots:plot3D1}& \(d=3\)\\
\ref{pgfplots:asymps1}& \(N^{\frac{3}{2}}\),
\(N^2\)\\};
\end{tikzpicture}
\caption{\label{fig:cholTimesNNZ}Computation times
for the Cholesky factorization (left) and average
numbers of nonzeros per row for the Cholesky factor
(right) versus the number of sample points $N$ in
case of the exponential kernel matrix.}
\end{center}
\end{figure}
Next, we examine the Cholesky factorization of
the compressed
kernel matrix. As the largest eigenvalue of the
kernel matrix
grows proportionally to the number \(N\) of points,
while the smallest eigenvalue is
given by the ridge parameter, the condition number
grows with \(N\) as well.
Hence, to obtain a constant condition number for
increasing \(N\), the ridge parameter needs to be
adjusted accordingly.
However, as we are only interested in the generated
fill-in and the computation times,
we neglect this fact and just fix the ridge parameter
to \(\rho=1\) for all considered \(N\) and \(d=1,2,3\).
The obtained results are found in
Figure~\ref{fig:cholTimesNNZ}. On the left-hand
side, the wall times for the Cholesky factorization of
the reordered matrix are shown. For \(d=1\),
the behavior is a bit peculiar as
the average number of nonzeros per row decreases when
the number \(N\) of points increases. This indicates
that the kernel function is already fully resolved up
to the threshold parameter on the coarser levels.
For \(d=2\), the observed rate is slightly better than
the expected one of \(N^{\frac{3}{2}}\) for the
Cholesky factorization, while the scaling
is approximately like \(N^2\) for \(d=3\). On the
right-hand side of the
same figure, it can be seen that the fill-in
remains rather moderate.
A visualization of the matrix patterns for the matrix
\({\boldsymbol K}^\Sigma_\varepsilon+\rho{\boldsymbol I}\),
the reordered matrix and the Cholesky factor for
\(N=131\,072\) points is
shown in Figure~\ref{fig:patterns}. Each dot
corresponds to a block of
\(256\times 256\) matrix entries and its intensity
indicates the number
of nonzero entries, where darker blocks contain more
entries than lighter blocks.
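The effect of a fill-in reducing reordering visible in these patterns can be reproduced with standard sparse tools. The sketch below is a toy Python illustration: it uses SciPy's reverse Cuthill-McKee ordering (an assumption for the example, as a stand-in for whatever reordering is actually employed) on a scrambled tridiagonal model matrix:

```python
import numpy as np
import scipy.sparse as sps
from scipy.sparse.csgraph import reverse_cuthill_mckee
from scipy.sparse.linalg import splu

rng = np.random.default_rng(0)
n = 500
# Tridiagonal SPD model matrix, then a random symmetric permutation
# that destroys the band structure.
A = sps.diags([-np.ones(n - 1), 4.0*np.ones(n), -np.ones(n - 1)],
              [-1, 0, 1], format='csc')
p = rng.permutation(n)
A_scr = sps.csc_matrix(A[p, :][:, p])

# Reverse Cuthill-McKee recovers a narrow band, so factorizing in the
# natural order produces far less fill-in.
q = reverse_cuthill_mckee(A_scr, symmetric_mode=True)
A_rcm = sps.csc_matrix(A_scr[q, :][:, q])

nnz_scr = splu(A_scr, permc_spec='NATURAL').L.nnz
nnz_rcm = splu(A_rcm, permc_spec='NATURAL').L.nnz
print(nnz_scr, nnz_rcm)
```

On this toy example the reordered factor stays essentially bidiagonal while the scrambled one fills in heavily; the same kind of mechanism is what keeps \(\operatorname{anz}({\boldsymbol L})\) moderate in the experiments above.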
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\draw(0,4.5) node%
{\includegraphics[scale=0.31,frame,%
trim= 0 0 0 13.4,clip]{%
./Results/Kmat_1D.eps}};
\draw(4,4.5) node {\includegraphics[scale=0.31,frame,%
trim= 0 0 0 13.4,clip]{./Results/KmatND_1D.eps}};
\draw(8,4.5) node%
{\includegraphics[scale=0.31,frame,%
trim= 0 0 0 13.4,clip]{./Results/Lmat_1D.eps}};
\draw(4,6.6) node {$d=1$};
\draw(0,0) node%
{\includegraphics[scale=0.31,frame,%
trim= 0 0 0 13.4,clip]{./Results/Kmat_2D.eps}};
\draw(4,0) node%
{\includegraphics[scale=0.31,frame,%
trim= 0 0 0 13.4,clip]{./Results/KmatND_2D.eps}};
\draw(8,0) node%
{\includegraphics[scale=0.31,frame,%
trim= 0 0 0 13.4,clip]{./Results/Lmat_2D.eps}};
\draw(4,2.1) node {$d=2$};
\draw(0,-4.5) node%
{\includegraphics[scale=0.31,frame,%
trim= 0 0 0 13.4,clip]{./Results/Kmat_3D.eps}};
\draw(4,-4.5) node%
{\includegraphics[scale=0.31,frame,%
trim= 0 0 0 13.4,clip]{./Results/KmatND_3D.eps}};
\draw(8,-4.5) node%
{\includegraphics[scale=0.31,frame,%
trim= 0 0 0 13.4,clip]{./Results/Lmat_3D.eps}};
\draw(4,-2.4) node {$d=3$};
\end{tikzpicture}
\caption{\label{fig:patterns}Sparsity pattern of
\({\boldsymbol K}^\Sigma_\varepsilon+\rho{\boldsymbol I}\) (left),
the reordered matrices (middle) and the Cholesky
factors \({\boldsymbol L}\) (right)
for \(d=1,2,3\) and \(N=131\,072\).}
\end{center}
\end{figure}
\subsection*{Simulation of a Gaussian random field}
As our last example, we consider a Gaussian random
field evaluated at 100\,000 randomly chosen points on
the surface of the Stanford bunny. As before, the
Stanford bunny has been rescaled to have a diameter
of 2. In order to demonstrate that our approach works
also for larger dimensions, the Stanford bunny has been
embedded into \(\mathbb{R}^4\) and randomly rotated to
prevent axis-aligned bounding boxes. The polynomial
degree for the \(\Hcal^2\)-matrix representation is
set to 3 as before and likewise we consider \(q+1=3\)
vanishing moments. The covariance function is given by
the exponential kernel
\[
k({\boldsymbol x},{\boldsymbol y})=e^{-25\|{\boldsymbol x}-{\boldsymbol y}\|_2}.
\]
Moreover, we discard all computed matrix entries which
are below the threshold of \(\varepsilon=10^{-6}\).
The ridge parameter is set to \(\rho=10^{-2}\).
The compressed covariance matrix exhibits
\(\operatorname{anz}({\boldsymbol K}^\Sigma_\varepsilon)=6457\)
nonzero matrix entries per row on average, while the
corresponding Cholesky factor exhibits
\(\operatorname{anz}({\boldsymbol L})=14\,898\) nonzero
matrix entries per row on average.
Having the Cholesky factor \({\boldsymbol L}\) at hand,
the computation of a realization of the
Gaussian random field is extremely fast, as it only
requires a simple sparse matrix-vector multiplication
of \({\boldsymbol L}\) by a Gaussian random vector and an
inverse samplet transform. Four different realizations
of the random field projected
to \(\mathbb{R}^3\) are shown in Figure~\ref{fig:GRF}.
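The sampling step can be mimicked on a small dense surrogate in a few lines of Python; the random point set below is a stand-in for the bunny, and the samplet transform and the sparsity of \({\boldsymbol L}\) are omitted, while the kernel and the ridge \(\rho=10^{-2}\) follow the setup above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 500, 1e-2
pts = rng.random((n, 4))                         # surrogate points in R^4
d = np.sqrt(((pts[:, None, :] - pts[None, :, :])**2).sum(-1))
K = np.exp(-25.0*d)                              # exponential covariance
L = np.linalg.cholesky(K + rho*np.eye(n))        # factor once

# Each realization then costs a single matrix-vector product (plus, in
# the samplet setting, an inverse samplet transform).
field = L @ rng.standard_normal(n)
```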
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}
\draw (0,5) node {\includegraphics[scale=0.1,clip,%
trim= 840 330 850 400]{./Results/bunnyField1.png}};
\draw (5,5) node {\includegraphics[scale=0.1,clip,%
trim= 840 330 850 400]{./Results/bunnyField2.png}};
\draw (0,0) node {\includegraphics[scale=0.1,clip,%
trim= 840 330 850 400]{./Results/bunnyField3.png}};
\draw (5,0) node {\includegraphics[scale=0.1,clip,%
trim= 840 330 850 400]{./Results/bunnyField4.png}};
\draw (9,2.5) node {\includegraphics[scale=0.2,clip,%
trim= 2260 450 450 750]{./Results/bunnyField4.png}};
\end{tikzpicture}
\caption{\label{fig:GRF}Four different realizations of
a Gaussian random field based on an exponential covariance kernel.}
\end{center}
\end{figure}
\section{Conclusion}\label{sec:Conclusion}
Samplets provide a new methodology for the analysis
of large data sets. They are easy to construct and
discrete data can be transformed into the samplet
basis in linear cost. In our construction, we
deliberately left out the discussion of a
level-dependent compression of the given data, as it
is known from wavelet analysis, in favor of a robust
error analysis. We emphasize however that, under the
assumption of uniformly distributed points, different
norms can be incorporated, allowing for the
construction of band-pass filters and level-dependent
thresholding. In this situation, an improved
samplet matrix compression is also possible
such that a fixed number of vanishing moments is
sufficient to achieve a precision proportional to the
fill distance with log-linear cost.
Besides data compression, detection of singularities
and adaptivity, we have demonstrated how samplets can
be employed for the compression of kernel matrices to
obtain an essentially sparse matrix. Given a sparse
representation of the kernel matrix, algebraic
operations, such as matrix-vector multiplications,
can be sped up considerably. Moreover, in combination
with a fill-in reducing reordering, the factorization
of the compressed kernel matrices becomes
computationally feasible, which allows for the fast
application of the inverse kernel matrix on the one
hand and the efficient solution of linear systems
involving the kernel matrix on the other hand. The
numerical results, featuring about \(10^6\) data points
in up to four dimensions, demonstrate the capabilities
of samplets.
Future research will be directed to
the extension of samplets towards high-dimensional
data.
This extension requires the incorporation of different
clustering strategies, such as locality sensitive
hashing,
to obtain a manifold-aware cluster tree, and the careful
construction of the vanishing moments, for example
by means of anisotropic polynomials.
\bibliographystyle{plain}
\section{Introduction}
The lack of a generically meaningful notion of local energy density in
general relativity is well-known \cite{MTW,ABS,BY}. Essentially, this is
due to the absence of an unambiguous prescription for decomposing the
spacetime metric into ``background'' and ``dynamical'' components.
\footnote{See, however, the field formulation of general relativity
\cite{ff}.} If such a prescription were available, then one could associate
energy in general relativity with the dynamical component of the metric. In
the past there have been attempts to define quasilocal energy using
pseudotensor methods \cite{MTW,ABS,LL}. However, these approaches led to
coordinate-dependent expressions, which lacked an unambiguous geometrical
interpretation. Another way of defining quasilocal energy has been via
the spinor constructions \cite{Witten,Nester,Penrose,DM}. There are, however,
several unresolved questions regarding this approach, a key issue being the
lack of a rigorous proof of the Witten-Nester integral being a boundary value
of the gravitational Hamiltonian \cite{Mason}. \footnote{For a more complete
list of references on quasilocal energy, also see the ones cited in Ref.
\cite{BY}.} Nevertheless, the total energy of an isolated system has been
defined in terms of the behavior of the gravitational field at large distances
from the system \cite{ADM}. Moreover, Brown and York have introduced in Ref.
\cite{BY} (henceforth referred to as BY) a way to define the quasilocal energy
of a spatially bounded system in general relativity in terms of the total mean
curvature of the boundary. Further, in spacetimes with a hypersurface-forming
timelike Killing vector on the boundary of the system, it can be shown that
there exists a conserved charge, which can be defined to be the quasilocal
mass associated with the bounded region \cite{BCM}.
The past few years have seen a revival of interest in scalar-tensor theories
of gravity, primarily, string-inspired four-dimensional dilaton gravity,
which has been shown to yield cosmological as well as charged black hole
solutions (see Refs. \cite{rev1,rev2,rev3,rev4} for reviews.)
In particular, the spacetime structure (i.e., geodesics and singularities)
of these black hole solutions, and also those of the Brans-Dicke-Maxwell
theory (in higher dimensions) \cite{Xan,Cai}, are known to have significant
differences with respect to the Reissner-Nordstr\"{o}m black holes. This
prompts one to investigate the form of the classical laws of black hole
mechanics and the ensuing picture of black hole thermodynamics in these
theories. But the study of the thermodynamical laws entails the knowledge of
the energy and entropy associated with
these spacetimes. Moreover, {\em equilibrium} thermodynamics of
a black hole (specifically, in the case of an asymptotically flat solution),
requires that it be put in a finite-sized ``box'', just as one does in general
relativity. Thus such a study requires the knowledge of the quasilocal energy
of these ``finite-sized'' systems.
Recently the BY formalism has been extended to the case of a generic
scalar-tensor theory of gravity in spacetime dimensions greater than two
\cite{CM,CCM}. Since solutions of two conformally related scalar-tensor
theories will themselves be related by a conformal transformation, it
is interesting to ask if the quasilocal masses of these solutions are also
related. In the past, it has been suggested that the quasilocal mass is a
conformal invariant. The reason that is usually provided is that a conformal
transformation is simply a local field reparametrization, which is supposed
to leave physical quantities, such as the mass of a system, unchanged. In
fact, in Ref. \cite{CCM}, Chan, Creighton, and Mann (henceforth referred to
as CCM) argue that the quasilocal
mass is indeed invariant under a conformal transformation.
As in general relativity, the (unreferenced) mass of a spacetime in a
scalar-tensor theory of gravity is typically divergent unless one subtracts a
(divergent) contribution from a suitable reference geometry to obtain a
meaningful finite result \cite{BY}. Recently, Hawking and Horowitz (henceforth
referred to as HH) gave a prescription in general relativity for obtaining
an appropriate reference action based on the asymptotic behavior of the fields
on a classical solution \cite{HH}. This reference action is subtracted from
the original action of the theory to define what is called the physical
action. The surface term that arises in the Hamiltonian associated with this
action can be taken as the definition of ``total mass''. This mass turns out
to be finite and is termed the physical mass. When evaluated on an
asymptotically flat spacetime solution, the physical mass of the full
spacetime coincides with the corresponding ADM expression.
Following HH, one could ask if a similar prescription can be formulated for
scalar-tensor theories of gravity. If so, it would be interesting to see
whether the total mass arising from such an action is conformally invariant.
It has been claimed
by CCM that the generalization of the HH prescription to scalar-tensor
theories does {\it not} lead to a conformally invariant physical mass.
They propose their own reference action for a general scalar-tensor theory of
gravity and show that the associated physical mass of static, spherically
symmetric (SSS) solutions is conformally invariant. However, unlike what
happens in the HH prescription, the CCM reference action is not motivated by
any boundary conditions on the fields that define the spacetime solution of
interest.
In this paper, we first show that the HH prescription can be generalized to
the case of scalar-tensor gravity. This reduces the arbitrariness
in the choice of the reference action. More importantly, we prove that under
certain conditions the
resulting reference action leads to a conformally invariant referenced
quasilocal mass. In the following, we will directly deal with only
that conformal transformation that relates the scalar-tensor-gravity metric
to that in the Einstein frame. Of course, our results can be readily
extended to study the behavior of physical quantities under a conformal
transformation relating one scalar-tensor theory to another. In section
\ref{sec:QE} we derive the expression for the (unreferenced) quasilocal mass
of a bounded region in $(D+1)$-dimensional spacetime solution of a
scalar-tensor theory of gravity and prove it to be conformally invariant.
In section \ref{sec:refac} we generalize the HH prescription
to the case of scalar-tensor gravity. It is shown that for
such a choice of the reference action, the referenced quasilocal mass
is a conformal invariant provided the conformal factor is a monotonic function
of the scalar field. Using this prescription, we give an expression for
quasilocal mass of static spherically symmetric solutions (with arbitrary
asymptotics). In section \ref{sec:QEex}, we demonstrate the conformal
invariance of this quasilocal mass formula by applying it to specific cases of
four-dimensional (4D) black hole
spacetimes, of both the asymptotically flat and non-flat kinds, in conformally
related theories. We briefly summarize and discuss our results in section
\ref{sec:disc}. In appendix \ref{app:PT}, we show how the standard
prescription for determining the stress-energy pseudotensor in general
relativity can be suitably adapted to find the quasilocal energy in
scalar-tensor theories of gravity. Finally, we demonstrate the consistency of
the results of the pseudotensor method with the quasilocal formalism of Brown
and York as applied to scalar-tensor theories.
Throughout this paper, we use the conventions of Misner, Thorne, and Wheeler
\cite{MTW} and work in ``geometrized units'' $c=1=G$.
\section{Quasilocal Mass under Conformal Transformation}
\label{sec:QE}
Consider a spatially bounded region of a $(D+1)$-dimensional spacetime that is
a classical solution of a scalar-tensor theory of gravity, such as dilaton
gravity or Brans-Dicke theory. In this section we extend the formalism of
Brown and York \cite{BY} to derive an expression for the quasilocal energy of
gravitational and matter fields associated with such regions. Subsequently, we
will give an expression for the quasilocal mass.
The BY derivation of the quasilocal energy, as applied to a
$(D+1)$-dimensional spacetime can be summarized as follows. The system we
consider is a $D$-dimensional spatial hypersurface $\Sigma$ bounded by a
$(D-1)$-dimensional spatial hypersurface ${\sf B}$ in a spacetime region that
can be decomposed as a product of a $D$-dimensional hypersurface and a real
line interval representing time (see Fig. 1). The time-evolution of the
boundary ${\sf B}$ is the surface ${}^D {\sf B}$. One can then obtain a
surface stress-tensor on ${}^D {\sf B}$ by taking the functional derivative of
the action with respect to the $D$-dimensional metric on ${}^D {\sf B}$. The
energy surface density is the projection of the surface stress-tensor normal
to a family of spacelike surfaces like ${\sf B}$ that foliate ${}^D {\sf B}$.
The integral of the energy surface density over such a boundary ${\sf B}$ is
the quasilocal energy associated with a spacelike hypersurface $\Sigma$ whose
{\em orthogonal} ~intersection with ${}^D {\sf B}$ is the boundary ${\sf B}$.
Here we assume that there are no inner boundaries, such that the spatial
hypersurfaces $\Sigma$ are complete. In the case where horizons form, one
simply evolves the spacetime inside as well as outside the horizon.
We follow the same notation as BY. The spacetime metric is $g_{\mu\nu}$ and
$n^{\mu}$ is the outward pointing unit normal to the surface ${}^D {\sf B}$.
The metric and extrinsic curvature of ${}^D {\sf B}$ are denoted by
$\gamma_{\mu\nu}$ and $\Theta_{\mu\nu}$, respectively, and they obey $n^{\mu}
\gamma_{\mu\nu} =0$ and $n^{\mu} \Theta_{\mu\nu} =0$. Alternatively,
$\gamma_{\mu\nu}$ and $\Theta_{\mu\nu}$ can be viewed as tensors on ${}^D
{\sf B}$, denoted by $\gamma_{ij}$ and $\Theta_{ij}$, where $i,j$ refer to
coordinates in ${}^D {\sf B}$. Similarly, the metric and extrinsic curvature
of $\Sigma$ are given by the spacetime tensors $h_{\mu\nu}$ and $K_{\mu\nu}$,
respectively. When viewed as tensors on $\Sigma$, they will be denoted by
$h_{ij}$ and $K_{ij}$. As in BY, here we will assume that the hypersurface
foliation $\Sigma$ is ``orthogonal'' to the surface ${}^D {\sf B}$ in the
sense that on the boundary ${}^D {\sf B}$, the future-pointing unit normal
$u^{\mu}$ to the hypersurface $\Sigma$ and the outward pointing
spacelike unit normal $n^{\mu}$ to the surface ${}^D {\sf B}$
satisfy $(u \cdot n) |_{{}^D {\sf B}} =0$. This implies that the shift
vector, $V^i$, normal to the boundary vanishes, i.e., $V^i n_i =0$.
\subsection{Action}
\label{subsec:qebd}
We study the following action for a scalar-tensor theory of gravity in
a $(D+1)$-dimensional spacetime:
\begin{equation}
\label{SBD}
S \left[ \bar{g}_{ab} , \phi, {\cal F}\right] = {1\over 2\kappa}\int d^{(D+1)}
x \> \sqrt{-\bar{g}} \> U(\phi ) \left[ \bar{R} - W(\phi)
(\bar{\nabla} \phi)^2 - V (\phi) + X(\phi) \bar{L}_{\rm m}
\right]
\ \ ,
\end{equation}
where $\bar{g}_{ab}$ is the ``physical'' metric, $\phi$ is a scalar field,
${\cal F}$ represents matter fields, $\kappa \equiv 8\pi$, and
$U$, $V$, $W$, and $X$ are functions of $\phi$. Also, $\bar{L}_{\rm m}$ is the
matter Lagrangian that includes a possible cosmological constant
term. The overbar denotes the functional dependence of quantities on the
physical metric $\bar{g}_{ab}$. Here we assume that $\bar{L}_{\rm m}$ does not
involve any derivatives of the metric. The dynamics of the
scalar field is governed by its kinetic term, the effective potential term
$V(\phi )$ and the non-minimal coupling to the scalar curvature, $\bar{R}$,
described by $U(\phi )$. The effective potential term can be inclusive of an
arbitrary additive constant which may occur as a Lagrange multiplier, an
integration constant, or even a fallout of the renormalization
procedure of the other matter fields described by $\bar{L}_{\rm m}$. It is
this constant that is responsible for the ``cosmological constant problem''.
One can consider the potential term as an effective term obtained by
integrating over ``heavy'' degrees of freedom as long as one does not go
beyond the leading semiclassical approximation for the scalar field $\phi$.
The BY analysis is of course readily applicable in the ``Einstein'' frame,
which is associated with the ``auxiliary'' metric
\begin{equation}
\label{auxmet}
g_{ab} \equiv U^{2/(D-1)} \bar{g}_{ab}
\ \ ,
\end{equation}
where $D>1$. In terms of the auxiliary metric, one can always cast action
(\ref{SBD}) as a sum of the Hilbert action, which is independent of the scalar
and matter fields, and a functional $S_f$ that depends on these fields:
\begin{equation}
\label{SBDaux}
S [g_{ab}] = {1\over 2\kappa}\int d^{(D+1)} x \> \sqrt{-g}\> {R} + S_f
\ \ ,
\end{equation}
where
\begin{eqnarray}
S_f \equiv {1\over 2\kappa } \int d^{(D+1)} x \> \sqrt{-g}\> \Bigglb[ &&
{D\over D-1} \left(\nabla \ln U(\phi)\right)^2 - W(\phi)
(\nabla \phi)^2 \nonumber \\
&&- U^{2/(1-D)} \left( V(\phi) - X(\phi ) L_{\rm m} \right) \Biggrb]\,.
\label{Sf}
\end{eqnarray}
Above, $L_{\rm m}$ is a functional of the matter fields, their
derivatives, the auxiliary metric $g_{ab}$, and the scalar field $\phi$. In
Eq. (\ref{SBDaux}), we have ignored the surface term contributions for the
present. The subscript $f$ represents the scalar field $\phi$ and the matter
fields. Note that $S_f$ does not involve any derivatives of the metric. In the
following, $(\bar{g}, \bar{f})$ will denote a field configuration that is a
solution to (\ref{SBD}), whereas $(g, f)$ is the conformally related solution
in the theory (\ref{SBDaux}). Although there is no bar over $\phi$, note that
$\phi$ is implicitly included in the configuration $(\bar{g}, \bar{f})$.
\subsection{Quasilocal energy and mass}
\label{subsec:QEaux}
We begin this section by briefly discussing the Hamilton-Jacobi analysis
used by BY to evaluate the quasilocal energy of a spatially bounded region
in Einstein gravity. We will later give the expression for the quasilocal
energy in a generic scalar-tensor theory of gravity.
In general relativity, to make the action functionally differentiable
under boundary conditions that fix the metric on the boundary,
one appends appropriate surface terms to the Hilbert action.
The resulting action in $(D+1)$-dimensions is
\begin{equation}
\label{SBDfunc}
S^1 = {1\over 2\kappa}\int d^{(D+1)} x \> \sqrt{-{g}} \> {R} + {1\over \kappa}
\int_{t'}^{t''} d^D x \>\sqrt{h} \> K - {1\over \kappa}
\int_{{}^D {\sf B}}
d^D x \>\sqrt{-\gamma} \> \Theta + S_f
\ \ ,
\end{equation}
where $\int_{t'}^{t''} d^D x$ represents the difference between the integral over
the spatial hypersurface $t=t''$ and that over the hypersurface $t=t'$. Of course,
the equations of motion obtained from the variation of the above action are
unaffected by the addition of an arbitrary function $S^0$ of
fixed boundary data to $S^1$. Hence, the variation of such an action
restricted to classical solutions gives
\begin{eqnarray}
\label{varS}
\delta S_{\rm cl} &=& \{ {\rm terms~involving~variations~in~the~matter~
fields}\} \nonumber \\
&+& \int_{t'}^{t''} d^D x \> P^{ij}_{\rm cl} \delta h_{ij}
+ \int_{{}^D B} d^D x \> (\pi^{ij}_{\rm cl} - \pi^{ij}_0) \delta \gamma_{ij}
\ \ ,
\end{eqnarray}
where $ P^{ij}$ and $\pi^{ij}$ are, respectively, the momenta conjugate
to $h_{ij}$ and $\gamma_{ij}$, and
\begin{equation}
\label{pi0}
\pi^{ij}_0 \equiv {\delta S^0 \over \delta \gamma_{ij} } \,.
\end{equation}
Above, the subscript ``${\rm cl}$'' denotes the value of a quantity on a classical
solution.
As we discuss in detail later, different choices for $S^0$ arise by imposing
different physical requirements on the quasilocal energy.
{}From Eqs. (\ref{varS}) and (\ref{pi0}) given above, we obtain the
following Hamilton-Jacobi equations:
\begin{eqnarray}
P^{ij}_{\rm cl} |_{t''} &=& {\delta S_{\rm cl} \over \delta h_{ij} (t'')}
\ \ , \label{P} \\
(\pi^{ij}_{\rm cl} - \pi^{ij}_0 )
&=& {\delta S_{\rm cl} \over \delta \gamma_{ij} } \ .
\label{pi}
\end{eqnarray}
The quantity that is of interest to us is the surface stress-tensor
for spacetime and the fields, which is given by
\begin{equation}
\tau^{ij} \equiv {2 \over \sqrt{-\gamma}} {
\delta S_{\rm cl} \over \delta \gamma_{ij}} \ .
\end{equation}
Using (\ref{pi}), we obtain
\begin{equation}
\tau^{ij} = {2 \over \sqrt{-\gamma}}(\pi^{ij}_{\rm cl} - \pi^{ij}_0) \ .
\end{equation}
If $u^i$ is the unit timelike normal to $\Sigma$ on the boundary
${\sf B}$, then the proper energy surface density $\epsilon$ is
\begin{equation}
\label{energydens}
\epsilon \equiv u_i u_j \tau^{ij} = -{1\over\sqrt{\sigma}} {\delta S_{\rm cl}
\over \delta N} \ \ ,
\end{equation}
where $\sigma_{ij}$ is the metric on the boundary ${\sf B}$. Above, we
made use of the following identity
\begin{equation}
{\partial \gamma_{ij} \over \partial N} = - {2 u_i u_j \over N} \,.
\end{equation}
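This identity follows from the ADM form of the boundary metric, $\gamma_{tt}=-N^2+\sigma_{ab}V^aV^b$, $\gamma_{ta}=\sigma_{ab}V^b$, $\gamma_{ab}=\sigma_{ab}$, together with $u_i=-N\,\delta^t_i$. A SymPy sketch with a single tangential direction (an illustrative reduction) confirms it:

```python
import sympy as sp

# Boundary metric in ADM form with one tangential direction:
# gamma_tt = -N^2 + sigma V^2, gamma_ta = sigma V, gamma_aa = sigma,
# with u_i = -N delta^t_i the future-pointing unit normal (index down).
N, V, sigma = sp.symbols('N V sigma', positive=True)
gamma = sp.Matrix([[-N**2 + sigma*V**2, sigma*V],
                   [sigma*V,            sigma ]])
u = sp.Matrix([-N, 0])

# d(gamma_ij)/dN == -2 u_i u_j / N
assert sp.simplify(gamma.diff(N) + 2*u*u.T/N) == sp.zeros(2, 2)
```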
Equation (\ref{energydens}) together with (\ref{pi}) can be used to show
that the energy surface density is related to the trace of the extrinsic
curvature, $k_{\rm cl}$, of the boundary ${\sf B}$ embedded in a spatial
hypersurface $\Sigma$ (which in turn is embedded in a classical solution):
\begin{equation}
\label{energydensaux}
\epsilon = {1\over \kappa} ( k_{\rm cl} - k_0) \ ,
\end{equation}
where $k_0$ is the trace of the extrinsic curvature of a surface that is
{\em isometric} to ${\sf B}$, but is embedded in a reference space.
Following BY, the extrinsic curvature of ${\sf B}$ as embedded in $\Sigma$ is
defined by
\begin{equation}
\label{kdef}
k_{\mu \nu} = - \sigma^{\alpha}_\mu D_\alpha n_\nu \ \ ,
\end{equation}
where $D_\alpha$ is the covariant derivative on $\Sigma$. Therefore, $k=
\sigma^{\mu\nu} k_{\mu\nu}$. The quasilocal energy associated with all
the fields on the spacelike hypersurface $\Sigma$ with boundary
${\sf B}$ in this ``auxiliary'' spacetime is
\begin{eqnarray}
E &=& \int_{\sf B} d^{(D-1)} x \> \sqrt{\sigma} \> \epsilon = -\int_{\sf B}
d^{(D-1)} x \> {\delta S_{\rm cl} \over \delta N}\nonumber \\
&=& {1\over \kappa} \int_{\sf B} d^{(D-1)} x \> \sqrt{\sigma}\>(k_{\rm cl}
- k_0) \ .
\label{energyaux}
\end{eqnarray}
In other words, $E$ represents the proper quasilocal energy in the Einstein
frame. The above expression is interpreted as energy because it is minus the
change in the classical action due to a uniform, unit increase in the proper
time along ${}^D {\sf B}$. Also, for a unit lapse and zero shift, it is equal
to the Hamiltonian corresponding to the action (\ref{SBDfunc}), as evaluated
on a classical solution. It is satisfying to note that this is a geometric
expression independent of the coordinates on the quasilocal surface. However,
it does depend on the choice of the quasilocal surface and also on the
foliation of the spacetime by spacelike hypersurfaces.
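As a concrete illustration of Eq. (\ref{energyaux}) in $D=3$ (a four-dimensional spacetime): for a round sphere of areal radius $r$ in the Schwarzschild geometry with a flat-space reference term, the well-known Brown-York result is $E(r)=r\,[1-\sqrt{1-2M/r}\,]$ \cite{BY}. A quick numerical check of its limiting values (a sketch, not part of the derivation here):

```python
import numpy as np

def by_energy(r, M):
    """Brown-York quasilocal energy inside a sphere of areal radius r
    in Schwarzschild (D = 3, flat-space reference), units G = c = 1."""
    return r*(1.0 - np.sqrt(1.0 - 2.0*M/r))

M = 1.0
for r in (2.0*M, 10.0*M, 1e6*M):
    print(r, by_energy(r, M))
```

The energy equals $2M$ at the horizon $r=2M$ and decreases monotonically to the ADM mass $M$ as $r\to\infty$, illustrating the dependence on the choice of the quasilocal surface noted above.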
When there is a timelike Killing vector field $\xi^\mu$ on the boundary ${}^D
{\sf B}$, such that it is also hypersurface forming, one can define an
associated conserved quasilocal mass for the bounded system \cite{BCM,BY}:
\begin{equation}
\label{massaux}
M = \int_{\sf B} d^{(D-1)} x \> \sqrt{\sigma} \> N \epsilon
\ \ ,
\end{equation}
where $N$ is the lapse function related to $\xi^\mu$ by $\xi^\mu = Nu^\mu$.
Further, if $\xi \cdot u = -1$, then $N=1$ and
consequently the quasilocal mass is the same as the quasilocal energy
(\ref{energyaux}). Unlike the quasilocal energy (\ref{energyaux}), the
quasilocal mass is independent of any foliation of the bounded system.
We now ask: What is the analogous expression for the quasilocal energy or
mass for a bounded spatial region of a spacetime solution of the scalar-tensor
theory (\ref{SBD})? Note that under boundary conditions
that fix the metric on the boundary, the appropriate surface action to be
added to the action (\ref{SBD}) is
\begin{equation}
\label{SBDbdry}
S_{{}^D {\sf B}} [\bar{g}_{ab} ,\phi ] = {1\over \kappa}\int_{t'}^{t''}
d^{D} x \> \sqrt{\bar{h}} \> U(\phi) \bar{K}
- {1\over \kappa}\int_{{}^D {\sf B}}
d^{D} x \>
\sqrt{- \bar{\gamma}} \> U(\phi) \bar{\Theta}
\ \ ,
\end{equation}
where $\bar{K}$ is the trace of the extrinsic curvature of a spatial
hypersurface and $\bar{\Theta}$ that of the boundary ${}^D {\sf B}$ when
embedded in a spacetime solution of action (\ref{SBD}). It can be verified
that under a conformal transformation (\ref{auxmet}), the action $S^1$
in (\ref{SBDfunc}) transforms exactly to $S[\bar{g}_{ab}, \phi, {\cal F}] +
S_{{}^D {\sf B}} [\bar{g}_{ab} ,\phi ]$ , which is defined by Eqs. (\ref{SBD})
and (\ref{SBDbdry}). As in the case of
Einstein gravity discussed above, one can use the
BY approach to derive the expression for quasilocal energy from the above
surface action in a non-minimally coupled theory. Such a calculation was
done by Creighton and Mann \cite{CM} for four-dimensional pure
dilaton-gravity. A straightforward generalization of their derivation to
the case of a $(D+1)$-dimensional scalar-tensor theory (\ref{SBD}) including
matter fields gives the quasilocal energy in such theories to be
\begin{eqnarray}
\bar{E} &=& \int_{\sf B} d^{(D-1)} x \> \sqrt{\bar{\sigma}} \>
\bar{\epsilon } \nonumber \\
&=& {1\over \kappa} \int_{\sf B} d^{(D-1)} x \> \sqrt{\bar{\sigma}} \>
\left( U(\phi ) \bar{k} - \bar{n}^i \partial_i U(\phi ) \right)
-\bar{E}_0 \,.
\label{energyBD}
\end{eqnarray}
In the next section we will consider appropriate reference actions $S^0$ and
their respective contributions, $\bar{E}_0$, to the above expression. In the
appendix we give an alternative derivation of the above
energy expression using the pseudotensor method.
Analogous to Eq. (\ref{massaux}) one can also define the quasilocal mass
in the scalar-tensor theory to be
\begin{eqnarray}
\label{massBD}
\bar{M} &=& \int_{\sf B} d^{(D-1)} x \> \sqrt{\bar{\sigma}} \>
\bar{N} \bar{\epsilon} \nonumber \\
&=& {1\over \kappa} \int_{\sf B} d^{(D-1)} x \> \sqrt{\bar{\sigma}} \>
\bar{N} \left( U(\phi ) \bar{k} - \bar{n}^i \partial_i U(\phi ) \right)
-\bar{M}_0
\ \ ,
\end{eqnarray}
where $\bar{M}_0$ is an appropriate reference term.
\subsection{Conformal transformation}
\label{subsec:conftrans}
We now study how the quasilocal mass, ${\bar M}$, modulo the reference term
${\bar M}_0$, behaves under a conformal transformation. Equation
(\ref{massaux}) shows that this requires a knowledge of how the total mean
curvature $k$ of the boundary ${\sf B}$ behaves under a conformal
transformation. Let the physical metric $\bar{g}_{ab}$ of the
scalar-tensor theory be related to the auxiliary metric $g_{ab}$ by the
conformal transformation
\begin{equation}
\label{confmet}
\bar{g}_{ab} \equiv \Omega^2 g_{ab} \ \ ,
\end{equation}
where $\Omega$ is generally a function of the spacetime coordinates.
Comparing with (\ref{auxmet}), we find that
\begin{equation} \label{OU}
\Omega = U^{-1/(D-1)} \,.
\end{equation}
Note that on-shell $U(\phi)$ will be determined through the equations of
motion pertaining to the action (\ref{SBD}).
Let us embed the $(D-1)$-dimensional spatial boundary ${\sf B}$ in each of the
two conformally related spacetimes, assuming that such embeddings are feasible
and unique. Then the unit timelike normal $\bar{u}^i$ in the physical
spacetime is related to that in the auxiliary spacetime, $u^i$, as follows
\begin{equation}
\label{normal1}
\bar{u}^i = \Omega^{-1} u^i
\ \ .
\end{equation}
Similarly, the outward pointing unit normals to the surface
${}^D {\sf B}$ in the two spacetimes are related as
\begin{equation}
\label{normal2}
\bar{n}^i = \Omega^{-1} n^i
\ \ .
\end{equation}
One can further show that the extrinsic and the total mean curvature of the
boundary ${\sf B}$, as embedded in these spacetimes, are related as follows:
\begin{eqnarray}
\bar{k}_{ij} &=& \Omega \left[ k_{ij} - \sigma_{ij} n^l \nabla_l (\ln \Omega)
\right] \ ,
\label{barkij} \\
\bar{k} &=& \Omega^{-1} \left[ k - (D-1) n^l \nabla_l (\ln \Omega)
\right] \ ,
\label{bark}
\end{eqnarray}
where $n^l$ is the spacelike unit normal to the surface ${}^D {\sf B}$
(embedded in the auxiliary spacetime). Formally, we associate the covariant
derivatives $\nabla_l$ and $\bar{\nabla}_l$ with metrics $g_{ab}$ and $\bar{
g}_{ab}$, respectively. In spacetime regions where $\Omega$ is non-singular,
one can invert the above relation to obtain
\begin{equation}
\label{k}
k = \Omega {\bar k} + (D-1) \Omega \>\bar{n}^l \bar{\nabla}_l
(\ln \Omega) \ \ ,
\end{equation}
where $k$ is the total mean curvature of the boundary ${\sf B}$, as
embedded in the auxiliary spacetime.
Equation (\ref{k}) shows that under a conformal transformation the quasilocal
mass defined in Eq. (\ref{massaux}), modulo the reference term arising from
$k_0$, transforms as follows:
\begin{equation}
{1\over \kappa} \int_{\sf B} d^{(D-1)} x \> N \sqrt{\sigma} \> k
= {1\over \kappa}\int_{\sf B} d^{(D-1)} x \>\bar{N}
\sqrt{\bar{\sigma}} \left[ U(\phi) \bar{k} - \bar{n}^i
\partial_i U(\phi) \right] \ \ ,
\label{CTbdryac}
\end{equation}
where we have used Eq. (\ref{OU}). Applying the above identity to the mass
expressions (\ref{massaux}) and (\ref{massBD}) proves that the quasilocal
masses of conformally related spacetimes are the same, provided the reference
term $\bar{M}_0$ is conformally invariant.
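The identity (\ref{CTbdryac}) can also be verified symbolically. Treating the mean curvature $k$, the lapse $N$, the area element $\sqrt{\sigma}$, the conformal factor $\Omega$, and its normal derivative $n^l\nabla_l\Omega$ as independent symbols on ${\sf B}$, and inserting Eqs. (\ref{OU}), (\ref{normal2}), and (\ref{bark}), SymPy confirms the pointwise equality of the two integrands:

```python
import sympy as sp

# Independent data on the boundary B: dimension D, lapse N, mean
# curvature k, area element s = sqrt(sigma), conformal factor Om,
# and its normal derivative dOm = n^l nabla_l Omega.
D, N, k, s, Om, dOm = sp.symbols('D N k s Om dOm', positive=True)

U    = Om**(1 - D)                 # Eq. (OU): Omega = U^{-1/(D-1)}
kbar = (k - (D - 1)*dOm/Om)/Om     # Eq. (bark)
Nbar = Om*N                        # lapse of the conformally related metric
sbar = Om**(D - 1)*s               # area element of the (D-1)-boundary

# bar n^i d_i U = Om^{-1} n^i (dU/dOmega) d_i Omega, by Eq. (normal2)
x = sp.Symbol('x', positive=True)
nbar_dU = sp.diff(x**(1 - D), x).subs(x, Om)*dOm/Om

# Integrand of Eq. (CTbdryac): N sqrt(sigma) k equals
# Nbar sqrt(bar sigma) (U kbar - bar n^i d_i U).
assert sp.simplify(N*s*k - Nbar*sbar*(U*kbar - nbar_dU)) == 0
```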
Consider the behavior of the timelike vector $\xi^\mu$ defined above Eq.
(\ref{massaux}). It is assumed to be Killing in a given frame, say, the
Einstein
frame. It is also a conformal invariant, i.e., $\bar{\xi}^\mu = \bar{N}\bar{
u}^\mu = Nu^\mu = \xi^\mu$. However, it will not remain Killing under a general
conformal transformation. Thus, although the (unreferenced) quasilocal mass is
a conformal invariant, its property of being a conserved charge in a given
frame is not. However, if $\xi^\mu$ obeys $\xi^\mu \nabla_\mu \phi =0$, then it
continues to remain Killing in the conformally related frame. In such an
event, the associated quasilocal mass remains a conserved charge in that frame
too. The unreferenced quasilocal energy, in contrast, is not invariant under
a conformal transformation: when $\Omega$ is independent of the coordinates on
the quasilocal surface, it transforms as
\begin{equation} \label{qEct}
\bar{E} -\bar{E}_0 = \Omega^{-1} (E-E_0)
\end{equation}
and, hence, is not a conformal invariant.
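This follows from Eq. (\ref{CTbdryac}): setting $\bar{N} = 1$ forces $N =
\Omega^{-1}$ and, with $\Omega$ constant on ${\sf B}$, the conformal factor
comes out of the integral,
\[
\bar{E} = {1\over \kappa} \int_{\sf B} d^{(D-1)} x \> \sqrt{\bar{\sigma}}
\left[ U(\phi) \bar{k} - \bar{n}^i \partial_i U(\phi) \right]
= {\Omega^{-1} \over \kappa} \int_{\sf B} d^{(D-1)} x \> \sqrt{\sigma} \> k
= \Omega^{-1} E \ \ ,
\]
with the same relation holding between the reference terms $\bar{E}_0$ and
$E_0$.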
Finally, note that when we compare the quasilocal masses of conformally
related spacetimes above, we assume that the boundary ${}^D {\sf B}$,
which is taken to be embedded in a particular spacetime, is also
embeddable in the conformally related spacetime. However, the embeddability
of a hypersurface requires
that the intrinsic and extrinsic geometry of the boundary obey the
Gauss-Codazzi, Codazzi-Mainardi, and Ricci integrability conditions in both
spacetimes separately. In general, not all of these integrability conditions
are conformal invariants. Therefore, embeddability
of a hypersurface in a spacetime does not guarantee its embeddability in a
conformally related spacetime. Nevertheless, it can be shown that one can
always embed a $(D-1)$-dimensional spacelike spherical boundary in
$(D+1)$-dimensional SSS spacetime solutions, which are Ricci flat, and
in spacetimes related through conformal transformations that preserve these
spacetime properties \cite{CCG,TJW}.
\section{Reference action and quasilocal mass}
\label{sec:refac}
The Brown-York definition of the quasilocal energy (\ref{energyaux})
associated with a spatially bounded region of a given spacetime solution is
not unique. This is because an arbitrary functional $S^0$ of the boundary data
can be added to the action without affecting the equations of motion. On the
other hand, to get a well-defined (finite) expression for the quasilocal
energy of spatially non-compact geometries, one is usually required to
subtract the (divergent) contribution of some reference background.
At the level of the action such a ``regularization'' is tantamount to the
addition of a reference action $S^0$, which is a functional of appropriate
background fields $(g_0 , f_0 )$, to the original action $S^1$.
For 4D Einstein gravity, BY prescribe the following reference action
\begin{equation}
\label{BYref}
S^0 = -\int_{ {}^3 {\sf B}} d^3 x \> \left[ N \sqrt{\sigma } (k/ \kappa )|_0
+ 2\sqrt{\sigma} V^a (\sigma_{ai} n_j P^{ij} / \sqrt{h} )|_0 \right]
\ \ ,
\end{equation}
which is a linear functional of the lapse $N$ and shift $V^a$. Above,
${}^3 {\sf B}$ is the time-evolution of a two-boundary ${\sf B}$ that
is embedded in a fixed three-dimensional spacelike slice $\Sigma$ of some
fixed reference spacetime. Also, $k|_0$ and $(\sigma_{ai} n_j P^{ij} /
\sqrt{h} )|_0$ are arbitrary functions of the two-metric $\sigma_{ab}$ on
the boundary ${\sf B}$, $n^j$ is the unit normal to the 2-boundary ${\sf B}$,
and $\{ h_{ij} , P^{ij} \}$ are the canonical 3-metric and the conjugate
momentum on the three-dimensional spacelike slice $\Sigma$. Varying the
lapse in the first term in (\ref{BYref}) gives the energy surface density,
whereas varying the shift in the second term gives the momentum surface
density in the reference spacetime \cite{BY}. Since we mainly discuss the
application of (\ref{BYref}) to evaluate the {\em proper} quasilocal mass or
energy, which is obtained by the variation of the total action (on classical
solutions) with respect to $N$, we will henceforth drop the last term in
(\ref{BYref}) from our consideration.
To calculate the quasilocal energy associated with regions of spacetime
solutions of $(D+1)$-dimensional Einstein gravity, the appropriate
generalization of the BY reference action is again given by (\ref{BYref}),
except that the integration is now over the boundary ${}^D {\sf B}$. The
boundary ${}^D {\sf B}$ itself is the time-evolution of the
$(D-1)$-dimensional spatial boundary ${\sf B}$. For asymptotically flat
spacetimes an appropriate reference background might be
vacuum flat spacetime.
However, for an arbitrary spacetime solution (e.g., spacetimes that are neither
spatially closed nor asymptotically flat), a better-defined prescription
for the choice of $S^0$ is required. Recently, one such prescription was
given by Hawking and Horowitz in their quest for obtaining the total mass of
spacetimes with arbitrary asymptotic behavior in general relativity
\cite{HH}. Their starting point is the ``physical'' action defined as
\begin{equation}
\label{HHref}
S_P (g, f) \equiv S (g, f) - S (g_0, f_0)
\ \ ,
\end{equation}
where $(g_0, f_0)$ are fields specifying a reference static background, which
is a {\em solution} to the field equations. Therefore, the physical action
of the reference background is zero. Given a solution $(g,f)$, in order to
determine a reference background, $(g_0, f_0)$, HH fix a three-boundary
(${}^3 {\sf B}$) near infinity and require that $(g,f)$ induce the same fields
on this boundary as $(g_0, f_0)$. The energy of a solution can be obtained
from the physical Hamiltonian associated with $S_P$ (for details, see Ref.
\cite{HH}) and is similar to the BY quasilocal expression. For asymptotically
flat spacetime solutions, the reference background is chosen to be flat
space and the resulting energy expression agrees with the
one obtained in the ADM formalism.
It is important to note that the HH prescription allows one to compute the
total energy associated with a general time translation $t^\mu = Nu^\mu +
V^\mu$. In a generic case, the resulting energy will have a shift-dependent
contribution, such as the second term in (\ref{BYref}). However, such a term
vanishes when the spacetime is taken to approach a static
background solution and the resulting expression (with $N=1$) is the same as
the BY energy (\ref{energyaux}). Even if the spacetime is asymptotically
non-static, this term will vanish when $V^a \sigma_{ab} =0$. This happens,
e.g., for cosmological solutions with the Robertson-Walker metric.
Building on the work of Brown and York, Chan, Creighton, and Mann
\cite{CCM,KCK} chose a particular reference action to compute the
quasilocal masses of solutions in scalar-tensor theories. In the special case
of SSS spacetimes, it has been shown by CCM that their
choice leads to a conformally invariant referenced quasilocal mass.
A second possibility of obtaining a reference action is to generalize the HH
prescription to scalar-tensor theories. Such an attempt was also made by CCM
\cite{CCM}. However, they conclude that the mass formula obtained using their
generalization of the HH prescription is not conformally invariant. For
details on this issue, we refer the reader to Ref. \cite{CCM}.
In this section, we extend the BY formalism to obtain the referenced
quasilocal mass associated with bounded regions of spacetime solutions (with
arbitrary asymptotic behavior) in scalar-tensor gravity. A relevant question
in such an analysis is whether the reference action can be specified in a
unique way. Finding an answer to this would in itself be an interesting pursuit
and involves addressing issues of positivity of the mass or energy of such
solutions as well as the stability of the corresponding reference solution.
Here, we do not attempt to find if the reference action or solution can be
uniquely specified at all. Below, after discussing the CCM analysis briefly,
we present our alternative generalization of the HH prescription to
scalar-tensor gravity. Although this does not select a unique reference
action, invoking this prescription nevertheless reduces the number of
allowed reference actions. We prove that under certain conditions such a
prescription does lead to a conformally invariant referenced quasilocal mass.
\subsection{The CCM prescription}
\label{subsec:CCMpres}
For a non-minimally coupled action of the type (\ref{SBD}), the reference
action suggested by CCM is \cite{CCM}
\begin{equation}
\label{CCMref}
S^0 = -\int_{{}^D {\sf B}} d^D x \> \bar{N} \sqrt{\bar{\sigma}} U(\phi)
(\bar{k}_{\rm flat} / \kappa )
\ \ ,
\end{equation}
where $\bar{k}_{\rm flat}$ is the trace of the extrinsic curvature of the
$(D-1)$-boundary ${\sf B}$ embedded in a $D$-dimensional flat spatial slice.
Consider the special case of an asymptotically flat SSS
spacetime metric, as a solution in this theory:
\begin{equation}
\label{SSS}
d\bar{s}^2 = - \bar{N}^2 (r) dt^2 + {dr^2 \over \bar{\lambda}^2 (r)} +
r^2 d \omega^2
\ \ ,
\end{equation}
where $\bar{N}$ and $\bar{\lambda}$ are functions of $r$ only, and $d \omega^2$ is
the line element on a unit $(D-1)$-sphere. Let us now make the following
conformal transformation
\begin{equation}
\label{CT1}
\tilde{g}_{ab} = \tilde{\Omega}^2 \bar{g}_{ab} \quad , \quad
\tilde{U} = \tilde{\Omega}^{(1-D)} U \ \ ,
\end{equation}
where $U(\phi)$ is the scalar-field dependent coupling appearing in
(\ref{CCMref}). Note that under this conformal transformation the functional
form of $S^0$,
\begin{equation} \label{CCMref2}
S^0 = -\int_{{}^D {\sf B}} d^D x\>\tilde{N} \sqrt{\tilde{\sigma}}
\tilde{U}(\phi) (\tilde{k}_{\rm flat} / \kappa )
\ \ ,
\end{equation}
remains unchanged provided we assume $\tilde{N} = \tilde{\Omega} \bar{N}$.
Let the metric (\ref{SSS}) be related through this
conformal transformation to the following SSS metric:
\begin{equation}
\label{SSSc}
d\tilde{s}^2 = - \tilde{N}^2 (r) dt^2 + {dr^2 \over \tilde{\lambda}^2 (r)} +
\tilde{\Omega}^2 r^2 d \omega^2 \ \ ,
\end{equation}
which is assumed to arise as a solution to another scalar-tensor theory that
is related to action (\ref{SBD}) by the conformal transformation (\ref{CT1}).
Above, $\tilde{N} = \tilde{\Omega} \bar{N}$ and $\tilde{\lambda}=\tilde\Omega^{-1}
\bar{\lambda}$, where $\tilde{\Omega}$ is a function of $r$ only.
For the special case of the spacetime solution (\ref{SSS}), and with the
choice of reference action (\ref{CCMref}), CCM argue that the quasilocal mass
associated with the region inside a sphere of curvature radius $r$, which is
embedded in spacetime (\ref{SSS}), can be expressed as
\begin{equation}
\label{CCMM}
\bar{M}(r) = {\bar{N}(r) \over \kappa} \left( { (D-1) \bar{A}_{D-1} (r)
U(\phi) \over r} - \bar{\lambda}(r) {d\over dr} \left( \bar{A}_{D-1}
(r) U(\phi) \right) \right) \,.
\end{equation}
Above, $\bar{A}_{D-1}$ is the area of the boundary $(D-1)$-sphere of radius
$r$ given by
\begin{eqnarray}
\bar{A}_{n} &=& \int_{\sf B} d^{n} x \> \sqrt{\bar{\sigma}} \nonumber\\
&=& {(4\pi)^{n/2} \Gamma (n/2) \over \Gamma (n)} r^n \ \ , \label{Sarea}
\end{eqnarray}
$\bar{\sigma}_{ij}$ being the metric on ${\sf B}$. Note that the first term in
Eq. (\ref{CCMM}) is just the reference term
\begin{equation}
\label{CCMrefterm}
\bar{M}^0 = -\int_{{}^D {\sf B}} d^{D-1} x \> \bar{N} \sqrt{\bar{\sigma}}
U(\phi) (\bar{k}_{\rm flat} / \kappa )
\ \ ,
\end{equation}
whereas the second term arises from the quasilocal mass definition
(\ref{massBD}) on using the identity
\begin{equation}
\label{kA}
{1\over \kappa} \int_{\sf B} d^{(D-1)} x \> \sqrt{\bar{\sigma}} \>
\bar{N} \bar{k} = -{\bar{N} (r) \bar{\lambda} (r) \over \kappa} {d\over dr}
\bar{A}_{(D-1)} (r) \ \ ,
\end{equation}
which holds for the SSS metric (\ref{SSS}). We will call Eq. (\ref{CCMM}) the
CCM mass expression. Similarly, CCM find that for the metric (\ref{SSSc}), the
quasilocal mass is
\begin{equation}
\label{CCMMc}
\tilde{M}(r) = {\tilde{N}(r) \over \kappa} \left( { (D-1) \tilde{A}_{D-1} (r)
\tilde{U} \over \tilde{\Omega} r} - \tilde{\lambda}(r) {d\over dr} \left(
\tilde{A}_{D-1} (r) \tilde{U} \right) \right) \ \ ,
\end{equation}
where $\tilde{A}_{(D-1)} = \tilde{\Omega}^{(D-1)} \bar{A}_{(D-1)}$.
Thus, the CCM mass $\bar{M}(r)$ defined in Eq.
(\ref{CCMM}) is invariant under the conformal transformation (\ref{CT1}),
namely, $\bar{M}(r) = \tilde{M}(r)$. To be precise, each term
in (\ref{CCMM}) is separately conformally invariant. Finally, let us emphasize
that unlike in the HH prescription, in the CCM prescription one does not
require the `background' fields appearing in the reference action
(\ref{CCMref}) to constitute a solution of that action. Also, the choice of
the CCM reference action is independent of the asymptotic behavior of the
fields of the solution. (This is the reason why the referenced quasilocal mass
(with $N=1$) in this prescription differs from the Abbott-Deser definition
of the total energy \cite{AD} when applied to asymptotically anti-de Sitter
SSS spacetimes. One can, however, recover this energy expression by
generalizing the CCM reference action to the case of such spacetimes (for
details, see Ref. \cite{BCM}).)
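The term-by-term invariance can also be verified symbolically. In the sketch
below (using the sympy computer-algebra package), `Om`, `Nb`, `lb`, `Ab`, and
`Ub` are placeholder functions of $r$ standing for $\tilde{\Omega}$,
$\bar{N}$, $\bar{\lambda}$, $\bar{A}_{D-1}$, and $U(\phi)$:

```python
import sympy as sp

r, D, kappa = sp.symbols('r D kappa', positive=True)
Om = sp.Function('Om')(r)   # conformal factor Omega~(r)
Nb = sp.Function('Nb')(r)   # barred lapse
lb = sp.Function('lb')(r)   # barred lambda
Ab = sp.Function('Ab')(r)   # barred boundary area A_{D-1}
Ub = sp.Function('Ub')(r)   # coupling U(phi(r))

# barred-frame CCM mass, Eq. (CCMM)
M_bar = (Nb/kappa)*((D - 1)*Ab*Ub/r - lb*sp.diff(Ab*Ub, r))

# tilde-frame quantities under the conformal map (CT1)
Nt, lt = Om*Nb, lb/Om
At, Ut = Om**(D - 1)*Ab, Om**(1 - D)*Ub

# tilde-frame CCM mass, Eq. (CCMMc)
M_til = (Nt/kappa)*((D - 1)*At*Ut/(Om*r) - lt*sp.diff(At*Ut, r))

print(sp.simplify(M_bar - M_til))  # 0
```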
\subsection{An alternative prescription for reference action
and quasilocal mass}
\label{subsec:BL}
First, we extend the applicability of the HH prescription to scalar-tensor
gravity in order to obtain an appropriate reference action. Given a solution,
$(\bar{g},\bar{f})$, one chooses a reference background {\em solution}, $(\bar{
g}_0 ,\bar{f}_0 )$, by using the HH prescription as enunciated above in this
section. Then the appropriate reference action is simply
\begin{equation} \label{HHrefac}
\left[ S[\bar{g}_{ab}, \phi , {\cal F}]+S_{{}^D {\sf B}} [\bar{g}_{ab}, \phi]
\right]_{\rm ref} \ \ ,
\end{equation}
where $S$ and $S_{{}^D {\sf B}}$ are given by Eqs. (\ref{SBD}) and
(\ref{SBDbdry}), respectively, and $[{\rm term}]_{\rm ref}$ denotes the value
of the term as evaluated on the reference solution. In
general, the reference action can depend on the initial and final metrics
$\bar{h}_{ij} (t')$ and $\bar{h}_{ij} (t'')$ through spatial boundary terms,
namely, the first term on the right-hand side of Eq. (\ref{SBDbdry}).
However, in the present calculation such contributions can be dropped since
they do not affect the BY quasilocal mass.
Second, we address the question: If the HH prescription is obeyed by a pair
of solutions, $(\bar{g},\bar{f})$ and $(\bar{g}_0 ,\bar{f}_0 )$, for the
boundary ${\sf B}$ in a given frame, then, will it also be obeyed by
conformally related fields in a conformally related frame? We answer this as
follows. Note that the reference solution $(\bar{g}_0
, \bar{f}_0 )$ is conformally related to that in Einstein gravity, $(g_0 , f_0
)$, by Eq. (\ref{confmet}), where $U$ is now a function of $\phi_0$. Here,
both $U(\phi_0 )$ and the conformal factor $\Omega$ are positive-definite
quantities. Thus, for a solution in scalar-tensor gravity, $(\bar{g},\bar{f})$,
if the lapse $\bar{N}$ and the fields $(\bar{\sigma}_{ab} , \phi)$ induced on
the boundary ${\sf B}$ match with the lapse $\bar{N}_0$ and the fields
$(\bar{\sigma}_{0ab} , \phi_0 )$ at ${\sf B}$ in the reference spacetime, then
for the conformally related configuration $(g, f)$ in the Einstein frame, the
lapse $N$ and the field $\sigma_{ab}$ at ${\sf B}$ will necessarily match with
their reference spacetime counterparts $N_0$ and $\sigma_{0ab}$ induced on the
corresponding boundary. This holds provided $\Omega$ is a monotonic function of
$\phi$. Explicitly, let
\begin{equation} \label{proofBD}
\bar{N} |_{\sf B} = \bar{N}_0 |_{\sf B} \>, \quad
\bar{\sigma}_{ab} |_{\sf B} = \bar{\sigma}_{0ab} |_{\sf B}, \quad
\phi |_{\sf B} = \phi_0 |_{\sf B} \,.
\end{equation}
Then, using the above conditions, we can infer the following requirements on
the Einstein frame fields:
\begin{eqnarray} \label{proofEF}
N|_{\sf B} &=& \left[\bar{N} \Omega^{-1} (\phi)\right]_{\sf B} =
\left[ \bar{N}_0 \Omega^{-1} (\phi_0 ) \right]_{\sf B} = N_0 |_{\sf B} \ \ ,
\nonumber \\
\sigma_{ab} |_{\sf B} &=&\left[ \bar{\sigma}_{ab} \Omega^{-2} (\phi)
\right]_{\sf B} = \left[\bar{\sigma}_{0ab} \Omega^{-2} (\phi_0 )
\right]_{\sf B} = \sigma_{0ab} |_{\sf B} \,.
\end{eqnarray}
This proves that, for such a conformal factor, if the HH
prescription is obeyed in a given frame, say, the scalar-tensor frame, it will
automatically be satisfied in the Einstein frame. It is easy to extend this
proof to the case of any two conformally related frames.
A meaningful referenced quasilocal mass can now be defined. It is simply given
by Eq. (\ref{massBD}), where the reference term, $\bar{M}_0$, is obtained
from the HH prescribed reference action (\ref{HHrefac}) by a BY type
analysis (as described in section \ref{subsec:QEaux}), i.e.,
\begin{equation}
\label{massBDref}
\bar{M}_0= {1\over \kappa} \int_{\sf B} d^{(D-1)} x \> \sqrt{\bar{\sigma}_0}
\>\bar{N}_0 \left( U(\phi_0 ) \bar{k}_0 - \bar{n}^i_0 \partial_i U(\phi_0 )
\right)
\ \ ,
\end{equation}
which is just the first term on the right-hand side of Eq. (\ref{massBD}) as
evaluated on the reference solution.
We now show that the referenced quasilocal mass so obtained is
conformally invariant. In the previous section, we proved that
the unreferenced quasilocal mass is a conformal invariant. What remains to be
verified is that the reference term $\bar{M}_0$ is also invariant under the
transformation $\bar{g}_{0ab} = \Omega (\phi_0) g_{0ab}$. This is easily done
by applying the curvature-transformation identity (\ref{CTbdryac}) to the
above expression for $\bar{M}_0$. This shows that $\bar{M}_0 = M_0$,
which proves the conformal invariance of the reference term. Hence the
referenced quasilocal mass is conformally invariant.
(This, of course, presumes a monotonic $\Omega$.)
We now illustrate this invariance explicitly for the case of SSS spacetimes.
By applying the mass expressions (\ref{massBD}) and (\ref{massBDref})
to the SSS metric (\ref{SSS}), we obtain
\begin{equation}
\label{BLM}
\bar{M}(r) = \left[ {\bar{N}(r) \over \kappa} \bar{\lambda}(r)
{d\over dr} \left( \bar{A}_{D-1} (r) U(\phi) \right) \right]^0_{\rm cl}
\ \ ,
\end{equation}
where $\bar{A}$ is given in Eq. (\ref{Sarea}) and $[{\rm term}]^0_{\rm cl}$
is defined as the difference in the values of the term evaluated on the
reference spacetime and on the spacetime solution whose mass we aim to
compute. Note that in keeping with the HH prescription, we require that at the
boundary, $r=r_B$, the SSS solution satisfies $\bar{N}(r_B) = \bar{N}_0
(r_B)$, $\bar{\sigma}_{ab}(r_B) = \bar{\sigma}_{ab}(r_B)|_0$, and $U(\phi(r_B)
) = U(\phi_0 (r_B))$. To obtain the total mass of an asymptotically flat
spacetime, one first evaluates $\bar{M}(r)$ for general $r$ and then imposes
the limit $r\rightarrow \infty$. In this limit, Eq. (\ref{BLM}) yields the ADM
mass when the reference solution is chosen to be flat. The referenced
quasilocal mass defined in Eq. (\ref{BLM}) is manifestly invariant under the
conformal transformation (\ref{CT1}) and, therefore, is the same as the
expression obtained upon removing the overbars in that equation.
Alternatively, consider applying the mass expressions (\ref{massBD}) and
(\ref{massBDref}) to an SSS metric of the form (\ref{SSSc}), namely,
\begin{equation}
\label{SSSc2}
d\bar{s}^2 = - \bar{N}^2 (r) dt^2 + {dr^2 \over \bar{\lambda}^2 (r)} +
{\Omega}^2 r^2 d \omega^2 \,.
\end{equation}
It is easy to verify that the resulting quasilocal mass expression is
identical to Eq. (\ref{BLM}). However, the area of ${\sf B}$ (as embedded in
metrics of the type (\ref{SSSc2})) is now given as
\begin{eqnarray}
\bar{A}_{n} &=& \int_{\sf B} d^{n} x \> \sqrt{\bar{\sigma}} \nonumber\\
&=& {(4\pi)^{n/2} \Gamma (n/2) \over \Gamma (n)} r^n {\Omega}^n
\,. \label{Sareac}
\end{eqnarray}
Here too the referenced quasilocal mass (\ref{BLM}) remains invariant under
conformal transformations of the metric (\ref{SSSc2}), provided the HH
prescription is followed in determining the reference solution.
To summarize, we define the referenced quasilocal mass of a solution
associated with a boundary ${\sf B}$ as the difference of its unreferenced
quasilocal mass from that of a reference field configuration, which is also a
solution of the theory and obeys the HH prescription. Under a conformal
transformation, this pair of solutions has its ``image'' pair, which comprises
two solutions in the conformally related frame; in that frame, the
referenced quasilocal mass is again the difference of the unreferenced
quasilocal masses of these two image solutions. To investigate the behavior of
the referenced quasilocal mass under a conformal transformation, one must
therefore study how the unreferenced quasilocal masses of these two solutions
transform under the conformal map ${\bar g}_{ab} = \Omega(\phi) g_{ab}$ and
${\bar g}_{0ab} = \Omega(\phi_0) g_{0ab}$, {\em respectively}. Such a study
reveals the conformal invariance of our referenced quasilocal mass (\ref{BLM}).
We end this section by noting that, when applied to the case of asymptotically
flat SSS spacetimes, there is a subtle but significant
difference between the quasilocal mass definition (\ref{BLM}), which we
propose above, and the mass definition that CCM obtain by their
generalization of the HH prescription \cite{CCM}, namely,
\begin{equation}
\label{HHCCM}
\bar{M}(r) = \left[ {\bar{N}(r) \over \kappa} \left( 1-\bar{\lambda}(r) \right)
{d\over dr} \left( \bar{A}_{D-1} (r) U(\phi) \right) \right]_{\rm cl}
\,.
\end{equation}
Specifically, consider the case of SSS metrics of the form (\ref{SSS}). Then,
the above formula can be obtained from (\ref{BLM}) in two steps. First, one
sets $\bar{\lambda}(r)|_0 =1$ in Eq. (\ref{BLM}). This can always be done, for
the reference spacetime solution in such a case is flat. Second, and more
importantly, one {\em assumes} that
\begin{equation}
\label{HHCCMcond}
{d U(\phi_{\rm cl}) \over dr} = {d U(\phi_0) \over dr}
\ \ , \end{equation}
at the boundary ${\sf B}$. This, however, is an additional requirement over and
above those included in the HH prescription. Consequently, Eq. (\ref{HHCCM})
is different from Eq. (\ref{BLM}), where condition (\ref{HHCCMcond}) is not
assumed. This is also the reason why Eq. (\ref{HHCCM}), as opposed to Eq.
(\ref{BLM}), fails to be conformally invariant.
In the next section we apply our mass definition (\ref{BLM}) to find
the referenced quasilocal masses of charged black holes in 4D dilaton gravity,
and their conformally related cousins in 4D Einstein gravity.
\section{Quasilocal mass in scalar-tensor theories of gravity: Examples}
\label{sec:QEex}
\subsection{Asymptotically flat SSS spacetimes}
\label{subsec:4Ddilgrav}
Let us consider the charged black hole solutions of the four-dimensional
dilaton gravity action (see Refs. \cite{rev1,rev2} for reviews)
\begin{equation}
\label{S4Ddil}
S = {1\over 2\kappa} \int d^4 x \> \sqrt{-\bar{g}}e^{-2\phi}
[ \bar{R}+ 4(\bar{\nabla} \phi)^2 - 2\Lambda - \bar{F}^2]
\ \ ,
\end{equation}
where $\bar{R}$ is the four-dimensional Ricci scalar, $\Lambda$ is a
cosmological constant and $\bar{F}_{\mu\nu}$ is the Maxwell field associated
with a U(1) subgroup of ${\rm E}_8 \times {\rm E}_8$. In this subsection we
will consider the case where $\Lambda = 0$. The magnetically charged black
hole solution to the above action is \cite{GM,GHS}
\begin{eqnarray}
d \bar{s}^2 &=& -{e^{2\phi_\infty}(1-2m e^{\phi_\infty} /r)
\over (1-Q^2 e^{-\phi_\infty} /(mr))}dt^2 \nonumber \\
&&+ {dr^2 \over (1-2me^{\phi_\infty} /r) (1-Q^2 e^{-\phi_\infty} /(mr))}
+r^2 d\omega^2 \ \ ,
\label{4Ddilbh} \\
e^{-2\phi} &=& e^{-2\phi_\infty} \left( 1 - {Q^2 e^{-\phi_\infty} \over mr}
\right) = U(\phi)\ \ ,
\label{4Ddil} \\
\bar{F}&=&Q\sin \theta d\theta \wedge d\phi \ \ ,
\label{4DF}
\end{eqnarray}
where $m$ and $Q$ are classical hairs of the stringy black hole and
$\phi_\infty$ is the asymptotic constant value of the dilaton. Above, $m$ is
also called the Schwarzschild mass of the spacetime and $Q$ is the magnetic
charge of the black hole. The strings couple to the above metric,
$\bar{g}_{\mu\nu}$, as opposed to the one related through the conformal
transformation $g_{\mu\nu} \equiv e^{-2\phi} \bar{g}_{\mu\nu}$, which casts
the above action in the Hilbert form.
We will now demonstrate that the quasilocal mass of a spatial region enclosed
inside the two-sphere of curvature radius $r_B$ is conformally invariant. We
first calculate the mass in the string frame. Since the spacetime
(\ref{4Ddilbh}) is asymptotically flat, we choose the reference metric to be
flat\footnote{See the discussion in section \ref{sec:disc}.}:
\begin{equation}
\label{GHS-ref}
d \bar{s}^2_0 = -\bar{N}^2_0 {dt}^2 + dr^2 +{r}^2 d\omega^2
\ \ , \end{equation}
where $\bar{N}_0$ is a constant. Note that the above metric is a solution of
the action (\ref{S4Ddil}) with $\phi_0 =$ constant and $\bar{F}_0 = 0$. A
two-sphere boundary of curvature radius $r=r_B$ can be isometrically
embedded in both the above spacetimes (\ref{4Ddilbh}) and (\ref{GHS-ref}).
For the lapse at the boundary to match in these spacetimes, we choose
\begin{equation}
\bar{N}_0 = {e}^{\phi_\infty} \left(1- {2me^{\phi_\infty} \over r_B}
\right)^{1/2} \left(1- {Q^2 e^{-\phi_\infty} \over mr_B} \right)^{-1/2} \,.
\end{equation}
For the remaining HH requirement to be satisfied, the value of $\phi$ induced
at the boundary in these spacetimes should match. This implies that on the
reference spacetime (\ref{GHS-ref}), one must have
\begin{equation}
\label{phi-string}
e^{-2\phi_0} = e^{-2\phi_\infty} \left(1- {Q^2 e^{-\phi_\infty} \over
mr_B}\right) = U(\phi_0 )
\end{equation}
everywhere. Using these expressions in (\ref{BLM}), we find that the
quasilocal mass is
\begin{eqnarray}
\bar{M}(r_B) &=& e^{-\phi_\infty} r_B \Bigg[
-\left(1- {Q^2 e^{-\phi_\infty} \over mr_B}\right) \left(1-{2me^{\phi_\infty}
\over r_B}\right)- {e^{-\phi_\infty} Q^2 \over 2mr_B} \left(1 -
{2me^{\phi_\infty} \over r_B}\right) \nonumber\\
&&+\sqrt{\left(1-{Q^2 e^{-\phi_\infty} \over mr_B}\right) \left(1-
{2me^{\phi_\infty} \over r_B}\right)}\Bigg] \label{M1b-string} \,.
\end{eqnarray}
In the limit $r_B \to \infty$, $\bar {M}(r_B) \to m$.
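This limit, and the agreement of Eq. (\ref{M1b-string}) with the defining
expression (\ref{BLM}), can be checked symbolically. The following sketch uses
the sympy computer-algebra package (the symbol `p` stands for $\phi_\infty$,
and $\kappa = 8\pi$ is assumed):

```python
import sympy as sp

r, rB, m, Q, p = sp.symbols('r r_B m Q phi', positive=True)
kappa = 8*sp.pi

A = 1 - 2*m*sp.exp(p)/r           # Schwarzschild-type factor in (4Ddilbh)
B = 1 - Q**2*sp.exp(-p)/(m*r)     # dilaton factor
N = sp.exp(p)*sp.sqrt(A/B)        # lapse of the string metric
lam = sp.sqrt(A*B)                # lam**2 = 1/g_rr
U = sp.exp(-2*p)*B                # Eq. (4Ddil)
area = 4*sp.pi*r**2               # boundary 2-sphere area

# classical and reference terms of Eq. (BLM); the flat reference has
# lambda_0 = 1, with N_0 and phi_0 frozen at their r = r_B values
cl = (N*lam/kappa)*sp.diff(area*U, r)
ref = (N.subs(r, rB)/kappa)*sp.diff(area*U.subs(r, rB), r)
M = (ref - cl).subs(r, rB)

print(sp.limit(M, rB, sp.oo))  # m
```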
We next study the Einstein-frame solution that is conformally related to
(\ref{4Ddilbh}) through the conformal transformation (\ref{auxmet}), where
\begin{equation}
\label{CT2}
U = {e}^{-2\phi_\infty}
\left(1- {Q^2 {e}^{-\phi_\infty} \over {mr}}\right) \,.
\end{equation}
Thus, the Einstein metric is
\begin{eqnarray}
{ds}^2 &=& -\left( 1-{2me^{\phi_\infty} \over r}\right){dt}^2 +
{e}^{-2\phi_\infty} \left(1- {2me^{\phi_\infty} \over r}\right)^{-1} {dr}^2
\nonumber \\
&&+ {e}^{-2\phi_\infty} r^2 \left( 1- {Q^2 e^{-\phi_\infty} \over mr}\right)
{d\omega}^2 \,. \label{GHSE}
\end{eqnarray}
Once again, since the above spacetime is asymptotically flat, we choose the
reference metric to be flat:
\begin{equation}
\label{GHSE-flat}
{ds}^2_0 = -{N}^2_0 {dt}^2 + d\rho^2 + \rho^2 {d\omega}^2 \ \ ,
\end{equation}
where $\rho$ is the radial coordinate and $N_0 =$ constant. In the
above coordinates, a two-sphere (with $t$ and $\rho$ constant) embedded in
this reference spacetime is not isometric with a two-sphere (with $t$ and
$r$ constant) embedded in spacetime (\ref{GHSE}). However, they can be made
isometric by defining $\rho$ in terms of the curvature coordinate $r$ as
\begin{equation}
\label{r-rho}
{\rho} = r \left( 1- {Q^2 {e}^{-\phi_\infty} \over mr_B}\right)^{1/2}
{e}^{-\phi_\infty} \,.
\end{equation}
One can implement this coordinate transformation in either Eq. (\ref{GHSE})
or (\ref{GHSE-flat}). Both choices yield the same mass expressions. We choose
to apply it in Eq. (\ref{GHSE-flat}). In these coordinates, the flat metric
gets recast to
\begin{equation}
\label{GHSE-ref}
{ds}^2_0 = -{N}^2_0 {dt}^2 + {e}^{-2\phi_\infty}\left(1-{Q^2
{e}^{-\phi_\infty} \over mr_B}\right) dr^2 + r^2 \left(1-{Q^2
{e}^{-\phi_\infty} \over mr_B}\right) {e}^{-2\phi_\infty} d{\omega}^2.
\end{equation}
For matching the lapse on the boundary at $r=r_B$, we require
\begin{equation}
\label{lapse1-ref}
N_0 = \left( 1- {2m{e}^{\phi_\infty} \over r_B}\right)^{ 1/2}
\end{equation}
everywhere. Note that the flat metric (\ref{GHSE-ref}) is indeed conformally
related to the reference solution (\ref{GHS-ref}) in the string frame.
By the application of (\ref{BLM}), we find that the
quasilocal mass turns out to be that given in (\ref{M1b-string}). This shows
that the quasilocal mass at any $r$ is a conformal invariant.
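The equality of the string-frame and Einstein-frame masses can likewise be
verified symbolically (a sympy sketch; `p` again stands for $\phi_\infty$,
with $\kappa = 8\pi$ and the boundary data matched at $r = r_B$ as described
above):

```python
import sympy as sp

r, rB, m, Q, p = sp.symbols('r r_B m Q phi', positive=True)
kappa = 8*sp.pi
A = 1 - 2*m*sp.exp(p)/r
B = 1 - Q**2*sp.exp(-p)/(m*r)

# string frame, Eqs. (4Ddilbh)-(4Ddil): U = exp(-2 phi)
Ns, ls, Us = sp.exp(p)*sp.sqrt(A/B), sp.sqrt(A*B), sp.exp(-2*p)*B
As = 4*sp.pi*r**2
Ms = ((Ns.subs(r, rB)/kappa)*sp.diff(As*Us.subs(r, rB), r)
      - (Ns*ls/kappa)*sp.diff(As*Us, r)).subs(r, rB)

# Einstein frame, Eq. (GHSE): U = 1, conformal factor absorbed in the area
Ne, le = sp.sqrt(A), sp.exp(p)*sp.sqrt(A)
Ae = 4*sp.pi*sp.exp(-2*p)*r**2*B
# flat reference (GHSE-ref), matched to (GHSE) on the boundary r = r_B
N0, l0 = sp.sqrt(A.subs(r, rB)), sp.exp(p)/sp.sqrt(B.subs(r, rB))
A0 = 4*sp.pi*sp.exp(-2*p)*r**2*B.subs(r, rB)
Me = ((N0*l0/kappa)*sp.diff(A0, r) - (Ne*le/kappa)*sp.diff(Ae, r)).subs(r, rB)

num = {m: 2, Q: sp.Rational(1, 3), p: sp.Rational(1, 10), rB: 50}
print(abs(float((Ms - Me).subs(num))) < 1e-9)  # True
```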
We now consider electrically charged black hole solutions of the action
(\ref{S4Ddil}). The associated metric, dilaton, and the non-vanishing
Maxwell field tensor components are
\begin{eqnarray}
d \bar{s}^2 &=& -{e^{-\phi_\infty}\left(1+ (Q_e^2 -2m^2
e^{2\phi_\infty}) /(me^{\phi_\infty} r)\right)
\over \left(1+Q_e^2 /(m e^{\phi_\infty} r) \right)^2 }dt^2 \nonumber \\
&& + {dr^2 \over \left(1+(Q_e^2 -2m^2 e^{2\phi_\infty}) /(me^{\phi_\infty} r)
\right)} +r^2 d\omega^2
\label{4Ddilbhes} \ \ , \\
U(\phi) &=& e^{-2\phi}= \left( 1 + {Q^2_e e^{-\phi_\infty} \over mr} \right)
\label{4Ddile} \ \ , \\
\bar{F}_{tr}&=&{Q_e e^{4\phi} \over r^2 } \,.
\label{4DFes}
\end{eqnarray}
Since this spacetime is asymptotically flat, we choose the reference solution
to be flat with the metric (\ref{GHS-ref}), where the lapse is just
$\sqrt{-\bar{g}_{tt}}$ in Eq. (\ref{4Ddilbhes}) evaluated at $r=r_B$.
By applying our prescription for finding the quasilocal mass (as we did in the
case of the magnetically charged black holes) to this case,
we find that in the string frame
\begin{equation}\label{M1es}
\bar{M}(r_B) = {e}^{-\phi_\infty} r_B \Bigg\{\bar{\lambda}
- \bar{\lambda}^2 + {Q_e^2 \bar{\lambda}^2 \over 2m{e}^{
\phi_\infty}r_B} \left(1+{Q_e^2 \over m{e}^{
\phi_\infty}r_B}\right)^{-1} \Bigg\} \ \ ,
\end{equation}
where $\bar{\lambda}^{-2} \equiv \bar{g}_{rr} (r_B )$ in Eq.
(\ref{4Ddilbhes}). Thus, the total mass of the spacetime is once again $m$.
On the other hand, in the Einstein frame the metric is
\begin{eqnarray}
d{s}^2 &=& -e^{-\phi_\infty}\left(1- {2m^2
e^{2\phi_\infty} \over me^{\phi_\infty} r + Q_e^2 }\right) dt^2 \nonumber \\
&&+ \left(1- {2m^2 e^{2\phi_\infty} \over me^{\phi_\infty} r + Q_e^2 }
\right)^{-1} dr^2
+ r^2 \left(1+ {Q_e^2 \over me^{\phi_\infty} r }\right) d\omega^2
\ \ , \label{4DdilbheE}
\end{eqnarray}
which is related to the string metric (\ref{4Ddilbhes}) via the conformal
transformation (\ref{auxmet}), where $U$ is given by (\ref{4Ddile}). Since the
above solution is asymptotically flat, the reference metric is chosen to be
flat once again:
\begin{equation}
d {s}^2_0 = -{N}_0^2 dt^2 + \left(1+ {Q_e^2 \over me^{\phi_\infty}r_B
}\right) dr^2 + r^2 \left(1+ {Q_e^2 \over me^{\phi_\infty} r_B }\right)
d\omega^2 \ \ ,
\end{equation}
where, as in (\ref{GHSE-ref}), we use coordinates such that the 2-sphere
boundary at $r=r_B$ is manifestly isometric with that in the spacetime
(\ref{4DdilbheE}). Also, $\phi_0$ is defined so as to match with the solution
(\ref{4Ddile}) at the boundary 2-sphere at $r=r_B$
\begin{equation}
e^{-2\phi_0} = U(\phi_0)|_{r=r_B} = 1+Q_e^2 / (me^{\phi_\infty} r_B) \,.
\end{equation}
This is then the value of $\phi_0$ everywhere in the reference spacetime.
Similarly the reference lapse ${N}_0$ is chosen to be $\sqrt{-g_{tt}}$
in (\ref{4DdilbheE}) evaluated at $r=r_B$. Our prescription for evaluating the
quasilocal mass then yields the same expression as found in
(\ref{M1es}) for the string frame, thus demonstrating its conformal
invariance.
\subsection{Asymptotically non-flat black holes}
To demonstrate that our prescription yields a conformally invariant
definition of quasilocal mass even for asymptotically non-flat solutions
we consider a particular black hole solution of Chan, Horne, and Mann
that arises from the following action \cite{CHM}:
\begin{equation}
\label{CHMacE}
S = {1\over 2 \kappa} \int{d^4 x }\sqrt{-g} \> [R -2(\nabla\phi)^2
-{e}^{-2\phi} F^2] \,.
\end{equation}
The fields of the electrically charged black hole solution in this theory are
\begin{eqnarray}
{ds}^2 &=& -{1 \over{\gamma^4}} (r^2 - 4{\gamma^2}M){dt^2} +
{4r^2 \over r^2 -{4\gamma^2}M }{dr^2} + r^2 d\omega^2 \label{CHMbhE} \ \ ,\\
{e}^{-2\phi} &=& {2Q^2 \over r^2} \label{CHMdil} \ \ ,\\
F_{tr} &=& {r \over 2Q\gamma^2} \label{CHMF} \ \ ,
\end{eqnarray}
where $\gamma$ is a constant with dimensions of $\sqrt{r}$ and $Q$ is the
electric charge.
In this case, there is no unique way to choose the reference geometry. Here,
we choose to compare the quasilocal mass of the above solution with respect to
a geometry whose (non-flat) space part of the metric is determined by setting
$M=0$ in (\ref{CHMbhE}). Thus, our reference geometry is:
\begin{equation}
\label{CHM-refE}
{ds^2}_0 = - N_0^2 dt^2 + 4{dr}^2 + r^2d{\omega}^2.
\end{equation}
For the 2-sphere boundary at $r = r_B$, the HH prescription dictates that
\begin{equation} \label{N2}
N_0 = {1 \over \gamma^2} ({r^2}_B - 4\gamma^2 M)^{1/2}
\end{equation}
is obeyed everywhere. Our prescription for the quasilocal mass then yields
\begin{equation}
\label{MCHME}
M(r_B) = {r_B^2 \over 2{\gamma}^2} \left[
\sqrt{ 1-{4{\gamma}^2 M \over
{r^2}_B}} - 1 + {4{\gamma}^2 M \over {r^2}_B} \right]
\,. \end{equation}
As $r_B \to \infty$, $M(r_B) \to M$.
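The limit can be checked by expanding the square root in (\ref{MCHME}) for large $r_B$, which gives
\begin{equation}
M(r_B) \simeq M - {\gamma^2 M^2 \over {r^2}_B} + \cdots \ \ ,
\end{equation}
so that the quasilocal mass approaches $M$ from below as the boundary is pushed to infinity.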
We next consider the string action conformally related to (\ref{CHMacE}):
\begin{equation}
\label{CHMacs}
S = {1 \over 2{\kappa}} \int d^4 x \sqrt{-\bar g} e^{-2\phi}
[\bar R + 4 (\bar{\nabla} \phi)^2 - \bar{F}^2]
\end{equation}
where the conformal factor is given by
\begin{equation}
\label{CT-CHM}
{\Omega} = e^{\phi} = {r \over \sqrt{2} Q} \>\>\> \left( = U^{-1/2}\right) \,.
\end{equation}
Therefore, the string metric conformally related to (\ref{CHMbhE}) is
\begin{equation}
\label{CHMs}
d\bar{s}^{2} = -{r^2 \over 2Q^2 {\gamma}^4} ( r^2 - 4{\gamma}^2 M ) dt^2 +
{2r^2/Q^2 \over 1-4{\gamma}^2 M/r^2}dr^2 + {r^4 \over 2Q^2} d{\omega}^2 \,.
\end{equation}
The space part of the reference string metric is chosen by setting $M=0$
above, which gives the reference metric to be:
\begin{equation}
\label{CHM-refs}
ds_0^2 = - \bar{N}_0^2 dt^2 +{ 2r^2 \over Q^2} dr^2 +
{r^4 \over 2Q^2} d{\omega}^2 \ \ ,
\end{equation}
where the reference lapse is obtained by matching with that in Eq.
(\ref{CHMs}) at the boundary $r=r_B$.
\begin{equation}
\bar N_0 = {r_B \over \sqrt{2} Q\gamma^2} ({r^2}_B - 4\gamma^2 M)^{1/2} \,.
\end{equation}
With these prescribed choices for the reference fields, we find that the
quasilocal mass is
\begin{equation}
\label{MCHMs}
\bar M (r_B) = { r_B \over 2{\gamma}^2 } \left[ 1- \sqrt{1-4{\gamma}^2 M
\over {r^2}_B}\right] ({r^2}_B - 4\gamma^2 M )^{1/2} \ \ ,
\end{equation}
which is the same as that evaluated in the Einstein frame, namely,
Eq. (\ref{MCHME}).
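Indeed, writing $s \equiv (1- 4\gamma^2 M/{r^2}_B)^{1/2}$, so that $({r^2}_B - 4\gamma^2 M)^{1/2} = r_B s$, Eq. (\ref{MCHMs}) becomes
\begin{equation}
\bar M (r_B) = {{r^2}_B \over 2{\gamma}^2}\, s \left(1- s\right)
= {{r^2}_B \over 2{\gamma}^2} \left[ s - 1 + {4{\gamma}^2 M \over {r^2}_B} \right] \ \ ,
\end{equation}
where $s^2 = 1- 4{\gamma}^2 M/{r^2}_B$ has been used in the last step; this is precisely the Einstein frame expression (\ref{MCHME}).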
\section{Discussion}
\label{sec:disc}
Naive expectations from quantum field theory would suggest that physical
quantities should remain invariant under a conformal transformation. However,
when it comes to the behavior of quasilocal mass under such a transformation,
one must bear caution. This is because {\em a priori} it can not be ruled out
that in some frames the scalar field $\phi$, which defines the conformal
factor, itself contributes to the energy-momentum of the spacetime. In this
paper we showed that, the preceding caveat notwithstanding, the unreferenced
BY quasilocal mass is indeed conformally invariant.
However, to obtain the physical mass of a spacetime one is often required to
subtract a reference term. At the level of the action, this is achieved by
subtracting a reference action. Different choices of reference action will
lead to different physical masses for the same classical solution. Moreover,
the reference term $\bar{M}_0$ arising from such actions and, consequently,
the referenced quasilocal mass may not be conformally invariant.
In this paper, we attempted to reduce the arbitrariness in the choice of a
reference action. We motivated this choice from a basic principle, in the form
of the Hawking-Horowitz prescription, which requires the reference geometry to
obey certain conditions. We proved that this prescription automatically gives
rise to a conformally invariant referenced quasilocal mass if the conformal
factor is monotonic in the scalar field.
We note, however, that the HH prescription does not attempt to specify a
unique reference geometry, owing to which the referenced quasilocal mass
is non-unique, albeit conformally invariant. It is only in some special
cases that one can obtain a unique physical mass. Asymptotically flat
spacetime solutions of general relativity belong to this category. There the
positive energy theorem and the stability criterion for Minkowski spacetime
ensure that under certain positivity conditions on the energy-momentum
tensor, the total energy of such spacetimes is positive; it is zero only for
the Minkowski spacetime. This selects the flat spacetime as a very
special reference geometry for calculating the total energy and, in certain
cases, the quasilocal mass/energy of regions in such spacetimes. The
conformal invariance of quasilocal mass implies that in conformally related
spacetimes, which are asymptotically flat, the flat spacetime continues
to be a special reference geometry.
In this vein, one may argue that if the positive energy theorem could be
shown to hold for asymptotically non-flat cases, at least of a limited type
such as the SSS spacetimes, then a corresponding special reference geometry
may emerge, which could be used under the HH prescription to compute the
referenced quasilocal mass in such spacetimes in some unique way. This and
other related issues are currently under study \cite{SB}.
\section{Acknowledgments}
We thank Valerio Faraoni, Sayan Kar, Jorma Louko, and Sukanya Sinha for
helpful discussions. One of us (S. B.) is grateful to the members of the
Theoretical Physics Group at the Raman Research Institute, Bangalore, India,
where part of this work was done, for their hospitality. We would also like to
thank IUCAA for financial support.
\section{Introduction}
\label{sec:intro}
We live in a dynamic and changing world. In the NLP context, a language model (LM) about COVID may quickly become outdated due to the ever-changing variants. It is desirable to continually update the LM using the latest data. A dialogue system needs to keep learning new skills.
A sentiment analysis system needs to constantly learn to classify opinions about new products.
How to make these systems incrementally learn new tasks/domains as we humans do is a \textit{continual learning} problem.
Continual learning (CL) is defined as incrementally learning a sequence of tasks $1, \dots, \mathcal{T}$~\cite{chen2018lifelong}.\footnote{{\color{black}We refer readers to \cite{chen2018lifelong} for the difference between CL and other machine learning paradigms.}}
Once a task is learned, its training data (at least a majority of it) is no longer accessible. In NLP, a \textit{task} can be an \textit{end-task} (e.g., text classification, summarization and information extraction), which is usually supervised~\cite{ke2021achieving,DBLP:conf/aaai/MonaikulCFR21,DBLP:conf/iclr/QinJ22}, or a corpus \textit{domain} used to further pre-train or post-train
an LM, which is usually unsupervised~\cite{gururangan2021demix,ke2022cpt,scialom2022continual}. The former is called \textit{continual end-task fine-tuning}, and the latter
\textit{continual pre-training} (CPre) or \textit{continual post-training} (CPost). CPost differs from CPre in that CPre trains an LM from scratch while CPost trains a pre-trained LM on a sequence of domains~\cite{DBLP:conf/acl/GururanganMSLBD20}.
\begin{table}
\centering
\scalebox{0.665}{
\begin{tabular}{l|l|l}
\hline\hline
Task-id & Domain/Task & One Training Example (in that domain/task)\\
\hline\hline
1 & Vacuum Cleaner [CF] & This vacuum cleaner \textit{sucks} !!! \\
2 & Desktop [KT] & The keyboard is clicky .\\
3 & Tablet [KT] & The soft keyboard is hard to use. \\
\hline
4 (new task) & Laptop & The new keyboard \textit{sucks} and is hard to click!\\
\hline
\end{tabular}
\vspace{-1mm}
}
\caption{Examples showing that both CF and KT are essential in NLP. Tasks 2 and 3 share knowledge (about keyboards), so KT is needed. Although the same word (\textit{sucks}) is used, task 1 and the new task carry task-specific knowledge, so CF prevention is needed.
Note that here we use only one sentence to represent a task, but each task actually represents a domain with all its sentences.}
\label{tbl:example}
\vspace{-3mm}
\end{table}
CL has two main objectives: overcoming \textit{catastrophic forgetting} (CF) and performing \textit{knowledge transfer} (KT) across tasks. CF refers to the phenomenon that when learning a new task, the system needs to update the existing network parameters learned from previous tasks, which may degrade the performances of the previous tasks~\cite{mccloskey1989catastrophic}. KT refers to the ability to transfer the knowledge learned in the past to the new task learning (\textit{forward transfer}). It is also desirable to enable the new task learning to improve the models of previous tasks (\textit{backward transfer}).
There are two CL modes:
\textit{offline CL} and \textit{online CL}~\cite{mai2022online}. In offline CL, all the training data for a task is available upfront and the training can take any number of epochs. In online CL, the data for each task comes in a data stream. Whenever a batch of data is accumulated in the stream, it is trained in one iteration. So online CL trains only in one epoch. There are also several CL settings.
\textbf{(1) Task-incremental learning (TIL)}~\cite{Ven2019Three}. In this setting, the task-identifier (task-id) is available in training and testing. The task-id can be used in continual end-task fine-tuning, CPre or CPost to identify the task-specific model to use in testing. The classes (if any) in the tasks may or may not be disjoint (e.g., one task classifies different types of political documents and another task classifies documents of different broad topics, including a \textit{politics} class). When the classes are not disjoint, the task-id is necessary in testing.
Some recent approaches can already prevent CF for TIL. The main challenge is KT.
\textbf{(2) Task incremental learning without task-id in testing (TILw)}. TILw differs from TIL in that TILw assumes the task-id is unknown in testing. This setting is mainly for CPre and CPost because the task-id is usually not required in end-task fine-tuning for evaluation.
\textbf{(3) Class-incremental learning (CIL)}~\cite{Ven2019Three}. CIL assumes no task-id is provided in testing. CIL is mainly used for supervised end-task learning, where each task has a set of non-overlapping classes. Only a single model is built for all classes learned so far. In testing, a test instance from any class may be presented for classification. CIL is more challenging than TIL as the system needs to distinguish among tasks.
\textbf{(4) Domain-incremental learning (DIL)}. DIL assumes the class labels are the same across all tasks (from different domains)~\cite{Ven2019Three}. A single head is used in the model. Generation tasks fit the DIL setting as the LM head has the same number of ``classes'' (vocabulary tokens). In testing, task-id is also provided.
\textbf{(5) Domain-incremental learning without task-id in testing (DILw).} DILw differs from DIL in that no task-id is provided in testing. Like CIL, DIL and DILw are for supervised task learning.
In existing machine learning (ML) and computer vision (CV) research, almost exclusively TIL and CIL are studied. However, in NLP, all settings have been used due to NLP's rich types of tasks. Furthermore, CL research in ML and CV mainly focuses on overcoming CF. In NLP, KT is also extremely important. That is because (1) in text, words and phrases in different tasks or domains normally have the same meaning and (2) many NLP tasks are similar and have common knowledge, e.g., different information extraction tasks and different sentiment classification tasks (Table~\ref{tbl:example} shows some examples in aspect sentiment classification~\cite{ke2021Classic}). In practice, dealing with CF alone leads to lower performance, which is often not acceptable in applications. Only if both CF and KT are handled can CL become practically useful. Several techniques have achieved improved performance compared to training each task separately~\cite{ke2021achieving,ke2021adapting,ke2021Classic,DBLP:conf/kdd/0001PLCQZHCL021}.
\textbf{Existing surveys.} To our knowledge, this survey is the first to cover all CL settings (TIL, TILw, CIL, DIL, DILw), the state-of-the-art approaches that deal with both CF and KT (4 families for CF and 5 families for KT), both offline and online CL, and a wide range of NLP tasks (see Table~\ref{tab:nlp_to_cl}).
To our knowledge,
\cite{biesialska2020continual} is the only survey on NLP-based CL. {\color{black}However, as it was published 2 years ago, its coverage is very limited.}
It does not mention KT or the different CL settings, which are critical because different settings target different problems and require different techniques, and KT is extremely important for NLP. It also misses many approaches specifically for NLP (e.g., instruction-based methods).
Other CL surveys all focus on CV tasks. \cite{DBLP:journals/corr/abs-1909-08383} is only about TIL, \cite{masana2020class,belouadah2021comprehensive} are only about CIL, and~\cite{mai2022online} is only about online CL. Some tried to cover more settings~\cite{parisi2019continual,hadsell2020embracing,van2019three,qu2021recent}. \cite{10.1162/neco_a_01433} focuses on biological inspirations and~\cite{lesort2020continual} focuses on autonomous systems. None of them considers pre-training, post-training, or KT because these are more specific to NLP tasks.
\section{Problem Formulation and Applications}
This section formally defines CL and summarizes different CL settings and their NLP applications.
\textbf{Continual Learning (CL).}
CL learns a sequence of tasks $1, \dots, \mathcal{T}$ incrementally. Each task has a training data set $D_{t}$.
We denote the samples in $D_t$ as $(\mathcal{X}_t,\mathcal{Y}_t)$, where $\mathcal{X}_t$ is the set of data samples of task $t$ and $\mathcal{Y}_t$ is the corresponding class labels (for an unsupervised task, there is no $\mathcal{Y}_t$). In the following, we use supervised tasks as an example (unsupervised tasks are usually converted to self-supervised tasks). The goal of CL is to minimize the expected loss of \textit{all seen tasks} given limited or no access to the data of previous tasks $t < \mathcal{T}$, where $\mathcal{T}$ is the number of tasks seen so far:
\begin{equation}
\label{eq:cl}
\sum_{t=1}^{\mathcal{T}}\mathop{\mathbb{E}}_{(\mathcal{X}_t,\mathcal{Y}_t)}[\mathcal{L}(f_t(\mathcal{X}_t;\theta),\mathcal{Y}_t)]
\end{equation}
$f_t$ represents the network for task $t$ and $\theta$ represents the network parameters/weights. Eq.~\ref{eq:cl} is easily approximated for the current task $\mathcal{T}$ alone because the current task data is available:
\begin{equation}
\label{eq:cur_task}
\frac{1}{N_{\mathcal{T}}}\sum_{i=1}^{N_{\mathcal{T}}}\mathcal{L}(f(x^i_{\mathcal{T}};\theta),y^i_{\mathcal{T}})
\end{equation}
However, the training data is no longer available for old tasks, so the expected loss cannot be approximated using the statistical risk. The main research question in CL is how to determine the optimal parameters $\theta$ to optimize Eq.~\ref{eq:cl}.
\textbf{Different CL settings and their NLP applications.} See Table~\ref{tab:nlp_to_cl}. The first two columns show the different CL settings and their properties, which affect how an NLP problem is formulated as a CL task. We can see that a majority of NLP problems belong to DIL or DILw. This is very different from CV, where most problems fit TIL or CIL. This is mainly because a language model (LM) is naturally a text generator and recent work on large LMs relies on the intuition that most NLP tasks can be described via natural language instructions. With the instructions, different types of tasks (e.g., classification or extraction) can be converted to generation tasks and thus a shared LM head is applied across all tasks (Sec.~\ref{sec:preliminary}).
\textbf{Overcoming catastrophic forgetting (CF).} Optimizing Eq.~\ref{eq:cl} is very challenging. Researchers have observed that a naive training of each task using Eq.~\ref{eq:cur_task} results in very poor performance for previously learned tasks due to changes to parameters learned for previous tasks in learning each new task (i.e., CF).
Extensive CL approaches have been proposed to address CF to \textit{maintain} the performance of the previously learned tasks.
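The effect can be seen even in a one-parameter toy problem; the following sketch (illustrative only, not any method from the literature) runs sequential gradient descent on two quadratic ``tasks'' and shows the first task's loss blowing up after the second task is learned.

```python
# Toy illustration of catastrophic forgetting: sequential gradient descent
# on two one-parameter "tasks" whose losses are quadratics with different
# optima. All numbers are illustrative.

def loss(w, target):
    return (w - target) ** 2

def train(w, target, steps=100, lr=0.1):
    for _ in range(steps):
        w -= lr * 2.0 * (w - target)  # gradient of (w - target)^2
    return w

w = train(0.0, target=1.0)       # learn task A (optimum at w = 1)
loss_A_before = loss(w, 1.0)     # ~0: task A is learned
w = train(w, target=3.0)         # then learn task B (optimum at w = 3)
loss_A_after = loss(w, 1.0)      # ~4: task A has been "forgotten"
```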
\textbf{Encouraging knowledge transfer (KT).} Dealing with CF is insufficient to optimize Eq.~\ref{eq:cl}. Ideally, the optimized $\theta$ after learning the current task ${\mathcal{T}}$ should not only be good for ${\mathcal{T}}$ but also better for previous tasks than $\theta$ before learning ${\mathcal{T}}$
(\textit{backward knowledge transfer}).~Similarly, the new task ${\mathcal{T}}$ should leverage the previously learned knowledge to do better (\textit{forward knowledge transfer}).
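Both notions are commonly quantified from a matrix of per-task accuracies, following the formulation popularized by GEM~\cite{Lopez2017gradient}; a minimal sketch (the accuracy numbers are made up):

```python
# R[i][j]: test accuracy on task j after training up to task i.
# Backward transfer < 0 indicates forgetting; forward transfer > 0
# indicates that past learning helps a not-yet-trained task.

def backward_transfer(R):
    T = len(R)
    return sum(R[T - 1][j] - R[j][j] for j in range(T - 1)) / (T - 1)

def forward_transfer(R, b):
    # b[j]: accuracy of a randomly initialized model on task j
    T = len(R)
    return sum(R[j - 1][j] - b[j] for j in range(1, T)) / (T - 1)

R = [[0.90, 0.20, 0.10],
     [0.85, 0.88, 0.30],
     [0.80, 0.84, 0.92]]
b = [0.10, 0.10, 0.10]
bwt = backward_transfer(R)    # -0.07: mild forgetting
fwt = forward_transfer(R, b)  # 0.15: positive forward transfer
```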
\begin{table*}[t]
\centering
\resizebox{0.8\textwidth}{!}{
\begin{tabular}{c||ccc}
\specialrule{.2em}{.1em}{.1em}
CL Settings & Property & NLP Problems & CL Papers \\
\specialrule{.1em}{.05em}{.05em}
\multirow{7}{*}{TIL} & \multirow{7}{*}{\begin{tabular}[c]{@{}c@{}}$p(\mathcal{X}_i) \neq p(\mathcal{X}_{j}), \text{for}~i \ne j$\\
task-id provided \\ in training and testing\end{tabular}} & \multirow{2}{*}{Aspect Sentiment Classification} & B-CL~\cite{ke2021adapting} \\
& & & CTR~\cite{ke2021achieving} \\
\cline{3-4}
& & \multirow{2}{*}{Intent Classification} & MeLL~\cite{DBLP:conf/kdd/0001PLCQZHCL021} \\
& & & PCLL~\cite{DBLP:journals/corr/abs-2210-07783} \\
\cline{3-4}
& & Slot Filling & PCLL~\cite{DBLP:journals/corr/abs-2210-07783}\\
& & Topic Classification & CTR~\cite{ke2021achieving}\\
& & Mixed diverse tasks & CLIF~\cite{DBLP:conf/emnlp/JinLR021} \\
\hline
\multirow{3}{*}{TIL in CPost} & \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Same as TIL but \\ each task is \\ a corpus domain\end{tabular}} & \multirow{3}{*}{4 post-training domains} & \multirow{3}{*}{CPT~\cite{ke2022cpt}} \\
& & & \\
& & & \\
\hline
\multirow{3}{*}{TILw} & \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Same as TIL but\\ task-id only provided \\ in training\end{tabular}} & Mixed 5 classification tasks & IDBR~\cite{huang2021continual} \\
& & Slot Filling & ProgM~\cite{shen-etal-2019-progressive}\\
& & Sentence Representation & SRC~\cite{DBLP:conf/naacl/LiuUS19} \\
\hline
\multirow{3}{*}{TILw in CPre} & \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Same as TILw but\\ each task is a corpus \\ domain for pre-training\end{tabular}} & 5 pre-training domains & ELLE~\cite{qin2022elle} \\
& & \multirow{2}{*}{8 pre-training domains} & \multirow{2}{*}{DEMIX~\cite{gururangan2021demix}}\\
& & & \\
\hline
TILw for CPost & \begin{tabular}[c]{@{}c@{}}Same as TILw but \\ each task is a corpus\\ domain for post-training\end{tabular} & 10 post-training domains & Continual-T0~\cite{scialom2022continual}\\
\hline
\multirow{6}{*}{CIL} & \multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}
$p(\mathcal{X}_i) \neq p(\mathcal{X}_{j}), \text{for}~i \ne j$ \\
$\mathcal{Y}_i \cap \mathcal{Y}_{j}= \varnothing, \text{for}~i \ne j$
\\ task-id only provided \\ in training\end{tabular}} & \multirow{2}{*}{NER} & \multirow{2}{*}{ExtendNER~\cite{DBLP:conf/aaai/MonaikulCFR21}} \\
& & & \\
\cline{3-4}
& & \multirow{4}{*}{Intent Classification} & ENTAILMENT~\cite{DBLP:conf/naacl/XiaYFY21}\\
& & & CFID~\cite{DBLP:conf/coling/LiZCGZZ22} \\
& & & CID~\cite{DBLP:journals/corr/abs-2108-04445} \\
& & & PAGeR~\cite{DBLP:conf/naacl/VarshneyPKVS22} \\
\hline
\multirow{2}{*}{Online CIL} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}No task-id in training or testing\\ only one epoch per task\end{tabular}} & \multirow{2}{*}{Mixed 5 classification tasks} & MBPA++~\cite{DBLP:conf/nips/dAutumeRKY19} \\
& & & Meta-MBPA++~\cite{wang2020efficient} \\
\hline
\multirow{6}{*}{DIL} & \multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}$p(\mathcal{X}_i) \neq p(\mathcal{X}_{j}), \text{for}~i \ne j$ \\
$\mathcal{Y}_i=\mathcal{Y}_{j}, \text{for}~i \ne j$ \\task-id provided \\ in training and testing\end{tabular}} & Mixed 5 classification tasks & LAMOL~\cite{sun2020lamol}\\
& & Mixed classification and labeling tasks & LAMOL~\cite{sun2020lamol} \\
& & Dialogue State Tracking & C-PT~\cite{zhu2022continual} \\
& & Dialogue Response Generation & TPEM~\cite{DBLP:conf/acl/GengYXSX020} \\
& & Mixed classification and generation & ConTinTin~\cite{DBLP:conf/acl/0001LX22} \\
& & Mixed 4 generation tasks & ACM~\cite{zhang2022continual} \\
\hline
\multirow{12}{*}{DILw} & \multirow{12}{*}{\begin{tabular}[c]{@{}c@{}}$p(\mathcal{X}_i) \neq p(\mathcal{X}_{j}), \text{for}~i \ne j$
\\ $\mathcal{Y}_i=\mathcal{Y}_{j}, \text{for}~i \ne j$ \\task-id only provided \\ in training\end{tabular}} & Mixed 5 classification tasks & LFPT5~\cite{DBLP:conf/iclr/QinJ22} \\
\cline{3-4}
& & \multirow{2}{*}{NER} & \multirow{2}{*}{LFPT5~\cite{DBLP:conf/iclr/QinJ22}} \\
& & & \\
\cline{3-4}
& & \multirow{3}{*}{Summarization} & \multirow{3}{*}{LFPT5~\cite{DBLP:conf/iclr/QinJ22}} \\
& & & \\
& & & \\
\cline{3-4}
& & Paraphrase &RMR-DSE~\cite{li-etal-2022-overcoming} \\
\cline{3-4}
& & \multirow{2}{*}{Dialogue Response Generation} & RMR-DSE~\cite{li-etal-2022-overcoming} \\
& & & AdapterCL~\cite{madotto2020continual} \\
\cline{3-4}
& & Dialogue State Tracking & AdapterCL~\cite{madotto2020continual} \\
\cline{3-4}
& & Dialogue end2end & AdapterCL~\cite{madotto2020continual} \\
\cline{3-4}
& & Aspect Sentiment Classification & CLASSIC~\cite{ke2021Classic} \\
\hline
\multirow{3}{*}{Online DIL} & \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}No task-id in training or testing\\ only one epoch per task\end{tabular}}& \multirow{3}{*}{QA} & \multirow{3}{*}{MBPA++~\cite{DBLP:conf/nips/dAutumeRKY19}} \\
& & & \\
& & & \\
\specialrule{.1em}{.05em}{.05em}
\end{tabular}
}
\vspace{-2mm}
\caption{CL tasks and NLP problems. The same datasets may be formulated for different CL settings, depending on how a task is defined. In the second column, $p(\mathcal{X})$ refers to the distribution of the input data,
$\mathcal{Y}_i$ refers to the label space of task $i$, and $i$ and $j$ refer to task-ids. Note that the 4 intent classification systems in CIL involve few-shot continual learning. The two systems in online CIL assume task boundaries are unknown in both training and testing.
}
\label{tab:nlp_to_cl}
\vspace{-4mm}
\end{table*}
\section{NLP Preliminaries}
\label{sec:preliminary}
Almost all NLP systems nowadays are based on the Transformer~\cite{liu2019roberta,DBLP:conf/acl/LewisLGGMLSZ20,DBLP:conf/naacl/DevlinCLT19}. This has led to CL techniques designed specifically for the Transformer. Before discussing the detailed CL approaches, it is necessary to introduce these Transformer-based NLP techniques.
\textbf{Adapter, prefix, and prompt tuning.}
These approaches (a.k.a., light-weight tuning) add a small number of parameters to a pre-trained Transformer LM. Research has shown that only fine-tuning the added parameters (with the LM frozen) can already achieve similar performance to fine-tuning the whole LM.
The popular light-weight fine-tuning in CL for NLP include \textit{adapter-tuning}~\cite{Houlsby2019Parameter}, which inserts two fully-connected networks to each Transformer layer;
\textit{prefix-tuning}~\cite{DBLP:conf/acl/LiL20}, which prepends a number (a hyper-parameter) of tunable prefix vectors to the keys and values of the multi-head attention at every layer;
and \textit{prompt-tuning}~\cite{DBLP:conf/emnlp/LesterAC21} (a.k.a., soft-prompt tuning), which adds a sequence of trainable prompt tokens to the end of the original sequence. In CL, these light-weight tuning methods make the parameter-isolation based methods (Sec.~\ref{sec:parameter_isolate}) more efficient because only a tiny number of parameters need to be saved to prevent CF.
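As an illustration, the following sketch implements a single adapter layer in NumPy; the dimensions and the zero initialization of the up-projection (so the adapter starts as an identity map) are illustrative assumptions.

```python
import numpy as np

# A Houlsby-style adapter: bottleneck down-projection, nonlinearity,
# up-projection, and a residual connection around the whole module.
rng = np.random.default_rng(0)
d_model, d_bottleneck = 8, 2
W_down = rng.normal(scale=0.1, size=(d_model, d_bottleneck))
W_up = np.zeros((d_bottleneck, d_model))  # zero init: adapter starts as a no-op

def adapter(h):
    # residual + ReLU bottleneck; output keeps the hidden dimension
    return h + np.maximum(h @ W_down, 0.0) @ W_up

h = rng.normal(size=(3, d_model))  # representations of 3 tokens
out = adapter(h)                   # same shape; identical to h before tuning
```

Only `W_down` and `W_up` would be trained (and saved per task), while the backbone LM stays frozen.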
\textbf{Instruction (hard prompt).} A hard prompt or instruction is a short piece of text describing the core concept of the task. Some examples are ``listing nouns'' and ``output the nth word or char''~\cite{DBLP:journals/corr/abs-2010-11982} (more can be seen in \cite{DBLP:conf/acl/MishraKBH22}). By using instructions, every task can be formatted as a response to a natural language input, i.e., as a generation task. In CL, this helps convert all tasks into the same generation format so that an LM can be trained as both a classifier and a sample generator. Different task-specific instructions can be applied to different tasks to prevent CF. Some datasets may also provide similar/positive instructions that can be used for KT.
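A sketch of such a conversion (the template wording here is a hypothetical example; real datasets provide their own instructions):

```python
# Cast a classification example into an instruction-based text-to-text
# format so that a single LM head can serve all tasks.

def to_generation_format(instruction, text, label=None):
    source = f"{instruction}\nInput: {text}\nOutput:"
    target = label if label is not None else ""
    return source, target

src, tgt = to_generation_format(
    "Classify the sentiment of the review as positive or negative.",
    "The keyboard is clicky.",
    label="positive",
)
# src is fed to the LM; tgt is the string the LM is trained to generate
```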
\section{Approaches for Continual Learning}
\label{sec:approch}
Following the established families of approaches for dealing with CF, \textit{regularization-based}, \textit{replay-based}, and \textit{parameter-isolation based}, we add one more family specifically for NLP (i.e., \textit{instruction-CF based}). We further summarize 5 different families of approaches that aim to encourage KT.
To benefit the NLP community more, we also discuss how the NLP-based CL systems are similar to or different from the CV-based CL systems.
\subsection{Approaches for Overcoming CF}
\label{sec:appro_cf}
The main approach to overcoming CF/forgetting is to prevent major changes to previously learned knowledge or parameters.~We categorize the main existing techniques into 4 families, \textit{regularization-CF based} (Sec.~\ref{sec:reg_cf}), \textit{replay-based} (Sec.~\ref{sec:replay}), \textit{parameter-isolation based} (Sec.~\ref{sec:parameter_isolate}) and \textit{instruction-CF based} (Sec.~\ref{sec:instruct_cf}).
We note that one system may combine more than one family of approaches. Some systems also have additional mechanisms to encourage KT. This section focuses only on CF prevention and KT will be discussed in Sec.~\ref{sec:appro_kt}.
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{figures/KT_CF.png}
\vspace{-6mm}
\caption{A taxonomy for different continual learning families of methods for CF prevention (Sec.~\ref{sec:appro_cf}) and KT (Sec.~\ref{sec:appro_kt}).
}
\label{fig:cf}
\vspace{-5mm}
\end{figure*}
\subsubsection{Regularization-CF Based Methods}
\label{sec:reg_cf}
Using a regularization to overcome CF has been a popular approach. We add ``-CF'' to differentiate it from regularization used for KT in Sec.~\ref{sec:reg_kt}.
This family typically adds a penalty or regularization to penalize changes to important parameters learned for previous tasks in learning a new task.
The main drawback of this approach is that it introduces a trade-off between the new task learning and forgetting prevention.
This approach was originally from CV~\cite{Kirkpatrick2017overcoming,zenke2017continual,He2018overcoming,DBLP:conf/iclr/EbrahimiEDR20} and has been adopted by the NLP community.
\textbf{(1). Regularizing the loss based on parameter importance.} The most popular and established regularization-CF based method in this sub-category is EWC~\cite{Kirkpatrick2017overcoming} in CV.
EWC uses the (squared) gradients of previous tasks as the importance of the parameters for those tasks, denoted as $F^{\mathcal{T}}_k$ {\color{black}(i.e., the Fisher information matrix, where $k$ is the index of parameters). }
$F^{\mathcal{T}}_k$ is then multiplied by the squared difference between the old and the new parameters. {\color{black}The resulting regularization term is added to the loss to penalize the changes made to important parameters.} Several CL systems in NLP are derived from EWC or leverage the idea of penalizing changes to important parameters. We group these systems according to their CL settings.
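A sketch of the resulting penalty term (the Fisher values are supplied directly here; in practice they are estimated from squared gradients on the old task's data):

```python
import numpy as np

# EWC regularizer: penalize movement of each parameter away from its
# old-task value, weighted by its estimated importance F_k.

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    return lam * np.sum(fisher * (theta - theta_star) ** 2)

theta_star = np.array([1.0, -2.0, 0.5])  # parameters after the old task
fisher = np.array([10.0, 0.1, 5.0])      # theta[0], theta[2] are important
theta = np.array([1.2, 0.0, 0.5])        # candidate parameters for a new task

penalty = ewc_penalty(theta, theta_star, fisher)
# moving the unimportant theta[1] is cheap; moving theta[0] is expensive
```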
\textbf{DILw.}~RMR-DSE \cite{li-etal-2022-overcoming} applies EWC to seq2seq modeling. It converts $F^{\mathcal{T}}_k$ to tunable hyper-parameters so that no additional backward pass is needed after training each task.
AEWC~\cite{DBLP:journals/corr/abs-1712-09943} employs an improved EWC algorithm~\cite{zenke2017continual}, where the importance can be computed in an online manner. The main idea is to replace the Fisher matrix with an importance estimator so that the additional re-training and point estimation can be removed.
\textbf{TILw. }SRC~\cite{DBLP:conf/naacl/LiuUS19} continually updates the sentence encoder with regularization. The penalty is computed via matrix conceptors which capture the corpus-specific features of each corpus.
{\color{black}Note that in this sub-category, updating a pre-trained LM in CL will inevitably result in forgetting some of the knowledge in the LM. This is because the data used in pre-training is not accessible to the end-users of the LM, who therefore cannot compute the parameter importance needed to protect the important LM parameters.}
\textbf{(2). Regularizing the loss based on distillation.} This sub-category distills the knowledge from a previous model (trained on a previous task) to the model being trained on the new data. A copy of the previous model needs to be stored. The first distillation based CL method is LwF~\cite{Li2016LwF} in CV. The response of the previous model on the new task data is used as the target for the previous tasks' classifiers.
In NLP,
several systems in the DILw or CIL setting use this approach.
\textbf{CIL.} ExtendNER~\cite{DBLP:conf/aaai/MonaikulCFR21}, CFID~\cite{DBLP:conf/coling/LiZCGZZ22}, CID~\cite{DBLP:journals/corr/abs-2108-04445} and PAGeR~\cite{DBLP:conf/naacl/VarshneyPKVS22} apply \textit{KL-divergence} between the response of the previous model and the current model to achieve distillation.
\textbf{DILw.} LFPT5~\cite{DBLP:conf/iclr/QinJ22} applies KL-divergence between the saved old prompt and the new prompt.
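A sketch of such a distillation term (the logits are illustrative): the KL divergence between the frozen previous model's output distribution and the current model's distribution on the same input is added to the training loss.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def distill_kl(old_logits, new_logits):
    # KL(p_old || p_new): zero iff the current model matches the old one
    p, q = softmax(old_logits), softmax(new_logits)
    return float(np.sum(p * np.log(p / q)))

old = np.array([2.0, 0.5, -1.0])                        # previous model's response
kl_same = distill_kl(old, old)                          # 0.0: no drift, no penalty
kl_drift = distill_kl(old, np.array([0.0, 2.0, 1.0]))   # > 0: drift is penalized
```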
\textbf{Other regularization-based methods in CV.} Though existing NLP systems only regularize the loss, CV-based systems also regularize the gradient.
For example, in learning a new task, OWM~\cite{zeng2019continuous} projects the gradient in the orthogonal direction to the input of the old tasks. \cite{Wang_2021_CVPR} maps the gradient to the null space of the previous tasks to mitigate CF.
Some others regularize the learning rate to slow down parameter updates or control the geometry of the local minima~\cite{DBLP:conf/iclr/EbrahimiEDR20,DBLP:conf/nips/MirzadehFPG20}.
\subsubsection{Replay-based Methods}
\label{sec:replay}
Most replay-based methods either (a) store a small subset of training samples of previous tasks
in a memory or (b) learn a data generator to generate pseudo samples of previous tasks. (b) is usually called \textit{pseudo/generative replay}. In learning a new task, the saved samples or the generated samples and the new task data are both used in training~\cite{Rebuffi2017,Lopez2017gradient,Shin2017continual,Kemker2018fearnet}.
\textbf{(1). Replaying raw samples in training.} This is the case (a) above.
\textbf{CIL.} CFID~\cite{DBLP:conf/coling/LiZCGZZ22} replays raw samples with a dynamic weighting to control the sample weight. CID~\cite{DBLP:journals/corr/abs-2108-04445} further considers the imbalance between the rich new data and the small amount of old data. CID also leverages ideas from the imbalanced learning community (inter-class margin loss and cosine normalization) to alleviate the data imbalance issue in replay.
\textbf{TILw for CPost.}
Continual-T0~\cite{scialom2022continual}
saves 1\% of the previous task samples in the memory buffer. It can preserve previous knowledge quite well and even preserve the zero-shot performance of the T0 model.
\textbf{TILw for CPre.} ELLE~\cite{qin2022elle} uses a large memory buffer (around 1G per task).
\textbf{TILw.} IDBR~\cite{huang2021continual} proposes a memory selection rule based on K-means to store as few samples as possible. That is, it
saves only examples closest to each cluster centroid.
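A sketch of this selection rule (with a tiny K-means loop and a deterministic initialization, both for illustration only):

```python
import numpy as np

def kmeans_select(X, k, iters=20):
    # Cluster sample representations and keep only the index of the
    # example nearest to each centroid (one representative per cluster).
    centroids = X[:k].copy()  # deterministic init for the sketch
    for _ in range(iters):
        d = ((X[:, None, :] - centroids[None]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = X[assign == j].mean(0)
    d = ((X[:, None, :] - centroids[None]) ** 2).sum(-1)
    return [int(d[:, j].argmin()) for j in range(k)]

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 4.9], [0.05, 0.1]])
memory_ids = kmeans_select(X, k=2)  # one stored example per cluster
```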
\textbf{(2). Replaying raw samples in training and inference.} Besides joint training of both the replay samples and new samples, this approach also uses the replay samples in inference for local adaptation.
This idea was used in the online CL setting.
\textbf{Online CIL and online DIL.}
MBPA++ \cite{DBLP:conf/nips/dAutumeRKY19} and Meta-MBPA++ \cite{wang2020efficient} selectively use the saved old samples in inference. During inference, the representation of a test sample is used to retrieve $K$ nearest saved samples in the replay memory. Based on the $K$ samples, gradient updates are performed to achieve sample-specific fine-tuning. The fine-tuned network is then used to output the final prediction for the testing sample.
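The retrieval step of this local-adaptation scheme can be sketched as below (the encoder, the number of gradient steps, and the distance metric are assumptions for illustration):

```python
import numpy as np

def retrieve_for_adaptation(memory_keys, memory_examples, query_key, K=4):
    """MBPA++-style episodic retrieval (sketch).

    memory_keys     : stored encoder representations, shape (N, d)
    memory_examples : the corresponding stored training examples
    query_key       : encoder representation of the test input, shape (d,)
    Returns the K stored examples nearest to the query; a few gradient
    steps on these examples then locally adapt the model before it
    predicts on the test input.
    """
    dists = np.linalg.norm(memory_keys - query_key, axis=1)
    nearest = np.argsort(dists)[:K]
    return [memory_examples[i] for i in nearest]
```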
\textbf{(3). Replaying prototypes. }Instead of storing raw samples, one can also save class prototypes.
\textbf{TIL. }MeLL~\cite{DBLP:conf/kdd/0001PLCQZHCL021} stores class prototype representations for meta-learning (Sec.~\ref{sec:meta}). Both the stored prototypes and current task feature representations are used to prevent CF.
\textbf{(4). Optimizing on the replay memory in CV.} Training on the replay samples carries the risk of over-fitting to them. This sub-category
uses the saved data only to prevent CF but does not train on the replay samples directly. Representative systems are GEM~\cite{Lopez2017gradient} and A-GEM~\cite{Chaudhry2019ICLR}. Both systems use an inequality constraint to prevent the parameter update from increasing the loss of each previous task. The previous task loss is estimated using the samples in the memory.
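The A-GEM variant of this constraint reduces to a simple gradient projection, sketched below (flattened gradients are assumed; this mirrors the published projection rule but is not the authors' code):

```python
import numpy as np

def agem_update_direction(g, g_ref):
    """A-GEM-style gradient projection (sketch).

    g     : flattened gradient on the current task's batch
    g_ref : flattened gradient on a batch drawn from the replay memory
    If following g would increase the estimated previous-task loss
    (i.e., g . g_ref < 0), project g onto the closest direction that
    does not; otherwise leave g unchanged.
    """
    dot = float(g @ g_ref)
    if dot < 0.0:
        g = g - (dot / float(g_ref @ g_ref)) * g_ref
    return g
```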
\textbf{(5). Replaying generated pseudo samples.} This is case (b). Some NLP systems leverage the language model (LM) itself to generate pseudo samples for previous tasks. This approach is becoming popular because the LM is naturally a generator and can produce high-quality pseudo samples. While promising, it has been observed that the LM has difficulty generating samples for some tasks, such as summarization~\cite{DBLP:conf/iclr/QinJ22} and aspect sentiment classification~\cite{ke2021achieving}. This approach has been adopted in several CL settings: TIL, CIL, DIL, and DILw.
\textbf{TIL.} PCLL~\cite{DBLP:journals/corr/abs-2210-07783} uses prompt and conditional variational autoencoder (CVAE) to generate previous samples. It saves the task-specific prompt for each task so the saved previous task prompt can be used to help condition the LM when generating previous task samples.
\textbf{CIL.} PAGeR~\cite{DBLP:conf/naacl/VarshneyPKVS22} replays the generated samples by using instructions (hard prompts) containing both current and previous intent words.
\textbf{DIL.}~LAMOL \cite{sun2020lamol} and ACM~\cite{zhang2022continual} convert all tasks into generation tasks following \cite{McCann2018decaNLP} and train the underlying LM (GPT-2) so that it can do both task learning and sample generation. When a new task comes, LAMOL or ACM first leverages the LM to generate old task samples and then train the ``old'' and the new samples together to avoid CF. Different from LAMOL, ACM makes use of adapters and selectively shares them to further enable KT (Sec.~\ref{sec:appro_kt}).
\textbf{DILw.} Similarly, LFPT5~\cite{DBLP:conf/iclr/QinJ22} leverages prompt and T5 to achieve both task learning and sample generation. Recall that LFPT5 also has the prompt distillation (Sec.~\ref{sec:reg_cf}) to prevent forgetting, so LFPT5 is a combination of regularization-based and replay-based method.
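The training loop these generative-replay systems share can be sketched as follows; \texttt{model.fit}, \texttt{generate}, and the pseudo-sample ratio \texttt{gamma} are hypothetical stand-ins for the LM's actual sampling and fine-tuning code:

```python
def train_task_with_pseudo_replay(model, new_task_data, generate, gamma=0.25):
    """LAMOL/LFPT5-style loop (sketch): before learning a new task,
    the LM itself generates pseudo samples of the old tasks, and the
    union of pseudo and new samples is trained jointly to avoid CF."""
    n_pseudo = int(gamma * len(new_task_data))
    pseudo_samples = generate(n_pseudo)        # LM sampling, conditioned on old tasks
    model.fit(pseudo_samples + new_task_data)  # joint training
    return model
```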
\textbf{(6). Replaying pseudo features (latent representations) in CV.} Generating high-quality samples can be very challenging. Some CV papers propose to generate features instead. Example systems are \cite{van2020brain} and \cite{DBLP:conf/eccv/YeB20}.
\subsubsection{Parameter-isolation Based Methods}
\label{sec:parameter_isolate}
\textit{Parameter-isolation} based methods allocate different parameters to different tasks to prevent subsequent tasks from interfering with the previously learned parameters.
{\color{black}This family requires task-id in both training and testing so it is mainly for TIL and DIL settings.}
This family also usually suffers from the capacity problem and makes KT challenging. Like regularization-CF and replay-based methods, parameter-isolation based methods also originated in CV~\cite{DBLP:journals/corr/RusuRDSKKPH16,yoon2018lifelong,fernando2017pathnet,Mallya2017packnet}.
\textbf{(1). Fixed architecture.} This sub-category isolates a subset of parameters for each task in a fixed network.
Three popular methods are \textit{task masking}, \textit{parameter generation} and \textit{sub-network masking}.
\textbf{(i). Task masking.} This approach
masks a subset of neurons at each layer for a task (identified by task-id). Since the mask can indicate what neurons have been used by previous tasks, in learning the new task, the system can freeze the used neurons to prevent CF of the previously learned knowledge. The most popular system is HAT~\cite{Serra2018overcoming} in CV. HAT initializes a task embedding $\bm{e}^{(t)}_l$ for each task $t$ and each layer $l$ in the network. A $\texttt{sigmoid}$ function is used as a pseudo-gate/step function along with a large positive number $s$ (hyper-parameter). A mask $\bm{m}_l^{(t)}$ is given by $
\bm{m}_l^{(t)} = \sigma (s\bm{e}^{(t)}_l)
$.
During training, the mask is element-wise multiplied with the output of each layer $l$.
In backward propagation, HAT blocks the used neurons (indicated by the mask) by previous tasks via multiplication of the inverse of the mask with the gradient to avoid CF.
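The HAT masking and gradient-blocking mechanism described above can be sketched in a few lines (a per-layer illustration with assumed shapes, not the full training procedure):

```python
import numpy as np

def hat_mask(task_embedding, s=400.0):
    """HAT-style pseudo-binary mask m = sigmoid(s * e) (sketch).
    A large s pushes the sigmoid toward a hard 0/1 gate."""
    return 1.0 / (1.0 + np.exp(-s * task_embedding))

def blocked_gradient(grad, previous_masks):
    """Zero the gradient on neurons already used by earlier tasks:
    grad <- grad * (1 - elementwise max over the previous masks)."""
    used = np.max(previous_masks, axis=0)
    return grad * (1.0 - used)
```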
In NLP, this task masking mechanism has been mainly used in the adapter layer to prevent CF, including,
\textbf{TIL.} CTR~\cite{ke2021achieving} and B-CL~\cite{ke2021adapting},
\textbf{DILw.} CLASSIC~\cite{ke2020continual}
\textbf{TIL for CPost.} CPT~\cite{ke2022cpt}.
\textbf{(ii). Parameter generation.} This method uses one network to generate the model parameters of another network. Since the generation can be conditioned on the task-id, it helps mitigate CF. Hypernet~\cite{von2019continual} in CV takes this approach. It builds a generator network $g$, which takes the task representation $z$ as input and generates the model parameters of a task solver network $f$. Since the generator $g$ itself is exposed to CF, Hypernet further imposes a regularization (it is thus also a regularization-CF based method) to penalize changes to the weights of $g$.
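A minimal instance of this idea, with a linear generator producing the weights of a one-layer solver (shapes and the linear forms are illustrative assumptions, far simpler than Hypernet itself):

```python
import numpy as np

def hypernet_solver(x, task_embedding, G, out_dim):
    """Hypernetwork sketch: a linear generator with weights G maps the
    task embedding z to the parameters of the solver network f -- here
    a single linear layer. Conditioning the generation on the task-id
    is what mitigates CF."""
    w = G @ task_embedding               # generated solver parameters
    W = w.reshape(x.shape[-1], out_dim)  # solver weight matrix
    return x @ W
```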
\textbf{TIL. }CLIF~\cite{DBLP:conf/emnlp/JinLR021} adopts this idea and uses Hypernet to generate parameters for the adapters.
\textbf{(iii). Subnetwork masking in CV.} {\color{black}This method fixes a network $N$} and trains another mask network $M$ to find a sub-network in $N$ for each task. An example in CV is SupSup~\cite{wortsman2020supermasks}, which trains a binary mask network via a ``straight-through'' trick for each task so that different tasks are totally isolated from each other. However, its subnetwork or mask potentially has the same size as the original model. Extra training and saving tricks are needed to make it efficient.
\textbf{(2). Dynamic architecture.} This sub-category expands the network once a new task comes.
\textbf{TILw for CPre.} DEMIX~\cite{gururangan2021demix} adds adapters when a new task comes. In testing, the prediction of a test instance is given by the weighted sum (according to perplexity) of all pre-trained adapters. ELLE~\cite{qin2022elle} expands layers whenever a new task comes. It uses a large replay memory (Sec.~\ref{sec:replay}) to ensure that the LM can deal with all previous and current tasks.
\textbf{DIL.}
C-PT~\cite{zhu2022continual} adds a prompt for each task. TPEM~\cite{DBLP:conf/acl/GengYXSX020} expands the network whenever a new task comes.
\textbf{DILw.} AdapterCL~\cite{madotto2020continual} adds one set of adapters for each task. It further infers the task-id in testing via perplexity.
\textbf{TILw.} ProgM~\cite{shen-etal-2019-progressive} expands the network for each new task and also transfers all the learned knowledge to the expanded component so that
it can directly use only the last expanded component in testing.
\textbf{(3). Parameter pool in CV.} This approach initializes a pool of parameters and conducts parameter selection. L2P~\cite{wang2021learning} in CV initializes a pool of prompts and selects different prompts for different tasks based on cosine similarity.
\subsubsection{Instruction-CF Based Methods}
\label{sec:instruct_cf}
This family is specific to NLP. We add ``-CF'' to differentiate it from the use of instructions for KT in Sec.~\ref{sec:instruct_kt}.
It uses task-specific instructions to condition the LM.
Since the instruction is typically only a few words, the conditioning alone may not be sufficient to prevent CF. So far, this approach has been used only in TIL systems.
\textbf{TIL.} ConTinTin~\cite{DBLP:conf/acl/0001LX22} uses different instructions for different tasks to condition the LM.
ENTAILMENT~\cite{DBLP:conf/naacl/XiaYFY21} inputs both the sentence and intention label words as instruction and converts intent classification to binary classification, which helps alleviate forgetting.
\subsection{Approaches for Knowledge Transfer}
\label{sec:appro_kt}
We now review approaches for knowledge transfer (KT).
{\color{black}KT has backward and forward transfer (Sec.~\ref{sec:intro}).} The key to backward transfer is to decide what knowledge from some previous tasks can be updated so that both the previous and the current tasks can be improved. This is different from preventing CF where the focus is on how to effectively minimize the change to important previous task parameters (addressing the stability-plasticity dilemma). Backward transfer is highly challenging because we do not have previous task data and any incorrect update may result in serious CF. Forward transfer, on the other hand, is easier since one can simply fix the previous knowledge and leverage it to help learn the new task.
This section focuses on systems that involve both forward and backward transfer. We categorize existing approaches into 5 families, including \textit{regularization-KT based} (Sec.~\ref{sec:reg_kt}), \textit{importance-based} (Sec.~\ref{sec:importace}), \textit{similarity-based} (Sec.~\ref{sec:similarity}), \textit{meta-learning-based} (Sec.~\ref{sec:meta}) and \textit{instruction-KT based} (Sec.~\ref{sec:instruct_kt}).
\subsubsection{Regularization-KT Based Methods}
\label{sec:reg_kt}
Unlike regularization-CF based methods, regularization here disentangles task-shared knowledge and task-specific knowledge. A replay memory is usually needed to learn the task-shared knowledge.
\textbf{TILw.} We have seen that IDBR~\cite{huang2021continual} leverages a replay memory to prevent CF (Sec.~\ref{sec:replay}). To encourage the LM to learn task-general knowledge, it further regularizes the loss by adding a next-sentence prediction task.
\subsubsection{Importance-based Methods}
\label{sec:importace}
The idea of this sub-category is to first look for the previous knowledge that is important for the current task and then train the detected important knowledge together with the current task.
It usually needs additional mechanisms to avoid CF because sharing the important knowledge with the current task may lead to CF on previous tasks.
\textbf{DIL.} ACM~\cite{zhang2022continual} learns an adapter for each task and trains a mixing coefficient to detect which adapters can be reused by a new task.
ACM prevents CF via a pseudo-replay method.
\subsubsection{Similarity-based Methods}
\label{sec:similarity}
The idea of this family is that two tasks can transfer knowledge to each other if and only if they are similar. KT is typically achieved by sharing the same parameters of similar tasks.
Since the similarity is not provided by the task data, this family often uses a proxy to compute task similarity.
The most popular proxy for task similarity is the feature similarity. Based on this, there are two main ideas.
\textbf{(1). Capsule Network (CapsNet).}
A simple \textit{capsule network (CapsNet)}~\cite{hinton2011transforming,sabour2017dynamic} consists of two capsule layers. The first layer stores low-level feature maps, and the second layer generates the classification probability with each capsule corresponding to one class. CapsNet uses a \textit{dynamic routing} algorithm to make each lower-level capsule send its output to a similar or ``agreed'' {\color{black}(computed by dot product)} higher-level capsule. This has been used in NLP.
\textbf{TIL.} CTR~\cite{ke2021achieving} and B-CL~\cite{ke2021adapting} are based on this idea. We know that they both use a parameter-isolation method to avoid CF. They further compute task similarity via CapsNet. In B-CL and CTR, each task uses a capsule in the lower level (called a ``task capsule''). The system leverages the routing algorithm to group similar tasks and their shareable features: if two tasks are similar, the capsules corresponding to the tasks are allowed to update (in CTR via a binary gate; in B-CL via a similarity-based weight that controls how much can be updated) so that knowledge transfer (KT) can be achieved.
Note that parameter-isolation based methods like HAT (Sec.~\ref{sec:parameter_isolate}) have (fixed) parameter sharing across tasks, which may cause CF for dissimilar tasks that share parameters with those similar tasks.
\textbf{(2). Contrastive learning.} The idea here is to leverage contrastive learning to capture the shared knowledge between different views. If one can generate a view from previous tasks that is similar to the current task, the contrastive loss can capture the shared knowledge and learn a representation for KT to the new task learning.
\textbf{DILw.} CLASSIC~\cite{ke2020continual} uses this idea. Again, CLASSIC prevents forgetting via a parameter-isolation method (Sec.~\ref{sec:parameter_isolate}) and encourages KT via contrastive learning. To this end, CLASSIC generates an augmented view from all previous tasks via self-attention and uses a contrastive loss to help learn the shared knowledge.
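The contrastive objective underlying this idea can be sketched as an InfoNCE-style loss over paired views (a generic sketch with an assumed temperature, not CLASSIC's exact loss):

```python
import numpy as np

def info_nce(z_current, z_augmented, tau=0.1):
    """InfoNCE-style contrastive loss (sketch). Row i of z_current and
    row i of z_augmented are two views of the same example (positive
    pair); all other rows serve as negatives. Minimizing the loss
    pulls the shared representation of the two views together."""
    z1 = z_current / np.linalg.norm(z_current, axis=1, keepdims=True)
    z2 = z_augmented / np.linalg.norm(z_augmented, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return float(-np.log(np.diag(p)).mean())
```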
\subsubsection{Meta-learning-based Methods}
\label{sec:meta}
Meta-learning is known to be able to capture general knowledge as it learns over a large set of tasks. The main idea of this family is to leverage this property to learn general knowledge.
A replay memory is used to store some data of previous tasks, which serve as the tasks for meta-learning.
Specifically, meta-learning can be viewed as a bi-level optimization problem as follows:
\begin{equation}
\begin{split}
\label{eq:meta}
& w^* = \text{argmin}_w\sum_{i=1}^M\mathcal{L}^{\text{meta}}(\theta^{*(i)}(w),w,\mathcal{D}_{\text{query}}^{(i)}) \\
& \text{s.t. } \theta^{*(i)}(w) = \text{argmin}_\theta \mathcal{L}^{\text{task}}(\theta,w,\mathcal{D}_{\text{support}}^{(i)})
\end{split}
\end{equation}
where $\theta$ is the network parameter and $w$ is the learning strategy of $\theta$ (e.g., the optimizer or another network that can affect $\theta$). $w$ specifies ``how to learn'' $\theta$.
The inner loop ($\mathcal{L}^{\text{task}}$)
conducts local optimization trying to do well in the specific meta-training task ($\mathcal{D}_{\text{support}}$), while the outer loop ($\mathcal{L}^{\text{meta}}$) conducts global optimization to produce a model that performs well on all meta-training validation sets ($\mathcal{D}_{\text{query}}$).
To enable KT, the meta-objective ($\mathcal{L}^{\text{meta}}$) is set to be the performance on all learned tasks. The tasks ($\mathcal{L}^{\text{task}}$) in meta-learning are constructed from the tasks saved in the memory. Systems in this family differ in the design of $w$ and $\theta$.
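A first-order (Reptile-style) toy instance of this bi-level scheme may make the structure concrete; here a scalar loss $(\theta-\text{target})^2$ stands in for $\mathcal{L}^{\text{task}}$, and all constants are illustrative assumptions:

```python
def inner_adapt(w, target, lr=0.1, steps=10):
    """Inner loop: theta starts from the meta-parameter w and is
    optimized on one memory task's support data (loss (theta-target)^2)."""
    theta = w
    for _ in range(steps):
        theta -= lr * 2.0 * (theta - target)
    return theta

def meta_train(task_targets, meta_lr=0.5, meta_iters=200):
    """Outer loop: move w toward each task's adapted solution so that
    w captures knowledge shared across the tasks stored in memory."""
    w = 0.0
    for _ in range(meta_iters):
        for target in task_targets:
            theta = inner_adapt(w, target)
            w += meta_lr * (theta - w)  # first-order meta-update
    return w
```

With tasks whose optima are 1 and 3, the meta-parameter settles near their shared structure (around 2), which is the point of the outer objective.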
\textbf{TIL.} MeLL~\cite{DBLP:conf/kdd/0001PLCQZHCL021} uses a memory network as $\theta$ and an LM as $w$. The memory contains all the learned class prototype representations. Its output is concatenated with the output of the LM and fed into a fusion network to perform classification. In the inner loop, the current task's class prototypes in the memory network are updated.
In the outer loop, the LM is fine-tuned.
As a result, the LM learns the general/shared information (for KT) and {\color{black}the memory network} learns the task specific information (for dealing with CF).
{\color{black}
\textbf{Online DIL and CIL.} Meta-MBPA++~\cite{wang2020efficient} uses an additional network as $\theta$ in the inner loop to retrieve samples from the replay memory. An LM as text encoder is used as $w$ to generalize well across all tasks in the outer loop.
}
\subsubsection{Instruction-KT Based Methods}
\label{sec:instruct_kt}
Apart from being used to address CF (Sec.~\ref{sec:instruct_cf}), instructions have also been used to enable KT. The idea is to give an instruction that is similar or related to previous tasks.
\textbf{DIL.} C-PT~\cite{zhu2022continual} fuses the previous instruction (the ``query'' in a state tracking task) to enable the model to learn a general skill. ConTinTin~\cite{DBLP:conf/acl/0001LX22} uses a task-specific instruction to prevent CF (Sec.~\ref{sec:instruct_cf}), and further uses the dataset provided positive instructions/samples to enable knowledge transfer (KT).
\section{Continual Learning Evaluation}
\label{sec:evaluate}
CL evaluation mainly assesses (1) the average performance of all learned tasks, (2) the rate of forgetting
and (3) the effect of knowledge transfer.
Below, we present some reference baselines and evaluation metrics.
\subsection{Reference Baselines}
Several reference (or control) baselines are commonly used as the lower or upper bounds for CL.
\textbf{Naive continual learning (NCL).} This baseline refers to a system that performs CL without any mechanism to deal with forgetting~\cite{ke2021achieving,ke2021adapting,DBLP:conf/iclr/QinJ22}. All parameters can be freely updated in learning the new task. This baseline is expected to have the worst CF and {\color{black}the base performance for KT}, and thus is often regarded as the lower bound of a CL system.
\textbf{Train a separate model for each task (ONE).} Opposite to NCL, ONE trains each task separately as an independent model~\cite{ke2021achieving,ke2021adapting,DBLP:conf/kdd/0001PLCQZHCL021}. It is thus not a CL setting and has no CF or KT. It is usually regarded as the control to assess whether a CL system has KT or CF.
\textbf{Multi-task learning (MTL).} MTL is another non-CL reference baseline that is widely regarded as the upper bound of TIL and TILw. It requires a multi-head setup.
\textbf{Aggregate.} For those systems using a single-head setup or CIL, this reference baseline learns the classes of all tasks as a single learning task. This is regarded as the upper bound for TIL for CPre, TIL for CPost, DIL, DILw, and CIL.
\subsection{Evaluation Metrics}
\textbf{Average Accuracy}~\cite{chaudhry2018riemannian} ($A_{\mathcal{T}}$) measures the performance of a CL method after all $\mathcal{T}$ tasks have been learned, i.e.,
$
A_{\mathcal{T}} = \frac{1}{\mathcal{T}}\sum^{\mathcal{T}}_{t=1}a_{\mathcal{T},t}
$,
where $a_{\mathcal{T},t}$ refers to the performance of the model on the testing set of task $t$ after the model has continually trained all $\mathcal{T}$ tasks.
\textbf{Average Incremental Accuracy}~\cite{Rebuffi2017,Lopez2017gradient} is a metric in CIL derived from average accuracy. It is simply the average over the average accuracy after each task (a set of classes in CIL). One can also choose to plot the average accuracy after each task in a figure rather than reporting a single average incremental accuracy number.
\textbf{Forgetting Rate}~\cite{chaudhry2018riemannian} ($F_{\mathcal{T}}$) measures how much knowledge has been forgotten across the first $\mathcal{T}-1$ tasks, i.e.,
$
F_{\mathcal{T}} = \frac{1}{\mathcal{T}-1}\sum^{\mathcal{T}-1}_{t=1}(a_{t,t}-a_{\mathcal{T},t})
$,
where $a_{t,t}$ is the test accuracy of task $t$ right after it is learned, and $a_{\mathcal{T},t}$ is the accuracy of task $t$ after training the last task $\mathcal{T}$. We average over all trained tasks except the last one, as the last task has no forgetting. A higher rate indicates more
forgetting; a negative rate indicates positive backward knowledge transfer.
\textbf{Forward Transfer}~\cite{ke2020mixed} ($\text{FWT}_{\mathcal{T}}$) measures how much forward transfer has happened to a new task after learning the task, i.e.,
$
\text{FWT}_{\mathcal{T}} = \frac{1}{\mathcal{T}}\sum^{\mathcal{T}}_{t=1}(a_{t,t}-a_{0,t})
$,
where $a_{0,t}$ refers to the performance of training task $t$ individually (i.e., the accuracy of ONE for the task). Note that there is another way to measure forward transfer. It measures whether a learned network contains some useful knowledge for the new task~\cite{Lopez2017gradient}, i.e.,
$
\text{FWT}'_{\mathcal{T}} = \frac{1}{\mathcal{T}-1}\sum^{\mathcal{T}}_{t=2}(a_{t-1,t}-r_t)
$,
where $r_t$ refers to the accuracy of task $t$ using a randomly initialized network. {\color{black}Unlike $\text{FWT}_{\mathcal{T}}$, $\text{FWT}'_{\mathcal{T}}$} does not tell how much forward transfer actually happens after learning the new task.
\textbf{Backward Transfer} ($\text{BWT}_{\mathcal{T}}$) measures how much the continual learning of subsequent tasks influences the performance of a {learned} task, i.e.,
$
\text{BWT}_{\mathcal{T}} = \frac{1}{\mathcal{T}-1}\sum^{\mathcal{T}-1}_{t=1}(a_{\mathcal{T},t}-a_{t,t})
$.
Positive $\text{BWT}_{\mathcal{T}}$ indicates the subsequent task learning can improve the performance of previous tasks.
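The metrics above can all be computed from a single accuracy matrix; the following sketch implements $A_{\mathcal{T}}$, $F_{\mathcal{T}}$, $\text{BWT}_{\mathcal{T}}$, and $\text{FWT}_{\mathcal{T}}$ (the matrix layout and the optional ONE-baseline vector are conventions assumed here):

```python
import numpy as np

def cl_metrics(acc, acc_one=None):
    """Compute the CL metrics from an accuracy matrix (sketch).

    acc[i, t]  : accuracy on task t's test set after training task i
                 (0-indexed; only entries with i >= t are used).
    acc_one[t] : accuracy of training task t alone (the ONE baseline),
                 needed for forward transfer.
    """
    T = acc.shape[0]
    final = acc[T - 1]
    avg_acc = final.mean()                                          # A_T
    forget = np.mean([acc[t, t] - final[t] for t in range(T - 1)])  # F_T
    bwt = np.mean([final[t] - acc[t, t] for t in range(T - 1)])     # BWT
    fwt = (np.mean([acc[t, t] - acc_one[t] for t in range(T)])
           if acc_one is not None else None)                        # FWT
    return avg_acc, forget, bwt, fwt
```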
\section{Observations and Future Directions}
We have reviewed the recent advances in CL across different settings and NLP tasks. Two focuses are \textit{forgetting}/CF \textit{prevention} and \textit{knowledge transfer} (KT). This section discusses some observations about existing methods. Based on these observation, we outline some future research directions.
{\color{black}
\textbf{Knowledge transfer is the major issue in TIL and DIL.} Several TIL and DIL methods have achieved no CF and even results close to MTL \cite{ke2021achieving,ke2021adapting,DBLP:conf/kdd/0001PLCQZHCL021,zhu2022continual}. This indicates that CF can be seen as a solved problem, but maximizing KT is still highly challenging. Although Sec.~\ref{sec:appro_kt} discussed some systems that deal with both KT and CF prevention, most of them only manage to achieve the same results as ONE or only improve ONE when tasks are similar or have extensive shared knowledge.~The main issue is that the task sequence can contain a mix of similar and dissimilar tasks (e.g., CAT~\cite{ke2020mixed}),
which violates the assumption made by existing approaches that all tasks share knowledge.~Although some research (the similarity-based and importance-based families) tries to explicitly detect similar and dissimilar tasks for each new task and learn them differently, none of the methods is accurate enough. Since task similarity is fuzzy with no ground truth, it is hard to know what counts as similar enough for positive KT. It is also difficult to identify what can and cannot be transferred, and how to transfer forward and backward to achieve improved results for both the new task and the similar old tasks.}
\textbf{Forgetting prevention is still a major issue in settings other than TIL and DIL.} Except for TIL and DIL, the performance of systems in the other settings is still far from ONE, not to mention MTL~\cite{DBLP:conf/naacl/XiaYFY21,wang2020efficient}. This is not only due to CF but also to the fact that the model cannot distinguish among tasks in testing, which is particularly challenging for CIL because the tasks are not trained together and thus have no connections. Future research may need to consider both issues when proposing new methods.
\textbf{CPre and CPost are still in their infancy.} A pre-trained or post-trained LM is expected to improve a wide range of end-tasks, but how to maintain the needed general knowledge already in the LM is still unclear. In CPost, one has to maintain the pre-trained general knowledge without accessing the pre-training data because such data is not accessible to end-users of the LM in practice. This makes most of the CL methods in Sec.~\ref{sec:appro_cf} inapplicable because gradients, data, importance scores, or prototypes from the pre-training stage are not available for end-tasks.
A new approach to address CF in this case is needed.
\textbf{Online CL of an LM still cannot be achieved.} For example, none of the state-of-the-art LMs is up to date with the coronavirus pandemic. This makes the LM unable to understand what we are referring to when we talk about ``COVID'', ``lockdown'' or ``flatten the curve''. Ideally, the LM should be updated online with emerging new documents. Little work has been done on this.
\textbf{CL in NLP still has no established benchmark datasets that are used by most systems.} Different orders of tasks in the task sequence also affect the performance. In Table~\ref{tab:nlp_to_cl}, we see that different NLP problems can be formulated as different CL tasks. Most papers implement baselines themselves and conduct experiments on their own specific datasets. This makes it hard to evaluate and compare systems and to assess progress in the field.
\section{Conclusion}
{\color{black}An AI agent cannot be considered truly intelligent if it does not have the continual learning (CL) capability to learn and accumulate knowledge over time and to leverage the knowledge learned in the past to enable better future learning.} CL is thus necessary for any field. This paper gave a comprehensive survey of the recent advances of CL in NLP. We showed that many NLP problems can be formulated as CL tasks, and presented existing CL settings and approaches organized around the two main objectives (CF prevention and KT). Finally, we outlined some future research directions. We hope that this survey will inspire researchers from both the CL and NLP communities to design better algorithms.
\bibliographystyle{named}
|
1,314,259,994,960 | arxiv | \section{Conventional phase space path integrals}
There is considerable appeal in the formal phase space path integral
\bn\<q''|\,e^{-i{\cal H} T }\,|q'\>={\cal N}\int e^{i\textstyle\int[p{\dot q}-h(p,q)]\,dt}
\,{\cal D} p\,{\cal D} q \en
which yields the propagator in the $q$-representation \cite{fey}. In this
relation the integration is over all $q$-paths $q(t)$, $t'\leq t\leq t''
\equiv t'+T$, $T>0$, subject to the boundary conditions that $q(t'')=q''$
and $q(t')=q'$, as well as all $p$-paths $p(t)$ for $t'\leq t\leq t''$. It
follows from this formula that the {\it meaning} of $q(t)$ is the same as
the meaning of $q(t'')$, namely, as the sharp eigenvalue of the position
operator $Q$, where $Q|q\>=q|q\>$.
An analogous path integral leads to the propagator in the $p$-representation
and is given by
\bn \<p''|\,e^{-i{\cal H} T }\,|p'\>={\cal N}\int e^{i\textstyle\int[-q{\dot p}-h(p,q)]
\,dt}\,{\cal D} p\,{\cal D} q\;. \en
In this expression integration runs over the $p$-paths $p(t)$, $t'\leq t
\leq t''$, subject to the requirement that $p(t'')=p''$ and $p(t')=p'$,
while in the present case, integration over all $q$-paths $q(t)$, $t'\leq t
\leq t''$, is assumed. It follows that the {\it meaning} of $p(t)$ is the
same as the meaning of $p(t'')$, namely as the sharp eigenvalue of the
momentum operator $P$, where $P|p\>=p|p\>$.
These two path integrals are of course connected with each other. In
particular, it follows that
\bn&&\hskip-2.5cm\<q''|\,e^{-i{\cal H} T}\,|q'\>
={(2\pi)}^{-1}\int e^{i(q''p''-q'p')}\,\<p''|\,e^{-i{\cal H} T}\,|p'\>\,dp''
\,dp'\nonumber\\
&&={\cal N}\int e^{i\textstyle\int[{\dot{\overline {pq}}}-q{\dot p}-h(p,q)]\,dt}\,{\cal D} p\
,{\cal D} q\nonumber\\
&&={\cal N}\int e^{i\textstyle\int[p{\dot q}-h(p,q)]\,dt}\,{\cal D} p\,{\cal D} q \en
just as before.
Is the so obtained physical meaning for $p(t)$ and $q(t)$ satisfactory?
If we were dealing with the strictly classical theory, for which $\hbar=0$,
there is absolutely no contradiction in specifying $p(t)$ and $q(t)$
simultaneously for all $t$, $t'\leq t\leq t''$. On the other hand, we are
dealing with the quantum theory and decidedly not the classical theory.
Planck's constant $\hbar=1$ (in the chosen units) and does not vanish. Thus we are led to the conclusion that the given formal path integrals are
expressed in terms of phase space paths for which, {\it within the quantum
theory}, one may simultaneously specify both position $q(t)$ and momentum
$p(t)$, $t'< t< t''$ sharply. This assertion evidently contradicts the
Heisenberg uncertainty principle, and consequently it is unacceptable.
Something is definitely wrong!
Another indication that something is wrong follows on consideration of the
expression
\bn{\cal N}\int e^{i\textstyle\int[\frac{1}{2}(p{\dot q}-q{\dot p})-h(p,q)]\,dt}
\,{\cal D} p\,{\cal D} q\;, \en
which also involves an acceptable version of the classical action, but which
cannot be interpreted along the lines given above. Interpretation fails
because it is unclear what variable(s) are to be held fixed at the initial
and final times. For instance, should this expression be interpreted as
\bn C\int e^{i(p''q''-p'q')/2}\,\<p''|\,e^{-i{\cal H} T}\,|p'\>\,dp''\,dp'\;,\en
where $C$ is an appropriate constant, or as
\bn C\int e^{-i(p''q''-p'q')/2}\,\<q''|\,e^{-i{\cal H} T}\,|q'\>\,dq''\,dq'\en
either of which would seem to be equally possible interpretations but which
evidently lead to unequal results.
\subsection{Why do interpretational problems exist?}
The reason these expressions lead to inconsistencies of interpretation is
really very simple---although it is a reason that physicists are often
reluctant to entertain. The argument presented above fails because the
{\it indicated representations} for $\<q''|e^{-i{\cal H} T}|q'\>$ and $\<p''|
e^{-i{\cal H} T}|p'\>$ simply {\it do not exist} as given. Physicists tend to
believe that if they can write down a set of relations possessing a {\it
formal} self consistency, then the underlying existence of the relations is
not in doubt.\footnote{A simple but informative example of this issue is the
following. Let $\{1,2,3,...\}$ denote the set of positive integers. Let $X$
denote the largest such integer, and let us assume that $X>1$. Since $X^2>X$,
we observe there is an integer larger than $X$, therefore we conclude that
our assumption that $X>1$ was in error, hence $X=1$.} Of course, the dilemma
surrounding these relations can be lifted by giving alternative
representations that, in fact, do exist. One such representation is based
on a lattice limit, namely, by giving meaning to the undefined formal path
integral as the limit of a sequence of well defined finite dimensional
integrals. As one such prescription we offer \cite{sch,roe}
\bn &&\hskip-.4cm\<q''|\,e^{-i{\cal H} T}\,|q'\>\nonumber\\
&&=\hskip-.1cm\lim_{N\rightarrow\infty}\frac{1}{(2\pi)^{N+1}}\int\exp\{i
\Sigma_{l=0}^N[ p_{l+1/2}(q_{l+1}-q_l)-\epsilon h(p_{l+1/2},(q_{l+1}+q_l)/2)]
\}\nonumber\\
&&\hskip4cm\times\Pi_{l=0}^N\,dp_{l+1/2}\,\Pi_{l=1}^N\,dq_l\;. \en
Here the limit $N\rightarrow\infty$ also implies that $\epsilon\equiv T/(N+1)\ra0$,
and $q_{N+1}$ $\equiv q''$ and $q_0\equiv q'$; all $p$ values are integrated
out.
This prescription, which applies for a wide class of classical Hamiltonian
functions $h(p,q)$, generates the propagator in the Schr\"odinger
$q$-representation, and two such propagators properly fold to a third
propagator when integrated over the intermediate $q$ with a measure $dq$.
However, this is not the only prescription that can be offered for the same
formal phase space path integral.
\section{Coherent state formulation}
Another rule of definition that can also be accepted for the formal phase
space path integral (1) is given by \cite{kla1}
\bn&&\hskip-.5cm\<p'',q''|\,e^{-i{\cal H} T}\,|p',q'\>\nonumber\\
&&\equiv\lim_{N\rightarrow\infty}\frac{1}{(2\pi)^N}\int\exp(\!\!(\Sigma_{l=0}^N\{i
\textstyle{\frac{1}{2}}(p_{l+1}+p_l)(q_{l+1}-q_l)\nonumber\\
&&\hskip2cm-\textstyle{\frac{1}{4}}[(p_{l+1}-p_l)^2+(q_{l+1}-q_l)^2]\nonumber\\
&&\hskip2cm-i\epsilon h(\textstyle{\frac{1}{2}}(p_{l+1}+p_l+iq_{l+1}-iq_l),\textstyle{\frac{1}{2}}(q_{l+1}+q_l
-ip_{l+1}+ip_l))\})\!\!)\nonumber\\
&&\hskip3cm\times\Pi_{l=1}^N\,dp_l\,dq_l\;. \en
This expression differs from the former one in that both $p$ {\it and} $q$
are held fixed at the initial {\it and} final end points. In particular,
now $(p_{N+1},q_{N+1})$ $\equiv (p'',q'')$ and $(p_0,q_0)\equiv(p',q')$. Two
such propagators properly fold together to a third propagator with an
integration over the intermediate variables $p$ and $q$ with the measure
$dp\,dq/2\pi$. Like the previous case, the present expression holds for a
wide class of classical Hamiltonian functions. However, this latter sequence
is {\it fundamentally} different from the previous one, and that difference
not only involves a different sort of representation but even goes so far as
to entail a profound change of the {\it meaning} of the symbols $p$ and $q$
from their meaning as found in the preceding section.
The states $|p,q\>$ implicitly introduced above are {\it canonical coherent
states} defined by the following expression
\bn |p,q\>\equiv e^{-iqP}e^{ipQ}\,|0\>\;, \en
where, as usual, $[Q,P]=i1\hskip-.37em 1$ and $|0\>$ denotes the ground state of a
harmonic oscillator, i.e., a normalized solution of the equation $(Q+iP)|0\>
=0$ \cite{kla2}. Observe in this case that neither $p$ nor $q$ are {\it
eigenvalues} of any operator. Instead, it follows that
\bn\<p,q|P|p,q\>=p\;,\hskip2cm\<p,q|Q|p,q\>=q\;, \en
namely, that the labels $p$ and $q$ have the meaning of {\it expectation
values} rather than eigenvalues. Thus there is absolutely no contradiction
with the Heisenberg uncertainty principle in specifying both $p$ and $q$
{\it simultaneously}. The overlap of two coherent states, given by
\bn \<p',q'|p,q\>=\exp\{i\textstyle{\frac{1}{2}}(p'+p)(q'-q)-\textstyle{\frac{1}{4}}[(p'-p)^2+(q'-q)^2]\}\;,
\en
serves as a reproducing kernel for the functional Hilbert space
representation in the present case. The folding of two such overlap
functions leads to
\bn \int\<p'',q''|p,q\>\<p,q|p',q'\>\,dp\,dq/2\pi=\<p'',q''|p',q'\>\;,
\en
an expression which shows that the coherent state overlap function serves as
the ``$\delta$-function'' in the present representation although, of course,
in the present case it is a bounded, continuous function. In short, we learn
that the choice of sequential definition adopted to give meaning to the
formal phase space path integral can lead to a dramatic change of
representation and even of the meaning of the variables involved.
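The claims surrounding (9)--(12) can be probed numerically. The sketch below is an illustrative check, not part of the original argument: the Fock-space cutoff $N=60$ and the label values are arbitrary choices, the cutoff being ample for such small labels. It builds truncated matrices for $Q$ and $P$, constructs coherent states as in (9), and verifies the expectation values (10), the closed-form overlap (11), and the reproducing-kernel property (12) by brute-force quadrature.

```python
import numpy as np
from scipy.linalg import expm

# Truncated Fock-space representation; N = 60 is an assumed cutoff,
# ample for the small label values chosen below.
N = 60
a = np.diag(np.sqrt(np.arange(1, N)), 1)      # annihilation operator
Q = (a + a.T) / np.sqrt(2)                    # position operator
P = (a - a.T) / (1j * np.sqrt(2))             # momentum operator, [Q, P] = i
vac = np.zeros(N, complex)
vac[0] = 1.0                                  # ground state, (Q + iP)|0> = 0

def cs(p, q):
    """Coherent state |p,q> = exp(-iqP) exp(ipQ) |0>, as in (9)."""
    return expm(-1j * q * P) @ expm(1j * p * Q) @ vac

def overlap(pa, qa, pb, qb):
    """Closed-form overlap <pa,qa|pb,qb> of (11)."""
    return np.exp(0.5j * (pa + pb) * (qa - qb)
                  - 0.25 * ((pa - pb) ** 2 + (qa - qb) ** 2))

p1, q1, p2, q2 = 0.8, -1.3, -0.4, 0.6         # arbitrary labels

# (10): the labels are expectation values, not eigenvalues.
psi = cs(p1, q1)
exp_p = np.real(psi.conj() @ P @ psi)         # ~ p1
exp_q = np.real(psi.conj() @ Q @ psi)         # ~ q1

# (11): the closed form agrees with the operator construction.
err_overlap = abs(cs(p2, q2).conj() @ psi - overlap(p2, q2, p1, q1))

# (12): the overlap acts as its own "delta-function" (quadrature check).
s = np.linspace(-10.0, 10.0, 801)
Pg, Qg = np.meshgrid(s, s, indexing="ij")
h = s[1] - s[0]
fold = (overlap(p2, q2, Pg, Qg) * overlap(Pg, Qg, p1, q1)).sum() * h**2 / (2 * np.pi)
err_fold = abs(fold - overlap(p2, q2, p1, q1))

print(exp_p, exp_q, err_overlap, err_fold)
```

All four printed quantities match their targets to high accuracy, since the truncated matrices faithfully represent the low-lying Fock sector populated by these states.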
We conclude these remarks with the observation that if we formally
interchange the limit and integrations in (8) and write for the integrand
the form it assumes for continuous and differentiable paths, the
result has the formal expression (1), namely
\bn {\cal N}\int e^{i\textstyle\int[p{\dot q}-h(p,q)]\,dt}\,{\cal D} p\,{\cal D} q\;, \en
which is just the expression we started with! It is in this sense that
we assert that the present sequential definition is just as valid as the
one customarily chosen. Moreover, with the present understanding of the
sequential definition, there is absolutely no conflict between the meaning
of the variables $p$ and $q$ and the Heisenberg uncertainty principle; in
the present case, $p$ and $q$ denote expectation values in the coherent
states involved, and these may both be specified as general functions of
time $p(t)$ and $q(t)$, $t'\leq t\leq t''$.
It is clear to this author---but apparently unclear to many others---that
the interpretation of the formal path integral (13) in terms of paths $p(t)$
and $q(t)$ for which the meaning of the variables is that of {\it expectation
values} is far more acceptable than the one in which the meaning is that of
both sharp {\it position} and sharp {\it momentum} (eigen)values. Even if
one carries to the continuum the insight gained on the lattice for the usual
formulation, namely, that $p$ and $q$ are diagonalized {\it alternately} on
successive time slices, the result is that the continuum interpretation is
strictly not one for which $p$ and $q$ are {\it simultaneously} sharp but
one where $p$ and $q$ are alternately sharp at every ``other'' instant of
time---and of course when $p(q)$ is sharp then $q(p)$ is completely unknown!
This is the true physical meaning of the variables entering the putative
formal phase space path integral with the usual interpretation. How bizarre
that interpretation is when it is fully appreciated for what it is!
Contrast the interpretation just outlined with the one appropriate to the
alternative scenario in terms of canonical coherent states. In the case of
a lattice formulation of the phase space path integral in terms of coherent
states, $p$ and $q$ are specified at each time slice simultaneously and
interpreted as expectation values. This interpretation persists in the
continuum limit, and there is no logical conflict of that interpretation
in such a limit. Moreover, there is a symmetry in the interpretation and
usage of $p$ and $q$ inherent in the coherent state formulation that is
simply unavailable in the more traditional formulation.
One is almost tempted to assert that the usual
interpretation in terms of sharp eigenvalues is ``wrong'', because it cannot
be consistently maintained, while the interpretation in terms of expectation
values is ``right'', because it can be consistently maintained.
On the other hand, the community at large may not be ready to swallow such a
heretical statement, so perhaps it would be best if it was stricken from the
record! However, before completely striking it from the record, it may not be
inappropriate to offer additional evidence as food for thought.
\section{Wiener measure regularization}
We have accepted the fact that (13) is without mathematical meaning as it
stands. Some sort of regularization and removal of that regularization is
needed to give it meaning. There are many ways to do so, two of which have
been illustrated above. In this section we discuss quite a different form of
regularization.
Consider the expression \cite{daub}
\bn \lim_{\nu\rightarrow\infty}{\cal N}\int\exp\{i\textstyle\int[p{\dot q}-h(p,q)]\,dt\}
\,\exp[-(1/2\nu)\textstyle\int({\dot p}^2+{\dot q}^2)\,dt]\,{\cal D} p\,{\cal D}
q\;. \en
This expression differs from the usual one (13) by the presence of a damping
factor---a convergence factor---involving the time derivative of both $p$
and $q$. The result of interest is given in the limit that the parameter
$\nu\rightarrow\infty$. Note that when $\nu=\infty$, formally speaking, the usual
formal path integral (13) is recovered. Although (14) is written in the same
formal language as (13), it is in fact a profoundly different object:
(14) is intended to be a {\it regularized} form of (13). Admittedly, it
doesn't appear any better defined than the usual expression in its present
form; however, (14) can be given an alternative but equivalent formulation
when we group certain terms together. In particular, with a suitable
regrouping of terms (14) becomes
\bn \lim_{\nu\rightarrow\infty}(2\pi)\,e^{\nu T/2}\int e^{i\textstyle\int[p\,dq-h(p,q)\,dt]}
\,d\mu^\nu_W(p,q)\;. \en
In this expression $\mu^\nu_W$ denotes (pinned) Wiener measure on the two
dimensional plane expressed in Cartesian coordinates $(p,q)$. In addition,
$p(t)$ and $q(t)$, $t'\leq t\leq t''$, denote Brownian motion paths with
$\nu$ as the diffusion constant, and $\textstyle\int p\,dq$ denotes a stochastic
integral, which is needed because, although $p(t)$ and $q(t)$ are continuous
functions for all $\nu<\infty$, they are nowhere differentiable.
For convenience we adopt the Stratonovich (midpoint) definition of the
stochastic integral (which is equivalent to the It\^o definition in the
present case because $dp(t)dq(t)=0$ is a valid It\^o rule in these
coordinates). With those remarks the integral in (15) is a well defined
expression for each $\nu$ and one may ask the question whether the
indicated limit converges and if so whether that limit has anything to do
with a solution to the Schr\"odinger equation. For a dense set of
Hamiltonians the answer to both of these questions is {\it yes!}
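Both facts invoked above, namely that the It\^o and Stratonovich (midpoint) prescriptions agree because $dp(t)dq(t)=0$ in these coordinates, and that the quadratic variation $\textstyle\int dq(t)^2=\nu T$ signals nowhere-differentiable paths, can be illustrated with a crude discretization of independent Brownian paths (the step count, seed, and parameter values below are arbitrary choices):

```python
import numpy as np

# Discretized independent Brownian paths p(t), q(t) with diffusion
# constant nu on [0, T], using n equal steps.
rng = np.random.default_rng(7)
nu, T, n = 4.0, 1.0, 200_000
dt = T / n
dp = rng.normal(0.0, np.sqrt(nu * dt), n)
dq = rng.normal(0.0, np.sqrt(nu * dt), n)
p = np.concatenate(([0.0], np.cumsum(dp)))

ito = np.sum(p[:-1] * dq)                     # Ito sum for  int p dq
strat = np.sum(0.5 * (p[:-1] + p[1:]) * dq)   # Stratonovich (midpoint) sum

print(strat - ito)        # ~ 0: dp dq = 0 in these coordinates
print(np.sum(dq * dq))    # ~ nu * T: finite quadratic variation, the
                          #   hallmark of nowhere-differentiable paths
```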
However, before we relate this expression to the earlier discussion let us
take up the possible meaning of the variables $p$ and $q$ as they appear
in (15). Observe, as noted, that the expression is well defined as
it stands---indeed, it involves a {\it continuous time regularization}. Thus
if this expression is going to have something to do with quantum mechanics it
must be consistent to simultaneously specify both $p(t)$ and $q(t)$ for all
$t$ in the appropriate interval. This means that $p$ and $q$ {\it cannot}
have the meaning of sharp momentum and sharp position, respectively. On the
other hand, it would be possible for those variables to have the meaning of
expectation values which can be simultaneously given. It should thus come as
not too great a surprise that the continuous time regularization of a phase
space path integral with the help of a Wiener measure on the plane, in the
limit as the diffusion constant diverges, {\it automatically generates a
coherent state representation!}
In particular, with the Brownian paths pinned so that $p(t'')=p''$, $q(t'')
=q''$ and $p(t')=p'$, $q(t')=q'$, the resultant limit is equivalent to
\bn &&\hskip-1cm\<p'',q''|\,e^{-i{\cal H} T}\,|p',q'\>\nonumber\\
&&=\lim_{\nu\rightarrow\infty}(2\pi)\,e^{\nu T/2}\int e^{i\textstyle\int[p\,dq-h(p,q)\,dt]}\,d\mu^\nu_W(p,q)\;, \en
where, as implied by (16) itself, and consistent with the earlier notation,
\bn &&|p,q\>\equiv e^{-iqP}e^{ipQ}|0\>\;,\hskip1cm(Q+iP)|0\>=0\;,\\
&&\hskip.9cm{\cal H}\equiv\textstyle\int h(p,q)|p,q\>\<p,q|\,dp\,dq/2\pi \;. \en
In other words, the result of the Wiener measure regularized phase space
path integral, in the limit that the diffusion constant diverges, yields a
propagator in the coherent state representation as we had discussed earlier.
Here is an additional argument for favoring the interpretation of the formal
phase space path integral as really standing for the one expressed in terms
of coherent states rather than one that is internally inconsistent, namely,
one interpreted in terms of sharp eigenvalues for the position and momentum.
If one accepts the idea that the formal expression (13) may be best
interpreted in terms of coherent states rather than sharp Schr\"odinger
eigenstates, one may be worried that many previous calculations are
incorrect. There is no need to worry. All previous calculations which are
implicitly consistent with a lattice limit such as in (8) are perfectly
correct. Our discussion is not addressed to revising the evaluation of
properly interpreted path integrals but rather to stressing the
consistency---or possible inconsistency---of interpreting the continuum
version of the phase space path integral. With the coherent state
interpretation one is completely justified in regarding the paths as
functions defined for a continuous time parameter, and indeed within the
sequence where $\nu<\infty$, as continuous functions of time. This is a
{\it conceptual} difference with respect to how the interpretation in the
usual formulation can be taken. If there is ever any hope of defining path
integrals rigorously as integrals over a set of paths (functions of
time), then it is {\it essential} to give up the notion that the paths
involved are sharp-value paths and to replace it with another interpretation,
of which the expectation-value interpretation is a completely satisfactory example.
In point of fact, the present author feels that the rigorous definition (16)
in terms of a limit of a sequence of {\it completely unambiguous} path
integrals is as close as one is likely to come to a rigorous definition of a
continuous time path integral in terms of genuine (i.e., countably additive)
measures. One can hardly ask for an expression without some sort of
regularization. For example, even the one dimensional integral
\bn \int_{-\infty}^\infty e^{iy^2}\,dy \en
is effectively undefined since it is only conditionally convergent. It, too,
needs
a rule to overcome this ambiguity, and one rule is to define it as
\bn \lim_{\nu\rightarrow\infty}\int_{-\infty}^\infty e^{iy^2-y^2/\nu}\,dy\;. \en
The indicated sequence exists and the limit converges, but it has required
the use of a convergence factor; one could hardly expect a real time path
integral to require anything less!
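For this one-dimensional example the rule (20) can be made completely explicit: for finite $\nu$ the damped integral equals $\sqrt{\pi/(1/\nu-i)}$, which tends to the Fresnel value $\sqrt{\pi}\,e^{i\pi/4}$ as $\nu\rightarrow\infty$. A brief numerical sketch (the value of $\nu$ and the quadrature grid are arbitrary choices):

```python
import numpy as np

# One-dimensional analogue of the convergence-factor rule in (20).
nu = 25.0
h = 0.001
y = np.arange(-40.0, 40.0, h)
numeric = np.sum(np.exp(1j * y**2 - y**2 / nu)) * h      # quadrature

closed = np.sqrt(np.pi / (1.0 / nu - 1j))                # Gaussian formula, Re a > 0
fresnel = np.sqrt(np.pi) * np.exp(1j * np.pi / 4.0)      # the nu -> infinity limit

print(abs(numeric - closed))                             # quadrature matches closed form
print(abs(np.sqrt(np.pi / (1.0 / 1e9 - 1j)) - fresnel))  # limit is approached as nu grows
```

Without the damping term the Riemann sum would not settle down as the grid is enlarged, which is the numerical face of conditional convergence.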
\subsection{Generalization to non-flat phase spaces}
The point we are making here naturally leads to another line of thought on
which we shall comment but not develop since it has been adequately treated
elsewhere. If we are dealing with a conditionally convergent integral, then
it is possible to obtain fundamentally different answers by choosing a
qualitatively different form of regularization. In particular, from the
point of view of regularization, why was it necessary for us to choose a
Brownian motion regularization on a phase space that constitutes a {\it flat}
two-dimensional space? Why not consider a Brownian motion regularization on
a phase space that is a curved two-dimensional manifold, say a sphere or a
pseudo-sphere, or even a space of non-constant curvature?
Brownian motion regularization on such non-flat spaces has indeed been
investigated, and the result is most interesting. For a sphere (of the right
radius) the result of the limit of the regulated phase space path integrals
over continuous paths leads to a quantization in which the kinematical
variables are spin variables, i.e., operators that obey the commutation
relations of the Lie algebra of the group SU(2) \cite{daub}. If a
pseudo-sphere is used instead, the result for the kinematical variables is
that for the Lie algebra of the group SU(1,1) (or the ``$ax+b$'' group)
\cite{daub2,kla3}. Both of these cases lead to group related coherent states
and a representation of the propagator in terms of those states. On the other
hand, for Brownian motion on a space without any special symmetry, the result
again leads to coherent states \cite{ali}, but these are coherent states of
a more general kind than traditionally considered since they are {\it not}
associated with any group!
The moral of this extended story is that phase space path integrals of an
exceedingly general kind, appropriate to very general kinematical variables,
can be rigorously developed with the aid of a Wiener measure regularization.
Each such construction involves coherent states wherein the variables are
{\it never} eigenvalues of some self-adjoint operator but are more typically
associated with expectation values of suitable operators, for which there is
no conceptual difficulty in their simultaneous specification.
desirable state of affairs has arisen by combining the {\it symplectic}
geometry of the classical theory with a {\it Riemannian} geometry needed to
carry the {\it Brownian motion} that forms the regularization.
If we may be allowed a single phrase of summary, then it is no exaggeration
to claim \cite{kla5} that, when properly interpreted, \vskip.3cm
\hskip1cm QUANTIZATION$\;=\:$GEOMETRY$\;+\;$PROBABILITY
\section{Acknowledgments}
It is a pleasure to thank the organizers of the conference in Kuala Lumpur,
especially Prof. S.C. Lim, for their fine organization and the splendid
meeting that resulted.
\section[Introduction]{Introduction} \label{sec:intro}
Time-varying parameter (TVP) models are
widely used in time series analysis, because of their flexibility and ability to capture gradual changes in the model \comment{parameters}
over time. The popularity of TVP models in macroeconomics and finance is based on the fact that, in most applications, the influence of certain predictors on the outcome variables varies over time \citep{pri:tim, dan-hal:pre, bel-etal:hie_tv}. TVP models, while capable of reproducing salient features of the data in a very effective way, present a concrete risk of overfitting, as often only a small subset of the parameters are time-varying. Hence, in the last decade, there has been a growing need for models and methods able to discriminate between time-varying and static parameters in TVP models.
A key contribution in this direction was the introduction of the non-centered parameterization of TVP models in \cite{fru-wag:sto}, which recasts the problem of variance selection and shrinkage in terms of variable selection, thus allowing any tool used to this end in multiple regression models to be used to perform selection or shrinkage of variances.
\cite{fru-wag:sto} employ a spike and slab prior, while continuous shrinkage priors have been utilised as a regularization alternative in, e.g., \cite{bel-etal:hie_tv},
\cite{bit-fru:ach} and \cite{cad-etal:tri}. For an excellent review of shrinkage priors, with a particular focus on high dimensional regression, the reader is directed to \cite{bha-etal:las}.
In this paper, we describe the \proglang{R} package \pkg{shrinkTVP} \citep{kna-etal:shr} for Bayesian TVP models with shrinkage. The package is available under the general public license (GPL $\geq$ 2) from the Comprehensive R
Archive Network (CRAN) at \url{https://cran.r-project.org/web/packages/shrinkTVP}.
The package efficiently implements recent developments in the Bayesian literature, in particular the ones
presented in \cite{bit-fru:ach} and \cite{cad-etal:tri}. The computationally intensive Markov chain Monte Carlo (MCMC) algorithms
in the package are written in \proglang{C++} and interfaced with \proglang{R} \citep{R} via the \pkg{Rcpp} \citep{edd-bal:ext} and the \pkg{RcppArmadillo} \citep{edd-san:acc} packages.
This approach combines the ease-of-use of R and its underlying functional programming paradigm with the computational speed of \proglang{C++}.
\commentred{The package \pkg{shrinkTVP} is designed to provide an easy entry point for fitting TVP models with shrinkage priors, while also giving more experienced users the option to adapt the model to their needs. This is achieved by providing a robust baseline model that can be estimated by only passing the data, while also allowing the user to specify more advanced options.
Additionally, the \pkg{shrinkTVP} package is designed to
ensure compatibility with well-known time series formats and to complement other packages. As input objects, time series from the \proglang{R} packages \pkg{zoo} \citep{zei:zoo} and \pkg{xts} \citep{xts} as well as time series formats like \code{ts} are supported.
Estimation output is compatible with the popular \proglang{R} package \pkg{coda} \citep{plu:cod} which can be easily applied for convergence diagnostic tests, among others.
Coupled with intuitive summary and plot methods, \pkg{shrinkTVP} is a package that is easy to use while remaining highly flexible. }
\commentred{\pkg{shrinkTVP} is, to our knowledge, the only \proglang{R} package that combines TVP models with shrinkage priors on the time-varying components in a Bayesian framework.
Several \proglang{R} packages deal with statistical inference for various specific classes of state space models, of which TVP models are a special case.
The most popular \proglang{R} package in this field is \pkg{dlm} \citep{pet:anr}, a comprehensive package providing routines for maximum likelihood estimation, Kalman filtering and smoothing, and Bayesian analysis for dynamic linear models (DLMs).
The accompanying book \citep{pet-etal:dyn} introduces the methodology and many \proglang{R} code examples.
As of now, priors are not designed to encourage shrinkage and \pkg{shrinkTVP} complements \pkg{dlm} in this regard.}
\commentred{The \proglang{R} package \pkg{bvarsv} \citep{kru:bva} implements Bayesian inference for vector autoregressive (VAR) models with time-varying parameters (TVP-VAR)
and stochastic volatility for multivariate time series as introduced by \citet{pri:tim}.
We refer to \citet{del-pro:tim2} for details on the MCMC algorithm and a later correction of the original scheme.
In addition to the very user-friendly estimation function \code{bvar.sv.tvp}, \pkg{bvarsv} provides posterior predictive distributions and enables impulse response analysis. The package includes the macroeconomic data set analysed in \citet{pri:tim} as an example data set, \code{usmacro.update}, which we use in our predictive exercise in Section~\ref{sec:usmacro} to showcase the effect of introducing shrinkage priors on time-varying parameters.}
\commentred{Additional packages emerged very recently.
The \proglang{R} package \pkg{tvReg} \citep{tvreg} presents a user friendly compendium of many common linear TVP models, including standard linear regression as well as autoregressive, seemingly unrelated equation and VAR models.
Estimation is based on kernel smoothing techniques.
For an illustrative application, a TVP-VAR(4) model is fitted
to the \code{usmacro.update} data set mentioned above, using the function \code{tvVAR}.}
\commentred{The \proglang{R} package \pkg{walker} \citep{walker} facilitates the estimation of DLMs and generalized DLMs using MCMC algorithms provided by \proglang{Stan} \citep{carpenter2017stan}. For inference, the importance sampling method of \citet{ISMCMC} is implemented within a Hamiltonian Monte Carlo framework.}
\commentred{The \proglang{R} package \pkg{bsts} \citep{sco:bst} performs Bayesian analysis for structural time series models, a highly relevant class of state space models including DLMs. \pkg{bsts} is a very powerful package that allows
shrinkage for static regression coefficients using spike and slab priors.
However, as for the other packages mentioned above, variation of the dynamic components is not regularized in \pkg{bsts}.}
\commentred{A main contribution of the package \pkg{shrinkTVP} is bridging the active field of \proglang{R} packages for state space models with the even more active field of \proglang{R} packages that provide regularization and shrinkage methods for common regression type models.
Among others, \pkg{ncvreg} \citep{ncvreg} is useful for fitting standard penalized regression estimators,
\pkg{glmnet} \citep{sim-etal:reg} allows elastic-net regularization for a variety of models,
\pkg{horseshoe} \citep{pas:hor} implements the horseshoe prior, while \pkg{shrink} \citep{dun-etal:shr} provides various shrinkage methods for linear, generalized linear, and Cox regression models. \pkg{biglasso} \citep{zen:big} aims at very fast lasso-type regularization for high-dimensional linear regression.
Recent \proglang{R} packages include \pkg{NormalBetaPrime} \citep{bai:nor}
for Bayesian univariate
and \pkg{MBSP} \citep{MBSP} for Bayesian multivariate linear regression analysis using, respectively, the normal-beta prime and the three parameter beta normal family for inducing shrinkage. The \proglang{R} package \pkg{monomvn} \citep{monomvn} employs a normal-gamma prior in the specific situation
of Bayesian inference for multivariate normal and Student-$t$ data with a monotone pattern of missing data.
}
\commentred{The remainder of the paper is organized as follows.
Section~\ref{sec:model} briefly introduces TVP \comment{models} and normal-gamma-gamma shrinkage priors, and describes the MCMC algorithms for posterior simulation. The package \pkg{shrinkTVP} is introduced in Section~\ref{sec:pkgshrinkTVP}. In particular, we illustrate how to run the MCMC sampler using the main function \code{shrinkTVP}, how to choose a specific model, and how to conduct posterior inference using the return object of \code{shrinkTVP}. Section~\ref{sec:LPDS} explains how to assess model performance by calculating log-predictive density scores (LPDSs), and how to use LPDSs to compare the predictive performances of different priors. This is illustrated using the \code{usmacro.update} data set.
Finally, Section~\ref{sec:conclusions} concludes the paper.}
\section{Model specification and estimation } \label{sec:model}
\subsection{TVP models}
Let us recall the state space form of a TVP model. For $t = 1, \ldots, T$, we have that
\begin{equation}
\label{eq:centeredpar}
\begin{aligned}
&y_{t} = \bm x_t \bm {\beta_{t}} + \epsilon_{t} , \qquad
\epsilon_{t} \sim \mathcal N (0,\sigma^2_t), \\
& \bm {\beta}_{t} = \bm {\beta}_{t-1} + \bm w_{t}, \qquad \bm w_{t} \sim \mathcal N_d (0, \QQ),
\end{aligned}
\end{equation}
where $y_t$ is a univariate response variable and $\bm x_t = (x_{t 1}, x_{t 2}, \ldots, x_{t d})$ is a $d$-dimensional row vector containing the regressors at time $t$, with $x_{t 1}$ corresponding to the intercept.
For simplicity, we assume here that $\QQ = \text{Diag}(\theta_1, \ldots, \theta_d)$ \comment{is a diagonal matrix}, implying that the state innovations are conditionally independent.
Moreover, we assume the initial value follows a normal distribution, i.e., $\bm \beta_{0} \sim \mathcal N_d (\bm \beta, \QQ)$\comment{, with initial mean
$\bm \beta = (\beta_1, \ldots, \beta_d)$}.
Model (\ref{eq:centeredpar}) can be rewritten equivalently in the non-centered parametrization as
\begin{equation}
\label{eq:noncenteredpar}
\begin{aligned}
&y_t= \bm x_t \bm \beta + \bm x_t \text{Diag}(\sqrt{\theta}_1, \ldots, \sqrt{\theta}_d)
\tilde{\bm \beta}_{t} + \epsilon_t, \quad \epsilon_t \sim \mathcal N (0,\sigma^2_t),\\
&\tilde{\bm \beta}_{t} =\tilde {\bm \beta}_{t-1} + \tilde{\bm u}_{t}, \qquad \tilde{\bm u}_{t} \sim \mathcal N_d (0, I_d),
\end{aligned}
\end{equation}
with $ \tilde{\bm \beta}_{0} \sim \mathcal{N}_d (\bm 0, I_d)$, where $I_d$ is the $d$-dimensional identity matrix.
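The algebraic equivalence of \eqref{eq:centeredpar} and \eqref{eq:noncenteredpar} is easily verified by simulation. The sketch below (dimensions, coefficient values, and the homoscedastic error scale are arbitrary illustrative choices) generates data from the non-centered form and reconstructs identical observations through the centered states; note that setting $\sqrt{\theta_j}=0$ for one coefficient makes it static, which is precisely the feature the shrinkage priors of Section~\ref{sec:priors} are designed to detect.

```python
import numpy as np

# Arbitrary illustrative choices of dimensions, coefficients, error scale.
rng = np.random.default_rng(1)
T, d = 200, 3
beta = np.array([1.0, -0.5, 0.0])             # initial means beta_j
sqrt_theta = np.array([0.2, 0.0, 0.1])        # sqrt(theta_2) = 0: static coefficient
X = rng.normal(size=(T, d))
X[:, 0] = 1.0                                 # intercept column

# Standardized random-walk states beta~_0, ..., beta~_T (non-centered form)
beta_tilde = np.cumsum(rng.normal(size=(T + 1, d)), axis=0)
eps = rng.normal(scale=0.5, size=T)           # homoscedastic errors

# Non-centered observation equation
y_nc = X @ beta + np.einsum("td,d,td->t", X, sqrt_theta, beta_tilde[1:]) + eps

# Equivalent centered states and observation equation
beta_t = beta + sqrt_theta * beta_tilde[1:]
y_c = np.einsum("td,td->t", X, beta_t) + eps

print(np.max(np.abs(y_nc - y_c)))             # ~ 0: the two forms coincide
```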
\pkg{shrinkTVP} is capable of modelling the observation error both homoscedastically, i.e., $\sigma^2_t \equiv \sigma^2$ for all $t = 1, \ldots, T$ and heteroscedastically, via a stochastic volatility \comment{(SV)} specification. In the latter case, the log-volatility $h_t = \log \sigma^2_t$ follows an AR(1) model \citep{jac-etal:bayJBES,kas-fru:anc, kas:dea}. More specifically,
\begin{eqnarray}
\label{eq:svht}
h_t | h_{t-1}, \mu, \phi, \sigma_\eta^2 \sim \mathcal{N} \left(\mu +\phi ( h_{t-1}-\mu),\sigma^2_\eta \right),
\end{eqnarray}
with initial state $h_0 \sim \mathcal N \left(\mu, \sigma_\eta^2/(1-\phi^2) \right)$.
The stochastic volatility model on the errors can prevent the detection of spurious variations in the TVP coefficients \citep{nak:tim, sim:com} by capturing some of the variability in the error term.
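To make the SV specification concrete, a few lines suffice to simulate the log-volatility recursion \eqref{eq:svht} from its stationary distribution (the parameter values are arbitrary illustrative choices):

```python
import numpy as np

# Simulate the log-volatility AR(1); parameters are illustrative only.
rng = np.random.default_rng(3)
T, mu, phi, sig2_eta = 5000, -1.0, 0.95, 0.04

h = np.empty(T + 1)
h[0] = rng.normal(mu, np.sqrt(sig2_eta / (1.0 - phi**2)))   # stationary start
for t in range(1, T + 1):
    h[t] = rng.normal(mu + phi * (h[t - 1] - mu), np.sqrt(sig2_eta))

eps = rng.normal(0.0, np.exp(h[1:] / 2.0))    # heteroscedastic errors epsilon_t
print(h.mean())                               # fluctuates around mu
```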
\subsection{Prior Specification} \label{sec:priors}
\subsubsection{Shrinkage priors on variances and model parameters}
We place conditionally independent \revised{normal-gamma-gamma (NGG) priors \citep{cad-etal:tri,gri-bro:hie}}, both on the standard deviations of the innovations, that is the $\sqrt{\theta_j}$'s,
and on the means of the initial value $\beta_j$, for $j = 1, \ldots, d$. Note that, in the case of the standard deviations,
this can equivalently be seen as a triple gamma prior on the innovation variances $\theta_j$, for $j = 1, \ldots, d$.
\revised{The NGG can be represented as a conditionally normal distribution, where the component specific variance is itself a compound probability distribution resulting from two gamma distributions. In this representation, it looks as follows
\begin{eqnarray}
\sqrt{\theta}_j|\xi^{2}_j \sim \Normal{0,\xi_j^{2}}, \qquad \xi_j^2|a^\xi,\kappa_j^2 \sim \Gammad{a^\xi,\frac{a^\xi \kappa_j^2}{2}}, \quad
\kappa_j^2| c^\xi, \kappa^2_B \sim \Gammad{c^\xi, \frac{c^\xi}{\kappa^2_B}} \label{eq:normalTG1}\\
\beta_{j}|\tau^2_j \sim \Normal{0,\tau^2_j}, \qquad \tau_j^2|a^\tau,\lambda_j^2 \sim \Gammad{a^\tau,\frac{a^\tau \lambda_j^2}{2}}, \quad \lambda_j^2| c^\tau, \lambda^2_B \sim \Gammad{c^\tau, \frac{c^\tau}{\lambda^2_B}}. \label{eq:normalTG2}
\end{eqnarray}
Letting $c^\xi$ and $c^\tau$ go to infinity results in a normal-gamma (NG) prior \citep{gri-bro:inf} on the $\sqrt{\theta}_j$'s and $\beta_j$'s. It has a representation as a conditionally normal distribution, with the component specific variance following a gamma distribution, that is
\begin{eqnarray}
\sqrt{\theta}_j|\xi^{2}_j \sim \Normal{0,\xi_j^{2}}, \qquad \xi_j^2|a^\xi,\kappa_B^2 \sim \Gammad{a^\xi,\frac{a^\xi \kappa_B^2}{2}}, \label{eq:normalDG1}\\
\beta_{j}|\tau^2_j \sim \Normal{0,\tau^2_j}, \qquad \tau_j^2|a^\tau,\lambda_B^2 \sim \Gammad{a^\tau,\frac{a^\tau \lambda_B^2}{2}}. \label{eq:normalDG2}
\end{eqnarray}
From here, letting $a^\xi$ and $a^\tau$ go to infinity yields a normal prior with fixed variance, also known as ridge regression:
\begin{eqnarray}
\sqrt{\theta}_j|\kappa_B^2 \sim \Normal{0,\frac{2}{\kappa^2_B}}, \label{eq:normalRidge1}\\
\beta_{j}|\lambda_B^2 \sim \Normal{0,\frac{2}{\lambda_B^2}}. \label{eq:normalRidge2}
\end{eqnarray}
We refer to $a^\xi$ and $a^\tau$ as the pole parameters, as marginally more mass is placed around zero as they become smaller. $c^\xi$ and $c^\tau$ are referred to as the tail parameters, as they control the amount of mass in the tails of the distribution, with smaller values equating to heavier tails. Finally, the parameters $\kappa_B^2$ and $\lambda_B^2$ are dubbed the {global shrinkage parameters}, as they influence how strongly {all} parameters are pulled to zero. The larger $\kappa_B^2$ and $\lambda_B^2$, the stronger \comment{this effect}.}
\revised{One of the key benefits of the NGG prior is that many interesting shrinkage priors are contained within it as special or limiting cases. Beyond the NG prior mentioned above, two such cases are the horseshoe prior \citep{car-etal:han} and the Bayesian Lasso \citep{par-cas:bay}. The former results from an NGG prior with the pole and tail parameters equal to $0.5$, while the latter is a special case of the NG prior with a pole parameter fixed to one. As the connection between the NGG prior and the horseshoe prior may not be entirely obvious from the parameterization presented here, the interested reader is referred to \cite{cad-etal:tri} for details.}
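The hierarchy \eqref{eq:normalTG1} is straightforward to simulate forward, which also makes the special cases tangible: with $a^\xi=c^\xi=1/2$ the draws below follow a horseshoe-type prior, and both the extra mass near zero and the heavier tails relative to the ridge limit \eqref{eq:normalRidge1} at the same global level can be read off directly (the sample size and the value of $\kappa_B^2$ are arbitrary choices):

```python
import numpy as np

# Forward draws from the NGG hierarchy with the horseshoe-type choice
# a^xi = c^xi = 1/2; sample size and kappa_B^2 are arbitrary.
rng = np.random.default_rng(0)
n, a_xi, c_xi, kappa2_B = 200_000, 0.5, 0.5, 2.0

kappa2 = rng.gamma(c_xi, kappa2_B / c_xi, n)     # kappa_j^2 ~ G(c, c / kappa_B^2)
xi2 = rng.gamma(a_xi, 2.0 / (a_xi * kappa2))     # xi_j^2 ~ G(a, a kappa_j^2 / 2)
ngg = rng.normal(0.0, np.sqrt(xi2))              # sqrt(theta)_j | xi_j^2

ridge = rng.normal(0.0, np.sqrt(2.0 / kappa2_B), n)  # limiting ridge case

print(np.mean(np.abs(ngg) < 0.01), np.mean(np.abs(ridge) < 0.01))          # pole
print(np.quantile(np.abs(ngg), 0.999), np.quantile(np.abs(ridge), 0.999))  # tails
```

The NGG draws place visibly more mass in a small neighbourhood of zero and, at the same time, produce far larger extreme quantiles than the ridge draws, which is exactly the pole-and-tail behaviour described above.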
The parameters $a^\xi$, $a^\tau$, $c^\xi$, $c^\tau$, $\kappa_B^2$ and $\lambda_B^2$ can be learned from the data through appropriate prior distributions. Results from \cite{cad-etal:tri} motivate the use of different distributions for these parameters under the NGG and NG prior. In the NGG case, the scaled global shrinkage parameters conditionally follow F distributions, depending on their respective pole and tail parameters:
\begin{align} \label{eq:NGG_glob_pri}
\frac{\kappa_B^2}{2}|a^\xi, c^\xi \sim F (2a^\xi , 2c^\xi), \qquad \frac{\lambda_B^2}{2}|a^\tau, c^\tau \sim F (2a^\tau , 2c^\tau).
\end{align}
The scaled tail and pole parameters, in turn, follow beta distributions:
\begin{eqnarray} \label{eq:NGG_tail_pole_pri}
2a^\xi \sim \mathcal{B}\left(\alpha_{a^\xi}, \beta_{a^\xi}\right), \qquad 2c^\xi \sim \mathcal{B}\left(\alpha_{c^\xi}, \beta_{c^\xi}\right), \\
2a^\tau \sim \mathcal{B}\left(\alpha_{a^\tau}, \beta_{a^\tau}\right), \qquad 2c^\tau \sim \mathcal{B}\left(\alpha_{c^\tau}, \beta_{c^\tau}\right).
\end{eqnarray}
These priors are chosen as they imply a uniform prior on a suitably defined model size, see \cite{cad-etal:tri} for details.
In the NG case the global shrinkage parameters follow independent gamma distributions:
\begin{align} \label{eq:NG_glob_pri}
\kappa_B^2 \sim \mathcal G (d_1, d_2), \qquad \lambda_B^2 \sim \mathcal G (e_1, e_2).
\end{align}
In order to learn the pole parameters in the NG case, we generalize the approach taken in \cite{bit-fru:ach} and place the following gamma distributions as priors:
\begin{align} \label{eq:equNG02}
a^\xi\sim \mathcal G(\alpha_{a^\xi} , \alpha_{a^\xi}\beta_{a^\xi}), \qquad
a^\tau \sim \mathcal G(\alpha_{a^\tau} , \alpha_{a^\tau}\beta_{a^\tau}),
\end{align}
which correspond to the exponential priors used in \cite{bit-fru:ach} when \revised{$\alpha_{a^\xi}=1$ and $\alpha_{a^\tau}=1$. The parameters $\alpha_{a^\xi}$ and $\alpha_{a^\tau}$} act as degrees of freedom and allow the prior to be bounded away from zero.
\subsubsection{Prior on the volatility parameter}
In the homoscedastic case we employ a hierarchical prior, \comment{where the scale of an inverse gamma prior for $\sigma^2$ follows a gamma distribution}, that is,
\begin{eqnarray} \label{eq:priorsigma}
\sigma^2|C_0 \sim \Gammainv{c_0,C_0}, \qquad C_0 \sim \Gammad{g_0,G_0},
\end{eqnarray}
with hyperparameters $c_0$, $g_0$, and $G_0$.
In the case of stochastic volatility, the priors on the parameters $\mu$, $\phi$ and $\sigma^2_\eta$ in Equation~\eqref{eq:svht} are chosen as in \citet{kas-fru:anc}, that is
\begin{eqnarray} \label{eq:volpriors}
\mu \sim \mathcal{N}( b_\mu, B_\mu ), \quad \dfrac{\phi +1 }{2} \sim \mathcal{B}(\aphi, \bphi), \quad \sigma^2_\eta \sim \mathcal{G}(1/2, 1/2 \Bsv ),
\end{eqnarray}
\comment{with hyperparameters $b_\mu, B_\mu, \aphi, \bphi,$ and $\Bsv$.}
\subsection{MCMC sampling algorithm}
\label{sec:MCMC}
The package \pkg{shrinkTVP} implements an MCMC Gibbs sampling algorithm with Metropolis-Hastings steps to obtain draws from the posterior distribution of the model parameters.
Here, we roughly sketch the sampling algorithm and refer the interested reader to \cite{bit-fru:ach} \revised{and \cite{cad-etal:tri}} for further details.
\newpage
\commentred{\begin{alg} \label{facsvalg}
\mbox{\rm Gibbs Sampling Algorithm}
\begin{enumerate} \itemsep 0mm
\item[\mbox{\rm 1.}] \mbox{\rm Sample the latent states} $\tilde \betav =( \tilde \betav_0, \ldots, \tilde \betav_T)$ \mbox{\rm in the non-centered parametrization from a}\\
\mbox{\rm multivariate normal distribution;}
\item[\mbox{\rm 2.}] \mbox{\rm Sample jointly} $\beta_1, \dots,\beta_d,$ \mbox{\rm and} $\sqrt{\theta_1},\dots,\sqrt{\theta_d}$ \mbox{\rm in the non-centered parametrization from}\\
\mbox{\rm a multivariate normal distribution;}
\item[\mbox{\rm 3.}] \mbox{\rm Perform an ancillarity-sufficiency interweaving step and redraw each} $\beta_1, \dots,\beta_d$ \mbox{\rm from a}\\
\mbox{\rm normal distribution and each} ${\theta_1},\dots,{\theta_d}$ \mbox{\rm from a generalized inverse Gaussian distribution}\\
\mbox{\rm using \pkg{GIGrvg} \citep{hoe-ley:gig};}
\item[\mbox{\rm 4.}] \revised{ \mbox{\rm Sample the prior variances} $\xi^2_1, \dots \xi^2_d$ \mbox{\rm and} $\tau^2_1, \dots \tau^2_d$ \mbox{\rm and the component specific hyper-} \\
\mbox{\rm parameters. Sample (where required) the pole, tail and global shrinkage parameters.} \\
\mbox{\rm In the NGG case, this is done by employing steps (c) - (f) from Algorithm 1 in} \\
\mbox{\rm \cite{cad-etal:tri}. In the NG case steps (d) and (e) from Algorithm 1 in } \\
\mbox{\rm \cite{bit-fru:ach} are used. In the ridge regression case simply}\\
\mbox{\rm set} $\xi^2_j = 2/\kappa^2_B$ \mbox{\rm and} $\tau^2_j = 2/\lambda^2_B$, \mbox{\rm for} $j = 1, \dots, d$.
}
\item[\mbox{\rm 5.}] \mbox{\rm Sample the error variance} $\sigma^2$ \mbox{\rm from an inverse gamma distribution in the homoscedastic}
\mbox{\rm case or, in the SV case, sample the level} $\mu$, \mbox{\rm the persistence} $\phi$,
\mbox{\rm the volatility of the vola-}\\ \mbox{\rm tility} $\sigma^2_{\eta}$ \mbox{\rm and the log-volatilities} $\bm h= (h_0, \ldots, h_T)$
\mbox{\rm using \pkg{stochvol} \citep{kas:dea}.}
\end{enumerate}
\end{alg}}
\revised{Step 4 presents a fork in the algorithm, as different parameterizations are used in the NGG and NG cases in order to improve mixing. For details on the exact parameterization used in the NGG case, see \cite{cad-etal:tri}. Additionally, not all sampling steps are performed in all prior setups. If, for example, the user has specified that $\kappa_B^2$ should not be learned from the data, then this step is not executed.}
One key feature of the algorithm is the joint sampling of the time-varying parameters $\tilde{\bm \beta}_t$, for $t=0, \ldots, T$ in step 1 of Algorithm~\ref{facsvalg}. We employ the procedure described in
\cite{mcc-etal:sim} which exploits the sparse, block tri-diagonal structure of the precision matrix of the full conditional distribution of
\comment{$\tilde \betav =( \tilde \betav_0, \ldots, \tilde \betav_T)$},
to speed up computations.
Moreover, as described in \cite{bit-fru:ach}, in step 3 we make use of the ancillarity-sufficiency interweaving strategy (ASIS) introduced by \citet{yu-men:cen}. ASIS is well known to improve mixing by sampling certain parameters both in the centered and non-centered parameterization.
This strategy has been successfully applied to univariate SV models \citep{kas-fru:anc}, multivariate factor SV models \citep{kas-etal:eff} and dynamic linear state space models \citep{sim-etal:int}.
\revised{
\paragraph{Adaptive Metropolis-within-Gibbs}
For the pole and tail parameters, no full conditionals exist and a Metropolis-Hastings step has to be performed. To improve mixing, \pkg{shrinkTVP} supports adaptive Metropolis-within-Gibbs as in \cite{rob-ros:exa}. The algorithm works as follows. For each parameter $i$ that is being learned from the data, let $s_i$ represent the standard deviation of the proposal distribution. After the $n_i$-th batch of $m_i$ iterations, update $s_i$ according to the following rule:
\begin{itemize}
\itemsep -1mm
\item increase the log of $s_i$ by $\min(c_i, n_i^{-1/2})$ if the acceptance rate of the previous batch was above $d_i$ or
\item decrease the log of $s_i$ by $\min(c_i, n_i^{-1/2})$ if the acceptance rate of the previous batch was below $d_i$.
\end{itemize}
The starting value of $s_i$, as well as $m_i$, $c_i$ and $d_i$, can all be set by the user. Additionally, if adaptive Metropolis-within-Gibbs is not desired, it can be switched off, and a simple Metropolis-Hastings step will be performed instead.
}
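As a rough sketch of one such adaptation step, consider the following Python illustration of the generic Roberts and Rosenthal rule; the values of $c_i$ and $d_i$ used below are illustrative, not the package defaults:

```python
import math

def adapt_proposal_sd(s, n_batch, accept_rate, c=0.01, d=0.44):
    """One adaptive Metropolis-within-Gibbs scale update in the spirit of
    Roberts and Rosenthal: after the n-th batch of iterations, nudge
    log(s) up or down by min(c, n**(-1/2)), depending on whether the
    batch acceptance rate was above or below the target rate d."""
    delta = min(c, n_batch ** -0.5)
    if accept_rate > d:
        return s * math.exp(delta)   # too many acceptances: widen proposals
    return s * math.exp(-delta)      # too few acceptances: narrow proposals

s = 1.0
s = adapt_proposal_sd(s, n_batch=1, accept_rate=0.9)  # widened
s = adapt_proposal_sd(s, n_batch=2, accept_rate=0.1)  # narrowed again
```

Since the adaptation amount shrinks with the batch index, the proposal scales stabilize as the sampler proceeds.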
\section[The shrinkTVP package]{The \pkg{shrinkTVP} package}
\label{sec:pkgshrinkTVP}
\subsection{Running the model}
The core function of the package \pkg{shrinkTVP} is the function \code{shrinkTVP}, which serves as an R-wrapper for the actual sampler coded in \proglang{C++}. The function works out-of-the-box, meaning that estimation can be performed with minimal user input. With default settings, the TVP model in \comment{Equation}~\eqref{eq:centeredpar} is estimated in a Bayesian fashion with \revised{the NG prior defined in equations~\eqref{eq:normalDG1}, \eqref{eq:normalDG2}, \eqref{eq:NG_glob_pri} and \eqref{eq:equNG02}} with the following choice for the hyperparameters: $d_1 = d_2 = e_1 = e_2 = 0.001$, $\alpha_{a^\xi}=\alpha_{a^\tau}=5$ and $\beta_{a^\xi}=\beta_{a^\tau}=10$, \comment{implying a prior mean of $\Ew{a^\xi}= \Ew{a^\tau}=0.1$}.
The error is assumed to be homoscedastic, with prior defined in \comment{Equation}~\eqref{eq:priorsigma} and hyperparameters $c_0 = 2.5$, $g_0 = 5$, and $G_0 = g_0/(c_0 - 1)$.
The only compulsory argument is an object of class ``formula'', which most users will be familiar with (see, for example, the use in the function \code{lm} in the package \pkg{stats} \citep{R}). The second argument is an optional data frame, containing the response variable and the covariates. Exemplary usage of this function is given in the code snippet below, along with the default output.
All code was run on a personal computer with an Intel i5-8350U CPU.
\begin{CodeChunk}
\begin{CodeInput}
R> library("shrinkTVP")
R>
R> set.seed(123)
R> sim <- simTVP(theta = c(0.2, 0, 0), beta_mean = c(1.5, -0.3, 0))
R> data <- sim$data
R> res <- shrinkTVP(y ~ x1 + x2, data = data)
\end{CodeInput}
\begin{CodeOutput}
[----|----|----|----|----|----|----|----|----|----|
**************************************************|
Timing (elapsed): 3.403 seconds.
4408 iterations per second.
Converting results to coda objects and summarizing draws... Done!
\end{CodeOutput}
\end{CodeChunk}
Note that the data in the example is generated by the function \code{simTVP}, which can create synthetic datasets of varying sizes for illustrative purposes. The inputs \code{theta} and \code{beta_mean} can be used to specify the true $\theta_1, \ldots, \theta_d$ and $\beta_1, \ldots, \beta_d$ used in the data generating process, in order to evaluate how well \code{shrinkTVP} recaptures these true values. The values correspond to the ones used in the synthetic example of \cite{bit-fru:ach}.
The user can specify the following MCMC algorithm parameters: \code{niter}, which determines the number of MCMC iterations including the burn-in,
\code{nburn}, which equals the number of MCMC iterations discarded as burn-in, and
\code{nthin}, indicating the thinning parameter, meaning that every \code{nthin}-th draw is kept and returned.
The default values are \code{niter = 10000}, \code{nburn = round(niter/2)} and \code{nthin = 1}.
\commentred{The user is strongly encouraged to check convergence of the produced Markov chain, especially for a large number of covariates. The output is made \pkg{coda} compatible, so that the user can utilize the tools provided by this excellent \proglang{R} package to assess convergence.}
\subsection{Specifying the priors} \label{sec3:priors}
More granular control over the prior setup can be exercised by \revised{passing additional arguments to \code{shrinkTVP}. The most important argument in this regard is \code{mod\_type}, which is used to specify whether the normal-gamma-gamma (\code{mod\_type = "triple"}), the normal-gamma (\code{mod\_type = "double"}) or ridge regression (\code{mod\_type = "ridge"}) is used. Beyond this, the user can specify the hyperparameters given in Section~\ref{sec:priors} and has the possibility to fix one or both of the values of the global shrinkage \comment{parameters} ($\kappa_B^2$, $\lambda_B^2$) and the pole and tail parameters ($a^\tau$, $a^\xi$, $c^\tau$, $c^\xi$). By default, these parameters are learned from the data. The benefit of this flexibility is twofold: on the one hand, desired degrees of sparsity and global shrinkage can be achieved through fixing the hyperparameters; on the other hand, interesting special cases arise from setting certain values of hyperparameters. Under an NGG prior, for example, setting the pole and tail parameters equal to $1/2$ results in a horseshoe prior on the \comment{$\sqrt\theta_j$'s} and the \comment{$\beta_j$'s}, respectively. If the user desires a higher degree of sparsity, this can be achieved by setting the pole parameters to a value \comment{closer} to zero. Table~\ref{tab:tablepriors} gives an overview of different model specifications. Note that different hyperparameter values can be chosen for the variances and the means of the initial values.}
\revised{
\begin{table}[]
\setlength{\tabcolsep}{5pt}
\centering
\scriptsize
\begin{tabular}{@{}lllllll@{}}
\toprule
& \multicolumn{3}{l}{Shrinkage on $\sqrt{\theta_j}$} & \multicolumn{3}{l}{Shrinkage on $\beta_j$} \\
\cmidrule{2-7}
& $c^\xi$ & $a^\xi$ & $\kappa_B^2$ & $c^\tau$ & $a^\tau$ & $\lambda_B^2$ \\
\cmidrule (r{4pt}){2-4} \cmidrule (l){5-7}
\textit{NGG prior} \\
Fully hierarchical NGG & $\mathcal{B}\left(\alpha_{c^\xi}, \beta_{c^\xi}\right)$ & $\mathcal{B}\left(\alpha_{a^\xi}, \beta_{a^\xi}\right)$ & $F(2a^\xi, 2c^\xi)$ & $\mathcal{B}\left(\alpha_{c^\tau}, \beta_{c^\tau}\right)$ & $\mathcal{B}\left(\alpha_{a^\tau}, \beta_{a^\tau}\right)$ & $F(2a^\tau, 2c^\tau)$ \\
Hierarchical NGG & fixed & fixed & $F(2a^\xi, 2c^\xi)$ & fixed & fixed & $F(2a^\tau, 2c^\tau)$ \\
NGG & fixed & fixed & fixed & fixed & fixed & fixed \\
Hierarchical Horseshoe & fixed at 0.5 & fixed at 0.5 & $F(2a^\xi, 2c^\xi)$ & fixed at 0.5 & fixed at 0.5 & $F(2a^\tau, 2c^\tau)$ \\
Horseshoe & fixed at 0.5 & fixed at 0.5 & fixed & fixed at 0.5 & fixed at 0.5 & fixed \\
\textit{NG prior} \\
Fully hierarchical NG & \textit{-} & $\mathcal{G}(\alpha_{a^\xi}, \alpha_{a^\xi} \beta_{a^\xi})$ & $\mathcal{G}(d_1, d_2)$ & - & $\mathcal{G}(\alpha_{a^\tau}, \alpha_{a^\tau} \beta_{a^\tau})$ & $\mathcal{G}(e_1, e_2)$ \\
Hierarchical NG & - & fixed & $\mathcal{G}(d_1, d_2)$ & - & fixed & $\mathcal{G}(e_1, e_2)$ \\
NG & - & fixed & fixed & - & fixed & fixed\\
Bayesian Lasso & - & fixed at 1 & fixed & - & fixed at 1 & fixed \\
\\
\textit{Ridge regression} & - & - & fixed & - & - & fixed \\
\bottomrule
\end{tabular}
\caption{Overview of different possible model specifications. Note that in the NGG prior case, the priors on the hyperparameters are scaled (e.g. $2a^\xi \sim \mathcal{B}(\alpha_{a^\xi}, \beta_{a^\xi})$). These scalings are omitted from this table for the sake of brevity. See Section~\ref{sec:priors} for details.}
\label{tab:tablepriors}
\end{table}
}
In the following, we give some examples of models that can be estimated with the \pkg{shrinkTVP} package. In particular, we demonstrate how certain combinations of input arguments correspond to different model specifications. \revised{If the learning of a parameter is deactivated and no specific fixed value is provided, \code{shrinkTVP} will resort to default values. These equate to 0.1 for the pole and tail parameters and 20 for the global shrinkage parameters.} Note that in the following snippets of code, the argument \code{display_progress} is always set to \code{FALSE}, in order to suppress the progress bar and other outputs.
\paragraph{Fixing the \revised{pole parameters}}
It is possible \comment{to set}
the \revised{pole parameter} $a^\xi$ ($a^\tau$) \comment{to a fixed value} through the input argument \code{a_xi} (\code{a_tau}), after setting \code{learn_a_xi} (\code{learn_a_tau}) to \code{FALSE}. As an example, we show how to fit a hierarchical Bayesian Lasso, both on the $\sqrt{\theta_j}$'s and on the $\beta_j$'s:
\begin{CodeInput}
R> res_hierlasso <- shrinkTVP(y ~ x1 + x2, data = data,
+ learn_a_xi = FALSE, learn_a_tau = FALSE,
+ a_xi = 1, a_tau = 1, display_progress = FALSE)
\end{CodeInput}
\paragraph{Fixing the global shrinkage parameters}
The user can choose to fix the value of $\kappa_B^2$ ($\lambda_B^2$) by specifying the argument \code{kappa2_B} (\code{lambda2_B}), after setting \code{learn_kappa2_B} (\code{learn_lambda2_B}) to \code{FALSE}. In the code below, we give an example of how to fit a (non-hierarchical) Bayesian Lasso on both $\sqrt{\theta_j}$'s and $\beta_j$'s, with the corresponding global shrinkage parameters both fixed to $100$:
\begin{CodeInput}
R> res_lasso <- shrinkTVP(y ~ x1 + x2, data = data,
+ learn_a_xi = FALSE, learn_a_tau = FALSE, a_xi = 1, a_tau = 1,
+ learn_kappa2_B = FALSE, learn_lambda2_B = FALSE,
+ kappa2_B = 100, lambda2_B = 100,
+ display_progress = FALSE)
\end{CodeInput}
\revised{\paragraph{Changing the prior type} To change the model type, the input argument \code{mod\_type} has to be supplied. It has to be a string equal to either \code{"triple"}, \code{"double"} or \code{"ridge"}. As an example, we fit a fully hierarchical NGG prior, both on the $\sqrt{\theta_j}$'s and on the $\beta_j$'s:}
\begin{CodeInput}
R> res_tg <- shrinkTVP(y ~ x1 + x2, data = data,
+ mod_type = "triple",
+ display_progress = FALSE)
\end{CodeInput}
\revised{\paragraph{Fixing the tail parameters} Much like the pole parameters, the tail parameter $c^\xi$ ($c^\tau$) can also be fixed to a value. This is done by setting} \code{learn_c_xi} (\code{learn_c_tau}) \revised{to \code{FALSE} and then supplying the input parameter} \code{c_xi} (\code{c_tau}). \revised{As an example, the code below fits a non-hierarchical horseshoe prior, both on the $\sqrt{\theta_j}$'s and on the $\beta_j$'s:}
\begin{CodeInput}
R> res_hs <- shrinkTVP(y ~ x1 + x2, data = data,
+ mod_type = "triple",
+ learn_a_xi = FALSE, learn_a_tau = FALSE, a_xi = 0.5, a_tau = 0.5,
+ learn_c_xi = FALSE, learn_c_tau = FALSE, c_xi = 0.5, c_tau = 0.5,
+ learn_kappa2_B = FALSE, learn_lambda2_B = FALSE,
+ display_progress = FALSE)
\end{CodeInput}
\subsection{Stochastic volatility specification}
The stochastic volatility specification defined in Equation~\eqref{eq:svht} can be used by setting the option \code{sv} to \code{TRUE}. This is made possible by a call to the \code{update_sv} function exposed by the \pkg{stochvol} package.
The code below fits a model with an NG prior in which all the parameters are learned and the observation equation errors are modeled through stochastic volatility:
\begin{CodeInput}
R> res_sv <- shrinkTVP(y ~ x1 + x2, data = data, sv = TRUE,
+ display_progress = FALSE)
\end{CodeInput}
The priors on the SV parameters are the ones defined in Equation~\eqref{eq:volpriors}, with hyperparameters fixed to
$b_\mu = 0$ , $B_\mu = 1$, $\aphi = 5$, $\bphi = 1.5$ , and $\Bsv = 1$.
\subsection{Specifying the hyperparameters}
Beyond simply switching off parts of the hierarchical structure of the prior setup, users can also modify the hyperparameters governing the \revised{hyperprior} distributions. This can be done through the arguments \code{hyperprior_param} and \code{sv_param}, which both have to be named lists. \revised{Hyperparameters} not specified by the user will be set to default values, \revised{which can be found in the help file of the \code{shrinkTVP} function. Note, however, that the dependence structure (e.g. $\kappa^2_B$ depends on $a^\xi$ and $c^\xi$ in the NGG specification) cannot be changed. As such, if the user desires to change the hyperparameters of a prior that depends on other parameters, this can only be achieved by deactivating the learning of the parameters higher up in the hierarchy and fixing them to specific values.
To demonstrate how to change specific hyperparameters, the code below modifies those governing the prior on $a^\xi$:}
\begin{CodeInput}
R> res_hyp <- shrinkTVP(y ~ x1 + x2, data = data,
+ hyperprior_param = list(beta_a_xi = 5, alpha_a_xi = 10),
+ display_progress = FALSE)
\end{CodeInput}
\revised{
\subsection{Tuning the Metropolis-Hastings steps}
The Metropolis-Hastings algorithm discussed in Section~\ref{sec:MCMC} can be tuned via the argument \code{MH\_tuning}. Similar to \code{hyperprior\_param} and \code{sv\_param}, it is a named list where values that are not supplied are replaced by standard values. By default, adaptive Metropolis-within-Gibbs is activated for all parameters learned from the data that require a Metropolis-Hastings step. Below is an example where adaptive Metropolis-within-Gibbs is deactivated for one of the pole parameters and slightly tuned for the other:
}
\begin{CodeInput}
R> res_MH <- shrinkTVP(y ~ x1 + x2, data = data,
+ MH_tuning = list(a_xi_adaptive = FALSE,
+ a_tau_max_adapt = 0.001,
+ a_tau_batch_size = 20),
+ display_progress = FALSE)
\end{CodeInput}
\subsection{Posterior inference: Summarize and visualize the posterior distribution}
The return value of \code{shrinkTVP} is an object of type \code{shrinkTVP}, which is a named list containing a variable number of elements, depending on the prior specification. For the \revised{default NG prior}, the values are:
\begin{enumerate}
\itemsep0em
\item a list holding $d$ \code{mcmc.tvp} objects (one for each $\comment{\bm{\beta}_j=(\bct{j}{0}, \ldots, \bct{j}{T})}$)
containing the parameter draws in \code{beta},
\item the parameter draws of $\bm \beta = (\beta_1, \dots, \beta_d)$ in \code{beta_mean},
\item the parameter draws of $(\sqrt{\theta_1}, \dots, \sqrt{\theta_d})$ in \code{theta_sr},
\item the parameter draws of $\tau_1^2, \ldots, \tau_d^2$ in \code{tau2},
\item the parameter draws of $\xi_1^2, \ldots,\xi_d^2,$ in \code{xi2},
\item the parameter draws of $a^{\xi}$ in \code{a_xi},
\item the parameter draws of $a^{\tau}$ in \code{a_tau},
\item the parameter draws for $\kappa_B^2$ in \code{kappa2_B},
\item the parameter draws for $\lambda_B^2$ in \code{lambda2_B},
\item the parameter draws of $\sigma^2$ in \code{sigma2},
\item the parameter draws of $C_0$ in \code{C0},
\item MH diagnostic values in \code{MH_diag},
\item the prior hyperparameters in \code{priorvals},
\item the design matrix, the response and the formula in \code{model},
\item summary statistics for the parameter draws in \code{summaries}
and
objects required for the \code{LPDS} function in \code{internals}.
\end{enumerate}
When some parameters are fixed by the user, the corresponding output value is omitted. \revised{Additionally, increasing or decreasing the amount of levels in the hierarchy of the prior also changes which values are returned. For example, if }\code{mod_type} \revised{is changed to} \code{"triple"} \revised{and the learning of the tail parameters $c^\xi$ and $c^\tau$ is not deactivated, then the output will also contain the respective parameter draws in }\code{c_xi} \revised{and} \code{c_tau}.
In the SV case, the draws for the parameters of the SV model on the errors are contained in \code{sv_mu}, \code{sv_phi} and \code{sv_sigma}. For details, see \cite{kas:dea}.
The two main tools for summarizing the output of \code{shrinkTVP} are the \code{summary} and \code{plot} methods implemented for \code{shrinkTVP} objects. \code{summary} has two arguments beyond the \code{shrinkTVP} object itself, namely \code{digits} and \code{showprior}, which control the output displayed. \code{digits} indicates the number of decimal places to round the posterior summary statistics to, while \code{showprior} determines whether or not to show the prior distributions resulting from the user input. In the example below, the default \code{digits} value of 3 is used, while the prior specification is omitted. The output of \code{summary} consists of the mean, standard deviation, median, 95\% \comment{highest} posterior density region and effective sample size (ESS) for the non time-varying parameters.
\begin{CodeChunk}
\begin{CodeInput}
R> summary(res, showprior = FALSE)
\end{CodeInput}
\begin{CodeOutput}
Summary of 5000 MCMC draws after burn-in of 5000.
Statistics of posterior draws of parameters (thinning = 1):
param                      mean      sd  median  HPD 2.5%  HPD 97.5%   ESS
beta_mean_Intercept 0.164 0.432 0 -0.376 1.362 426
beta_mean_x1 -0.248 0.157 -0.274 -0.483 0.013 110
beta_mean_x2 -0.002 0.037 0 -0.105 0.069 3741
abs(theta_sr_Intercept) 0.423 0.064 0.418 0.307 0.551 345
abs(theta_sr_x1) 0.013 0.024 0 0 0.067 133
abs(theta_sr_x2) 0.002 0.007 0 0 0.013 598
tau2_Intercept 6.989 151.023 0.001 0 4.85 4799
tau2_x1 6.615 240.363 0.096 0 4.658 5000
tau2_x2 0.438 15.431 0 0 0.105 5000
xi2_Intercept 23.525 563.676 0.265 0.007 10.827 5000
xi2_x1 1.883 89.188 0 0 0.07 5000
xi2_x2 0.011 0.331 0 0 0.003 5000
a_xi 0.08 0.039 0.074 0.014 0.154 169
a_tau 0.091 0.041 0.085 0.025 0.176 452
kappa2_B 29.519 104.414 1.507 0 148.318 4205
lambda2_B 53.677 184.784 2.182 0 257.231 1102
sigma2 0.993 0.125 0.984 0.767 1.248 1622
C0 1.72 0.634 1.648 0.606 2.962 5000
\end{CodeOutput}
\end{CodeChunk}
The \code{plot} method can be used to visualize the posterior distribution estimated by \code{shrinkTVP}. Aside from a \code{shrinkTVP} object, its main argument is \code{pars}, a character vector containing the names of the parameters to visualize. \code{plot} will call either \code{plot.mcmc.tvp} from the \pkg{shrinkTVP} package if the parameter is time-varying or \code{plot.mcmc} from the \pkg{coda} package, if the parameter is non time-varying. The default value of \code{pars} is \code{c("beta")}, leading to \code{plot.mcmc.tvp} being called on each of the $\beta_{jt}$, for $j =1, \ldots, d$. See the code below for an example and Figure~\ref{fig:beta} for the corresponding output.
\begin{CodeInput}
R> plot(res)
\end{CodeInput}
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{./figure/plot_beta.pdf}
\caption{Visualization of the evolution of the \comment{time-varying parameter $\bm{\beta}_j=(\bct{j}{0}, \ldots, \bct{j}{T}), j=1, 2, 3,$
over time $t=0,\ldots,T$}, as provided by the \code{plot} method. \code{plot} is in turn calling \code{plot.mcmc.tvp} on the individual \code{mcmc.tvp} objects. The median is displayed as a black line, and the shaded areas indicate the \comment{pointwise} 95\% and 50\% posterior credible intervals.}
\label{fig:beta}
\end{figure}
The \code{plot.mcmc.tvp} method displays empirical posterior credible intervals of a time-varying parameter over time, i.e., $\beta_{jt}$, for $j =1, \ldots, d$ and $\sigma^2_t$ in the case of stochastic volatility. By default, the \comment{pointwise} 95\% and 50\% posterior credible intervals are displayed as shaded areas layered on top of one another, with the median represented by a black line and an additional grey, dashed line at zero. To ensure that users have flexibility in the plots created, a host of options are implemented for customisation. The bounds of the credible intervals can be modified through the \code{probs} input, allowing for different levels of uncertainty visualization. The arguments \code{quantlines} and \code{shaded} take boolean vectors as inputs, and determine if the corresponding credible intervals will be displayed through shading and/or lines. The shaded areas can be customised via the arguments \code{shadecol} and \code{shadealpha}, which determine the color and the degree of transparency of the shaded areas. The lines representing the quantiles can be adjusted through \code{quantlty}, \code{quantcol} and \code{quantlwd}, which modify the line type, color and line width, respectively. In the spirit of \proglang{R}, all of these arguments are vectorised and the supplied vectors are recycled in the typical \proglang{R} fashion if necessary. The first element of these vectors is always applied to the outermost credible interval, the second to the second outermost and so forth. The horizontal line at zero can be similarly adjusted through \code{zerolty}, \code{zerolwd} and \code{zerocol} or entirely turned off by setting \code{drawzero} equal to \code{FALSE}. All further arguments are passed on to the standard \code{plot} method, allowing for changes to the line representing the median and other plot modifications that users of \proglang{R} are familiar with.
An example of possible customisation can be seen in the code below, with the corresponding output being Figure~\ref{fig:beta2}.
\begin{CodeInput}
R> library("RColorBrewer")
R> color <- brewer.pal(5, "RdBu")
R> plot(res, pars = "beta", xlim = c(100, 200),
+ probs = seq(0.1, 0.9, by = 0.1),
+ quantlines = T, quantcol = color[5:2], quantlty = 1,
+ quantlwd = 3, col = color[1], lwd = 3, shadecol = "gold1")
\end{CodeInput}
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{./figure/plot_beta_2.pdf}
\caption{Visualization of the evolution of the \comment{time-varying parameters $\beta_{jt}$, $j=1, \ldots, 3$, over time $t=100,\ldots,200$}. In this example, the x-axis of the plot was restricted with \code{xlim}, the color of the shaded areas was changed to yellow and colored solid lines have been added to delimit the credible intervals. The colored lines represent the median and the \comment{pointwise} 10\%, 20\%, 30\%, 40\%, 60\%, 70\%, 80\%, and 90\% quantiles.}
\label{fig:beta2}
\end{figure}
To visualize other parameters via the \code{plot} method, the user has to change the \code{pars} argument. \code{pars} can either be set to a single character object or to a vector of characters containing the names of the parameter draws to display. In the latter case, the \code{plot} method will display groups of plots at a time, prompting the user to move on \comment{to} the next series of plots, similarly to how \pkg{coda} handles long plot outputs. Naturally, as all parameter draws are converted to \pkg{coda} objects, any
\comment{method from this package that users are familiar with}
(e.g., to check convergence) can be applied to the parameter draws contained in a \code{shrinkTVP} object. An example of this can be seen in Figure~\ref{fig:theta}, where \code{pars = "theta_sr"} \comment{changes}
the output to a graphical summary of the parameter draws of $\sqrt{\theta_1}, \dots, \sqrt{\theta_d}$, using \pkg{coda}'s \code{plot.mcmc} function. To obtain Figure~\ref{fig:theta}, one can run
\begin{CodeInput}
R> plot(res, pars = "theta_sr")
\end{CodeInput}
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{./figure/plot_theta.pdf}
\caption{\comment{Trace plots (left column) and kernel density estimates of the posterior density (right column) for the parameters
$\sqrt{\theta_1}, \dots, \sqrt{\theta_3}$}, as provided by the \code{plot} method. \code{plot} is in turn calling \pkg{coda}'s \code{plot.mcmc}.}
\label{fig:theta}
\end{figure}
\section{Predictive performances and model comparison}
\label{sec:LPDS}
Within a Bayesian framework, a natural way to predict a future observation is through its posterior predictive density. For this reason, log-predictive density scores (LPDSs) provide a means of assessing how well the model performs in terms of prediction on real data. The log-predictive density score for time $t_0 +1$
is obtained by evaluating at $y_{t_0 +1}$ the
\comment{log of the posterior predictive density}
obtained by fitting the model to the previous $t_0$ data points.
Given the data up to time $t_0$, the posterior predictive density at time $t_0 + 1$ is given by
\begin{align} \label{MCpred}
p(y_{t_0 + 1}| y_{1}, \ldots, y_{t_0}, \bm x_{t_0 +1} ) = \int p(y_{t_0 + 1}| \bm x_{t_0 +1}, \bm \psi ) p (\bm \psi| y_{1}, \ldots, y_{t_0} )d \bm \psi,
\end{align}
where $\bm \psi$ is the set of model parameters \comment{and latent variables up to $t_0+1$. For a TVP model with homoscedastic errors, $\bm \psi= (\tilde {\bm \beta}_0,\ldots \tilde {\bm \beta}_{t_0 +1}, \sqrt \theta_1, \ldots, \sqrt{\theta_d}, \beta_1, \ldots, \beta_d, \sigma^2)$,
whereas for a TVP model with SV errors,
$\bm \psi= (\tilde {\bm \beta}_0,\ldots \tilde {\bm \beta}_{t_0 +1}, \sqrt \theta_1, \ldots, \sqrt{\theta_d}, \beta_1, \ldots, \beta_d,
\sigma^2_1, \ldots,\sigma^2_{t_0 +1})$.}
\comment{Given $M$ samples from the posterior distribution of the parameters and latent variables, $p (\bm \psi| y_{1}, \ldots, y_{t_0} )$, Monte Carlo integration
could be applied immediately to approximate (\ref{MCpred}).
However, \cite{bit-fru:ach} propose a more efficient approximation of the predictive density, \comment{the so-called conditionally optimal Kalman mixture approximation, which is}
obtained by analytically integrating out $\tilde {\bm \beta}_{t_0+1}$ from the likelihood at time $t_0 +1$.}
In the homoscedastic error case, given $M$ samples from the posterior distribution of the parameters \comment{and the latent variables up to $t_0$},
\commentred{Monte Carlo integration of the resulting predictive density yields the following mixture approximation,
\begin{eqnarray} \label{eq:approx_mixture}
\hspace{-0.15cm}
p(y_{t_0+1}| y_{1},\ldots, y_{t_0}, \xv_{t_0+1}) &\approx& \dfrac{1}{M} \sum_{m=1}^M
\Normalpdfa{y_{t_0+1}}{\yhat _{t_0+1} \imm{m}, \yS _{t_0+1} \imm{m} }, \\
\yhat \imm{m} _{t_0+1}& =& \bm x_{t_0+1} \bm \beta^{(m)} + \bm F_{t_0+1}\imm{m} \bm m_{t_0} \imm{m}, \nonumber \\
\yS \imm{m} _{t_0+1} &= & \bm F_{t_0+1} \imm{m} (\SSigma_{t_0} \imm{m} + I_d) ( \bm F_{t_0+1}\imm{m}) ^\top + (\sigma^{2})\imm{m} , \nonumber
\end{eqnarray}
where the conditional predictive densities are Gaussian and the conditional moments depend on the MCMC draws.
The mean $\yhat _{t_0+1} \imm{m}$ and the variance $\yS _ {t_0+1} \imm{m} $}
are computed for the $m$th MCMC iteration from
$\bm F_{t_0+1}= \bm x_{t_0+1} \text{Diag}(\sqrt{\theta_1}, \ldots, \sqrt{\theta_d})$ and
the mean $\bm m_{t_0}$ and the covariance matrix $\SSigma_{t_0}$ of the posterior distribution of $\tilde {\bm \beta}_{t_0}$.
These quantities can be obtained by iteratively calculating $\SSigma_{t}$ and $\bm m_{t}$ up to time $t_0$, as described in \cite{mcc-etal:sim}:
\begin{align*}
&\SSigma_1 = (\OOmega_{11})^{-1}, \qquad \bm m_1 = \SSigma_1 \bm c_1,\\
&\SSigma_t = (\OOmega_{tt} - \OOmega_{t-1,t}^{\top} \SSigma_{t-1} \OOmega_{t-1,t})^{-1}, \qquad \bm m_t = \SSigma_t (\bm c_t - \OOmega_{t-1,t}^{\top} \bm m_{t-1} ).
\end{align*}
The quantities $\cv_t$, $\OOmega_{tt}$ and $\OOmega_{t-1,t}$ for $t=1, \ldots, t_0$ are given in Appendix~\ref{sec:mccau}.
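As an illustration, the forward recursion and the resulting Monte Carlo mixture approximation can be sketched in Python as follows. This is a simplified sketch for the homoscedastic case; the container of MCMC draws and its field names are hypothetical, and the quantities $\cv_t$, $\OOmega_{tt}$ and $\OOmega_{t-1,t}$ are assumed to be precomputed:

```python
import numpy as np

def filter_moments(Omega_diag, Omega_off, c):
    """Forward recursion for the posterior mean m_t and covariance Sigma_t
    of the state beta_tilde_t, given the block tri-diagonal quantities
    Omega_tt, Omega_{t-1,t} and the vectors c_t (assumed precomputed).
    Returns m and Sigma at the final time point t0."""
    Sigma = np.linalg.inv(Omega_diag[0])
    m = Sigma @ c[0]
    for t in range(1, len(c)):
        Sigma = np.linalg.inv(Omega_diag[t]
                              - Omega_off[t - 1].T @ Sigma @ Omega_off[t - 1])
        m = Sigma @ (c[t] - Omega_off[t - 1].T @ m)
    return m, Sigma

def mixture_predictive_density(y, x_new, draws):
    """Monte Carlo mixture approximation of the one-step-ahead density:
    average the conditionally Gaussian densities over the MCMC draws.
    Each draw is a dict with (hypothetical) keys beta, theta_sr,
    sigma2, m and Sigma."""
    total = 0.0
    for d in draws:
        F = x_new * d["theta_sr"]                   # F_{t0+1} as a vector
        mean = x_new @ d["beta"] + F @ d["m"]       # conditional mean
        var = F @ (d["Sigma"] + np.eye(len(F))) @ F + d["sigma2"]
        total += np.exp(-0.5 * (y - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return total / len(draws)
```

Within the package, these computations are carried out by the \code{LPDS} function.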
For the SV case, it is still possible to analytically integrate out $\tilde {\bm \beta}_{t_0+1}$ from the likelihood at time $t_0 +1$ conditional on
a known value of $\sigma^2_{t_0+1}$, however it is not possible to integrate the likelihood with respect to both latent variables $\tilde {\bm \beta}_{t_0+1}$ and $\sigma^2_{t_0+1}$.
Hence, at each MCMC iteration a draw is taken from the predictive distribution of $\sigma^2_{t_0+1}=\exp (h_{t_0+1})$, derived from Equation~\eqref{eq:svht}, and used to calculate the \comment{conditional predictive density of $y_{t_0+1}$}.
The approximation of the one-step ahead predictive density can then be obtained through the following steps:
\begin{enumerate}
\item for each MCMC draw of $(\mu, \phi,\sigma_{\eta}^2) \imm{m}$ and $h_{t_0} \imm{m}$, obtain a draw of $(\sigma^{2}_{t_0+1})\imm{m}$;
\item calculate the conditionally optimal Kalman mixture approximation as \commentred{in (\ref{eq:approx_mixture}), with the following slightly different value of $\yS _ {t_0+1} \imm{m}$:}
\commentred{ \begin{align*}
& \yS _ {t_0+1} \imm{m} = \bm F_{t_0+1} \imm{m} (\SSigma_{t_0} \imm{m} + I_d) ( \bm F_{t_0+1} \imm{m}) ^\top + (\sigma^{2}_{t_0+1})\imm{m}, \end{align*}
where $\bm F_{t_0+1}$ and $\SSigma_{t_0}$ are the same as defined above.}
\end{enumerate}
These calculations can be performed by the \code{LPDS} function, based on a fitted TVP model resulting from a call to \code{shrinkTVP}. The function's arguments are an object of class \code{shrinkTVP} and \code{data_test}, a data frame with one row, containing covariates and response at time $t_0 + 1$. The following snippet of code fits a \code{shrinkTVP} model to synthetic data up to $T - 1$, and then calculates the LPDS at time $T$. The obtained LPDS score is then displayed.
For an example on how to calculate LPDSs for $k$ points in time, please see~Section~\ref{sec:usmacro}.
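Conceptually, the LPDS is the log of a Rao--Blackwellized mixture estimate: the conditional Gaussian predictive densities are averaged over the MCMC draws of the moments, and the log is taken at the realised observation. A minimal Python sketch (ours, not the package implementation; the draws $\yhat_{t_0+1}\imm{m}$ and $\yS_{t_0+1}\imm{m}$ are assumed given):

```python
import math

def lpds(y_new, yhat_draws, ysig2_draws):
    """Log predictive density score at y_new: log of the MCMC average of
    the Gaussian densities N(y_new | yhat^(m), S^(m))."""
    dens = [
        math.exp(-0.5 * (y_new - mu) ** 2 / s2) / math.sqrt(2 * math.pi * s2)
        for mu, s2 in zip(yhat_draws, ysig2_draws)
    ]
    return math.log(sum(dens) / len(dens))
```

For numerical stability with many draws one would average on the log scale via a log-sum-exp; the sketch keeps the arithmetic transparent instead.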
\begin{figure}[t!]
\centering
\includegraphics[width=0.6\linewidth]{./figure/plot_dens.pdf}
\caption{{One-step ahead predictive density $p(y_{t_0+1}| y_{1},\ldots, y_{t_0}, \xv_{t_0+1}) $ for a synthetic data set. The black vertical line represents the true realisation of $y_{{t_0}+1}$}.}
\label{fig:pred_dens}
\end{figure}
\begin{CodeChunk}
\begin{CodeInput}
R> res_LPDS <- shrinkTVP(y ~ x1 + x2, data = data[1:(nrow(data) - 1),],
+ display_progress = FALSE)
R> LPDS(res_LPDS, data[nrow(data), ])
\end{CodeInput}
\begin{CodeOutput}
[1] -1.231744
\end{CodeOutput}
\end{CodeChunk}
An additional functionality provided by the package \pkg{shrinkTVP} is the evaluation of the one-step ahead predictive density through the function
\code{eval_pred_dens}.
It takes as inputs an object of class \code{shrinkTVP}, a one row data frame containing $\bm x_{t_0+1}$ and a point, or vector of points, at which the predictive density is to be evaluated. It returns a vector of the same length, containing the value of the density at the points the user supplied. An example of this can be seen in the code below.
\begin{CodeChunk}
\begin{CodeInput}
R> eval_pred_dens(1:3, res_LPDS, data[nrow(data), ])
\end{CodeInput}
\begin{CodeOutput}
[1] 0.0004023221 0.0043769491 0.0285444188
\end{CodeOutput}
\end{CodeChunk}
Thanks to its vectorised nature, \code{eval_pred_dens}
can be plugged directly into functions that expect an expression evaluating to a vector of the same length as the input, such as the \code{curve} function from the \pkg{graphics} \citep{R} package. The following snippet of code exploits this behaviour to plot the posterior predictive density. The result can be seen in Figure~\ref{fig:pred_dens}.
\begin{CodeChunk}
\begin{CodeInput}
R> curve(eval_pred_dens(x, res_LPDS, data[nrow(data), ]), to = 12,
+ ylab = bquote("p(" * y[t[0]+1] * "\uff5c" * y[1] * ","
+ * ldots * "," ~ y[t[0]] * "," ~ x[t[0]+1] * ")"),
+ xlab = expression(y[t[0]+1]), lwd = 2.5, col = "skyblue", axes = FALSE)
R> abline(v = data$y[nrow(data)])
R> axis(1)
R> axis(2)
\end{CodeInput}
\end{CodeChunk}
\section{Predictive exercise: usmacro dataset}
\label{sec:usmacro}
In the following, we provide a brief demonstration of how to use the \pkg{shrinkTVP} package on real data and compare different prior specifications via LPDSs. Specifically, we consider the \code{usmacro.update} dataset from the \pkg{bvarsv} package \citep{kru:bva}. The dataset \code{usmacro.update} contains the inflation rate, unemployment rate and treasury bill interest rate for the United States, from 1953:Q1 to 2015:Q2, \comment{that is $T=250$}. The same dataset up to 2001:Q3 was used by \cite{pri:tim}. The response variable is the inflation rate \code{inf}, while the predictors are the lagged inflation rate \code{inf_lag}, the lagged unemployment rate \code{une_lag} and the lagged treasury bill interest rate \code{tbi_lag}. We construct our dataset as follows:
\begin{CodeInput}
R> library("bvarsv")
R> data("usmacro.update")
R>
R> # Create matrix of lags and create final data set
R> lags <- usmacro.update[1:(nrow(usmacro.update) - 1), ]
R> colnames(lags) <- paste0(colnames(lags), "_lag")
R> us_data <- data.frame(inf = usmacro.update[2:nrow(usmacro.update), "inf"],
+ lags)
\end{CodeInput}
In the snippet of code below, we \revised{estimate a TVP model with a fully hierarchical NG prior}
for $60000$ iterations, with a thinning of $10$ and a burn-in of $10000$, hence keeping $5000$ posterior draws.
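As a quick check of the bookkeeping implied by these settings (the names \code{niter}, \code{nburn} and \code{nthin} mirror the arguments passed to \code{shrinkTVP} below):

```python
# Number of posterior draws retained after burn-in and thinning.
niter, nburn, nthin = 60000, 10000, 10
nsave = (niter - nburn) // nthin
print(nsave)  # 5000
```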
\begin{CodeInput}
R> us_res <- shrinkTVP(inf ~ inf_lag + une_lag + tbi_lag, us_data,
+ niter = 60000, nburn = 10000, nthin = 10,
+ display_progress = FALSE)
\end{CodeInput}
Once we have fit the model, we can perform posterior inference by using the \code{summary} and \code{plot} methods. The summary is shown below, while Figure~\ref{fig:beta_us} shows the paths of $\bm \beta_t$ evolving over time, and Figure~\ref{fig:theta_us} displays the trace plots (left column) and posterior densities (right column) of $\sqrt{\theta_1}, \ldots, \sqrt{\theta_4}$ obtained via the \code{plot} method.
\begin{CodeChunk}
\begin{CodeInput}
R> summary(us_res, showprior = FALSE)
\end{CodeInput}
\begin{CodeOutput}
Summary of 50000 MCMC draws after burn-in of 10000.
Statistics of posterior draws of parameters (thinning = 10):
                  param    mean        sd median  HPD 2.5% HPD 97.5%  ESS
beta_mean_Intercept 0.415 0.436 0.326 -0.132 1.266 639
beta_mean_inf_lag 0.733 0.191 0.742 0.347 1.093 756
beta_mean_une_lag -0.144 0.059 -0.149 -0.234 0.002 345
beta_mean_tbi_lag 0.008 0.022 0 -0.02 0.065 661
abs(theta_sr_Intercept) 0.144 0.025 0.145 0.098 0.195 1003
abs(theta_sr_inf_lag) 0.044 0.006 0.044 0.031 0.056 2525
abs(theta_sr_une_lag) 0.003 0.005 0 0 0.014 117
abs(theta_sr_tbi_lag) 0.001 0.002 0 0 0.006 478
tau2_Intercept 354.575 24026.311 0.158 0 15.251 5000
tau2_inf_lag 325.535 15149.831 0.793 0 36.284 5000
tau2_une_lag 20.052 729.984 0.049 0 3.887 5000
tau2_tbi_lag 30.132 1447.597 0 0 0.057 5000
xi2_Intercept 1.546 30.218 0.032 0 0.986 5000
xi2_inf_lag 0.292 5.131 0.005 0 0.268 5000
xi2_une_lag 0.014 0.717 0 0 0.005 5000
xi2_tbi_lag 0.002 0.048 0 0 0.001 5000
a_xi 0.095 0.039 0.09 0.029 0.171 1242
a_tau 0.087 0.042 0.082 0.015 0.167 2290
kappa2_B 120.337 250.836 22.127 0 609.238 5000
lambda2_B 7.123 22.902 0.601 0 33.607 3324
sigma2 0.018 0.006 0.017 0.007 0.029 1140
C0 0.126 0.062 0.114 0.027 0.247 2360
\end{CodeOutput}
\end{CodeChunk}
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{./figure/plot_usmacro_beta.pdf}
\caption{Visualization of the evolution of \comment{the time-varying parameter $\bm{\beta}_j=(\bct{j}{0}, \ldots, \bct{j}{T})$ over time $t=0,\ldots,T$ for $j=1, \ldots, 4$}
for the \code{usmacro.update} dataset. The median is displayed as a black line, and the shaded areas indicate the \comment{pointwise} 95\% and 50\% posterior credible intervals.}
\label{fig:beta_us}
\end{figure}
It is clear from Figure~\ref{fig:beta_us} that the intercept and the parameter associated with the lagged inflation rate are time-varying, while the parameters associated with the lagged treasury bill interest rate and the lagged unemployment rate are relatively constant. This can be confirmed by looking at the posterior distributions of the corresponding standard deviations, displayed in Figure~\ref{fig:theta_us}. The posterior densities of the standard deviations associated with the intercept and the lagged inflation are bimodal, with very little mass around zero. This bimodality results from the non-identifiability of the sign of the standard deviation. As a convenient side effect, noticeable bimodality in the density plots of the posterior distribution
\comment{$p(\sqrt{\theta}_j|\ym)$ of the standard deviations $\sqrt{\theta}_j$}
is a strong indication of time variability in the associated parameter \comment{$\bct{j}{t}$.}
Conversely, the posterior densities of the standard deviations associated with the lagged unemployment and the lagged treasury bill interest rate have a pronounced spike at zero, indicating strong model evidence in favor of constant parameters. Moreover, the path of the parameter of the treasury bill interest rate is centered at zero, indicating that this parameter is neither time-varying nor significant.
In order to compare the predictive performances of different shrinkage priors, we calculate one-step ahead LPDSs for the last 50 points in time for \revised{eleven} different prior choices: \revised{(1) the full hierarchical NGG prior, (2) the hierarchical NGG \comment{prior} with fixed $a^\xi = a^\tau = c^\xi = c^\tau = 0.1$, (3) the NGG \comment{prior} with $a^\xi = a^\tau = c^\xi = c^\tau = 0.1$ and $\kappa_B^2 = \lambda_B^2 = 20$, (4) the hierarchical horseshoe prior, (5) the horseshoe prior with $\kappa_B^2 = \lambda_B^2 = 20$, (6) the full hierarchical NG prior, (7) the hierarchical NG prior with fixed $a^\xi = a^\tau = 0.1$, (8) the NG prior with $a^\xi = a^\tau = 0.1$ and $\kappa_B^2 = \lambda_B^2 = 20$, (9) the hierarchical Bayesian Lasso, (10) the Bayesian Lasso with $\kappa_B^2 = \lambda_B^2 = 20$, and (11) ridge regression with $\kappa_B^2 = \lambda_B^2 = 20$.} Figure~\ref{fig:LPDS} shows the cumulative LPDSs for the last 50 quarters of the \code{usmacro.update} dataset. \revised{The default prior, the fully hierarchical NG prior on both the $\beta_j$'s and the \comment{$\sqrt{\theta_j}$'s, performs the best in terms of prediction.}} In Appendix~\ref{sec:multicore} we show how to obtain LPDSs for different models and points in time, using the packages \pkg{foreach} \citep{wes:for} and \pkg{doParallel} \citep{wes:doP}.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{./figure/plot_usmacro_theta.pdf}
\caption{\comment{Trace plots (left column) and kernel density estimates of the posterior density (right column) for the parameters} $\sqrt{\theta_1}, \ldots, \sqrt{\theta_4}$ for the \code{usmacro.update} dataset.}
\label{fig:theta_us}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{./figure/LPDS_macro.pdf}
\captionof{figure}{Cumulative LPDSs for the last 50 quarters of the \code{usmacro.update} dataset, for \revised{eleven} different shrinkage priors: \revised{(1) the full hierarchical NGG prior, (2) the hierarchical NGG \comment{prior} with fixed $a^\xi = a^\tau = c^\xi = c^\tau = 0.1$, (3) the NGG \comment{prior} with $a^\xi = a^\tau = c^\xi = c^\tau = 0.1$ and $\kappa_B^2 = \lambda_B^2 = 20$, (4) the hierarchical horseshoe prior, (5) the horseshoe prior with $\kappa_B^2 = \lambda_B^2 = 20$, (6) the full hierarchical NG prior, (7) the hierarchical NG prior with fixed $a^\xi = a^\tau = 0.1$, (8) the NG prior with $a^\xi = a^\tau = 0.1$ and $\kappa_B^2 = \lambda_B^2 = 20$, (9) the hierarchical Bayesian Lasso, (10) the Bayesian Lasso with $\kappa_B^2 = \lambda_B^2 = 20$, and (11) ridge regression with $\kappa_B^2 = \lambda_B^2 = 20$.}}
\label{fig:LPDS}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
The goal of this paper was to introduce the reader to the functionality of the \proglang{R} package \pkg{shrinkTVP} \citep{kna-etal:shr}. This \proglang{R} package provides a fully Bayesian approach for statistical inference in TVP models with shrinkage priors.
\comment{On the one hand, the package provides an easy entry point for users
who want to pass on only their data in a first step of exploring TVP models for their specific application context.
Running the function \code{shrinkTVP} under the default model with a fully hierarchical \revised{NG} shrinkage prior with predefined hyperparameters, estimation of a TVP model becomes as easy as using the well-known function \code{lm} for a standard linear regression model.
On the other hand, exploiting numerous advanced options of the package,
the more experienced user can also explore alternative model specifications such as the Bayesian Lasso or the horseshoe prior and use log-predictive density scores to compare
various model specifications.}
Various examples of the usage of \pkg{shrinkTVP} were given, and the \code{summary} and \code{plot} methods for
straightforward posterior inference were illustrated.
Furthermore, a predictive exercise with the dataset \code{usmacro.update} from the package \pkg{bvarsv} was performed, with a focus on the calculation of LPDSs using \code{shrinkTVP}. The default model in \pkg{shrinkTVP} showed better performance than its competitors in terms of cumulative LPDSs.
\comment{While these examples were confined to univariate responses, the package can also be applied in a multivariate context, for instance to the
sparse TVP Cholesky SV model considered in \citet{bit-fru:ach}, exploiting a representation of this model as a system of independent TVP models
with univariate responses.}
\section{Introduction}
Fix a prime $\ell$. We assume that all schemes are qcqs, and live over $\mathbb Z[\tfrac 1\ell]$. Let $f: X\to S$ be a morphism of finite presentation between such schemes. The goal of this paper is to introduce a ``relatively (over $S$) perverse $t$-structure'' on the derived category of \'etale sheaves on $X$, and show that it interacts well with the notion of universally locally acyclic sheaves.
Although probably the case of $\overline{\mathbb Q}_\ell$-coefficients is the most interesting, we also allow some other coefficients; in particular, the case of torsion coefficients is required as an intermediate step.\footnote{As we will apply a lot of descent techniques, we prefer to work with $\infty$-categories. However, as $t$-structures only depend on the underlying triangulated category, the statements of our main results are really about the underlying triangulated categories.}
\begin{enumerate}
\item[{\rm (A)}] Let $\Lambda$ be a ring killed by some power of $\ell$, and denote by $\mathcal D_\mathrm{\acute{e}t}(X,\Lambda)$ the left-completion of the derived $\infty$-category $\mathcal D(X_\mathrm{\acute{e}t},\Lambda)$ of $\Lambda$-modules on the \'etale site $X_\mathrm{\acute{e}t}$. (If $X_\mathrm{\acute{e}t}$ has locally finite $\ell$-cohomological dimension, then the left-completion is not necessary.)
\item[{\rm (B)}] In the setting of (A), let $\mathcal D_\mathrm{cons}(X,\Lambda)\subset \mathcal D_\mathrm{\acute{e}t}(X,\Lambda)$ be the full $\infty$-subcategory of perfect-constructible complexes. (If $X_\mathrm{\acute{e}t}$ has locally finite $\ell$-cohomological dimension, then $\mathcal D_\mathrm{\acute{e}t}(X,\Lambda)$ is compactly generated with compact objects $\mathcal D_\mathrm{cons}(X,\Lambda)$.)
\item[{\rm (C)}] Let $\Lambda$ be an algebraic extension $L$ of $\mathbb Q_\ell$ or its ring of integers $\mathcal O_L$, and let $\mathcal D_\mathrm{cons}(X,\Lambda)$ be defined as in \cite{HemoRicharzScholbach}. In other words, it is the full $\infty$-subcategory of $\mathcal D(X_\mathrm{pro\acute{e}t},\Lambda)$ consisting of those objects that on a constructible stratification of $X$ become dualizable; by \cite{HemoRicharzScholbach} this agrees with more classical definitions.
\end{enumerate}
In the respective cases, we let $\mathcal D(X)=\mathcal D_\mathrm{\acute{e}t}(X,\Lambda)$, resp.~$\mathcal D(X)=\mathcal D_\mathrm{cons}(X,\Lambda)$, resp.~$\mathcal D(X)=\mathcal D_\mathrm{cons}(X,\Lambda)$. In all cases, $\mathcal D(X)$ is a $\Lambda$-linear $\infty$-category, and pullback along any map $f: Y\to X$ defines functors $\mathcal D(X)\to \mathcal D(Y)$. In fact, $\mathcal D(X)$ is naturally a full $\infty$-subcategory of $\mathcal D(X_\mathrm{pro\acute{e}t},\Lambda)$ stable under pullbacks. In all cases, there is also a symmetric monoidal tensor product, and pullback commutes with tensor products.
In setting (B), it is sometimes important to assume that $\Lambda$ is regular as otherwise this category is not stable under naive truncations. The precise condition on $\Lambda$ is that any truncation of a perfect complex of $\Lambda$-modules is still perfect, so whenever we ask that $\Lambda$ is regular, we mean this condition.
If $f: Y\to X$ is separated and of finite type (resp.~of finite presentation in cases (B) and (C)), there is a natural functor $Rf_!: \mathcal D(Y)\to \mathcal D(X)$ compatible with base change and satisfying a projection formula. In case (A), the adjoint functor theorem also gives us right adjoints, and thus internal Hom's, direct images, and exceptional inverse images, and these may or may not preserve subcategories of constructible complexes in general.
The main theorem of the paper is the following.\footnote{As far as we are aware, this notion of relative perversity is new, but in some restricted variant the notion has been considered before by Katz--Laumon \cite{KatzLaumon}.}
\begin{theorem}\label{thm:main} Let $D(X)$ denote the derived category of $\Lambda$-modules in any of the settings (A), (B), and (C). In case (B), assume that $\Lambda$ is regular. In case (C), assume that any constructible subset of $S$ has finitely many irreducible components.
There is a (necessarily unique) $t$-structure $({}^{p/S} D^{\leq 0},{}^{p/S} D^{\geq 0})$ on $ D(X)$, called the relative perverse $t$-structure, with the following property:
An object $A\in D(X)$ lies in ${}^{p/S} D^{\leq 0}$ (resp.~${}^{p/S} D^{\geq 0}$) if and only if for all geometric points $\overline{s}\to S$ with fibre $X_{\overline{s}} = X\times_S \overline{s}$, the restriction $A|_{X_{\overline{s}}}\in D(X_{\overline{s}})$ lies in ${}^p D^{\leq 0}$ (resp.~${}^p D^{\geq 0}$), for the usual (absolute) perverse $t$-structure.
\end{theorem}
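Here ${}^p D^{\leq 0}$ and ${}^p D^{\geq 0}$ refer to the middle-perversity $t$-structure of Beilinson--Bernstein--Deligne on each geometric fibre; for the reader's convenience, we recall that on a finite type scheme over an algebraically closed field it is characterized by the usual support and cosupport conditions:

```latex
% Middle-perversity support/cosupport conditions on a geometric fibre
% (standard, following Beilinson--Bernstein--Deligne):
\[
\begin{aligned}
A \in {}^{p} D^{\leq 0}(X_{\overline{s}})
  &\iff \mathcal H^i(i_x^\ast A) = 0 \ \text{for all } x \in X_{\overline{s}},\ i > -\dim \overline{\{x\}},\\
A \in {}^{p} D^{\geq 0}(X_{\overline{s}})
  &\iff \mathcal H^i(i_x^! A) = 0 \ \text{for all } x \in X_{\overline{s}},\ i < -\dim \overline{\{x\}},
\end{aligned}
\]
where $i_x$ denotes the inclusion of a point $x$ with closure $\overline{\{x\}}$.
```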
\begin{remark} The hypotheses (in cases (B), (C)) are essentially optimal, see Remark~\ref{rem:hypotheses}.
\end{remark}
\begin{remark} Another setting of interest is when $X$ and $S$ are of finite type over $\mathbb C$, in which case one can also use constructible sheaves with $\mathbb Z$- or $\mathbb Q$-coefficients. The theorem and its variants discussed below also hold true in that setting, and can be deduced from their $\ell$-adic versions.
\end{remark}
The proof of this theorem rests on two ingredients: v-descent, and the theory of nearby cycles. Roughly speaking, v-descent allows us to reduce to the case that $S$ is the spectrum of a valuation ring $V$ with algebraically closed fraction field. In that case the theorem is closely related to the perverse $t$-exactness properties of nearby cycles.
Let us first state the results regarding v-descent; the results here are mostly due to Bhatt--Mathew \cite{BhattMathew} who even prove arc-descent, and their results have been further refined by Gabber \cite{GabberLetterMathew}. In particular, there is no claim of originality in this part. Recall that a map of qcqs schemes $f: Y\to X$ is a v-cover if for any map $\mathrm{Spec} V\to X$ from a valuation ring $V$, there is a faithfully flat extension $V\subset W$ of valuation rings and a lift $\mathrm{Spec} W\to Y$. This is an extremely general class of covers. Even more general is the class of arc-covers, where this lifting condition is restricted to valuation rings of rank $\leq 1$. Intermediate between v-covers and arc-covers is the notion of universal submersions; these are the maps $f: Y\to X$ such that any base change of $f$ induces a quotient map on topological spaces. It is equivalent to the condition that for any map $\mathrm{Spec} V\to X$ as above, with fraction field $K$ of $V$, the inclusion $Y_K\subset Y_V$ is not closed.
\begin{theorem}[Bhatt--Mathew \cite{BhattMathew}, Gabber \cite{GabberLetterMathew}] In any of the settings (A), (B), and (C), the association $X\mapsto \mathcal D(X)$ defines a v-sheaf of $\infty$-categories. In fact, in settings (B) and (C) it is even an arc-sheaf of $\infty$-categories, and in setting (A) a sheaf for universal submersions.
\end{theorem}
We warn the reader that in setting (A) it is not an arc-sheaf, by an example of Gabber, see Example~\ref{ex:arcnotsubmersion}. In fact, universal submersions are the most general class of maps that one can allow. The key step in Gabber's proof is worth stating separately, as it is about general \'etale sheaves (without abelian group structure).
\begin{theorem}[\cite{GabberLetterMathew}] Sending any scheme $X$ to the category of \'etale sheaves on $X$ defines a stack with respect to universal submersions.\footnote{In \cite{GabberLetterMathew}, Gabber also sketches an extension of this result to the case where one sends $X$ to the $(2,1)$-category of ind-finite \'etale stacks.}
In particular, sending any scheme $X$ to the category of separated \'etale maps of schemes $Y\to X$ defines a stack with respect to universal submersions, and in particular a v-stack.
\end{theorem}
This strengthens some previous descent results, notably by Rydh \cite{Rydhvdescent}, \cite[Theorem 5.6]{BhattMathew}.
Using these descent results and some approximation arguments, we can reduce Theorem~\ref{thm:main} to the case that $S=\mathrm{Spec} V$ where $V$ is a valuation ring with algebraically closed fraction field $K$; one can even assume that $V$ is of rank $1$.
In that case, we rely on the theory of nearby cycles. The foundational results here are due to Deligne \cite{SGA412}, Illusie and Gabber \cite[Appendix]{IllusieAutour}, Huber \cite[Section 4.2]{HuberBook}, Zheng \cite[Appendix]{ZhengDuality}, and recently Lu--Zheng \cite{LuZhengULA}. We take the opportunity to rederive all the basic results about nearby cycles from the perspective of the notion of universal local acyclicity, using critically the recent characterization of universal local acyclicity in terms of dualizability in a symmetric monoidal $2$-category of cohomological correspondences, due to Lu--Zheng \cite{LuZhengULA}. Again, there is no claim of originality.
This symmetric monoidal $2$-category can be defined in any of the settings (A), (B), and (C), but it turns out that universal local acyclicity (i.e., dualizability in this category) implies constructibility, so settings (A) and (B) yield the same universally locally acyclic objects. For this reason, we restrict to settings (B) and (C) for the moment.
\begin{theorem} Let $f: X\to S$ be a separated map of finite presentation between qcqs schemes and let $A\in D(X)$ in one of the settings (B) and (C). The following conditions are equivalent.
\begin{enumerate}
\item[{\rm (i)}] The pair $(X,A)$ defines a dualizable object in the symmetric monoidal $2$-category of cohomological correspondences over $S$.
\item[{\rm (ii)}] The following condition holds after any base change in $S$. For any geometric point $\overline{x}\to X$ mapping to a geometric point $\overline{s}\to S$, and a generization $\overline{t}\to S$ of $\overline{s}$, the map
\[
A|_{\overline{x}} = R\Gamma(X_{\overline{x}},A)\to R\Gamma(X_{\overline{x}}\times_{S_{\overline{s}}} S_{\overline{t}},A)
\]
is an isomorphism.
\item[{\rm (iii)}] The following condition holds after any base change in $S$. For any geometric point $\overline{x}\to X$ mapping to a geometric point $\overline{s}\to S$, and a generization $\overline{t}\to S$ of $\overline{s}$, the map
\[
A|_{\overline{x}} = R\Gamma(X_{\overline{x}},A)\to R\Gamma(X_{\overline{x}}\times_{S_{\overline{s}}} \overline{t},A)
\]
is an isomorphism.
\item[{\rm (iv)}] After base change along $\mathrm{Spec} V\to S$ for any rank $1$ valuation ring $V$ with algebraically closed fraction field $K$ and any geometric point $\overline{x}\to X$ mapping to the special point of $\mathrm{Spec} V$, the map
\[
A|_{\overline{x}} = R\Gamma(X_{\overline{x}},A)\to R\Gamma(X_{\overline{x}}\times_{\mathrm{Spec} V}\mathrm{Spec} K,A)
\]
is an isomorphism.
\end{enumerate}
Moreover, these conditions are stable under any base change, and can be checked arc-locally on $S$.
\end{theorem}
In particular, this shows that the key to understanding universal local acyclicity is the case where the base is the spectrum of a (rank $1$) valuation ring with algebraically closed fraction field. The key result is the following, which rederives all the basic properties of the nearby cycles functor.
\begin{theorem} Let $X$ be a separated scheme of finite presentation over $S=\mathrm{Spec} V$, where $V$ is a valuation ring with algebraically closed fraction field $K$. Let $j: X_K\subset X$ be the inclusion of the generic fibre. Then, in the settings (B) and (C), the restriction functor
\[
j^\ast: D^{\mathrm{ULA}}(X/S)\to D(X_K)
\]
is an equivalence, whose inverse is given by $Rj_\ast: D(X_K)\subset D(X_{K,\mathrm{pro\acute{e}t}},\Lambda)\to D(X_\mathrm{pro\acute{e}t},\Lambda)$.
In particular, the formation of $Rj_\ast$ preserves constructibility, and commutes with any flat base change $V\to V'$ of valuation rings with algebraically closed fraction fields, with relative Verdier duality, and satisfies a K\"unneth formula.
\end{theorem}
Given a separated map $f: X\to S$ of finite presentation, the functor taking $S'/S$ to $\mathcal D^{\mathrm{ULA}}(X_{S'}/S')$ has good properties.
\begin{proposition} In any setting, the functor $S'\mapsto \mathcal D^{\mathrm{ULA}}(X_{S'}/S')$ is an arc-sheaf of $\infty$-categories. Moreover, it satisfies the valuative criterion of properness in the sense that if $S'=\mathrm{Spec} V$ is the spectrum of a valuation ring $V$ with algebraically closed fraction field $K$, then $\mathcal D^{\mathrm{ULA}}(X_V/V)\to \mathcal D^{\mathrm{ULA}}(X_K/K)$ is an equivalence. In setting (B), it is a finitary arc-sheaf.
In case (C), let $L$ be the algebraic extension of $\mathbb Q_\ell$ and fix some $A\in \mathcal D^{\mathrm{ULA}}(X/S,L)$. Consider the functor taking $S'/S$ to the $\infty$-category of all $A_0\in \mathcal D^{\mathrm{ULA}}(X_{S'}/S',\mathcal O_L)$ with an identification $A_0[\tfrac 1{\ell}]\cong A|_{X_{S'}}$. This is a finitary arc-sheaf satisfying the valuative criterion of properness, and is v-locally nonempty.
\end{proposition}
The second part of this proposition shows that at least h-locally on the base (where h-covers are by definition finitely presented v-covers), any universally locally acyclic sheaf with rational coefficients admits an integral structure that is also universally locally acyclic; through this result one can get a handle on the case of rational coefficients. We note that Proposition~\ref{prop:ULAintegralstructureunibranch} shows that if $S$ is geometrically unibranch, such an integral structure exists already over $S$.
Moreover, relative perversity interacts well with universal local acyclicity. More precisely:
\begin{theorem}\label{thm:ULAmain} Assume that $X$ is a separated scheme of finite presentation over $S$, and consider one of the settings (B) and (C). In case (B), assume that $\Lambda$ is regular. In case (C), assume that $S$ has only finitely many irreducible components. Then there is a relative perverse $t$-structure
\[
{}^{p/S} D^{\mathrm{ULA},\leq 0}(X/S),{}^{p/S} D^{\mathrm{ULA},\geq 0}(X/S)\subset D^{\mathrm{ULA}}(X/S)
\]
such that $A\in {}^{p/S} D^{\mathrm{ULA},\leq 0}(X/S)$ (resp.~$A\in {}^{p/S} D^{\mathrm{ULA},\geq 0}(X/S)$) if and only if for all geometric points $\overline{s}\to S$, the fibre $A|_{X_{\overline{s}}}$ lies in ${}^p D^{\leq 0}(X_{\overline{s}})$ (resp.~${}^p D^{\geq 0}(X_{\overline{s}})$).
\end{theorem}
In particular, the inclusion $D^{\mathrm{ULA}}(X/S)\subset D(X)$ is $t$-exact for the relative perverse $t$-structure, and thus for any $A\in D^{\mathrm{ULA}}(X/S)$ its relative perverse cohomologies ${}^{p/S} \mathcal H^i(A)$ are again universally locally acyclic over $S$.
If $S$ is regular of equidimension $d$, then we can also equip $D(X)$ with an absolute perverse $t$-structure. In that case, the shifted inclusion
\[
D^{\mathrm{ULA}}(X/S)\subset D(X): A\mapsto A[d]
\]
is $t$-exact. Thus, in this case the absolute perverse cohomologies ${}^p \mathcal H^i(A)$ are again universally locally acyclic over $S$. This generalizes a result of Gaitsgory \cite{GaitsgoryULA}, who proved it when $S$ is assumed to be smooth over a field.
By Theorem~\ref{thm:ULAmain}, we get in particular a well-behaved ($\Lambda$-linear) abelian category $\mathrm{Perv}^{\mathrm{ULA}}(X/S)$ of relatively perverse universally locally acyclic sheaves over $S$. Our final result concerns properties of this abelian category.
\begin{theorem}\label{thm:perverseULAmain} Consider one of the settings (B) and (C). In case (B), assume that $\Lambda$ is regular. Moreover, in all settings, assume that $S$ is irreducible, and let $\eta\in S$ be the generic point, with $j: X_\eta\subset X$ the inclusion.
\begin{enumerate}
\item[{\rm (i)}] The restriction functor
\[
j^\ast: \mathrm{Perv}^{\mathrm{ULA}}(X/S)\to \mathrm{Perv}(X_\eta)
\]
is an exact and faithful functor of abelian categories. If $\Lambda$ is noetherian, the category $\mathrm{Perv}^{\mathrm{ULA}}(X/S)$ is noetherian. If $\Lambda$ is artinian, it is also artinian.
\item[{\rm (ii)}] Assume that $S$ is geometrically unibranch. The restriction functor
\[
j^\ast: \mathrm{Perv}^{\mathrm{ULA}}(X/S)\to \mathrm{Perv}(X_\eta)
\]
is exact and fully faithful, and its image is stable under subquotients.
\end{enumerate}
\end{theorem}
We note that the fully faithfulness in part (ii) is a strengthening of a theorem of Reich \cite[Proposition IV.2.8]{ReichExtendULA} who essentially proved the case that $S$ is smooth over a field. Results of this type are used in the proof of the geometric Satake equivalence, which involves an analysis of perverse universally locally acyclic sheaves on Beilinson--Drinfeld Grassmannians. In particular, one needs to know that these are determined by their restriction to a dense open subset of the base, in order to construct the fusion product. Part (ii) gives a very general result of this form. We note that the hypothesis in part (ii) is necessary already when $X=S$, in which case one is looking at local systems on $S$.
\begin{remark} Part (ii) can be seen as giving a notion of ``good reduction'' for a perverse sheaf: If say $S=\mathrm{Spec}\, \mathbb Z_p$ and $X/S$ is a scheme of finite type and $A_0\in \mathrm{Perv}(X_{\mathbb Q_p})$ is a perverse sheaf on the generic fibre, we can ask whether $A_0$ ``has good reduction'' in the sense of extending to a (necessarily relatively perverse) universally locally acyclic sheaf on $X/S$. In that case, its special fibre agrees with the nearby cycle sheaf, so the action of the absolute Galois group of $\mathbb Q_p$ on the nearby cycles is unramified. In fact, the converse is also true. However, over higher-dimensional bases, the condition is more subtle. Let us remark that we have not investigated the relation to the theory of nearby cycles over higher-dimensional base schemes.
\end{remark}
We have the following corollary, which again recovers and extends a result of Gaitsgory \cite{GaitsgoryULA} (who treated the case where $S$ is a smooth variety over a field).
\begin{corollary} Assume that $S$ is regular and connected, of dimension $d$, and that $\Lambda$ is a field. Assume that $A\in D(X)$ is absolutely perverse, and universally locally acyclic over $S$. Then any absolutely perverse subquotient of $A$ is universally locally acyclic over $S$.
\end{corollary}
\begin{proof} The generic fibre $A_\eta$ admits a finite Jordan--H\"older filtration, which by Theorem~\ref{thm:perverseULAmain}~(ii) extends to a filtration of $A$ by universally locally acyclic sheaves that are absolutely perverse (as over a regular base, absolute and relative perversity agree up to shift for universally locally acyclic sheaves). We can thus assume that $A_\eta$ is simple. In that case one sees that $A$ is also necessarily simple: Indeed, its restriction to a smooth locally closed subscheme of $S$ is still relatively perverse up to shift by dimension $d$, and thus with respect to absolute perversity it lies in ${}^p \mathcal D^{\leq -1}$; and the same argument applies to its Verdier dual.
\end{proof}
{\bf Acknowledgments.} Several years ago, DH conjectured the existence of a well-behaved theory of ``families of perverse sheaves'', motivated by the geometric Langlands literature and in particular Gaitsgory's unpublished notes \cite{GaitsgoryULA}, and gave a talk about these ideas in Bonn in June 2019. During the writing of the proof of the geometric Satake equivalence in \cite{FarguesScholze}, PS realized that the desired theory would follow from the existence of a general relative perverse $t$-structure. The authors were then quickly able to cook up a proof by combining v-descent with the theory of nearby cycles. We thank Akhil Mathew for sharing with us Gabber's letter \cite{GabberLetterMathew}. Moreover, we thank Dennis Gaitsgory, Haoyu Hu, Gerard Laumon, Akhil Mathew, Timo Richarz, Longke Tang, Enlin Yang and Weizhe Zheng for discussions and feedback. During the writing of this manuscript, Scholze was supported by a DFG Leibniz Prize, and by the DFG under the Excellence Strategy – EXC-2047/1 – 390685813.
\section{Derived categories of \'etale sheaves}
In this section, we recall some basics on derived categories of \'etale sheaves. As in the introduction, we consider one of three settings.
\begin{enumerate}
\item[{\rm (A)}] Let $\Lambda$ be a ring killed by some power of $\ell$, and denote by $\mathcal D_\mathrm{\acute{e}t}(X,\Lambda)$ the left-completion of the derived $\infty$-category $\mathcal D(X_\mathrm{\acute{e}t},\Lambda)$ of $\Lambda$-modules on the \'etale site $X_\mathrm{\acute{e}t}$. (If $X_\mathrm{\acute{e}t}$ has locally finite $\ell$-cohomological dimension, then the left-completion is not necessary.)
\item[{\rm (B)}] In the setting of (A), let $\mathcal D_\mathrm{cons}(X,\Lambda)\subset \mathcal D_\mathrm{\acute{e}t}(X,\Lambda)$ be the full $\infty$-subcategory of perfect-constructible complexes. (If $X_\mathrm{\acute{e}t}$ has locally finite $\ell$-cohomological dimension, then $\mathcal D_\mathrm{\acute{e}t}(X,\Lambda)$ is compactly generated with compact objects $\mathcal D_\mathrm{cons}(X,\Lambda)$.)
\item[{\rm (C)}] Let $\Lambda$ be an algebraic extension $L$ of $\mathbb Q_\ell$ or its ring of integers $\mathcal O_L$, and let $\mathcal D_\mathrm{cons}(X,\Lambda)$ be the full $\infty$-subcategory of $\mathcal D(X_\mathrm{pro\acute{e}t},\Lambda)$ consisting of those objects that on a constructible stratification of $X$ become dualizable.
\end{enumerate}
We will discuss each setting in turn, and discuss the definition of the pullback, tensor, and proper pushforward functors. We start with settings (A) and (B). The starting point is the following proposition.
\begin{proposition}[{\cite[Proposition 5.3.2]{BhattScholzeProetale}}]\label{prop:pullbackproetale} Let $X$ be a qcqs scheme and let $\Lambda$ be any ring. Let $\nu_X: X_\mathrm{pro\acute{e}t}\to X_\mathrm{\acute{e}t}$ be the projection from the pro-\'etale site of $X$ to the \'etale site of $X$. Then
\[
\nu_X^\ast: \mathcal D^+(X_\mathrm{\acute{e}t},\Lambda)\to \mathcal D^+(X_\mathrm{pro\acute{e}t},\Lambda)
\]
is fully faithful, and it extends to a fully faithful functor
\[
\nu_X^\ast: \mathcal D_\mathrm{\acute{e}t}(X,\Lambda)\to \mathcal D(X_\mathrm{pro\acute{e}t},\Lambda)
\]
from the left-completion $\mathcal D_\mathrm{\acute{e}t}(X,\Lambda)$ of $\mathcal D(X_\mathrm{\acute{e}t},\Lambda)$. The essential image of $\nu_X^\ast$ is the full $\infty$-subcategory of all $A\in \mathcal D(X_\mathrm{pro\acute{e}t},\Lambda)$ such that for all $i\in \mathbb Z$, the pro-\'etale sheaf $\mathcal H^i(A)$ comes via pullback from the \'etale site.
\end{proposition}
Moreover, if $f: X\to S$ is a separated map of finite type, then choosing a compactification $\overline{f}: \overline{X}\to S$, $j: X\hookrightarrow \overline{X}$, we can define
\[
Rf_! = R\overline{f}_\ast j_!: \mathcal D_\mathrm{\acute{e}t}(X,\Lambda)\to \mathcal D_\mathrm{\acute{e}t}(S,\Lambda).
\]
It follows from the usual formalism that this functor is independent of the choice of compactification, preserves all colimits, commutes with pullbacks, and satisfies a projection formula. As $\mathcal D_\mathrm{\acute{e}t}(X,\Lambda)$ is a presentable $\infty$-category, one can also use the adjoint functor theorem to see that there are functors $R\mathcal{H}\mathrm{om}_\Lambda$, $Rf_\ast$ and $Rf^!$ right adjoint to $\otimes^{\mathbb L}_\Lambda$, $f^\ast$ and $Rf_!$, satisfying all the usual formalism. (We do not try to make the $6$-functor formalism into a coherent $\infty$-categorical structure here; all coherences between these operations are only claimed as data on the level of homotopy categories.)
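For orientation, we record two standard special cases of this construction (included only as a reminder; they play no special role in what follows). If $f$ is proper, one can take $\overline{X}=X$, so that $Rf_! = Rf_\ast$; and if $f=j: X\hookrightarrow S$ is an open immersion, one can take $\overline{X}=S$ with $\overline{f}=\mathrm{id}_S$, so that $Rj_! = j_!$ is the exact extension by zero functor, left adjoint to $j^\ast$.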
For setting (B), we restrict to the full $\infty$-subcategory
\[
\mathcal D_\mathrm{cons}(X,\Lambda)\subset \mathcal D_\mathrm{\acute{e}t}(X,\Lambda)
\]
of constructible objects, i.e.~objects that become locally constant with perfect fibres over a constructible stratification. Again, this is stable under tensor products and pullbacks, and if $f: X\to S$ is separated and of finite presentation, then $Rf_!$ preserves $\mathcal D_\mathrm{cons}(X,\Lambda)$ by the usual finiteness results.
In setting (C), we first define, following \cite{HemoRicharzScholbach},
\[
\mathcal D_\mathrm{cons}(X,\Lambda)\subset \mathcal D(X_\mathrm{pro\acute{e}t},\Lambda)
\]
as the full $\infty$-subcategory of all objects $A$ that become dualizable over a constructible stratification. This agrees with the more classical definition: namely, \cite{HemoRicharzScholbach} show that
\[
\mathcal D_\mathrm{cons}(X,L) = \varinjlim_{L'\subset L} \mathcal D_\mathrm{cons}(X,L'),\ \mathcal D_\mathrm{cons}(X,\mathcal O_L) = \varinjlim_{L'\subset L} \mathcal D_\mathrm{cons}(X,\mathcal O_{L'})
\]
as $L'\subset L$ ranges over finite extensions of $\mathbb Q_\ell$, reducing the study of these $\infty$-categories to the case of finite extensions $L$ of $\mathbb Q_\ell$. (In fact, this follows quickly from the definitions.) In that case the functor
\[
\mathcal D_\mathrm{cons}(X,\mathcal O_L)\to \varprojlim_n \mathcal D_\mathrm{cons}(X,\mathcal O_L/\ell^n)
\]
is an equivalence (again, this is not hard to prove), and we will show below that
\[
\mathcal D_\mathrm{cons}(X,L) = \mathcal D_\mathrm{cons}(X,\mathcal O_L)\otimes_{\mathcal O_L} L.
\]
Here, it is easy to see that the natural functor
\[
\mathcal D_\mathrm{cons}(X,\mathcal O_L)\otimes_{\mathcal O_L} L\to \mathcal D_\mathrm{cons}(X,L)
\]
is fully faithful.
It follows from the definition that $\mathcal D_\mathrm{cons}(X,\Lambda)\subset \mathcal D(X_\mathrm{pro\acute{e}t},\Lambda)$ is a symmetric monoidal $\infty$-subcategory, compatible with pullback along $f: Y\to X$. The other description of $\mathcal D_\mathrm{cons}(X,\Lambda)$ also shows that one can define a functor $Rf_!$ for separated maps of finite presentation $f: X\to S$, via reduction to case (B); they continue to satisfy all the usual properties.
These $\infty$-categories of constructible objects satisfy arc-descent.
\begin{theorem}\label{thm:arcdescentD} In settings (B) and (C), the functor $X\mapsto \mathcal D_\mathrm{cons}(X,\Lambda)$ defines an arc-sheaf of $\infty$-categories. It is a finitary arc-sheaf in setting (B).
\end{theorem}
\begin{proof} In setting (B), this result is due to Bhatt--Mathew \cite[Theorem 5.4, Theorem 5.13]{BhattMathew}, at least when $\Lambda$ is finite. Their \cite[Theorem 5.4]{BhattMathew} applies in general, as does the argument that it is a finitary presheaf, so it remains to establish effectivity of descent. This will be done later in the full setting (A) for universal submersions, to which we can reduce by noetherian approximation.
In setting (C), one can formally reduce to the case of finite extensions $L/\mathbb Q_\ell$. In that case, the case of $\mathcal O_L$-coefficients follows via passage to limits from setting (B). This also formally implies that with $L$-coefficients, for any arc-cover $Y\to X$ with Cech nerve $Y^{\bullet/X}$, the map
\[
\mathcal D_\mathrm{cons}(X,L)\to \lim_\Delta \mathcal D_\mathrm{cons}(Y^{\bullet/X},L)
\]
is fully faithful. It remains to show effectivity of descent. For this, we first prove the following result regarding the existence of $\mathcal O_L$-lattices.
\begin{proposition}\label{prop:constructiblelattices} Let $L$ be an algebraic extension of $\mathbb Q_\ell$ and fix $A\in \mathcal D_\mathrm{cons}(X,L)$. Consider the functor taking an $X$-scheme $X'$ to the $\infty$-category of $A_0\in \mathcal D_\mathrm{cons}(X',\mathcal O_L)$ together with an identification $A_0[\tfrac 1\ell]\cong A|_{X'}$. This functor is a finitary arc-sheaf, and admits a section over an \'etale cover of $X$.
\end{proposition}
\begin{proof} It is clear that it is an arc-sheaf. To construct a section over an \'etale cover, one reduces to the case that $A$ is dualizable. In that case we can arrange that $A_0$ is also dualizable. Over a w-contractible pro-\'etale cover $X'\to X$, the complex $A_0$ is then equivalent to a perfect complex of $C(\pi_0 X',L)$-modules, cf.~\cite{HemoRicharzScholbach}. But as $\pi_0 X'$ is extremally disconnected, any finitely generated ideal of $C(\pi_0 X',L)$ is principal and isomorphic as $C(\pi_0 X',L)$-module to a direct summand of $C(\pi_0 X',L)$ generated by an idempotent. Any such idempotent is necessarily integral, from which one can deduce that any perfect complex of $C(\pi_0 X',L)$-modules can be extended to a perfect complex of $C(\pi_0 X',\mathcal O_L)$-modules. This then gives an integral structure over $X'$, and by finitaryness, this section over $X'$ is already defined over an \'etale cover of $X$.
It remains to prove that it is finitary. It is enough to do this arc-locally. By the first paragraph, we can always find a section over a pro-\'etale cover, so we can assume that $A=A_1[\tfrac 1\ell]$ for some $A_1\in \mathcal D_\mathrm{cons}(X,\mathcal O_L)$ that we fix. Now let $X'=\varprojlim_i X_i'$ be an inverse limit of affine $X$-schemes $X_i'=\mathrm{Spec} R_i$. It is easy to see that the functor
\[
\varinjlim_i \{A_{0,i}\in \mathcal D_\mathrm{cons}(X_i',\mathcal O_L), A_{0,i}[\tfrac 1\ell]\cong A|_{X_i'}\}\to \{A_0\in \mathcal D_\mathrm{cons}(X',\mathcal O_L), A_0[\tfrac 1\ell]\cong A|_{X'}\}
\]
is fully faithful; the point is that the cone of $A_{0,i}\to A|_{X_i'}$ is itself a finitary sheaf (namely $A_{0,i}\otimes^{\mathbb L} \mathbb Q_\ell/\mathbb Z_\ell$, which is a complex of \'etale sheaves). It remains to prove essential surjectivity, so assume given $A_0\in \mathcal D_\mathrm{cons}(X',\mathcal O_L)$ with $A_0[\tfrac 1\ell]\cong A|_{X'}\cong A_1|_{X'}[\tfrac 1\ell]$. Multiplying by a power of $\ell$ if necessary, we can assume that the map $A_0\to A|_{X'}$ arises from a map $A_0\to A_1|_{X'}$. The cone $B$ of $A_0\to A_1|_{X'}$ is then in $\mathcal D_\mathrm{cons}(X',\mathcal O_L)$ and killed by some power of $\ell$, so lies in the $\infty$-category from setting (B). As such, $B$ arises via pullback from some $B_i\in \mathcal D_\mathrm{cons}(X_i',\mathcal O_L)$ and also the map $A_1|_{X'}\to B$ can be approximated by a map $A_1|_{X_i'}\to B_i$ (after increasing $i$). Then the homotopy fibre $A_{0,i}$ of $A_1|_{X_i'}\to B_i$ gives the desired approximation of $A_0$ over $X_i'$.
\end{proof}
Now for effectivity of descent, consider some arc-cover $Y\to X$ and some $A\in \mathcal D_\mathrm{cons}(X,L)$ equipped with descent data. Let $\tilde{Y}$ be the finitary arc-sheaf of anima parametrizing $\mathcal O_L$-lattices in $A$ as in the proposition. The descent data for $A$ induce descent data for $\tilde{Y}$, which thus descends to a finitary arc-sheaf of anima $\tilde{X}$ over $X$ (necessarily arc-surjective over $X$, as $\tilde{Y}\to Y\to X$ are arc-covers). Moreover, by the case of $\mathcal O_L$-coefficients already handled, the universal $A_0\in \mathcal D_\mathrm{cons}(-,\mathcal O_L)$ over $\tilde{Y}$ descends to $\tilde{X}$. These reductions mean that we only need to prove descent along $\tilde{X}\to X$. This is an arc-cover, but $\tilde{X}$ is a finitary arc-sheaf. This means that there is some finitely presented $X$-scheme $X'\to X$ that is also an arc-cover, and a section of $\tilde{X}\to X$ over $X'$. In other words, we can reduce to the case of descent along a finitely presented arc-cover.
By the fully faithfulness already proved, we are also free to pass to a stratification. But any finitely presented arc-cover can, up to universal homeomorphisms, be refined by finite \'etale covers over a constructible stratification -- this is clear at points, and then follows by a spreading out argument. In other words, one can reduce to the case that $Y\to X$ is finite \'etale, and then even a $G$-torsor for some finite group $G$. In that case, the descent of $A$ is given by $(f_\ast A)^G$ (using that $|G|$ is invertible in $L$).
\end{proof}
As promised above, the proof gives the following corollary.
\begin{corollary}\label{cor:integralstructureexists} For any algebraic extension $L$ of $\mathbb Q_\ell$, the fully faithful functor
\[
\mathcal D_\mathrm{cons}(X,\mathcal O_L)\otimes_{\mathcal O_L} L\to \mathcal D_\mathrm{cons}(X,L)
\]
is an equivalence.
\end{corollary}
\begin{proof} We can assume that $L$ is finite over $\mathbb Q_\ell$. Take any $A\in \mathcal D_\mathrm{cons}(X,L)$. We note that the image of the functor is stable under cones and shifts. By Proposition~\ref{prop:constructiblelattices}, there is some \'etale cover $X'\to X$ over which an integral structure exists. Passing to a constructible stratification of $X$, we can assume that $X'\to X$ is finite \'etale and that $A$ is dualizable. Moreover, by the finitaryness aspect of Proposition~\ref{prop:constructiblelattices}, we can assume that $X$ is connected (as then any integral structure over a connected component spreads to an open and closed neighborhood). We can also assume that $X'$ is connected. We claim that all truncations of $A$ are still dualizable. This can be checked after pullback to the universal pro-finite \'etale cover $\tilde{X}\to X'\to X$, where $A$ becomes constant (using the integral structure over $X'$, this can be checked modulo powers of $\ell$, where everything reduces to usual finite \'etale local systems), and hence all truncations of $A$ are still constant sheaves on finitely generated $\mathcal O_L$-modules, and thus dualizable. Thus, we can assume that $A$ is concentrated in degree $0$. Then $A|_{\tilde{X}}$ is the constant sheaf on a finite-dimensional $L$-vector space $V$, and the descent data to $X$ are given by a continuous representation $\pi_1(X)\to \mathrm{GL}_L(V)$. Any such representation admits an invariant $\mathcal O_L$-lattice, finishing the proof.
\end{proof}
\begin{remark}\label{rem:largecategoryC} It is occasionally helpful to embed also the categories in setting (C) into larger categories that admit internal Hom's, direct images, and exceptional inverse images. This can be done: Assume first that $L$ is a finite extension of $\mathbb Q_\ell$. In that case, one can embed $\mathcal D_\mathrm{cons}(X,\mathcal O_L)$ into $\varprojlim_n \mathcal D_\mathrm{\acute{e}t}(X,\mathcal O_L/\ell^n)$, and this admits all six operations (via passage to limits). Now for general $L$ we can take $\varinjlim_{L'\subset L} \mathcal D_\mathrm{\acute{e}t}(X,\mathcal O_{L'})$ as $L'$ runs over finite subextensions of $L$, and with $L$-coefficients, we can formally invert $\ell$ and take the idempotent completion.
\end{remark}
\section{Universal local acyclicity}
In this section, we discuss universal local acyclicity, essentially following the approach of Lu--Zheng \cite{LuZhengULA}, but with a small shift in perspective.
Fix any qcqs base scheme $S$ in which $\ell$ is invertible, and work in one of the settings (A), (B), or (C); in particular, we have fixed some coefficient $\mathbb Z_\ell$-algebra $\Lambda$, and abbreviate $D(X):=D(X,\Lambda)$ (where $D(X,\Lambda)$ is either $D_\mathrm{\acute{e}t}(X,\Lambda)$ or $D_\mathrm{cons}(X,\Lambda)$). We define a symmetric monoidal $2$-category $\mathcal C_S$ as follows. Its objects are schemes $f: X\to S$ separated and of finite presentation over $S$. The category of morphisms $\mathrm{Fun}_{\mathcal C_S}(X,Y)$ is given by $D(X\times_S Y)$; and composition is given by convolution, i.e.
\[
D(X\times_S Y)\times D(Y\times_S Z)\to D(X\times_S Z): (A,B)\mapsto A\star B = R\pi_{XZ!}(\pi_{XY}^\ast A\otimes^{\mathbb L}_\Lambda \pi_{YZ}^\ast B)
\]
where $\pi_{XY},\pi_{XZ},\pi_{YZ}$ are the obvious projections defined on $X\times_S Y\times_S Z$. The base change formula ensures that this gives an associative composition law. The identities are given by $R\Delta_{X/S!} \Lambda = R\Delta_{X/S\ast} \Lambda$, where $\Delta_{X/S}: X\to X\times_S X$ is the diagonal, which is a finitely presented closed immersion.
The symmetric monoidal structure on $\mathcal C_S$ is given on objects by $X\boxtimes Y = X\times_S Y$, and similarly on morphisms by exterior tensor products. Any object of $\mathcal C_S$ is dualizable: The dual of $X$ is given by $X$ itself, with unit $S\to X\times_S X$ and counit $X\times_S X\to S$ both given by $R\Delta_{X/S!} \Lambda\in D(X\times_S X)$. In particular, there are internal Hom's, and the internal Hom from $X$ to $Y$ is $X\times_S Y$.
We note that $\mathcal C_S$ is naturally isomorphic to the opposite $2$-category $\mathcal C_S^{\mathrm{op}}$ which exchanges the directions of the $1$-morphisms (but not of the $2$-morphisms), as $D(X\times_S Y)$ is naturally symmetric in $X$ and $Y$.
In \cite{FarguesScholze}, $\mathcal C_S$ was considered as a bare $2$-category, and the notion of adjoint maps in $2$-categories was employed to characterize universal local acyclicity. This could be done here again. However, we prefer to follow more closely \cite{LuZhengULA}. Indeed, we can also consider the lax coslice $2$-category $\mathcal C'_S = {}_{S\backslash} \mathcal C_S$, which inherits a symmetric monoidal structure. Its objects are given by pairs $(X,A)$ where $f: X\to S$ is separated of finite presentation as before, and $A\in D(X) = \mathrm{Fun}_{\mathcal C_S}(S,X)$. A morphism $g: (X,A)\to (Y,B)$ in $\mathcal C'_S$ is given by some $C\in D(X\times_S Y)=\mathrm{Fun}_{\mathcal C_S}(X,Y)$ together with a map
\[
R\pi_{Y!}(\pi_X^\ast A\otimes^{\mathbb L}_\Lambda C)\to B,
\]
where $\pi_X,\pi_Y$ are the natural projections on $X\times_S Y$.
Then in setting (A), the symmetric monoidal $2$-category of cohomological correspondences (as in \cite{LuZhengULA}) has a natural symmetric monoidal functor to $\mathcal C'_S$, induced by sending a correspondence $c: C\to X\times_S Y$ to $Rc_! \Lambda\in D(X\times_S Y)$. Moreover, in setting (A), there are internal Hom's in $\mathcal C'_S$, where the internal Hom from $(X,A)$ to $(Y,B)$ is given by $(X\times_S Y,R\mathcal{H}\mathrm{om}(\pi_X^\ast A,R\pi_Y^!B))$. In fact, this already defines an internal Hom on the symmetric monoidal $2$-category considered by Lu--Zheng. This implies that $(X,A)$ is dualizable in Lu--Zheng's symmetric monoidal $2$-category if and only if it is dualizable in $\mathcal C'_S$ -- dualizability is then equivalent to the map $V\otimes \mathcal{H}\mathrm{om}(V,1)\to \mathcal{H}\mathrm{om}(V,V)$ being an isomorphism.
From setting (B) to setting (A), there is a fully faithful symmetric monoidal functor, which in particular preserves dualizable objects. We will see that all dualizable objects $(X,A)$ in fact lie in the essential image of (B) and are dualizable as objects in there, so these settings give rise to the same dualizable objects. For setting (C), we will develop techniques to reduce to setting (B).
Here is a general proposition that explains the relation between the approaches of \cite{LuZhengULA} and \cite{FarguesScholze}.
\begin{proposition}\label{prop:dualizablevsadjoint} Let $\mathcal C$ be a symmetric monoidal $2$-category with tensor unit $1\in \mathcal C$, and assume that all objects of $\mathcal C$ are dualizable. Let $\mathcal C'= {}_{1\backslash} \mathcal C$ be the lax coslice, which is itself a symmetric monoidal $2$-category. Then a morphism $f: 1\to X$ in $\mathcal C$ is a right adjoint if and only if $(X,f)\in \mathcal C'$ is dualizable.
\end{proposition}
\begin{proof} Assume that $(X,f)\in \mathcal C'$ is dualizable. As the forgetful functor $\mathcal C'\to \mathcal C$ is symmetric monoidal, its dual is of the form $(X^\ast,g^\ast)$ where $X^\ast\in \mathcal C$ is the dual of $X$, and $g^\ast: 1\to X^\ast$ is some map. The map $g^\ast$ is equivalent to a map $g: X\to 1$ by dualizability of $X$. We claim that $g$ is a left adjoint of $f$. To see this, we have to produce $2$-morphisms $\alpha: \mathrm{id}_1\to fg$ and $\beta: gf\to \mathrm{id}_X$ such that the composites
\[
f\xrightarrow{\alpha f} fgf\xrightarrow{f\beta} f,\ g\xrightarrow{g\alpha} gfg\xrightarrow{\beta g} g
\]
are the identity. But the dualizability of $(X,f)$ gives unit and counit maps
\[
(1,\mathrm{id}_1)\to (X\otimes X^\ast,f\otimes g^\ast), (X\otimes X^\ast,f\otimes g^\ast)\to (1,\mathrm{id}_1)
\]
satisfying similar conditions. The first map necessarily lies over the unit map $1\to X\otimes X^\ast$, and is then given by a $2$-morphism from the unit map $1\to X\otimes X^\ast$ to $f\otimes g^\ast: 1\to X\otimes X^\ast$. By dualizability of $X$, this is equivalent to a map from the identity on $X$ to $gf$. A similar analysis applies to the second map. Unraveling all the structures then shows that $g$ is a left adjoint of $f$. For the converse direction, one reverses all the steps.
\end{proof}
\begin{definition}\label{def:ULA} Let $f: X\to S$ be a separated map of finite presentation and $A\in D(X)$. Then $A$ is universally locally acyclic if
\[
A\in D(X)= \mathrm{Fun}_{\mathcal C_S}(S,X)
\]
is a right adjoint in $\mathcal C_S$; equivalently, if $(X,A)\in \mathcal C'_S$ is dualizable.
\end{definition}
We note that by the existence of internal Hom's in $\mathcal C'_S$ in setting (A), we get the following characterization.
\begin{proposition}\label{prop:ULAfirstchar} Let $f: X\to S$ be a separated map of finite presentation and $A\in D(X)$. Assume setting (A). Then $A$ is $f$-universally locally acyclic if and only if the map
\[
\pi_1^\ast \mathbb D_{X/S}(A)\otimes^{\mathbb L}_\Lambda \pi_2^\ast A\to R\mathcal{H}\mathrm{om}_\Lambda(\pi_1^\ast A,R\pi_2^! A)
\]
is an isomorphism in $D(X\times_S X)$, where $\mathbb D_{X/S}(A)$ denotes the relative Verdier dual, and $\pi_i: X\times_S X\to X$ the two projections.
\end{proposition}
\begin{proof} Indeed, an object $Y$ in a symmetric monoidal ($2$-)category with internal Hom's is dualizable if and only if the map $Y\otimes \mathcal{H}\mathrm{om}(Y,1)\to \mathcal{H}\mathrm{om}(Y,Y)$ is an isomorphism. Unraveling, we get this condition.
\end{proof}
In particular, using this proposition one verifies in setting (A) some basic properties of universal local acyclicity, such as that if $h: Y\to X$ is a map of separated $S$-schemes of finite presentation, then $Rh_\ast$ preserves universally locally acyclic sheaves if $h$ is proper, and $h^\ast$ preserves universally locally acyclic sheaves if $h$ is smooth. Also, if $g: S\to S'$ is a smooth map and $A$ is $f$-universally locally acyclic for some $f: X\to S$ as above, then $A$ is also $g\circ f$-universally locally acyclic. We will only use these properties in setting (A), and only in the proof of Theorem~\ref{thm:nearbycycles}. However, these results also hold in settings (B) and (C): Indeed, universal local acyclicity in settings (A) and (B) is the same notion, while setting (C) reduces to setting (B) at least v-locally on $S$, using the integral structures of Proposition~\ref{prop:ULAintegralstructure}.
With this definition, one can prove the following properties. Here, in settings (B) and (C), we denote by
\[
\mathbb D_{X/S}(A) = R\mathcal{H}\mathrm{om}_{D(X_\mathrm{pro\acute{e}t},\Lambda)}(A,Rf^! \Lambda)\in D(X_\mathrm{pro\acute{e}t},\Lambda)
\]
the internal Hom in $X_\mathrm{pro\acute{e}t}$, where in setting (B) the complex $Rf^! \Lambda$ comes from setting (A), and in setting (C) it is defined via limits from setting (B).
\begin{proposition}\label{prop:basicpropertiesULA} Let $f: X\to S$ be a separated map of finite presentation and $A\in D(X)$ be $f$-universally locally acyclic.
\begin{enumerate}
\item[{\rm (i)}] Let $S'\to S$ be any map of schemes, and $f': X'=X\times_S S'\to S'$ the base change of $f$, and $A'\in D(X')$ the preimage of $A$. Then $A'$ is $f'$-universally locally acyclic.
\item[{\rm (ii)}] The relative Verdier dual $\mathbb D_{X/S}(A)=R\mathcal{H}\mathrm{om}_\Lambda(A,Rf^! \Lambda)$ of $A$ lies in $D(X)\subset D(X_\mathrm{pro\acute{e}t},\Lambda)$ and is $f$-universally locally acyclic, and $(X,\mathbb D_{X/S}(A))$ is the dual of $(X,A)$ in $\mathcal C'_S$. In particular, the biduality map
\[
A\to \mathbb D_{X/S}(\mathbb D_{X/S}(A))
\]
is an isomorphism, and the formation of $\mathbb D_{X/S}(A)$ commutes with any base change in $S$.
\item[{\rm (iii)}] In setting (A), the complex $A$ is perfect-constructible.
\item[{\rm (iv)}] In setting (A), for any $(Y,B)\in \mathcal C'_S$, the map
\[
\pi_X^\ast A\otimes^{\mathbb L}_\Lambda \pi_Y^\ast B\to R\mathcal{H}\mathrm{om}_\Lambda(\pi_X^\ast \mathbb D_{X/S}(A),R\pi_Y^! B)
\]
is an isomorphism.
\item[{\rm (v)}] For any geometric point $\overline{x}\to X$ with image $\overline{s}\to S$, and generization $\overline{t}$ of $\overline{s}$, the maps
\[
A_{\overline{x}} = R\Gamma(X_{\overline{x}},A)\to R\Gamma(X_{\overline{x}}\times_{S_{\overline{s}}} S_{\overline{t}},A)\to R\Gamma(X_{\overline{x}}\times_{S_{\overline{s}}} \overline{t},A)
\]
are isomorphisms.
\end{enumerate}
In particular, condition (v) holds after any base change, so $A$ is universally locally acyclic in the usual sense.
\end{proposition}
We note that in many of the proofs, the case of setting (C) with rational coefficients is the hardest case. The reader is advised to omit that case on first reading; in particular, this is required to avoid any apparent vicious circles.
\begin{proof} Part (i) is a consequence of the observation that the pullback functors $\mathcal C_S\to \mathcal C_{S'}: X\mapsto X\times_S S'$ (and the induced functor $\mathcal C_S'\to \mathcal C_{S'}'$) are symmetric monoidal, and symmetric monoidal functors preserve dualizable objects. In setting (A), part (ii) follows from the description of internal Hom's in $\mathcal C_S'$. This formally gives the result in setting (B) as well, and in setting (C) for integral coefficients by reducing to a finite extension of $\mathbb Q_\ell$ and then via limits to setting (B). Setting (C) with rational coefficients is addressed later.
For part (iii), note that by Theorem~\ref{thm:arcdescentD} we can argue v-locally on $S$, so we can assume that every connected component of $S$ is the spectrum of an absolutely integrally closed valuation ring. In that case $X$ has finite $\ell$-cohomological dimension by Lemma~\ref{lem:finitecohomdim} below, and so perfect-constructibility is equivalent to compactness in $D_\mathrm{\acute{e}t}(X,\Lambda)$. But by dualizability of $A$, the map
\[
\pi_X^\ast \mathbb D_{X/S}(A)\otimes^{\mathbb L}_\Lambda \pi_Y^\ast B\to R\mathcal{H}\mathrm{om}_\Lambda(\pi_X^\ast A,R\pi_Y^! B)
\]
is an isomorphism for any $(Y,B)$; in particular, applying this in case $Y=X$ and taking $R\Delta_{X/S}^!$, we find
\[
R\Delta_{X/S}^!(\pi_1^\ast \mathbb D_{X/S}(A)\otimes^{\mathbb L}_\Lambda \pi_2^\ast B)\cong R\mathcal{H}\mathrm{om}_\Lambda(A,B),
\]
and thus
\[
R\mathrm{Hom}_{D_\mathrm{\acute{e}t}(X,\Lambda)}(A,B)\cong R\Gamma(X,R\Delta_{X/S}^!(\pi_1^\ast \mathbb D_{X/S}(A)\otimes^{\mathbb L}_\Lambda \pi_2^\ast B)).
\]
Now the functor on the right commutes with all direct sums in $B$, and hence $A$ is compact, as desired.
Part (iv) follows from the first displayed formula in the previous paragraph, applied to $\mathbb D_{X/S}(A)$, using also (ii). For part (v) in setting (A), we first specialize part (iv) to $B=\Lambda$ for any separated $g: Y\to S$ of finite presentation, and apply $R\pi_{X\ast}$. Then the left-hand side becomes $R\pi_{X\ast} A|_{X\times_S Y}$, while the right-hand side becomes
\[
R\mathcal{H}\mathrm{om}_\Lambda(\mathbb D_{X/S}(A),R\pi_{X\ast} R\pi_Y^! \Lambda) = R\mathcal{H}\mathrm{om}_\Lambda(\mathbb D_{X/S}(A),Rf^! Rg_\ast \Lambda).
\]
Applying part (iv) again for $(X,Rg_\ast \Lambda)$, we see that the map
\[
A\otimes^{\mathbb L}_\Lambda Rg_\ast \Lambda\to R\mathcal{H}\mathrm{om}_\Lambda(\mathbb D_{X/S}(A),Rf^! Rg_\ast \Lambda)
\]
is also an isomorphism. In total, we see that the natural map
\[
A\otimes^{\mathbb L}_\Lambda Rg_\ast \Lambda\to R\pi_{X\ast} A|_{X\times_S Y}
\]
is an isomorphism. A priori, this holds for all separated $Y$ of finite presentation, but then by passage to limits it follows for all (qcqs) $S$-schemes $Y$. In particular, after base changing to $S_{\overline{s}}$, we can apply it to $Y=S_{\overline{t}}$ or $Y=\overline{t}$. Taking stalks of this isomorphism at geometric points $\overline{x}\to X$ over $\overline{s}\to S$ then proves (v) in setting (A). This formally gives the result also in setting (B), and in setting (C) for integral coefficients by passage to limits.
It remains to prove parts (ii) and (v) in setting (C) with rational coefficients. Note first that the result is automatic if $A\in \mathcal D_\mathrm{cons}(X,L)$ is of the form $A_0[\tfrac 1\ell]$ where $A_0\in \mathcal D_\mathrm{cons}(X,\mathcal O_L)$ is universally locally acyclic. In general, Proposition~\ref{prop:ULAintegralstructure} below ensures that this happens arc-locally on $S$. Part (ii) then follows in general by arc-descent. Part (v) is slightly more tricky, as the statement in itself is not amenable to arc-descent. Note first that in part (v) it is enough to prove the first isomorphism; the second isomorphism is just its variant after base change to the closure of $\overline{t}$ in $S_{\overline{s}}$. We replace $S_{\overline{t}}\to S_{\overline{s}}$ by any pro-\'etale map $g: T\to S$. We can then ask whether the map
\[
A\widehat{\otimes^{\mathbb L}} Rg_\ast \mathbb Z_\ell\to R\tilde{g}_\ast A|_{X\times_S T}
\]
is an isomorphism, where $\tilde{g}: X\times_S T\to X$ is the base change of $g$, and $\widehat{\otimes^{\mathbb L}}$ denotes the $\ell$-adically completed tensor product, using any integral structure on $A$ (which exists at least up to direct factors). This is a statement that can be checked arc-locally on $S$, and holds true when $A$ admits a universally locally acyclic integral structure (by the proof of (v)), so then holds true in general by Proposition~\ref{prop:ULAintegralstructure}. Now we can apply this after base change to $S_{\overline{s}}$ to $T=S_{\overline{t}}$, which gives the desired claim.
\end{proof}
We used the following result on finite cohomological dimension due to Gabber \cite{GabberOberwolfach2020}.
\begin{lemma}\label{lem:finitecohomdim} Let $S$ be an affine scheme over $\mathbb Z[\tfrac 1\ell]$ all of whose connected components are spectra of absolutely integrally closed valuation rings, and let $f: X\to S$ be an affine map of finite type. Let $d$ be the maximal fibre dimension of $f$. Then the $\ell$-cohomological dimension of $X$ is bounded by $d+1$.
\end{lemma}
In fact, Gabber showed that one can bound the $\ell$-cohomological dimension by $d$, by proving an even more general relative version of Artin vanishing. We will recall his result in Proposition~\ref{prop:artinvanishing} below.
\begin{proof} As $\pi_0 S$ is profinite, it suffices to check this on connected components. We can also reduce to the case that $S=\mathrm{Spec} V$ where $V$ is of finite rank, and to sheaves $\mathcal F$ concentrated in one fibre. Base changing to the closure of this fibre, we can assume that this is the generic fibre of $S$. Let $S'\subset S$ be the open subset consisting of the generic point $\eta$ and its immediate specialization (if it exists). By arc-excision applied to the cover of $S$ by $S'$ and $S\setminus \{\eta\}$, we find that $R\Gamma(X,\mathcal F)=R\Gamma(X\times_S S',\mathcal F)$; so we can assume that $S$ is of rank (at most) $1$. The case of fields is given by Artin vanishing. Now let $\tilde{X}$ be the henselization of $X$ at the special fibre. Then there is a triangle
\[
R\Gamma(X,\mathcal F)\to R\Gamma(X_\eta,\mathcal F)\oplus R\Gamma(\tilde{X},\mathcal F)\to R\Gamma(\tilde{X}_\eta,\mathcal F),
\]
and by Gabber's affine analogue of proper base change, $R\Gamma(\tilde{X},\mathcal F)=0$ (as we assumed that $\mathcal F$ is concentrated on the generic fibre). But by Artin vanishing, both $R\Gamma(X_\eta,\mathcal F)$ and $R\Gamma(\tilde{X}_\eta,\mathcal F)$ sit in degrees $\leq d$, giving the claim.
\end{proof}
\begin{remark}\label{rem:rungevanishing} The proof shows that the vanishing in cohomological degree $d+1$ has the following reinterpretation in terms of rigid-analytic geometry. Let $V$ be a complete rank $1$ valuation ring with algebraically closed fraction field $K$, and let $X$ be an affine scheme of finite type over $V$, of relative dimension $d$. Let $\hat{X}/\mathrm{Spf}\, V$ be its completion, and let $\hat{X}_K$ be its generic fibre as a rigid-analytic variety; this is an open affinoid subset of the rigid-analytic variety associated to $X_K$. Finally, let $\mathcal F$ be any constructible sheaf, of torsion order invertible in $V$. Then the map
\[
H^d(X_K,\mathcal F)\to H^d(\hat{X}_K,\mathcal F)
\]
is surjective. This is a rigid-analytic analogue (with constructible coefficients) of a known property of Runge pairs in complex-analytic geometry, cf.~e.g.~\cite{AndreottiNarasimhan}. (We thank Mohan Ramachandran for making us aware of this reference.)
\end{remark}
Next, we analyze arc-descent properties.
\begin{proposition}\label{prop:ULAfinitaryarc} Let $f: X\to S$ be a separated map of finite presentation. Consider the functor taking any $S'$ over $S$ to the $\infty$-category $\mathcal D^{\mathrm{ULA}}(X'/S')\subset \mathcal D(X')$ of universally locally acyclic sheaves on $X'=X\times_S S'$ over $S'$. This defines an arc-sheaf of $\infty$-categories, which is finitary in settings (A) and (B).
In particular, if $A\in D(X)$ and $S'\to S$ is an arc-cover such that $A|_{X'}$ is universally locally acyclic over $S'$, then $A$ is universally locally acyclic over $S$.
\end{proposition}
\begin{proof} As settings (A) and (B) give rise to the same notion of universally locally acyclic sheaves, we can assume that we are in setting (B) or (C). Then $\mathcal D^{\mathrm{ULA}}(X'/S')\subset \mathcal D(X')=\mathcal D_\mathrm{cons}(X',\Lambda)$, and we know that the latter is an arc-sheaf by Theorem~\ref{thm:arcdescentD}. Thus, we only need to prove effectivity of descent, which is exactly the final sentence, and that it is a finitary arc-sheaf in setting (B). Finitaryness in setting (B) follows from the functor $S\mapsto \mathcal C'_S$ taking cofiltered limits of affine schemes to filtered colimits of symmetric monoidal $2$-categories (and hence the same happens on dualizable objects).
For the final sentence, note that the question whether the Verdier dual (formed as a pro-\'etale sheaf, as in Proposition~\ref{prop:basicpropertiesULA}~(ii)) is again in $\mathcal D_\mathrm{cons}(X,\Lambda)$ and commutes with base change in $S$ can be checked arc-locally on $S$. Thus, we have a well-defined dual $A^\vee = \mathbb D_{X/S}(A)$ of $A$. Similarly, one can produce the unit and counit maps via arc-descent. Alternatively, use the characterization of Proposition~\ref{prop:ULAfirstchar} in setting (A), which can be adapted to setting (C) by working with $\ell$-adically completed derived categories (resp.~the isogeny category).
\end{proof}
\begin{proposition}\label{prop:ULAintegralstructure} Let $f: X\to S$ be a separated map of finite presentation, and consider setting (C). Let $A\in \mathcal D^{\mathrm{ULA}}(X/S,L)$, and consider the functor taking $S'/S$ to the $\infty$-category of $A_0\in \mathcal D^{\mathrm{ULA}}(X'/S',\mathcal O_L)$ with an isomorphism $A_0[\tfrac 1\ell]\cong A|_{X'}$, where $X'=X\times_S S'$. This defines a finitary arc-sheaf of $\infty$-categories that admits a section over a v-cover of $S$.
\end{proposition}
\begin{proof} By Proposition~\ref{prop:ULAfinitaryarc} and Theorem~\ref{thm:arcdescentD}, it is an arc-sheaf of $\infty$-categories. Moreover, note that $A_0\in \mathcal D_\mathrm{cons}(X',\mathcal O_L)$ is universally locally acyclic over $S'$ if and only if $A_0/\ell\in \mathcal D_\mathrm{cons}(X',\mathcal O_L/\ell)$ is; indeed, by approximation we may assume that $L$ is a finite extension of $\mathbb Q_\ell$, and then the condition lifts to $\mathcal O_L/\ell^n$, and then to $\mathcal O_L$ by passing to the limit. Then it follows from Proposition~\ref{prop:ULAfinitaryarc} and Proposition~\ref{prop:constructiblelattices} that it is a finitary arc-sheaf.
It remains to see that it admits a section over a v-cover of $S$. As it is a finitary arc-sheaf, we can reduce to the case that $S$ is the spectrum of an absolutely integrally closed valuation ring $V$. In that case, Theorem~\ref{thm:nearbycycles} reduces the problem to the generic fibre, where one can choose any lattice.
\end{proof}
In fact, one can check universal local acyclicity after pullback to absolutely integrally closed, rank $1$ valuation rings.
\begin{corollary}\label{cor:ULAtestrank1} Let $f: X\to S$ be a separated map of finite presentation and $A\in D(X)$ in setting (B) or (C). Then $A$ is $f$-universally locally acyclic if and only if for all rank $1$ valuation rings $V$ with algebraically closed fraction field $K$ and all maps $\mathrm{Spec} V\to S$, the restriction $A|_{X_V}\in D(X_V)$ to $X_V=X\times_S \mathrm{Spec} V$ is universally locally acyclic over $V$.
\end{corollary}
\begin{proof} In setting (B), this is a consequence of $S'\mapsto \mathcal D^{\mathrm{ULA}}(X'/S',\Lambda)\subset \mathcal D_\mathrm{cons}(X',\Lambda)$ being a finitary arc-sheaf: We may first assume that all connected components of $S$ are spectra of absolutely integrally closed valuation rings, and then by finitaryness we can assume that $S$ is the spectrum of an absolutely integrally closed valuation ring, in fact one of finite rank. Then by arc-descent one can reduce to the rank $1$ case, as desired.
In setting (C) with integral coefficients, the result follows formally from setting (B). With rational coefficients, consider the finitary arc-sheaf of anima parametrizing universally locally acyclic $A_0$ with integral coefficients and $A_0[\tfrac 1\ell]\cong A$, as in Proposition~\ref{prop:ULAintegralstructure}. It suffices to see that this admits a section over an arc-cover of $S$. By finitaryness, we can reduce to the case that $S$ is the spectrum of an absolutely integrally closed valuation ring. By Theorem~\ref{thm:nearbycycles}, there is a unique universally locally acyclic extension along $j: X_\eta\hookrightarrow X$ of the restriction to the generic fibre. Replacing $A$ by the cone of $A\to Rj_\ast j^\ast A$, we can assume that the restriction of $A$ to $X_{\eta}$ is trivial. As $A$ is constructible, the image of its support in $S$ is constructible; we can thus find a locally closed immersion $\mathrm{Spec} V\to S$ from a rank $1$ valuation ring whose closed point maps into the image of the support of $A$, but whose generic point does not. But then $A|_{X_V}$ is universally locally acyclic by assumption, and its restriction to the generic fibre of $X_V$ vanishes, so $A|_{X_V}=0$ by Theorem~\ref{thm:nearbycycles}, contradicting that the closed point of $\mathrm{Spec} V$ maps into the image of the support of $A$. Thus $A=0$, as desired.
\end{proof}
\section{Nearby cycles}
The following theorem is essentially due to Lu--Zheng, \cite[Section 3]{LuZhengULA}.
\begin{theorem}\label{thm:nearbycycles} Let $S=\mathrm{Spec} V$ be an absolutely integrally closed valuation ring $V$ with fraction field $K$. Let $X$ be a separated scheme of finite presentation over $S$, with generic fibre $X_\eta$. Consider one of the settings (B) and (C).
The restriction functor
\[
D^{\mathrm{ULA}}(X/S)\to D(X_\eta)
\]
is an equivalence, whose inverse is given by $Rj_\ast: D(X_\eta)\subset D(X_{\eta,\mathrm{pro\acute{e}t}},\Lambda)\to D(X_\mathrm{pro\acute{e}t},\Lambda)$ for $j: X_\eta\to X$ the inclusion.
\end{theorem}
Before proving the theorem, we note a couple of consequences.
\begin{corollary}\label{cor:nearbycyclesnice} In the situation of Theorem~\ref{thm:nearbycycles}, the functor
\[
Rj_\ast: D(X_\eta)\subset D(X_{\eta,\mathrm{pro\acute{e}t}},\Lambda)\to D(X_\mathrm{pro\acute{e}t},\Lambda)
\]
has the following properties:
\begin{enumerate}
\item[{\rm (i)}] its image is contained in $D(X)=D_\mathrm{cons}(X,\Lambda)$;
\item[{\rm (ii)}] its formation commutes with any pullback along a map $S'=\mathrm{Spec} V'\to \mathrm{Spec} V$ where $V\to V'$ is a flat map of absolutely integrally closed valuation rings;
\item[{\rm (iii)}] it commutes with (relative) Verdier duality;
\item[{\rm (iv)}] it satisfies a K\"unneth formula: if $Y$ is another scheme of finite presentation over $S$, then the diagram
\[\xymatrix{
D(X_\eta)\times D(Y_\eta)\ar[r]^{\boxtimes}\ar[d]^{Rj_\ast \times Rj_\ast} & D((X\times_S Y)_\eta)\ar[d]^{Rj_\ast}\\
D(X)\times D(Y)\ar[r]^{\boxtimes} & D(X\times_S Y)
}\]
commutes.
\end{enumerate}
Passing to the closed fibre $i: X_s\to X$, the nearby cycles functor
\[
R\psi = i^\ast Rj_\ast: D(X_\eta)\to D(X_s)
\]
has the same properties (assuming, in (ii), that $V\to V'$ is faithfully flat).
\end{corollary}
We note that part (iii) was observed by Fujiwara, cf. \cite[Proof of Lemma 1.5.1]{Fujiwara}.
\begin{proof} Part (i) is part of Theorem~\ref{thm:nearbycycles}. Part (ii) follows from preservation of universal local acyclicity under pullback. Part (iii) follows from preservation of universal local acyclicity under relative Verdier duality. We note that to get the same result for $R\psi= i^\ast Rj_\ast$ we also use that formation of relative Verdier duals commutes with any base change for universally locally acyclic sheaves. Finally, part (iv) follows from preservation of universal local acyclicity under exterior tensor products.
\end{proof}
Now we prove Theorem~\ref{thm:nearbycycles}.
\begin{proof}[Proof of Theorem~\ref{thm:nearbycycles}] First we assume that we are in setting (B), which we may embed into setting (A). We start by proving full faithfulness. In fact, for any $A\in D^{\mathrm{ULA}}(X/S)$, the natural map
\[
A\to Rj_\ast j^\ast A
\]
must be an isomorphism. This follows from Proposition~\ref{prop:basicpropertiesULA}~(iv) applied to $Y=\mathrm{Spec} K\to \mathrm{Spec} V$. (We note that for a general valuation ring, this may not be of finite type over $S$, but one can still write it as a limit of quasicompact open subsets, giving the conclusion by passing to filtered colimits.) This immediately gives full faithfulness. It remains to show that
\[
j^\ast: D^{\mathrm{ULA}}(X/S)\hookrightarrow D_\mathrm{cons}(X_\eta)
\]
is essentially surjective: Indeed, we have just seen that the inverse functor is necessarily given by $Rj_\ast$. We note that even for $S=\mathrm{Spec} K$ a field, this is Deligne's theorem on universal local acyclicity over a field, which we will reprove here.
At this point, we follow an argument that goes back to Deligne's proof of constructibility of nearby cycles, \cite{SGA412}, cf.~also the appendix of \cite{IllusieAutour}. We argue by induction on the (relative) dimension $d$ of $X$. For the induction, it is useful to note that the theorem formally implies the similar theorem when $K$ is not assumed to be algebraically closed, but only that its absolute Galois group is pro-$p$, where $p$ is the residue characteristic of $V$. Indeed, using Proposition~\ref{prop:ULAfinitaryarc} one then gets an extension to a universally locally acyclic sheaf after base change to some extension $V'$ of $V$ of $p$-power degree, and by preservation of universal local acyclicity under proper pushforward and using the trace map on the generic fibre to produce a splitting, the original sheaf is a direct summand of a sheaf that extends to a universally locally acyclic sheaf. But $D^{\mathrm{ULA}}(X/S)\subset D_\mathrm{cons}(X_\eta)$ is stable under retracts, giving the claim.
Now we first prove that there is some $Z\subset X$ that is finite over $S=\mathrm{Spec} V$ such that $(Rj_\ast A)|_{X\setminus Z}$ is universally locally acyclic over $S$. To see this, we may assume that $X$ is affine, and pick some map $g: X\to \mathbb A^1_S$. Taking the strict henselization of $\mathbb A^1_S$ at the generic point of the special fibre gives the spectrum $\mathrm{Spec} W$ of some valuation ring $W$ over $V$. Its fraction field $L$ may not be algebraically closed, but at least its absolute Galois group is pro-$p$, where $p$ is the residue characteristic of $V$ (if positive; otherwise $L$ is indeed algebraically closed): Indeed, the residue field of $W$ is separably closed, and its value group agrees with the value group of $V$, which is divisible. Thus, using the previous paragraph and induction (and smooth base change), we see that $Rj_\ast A|_{X_W}$ is universally locally acyclic over $W$. By Proposition~\ref{prop:ULAfinitaryarc}, there is some \'etale map $U\to \mathbb A^1_S$ such that $Rj_\ast A|_{X_U}$ is universally locally acyclic over $U$. But $U\to S$ is smooth, so it follows that $Rj_\ast A|_{X_U}$ is universally locally acyclic over $S$. Now the union of all such \'etale $X_U\to X$ covers an open subset $X\setminus Z\subset X$ whose complement $Z$ is finite over $S$. As universal local acyclicity can be checked \'etale locally, this finishes the claim.
The next reduction is to assume that $X$ is proper, noting that any $X$ admits a compactification (by Nagata, or simply locally by embedding into projective space); also, any $A\in D_\mathrm{cons}(X_\eta,\Lambda)$ extends to the compactification through extension by $0$. Now we are finished by Lemma~\ref{lem:ULAlocalglobal} below.
This finishes the proof in setting (B). Setting (C) formally reduces to the case of a finite extension $L/\mathbb Q_\ell$. In the case of $\mathcal O_L$-coefficients, one can then formally reduce to $\mathcal O_L/\ell^n$-coefficients, which is setting (B). In the setting of $L$-coefficients, we note that essential surjectivity follows from the case of $\mathcal O_L$-coefficients, and this also proves the claim that $Rj_\ast$ takes values in universally locally acyclic sheaves. It remains to prove that if $A\in D^{\mathrm{ULA}}(X/S)$, then the map $A\to Rj_\ast j^\ast A$ is an isomorphism. Noting that $Rj_\ast j^\ast A\in D^{\mathrm{ULA}}(X/S)$ by what we already proved, it suffices to prove that $A=0$ if $j^\ast A=0$. To see this, note that the support of $A$ is a constructible subset of $X$ and hence its image in $S$ is also constructible. Thus, the image of its support has a generic point; by base change, we can assume that this is the closed point of $S$. As then the closed point of $S$ is a constructible closed subset, its open complement is quasicompact and hence has a closed point, which we can assume is the generic point of $S$; we can thus assume that $V$ is of rank $1$. Now using Remark~\ref{rem:largecategoryC} one can define a variant of $\mathcal C_S$ using these big categories that admits internal Hom's, and this implies that $A=R\mathcal{H}\mathrm{om}_L(A^\vee,Rf^! L)$ is the Verdier dual (in the sense of the categories in Remark~\ref{rem:largecategoryC}) of its dual $A^\vee$ in $\mathcal C'_S$. But $Rf^! L = Rj_\ast j^\ast Rf^! L$, and hence $A=Rj_\ast R\mathcal{H}\mathrm{om}_L(j^\ast A^\vee,j^\ast Rf^! L)$ where $j^\ast A^\vee = (j^\ast A)^\vee = 0$, and hence $A=0$, as desired.
\end{proof}
\begin{lemma}\label{lem:ULAlocalglobal} Let $f: X\to S$ be a finitely presented proper map of qcqs schemes. Let $A\in D_\mathrm{\acute{e}t}(X,\Lambda)$ in setting (A), and assume that there is some closed subscheme $Z\subset X$ that is finite over $S$, such that $A|_{X\setminus Z}$ is universally locally acyclic. Moreover, assume that $Rf_\ast A\in D_\mathrm{\acute{e}t}(S,\Lambda)$ is universally locally acyclic over $S$, i.e.~locally constant with perfect fibres. Then $A$ is universally locally acyclic over $S$.
\end{lemma}
\begin{proof} We have to see that the map
\[
(X,A)^\vee\otimes (X,A)\to \mathcal{H}\mathrm{om}_{\mathcal C'_S}((X,A),(X,A))
\]
in $\mathcal C'_S$ is an isomorphism; equivalently, the map
\[
\pi_1^\ast \mathbb D_{X/S}(A)\otimes^{\mathbb L}_\Lambda \pi_2^\ast A\to R\mathcal{H}\mathrm{om}_\Lambda(\pi_1^\ast A,R\pi_2^! A)
\]
is an isomorphism on $X\times_S X$. We will prove that it is an isomorphism away from $Z\times_S Z$, and after taking the pushforward to $S$. This will give the claim: The cone of this map is supported on $Z\times_S Z$, which is finite over $S$, hence pushforward to $S$ is conservative.
Restricting to $(X\setminus Z)\times_S X$, the map is an isomorphism as $A|_{X\setminus Z}$ is universally locally acyclic over $S$, so that in $\mathcal C'_S$, we have
\[
(X\setminus Z,A|_{X\setminus Z})^\vee\otimes (X,A)\cong \mathcal{H}\mathrm{om}_{\mathcal C'_S}((X\setminus Z,A|_{X\setminus Z}),(X,A)).
\]
Similarly, the restriction to $X\times_S (X\setminus Z)$ is an isomorphism, using this time that
\[
(X,A)^\vee\otimes (X\setminus Z,A|_{X\setminus Z})\cong \mathcal{H}\mathrm{om}_{\mathcal C'_S}((X,A),(X\setminus Z,A|_{X\setminus Z})),
\]
by dualizability of the second factor.
It remains to prove that the pushforward to $S$ is an isomorphism. But unraveling, this exactly amounts to the question whether $Rf_\ast A$ is universally locally acyclic over $S$, which we have assumed.
\end{proof}
Using these results, we see that our definition of universal local acyclicity agrees with the usual definition. More precisely:
\begin{theorem}\label{thm:ULAcorrect} Let $f: X\to S$ be a separated map of finite presentation between qcqs schemes and let $A\in D(X)$ in one of the settings (B) and (C). The following conditions are equivalent.
\begin{enumerate}
\item[{\rm (i)}] The pair $(X,A)$ defines a dualizable object in the symmetric monoidal $2$-category of cohomological correspondences over $S$.
\item[{\rm (ii)}] The following condition holds after any base change in $S$. For any geometric point $\overline{x}\to X$ mapping to a geometric point $\overline{s}\to S$, and a generization $\overline{t}\to S$ of $\overline{s}$, the map
\[
A|_{\overline{x}} = R\Gamma(X_{\overline{x}},A)\to R\Gamma(X_{\overline{x}}\times_{S_{\overline{s}}} S_{\overline{t}},A)
\]
is an isomorphism.
\item[{\rm (iii)}] The following condition holds after any base change in $S$. For any geometric point $\overline{x}\to X$ mapping to a geometric point $\overline{s}\to S$, and a generization $\overline{t}\to S$ of $\overline{s}$, the map
\[
A|_{\overline{x}} = R\Gamma(X_{\overline{x}},A)\to R\Gamma(X_{\overline{x}}\times_{S_{\overline{s}}} \overline{t},A)
\]
is an isomorphism.
\item[{\rm (iv)}] After base change along $\mathrm{Spec} V\to S$ for any rank $1$ valuation ring $V$ with algebraically closed fraction field $K$ and any geometric point $\overline{x}\to X$ mapping to the special point of $\mathrm{Spec} V$, the map
\[
A|_{\overline{x}} = R\Gamma(X_{\overline{x}},A)\to R\Gamma(X_{\overline{x}}\times_{\mathrm{Spec} V}\mathrm{Spec} K,A)
\]
is an isomorphism.
\end{enumerate}
Moreover, these conditions are stable under any base change, and can be checked arc-locally on $S$.
\end{theorem}
\begin{proof} By Proposition~\ref{prop:basicpropertiesULA}, (i) implies (ii) and (iii), and each of them has (iv) as a special case. Thus, it remains to prove that (iv) implies (i). By Corollary~\ref{cor:ULAtestrank1}, we can assume that $S=\mathrm{Spec} V$ is the spectrum of an absolutely integrally closed valuation ring of rank $1$. Then Theorem~\ref{thm:nearbycycles} shows that (i) is equivalent to the map $A\to Rj_\ast j^\ast A$ being an isomorphism, where $Rj_\ast j^\ast A$ is again constructible. It is clearly an isomorphism in the generic fibre, so one has to check that it is an isomorphism in the special fibre. Checking stalkwise, this is exactly the condition (iv).
The final sentence comes from Proposition~\ref{prop:ULAfinitaryarc}.
\end{proof}
We note the following corollary that we will use in the next section; it states that invariance of cohomology under change of algebraically closed base field holds in fact more generally for change of absolutely integrally closed valuation rings.
\begin{corollary}[{\cite[Corollary 4.2.7]{HuberBook}}]\label{cor:invarianceofcohomology} Let $V\to V'$ be a faithfully flat map of absolutely integrally closed valuation rings and let $X$ be a scheme of finite type over $V$, with base change $X'$ over $V'$. Let $A\in D_\mathrm{\acute{e}t}(X,\Lambda)$ in setting (A). Then the map
\[
R\Gamma(X,A)\to R\Gamma(X',A|_{X'})
\]
is an isomorphism.
In case $A\in D_\mathrm{\acute{e}t}^+(X,\Lambda)$, the same statement holds for any scheme $X$ over $V$, not necessarily of finite type.
\end{corollary}
\begin{proof} We can assume $\Lambda=\mathbb Z/\ell^n\mathbb Z$, and we can assume that $X$ is affine, and of finite presentation (by choosing a closed immersion into an affine space and extending $A$ by zero). As $X$ has finite $\ell$-cohomological dimension by Lemma~\ref{lem:finitecohomdim}, we can reduce to $A$ being constructible. By approximation, we can assume that $V$ is of finite rank. Arguing by induction on the rank of $V$, we can use Theorem~\ref{thm:nearbycycles} and the triangle $A\to Rj_\ast j^\ast A\to A'$ to reduce to the case that $A=Rj_\ast j^\ast A$ is universally locally acyclic (as $A'$ is supported over a proper closed subset of $\mathrm{Spec} V$, and we can apply the induction hypothesis). In that case $R\Gamma(X,A) = R\Gamma(X_K,A|_{X_K})$ where $K$ is the fraction field of $V$, and similarly for $V'$. This reduces us to the case where $V$ and $V'$ are algebraically closed fields, and the result is the classical result on invariance of cohomology under change of algebraically closed base field.
For the final sentence, we can reduce to $A$ sitting in a single degree and $\Lambda=\mathbb Z/\ell^n\mathbb Z$, and then again to constructible sheaves. Moreover, one can assume $X$ is affine. Now the result follows by writing $X$ as a cofiltered limit of affine schemes of finite type, approximating the constructible sheaf, and using that \'etale cohomology takes this cofiltered limit of schemes to a filtered colimit.
\end{proof}
\section{Universally submersive descent}
The results of this section are due to Gabber \cite{GabberLetterMathew}.
\begin{definition}\label{def:submersion} A qcqs map $f: Y\to X$ of schemes is a submersion if the map $|Y|\to |X|$ is a quotient map. The map $f: Y\to X$ is a universal submersion if any base change of $f$ is a submersion.
\end{definition}
\begin{proposition}\label{prop:submersion} A qcqs map $f: Y\to X$ is a universal submersion if and only if for all valuation rings $V$ with fraction field $K\supsetneq V$, and a map $\mathrm{Spec} V\to X$, the map $Y_K\to Y_V$ is not a closed immersion.
In particular, a universal submersion is an arc-cover.
\end{proposition}
\begin{proof} Assume that $f$ is a universal submersion. To check the condition, we may assume that $X=\mathrm{Spec} V$. Assume that $Y_K\to Y_V$ were a closed immersion. Then the preimage of $\mathrm{Spec} K\subset \mathrm{Spec} V$ is the closed subset $Y_K\subset Y_V$, so by the assumption that $f$ is a universal submersion, also $\mathrm{Spec} K\subset \mathrm{Spec} V$ is closed. But the closure of the generic point is all of $\mathrm{Spec} V$, which is a contradiction as $V\neq K$.
In the converse direction, as the condition is stable under base change, it suffices to show that $f$ is a submersion. Applying the condition to rank $1$ valuation rings $V=k[[t]]$ for points $\mathrm{Spec} k\to X$, one sees that $f$ must be surjective on points: if the fibre over such a point were empty, then $Y_{k((t))}\to Y_{k[[t]]}$ would be the closed immersion $\emptyset\to \emptyset$. Let $A\subset X$ be a subset whose preimage $B\subset Y$ is closed. Then in particular $A=f(B)\subset X$ is pro-constructible. To show that $A$ is closed, it suffices to show that it is closed under specializations. This reduces us to the case that $X=\mathrm{Spec} V$ is the spectrum of a valuation ring, and we may assume that the generic point lies in $A$. As $A$ is pro-constructible, it is itself spectral, and hence has a closed point $\xi$. Replacing $X$ by the closure of $\xi$, we can assume that the generic point of $X$ is a closed point of $A$. This actually means $A=\mathrm{Spec} K$ is just the generic point, so $B=Y_K\subset Y_V$ is closed, contradicting the assumption.
\end{proof}
\begin{example}[An arc-cover that is not a universal submersion]\label{ex:arcnotsubmersion} We give an example of an arc-cover that is not a universal submersion, showing that universal submersions are strictly between arc-covers and v-covers. Let $K=k((t_1))((t_2))\ldots((t_n))\ldots$, a Laurent series ring in infinitely many variables, with its natural $\mathbb Z^{\mathbb N}$-valued valuation (with the lexicographic ordering), and let $V\subset K$ be its valuation ring. Then $X:=\mathrm{Spec} V = \{s_0,s_1,\ldots,s_n,\ldots,\eta\}$ has a generic point $\eta$, and $s_n$ specializes to $s_m$ if and only if $n\geq m$. Each specialization from $s_{n+1}$ to $s_n$ is covered by the rank $1$ valuation ring $V_n=k((t_1))\ldots((t_{n-1}))[[t_n]]$, so letting $Y=\mathrm{Spec}(\prod_{n\geq 0} V_n)$, the map $Y\to X$ is an arc-cover. (Note that there are no rank $1$ specializations from $\eta$ to any $s_n$, and that $\eta$ lies in the image of $Y$, as the image is pro-constructible.) Note that there is a natural map from $Y$ to $\beta\mathbb N$, the Stone-\v{C}ech compactification of $\mathbb N$ -- this is always true for $\mathrm{Spec}(\prod_{n\geq 0} R_n)$ for rings $R_n$. Now $\mathbb N\subset \beta \mathbb N$ is open, and its preimage in $Y$ is $\bigsqcup_{n\geq 0} \mathrm{Spec} V_n$. This is actually also the preimage of $\mathrm{Spec} V\setminus \{\eta\}\subset \mathrm{Spec} V$: Indeed, under the composite map $Y\to X\to \{s_0,\ldots,s_m\}$ (collapsing all $s_n$, $n\geq m$, and $\eta$ to $s_m$), all of $\mathrm{Spec}(\prod_{n\geq m} V_n)$ maps to $s_m$, so the intersection of these subsets, which is exactly the preimage of $\beta\mathbb N\setminus \mathbb N$, maps to $\eta$.
Thus, we see that in this example the preimage of $\eta=\mathrm{Spec} K\subset X=\mathrm{Spec} V$ in $Y$ is closed, so $Y\to X$ is not a (universal) submersion.
\end{example}
\begin{theorem}\label{thm:etalesheavesdescend} Sending a qcqs scheme $X$ to the category of sheaves on $X_\mathrm{\acute{e}t}$ defines a stack of categories with respect to universal submersions, in particular a v-stack.
In particular, sending a qcqs scheme $X$ to the category of separated \'etale maps $Y\to X$ defines a stack of categories with respect to universal submersions, in particular a v-stack.
\end{theorem}
As the proof shows, the full faithfulness part actually holds in the arc-topology.
\begin{proof} The second part is a consequence of the first: Indeed, then any descent datum for a separated \'etale map gives by descent some sheaf on $X_\mathrm{\acute{e}t}$, which is necessarily representable by an algebraic space \'etale over $X$, and by descent separated. By \cite[Tags 0417, 03XU]{StacksProject}, it is automatically representable by a scheme over $X$. Thus, we can concentrate on the first part.
First, we prove full faithfulness, so take two \'etale sheaves $\mathcal F$, $\mathcal G$ on $X$. This part will actually work in the arc-topology. We want to show that any morphism $f: \mathcal F|_Y\to \mathcal G|_Y$ whose two pullbacks to $Y\times_X Y$ agree descends uniquely to $X$. As the category of \'etale sheaves is generated under colimits by representable sheaves, we can reduce to the case that $\mathcal F$ is representable by some \'etale $X$-scheme $X_i$. Replacing $X$ by $X_i$, we can assume that $\mathcal F=\ast$ is a point, in which case what we have to show is that any \'etale sheaf actually defines an arc-sheaf. Let $\mathcal G'$ be the \'etale sheaf on $X$, taking any \'etale $X'\to X$ to the sections in $\mathcal G(Y\times_X X')$ invariant under the descent datum, so we get a map $\mathcal G\to \mathcal G'$ of \'etale sheaves on $X$, that we need to prove is an isomorphism. It suffices to prove that it is an isomorphism after passing to stalks, so we can assume that $X$ is strictly henselian, and reduce to checking that it is an isomorphism on global sections. It is easy to see that the map $\mathcal G(X)\to \mathcal G(Y)$ is injective (for example, by pulling back to the closed point), so we need to see that any section $s\in \mathcal G(Y)$ invariant under the descent datum descends to $X$. Pulling back to the closed point of $X$, where we get an fpqc cover of a field, hence an ind-fppf cover (along which \'etale sheaves descend), we find a unique section $s_0\in \mathcal G(X)$ whose pullback to $Y$ agrees with $s$ after pullback to the closed point of $X$. We need to see that $s$ is the pullback of $s_0$. This can be checked after pullback to geometric points of $X$; connecting these to the special point of $X$, we can assume that $X=\mathrm{Spec} V$ is the spectrum of a valuation ring (with algebraically closed fraction field $K$).
As $\mathcal G$ takes cofiltered limits of affine schemes to filtered colimits, we can assume that $Y\to X$ is of finite presentation (and an arc-cover), in which case $Y\to X$ splits after pullback to a finite chain of locally closed $\mathrm{Spec} V_i\subset \mathrm{Spec} V$, connecting the special point to the generic point. This finishes the argument.
Now let $\mathcal G$ be any \'etale sheaf on $Y$ with a descent datum to $X$. Sending any $X$-scheme $X'$ to the sections in $\mathcal G(X'\times_X Y)$ invariant under the descent datum defines an arc-sheaf $\mathcal F$ on $X$ whose pullback to $Y_{\mathrm{arc}}$ is the pullback of $\mathcal G$ on $Y_\mathrm{\acute{e}t}$. We need to see that $\mathcal F$ comes via pullback from its restriction to the \'etale site of $X$. Lemma~\ref{lem:recognizeetalesheaf} gives an equivalent criterion: Commutation with filtered colimits, invariance under change of separably closed base field, and invariance under passing from a strictly henselian ring to its closed point. The commutation with filtered colimits follows via descent from the same property of the pullback of $\mathcal G$ to $Y_{\mathrm{arc}}$. For the invariance under change of separably closed base field, we can assume that $X$ is a geometric point. In that case, we can assume that also $Y$ is a geometric point, in which case $\mathcal G$ is merely a set, and as $Y\times_X Y$ is connected, there are no nontrivial descent data, so the descent is trivial.
Now we check the injectivity in part (iii) of Lemma~\ref{lem:recognizeetalesheaf}, so we can now assume that $X$ is strictly henselian with closed point $x$. Take any $s,t\in \mathcal F(X)$. The locus where $s=t$ defines an arc-subsingleton sheaf, and after pullback to $Y$ it is representable by an open subset of $Y$. As $Y\to X$ is a submersion (and an arc-cover), this implies that it is representable by an open subset of $X$. If $s=t$ over $x$, then this open subset must be all of $X$, hence $s=t$, giving the injectivity.
It remains to prove surjectivity, and for this we may assume that $X=\mathrm{Spec} V$ is the spectrum of an absolutely integrally closed valuation ring. Pick any $s\in \mathcal F(x)$ and assume that $s$ does not lift to $\mathcal F(X)$. By Zorn's lemma (and the commutation with filtered colimits), we can assume that $s$ does lift to all proper closed subsets $Z\subset X$. But we know that $Y_K\subset Y_V$ is not a closed immersion, so we can find an absolutely integrally closed valuation ring $W$ with a map $\mathrm{Spec} W\to Y$ whose generic point maps to the generic point of $X=\mathrm{Spec} V$, but whose special point does not map to the generic point of $X$. By arc-descent on $X$, we may replace $X$ by the image of $\mathrm{Spec} W\to \mathrm{Spec} V$. So we can assume that $Y=\mathrm{Spec} W$, where $V\to W$ is a faithfully flat extension of absolutely integrally closed valuation rings. In particular, $s$ extends uniquely to a section over $\mathrm{Spec} W$, and its two pullbacks to $\mathrm{Spec} W\times_{\mathrm{Spec} V}\mathrm{Spec} W$ agree as in fact $\mathcal F(\mathrm{Spec} W\times_{\mathrm{Spec} V}\mathrm{Spec} W)\cong \mathcal F(\mathrm{Spec} W)$ by Lemma~\ref{lem:invarianceaicvaluationring} below (and thus in turn agrees with the sections over the closed point).
\end{proof}
\begin{lemma}\label{lem:recognizeetalesheaf} Let $X$ be a qcqs scheme, and let $\mathcal F$ be an arc-sheaf over $X$. Then $\mathcal F$ comes via pullback from a sheaf on $X_\mathrm{\acute{e}t}$ if and only if the following conditions are satisfied.
\begin{enumerate}
\item[{\rm (i)}] The arc-sheaf $\mathcal F$ is finitary, i.e.~for any cofiltered system $X_i=\mathrm{Spec} A_i$ of affine $X$-schemes with limit $X_\infty=\mathrm{Spec} A$, the map
\[
\varinjlim_i \mathcal F(X_i)\to \mathcal F(X_\infty)
\]
is a bijection.
\item[{\rm (ii)}] For any map $\mathrm{Spec} K'\to \mathrm{Spec} K$ of geometric points over $X$, the map
\[
\mathcal F(\mathrm{Spec} K)\to \mathcal F(\mathrm{Spec} K')
\]
is a bijection.
\item[{\rm (iii)}] For any strictly henselian $X$-scheme $Z$ with closed point $z$, the map $\mathcal F(Z)\to \mathcal F(z)$ is a bijection.
\end{enumerate}
Moreover, it suffices to verify (iii) in the restricted case where $Z$ is the spectrum of an absolutely integrally closed valuation ring.
\end{lemma}
We note that arc-sheaves are automatically invariant under universal homeomorphisms, in particular the difference between separably closed fields and algebraically closed fields is not relevant here.
\begin{proof} Clearly the conditions are necessary. Conversely, let $\mathcal F'$ be the pushforward of $\mathcal F$ to the \'etale site; we have to see that for all $g: Y\to X$, the map $g^\ast \mathcal F'\to \mathcal F|_{Y_\mathrm{\acute{e}t}}$ is an isomorphism. This can be checked on stalks, so using (i) we can assume that $Y$ is strictly henselian. Let $\overline{y}$ be the closed point of $Y$, mapping to a geometric point $\overline{x}$ of $X$. Then
\[(g^\ast \mathcal F')_{\overline{y}} = \mathcal F'_{\overline{x}}= \mathcal F(X_{\overline{x}})\cong \mathcal F(\overline{x})\cong \mathcal F(\overline{y})\cong \mathcal F(Y),
\]
using (iii) for $X_{\overline{x}}$, (ii), and (iii) for $Y$, respectively. This gives the first part.
Next, assume we know only (i), (ii), the injectivity in (iii), and surjectivity in (iii) when restricted to absolutely integrally closed valuation rings. Take any strictly henselian $X$-scheme $(Z,z)$; we want to show that $\mathcal F(Z)\to \mathcal F(z)$ is surjective. Fix some section $s\in \mathcal F(z)$ and assume that $s$ does not lift to $\mathcal F(Z)$. We can replace $X$ by $Z$ and assume that $X$ is strictly henselian. Consider the partially ordered set of all closed subschemes $Z\subset X$ (necessarily strictly henselian) such that $s$ does not lift to $\mathcal F(Z)$. Using (i), we see that we can apply Zorn's lemma and find a minimal such $Z$. Then $Z$ is irreducible: otherwise $Z=Z_1\cup Z_2$ is a union of two proper closed subschemes to which $s$ lifts, and then $s$ lifts to $Z$, as $Z_1\sqcup Z_2\to Z$ is an arc-cover and the lifts $s_1$ (of $s$ to $Z_1$) and $s_2$ (of $s$ to $Z_2$) agree over $Z_1\cap Z_2$ by the injectivity already proved. Replacing $X$ by $Z$, we can assume that $X$ is the spectrum of a strictly henselian domain.
Similarly, if $X'\to X$ is finite, then necessarily $X'$ is a finite disjoint union of strictly henselian schemes whose closed points lie over the closed point $\overline{x}$ of $X$, and in particular we get an injection $\mathcal F(X')\hookrightarrow \mathcal F(X'\times_X \overline{x})$. Applying this observation to $X'\times_X X'$ in case $X'\to X$ is surjective, we see that it suffices to show that $s|_{X'\times_X \overline{x}}$ extends to $X'$. Passing to a limit again, we can assume that $X$ is the spectrum of an absolutely integrally closed local domain.
Now let $g: X'\to X$ be a blowup of $X$. Assume that for all geometric points $\overline{x'}$ of $X'\times_X \overline{x}$, the section $s|_{\overline{x'}}$ extends to $X'_{\overline{x'}}$. Then $s$ extends to a global section of the pullback of $\mathcal F|_{X'_\mathrm{\acute{e}t}}$ to $(X'\times_X \overline{x})_\mathrm{\acute{e}t}$. By proper base change \cite[Tag 0A0C]{StacksProject}, this gives a unique section $s'\in \mathcal F(X')$ lifting $s$. Applying a similar argument to $X'\times_X X'$ and using that $X'\to X$ is an arc-cover then shows that $s'$ descends to a section of $\mathcal F(X)$.
Note that the locus of geometric points $\overline{x'}$ of $X'\times_X \overline{x}$ where $s|_{\overline{x'}}$ extends to $X'_{\overline{x'}}$ defines an open subset of $X'\times_X \overline{x}$ (using again condition (i)), so for each blowup we get a nonempty closed subset of $X'\times_X \overline{x}$ where the section $s$ does not lift. By Tychonoff, the inverse limit of these closed subsets, taken over all blowups $X'$ of $X$, is still nonempty. Picking a point in this inverse limit then defines a local ring which is an absolutely integrally closed valuation ring, over which $s$ still does not extend. This contradicts our assumption that (iii) holds for spectra of absolutely integrally closed valuation rings, giving surjectivity in (iii) in general.
Finally, assume we know only (i), (ii), and the bijectivity in (iii) when restricted to absolutely integrally closed valuation rings. By what we have already proved, we need to see that this gives injectivity in (iii) in general. Suppose given $Z$ strictly henselian and two sections $s,t \in \mathcal{F}(Z)$. The locus where $s = t$ defines an arc subsingleton sheaf. As it is a subsingleton sheaf, the injectivity in (iii) is automatically satisfied, so by what we have proved so far, this locus defines an \'etale subsingleton sheaf over the strictly henselian scheme $Z$. Thus, if the locus contains the closed point, it must be everything, giving the injectivity of $\mathcal F(Z)\to \mathcal F(z)$.
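Concretely, writing $\mathcal E$ for this locus, it is the subsingleton arc-sheaf over $Z$ given by
\[
\mathcal E(T) = \begin{cases} \{\ast\} & \text{if } s|_T = t|_T,\\ \emptyset & \text{otherwise,}\end{cases}
\]
for $T\to Z$; this is indeed an arc-sheaf, as equality of two sections of $\mathcal F$ can be checked on an arc-cover. For $\mathcal E$, the injectivity in (iii) holds for trivial reasons, which is why the surjectivity criterion established above applies to it.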
\end{proof}
The proof of the following lemma makes a somewhat strange reduction from sheaves of sets to sheaves of abelian groups killed by some integer invertible on the scheme.
\begin{lemma}\label{lem:invarianceaicvaluationring} Let $V\to W$ be a faithfully flat extension of absolutely integrally closed valuation rings. Let $X$ be a scheme over $\mathrm{Spec} V$ with base change $f: X\times_{\mathrm{Spec} V} \mathrm{Spec} W\to X$. Let $\mathcal F$ be an \'etale sheaf (of sets) on $X$. Then the map
\[
\mathcal F(X)\to (f^\ast \mathcal F)(X\times_{\mathrm{Spec} V} \mathrm{Spec} W)
\]
is a bijection.
\end{lemma}
\begin{proof} First note that the map is injective, as $X\times_{\mathrm{Spec} V}\mathrm{Spec} W\to X$ is faithfully flat (in particular, an arc-cover), so one has to prove surjectivity. By faithfully flat descent, it is equivalent to show that any section of $(f^\ast \mathcal F)(X\times_{\mathrm{Spec} V}\mathrm{Spec} W)$ is invariant under the descent datum. This can be checked after embedding $\mathcal F$ into the free sheaf of $\mathbb F_\ell$-modules $\mathbb F_\ell[\mathcal F]$ on $\mathcal F$, for some chosen prime $\ell$ invertible in $V$. Thus, we can assume that $\mathcal F$ is an abelian torsion sheaf, killed by some prime $\ell$ invertible in $V$. Now the result follows from a theorem of Huber \cite[Corollary 4.2.7]{HuberBook}, which we have reproved in the previous section as Corollary~\ref{cor:invarianceofcohomology}.
\end{proof}
Combining this with the arc-descent results of Bhatt--Mathew \cite{BhattMathew}, we obtain the following result. Here we denote by $\mathcal D^+_{\mathrm{tor}}(S_\mathrm{\acute{e}t})$ the left-bounded derived $\infty$-category of torsion abelian sheaves on $S_\mathrm{\acute{e}t}$.
\begin{theorem}\label{thm:univsubmersivedescent} The association taking any qcqs scheme $S$ to $\mathcal D^+_{\mathrm{tor}}(S_\mathrm{\acute{e}t})$ defines a sheaf of $\infty$-categories for the topology of universal submersions.
\end{theorem}
One can also formally deduce an unbounded variant by passing to left-completions. In particular, $S\mapsto \mathcal D_\mathrm{\acute{e}t}(S,\Lambda)$ in setting (A) defines a sheaf of $\infty$-categories for the topology of universal submersions.
\begin{proof} To prove full faithfulness, we need to see that for any $A\in \mathcal D^+_{\mathrm{tor}}(S_\mathrm{\acute{e}t})$, the functor $T/S\mapsto R\Gamma(T,A|_T)$ defines a sheaf for the topology of universal submersions. In fact, it defines an arc-sheaf, by \cite[Theorem 5.4]{BhattMathew}. For effectivity of descent data, one can then reduce to the case that $A$ is concentrated in degree $0$. By Theorem~\ref{thm:etalesheavesdescend}, it descends as a sheaf of sets, but the group structure descends as well, and the resulting abelian sheaf is necessarily torsion.
\end{proof}
\section{Relative perversity}
Finally, we can prove our results on relative perversity. Recall the statement of our main theorem:
\begin{theorem}\label{thm:maintext} Let $f: X\to S$ be a finitely presented map of qcqs $\mathbb Z[\tfrac 1\ell]$-schemes. Consider any of the settings (A), (B) and (C). In case (B), assume moreover that $\Lambda$ is regular (in the weak sense that any truncation of a perfect complex is still perfect). In case (C), assume that any constructible subset of $S$ has finitely many irreducible components.
There is a $t$-structure $({}^{p/S}D^{\leq 0},{}^{p/S} D^{\geq 0})$ on $D(X)$, called the relative perverse $t$-structure, with the following properties.
\begin{enumerate}
\item[{\rm (i)}] An object $A\in D(X)$ lies in ${}^{p/S}D^{\leq 0}$ (resp.~${}^{p/S}D^{\geq 0}$) if and only if for all geometric points $\overline{s}\to S$ with fibre $X_{\overline{s}} = X\times_S \overline{s}$, the restriction $A|_{X_{\overline{s}}}\in D(X_{\overline{s}})$ lies in ${}^p D^{\leq 0}$ (resp.~${}^p D^{\geq 0}$), for the usual (absolute) perverse $t$-structure.
\item[{\rm (ii)}] For any map $S'\to S$ of schemes (with $S'$ satisfying the same condition as $S$, in case (C)) with pullback $X'=X\times_S S'\to X$, the pullback functor $D(X)\to D(X')$ is $t$-exact with respect to the relative perverse $t$-structures.
\item[{\rm (iii)}] In case (A), the full sub-$\infty$-categories ${}^{p/S} \mathcal D^{\leq 0},{}^{p/S} \mathcal D^{\geq 0}\subset \mathcal D(X)$ are stable under all filtered colimits.
\end{enumerate}
\end{theorem}
\begin{remark}\label{rem:hypotheses} The hypothesis in case (B) is clearly necessary, already when $X=S=\mathrm{Spec} K$ is a geometric point. The hypothesis in case (C) is not quite optimal, but we note that it is definitely necessary to assume that all constructible subsets of $S$ have only finitely many \emph{connected} components. Indeed, take $X=S$, in which case we want a naive $t$-structure on $D_\mathrm{cons}(S,\mathcal O_L)$ or $D_\mathrm{cons}(S,L)$. Assume that $S$ has some constructible subset with infinitely many connected components. Replacing $S$ by this constructible subset, we can assume that there is a surjective map
\[
S\to \{0,1,2,\ldots,\infty\}.
\]
Now one can look at the dualizable complex
\[
[\mathbb Z_\ell\to \mathbb Z_\ell]
\]
where the map multiplies by $\ell^n$ in the fibre of $S$ over $n$ (where $\ell^\infty=0$). It is easy to see that the truncations of this complex are not constructible (in particular, the kernel of this complex is trivial except in the fibre of $S$ over $\infty$).
One can show that when $S$ is purely of characteristic $0$, this weaker condition is in fact sufficient. To show this, one argues as in the proof below, but using the strengthening of Theorem~\ref{thm:perverseULAmaintext}~(i) that says that when $S$ is of characteristic $0$, one needs to assume only that $S$ is connected, in which case one can also replace $\eta$ by any point of $S$. The key point here is that if $V$ is a valuation ring of equal characteristic $0$ with algebraically closed fraction field and $A$ is universally locally acyclic on $X/S$, then if the special fibre of $A$ vanishes, all of $A$ vanishes. In turn, the key here is that (when $X$ is normal) at generic points of the special fibre of $X$, the local ring is itself a valuation ring with algebraically closed fraction field.
On the other hand, when $S$ contains points of positive characteristic, this strengthening of Theorem~\ref{thm:perverseULAmaintext}~(i) fails, and in fact one can find nonzero universally locally acyclic perverse sheaves that vanish in some closed fibres, using Artin--Schreier covers. (We thank Haoyu Hu and Enlin Yang for showing us an explicit example of such a sheaf.) Combining such constructions with the above counterexample can be used to show that having only finitely many irreducible components is essentially necessary.
\end{remark}
We will freely use in the proof that such $t$-structures exist in case $S=\mathrm{Spec} K$ is the spectrum of an algebraically closed field $K$. We advise the reader to read only the proofs in settings (A) and (B) on first reading; in fact, this is necessary to avoid vicious circles.
\begin{proof} Parts (ii) and (iii) are formal consequences of (i). For part (i), assume first that we are in setting (A) or (B). In setting (A), we can formally define a $t$-structure on $\mathcal D_\mathrm{\acute{e}t}(X,\Lambda)$ by taking the connective part ${}^{p/S} \mathcal D^{\leq 0}$ to consist of all $A\in \mathcal D_\mathrm{\acute{e}t}(X,\Lambda)$ such that for all geometric points $\overline{s}\to S$ the restriction $A|_{X_{\overline{s}}}$ lies in ${}^p \mathcal D^{\leq 0}(X_{\overline{s}},\Lambda)$, by applying \cite[Proposition 1.4.4.11]{LurieHA}. We have to show that the corresponding coconnective part has the stated characterization, and that it induces a $t$-structure on the constructible objects in case $\Lambda$ is regular.
We start by analyzing the case where $S=\mathrm{Spec} V$ is the spectrum of an absolutely integrally closed valuation ring $V$ of rank $1$. Let $j: X_\eta\subset X$ and $i: X_s\subset X$ be the open and closed immersion of generic and special fibre. Then it follows formally from the definition of the $t$-structure that $A\in {}^{p/S} \mathcal D^{\geq 0}$ if and only if $A|_{X_\eta}\in {}^p \mathcal D^{\geq 0}(X_\eta,\Lambda)$ and $Ri^! A\in {}^p \mathcal D^{\geq 0}(X_s,\Lambda)$. We have to see that these conditions are equivalent to the two conditions $A|_{X_\eta}\in {}^p \mathcal D^{\geq 0}_\mathrm{\acute{e}t}(X_\eta,\Lambda)$ and $i^\ast A\in {}^p \mathcal D^{\geq 0}(X_s,\Lambda)$. Thus, assume $A|_{X_\eta}\in {}^p \mathcal D^{\geq 0}_\mathrm{\acute{e}t}(X_\eta,\Lambda)$. Then we have a triangle
\[
Ri^! A\to i^\ast A\to i^\ast Rj_\ast(A|_{X_\eta})
\]
in $\mathcal D_\mathrm{\acute{e}t}(X_s,\Lambda)$. Thus, it suffices to show that $i^\ast Rj_\ast(A|_{X_\eta})\in {}^p \mathcal D^{\geq 0}_\mathrm{\acute{e}t}(X_s,\Lambda)$. This follows from the perverse $t$-exactness of nearby cycles, Lemma~\ref{lem:nearbycyclestexact} below.
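In more detail, the triangle gives a long exact sequence of perverse cohomology sheaves on $X_s$,
\[
\cdots\to {}^p\mathcal H^{k-1}\big(i^\ast Rj_\ast(A|_{X_\eta})\big)\to {}^p\mathcal H^{k}(Ri^! A)\to {}^p\mathcal H^{k}(i^\ast A)\to {}^p\mathcal H^{k}\big(i^\ast Rj_\ast(A|_{X_\eta})\big)\to\cdots,
\]
and once $i^\ast Rj_\ast(A|_{X_\eta})\in {}^p\mathcal D^{\geq 0}$, the outer terms vanish for all $k\leq -1$, so ${}^p\mathcal H^{k}(Ri^! A)\cong {}^p\mathcal H^{k}(i^\ast A)$ in this range. Thus $Ri^! A\in {}^p\mathcal D^{\geq 0}$ if and only if $i^\ast A\in {}^p\mathcal D^{\geq 0}$, as desired.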
In the case $S=\mathrm{Spec} V$ is the spectrum of an absolutely integrally closed valuation ring $V$ of rank $1$, it remains to show that relative perverse truncation preserves constructible objects in case $\Lambda$ is regular. But constructibility can be checked fibrewise on $S$, and relative perverse truncation commutes with passing to fibres by what we have already established. Thus, the claim reduces to the geometric fibres, where it is standard.
Next, we show that there is a relative perverse $t$-structure on the full $\infty$-subcategory $\mathcal D_{\mathrm{cons},\mathrm{tor}}(X,\mathbb Z_\ell)\subset \mathcal D_\mathrm{cons}(X,\mathbb Z_\ell)$ of torsion constructible $\mathbb Z_\ell$-complexes. We observe that, as the desired $t$-structure automatically behaves well with respect to base change in $S$, it suffices to construct it locally on $S$, as long as the $\infty$-categories satisfy descent in $S$. By Theorem~\ref{thm:arcdescentD}, this is the case for arc-covers. In particular, using v-descent we can reduce to the case that all connected components of $S$ are spectra of absolutely integrally closed valuation rings.
Assume first that $S$ is connected, and hence the spectrum of an absolutely integrally closed valuation ring $V$. In that case, by approximation, we can reduce to the case that $V$ is of finite rank, and then by arc-descent to the case that $V$ is of rank $1$. We have already handled this case, noting that here the $\mathrm{Ind}$-category of $\mathcal D_{\mathrm{cons},\mathrm{tor}}(X,\mathbb Z_\ell)$ can be identified with the torsion objects in $\mathcal D(X_\mathrm{\acute{e}t},\mathbb Z_\ell)$, to which the arguments above apply. In general, observe first that if $A,B\in \mathcal D_{\mathrm{cons},\mathrm{tor}}(X,\mathbb Z_\ell)$ are such that all geometric fibres of $A$ are in ${}^p \mathcal D^{\leq 0}$ and all geometric fibres of $B$ are in ${}^p \mathcal D^{\geq 1}$, then $\mathrm{Hom}(A,B)=0$. Indeed, take any map $f: A\to B$. To see that $f=0$, it suffices to show that $f$ vanishes after pullback to all connected components of $S$; but there it follows from the $t$-structure constructed above. Thus, to show that these subcategories define a $t$-structure, it suffices to construct the truncations of any $A\in \mathcal D_{\mathrm{cons},\mathrm{tor}}(X,\mathbb Z_\ell)$. Fix some $c\in \pi_0 S$, giving rise to a connected component $S_c\subset S$. Using the relative perverse $t$-structure on $X_c=X\times_S S_c\to S_c$, we can find a triangle
\[
{}^{p/S_c}\tau^{\leq 0} A_c\to A_c\to {}^{p/S_c}\tau^{\geq 1} A_c
\]
where $A_c=A|_{X_c}$. As everything is constructible, this triangle extends to a similar triangle over an open and closed neighborhood $S'\subset S$ of $S_c$. By Lemma~\ref{lem:perverseamplitude} below, the resulting triangle still reduces to the relative perverse truncation in all fibres, after possibly shrinking $S'$. Thus, the desired truncation functors can be defined on $A$ at least locally on $S$, but then by uniqueness also globally. This finishes the proof of the existence of the $t$-structure on $\mathcal D_{\mathrm{cons},\mathrm{tor}}(X,\mathbb Z_\ell)$.
In particular, passing to $\mathrm{Ind}$-categories when all connected components of $S$ are absolutely integrally closed valuation rings, we get a $t$-structure on the full $\infty$-subcategory of torsion objects in $\mathcal D_\mathrm{\acute{e}t}(X,\mathbb Z_\ell)$, and then by passing to $\Lambda$-modules and v-descent we get the $t$-structure in setting (A). In setting (B), it remains to prove that the perverse truncations preserve $\mathcal D_\mathrm{cons}(X,\Lambda)$. For this, we can again assume that all connected components of $S$ are absolutely integrally closed valuation rings. Using Lemma~\ref{lem:perverseamplitude}, we can reduce to the connected components. By approximation, we can then also assume that these are of finite rank. In that case, constructibility can be checked on geometric fibres; thus, the claim reduces to the case where $S$ is a geometric point, where the result is standard.
In setting (C), we can formally reduce to the case that $L$ is a finite extension of $\mathbb Q_\ell$, and the case of rational coefficients follows formally from the case of integral coefficients by inverting $\ell$. Now we first show that if $A,B\in \mathcal D_\mathrm{cons}(X,\mathcal O_L)$ have the property that all geometric fibres $A|_{X_{\overline{s}}}\in {}^p \mathcal D^{\leq 0}_\mathrm{cons}(X_{\overline{s}},\mathcal O_L)$ (resp.~$B|_{X_{\overline{s}}}\in {}^p \mathcal D^{\geq 1}_\mathrm{cons}(X_{\overline{s}},\mathcal O_L)$), then $\mathrm{Hom}(A,B)=0$. To see this, write $B$ as the derived limit of the reductions $B_n=B/^{\mathbb L} \ell^n$. Then $B_n$ lies in the corresponding category of type (B), and lies in ${}^{p/S} \mathcal D^{\geq 0}_{\mathrm{cons},\mathrm{tor}}(X,\mathcal O_L)$. We claim that the system ${}^{p/S}\mathcal H^0(B_n)$ of relatively perverse sheaves on $X/S$ is pro-zero. More precisely, fix a constructible stratification of $S$ over which $B$ becomes universally locally acyclic, and let $\overline{s}_1,\ldots,\overline{s}_r$ be the geometric generic points of the strata of $S$ (of which there are only finitely many by assumption). Choose some $N$ such that $\ell^N$ kills the torsion part of ${}^p \mathcal H^1(B|_{X_{\overline{s}_i}})\in \mathrm{Perv}(X_{\overline{s}_i})$ for all $i=1,\ldots,r$. Then we claim that the transition map ${}^{p/S}\mathcal H^0(B_{N+n})\to {}^{p/S}\mathcal H^0(B_n)$ is zero for all $n$. This can be checked over the stratification, and then over the closure of each irreducible component, and then by Theorem~\ref{thm:perverseULAmaintext}~(i), it can be checked in the geometric fibres $X_{\overline{s}_i}$ for $i=1,\ldots,r$, where it follows from our choice of $N$.
Thus, we see that
\[
\mathrm{Hom}(A,B) = \varprojlim_n \mathrm{Hom}(A,B_n) = \varprojlim_n \mathrm{Hom}(A,{}^{p/S}\mathcal H^0(B_n))=0,
\]
as desired. It remains to see that any $A\in \mathcal D_\mathrm{cons}(X,\mathcal O_L)$ admits a triangle
\[
{}^{p/S} \tau^{\leq 0}A\to A\to {}^{p/S} \tau^{\geq 1} A
\]
where the first term is fibrewise in ${}^p \mathcal D^{\leq 0}$, and the last term is fibrewise in ${}^p \mathcal D^{\geq 1}$. This can be obtained from the similar triangle for $A_n=A/^{\mathbb L} \ell^n$ by passing to an inverse limit, using a similar argument as above for controlling $\ell$-power torsion.
\end{proof}
The following lemmas were used in the proof.
\begin{lemma}\label{lem:nearbycyclestexact} Let $S=\mathrm{Spec} V$ be the spectrum of an absolutely integrally closed valuation ring $V$ of rank $1$, and let $X$ be a finite type $S$-scheme. Let $j: X_\eta\subset X$ and $i: X_s\subset X$ be the open and closed immersion of generic and special fibre. Then for any torsion $\mathbb Z_\ell$-algebra $\Lambda$, the nearby cycles functor
\[
R\psi = i^\ast Rj_\ast: \mathcal D_\mathrm{\acute{e}t}(X_\eta,\Lambda)\to \mathcal D_\mathrm{\acute{e}t}(X_s,\Lambda)
\]
is $t$-exact with respect to the absolute perverse $t$-structures on source and target.
\end{lemma}
This is the key fact about the usual perverse $t$-structure that we use.
\begin{proof} Forgetting the $\Lambda$-module structure, we can reduce to $\mathcal D_{\mathrm{cons},\mathrm{tor}}(-,\mathbb Z_\ell)$. As $R\psi$ commutes with Verdier duality and Verdier duality exchanges ${}^p \mathcal D^{\leq 0}_{\mathrm{cons},\mathrm{tor}}(-,\mathbb Z_\ell)$ and ${}^p \mathcal D^{\geq 1}_{\mathrm{cons},\mathrm{tor}}(-,\mathbb Z_\ell)$, it suffices to show that $R\psi$ takes ${}^p \mathcal D^{\leq 0}_{\mathrm{cons},\mathrm{tor}}(X_\eta,\mathbb Z_\ell)$ into ${}^p \mathcal D^{\leq 0}_{\mathrm{cons},\mathrm{tor}}(X_s,\mathbb Z_\ell)$. But this follows from Artin vanishing and \cite[R\'eciproque 4.1.6]{BBDG}.
\end{proof}
\begin{lemma}\label{lem:perverseamplitude} Let $f: X\to S$ be a finitely presented map of qcqs $\mathbb Z[\tfrac 1\ell]$-schemes, and let $A\in \mathcal D_{\mathrm{cons},\mathrm{tor}}(X,\mathbb Z_\ell)$ or $A\in\mathcal D_\mathrm{cons}(X,\Lambda)$ in setting (B) with $\Lambda$ regular. The subset $S^{\leq 0}\subset S$ (resp.~$S^{\geq 0}\subset S$) of all points $s\in S$ for which $A|_{X_s}\in {}^p \mathcal D^{\leq 0}$ (resp.~$A|_{X_s}\in {}^p \mathcal D^{\geq 0}$) is a constructible subset of $S$.
\end{lemma}
\begin{proof} The case of $S^{\leq 0}$ is easy: by passing to a stratification of $X$, it reduces to the case that $A$ is locally constant and $X$ is smooth and equidimensional over $S$, where it is clear.
Using Theorem~\ref{thm:nearbycycles} in the case of fields (where it says that all constructible complexes are universally locally acyclic) and Proposition~\ref{prop:ULAfinitaryarc} in order to spread information at points to constructible subsets, we see that there is a constructible stratification of $S$ over which $A$ becomes universally locally acyclic. Passing to this stratification, we can assume that $A$ is universally locally acyclic. If $\Lambda=\mathbb Z_\ell$, we can now use that passing to relative Verdier duals commutes with any pullback, and exchanges ${}^p \mathcal D^{\leq 0}_{\mathrm{cons},\mathrm{tor}}(X_{\overline{s}},\mathbb Z_\ell)$ and ${}^p \mathcal D^{\geq 1}_{\mathrm{cons},\mathrm{tor}}(X_{\overline{s}},\mathbb Z_\ell)$. In the case of $\Lambda$-coefficients, let $I$ be an injective $\Lambda$-module given as the direct sum of the injective hulls of all residue fields of $\Lambda$. Then the formation of $R\mathcal{H}\mathrm{om}(A,Rf^! I)$ also commutes with any pullback, and moreover it is given by $\mathbb D_{X/S}(A)\otimes^{\mathbb L}_\Lambda I$, which becomes locally constant over a constructible stratification (although not with perfect fibres; this does not matter for the argument). Moreover, in each fibre the functor $A\mapsto R\mathcal{H}\mathrm{om}(A,Rf^! I)$ from $\mathcal D_\mathrm{cons}(X_{\overline{s}},\Lambda)^{\mathrm{op}}$ to $\mathcal D_\mathrm{\acute{e}t}(X_{\overline{s}},\Lambda)$ is faithful and $t$-exact for the perverse $t$-structure; this gives the result in general.
\end{proof}
Using the relative perverse $t$-structure, we have the following relative version of Artin vanishing. We note the strong hypothesis on the base scheme. The essential content of this proposition is due to Gabber \cite{GabberOberwolfach2020}.
\begin{proposition}\label{prop:artinvanishing} Let $S=\mathrm{Spec} V$ be the spectrum of an absolutely integrally closed valuation ring $V$, and let $f: Y\to X$ be an affine map of schemes of finite presentation over $V$. Then
\[
Rf_\ast: D(Y)\subset D(Y_\mathrm{pro\acute{e}t},\Lambda)\to D(X_\mathrm{pro\acute{e}t},\Lambda)
\]
takes values in $D(X)\subset D(X_\mathrm{pro\acute{e}t},\Lambda)$ and is right $t$-exact for the relative perverse $t$-structure, in any of the settings considered in Theorem~\ref{thm:maintext}. Moreover, if $S'=\mathrm{Spec} V'$ is of the same form and $g: S'\to S$ is flat, with pullback $f': Y'\to X'$ (with $g_Y: Y'\to Y$ and $g_X: X'\to X$), then the base change map
\[
g_X^\ast Rf_\ast\to Rf'_\ast g_Y^\ast
\]
of functors $D(Y)\to D(X')$ is an isomorphism.
\end{proposition}
We note that over any base $S$, and for any affine map $g: Y\to X$ of finitely presented $S$-schemes, the functor $Rg_!$ is left $t$-exact for the relative perverse $t$-structure; this assertion immediately reduces to the statement over geometric points. By contrast, Proposition~\ref{prop:artinvanishing} does not formally reduce to its version over geometric points, and does not hold over more general bases. (We warn the reader that over $S$ as in the proposition, Verdier duality is not a perfect duality; in fact, it vanishes on all sheaves whose restriction to the generic fibre vanishes. Thus, one cannot control $Rg_\ast$ in terms of $Rg_!$.)
\begin{proof} Setting (C) with rational coefficients reduces to setting (C) with integral coefficients by inverting $\ell$, and this in turn reduces to setting (B). Forgetting the $\Lambda$-module structure, all statements except for preservation of constructibility reduce to the case of $\mathcal D_{\mathrm{cons},\mathrm{tor}}(-,\mathbb Z_\ell)$.
Let us first handle the base change result. By checking sections over all \'etale $X'$-schemes, it suffices to show that the map
\[
R\Gamma(X',g_X^\ast Rf_\ast A)\to R\Gamma(Y',g_Y^\ast A)
\]
is an isomorphism. But Corollary~\ref{cor:invarianceofcohomology} reduces this to the identity $R\Gamma(X,Rf_\ast A) = R\Gamma(Y,A)$, which is clear.
Now we have to show that $Rf_\ast$ preserves constructibility and is right $t$-exact. By approximation, we can assume that $V$ is of finite rank, and that the sheaf is concentrated on the fibre over a single point of $S$. As in the proof of Lemma~\ref{lem:finitecohomdim}, we can then use arc-excision to reduce to the case that $V$ is of rank $1$ (and a sheaf concentrated on the generic fibre).
To show preservation of constructibility, we can now make a d\'evissage to sheaves concentrated on the special fibre, and sheaves $\ast$-extended from the generic fibre. The first case reduces to the known assertion when $S$ is a geometric point, and the second case also reduces to this assertion on the generic fibre, together with Theorem~\ref{thm:nearbycycles}.
It remains to prove right $t$-exactness, in the case that $V$ is of rank $1$ and the sheaf is concentrated on the generic fibre. We first handle the case that $X=S$ and $Y$ is an affine curve over $S$. In that case, we have to prove that the cohomological dimension of $Y$ is at most $1$. We can assume that $\mathcal F=j_! L$ for some open immersion $j: Y^\circ\subset Y$ contained in the generic fibre and some local system $L$ on $Y^\circ$; we can also assume that $Y^\circ$ is smooth. Let $W\to Y^\circ$ be a finite \'etale $G$-torsor trivializing $L$, and let $Z$ be the normalization of $Y$ in $W$, with open immersion $j': W\subset Z$. Then $R\Gamma(Y,\mathcal F)$ can be identified with the $G$-homology of $R\Gamma(Z,j'_! L|_W)$. Thus, we can assume that $L$ is trivial, and then reduce to $L=\mathbb F_\ell$. Moreover, we can assume that the generic fibre of $Z$ is smooth. Let $j_Z: Z_\eta\to Z$ be the open immersion, and $i_Z: Z_s \to Z$ the closed immersion of the special fibre. Then the cone of $j'_! \mathbb F_\ell\to j_{Z!} \mathbb F_\ell$ is a skyscraper sheaf at the finitely many points of $Z_\eta\setminus W$, all of which are geometric points, and so we reduce to the sheaf $j_{Z!}\mathbb F_\ell$. This sheaf sits in a triangle \[ j_{Z!}\mathbb F_\ell \to Rj_{Z \ast}\mathbb F_\ell \to i_{Z \ast} i_{Z}^{\ast} Rj_{Z \ast}\mathbb F_\ell \to, \] so applying $R\Gamma(Z,-)$ gives a triangle
\[ R\Gamma(Z, j_{Z!}\mathbb F_\ell) \to R\Gamma(Z_\eta, \mathbb F_\ell) \to R\Gamma(Z_s, i_{Z}^{\ast} Rj_{Z \ast} \mathbb F_\ell) \to. \] Using Lemma \ref{lem:nearbycyclestexact} together with Artin vanishing in the generic and special fibers, we see that the two rightmost terms of this triangle are concentrated in degrees $\leq 1$. This reduces us to the surjectivity of the map $H^1(Z_\eta, \mathbb F_\ell) \to H^1(Z_s, i_{Z}^{\ast} Rj_{Z \ast} \mathbb F_\ell)$, which is Lemma \ref{lem:runge} below.
The rest of the argument is similar to the proof of Artin vanishing, and is inspired by \cite[Th\'eor\`eme 2.4]{IllusieVariation}. We argue by induction on $d(A)$, where for $A\in {}^{p/S} D_{\mathrm{cons},\mathrm{tor}}^{\leq 0}(Y,\mathbb Z_\ell)$, we denote by $d(A)$ the relative dimension of the closure of the support of $A$. Here, the relative dimension of a scheme of finite type over $S$ is the maximum of the dimensions of its two fibres. Choosing a closed immersion, we can assume that $Y=\mathbb A^n_X$, and then by induction we reduce to $Y=\mathbb A^1_X$. Let $j_Y: Y_\eta\subset Y$ and $i_Y: Y_s\subset Y$ be the inclusions of the generic and special fibre (and we will use similar notation for $X$). Using the triangle
\[
j_{Y!} A|_{Y_\eta}\to A\to i_{Y\ast} A|_{Y_s}
\]
and Artin vanishing in the special fibre, we reduce to $A=j_{Y!} A_0$ for some $A_0\in {}^p D_{\mathrm{cons},\mathrm{tor}}^{\leq 0}(Y_\eta,\mathbb Z_\ell)$.
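Here, Artin vanishing in the special fibre is used as follows: as $i_Y$ is a closed immersion, one has
\[
R\Gamma(Y, i_{Y\ast} A|_{Y_s}) = R\Gamma(Y_s, A|_{Y_s}),
\]
and since $Y_s$ is an affine scheme of finite type over the (algebraically closed) residue field of $V$, Artin vanishing shows that this lies in $D^{\leq 0}(\mathbb Z_\ell)$ whenever $A|_{Y_s}\in {}^p D^{\leq 0}$.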
We can replace $X$ by a strict henselization at one of its points $x$, which we can assume to lie in the special fibre (as the result is known in the generic fibre). In fact, we can assume that $x$ is a closed point of the special fibre. Indeed, if not, we can find a map $X\to \mathbb A^1_S$ sending $x$ to the generic point of the special fibre, which on strict henselizations will factor over the strict henselization of $\mathbb A^1_S$ at the generic point of the special fibre, which is the spectrum of a valuation ring $W$ whose fraction field has pro-$p$ absolute Galois group, where $p$ is the residue characteristic of $V$. As the desired vanishing is insensitive to pro-$p$-extensions, we can then replace $V$ by $W$ and argue by induction. Thus, $x\in X$ is now the closed point of $X$. We have to show that
\[
R\Gamma(\mathbb A^1_X,A)\in D^{\leq 0}(\mathbb Z_\ell).
\]
Now consider the cartesian diagram
\[\xymatrix{
\mathbb A^1_X\ar[r]^j\ar[d]^{g^\circ} & \mathbb P^1_X\ar[d]^g\\
\mathbb A^1_S\ar[r]^{j'} & \mathbb P^1_S.
}\]
Then by proper base change
\[
R\Gamma(\mathbb A^1_X,A) = R\Gamma(\mathbb P^1_X,Rj_\ast A)=R\Gamma(\mathbb P^1_x,(Rj_\ast A)_{\mathbb P^1_x}).
\]
Moreover, $(Rj_\ast A)|_{\mathbb P^1_x}$ is concentrated on $x\times\{\infty\}$, as $A=j_{Y!} A_0$. It follows that
\[
R\Gamma(\mathbb A^1_X,A) = (Rj_\ast A)_{x\times\{\infty\}}.
\]
Taking strict henselizations at $x\in X$ and $s\in S$ on the right-hand side of the previous cartesian diagram, we get a cartesian diagram
\[\xymatrix{
U\ar[r]^{u}\ar[d]^{h^\circ} & Z\ar[d]^{h}\\
V\ar[r]^{v} & T
}\]
and
\[
(Rj_\ast A)_{x\times\{\infty\}} = R\Gamma(U,A) = R\Gamma(V,Rh^\circ_\ast A).
\]
Now $h^\circ: U\to V$ is a map of affine schemes essentially of finite type over $S$, and $V$ does not map to any closed points of $\mathbb A^1_S$. It follows from the inductive hypothesis (and passage to limits) that $Rh^\circ_\ast A\in {}^{p/S} D^{\leq 0}_\mathrm{tor}(V,\mathbb Z_\ell)$, where we interpret the latter statement in the loose sense that all the stalks sit in the expected degrees. Thus, it remains to show that for all $B\in {}^{p/S} D^{\leq 0}_\mathrm{tor}(V,\mathbb Z_\ell)$, one has
\[
R\Gamma(V,B)\in D^{\leq 0}(\mathbb Z_\ell).
\]
Now $V$ is a limit of affine curves over $S$, so by passage to limits, this reduces to the case of curves already handled.
\end{proof}
\begin{lemma}\label{lem:runge} Let $S=\mathrm{Spec} V$ be the spectrum of an absolutely integrally closed valuation ring of rank $1$. Let $X$ be an affine curve over $S$ with smooth generic fibre, with $j:X_\eta \subset X$ and $i: X_s \subset X$ the usual inclusions. Then the natural map $H^1(X_\eta, \mathbb F_\ell) \to H^1(X_s, i^{\ast} Rj_{ \ast} \mathbb F_\ell)$ is surjective.
\end{lemma}
\begin{proof} Let $\hat{X}$ be the formal completion of $X$ along its special fiber, and let $\hat{X}_\eta$ be the associated rigid generic fiber, so $\hat{X}_{\eta}$ is naturally an open affinoid subset of the rigid analytic curve $X_{\eta}^{\mathrm{an}}$. By \cite[Corollary 3.5.14]{HuberBook}, there is a natural isomorphism $H^1(X_s, i^{\ast} Rj_{ \ast} \mathbb F_\ell) \cong H^1(\hat{X}_{\eta}, \mathbb F_\ell)$, under which the map in the lemma identifies with the natural map $H^1(X_\eta^{\mathrm{an}}, \mathbb F_\ell) \to H^1(\hat{X}_{\eta},\mathbb F_\ell)$ induced by restriction. We thus need to see that the latter map is surjective.
By Poincar\'e duality \cite[Chapter 7]{HuberBook}, the map in question is dual to the natural map $a: H^1_c(\hat{X}_{\eta}, \mathbb F_\ell) \to H^1_c(X_\eta^{\mathrm{an}}, \mathbb F_\ell)$, so it suffices to see that $a$ is injective. Let $Y$ be the smooth projective compactification of $X_\eta$, so we have compatible open immersions $j: \hat{X}_{\eta} \to Y^{\mathrm{an}}$ and $j': X_\eta^{\mathrm{an}} \to Y^{\mathrm{an}}$. Taking cohomology on $Y^{\mathrm{an}}$ of the exact sequence $0 \to j_! \mathbb F_\ell \to j'_! \mathbb F_\ell \to (j'_! \mathbb F_\ell)/(j_! \mathbb F_\ell) \to 0$, we get an exact sequence \[0 \to H^0(Y^{\mathrm{an}},(j'_! \mathbb F_\ell)/(j_! \mathbb F_\ell)) \to H^1_c(\hat{X}_{\eta}, \mathbb F_\ell) \overset{a}{\to} H^1_c(X_\eta^{\mathrm{an}}, \mathbb F_\ell). \] However, as any connected component of $Y^{\mathrm{an}}\setminus \hat{X}_\eta$ contains a point of $Y^{\mathrm{an}}\setminus X_\eta^{\mathrm{an}}$, one has
\[
H^0(Y^{\mathrm{an}},(j'_! \mathbb F_\ell)/(j_! \mathbb F_\ell)) = 0.
\]
This gives the result.
\end{proof}
There is also a relative perverse $t$-structure on universally locally acyclic sheaves.
\begin{theorem}\label{thm:ULAmaintext} Assume that $X$ is a separated scheme of finite presentation over $S$, and consider one of the settings (B) and (C). In case (B), assume that $\Lambda$ is regular. In case (C), assume that $S$ has only finitely many irreducible components. Then there is a relative perverse $t$-structure
\[
{}^{p/S} D^{\mathrm{ULA},\leq 0}(X/S),{}^{p/S} D^{\mathrm{ULA},\geq 0}(X/S)\subset D^{\mathrm{ULA}}(X/S)
\]
such that $A\in {}^{p/S} D^{\mathrm{ULA},\leq 0}(X/S)$ (resp.~$A\in {}^{p/S} D^{\mathrm{ULA},\geq 0}(X/S)$) if and only if for all geometric points $\overline{s}\to S$, the fibre $A|_{X_{\overline{s}}}$ lies in ${}^p D^{\leq 0}(X_{\overline{s}})$ (resp.~${}^p D^{\geq 0}(X_{\overline{s}})$).
\end{theorem}
\begin{proof} In setting (B), we have to show that the truncation functors for the relative perverse $t$-structure from Theorem~\ref{thm:maintext} preserve the condition of being universally locally acyclic. As the truncation functors commute with any pullback, Corollary~\ref{cor:ULAtestrank1} reduces us to the case that $S=\mathrm{Spec} V$ is the spectrum of an absolutely integrally closed valuation ring of rank $1$. In that case, Theorem~\ref{thm:nearbycycles} and Lemma~\ref{lem:nearbycyclestexact} give the result.
In setting (C) with integral coefficients, one can now argue exactly as in the proof of Theorem~\ref{thm:maintext}. In setting (C) with rational coefficients, we note that by inverting $\ell$ we get the desired $t$-structure on the full subcategory
\[
D^{\mathrm{ULA}}(X/S,\mathcal O_L)[\tfrac 1\ell]\subset D^{\mathrm{ULA}}(X/S,L).
\]
Moreover, if $A\in D^{\mathrm{ULA},\leq 0}(X/S,\mathcal O_L)[\tfrac 1\ell]$ and $B\in D^{\mathrm{ULA},\geq 1}(X/S,\mathcal O_L)[\tfrac 1\ell]$, then for any scheme $S'/S$, one has $\mathrm{Hom}(A|_{X\times_S S'},B|_{X\times_S S'})=0$. This reduces to the case of integral coefficients, and then to torsion coefficients, arguing as in the proof of Theorem~\ref{thm:maintext} (where the pro-zeroness of some system is proved over $X$, and then follows via base change over $X\times_S S'$). This implies that for any v-cover $S'\to S$ such that $S'$ still only has finitely many irreducible components, the $t$-structure on
\[
D^{\mathrm{ULA}}(X\times_S S'/S',\mathcal O_L)[\tfrac 1\ell]\subset D^{\mathrm{ULA}}(X\times_S S'/S',L)
\]
descends to a $t$-structure on the full subcategory of $D^{\mathrm{ULA}}(X/S,L)$ of those objects whose pullback to $X\times_S S'$ admits a universally locally acyclic integral structure. Indeed, one applies the preceding observation to the $S'$-schemes $S'\times_S S'\times_S\ldots\times_S S'$ to see that the perverse truncations over $S'$ automatically descend to $S$. But by Proposition~\ref{prop:ULAintegralstructure}, all objects of $D^{\mathrm{ULA}}(X/S,L)$ admit such an integral structure over some h-cover of $S$, which we can still arrange to have only finitely many irreducible components (by replacing it with the closure of the preimage of the finitely many generic points of $S$).
\end{proof}
\begin{theorem}\label{thm:perverseULAmaintext} Consider one of the settings (B) and (C). In case (B), assume that $\Lambda$ is regular. Moreover, in all settings, assume that $S$ is irreducible, and let $\eta\in S$ be the generic point, with $j: X_\eta\subset X$ the inclusion.
\begin{enumerate}
\item[{\rm (i)}] The restriction functor
\[
j^\ast: \mathrm{Perv}^{\mathrm{ULA}}(X/S)\to \mathrm{Perv}(X_\eta)
\]
is an exact and faithful functor of abelian categories. If $\Lambda$ is noetherian, the category $\mathrm{Perv}^{\mathrm{ULA}}(X/S)$ is noetherian. If $\Lambda$ is artinian, it is also artinian.
\item[{\rm (ii)}] Assume that $S$ is geometrically unibranch. The restriction functor
\[
j^\ast: \mathrm{Perv}^{\mathrm{ULA}}(X/S)\to \mathrm{Perv}(X_\eta)
\]
is exact and fully faithful, and its image is stable under subquotients.
\end{enumerate}
\end{theorem}
As remarked above, when $S$ is a $\mathbb{Q}$-scheme, part (i) of this theorem can be strengthened to the assertion that for any point $s \to S$ with associated fiber $i:X_s \to X$, the restriction functor $i^{\ast}: \mathrm{Perv}^{\mathrm{ULA}}(X/S)\to \mathrm{Perv}(X_s)$ is exact and faithful.
\begin{proof} In part (i), we already know that the functor is exact. We need to see that it is faithful. Once this is known, the statement that $\mathrm{Perv}^{\mathrm{ULA}}(X/S)$ is noetherian (resp.~artinian) reduces to the analogous assertion for $\mathrm{Perv}(X_\eta)$ where it is standard. Now for exact functors of abelian categories, faithfulness is equivalent to being conservative. In other words, we need to see that if $A\in \mathrm{Perv}^{\mathrm{ULA}}(X/S)$ and $j^\ast A=0$, then $A=0$. As $\eta$ specializes to any other point, we can then assume that $S$ is the spectrum of a valuation ring, and one can assume that its fraction field is algebraically closed. Then the result follows from Theorem~\ref{thm:nearbycycles}.
In part (ii), we already know that the functor is exact and faithful. Consider first setting (B). This can be embedded into setting (A), and we first claim that for any $A\in \mathrm{Perv}^{\mathrm{ULA}}(X/S)$, the map
\[
A\to {}^{p/S}\tau^{\leq 0} Rj_\ast j^\ast A
\]
is an isomorphism. In fact, being universally locally acyclic implies that
\[
Rj_\ast j^\ast A\cong A\otimes^{\mathbb L}_{\Lambda} f^\ast(Rk_\ast \Lambda)
\]
(as in the proof of Proposition~\ref{prop:basicpropertiesULA}) where $k: \eta\subset S$ is the inclusion. Now the claim follows from the cone of $M\to Rk_\ast M$ being in degrees $\geq 1$ for any $\ell$-power torsion $\Lambda$-module $M$, which is a simple consequence of $S$ being geometrically unibranch. The map $A\to {}^{p/S}\tau^{\leq 0} Rj_\ast j^\ast A$ being an isomorphism implies that $j^\ast: \mathrm{Perv}^{\mathrm{ULA}}(X/S)\to \mathrm{Perv}(X_\eta)$ is fully faithful.
In setting (B), it remains to see that the image is stable under passage to subquotients. It is enough to handle subobjects, so take $A\in \mathrm{Perv}^{\mathrm{ULA}}(X/S)$ and let $B_0\subset j^\ast A\in \mathrm{Perv}(X_\eta)$ be a subobject. First, we show that if $S'\to S$ is a projective birational map such that $B_0$ admits an extension to $B'\in \mathrm{Perv}^{\mathrm{ULA}}(X_{S'}/S')$, then $B_0$ even extends to $B\in \mathrm{Perv}^{\mathrm{ULA}}(X/S)$. By v-descent, it suffices to see that the two pullbacks of $B'\subset A|_{X_{S'}}$ to $X_{S'\times_S S'}$ agree (as sub-perverse sheaves of $A|_{X_{S'\times_S S'}}$). They clearly agree when restricted to the diagonal $S'\subset S'\times_S S'$. But each geometric fibre $S'_{\overline{s}}$ of $S'\to S$, over a geometric point $\overline{s}\to S$, is a connected projective variety (as $S$ is geometrically unibranch), and thus by Lemma~\ref{lem:familiessubperversesheaves} the restriction of $B'$ to $X_{S'_{\overline{s}}}$ must be a constant sub-perverse sheaf of $A|_{X_{\overline{s}}}$ base-changed to $S'_{\overline{s}}$. This gives the desired claim.
For any such $S'\to S$, we can look at the maximal open subscheme $U'\subset S'$ to which $B_0$ extends as a universally locally acyclic perverse sheaf. (Here, as an exception, $U'$ may not be quasicompact.) Assume that $U'\neq S'$ for all such $S'\to S$. Then we can find a compatible family of points in $S'\setminus U'$ over all $S'\to S$, giving in the inverse limit a map $\mathrm{Spec} V\to S$ for a valuation ring $V$ with $\mathrm{Spec} K=\eta\subset S$, where $K$ is the fraction field of $V$, and by Proposition~\ref{prop:ULAfinitaryarc} the non-existence of an extension of $B_0$ to a universally locally acyclic (necessarily perverse) sheaf over $S'\setminus U'$ implies that there is no such extension over $\mathrm{Spec} V$ either. In other words, we can assume that $S=\mathrm{Spec} V$ is the spectrum of a valuation ring. We can now similarly pass up the tower of finite covers of $V$ (noting that for generically \'etale extensions with Galois group $G$, any extension will automatically be $G$-equivariant and hence descend, while inseparable extensions do not matter). Thus, we can assume that the fraction field of $V$ is algebraically closed. But now Theorem~\ref{thm:nearbycycles} shows that $B_0$ must extend (and necessarily to a sub-relatively perverse sheaf, by Lemma~\ref{lem:nearbycyclestexact}).
It remains to prove (ii) in setting (C). With integral coefficients, this reduces easily to setting (B). To deduce it with rational coefficients, it suffices to show that any $A\in \mathrm{Perv}^{\mathrm{ULA}}(X/S,L)$ admits an $\ell$-torsion free integral structure $A_0\in \mathrm{Perv}^{\mathrm{ULA}}(X/S,\mathcal O_L)$. In fact, such integral structures are equivalent to $\ell$-torsion free integral structures of $A_\eta$ (which, over a field, are automatically universally locally acyclic). It follows from the case of integral coefficients that such an integral structure $A_0$ of $A$ is determined by the integral structure of $A_\eta$ (i.e., the forgetful functor is fully faithful); to see that it is essentially surjective, we can argue as in the previous two paragraphs, using the second part of Lemma~\ref{lem:familiessubperversesheaves}.
\end{proof}
We used the following lemma.
\begin{lemma}\label{lem:familiessubperversesheaves} Let $k$ be an algebraically closed field, let $X/k$ be a separated scheme of finite type, let $\Lambda$ be a regular $\mathbb Z_\ell$-algebra and let $A\in \mathrm{Perv}(X,\Lambda)$ in setting (B). The functor taking a $k$-scheme $S$ to the set of universally locally acyclic sub-relative perverse sheaves $B\subset A|_{X_S}$ in $\mathrm{Perv}^{\mathrm{ULA}}(X_S/S)$ is representable by a $k$-scheme that is a disjoint union of copies of $\mathrm{Spec} k$.
Similarly, if $A\in \mathrm{Perv}(X,L)$ in setting (C), then the functor taking any $k$-scheme $S$ to the set of universally locally acyclic $A_0\in \mathcal D_\mathrm{cons}(X_S,\mathcal O_L)$ with $A_0[\tfrac 1\ell]\cong A|_{X_S}$ and such that $A_0/^{\mathbb L}\ell\in \mathcal D_\mathrm{cons}(X_S,\mathcal O_L/\ell)$ is relatively perverse, is representable by a $k$-scheme that is a disjoint union of copies of $\mathrm{Spec} k$.
\end{lemma}
\begin{proof} In both cases, we need to see that this functor is the constant sheaf on its value on $S=\mathrm{Spec} k$. By adjunction, there is a map, and both functors are finitary arc-sheaves. It is thus sufficient to show that it induces an isomorphism on $S=\mathrm{Spec} V$-valued points where $V$ is an absolutely integrally closed valuation ring over $k$. By Theorem~\ref{thm:nearbycycles} and Lemma~\ref{lem:nearbycyclestexact}, one can reduce to the generic fibre $\mathrm{Spec} K$ of $\mathrm{Spec} V$. The statement over $K$ is then a simple consequence of general invariance under change of algebraically closed base field. Indeed, in the first setting one can filter $A$ by intermediate extensions of local systems on (smooth) strata to reduce to the case of local systems on smooth $X$. In that case $B$ is also necessarily a local system, and the result follows from $\pi_1(X_K)\to \pi_1(X)$ being surjective. A similar argument works in the second setting.
\end{proof}
Finally, we note that the above results also yield the following.
\begin{proposition}\label{prop:ULAintegralstructureunibranch} Assume that $S$ is geometrically unibranch and irreducible. Let $f: X\to S$ be a separated scheme of finite presentation and $A\in D_\mathrm{cons}^{\mathrm{ULA}}(X,L)$ in setting (C) with rational coefficients. Then there is some $A_0\in D_\mathrm{cons}^{\mathrm{ULA}}(X,\mathcal O_L)$ with $A\cong A_0[\tfrac 1\ell]$. If $A$ is relatively perverse, one can find such an $A_0$ that is also relatively perverse and $\ell$-torsion free (as a relatively perverse sheaf).
\end{proposition}
\begin{proof} Passing to a filtration of $A$, we can assume that $A$ is relatively perverse. In that case, there is an $A_0$ that is relatively perverse and $\ell$-torsion free, as was proved at the end of the proof of Theorem~\ref{thm:perverseULAmaintext}.
\end{proof}
\bibliographystyle{amsalpha}